I have an example of using tech.ml.dataset with GraalVM native image, including reading/writing parquet. I spent a bunch of time tracking down things that were making the build bigger and slower than necessary, and reworked the parquet bindings so they work with the latest parquet/hadoop ecosystem pieces. The example parses a csv, does some calculations, and then saves/loads from parquet to make sure everything round-trips correctly - https://github.com/cnuernber/ds-graal.
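
For anyone curious what that roundtrip roughly looks like, here's a minimal sketch using the tech.v3.dataset and tech.v3.libs.parquet namespaces. The file names, column names, and calculation are made up for illustration; the actual code is in the repo linked above.

```clojure
(ns ds-roundtrip.example
  (:require [tech.v3.dataset :as ds]
            [tech.v3.libs.parquet :as parquet]
            [tech.v3.datatype.functional :as dfn]))

(defn csv->parquet-roundtrip
  "Parse a csv, add a derived column, write it to parquet, and read it back.
  Paths and column names here are placeholders."
  []
  (let [csv-ds  (ds/->dataset "data.csv")
        ;; hypothetical calculation: sum two existing columns into a new one
        calc-ds (assoc csv-ds "total" (dfn/+ (csv-ds "a") (csv-ds "b")))]
    ;; write the dataset out as parquet, then load it back in
    (parquet/ds->parquet calc-ds "data.parquet")
    (parquet/parquet->ds "data.parquet")))
```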