ianmcook
karma: 16
https://github.com/ianmcook
https://twitter.com/ianmcook

  1. @1egg0myegg0 That's great to hear. I'll check whether it applies to Arrow.

    Another performance issue with DuckDB/Arrow integration that we've been working to solve is that Arrow lacked a canonical way to pass statistics along with a stream of data. For example, if you read Parquet files and pass the data to DuckDB, the Parquet column statistics are lost, so DuckDB can't use them for things like join order optimization. We recently added an API to Arrow for passing statistics, and the DuckDB devs are working to implement it. Discussion at https://github.com/apache/arrow/issues/38837.
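
    For context, here's a minimal pyarrow sketch of the kind of per-column statistics Parquet carries but that, until this API, had no standard way to travel with an Arrow stream (the file name is hypothetical):

      import pyarrow.parquet as pq

      # Read the footer metadata of any Parquet file that has row-group statistics.
      pf = pq.ParquetFile("lineitem.parquet")
      stats = pf.metadata.row_group(0).column(0).statistics
      print(stats.min, stats.max, stats.null_count)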

  2. Arrow developer here. We've invested a lot in seamless DuckDB interop, and it's great to see it getting traction.

    Here's a recent blog post that breaks down why the Arrow format (which underlies Arrow Flight) is so fast in applications like this: https://arrow.apache.org/blog/2025/01/10/arrow-result-transf...
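
    As a rough illustration, fetching a dataset over Flight in Python looks like this (the endpoint and ticket are hypothetical and assume a Flight server is running there):

      import pyarrow.flight as flight

      client = flight.connect("grpc://localhost:8815")
      reader = client.do_get(flight.Ticket(b"my-dataset"))
      # Record batches arrive already in Arrow format, so there is no
      # row-by-row deserialization step on the client.
      table = reader.read_all()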

  3. Anyone know what format they are serializing the data in to move it between Excel and Python? Are they using Apache Arrow?
  4. Thanks for the heads-up. The post is supposed to be up, but there's an intermittent error; it's been reported to the Apache infrastructure team.
  5. Parquet is not based on Arrow. The Parquet libraries are built into Arrow, but the two projects are separate and Arrow is not a dependency of Parquet.
  6. From https://arrow.apache.org/faq/: "Parquet files cannot be directly operated on but must be decoded in large chunks... Arrow is an in-memory format meant for direct and efficient use for computational purposes. Arrow data is... laid out in natural format for the CPU, so that data can be accessed at arbitrary places at full speed."
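
    A small pyarrow sketch of that decode boundary (file and column names are made up): the Parquet pages are decompressed and decoded once, into Arrow's in-memory layout, after which column access needs no further decoding.

      import pyarrow.parquet as pq

      table = pq.read_table("data.parquet")  # Parquet -> Arrow decode happens here
      col = table.column("x")                # direct access to the Arrow column in memory
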
  7. The Arrow Feather format is an on-disk representation of Arrow memory. To read a Feather file, Arrow just copies it byte for byte from disk into memory. Or Arrow can memory-map a Feather file so you can operate on it without reading the whole file into memory.
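
    In pyarrow, the memory-mapped path looks like this (the file name is hypothetical):

      import pyarrow.feather as feather

      # With memory_map=True the OS pages data in on demand rather than
      # reading the whole file into memory up front.
      table = feather.read_table("data.feather", memory_map=True)
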
  8. Re this second point: Arrow opens up a great deal of language and framework flexibility for data engineering tasks. Pre-Arrow, common data warehouse ETL tasks, like writing Parquet files with explicit control over column types and compression, usually meant using Python, probably with PySpark, or one of the other Spark API languages. With Arrow, there are now many more languages in which you can code up tasks like this with consistent results: less code switching, lower complexity, less cognitive overhead.
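
    A minimal pyarrow sketch of that kind of task, with explicit column types and a compression codec and no Spark involved (the schema, values, and file name are all made up):

      from datetime import datetime, timezone
      from decimal import Decimal

      import pyarrow as pa
      import pyarrow.parquet as pq

      # Declare column types explicitly rather than letting them be inferred.
      schema = pa.schema([
          ("id", pa.int64()),
          ("amount", pa.decimal128(12, 2)),
          ("ts", pa.timestamp("us", tz="UTC")),
      ])
      table = pa.table(
          {
              "id": [1, 2],
              "amount": [Decimal("9.99"), Decimal("1.50")],
              "ts": [datetime(2025, 1, 1, tzinfo=timezone.utc)] * 2,
          },
          schema=schema,
      )
      pq.write_table(table, "out.parquet", compression="zstd")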
