This is my VFS: https://github.com/ncruces/go-sqlite3/blob/main/vfs/readervf...
And using it with range requests: https://pkg.go.dev/github.com/ncruces/go-sqlite3/vfs/readerv...
And having it work with a Zstandard-compressed SQLite database is one library away: https://pkg.go.dev/github.com/SaveTheRbtz/zstd-seekable-form...
But you can use it (e.g.) on a small VPS to access a multi-TB database directly from S3.
But SQLite also caches data; you can simply increase the page cache.
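To illustrate the page-cache point, here is a minimal sketch in Python's standard sqlite3 module; the 256 MiB figure is just an example, not a recommendation from the comment above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A negative cache_size sets the cache in KiB; -262144 is ~256 MiB.
conn.execute("PRAGMA cache_size = -262144")
size = conn.execute("PRAGMA cache_size").fetchone()[0]
print(size)  # -262144
conn.close()
```

With a large enough cache, repeated range-backed reads of hot pages never go back over the network.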
And I just now added a 'me' view. Enter your username and it will show your comments/posts on any day. So you can scrub back through your 2006 - 2025 retrospective using the calendar buttons.
edit: Fixed! Also I just pushed a new version with a Dec 29th Data Dump, so ... updates - yay!
I’ve used it in production to self-host Australia-only maps on S3. We generated a single ~900 MB PMTiles file from OpenStreetMap (Australia only, up to Z14) and uploaded it to S3. Clients then fetch just the required byte ranges for each vector tile via HTTP range requests.
It’s fast, scales well, and bandwidth costs are negligible because clients only download the exact data they need.
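The range-fetch semantics can be sketched without a real server: a client sends a `Range: bytes=start-end` header (inclusive end, per RFC 9110) and gets back just that slice of the file. The tile file below is a stand-in blob, not real PMTiles data:

```python
def fetch_range(blob: bytes, start: int, end: int) -> tuple[str, bytes]:
    """Return the Range header a client would send and the bytes a
    compliant server would answer with (end offset is inclusive)."""
    header = f"bytes={start}-{end}"
    return header, blob[start:end + 1]

tile_file = bytes(range(256)) * 4  # stand-in for a PMTiles archive
header, chunk = fetch_range(tile_file, 128, 255)
print(header, len(chunk))  # bytes=128-255 128
```

Because each vector tile maps to a known byte span inside the archive, the client never downloads the ~900 MB file, only the spans it needs.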
I want something like a db with indexes
LanceDB has a similar mechanism for operating on remote vector embeddings/text search.
It’s a fun time to be a dev in this space!
But when using this on the frontend, are portions of files fetched specifically with HTTP range requests? I tried to search for it but couldn't find details.
Looks like it's still on PyPI though: https://pypi.org/project/sqlite-s3vfs/
You can see inside it with my PyPI package explorer: https://tools.simonwillison.net/zip-wheel-explorer?package=s...
https://github.com/simonw/sqlite-s3vfs
This comment was helpful in figuring out how to get a full Git clone out of the heritage archive: https://www.hackerneue.com/item?id=37516523#37517378
Here's a TIL I wrote up of the process: https://til.simonwillison.net/github/software-archive-recove...
From what I see in GitHub in your copy of the repo, it looks like you don’t have the tags.
Do you have the tags locally?
If you don’t have the tags, I can push a copy of the repo to GitHub too and you can get the tags from my copy.
Sure, the LLM fills in all the boilerplate and makes an easy-to-use, reproducible tool with loads of documentation, and credit for that. But is it not more accurate to say that Simon is absurdly efficient, LLM or sans LLM? :)
https://simonwillison.net/2021/May/2/hosting-sqlite-database...
https://phiresky.github.io/blog/2021/hosting-sqlite-database...
https://just.billywhizz.io/sqlite/demo/#https://raw.githubus...
There is also a file format to optimize this https://cogeo.org/
I believe that there are also indexing opportunities (not necessarily via, e.g., Hive partitioning), but frankly I'm kinda out of my depth on it.
The sequence of shards you saw when you paginated through days is facilitated by the static manifest, which maps HN item ID ranges to shards. Since IDs are increasing and a pretty good proxy for time (an "HN clock"), we can also map the shards that we cut up by ID to the time spans their items cover. An in-memory table sorted by time is created from the manifest on load, so we can easily look up which shard we need when you pick a day.
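The day-to-shard lookup described above amounts to a binary search over a time-sorted list. A minimal sketch, with made-up manifest entries and timestamps (the real manifest format is not shown in the comment):

```python
import bisect

# Hypothetical manifest: (first_item_id, shard_name) pairs, with a
# parallel sorted list of each shard's effective start time (unix seconds).
manifest = [(1, "shard-0000"), (500_000, "shard-0001"), (1_200_000, "shard-0002")]
start_times = [1160000000, 1260000000, 1360000000]

def shard_for_day(ts: int) -> str:
    # Pick the last shard whose start time is <= the requested day.
    i = bisect.bisect_right(start_times, ts) - 1
    return manifest[max(i, 0)][1]

print(shard_for_day(1300000000))  # shard-0001
```

The same structure works keyed by item ID instead of time, since both are monotonically increasing across shards.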
Funnily enough, this system was thrown off early on by a handful of ID/timestamp outliers in the data: items with weird future timestamps (offset by a couple of years), or null timestamps. To cleanse our pure data of this noise and restore proper adjacent-in-time shard cuts, we just did a 1st/99th-percentile grouping and discarded the outliers, leaving shards with sensible 'effective' time spans.
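One way to approximate that percentile trim with the standard library (the exact grouping used by the site isn't specified, so this is only an illustration):

```python
import statistics

def effective_span(timestamps: list[int]) -> tuple[int, int]:
    """Drop items outside the 1st/99th percentile before computing a
    shard's effective time span, so a few bad timestamps can't stretch it."""
    qs = statistics.quantiles(timestamps, n=100, method="inclusive")
    lo, hi = qs[0], qs[-1]  # 1st and 99th percentile cut points
    kept = [t for t in timestamps if lo <= t <= hi]
    return min(kept), max(kept)

# One future-dated outlier should not stretch the span.
ts = list(range(1000, 1100)) + [999_999]
print(effective_span(ts))  # (1001, 1099)
```

Without the trim, the single outlier would make this shard appear to span years.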
Sometimes we end up fetching two shards when you enter a new day, because some items' comments exist "cross-shard". We needed another index for that; it lives in cross-shard-index.bin, which is just a list of 4-byte item IDs that have children in more than one shard (2 bytes), which occurs when people have the self-indulgence to respond to comments a few days after a post has died down ;)
Thankfully HN imposes a 2-week horizon for replies, so there aren't that many cross-shard comments (those living outside the 2-3 day span of most recent shards). But I think there are still around 1M or so, IIRC.
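A list of packed 4-byte item IDs like the one described can be decoded in a few lines. The byte order and exact record layout of cross-shard-index.bin are assumptions here, not confirmed by the comment:

```python
import struct

def read_cross_shard_index(data: bytes) -> list[int]:
    """Decode a blob of packed 4-byte item IDs (little-endian uint32
    is an assumption; the real file's byte order isn't stated)."""
    return [x[0] for x in struct.iter_unpack("<I", data)]

# Build a fake index blob with three IDs, then decode it.
blob = struct.pack("<3I", 8863, 121003, 35000000)
print(read_cross_shard_index(blob))  # [8863, 121003, 35000000]
```

At 4 bytes per ID, even ~1M cross-shard entries is only ~4 MB, cheap enough to ship as a single static file.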
Where did you get the 22GB figure from? On the site it says:
> 46,399,072 items, 1,637 shards, 8.5GB, spanning Oct 9, 2006 to Dec 28, 2025
The HN post title (:
How was the entirety of HN stored in a single SQLite database? In other words, how was the data acquired? And how does the page load instantly if there's 22GB of data having to be downloaded to the browser?
- 1. download_hn.sh - bash script that queries BigQuery and saves the data to *.json.gz
- 2. etl-hn.js - does the sharding and ID -> shard map, plus the user stats shards.
- 3. Then either npx serve docs or upload to CloudFlare Pages.
The ./tools/predeploy-checks.sh script basically runs the entire pipeline. You can do it unattended with AUTO_RUN=true.
I watched it in the browser network panel and saw it fetch:
As I paginated to previous days. It's reminiscent of that brilliant SQLite.js VFS trick from a few years ago: https://github.com/phiresky/sql.js-httpvfs - only that one used HTTP range headers; this one uses sharded files instead.
The interactive SQL query interface at https://hackerbook.dosaygo.com/?view=query asks you to select which shards to run the query against; there are 1,636 total.
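One way to fan a query out across several shard databases (the site itself runs in the browser, so this server-side ATTACH approach is just a sketch of the same idea, with made-up shard files and schema):

```python
import os
import sqlite3
import tempfile

# Create two tiny stand-in shard databases on disk.
tmp = tempfile.mkdtemp()
paths = []
for i, rows in enumerate([[(1, "first post")], [(2, "second post")]]):
    p = os.path.join(tmp, f"shard-{i}.db")
    db = sqlite3.connect(p)
    db.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, title TEXT)")
    db.executemany("INSERT INTO items VALUES (?, ?)", rows)
    db.commit()
    db.close()
    paths.append(p)

# Attach the selected shards and merge results with UNION ALL.
conn = sqlite3.connect(":memory:")
for i, p in enumerate(paths):
    conn.execute(f"ATTACH DATABASE ? AS shard{i}", (p,))
union = " UNION ALL ".join(f"SELECT * FROM shard{i}.items" for i in range(len(paths)))
results = conn.execute(union).fetchall()
print(results)  # [(1, 'first post'), (2, 'second post')]
```

Selecting a subset of shards before querying keeps the attached-database count (and the data scanned) small.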