I wonder if one could couple it with OCR so that you could point a phone at a page and drop into an emdash experience on text you only have a physical copy of. Or, you know, point it at your Kindle so that your notes aren't locked into their ecosystem.
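The OCR half might not even be the hard part; something as simple as Tesseract could get a photographed page into plain text. A minimal sketch, assuming pytesseract and a local Tesseract install (nothing emdash-specific):

    from PIL import Image
    import pytesseract  # assumes the Tesseract binary is installed locally

    def page_to_text(photo_path: str) -> str:
        """OCR a phone photo of a printed page into plain text for indexing."""
        return pytesseract.image_to_string(Image.open(photo_path))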
I'm building a backend that would support that kind of thing in a peer-to-peer way [1]: it indexes content by piecewise hash so that you can recognize content you or your peers have annotated, and reattach those annotations despite differences in pagination, etc. There's a rough sketch of the idea below. If I ever get it into a demo-worthy state, I may reach out to see if we can make them work together.
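To be concrete, here's roughly what I mean; the chunk size and word-level windows are just illustrative assumptions, not necessarily what the real backend does:

    import hashlib
    import re

    CHUNK_WORDS = 32  # assumed window size; tune for granularity

    def normalize(text):
        """Strip layout (line breaks, pagination, case) so the same
        prose always hashes the same way."""
        return re.findall(r"[a-z0-9']+", text.lower())

    def piecewise_hashes(text, size=CHUNK_WORDS):
        """Yield (digest, word_offset) for consecutive word windows."""
        words = normalize(text)
        for i in range(0, len(words) - size + 1, size):
            chunk = " ".join(words[i:i + size])
            yield hashlib.sha256(chunk.encode()).hexdigest(), i

    def build_index(text, annotations):
        """Map chunk digests to annotations (given as (word_offset, note)
        pairs) so peers can look them up by content alone."""
        index = {}
        for digest, offset in piecewise_hashes(text):
            for ann_offset, note in annotations:
                if offset <= ann_offset < offset + CHUNK_WORDS:
                    index.setdefault(digest, []).append(
                        (ann_offset - offset, note))
        return index

    def reattach(new_edition_text, index):
        """Recognize the same chunks in a differently paginated copy
        and return the annotations at that copy's word offsets."""
        return [(offset + rel, note)
                for digest, offset in piecewise_hashes(new_edition_text)
                for rel, note in index.get(digest, [])]

Fixed windows only line up when the word sequence is identical, which is enough for pagination differences; tolerating small edits between editions would want a rolling, content-defined chunker instead.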
Your content-addressable system sounds very interesting; let me know when you have a demo.
[1] https://ishmael.app
Perhaps for the first one you just didn't have any snippets that were closer?
Are the related snippets taken from a selection of snippets you created, or from the full text of other books?
A nice workflow might be to select a passage I'm reading in a book and then see related passages from other books, along the lines of the sketch below. But that requires that I have DRM-free ebooks, and that these have already been chunked and indexed.
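Just to illustrate the chunk-and-index step, a sketch using an off-the-shelf embedding model via sentence-transformers (my assumption for illustration; I have no idea what emdash actually uses):

    import numpy as np
    from sentence_transformers import SentenceTransformer  # assumed model choice

    model = SentenceTransformer("all-MiniLM-L6-v2")

    def embed_passages(passages):
        """passages: list of (book_title, text) chunks from DRM-free ebooks."""
        return np.asarray(model.encode([text for _, text in passages],
                                       normalize_embeddings=True))

    def related(selection, passages, vecs, k=3):
        """Return the k indexed passages most similar to the selected text."""
        q = model.encode([selection], normalize_embeddings=True)[0]
        scores = vecs @ q  # dot product is cosine similarity on unit vectors
        return [passages[i] for i in np.argsort(scores)[::-1][:k]]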
You're right that it would be nice to see things in situ as you're reading, but most e-reading experiences seem to be locked down. I appreciate the feedback!
I like it a lot!
I'm testing out a summarization/rephrase feature backed by LLMs that you can try in the demo [1]. In HN fashion, I'm trying to build this openly and gather feedback to see what works. I'd like to push it further in the active direction the article mentions, with something like a Socratic dialogue mode where you're nudged to re-explain and examine ideas. A sketch of the kind of call involved is below.
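At its core the rephrase step is just a prompted completion. A minimal sketch, written against the OpenAI SDK as a stand-in (the provider, model, and prompt here are illustrative, not necessarily what the demo runs):

    from openai import OpenAI  # stand-in provider; the demo's backend may differ

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def rephrase(snippet: str) -> str:
        """Ask the model for a plainer restatement of a highlighted passage."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[
                {"role": "system",
                 "content": "Rephrase the user's passage in plain language, "
                            "preserving its meaning, in at most two sentences."},
                {"role": "user", "content": snippet},
            ],
        )
        return resp.choices[0].message.content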
If anyone uses this thing or has feedback, let me know. The source is available too [2].
[1] https://emdash.ai
[2] https://github.com/dmotz/emdash