georgeck
Joined · 30 karma
Lifelong Learner & Technology Enthusiast. Currently working on open source projects.

https://george.chiramattel.com/

https://github.com/georgeck

Currently working on HN companion: https://github.com/hncompanion/browser-extension


  1. Thanks, that’s helpful.

    I’m also hoping similar media management options are available on iOS and desktop, since I use Signal across devices.

    By the way, does Signal treat synced devices (like desktop or a second phone) as “replicas” vs a “primary”? If so, does this affect how storage or message history is handled between them?

    Would appreciate any insight from folks familiar with the technical side of this!

  2. It would be really useful to have more client-side control over media storage. That way, I could better manage storage growth without wiping entire threads.

    For example, being able to see all media across chats, sort by file size, and optionally group by conversation would make it much easier to clean things up.

  3. SQLite's backward compatibility means many best practices - like WAL mode, foreign key enforcement, and sane busy timeouts - are not enabled by default.

    The author's Go library, sqlitebp, automates these settings and others (NORMAL synchronous, private cache, a tuned page cache, connection pool limits, automatic PRAGMA optimize, and in-memory temp storage) to make reliable, high-concurrency usage safer and easier right out of the box.
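To illustrate the kinds of settings being described, here is a minimal Python sketch using the standard-library sqlite3 module. This is a generic illustration of the pragmas, not sqlitebp's API (which is a Go library).

```python
import os
import sqlite3
import tempfile

def open_tuned(path):
    # Open SQLite with several non-default settings mentioned above;
    # SQLite leaves these off by default for backward compatibility.
    conn = sqlite3.connect(path, timeout=5.0)  # busy timeout (~5s) instead of failing fast
    conn.execute("PRAGMA journal_mode=WAL")    # write-ahead logging for better concurrency
    conn.execute("PRAGMA synchronous=NORMAL")  # durable enough under WAL, faster than FULL
    conn.execute("PRAGMA foreign_keys=ON")     # enforce foreign-key constraints
    conn.execute("PRAGMA temp_store=MEMORY")   # keep temp tables/indices in memory
    return conn

# WAL needs a real database file; it is not available for in-memory databases.
db_path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = open_tuned(db_path)
journal_mode = conn.execute("PRAGMA journal_mode").fetchone()[0]
foreign_keys = conn.execute("PRAGMA foreign_keys").fetchone()[0]
```

A library like the one described would apply these (and pool-level settings) on every new connection, so callers cannot forget them.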

  4. This is a great idea! It's exactly what I was thinking too, and I've started working on a side project. Currently the project can create summaries like this [1].

    Since HN homepage stories change throughout the day, I thought it was better to create the newsletter based on https://news.ycombinator.com/front

    So you get the news a day late, but it will capture the top stories for that day. The newsletter will have a high-level summary for each post and a link to get the details of that story from a static site.

    [1] - https://www.hackerneue.com/item?id=43597782

  5. I tried summarizing the thread so far (339 comments) with a custom system prompt [0] and a user-prompt that captures the structure (hierarchy and upvotes) of the thread [1].

    This is the output that we got (based on the HN-Companion project) [2]:

    Llama 4 Scout - https://gist.github.com/annjose/9303af60a38acd5454732e915e33...

    Llama 4 Maverick - https://gist.github.com/annjose/4d8425ea3410adab2de4fe9a5785...

    Claude 3.7 - https://gist.github.com/annjose/5f838f5c8d105fbbd815c5359f20...

    The summaries from Scout and Maverick both look good (comparable to Claude's), and with this structure, Scout seems to follow the prompt slightly better.

    In this case, we used the models 'meta-llama/llama-4-maverick' and 'meta-llama/llama-4-scout' from OpenRouter.

    --

    [0] - https://gist.github.com/annjose/5145ad3b7e2e400162f4fe784a14...

    [1] - https://gist.github.com/annjose/d30386aa5ce81c628a88bd86111a...

    [2] - https://github.com/levelup-apps/hn-enhancer

    Edited to add OpenRouter model details.

  6. Is this solution similar to the Direct Preference Optimization (DPO) [1] provided by another 'fine-tuning as a service' - OpenPipe?

    [1] https://docs.openpipe.ai/features/dpo/overview

  7. > To quickly find text, select some text and press ⌘E followed by ⌘G.

    This is really nice. But once I am in this 'search' mode, I can't figure out how to get out of it.

    - Edited to make the question more descriptive.

  8. Tools like https://pi-hole.net do this for the whole house. Pi-hole comes with a default set of blocked domains, and you can easily add to it. It acts as the local DNS server for your network.
  9. Thank you for trying out the extension and for this great suggestion!

    We've actually been thinking along similar lines. Here are a couple of improvements we're considering:

    1. Built-in prompt templates - support multiple flavors (e.g. one similar to what is there already but with knowledge of up/down votes, and another similar to what Simon had, which is more detailed).

    2. User-editable prompts - exactly as you said, make the prompts user-editable.

    One additional thought: Since summaries currently take ~20 seconds and incur API costs for each user, we're exploring the idea of an optional "shared summaries" feature. This would let users access cached summaries instantly (shared by someone else), while still having the option to generate fresh ones when needed. Would this be something you'd find useful?

    We'd love to hear your thoughts on these ideas.

  10. I have been trying to approach the problem in a similar way, and in my observation, it is also important to capture the discussion hierarchy in the context that we share with the LLM.

    The solution that I have adopted is as follows. Each comment is represented in the following notation:

       [discussion_hierarchy] Author Name: <comment>
    
    To this end, I format the output from Algolia as follows:

       [1] author1: First reply to the post
       [1.1] author2: First reply to [1]
       [1.1.1] author3: Second-level reply to [1.1]
       [1.2] author4: Second reply to [1]
    
    After this, I provide a system prompt as follows:

      You are an AI assistant specialized in summarizing Hacker News discussions. 
      Your task is to provide concise, meaningful summaries that capture the essence of the thread without losing important details. 
      Follow these guidelines:
      1. Identify and highlight the main topics and key arguments.
      2. Capture diverse viewpoints and notable opinions.
      3. Analyze the hierarchical structure of the conversation, paying close attention to the path numbers (e.g., [1], [1.1], [1.1.1]) to track reply relationships.
      4. Note where significant conversation shifts occur.
      5. Include brief, relevant quotes to support main points.
      6. Maintain a neutral, objective tone.
      7. Aim for a summary length of 150-300 words, adjusting based on thread complexity.
      
      Input Format:
      The conversation will be provided as text with path-based identifiers showing the hierarchical structure of the comments: [path_id] Author: Comment
      This list is sorted based on relevance and engagement, with the most active and engaging branches at the top.
      
      Example:
      [1] author1: First reply to the post
      [1.1] author2: First reply to [1]
      [1.1.1] author3: Second-level reply to [1.1]
      [1.2] author4: Second reply to [1]
      
      Your output should be well-structured, informative, and easily digestible for someone who hasn't read the original thread. 
      Use markdown formatting for clarity and readability.
    
    
    The benefit is that I can parse the output from the LLM and create links back to the original comment thread.

    You can read about my approach in more detail here: https://gist.github.com/simonw/09e5922be0cbb85894cf05e6d75ae...
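The formatting step described above can be sketched roughly as follows, assuming a nested structure with 'author', 'text', and 'children' fields (similar in shape to what Algolia's HN API returns; the exact field names here are illustrative):

```python
def flatten(comments, prefix=""):
    # Flatten a nested comment tree into "[path] author: text" lines,
    # where the path ("1", "1.1", "1.1.1", ...) encodes the hierarchy.
    lines = []
    for i, c in enumerate(comments, start=1):
        path = f"{prefix}.{i}" if prefix else f"{i}"
        lines.append(f"[{path}] {c['author']}: {c['text']}")
        lines.extend(flatten(c.get("children", []), prefix=path))
    return lines

# The same example thread as in the system prompt above.
thread = [
    {"author": "author1", "text": "First reply to the post", "children": [
        {"author": "author2", "text": "First reply to [1]", "children": [
            {"author": "author3", "text": "Second-level reply to [1.1]"},
        ]},
        {"author": "author4", "text": "Second reply to [1]"},
    ]},
]

formatted = flatten(thread)
```

Because each output line carries its path identifier, the LLM's summary can cite paths like [1.1], which are then easy to turn back into links to the original comments.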

  11. I agree. Capturing the interaction as a movie (like a .mov file) makes it really difficult to understand what the user is doing - e.g. which keystrokes the user pressed to finish an interaction. I wish folks would post screen grabs made with tools like https://asciinema.org/ - this is what the helix-editor homepage uses to show its features, and it is ideal for terminal apps.

    That said, I wish asciinema could also show the keystrokes as an annotation, with the ability for the viewer to pause on each keyboard interaction.

  12. One of the coauthors here - the tool uses the following data pipeline.

    Resume data in unstructured format (including PDF) -> structured data with Claude’s structured JSON API -> portfolio templates that take structured data input -> further refinement using prompts in IDEs like Bolt or V0 -> publish in Vercel, Netlify etc.

    This allows the tool to be generic and not tied to LinkedIn.
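The pipeline described above can be sketched as a chain of small transformations. Everything here is hypothetical - the real tool's schema and the Claude call are not public - so the extraction step is stubbed with a trivial parser to keep the sketch self-contained:

```python
from dataclasses import dataclass, field

# Hypothetical structured-resume schema; the real tool's field names
# are not public, so these are illustrative only.
@dataclass
class Resume:
    name: str
    headline: str
    skills: list = field(default_factory=list)

def extract_structured(raw_text: str) -> Resume:
    # In the real pipeline this step sends the unstructured resume (or PDF
    # text) to Claude with a structured-JSON request; stubbed here with a
    # trivial line-based parser so the sketch runs on its own.
    lines = [ln.strip() for ln in raw_text.splitlines() if ln.strip()]
    return Resume(name=lines[0], headline=lines[1], skills=lines[2:])

def render_portfolio(resume: Resume) -> str:
    # Template step: structured data in, static markup out. The output
    # would then be refined in an IDE like Bolt or V0 and published to
    # Vercel, Netlify, etc.
    skills = "".join(f"<li>{s}</li>" for s in resume.skills)
    return (f"<h1>{resume.name}</h1><p>{resume.headline}</p>"
            f"<ul>{skills}</ul>")

raw = "Ada Lovelace\nProgrammer\nMath\nAnalytical Engines"
html = render_portfolio(extract_structured(raw))
```

Keeping the structured schema in the middle is what makes the tool source-agnostic: any input (LinkedIn export, PDF, plain text) only has to reach the same intermediate shape.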

  13. > We’re only testing models up to 4b, as larger ones don’t run well on this Jetson.

    According to the spec, this device has 4 GB of 64-bit LPDDR4 memory (25.6 GB/s).

  14. Thanks for taking time to review this extension.

    Yes, we plan to continue development and will publish the extension in other stores as well (including Firefox and Edge).

    Do you have any suggestions for features that you would like us to prioritize?

  15. "How to stream chat completions from OpenAI’s API" - the article talks about the differences between OpenAI’s streaming API and standard SSE, and how to use them with Node.js.
