I have been trying to approach the problem in a similar way, and in my observation, it is also important to capture the discussion hierarchy in the context that we share with the LLM.

The solution that I have adopted is as follows. Each comment is represented in the following notation:

   [discussion_hierarchy] Author Name: <comment>
To this end, I format the output from Algolia as follows:

   [1] author1: First reply to the post
   [1.1] author2: First reply to [1]
   [1.1.1] author3: Second-level reply to [1.1]
   [1.2] author4: Second reply to [1]
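The flattening step above can be sketched as follows. This is not the author's exact code, just a minimal illustration assuming the nested `children`/`author`/`text` shape returned by Algolia's `/api/v1/items/{id}` endpoint (note that `text` from that API is HTML, which you would likely want to strip first):

```python
def format_thread(comments, prefix=()):
    """Recursively emit '[path] author: text' lines for a comment tree."""
    lines = []
    for i, comment in enumerate(comments, start=1):
        path = ".".join(map(str, prefix + (i,)))
        author = comment.get("author") or "[deleted]"
        text = (comment.get("text") or "").strip()
        lines.append(f"[{path}] {author}: {text}")
        # Recurse into replies, extending the path (e.g. "1" -> "1.1")
        lines.extend(format_thread(comment.get("children", []), prefix + (i,)))
    return lines
```

Depth-first traversal keeps each reply directly under its parent, so the `[1]`, `[1.1]`, `[1.1.1]`, `[1.2]` ordering in the example falls out naturally.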
After this, I provide a system prompt as follows:

  You are an AI assistant specialized in summarizing Hacker News discussions. 
  Your task is to provide concise, meaningful summaries that capture the essence of the thread without losing important details. 
  Follow these guidelines:
  1. Identify and highlight the main topics and key arguments.
  2. Capture diverse viewpoints and notable opinions.
  3. Analyze the hierarchical structure of the conversation, paying close attention to the path numbers (e.g., [1], [1.1], [1.1.1]) to track reply relationships.
  4. Note where significant conversation shifts occur.
  5. Include brief, relevant quotes to support main points.
  6. Maintain a neutral, objective tone.
  7. Aim for a summary length of 150-300 words, adjusting based on thread complexity.
  
  Input Format:
  The conversation will be provided as text with path-based identifiers showing the hierarchical structure of the comments: [path_id] Author: Comment
  This list is sorted based on relevance and engagement, with the most active and engaging branches at the top.
  
  Example:
  [1] author1: First reply to the post
  [1.1] author2: First reply to [1]
  [1.1.1] author3: Second-level reply to [1.1]
  [1.2] author4: Second reply to [1]
  
  Your output should be well-structured, informative, and easily digestible for someone who hasn't read the original thread. 
  Use markdown formatting for clarity and readability.

The benefit is that I can parse the output from the LLM and create links back to the original comment thread.
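One way to do that link-back step (a sketch, not the author's implementation) is to keep a path-to-comment-id map while flattening, then rewrite any `[1.1]`-style reference in the summary into an HN permalink. The markdown link format here is an assumption:

```python
import re

def linkify_summary(summary, path_to_id):
    """Replace [path] references with links to the original HN comments."""
    def repl(match):
        path = match.group(1)
        comment_id = path_to_id.get(path)
        if comment_id is None:
            return match.group(0)  # leave unknown paths untouched
        return f"[[{path}]](https://news.ycombinator.com/item?id={comment_id})"
    # Matches [1], [1.1], [1.1.1], ... but not arbitrary bracketed text
    return re.sub(r"\[(\d+(?:\.\d+)*)\]", repl, summary)
```

Leaving unmatched paths untouched keeps the summary readable even when the LLM hallucinates a path that was never in the input.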

You can read about my approach in more detail here: https://gist.github.com/simonw/09e5922be0cbb85894cf05e6d75ae...


I just installed and tried. Pretty neat stuff!

It would be great if the addon allowed users to override the system prompt (it might need minor tweaks when switching to a different server backend).

Thank you for trying out the extension and for this great suggestion!

We've actually been thinking along similar lines. Here are a couple of improvements we're considering:

1. Built-in prompt templates - Support multiple flavors (e.g. one similar to what's already there, plus awareness of up/down votes; another similar to what Simon had, which is more detailed; etc.)

2. User-editable prompts - Exactly like you said - make the prompts user editable.

One additional thought: Since summaries currently take ~20 seconds and incur API costs for each user, we're exploring the idea of an optional "shared summaries" feature. This would let users access cached summaries instantly (shared by someone else), while still having the option to generate fresh ones when needed. Would this be something you'd find useful?

We'd love to hear your thoughts on these ideas.

The shared summaries sound like a great idea to save most people's inference cost! There might be some details to figure out - e.g. each post's summary needs to be associated with a timestamp, in case new comments come in after it is generated (especially on hot posts). Still, I think it's a useful feature, and I would definitely read the summary before browsing the details.
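The staleness concern could be handled with a simple check on the cached summary's metadata. This is a hypothetical sketch: the field names and thresholds are illustrative assumptions, not anything from the extension itself:

```python
import time

def is_stale(cached, current_comment_count, now=None,
             max_age_s=3600, growth_ratio=0.2):
    """Decide whether a shared summary should be regenerated.

    `cached` holds the generation timestamp and the comment count
    at generation time; regenerate when the summary is older than
    `max_age_s` or the thread has grown by more than `growth_ratio`.
    """
    now = time.time() if now is None else now
    if now - cached["generated_at"] > max_age_s:
        return True
    new_comments = current_comment_count - cached["comment_count"]
    return new_comments > cached["comment_count"] * growth_ratio
```

A relative growth threshold adapts to thread size: five new comments matter on a quiet post but are noise on a hot one.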
