
scosman
3,418 karma

  1. Looks great for Open Graph images.
  2. Just keep the TV offline.

    Alternatively, block it from the internet at the router, or connect it to a LAN-only subnet. That keeps the benefits of local AirPlay, Chromecast, and HomeKit without letting it phone home.

  3. Sonos is a software company with a history of pushing bad updates. But Framework sounds great.
  4. Offline smart TVs are great. As long as they support wake over CEC, they are close enough to a dumb display connected to an Apple TV.

    I let my latest LG TV onto the network, but block internet access at the router. HomeKit integration ("Siri, turn off the TV"), Chromecast, AirPlay, and other local services all work, without it being able to phone home.

  5. Houses with radiators don’t have ducts or registers. There’s a whole other class of solutions for those houses.
  6. How does one debug issues like this?

    I have a page that ranks well worldwide but is completely missing in Canada. Not just poorly ranked, gone. It shows up #1 for its keyword in the US, but won't show up in Canada even for precise, unique quotes from the page.

  7. Gemini API keys are an absolute delight compared to Vertex AI. It’s like Google decided the most important part of competing with AWS was a horrific console.
  8. Anyone have a branch that I can run to target my own comments? I'd love to see where I was right and where I was off base. Seems like a genuinely great way to learn about my own biases.
  9. > unless your usage is illegal

    Like copyright infringement of Google's search results?

  10. I had the same insight for in-home air movement. Purpose-built inter-room fans from Broan etc. are 3x louder and several times more expensive than computer fans at the same CFM. I've been very happy with the computer fans.

    https://scosman.net/blog/using_in_wall_computer_fans_for_hom...

  11. Not there yet. The biggest vectors for optimizing aren’t in the agents yet (RAG method, embedding model, etc.).
  12. Agree totally. I’m spending half my time focused on that problem (mostly synthetic data gen with guidance), and the other half on how to optimize once it works.
  13. Alternative advice: just test and see what works best for your use case. Totally agreed embeddings are often overkill. However, sometimes they really help. The flow is something like:

    - Iterate over your docs to build eval data: hundreds of pairs of [synthetic query, correct answer]. Focus on content from the docs, not general LLM knowledge.

    - Kick off a few parallel evaluations of different RAG configurations to see what works best for your use case: BM25, vector, hybrid. You can do a second pass to tune parameters: embedding model, top-k, re-ranking, etc.

    I built a free system that does all this (synthetic data from docs, evals, testing various RAG configs without coding each version); a rough code sketch of this kind of eval loop is included below. https://docs.kiln.tech/docs/evaluations/evaluate-rag-accurac...

  14. I built a system to do exactly this: https://docs.kiln.tech/docs/evaluations/evaluate-rag-accurac...

    Basically it:

    - iterates over your docs to find knowledge specific to the content

    - generates hundreds of pairs of [synthetic query, correct answer]

    - evaluates different RAG configurations for recall
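
A minimal Python sketch of the eval loop described in the two comments above, not the Kiln implementation: it scores recall@1 for a BM25 configuration versus a dense-vector configuration over toy [synthetic query, correct chunk] pairs that an LLM would normally generate from your docs. The rank_bm25 and sentence-transformers libraries, the model name, and the toy data are all assumptions for illustration.

```python
# Hypothetical sketch -- not Kiln's implementation. Assumes docs are pre-chunked,
# and eval_pairs were generated by an LLM from those chunks (stubbed with toy data here).
# Requires: pip install rank_bm25 sentence-transformers numpy
import numpy as np
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer

docs = [  # document chunks to retrieve over
    "Synthetic eval data is generated by iterating over your document chunks.",
    "BM25 is a lexical ranking function based on term frequency and document length.",
    "Hybrid retrieval combines lexical and vector scores.",
]
eval_pairs = [  # (synthetic query, index of the chunk that answers it)
    ("how is synthetic eval data generated?", 0),
    ("what kind of ranking function is BM25?", 1),
    ("combining lexical and vector search", 2),
]

def recall_at_k(rank_fn, k=1):
    """Fraction of eval queries whose gold chunk appears in the top-k results."""
    hits = sum(gold in rank_fn(query)[:k] for query, gold in eval_pairs)
    return hits / len(eval_pairs)

# Config 1: BM25 (lexical retrieval)
bm25 = BM25Okapi([d.lower().split() for d in docs])
def bm25_rank(query):
    scores = bm25.get_scores(query.lower().split())
    return list(np.argsort(scores)[::-1])  # chunk indices, best first

# Config 2: dense vectors (cosine similarity over normalized embeddings)
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_emb = model.encode(docs, normalize_embeddings=True)
def vector_rank(query):
    q = model.encode([query], normalize_embeddings=True)[0]
    return list(np.argsort(doc_emb @ q)[::-1])

for name, rank_fn in [("bm25", bm25_rank), ("vector", vector_rank)]:
    print(f"{name}: recall@1 = {recall_at_k(rank_fn, k=1):.2f}")
```

Adding a third rank function that merges the two score lists would cover the hybrid configuration, and a second pass could sweep k, embedding model, or re-ranking, mirroring the flow in comment 13.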

This user hasn’t submitted anything.
