
jacobedawson · 1,841 karma

  1. This mirrors my experience: the non-technical people in my life either shrugged and said 'oh yeah, that's cool' or started pointing out gnarly edge cases where it didn't work perfectly. Meanwhile, as a techie, my mind was (and still is) spinning with the shock and joy of using natural human language to converse with a super-humanly adept machine.
  2. From 10 December 2025, anyone under 16 in Australia won’t be able to keep or make accounts on social media apps like TikTok, Instagram, YouTube, Snapchat, X, Facebook and more. The rule doesn’t punish young people or their families; instead, social media companies have to stop under-16s from having accounts or risk serious fines (up to about $50 million).
  3. The strongest counterpoint to that is the intense chilling effect that zero anonymity would have on political dissent and discourse that doesn't match the status quo or party line. I feel that would be much more dangerous for our society than occasionally suffering the consequence of some radicalized edge cases.
  4. "But anything beyond writing a simple function always leads to useless junk."

    If that's the article writer's experience, then they are simply using it incorrectly. The author seems to suggest most of their usage involves pasting code or chatting via the text interface, likely with a non-SOTA model.

    It would be surprising to find that someone using e.g. Claude Code cli with full access to a codebase and carefully considered prompts is always getting useless junk.

  5. Without sidetracking into definitions, there's a strong case to be made that developing AGI is a winner-takes-all event. You would have access to any number of tireless human-level experts that you could put to work improving the AGI system, likely leading to ASI in a short amount of time, with a lead of even a day growing exponentially.

    Where that leaves the rest of us is uncertain, but in many worlds the idea of status or marketing won't be relevant.

  6. An underrated quality of LLMs as a study partner is that you can ask "stupid" questions without fear of embarrassment. Adding a mode that doesn't just dump an answer but takes you through the material step by step is magical. A tireless, capable, well-versed assistant on call 24/7 is an autodidact's dream.

    I'm puzzled (but not surprised) by the standard HN resistance & skepticism. Learning something online 5 years ago often involved trawling incorrect, outdated or hostile content and attempting to piece together mental models without the chance to receive immediate feedback on intuition or ask follow-up questions. This is leaps and bounds ahead of that experience.

    Should we trust the information at face value without verifying from other sources? Of course not; that's part of the learning process. Will some (most?) people rely on it lazily without using it effectively? Certainly, and this technology won't help or hinder them any more than a good old-fashioned textbook.

    Personally I'm over the moon to be living at a time where we have access to incredible tools like this, and I'm impressed with the speed at which they're improving.

  7. Come on lads! Don't give up, we're only just past the tonsils!
  8. As a heavy LLM user, professionally and personally, I use "summarize this" a lot - I find that most content in the world has a low signal-to-noise ratio, and a lot of the time the salient / useful information is hidden within unnecessary layers.

    The only time I don't find summaries useful is with creative fiction or pure entertainment content.

  9. I'd add that the best results come from clear spec sheets, which you can create using Claude (web) or another model like ChatGPT or Grok. Telling them what you want and what tech you're using helps them create a technical description with clear segments and objectives, and in my experience that works wonders in getting Claude Code, which has full access to the entire context of your codebase, on the right track.
  10. We present GEN3C, a generative video model with precise Camera Control and temporal 3D Consistency. Prior video models already generate realistic videos, but they tend to leverage little 3D information, leading to inconsistencies, such as objects popping in and out of existence. Camera control, if implemented at all, is imprecise, because camera parameters are mere inputs to the neural network which must then infer how the video depends on the camera. In contrast, GEN3C is guided by a 3D cache: point clouds obtained by predicting the pixel-wise depth of seed images or previously generated frames. When generating the next frames, GEN3C is conditioned on the 2D renderings of the 3D cache with the new camera trajectory provided by the user. Crucially, this means that GEN3C neither has to remember what it previously generated nor does it have to infer the image structure from the camera pose. The model, instead, can focus all its generative power on previously unobserved regions, as well as advancing the scene state to the next frame. Our results demonstrate more precise camera control than prior work, as well as state-of-the-art results in sparse-view novel view synthesis, even in challenging settings such as driving scenes and monocular dynamic video. Results are best viewed in videos. Check out our webpage! https://research.nvidia.com/labs/toronto-ai/GEN3C/
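The core mechanism the abstract describes (lift per-pixel depth into a 3D point cloud, then render that cache from the user's new camera pose to condition the next frames) can be sketched in a few lines. This is a toy illustration with made-up intrinsics and a translation-only camera move, not the GEN3C implementation:

```python
# Toy sketch of a "3D cache": unproject a depth map to camera-space points,
# then reproject them into a camera at a new (translated) pose.
# All values (intrinsics fx, fy, cx, cy; depth map; translation t) are
# invented for illustration.

def unproject(depth, fx, fy, cx, cy):
    """Lift a per-pixel depth map (list of rows) into 3D camera-space points."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            # Inverse pinhole projection: x = (u - cx) * z / fx, etc.
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

def project(points, fx, fy, cx, cy, t):
    """Render the cached points into a camera translated by t = (tx, ty, tz)."""
    pixels = []
    for x, y, z in points:
        # Rigid transform (translation only, for brevity), then pinhole projection.
        xc, yc, zc = x - t[0], y - t[1], z - t[2]
        if zc > 0:  # keep only points in front of the new camera
            pixels.append((fx * xc / zc + cx, fy * yc / zc + cy))
    return pixels

# A 2x2 depth map at constant depth 2.0 stands in for the predicted depth
# of a seed image or previously generated frame.
depth = [[2.0, 2.0], [2.0, 2.0]]
cloud = unproject(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
# Pull the camera back along -z; the rerendered pixels are what the video
# model would be conditioned on for the new trajectory.
rerendered = project(cloud, fx=1.0, fy=1.0, cx=0.5, cy=0.5, t=(0.0, 0.0, -2.0))
```

In the paper's scheme, the holes and disocclusions left by this reprojection are exactly the "previously unobserved regions" that the generative model fills in.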

