
adw
Joined 2,153 karma
Machine learning and machine learning accessories. Previously: startups, academia (mineral physics and chemoinformatics).

  1. Inside scoop: the pub group who owned that pub (still going, owns four in Cambridge and environs) was cofounded by Steve Early, a Cambridge computer scientist who wrote his own POS software, so it was very much a case of "yeah, that sounds like fun, I'll add it". (Until tax and primary rate risk made it not fun, so it was removed.)

    The POS software's on GitHub: https://github.com/sde1000/quicktill

  2. Wouldn’t hurt. (The key thing is: have you done work which involved serious linear algebra? I was an electronic structure person, so I’m not familiar with the geophys stuff.)
  3. Did you do any serious simulations work? If so, machine learning. (PhD in mineral physics, staff ML engineer at a FAANG.)
  4. Skills are just prompt conventions; the exact form may change but the substance is reasonable. MCP, eh, it’s pretty bad, I can see it vanishing.

    The agent loop architectural pattern (and that’s the relevant bit) is going to continue to matter. There will be new patterns for sure, but tool calling plus while loop (which is all an “agent” is) is powerful and highly general.
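A minimal sketch of that "tool calling plus while loop" pattern, with a stubbed-out model standing in for a real LLM API call (all names here are illustrative, not any particular framework's API):

```python
# The agent asks the model what to do, executes any tool the model
# requests, feeds the result back, and stops when the model produces
# a final answer. `fake_model` is a stand-in for a real LLM call.

def add(a, b):
    return a + b

TOOLS = {"add": add}  # the tools the agent is allowed to call

def fake_model(history):
    # A real model would decide between a tool call and a final answer;
    # this stub calls `add` once, then answers.
    if not any(m["role"] == "tool" for m in history):
        return {"tool": "add", "args": (2, 3)}
    return {"answer": f"The sum is {history[-1]['content']}"}

def agent_loop(prompt, max_steps=5):
    history = [{"role": "user", "content": prompt}]
    for _ in range(max_steps):
        reply = fake_model(history)
        if "answer" in reply:  # model is done
            return reply["answer"]
        result = TOOLS[reply["tool"]](*reply["args"])  # execute the tool
        history.append({"role": "tool", "content": result})
    return None  # gave up after max_steps

print(agent_loop("What is 2 + 3?"))  # prints "The sum is 5"
```

Everything beyond this loop (context management, retries, sandboxing) is elaboration on the same skeleton.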

  5. Yes, that sentence is simply untrue for, at the very least, BrE. For example: https://www.the-independent.com/news/uk/home-news/chart-show... (2015)
  6. The joke is that Calvin is aligned with Hobbes’s philosophy and vice versa.
  7. This collection is, indeed, ticking all the boxes.
  8. LLMs speed you up more if you have an appropriate theory in greenfield tasks (and if you do the work of writing your scaffold yourself).

    Brownfield tasks are harder for the LLM at least in part because it’s harder to retroactively explain regular structure in a way the LLM understands and can serialize into eg CLAUDE.md.

  9. Which is itself intuitive if you have the prior that “making the claim is the stronger headline, so if the claim is true, it’ll be in the headline”
  10. Target Schools was a thing in the 90s. This isn’t even slightly new. (And last I checked the Target Schools students had, if anything, slightly better outcomes than the main pool.)
  11. In my experience startups have as much bullshit as FAANG companies, it’s just different bullshit.
  12. If you’re using ChatGPT directly for work then I believe that you are doing it so profoundly wrong, at this point, that you’re going to make really incorrect assumptions.

    As we have all observed, the models get things wrong, and if you’re wrong 5% of the time, then ten edits in you’re at 60-40. So you need to run them in a loop where they’re constantly sanity-checking themselves: linting, styling, typing and testing. In other words: calling tools in a loop. Agents are so much better than any other approach it’s comical, precisely because they’re scaffolding that lets models self-correct.
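The 60-40 figure is just independent errors compounding. A quick check (assuming, for the sake of the estimate, that edits fail independently):

```python
# Probability that ten independent edits, each 95% likely to be correct,
# are all correct: 0.95 ** 10 ≈ 0.599, i.e. roughly 60-40.
p_all_correct = 0.95 ** 10
print(round(p_all_correct, 3))  # prints 0.599
```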

    This is likely somewhat domain-specific; I can’t imagine the models are that great in domains they haven’t seen much code from, so they probably suck at HFT infrastructure, for example, though they are decent at reading docs by this point. There’s also a lot of skill in setting up the right documentation, testing structure, interfaces, etc. to make the agents more reliable and productive (fringe benefit: your LLM-wielding colleagues actually write docs now, even if they’re full of em-dashes and emoji). You also need to be willing to let it write a bunch of code, look at it, work out why it’s structurally deficient, throw it away, and build the structure you want to guide it. But the typing is essentially free, so that’s tractable. Don’t view it as bad code; view it as a useful null result.

    But if you’re not using Claude Code or Codex or Roo or relatives, you’re living in an entirely different world to the people who have gone messianic about these things.

  13. Nonfree license. You may or may not like the Elastic License, but it's definitely not OSD-compliant.
  14. Entirely agree. Most of the critical path roles are not foundation model training, even in places that do foundation model training.
  15. This very very very much depends what you do. ML critical-path roles are doing much better than everything else as far as I can tell.
  16. These are also depreciating assets.
  17. 25 mpg is still insanely, obscenely profligate when a reasonable vehicle (say a Renault Clio) gets somewhere between 50 and 70 mpg. That will make some people angry, and it’s hard to see that as entirely irrational.
  18. C# is the Java of Lua (thanks to Unity) which will never not be weird.
  19. Peak PHP was Facebook around 2008!
  20. Python was everywhere in science well before that (Numeric and numarray, NumPy’s predecessors, date from the late 90s/early 2000s).
  21. Fewer characters than “initialize” or “constructor”, clearly marked as being “not a normal method”. Python is better here.
  22. Not even all of the UK. Scotland is a hybrid system.
  23. It’s legal gambling (same as the retail crypto and stock trades in the US). I’d expect that the legalisation of sports markets in the US has meaningfully moved exploitable punters out of the markets and into the bookmakers.
  24. It’s a company limited by guarantee, which is the structure you use in the UK for non-charity non-profits.
  25. > As much as I love the aesthetic, I'm developing a fear that they'll soon spin off into a startup with some kind of paid model, and that government websites will regress.

    gov.uk got started, in part, because the 2009 financial meltdown left a lot of good startup designers and engineers with not enough to do (and made civil service jobs more attractive for a bit!)

  26. It’s also signal processing.
  27. > Sometimes better than that of a software engineer

    There is a reason so many of us work as software engineers now: I earn about 5x more than I would as a university lecturer/assistant professor.

  28. NNs have absolutely revolutionized systems biology (itself a John Hopfield joint), the AlphaFold team are reasonably likely to get a Nobel in physiology or medicine (possibly as soon as this year), and NNs are becoming relevant in all kinds of weird parts of solid-state physics (trained functionals for DFT, eg https://www.nature.com/articles/s41598-020-64619-8).

    The idea that academic disciplines are in any way isolated from each other is nonsense. Machine learning is computer science; it's also information theory; that means it's thermodynamics, which means it's physics. (Or, rather, it can be understood properly through all of these lenses).

    John Hopfield himself has written about this; he views his work as physics because _it is performed from the viewpoint of a physicist_. Disciplines are subjective, not objective, phenomena.

  29. > Hopfield networks and Boltzmann machines

    Think of this as a Nobel prize for systems physics – essentially "creative application of statistical mechanics" – and it makes a lot more sense why you'd pick these two.

    (I am a mineral physicist who now works in machine learning, and I absolutely think of the entire field as applied statistical mechanics; is that correct? Yes and no: it's a valid metaphor.)
