
crystal_revenge
> but you still need to have a good understanding of the code

I've personally found this is where AI helps the most. I'm often building pretty sophisticated models that also need to scale, and nearly all SO/Google-able resources tend to be stuck at the "fit/predict" level of thinking that limits so many DS people.

Being able to ask questions about non-trivial models as you build them, really diving into the details of exactly how certain performance improvements work and what trade-offs they carry, and even just getting feedback on your approach, is a huge improvement in my ability to land a solid understanding of the problem and my solution before writing a line of code.

Additionally, it's incredibly easy to make a simple mistake when modeling a complex problem, and getting that immediate feedback is a kind of debugging you can otherwise only get on teams with multiple highly skilled people (which, at a certain level, is a luxury reserved for people working at large companies).

For my kind of work, vibe-coding is laughably awful, primarily because there aren't tons of examples of large ML systems for the relatively unusual problems you're often tasked with. But avoiding mistakes in the initial modeling process feels like a superpower. On top of that, being able to quickly refactor early prototype code into real pipelines speeds up many of the most tedious parts of the process.


sothatsit
I agree in a lot of ways, but I also feel nervous that AI could lull me into a false sense of security. I think AI could easily convince you that you understand something when really you don't.

Regardless, I do find that o3 is great at auditing my plans or implementations. I will just ask "please audit this code" and it has maybe a 50% hit rate at giving valuable feedback that improves my work. This feels like it has a meaningful impact on the quality of the software I write, and on my understanding of its edge cases.
