Professionally, I do banking. It's a lot of integration work, sprinkled with a bit of algorithmic work every now and then. Lately I've been on capital requirements. The core of that is a system called AxiomSL, which is quite a lot of work for one guy to keep running.
In my spare time I write some algorithmic C, you can check that stuff out on github (https://github.com/DelusionalLogic) if you're curious.
I was an early adopter of LLMs. I used to lurk in the old EleutherAI Discord and follow their progress on reconstructing GPT-2 (I recall it being called GPT-J). I also played around a bunch with image generation. At that point nobody was really trying to apply them to code; we were just fascinated that it wrote back at all.
I have tried most of the modern models for development. I find them to generate a lot of nonsensical and unexplainable code. I've had no success (in the 30 or so times I've tried) getting any of the models to debug or develop even small features. They usually get lost chasing some "best practice" and loop on that forever. They're also constantly breaking style and violating module boundaries.
If I use them to generate documentation, I find it surface-level and repetitive. It produces a lot of text about structure that's obvious to me just from glancing at the code, but (obviously) has no context about the thought process that created that code, which is the only part I care about. I can read the code just fine myself. I find the same problem in commit messages generated with AI tools.
For the reversing I also do, I find the models too imprecise. They take large logical leaps that ruin my understanding of the code I'm trying to work through. This is the only place where I actually believe a properly trained model (not a chatbot) could succeed past the state of the art.
I don't really use Stack Overflow either; I don't trust its accuracy, and it's easy to get cargo-culted in software. I generally try to find my answers in official documentation, and if I can't, I'll read the source code. If that's unavailable I'll take a guess, or reverse the thing if it's really important to me.