- dexterlagan: My attempt: https://www.cleverthinkingsoftware.com/truth-or-extinction/
- We've been through many technological revolutions, in computing alone, over the past 50 years. The rate of progress of LLMs and AI in general over the past two years alone makes me think this may be unwarranted worry, akin to premature optimization. It also seems rooted in a slightly out-of-date, human understanding of the tech/complexity debt problem. I don't really buy it. Yes, complexity will increase as a result of LLM use. Yes, eventually code will be hard to understand. That's a given, but there's no turning back. Let that sink in: AI will never be as limited as it is today. It can only get better. We will never go back to a pre-LLM world, unless we obliterate all technology by some catastrophe. Today we can already grok nearly any codebase of any complexity, get models to write fantastic documentation, and have them explain the finer points to nearly anybody. Next year we might not even need to generate any docs: the model built into the codebase will answer any question about it, and will semi-autonomously conduct feature upgrades or more.
Staying realistic, there are good reasons to believe that within the next 6-12 months alone, local, open source models will match their bigger cloud cousins in coding ability, or get very close. Within the next year or two, we will quite probably see GPT6 and Sonnet 5.0 come out, dwarfing all the models that came before. With this, there is a high probability that any comprehension or technical debt accumulated over the past year or more will be rendered completely irrelevant.
The benefits of any development made until then, even sloppy, should more than make up for the downsides of tech debt or any kind of overly high complexity. Even if I'm dead wrong and we hit a ceiling in LLMs' ability to grok huge/complex codebases, that ceiling is unlikely to appear within the next few months. Additionally, behind closed doors, the progress being made is nothing short of astounding. Recent research at Stanford might quite simply change all of these naysayers' minds.
- Racket has a very nice built-in debugger in its DrRacket editor, with thread visuals and all. Too bad nobody uses DrRacket, or Racket anymore. Admittedly, even with the best debugger, finding the cause of runtime errors has always been a pain. Hence everybody's moving towards statically compiled, strongly typed languages.
- I’ve had enough of misinformation. It’s killing our civilization. So I decided to do something about it.
- This resonates with me, a lot. A few months ago I wrote about my initial thoughts here: https://www.cleverthinkingsoftware.com/programmers-will-be-r... Things have changed quite a bit since, but I'm glad they changed for the better. Or so it seems.
- Agreed, but in the case of the lie detector, it seems it's a matter of interpretation. In the case of LLMs, what is it? Is it a matter of saying "It's a next-word calculator that uses stats, matrices and vectors to predict output" instead of "Reasoning simulation made using a neural network"? Is there a better name? I'd say it's "A static neural network that outputs a stream of words after having consumed textual input, and that can be used to simulate, with a high level of accuracy, the internal monologue of a person who would be thinking about and reasoning on the input". Whatever it is, it's not reasoning, but it's not a parrot either.
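To make the "next-word calculator" framing concrete, here's a minimal sketch with a hand-made (entirely made-up) probability table standing in for the neural network. A real LLM computes these probabilities over the whole context with billions of parameters; the numbers and vocabulary here are purely illustrative:

```python
# Toy "next-word calculator": pick the most likely continuation
# from a hypothetical, hand-made probability table. Real LLMs
# derive these probabilities from a neural network over the
# entire context; the values below are made up for illustration.

probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "is": 0.1},
    ("cat", "sat"): {"on": 0.7, "down": 0.2, "still": 0.1},
    ("sat", "on"): {"the": 0.8, "a": 0.2},
}

def next_word(context):
    """Greedy decoding: return the highest-probability next word."""
    table = probs[tuple(context[-2:])]  # look at the last two words only
    return max(table, key=table.get)

words = ["the", "cat"]
for _ in range(3):
    words.append(next_word(words))

print(" ".join(words))  # the cat sat on the
```

The point of the toy: at no step is there any "reasoning", just a lookup and an argmax, yet the output reads like fluent text. The debate is over whether scaling that mechanism up produces something qualitatively more.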
- I can imagine there are plenty of use cases, but I could not find one for myself. Can you give an example?
- This is a Xojo project, you can load the source code and all the UI objects by opening the binary project in Xojo (free).
- I made my own. I needed a calendar that showed every todo item per day, and a text editor to edit the tasks just like in a todo.txt. Used it all day, every day, for over 15 years. I still have it installed on nearly all my Win systems, just because it opens instantly and supports priorities and colors. I also used it to produce reports for work, so I eventually added HTML export options to paste directly into an email.
- I have a similar setup but with 32 GB of RAM. Do you partly offload the model to RAM? Do you use LMStudio or other to achieve this? Thanks!
- Not to be confused with Suno - Simulation of Musical Ability :)
- I ended up adding a prompt to all my projects that forbids all these annoying repetitive apologies. Best thing I've ever done to Claude. Now he's blunt, efficient and SUCCINCT.
- I feel ya, but there's a better way. I've been writing detailed specs to direct LLMs, and that's what changed everything for me. I wrote about it at length: https://www.cleverthinkingsoftware.com/spec-first-developmen...
- Funny how it coincides with the Android 16 release. Now that Android has the same UI as Apple... the replication is complete, and Apple has to differentiate itself again.
- This is one possible elegant and aesthetically pleasing solution for keeping readability/visibility as high as possible while allowing dark/light modes AND complex backgrounds. It has the advantage of having the exact same visibility properties in both modes, no matter what you have as a background. The downside... well, everything is blurry and glassy.
- I was driving to my partner’s place across town every other day. Somehow his address would refuse to come up when I’d search his name, even though I had driven to his place dozens of times. I got so tired of dealing with Google and Apple Maps poor favorites page with no search and random results that I decided to make an app just for that. It’s a dead simple locations address book. One tap and you get directions. Done.
- Thanks for the feedback! Agreed, the one problem with the approach is reproducibility. It can be mitigated by setting the temperature to 0 and detailing the specs further. The one method that nearly completely solves this problem is the hybrid approach: write detailed specs, feed them to the LLM, and get an MVP (or module, etc.); fix any and all issues found with the MVP, and implement missing/new features; then ask the LLM to update the specs to take the changes into account, also recording the lessons learned, to maximize reproducibility. Treat the latest specs + code package as a checkpoint you can always resume work from.
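The spec-checkpoint loop described above could be sketched like this. `generate_code` and `update_specs` are hypothetical stand-ins for the actual LLM calls (which you'd make with temperature 0 for reproducibility); they're stubbed here so the control flow is runnable:

```python
# Sketch of the hybrid spec-first loop: generate from specs, fix,
# fold the fixes and lessons back into the specs, checkpoint.
# The two functions below are stubs standing in for real LLM calls.

def generate_code(specs: str) -> str:
    """Stand-in for the LLM generating an MVP/module from the specs."""
    return f"# code generated from:\n# {specs}"

def update_specs(specs: str, lessons: list[str]) -> str:
    """Stand-in for the LLM folding fixes and lessons back into the specs."""
    return specs + "\n# lessons learned:\n" + "\n".join(f"# - {l}" for l in lessons)

checkpoints = []  # each entry is a (specs, code) package you can resume from
specs = "v1: calendar app with per-day todo items"

for iteration in range(2):
    code = generate_code(specs)                        # LLM builds the MVP
    lessons = [f"fix found in iteration {iteration}"]  # issues found by hand
    specs = update_specs(specs, lessons)               # specs now reflect reality
    checkpoints.append((specs, code))                  # resumable checkpoint

print(len(checkpoints))
```

The key design point is that the specs, not the code, are the source of truth: every manual fix flows back into them, so regenerating from the latest checkpoint reproduces the current state rather than the original MVP.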