
gaigalas
97 karma
Previously https://news.ycombinator.com/user?id=alganet

  1. Can you elaborate on what the reasoning is here?
  2. > Introduce new concepts that doesn't exist in the original stack

    That is also true for "macro" frameworks.

    > Wraps around the company/org-shared tech stack or framework

    That is often also true for "macro" frameworks.

    > Creators claim that the framework "magically" solves many problems, and push more people to use it

    That is often also true for "macro" frameworks.

    ---

    It is not clear from the reader's perspective what actually characterizes a "micro" framework. It's also not clear why size is the issue here, when all the complaints seem to be about design or quality.

    Is googletest a micro or a macro framework? Is google/zx a micro or a macro framework? Give us some clarifying examples: actual things people can look for, not internal, unknowable projects. There must be some exceptions too (silver-bullet rules don't exist); mention them.

    Also, rethink the title. Maybe "makeshift frameworks" is better terminology, as it more accurately reflects the problem that is described in the content.

  3. Anthony Jackson interview from 1992:

    https://www.youtube.com/watch?v=IS-xDsic84Q

    Please listen to it.

    > The machines are here.

    > We have to live with that. Things are different.

    > Hopefully, if one has done his homework, one can continue to pick and choose what to do.

    > If you keep your skills up, there is a place for you. If you don't, then there isn't. Very simple equation.

    > I will not permit myself to be outplayed by someone using the machine.

    ---

    Anthony Jackson is regarded as one of the most talented bassists who ever lived. He did, in fact, outplay the machine, reinvent his own tone and technique, and prove over and over again that synthesizers could not do what he did. Synthesizers could replace thousands of pop musicians (they still do), but not him.

    So yeah, do the grind. Don't break the machines, don't bow to them. Instead, outplay them. Keep your skills up, so you are free to pick and choose.

  4. When we talk about code, you think it's about code, but it's communication _about solving problems_, which happens to use code as a language.

    If you don't understand that language, code becomes a mystery, and you don't understand what problem we're trying to solve.

    It becomes this entity, "the code". A fantasy.

    Truth is: we know. We knew it way before you. Now, can you please stop stating the obvious? There are a lot of problems to solve and not enough time to waste.

  5. This is a reusable, redistributable, simplified version of my local setup.

    It is a self-describing dispatching framework: docs, code, and style, all in a single file.

    You don't like AI? Cool, you can use it as a framework for writing general-purpose automation.

    You like AI? Use this single file as context, and ask your agent stuff like "write me an umfile for converting pdfs to text", or any other task.

    "umfiles" are very, very light in context. They are intended to embody code and prose in a single expression, and be more effective than pure markdown.

    It is intentionally barebones. An umfile can be anything, exactly like SKILLS stuff, but without the redundant, "document what is obvious" style.
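
    To make the idea concrete, here is a hypothetical Python sketch of the general pattern (a self-describing, single-file dispatcher where the docstrings are the docs). This is not the actual umfile format, just an illustration of the concept:

        #!/usr/bin/env python3
        """Hypothetical single-file dispatcher: docs, code, and dispatch together."""
        import sys

        TASKS = {}

        def task(fn):
            """Register a function as a dispatchable task; its docstring is its documentation."""
            TASKS[fn.__name__] = fn
            return fn

        @task
        def pdf2text(path):
            """Convert a PDF to text (placeholder body, for illustration only)."""
            print(f"would convert {path} to text")

        if __name__ == "__main__":
            name = sys.argv[1] if len(sys.argv) > 1 else ""
            if name in TASKS:
                TASKS[name](*sys.argv[2:])
            else:
                # Self-describing: with no (or an unknown) task, list every task and its doc.
                for n, fn in TASKS.items():
                    print(f"{n}: {fn.__doc__}")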

  6. I am currently watching your experiments closely, along with pnut.sh (already mentioned in the thread) and https://github.com/cosinusoidally. I think they all have something good to contribute.

    These first steps interest me very much, and perhaps x86-only is the best we'll get for the foreseeable future (10 years or so), considering how difficult builder-hex0/fiwix is to port to another arch and how crucial it is for the whole concept.

    The GCC 4.7 RISC port and other things (higher stages) are nice, but they are short-term practical goals. I think the future lies in very specialized, bootstrap-specific software that can cut lots of corners at once.

  7. > can it run in WebAssembly?

    You can safely assume so. Bellard is the creator of jslinux. The news here would be if it _didn't_.

    > What's the difference to regular JavaScript?

    It's in the project's README!

    > Is it safe to use as a sandbox against attacks like the regex thing?

    This is not a sandbox design. It's a resource-constrained design like cesanta/mjs.

    ---

    If you vibe coded a microcontroller emulation demo, perhaps there would be less pushback.

  8. If I could save only one of my books from destruction, it would be that one.
  9. The thing about Mes is that it does riscv64 too. I don't know the current state of the support, but there is something there.

    There are still many pieces of riscv64 missing in the whole thing (an equivalent to Fiwix being the most challenging right now), and realistically only x86 is currently viable. I think riscv64 is the next in line though.

  10. This whole "clever code" has become a social thing.

    It's one of the things people say when they don't like some piece of code but can't justify it with a more in-depth explanation of why the cleverness is unnecessary, counterproductive, etc.

    Truth is, we need "clever code". Lots of it. Your OS and browser are full of it, and they would suck even more without that. We also need people willing to work on things that are only possible with "clever code".

    From this point of view, the idea of the Lever makes sense. The quote also works for criticizing clever code, as long as we follow up with concrete justification (not being abstract about some general god-given rule). In a world where _some clever code is always required_, it makes sense that this quote should work for both scenarios.

  11. There's a danger in taking guidelines as dogmas. There's also a danger in dismissing guidelines as dogmas.
  12. This post is a bait for enthusiasts. I like it.

    > Chain of thought is now a fundamental way to improve LLM output.

    That kinda proves _that LLMs back then were pretty much stochastic parrots indeed_, and the skeptics were right at the time. Today, enthusiasts agree with what the skeptics said back then: without CoT, the AI feels underwhelming, repetitive, and dumb, and it's obvious that something more was needed.

    Just search past discussions about it: people were saying the problem would be solved with "larger models" (just repeating marketing) and were oblivious to the possibility of other kinds of innovations.

    > The fundamental challenge in AI for the next 20 years is avoiding extinction.

    That is a low level sick burn on whoever believes AI will be economically viable short-term. And I have to agree.

  13. Again, thanks for your feedback. Moving the topological sort not only yielded a better experience but actually simplified the code and the threading model.

    I've made the changes and they'll be on the next version.

  14. The author doesn't consider the possibility that engineers dismiss AI after having tried it. Not once, not twice, but consistently.

    I am one of those dismissers. I am constantly trash-talking AI. I have also tried more tools and more stress scenarios than a lot of enthusiasts. The high bars are not in my head; they are in my repositories.

    Talk is cheap. Show me your AI generated code. Talk tech, not drama.

  15. You are right, that is an oversight on my part. Throwing in the constructor makes much more sense than throwing upon access.

    Thanks!

  16. That does not work as advertised.

    If you leave an agent for hours trying to increase coverage by percentage without further guiding instructions you will end up with lots of garbage.

    In order to achieve this, you need several distinct loops. One that creates tests (there will be garbage), one that consolidates redundant tests, one that parametrizes repetitive tests, and so on.

    Agents create redundant tests for all sorts of reasons. Maybe they're trying to hit a hard-to-reach line and leave several attempts behind. Or maybe they "get creative" and try to guess what is uncovered instead of actually following the coverage report, etc.

    Less capable models are actually better at doing this. They're faster, don't "get creative" with weird ideas mid-task, and cost less. Just make them work one test at a time: spawn, write one test that verifiably increases overall coverage, exit. Once you reach a threshold, start the consolidation loop: pick a redundant pair of tests, consolidate, exit. And so on...

    Of course, you can use a powerful model and babysit it as well. A few disambiguating questions and interruptions will guide it well. If you want truly unattended runs, though, it's damn hard to get stable results.
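
    A minimal Python sketch of those loops, assuming hypothetical run_agent and measure callables wired to whatever agent CLI and coverage tool you actually use:

        """Sketch of the spawn-one-test-per-session loops described above.
        `run_agent`, `measure`, and `revert` are hypothetical stand-ins for your own tooling."""
        from typing import Callable

        def creation_loop(run_agent: Callable[[str], None],
                          measure: Callable[[], float],
                          revert: Callable[[], None],
                          target: float) -> None:
            """One test per spawn; keep the change only if overall coverage went up."""
            while measure() < target:
                before = measure()
                run_agent("Read the coverage report. Add exactly ONE test that "
                          "covers a currently uncovered line, then exit.")
                if measure() <= before:
                    revert()  # the attempt was garbage; throw it away

        def consolidation_loop(run_agent: Callable[[str], None], rounds: int) -> None:
            """One redundant pair per spawn, merged into a parametrized test."""
            for _ in range(rounds):
                run_agent("Find ONE pair of redundant tests and consolidate them "
                          "into a single parametrized test without lowering "
                          "coverage, then exit.")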

  17. Ditch the scroll.

    Pick: all previous "Pick" buttons become "Place". You choose one.

    Done. Simple, explicit, intuitive.

  18. There's something wrong with this vibe-coded stuff, any kind of it.

    _It limps faster than you can walk_, in simple terms.

    At each model release, it limps faster, but still can't walk. That is not a good sign.

    > Do we want this?

    No. However, there's a deeper question: do people even recognize they don't want this?
