see seppe.net and blog.macuyiko.com
- On the homepage it says "Sinmple" above "Export SQL", fyi
- A coin measurer is still my go-to explanation. Especially with most models having an inset for the coin to rest on / fit in. The hole itself is then just to quickly/easily get the coin out again with your finger.
With so many different coin sizes and types in the empire, I think this makes most sense.
Wikipedia also mentions this:
> Several dodecahedra were found in coin hoards, suggesting either that their owners considered them valuable objects, or that their use was connected with coins — as, for example, for easily checking coins fit a certain diameter and were not clipped.
- I've noticed that puzzles which CP-SAT's presolver can solve outright, so that the SAT search does not even need to be invoked, basically adhere to this (no backtracking, known rules), e.g.:
Together with validating that there is only one solution, you would probably be able to make the search for good boards more guided than random creation.

    #Variables: 121 (91 primary variables)
      - 121 Booleans in [0,1]
    #kLinear1: 200 (#enforced: 200)
    #kLinear2: 1
    #kLinear3: 2
    #kLinearN: 30 (#terms: 355)
    Presolve summary:
      - 1 affine relations were detected.
      - rule 'affine: new relation' was applied 1 time.
      - rule 'at_most_one: empty or all false' was applied 148 times.
      - rule 'at_most_one: removed literals' was applied 148 times.
      - rule 'at_most_one: satisfied' was applied 36 times.
      - rule 'deductions: 200 stored' was applied 1 time.
      - rule 'exactly_one: removed literals' was applied 2 times.
      - rule 'exactly_one: satisfied' was applied 31 times.
      - rule 'linear: empty' was applied 1 time.
      - rule 'linear: fixed or dup variables' was applied 12 times.
      - rule 'linear: positive equal one' was applied 31 times.
      - rule 'linear: reduced variable domains' was applied 1 time.
      - rule 'linear: remapped using affine relations' was applied 4 times.
      - rule 'presolve: 120 unused variables removed.' was applied 1 time.
      - rule 'presolve: iteration' was applied 2 times.
    Presolved satisfaction model '': (model_fingerprint: 0xa5b85c5e198ed849)
    #Variables: 0 (0 primary variables)
    The solution hint is complete and is feasible.
    #1 0.00s main

    a a a a a a a a a a *A*
    a a a b b b b *B* a a a
    a a *C* b d d d b b a a
    a c c d d *E* d d b b a
    a c d *D* d e d d d b a
    a f d d d e e e d *G* a
    a *F* d d d d d d d g a
    a f f d d d d d *H* g a
    *I* i f f d d d h h a a
    i i i f *J* j j j a a a
    i i i i i k *K* j a a a

- All of the above is true, but between solving quicker and admitting we gave context:
I do agree with you that an LLM should not always start from scratch.
In a way it is like an animal to which we have given the ultimate human instinct.
What has nature given us? Homo Erectus is 2 million years ago.
A weird world we live in.
What is context?
- Weirdly it has gotten so far that I have embedded this into my workflow and will often prompt:
> "Good work so far, now I want to take it to another step (somewhat related but feeling it too hard): <short description>. Do you think we can do it in this conversation or is it better to start fresh? If so, prepare an initial prompt for your next fresh instantiation."
Sometimes the model says that it might be better to start fresh, and prepares a good summary prompt (including a final 'see you later'), whereas in other cases it assures me it can continue.
I have a lot of notebooks with "initial prompts to explore forward". But given the sycophancy going on as well as one-step RL (sigh) post-training [1], it indeed seems AI platforms would like to keep the conversation going.
[1] RL in post-training has little to do with real RL and just uses one-shot preference mechanisms within an RL-inspired training loop. There is very little work on long-term preferences or conversations, as that would increase requirements exponentially.
- A bit of a rant, but this is the kind of fact checking I wish the media and all our EU "trusted sources" would have jumped on instead of going for the most trivial and idiotic cases only a toddler (or a journalist) would get stumped by. (Example: recent posts on Tiktok 'claiming to be images from Pakistan but taken from Battlefield 3...' again. Who is impressed or even surprised by this kind of investigation?)
Much more interesting, but also with more effort required, so of course it never happens.
It would have a more beneficial societal effect, because it is this kind of neutrally written, deeply investigative article that would truly make people capable of self-discovering "maybe I should question things a bit more".
- The model seems to be viewable here:
https://netron.app/?url=https://madebyoll.in/posts/world_emu...
- From an age perspective (but the crowd here will not like that): before, I trusted that I could always find it back, so I didn't need to save it. Now I can't anymore, but I don't care so much.
- I am not so sure, but indeed it is perhaps also a sad realization.
You compare this to "a human" but also admit there is a high variation.
And I would say there are a lot of humans being paid ~$3,400 per month. Not for a single task, true, but honestly for no value-creating task at all. Just for their time.
So what about we think in terms of output rather than time?
- Some more interesting approaches in the same space:
- https://github.com/openai/evolution-strategies-starter
- https://cloud.google.com/blog/topics/developers-practitioner...
And perhaps closest:
- https://weightagnostic.github.io/
Which also showed that you can make NNs weight-agnostic and just let the architecture evolve using a GA.
Even though these approaches are cool, and NEAT is even somewhat easier to implement than getting started with RL (at least, that is my impression based on so many AI YouTubers starting with NEAT first), they never seemed to fully take off. Although metaheuristics are still a good tool to know about, IMO.
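For anyone who hasn't seen one: the evolutionary loop these methods build on is tiny. Here is a minimal sketch on the toy OneMax problem (maximize the number of 1-bits); all parameters are arbitrary choices of mine, and real NEAT additionally evolves topology and uses speciation.

```python
import random

def evolve(genome_len=32, pop_size=60, generations=150, seed=0):
    """Minimal genetic algorithm: tournament selection, one-point
    crossover, single-bit mutation, on the OneMax toy problem."""
    rng = random.Random(seed)
    fitness = lambda g: sum(g)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]

    def select():
        # Tournament of size 2: the fitter of two random individuals wins.
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, genome_len)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            child[rng.randrange(genome_len)] ^= 1   # point mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
print(sum(best))  # at or near the maximum of 32
```

The whole trick is that selection pressure plus variation is enough; there is no gradient anywhere, which is exactly why these methods are so easy to get started with.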
- A few weeks ago I was planning to design a model I could send to a local 3d printer to replace a broken piece in the house for which I knew it would be impossible to find something that would fit exactly.
I looked around through a couple of open source/free offerings and found them all frustrating. Either the focus on ease of use was too limiting, the focus was too much on blob-like, clay-style modeling rather than strong parametric models (many online tools), they were too pushy about making you pay, or the UI was not intuitive (FreeCAD).
OpenSCAD was the one which allowed me to get the model done, and I loved the code-first, parametric-first approach and way of thinking. That said, I also found POV-Ray enjoyable to play around with back in the 2000s. Build123D looks interesting as well, thanks for recommending it.
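To illustrate that code-first, parametric mindset: here is a small Python sketch that emits OpenSCAD source for a hypothetical replacement part (the part and all dimensions are made up; you would normally just write the .scad directly, but generating it from a script shows how naturally parameters flow through).

```python
def bushing_scad(outer_d=22.0, inner_d=8.2, height=12.0, segments=96):
    """Emit OpenSCAD source for a simple parametric bushing
    (hypothetical dimensions, in mm): an outer cylinder with an
    inner cylinder subtracted to form the bore."""
    return (
        f"$fn = {segments};\n"
        "difference() {\n"
        f"    cylinder(h = {height}, d = {outer_d});\n"
        # Extend the bore past both faces to avoid coplanar surfaces.
        f"    translate([0, 0, -1]) cylinder(h = {height + 2}, d = {inner_d});\n"
        "}\n"
    )

print(bushing_scad())
```

Change one parameter, re-render, re-print: that tight loop is what made the code-first approach click for me.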
- I follow RL from the sides (I have dabbled with it myself), and have seen some of the cool videos the article also lists. I think one of the key points (and a bit of a personal nitpick) the article makes is this:
> Thus far, every attempt at training a Trackmania-playing program has trained the program on one map at a time. As a result, no matter how well the network did on one track, it would have to be retrained - probably significantly retrained
This is a crucial aspect when talking about RL. Most of the Trackmania AI attempts focus on one track at a time, which is not really a problem, since the goal is, given an individual track, to outperform the best human racers.
However, it is this nuance that a lot of more business-oriented users don't get when being sold on some fancy new RL project. In the real world (think self-driving cars), we typically want agents that are far more able to generalize.
Most of the RL techniques we have do rather well in these kinds of constrained environments (in a sense they eventually start overfitting on the given environment), but making them behave well in more varied environments is way harder. A lot of beginner RL tutorials also fail to make this very explicit, and will e.g. show how to train an agent to find the exit in a maze without ever trying it on a newly generated maze :).
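A minimal illustration of that maze pitfall (tabular Q-learning; the mazes and all hyperparameters are arbitrary choices of mine): train on one fixed 5x5 maze, then run the learned greedy policy on a different maze and see whether it still reaches the exit. It reliably solves the maze it trained on, while on the unseen maze the frozen table may well walk straight into a wall and get stuck.

```python
import random

N, START, GOAL = 5, (0, 0), (4, 4)
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action, walls):
    """Deterministic gridworld: bumping into a wall or the edge means staying put."""
    r, c = state
    nr, nc = r + MOVES[action][0], c + MOVES[action][1]
    if not (0 <= nr < N and 0 <= nc < N) or (nr, nc) in walls:
        nr, nc = r, c
    return (nr, nc), (1.0 if (nr, nc) == GOAL else -0.01), (nr, nc) == GOAL

def q_learn(walls, episodes=4000, alpha=0.5, gamma=0.95, eps=0.2, seed=1):
    """Vanilla tabular Q-learning with epsilon-greedy exploration."""
    rng, Q = random.Random(seed), {}
    for _ in range(episodes):
        s, done, t = START, False, 0
        while not done and t < 100:
            a = (rng.randrange(4) if rng.random() < eps
                 else max(range(4), key=lambda b: Q.get((s, b), 0.0)))
            s2, r, done = step(s, a, walls)
            target = r + gamma * max(Q.get((s2, b), 0.0) for b in range(4))
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (target - Q.get((s, a), 0.0))
            s, t = s2, t + 1
    return Q

def greedy_run(Q, walls, limit=50):
    """Roll out the greedy policy; True if it reaches the goal in time."""
    s, done, t = START, False, 0
    while not done and t < limit:
        s, _, done = step(s, max(range(4), key=lambda b: Q.get((s, b), 0.0)), walls)
        t += 1
    return done

train_walls = {(1, 1), (1, 2), (1, 3), (3, 1), (3, 3)}
test_walls = {(0, 1), (1, 1), (2, 1), (3, 1)}  # a different, also solvable maze
Q = q_learn(train_walls)
print("solves training maze:", greedy_run(Q, train_walls))  # reliably True
print("solves unseen maze:  ", greedy_run(Q, test_walls))   # no guarantee at all
```

Nothing in the learned table encodes "how to navigate mazes", only "what to do in these exact 25 states", which is precisely the overfitting-on-the-environment point above.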
- Very disheartening. HF is doing so much good in the AI community, much more than regulators understand at the moment.
- Have a look at https://arxiv.org/pdf/2306.11695.pdf which also uses the norm of the inputs based on calibration data
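If I read that paper (Wanda) right, the core scoring rule is just the elementwise weight magnitude times the L2 norm of the corresponding input feature over a small calibration batch; a toy sketch of my reading of it:

```python
import math

def wanda_scores(W, X):
    """Wanda-style pruning scores (my reading of arXiv:2306.11695):
    score[i][j] = |W[i][j]| * ||X_j||_2, where ||X_j||_2 is the norm
    of input feature j across the calibration samples in X.
    Weights with the lowest scores are candidates for pruning."""
    n_in = len(W[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in X)) for j in range(n_in)]
    return [[abs(w) * norms[j] for j, w in enumerate(out_row)] for out_row in W]

# Toy example: a small-magnitude weight can still rank as important if
# its input feature is consistently large, and vice versa.
W = [[1.0, -2.0], [0.5, 0.1]]
X = [[1.0, 0.0], [0.0, 2.0]]  # two calibration samples, two features
print(wanda_scores(W, X))  # [[1.0, 4.0], [0.5, 0.2]]
```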
- Wow, this brought back memories. I could swear I wrote a blog post about this years ago but couldn't find it.
A quick search on the local file system revealed `vnccrawl/crawler.py` from 2016 [1] using what looks like a Shodan data dump and calling out to `vncviewer.exe`. I remember randomly logging into some instances and also seeing a lot of cool random systems, including a lot of them controlling industrial systems. Guess I never ended up writing that post.
One would think that on today's Internet it would take only a couple of seconds for those to get compromised, but security through obscurity, perhaps?
[1]: A random tip from that file: Using a password of 12345678 gives access to way more 'weakly secure' instances.
- This reminds me of a short story by Ken Liu, The Message, which details a xeno-archaeologist digging into a place full of radiation. The main character doesn't get the warning message until it is too late and almost loses his daughter.
Googling it now, it seems at one point it was going to get adapted to film [1], but it looks like that went nowhere.
[1]: https://reactormag.com/ken-lius-the-message-to-get-big-scree...
- v1 used a very limited (albeit very easy and already quite impressive) form of transfer learning: e.g. take a pretrained network's 1000-dim output vectors (since the original was trained on ImageNet) for a bunch of images belonging to three sets, and then just use k-NN to predict which set a "new" image falls into.
v2 does actually finetune the weights of a pretrained network. At the time, it was a nice showcase of how fast JS ML libraries were evolving.
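The v1 approach fits in a few lines; a sketch with made-up 2-dim toy vectors standing in for the pretrained network's 1000-dim outputs (labels and numbers are invented for illustration):

```python
import math
from collections import Counter

def knn_predict(embeddings, labels, query, k=3):
    """Classify a new embedding by majority vote among its k nearest
    labeled neighbors -- the 'frozen features + k-NN' flavor of
    transfer learning. Distances are plain Euclidean."""
    nearest = sorted(zip(embeddings, labels),
                     key=lambda pair: math.dist(pair[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy stand-ins for pretrained-network output vectors.
feats = [[0.0, 0.0], [0.1, 0.9], [0.2, 0.1], [5.0, 5.0], [5.1, 5.9]]
sets_ = ["cats", "cats", "cats", "dogs", "dogs"]
print(knn_predict(feats, sets_, [0.2, 0.3]))  # cats
print(knn_predict(feats, sets_, [5.0, 5.5]))  # dogs
```

No training at all: the pretrained features do the heavy lifting, which is exactly why it was such an easy demo.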
- Came here to cite your work; I even still mention "CloudForest" in my slides as "an interesting implementation that is also capable of handling NaNs in DTs in a slightly different way." Crazy that this has already been 10 years.
- Very interesting, indeed, they seem to be driven by better fluid simulations... remarkable that they find their way into games. I was always under the impression that Navier Stokes was hard in 3d, but it does seem like there are performant solutions now that are easily offloaded to the GPU, e.g. https://github.com/chrismile/cfd3d (and NVIDIA also has some blog posts about it).
Edit: I also just found this: https://www.youtube.com/live/569oSOSoKDc?si=8V5buRMoI3IKqLQp... -- which is very close to what you describe and fully matches the kind of particle systems I was hinting at, thanks!
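For anyone curious what the core of those game-style solvers looks like: the diffusion step in Jos Stam's classic "Real-Time Fluid Dynamics for Games" approach is just a small Gauss-Seidel relaxation. A pure-Python toy version (real implementations vectorize this or run it on the GPU, and combine it with advection and projection steps):

```python
def diffuse(x0, diff=0.1, dt=0.1, iters=20):
    """One implicit diffusion step on a square density grid, solved
    with Gauss-Seidel iteration (boundary cells held fixed). x0 is a
    list of lists; returns a new grid and leaves x0 untouched."""
    n = len(x0)
    a = dt * diff * n * n
    x = [row[:] for row in x0]
    for _ in range(iters):
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                # Each cell relaxes toward the average of its neighbors.
                x[i][j] = (x0[i][j] + a * (x[i - 1][j] + x[i + 1][j]
                                           + x[i][j - 1] + x[i][j + 1])) / (1 + 4 * a)
    return x

# A density spike in the middle spreads to its neighbors.
grid = [[0.0] * 5 for _ in range(5)]
grid[2][2] = 1.0
out = diffuse(grid)
print(round(out[2][2], 3), round(out[2][3], 3))  # center drops, neighbor rises
```

The implicit formulation is what makes it unconditionally stable, which is why it works at game frame rates.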
- It's fun how we are so quick to assign meaning to the way these models act. This is of course due to training, RLHF, available tool calls, the system prompt (all mostly invisible) and the way we prompt them.
I've been wondering about a new kind of benchmark: how would one extract these more intangible tendencies from models, rather than from well-controlled "how good is it at coding" style environments? This is the main reason why I pay less and less attention to benchmark scores.
For what it's worth: I still converse best with Claude when doing code. Its reasoning sounds like me, and it finds a good middle ground between conservative and crazy, being explorative and daring (even though it too often exclaims "I see the issue now!"). If Anthropic lifted the usage limits I would use it as my primary. The CLI tool is also better: e.g. Codex with 5.1 gets stuck in PowerShell scripts whilst Claude realizes it can use Python to do the heavy lifting, but I think that might be largely due to me being mainly on Windows (still, Claude works best, quickly realizing what environment it lives in rather than trying Unix commands or PowerShell invocations that don't work because my PowerShell is outdated).
Qwen is great in an IDE for quick auto-complete tasks, especially given that you can run it locally, but even the VSCode copilot is good enough for that. Kimi is promising for long running agentic tasks but that is something I've barely explored and just started playing with. Gemini is fantastic as a research assistant. Especially Gemini 3 Pro points out clear and to the point jargon without fear of the user being stupid, which the other commercial models are too often hesitant to do.
Again, it would be fun to have some unbiased method to uncover some of those underlying personas.