I'm making this; there's a free demo if you want to try it. https://store.steampowered.com/app/1537490/Tentacle_Typer/
- > Both people in the conversation imagine that the other 'gets it' - a delusory and false assumption
'Getting it' isn't an all-or-nothing thing. It would only be an illusion if taken to an extreme.
That some people in your life get you better than others, more quickly and with fewer words, is a fact of life. Comparative human connection bandwidth can be estimated from vibes, history, and outcomes.
- > Such reachouts are very very rare unless your software has gone viral in the right circles
Another anecdote: I had job offers coming out of my ears while I was posting videos of my indie game on Twitter. Only one video had substantial reach, near the end of my time actively twittering. I think what helps is doing something as well as you can and being persistently visible.
- > that's a skill issue and not a fundamental property
This made me laugh.
You seem like you may know something I've been curious about.
I'm a shader author these days and haven't been a data scientist for a while, so my vocabulary is going to be a bit distorted.
Say you've got a trained neural network living in a 512x512 structured buffer. It's doing great, but you get a new video card with more memory, so you can afford to migrate it to a 1024x1024. Is the state-of-the-art approach to retrain with the same data but bigger initial parameters, or are there methods that smear the old weights over a larger space to get a leg up? Does anything like this accelerate training time?
... Can you upsample a language model like you can low-res anime profile pictures? I wonder what the made-up words would be like.
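To make the question concrete, here's roughly what I mean by "smearing," as a back-of-the-envelope sketch in numpy. The names and shapes are made up for illustration, and real schemes (Net2Net-style widening, or the positional-embedding interpolation people do when changing a vision transformer's input resolution) are more careful about preserving what the network actually computes:

```python
# Rough sketch: "smear" a trained 512x512 weight buffer over a 1024x1024 one
# by bilinear interpolation, as a warm start for the bigger network.
# Purely illustrative -- not a drop-in for any particular framework.
import numpy as np

def upsample_weights(w_small: np.ndarray, new_shape=(1024, 1024)) -> np.ndarray:
    """Bilinearly interpolate a 2D weight buffer to a larger resolution."""
    old_h, old_w = w_small.shape
    new_h, new_w = new_shape

    # Fractional sample positions in the old buffer for each cell of the new one.
    ys = np.linspace(0, old_h - 1, new_h)
    xs = np.linspace(0, old_w - 1, new_w)

    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, old_h - 1)
    x1 = np.minimum(x0 + 1, old_w - 1)
    fy = (ys - y0)[:, None]
    fx = (xs - x0)[None, :]

    # Standard bilinear blend of the four neighbouring weights.
    top = w_small[np.ix_(y0, x0)] * (1 - fx) + w_small[np.ix_(y0, x1)] * fx
    bot = w_small[np.ix_(y1, x0)] * (1 - fx) + w_small[np.ix_(y1, x1)] * fx
    return top * (1 - fy) + bot * fy

w_old = np.random.randn(512, 512).astype(np.float32)   # stand-in for the trained buffer
w_init = upsample_weights(w_old)                        # warm start for the 1024x1024 version
```

For a dense layer this obviously changes the function the network computes, so at best it's an initialization you'd still retrain or fine-tune from.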
- I currently co-run a guild with >150 members on the WoW anniversary server.
Real people play these games, and almost any weirdness you can imagine existing in a person in real life gets brought into them.
Often it's magnified by the pseudo-anonymity. We've only been "big" for a few months, and a handful of things we've already run into would make you not want to visit several Western Hemisphere countries if they were held to the same standard. :)
- I wouldn't bet against him. "The Bitter Lesson" may imply an advantage for someone who has historically been at the tip of the spear for squeezing the most juice out of GPU-hosted parallel computation.
Graphics rendering and AI live on the same pyramid of technology. A pyramid with a lot of bricks with the initials "JC" carved into them, as it turns out.
- This is the first I've heard of OKLCH. Here's the perspective of somebody who writes a lot of shaders: it looks similar to hue/saturation/value encoding, which has tons of uses, but with saturation replaced by "chroma", which seems to adjust saturation nonlinearly, probably based on some perceptual study that makes it extra spicy and high science.
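If I'm reading Ottosson's write-up right, the chroma/hue pair is just the polar form of OKLab's a/b axes; the perceptual heavy lifting lives in the OKLab-to-sRGB matrix pipeline, which this sketch deliberately leaves out:

```python
# Minimal sketch of the OKLCH <-> OKLab relationship: chroma is the radius
# in the a/b plane and hue is the angle. Converting OKLab onward to sRGB is
# a separate matrix + cube-root pipeline, omitted here.
import math

def oklch_to_oklab(L: float, C: float, h_deg: float) -> tuple[float, float, float]:
    h = math.radians(h_deg)
    return L, C * math.cos(h), C * math.sin(h)

def oklab_to_oklch(L: float, a: float, b: float) -> tuple[float, float, float]:
    return L, math.hypot(a, b), math.degrees(math.atan2(b, a)) % 360.0

print(oklch_to_oklab(0.7, 0.1, 150.0))  # a soft greenish Lab triple
```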
- Reminds me of something from around 2007(?) called (fluxus). It was a Lisp text editor with a render target in the background and a nice standard library for making 3D objects appear or sound effects play. Everything was constantly evaluating/hotloading in the background.
So much fun. I can't find any of the videos in a quick search, so maybe they're lost to time. Great performative lisping in them hills, though.
EDIT: I did find the old page on the wayback machine. https://web.archive.org/web/20120810224932/http://www.pawfal...
- There are quite a few monocular depth estimation models out there, and have been for years. This one looks pretty good. That said, the temporal stability seems pretty wobbly; I don't think I'd use it for a self-driving car.
The most impressive example was the point cloud they generated from the extreme fisheye lens; that was nice.
Predicting that the background on Cloud City was a flat matte painting is also impressive in a way. It does seem to collapse all far-field objects into a single plane, which is a decent compromise for many things.
- It's a cute package, but that resolution is wild. 24x24? I suppose it might have a place in manufacturing automation tasks.
I don't know where you'd have room for one of these but no room for something like the D435, which has a resolution of 1280 × 720 on the depth side plus an RGB sensor. Maybe robotic vacuum cleaners or something.
Maybe this is a folksy anecdote about a junior developer working for John Email, designing the protocol for trinary Morse code over a token ring of twisted-pair barbed wire. An RFC for that kind of project would be natural.
In the spirit of this, I propose we start calling things like flowcharts, SVG images of digraphs, and UML diagrams "articles of war", just to spice things up.