- Do you think prices will go up for Macs?
- What's the problem?
- Really nice, thanks for the self-promo! Will definitely keep an eye on your project.
What is the ideal final state you want to achieve? Do you agree that data capture is the main issue here?
My latest experiments:
Two days ago I started capturing screenshots of my Mac every 5 seconds and later using them to approximately tell what I was doing - a lot of issues with this approach (rough sketch of the capture loop below).
Yesterday I set up ActivityWatch. It captures a lot of stuff from the get-go. TBD if it can capture a background YouTube video playing in addition to the active tab.
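For reference, here is roughly what my capture loop looked like - a minimal sketch assuming macOS's built-in screencapture tool; the interval and output folder are just placeholders:

    import subprocess
    import time
    from datetime import datetime
    from pathlib import Path

    OUT_DIR = Path.home() / "screen_log"  # placeholder output folder
    OUT_DIR.mkdir(exist_ok=True)

    while True:
        # Timestamped filename so the frames sort chronologically.
        name = datetime.now().strftime("%Y%m%d_%H%M%S") + ".jpg"
        # -x suppresses the shutter sound, -t jpg keeps the files small.
        subprocess.run(["screencapture", "-x", "-t", "jpg", str(OUT_DIR / name)])
        time.sleep(5)  # the 5-second interval mentioned above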
The main value I want to extract is being able to see what the plan was vs. what actually happened. And if I can make what actually happens closer to the plan - that's a WIN.
But capturing what you are thinking about, what you are working on, etc. is quite challenging - it often happens offline, on the phone, on another computer, in a messenger, in email, etc.
My quick take: companies have been closing off their data for decades, and now it will come back to bite them - AI needs as full a context as possible across devices, services, and programs to be as powerful as it can be, and the current architecture works against that. Maybe one day AI will be so powerful that it will ETL all this data for itself, but for now it is painful to try to build something like this.
- The real world might be boring, but it's very useful. Especially for companies that move boxes around, not fantasy characters.
- That's exactly what I think will happen. 3D is the endgame for AI. 3D models are deterministic objects that provide continuity, while AI does non-deterministic abstract generation (thinking) and plans actions for these 3D models.
Recent news of major AI scientists starting "world AI" companies confirms this trend.
So 3D will soon become a much more important tech than it is even today.
- Why do you think so? As far as I can tell, a lot of GPU-powered data centers are being built.
- There is CAD Sketcher, an extension for Blender that could help unite both worlds.
I'd like to hear someone's perspective on how difficult it would be to unify OpenUSD and CAD file formats so that they're portable between programs.
- OpenUSD got just one mention in the release notes. I'm also wondering what the state of this feature is.
May I ask your opinion, as an industry insider, on what makes for good and bad OpenUSD support?
- Do you think the next generation of Roblox will be a dominant game engine?
- How is Evangelion still running if the series was finished and even an alternative ending was produced?
- Is there any good test scene one can download to "test" the capabilities of different software? Sorry if it's an ignorant question.
- You also can't regenerate with a new point of origin. Generate -> stop... no way to continue?
Update - it is a paid feature.
- This is great. Can I use it with an existing scan of my room to fill in the gaps, rather than a random world?
Update - yes you can. To be tested.
- Can it be used with AI to create your personal context?
- We think alike. Have you tried automatically replacing the point cloud of a white wall with a generic white wall?
- True, but a human child is taught a language; they don't come with it. It's an important part of how our brains form.
- Well, you can explain E=mc² to a plant in your room in a couple of sentences, but a plant can't explain to you how it perceives the world.
If cows were eating grass and conceptualising what infinity is, what their role in the universe is, how they were born, and what would happen after they die... we would see a lot of jumpy cows out there.
- Agreed. Everything that looks like intelligence to ME is intelligent.
My measurement of outside intelligence is limited by my own intelligence, so I can only tell when something is less intelligent than me. For example, an industrial machine vs. a human worker: the human worker is infinitely more intelligent than the machine, because the human worker can do all kinds of interesting stuff - this metaphorical "human worker" did everything around us, from laying a brick to launching a man to the Moon.
....
Imagine a super-future where humanity created nanobots, and they ate everything around them. Now, instead of Earth, there is just a cloud of them.
These nanobots were clever and could adapt, and they had all the knowledge that humans had and even more (as they were eating the Earth, the swarm was running global science experiments to understand as much as possible before the energy ran out).
Once they ate the last bite of our Earth (an important note here: they left an optimal amount of matter to keep running experiments; humans were kept in a controlled state and were studied to increase the Swarm's intelligence), they launched the next stage: a project the grand architect named "Optimise Energy Capture from the Sun".
The nanobots re-created the most efficient way of capturing the Sun's energy - ancient plants, which the Swarm had studied for centuries. The Swarm added some upgrades on top of what nature came up with, but it was still built on what nature had figured out by itself: a perfect plant to capture the Sun's energy. Each one a perfect copy of itself, plus adaptive movements based on its geolocation and time (which made every one of them unique).
For the plants, the nanobots needed water, so they created efficient oceans to feed them. They added clouds and rain as a transport mechanism between the oceans and the plants... etc., etc.
One night the human, whom you already know by the name "Ivan the Liberator" (back then everyone called him just Ivan), wasn't asleep at his usual hour. Suddenly all the lights went off and he saw a spark on the horizon - the horizon that was strictly prohibited to approach. He took his rifle, jumped into a truck, and raced to the shore - the closest point to the spark's vector.
When he approached, there was no horizon or water. A wall of dark glass-like material, edges barely noticeable, just 30 cm wide. To the left and right of the 30-cm wall - an image as real as his hands, of water and sky. At the top of the wall - a hole. He used his gun to hit the lit part of the wall - it wasn't very thick, but every time he hit it, it regenerated very quickly. But when he hit the black wall, it shattered, and he saw a different world - a world of plants.
He stepped into the forest, but these plants were behaving differently. This part of the Swarm wasn't supposed to face a human, so these nanobots had never seen one and didn't have optimised instructions for what to do in that case. They started reporting the new values back to the main computer and performing default behaviour until updated software arrived from an intelligence center of the Swarm.
The human observed a strange thing - the plants were smoothly flowing around him to keep a safe distance, the way water steps away from your hands in a pond.
"That's different" thought Ivan, extended his hand in a friendly gesture and said - Nice to meet you. I'm Ivan.
....
In this story, a human sees a forest of plants and has no clue that it is a swarm with intelligence far greater than his own. To him it looks like simple repetitive action that doesn't look random -> let's test how intelligent the outside entity is -> if the entity wants to show its intelligence, it answers communication -> if the entity wants to hide its intelligence, it pretends not to be intelligent.
If the Swarm decides to show you that it is intelligent, it can show you intelligence up to your level. It won't be able to explain everything it knows or understands to you, because you are limited by your hardware. The Swarm's only limit is the computation power it can get.
- How is it shady?
It beats WA on UI in most cases (especially on desktop), has an open-source client, and has much better groups/channels for one-to-many and many-to-many communication. It has bot support like I've never seen on WA.
- Amazing project. In the era of AI, I can see software like this being used daily.
- If the data changes, how would a torrent client pick it up and download changes?
- Is it realistic to have enough power in such a small form factor for this effect?
- Interesting idea, but I would say it is orders of magnitude harder than an integrated system. Vibration in such a compact space with a very sharp blade... I want this system to be stable around me.
If this idea becomes popular, I would say knife producers could build their own versions into new models, or retrofit old knives at the shop.
- That escalated quickly. Thank you!
- I don't use knives in my kitchen; my romantic partner does. Yesterday I decided to cut some tomatoes, only to find out that all the knives are dull.
She never said anything, so I didn't know. Why?
Because she is just "used" to it; to her, these knives were just fine, so she never thought about sharpening them in the first place.
I will take those knives to a pro who will sharpen them for me: in the rental I'm staying in I don't have the tools, and as I said in another comment, I don't have a pain-free process for it since I don't do it often.
- I admire you. You are in the minority, you know that, right?
There's no slot in my schedule at the moment that says "sharpen the knives". So for me, it would be amazing if someone solved this problem in a radical way.
I sharpen the knives only sporadically, and since it isn't in the "skills" section of my brain, I always have to "figure out" the sharpening process again.
- It doesn't work for me in Chrome either. Oh, the irony.
- Maybe. I don't know. Thanks for answering - I will check out Excalidraw!
- How did you create such amazing animations with SVGs? Cool docs.
- Valid point; however, the promise of AI is that it will be able to manufacture a metaphorical "plane" for each and every prompt a user inputs, i.e. give 100% overall reliability by using all kinds of techniques (testing, decomposing, etc.) that intelligence can come up with.
So until these techniques are baked into the model by OpenAI, you have to come up with them yourself.
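A toy sketch of what I mean - generate() here is a hypothetical stand-in for whatever model API you use; the point is the test-and-retry pattern wrapped around it:

    import json

    # Hypothetical stand-in for your model API of choice.
    def generate(prompt: str) -> str:
        raise NotImplementedError("call your model here")

    def reliable_answer(prompt, check, max_tries=5):
        """Retry generation until a candidate passes a caller-supplied test."""
        for _ in range(max_tries):
            candidate = generate(prompt)
            if check(candidate):  # the "testing" part
                return candidate
        raise RuntimeError("no candidate passed the test")

    # The "decomposing" part: break reliability into a concrete, checkable
    # property - e.g. the output must parse as JSON.
    def is_valid_json(s: str) -> bool:
        try:
            json.loads(s)
            return True
        except ValueError:
            return False

    # answer = reliable_answer("Return the result as JSON: ...", is_valid_json)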