Email: pradeepbs at gmail dot com
- Am using Claude Code as an approximation here. Two years down the line, the tooling around analytics will get integrated into AI assistants, and they will absolutely be able to figure out unused features.
- What will eventually pan out is that senior devs will be replaced with junior devs powered by AI assistants, simply for the reasons you stated. They will ask the dumb but important questions, and after a while, they will even solve them.
Now that their minds are free from routine and boilerplate work, they will start asking more 'whys', which will be very good for the organization overall.
Take any product: nearly 50% of the features are unused, and it's genuine engineering waste to maintain them. A junior dev spending 3 months on the codebase with Claude Code will figure out these hidden, unwanted features and cull them, or ask for them to be culled.
It'll take a while to navigate the hierarchy but they'll figure it out. The old guard will have no option but to move up or move out.
- They had it at 9.99 for the first year.
- There is no competing product for GPT Voice. Hands down. I have tried Claude and Gemini; they don't even come close.
But voice is not a huge traffic funnel; text is. And the verdict is more or less unanimous at this point: Gemini 3.0 has outdone ChatGPT. I unsubscribed from GPT Plus today. I was a happy camper until last month, when I started noticing deplorable bugs.
1. Conversation contexts are getting intertwined. Two months ago, I could ask multiple random queries in a conversation and get correct responses, but for the last couple of weeks it has been a harrowing experience, having to start a new chat window for almost any change in topic.
2. I once asked ChatGPT to treat me as a co-founder and hash out some ideas. Now, for every query, I get a 'co-founder type' response. Nothing inherently wrong, but annoying as hell. I can live with the other end of the spectrum, where Claude doesn't remember most of the context.
Now that Gemini Pro is out: yes, the UI lacks polish and you can lose conversations, but low-latency search and a nearly free one-year subscription are the clincher. I am out of ChatGPT for now, 5.2 or otherwise. I wish them well.
- Product doesn't see the point of engineers being engaged and treats the engineering team like an in-house outsourcing shop.
Because they want to feel superior: the 'this was my idea and you executed on my idea' nonsense. Their answer to most 'why are we doing this?' questions is 'trust me bro'. I am perhaps generalizing, and there are outlier product managers who have earned the right to a 'trust me bro', but most haven't.
This PM behaviour will never change. Engineers have said enough is enough and are now taking over product roles, in essence eliminating the communication gap.
- Precisely.
- I would take ChatGPT/Claude over an IBM consultant any day. I worked at IBM.
- "I don't "understand" how LLMs "understand" anything."
Why does the LLM need to understand anything? What today's chatbots have achieved is a software engineering feat. They have taken a stateless token-generation machine, one that has compressed the entire internet to predict the next token, and 'hacked' a whole state-management machinery around it. The end result is a product that feels like another human conversing with you and remembering your last birthday.
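A minimal sketch of that state-management hack in Python, assuming a generic stateless completion function (llm_complete is a hypothetical stand-in, not any specific vendor API). The 'memory' is nothing more than re-serializing accumulated state into every prompt:

    # Hypothetical stand-in for a stateless completion API: it sees
    # only the prompt text handed to it on this call, nothing else.
    def llm_complete(prompt: str) -> str:
        return "(model reply)"

    class Chat:
        def __init__(self, profile_facts: list[str]):
            self.profile = profile_facts       # e.g. ["birthday: 14 May"]
            self.history: list[str] = []

        def ask(self, user_msg: str) -> str:
            self.history.append(f"User: {user_msg}")
            # The whole accumulated state is replayed into the prompt
            # on every single turn.
            prompt = "\n".join(self.profile + self.history) + "\nAssistant:"
            reply = llm_complete(prompt)
            self.history.append(f"Assistant: {reply}")
            return reply

    chat = Chat(["birthday: 14 May"])
    print(chat.ask("When is my birthday?"))  # it 'remembers' only because
                                             # we re-sent the fact ourselves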
Engineering will surely get better, and while purists can argue that a new research perspective is needed, the current growth trajectory of chatbots, agents, and code-generation tools will carry the torch forward for years to come.
If you ask me, this new AI winter will thaw in the atmosphere even before it settles on the ground.
- This means there's a certain inertia: it can be better to handle the interim reports the same, even if they've been biased one way for several years, than to introduce a change that makes the numbers not comparable to history.
This is a very interesting point. So if BLS suddenly became more accurate, all the agencies would have to re-tune their own biases and corrections, which could lead to short-term discrepancies.
What one sees as inefficiency is actually efficient from a totally different lens.
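A toy illustration of that re-tuning problem, with made-up numbers: a downstream consumer that has learned to offset a persistent bias in preliminary figures starts distorting the data the moment the source fixes itself.

    # Toy example (made-up numbers): a consumer of preliminary data
    # learns a correction from years of observed historical bias.
    HISTORICAL_BIAS = 30_000            # prelim figures ran ~30k hot

    def adjust(preliminary: int) -> int:
        return preliminary - HISTORICAL_BIAS

    print(adjust(150_000))  # 120_000: roughly right while the bias persists
    # If the source suddenly publishes unbiased numbers, the same
    # stale correction now over-corrects instead:
    print(adjust(120_000))  # 90_000: the upstream 'fix' made downstream worse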
- Create a punishment system? Unless companies report data back to BLS very fast, they pay a big fee or are taxed higher. Small shops would hate it.
Or incentivize companies to report accurate data fast. Payroll management systems could be plugged in in real time, but that costs money, and yeah, small businesses are not going to be happy. So I think incentives work better than punishment.
- The earlier reports are intentionally noisier because there is value in being fast and then revising later, and everyone who uses this data is aware of that.
I get it, and yeah, my tone is very exaggerated. I don't think anyone at BLS should be fired, and whoever is suggesting that does not understand how public institutions work.
I am just curious why there is so much of a discrepancy. This has pretty much been the status quo at BLS for a long time: they issue numbers and then revise them later. However, you'd expect the revisions to stay within a moderate error percentage.
Also, how does this retroactive change help anyone involved? OK, the new job numbers reflect a gloomier past (or a more vibrant one), but how does that help everyone who is so focused on 'what's going to happen tomorrow'?
I retract my stance about BLS being intentionally corrupt - that's uncalled for.
- What I really fail to understand is how departments like BLS can screw up to this extent. Either they are grossly incompetent or they are intentionally corrupt.
The data covers the period from March 2024 to March 2025 and trims the average monthly job gains seen during this period (roughly the last 10 months of Joe Biden's presidency and the first two months of Trump's) from a monthly average of 147,000 to about 71,000.
A roughly 50% error, and it's more or less consistent. How can a department have this error rate and still keep its job? I understand the data collection mechanism is not the most sophisticated, but even accounting for that, such a consistent error percentage is not to be overlooked.
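For the record, the arithmetic behind that figure:

    (147,000 - 71,000) / 147,000 ≈ 0.52, i.e. a roughly 52% downward revision of the original estimate.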
I wonder why there is such a lack of accountability from agencies whose data pretty much feeds the world's economy.
- I spent a few weeks trying to build an alternative to self-attention that scales memory linearly, and I got surprisingly good results (rough sketch of the standard trick below). While in principle this makes a lot of sense, I am struggling to push the test accuracy above 86%.
Some of the alternatives I am considering next:
1. Diffusion with sparse attention layers.
2. Hierarchical diffusion: next-token diffusion combined with higher-order chunk diffusion.
Still figuring out the code and I would love any feedback on these approaches.
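For concreteness, here is a minimal sketch of one standard way to get linearly scaling memory: kernelized linear attention in the style of 'Transformers are RNNs' (Katharopoulos et al., 2020). This illustrates the general trick, not my exact code; all names here are made up.

    # Kernelized linear attention: a positive feature map phi() lets
    # the (T, T) attention matrix be replaced by running sums whose
    # size depends only on the head dimensions, not sequence length.
    import numpy as np

    def elu_plus_one(x):
        # phi(x) = elu(x) + 1, a common positive feature map
        return np.where(x > 0, x + 1.0, np.exp(x))

    def linear_attention(Q, K, V):
        # Q, K: (T, d); V: (T, d_v). Causal variant.
        phi_q, phi_k = elu_plus_one(Q), elu_plus_one(K)
        d, d_v = Q.shape[1], V.shape[1]
        S = np.zeros((d, d_v))   # accumulates phi(k_t) v_t^T
        z = np.zeros(d)          # accumulates phi(k_t) for normalization
        out = np.empty_like(V)
        for t in range(Q.shape[0]):
            S += np.outer(phi_k[t], V[t])
            z += phi_k[t]
            out[t] = (phi_q[t] @ S) / (phi_q[t] @ z + 1e-6)
        return out

    rng = np.random.default_rng(0)
    T, d = 16, 8
    y = linear_attention(rng.normal(size=(T, d)),
                         rng.normal(size=(T, d)),
                         rng.normal(size=(T, d)))
    print(y.shape)  # (16, 8), with O(d * d_v) state instead of O(T^2)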
- I first thought this was about Kevin Kelly. Then, somewhere midway, I thought I was reading an autobiography. It was only toward the latter half that I realized this is the author talking about Kevin Kelly and visiting his house.
Even though the language is very simple, the writing is quite convoluted.
- Currently, you train a VLA (vision-language-action) model for a specific pair of robotic arms, for a specific task. The end-actuator actions are embedded in the model. So let's say you train a pair of arms to pick up an apple: you cannot zero-shot it into picking up a glass. What you see in demos is the result of lots of training and fine-tuning (few-shot) on specific object types and with specific robotic arms or bodies.
The intermediate language embedding brings some generalization ability to the table, but not much. The vision -> language -> action translation is, how do I put this, brittle at best.
What these guys are showing is a zero-shot approach to new tasks in new environments with 80% accuracy. That is a big deal. Pi0 from Physical Intelligence is the best model to compare against, I think.
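To illustrate why the actions being embedded in the model ties a VLA to one embodiment, here's a toy sketch with hypothetical names (ArmSpec and action_head are mine, not any real robotics API): the action head decodes into one specific arm's joint space, so the learned mapping means nothing for a different arm.

    # Toy illustration: the trained action head assumes one arm's
    # action space; it is not a generic motor interface.
    from dataclasses import dataclass

    @dataclass
    class ArmSpec:
        name: str
        dof: int                       # degrees of freedom of this arm

    def action_head(features: list[float], arm: ArmSpec) -> list[float]:
        # Stand-in for a learned decoder: its weights were fit to emit
        # arm.dof joint targets with this arm's limits baked in.
        return features[: arm.dof]

    feats = [0.1] * 8                  # stand-in for fused vision+language features
    print(action_head(feats, ArmSpec("6-DoF trainer arm", dof=6)))
    # Reusing the same head on a 7-DoF arm isn't just a shape mismatch;
    # the learned feature-to-joint mapping itself no longer applies.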
- Ditto with my daughter. I don't really know if this helps kids get into programming, but she now types faster than I do, with much better accuracy.
- I guess I'll stick to drinking water. But I'm sure there's a reason why that's bad for me.
Here's one :)
Too much water also leaches salts out of your body. If you are already on a low-salt diet, this could be a problem. For us Indians, eating mountains of salt, it's a non-issue.