- Despite consensus forecasts predicting a slowdown to 1.8% growth, the US economy appears poised for acceleration in 2026 driven by a potent combination of aggressive fiscal and monetary loosening. Treasury Secretary Scott Bessent’s optimism is underpinned by the implementation of the "One Big Beautiful Bill Act," which delivers retroactive tax cuts, alongside a rebound in government spending following a record 43-day shutdown and the potential for the Supreme Court to invalidate certain tariffs, resulting in significant corporate refunds. This fiscal stimulus coincides with the Federal Reserve’s pivot to lower interest rates—a trend likely to intensify as President Trump seeks to appoint dovish leadership to the central bank. While this synchronized stimulus supports bullish stock market projections and complements favorable global conditions like low oil prices, it carries significant risks of reigniting inflation and spiking long-term bond yields; nevertheless, the absence of immediate shocks suggests the economy has ample scope to outperform expectations.
- By late 2025, Boston’s Kendall Square biotech hub faces a severe downturn marked by a "biotech winter" of plummeting venture capital and soaring lab vacancies. Caused by high interest rates, domestic policy uncertainties, and intensifying global competition, this crisis has triggered a significant talent exodus, leaving recent PhD graduates overqualified and underemployed as companies freeze hiring to cut costs. The contraction has stalled critical medical research and threatened Boston’s economic stability, though industry leaders remain cautiously optimistic that adaptation strategies—such as AI integration and renewed merger activity—could spark a recovery by 2026.
- This reads like a mid-life crisis. A few rebuttals:
1. Yes, humans cause enormous harm. That’s not new, and it’s not something a single technology wave created. No amount of recycling or moral posturing changes the underlying reality that life on Earth operates under competitive, extractive pressures. Instead of fighting it, maybe try to accept it and make progress in other ways?
2. LLMs will almost certainly deliver broad, tangible benefits to ordinary people over time, just as previous waves of computing did. The Industrial Revolution was dirty, unfair, and often brutal, yet it still lifted billions out of extreme poverty in the long run. Modern computing followed the same pattern. LLMs are a mere continuation of this trend.
Concerns about attribution, compensation, and energy use are reasonable to discuss, but framing them as proof that the entire trajectory is immoral or doomed misses the larger picture. If history is any guide, the net human benefit will vastly outweigh the costs, even if the transition is messy and imperfect.
- The distinction Karpathy draws between "growing animals" and "summoning ghosts" via RLVR is the mental model I didn't know I needed to explain the current state of jagged intelligence. It perfectly articulates why trust in benchmarks is collapsing; we aren't creating generally adaptive survivors, but rather over-optimizing specific pockets of the embedding space against verifiable rewards.
I’m also sold on his take on "vibe coding" leading to ephemeral software; the idea of spinning up a custom, one-off tokenizer or app just to debug a single issue, and then deleting it, feels like a real shift.
- The author is conflating a financial correction with a technological failure.
I agree that the economics of GenAI are currently upside down. The CapEx spend is eye-watering, and the path to profitability for the foundational model providers is still hazy. We are almost certainly in an age of inflated-expectations hype-cycle peak that will self-correct, and yes, "winter is harsh on tulips".
However, the claim that the technology itself is a failure is objectively disconnected from reality. Unlike crypto or VR (in their hype cycles), LLMs found immediate, massive product-market fit. I use K-means clustering and logistic regression every day; they aren't AGI either, but they aren't failures.
If 95% of corporate AI projects fail, it's not because the tech is broken; it's because middle management is aspiring to replace humans with a terminal-bound chatbot instead of giving workers an AI companion. The tech isn't going away, even if AI valuations might be questioned in the short term.
- AI-text detection software is BS. Let me explain why.
Many of us use AI not to write text, but to rewrite it. My favorite prompt: "Write this better." In other words, AI is often used to fix awkward phrasing, poor flow, bad English, bad grammar, etc.
It's very unlikely that an author or reviewer relies purely on AI-written text, with none of their original ideas incorporated.
Since AI detectors cannot tell such rewrites from writing that originated with the AI, it's fair to call them BS.
- What kind of world do you live in? Google ads actually tend to be some of the highest-ROI ads for the advertiser and the most likely to be beneficial for the user -- versus pure junk ads that aren't personalized, and banner ads that have zero relationship to me. Google Ads is the enabler of the free internet. I for one am thankful to them. Otherwise you end up paying for the NYT, Washington Post, The Information, etc. -- virtually any high-quality website (including Search).
- Lowering LDL cholesterol is arguably the most evidence-backed longevity intervention available today. Mendelian randomization studies suggest that each standard deviation of lifelong LDL reduction translates to roughly +1.2 years of additional lifespan, implying ~+2.4 to +3.6 years from sustained, meaningful lowering alone.
Pair this with tight blood-pressure control (aim systolic <130 mmHg) and a healthy BMI—every incremental improvement helps. Together, LDL, BP, and BMI form the most potent triad of interventions most people can implement now and expect to see substantial benefits 20–40 years down the line.
A few references: https://mylongevityjourney.blogspot.com/2022/08/a-short-summ...
- I recently downloaded about 10 years of monthly price returns for QQQ, TQQQ, NVDA, GBTC, and a few others. Then I asked ChatGPT and Gemini (separately) to find the portfolio that maximizes an adjusted CAGR — roughly, mean return minus ½ × standard deviation².
Result: 70% NVDA, 30% GBTC (Bitcoin), and 0% QQQ or TQQQ. Honestly, not a bad mix — especially for a small, high-risk slice of your portfolio.
Next, I compared TQQQ (Triple Qs) vs. QQQ using the same 10-year monthly data. The optimizer picked 100% TQQQ, which again makes sense if you’re doing this in a tax-advantaged account like a 401(k) or IRA and only with money you’re willing to take some risk on.
Then I expanded the dataset — 55 years of returns across major asset classes (S&P 500, gold, short- and long-term Treasuries, corporate bonds, real estate, etc.) — and asked for the optimal portfolio. The winner: ~85% S&P 500, 15% gold, though 75/25 gives nearly the same return with a better Sharpe ratio.
A few quick takeaways:
Gold → GLDM ETF is the best vehicle.
QQQ → QQQM or TQQQ are the best versions.
And if you’re feeling adventurous: 70% NVDA, 30% IBIT (Bitcoin) isn’t crazy.
For what it’s worth, I’ve been running 75% stocks / 25% gold for a while now, but I’m thinking of carving out ~10% of the stock portion for a more aggressive tilt: TQQQ (6%), NVDA (2%), IBIT (1%) — because why not?
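If you want to sanity-check what the chatbots hand back, the adjusted-CAGR objective above (mean return minus ½ × variance) takes only a few lines to brute-force yourself. A minimal sketch, using synthetic return series as placeholders rather than the actual downloaded data, so the resulting weights are illustrative only:

```python
import random
from statistics import mean, pvariance

# Synthetic monthly-return series (placeholders, NOT the real 10-year data)
random.seed(0)
nvda = [random.gauss(0.040, 0.150) for _ in range(120)]
gbtc = [random.gauss(0.050, 0.250) for _ in range(120)]
qqq = [random.gauss(0.012, 0.045) for _ in range(120)]
assets = [nvda, gbtc, qqq]

def adjusted_cagr(weights, assets):
    # The objective from above: mean monthly return minus 1/2 * variance,
    # annualized by multiplying the monthly figure by 12.
    port = [sum(w * a[i] for w, a in zip(weights, assets))
            for i in range(len(assets[0]))]
    return 12 * (mean(port) - 0.5 * pvariance(port))

# Brute-force search over long-only weight combinations in 5% steps
grid = [(i / 20, j / 20, (20 - i - j) / 20)
        for i in range(21) for j in range(21 - i)]
best = max(grid, key=lambda w: adjusted_cagr(w, assets))
print("best (NVDA, GBTC, QQQ) weights:", best)
```

With real price data the optimizer may of course land elsewhere; the point is that the whole objective-plus-search fits on a screen, so you don't have to take the LLM's allocation on faith.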
- 1. I find Gemini 2.5 Pro's text very easy and smooth to read, whereas GPT5 Thinking is often too terse and has a weird writing style.
2. GPT5 thinking tends to do better with i) trick questions ii) puzzles iii) queries that involve search plus citations.
3. Gemini deep research is pretty good -- somewhat long reports, but almost always quite informative with unique insights.
4. Gemini 2.5 Pro is favored in side-by-side comparisons (LMSYS), whereas trick-question benchmarks slightly favor GPT5 Thinking (livebench.ai).
5. Overall, I use both, usually simultaneously in two separate tabs, then pick and choose the better response.
If I were forced to choose one model only, that'd be GPT5 today. But the choice was Gemini 2.5 Pro when it first came out. Next week it might go back to Gemini 3.0 Pro.
- I’m a big advocate of branchless programming — keeping configurations to a minimum and maintaining as much linear flow as possible, with little to no cfg-driven branching.
Why? I once took over a massive statistics codebase with hundreds of configuration variables. In theory, that meant upwards of 2^100 possible execution paths -- a combinatorial explosion that turned testing into a nightmare. After I linearized the system, removing the exponential branching and reducing it to a straightforward flow, things became dramatically simpler. A messy codebase that had once taken years to stabilize became easy to reason about and, in practice, bug-free.
Some people dismissed the result as “monolithic,” which is a meaningless label if you think about it. Yes, the code did one thing and only one thing -- but it did that thing perfectly, every single time. It wasn’t pretending to be a bloated, half-tested “jack of all trades” statistics library with hidden modes and brittle edge cases.
I’m proud of writing branchless (or “monolithic” code if you prefer). To me, it’s a hallmark of programming maturity -- choosing correctness and clarity over endless configurability, complexity and hidden modes.
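To make the config-explosion point concrete, here's a hypothetical sketch (invented flags, not from the actual codebase described above): every boolean flag doubles the number of behaviors you have to test, while the linearized version has exactly one path.

```python
# Config-driven: 3 flags alone already yield 2^3 = 8 distinct behaviors to test.
def summarize_configurable(xs, cfg):
    total = 0.0
    for x in xs:
        if cfg.get("drop_negative") and x < 0:
            continue  # path A: negatives skipped
        if cfg.get("clip_at_100"):
            x = min(x, 100)  # path B: values capped
        total += x
    if cfg.get("round_result"):
        total = round(total)  # path C: output rounded
    return total

# Linearized: one fixed pipeline, one path, one set of tests.
def summarize_linear(xs):
    kept = [min(x, 100) for x in xs if x >= 0]  # the one supported behavior
    return round(sum(kept))

print(summarize_linear([5, -2, 250]))  # -2 dropped, 250 clipped to 100 -> 105
```

The linear version does less on paper, but what it does is fully specified and fully testable -- which is the trade the comment above is advocating.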
- He's a very smart and imaginative guy. The most imaginative people sometimes aren't the best at predicting the future (top 1), but rather good at predicting possibilities (top N).
- I've seen this happen before. President ordering people to be sued, and even jailed. There's a name for this kind of system and it isn't congressional democracy.
- The video "How AI Datacenters Eat the World" argues that the rise of artificial intelligence has triggered a seismic shift, fundamentally reinventing the datacenter from a facility for storing data into a new type of infrastructure best described as an "AI supercomputer." Unlike traditional datacenters that must be located near population centers for low-latency services like video streaming, AI datacenters are indifferent to location because their workloads are limited by immense computational demands, not network speed. This shift is perfectly encapsulated by the video's central story of Meta demolishing a multi-million dollar, half-built traditional datacenter in Texas, only to replace it with a radically different, higher-density design capable of supporting the next generation of AI hardware.
This new breed of AI facility is defined by an unprecedented push for density and power at every level. At the component level, AI chips like Nvidia's Blackwell GPUs consume over 1,000 watts each, leading to server racks that draw over 130 kilowatts—a 30-40x increase over traditional racks. Such extreme power density has made conventional air cooling obsolete, forcing a complete industry transition to complex liquid cooling systems. This explosive growth extends to the entire facility, with new AI campuses requiring hundreds of megawatts of power, and gigawatt-scale projects already underway. Unlike the fluctuating usage of traditional datacenters, these AI supercomputers run at near-maximum capacity 24/7, placing a constant, massive strain on energy grids.
The consequence of this technological revolution is a global "arms race" for energy, driven by the belief that achieving Artificial General Intelligence (AGI) is a multi-trillion-dollar prize. Hyperscalers are no longer just tech companies; they are becoming major energy players. The video highlights stunning examples, such as Microsoft restarting the Three Mile Island nuclear reactor and Amazon building next to another nuclear plant to secure power. With individual AI campuses planned to consume as much electricity as entire industrialized nations, the video concludes that the insatiable appetite of AI is setting it on a trajectory to become the world's single largest consumer of power, quite literally beginning to "eat the world."
- "I guess it’s nice that he’s become more optimistic here. He usually just talks about how it’ll probably kill us all." (From Reddit)
- Niall Ferguson is a huge Trump apologist, which is a bigger and bigger losing position by day. Let me tell you what will happen under Trump
1. Runaway inflation driven by unchecked bank leverage and the collapse of Fed independence. Mark my words on this.
2. Cascading crises, financial and otherwise, because deregulation always ends the same way. When government abandons its responsibility to regulate (protect), disaster follows. COVID was one example of such systemic failure; the 2008 financial crisis was another.
3. Structural decay of U.S. institutions -- a uniquely Trump-era risk. Once core systems are weakened, the damage can become permanent, as history shows in other countries.
4. Erosion of America’s scientific and technological edge through diminished research funding, reduced skilled immigration, and a breakdown of meritocracy.
It increasingly feels like watching the fall of Rome, with at least a 25% probability -- something even Niall Ferguson acknowledges.
Comparison with the "evils of the Biden administration" feels so hollow that I'm left speechless when people bring it up as an argument. It's like comparing stage 4 cancer with the flu.
- 4. Ethical and Regulatory Challenges The development and deployment of superintelligent AI systems will likely face significant ethical and regulatory challenges that could limit their impact on society. There are many concerns about the potential risks and negative consequences of advanced AI, such as job displacement, privacy violations, and the misuse of AI for malicious purposes.
To mitigate these risks, there will likely be a need for robust ethical frameworks, safety protocols, and regulatory oversight to ensure that superintelligent AI systems are developed and used in a responsible and beneficial manner. However, establishing and enforcing these frameworks will be a complex and challenging process that may slow down the development and adoption of superintelligent AI.
Moreover, there may be public resistance and backlash against the use of superintelligent AI in certain domains, such as decision-making roles that have significant consequences for individuals and society. This resistance could further limit the impact and influence of superintelligent AI on daily life.
5. Gradual Integration and Adaptation Finally, even if superintelligent AI does emerge, its impact on society may be more gradual and less disruptive than some predict. Throughout history, humans have shown a remarkable ability to adapt to and integrate new technologies into their lives. From the invention of the printing press to the rise of the internet, technological advancements have often been met with initial resistance and skepticism before eventually becoming an integral part of daily life.
Similarly, the integration of superintelligent AI into society may be a gradual process that unfolds over many years or even decades. Rather than a sudden and dramatic singularity event, the impact of superintelligent AI may be more incremental, with people slowly learning to work alongside and benefit from these advanced systems.
Moreover, as superintelligent AI becomes more prevalent, humans may adapt by developing new skills, roles, and ways of living that complement rather than compete with these systems. This gradual adaptation could help to mitigate some of the potential negative consequences of superintelligent AI and ensure that its benefits are more evenly distributed across society.
In conclusion, while the idea of a technological singularity driven by superintelligent AI is certainly intriguing, there are several reasons to believe that its impact on society may be less significant and disruptive than some predict. From resistance to recognizing and listening to superintelligent systems to the challenges of defining and achieving true superintelligence, there are many factors that could limit the influence of advanced AI on daily life. Moreover, the gradual integration and adaptation of superintelligent AI into society may help to mitigate some of the potential risks and negative consequences associated with this technology. As such, while the development of superintelligent AI is certainly an important and exciting area of research and innovation, it may not necessarily lead to the kind of dramatic and world-changing singularity event that some envision.
(This article was written in collaboration with an AI. Its title, the first two arguments and major edits to the third idea came from the human author. The topic and arguments are highly inspired by Vernor Vinge, who passed away this past week, and his very influential essay.)
- I don't hate Tesla. I've owned two of them, and still have one. I just think there's severe underperformance -- and a lot of hallucination -- going on at the company.
FSD has been a complete lie since the beginning. Any reasonable person who followed the saga (and the name "FSD") can tell you that. It was Mobileye in 2015-2016, which worked quite well for what it was, followed by an unfulfilled "FSD next year" promise every year since.
Fool me once, shame on you; fool me twice, shame on me.
- If I had to guess, I’d say the original Tesla founders had a greater influence than Musk. His track record, frankly, is unimpressive. He’s been promising full self-driving “next year” since 2016, yet it’s still nowhere close. Aside from the Model S and X, there hasn’t been a major innovation under his watch. The real groundbreaking work likely came before him. His reign? Far from remarkable. Each year has been a cycle of overpromising (often outright lying) and underdelivering. As for Tesla’s stock? Well, markets can stay irrational far longer than most people can remain solvent.
- Here's some heavy research for you -- Model 3 is competing with the likes of BMW, Audi etc. That's not considered the "affordable" tier. It's called luxury. Here's a comparison:
https://www.truecar.com/compare/bmw-3-series-vs-tesla-model-...
- Feels like Musk should step down from the CEO role. The company hasn’t really delivered on its big promises: no real self-driving, Cybertruck turned into a flop, the affordable Tesla never materialized. Model S was revolutionary, but Model 3 is basically a cheaper version of that design, and in the last decade there hasn’t been a comparable breakthrough. Innovation seems stalled.
At this point, Tesla looks less like a disruptive startup and more like a large-cap company struggling to find its next act. Musk still runs it like a scrappy startup, but you can’t operate a trillion-dollar business with the same playbook. He’d probably be better off going back to building something new from scratch and letting someone else run Tesla like the large company it already is.
- I get the sentiment, but I think declaring scaling "dead" is premature and misses the point.
First, let's be honest about GPT-5. The article cherry-picks the failures. For 95% of my workflow -- generating complex standalone code, summarizing and finding issues in new code, drafting technical documentation, summarizing dense research papers -- it's a massive step up from GPT-4. The "AGI" narrative was always a VC-fueled fantasy. The real story is the continued, compounding utility as a tool. A calculator can't write a poem, but it was still revolutionary.
Second, "scaling" isn't just compute * data. It's also algorithmic improvements. Reasoning was a huge step forward. Maybe the next leap isn't just a 100x parameter increase, but a fundamental architectural shift we haven't discovered yet, which will then unlock the next phase of scaling. Think of it like the transition from single-core to multi-core CPUs. We hit a frequency wall, so we went parallel. We're hitting a density wall with LLMs, the next move is likely towards smarter, more efficient architectures.
The fever dream isn't superintelligence. The fever dream was thinking we'd get there on a single, straight-line trajectory with one single architecture. The progress is still happening, it's just getting harder and requires more ingenuity, which is how all mature engineering fields work.
- My hunch: we’ll see three things happen in parallel:
- AI backend providers vertically integrating into energy production (like xAI’s gas plants, or Meta’s local generation experiments),
- renewed interest in genuinely efficient computing paradigms (e.g. reversible/approximate computing, analog accelerators),
- a political battle over whether AI workloads deserve priority access to power vs. EVs, homes, or manufacturing, alongside an increase in energy prices.
You need cheap, reliable power + political/regulatory willingness + cooling. That’s a very short list of geographies. And even then, power buildout timelines (whether nuclear, gas, or grid-scale solar+batteries) move at "utility speed", which is decades, not quarters. That doesn’t match the cadence of GPU product launches.
- An MIT study concluded that only about 5% of news articles show genuine subject expertise and report facts responsibly. The other 95% are primarily driven by clickbait—optimizing for attention and engagement at the cost of accuracy—leaving readers misinformed. Forbes, for example, ranks among the worst offenders, with its truth-to-clickbait ratio barely scraping above 5%. Unfortunately, most outlets fare little better, functioning as copycats of dubious sources like Forbes. Ironically, the authors of this study admitted their “research” drew mainly from their own imagination and an in-house LLM.
- This feels like a real inflection point for image editing models. What stood out to me isn’t just the raw generative quality, but the consistency across edits and the ability to blend multiple references without falling apart. That’s something people have been hacking around with pipelines (Midjourney → Photoshop → Inpainting tool), but seeing it consolidated in one model/API makes workflows dramatically simpler.
That said, I think we’re still in the “GPT-3.5” phase of image editing: amazing compared to what came before, but still tripping over repeated patterns (keyboards, clocks, Go boards, hands) and sometimes refusing edits due to safety policies. The gap between hype demos and reproducible results is also very real; I’ve seen outputs veer from flawless to poor with just a tiny prompt tweak.