- > Perhaps he and other true geniuses can understand things transcendently. Not so for me. My thoughts are serialized and obviously countable.
You needn't be a genius. Go on a few vipassana meditation retreats and your perception of all this may shift a bit.
> any kind of theorem or idea communicated to another mathematician needs to be serialized into language which would make it computable
Hence the suggestion by all mystical traditions that truth can only be experienced, not explained.
It may be possible for an AI to have access to the same experiences of consciousness that humans have around thought - the experiences that make human expressions of thought what they are - but we will first need to understand the parts of the mind / body that facilitate this and replicate them (or a sufficient subset of them) so that an AI can use them as part of its computational substrate.
- Likewise. I was 2-3 days into testing positive and had a fever that couldn't be controlled by maximum-strength OTC antipyretics, an awful cough producing glue-like greyish globs, a headache, blood oxygen consistently 2-3% below typical for me, and extreme fatigue.
~48 hours after beginning Paxlovid I felt almost back to normal. SpO2 returned to typical wake / sleep levels, my lungs were clearing, little fatigue, etc.
Based on how sick I was when I started treatment, if historical patterns of recovery from respiratory illness are any indication I would have expected an additional ~9-14 days of tapering symptoms at minimum.
Instead I was basically totally normal again after ~5-6 days.
If I get COVID again I will absolutely ask for Paxlovid.
- This makes sense. I've mostly been successful doing these sorts of things as well and really appreciate the way it saves me some typing (even in cases where I only keep 40-80% of what it writes, this is still a huge savings).
It's when I try to give it a clear, logical specification for a full feature and expect it to write everything that's required to deliver that feature (or the entirety of a slightly-more-than-trivial personal project) that it falls over.
I've experimented with trying to get it to do this (for features or personal projects that require maybe 200-400 LOC), mostly just to see what the limitations of the tool are.
Interestingly, I hit a wall with GPT-4 on a ~300 LOC personal project that o3-mini-high was able to overcome. So, as you'd expect - the models are getting better. When I pushed my use case just a little bit further with a few more enhancements, however, o3-mini-high fell over in precisely the same ways GPT-4 had - only a bit worse in the volume and severity of errors.
The improvement between GPT-4 and o3-mini-high felt nominally incremental (which I guess is what they're claiming it offers).
Just to say: having seen similar small bumps in capability over the last few years of model releases, I tend to agree with other posters that it feels like we'll need something revolutionary to deliver on a lot of the hype being sold at the moment. I don't think current LLMs / approaches are going to cut it.
- I wonder about this too - and also wonder what the difference of order is between the historical shifts you mention and the one we're seeing now (or will see soon).
Is it 10 times the "abstracting away complexity and understanding"? 100, 1000, [...]?
This seems important.
There must be some threshold beyond which (assuming most new developers are learning with these tools) the fundamental ability to understand how the machine works - and thus the ability to "dive in and figure things out" when something goes wrong - is pretty much completely lost.
- In a similar situation at my workplace.
What models are you using that you feel comfortable trusting to understand and operate on 10-20k LOC?
Using the latest and greatest from OpenAI, I've seen output become unreliable with as little as ~300 LOC on a pretty simple personal project. It will drop features as new ones are added, make obvious mistakes, refuse to follow instructions no matter how many different ways I try to tell it to fix a bug, etc.
Tried taking those 300 LOC (generated by o3-mini-high) to Cursor and didn't fare much better with the variety of models it offers.
I haven't tried OpenAI's APIs yet - I think I read that they accommodate quite a bit more context than the web interface.
I do find OpenAI's web-based offerings extremely useful for generating short 50-200 LOC support scripts, generating boilerplate, creating short single-purpose functions, etc.
Anything beyond this just hasn't worked all that well for me. Maybe I just need better or different tools though?
- The other side of this challenge is that the "technology" is mostly irrelevant for above-average applicants with solid CS chops.
I apply for lots of jobs featuring technologies I haven't used (beyond toy personal projects or something in college) because I have a long history of picking up new tools and being productive in weeks or months at most - because I understand the underlying semantics of the tool regardless of its presentation, syntax, etc.
Keyword scanners (and humans focused on keywords) are unable to hire me for roles where I haven't used the technology (much) before - and I guess that's fine and well as I am indistinguishable on paper from someone who doesn't know what they're doing.
Just presenting it as another part of the challenge - both of finding good people, and of good people finding good jobs.
- I haven't used the reMarkable, but I bought a screen protector for my iPad that's intended to yield a paper-like writing and drawing experience when using the Apple Pencil. It gets pretty close, I think.
N.B. if you go this route you'll need to replace the Apple Pencil tips a bit more regularly than you otherwise would, given the rougher surface you're "writing" on.
- I want to share a personal anecdote, as I recently spent quite a bit of time deciding whether or not to leave macOS for Linux (Manjaro) this year for my personal computing needs - primarily because of gaming. Ultimately, I did make the switch. Several years back, Apple broke most of my Steam library with Catalina. I wasn't a serious gamer, but I had a lot of nostalgic games from the aughts and early 2010s that ran great (pre-Catalina) on my top-of-the-line mid-2015 Retina MBP.
I finally retired that machine this year and bought a higher-end gaming pc for less than half of what it would cost me to get a less-than-the-best Apple Silicon MBP.
Yeah, it's heavy. Battery life isn't great. But my phone has replaced so much of my mobile computing needs that I don't really need to take a laptop with me when I'm traveling unless it's for work, in which case I'll have my company-issued machine with me anyway.
I never thought I'd leave macOS for Linux - but I recently got back into gaming and basically wasn't willing to spend $4k+ on a machine I couldn't game on when all of my personal project needs, etc. can be attended to on a cheaper gaming laptop.
The fact that Apple Silicon is an absolute beast, graphically, made it all worse somehow. Like having a Ferrari in your garage that you aren't allowed to drive - only pay for and look at.
Am I Apple's target demographic? Apart from being a developer - probably not. I don't do a lot of multimedia stuff (at least, I don't do anything that isn't adequately served by a PC with a solid GPU). Because I grew up on Linux, I'm right at home there with all of my non-work dev / geek / fun stuff, and that probably makes me an outlier.
Apple's success clearly speaks for their business savvy - and there are now a number of chinks in my (previously 100%) Apple loyalty across my wide array of gadgetry (several Apple TVs and one each for me and my wife of: laptop, iPad, iPhone, watch, AirPods). After a disappointing battery experience with my AirPods, I replaced them with some excellent-sounding Soundcore buds. My Apple Watch needs to be replaced soon, and I'll probably get a Garmin (again: battery life, consistent failure to capture VO2 max on outdoor runs, other frustrations). I enjoy VR gaming and plan to upgrade from a Quest 2 to a Quest 3 instead of buying a Vision Pro. What's next to go in my Apple line-up? I don't know, but I've become much more open to shopping around for non-Apple tech than I was in, say, 2016, when it seemed to me that nothing else could compete with Apple.
I wonder how true this is for Apple's "geek core" of tech professionals, and how much of it is just my unique little anecdote? And, in any case - does Apple care? They've cornered the market for both the technical-artistic and "luxury" class. Plenty of meat on those bones without worrying about the geeky whims of the pesky few that are open to (and capable of) something like abandoning Mac OS for Linux.
Still, it was sad to give up my macOS personal computing environment. I love Apple - but for what I care about, Apple just doesn't seem to love me. We'll always have our iOS time together, I guess - for now, anyway.
- I'm a buyer 3-4 generations in if price comes down significantly and I can replace my monitors with it. The seamless use with a Mac for work will be really nice - but isn't worth $3500 to me for a gen 1 device / experience:
* $3500 price tag
* 2D / "Arcade" games support (lol)
For now, I'm probably just going to get a Quest 3 when it drops this September. In terms of a virtual work environment, Immersed is _almost_ there on a Quest 2. Maybe Quest 3 will be the ticket to a compelling experience. If not, well... I still have my library of dozens of VR games to make it worth $600 (or w/e).
Still, excited to see what Apple does with this platform over the next five or so years. The "MacBook Air" version of this a few years down the road will probably be more my speed!
- Anecdotal: the app "One Sec" broke my Twitter habit over the course of a few weeks.
Via iOS' automations feature, the app allows you to configure a per-app waiting period, during which you can decide you don't actually want to open whatever app you've tried to open.
Very grateful for this tool.
- > Given a simple problem A, when adding more options, at some point, choosing among the options requires more effort than solving the simple problem, if only by brute force.
Hick's Law, more or less: https://en.wikipedia.org/wiki/Hick%27s_law
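For reference, the usual formulation of the law (with n equally probable choices and b an empirically fitted constant):

    T = b * log2(n + 1)

The +1 captures the extra decision of whether to respond at all - so even the "do nothing" option adds to the choosing cost.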
- I can only speculate - but I'm really curious.
Looks like they may have launched a hypersonic missile that reached speeds of ~Mach 10[1], and the US (publicly, at least) seems to be lagging behind China and Russia in hypersonic R&D - at least as of 2019[2].
How long would it take for a missile traveling at Mach 10 to reach a point where a high-altitude EMP attack[3] would cripple the west coast of the US?
Is that time longer than it would take for aircraft to return to a place where they could land? If NORAD spots a bogey moving that fast, is the rule right now just... everything stops until we understand what's happening?
Would love for someone who knows about hypersonics, NK missile capabilities, EMP attacks, etc. to say more.
[1] - https://www.reuters.com/world/asia-pacific/nkorea-launches-p...
[2] - https://www.defensenews.com/naval/the-drift/2019/11/15/dont-...
[3] - https://spectrum.ieee.org/one-atmospheric-nuclear-explosion-...
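Very rough back-of-envelope on the flight-time question, with numbers that are my own ballpark guesses rather than anything from the linked articles (Mach 1 taken at the sea-level ~343 m/s, and ~8,000 km as a rough great-circle distance from North Korea to the US west coast):

    # Back-of-envelope flight time for a Mach 10 vehicle.
    # All assumptions are illustrative guesses:
    #   - Mach 1 ~ 343 m/s (sea-level value; lower at altitude)
    #   - ~8,000 km from North Korea to the US west coast
    #   - constant speed for the whole flight (unrealistic, but bounding)
    SPEED_OF_SOUND_M_S = 343
    MACH = 10
    DISTANCE_KM = 8_000

    speed_km_s = MACH * SPEED_OF_SOUND_M_S / 1000    # ~3.4 km/s
    flight_time_min = DISTANCE_KM / speed_km_s / 60  # ~39 minutes
    print(f"~{flight_time_min:.0f} min")

So on those (crude) assumptions: tens of minutes, which makes the "can aircraft get somewhere they can land in time" question above feel very real.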
- Previous discussion: https://www.hackerneue.com/item?id=23848039
- "Car Guys vs. Bean Counters: The Battle For the Soul of American Business"[1] by Bob Lutz tells this story really well, from Lutz' vantage point trying to salvage General Motors from the clutches of an army of MBAs.
There's a decent summary of the book (and the general problem) in the 2012 Time article "Driven off the Road by MBAs"[2] as well.
[1] - https://www.amazon.com/Car-Guys-vs-Bean-Counters-ebook/dp/B0...
[2] - http://content.time.com/time/magazine/article/0,9171,2081930...
- Off topic, but perhaps of interest to anyone in Thailand who would like to experience similar views firsthand:
https://www.booking.com/hotel/th/thirty-nine-boulevard-execu...
The room my wife and I booked in 2019 offered a panoramic, bird's eye view of Bangkok's skyline from one of the higher floors in the building. Not bad at ~$90 / night at the time.
- Ownership of single-family residential properties by large financial institutions should be limited to their traditional role - i.e. custodial ownership of a home for the duration of a mortgage.
- Whether or not we are in a bubble is increasingly the wrong question to be asking in an age where failure for sufficiently-large institutions is no longer permitted.
The question used to be pretty simple: "has a sufficient portion of the market been priced out to the extent that demand collapses?"
As we've seen in equities / derivatives (and increasingly, commodity) markets - the answer to that question is now perpetually "lol, number go up" because large investment banks have essentially infinite access to free money and will be bailed out if they get in trouble.
On a long enough timeline the end result of this is that more and more Americans write their rent checks to institutional investors. [1]
Sure, many Americans own their homes now (or have a nice cushion of equity). But what happens as wage growth stays relatively flat while the cost of living rises dramatically and folks need money for medical bills or their kid's college tuition (or w/e)? They sell and become renters.
There's a pretty bleak future for American housing absent regulation in this space.
[1] - https://www.theatlantic.com/technology/archive/2019/02/singl...
- I can totally understand that sentiment.
I don't know about you, but for me the obvious way in which politicians and other powerful sorts have abused and perverted religious devices (and systems of control) to achieve their own ends has left a really bad taste in my mouth generally w.r.t. anything "control-y" about religion.
The religious motivation behind control in monasteries is something different, though (at least when uncorrupted by politics and power).
Monasteries are, by design, very controlled environments. That's _exactly_ what they are supposed to be.
A place where you can safely get lost in ecstatic bliss, altered states of consciousness, and the sometimes-difficult psychological territory of self-discovery that typically follows these experiences.
The guard rails are put there by people who have travelled the road before and know what the pitfalls are.
For anyone curious about that (from a Christian monastery context):
* The Cloud of Unknowing (Anonymous)
* The Dark Night of the Soul (St. John of the Cross)
* The Interior Castle (Teresa of Ávila)
Similar material exists for guiding e.g. Buddhist monks through the sort of territory that comes up when people spend a lot of time alone in contemplation (the Visuddhimagga and the Vimuttimagga in the Theravada tradition; the Tibetan Book of the Dead in the Vajrayana tradition).
Shamanic traditions likewise have very strict schedules of diet and spiritual preparation before aspirants can consume psychedelics - and ceremonies are (traditionally) performed under exquisitely controlled conditions.
Incidentally - westerners who play with meditative technologies or "psychedelic" therapies absent a regular, working relationship with a guide who knows the territory do so at their own peril, IMO.
Thousands of years of contemplative and meditative practice have yielded independently arising systems in many different cultures which call for a controlled environment where a practitioner is surrounded by peers who know what to do, and more importantly what _not_ to do when things get a bit weird.
That's JUST for the individual practitioner.
Now add another layer for "things that can go wrong when trying to manage / lead a large community of people doing these things together".
Many mystical traditions (particularly eastern ones) solve this problem by having rules about how long monks and abbots can stay in one place.
Here, we see some western solutions to the common problems of community governance (at least: the sorts of problems one was likely to encounter at the time).
I've meandered really widely around the point! :)
Really I just wanted to call out that there is a valid (within the context of the goals of spiritual practice) use for a very controlled environment that should be considered separately from the common understanding of "religious control" (i.e. the powerful and political abusing religious devices to exert control over the masses).
- Right.
For a very good (but not exceptional) developer, I wonder if ending up "out on the street" would be a reasonable expectation if something like this became truly widespread.
If companies like Google were no longer able to filter false positives __at Google scale__ using their current hiring practices, I wonder how long it would take to decide that the next best thing is to contract some tunable number N of contractors for K positions, where N >> K, and only keep the best M (K <= M << N) of them. (I expect a company like Google would occasionally keep more than K because they can't afford to throw away rockstars if they get a great cohort.)
So, even if you're pretty good - if you fall in the bottom x% of your cohort (or below some other aggregate bar) - you're out. Stack ranking for C2H (contract-to-hire), basically.
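A toy sketch of that funnel (all numbers made up, just to make the cutoff concrete):

    import random

    # Toy model of the contract-to-hire funnel described above.
    # N, K, M are made-up illustrative values.
    N = 100   # contractors brought on (N >> K)
    K = 10    # budgeted positions
    M = 12    # actually kept (K <= M << N, stretching for a strong cohort)

    scores = sorted((random.gauss(0, 1) for _ in range(N)), reverse=True)
    cutoff = scores[M - 1]   # worst score that still gets kept

    # Keeping the top 12 of 100 puts the cutoff around the 88th percentile
    # (~ +1.2 sigma above the mean).
    print(f"cutoff ~ {cutoff:+.2f} sigma")

Even someone comfortably above average (say, +0.5 sigma) usually lands below that cutoff - which is exactly the "pretty good, but still out on the street" scenario.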
- Typing isn't the fun part of it for me. It's a necessary evil to realize a solution.
The fun part of being an engineer, for me, is figuring out how it all should work and fit together. Once that's done, I already basically have all of the code for the solution in my head - I've just got to get it out through my fingers and slog through all the little ways it isn't quite right, doesn't satisfy x or y best practice, needs to be reshaped to accommodate some legacy thing it has to integrate with that is utterly uninteresting to me, etc.
In the old model, I'd enjoy the first few hours or days of working on something as I was designing it in my mind, figuring out how it was all going to work. Then would come the boring part. Toiling for days or weeks to actually get all the code just so and closing that long-tail gap from 90% done (and all interesting problems solved) to 100% done (and all frustrating minutia resolved).
AI has dramatically reduced the amount of time the unsatisfying latter part of a given effort lasts for me. As someone with high-functioning ADD, I'm able to stay in the "stimulation zone" of _thinking_ about the hard / enjoyable part of the problem and let AI do 50-70% (depending on domain / accuracy) of the "typing toil".
Really good prompts that specify _exactly_ what I want (in technical terms) are important and I still have to re-shape, clean up, correct things - but it's vastly different than it was before AI.
I'm seeing on the horizon an ability to materialize solutions as quickly as I can think / articulate - and that to me is very exciting.
I will say that I am ruthlessly pragmatic in my approach to development, focusing on the most direct solution to meet the need. For those that obsess over beautiful, elegant code - personalizing their work as a reflection of their soul / identity or whatever - I can see how AI would suck all the joy from the process. Engineering vs. art, basically. AI art sucks, and I expect that's as true for code as it is for anything else.