- I was hesitant about buying my Tesla this year (my first one), as I really liked having CarPlay in my prior car (a Jeep). But after having it a while, it's really a non-issue. The Tesla Apple Music app is pretty good. Their maps and navigation are pretty good (and integrated with FSD). And I can easily just use the Bluetooth connection for the couple of other minor things I occasionally use.
- It was disappointing to see one of the most advertised Apple “AI” features was “Genmoji”, which falls squarely in the “gimmick” category for me.
- I didn't even notice who I was replying to at first - so let me start by saying thank you for Ghostty. I spend a great deal of my day in it, and it's a beautifully put together piece of software. I appreciate the work you do and admire your attitude to software and life in general. Enjoy your windfall, ignore the haters, and my best wishes to you and your family with the upcoming addition.
The project I'm mostly working on uses the wgpu crate, https://github.com/gfx-rs/wgpu, which may be of interest if you're writing cross-platform GPU code (though obviously that's if you're using Rust, not Zig). With it my project easily runs on Windows (via DX12), Linux (via Vulkan), macOS (via Metal), and directly on the web via Wasm/WebGPU. It is a "lowest common denominator" API, but good enough for most use-cases.
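For a flavor of what that looks like, here's a minimal setup sketch of my own (assuming a recent wgpu release; the exact signatures drift a bit between versions, so treat it as illustrative rather than copy-paste):

    // Illustrative only: one code path that lands on DX12, Vulkan, Metal, or WebGPU
    // depending on what wgpu picks for the platform at runtime.
    async fn init_gpu() -> (wgpu::Device, wgpu::Queue) {
        let instance = wgpu::Instance::default();
        let adapter = instance
            .request_adapter(&wgpu::RequestAdapterOptions::default())
            .await
            .expect("no suitable GPU adapter found");
        adapter
            .request_device(&wgpu::DeviceDescriptor::default(), None) // second arg is a trace path
            .await
            .expect("failed to create a device")
    }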
That said, even with simple shaders I had to implement some workarounds for Xcode issues (e.g. https://github.com/gfx-rs/wgpu/issues/8111). But it's still vastly preferable to other debugging approaches and has been indispensable in tracking down a few bugs.
- It's a full-featured and beautifully designed experience, and when it works it's amazing. However, it regularly freezes or hangs for me, and I've lost count of the number of times I've had to 'force quit' Xcode or it's just outright crashed. Also, for anything non-trivial it often refuses to profile, and I have to try to write a minimal repro to get it to capture anything.
I am writing compute shaders though, where one command buffer can run for seconds repeatedly processing over a 1GB buffer, and it seems the tools are heavily geared towards graphics work where the workload per frame is much lighter. (With all the AI focus, hopefully they'll start addressing this use-case more).
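For context, the workload shape is roughly this (a wgpu sketch of my own, not the actual project code): many dispatches over one large storage buffer recorded into a single command buffer, so a single submission can keep the GPU busy for seconds.

    // Rough sketch of the long-running compute workload described above.
    fn encode_long_compute_work(
        device: &wgpu::Device,
        queue: &wgpu::Queue,
        pipeline: &wgpu::ComputePipeline,
        bind_group: &wgpu::BindGroup, // binds the ~1GB storage buffer
        iterations: u32,
        workgroups: u32,
    ) {
        let mut encoder =
            device.create_command_encoder(&wgpu::CommandEncoderDescriptor::default());
        {
            let mut pass =
                encoder.begin_compute_pass(&wgpu::ComputePassDescriptor::default());
            pass.set_pipeline(pipeline);
            pass.set_bind_group(0, bind_group, &[]);
            for _ in 0..iterations {
                // Each dispatch re-processes the large buffer bound above.
                pass.dispatch_workgroups(workgroups, 1, 1);
            }
        }
        // One submitted command buffer can run for seconds here, which is the case
        // the profiling tools seem least comfortable with.
        queue.submit(Some(encoder.finish()));
    }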
- This reminds me of the whole Lerna debacle a few years back.
https://www.vice.com/en/article/open-source-devs-reverse-dec...
That aside, even if something like this was “legally enforceable”, it adds enough friction, risk, and uncertainty to downstream consumers compared to a “vanilla” open source license that I expect most folks would choose an alternative to the “bespoke” license project where they could. Fine if you don’t care about getting usage, but that defeats much of the value that open source brings.
- > Also, code that compiles with older CUDA toolkit versions may not compile with newer CUDA toolkit versions. Newer hardware may require a CUDA toolkit version that is newer than what the project maintainer intended.
This is the part I find confusing, especially as NVIDIA doesn't make it easy to find and download the old toolkits. Is this effectively saying that just choosing the right --arch and --code flags isn't enough to support older versions? And that, because the runtime library is statically linked by default, newer toolkits may produce code that just won't run on older drivers? In other words, is it true that to support old hardware you need to download and use old CUDA Toolkits, regardless of nvcc flags? (And to support newer hardware you may need to compile with newer toolkits.)
That's how I read it, which seems unfortunate.
- That’s one of the primary reasons we built the tooling for Q# to run in the browser (by writing in Rust and compiling to wasm). The “try with copilot” experience [1] and the “katas” for learning [2] all have a full language service and runtime in the browser.
- Horses for courses. Better camera (for those special moments while the kids are young) and better battery life are two big ones for me, so I'll likely upgrade. (Kind of digging the orange color too).
It's not the dullest refresh, and they always sell plenty. I'm sure this will be no exception.
- That's what I do. Works great. Yes, a couple of extra clicks is annoying, and apps are often like "Hey, how about you go into settings and let me access all your photos for a better experience!", but I'm happy with 2 or 3 extra clicks the few times a month I share a photo in order to limit access.
- Yeah. Anki (and flashcards in general) are great for helping you remember something _you_ learned (from a book, video, class, etc.). Not for transferring knowledge someone else learned.
Writing my own cards as I'm learning is the only way I've found it effective.
- Isn't that kinda why we have collaboration and get in a room with colleagues to discuss ideas? i.e., thinking about different ideas, getting different perspectives, considering trade-offs in various approaches, etc. results in a better solution than just letting one person go off and try to solve it with their thoughts alone.
Not sure if that's a good parallel, but seems plausible.
- Yeah. On Windows some apps (the new Terminal) used to have the opacity set to 0.9 or something by default. First thing I did was set it to 1.0. Having the background bleed through is distracting for no real value.
I’m usually a big fan of Apple design and UX. Any change faces some initial resistance, but this is the first real “Ugh, hard no” reaction I can recall after seeing some of those.
- > While the U.S. often has a goods trade deficit, it maintains a surplus in services, including finance, education, and technology
Maybe a naive question, but are only goods considered? I had always assumed the 'deficit' was "money in minus money out", and thus selling services, etc. would be included in it.
- I have no 'recently' to compare to, as I'd never been in a Tesla before. I was dubious because of all the "this year for sure" history, but after test driving one, I bought the new Model Y, and it now pretty much drives me to work and back every day with little to no intervention.
Knowing how prone to exaggeration Elon is, my bar was low, but honestly it blew me away. After nearly 30 years working in software, with some background in machine learning and computer vision, and generally just trying to make software that works reliably, it's a pretty jaw-dropping experience.
Would I take a nap in the back seat and let it drive? No. But does it allow me to sit there focused on a technical podcast or an audiobook so I feel like I'm getting back an hour or two a day instead of worrying about driving? Absolutely.
- If it’s predicting a next token to maximize scores against a training/test set, naively, wouldn’t that be expected?
I would imagine very little of the training data consists of a question followed by an answer of “I don’t know”, thus making it statistically very unlikely as a “next token”.
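As a toy illustration of that intuition (made-up numbers, purely hypothetical): if a continuation is rare in the training data, a frequency-driven estimate assigns it a tiny probability, so it's rarely the sampled next token.

    // Hypothetical counts of how often each kind of continuation follows a question.
    fn main() {
        let counts = [("a confident answer", 9_900u32), ("\"I don't know\"", 100u32)];
        let total: u32 = counts.iter().map(|(_, c)| c).sum();
        for (continuation, count) in counts {
            // With these made-up numbers, "I don't know" comes out around 1%.
            println!("P({}) ~= {:.2}", continuation, count as f64 / total as f64);
        }
    }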
- I did see a talk where someone made the case that the U.S. depends heavily on China for steel and on Taiwan for chips, so if China invades Taiwan the U.S. would very quickly be unable to wage a war with both supplies cut off. So the goal is to build some capacity for such products domestically.
They also tied that to Trump's desire to get out of Europe and de-escalate the Middle East: if the U.S. were already stretched across Ukraine and, say, Iran, it would be far too stretched to also wage anything in the Far East.
I'm no expert by a long shot, but I can see some logic in the argument.
- I've never understood why places like the EU pass strict environmental and labor laws and then don't place some kind of tariff or tax on imports from countries without those restrictions. Not only to offset making your own market less able to compete on cost, but to provide a financial incentive for the other countries to up their game on environmental and worker quality-of-life issues (if that is the overall aim).
- Having recently gotten into quantum and listened to a lot of audiobooks on the history of it, that’s one of the biggest takeaways for me. So many major advances in theory languished for years because of the politics of the day or the personal opinions of an advisor, only for a physicist with greater standing to rediscover the same thing later and finally get it some attention. (Hugh Everett and David Bohm being two examples.)
- The sound designer goes over their use of this type of music in this excellent podcast. You'd really enjoy it. (It's available on Apple Podcasts, which is where I listened to it, but I'm linking directly here.)
The next one about the voices is excellent too, so linking both. The care they put into both the show and the people involved is almost as touching as the show itself.
https://www.20k.org/episodes/thesoundofbluey https://www.20k.org/episodes/thevoicesofbluey
- I work on the quantum developer tools team at Microsoft. We put a lot of work into what we call the Quantum Katas to help people learn the basics via coding - https://quantum.microsoft.com/en-us/tools/quantum-katas
Our VS Code extension is trivial to install (https://learn.microsoft.com/en-us/azure/quantum/install-over...) or just try it entirely in the browser with Visual Studio Code online (https://vscode.dev/quantum/playground/)
To support that last scenario, where the language service, debugger, simulator, even package references, can run entirely in the browser, we built the whole thing using Rust compiled to WebAssembly, and our VS Code extension runs as pure JavaScript and Wasm. If interested you can dig into the implementation at https://github.com/microsoft/qsharp .
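For anyone curious what that Rust-to-extension boundary looks like in general, this is the basic shape of exposing a Rust function to JavaScript via wasm-bindgen (an illustrative sketch; the names here are made up and not the actual qsharp crate API):

    use wasm_bindgen::prelude::*;

    // Made-up example of a Wasm entry point that a JS/TS extension host could call.
    #[wasm_bindgen]
    pub fn get_diagnostics(source: &str) -> String {
        // A real language service would parse and type-check `source` here;
        // this placeholder just reports how much text it was given.
        format!("{{\"diagnostics\":[],\"sourceLength\":{}}}", source.len())
    }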
Happy to answer any questions!