- > how can I utilize AI without degenerating my own abilities?
Personally I think my skill lies in solving the problem by designing and implementing the solution, but not how I code day-to-day. After you write the 100th getter/setter you're not really adding value, you're just performing a chore because of language/programming patterns.
Using AI and being productive with it is an ability and I can use my time more efficiently than if I were not to use it. I'm a systems engineer and have done some coding in various languages, can read pretty much anything, but am nowhere near mastery in any of the languages I like.
Setting up a project, setting up all the tools and boilerplate, writing the main() function, etc are all tasks that if you're not 100% into the language take some searching and time to fiddle. With AI it's a 2-line prompt.
Introducing plumbing for yet another feature is another chore: search for the right libraries/packages, add dependencies, learn to use them, create a bunch of files, sketch the structs/classes, sketch the methods. But not everything is perfectly clear yet, so the first iteration is "add a bunch of stuff, get a ton of compiler warnings, and then refine the resulting mess". With AI it's a small paragraph of text describing what I want and how I'd like it done, asking for a plan, and then simply saying "yes" if it makes sense. Then wait 5-15m. Meanwhile I'm free to watch what it's doing and catch it doing something stupid or wrong, or think about the next logical step.
Normally the result for me has been 90% good. I may need to fix a couple of things I don't like, but the syntax and warnings have already been worked out, so I can focus on actually reading, understanding and modifying the logic and catching actual logic issues. I don't need to spend 5+ days learning how to use an entire library, only to find out that the specific one I selected is missing feature X that I couldn't foresee needing last week. That part now takes 10m and I don't have to do it myself; I just bring the finishing touches where AI cannot get to (yet?).
I've found that giving the tool (I personally love Copilot/Claude) all the context you have (e.g. .github/copilot-instructions.md) makes a ton of difference with the quality of the results.
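For reference, such an instructions file is just free-form Markdown the tool picks up as context on every request; a minimal sketch (the project details below are entirely made up, adapt to your own repo):

```markdown
# Project context for Copilot

- This repo is a Go service exposing a REST API; handlers live in `internal/api`.
- Prefer the standard library; ask before adding a new dependency.
- All exported functions need doc comments and table-driven tests.
- Wrap errors with `fmt.Errorf("...: %w", err)`; error messages are lowercase.
```

Even a dozen lines like these noticeably cut down on the "that's not how we do it here" corrections.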
- > However, honestly, 99% of my multitasking pain on MacOS comes from the un-removable ~300ms animation delay when switching spaces. "Reduced Motion" changes the animation to a fade and doesn't solve the problem.
That's basically why I stopped using them altogether. I'm using COSMIC DE now on my Linux systems, and while it also has animations, it doesn't look nearly as bad as MacOS.
On MacOS I resorted to tiling and alt-tabbing my way through because of the delays. I don't want to wait for the window system to draw pointless animations, but I can't disable them.
And then in Sequoia they implemented primitive tiling too, and of course decided they HAD to add a non-configurable, impossible-to-disable resizing delay on tiling, which nearly brought me to install a VM and use the MacBook as a glorified VM host (before Sequoia it used to be instant).
- > I repurposed an old gaming PC with a Ryzen 1600x, 24GB of RAM, and an old GTX 1060 for my NAS since I had most of the parts already
> I wish people would understand that waste is waste
I think the point is that the configuration from the post can easily run as low as maybe 30-40W on idle, but as high as a couple hundred depending on utilization. An off-the-shelf NAS probably spikes at most in the ~35W range, with idle/spindle-off utilization in the 10W range (I'm using my 4-bay Synology DS920+ as a reference). Normally the biggest contributor to NAS energy usage is the number of HDDs, so the more you add, the more it consumes, but in this configuration the CPU, the RAM, and the GPU are all "oversized" for the NAS purpose.
While reusing parts for longer helps a lot with the carbon footprint of the materials themselves, running that machine 24/7/365 is definitely more CO2-heavy w.r.t. electricity usage than an off-the-shelf NAS. And additional entropy in the environment in the form of heat is still additional entropy, whether it comes from coal or solar panels.
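As a rough sanity check of those wattage figures: the 40 W and 10 W idle numbers come from above, while the grid CO2 intensity (0.4 kg CO2/kWh) is my own assumed average and varies a lot by region.

```python
# Back-of-the-envelope yearly comparison of the two idle setups above.
HOURS_PER_YEAR = 24 * 365   # 8760 h, running 24/7/365
CO2_PER_KWH = 0.4           # kg CO2 per kWh, assumed grid average

def annual_kwh(watts: float) -> float:
    """Yearly energy draw in kWh for a constant load in watts."""
    return watts * HOURS_PER_YEAR / 1000

pc_idle = annual_kwh(40)    # repurposed PC idling at ~40 W
nas_idle = annual_kwh(10)   # off-the-shelf NAS idling at ~10 W

print(f"PC:  {pc_idle:.0f} kWh/y, ~{pc_idle * CO2_PER_KWH:.0f} kg CO2")
print(f"NAS: {nas_idle:.0f} kWh/y, ~{nas_idle * CO2_PER_KWH:.0f} kg CO2")
```

So even at idle the gap is roughly 350 vs 90 kWh per year, before counting any load spikes.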
- > the main reason is probably that Tesla no longer has much competitive advantage
Yep, I think Tesla simply squandered its clear advantage and slowed research and innovation, while everybody else was accelerating.
I sat in one of the first BYDs a couple of years back, and for all the mocking of Tesla quality standards, it was a rattle fest; I thought I'd surely never buy one. But if there's one thing I really appreciate about China, it's the ability to iterate stupidly fast and make it entirely about business, zero emotions. Car rattles too much? We'll fix it.
Fast-forward to last year, when I took an Uber in London that was a BYD Seal: dead silent, spacious and good-looking, and they keep improving the hardware. The brand is back on my possible next-buy list.
- > the same can be provided by a local provider that also has the ability to deal with large DDOS and cap off the outside when it comes down to the wire
Local providers can often be 2-3x to 10x+ more expensive than hyperscalers for the same featureset. If you're willing to compromise on features, you can get down to 2x, but with basically vendor lock-in and Swiss German support (!= German), which in Switzerland can fly if you're a small-to-medium company, but if you want to attract talent you'll also need English. I'm not sure there's any local provider capable of mitigating large-scale DDOS either.
Hyperscalers understood the need for local presence despite being located right across the border in the EU (Germany, Italy, France): Azure, AWS and Google all opened locations in Switzerland in the past 3-4 years.
Basically every medium/big Swiss client I've worked for has migrated or is still migrating away from local providers (even the big-S one) due to costs. Add to that that most companies use some form of AD and were already using Outlook or the Office suite, so you can integrate everything at lower cost via Azure. And if you are a big company with multiple locations all over the world, you need hyperscalers anyway to let the teams in Spain, the US or India work with familiar tools.
EDIT: replying to the "local services, local tools" part: I wouldn't like to be stranded at 2am in canton Zurich, in some god-forsaken town I went to exactly once, because the SBB app relies on a local provider whose small team of on-call people still needs to wake up. There are also people interacting with government services at all hours; I've seen logs of people trying to access apps at 3:30 in the night. While I can agree it could be fixed the next morning, the question becomes: why spend more for the lesser choice?
- > Personally, I find this a move in the wrong direction where hostile behavior by websites is normalized and hidden. Cookie banners show web site true colors. When someone asks me to share data with a thousand of "partners", I leave.
I kind of agree, but at the same time basically all websites use some kind of tracking to know what kind of users visit, and I'm tired of clicking "allow all" just to read an article. Many websites don't even work if you refuse non-essential trackers, either because their tag manager is configured incorrectly, or because by law, if there's even a single textbox where users can enter their email or name, they need consent to show it and allow input on it.
Having a browser default of "nope" with the option to whitelist a broken website would save a ton of time for people and machines alike, and also reduce website latency a lot. There's a nice website that "tracks" this cost: https://cookiecost.eu/
- If anybody wants to read a comic from the perspective of someone who went through one of these places and spent the years after fighting against them, I stumbled upon this one a few years ago: https://elan.school/
I am not in any way affiliated with the author, it's just one of the few books with real content that I've read in a long time.
- I'm also seriously considering dropping Grafana for good for the same reasons stated in the post. Every year I need to rebuild a dashboard, reconfigure alerts, use the shiny new toy, etc etc. I'm tired.
I just want the thing to alert me when something's down, and ideally if the check doesn't change and the datasource and metric don't change, the dashboard definition and the alert definition should be the same for the last and the next 10 years.
The UI used to have the 4-5 most important links in the sidebar; now it's 10 menus with submenus of submenus, and I never know where to find the basics: Dashboards and Alerts. When something goes off I don't have time to re-learn a UI I look at maybe once a month.
- I delete almost everything:
- 1:1 or 1:n conversations get archived
- appointments, receipts, ... snoozed until day before use/event, then deleted afterwards
- newsletters, automated messages deleted once read
- promotions, discounts, ... deleted immediately
The trash serves as a 30-day buffer for things I may need to recover, e.g. shop discounts I threw out that I end up needing before they expire.
I also use Fastmail's expiration period on my inbox, so anything older than 1 month is deleted too. If it received no action in 1 month, the chances it was actually important are close to zero.
- It's not just that. LED headlights are much more focused beams than the old bulb lamps.
As a guy with moderate myopia, even low beams can be extremely annoying up to the point of physically hurting my eyes if there are no street lights to reduce the contrast.
I like that I can see better and further, but at the same time if I put my car's low beams just slightly higher so they project more than ~30m away, I get flashed from cars passing in the opposite direction, no high beams required.
Matrix/Adaptive headlights should be mandatory with LED headlights to be honest.
- It's not like they need to `sudo apt install openvpn` and tweak the config file manually and tinker with routes and firewall rules afterwards.
Basically every YouTube video for the past decade has been sponsored by a VPN service offering first-joiner discounts. My cousin uses a VPN and has no idea what it is or how it works, just that "he should protect himself while browsing". Those VPNs have invested massively in UX and ease of use, so out of that 77% of users, I'd guess more than 80% switched to those VPNs.
- > best of both worlds
And the worst too: https://evclinic.eu/2025/09/27/if-you-drive-a-hybrid-may-god...
I don't have first-hand experience, but these guys have run an EV repair shop for a while and also work on hybrids; their articles always offer lots of insight.
Short rundown:
- micro/mild hybrids are useless: the batteries are too small, the electric motors too weak to be the sole source of power, so the contribution to emission reduction is very small, and because the batteries are so small they tend to fail early
- full hybrids have bigger batteries and electric motors large enough to run in pure-EV mode, but you still rely on the ICE for everything, so there's no ability to charge at home or save on gas
- plug-in hybrids are full hybrids that you can also charge externally; according to many studies the real emissions are much higher than declared, because people simply don't charge them at home and run on ICE the whole time
In all these types of hybrids the batteries are smaller than pure EVs, so they cycle faster and degrade faster. You're carrying two drivetrains all the time with added weight, one of which has plenty of maintenance items. So they're not drop-in replacements.
From what I've seen from EVClinic above, many manufacturers use custom pouch cells, not cylindrical modules like the more advanced pure EVs, so you can't replace an individual failed cell. That means a full pack replacement. For many manufacturers you can't order replacement parts for the electric drivetrain, and if you can, they cost a huge chunk of the car's price.
So all in all, if everything goes well, you're good. If something goes wrong, be prepared to spend the same as you would for a battery replacement on a pure EV, or even more.
- What does particularly relevant mean?
(Let's set aside the CT; agreed.)
They lost a lot of the advantage they had on hardware, but if you want a non-Chinese EV with really good software and a well-thought-out, working UX, they're still a perfectly valid all-rounder with a very good charging network, and they also refine their hardware over time.
They refreshed the M3 and MY looks recently and changed the shapes a bit, but I always understood their looks to be function over form for efficiency reasons, and they don't look bad at all if you ask me. Simple, effective, efficient and timeless designs.
I agree with the other comments here that changing shapes for the sake of changing shapes is just marketing.
EDIT: to be honest, the only thing that really annoys me is that they didn't release an EU A/B/C-segment ~4m car with all the features of a standard Model 3/Y. Instead they took the existing models and made them cheaper.
- Yep, close to regular browser tabs from my point of view. I don't know all the shortcuts, but the few that I used - CTRL+{T,W} - behaved like Chrome or Firefox.
- I'm 93% useless for having written in 5m a plan that covers all layers of failure, keeps in touch with stakeholders and would very likely lead to the resolution of the issue in <15m (nvm that I literally did this job in the past with great success).
The question was loaded: it told me that "stakeholders want to know whether it's your autoscaling script you wrote last week", gave me the context of "alerts firing off at 2:43 am, nobody knows why", and then afterwards implied I should have replied with a very specific plan to code-review and debug my script... at 2:43 am in production with "catastrophic failures coast to coast". I have the feeling it wanted me to use all the available information to reply, rather than follow a sound plan to respond to an emergency.
Without a doubt I should have hotfixed with root cause analysis in 1m in production at 2:43 am after being thrown off the bed, and simply stared at the application recovering for the remaining 4m.
I really don't understand the point of this LLM-backed roaster, and if there is one, it doesn't seem close to achieving it.
- This is far from a new story. What is notable is that the failure is so widespread it's prompting government inquiry and action.
When I still had my reddit account, maybe 95% of battery failures showing up in r/TeslaLounge or r/TeslaModel3 were from 2021 models. If I understood the many threads correctly, 2021 is around the time Tesla changed the manufacturing process to seal the battery, so this is likely the initial batch of batteries with the new sealing process (perhaps not so refined/polished).
For most of these cars the powertrain and battery are still under warranty (8y or 100k/120k miles). As noted in the article, Tesla normally replaces failed packs with refurbished packs of about the same capacity/degradation, which are bound to fail sooner than brand new ones of course.
I don't see a way out of this for Tesla except to recognize the manufacturing defect and waive the mileage warranty limit (and only hold to the 8y age) when a 2021 car rolls in to swap the battery pack.
- I have a similar setup, but with AdGuardHome. I used Pi-Hole in the past, but AdGuardHome's UI is at least from this century. That, and the fact that with Pi-Hole it was very difficult to get IPv6 working.
I have an instance on my router in my home network for covering all devices by default, and a hosted one to which I connect when outside via mobile network. Split-tunneling with only the DNS routed, so that I don't have to push all traffic through the VPN.
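The comment doesn't say which VPN software is in use; assuming WireGuard, a DNS-only split tunnel is just a matter of narrowing `AllowedIPs` to the DNS server's address (all addresses, keys and hostnames below are placeholders):

```ini
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.0.2/32
DNS = 10.0.0.1                 # the hosted AdGuardHome instance

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.0.0.1/32       # route only the DNS host through the tunnel
PersistentKeepalive = 25
```

With `AllowedIPs` restricted like this, only DNS queries traverse the tunnel while everything else takes the normal route, so browsing speed is unaffected.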
- My fairly recent experience with some timelines, posted 20d ago: https://www.hackerneue.com/item?id=45210911
Some of the most catastrophic ones were 3 years ago or earlier, but the latest kernel bug (point 5) was with 6.16.3, ~1 month ago. It did recover, but I had already mentally prepared for a night of restores from backups...
- Thank you for sharing this! I've been using Witch Daemon for a while, but it does have occasional glitches (esp. with multi-screen setups). At first impression this is super fast without lag; I'll keep testing it for a while.
A 3-minute chat with Claude suggests 30 FPS should be plenty (perhaps minor cursor lag can be noticed if the cursor is drawn), with a GOP of 2s (60 frames) for fast recovery, VBR at 1 Mbps average with a max bitrate of 1.2 Mbps for crappy connections, and B-frames to minimize bandwidth usage (since we have hardware encoding).
The crappiest of internet cafes should still be able to guarantee 1.2 Mbps (150 kB/s). If they can do 5-10 FPS with 150 kB frames, they have 6-12 Mbps available. Worst case, the GOP can be reduced to 15 frames, so there are two I-frames every second and the latency is 500ms tops.
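Putting those numbers together in one place (these are the suggested values from the discussion above, not measurements):

```python
# Worked arithmetic for the streaming parameters discussed above.
fps = 30                        # suggested frame rate
gop_frames = 15                 # worst-case shortened GOP
max_bitrate_bps = 1_200_000     # 1.2 Mbps cap for bad connections

bytes_per_sec = max_bitrate_bps / 8     # 1.2 Mbps = 150 kB/s on the wire
iframes_per_sec = fps / gop_frames      # 30/15 = 2 I-frames per second
recovery_s = gop_frames / fps           # <= 0.5 s until the next I-frame

print(f"{bytes_per_sec / 1000:.0f} kB/s, "
      f"{iframes_per_sec:.0f} I-frames/s, "
      f"recovery <= {recovery_s * 1000:.0f} ms")
```

Which is where the "500ms tops" figure comes from: after a dropped packet, the decoder can resynchronize at the next I-frame, at most half a second away.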