- > People need to stop making Meme distributions.
Heh. I've been saying that since I was on Mandrake in the early 2000s. This is just what the Linux landscape is like.
That said, I'm generally not easily impressed, especially by random *nix distro 347, but CachyOS is surprisingly good. I've finally switched full time from Windows. I don't even need VS anymore because Rider is x-platform.
- After I read that emotive response I couldn't help but wonder if this wasn't part of a scheme to help someone cover up a crime. This is how I would have responded:
"Hi,
These do appear to be quite serious crimes. I've sent all the URLs, your email address, emails and responses to the relevant law enforcement agencies.
Regards, AdGuard"
- These guys are quite well-known in China and have recently started uploading to YouTube as well. Their videos are quite entertaining and have extremely high production value compared to many other creators.
https://www.youtube.com/@HTXStudio/videos
I love the one about the automated trash cans.
- I switched to iPhone from Android for a few months earlier this year. I don't think I qualify as an elderly person yet (I'm 47) but even I had trouble figuring things out. I don't think it was super-hard to use, but I often found myself asking "Why would they do it like this? Who uses a smartphone like this?". I just found some things very unintuitive. Take, for example, re-arranging icons. I don't know if I would ever have figured out this technique without looking it up:
- I would recommend watching Curt Jaimungal's series of talks with Jacob Barandes. He gives a nice background history of various aspects of QM, including the formulation of Matrix and Wave mechanics (and loads of other ideas). Barandes is excellent at clearly articulating complex ideas in very simple, concise terms. He also has his own formulation of QM based on "Indivisible non-Markovian Stochastic Processes". Even if you disagree with his ideas, the interviews are quite fascinating.
In this interview he goes over pretty much exactly what you mentioned (and a lot more):
- Pre-print of a paper studying 1950 "transients" which, in tl;dr terms, might be evidence of artificial objects in orbit before the satellite era.
Recent comment from one of the main authors:
https://x.com/DrBeaVillarroel/status/1949780669141332205
Previous work: https://www.nature.com/articles/s41598-021-92162-7
- > In that snippet are links to Postgres docs and two blog posts
Yes, that's what a snippet generally is. The generated document from my very basic research prompt is over 300k in length. There are also sources from the official mailing lists, graphile, and various community discussions.
I'm not going to post the entire output because it is completely beside the point. In my original post, I explicitly asked "What is the qualitative and quantitative nature of relevant workloads?" exactly because it's not clear from the blog post. If, for example, they only started hitting these issues at 10k simultaneous reads/writes, then it's reasonable to assume that many people who don't have such high workloads won't really care.
The ChatGPT snippet was included to show that that's what ChatGPT Research told me. Nothing more. I basically typed a 2-line prompt and asked it to include the original article. Anyone who thinks that what I posted is authoritative in any way shouldn't be considering doing this type of work.
- What is wrong with you? Why would you even bother posting a comment like this?
Maybe you also don't know what ChatGPT Research is (the Enterprise version, if you really need to know), or what Executive Summary implies, but here's a snippet of the 28 sources used:
- > The structured data gets written to our Postgres database by tens of thousands of simultaneous writers. Each of these writers is a “meeting bot”, which joins a video call and captures the data in real-time.
Maybe I missed it in some folded-up embedded content, or some graph (or maybe I'm just blind...), but is it mentioned at which point they started running into issues? The quoted bit about "10s of thousands of simultaneous writers" is all I can find.
What is the qualitative and quantitative nature of relevant workloads? Depending on the answers, some people may not care.
I asked ChatGPT to research it and this is the executive summary:
For PostgreSQL’s LISTEN/NOTIFY, a realistic safe throughput is:
Up to ~100–500 notifications/sec: Handles well on most systems with minimal tuning. Low risk of contention.
~500–2,000 notifications/sec: Reasonable with good tuning (short transactions, fast listeners, few concurrent writers). May start to see lock contention.
~2,000–5,000 notifications/sec: Pushing the upper bounds. Requires careful batching, dedicated listeners, possibly separate Postgres instances for pub/sub.
>5,000 notifications/sec: Not recommended for sustained load. You’ll likely hit serialization bottlenecks due to the global commit lock held during NOTIFY.
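For reference, the "dedicated listeners" mentioned above are just connections parked on LISTEN. A minimal sketch of one in C# via Npgsql (connection string and channel name are made up; this is illustration, not anything from the article):

    using System;
    using Npgsql;

    // A dedicated listener: one connection that does nothing but wait for NOTIFYs.
    await using var conn = new NpgsqlConnection("Host=localhost;Database=app;Username=app");
    await conn.OpenAsync();

    conn.Notification += (_, e) =>
        Console.WriteLine($"channel={e.Channel} payload={e.Payload}");

    await using (var cmd = new NpgsqlCommand("LISTEN meeting_events", conn))
        await cmd.ExecuteNonQueryAsync();

    // Block until the server pushes the next notification.
    while (true)
        await conn.WaitAsync();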
- I currently have a big problem with AI-generated code and some of the junior devs on my team. Our execs keep pushing "vibe-coding" and agentic coding, but IMO these are just tools. And if you don't know how to use the tools effectively, you're still gonna generate bad code. One of the problems is that the devs don't realise why it's bad code.
As an example, I asked one of my devs to implement a batching process to reduce the number of database operations. He presented extremely robust, high-quality code and unit tests. The problem was that it was MASSIVE overkill.
The AI generated a new service class, a background worker, several hundred lines of code in the main file, and entire unit test suites.
I rejected the PR and implemented the same functionality by adding two new methods and one extra field.
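To make that concrete, the accepted version was roughly this shape (names are made up; this is a sketch of the idea, not the actual diff):

    using System;
    using System.Collections.Generic;

    public record Row(int Id, string Payload);

    public class Repository
    {
        private readonly List<Row> _pending = new(); // the one extra field
        private const int BatchSize = 100;

        // New method 1: buffer the row instead of writing it immediately.
        public void QueueWrite(Row row)
        {
            _pending.Add(row);
            if (_pending.Count >= BatchSize)
                Flush();
        }

        // New method 2: one database round-trip for the whole batch.
        public void Flush()
        {
            if (_pending.Count == 0) return;
            BulkInsert(_pending);
            _pending.Clear();
        }

        // Stand-in for the existing single-statement bulk write.
        private static void BulkInsert(IReadOnlyCollection<Row> rows)
            => Console.WriteLine($"INSERT ... {rows.Count} rows");
    }

The point isn't that this exact code is right; it's that the whole change fits on one screen.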
Now I often hear comments about how AI can generate exactly what I want if I just use the correct prompts. OK, how do I explain that to a junior dev? How do they distinguish between "good" simple and "bad" simple (or complex)? Furthermore, in my own experience, LLMs tend to pick up on key phrases or technologies, then build their own context about what they think you need (e.g. "batching", "Kafka", "event-driven" etc). By the time you've refined your questions to the point where the LLM generates something that resembles what you want, you realise that you've basically pseudo-coded the solution in your prompt - if you're lucky. More often than not the LLM responses just start degrading massively to the point where they become useless and you need to start over. This is also something that junior devs don't seem to understand.
I'm still bullish on AI-assisted coding (and AI in general), but I'm not a fan at all of the vibe/agentic coding push by IT execs.
- Here's some of what I think is my best personal advice:
Learn to live in the gray areas. Don't be dogmatic. The world isn't black and white. Take some of the black and some of the white. And don't be afraid to change your mind about some things.
This may sound obvious to some of you, and sure, in theory this is simple. But in practice? Definitely not, at least in my experience. It requires a change in mindset and worldview, which generally becomes harder as you age ("because you want to conserve the way of life you enjoy").
- It still blows my mind how dogmatic some people can be about things like this. I don't understand why anyone takes these things as gospel.
Who else has had to deal with idiots who froth at the mouth when you exceed an 80-character line margin?
And it's not just programming styles, patterns and idioms. It's arguably even worse when it comes to tech stacks and solution architecture.
It's super-frustrating when I'm dealing with people in a professional setting and they're quick to point out something they read in a book, or even worse - a blog - with very little else to add.
This was especially bad during the NoSQL and microservice hype. Still somewhat feeling it with PaaS/SaaS and containerization. We have so many really, really basic things running as Function Apps or lambdas, or simple transformations running in ADF or Talend, that add zero value and only increase the support and maintenance overhead.
Always keep in mind that sometimes the only difference between yourself and the person writing the book/blog/article is that they actually wrote it. The fact that their opinions were written down doesn't make them fact. Apply your own mind and experience.
- > I think what we're attempting to define is something closer to seasoned developer.
I'm fully aware of that, which is why "Senior" is in double-quotes, but experienced (aka "seasoned") is not. My point is that you can be seasoned at delivering bad products. The point about seniority just speaks to tenure at a company. Sure, you can join a company as a "Senior dev", but that's not quite what I'm referring to here. One would think that they would be exposed during the interview process, but alas, we all know that's often not the case.
- Hard to disagree. I can usually tell what type of experience devs have by the snarky, dismissive responses I've gotten on various internet forums over the last two decades. e.g. "Oh but you would never have this problem if you performed proper code review" - Random_Rockstar_Dev_254
However, not all legacy projects are bad to work on. If they're decently developed, then often you'll find that most of the pain is in setting up your local codebase, e.g. sorting out makefiles, or header file clashes etc. And if you're lucky, some poor bastard has already done the hard work for you.
As an aside, I know tons of experienced "Senior" developers who just suck at their jobs. The problem is they have tons of experience in delivering terrible products based on terrible code and architectural decisions. They just never had anyone to show them any better. And now that they're "Senior", no one can tell them anything. Many devs who work in corporates understand this pain. Shoutouts to my peers who have to "fix" 3000-line stored procs with a 100-line "change control" comment saying stuff like "2009-04-03 JS Added TAX_REF_ID". I live your pain.
EDIT: Also, if you happen to work at a company that thrives on terrible legacy products, try to drill it into their heads that BAU Support is not part of the solution. Every time I've raised the issue of the mounting tech debt I've gotten the response "Business does not have the appetite to solve these issues. And why would they? That's why we have support teams".
- I have used a myriad of programming languages in production in my nearly 25 years of professional programming. There are things I love (almost never "hate") about most languages.
But the reason C# is one of my favourite languages to code in professionally is simply because of how easy it is to set up the environment and just get to work. Admittedly on Windows, but I've learned over the last 5 years that it's a pretty similar experience on Linux too. No messing with environment variables or paths; no header file wrangling; no macro shenanigans; no messing with virtual memory settings etc etc etc.
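To illustrate (assuming nothing beyond an installed .NET SDK; the project name is made up):

    // $ dotnet new console -o demo && cd demo && dotnet run
    // That's the entire setup. The generated Program.cs is a single line:
    Console.WriteLine("Hello, World!");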
Yeah, I get it. Choice is nice. But when it comes to my job what's most important is that I can just get to work.
- > give feedback to such a person will get you this response: "you need to earn trust and learn to disagree and commit"
Sure, but you won't get fired, right?
Even so, this is just about dealing in general with people who have ego (for the sake of brevity) issues. I don't understand why this should be advice for dealing with senior leadership in general.
- The title of this article doesn't make much sense to me. Why would you get fired for giving feedback? Is this just a US thing? I give feedback to my superiors all the time, and expect my subordinates to do the same. In fact, as far as my team goes, you're more likely to get into trouble (not fired) if you rarely give feedback.
- I'm dealing with related issues at work right now. Leadership is beating us over the head with the "innovation" hammer, without any consideration of why and where they want to innovate. This mandate is a recipe for disaster in the hands of inexperienced tech leads.
For example, there was an initiative to move from Talend to Azure Data Factory. Devs had to upskill, and it took them several months to deliver something that fails intermittently (with a huge cost behind it), and no one knows how to fix it.
I rewrote the pipeline in 2 hours (as a simple Windows Service that parses a file and writes data to a DB), and polished it in around a day or so. Added some informative and error notifications, and we immediately saw benefits.
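The whole thing is roughly this shape (folder, schedule, and the parse/insert details are made up for illustration; a sketch, not the actual code):

    using System;
    using System.IO;
    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Hosting;

    // Hosted as a Windows Service via Microsoft.Extensions.Hosting.WindowsServices.
    Host.CreateDefaultBuilder(args)
        .UseWindowsService()
        .ConfigureServices(services => services.AddHostedService<FileImportWorker>())
        .Build()
        .Run();

    public class FileImportWorker : BackgroundService
    {
        protected override async Task ExecuteAsync(CancellationToken stoppingToken)
        {
            while (!stoppingToken.IsCancellationRequested)
            {
                foreach (var path in Directory.EnumerateFiles(@"C:\drop", "*.csv"))
                {
                    foreach (var line in File.ReadLines(path))
                        WriteToDb(line); // stand-in for the actual parse + insert
                    File.Delete(path);   // or move to an archive folder
                }
                await Task.Delay(TimeSpan.FromSeconds(30), stoppingToken);
            }
        }

        private static void WriteToDb(string line)
            => Console.WriteLine($"inserted: {line}");
    }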
Innovation is fun, but some things just don't really need much of it. We've known how to do stuff like ETL for decades. We really don't need cloud-hosted solutions behind client secrets and gated services to load data into a DB.
Ironically, my boring solution seemed more "innovative" because users can now get customised notifications in different Teams Channels.
"Boring"/"exciting" tech isn't stifling or improving anything. These are orthoganal issues. Innovation can emerge from boring tech.
- > One does things like this because one is afraid of being assailed by pedants and made to feel inferior.
Do you pronounce "tortoise" like "bourgeoise" because you don't want to sound inferior?
I jest, but it's like your argument is making my case for me. Replace "snob" with "pedant" to see what I mean.
It's unfortunate that so many people end up parroting fanciful ideas without fully appreciating the different contexts around software development.