GCS/IT/SS/O d-(++)>--- s:- a C$ UBL++++$>+++ P+++(++++)$ L+++$>++++ !E !W+++()@>- !N !o K--? !w !O !M !V PS@ !PE Y-- !PGP !t 5 !X !R tv b- DI !D G e- h+ r y++*
The views and opinions I express on HN do not reflect those of my employer.
--
#OpenToWork - Information Systems Engineering (DevOps/SRE/Admin/Architect/Security/Programming/etc), and I only work remotely
- > but nowhere is there an evaluation of the value granted by upgradability and repeatability
Back in the day we had upgradeable laptops that weren't rattling tin cans with uncomfortable displays. Making something worse than it was 20 years ago, and charging more for it, isn't value.
- Actually it's not even years of experience; I've seen grads with 2 yrs experience promoted to Senior with a minor raise because otherwise they might leave the company.
Licensed professionals don't have identity crises; their titles, and what is required of them, are legally enforced. The software industry has never lobbied for the interests of "engineers" the way other professions have (taxi drivers, barbers, plumbers, real estate agents, etc. formed professional groups which lobbied for laws requiring official licensing). I think it's because software developers are the laziest people on the planet, and they are happy to continue doing almost nothing in order to get hired.
- Wirth's Law in action. Eventually it's going to take an entire datacenter to read the news.
- We stare at screens full of text and pictures every day. We had screens full of text and pictures 20 years ago. Yet somehow we have justified re-creating every single component multiple times over, spending hundreds of trillions of dollars, to get the same thing we had 20 years ago.
We've been able to talk to machines, have them understand that speech, and do work based on it, for decades. But we're all still typing into keyboards.
We've had devices which can track our eyes to move a mouse pointer for 37 years, but we all still use our hands/thumbs to move a mouse.
We had mobile devices with dedicated keys for input, which allowed us to type without looking, and we replaced them with mobile devices with no dedicated keys (so we have to look to provide input) and bodies made of glass that shatter when dropped and require additional plastic coverings to protect them. Even automobiles, where safety is a high priority, adopted input devices which require looking away from the road.
Our world includes a government which is intended to be led by decisions from all the people, and which could easily be overthrown by all the people, yet only a select few actually get to make decisions; they don't have to listen to the people, and basically do whatever they want (wrt the other few who get to make decisions).
Yes, life is needlessly absurd. It's best not to think about it unless you wanna end up in a padded room.
- Only if you're sending data you don't mind losing and getting out of order
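That's the UDP trade-off in a nutshell. A minimal Python sketch (the endpoint is made up) of what you're signing up for:

    import socket

    # UDP: no handshake, no delivery guarantee, no ordering, no duplicate protection.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for i in range(10):
        # Each datagram may arrive late, out of order, duplicated, or not at all.
        sock.sendto(f"sample {i}".encode(), ("203.0.113.7", 9999))  # hypothetical endpoint
    sock.close()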
- I don't know if you're new to the internet, but low-effort comments existed before AI, and will continue to exist regardless of AI.
- > AI writing tends to suck more verbosely
So, it's the style you oppose, the way a grammar nazi complains about "improper" English.
> and in exciting new ways (e.g. by introducing factual errors).
Because factually incorrect comments didn't exist before AI?
Your concern is that you read something you don't like, so you pick the lowest-effort criterion to complain about. That speaks more about you than the original commenter.
- Say they used AI to write it, it came out bad, and they published it anyway. They had the opportunity to "make it better" before publishing, but didn't. The only conclusion is that they just aren't good at writing. So whether AI is used or not, it'll suck either way, and there's no need to complain about the AI.
It's like complaining that somebody typed a crappy letter rather than hand-wrote it. Either way the letter's gonna suck, so why complain that it was typed?
- > Here's the mental model shift that changes everything: Instead of logging what your code is doing, log what happened to this request.
Yeah that doesn't magically fix everything. Logging is still an arbitrary, clunky, unintuitive process that requires intentional design and extra systems to be useful.
The "Wide Event log" example is 949 bytes, which isn't unmanageably large, but it is 3x larger than most log messages which are about 300 bytes. And in that blob of data might be key insights, but it is left up to an extra engineering process to discover what might be unusual in that blob. It lacks things like code line numbers, stack trace, and context given by the program about its particular functions (rather than assumptions based on a few pieces of metadata). And it's excessively verbose, as it has a trace and request ID and service name, but duplicates information already available to tracing systems based on those 3 metrics.
> Wide events are a philosophy: one comprehensive event per request, with all context attached.
That's simply impossible. You cannot have all context from viewing a single point in the network, regardless of how hard you try to record or pass on information. That's the whole point of tracing: you correlate the context of different network points, specifically because that's the only way to discover the missing details.
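A toy sketch of that correlation (the spans are made up): two points each hold partial context, and joining on trace_id is what surfaces the missing detail:

    # Two observation points in the network, each with partial context.
    frontend_spans = [{"trace_id": "abc123", "service": "frontend", "status": 500}]
    backend_spans = [{"trace_id": "abc123", "service": "db-proxy", "error": "conn pool exhausted"}]

    # The root cause only appears once the two views are joined on trace_id.
    by_trace = {}
    for span in frontend_spans + backend_spans:
        by_trace.setdefault(span["trace_id"], []).append(span)
    print(by_trace["abc123"])  # frontend's 500 plus the backend detail it never saw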
> Modern columnar databases (ClickHouse, BigQuery, etc.) are specifically designed for high-cardinality, high-dimensionality data. The tooling has caught up. Your practices should too.
You should not depend on a space shuttle to get to the grocery store. Logging is intended to be an abstracted component which can be built on by other systems. Your app should work just as well running from Docker on your laptop as it does in the cloud.
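i.e. the app writes through the standard logging abstraction to stdout, and whatever runs it (laptop, Docker, a cloud collector) decides where the lines go. A minimal stdlib sketch, nothing article-specific:

    import logging
    import sys

    # The app only knows the abstract logging API; stdout behaves identically
    # on a laptop, in Docker, or behind a cloud log collector.
    logging.basicConfig(stream=sys.stdout, level=logging.INFO,
                        format="%(asctime)s %(levelname)s %(name)s: %(message)s")
    log = logging.getLogger("app")
    log.info("order processed")  # no ClickHouse/BigQuery required to read this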
- Why is anyone still developing for these stagnant walled gardens?
- "one third the cost of AWS" is just AWS with savings plans enabled
> if you're [...] a researcher who just needs a beefy VM without surprise egress fees, we're 1/3 the price
The AWS egress fee is $0.08/GB, whereas Hetzner has $0.00/GB. So, why pay $0.0225/GB?
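Worked out for, say, 10 TB of egress a month, at the rates quoted above:

    # Egress cost for 10 TB (10,240 GB) at each quoted per-GB rate.
    gb = 10 * 1024
    for name, rate in [("AWS", 0.08), ("this vendor", 0.0225), ("Hetzner", 0.00)]:
        print(f"{name}: ${gb * rate:,.2f}")  # AWS: $819.20, vendor: $230.40, Hetzner: $0.00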
These companies are the worst kind of scam. If you could really provide a product on par with the big boys at below-commodity prices, you'd corner the entire hosting/cloud market. But you can't, because they've already made things hyper-efficient.
It's like trying to sell someone a $5 hamburger by advertising that some other restaurant sells a $15 hamburger. It turns out that other restaurant also sells a $5 hamburger; it's just not at the top of the menu, because cheap isn't always a sales leader.
- It's partly fact, partly reasoning. One fact comes from Stuxnet and the Snowden leaks, where they developed and deployed vulns that persisted for years without notice. The other fact is that I've interviewed at the research centers, and my eyes got pretty wide at the stuff they told me without an NDA, so they're definitely paying a lot to develop and acquire more vulns/new attacks. That was all 20 years ago, but the contracts are still there, so there's no reason to suppose it stopped. There are also past NSA directors who've spoken at DEFCON for years about how they want more hackers, and the new cold war with China and Russia has been ongoing for nearly as long.
I'm not saying they stockpile vulns; I'm saying if somebody on the dark web said they had a vuln for sale for $50k, and it could help an agency penetrate China/Iran strategically, it would make no sense to turn it down, when they already pay many times more money to try to develop similar vulns.
- They have a class of attacks which are used for targeted intrusion into foreign entities. Typically espionage or cyberwarfare, so they're not often used (they're aware these might be one-use attacks), but some persist for a long time. Foreign entities also tend not to admit to the attacks when found, so if the vendor is a US entity, often the vendor doesn't find out. We do the same; when our intelligence agencies find out about a US compromise, they often keep mum about it.
I'm not talking about XSS specifically, I mean in general. An XSS isn't usually high-value, but if it affects the right target, it can be very valuable. Imagine an XSS or CSRF vuln in a web interface for firmware for industrial controls used by an enemy state, or a corporation in that state. It might only take 2 or 3 vectors to get to that point and then you have remote control of critical infrastructure.
Oh - and the idea that a vendor will always patch a hole when they find it? Not completely true. I have seen very suspicious things going on at high-value vendors (w/their products), asked questions, and nobody did anything. In my experience, management/devs are often quite willing to ignore potential compromise just to keep focusing on the quarterly goals.
- I can't imagine intelligence agencies/DoD not doing this with their gargantuan black budgets, if it's relevant to a specific target. They already contract with private research centers to develop exploits, and it's not like they're gonna run short on cash.
Rather than trying to hide things to "ease adoption", the correct answer is to educate people. Devs hate learning things. But once they learn the new thing, the pain goes away, and the results are better. The more you try to avoid it, the more problems you create later.