- There's still a way to load it under Chrome 138, but when Chrome 139 lands, MV2 will finally be removed.
https://developer.chrome.com/docs/extensions/develop/migrate...
> Just as before, Enterprises using the ExtensionManifestV2Availability policy will continue to be exempt from any browser changes until at least June 2025. Starting in June, the branch for Chrome 139 will begin, in which support for Manifest V2 extensions will be removed from Chrome. Unlike the previous changes to disable Manifest V2 extensions which gradually rolled out to users, this change will impact all users on Chrome 139 at once. As a result, Chrome 138 is the final version of Chrome to support Manifest V2 extensions (when paired with the ExtensionManifestV2Availability key). You can find the release information about Chrome 138 and 139, including ChromeOS's LTS support, on the Chromium release schedule.
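For reference, the policy itself is a single enum. Here's a minimal sketch of what a Linux managed-policy file might look like, dropped into /etc/opt/chrome/policies/managed/ (the filename is up to you, and this assumes the documented enum where 2 means "Manifest V2 is enabled"):

    {
      "ExtensionManifestV2Availability": 2
    }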
- I still have my copy of Skitch installed, and agree, it is and was the best solution for so many use cases.
However, given that it has been abandonware for a long time now, I did look for alternatives for that fateful day when Apple fucks up macOS sufficiently that Skitch is no longer executable, which is not a question of if, but a matter of when.
I decided that CleanShot X was the closest thing to a suitable alternative to Skitch, although, unfortunately, it has a number of really annoying design decisions that keep it from being a true replacement.
If someone could convince the owners of the Skitch IP to open source it, I would be very interested in helping maintain it.
- Very cool. I'd love to know what technique they used to build the dialogs with the simulated chat, with the text animation sequences filling out the "DMARC Results" area -- what does the source for this UI look like?
Edit: Just found the `terminal.analyze` function: very cool. That `terminal.echo` is so clever.
- https://portal.office.com/servicestatus
> Last refreshed: 2024-01-26 21:04:32Z (UTC)
>
> Current status: Our failover operation did not provide the anticipated relief to end users in North and South America regions and we are now working to optimize traffic patterns as part of the mitigation effort. We’re applying configuration changes across the affected network infrastructure to reduce user impact as quickly as possible.
>
> Next update by: Friday, January 26, 2024, at 11:00 PM UTC
- There are a number of companies out there that pay well above market rates, and are unable to hire enough _sufficiently skilled_ developers. They literally hire every single one they can find and pay them at top-of-market (I mean, what they pay developers actually defines what top-of-market is at any given time), and they'll tell you they would hire more if they could, but there aren't any more, which is the definition of a shortage.
The statement "there is a shortage of skilled software developers" is true and not disingenuous.
There is a glut of unskilled software developers. There are many companies that are unwilling or unable to pay market rate for talent. But, neither of those things changes the fact that there is a real shortage of skilled software developers, as the demand for them cannot be fulfilled.
- > What? b is 2? It should be 4. Or produce an error, or something. But not 2. There's no universe in which that result could be considered correct, it's just wrong.
It is correct, and it's what I would expect based on the behavior described in the MySQL documentation.
https://dev.mysql.com/doc/refman/8.0/en/insert.html
"An expression expr can refer to any column that was set earlier in a value list."
As the columns are evaluated in left-to-right order, "earlier" in this context means "to the left of".
Therefore, given:

    INSERT INTO t1 (b, a) VALUES (DEFAULT, 3);

At the time `b` is being evaluated, `a` is to the right of it, so the current value of `a` is its own default value, which is `1`. So, `a+1` will evaluate to `1+1`, or `2`.

This is not a bug; this is the documented and expected behavior.
To try and illustrate more clearly, try this:
To start simply:

    CREATE TABLE t1 (a int default 11, b int default 22, c int default (a+b+33));
    INSERT INTO t1 (a, b, c) VALUES (DEFAULT, DEFAULT, DEFAULT);

This gives us:

    SELECT * FROM t1;
    +------+------+------+
    | a    | b    | c    |
    +------+------+------+
    |   11 |   22 |   66 |
    +------+------+------+

No surprises. Next:

    DELETE FROM t1;
    INSERT INTO t1 (a, b, c) VALUES (44, 55, DEFAULT);
    SELECT * FROM t1;
    +------+------+------+
    | a    | b    | c    |
    +------+------+------+
    |   44 |   55 |  132 |
    +------+------+------+

Again, no surprises. Now, let's reorder things a bit:

    DELETE FROM t1;
    INSERT INTO t1 (b, c, a) VALUES (87, DEFAULT, 65);

Knowing that we're evaluating this left-to-right, and that `a`, `b`, and `c` start out set to their default values as defined in the table schema, what do we expect? <a=65, b=87, c=??>

At the time `c` is being computed though, what is the value of `a`? Is it `65`, or `11`? We would expect it to be `11`, as we haven't evaluated the value of `a` in the INSERT statement yet.

Therefore, we expect `c = 11 + 87 + 33 = 131` and NOT `c = 65 + 87 + 33 = 185`.

And, sure enough, there it is:

    SELECT * FROM t1;
    +------+------+------+
    | a    | b    | c    |
    +------+------+------+
    |   65 |   87 |  131 |
    +------+------+------+

This is well-defined and expected behavior. Sorry, not a bug.
- > These days this is just not possible.
Why not?
I still format my code to an 80 column width.
I can still use `pr` to paginate my source for printing, `enscript` to produce a PostScript document of it, and `ps2pdf` to convert that to PDF, which I can then spool to any printer.
And, on my home network, I have an HP LaserJet 2300dn, and I can actually just shove the plain-text output from `pr` straight to the JetDirect port on 9100/tcp.
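That last trick is simple enough that a minimal sketch fits in a dozen lines of Python (the printer hostname and input filename here are hypothetical):

    import socket

    # Raw "JetDirect" printing: send pre-formatted plain text straight to the
    # printer's raw socket. Most HP LaserJets listen on 9100/tcp.
    PRINTER_HOST = "laserjet.example.net"  # hypothetical hostname

    with open("listing.txt", "rb") as f:  # e.g., the paginated output of `pr`
        data = f.read()

    with socket.create_connection((PRINTER_HOST, 9100), timeout=10) as s:
        s.sendall(data)
        s.sendall(b"\x0c")  # trailing form feed, so the last page ejects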
What exactly is the obstacle that's preventing you from printing out your source code?
- All visits to Calendly.com are met with the following boilerplate message:
And, "Learn more" is linked to https://calendly.com/privacy - so, yes, but it may be possible that by that point you're already being tracked.> We respect your personal privacy > > We and our third party partners use cookies and other tracking technologies to provide a proactive support experience, enhance site navigation, analyze site usage, and assist in our marketing efforts. > > Learn moreI suppose it's an interesting question that needs a definitive answer: by clicking on a link to a website, are you thereby implicitly agreeing to the site's terms and conditions before you've had a chance to review them, at least until you've had the opportunity to review them and then decide to discontinue using that site or not?
Is a brick-and-mortar business obligated to present you with a copy of their business policies _before_ you enter their establishment? No. You are free to inquire before patronizing their business, and they are free to inform you of their policies after the fact. I would expect a website to be treated similarly: by visiting the site, and until you choose to discontinue using it, you are agreeing to the site's terms and conditions, sight-unseen, whatever they may be.
- > Also, lmao at the guy saying you can create a 1B dollar product in 5 years. If it were that easy everyone would be a billionaire.
I never said it was _easy_. I said it was _possible_, and it hasn't always been possible to do with a $50/mo starting cost.
That was my point.
Opportunity today is more affordable than it has ever been in the history of computing.
- Operating my own consulting practice, working on a startup, working with non-profits, and getting to pick and choose what I work on.
I've been in the industry since the late 1980s, and I will say that things have only gotten better with respect to opportunity as time has passed, and everything going on today is just more signal that things are, at least in the short term, continuing to get better.
Regarding avenues for growth, it's the same story that repeats throughout history: automation. Just as the agricultural and industrial revolutions supported population growth through increased output, the technological revolution has been having the same impact over the last 20 years.
Despite the radical impact that recent technological advancement (where "recent" means the past 10-20 years) has had on life as we know it, it's my opinion that we are actually only just scratching the surface of the impact it will have on humanity, likely over the next 100-200 years.
It's going to take a lot of people with a lot of novel ideas to take us through that revolution. And, each step will introduce changes that will free up people to pursue what comes next, to continue to build on top of the advancements that came before.
If that doesn't excite you and make you want to be part of that, then yeah, you probably should find a different career.
With our aging population that's only living longer and longer, there's already a dire shortage of healthcare professionals: perhaps you'd be better suited to pursuing that, instead?
- > Should I even try getting a SE job anymore?
Based on what you've written, my answer is: no. You don't sound cut out for it.
> The market isn't the limitless growth starry-eyed future it was during the first 20 years of the 21st century. Now I imagine with AI, headcounts are just going to whittle down to the bare essentials as junior devs become completely unneeded, thus removing their chance of getting enough experience to level up to senior.
You couldn't be more wrong.
20 years ago, you needed to buy or rent servers, pay for bandwidth, marketing/reaching your audience was hard, eyeballs were expensive, etc. To get a service off the ground, you needed to have at least $50k of cash for table stakes.
Today, you can get a virtual slice of very fast compute with very cheap bandwidth for $50/mo. It isn't unreasonable to work a minimum-wage job and live very modestly for 5 years, covering that $50/mo expense while building a product by yourself that could become worth $1B or more.
In other words, there is practically limitless opportunity today, whereas before only those with significantly more resources had the opportunities.
If you aren't able to see the incredible opportunity you have in front of you, then yeah, you should most likely get out of the industry, because you ARE right that as software continues to get easier to create through innovation, the need for simple laborers will continue to decline.
- If Calendly didn't include language in their Terms & Conditions stating that use of the Calendly application requires users to give permission to listed third-party services like Heap to collect usage information, then shame on them.
But, wouldn't you know it, they aren't dummies:
> Information Collected Automatically From You.
>
> [...]
>
> Third-Party Tools.
> We may disclose information to third parties or allow third parties to directly collect information using these technologies on our Website, such as social media companies, advertising networks, companies that provide analytics including ad tracking and reporting, security providers, and others that help us operate our business and Website. We use such third-party tools subject to your consent, opt-out preferences, or other appropriate legal basis where legally required. [...]

So, yeah, if you use Calendly, you accept their terms of use, and their privacy policy has informed users of such third-party collection of data, so there's the informed consent by accepting those terms.

IANAL, but if this class action doesn't get thrown out, it will have a serious chilling effect on any company that has users in California.
- > The distributed model means that a single post from an account with followers on (e.g.) 400 instances means that that’s 400 connections to 400 servers, all at once.
If only the kids working on Mastodon were old enough to know what NNTP is, they wouldn't have made such poor engineering decisions.
:sadpanda:
- If the packages are of GPL'ed software, then you (anyone) are within your rights to charge a fee to download the software:
https://www.gnu.org/licenses/gpl-faq.en.html#DoesTheGPLAllow...
Of course, in order to comply with the GPL, you must make the source available as well. Which means, anyone else can turn around and either do the same thing, or even make a copy available for free:
https://www.gnu.org/licenses/gpl-faq.en.html#DoesTheGPLRequi...
If anyone feels strongly enough about these package maintainers charging a fee, they are welcome to pay the fee and then make the packages available free of charge.
But, that would be work. Less work than the work done by the original package creator, but still, work.
And, many people want to be compensated for their time. Ultimately, as long as the package maintainer charges a reasonable enough fee that the value provided exceeds the relative cost of the fee, then everyone wins.
- It's hard to know if "their code was horribly written" without privileged knowledge about their code, but we can infer some things or at least formulate some reasonable assumptions based on what factual information is publicly available:
https://www.sec.gov/litigation/admin/2013/34-70694.pdf
> On August 1, 2012, Knight Capital Americas LLC (“Knight”) experienced a significant error in the operation of its automated routing system for equity orders, known as SMARS. [...]
> Upon deployment, the new RLP code in SMARS was intended to replace unused code in the relevant portion of the order router. This unused code previously had been used for functionality called “Power Peg,” which Knight had discontinued using many years earlier. Despite the lack of use, the Power Peg functionality remained present and callable at the time of the RLP deployment. The new RLP code also repurposed a flag that was formerly used to activate the Power Peg code. Knight intended to delete the Power Peg code so that when this flag was set to “yes,” the new RLP functionality—rather than Power Peg—would be engaged.
> [...] In 2003, Knight ceased using the Power Peg functionality. In 2005, Knight moved the tracking of cumulative shares function in the Power Peg code to an earlier point in the SMARS code sequence. Knight did not retest the Power Peg code after moving the cumulative quantity function to determine whether Power Peg would still function correctly if called.
A system that can, in a catastrophic failure, single-handedly put your entire company out of business, is what I'd consider mission-critical.
Mission-critical systems should, by definition, be held to a higher standard of quality than non-critical systems.
A mission-critical system that still has dead code in production after 9 years, untested, and that allows the "repurpos[ing of] a flag," suggests the level of quality that may be at play.
Your guess is as good as mine, but this smells like a pretty grievous coding error, lack of quality around deployment processes, and insufficient testing and validation.
For a system that is so mission-critical that a simple "mistake" put the entire company out of business in one single event, this seems pretty "horrible" to me.
YMMV.
- > Most of my roles have been through recommendations which short-circuited the typical tech interviews or was I hired by non-technical people who only cared about the output, so I have been fortunate in that regard.
If you realize that this approach leads to success, why are you trying to do things differently?
When you know of a path that leads to success, and you decide to not travel that path, you shouldn't be surprised when you find yourself on a path that doesn't lead to success.
You already know which path does. Talk to your therapist and try to figure out why you're choosing to not take that path.
- Ah, the Pareto principle hard at work. Love it!
When people talk about "10X developers," this is what they're referring to.
It is possible for an individual to have outsized impact, and it's not necessarily about having extraordinary skill, it's about knowing what to work on and what to ignore.
- How sensitive are the secrets? How often will you be rotating them?
Are the secrets sensitive enough to encrypt them at rest?
Keeping the lock (the encrypted secret) and the key (the decryption key) in two separate places makes it slightly harder for an attacker to recover the plaintext secret, but also means you need to take the necessary precautions to not leak the key accidentally.
Sometimes, we can't even trust our own system to be secure enough to prevent the key from becoming compromised, so Hardware Security Modules (HSMs)[1] became a thing: a device with, presumably, a smaller attack surface that holds the key and can be used to decrypt the secret.
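As a rough sketch of that lock-and-key split, here's what it might look like with Python's `cryptography` package (the secret value here is hypothetical):

    from cryptography.fernet import Fernet

    # The key is generated once and stored somewhere OTHER than where the
    # ciphertext lives (e.g., a KMS, an HSM, or a separate secrets store).
    key = Fernet.generate_key()

    # The "lock": the encrypted secret, safe to keep at rest with the app.
    ciphertext = Fernet(key).encrypt(b"db-password-hunter2")

    # Recovering the plaintext requires bringing both halves back together.
    assert Fernet(key).decrypt(ciphertext) == b"db-password-hunter2"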
- > AI has failed to automate driving. (Despite enormous efforts)
Sure, if you consider this an all-or-nothing boolean, the OP is right.
However, AI has automated many driving tasks and outperforms the average human: take parallel parking, for example. Not an incredibly difficult task, but it's one where our modest AI absolutely outperforms the average human.
Similarly, autocomplete, the precursor to AI, can at times outperform the average human programmer. AI-enabled autocomplete (e.g., GitHub Copilot) can outperform the average human programmer almost consistently.
Defining the goal as "AI completely replacing programmers" is not a useful goal.
Defining the goal as "AI making the below-average programmer useful if they learn to use AI-enabled tools, and making the below-average programmer obsolete if they refuse to" is very pragmatic, and AI is practically there.
- Temporary workaround is to set your computer's clock back to before the certificate expired, until a new client build is released with an updated certificate, presumably.
Surprisingly, not a DNS issue, for a change.