- If it weren't already in the same domain you wouldn't be able to read a non-HttpOnly cookie anyway, so that's moot.
- I tried it once. It uncannily picked up on what I was interested in that day. However, by the second day I had moved on to new interests, but it hadn't. It kept trying to push the same things as the day before, which I was no longer interested in.
Perhaps the algorithm has gotten better since, but I had no reason to want to use it after that.
- There is no difference. It is all testing. Testing captures the full gamut, from simply manually using the software all the way up to formal proofs. The advantages of formal proofs over other modes of testing were already written about, though, so it is unclear what you are trying to add. Perhaps you could clarify?
- BDD was trying to recapture what TDD was originally; it was renamed from TDD in an effort to shed all the confusion that surrounded TDD. Of course, BDD picked up all of its own confusion (e.g. Gherkin/Cucumber and all that ridiculousness). So now it is rebranded as SDD to try to shed all of that confusion, with a sprinkle of "AI" because why not. Of course, SDD is already clouded in its own confusion.
Testing is the least understood aspect of computer science and it turns out that you cannot keep changing the name and expect everyone to suddenly get it. But that won't stop anyone. We patiently await the next rebrand.
- > You need a very different mindset to write in JS (or TS), in Rust, in Rocq, in Esterel or on a Quantum Computer.
"Senior", "principal", etc. are not about your ability to write. They speak to your capacity to make decisions. A "junior" has absolutely no clue when to use JS, Rust, or Rocq, or whether code should be written at all. But someone who has written (well-written) tests in JS, and maybe written some types in Typescript, now has some concept of verification and can start to recognize some of the tradeoffs in the different approaches. With that past experience in hand, they can begin to consider whether the new project in front of them needs Rocq, Dafny, or whether Javascript will do. Couple that with other types of experience to draw from and you can move beyond being considered a "junior".
> You might be able to have "seen it all" in a tiny corner of tech
Of course there being a corner of some sort is a given. We already talked about management being a different corner, for example. Having absolutely no experience designing a PCB is not going to keep you a "junior" at a place developing CRUD web apps. Obviously nobody is talking about "seeing it all" as being about everything in the entire universe. Really, though, there aren't that many different patterns. As the terms are used, you absolutely can "see it all", and when you don't have to wait around for the season to return next year, you can "see it all" quite quickly.
- I wonder how many more times we'll rebrand TDD (BDD, SDD)?
Just 23 more times? ADD, CDD, EDD, DDD, etc.
Or maybe more?! AADD, ABDD, ACDD, ..., AAADD, AABDD, etc.
- The scare quotes are significant. Obviously nobody can ever see it all as taken in its most literal sense. But one can start to see enough that they can recognize the patterns.
If your job is dependent on the weather, one year might be rainy, one might be a drought, one might bring a flood, and so on. You need to see them to understand them. But eventually you don't need to see the year that is exceptionally rainy, yet not to the point of flood, to be able to make good decisions around it. You can take what you learned in the earlier not-quite-so-rainy year and what you learned during the flood year and extrapolate from those what the exceptionally rainy year entails. That is what levels someone up.
Much the same is true in software. For example, once you have written a (well-written) automated test in Javascript and perhaps created something in Typescript, you also have a good enough understanding of what Rocq is trying to do to determine when it would be appropriate to use. It would no doubt take much, much longer to understand all of its minutiae, but it is not knowledge of intimate details that "senior", "principal", etc. is looking for. It is about being able to draw on past experience to make well-reasoned choices going forward.
- Testing is not perfect, but what else is there? Even formal proofs are just another expression of testing. With greater mathematical guarantees than other expressions, granted, but still testing all the same; prone to all the very same human problems testing is burdened with.
- Even "code monkey" is generous.
- > If you reach "senior" after only two years and "principle" after 5, what is left for the next 20 years?
There is nothing left. Not everyone puts in the same dedication towards the craft, of course. It very well might take someone 30 years to reach "principal" (and maybe they never do). But 5 years to have "seen it all" is more than reasonable for someone who has a keen interest in what they are doing. It is not like a job dependent on the season, where you only get one each year. In computing, you can see many different scenarios play out in milliseconds. It doesn't take years to go from no experience to having "seen it all".
That is why many in this industry seek management roles as a next step. It opens a new place to find scenarios one has never seen before; a chance to start the process all over again.
- Automated testing (there aren't different kinds; to try and draw a distinction misunderstands what it is) doesn't catch bugs, it defines a contract. Code is then written to conform to that contract. Bugs cannot be introduced to be caught as they would violate the contract.
Of course that is not a panacea. What can happen in the real world is not truly understanding what the software needs to do. That can result in the contract not being aligned with what the software actually needs. It is quite reasonable to call the outcome of that "bugs", but tests cannot catch that either. In that case, the tests are where the problem lies!
Most aspects of software are pretty clear cut, though. You can reasonably define a full contract without needing to see it. UX is a particular area where I've struggled to find a way to determine what the software needs before seeing it. There is seemingly no objective measure that can be applied in determining if a UX is going to spark joy in order to encode that in a contract ahead of time. Although, as before, I'm quite interested to learn about how others are solving that problem as leaving it up to "I'll know it when I see it" is a rather horrible approach.
- > Without explicit instruction, LLMs are really bad at this
They used to be. They have become quite good at it, even without instruction. Impressively so.
But it does require that the humans who laid the foundation also followed consistent patterns and conventions. If there is deviation to be found, the LLM will see it and be forced to choose which direction to go, and that's when things quickly fall off the rails. LLMs are not (yet) good at that, and maybe never can be as not even the humans were able to get it right.
Garbage in, garbage out, as they say.
- > if you don't at least look at the running code, you don't know that it works.
Your tests run the code. You know it works. I know the article is trying to say that testing is not comprehensive enough, but my experience disagrees. But I also recognize that testing is not well understood (quite likely the least understood aspect of computer science!) — and if you don't have a good understanding you can get caught not testing the right things or not testing what you think you are. I would argue that you would be better off using your time to learn how to write great tests instead of using it to manually test your code, but to each their own.
What is more likely to happen is not understanding the customer needs well enough, leaving it impossible to write tests that align with what the software needs to do. Software development can break down very quickly here. But manual testing does not help either: you can't know what to manually test without understanding the problem. And, as before, your job is not to deliver proven code. Your job is to solve customer problems. When you realize that, it becomes much less likely that you write tests that are not in line with the solution you need.
- Maybe a bit pedantic, but does manual testing really need to be done, or is the intent here more toward a usability review? I can't think of any time obvious unintended behaviour showed up that wasn't caught by the contract encoded in tests (there is no reason to write code that doesn't have a contractual purpose). But, after trying it, finding out that what you've created has an awful UX is something I have encountered, and that is much harder to encode in tests[1].
[1] As far as I can tell. If there are good solutions for this too, I'd love to learn.
- > Your job is to deliver code you have proven to work.
Your job is to solve customer problems. Their problems may only be solvable with code that is proven to work, but it is equally likely (I dare say even more likely) that their problem isn't best solved with code at all, or is solved well enough by code that doesn't work perfectly.
- > consider how many years of experience a "senior" is typically expected to have
That entirely depends on what the experience is towards. If it is something like farming where you only get to experience a different scenario once per year due to worldly constraints, then one would expect many years — decades, even — before considering someone "senior".
But when the domain allows experiencing a new scenario every handful of milliseconds, you can shorten that tremendously. In that case, a couple of years is more than enough time to become a "senior" even with only a modicum of attention given to it. If you haven't "seen it all" after a couple of years in that kind of environment, you're never going to become "senior" as you are hardly engaging with it at all.
- The job is never to develop software. The job is always to solve problems for customers. Developing software is just a tool in the toolbox. As is, increasingly, using AI. As such, it is valuable to have those who are experienced in using AI on staff.
Which is nothing new. It has always been understood that it is valuable to have experienced people on board. The "cut the juniors" talk has never been about letting those who offer value go. Trying to frame it as being about those who offer experiential value — just not in the places you've arbitrarily chosen — is absurd.
- Have there been any successful app stores since the mobile app stores, which made developers fortunes selling fart apps, thus making it highly appealing for others to try to chase the same?
Every one I can think of since gets a bit of initial interest hoping to relive the mobile app store days, but interest wanes quickly when they realize that nobody wants to buy fart apps anymore. That ship sailed a long time ago.
And ChatGPT apps are in a worse position, as they don't even have a direct monetization strategy. The documentation suggests that maybe you can send users to your website to buy products, but they admit that they don't really know what monetization should look like...
- Not right now.
> In this early phase, developers can link out from their ChatGPT apps to their own websites or native apps to complete transactions for physical goods. We’re exploring additional monetization options over time, including digital goods, and will share more as we learn from how developers and users build and engage.
- While, again, anyone can define words as they see fit, most people consider the "junior" and "senior" labels to apply to the activity being conducted, not something off to the side. As the job is to use AI tools, these most experienced people would be considered "seniors" by most. Nobody was ever suggesting that you should cut good help because they're juniors in knitting or dirt biking.