- Finally, a tool optimized for creating Git commit hash collisions
- I view current LLMs as a new kind of search engine: one where you have to re-verify the responses, but which, on the other hand, can answer long and vague queries.
I really don't see any harm in using them this way that can't also be said of traditional search engines. Search engines already use algorithms; this just swaps out the algorithm and the interface. Search engines can bias our understanding of anything as much as any LLM can, assuming you actually attempt to verify the information you get from an LLM.
I'm of the opinion that if you think LLMs are bad without exception, you should either question how we use technology at all or question the idea that they are impossible to use responsibly. However, I do acknowledge that people criticize LLMs while justifying their own usage, and I could just be doing the same thing.
- Keywords should definitely be highlighted. They're part of the structure of the code. Highlighting makes it very quick to distinguish between keywords and variables, and it helps readability by making them easier to skim over and jump to. Maybe they could be the same color as punctuation, if the number of colors is a problem.
- >The discipline required to use AI tools responsibly is surprisingly difficult to maintain
I don't find that this requires discipline. AI code simply requires code review, the same as anything else. I don't feel the need to let AI code in unchecked, in the same way I don't feel the need to go to my pull request page one day and gleefully hit approve and merge on all of them without checking anything.
- Prime numbers are a pattern; take the natural numbers: starting after 2, exclude every multiple of 2; starting after 3, exclude every remaining multiple of 3; and so on.
The process repeats like this predictably. Even though the pattern changes as it goes, the way in which it changes is itself predictable. That repetition and predictability are what make prime numbers a pattern.
Out of the fundamental pattern of prime numbers, higher-level patterns also appear, and studying these patterns is a whole branch of math. You can find all kinds of visualizations of these patterns, including ones linked in this thread.
It's not that you're seeing a pattern that's not there, it's that you're seeing a pattern that gradually becomes infinitely complex.
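Here's a minimal sketch of that exclusion process in TypeScript (it's the sieve of Eratosthenes; the function name and the limit of 30 are just for illustration):

```typescript
// Sieve of Eratosthenes: the exclusion process described above.
// Starting from each surviving number p, exclude every multiple of p;
// whatever is never excluded is prime.
function primesUpTo(limit: number): number[] {
  const isPrime = new Array<boolean>(limit + 1).fill(true);
  isPrime[0] = isPrime[1] = false;
  for (let p = 2; p * p <= limit; p++) {
    if (!isPrime[p]) continue;
    for (let multiple = p * p; multiple <= limit; multiple += p) {
      isPrime[multiple] = false; // exclude every multiple of p
    }
  }
  return isPrime.flatMap((prime, n) => (prime ? [n] : []));
}

console.log(primesUpTo(30)); // [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```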
- I've often thought that this (and every problem where a manual process is required that's tough to enforce automatically) is where an AI code reviewer could be very useful.
It's the type of thing you might add to a long checklist of things to make sure you do (or don't do) in an MR template, a checklist that quickly becomes difficult, if not impossible, for MR authors and especially reviewers to reliably follow.
Tests are another example: you can check that coverage doesn't slip over time, but not that every change is tested. A human can maybe remember to check whether there are tests, whether those tests are any good, even whether there are tests for every change if coverage tools are well integrated into your system, but not whether every change is tested well, and not reliably.
AIs are great at sorting through lots of data to check for errors that a human would miss. Letting one add MR review comments, rather than letting it make whatever changes it wants, keeps a human in the loop to provide checks and balances.
So I like the idea; I'm just not sure how I feel about limiting it to docs or letting it write changes itself.
- Thank you, that was the example I needed to hear to see why this could be an issue.
I will still say, though, that I haven't actually had this happen to me in all my years of using hooks. Generally, when I'm fetching when an X prop changes, it's not in response to functions or objects, and I suppose any time it has happened, it was fixed before it broke anything or caused problems.
Not to say it isn't an issue - it is - but the number and degree of issues I saw with lifecycle functions were much worse. That was with a less experienced team, so it could just be bias.
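For anyone following along, here's a hypothetical sketch of the kind of trap in question: an effect keyed on an object prop re-runs on every render, because the parent recreates the object even when its contents haven't changed. The component and endpoint names are made up for illustration:

```tsx
import { useEffect, useState } from "react";

// Hypothetical component: if the parent renders <Results filters={{ query }} />,
// `filters` is a brand-new object on every parent render. The dependency
// array compares by reference, so the effect re-runs (and re-fetches)
// every time, even when `filters.query` hasn't changed.
function Results({ filters }: { filters: { query: string } }) {
  const [items, setItems] = useState<string[]>([]);

  useEffect(() => {
    fetch(`/api/search?q=${encodeURIComponent(filters.query)}`)
      .then((res) => res.json())
      .then(setItems);
  }, [filters]); // object dependency: compared by reference, not by value

  return <ul>{items.map((item) => <li key={item}>{item}</li>)}</ul>;
}
```

Depending on the primitive value instead (`[filters.query]`) sidesteps the problem.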
- I don't have a problem with needing to memoize props passed to child components for their memoization to work.
If your parent component doesn't need the optimization, you don't use it. If it does need it, your intention in using useMemo and useCallback is obvious. It doesn't inherently make your code more confusing.
The article paints it as this odd way of optimizing the component tree that creates an invisible link between the parent and child - but it's the way to prevent unnecessary renders, and for that reason I think it's pretty self-documenting. If I'm using useMemo and useCallback, it's because I am optimizing renders.
At worst it's unnecessary - which is the point of the article - but I suppose I don't care as much about having unnecessary calls to useMemo and useCallback, and that's the crux of it. Even if it's not impacting my renders now, it could in the future, and I don't think it comes at much cost.
I don't think it's an egregious level of indirection either. You're moving your callbacks to the top of the same function where all of your state and props are already.
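To make that parent/child link concrete, here's a minimal sketch (the component names are hypothetical): memoizing the callback in the parent is exactly what lets memo() on the child skip renders.

```tsx
import { memo, useCallback, useState } from "react";

// Hypothetical child: memo() skips re-rendering when its props are
// referentially equal to the previous render's props.
const ExpensiveList = memo(function ExpensiveList({
  onSelect,
}: {
  onSelect: (id: number) => void;
}) {
  console.log("ExpensiveList rendered");
  return <button onClick={() => onSelect(1)}>Select item 1</button>;
});

function Parent() {
  const [count, setCount] = useState(0);

  // Without useCallback, a new function would be created on every render
  // of Parent, and the memo() above could never skip a render.
  const handleSelect = useCallback((id: number) => {
    console.log("selected", id);
  }, []);

  return (
    <>
      <button onClick={() => setCount((c) => c + 1)}>Count: {count}</button>
      <ExpensiveList onSelect={handleSelect} />
    </>
  );
}
```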
- In my opinion, unless you need its ability to figure out when something should rebuild, or perhaps if you already use it, Make is not the right tool for the job. You should capture your pipeline jobs in scripts or something similar; Make just adds another language for developers to learn on top of everything else. Make is not a simple script runner.
I maintained a JavaScript project that used Make, and it just turned into a mess. We simply changed all of our `make some-job` jobs into `./scripts/some-job.sh`, and not only was the code much nicer, but less experienced developers were suddenly more comfortable making changes to the scripts. We didn't really need Make to figure out when to rebuild anything; all of our tools already had caching.
- I have had plenty of opportunities to estimate projects, but one thing I still can't figure out is how to educate clients about estimates.
If a client wants to know "why is this going to take so long?", I can list the unknowns and third-party touch points, which are always the things that make tasks take longer, but then they'll wonder why those will make it take longer. From there, it's a challenge to communicate how unknowns are part of every project, how they are a good indicator of the risk of a task, and how there are some things you just won't know until you start work on a task in earnest.
Doesn't seem to matter how much detail I go into, it always comes back to "but I thought this would be easy."
The best I can come up with is to educate clients on what bad estimation looks like (Did they come right back with a fixed estimate for your type of project? Are they even asking questions?), hope they come back after getting other estimates with the exact red flags I warned them about, and then maintain client trust by any means necessary, so that when I say something is going to take a certain amount of time, they know I'm not exaggerating.
- I don't think it's quite the same. We live in an in-between time - AI is not quite there yet.
AI struggles with knowledge from after its training cutoff (so it can't help much with anything relating to new versions of libraries) and often just gets things wrong or comes up with suboptimal answers. It's still only optimized to produce answers that look correct, after all.
With these problems, someone on the team still needs to understand or be able to figure out what's going on. And dangit if it isn't getting hard to hire for that.
And the day that AI can actually replace the work of junior devs is just going to cause more complications for the software industry. Who will get the experience to become senior devs? Who will direct them? And even if those people also get replaced eventually, we will probably still have more awkward in-between times, each with its own problems.
Can't say it's not convenient, but no use pretending the challenges don't exist.
- What I always tell people who get into software because they need a job is that the industry also needs people with quality-assurance and management skills. Not because I want to offload people who are just looking for a quick buck onto those fields, but because I sometimes find that people with the right skills just take the long way around to transition into those roles. (I've even seen this happen once with a UX designer, but I think artistic people mostly know to try for those roles.) People don't really consider that, successful as the industry is, there are more roles in it than just developer. When the company I'm part of was a small startup, it was hard to find good people who were interested in taking on entry-level QA/scrum-master positions.
- I think the other reply to my post is on to something when they say there are many paths to mastery. I have just laid out one path. I don't know what the ideal general learning path is, but I know what worked for me. There is certainly a level where you gain enough knowledge to remove magical thinking entirely and you "learn how to learn", and there's no one way to reach that. I have not seen anyone reach that level by learning only specific abstractions, but I can only talk about my own experience.
I also don't think every programmer needs to follow this kind of path. All I'm saying is someone has to write "jpeg.open()" in the first place.
- I think it's very hard to use an abstraction effectively if you don't know why it exists and you think it's magic.
I have seen programmers who don't understand, for instance, how key/value stores are implemented come up with all kinds of improbable explanations for what might be happening to their code when it goes wrong, because they blame the data structure. Perhaps they even know the old idiom "don't blame your tools", but they've reached the end of their knowledge, so something must be going wrong because of that magic data structure. Meanwhile, it's some side effect they missed. Magical thinking. Sure, they eventually learn what to do when something goes wrong with that data structure, but they come right back to the same problem with everything else they don't understand. Taking the magic out of anything, even knowledge that is not always directly used, is extremely powerful.
I didn't say this before but I also have experience down to the level of designing and building circuits. I think the bottom-up model falls apart here because thinking about the physical properties of electronics is much, much harder than just dealing with idealized logic gates. You can learn it later, but its application is extremely specialized in my opinion.
I don't know, maybe I am biased because of my learning path, maybe students don't need to go as deep, but I can't help but feel that if I did it anyone can.
- You bring up good points. In my opinion it's difficult to put together any kind of significant curriculum, and it's inevitable that it's not going to work for everyone and some people are going to struggle with it. However, there's probably some tipping point where the material just doesn't work for anyone.
I don't know how to account for that. I don't know what the curriculum should be, or how to make it more digestible. I just know that, as a person who hires other programmers and got a taste of both a software development program and my own self-learning, I don't like what schools are putting out, and MIT's change in course material seems like a drastic turn in that direction.
- The problem is not that the learning path doesn't exist; the problem is finding it and knowing whether you should follow it. When you get started, you don't know what you don't know.
That's why I say I got lucky... I somehow found this learning path and decided to follow it without really knowing if it would be useful.
Yes, I think that dedicating yourself to any project for a long time will teach you a lot, but you can also end up with a lot of holes in your knowledge depending on the path that you take.
For instance, the most common mistake I see people make regarding learning programming is focusing too much on specific languages and frameworks and never learning anything that could be called fundamentals.
I suppose that's what this is really about. Fundamentals versus specifics.
- I definitely agree about how it may appear to young programmers that they're casting magic spells. I had the fortunate experience early on of building my own virtual CPU and RAM; then I learned assembly, then C, then algorithms, then OSes, then about Unix. I'm not trying to toot my own horn - I've always been a little slow and struggled through this material. However, learning from the bottom up has granted me a lot of insight into how things work and an ability to learn pretty much anything software-development related. SICP takes a different approach to "bottom-up" learning than I took, but I think it still applies. Maybe with all the tools out there, not every programmer needs to learn from the bottom up, but someone needs to learn how everything works in order to build good tools, and if you can't learn that from MIT, I don't know where you can.
- I have been on teams that never review code and teams that always review code, and I can confidently say I never want to work on the former again. That was with a junior team, but your team is going to have new people, if not juniors, at some point. People who are familiar with each other's code and have agreed on standards can review code pretty quickly. I would rather have it and not need it than need it and not have it.