- I agree that it's not a serious project, but I wouldn't call it a joke. Jokes are funny.
- Without TLS, sometimes still referred to as SSL, a website's content can be modified by anyone controlling the network path. This includes ISPs and WiFi operators.
Sure, your website may have unimportant stuff on it that nobody relies on, but do you want visitors to see ads in your content that you didn't put there?
- This has been my experience as well.
I implemented 2FA for my previous employer and we would have gladly skipped SMS 2FA if we could get away with it. It's more expensive for the company and the customer. And it sucks to implement because you have to integrate with a phone service. The whole phone system is unreliable or has unexpected problems (e.g. using specific words in a message can get your texts blocked). Problems with SMS 2FA are a pain for customer service too.
- I implemented 2FA at a previous job and I was responsible for the production implementation working as expected. My thinking was that uncompleted 2FA attempts are common for a number of reasons: typos, someone gets distracted, they didn't have access to their phone at the time, SMS sucks (on either our sending side or the receiving side), etc. I didn't put much thought into it beyond that. (Should I?)
I implemented rate limiting/lockouts for too many 2FA failures. I added the ability to clear the failed attempt count in our customer support portal. If we had any problems after those were implemented, I never heard about them.
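The core of the lockout logic was simple. Here's a minimal Python sketch of the idea (the names and thresholds are hypothetical, and a real implementation would keep this state in a database rather than a dict):

    import time

    MAX_FAILURES = 5           # hypothetical threshold
    LOCKOUT_SECONDS = 15 * 60  # hypothetical lockout window

    # user_id -> (failure_count, window_start)
    failures = {}

    def record_failure(user_id):
        count, start = failures.get(user_id, (0, time.time()))
        failures[user_id] = (count + 1, start)

    def is_locked_out(user_id):
        count, start = failures.get(user_id, (0, 0.0))
        if time.time() - start > LOCKOUT_SECONDS:
            failures.pop(user_id, None)  # window expired; start fresh
            return False
        return count >= MAX_FAILURES

    def clear_failures(user_id):
        # Exposed in the customer support portal so staff can unblock users.
        failures.pop(user_id, None)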
- You may have to temper your expectations. Free usually means "sells/uses your data to offset costs". If you're OK with that, there's no need to switch off of Gmail. If you're not OK with that, you'll have to pay.
Also, hosting email under your own domain gives you the freedom to move from one email provider to another even if they do shut down.
I put my money where my mouth is. I wanted to degoogle, so I pay $50/year for Fastmail. One feature I like is automatically snoozing certain emails. Most of my non-personal email is automatically snoozed until 6pm every day. This way I don't get multiple notifications throughout the day for emails that aren't time sensitive.
- I wish consoles were like the Steam Deck: computers running a common OS that just so happen to be used for gaming.
- Knowing that there are only 25 responses makes it all the funnier that rate limiting is mentioned.
And you can host the service yourself! Hard pass. I'll read the 25 responses from your gist. Thanks!
- Once you remove the duplicates that are different only because of the typos in them, yes, that's correct.
- The author doesn't seem to consider someone's desire to behave morally as an incentive. How odd.
- Avoiding shame is an incentive.
It boils down to: "Why are people violating these unenforced rules? Sure it benefits them, but don't they feel bad?"
- Generally it isn't done because the people in power benefit from the status quo.
To fix it, you need collective action. And now you have a collective action problem: https://en.wikipedia.org/wiki/Collective_action_problem
- Your comment could have been simplified to: "I don't like the syntax"
And that could be optimized further by leaving no comment.
All syntax is learned. None of it is "intuitive". Anything unfamiliar to you will seem unpleasant. Some syntaxes can be better than others, but to make that distinction you'd have to at least cite reasons why one syntax is better than another.
- In my day we memorized IPs AND ports. 10.24.67.22:7834 was to access our bug tracker and 192.168.240.17:21282 was for our CVS repository!
Seriously though, one of the first things I did when I was hired as the sysadmin for a small company was to eliminate the need for memorizing/bookmarking IP:port combos. I moved everything to standard ports and DNS names.
Any services running on the same machine that needed the same ports were put behind a reverse proxy with virtual hosts to route to the right service. Each IP address was assigned an easy-to-remember DNS name. And each service was set up with TLS/SSL instead of the bare HTTP they had previously.
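In production that routing was handled by an ordinary reverse proxy, but the virtual-host idea fits in a few lines of Python (the hostnames and backend ports below are made up for illustration):

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    # Map the Host header to the internal service that used to live on a
    # memorize-only IP:port combo. Names and ports are hypothetical.
    BACKENDS = {
        "bugs.example.com": "http://127.0.0.1:8081",
        "cvs.example.com": "http://127.0.0.1:8082",
    }

    class VirtualHostProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            # Route on the Host header, the way virtual hosts do.
            host = self.headers.get("Host", "").split(":")[0]
            backend = BACKENDS.get(host)
            if backend is None:
                self.send_error(404, "unknown virtual host")
                return
            with urlopen(backend + self.path) as upstream:  # toy: no error handling
                body = upstream.read()
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("", 8080), VirtualHostProxy).serve_forever()

A request for bugs.example.com gets forwarded to the backend registered for that name; anything else gets a 404.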
- I recommend using the .test TLD. (A quick usage sketch follows this list.)
* It's reserved so it's not going to be used on the public internet.
* It is shorter than .local or .localhost.
* On QWERTY keyboards "test" is easy to type with one hand.
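For example, point a .test name at loopback in your hosts file and it resolves like any other name (the name here is hypothetical; any .test name works the same way):

    # Assumes /etc/hosts contains the line:  127.0.0.1  myapp.test
    import socket
    print(socket.gethostbyname("myapp.test"))  # -> 127.0.0.1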
- I'd recommend using some other reserved IP address block like 169.254.0.0/16 or 100.64.0.0/16 and assigning it to your local loopback interface. (Nitpick: you can actually use all of 127.0.0.0/8 instead of just 127.0.0.0/24).
I previously used distinct 127.0.0.0/8 addresses for each local service I ran on my machine. That worked fine for quite a while, but this was in pre-Docker days.
Later on I started using Docker containers. Things got more complicated when I wanted to access an HTTP service both from my host machine and from other Docker containers. Ideally, instead of having your services exposed differently inside a Docker network and outside of it, you'd consistently use the same IPs and ports you expose/map.
If you're using 127.0.0.0/8 addresses, that won't work. The local loopback addresses aren't routed to the host computer when sent from a Docker container; they're routed to the container. In other words, 127.0.0.1 inside Docker means "this container", not "this machine".
For that reason I picked some other unused IP block and assigned it to the local loopback interface. Now I assign those IPs to my Docker containers.
I wouldn't recommend using the RFC 1918 IP blocks since those are frequently used in LANs and within Docker itself. You can use something like the link-local IP block (169.254.0.0/16), which I've never seen used outside of the AWS EC2 metadata service. Or you can use part of the carrier-grade NAT block (100.64.0.0/10), such as 100.64.0.0/16. Or even some IP block that's assigned for public use but never used, although that can be risky.
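If you want to double-check that a block is special-purpose before adopting it, the ipaddress module in Python's standard library can confirm it (a quick sanity check, not part of any setup):

    import ipaddress

    # All of these are special-purpose blocks that won't appear on the
    # public internet.
    print(ipaddress.ip_address("169.254.1.1").is_link_local)  # True
    print(ipaddress.ip_address("100.64.0.1").is_global)       # False (RFC 6598 shared space)
    print(ipaddress.ip_network("127.0.0.0/8").is_loopback)    # True: the whole /8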
I use Debian Bookworm. I can bind 100.64.0.0/16 to my local loopback interface by creating a file under /etc/network/interfaces.d/ with the following:

    auto lo:1
    iface lo:1 inet static
        address 100.64.0.1
        netmask 255.255.0.0

Once that's set up I can expose the port of one Docker container at 100.64.0.2:80, another at 100.64.0.3:80, etc.
- One of my favorites in this space is Feather Wiki: https://feather.wiki/
- So we're back to another maxim: Everything should be made as simple as possible, but not simpler.
Or perhaps: Bugs happen.
- Like many of the other commenters, I have no code to show. I'm strongly motivated at work to solve problems and create correct, performant, maintainable code. I appreciate a job well done.
Outside of work, I just don't have the motivation to code anything. I don't have enough problems at home that code would solve.
In an interview, ask me anything! ... except to show you code on Github.
- I like the code review approach and tried it a few times when I needed to do interviews.
The great thing about code reviews is that there are LOTS of ways people can improve code. You can start with the basics, like whether the code runs at all (i.e. compiles) and whether it produces the right output. And there are also more advanced improvements, like making the code more performant, more maintainable, and less error-prone.
Also, candidates can talk through their reasoning about why they would or wouldn't change the code they're reviewing.
For example, you'd probably view the candidates differently based on their responses to seeing a code sample with a global variable. (A concrete sketch follows these responses.)
Poor: "Everything looks fine here"
Good: "Eliminate that global variable. We can do that by refactoring this function to..."
Better: "I see that there's a global variable here. Some say they're an anti-pattern, and that is true in most but not all cases. This one here may be ok if ..., but if not you'll need to..."
- Yup, that's just one of the many ways to do a code-review interview wrong.
Each code sample should have multiple things wrong. The best people will find most (not necessarily all) of them. The mediocre will find a few.