- A VPN provider could easily support Port Control Protocol / NAT-PMP without giving each VPN client its own public IPv4.
- > Bus width is 64 vs 384.
The bus width is the number of channels times the per-channel width. They don't call them channels when the memory is soldered, but 384 bits is already the equivalent of six 64-bit channels. The premise is that you would have more. Dual-socket Epyc systems already have 24 channels (12 channels per socket). It costs money, but so does 256GB of GDDR.
> Look at modern AM5 struggling to boot at over 6000 with more than two sticks.
The relevant number for this is the number of sticks per channel. With 16 channels and 64GB sticks you could have 1TB of RAM with only one stick per channel. Use CAMM2 instead of DIMMs and you get the same speed and capacity from 8 slots.
- To some extent the only way around that is to use non-uniform hardware though.
Suppose you have each server commit the data "to disk" but it's really a RAID controller with a battery-backed write cache or enterprise SSD with a DRAM cache and an internal capacitor to flush the cache on power failure. If they're all the same model and you find a usage pattern that will crash the firmware before it does the write, you lose the data. It's little different than having the storage node do it. If the code has a bug and they all run the same code then they all run the same bug.
- > Then your memory requirements always were potentially 512GB. It may just happen to be even with that amount of allocation you may only need 64GB of actual physical storage; however, there is clearly a path for your application to suddenly require 512GB of storage.
If an allocator unconditionally maps in 512GB at once to minimize expensive reallocations, that doesn't inherently have any relationship to the maximum that could actually be used in the program.
Or suppose a generic library uses buffers that are ten times bigger than the maximum message supported by your application. Your program would deterministically never access 90% of the memory pages the library allocated.
> If your failure strategy is "just let the server fall over under pressure" then this might be fine for you.
The question is, what do you intend to happen when there is memory pressure?
If you start denying allocations, even if your program is designed to deal with that, so many others aren't that your system is likely to crash, or worse, take a trip down rarely-exercised code paths into the land of eldritch bugs.
- Which is a major way turning off overcommit can cause problems. The expectation for disabling it is that if you request memory you're going to use it, which is frequently not true. So if you turn it off, your memory requirements go from, say, 64GB to 512GB.
Obviously you don't want to have to octuple your physical memory for pages that will never be used, especially these days, so the typical way around that is to allocate a lot of swap. Then the allocations that aren't actually used can be backed by swap instead of RAM.
Except then you've essentially reimplemented overcommit. Allocations report success because you have plenty of swap but if you try to really use that much the system grinds to a halt.
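For reference, the knobs involved on Linux live under `/proc/sys/vm` (documented in proc(5) and the kernel's overcommit-accounting notes):

```shell
# 0 = heuristic overcommit (the default)
# 1 = always overcommit
# 2 = strict accounting: commit limit = swap + RAM * overcommit_ratio/100
cat /proc/sys/vm/overcommit_memory

# "Turning off" overcommit means mode 2 -- and note that the commit limit
# includes swap, which is exactly how the swap workaround above sneaks
# overcommit back in:
# echo 2 > /proc/sys/vm/overcommit_memory   # requires root
# echo 100 > /proc/sys/vm/overcommit_ratio
```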
- These have been my go-to for a while now:
https://en.wikipedia.org/wiki/List_of_Intel_Core_processors
https://en.wikipedia.org/wiki/List_of_Intel_Xeon_processors
It doesn't have the CPUID but it's a pretty good mapping of model numbers to code names and on top of that has the rest of the specs.
- The price difference in terms of manufacturing cost is immaterial. But if people can't afford a machine with 32GB anymore then they're going to suffer one with 8GB knowing from the outset that it's not enough and then have a strong preference for the ability to upgrade it later when prices come back down or they get more money.
- Ritalin is a chemical relative of amphetamine. In prescribed amounts it's often an effective treatment. In recreational amounts, ask your doctor about ΔFosB.
- > If you use `&` instead of `&&` (so that all array elements are accessed unconditionally), the optimization will happen
But then you're accessing four elements of a string that could have a strlen of less than 3. If the strlen is 1, the short-circuit case saves you: s[1] will be '\0' instead of 'e', so you never access elements past the end of the string. The "optimized" version is UB for short strings.
- > I want to host my gas station network’s air machine infrastructure, and I only want people in the US to be able to access it. That simple task is literally impossible with what we have allowed the internet to become.
That task was never simple and is unrelated to Cloudflare or AWS. The internet at a fundamental level only knows where the next hop is, not where the source or destination is. And even if it did, it would only know where the machine is, not where the person writing the code that runs on the machine is.
- Note that those two links are using different configs. Here's the link for Threadripper 9995WX:
https://www.phoronix.com/review/amd-threadripper-9995wx-trx5...
That's using the same config as the server systems (allmodconfig), but it has the 9950X listed there, and on that config it takes 547.23 seconds instead of 47.27. That puts all of the consumer CPUs as slower than any of the server systems on the list. You can also see the five-year-old 2.9GHz Zen2 Threadripper 3990X ahead of the brand-new top-of-the-range 4.3GHz Zen5 9950X3D because it has more cores.
You can get a pretty good idea of how kernel compiles scale with threads by comparing the results for the 1P and 2P EPYC systems that use the same CPU model. It's generally getting ~75% faster by doubling the number of cores, and that's including the cost of introducing cross-socket latency when you go from 1P to 2P systems.
- At which point you're asking why Apple doesn't have default support for something like ext4, which is a decent point.
That would both get you easier compatibility between Mac and Linux and solve the NTFS write issue without any more trouble than it's giving people now because then you'd just install the ext4 driver on the Windows machine instead of the NTFS driver on the Mac.
- NTFS writing isn't that inexplicable. NTFS is a proprietary filesystem that isn't at all simple to implement and the ntfs-3g driver got there by reverse engineering. Apple doesn't want to enable something by default that could potentially corrupt the filesystem because Microsoft could be doing something unexpected and undocumented.
Meanwhile if you need widespread compatibility nearly everything supports exFAT and if you need a real filesystem then the Mac and Windows drivers for open source filesystems are less likely to corrupt your data.
- > If Mozilla or Google were to make their code freely available on some git forge like GitHub
- It's the same language most of the code in Chrome and Firefox is written in.
It's also not clear what you're looking for in terms of cross-platform support. Some languages provide better standard library support for UI elements, but that's the part a browser will be implementing for itself regardless.
- > For example, I regularly order via companies that use Shopify. Now, all of the shopify emails are going straight to spam in Gmail, despite constantly marking them as not spam. (These even pass dmarc/spf/dkim etc, so who knows what's going on here.)
There's a pretty good chance this is because Shopify is sending a lot of email users mark as spam, or is using the same mail server as someone who does. Then you marking them as not spam gives them a better score but the sender's reputation is still so bad that it can't break the threshold to stay out of the spam folder.
- > multiple world routeable IPv4 addresses
It's pretty rare that you would need more than one.
If you're running different types of services (e.g. http, mail, ftp) then they each use their own ports and the ports can be mapped to different local machines from the same public IP address.
The most common case where you're likely to have multiple public services using the same protocol is http[s], and for that you can use a reverse proxy. That's only a few lines of config for nginx or haproxy, and then you're doing yourself a favor: adding a new service is just adding a single line to the reverse proxy's config instead of having to configure and pay for another IPv4 address.
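For nginx that looks something like this (the hostnames and backend addresses are made up for illustration):

```nginx
# Two internal services behind one public IP, selected by Host header.
server {
    listen 80;
    server_name app1.example.com;
    location / {
        proxy_pass http://192.168.1.10:8080;
        proxy_set_header Host $host;
    }
}
server {
    listen 80;
    server_name app2.example.com;
    location / {
        proxy_pass http://192.168.1.11:8080;
        proxy_set_header Host $host;
    }
}
```

Adding a third service is one more `server` block pointing at another internal address, still on the same public IP.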
And if you want to expose multiple private services then have your clients use a VPN and then it's only the VPN that needs a public IP because the clients just use the private IPs over the VPN.
To actually need multiple public IPs you'd have to be doing something like running multiple independent public FTP servers while needing them all to use the official port. Don't contribute to the IPv4 address shortage. :)
- > https://pjm.adobeconnect.com/p63ultsdb2v/
Apparently my browser does not support some content in the file I'm trying to view and I'm instructed to use, among other things, "Firefox undefined or later". Which may or may not be what I was trying to use to begin with.
Though it seems to work anyway, so okay then.