
mpyne (7,759 karma)

  1. Yeah, a big thing is latency vs. throughput.

    That's a great article you link and it basically notes up front what the throughput requirements are in terms of cores per player, which then sets the budget for what the latency can be for a single player's game.

    Now, if you imagine for a second that they had managed to get the average game to just barely meet their frame time threshold, and tried to optimize so that they were running right at 99% capacity, they would have put themselves in an extremely dangerous position in terms of meeting latency requirements.

    Any variability in hitting that frame time would cause a player to bleed over into the next player's game, reducing the amount of time the server had to process that other player's game ticks. That would percolate down the line, impacting a great many players' games just because of one tiny little delay in handling one player's game.

    In fact it's for reasons like this that they started off with a flat 10% fudge adjustment to account for OS/scheduling/software overhead. By doing so they've in principle already baked in a 5-8% reduction in capacity usage compared to the theoretical maximum.

    But you'll notice in the chart they show from recent game sessions in 2020 that the aggregate server frame time didn't hang out at 2.34 ms (their adjusted per-server target); it actually tended to average around 2.0 ms, or about 85% of the already-lowered target.

    And that same chart makes clear why that is important, as there was some pretty significant variability in each day's aggregate frame times, with some play sessions even going above 2.34 ms on average. Had they been operating at exactly 2.34 ms they would definitely have needed to add more server capacity.

    But because they were in practice aiming at 85% usage (of a 95% usage figure), they had enough slack to absorb the variability they were seeing, and stay within their overall server expectations within ±1%.

    Statistical variability is a fact of life, especially when humans and/or networks are involved, and systems don't respond well to variability when they are loaded to maximum capacity, even if it seems like that would be the most cost-effective.

    Typically running near maximum capacity only works where it's OK to ignore variability in time, such as in batch processing (where cost-effective throughput is more valuable than low latency).
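
    To make the slack argument above concrete, here's a minimal sketch using the 2.34 ms adjusted target and the ~2.0 ms observed average from their chart; the 0.15 ms day-to-day spread is my own assumption, purely for illustration:

      # Minimal sketch: how headroom below the target absorbs variability,
      # assuming a hypothetical 0.15 ms spread in daily aggregate frame times.
      import random

      TARGET_MS = 2.34          # adjusted per-server frame-time target
      ASSUMED_STDDEV_MS = 0.15  # illustrative day-to-day variability

      def fraction_over_target(mean_ms, days=100_000):
          """Fraction of simulated sessions whose aggregate frame time
          exceeds the target, for a given mean and the assumed spread."""
          over = sum(1 for _ in range(days)
                     if random.gauss(mean_ms, ASSUMED_STDDEV_MS) > TARGET_MS)
          return over / days

      # Averaging ~2.0 ms leaves slack to absorb bad days; averaging right
      # at 2.34 ms means roughly half of all sessions blow the budget.
      print("mean 2.00 ms:", fraction_over_target(2.00))
      print("mean 2.34 ms:", fraction_over_target(2.34))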

  2. > I’d like to understand the constraints and systems involved that make 80% considered full utilization. There’s obviously something that limits a OS; is it tunable?

    There are OS tunables, and these tunables will have some measure of impact on the overall system performance.

    But the things that make high-utilization systems so bad for cycle time are inherent aspects of a queue-based system that you cannot escape through better tuning, because the issues they cause for cycle time were never due to a lack of tuning in the first place.

    If you can tune a system so that what previously would have been 95% loading is instead 82% loading, that will show significant performance improvements, but you'd erase all those improvements if you just allowed the system to go back up to 95% loaded.

  3. You absolutely do not want 90-95% utilization. At that level of utilization, random variability alone is enough to cause massive whiplash in average queue lengths.

    The cycle time impact of variability of a single-server/single-queue system at 95% load is nearly 25x the impact on the same system at 75% load, and there are similar measures for other process queues.

    As the other comment notes, you should really work from an assumption that 80% is max loading, just as you'd never aim to have a swap file or swap partition of exactly the amount of memory overcommit you expect.
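
    If you want to see where numbers like that come from, here's a minimal sketch using the Kingman approximation for a single-server queue; the exact multiplier between 95% and 75% depends on how variable your arrivals and service times are, but the u/(1-u) term is what blows up as you approach full utilization:

      # Kingman approximation for mean queue wait in a single-server queue.
      # ca2/cs2 are squared coefficients of variation of interarrival and
      # service times (1.0 for both corresponds to the classic M/M/1 case).
      def kingman_wait(utilization, mean_service=1.0, ca2=1.0, cs2=1.0):
          u = utilization
          return ((ca2 + cs2) / 2) * (u / (1 - u)) * mean_service

      for u in (0.50, 0.75, 0.80, 0.90, 0.95):
          print(f"{u:.0%} loaded -> mean wait of {kingman_wait(u):5.1f} service times")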

  4. https://en.wikipedia.org/wiki/PRISM

    They collect it straight from the company after it's already been transmitted. It's not a wiretap; it's more akin to automated subpoena enforcement.

  5. > Also, PRISM.

    PRISM works fine to recover HTTPS-protected communications. If anything NSA would be happier if every site they could use PRISM on used HTTPS, that's simply in keeping with NOBUS principles.

  6. They are certainly sometimes a key part of the retro look that makes things nostalgic.

    But even during the PSX era I found it distracting and annoying to look at so I can't say I have any nostalgia for it even now in the way I do for N64-style low-poly 3-D games or good pixel art.

  7. > How do you do either of the following without spending any time at all on estimates?

    > "Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale."

    This is 'just' bog-standard continuous delivery (which is where most organizations should be heading). You pull the next todo from the backlog and start working on it. If it takes more than a day to commit something, you split the task into something smaller.

    You don't need to estimate ahead of time at all as long as the task is small enough, all you need is to be able to put the near-term backlog of work into a good priority order of business value.

    If the high-value task was small it doesn't prevent you from doing more work, because the next unit of work to do is the same either way (the next item on the backlog).

    If the high-value task was too big, it can cause you to take a pause to reflect on whether you scoped the task properly and if it is still high-value, but an estimate wouldn't have saved you from it because if you'd truly understood the work ahead of time you wouldn't be pausing to reflect. An estimate, had you performed it, would not have changed the priority.

    But this Kanban-style process can be performed without estimates at all, and organizations that work to set up an appropriate context for this will find that they get faster delivery of value than by trying to shoehorn delivery into prior estimates. But there are people who work faster with the fire of a deadline under their tail, so I can't say it's universally better.

    > "At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly."

    If it's hard to do the work as a team, you should be able to tell it was hard retrospectively, with or without having done an estimate ahead of time.

    You might say that failing to hit your prior schedule estimates would be a good topic to discuss at a retrospective session, but I would tell you that this is a self-licking ice cream cone. If your customers are happy despite missing internal schedule estimates you're in a good spot, and if your customers are unhappy even though you're "hitting schedule projections" you're in a bad spot.

    There are a lot more productive discussions to be had when the team reflects on how things are going, and they typically relate to identifying and addressing obstacles to the continuous flow of value from the product team to the end users.

  8. > Have you heard about the horror that is SaFe?

    Yes, and I successfully argued against it by pointing out that it was a wolf in sheep's clothing. It has as much to do with agile as waterfall does.

    > I'm not convinced that true agile works or has ever worked on a project that was bigger than a dozen devs.

    It works fine; large projects are inherently trouble, which is why organizations should put some energy into reducing the scale of an individual team's work rather than piling dozens or hundreds of people into yet another Too Big to Fail megaproject. If Google can build and maintain Bigtable with like a dozen devs, then maybe you don't need 200 people and a PMO for your enterprise data warehouse consolidation project.

    In fact the biggest issue with SAFe is the size of the project you'd try to use it on, not that it references agile-style methods. Waterfall methods were even worse, which is the only reason charlatan consultants manage to keep selling organizations on things like SAFe.

    > One of the key ways these agile people are incredibly dishonest, is that Agile at the top level is sold to enterprises as a way of keeping the old high-level project management style, with push-only command-structures, and agile people subsequently try to sugarcoat it as it somehow 'empowering' the devs and giving them autonomy, when the truth couldn't be farther from it.

    You're right that this is dishonest and that people try and fail to cargo cult successful efforts they watch from afar. But that doesn't mean these successful teams weren't successful, or that there aren't common attributes of those successes.

    That's always been the problem with methodological fixes to the software delivery process for organizations: you usually can't impose the structure from the outside any more than you can meld your bones with adamantium without having a crazy mutant healing factor...

  9. > Which if you try to do - those agile people will kill you for it.

    Does this actually happen to you? Changing the plan as you learn more about your work is literally the whole point of agile. If you didn't want to change the plan you'd spend a lot of time on up-front planning and do waterfall.

    Like, a Gantt chart is more or less explicitly anti-agile. I'm aware of the 'no true Scotsman' thing but we shouldn't buy into people using agile terms for what is really a BDUF-based plan.

  10. There are agile methods that forgo estimates and deadlines, though.

    This is what "agile" is: https://agilemanifesto.org/

    More specific methodologies that say they are agile may use concepts like estimates (story points or time or whatever), but even with Scrum I've never run into a Scrum-imposed "deadline". In Scrum the sprint ends, yes, but sprints often end without hitting all the sprint goals and that, in conjunction with whatever you were able to deliver, just informs your backlog for the next sprint.

    Real "hard" deadlines are usually imposed by the business stakeholders. But with agile methods the thing they try to do most of all isn't manage deadlines, but maximize pace at which you can understand and solve a relevant business problem. That can often be just as well done by iteratively shipping and adjusting at high velocity, but without a lot of time spent on estimates or calendar management.

  11. I think it's more fair to call it a distinguisher of American English vs. British English.

    Even just reading "I've a train to catch" gives a British accent in my mind.

  12. > If you're 97% over budget, are you successful?

    I don't like this as a metric of success, because who came up with the budget in the first place?

    If they did a good job and you're still 97% over then sure, not successful.

    But if the initial budget was a dream with no basis in reality then 97% over budget may simply have been "the cost of doing business".

    It's easier to say what the budget could be when you're doing something that has already been done a dozen times (as skyscraper construction used to be for New York City). It's harder when the effort is novel, as is often the case for software projects since even "do an ERP project for this organization" can be wildly different in terms of requirements and constraints.

    That's why the other comment about big projects ideally being evolutions of small projects is so important. It's nearly impossible to accurately forecast a budget for something where even the basic user needs aren't yet understood, so the best way to bound the amount of budget/cost mismatch is to bound the size of the initial effort.

  13. And to add to that, the blurb you link notes explicitly that for IETF purposes, "rough consensus" is reached when the Chair determines it has been reached.
  14. Given the emphasis on reliability of implementations of an algorithm, it's ironic that the Curve25519-based Ed25519 digital signature standard was itself specified and originally implemented in such a way as to lead to implementation divergence on what a valid and invalid signature actually was. See https://hdevalence.ca/blog/2020-10-04-its-25519am/

    Not a criticism; if anything it reinforces DJB's point. But it makes clear that ease of (proper) implementation also needs to cover things like proper canonicalization of relevant security variables, and ensuring that supporting multiple modes of operation doesn't lead to different answers to security questions that are meant to have the same answer.
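
    For a concrete flavor of the kind of divergence that article describes, here's a minimal sketch (not any particular library's code) of one of the disputed rules: whether the scalar half S of a signature must be canonical, i.e. reduced mod the group order L from RFC 8032. Lenient verifiers skip this check, so the same signature can verify in one library and be rejected by another:

      # Group order of the Ed25519 base point, from RFC 8032.
      L = 2**252 + 27742317777372353535851937790883648493

      def has_canonical_s(signature: bytes) -> bool:
          """True if the S half of a 64-byte Ed25519 signature (R || S,
          with S little-endian) is fully reduced, as strict RFC 8032
          verification requires."""
          if len(signature) != 64:
              raise ValueError("Ed25519 signatures are 64 bytes")
          s = int.from_bytes(signature[32:], "little")
          return s < L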

  15. > We’re a quarter of the way through the 21st century, gas taxes have been optional for driving for quite a while now.

    States mostly recover the equivalent of those taxes through vehicle registration fees for electric vehicles.

    And bicycle usage is nearly a nil cost on the existing public roads, so the costs here would appropriately come out of the general sales/property taxes that fund the city/county. If anything you might argue for subsidizing bicycle ridership more in urban areas, whether with bicycle paths or otherwise, to reduce the number of cars on the roads and reduce congestion for those still on the roads.

  16. It's worth reading about, but it's kind of wave-like even then: https://en.wikipedia.org/wiki/Double-slit_experiment#Interfe...

    It would be going too far to say it's only a wave though. It's both wave and particle.

  17. > PII has a very clear definition.

    It doesn't, actually, as many would-be DoD IT system owners are surprised to find that simply generating a random UUID as a user ID is, per the regs, PII, and therefore makes your proposed IT system IL4 with a Privacy Overlay (and a requirement to go into GovCloud with a cloud access point) instead of IL2 and hostable on a public cloud.

    Oh and now you need to file a System of Records Notice into the Federal Register (which is updated only by DoD, and only infrequently) before you can accept production workloads.

    There is a separate concept of "sensitive PII" (now Moderate or High Confidentiality impact under NIST 800-122) which replaces what people used to call the "Rolodex Business Exemption" to PII/privacy rules.

    But PII is very clear: "Personally Identifiable Information". Any information that identifies a specific individual, like for example, your HN username. Unless a collective is posting on your handle's behalf?

  18. And even there, AMD did the GPU for the Wii U; that console was an evolution of the Wii (which was itself an evolution of the Gamecube). AMD had acquired the makers of the Wii/Gamecube graphics chip, and also separately designed the Wii U-specific upgraded GPU used for native Wii U games.
  19. > Something is very wrong if it takes 20+ years to field new military technologies. By the time these technologies are fielded, a whole generation of employees have retired and leadership has turned over multiple times.

    Conversely, the Navy's first SSBN went start to finish in something like 4 years.

    And unlike the F-35, which could easily have been an evolution of the existing F-22 design, the Navy had to develop 4 major new pieces of technology, simultaneously, and get them all integrated and working.

    1. A reduced-size nuclear warhead (the missile would need to fit inside the submarine for any of this to matter)
    2. A way to launch the nuclear missile while submerged
    3. A way to reliably provide the nuclear missile with its initial navigation fix at launch
    4. A way to fuel the nuclear missile with a safe-enough propellant to be usable on a submerged submarine without significant risk to the crew

    The USAF's Century Series fighters were turned around quickly. So was the B-52.

    Having been involved in defense innovation efforts during my time in uniform, I cannot overemphasize how much the existing acquisition system is counter-productive to the nation's defense, despite 10+ years of earnest efforts dating back to before Trump's first term.

    Most of the aspects to it are well-intentioned and all, but as they say the purpose of the system is what it does, and what America's defense acquisition system does is burn up tax dollars just to get us a warmed-over version of something grandma and granddad's generation cooked up during the Cold War.

    It's turned into a death spiral, because as these programs get more onerous the cost goes up, and who in their right mind thinks it's a good idea to just let people go off on a $1B effort with less oversight?

    Until it's even possible to deliver things cheaply through the DAS (or WAS or whatever it will be now) we'll never be able to tackle the rest of the improvements. I look forward to reviewing the upcoming changes but Hegseth isn't the first one to push on this, it's a huge rat's nest of problems.

