- Thanks for this perspective. I think I’ll reconsider this plan (to be clear, I haven’t done it) and try to think up some alternative training strategy that doesn’t involve live issues.
- Can you expand on that?
Currently we do shadow shifts for a month or two first, but still eventually drop people into the deep end with whatever experience production gifts them in that time. That experience is almost certainly going to be a subset of the types of issues we see in a year, and the quantity isn’t predictable. Even if the shadowee drives the recovery, the shadow is still available for support & assurance. I don’t otherwise have a good solution for getting folks familiar with actually solving real-world problems with our systems, by themselves, under severe time pressure, and I was thinking controlled chaos could help bridge the gap.
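To make the idea concrete, here’s a minimal sketch of the sort of controlled, time-boxed drill I have in mind. Everything here is hypothetical: `demo-api.service`, the `eth0` device, and the fault list are made up for illustration, and it assumes a Linux staging host with root access, never production.

```python
# Hypothetical "controlled chaos" drill, sketched for illustration only.
# Assumes a Linux staging host, root access, and a throwaway demo-api.service;
# none of these names refer to a real system.
import random
import subprocess
import time

FAULTS = {
    "kill service": (
        ["systemctl", "stop", "demo-api.service"],
        ["systemctl", "start", "demo-api.service"],
    ),
    "add latency": (
        ["tc", "qdisc", "add", "dev", "eth0", "root", "netem", "delay", "500ms"],
        ["tc", "qdisc", "del", "dev", "eth0", "root"],
    ),
}

def run_drill(max_duration_s: int = 900) -> None:
    """Inject one random fault, let the trainee respond, then force a cleanup."""
    name, (inject, revert) = random.choice(list(FAULTS.items()))
    print(f"Injecting fault: {name} (shadow is watching the clock)")
    subprocess.run(inject, check=True)
    try:
        time.sleep(max_duration_s)  # hard ceiling so the drill can't outlive its window
    finally:
        subprocess.run(revert, check=False)  # best-effort revert even if interrupted

if __name__ == "__main__":
    run_drill()
```

In practice the fault list would be drawn from the incident classes we actually see over a year, and the revert step would be verified rather than fire-and-forget, but the point is the bounded blast radius and the hard stop.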
- We burned through pretty much all of our public /8, RFC1918, and have begun digging into RFC6598 (a /10 I didn’t even know existed prior to this job). Still shocks me. Hardly an expert in the space, but I think the issue comes from subnetting to distribute ranges to teams that need a consistent IP address space for some project or another. Lots of inefficiency & hoarding over time. We’ve had legitimate outages and impending platform death staved off by last-minute horse-trading & spooky technical work due to such things. IPv6 has always been a distant aspiration.
- I’ve been toying around with the idea of using chaos engineering as a method of training new on-call folks. My first ever on-call shift was during a major product launch for a FAANG, and I more or less just hoped that I’d be able to handle whatever broke. I got lucky and it turned out that I can usually fix things when they break, but I’ve also found that jumping people in like that isn’t exactly consistent. I wonder if controlled, limited outages (maybe even as a surprise) would be a less hellish way of doing it. It could be a good way to build instinct under pressure without risking too much.
- I totally missed that part of your comment, my bad. Thanks for elaborating on those, I feel inspired to experiment!
So far my kernel journey has been about making my hardware work + enabling features, and that’s mostly how I’ve been discovering config options. Do you have any suggestions on where one ought to read further on this sort of kernel tuning?
EDIT: after doing some further research, couldn’t you just set those options via sysctl without needing to build a separate kernel?
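For what it’s worth, the runtime tunables are just files under /proc/sys, so many of them can indeed be changed without a rebuild; options that are compiled in or out of the kernel (drivers, whole features) still need a new build. A minimal sketch of that mapping, using vm.swappiness purely as an example knob and assuming a Linux host (writes need root):

```python
# Minimal sketch: sysctl tunables are exposed as files under /proc/sys, so they
# can be read and changed at runtime without rebuilding the kernel.
# vm.swappiness is just an example knob; writing requires root on most systems.
from pathlib import Path

def read_sysctl(name: str) -> str:
    # "vm.swappiness" -> /proc/sys/vm/swappiness
    return Path("/proc/sys", *name.split(".")).read_text().strip()

def write_sysctl(name: str, value: str) -> None:
    Path("/proc/sys", *name.split(".")).write_text(value)

if __name__ == "__main__":
    print("vm.swappiness =", read_sysctl("vm.swappiness"))
    # write_sysctl("vm.swappiness", "10")  # needs root; persist via /etc/sysctl.d instead
```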
- Let us assume, at some point in the near future, it is possible to build a humanoid robot that is able to operate human-run machines and mimic human labor:
> Man made stuff does not self-repair and self-replicate.
If robots can repair a man-made object or build an entirely new one, the object is effectively self-repairing and self-replicating for the purposes of the larger goal of automating manufacturing.
> You miss even repairs of the tiniest item - which in turn requires repairing he repairers, everything eventually stops
So… don’t? Surely the robots can be tasked to perform proactive inspections and maintenance of their machines and “coworkers” too.
> But it is a complex vast network
…that already exists, and doesn’t even need to be reimagined in our scenario. If one day our hypothetical robots become available, each individual factory owner could independently decide the next day to purchase them. If all of the factories in the “supply chain graph” for a particular product do this, the complex decentralized system they represent no longer requires human labor to run. It doesn’t even need to happen all at once. By this mechanism, I propose, the supply chain could rapidly and organically automate itself.
- Thanks for providing the archive link!
> Pointing to any single part really makes no sense, the point is the complexity and interconnectedness of everything
Doesn’t it though?
The bauxite mine owners in Pincarra could purchase hypothetical robotic mining & smelting equipment. The mill owners in Downey, the coca leaf processor in New Jersey, the syrup factory in Atlanta, and others could purchase similar equipment. Maybe they all just buy humanoid robots and surveil their workers for a while to train the robots, then replace the workers.
If all of those events happen, the Coca-Cola supply chain has been automated. Also, since e.g. the aluminum mill probably handles more orders beyond just Coke cans, other supply chains for other products will now be that much more automated. Thereby, the same mechanism that built these deep supply chains will (I bet) also automate them.
> Biological systems don't have that problem, they are self-assembling no matter how you slice and dice them.
If the machines used to implement manufacturing processes are also built in an automated way, the system is effectively self-healing as you describe for biological systems.
> did like "Gaia" in Horizon Zero Dawn (game) because it made a great story though. This would be pretty much exactly the kind of AI fantasized about here.
Perhaps the centralized AI “Gaia” becomes an orchestrator in this scheme, rather than the sole intelligence in all of manufacturing? I’m not familiar enough with the franchise to make a more direct comparison, but my larger point is that the complexity of the system doesn’t need to be concentrated in one single greenfield entity.
- > These are adequate reactions and preparations before war.
> Essentially, it’s an attempt to prevent the looming war.
I don’t see how mass surveillance is going to be effective against Russian/Chinese/etc agents, let alone prevent war. Why do you assume that spies and saboteurs are going to use the surveilled systems? I think this is more about controlling citizens than it is about monitoring foreigners.
- I think our full understanding of the spectrum of these threats will lead to the construction of robust safeguards against them. Reputational attacks at scale are a weakness of the current platforms within which we consume news, form community, and build trust. The computer attacks described in the article are caused by sloppy design and implementation, brought into existence by folks whose daily incentives are less about making safe code and more about delivering features. "Designer pathogens" have been described as an accessible form of terrorism since long before AI existed. All of these threats, and others like them, existed before AI and will continue to exist if AI is snapped out of existence right now. The excuse for not preventing or addressing them has always been a lack of knowledge and development resources, which current generative AI tech now addresses.
- We should assume sophisticated attackers, AI-enabled or otherwise, as our time with computers goes on, and no longer give leeway to organizations that are unable to secure their systems properly or keep customers safe in the event that they are breached. Decades of warnings from the infosec community have fallen on the deaf ears of the "it doesn't hurt, so I'm not going to fix it" crowd, the people whose opinions have mattered in the places that count.
I remember, a decade or so ago, talking to a team of _loose_ affiliation at DEF CON where one guy would look for the app exploit, another would figure out how to pivot out of the sandbox to the OS, and another would figure out how to get root; once they all got their pieces figured out, they'd just smash it (and variants) together for a campaign. I hadn't heard of them before meeting them and haven't heard about them since, but they put a face for me on a silent, coordinated adversary model that must be increasing in prevalence as more and more folks realize the value of computer knowledge and gain access to it through one means or another.
Open source tooling enables large-scale participation in security testing, and something about humans seems to generally result in a distribution where some nuts use their lighters to burn down forests but most use them to light their campfires. We urgently need to design systems that can survive in the era of advanced threats, at least to the point where the most an adversary can achieve is service disruption. I'd rather live in a world where we can all work towards a better future than one where we hope that limiting access will prevent catastrophe. That hope assumes such limits can even be maintained, and that letting architects pretend fires can never happen in their buildings somehow means they don't have to obey fire codes or install alarms & marked exits.
- I wish journalists would explore why the technical methods & information sharing that enable this surveillance are allowed to exist. Highlighting instances of abuse and the quasi-legal nature of the industry doesn’t really get at the interesting part, which is _what motivates our leaders to allow surveillance in the first place_.
I recently completed Barack Obama’s A Promised Land (a partial account of his presidency), and he mentions in his book that although he wanted to reform mass surveillance, it looked a little different once he was actually responsible for people’s safety. I often think about this when I drive past Flock cameras or walk into grocery stores; our leaders seem more enticed by the power of this technology than they are afraid of vague abuses happening in _not here_. It seems like no one sees a cost to just not addressing the issue.
By analogy, I feel that reporting on the dangers of fire isn’t really as effective as reporting on why we don’t have arson laws and fire alarms and social norms that make our society more robust to abuse of a useful capability. People who like cooked food aren’t going to engage with anti-fire positions if they just talk about people occasionally burning each other alive. We need to know more about what can be done to protect the average person from downsides of fire, as well as who is responsible for regulating fire and what their agenda for addressing it is. I’d love to see an article identifying who is responsible for installing these Flock cameras in my area, why they did so, and how we can achieve the positive outcomes desired from them (e.g. find car thieves) without the negatives (profiling, stalking, tracking non-criminals, etc).
- I’d suggest reading their (free, online) book if you haven’t already; that’s what motivated me to actually try using it. It sells the core features pretty well and eased me into the language better than just trying to grok the syntax. What kept me using it is how annoyingly frequently my code would _just work_ once I got it compiling, which I could often get to pretty quickly by catching errors early with a linter. I’d highly recommend giving it an honest try; the aesthetics make sense with a minimal amount of experience.
- The child in OP sounds to me like she thought she was making a bad-taste joke in a private forum, and was shocked when it promptly led to an entanglement with an unfeeling system that was looking over her shoulder. I’ve never threatened to kill anyone in my work DMs, but I’ve definitely written stuff that I wouldn’t post in public threads. I think we all, to some degree, use these “private” systems this way until it burns us; only then do we adjust. Privacy is much more a feeling than a technical reality.
In that sense, it isn’t “normal”, it’s just “something that’s happening in theory but eh maybe it only affects scary people or whatever idk”. I feel like this tolerance we’re developing for outside forces invading “private” spaces, nominally for these loose justifications of harm reduction, will be what _actually does_ make it normal.
Once it’s truly normal, and people think it’s what keeps them safe from mass shootings or whatever, it will be too late to get rid of it. I think fear and normalcy will motivate its spread to places beyond school chat platforms and Snapchat.
- This will condition children to think this sort of surveillance is normal, and when they’re adults, the ones who think it kept them safe from mass shootings will advocate using our existing mass-surveillance powers to proactively monitor everyone like this. Please, we need to stop terrifying children with this lazy oppression; it is not worth the damage we’re causing to society by conditioning kids this way.
- This one took me years to develop: build minimal demos with all new technologies you want to use in a new project, _then_ begin thinking about how you’re going to build your project. You need to understand how a tech works and _if it even works the way you think it does_ before you use it. Otherwise, you’re gambling on hope and overconfidence.
Maybe I’m just dumb, but I find that learning a new tech while simultaneously trying to build with it usually ends with me rethinking the project repeatedly as I learn new tricks and techniques. I’ve dropped projects that I realized were too ambitious or just weren’t evolving right after months, even years, of effort. I’ve since learned that building needs to feel more like assembly than fabrication. You can dream, but it shouldn’t leave the whiteboard until _all_ of your technical assumptions not backed by experience are resolved into certainty. You move so much quicker and more predictably if you can predict success.