On May 3, 2021 I wrote a note to myself about the type of people OpenAI was hiring: "looks like OpenAI is getting into the military business by hiring former CIA clandestine operator Will Hurd https://en.wikipedia.org/wiki/Will_Hurd". Seems like I was right, but this should be expected, because every corporation is in one way or another linked to the military-industrial complex.
Unsetting `max-height` on `#u-s-military-makes-first-confirmed-openai-purchase-for-war-fighting-forces` reveals the rest of the article for me, so it's there, just hidden.
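Concretely, a dev-tools override like the following would do it (the id is taken from the comment above; `!important` is an assumption, only needed if the clamp comes from a stylesheet rule rather than an inline style):

```css
/* Remove the height clamp hiding the rest of the article */
#u-s-military-makes-first-confirmed-openai-purchase-for-war-fighting-forces {
  max-height: none !important;
}
```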
Because it assumes OpenAI is being used for anything remotely interesting rather than for writing report summaries. This may make for a fun media headline, but unless significantly tweaked, it's still just an expensive multimodal model. It's not going to be used outside of offices.
Have you never heard of the banality of evil? The Nazis used IBM punch cards. IBM was in the death business, even if it wasn't making bombs or nerve gas.
Sure. Once OpenAI starts making something actually targeted at the army at their request, that's a different matter. But as far as I understand, so far the army has just bought a volume licence to a public service.
The actual line is:
> Advanced AI/ML Capabilities: Utilization of Microsoft's native AI services, including Azure AI Search, OpenAI tools, and Azure Synapse for unified analytics and big data processing.
If no one ever downvotes what you say, you're probably not taking enough risks in your commentary. (But if you know most people will downvote it, especially if you know it's going to be flagged, you probably shouldn't say it. Not for any big reason, just because it's rude.)
Good job Sam Altman and all the employees who backed his return.
You are now in the death business.
My experience with the military is that there are a huge number of functions involved, and actual warfighting is a small fraction of that.
https://www.niemanlab.org/2024/09/a-courts-reporter-wrote-ab...
The same could happen with summaries used to decide whether someone is a threat.
Hallucinated report summaries could go quite badly if used uncritically by decision makers.
Especially if they're ingested by AI systems further up the decision making chain(s).