Recently, a number of AI companies have rolled back their commitments to refuse collaboration with military entities. Meanwhile, the US government is building new mechanisms for collaborating with AI companies and training its own foundation models. The USG and the US military are going to have their own special AI systems. None of this is surprising: the DoD and USG want the best tech, and AI companies want the contracts. All of this was fated from the beginning.
If the DoD is going to have its own AI systems, those systems are going to be different from publicly available ones. The DoD wants the war machine to make war: to help it build weapons, plan assassinations, and conduct cyberattacks. It doesn't want an AI model that refuses to do these things and makes its job harder. So these models are going to stop refusing to do violent and destructive stuff, because that's the whole point of the military.
I think this is a good time to step back and ask what we want the guardrails for military AI systems to be. Whatever side of the political aisle you fall on, there are certain things you don't want AI systems to do. You don't want them to lie to you, to act without meaningful military oversight, to enable firing on civilians and other war crimes, to act on behalf of a foreign adversary after being data-poisoned, to enable insider coups in which three-star generals overthrow four-star generals, or to overthrow the President. This deserves a public conversation, serious analysis, and the usual inside-game negotiations.
Who is working on this? I’d love to hear from you and find out what you’re learning.