On AI Weapons
I’ve done a lot of thinking about AI and weapons, and I’m going to talk about it for a moment because much of the reporting on the subject lacks nuance.
The logic of AI and weapons is pretty sound, and I came to the same conclusion: if your competitors are using it, harm minimization seems to require you to use it too. Just as with nuclear weapons and MAD, it’s about balance of power. The problem, of course, is the stakes, and the fact that the worst-case scenario is something slightly less than outright destruction, but in many ways worse.
This dynamic is partly why I’m getting involved in AI. I feel that I have something to contribute in both the depth of the intelligence and in the energy efficiency of the architecture.
Would I choose to move forward with this sort of technology in an alternative world where the environment were somehow different? Probably not.
Indeed, IP is like gun control: important, but a solution that ignores certain realities. You can push for strong controls with clear principles, but then only the people who don’t obey the law have the guns. That’s not great. So you create a state that gets to have all the guns and is seen as the sole “legitimate threat of violence,” as distinct from everyone else, and so on.
Now that these things are here, getting involved, where I can potentially make a positive difference, is the best thing I can do. There is no guarantee of success, and there is the potential for “simple failure” or a more terrible failure, but when one considers oneself good, and I do, deciding not to be provoked is de facto accepting the worst.
There are many parallel ethical currents: some you can influence and some you cannot. The best thing to do is likely to choose to be influential where you can. That is all humans can be expected to do.
As much as humans have gotten things wrong throughout history, I sometimes wonder whether things would be much wronger had decent people not redirected events even just a little.
I should be perfectly clear: I oppose AI weapons. The problem is that there isn’t a clear line between what we already have and what we call AI. What we have decided to call AI is a particular flavor of a larger ecosystem of technologies, many of which are already deployed in conflicts of all sorts.
I think the biggest risk of LLMs is simply that they are trained on human norms, whereas many of the other technologies did not learn through our relational language.
