[Linkpost] A Chinese AI optimized for killing

https://www.lesswrong.com/posts/TrGBvEawoHvgu4JN7/linkpost-a-chinese-ai-optimized-for-killing

The AI depicted in the Terminator movies is rather stupid: there are much more efficient ways to kill all humans than robots with guns. We can safely ignore the unrealistic Terminator-like scenario of AI X-risk.

...Or can we?

Tsinghua University is a top university located in Beijing. It is heavily involved in research for the Chinese military. One of its military labs is called the State Key Laboratory of Intelligent Technology and Systems. In 2021, two of the university’s researchers released a paper called Counter-Strike Deathmatch with Large-Scale Behavioural Cloning. Some highlights:

Comment

https://www.lesswrong.com/posts/TrGBvEawoHvgu4JN7/linkpost-a-chinese-ai-optimized-for-killing?commentId=yeg9SvsrSv9uwvfcp

The article title here is hyperbolic.

The title is misleading in the same way that calling AlphaStar "a Western AI optimized for strategic warfare" is misleading. Should we also say that the earlier western work on Doom—see VizDoom—was also about creating "agents optimized for killing"? That was work on an FPS as well. This is just more of the same—researchers trying to find interesting video games to work on.

This work transfers with just as much ease/difficulty to real-world scenarios as AI work on entirely non-military-skinned video games—that is, it would take enormous engineering effort, and any use in military robots would be several levels of further work removed, such that the foundation of a military system would be very different. (I.e., military robots can’t work with behavioural cloning based on entirely static, unchanging environments/maps with clean command/movement relations, for many reasons.) Many researchers’ work on navigating environments—though not military-themed—would be just as applicable.
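The brittleness of behavioural cloning on a fixed environment can be illustrated with a toy sketch. Everything here (the turret-style expert policy, the state encoding, the lookup-table learner) is a hypothetical simplification for illustration, not anything from the paper: the clone is just supervised learning on (state, action) demonstrations, and it has no policy at all for states it never saw.

```python
# Toy behavioural cloning (hypothetical illustration): an "expert" turret
# policy produces (state, action) demonstrations, and the clone learns the
# majority action per observed state -- a pure lookup table.
from collections import Counter, defaultdict

def expert_policy(target_x, aim_x):
    """Expert: step the aim toward the target, fire when aligned."""
    if aim_x < target_x:
        return "right"
    if aim_x > target_x:
        return "left"
    return "fire"

# Collect demonstrations over a small, fixed grid of states.
demos = []
for target_x in range(10):
    for aim_x in range(10):
        state = (target_x, aim_x)
        demos.append((state, expert_policy(*state)))

# "Training": tally actions per state, keep the majority action.
by_state = defaultdict(Counter)
for state, action in demos:
    by_state[state][action] += 1
clone = {s: c.most_common(1)[0][0] for s, c in by_state.items()}

# The clone reproduces the expert exactly on the states it was shown...
assert all(clone[s] == expert_policy(*s) for s, _ in demos)
# ...but a state outside the demonstrated grid simply isn't in the table,
# which is the kind of distribution-shift fragility the comment points to.
assert (12, 3) not in clone
```

A real system would replace the lookup table with a neural network, which interpolates rather than failing outright, but the underlying objection stands: the policy is only grounded in the demonstrated environment.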

Comment

https://www.lesswrong.com/posts/TrGBvEawoHvgu4JN7/linkpost-a-chinese-ai-optimized-for-killing?commentId=cWn9yK65FAW4SLjzv

> The title is misleading in the same way that calling AlphaStar "a Western AI optimized for strategic warfare" is misleading.

That’s a fair description of AlphaStar. For example, see this NATO report (pdf): From the Game Map to the Battlefield – Using DeepMind’s Advanced AlphaStar Techniques to Support Military Decision-Makers. Obviously, military people in both NATO and China are trying to apply any promising AI research they deem relevant to the battlefield. And if your promising research is military-themed, it is much more likely to get their attention, especially if you’re working at a university that does AI research for the military (like the aforementioned Tsinghua University).

> Should we also say that the earlier western work on Doom—see VizDoom—was also about creating "agents optimized for killing"? That was work on an FPS as well. This is just more of the same—researchers trying to find interesting video games to work on.

There is a qualitative difference between the primitive, pixelated Doom and the realistic CS. The latter is much easier to transfer to the battlefield because of its far more realistic graphics, physics, military tactics, and weaponry.

> This work transfers with just as much ease/difficulty to real-world scenarios as AI work on entirely non-military-skinned video games...

Not sure about that. Clearly, CS is much more similar to a real battlefield than, say, Super Mario. Thus, the transfer should be much easier.

> ...it would take enormous engineering effort, and any use in military robots would be several levels of further work removed, such that the foundation of a military system would be very different...

Also not sure about that. For example, one of the simple scenarios in the article is a gun-turret-like scenario, where the agent is fixed in one place and shoots moving targets (that look like real humans). I can imagine putting the exact same agent into a real automated turret; with suitable middleware, it would be capable of shooting down moving targets at decent rates. The main issue is that once you have a mid-quality agent that can shoot at people, it is trivial to improve its skill to superhuman levels. The task is much easier than, say, self-driving cars, as the agent’s only goal is to maximize damage, and the agent’s body is expendable.