A summary of ‘Human-Machine Autonomies’ by Lucy Suchman, in the wake of Carnegie Mellon University’s Project Maven scandal.
Killing in an act of war has been legitimized for as long as wars have been fought, and with continual technological advancement, our civilization has become more and more efficient at killing.
The evolution of the 21st century’s weapons of war has been closely linked to the advancement of Artificial Intelligence. This technology goes by many names: AI, machine learning, big data, autonomous weapons. In her paper ‘Human-Machine Autonomies’, Suchman poses questions and provides examples to argue that we need to examine these technologies from a critical point of view.
The recent news about the CMU Robotics Institute’s dealings with the Department of Defense led the local community to question the ethics of the school and how it is going to bear the responsibility for these decisions. The controversial Project Maven, in which CMU Robotics is involved, aims to develop a machine that can analyze data from multiple sources, such as planes and drones, while accounting for irregular patterns such as weather. According to Colonel Matty, this technology is supposed to help commanders make better and quicker decisions.
However, this claim is more complicated than it seems, Suchman argues. She begins by contextualizing the issue of autonomous weapons within the long historical discussion of autonomy. Starting in the 1920s, the fields of cybernetics and systems theory began to consider how much of the natural world could be mapped onto a computational machine. This movement treated goal-oriented behavior as the main indication of autonomy. During World War II, the scientist Norbert Wiener attempted to design an anti-aircraft predictor. He never fully realized it, but it laid out a major framework for the war machine by viewing the airplane as an extension of the pilot.
Soon, goal-oriented behavior forced the experts to think about the goal-setting context. How could a moving target be a goal? Is the object itself the goal, or its movement? Is tracking the object the goal, or eliminating it quickly? How does context, such as proximity, play a role? From these questions, the logic of dynamic goal-oriented behavior developed. In this system, the goal is defined while the route and method remain partially flexible, making these weapons somewhat autonomous.
However, designing these autonomous machines or systems to act as a human would becomes an impossible task when they must accommodate interactions (between machine and target, or victim) that cannot be anticipated. It is up to the military, the engineers, and, in Project Maven’s case, the academics to link the relations, map the scenarios, and designate the goal. Suchman argues that this is not only impossible to achieve but also problematic, because giving a machine the power to end a human life autonomously can have enormous implications.
The weight of this matter is felt throughout the controversy over the relationship between Carnegie Mellon University and the Department of Defense. The school administration kept the matter quiet, without any input from students, alumni, or community members. However, we need to take a moment to think about what this technology means for our future. We need to wrestle with the fact that we are imagining the possibility of an unsupervised lethal machine commissioned by our government and designed by our school. This is an issue we have to face as students and future designers.