Artificial intelligence (AI) is frequently viewed as a game-changer for the military, with governments around the world investing heavily in AI to upgrade their forces. So far, military applications of AI have centered on mass surveillance and target acquisition. Major advances in artificial intelligence are expected in the coming years, paving the way for robust, distributed command and control and potentially removing the human operator from mission command. When that happens, experts believe AI will usher in a new age of autonomous decision-making in the military. While AI-powered autonomous systems are already widely used in areas like healthcare, transportation, and digital services, genuine military autonomy is still a work in progress. However, with the Defense Advanced Research Projects Agency (DARPA) planning to introduce an AI-based decision-making program for medical triage, things may take a new course.
This new DARPA endeavor, dubbed ‘In the Moment’ (ITM), will leverage AI technologies that can make critical choices in tense situations based on real-time data analysis, such as patients’ conditions in a mass-casualty event and drug availability. This is revolutionary because, in a real-life emergency where instantaneous decisions must be made about who receives immediate medical assistance and who doesn’t, the answer isn’t always apparent and people tend to disagree on the best course of action; AI, by contrast, can make a quick assessment. As a result, the United States military is increasingly relying on technology to minimize human error, with DARPA claiming that eliminating human bias from decision-making will ‘save lives.’
In the Moment differs from traditional AI development approaches, according to Matt Turek, the program’s manager, because it does not require human consensus on the proper outputs. Turek adds that in complex cases, the lack of a single correct response prevents the team from employing traditional AI evaluation procedures, which require human agreement to establish ground-truth data. Self-driving car algorithms, for instance, can be trained and scored against ground truth for correct and incorrect driving responses, such as traffic signs and road restrictions.
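To make that contrast concrete, here is a minimal Python sketch of the conventional, ground-truth-based evaluation Turek says ITM cannot rely on. The labels, scenarios, and function names below are all hypothetical illustrations, not anything DARPA has published.

```python
# A minimal sketch of ground-truth-based evaluation. All data is invented.
from collections import Counter

def accuracy(predictions: list[str], ground_truth: list[str]) -> float:
    """Standard supervised evaluation: compare outputs to agreed-upon labels."""
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    return correct / len(ground_truth)

# Self-driving-style case: annotators agree, so a single ground truth exists.
stop_sign_labels = ["stop", "stop", "stop"]                 # unanimous labels
print(accuracy(["stop", "go", "stop"], stop_sign_labels))   # ~0.67

# Triage-style case: qualified experts disagree, so no single label exists
# and the accuracy metric above has no well-defined reference to score against.
expert_annotations = ["treat_patient_A", "treat_patient_B", "treat_patient_A"]
print(Counter(expert_annotations))  # no consensus -> no ground truth
```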
Turek says, “When the rules don’t change, hardcoded risk values can be used to train the AI, but this won’t work for the Department of Defense (DoD).”
The In the Moment software will therefore be tasked with collaborating with trusted human decision-makers to determine the best course of action when there is no agreed-upon correct solution. For example, the AI might identify all of the resources available at a nearby hospital, such as drug availability, blood supply, and medical staffing, to aid in decision-making. The whole concept is loosely inspired by the field of medical imaging analysis. ‘Building on the medical imaging insight, ITM will develop a quantitative framework to evaluate decision-making by algorithms in very difficult domains,’ Turek added.
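The kind of resource snapshot such a system might surface to a human decision-maker could look something like the hypothetical sketch below. The hospitals, fields, and ranking heuristic are entirely invented for illustration; DARPA has not described ITM’s internals.

```python
# Hypothetical illustration of surfacing nearby-hospital resources to a
# decision-maker. Every field, facility, and weight here is invented.
from dataclasses import dataclass

@dataclass
class HospitalResources:
    name: str
    units_of_blood: int
    staffed_trauma_bays: int
    key_drugs_in_stock: set[str]

def rank_by_capacity(hospitals: list[HospitalResources],
                     required_drug: str) -> list[HospitalResources]:
    """Order facilities by how well they can absorb incoming casualties."""
    eligible = [h for h in hospitals if required_drug in h.key_drugs_in_stock]
    return sorted(eligible,
                  key=lambda h: (h.staffed_trauma_bays, h.units_of_blood),
                  reverse=True)

hospitals = [
    HospitalResources("Field Hospital A", 12, 2, {"tranexamic_acid"}),
    HospitalResources("Regional Center B", 40, 6, {"tranexamic_acid", "ketamine"}),
]
for h in rank_by_capacity(hospitals, "tranexamic_acid"):
    print(h.name, "-", h.staffed_trauma_bays, "bays,", h.units_of_blood, "units of blood")
```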
The program spans four technical areas. The first is to create decision-maker characterization techniques that identify and quantify key decision-maker attributes in difficult domains. The second is developing a quantitative alignment score between a human decision-maker and an algorithm that reflects end-user trust. The third is responsible for designing and carrying out the program evaluation. The last covers policy and practice integration, as well as providing legal, moral, and ethical expertise.
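DARPA has not published how the alignment score in the second technical area will actually be computed. As a deliberately simple stand-in, one could imagine something like the agreement rate between an algorithm’s triage choices and a specific human’s choices on shared scenarios, as in this hypothetical sketch:

```python
# A toy stand-in for an "alignment score": the fraction of shared scenarios
# where the algorithm and a given human decision-maker chose alike.
# This is NOT ITM's published metric; it is an assumption for illustration.

def alignment_score(human_choices: dict[str, str],
                    algorithm_choices: dict[str, str]) -> float:
    """Agreement rate over scenarios both decision-makers responded to."""
    shared = human_choices.keys() & algorithm_choices.keys()
    if not shared:
        return 0.0
    agreements = sum(human_choices[s] == algorithm_choices[s] for s in shared)
    return agreements / len(shared)

human = {"scenario_1": "evacuate_first", "scenario_2": "treat_on_site",
         "scenario_3": "evacuate_first"}
algo  = {"scenario_1": "evacuate_first", "scenario_2": "evacuate_first",
         "scenario_3": "evacuate_first"}
print(alignment_score(human, algo))  # ~0.67 on these made-up scenarios
```

A real metric would presumably weight decisions by their stakes and account for the decision-maker attributes characterized in the first technical area, but that level of detail has not been disclosed.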
According to DARPA, the new AI will take two years to train and another 18 months of preparation before being employed in a real-world scenario. Though the experiment is still in its early stages, it comes at a time when others are attempting to revamp a centuries-old medical triage system and the US military is increasingly relying on technology to reduce human error in conflict. However, scientists and ethicists are skeptical of the proposal, questioning whether AI should be involved when human lives are on the line.
The In the Moment program will develop and test algorithms to help military decision-makers in two scenarios: small-unit injuries, such as those sustained by Special Operations forces under fire, and mass-casualty events, such as the bombing of Kabul’s airport. According to agency officials, they may later design algorithms to assist in disaster-relief crises such as earthquakes.
To evaluate the whole In the Moment process, various human and algorithmic decision-makers will be given scenarios from the medical triage or mass-casualty domains. The algorithmic decision-makers will comprise an aligned algorithm that incorporates key human decision-making attributes and a baseline algorithm that does not. A human triage professional will also be included as an experimental control.
Turek adds that the DARPA team will gather the decisions and responses from each decision-maker and submit them, in an anonymized format, to a pool of triage specialists. These specialists will have no way of knowing whether a response was generated by the aligned algorithm, the baseline algorithm, or a person. They will then be asked which decision-maker they would delegate to, giving the study team a measure of their willingness to trust each one.
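The skeleton of that blinded-review protocol might look like the sketch below. The responses are invented, and the specialist’s judgment is simulated with a random pick purely to make the example runnable; in the real program, a human expert makes that call.

```python
# Sketch of the blinded-review protocol described above: responses from an
# aligned algorithm, a baseline algorithm, and a human are anonymized and
# shuffled before a "specialist" picks whom they would delegate to.
# The specialist here is simulated; all responses are invented.
import random
from collections import Counter

responses = {
    "aligned_algorithm":  "Prioritize casualty 2; airway compromise.",
    "baseline_algorithm": "Prioritize casualty 1; visible bleeding.",
    "human_expert":       "Prioritize casualty 2; airway compromise.",
}

def run_blinded_round(responses: dict[str, str], rng: random.Random) -> str:
    """Strip author identities, shuffle, and record which author is chosen."""
    blinded = list(responses.items())
    rng.shuffle(blinded)                  # reviewer never sees the ordering cue
    chosen_author, _ = rng.choice(blinded)  # stand-in for the specialist's pick
    return chosen_author

rng = random.Random(0)
delegations = Counter(run_blinded_round(responses, rng) for _ in range(1000))
print(delegations)  # tallies how often each decision-maker would be trusted
```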
DARPA urges anyone applying to join the In the Moment program to adopt an open-source IP model with unlimited rights, and states that applicants who do not grant unlimited rights must make a compelling argument for withholding them.
Sohrab Dalal, a colonel and the head of the medical branch of NATO’s Supreme Allied Command Transformation, says the triage protocol, in which medics assess each soldier and determine how urgent their care needs are, is roughly 200 years old and could use a makeover.
Like DARPA, his team is collaborating with Johns Hopkins University to develop a digital triage assistant that NATO member countries can employ.
NATO’s triage assistant will combine NATO injury data sets, casualty scoring systems, predictive modeling, and patient condition inputs to produce a model that determines who should receive care first when resources are limited.
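NATO has not released its model, but the stated ingredients suggest some weighted combination of injury data and live patient inputs. The sketch below illustrates that idea only; the fields, weights, and casualties are all invented and are not NATO’s actual scoring system.

```python
# Hypothetical illustration of combining injury data, predictive modeling,
# and patient condition inputs into one priority score. Not NATO's model;
# all weights and fields are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Casualty:
    id: str
    injury_severity: float       # 0 (minor) .. 1 (critical), from injury data sets
    survival_probability: float  # 0 .. 1, from a predictive model
    vitals_instability: float    # 0 (stable) .. 1 (deteriorating), live inputs

def priority(c: Casualty) -> float:
    """Higher score = treat sooner. A toy weighted sum, not NATO's method."""
    return (0.4 * c.injury_severity
            + 0.4 * c.vitals_instability
            + 0.2 * c.survival_probability)

casualties = [
    Casualty("alpha", injury_severity=0.9, survival_probability=0.3, vitals_instability=0.8),
    Casualty("bravo", injury_severity=0.5, survival_probability=0.9, vitals_instability=0.2),
]
for c in sorted(casualties, key=priority, reverse=True):
    print(c.id, round(priority(c), 2))
```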
Meanwhile, irrespective of which organization builds an AI-powered medical triage system, not everyone is on board with the concept. While In the Moment promises to remove human bias, could the training data instead cause the model to prioritize medical attention for certain groups over others, encoding race, rank, or gender bias? Would military officials follow algorithm-based recommendations even when their common sense and conscience suggested otherwise? In the event of a death, who would bear the blame? Is it morally and ethically right to hand triage decisions entirely to an AI model, or would a hybrid human-machine approach produce better results? Is relying entirely on AI for decisions about medical assistance too great a risk? In situations where human officers prefer to treat the least injured first so that they can return to the fight, would an algorithm make such exceptions? Can troops trust and accept an AI program calling the shots on who gets medical attention first? Computers can make decisions in a fraction of a second, but what if the decision is ‘wrong’ or unacceptable? We already hear of self-driving vehicles harming people in their attempts to navigate roads on their own; can we trust AI not to repeat such mistakes during medical triage? And does DARPA have enough information, present and historical, about each situation?
True, today’s AI bears little resemblance to what you see in sci-fi movies. Neural networks are meant to mimic human decision-making while stripping out its inconsistencies and biases, and when adequately trained they have at times outperformed experts in many disciplines. Even so, they generally still need human involvement and final approval. It would bring considerable relief if DARPA or NATO responded to the concerns stated above.