A team of researchers at the University of Illinois Urbana-Champaign has developed a new method for teaching numerous agents, such as robots and drones, to cooperate using multi-agent reinforcement learning (MARL), a branch of artificial intelligence. MARL is becoming increasingly prominent because it enables high degrees of coordination and collaboration among AI agents. It examines how multiple agents interact with one another and with a shared environment, allowing researchers to observe how the agents cooperate, coordinate, compete, or collectively learn to complete an assigned task.
A swarm of firefighting drones attempting to contain a wildfire is one illustration of multi-agent reinforcement learning: because each drone can see only a limited portion of its surroundings, the drones must work together to keep the fire from causing further environmental damage.
According to Huy Tran, an aerospace engineer at Illinois, the study aimed to enable agents to coordinate in a decentralized way. The team also concentrated on circumstances where it is not immediately clear what each agent's responsibilities or tasks should be.
Tran said this makes the research far more complicated and demanding, because it can be unclear what one agent ought to do relative to another. Agents can cooperate and execute tasks when communication channels are available, but what if they lack the necessary hardware, or the signals are blocked, rendering communication impossible? How agents can gradually learn to work together toward a goal under those constraints is, Tran believes, what makes this an intriguing research topic.
To address this, Tran and his colleagues used machine learning to design a utility function that tells each agent when it is acting in a way that benefits the team.
The team created a machine learning method that makes it possible to recognize when an individual agent contributes to the overall team goal. This matters because, when a swarm of robots pursues a common or collective goal, it can be hard to tell which agent contributed most. Tran compares it to sports: one soccer player may score, but we also want to know about the teammates' contributions, such as assists.
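The general idea of scoring each agent's contribution can be illustrated with a difference-reward (counterfactual) calculation, a standard credit-assignment technique in multi-agent learning — a minimal sketch, not the authors' actual method. The toy `team_reward` function and the "no-op" baseline here are illustrative assumptions.

```python
# Hedged sketch: difference rewards for multi-agent credit assignment.
# The task, reward function, and no-op baseline are toy assumptions,
# not the formulation from the paper.

def team_reward(actions):
    # Toy team task: the team earns 1 point per distinct target covered.
    return len(set(a for a in actions if a is not None))

def difference_reward(actions, agent_idx):
    # Counterfactual question: how much would the team reward drop if
    # this agent had done nothing? The drop estimates its contribution.
    counterfactual = list(actions)
    counterfactual[agent_idx] = None
    return team_reward(actions) - team_reward(counterfactual)

actions = ["target_a", "target_a", "target_b"]  # agents 0 and 1 overlap
contributions = [difference_reward(actions, i) for i in range(len(actions))]
print(contributions)  # -> [0, 0, 1]: redundant agents get no credit
```

Note how the two agents covering the same target each receive zero credit, while the agent covering a unique target gets full credit — the kind of signal that distinguishes a goal-scorer from a bystander.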
The researchers' algorithms also detect when an agent or robot is acting in a way that does not align with or help achieve the goal, such as a robot simply opting to do something that is not useful to the overall objective.
The research team evaluated their algorithms in simulated games, including Capture the Flag and the well-known computer game StarCraft. The team was thrilled to find that their strategy performed well in StarCraft, which Tran noted was slightly unexpected.
According to Tran, this specific multi-agent reinforcement learning-based algorithm is relevant to many real-world scenarios, including military surveillance, robot collaboration in a warehouse, traffic signal management, delivery coordination by autonomous vehicles, and grid control.
Tran stated that Seung Hyun Kim developed most of the theory behind the proposal as a mechanical engineering undergraduate student, with Neale Van Stralen, an aerospace student, assisting with implementation. Their paper titled, “Disentangling Successor Features for Coordination in Multi-agent Reinforcement Learning,” was published in the Proceedings of the 21st International Conference on Autonomous Agents and Multi-agent Systems, which took place in May 2022.
When deployed, reinforcement learning aims to discover an optimal policy that maximizes the expected reward accumulated from the environment. When reinforcement learning is used to control several agents, the approach is called multi-agent reinforcement learning. In MARL, each agent attempts to learn its own policy to maximize its own reward, so from an individual agent's perspective the problem resembles single-agent reinforcement learning. Although it is theoretically feasible to employ a single policy for all agents, doing so would require complicated communication between the agents and a central server, which is impractical in most real-world situations. Instead, decentralized multi-agent reinforcement learning is used in practice.
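A minimal sketch of decentralized MARL is independent Q-learning, a common baseline (not the paper's algorithm): each agent keeps its own value estimates and updates them from a shared reward, with no communication or central server. The two-agent coordination game below, where both agents are rewarded only when they pick the same action, is an assumed toy environment.

```python
import random

# Hedged sketch: independent Q-learning in a toy two-agent coordination
# game. Both agents receive reward 1 only when they choose the same
# action. This is a standard decentralized baseline, not the paper's method.

ACTIONS = [0, 1]
ALPHA, EPSILON = 0.1, 0.2

def choose(q):
    # Epsilon-greedy selection over a stateless per-agent Q-table.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[a])

random.seed(0)
q_tables = [{a: 0.0 for a in ACTIONS} for _ in range(2)]  # one table per agent

for _ in range(2000):
    acts = [choose(q) for q in q_tables]
    reward = 1.0 if acts[0] == acts[1] else 0.0  # shared team reward
    for q, a in zip(q_tables, acts):
        # Each agent updates only its own table -- no communication.
        q[a] += ALPHA * (reward - q[a])

print([max(q, key=q.get) for q in q_tables])
```

Even though neither agent observes the other's choices, both learn to prefer the same action, because matching is the only behavior the shared reward reinforces.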
In multi-agent robot systems, it is critical to carry out path planning while avoiding interference, allocating resources, and exchanging information in a coordinated, effective manner. Most conventional multi-agent coordination algorithms assume well-known settings, and agent autonomy is constrained by predetermined target positions and priorities for each robot or drone. Coordinating multiple agents from visual information alone also remains an open problem. This research therefore promises new avenues for multi-agent coordination using multi-agent reinforcement learning.