Collaborative Agents Robustness

Summary

Collaborative Agents Robustness concerns ensuring that AI agents trained via deep reinforcement learning can collaborate effectively with humans in real-world settings, even in novel situations not encountered during training. Because such robustness is difficult to evaluate, researchers propose a unit-testing approach inspired by software engineering practice: identify potential edge cases in partner behavior and environment state, then write tests that check whether the agent's response remains reasonable in each scenario. The approach was applied to the Overcooked-AI environment, where a suite of unit tests was developed to evaluate several robustness-enhancing proposals. The results showed that this testing methodology yielded insights into the effects of these proposals that were not apparent from examining average validation reward alone.
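The idea above can be sketched in code. This is a minimal illustration, not the actual Overcooked-AI test suite or its API: the corridor state representation, the `TestCase` structure, and the `toy_agent` policy are all hypothetical stand-ins chosen to show the shape of a robustness unit test, in which each case pins down a hand-picked edge-case state and a set of responses counted as reasonable.

```python
from dataclasses import dataclass


@dataclass
class TestCase:
    """One robustness unit test: a hand-picked starting state and a
    predicate (here, a set) describing 'reasonable' agent responses."""
    name: str
    state: tuple      # (agent_pos, partner_pos) on a toy 1-D corridor
    acceptable: set   # actions counted as reasonable in this state


def run_suite(agent, cases):
    """Run each test case through the agent; return {name: passed}."""
    return {c.name: agent(c.state) in c.acceptable for c in cases}


def toy_agent(state):
    """Stand-in for a trained policy: walks toward the partner,
    but waits when adjacent instead of colliding."""
    agent_pos, partner_pos = state
    if partner_pos > agent_pos + 1:
        return "right"
    if partner_pos < agent_pos - 1:
        return "left"
    return "stay"


# Edge cases in partner position, including one designed to catch
# a collision failure mode that average reward would hide.
cases = [
    TestCase("partner_far_right", (0, 5), {"right"}),
    TestCase("partner_adjacent", (3, 4), {"stay", "left"}),
    TestCase("partner_far_left", (5, 0), {"left"}),
]

results = run_suite(toy_agent, cases)  # e.g. {"partner_far_right": True, ...}
```

Per-case pass/fail results like these are the point of the methodology: a policy can score well on average reward while still failing specific edge cases, and the suite surfaces exactly which ones.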

Research Papers