The goal of our project is to develop AI agents with social intelligence: agents that collaborate to resolve conflicts and arrive at diplomatic solutions to multi-agent disputes. Collaboration requires that agents be able to communicate with one another, so we are exploring how large language models (LLMs) can be used to parameterize effective communication policies and other agent behaviors. A primary interest is understanding how collaboration can be incentivized, and can emerge, even when agents have differing goals, and how a collection of agents can work together to solve problems that are beyond the abilities of any single agent. In addition to studying the basic science of multi-agent collaboration, we are developing the tools and platforms necessary for efficiently simulating complex and realistic multi-agent scenarios.
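As an illustration of what "parameterizing a communication policy with an LLM" might look like, the sketch below runs a round-robin negotiation among agents with differing private goals. All names here (`Agent`, `policy`, `negotiate`) are hypothetical, and the LLM call is stubbed out so the loop is self-contained; in a real system `policy` would prompt a language model with the agent's goal and the dialogue so far.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    goal: str  # private objective, not visible to other agents

    def policy(self, transcript):
        # Stand-in for an LLM call, e.g. prompting a model with:
        #   f"You are {self.name}. Your goal: {self.goal}. Dialogue so far: {transcript}"
        # Here we return a fixed-form message so the example runs without a model.
        return f"{self.name} proposes a compromise that advances '{self.goal}'"

def negotiate(agents, rounds=2):
    """Round-robin message exchange: each agent speaks once per round,
    conditioning (in a real system) on the shared transcript."""
    transcript = []
    for _ in range(rounds):
        for agent in agents:
            transcript.append(agent.policy(transcript))
    return transcript

transcript = negotiate([Agent("A", "lower price"), Agent("B", "faster delivery")])
print(len(transcript))  # 2 rounds x 2 agents = 4 messages
```

The key design point is that each agent's goal is private while the transcript is shared, so any incentive for cooperation must emerge through the messages themselves.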