Designing Generative Multi-Agent Systems for Collective Intelligence and Resilience
Author(s)
Dao, Nguyen Luc
Advisor
Moser, Bryan R.
Abstract
Large Language Models (LLMs) have been increasingly adopted by businesses to support their workflows, driving significant investment in the development of generative agents. These agents can collaborate and exchange information to solve complex problems. Previous research has found that the benefits of such multi-agent systems include better performance and the potential emergence of collective intelligence, manifested functionally as leadership, debate, and feedback. However, expanding multi-agent systems to include agents beyond trusted boundaries introduces the risk of malicious agents that provide incorrect or harmful information to degrade collective decisions or cause systemic failure. This study investigates how architectural decisions, including group size, agent prompting, and collaboration schemes, affect the system's resilience against malicious agents. Our experimental results show that increasing group size improves both accuracy and resilience at the cost of more tokens. Step-back abstraction prompting enhances accuracy and reduces the likelihood of hallucinations induced by malicious agents. The Group Chat topology is highly vulnerable to malicious interference, whereas the Reflexion, Crowdsourcing, and Blackboard topologies offer safeguards against such risks. Finally, we extend our research to investigate accountability gaps in generative AI systems. Designing generative multi-agent systems requires careful consideration of the trade-offs between performance, cost, resilience, and accountability.
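To illustrate the group-size finding, below is a minimal Python sketch, assuming a crowdsourcing-style topology in which independent agents vote and the majority answer is taken as the collective decision. The agent model, the honest_accuracy parameter, and all function names are hypothetical illustrations, not the thesis's actual experimental setup.

import random
from collections import Counter

def agent_answer(is_malicious: bool, correct: str = "A", wrong: str = "B",
                 honest_accuracy: float = 0.8) -> str:
    # A malicious agent always pushes the wrong answer; an honest agent
    # is right with probability honest_accuracy (an assumed parameter).
    if is_malicious:
        return wrong
    return correct if random.random() < honest_accuracy else wrong

def crowdsource_accuracy(group_size: int, n_malicious: int,
                         trials: int = 2000) -> float:
    # Fraction of trials in which an independent majority vote recovers
    # the correct answer despite n_malicious adversarial agents.
    wins = 0
    for _ in range(trials):
        votes = [agent_answer(i < n_malicious) for i in range(group_size)]
        majority, _ = Counter(votes).most_common(1)[0]
        wins += majority == "A"
    return wins / trials

if __name__ == "__main__":
    random.seed(0)
    for size in (3, 5, 9, 15):
        print(f"group size {size:2d}, 1 malicious: "
              f"collective accuracy {crowdsource_accuracy(size, 1):.2%}")

Under these assumptions, collective accuracy rises with group size because independent honest votes increasingly outnumber a fixed number of malicious ones, at the cost of more agent calls (and thus tokens); the trade-off mirrors the one described in the abstract.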
Date issued
2025-05
Department
System Design and Management Program
Publisher
Massachusetts Institute of Technology