Show simple item record

dc.contributor.advisor: Moser, Bryan R.
dc.contributor.author: Dao, Nguyen Luc
dc.date.accessioned: 2025-08-27T14:30:18Z
dc.date.available: 2025-08-27T14:30:18Z
dc.date.issued: 2025-05
dc.date.submitted: 2025-06-20T18:50:12.581Z
dc.identifier.uri: https://hdl.handle.net/1721.1/162506
dc.description.abstract: Large Language Models (LLMs) have been increasingly adopted by businesses to support their workflows, driving significant investment in developing generative agents. These agents can collaborate and exchange information to solve complex problems. Previous research has found that the benefits of such multi-agent systems include better performance and the potential emergence of collective intelligence, characterized functionally as leadership, debate, and feedback. However, expanding multi-agent systems to include agents beyond trusted boundaries introduces the risk of malicious agents that provide incorrect or harmful information to degrade collective decisions or cause systemic failure. This study investigates how architectural decisions, including group size, agent prompting, and collaboration schemes, affect the system's resilience against malicious agents. Our experimental results show that increasing group size improves both accuracy and resilience at the cost of more tokens. Step-back abstraction prompting enhances accuracy and mitigates the likelihood of hallucinations induced by malicious agents. The Group Chat topology is highly vulnerable to malicious interference, whereas the Reflexion, Crowdsourcing, and Blackboard topologies offer safeguards against such risks. Finally, we extend our research to investigate accountability gaps in generative AI systems. Designing generative multi-agent systems requires careful consideration of the trade-offs between performance, cost, resilience, and accountability.
dc.publisher: Massachusetts Institute of Technology
dc.rights: In Copyright - Educational Use Permitted
dc.rights: Copyright retained by author(s)
dc.rights.uri: https://rightsstatements.org/page/InC-EDU/1.0/
dc.title: Designing Generative Multi-Agent Systems for Collective Intelligence and Resilience
dc.type: Thesis
dc.description.degree: S.M.
dc.contributor.department: System Design and Management Program.
dc.identifier.orcid: 0009-0003-9460-7697
mit.thesis.degree: Master
thesis.degree.name: Master of Science in Engineering and Management

