Show simple item record

dc.contributor.author: Jorgensen, Steven
dc.contributor.author: Nadizar, Giorgia
dc.contributor.author: Pietropolli, Gloria
dc.contributor.author: Manzoni, Luca
dc.contributor.author: Medvet, Eric
dc.contributor.author: O'Reilly, Una-May
dc.contributor.author: Hemberg, Erik
dc.date.accessioned: 2024-08-02T15:43:01Z
dc.date.available: 2024-08-02T15:43:01Z
dc.date.issued: 2024-07-14
dc.identifier.isbn: 979-8-4007-0494-9
dc.identifier.uri: https://hdl.handle.net/1721.1/155922
dc.description: GECCO ’24, July 14–18, 2024, Melbourne, VIC, Australia (en_US)
dc.description.abstract: Genetic programming (GP) is a popular problem-solving and optimization technique. However, generating effective test cases for training and evaluating GP programs requires strong domain knowledge. Furthermore, GP programs often prematurely converge on local optima when given excessively difficult problems early in their training. Curriculum learning (CL) has been effective in addressing similar issues across reinforcement learning (RL) domains, but it requires the manual generation of progressively more difficult test cases as well as their careful scheduling. In this work, we leverage the domain knowledge and strong generative abilities of large language models (LLMs) to generate effective test cases of increasing difficulty and to schedule them according to various curricula. We show that by integrating a curriculum scheduler with LLM-generated test cases, we can effectively train a GP agent with environment-based curricula for a single-player game and opponent-based curricula for a multi-player game. Finally, we discuss the benefits and challenges of applying this method to other problem domains. (en_US)
dc.publisher: ACM | Genetic and Evolutionary Computation Conference (en_US)
dc.relation.isversionof: 10.1145/3638529.3654056 (en_US)
dc.rights: Creative Commons Attribution (en_US)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/ (en_US)
dc.source: Association for Computing Machinery (en_US)
dc.title: Large Language Model-based Test Case Generation for GP Agents (en_US)
dc.type: Article (en_US)
dc.identifier.citation: Jorgensen, Steven, Nadizar, Giorgia, Pietropolli, Gloria, Manzoni, Luca, Medvet, Eric et al. 2024. "Large Language Model-based Test Case Generation for GP Agents."
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.contributor.department: Lincoln Laboratory
dc.identifier.mitlicense: PUBLISHER_CC
dc.eprint.version: Final published version (en_US)
dc.type.uri: http://purl.org/eprint/type/ConferencePaper (en_US)
eprint.status: http://purl.org/eprint/status/NonPeerReviewed (en_US)
dc.date.updated: 2024-08-01T07:46:34Z
dc.language.rfc3066: en
dc.rights.holder: The author(s)
dspace.date.submission: 2024-08-01T07:46:34Z
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed (en_US)
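The abstract describes integrating a curriculum scheduler with LLM-generated test cases of increasing difficulty. A minimal sketch of that idea follows; it is not the paper's implementation — all names are hypothetical, and the LLM call is stubbed with a placeholder, since the actual prompts and scheduler are not part of this record:

```python
import random

def llm_generate_test_case(difficulty: int) -> dict:
    # Hypothetical stand-in for an LLM call; the real system would prompt
    # a language model to produce a test case at the requested difficulty.
    return {"difficulty": difficulty, "seed": random.random()}

def incremental_curriculum(max_difficulty: int, cases_per_level: int):
    """Yield generated test cases in order of increasing difficulty,
    so a GP trainer sees the easiest problems first."""
    for level in range(1, max_difficulty + 1):
        for _ in range(cases_per_level):
            yield llm_generate_test_case(level)

# A (stubbed) GP training loop would consume the schedule easiest-first:
schedule = list(incremental_curriculum(max_difficulty=3, cases_per_level=2))
print([case["difficulty"] for case in schedule])  # [1, 1, 2, 2, 3, 3]
```

The same scheduler shape could serve the paper's opponent-based curricula by having the generator return progressively stronger opponents instead of harder environments.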


