DSpace@MIT

WhatWhen2Ask: Cost-Aware LLM Querying for Autonomous Robots in Uncertain Environments

Author(s)
Thirumalai, Vittal
Download: Thesis PDF (4.754 MB)
Advisor
Balakrishnan, Hamsa
Terms of use
In Copyright - Educational Use Permitted. Copyright retained by author(s). https://rightsstatements.org/page/InC-EDU/1.0/
Abstract
Autonomous agents operating in real-world environments must make decisions under uncertainty, facing challenges such as partial observability, sparse rewards, and long-horizon planning. While reinforcement learning (RL) enables agents to learn from experience, standard policies often struggle to generalize in the presence of ambiguous tasks or incomplete information. Large language models (LLMs) can provide valuable semantic guidance, but their high computational cost and latency make constant querying impractical. This thesis introduces WhatWhen2Ask, a framework for cost-aware, confidence-driven querying of external multimodal large language models (MLLMs). The agent employs a Deep Q-Network (DQN) as its internal action planner, selectively querying open- and closed-source models (BLIP-2 and GPT-4o) in a hierarchical manner when its confidence is low and external guidance is likely to improve performance. Accepted hints are embedded and fused with structured state representations, supported by tailored reward shaping for improved learning in sparse environments. Evaluated in the HomeGrid environment, WhatWhen2Ask improves the success rate from 38% (DQN-only) to 54%, while querying in fewer than 6% of steps. Ablation studies show that semantic hints, confidence-based querying, selective hint filtering, and hierarchical fallback each contribute meaningfully to performance. These results suggest that principled, confidence-aware LLM querying can enhance decision-making in uncertain environments, offering a step toward more efficient and cost-aware language-augmented agents.
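The querying loop the abstract describes can be summarized in a short sketch. The code below is an illustrative reconstruction from the abstract alone, not the thesis's actual implementation: the confidence measure, both thresholds, and the helper callables (query_blip2, query_gpt4o, embed_hint, fuse, score_hint) are all assumptions, and "hierarchical" is taken to mean the cheaper open-source model is tried before the closed-source one.

```python
# Hypothetical sketch of WhatWhen2Ask-style confidence-gated querying.
# Assumptions (not from the thesis): q_values(state) returns the DQN's
# Q-value vector; query_blip2/query_gpt4o wrap the two external models;
# embed_hint/fuse/score_hint stand in for hint embedding, state fusion,
# and selective hint filtering. Threshold values are placeholders.
import numpy as np

CONF_THRESHOLD = 0.2    # assumed: below this, the agent asks for help
ACCEPT_THRESHOLD = 0.5  # assumed: minimum score for a hint to be accepted

def softmax(x):
    z = np.exp(x - np.max(x))
    return z / z.sum()

def confidence(q_vals):
    # One plausible proxy: margin between the best and second-best
    # softmax action probabilities under the current Q-values.
    p = np.sort(softmax(q_vals))[::-1]
    return p[0] - p[1]

def act(state, q_values, query_blip2, query_gpt4o,
        embed_hint, fuse, score_hint):
    q = q_values(state)
    if confidence(q) >= CONF_THRESHOLD:
        # Confident enough: act on the internal policy, no query issued.
        return int(np.argmax(q)), state

    # Hierarchical fallback: try the cheaper open-source model first,
    # escalating to the closed-source model only if needed.
    for query in (query_blip2, query_gpt4o):
        hint = query(state)
        if score_hint(hint, state) >= ACCEPT_THRESHOLD:
            # Accepted hint: embed it and fuse with the structured state,
            # then re-plan on the augmented state.
            state = fuse(state, embed_hint(hint))
            return int(np.argmax(q_values(state))), state

    # No hint passed the filter: fall back to the agent's own policy.
    return int(np.argmax(q)), state
```

Gating queries on a confidence margin rather than querying every step is one way to reconcile the abstract's two headline numbers: most steps resolve locally, so external calls stay below 6% of steps while still lifting the success rate.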
Date issued
2025-05
URI
https://hdl.handle.net/1721.1/162960
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology

Collections
  • Graduate Theses

Content created by the MIT Libraries, CC BY-NC unless otherwise noted. Notify us about copyright concerns.