PhD Scholarship: Autonomous Planning and Execution Aligned with Contingencies from Humans

LAAS-CNRS, France. Deadline: Apr 16, 2026

Details

● Scientific background, comparison with the state of the art

The term ‘autonomous agent’ refers to any automated system operating in the real world that is capable of interacting with its environment via inputs/sensors (cameras, microphones, as well as text messages, network data, etc.) and outputs/actuators (wheels, arms, loudspeakers, or messages/data), with reasoning and decision-making capabilities at multiple levels (navigation, vision, manipulation, sequences of actions, assigned missions, etc.), based on dedicated models for representing data and knowledge. This thesis focuses on so-called ‘high-level’ reasoning capabilities, related to task planning, where tasks (or actions/activities) are to be carried out to achieve or maintain goals or missions assigned to the agent. As the work carried out is generic, it can in principle be applied to various types of agents depending on the targeted applications: web services, autonomous buses, satellites, etc. The RIS team, within the Robotics Department at LAAS, is particularly interested in robots (ranging from drones to personal service robots), which will constitute the priority application area. More specifically, the focus will be on robots interacting with humans, taking into account three possible levels of interaction, each of which presents different challenges:

- (I1) The human as an external observed variable within the environment under consideration (e.g. robots moving through a crowd)
- (I2) Humans as external agents interacting during the execution of the plan (e.g. robots performing handling or logistics tasks in cooperation with human agents)
- (I3) Humans as agents interacting during the design of the task plan (e.g. supervision of surveillance tasks carried out by a fleet of robots, or healthcare service robots capable of taking patients’ needs into account in real time)
We will focus primarily on the case of a single agent interacting with its environment and with humans, but the specific challenges posed by multi-agent systems (fleets of robots) may be considered at a later stage. Traditionally, an autonomous agent is assigned a goal (e.g. fetching an object and returning it to the operator); the planner searches among the known available actions for a logical sequence that leads from the initial state to a state satisfying the goal. Finally, this sequence of actions is executed by the agent using its low-level capabilities (sensors/actuators).

All of the above assumes a deterministic environment. The real world is not so predictable: numerous uncertainties necessitate a re-evaluation of this paradigm, as the plan devised rarely plays out as intended. These uncertainties may be temporal (varying durations), related to the resources used (e.g. a faulty robot component, insufficient battery power) or to the environment (e.g. an impracticable road). Furthermore, uncertainties and contingencies stem from three main sources:

- Approximations in low-level tasks (for example, the actual duration of a navigation task may exceed the initial estimate)
- Incomplete information about the state of the world (for example, an undetected closed door or a missing item in stock)
- Unforeseen events (such as hardware failures or collisions)

The work to be carried out will build on five elements derived from previous work conducted by the supervisors and the RIS team:

(C1) The use of the deterministic single-agent temporal planner ARIES, a constraint-based optimisation solver developed in the team (in the Rust programming language) [Bit-Monnot, 2023], capable of generating and updating a global task schedule.
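As a much-simplified illustration of the deterministic planning problem described above (and that a planner such as ARIES solves), the following sketch runs a breadth-first search over world states for an action sequence reaching the goal. The toy STRIPS-style fetch-and-deliver domain and all names in it are illustrative assumptions, not part of ARIES, which is a constraint-based temporal planner rather than a state-space searcher:

```python
from collections import deque

# Toy STRIPS-style actions: (name, preconditions, add effects, delete effects).
# The fetch-and-deliver domain below is illustrative, not taken from ARIES.
ACTIONS = [
    ("goto_shelf",  {"at_base"},            {"at_shelf"},  {"at_base"}),
    ("pick_object", {"at_shelf"},           {"holding"},   set()),
    ("goto_base",   {"at_shelf"},           {"at_base"},   {"at_shelf"}),
    ("give_object", {"at_base", "holding"}, {"delivered"}, {"holding"}),
]

def plan(initial, goal):
    """Breadth-first search from the initial state to any state satisfying goal."""
    frontier = deque([(frozenset(initial), [])])
    visited = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, pre, add, delete in ACTIONS:
            if pre <= state:
                successor = frozenset((state - delete) | add)
                if successor not in visited:
                    visited.add(successor)
                    frontier.append((successor, steps + [name]))
    return None  # no sequence of known actions reaches the goal

print(plan({"at_base"}, {"delivered"}))
# → ['goto_shelf', 'pick_object', 'goto_base', 'give_object']
```

Breadth-first search returns a shortest plan, but only under the deterministic, fully observable assumptions stated above; the rest of this announcement concerns what happens when those assumptions fail.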
(C2) Several studies [Van de Vonder et al., 2007] [Bidot et al., 2009] have investigated strategies for accounting for uncertainties within an architecture that dynamically combines task planning/scheduling and plan execution, distinguishing between:

- proactive approaches, which anticipate certain uncertainties by modelling them in the initial model so as to produce predictive plans that are robust against such contingencies;
- reactive approaches, which provide for local adaptation of the plan or online replanning when an event affects the current state;
- progressive/continuous approaches, where planning is carried out online, ahead of execution, integrating observed events to plan the next step.

(C3) The HumFleet project (anr.fr/Project-ANR-23-CE33-0003), led by the RIS team at LAAS-CNRS and headed by Arthur BIT-MONNOT, is developing a human-system collaboration architecture for task planning within a fleet of heterogeneous robots operating in industrial environments (logistics, manufacturing workshops, etc.) [Blanchard et al., 2024]. The ARIES system is used and extended to provide a human operator with explanations of decisions made and of conflicts; the operator can intervene to modify the plan via an LLM-based protocol, which falls within case (I3) above. Planning and execution in HumFleet remain sequential, and uncertainty management is limited to low-level navigation or manipulation failures and is purely reactive: deviations and disturbances require ARIES to replan, again in collaboration with the operator. This presents two major limitations: (a) a risk of frequent replanning and (b) a high cognitive load for the operator.
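The purely reactive scheme described in (C2) and currently used in HumFleet can be sketched as an execute-monitor-replan loop. Every name below is an illustrative assumption; this is not the HumFleet or ARIES interface:

```python
# Minimal sketch of a reactive execute-monitor-replan loop.
# All function names are illustrative assumptions, not the HumFleet/ARIES API.

def execute_with_replanning(plan_fn, execute_fn, observe_fn, state, goal, max_replans=5):
    """Execute a plan action by action; on any deviation between the expected
    and the observed state, discard the rest of the plan and replan."""
    for _ in range(max_replans + 1):
        plan = plan_fn(state, goal)
        if plan is None:
            return False                          # no plan from the current state
        for action in plan:
            expected = execute_fn(state, action)  # model-predicted next state
            state = observe_fn(expected)          # what actually happened
            if state != expected:
                break                             # deviation detected: replan
        else:
            return goal <= state                  # plan ran to completion
    return False                                  # replanning budget exhausted

# Toy usage: the world loses the effect of the first "y" action, forcing a replan.
failures = {"y": 1}                               # "y" fails once before succeeding
def observe(expected):
    for fact, n in list(failures.items()):
        if fact in expected and n > 0:
            failures[fact] -= 1
            return expected - {fact}              # effect did not materialise
    return expected

done = execute_with_replanning(
    plan_fn=lambda s, g: sorted(set(g) - set(s)),
    execute_fn=lambda s, a: frozenset(s) | {a},
    observe_fn=observe,
    state=frozenset(), goal={"x", "y"})
print(done)  # True: the goal is reached after one replanning round
```

The loop makes the two limitations noted for HumFleet concrete: each deviation triggers a full replanning round (cost), and in a human-in-the-loop setting each round would again involve the operator (cognitive load).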
(C4) A substantial body of work, to which both supervisors contributed [Morris et al., 2001] [Bit-Monnot & Morris, 2023], exists on the handling of temporal uncertainties in the form of possible duration intervals for activities not controlled by the agent, with a proactive approach to ensure the successful execution of a plan subject to this type of uncertainty. The generic STNU model (Simple Temporal Network with Uncertainty) is a constraint satisfaction model and is therefore compatible with ARIES; it also offers extensions applicable to the multi-agent framework [Sumic et al., 2024].

(C5) Finally, the context of social robots, and thus of robot-human interaction (particularly the uncertainties surrounding humans’ objectives and intentions), is also addressed by several ongoing projects within the team [Shekhar et al., 2024] [Vigné et al., 2024], and in other teams with which the supervisors are in contact and with which collaborations are being considered: the CRS Lab at Örebro University (work on proactive robots [Grosinger et al., 2019] and active data collection [Veiga & Renoux, 2023]) and the CNR-ISTC in Rome (work on the agent’s intrinsic motivation [Sartor et al., 2023]). All this research focuses on the discovery of goals induced by interaction and seeks to integrate active learning capabilities combined with various formal symbolic methods, with a focus on considering humans as a source of uncertainty. In contrast, the thesis envisaged here will focus on predefined goals but aims to integrate uncertainties arising from human factors with those from other sources. The PhD candidate will nevertheless have the opportunity to undertake visits to these teams as part of their work.
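The temporal-constraint foundation underlying the STNU model of (C4) can be illustrated, in its fully controllable special case, by a consistency check on a Simple Temporal Network: each constraint t_j - t_i <= w is an edge of a distance graph, and the network is consistent exactly when that graph has no negative cycle. The example durations below are illustrative and not drawn from the cited papers:

```python
import itertools

# Minimal consistency check for a Simple Temporal Network, the fully
# controllable special case underlying STNUs. An edge (i, j, w) encodes the
# constraint t_j - t_i <= w. Example values are illustrative only.
def stn_consistent(n, edges):
    """Floyd-Warshall on the distance graph; the STN is consistent iff the
    graph contains no negative cycle."""
    INF = float("inf")
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for i, j, w in edges:
        d[i][j] = min(d[i][j], w)
    for k, i, j in itertools.product(range(n), repeat=3):  # k varies slowest
        if d[i][k] + d[k][j] < d[i][j]:
            d[i][j] = d[i][k] + d[k][j]
    return all(d[i][i] >= 0 for i in range(n))

# Timepoints 0 and 1 bracket a navigation task lasting between 5 and 10:
# t1 - t0 <= 10 and t0 - t1 <= -5 (i.e. the task takes at least 5).
print(stn_consistent(2, [(0, 1, 10), (1, 0, -5)]))  # True
# Requiring completion within 4 time units contradicts the 5-unit minimum:
print(stn_consistent(2, [(0, 1, 4), (1, 0, -5)]))   # False
```

In an STNU, some of these duration intervals are contingent (chosen by nature, not the agent), and consistency is replaced by the stronger notion of controllability; the proactive approaches of (C4) check and enforce such properties before execution.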
● Objectives, challenges, innovation aspects, envisioned models/methods/tools, and foreseen contributions

In summary, the thesis, based on the ARIES system (C1) used in the HumFleet project (C3), will pursue the following objectives:

- (O1) combine multiple sources of uncertainty: those arising from the physical environment or the agent itself with those arising from humans (I1) and their intentions (I2) (C5);
- (O2) take into account that these uncertainties may impact several dimensions (temporal (C2), effects of actions, state changes) and may be addressed at different stages (proactively or reactively);
- (O3) integrate all of this into a dynamic task-level decision-making architecture inspired by previous theoretical work (C2).

By way of comparison, MDPs (Markov Decision Processes [Beynier & Mouaddib, 2004]), the classical decision models in planning under uncertainty, focus on the proactive management of uncertainties related to the effects of actions and are limited to conditional decisions regarding the execution of the plan. The main challenge is therefore to unify these three objectives, which requires distinguishing between two levels:

Operational level (model of the current plan): the plan generated by ARIES must be supplemented by a model of the uncertainties that can be addressed proactively and synchronously:

- duration intervals that cannot be controlled, using STNUs;
- integration of alternatives into the plan: conditional plans (CTP [Tsamardinos et al., 2003]) or MDPs; macro-activities with multiple implementation modalities (drawing inspiration, for example, from Hierarchical Task Planning models [Cavrel et al., 2024]).

Decision-making level: we plan to design a generic model to represent the current state in the form of belief states, as in, e.g., model-based diagnosis [Cordier et al., 2020].
These states can encapsulate various possible developments in the plan: exceeding a time limit, distinct unobservable world states, or a new human intention.

● Ideal candidate profile – Required skills

Technical skills:
— Constraint-based optimisation (CSP, LP, SAT), particularly for scheduling
— Symbolic AI (knowledge representation, planning)
— C/C++ programming; knowledge of the Rust language is optional

Soft skills:
— Independence and analytical rigour
— Ability to formalise real-world problems
— Interest in collaborative robotic systems
— Fluency in written English and spoken French or English
