COSIMO-IDAI Generative AI Series
I delivered a COSIMO-IDAI Generative AI Series session focused on practical large language model workflows, including how to orchestrate model outputs, structure prompts, and connect generative AI tools to real organisational tasks.
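For readers who want a concrete feel for the prompt-structuring ideas covered in the session, here is a minimal Python sketch: the section labels, the stubbed call_model helper, and the two-step draft-then-review chain are illustrative assumptions rather than the session's actual materials.

```python
# Illustrative sketch of structured prompting and simple orchestration.
# The section names and the stubbed call_model() are assumptions for
# demonstration, not the session's materials.

def build_prompt(role: str, context: str, task: str, output_format: str) -> str:
    """Assemble a prompt from labelled sections so each part can be
    reviewed and versioned independently."""
    return "\n".join([
        f"ROLE: {role}",
        f"CONTEXT: {context}",
        f"TASK: {task}",
        f"OUTPUT FORMAT: {output_format}",
    ])

def call_model(prompt: str) -> str:
    """Stand-in for a real LLM API call; replace with your provider's SDK."""
    return f"[model response to {len(prompt)} chars of prompt]"

# Orchestration: the output of a drafting step feeds a review step.
draft = call_model(build_prompt(
    role="Policy analyst",
    context="Quarterly incident reports from the operations team",
    task="Summarise recurring failure themes",
    output_format="Five bullet points, plain language"))
review = call_model(build_prompt(
    role="Editor",
    context=draft,
    task="Check the summary for unsupported claims",
    output_format="Numbered list of issues"))
print(review)
```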
Master Class on LLMs for Entertainment and Business
I co-delivered a five-day master class exploring how large language models (LLMs) can augment creative pipelines in entertainment and modernise decision support within business operations. The course balanced strategic framing with hands-on labs so that participants could map model capabilities to real production constraints.
Intent-Driven LLM Ensemble Planning for Multi-Robot Manipulation
This work presents an intent-driven planning pipeline for flexible multi-robot manipulation. The system converts operator instructions and scene descriptions into precedence-aware action sequences, then uses an ensemble of large language models, an LLM verifier, and deterministic consistency checks to reduce invalid plans before execution.
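A rough Python sketch of what such an ensemble-plus-verification loop can look like; the propose_plan stub, the majority vote standing in for the LLM verifier, and the check_precedence helper are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of an ensemble planning loop in the spirit of the pipeline
# above: propose with several models, prune with deterministic checks,
# then pick a consensus plan. All helpers are illustrative assumptions.
from collections import Counter

def propose_plan(model_name: str, instruction: str, scene: str) -> tuple[str, ...]:
    """Stand-in for one LLM planner; returns an ordered action sequence."""
    return ("pick(bolt)", "place(bolt, tray)")  # placeholder output

def check_precedence(plan: tuple[str, ...], constraints: set[tuple[str, str]]) -> bool:
    """Deterministic check: every (before, after) pair must appear in order."""
    index = {action: i for i, action in enumerate(plan)}
    return all(
        before in index and after in index and index[before] < index[after]
        for before, after in constraints)

def plan_with_ensemble(models, instruction, scene, constraints):
    # 1. Each ensemble member proposes a plan independently.
    proposals = [propose_plan(m, instruction, scene) for m in models]
    # 2. Deterministic consistency checks prune invalid candidates.
    valid = [p for p in proposals if check_precedence(p, constraints)]
    if not valid:
        return None  # defer to the operator rather than execute a bad plan
    # 3. A majority vote stands in for the LLM verifier stage here.
    return Counter(valid).most_common(1)[0][0]

plan = plan_with_ensemble(
    models=["model-a", "model-b", "model-c"],
    instruction="remove the bolt and store it",
    scene="bolt on fixture, tray to the left",
    constraints={("pick(bolt)", "place(bolt, tray)")})
print(plan)
```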
Teleoperation in Extended Reality for Battery Disassembly
We investigated how extended reality (XR), haptic feedback, and task-parameterized Gaussian mixture regression (TP-GMR) can be fused into a single teleoperation framework for electric vehicle (EV) battery disassembly. The work demonstrates how variable autonomy lets operators fluidly hand tasks to the robot while constraint barrier functions guarantee safe tool motion inside tightly constrained battery modules.
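To give a flavour of the barrier-function idea, here is a minimal one-dimensional Python sketch; the gain, the geometry, and the barrier_filter helper are illustrative assumptions rather than the framework's actual controller.

```python
# Minimal 1-D sketch of a constraint barrier function: a safe set h(x) >= 0
# and the condition h_dot >= -alpha * h limit how fast the tool may approach
# a battery module wall. Gains and geometry are illustrative assumptions.
def barrier_filter(x: float, v_cmd: float, wall: float, alpha: float = 2.0) -> float:
    """Clamp a commanded velocity so the tool never crosses the wall.

    h(x) = wall - x measures clearance; enforcing h_dot = -v >= -alpha * h
    caps the approach speed at alpha * clearance.
    """
    h = wall - x                  # clearance to the constraint surface
    v_max = alpha * h             # fastest admissible approach speed
    return min(v_cmd, v_max)      # only restrict motion toward the wall

# The operator commands an aggressive approach; the filter slows it near contact.
x = 0.0
dt = 0.01
for _ in range(200):
    v = barrier_filter(x, v_cmd=1.0, wall=0.10)
    x += v * dt
print(f"final position: {x:.4f} m (wall at 0.1000 m)")
```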
KM Talks — Probabilistic Human Intent Recognition
As part of the KM Talks series at the UK's National Nuclear Laboratory, I presented our probabilistic framework for recognising operator intent during mobile manipulation. The talk focused on how dual-phase inference lets robots anticipate navigation goals and manipulation targets quickly enough for collaborative tasks in hazardous settings.
Human-Robot Crack Detection in Nuclear Facilities
We studied how a mobile Jackal robot equipped with AI-based visual crack detection can support inspectors working in nuclear facilities. By teaming humans with perception-enabled robots, the workflow reduces exposure to hazardous areas and lowers the cognitive load that comes with manual inspections.
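A simplified Python sketch of such an inspection loop; the detect_cracks stub, the Detection type, and the confidence threshold are illustrative assumptions, not the study's detector.

```python
# Illustrative sketch of the human-robot inspection loop: camera frames pass
# through a crack detector, and only detections above a confidence threshold
# are flagged to the inspector. Model and threshold are assumptions.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float
    bbox: tuple[int, int, int, int]  # x, y, width, height in pixels

def detect_cracks(frame) -> list[Detection]:
    """Stand-in for the vision model; replace with a trained detector."""
    return [Detection("crack", 0.91, (120, 80, 40, 15))]

def inspection_step(frame, threshold: float = 0.8) -> list[Detection]:
    """Return only detections confident enough to surface to the inspector,
    so low-confidence noise does not add to their cognitive load."""
    return [d for d in detect_cracks(frame) if d.confidence >= threshold]

for finding in inspection_step(frame=None):
    print(f"flag for inspector: {finding.label} at {finding.bbox} "
          f"(confidence {finding.confidence:.2f})")
```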
Vision-Language Models for Intent-Aware Assistance
This preprint explores how to augment our GUIDER framework with vision-language and language-only models to filter relevant objects and locations during collaborative manipulation. The goal is to let the robot reason about user prompts in natural language, refine its belief over tasks, and shift autonomy only when the model is confident about the operator’s intent.
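A hedged Python sketch of the filtering idea: a stubbed vision-language model scores scene objects against the user's prompt, the scores reweight the task belief, and autonomy shifts only past a confidence threshold. The vlm_relevance stub, the update rule, and the 0.8 threshold are illustrative assumptions rather than the preprint's method.

```python
# Illustrative sketch: VLM relevance scores reweight a belief over candidate
# objects, and the robot takes over only when one target dominates.
def vlm_relevance(prompt: str, obj: str) -> float:
    """Stand-in for a VLM relevance query; replace with a real model call."""
    return 0.9 if obj in prompt else 0.1

def update_belief(belief: dict[str, float], prompt: str) -> dict[str, float]:
    """Reweight the belief over candidate objects by VLM relevance scores."""
    weighted = {o: p * vlm_relevance(prompt, o) for o, p in belief.items()}
    total = sum(weighted.values())
    return {o: w / total for o, w in weighted.items()}

belief = {"wrench": 1 / 3, "battery cell": 1 / 3, "connector": 1 / 3}
belief = update_belief(belief, prompt="hand me the wrench")
target, confidence = max(belief.items(), key=lambda kv: kv[1])
if confidence > 0.8:          # shift autonomy only when the intent is clear
    print(f"robot takes over: fetch {target} ({confidence:.2f})")
else:
    print("stay in manual control; intent still ambiguous")
```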
Probabilistic Intent Prediction for Mobile Manipulation
We introduced GUIDER (Global User Intent Dual-phase Estimation for Robots) to recognise what a teleoperator wants to do during mobile manipulation without constraining their control. The framework maintains coupled navigation and manipulation belief layers so the robot can anticipate both where a human is heading and which object they plan to manipulate.
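For intuition, here is a toy Python sketch of the dual-phase coupling; the scene layout, the heading-based weighting, and the update rules are illustrative assumptions, not GUIDER's actual inference.

```python
# Toy sketch of dual-phase estimation: a belief over navigation goals and,
# conditioned on each goal, a belief over the objects found there.
# Layout, weights, and update rules are illustrative assumptions.

# Candidate navigation goals and the objects at each (assumed scene layout).
goals = {"workbench": ["wrench", "bolt"], "shelf": ["box"]}
nav_belief = {g: 1 / len(goals) for g in goals}

def update_nav(belief, heading_toward):
    """Weight goals by agreement with the operator's current heading."""
    weighted = {g: p * (0.9 if g == heading_toward else 0.1)
                for g, p in belief.items()}
    total = sum(weighted.values())
    return {g: w / total for g, w in weighted.items()}

# Phase 1: joystick motion keeps pointing at the workbench.
for _ in range(3):
    nav_belief = update_nav(nav_belief, heading_toward="workbench")

# Phase 2: the manipulation belief is conditioned on the navigation estimate,
# so likely goals lend probability to the objects they contain.
manip_belief = {}
for goal, p_goal in nav_belief.items():
    for obj in goals[goal]:
        manip_belief[obj] = manip_belief.get(obj, 0) + p_goal / len(goals[goal])

print({k: round(v, 3) for k, v in nav_belief.items()})
print({k: round(v, 3) for k, v in manip_belief.items()})
```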