Abstract
Reinforcement learning has emerged as a promising approach for enabling robots to learn from interaction with their environments, without relying on predefined behaviors. However, learning directly from real-world interaction poses significant challenges: it is time-consuming and resource-intensive, often requiring extensive data collection over long periods, and trial-and-error learning in physical settings is risky, as faulty policies can cause safety issues or system damage. Simulations offer a safer and more efficient alternative, allowing robots to learn in simulated environments at faster-than-real-time speeds.

Despite these benefits, simulations are imperfect approximations of reality. Robots may therefore learn behaviors that exploit simulation-specific quirks and fail to perform well in the real world, making it difficult to transfer learned behaviors from simulation to real environments, a challenge known as the sim-to-real gap. Several factors contribute to this gap: unmodeled physical phenomena such as friction and deformation, and the asynchronous nature of real-world systems, which simulations often fail to capture accurately. Using separate software stacks for simulation and deployment can also introduce unintended discrepancies. Finally, simulating at faster-than-real-time speeds with asynchronous frameworks that distribute computation across multiple cores may introduce further inaccuracies without proper synchronization.
This thesis focuses on improving simulation tools and methodologies to enhance the efficiency and effectiveness of learning-based approaches in robotics. The work addresses key trade-offs between flexibility, speed, and accuracy in robotic simulations, which are critical for successfully transferring learned policies from simulation to real-world environments. Additionally, it introduces a strategy to improve resilience, ensuring that learned behaviors are robust to irrelevant and unknown dynamics.
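The abstract does not detail the resilience strategy, but a common way to make learned behaviors robust to irrelevant and unknown dynamics is domain randomization: resampling the simulator's physical parameters at the start of each training episode, so the policy cannot overfit to one specific (and inevitably inaccurate) set of dynamics. A minimal sketch, with purely illustrative parameter names and ranges:

```python
import random

# Illustrative nominal dynamics parameters of a simulated robot
# (not taken from the thesis; names and values are assumptions).
NOMINAL = {"mass": 1.0, "friction": 0.8, "motor_delay": 0.01}

def randomize_dynamics(nominal, spread=0.2, rng=random):
    """Return a copy of the parameters, each scaled by an independent
    random factor in [1 - spread, 1 + spread] (domain randomization)."""
    return {name: value * rng.uniform(1.0 - spread, 1.0 + spread)
            for name, value in nominal.items()}

# At the start of every training episode, the simulator would be
# reconfigured with freshly sampled dynamics before rolling out the policy.
params = randomize_dynamics(NOMINAL)
```

A policy trained across many such perturbed simulators tends to rely only on features that are stable under the randomization, which is one way to obtain robustness to dynamics that were never modeled exactly.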
By tackling these challenges, this thesis provides insights into the design of effective robotic simulators and presents contributions that help bridge the gap between simulated and real-world robotic learning.
| Original language | English |
| --- | --- |
| Qualification | Doctor of Philosophy |
| Awarding Institution | |
| Supervisors/Advisors | |
| Award date | 29 Apr 2025 |
| Electronic ISBNs | 978-94-6518-019-9 |
| DOIs | |
| Publication status | Published - 2025 |
Keywords
- Robotics
- Reinforcement Learning (RL)
- GPU Computing
- Robot Simulation
- Sim2Real
- Sim-to-Real
- Machine Learning