Monocular Vision-Based Pose Estimation of Uncooperative Spacecraft

L. Pasqualetto Cassinis

Research output: Thesis › Dissertation (TU Delft)

Abstract

Activities in outer space have entered a new era of growth, fostering human development and improving key Earth-based applications such as remote sensing, navigation, and telecommunication. The recent creation of SpaceX's Starlink constellation and the steep increase in CubeSat launches are expected to revolutionize the way we use space and to extend the current capabilities of satellite-based technology. However, the resulting growth in the number of human-made objects is rapidly leading to higher collision risks in congested Earth orbits. This has led to questioning whether this trend is sustainable in the long term, and ultimately to the need to tackle sustainability in space.

The past decade has seen considerable efforts by space agencies both to prevent major collisions in orbit via Active Debris Removal (ADR) missions and to extend the lifetime of functioning satellites with On-Orbit Servicing (OOS). Unfortunately, the approach and capture of space debris objects is complicated by the fact that these targets are uncooperative and cannot aid close-proximity operations, leading to critical challenges in the estimation of their relative position and attitude (pose) with respect to the servicer spacecraft. Several missions have been proposed as technology demonstrators of debris removal and servicing technologies, in which passive monocular cameras are combined with active sensors to improve the robustness and accuracy of the navigation system. Yet, despite the inherent challenges that come with the use of monocular cameras in space, navigation systems based on a single camera are becoming an attractive alternative to systems based on active sensors, due to their reduced mass, power consumption, and system complexity. The research work presented in this thesis aims at developing and validating a robust and accurate monocular camera-based pose estimation system compliant with the navigation requirements of both ADR and OOS missions.

Two fundamental open challenges are addressed:

  1. The robustness and applicability of image processing algorithms and pose estimation methods.
  2. The validation of relative navigation filters and their interface with image processing and pose estimation.

This research begins with a survey on the robustness and applicability of existing monocular vision-based pose estimation systems. After identifying the characteristics and limitations of each subsystem implemented in state-of-the-art architectures, a comparative assessment of the current solutions is given at different levels of the pose estimation process, in order to bring a novel and broad perspective. Special focus is placed on the improved robustness of novel image processing schemes and pose estimators based on Convolutional Neural Networks (CNNs). The limitations and drawbacks of validating current pose estimation schemes with synthetic images are further discussed, together with the critical trade-offs in the selection of vision-based navigation filters.

Building on the results of the survey, a novel framework is introduced to enable robust and accurate pose estimation. Two investigated CNNs are used at the image processing level to identify a set of pre-selected features on the target spacecraft, which are fed either to a pose estimator prior to the navigation filter (loosely coupled) or directly to the navigation filter as measurements (tightly coupled). A novel method to derive covariance matrices directly from the CNN heatmaps is introduced to improve the modeling of the feature detection uncertainty prior to pose estimation. The performance results indicate that a tightly coupled approach can guarantee an advantageous coupling between the rotational and translational states within the filter, while reflecting a representative measurement covariance. Synthetic monocular images of the European Space Agency's Envisat spacecraft are used to generate datasets for training, validation, and testing of the CNNs. Likewise, the images are used to recreate a representative close-proximity scenario for the validation of the proposed filter.
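The heatmap-to-covariance construction is detailed in the thesis itself; as a minimal illustrative sketch of the underlying idea, one can interpret each keypoint heatmap as an unnormalized probability distribution over pixel coordinates and take its mean and second central moment as the measured feature location and its 2x2 covariance. The function below is a hypothetical NumPy example; the function name and the clipping/normalization choices are assumptions, not the thesis implementation.

```python
import numpy as np

def heatmap_covariance(heatmap: np.ndarray):
    """Return the weighted-mean keypoint location and the 2x2 covariance
    of a single-keypoint heatmap, treating the clipped and normalized
    activations as a probability distribution over pixel coordinates."""
    h, w = heatmap.shape
    weights = np.clip(heatmap, 0.0, None)      # discard negative activations
    weights = weights / weights.sum()          # normalize to a distribution
    ys, xs = np.mgrid[0:h, 0:w]                # per-pixel row/column indices
    mean = np.array([np.sum(weights * xs), np.sum(weights * ys)])
    dx, dy = xs - mean[0], ys - mean[1]
    cov = np.array([[np.sum(weights * dx * dx), np.sum(weights * dx * dy)],
                    [np.sum(weights * dx * dy), np.sum(weights * dy * dy)]])
    return mean, cov
```

A sharply peaked heatmap then yields a small covariance (a confident measurement), while a diffuse or multi-modal heatmap yields a large one, which is what allows the filter to weight each feature appropriately.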

This research work then extends the validation from a purely synthetic one to a more comprehensive on-ground validation. To this end, ESA's GNC Rendezvous, Approach and Landing Simulator testbed is used to validate the proposed CNN-based pose estimation system on representative rendezvous scenarios, with special focus on solving the domain shift problem that characterizes CNNs trained on synthetic datasets when tested on more realistic imagery. To solve this problem, a novel augmentation technique focused on texture randomization is introduced, aimed at improving the CNN robustness against previously unseen target textures. The results demonstrate increased robustness towards realistic imagery, as randomizing the texture of the target spacecraft during training forces the CNN to generalize across textures and to focus on the shape of the target. However, a performance decrease under highly adverse illumination conditions or low camera exposures suggests that additional augmentation techniques are required to tackle the domain shift from an illumination standpoint.
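The specifics of the texture randomization scheme (texture source, blending strategy) are given in the thesis; the sketch below shows one plausible minimal variant, in which a low-frequency random texture is blended into the spacecraft region of a training image using a segmentation mask. The function name, the 8x upsampling factor, and the `blend` parameter are illustrative assumptions.

```python
import numpy as np

def randomize_texture(image: np.ndarray, mask: np.ndarray,
                      rng: np.random.Generator, blend: float = 0.5) -> np.ndarray:
    """Blend a low-frequency random texture into the masked target region,
    so the network cannot rely on surface appearance and must learn shape.
    `mask` is a boolean (H, W) array marking spacecraft pixels."""
    h, w, c = image.shape
    ch, cw = -(-h // 8), -(-w // 8)                     # ceiling division
    coarse = rng.uniform(0, 255, size=(ch, cw, c))      # coarse random noise
    texture = np.kron(coarse, np.ones((8, 8, 1)))[:h, :w]  # upsample 8x
    out = image.astype(np.float64)
    out[mask] = (1.0 - blend) * out[mask] + blend * texture[mask]
    return out.astype(np.uint8)
```

Applied with a fresh random texture at every training epoch, an augmentation of this kind decorrelates surface appearance from the keypoint labels, leaving target shape as the only stable cue.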

In response to this need, and in order to extend the on-ground validation to the entire navigation system, this research work proceeds by introducing the on-ground validation of a CNN-based Unscented Kalman Filter. The validation is carried out at Stanford's robotic Testbed for Rendezvous and Optical Navigation on a dataset of realistic laboratory images, which simulate rendezvous trajectories of a servicer spacecraft approaching the Tango spacecraft from the PRISMA mission. The validation is performed at different levels of the navigation system by first training and testing the adopted CNN on SPEED+, the next-generation spacecraft pose estimation dataset, with specific emphasis on the domain shift between a synthetic domain and a laboratory domain. A novel data augmentation scheme based on light randomization is proposed to improve the CNN robustness under adverse viewing conditions. Next, the entire navigation system is tested on two representative rendezvous trajectories. Results indicate that the inclusion of a new scheme to adaptively scale the heatmap-based measurement error covariance improves filter robustness, returning centimeter-level position errors and moderate attitude accuracies at steady state. Thanks to the proposed adaptive method, the filter does not diverge during periods of low measurement accuracy, suggesting that a proper representation of the measurement uncertainty, combined with an adaptive measurement error covariance, is key to improving navigation robustness.
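The exact adaptive scaling law is defined in the thesis; as a minimal sketch of one plausible variant, each heatmap-derived covariance block can be inflated when its peak activation (a proxy for detection confidence) is low, so that unreliable keypoints are down-weighted in the filter update. The function name, the inverse-peak scaling, and the `floor` parameter below are illustrative assumptions, not the method from the thesis.

```python
import numpy as np

def adaptive_measurement_covariance(base_cov: np.ndarray,
                                    peak_values: np.ndarray,
                                    floor: float = 1e-3) -> np.ndarray:
    """Inflate each keypoint's 2x2 heatmap-derived covariance block by the
    inverse of its peak activation, so low-confidence detections are
    down-weighted by the filter update instead of corrupting the state.
    `base_cov` is block-diagonal with one 2x2 block per keypoint, and
    `peak_values` holds the corresponding heatmap maxima in [0, 1]."""
    scales = 1.0 / np.maximum(peak_values, floor)   # low peak -> strong inflation
    R = base_cov.copy()
    for i, s in enumerate(scales):
        R[2 * i:2 * i + 2, 2 * i:2 * i + 2] *= s
    return R
```

Scaling the measurement noise rather than rejecting measurements outright keeps the filter observable during brief stretches of poor imagery, which is consistent with the reported absence of divergence at low measurement accuracy.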
Original language: English
Qualification: Doctor of Philosophy
Awarding Institution
  • Delft University of Technology
Supervisors/Advisors
  • Gill, E.K.A., Supervisor
  • Menicucci, A., Supervisor
Award date: 16 Nov 2022
Publication status: Published - 2022

Keywords

  • Active Debris Removal
  • Relative Navigation
  • Convolutional Neural Networks
  • Relative Pose Estimation
  • On-ground Validation
  • Artificial Intelligence
