TY - JOUR
T1 - 4D Feet
T2 - Registering Walking Foot Shapes Using Attention Enhanced Dynamic-Synchronized Graph Convolutional LSTM Network
AU - Tajdari, Farzam
AU - Huysmans, Toon
AU - Yao, Xinhe
AU - Xu, Jun
AU - Zebarjadi, Maryam
AU - Song, Yu
PY - 2024
AB - 4D scans of dynamic, deformable human body parts help researchers better understand spatiotemporal features. However, reconstructing 4D scans using multiple asynchronous cameras poses two main challenges: 1) finding dynamic correspondences among the frames captured by each camera at that camera's timestamps (dynamic feature recognition), and 2) reconstructing 3D shapes from the combined point clouds captured by different cameras at asynchronous timestamps (multi-view fusion). Here, we introduce a generic framework able to 1) find and align dynamic features in the 3D scans captured by each camera using the non-rigid iterative closest-farthest points algorithm; 2) synchronize scans captured by asynchronous cameras through a novel ADGC-LSTM-based network capable of aligning 3D scans captured by different cameras to the timeline of a specific camera; and 3) register a high-quality template to the synchronized scans at each timestamp to form a high-quality 3D mesh model using a non-rigid registration method. With a newly developed 4D foot scanner, we validate the framework and create the first open-access dataset, namely 4D-feet. It includes 4D shapes (15 fps) of the right and left feet of 58 participants (116 feet comprising 5,147 3D frames), covering significant phases of the gait cycle. The results demonstrate the effectiveness of the proposed framework, especially in synchronizing asynchronous 4D scans.
KW - 4D foot scanner
KW - Cameras
KW - dynamic feature recognition
KW - Feature extraction
KW - Foot
KW - LSTM network
KW - nonrigid registration
KW - Point cloud compression
KW - Shape
KW - Synchronization
KW - synchronized scans
KW - Three-dimensional displays
UR - http://www.scopus.com/inward/record.url?scp=85174825546&partnerID=8YFLogxK
DO - 10.1109/OJCS.2024.3406645
M3 - Article
AN - SCOPUS:85174825546
SN - 2644-1268
VL - 5
SP - 343
EP - 355
JF - IEEE Open Journal of the Computer Society
ER -