Tissue tracking under long-horizon occlusions with contrastive learning

Source: PubMed "hive"
Int J Comput Assist Radiol Surg. 2026 Mar 6. doi: 10.1007/s11548-026-03585-4. Online ahead of print.

ABSTRACT

PURPOSE: Continuous tracking of soft-tissue regions in minimally invasive surgery is essential for computer-assisted interventions, yet it remains highly challenging due to non-rigid tissue deformation, unconstrained endoscopic camera motion, and frequent occlusions by surgical instruments. In particular, long-horizon occlusions, where regions of interest exit the field of view and later re-enter from different angles, remain largely unaddressed by existing online tracking methods.

METHODS: We propose a real-time tracking pipeline that integrates dense optical flow for short-term region tracking, monocular visual odometry for camera localization and depth estimation, and a self-supervised template matching module based on contrastive learning for robust tissue re-identification. The template matching component employs a variational encoder trained with time cycle consistency, which learns deformation-aware visual representations without requiring manual annotations.

RESULTS: We evaluate our approach on the public SurgT dataset and on a synthetic dataset explicitly designed to feature long-horizon occlusions. The results show that the proposed pipeline maintains stable tracking performance under extended occlusions and viewpoint changes, enabling accurate re-identification of soft-tissue regions after reappearance. The contrastive variational encoder improves robustness to tissue deformation and appearance variability compared with reconstruction-based or purely geometric baselines.

CONCLUSIONS: The proposed framework provides a practical, self-supervised solution for long-horizon tissue tracking in minimally invasive surgery. It shows promising performance, although quantitative evaluation is currently limited to synthetic data because no suitable real-world benchmarks exist. The code is available at https://github.com/Essex-AI-Innovation-Centre/cl-ve-tracking.

PMID:41790427 | DOI:10.1007/s11548-026-03585-4
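The abstract gives no implementation details, so the following is only a minimal sketch of what a contrastive variational encoder trained with time cycle consistency could look like in PyTorch. The architecture, the InfoNCE-style loss, and all names here (VariationalEncoder, cycle_contrastive_loss, latent_dim, temperature) are assumptions for illustration, not the authors' implementation; consult the linked repository for the real code.

```python
# Hypothetical sketch: contrastive variational encoder + time-cycle loss.
# Not the authors' implementation; shapes and layers are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalEncoder(nn.Module):
    """Encodes an RGB tissue patch into a Gaussian latent (mu, logvar)."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fc_mu = nn.Linear(128, latent_dim)
        self.fc_logvar = nn.Linear(128, latent_dim)

    def forward(self, x):
        h = self.backbone(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: sample z ~ N(mu, sigma^2).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar

def cycle_contrastive_loss(z_anchor, z_cycled, temperature: float = 0.07):
    """InfoNCE-style objective: the embedding of a patch tracked forward in
    time and back again (z_cycled) should return to its original embedding
    (z_anchor, the diagonal positive); other patches in the batch serve as
    negatives."""
    a = F.normalize(z_anchor, dim=1)
    b = F.normalize(z_cycled, dim=1)
    logits = a @ b.t() / temperature                    # (B, B) similarities
    targets = torch.arange(a.size(0), device=a.device)  # positives on diagonal
    return F.cross_entropy(logits, targets)
```

In this reading, z_cycled would come from tracking each patch forward a few frames and back again (e.g., with the pipeline's dense optical flow), so the loss rewards embeddings that stay stable under deformation and viewpoint change, which is what re-identification after occlusion needs.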
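Similarly hedged, a coarse sliding-window search is one plausible way such an encoder could act as the template matching module once a region re-enters the field of view after a long-horizon occlusion. The function reidentify, its parameters, and the cosine-similarity criterion below are illustrative placeholders, not the published method.

```python
# Hypothetical re-identification step using the frozen encoder above;
# window size, stride, and threshold are illustrative assumptions.
import numpy as np
import torch
import torch.nn.functional as F

def reidentify(frame_rgb, encoder, template_z, patch=64, stride=32, thresh=0.8):
    """Scan the frame with a sliding window, embed each candidate patch with
    the frozen encoder, and return the best-matching box (x, y, w, h) if its
    cosine similarity to the stored template embedding exceeds the threshold;
    otherwise return None (region still considered occluded)."""
    H, W = frame_rgb.shape[:2]
    best_score, best_box = -1.0, None
    encoder.eval()
    with torch.no_grad():
        for y in range(0, H - patch + 1, stride):
            for x in range(0, W - patch + 1, stride):
                crop = np.ascontiguousarray(frame_rgb[y:y + patch, x:x + patch])
                inp = torch.from_numpy(crop).permute(2, 0, 1).float()[None] / 255.0
                _, mu, _ = encoder(inp)  # use the latent mean at test time
                score = F.cosine_similarity(mu, template_z, dim=1).item()
                if score > best_score:
                    best_score, best_box = score, (x, y, patch, patch)
    return best_box if best_score >= thresh else None
```

Here template_z would be the latent mean of the region's appearance captured before it left the field of view; short-term tracking between occlusions would still be handled by optical flow, with this search invoked only when the region is lost.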