TY - GEN
T1 - Visual Tracking of Needle Tip in 2D Ultrasound based on Global Features in a Siamese Architecture
AU - Yan, Wanquan
AU - Ding, Qingpeng
AU - Chen, Jianghua
AU - Yan, Kim
AU - Tang, Raymond Shing Yan
AU - Cheng, Shing Shin
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Ultrasound (US) is widely used in image-guided needle procedures. Correctly tracking the needle tip position in US images during the procedure plays an important role in improving the needle targeting accuracy and patient safety. This paper presents a learning-based visual tracking network with a Siamese architecture, which makes full use of the attention mechanism to explore the potential of global features and takes advantage of an online target model prediction module to robustly track the needle tip in US images. Several self- and cross-attention modules are applied to learn global features from the whole US image. A discriminative target model is also learned as a complementary part to improve the discriminability of the proposed tracker. The template used during tracking is updated frequently according to the tracking results to ensure that the tracker can always capture the latest characteristics of the appearance of the needle tip. Experimental results in both phantom and tissue showed that the proposed tracking network was more robust than other state-of-the-art visual trackers. The mean success rates of the proposed tracker were 7.1% and 9.2% higher than those of the second-best-performing visual tracker when the needle was inserted by motors and by human hands, respectively, in the tissue experiments.
AB - Ultrasound (US) is widely used in image-guided needle procedures. Correctly tracking the needle tip position in US images during the procedure plays an important role in improving the needle targeting accuracy and patient safety. This paper presents a learning-based visual tracking network with a Siamese architecture, which makes full use of the attention mechanism to explore the potential of global features and takes advantage of an online target model prediction module to robustly track the needle tip in US images. Several self- and cross-attention modules are applied to learn global features from the whole US image. A discriminative target model is also learned as a complementary part to improve the discriminability of the proposed tracker. The template used during tracking is updated frequently according to the tracking results to ensure that the tracker can always capture the latest characteristics of the appearance of the needle tip. Experimental results in both phantom and tissue showed that the proposed tracking network was more robust than other state-of-the-art visual trackers. The mean success rates of the proposed tracker were 7.1% and 9.2% higher than those of the second-best-performing visual tracker when the needle was inserted by motors and by human hands, respectively, in the tissue experiments.
UR - http://www.scopus.com/inward/record.url?scp=85168702255&partnerID=8YFLogxK
U2 - 10.1109/ICRA48891.2023.10160822
DO - 10.1109/ICRA48891.2023.10160822
M3 - Conference contribution
AN - SCOPUS:85168702255
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 4782
EP - 4788
BT - Proceedings - ICRA 2023
T2 - 2023 IEEE International Conference on Robotics and Automation, ICRA 2023
Y2 - 29 May 2023 through 2 June 2023
ER -