{"id":2,"date":"2018-03-21T14:14:28","date_gmt":"2018-03-21T14:14:28","guid":{"rendered":"https:\/\/workshops.ap-lab.ca\/workshops-2\/?page_id=2"},"modified":"2023-11-16T11:51:26","modified_gmt":"2023-11-16T16:51:26","slug":"sample-page","status":"publish","type":"page","link":"https:\/\/workshops.ap-lab.ca\/aecai2023\/","title":{"rendered":"Home"},"content":{"rendered":"<hr \/>\n<h2>CONGRATULATIONS TO OUR BEST PAPER WINNERS!<\/h2>\n<p style=\"font-weight: 400;\"><strong><span style=\"color: #800080;\"><a style=\"color: #800080;\" href=\"https:\/\/workshops.ap-lab.ca\/aecai2023\/wp-content\/uploads\/sites\/11\/2023\/10\/FORMATTED_UNBLINDED_Submission21.pdf\">Real-time surgical tool detection with multi-scale positional encoding and contrastive learning<\/a>, Gerardo Loza Galindo (University of Leeds)<\/span><\/strong><\/p>\n<p style=\"font-weight: 400;\"><strong><span style=\"color: #800080;\"><a style=\"color: #800080;\" href=\"https:\/\/workshops.ap-lab.ca\/aecai2023\/wp-content\/uploads\/sites\/11\/2023\/10\/FORMATTED_UNBLINDED_Submission5.pdf\">First-in-human Realtime AI-assisted Augmented Reality for Renal Surgery<\/a>, Jasper Hofman (Orsi Academy)<\/span><\/strong><\/p>\n<p style=\"font-weight: 400;\"><strong><span style=\"color: #800080;\"><a style=\"color: #800080;\" href=\"https:\/\/workshops.ap-lab.ca\/aecai2023\/wp-content\/uploads\/sites\/11\/2023\/10\/FORMATTED_UNBLINDED_Submission33.pdf\">ASSIST-U: A System for Segmentation and Image Style Transfer for Ureteroscopy<\/a>, Daiwei Lu (Vanderbilt University)<\/span><\/strong><\/p>\n<hr \/>\n<p>The joint 17th AE-CAI, 10th CARE and 6th OR 2.0 workshop will bring together researchers, clinicians, and medical companies that are working on advancing the field of Medical Imaging and Augmented Reality, Augmented Environments\u00a0for Computer Assisted Interventions (AE-CAI), Computer Assisted and Robotic Endoscopy (CARE) and Context-Aware Operating Theaters\u00a0(OR 2.0). 
This workshop will feature high-quality, original papers and invited keynote presentations on the latest scientific, technical and translational advances in developing the next generation of AE-CAI, CARE and OR systems.<\/p>\n<h3>IMPORTANT DATES<\/h3>\n<p><strong>Paper Submission Deadline:<\/strong> <del>July 6th, 2023<\/del> \u00a0<del>July 14th, 2023 \u00a0<\/del>July 16th, 2023<\/p>\n<p><span style=\"color: #808000;\"><strong>***\u00a0<\/strong><strong>In response to popular demand, we will leave the <b>AE-CAI submission portal open through July 16<sup>th<\/sup>, so please upload your final manuscript by July 16<sup>th<\/sup><\/b>. However, we would really appreciate it if you could <b>start your submission, list your authors, and insert your abstract by July 6<sup>th<\/sup><\/b>, so we can sort out our review assignments.***\u00a0<\/strong><\/span><\/p>\n<p><del><strong>Notification of Acceptance:<\/strong> Aug 7th, 2023<\/del><br \/>\n<del><strong>Response to Reviewers and Revised Manuscripts Due:<\/strong> Aug 20th, 2023<\/del><br \/>\n<del><strong>Camera-ready Papers Due &amp; Presenting Author Registration Deadline:<\/strong> Aug 25th, 2023<\/del><br \/>\n<strong>Workshop:<\/strong> Oct 8th, 2023<\/p>\n<h3>PROCEEDINGS<\/h3>\n<h3>THEMES<\/h3>\n<p>We encourage the submission of papers that demonstrate clinical relevance, clinical applications and validation studies, as well as papers describing bold new ideas, proofs of concept, early-stage prototypes and innovative ways to make such tools more accessible, affordable and environmentally friendly, rather than incremental improvements. 
Topics to be addressed include:<\/p>\n<ul>\n<li>Augmented, virtual and mixed reality<\/li>\n<li>Medical image visualization and understanding<\/li>\n<li>Surgical simulation and training<\/li>\n<li>Image-guided and robotic surgery<\/li>\n<li>HCI considerations in computer-assisted surgery<\/li>\n<li>Accessible and affordable surgical technologies<\/li>\n<li>Robotic and navigated endoscopy<\/li>\n<li>Stereoscopic and ultrasound endoscopy<\/li>\n<li>Endoscopy skill training and evaluation<\/li>\n<li>Endoscope tracking, navigation, planning and simulation<\/li>\n<li>Endoscopic video computing and computer vision for endoscopic applications<\/li>\n<li>Surgical training and assessment<\/li>\n<li>Context-Aware Operating Theatres and team communication<\/li>\n<li>Cognitive models for clinical environment, intervention and training<\/li>\n<li>Sensors, wearable and implantable electronics<\/li>\n<li>Decision support networks to enhance surgical procedural assistance<\/li>\n<\/ul>\n<h3>AWARDS<\/h3>\n<p>Sponsored by:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-medium wp-image-413\" src=\"https:\/\/workshops.ap-lab.ca\/aecai2023\/wp-content\/uploads\/sites\/11\/2021\/06\/NDI-Logo-300x109.jpg\" alt=\"\" width=\"300\" height=\"109\" srcset=\"https:\/\/workshops.ap-lab.ca\/aecai2023\/wp-content\/uploads\/sites\/11\/2021\/06\/NDI-Logo-300x109.jpg 300w, https:\/\/workshops.ap-lab.ca\/aecai2023\/wp-content\/uploads\/sites\/11\/2021\/06\/NDI-Logo-1024x372.jpg 1024w, https:\/\/workshops.ap-lab.ca\/aecai2023\/wp-content\/uploads\/sites\/11\/2021\/06\/NDI-Logo-768x279.jpg 768w, https:\/\/workshops.ap-lab.ca\/aecai2023\/wp-content\/uploads\/sites\/11\/2021\/06\/NDI-Logo-1536x558.jpg 1536w, https:\/\/workshops.ap-lab.ca\/aecai2023\/wp-content\/uploads\/sites\/11\/2021\/06\/NDI-Logo-2048x745.jpg 2048w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/> <img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-medium wp-image-367\" 
src=\"https:\/\/workshops.ap-lab.ca\/aecai2023\/wp-content\/uploads\/sites\/11\/2020\/10\/image-e1623334269885-300x86.png\" alt=\"\" width=\"300\" height=\"86\" srcset=\"https:\/\/workshops.ap-lab.ca\/aecai2023\/wp-content\/uploads\/sites\/11\/2020\/10\/image-e1623334269885-300x86.png 300w, https:\/\/workshops.ap-lab.ca\/aecai2023\/wp-content\/uploads\/sites\/11\/2020\/10\/image-e1623334269885-768x220.png 768w, https:\/\/workshops.ap-lab.ca\/aecai2023\/wp-content\/uploads\/sites\/11\/2020\/10\/image-e1623334269885.png 911w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/p>\n","protected":false},"excerpt":{"rendered":"<p>CONGRATULATIONS TO OUR BEST PAPER WINNERS! Real-time surgical tool detection with multi-scale positional encoding and contrastive learning, Gerardo Loza Galindo&hellip; <\/p>\n","protected":false},"author":2,"featured_media":608,"parent":0,"menu_order":1,"comment_status":"closed","ping_status":"open","template":"","meta":{"footnotes":""},"class_list":["post-2","page","type-page","status-publish","has-post-thumbnail","hentry"],"_links":{"self":[{"href":"https:\/\/workshops.ap-lab.ca\/aecai2023\/wp-json\/wp\/v2\/pages\/2","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/workshops.ap-lab.ca\/aecai2023\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/workshops.ap-lab.ca\/aecai2023\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/workshops.ap-lab.ca\/aecai2023\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/workshops.ap-lab.ca\/aecai2023\/wp-json\/wp\/v2\/comments?post=2"}],"version-history":[{"count":98,"href":"https:\/\/workshops.ap-lab.ca\/aecai2023\/wp-json\/wp\/v2\/pages\/2\/revisions"}],"predecessor-version":[{"id":711,"href":"https:\/\/workshops.ap-lab.ca\/aecai2023\/wp-json\/wp\/v2\/pages\/2\/revisions\/711"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/workshops.ap-lab.ca\/aecai2023\/wp-json\/wp\/v2\/media\/608"}],"wp:attachment":[{"href":"https:\/\/workshops.ap-lab.ca\/aecai2023\/wp-json\/wp\/v2\/media?parent=2"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}