Source: An Augmented Reality Microscope for Cancer Detection (Google Research)
Posted by Martin Stumpe, Technical Lead and Craig Mermel, Product Manager, Google Brain Team
Applications of deep learning to medical disciplines including ophthalmology, dermatology, radiology, and pathology have recently shown great promise to increase both the accuracy and availability of high-quality healthcare to patients around the world. At Google, we have also published results showing that a convolutional neural network is able to detect breast cancer metastases in lymph nodes at a level of accuracy comparable to a trained pathologist. However, because direct tissue visualization using a compound light microscope remains the predominant means by which a pathologist diagnoses illness, a critical barrier to the widespread adoption of deep learning in pathology is the dependence on having a digital representation of the microscopic tissue.
Today, in a talk delivered at the Annual Meeting of the American Association for Cancer Research (AACR), with an accompanying paper “An Augmented Reality Microscope for Real-time Automated Detection of Cancer” (under review), we describe a prototype Augmented Reality Microscope (ARM) platform that we believe can help accelerate and democratize the adoption of deep learning tools for pathologists around the world. The platform consists of a modified light microscope that enables real-time image analysis and presentation of the results of machine learning algorithms directly into the field of view. Importantly, the ARM can be retrofitted into existing light microscopes found in hospitals and clinics around the world using low-cost, readily available components, and without the need for whole slide digital versions of the tissue being analyzed.
Modern computational components and deep learning models, such as those built upon TensorFlow, will allow a wide range of pre-trained models to run on this platform. As in a traditional analog microscope, the user views the sample through the eyepiece. A machine learning algorithm projects its output back into the optical path in real-time. This digital projection is visually superimposed on the original (analog) image of the specimen to assist the viewer in localizing or quantifying features of interest. Importantly, the computation and visual feedback are fast — our present implementation runs at approximately 10 frames per second, so the model output updates seamlessly as the user scans the tissue by moving the slide and/or changing magnification.
|Left: Schematic overview of the ARM. A digital camera captures the same field of view (FoV) as the user and passes the image to an attached compute unit capable of running real-time inference of a machine learning model. The results are fed back into a custom AR display which is inline with the ocular lens and projects the model output on the same plane as the slide. Right: A picture of our prototype which has been retrofitted into a typical clinical-grade light microscope.|
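The capture-inference-overlay pipeline in the schematic can be sketched as a simple loop. This is a hypothetical illustration, not the ARM's actual code: `capture_frame`, `run_model`, and the commented-out `project_overlay` are stand-ins for the real camera, the TensorFlow model, and the AR display.

```python
import numpy as np

# Hypothetical sketch of the ARM's capture -> inference -> overlay loop.
# All three stages are stubs standing in for hardware and a real model.

def capture_frame(rng):
    """Stub camera: returns an RGB frame as an HxWx3 uint8 array."""
    return rng.integers(0, 256, size=(480, 640, 3), dtype=np.uint8)

def run_model(frame):
    """Stub model: returns a per-pixel tumor-probability map in [0, 1].
    In the real system this is real-time CNN inference on a compute unit."""
    return frame.mean(axis=2) / 255.0

def make_overlay(heatmap, threshold=0.5):
    """Binary mask of regions the model flags, to be projected into the FoV."""
    return heatmap >= threshold

rng = np.random.default_rng(0)
for _ in range(3):  # the prototype runs ~10 such iterations per second
    frame = capture_frame(rng)
    heatmap = run_model(frame)
    overlay = make_overlay(heatmap)
    # project_overlay(overlay)  # AR display inline with the ocular lens
```

Because the AR display is inline with the ocular lens and imaged on the same plane as the slide, the overlay stays registered to the tissue as long as the loop keeps up with slide movement.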
In principle, the ARM can provide a wide variety of visual feedback, including text, arrows, contours, heatmaps, or animations, and is capable of running many types of machine learning algorithms aimed at solving different problems such as object detection, quantification, or classification.
As a demonstration of the potential utility of the ARM, we configured it to run two different cancer detection algorithms: one that detects breast cancer metastases in lymph node specimens, and another that detects prostate cancer in prostatectomy specimens. These models can run at magnifications between 4-40x, and the result of a given model is displayed by outlining detected tumor regions with a green contour. These contours help draw the pathologist’s attention to areas of interest without obscuring the underlying tumor cell appearance.
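To show why an outline obscures less tissue than a filled region, here is an illustrative sketch (not the ARM's actual rendering code) that reduces a binary tumor mask to a one-pixel-wide contour, so only the boundary would be drawn over the specimen.

```python
import numpy as np

def mask_to_contour(mask):
    """Return boundary pixels of a binary mask: pixels that are in the mask
    but have at least one 4-connected background neighbor."""
    padded = np.pad(mask, 1, constant_values=False)
    # A pixel is interior if all four neighbors are also in the mask.
    interior = (
        padded[:-2, 1:-1] & padded[2:, 1:-1] &
        padded[1:-1, :-2] & padded[1:-1, 2:]
    ) & mask
    return mask & ~interior

# Toy example: a 3x3 detected "tumor" region inside a 7x7 field of view.
mask = np.zeros((7, 7), dtype=bool)
mask[2:5, 2:5] = True
contour = mask_to_contour(mask)
# Only the 8 boundary pixels remain; the single interior pixel is dropped.
```

In the real system the contour would be rendered in green into the optical path, leaving the underlying tumor cell appearance visible inside the outline.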
|Example view through the lens of the ARM. These images show the output of the lymph node metastasis model with 4x, 10x, 20x, and 40x microscope objectives.|
While both cancer models were originally trained on images from a whole slide scanner with a significantly different optical configuration, the models performed remarkably well on the ARM with no additional re-training. For example, the lymph node metastasis model had an area-under-the-curve (AUC) of 0.98 and our prostate cancer model had an AUC of 0.96 for cancer detection in the field of view (FoV) when run on the ARM — only slightly lower than the performance obtained on whole slide images (WSI). We believe it is likely that the performance of these models can be further improved by additional training on digital images captured directly from the ARM itself.
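For readers unfamiliar with the metric, AUC equals the probability that a randomly chosen positive FoV is scored higher than a randomly chosen negative one. The sketch below computes it from that definition; the labels and scores are toy values, not the actual model outputs behind the 0.98 and 0.96 figures.

```python
def auc(labels, scores):
    """AUC as the probability that a random positive outranks a random
    negative (ties count as half a win)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy FoV-level labels (1 = tumor present) and model scores.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]
print(auc(labels, scores))  # 0.8888888888888888
```

One misranked pair (the positive scored 0.4 against the negative scored 0.6) costs 1/9 of the area, which is why a high AUC tolerates a few errors.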
We believe that the ARM has potential for a large impact on global health, particularly for the diagnosis of infectious diseases, including tuberculosis and malaria, in developing countries. Furthermore, even in hospitals that will adopt a digital pathology workflow in the near future, ARM could be used in combination with the digital workflow where scanners still face major challenges or where rapid turnaround is required (e.g. cytology, fluorescent imaging, or intra-operative frozen sections). Of course, light microscopes have proven useful in many industries other than pathology, and we believe the ARM can be adapted for a broad range of applications across healthcare, life sciences research, and material science. We’re excited to continue to explore how the ARM can help accelerate the adoption of machine learning for positive impact around the world.
Except as otherwise noted, this content is licensed under a Creative Commons Attribution 3.0 License, and code samples are licensed under the Apache 2.0 License. For details, see our Terms of Service.