The Future of Teleradiology
Author: Naoyuki Kitamura, M.D.
President, MNES Inc. & Director, Kasumi Clinic
Editor: Tetsuya Tanimoto, M.D.
I graduated from Hiroshima University School of Medicine in 1993 and have belonged to the affiliated group of the university's Department of Radiology ever since. After working in general hospitals, in 2000 I launched a startup company specializing in teleradiology: MNES Inc. In 2015, I also established Kasumi Clinic, which performs various radiological examinations, and I head both institutions as president of the company and director of the clinic. MNES is an acronym for Medical Network Systems, and the company has two major undertakings: one is to provide teleradiological diagnostics for clinical imaging, and the other is to popularize our cloud-based electronic medical record system, LOOKREC. As for the doctors among our employees, we have eleven full-time and fourteen part-time diagnostic radiologists, twelve part-time neurosurgeons, and other part-time specialists, including surgeons, internists, and pathologists. More than half of our doctors are female, and they can work from home when necessary, such as when their children get sick. In fact, three female doctors continued to work before and after giving birth while with our company.
Our clinic is located in the Higashi-Shinonome-Honmachi district of South Ward, Hiroshima. We have two 1.5-tesla MRI scanners and a CT scanner; the diagnostic imaging center occupies the third floor of our company's building, and the clinic occupies the first and second floors. We perform about 8,000 examinations annually. Most requests for examinations come from nearby Hiroshima University Hospital, and our main business is to send clients radiological images with diagnostic reports. Hiroshima Prefecture has numerous areas without doctors, second only to Hokkaido among the 47 prefectures. Compared with other countries, Japan has a great deal of diagnostic imaging equipment, including computed tomography (CT) and magnetic resonance imaging (MRI) scanners, but the number of diagnostic radiologists is limited. Therefore, I took the initiative to introduce teleradiology for patients living on remote islands and in mountainous areas. Currently, we have a network of 42 medical institutions, mainly in Hiroshima Prefecture. Previously we handled radiological images printed on film, but for the past four to five years we have used a system on Google Cloud Platform (GCP) to host digital images, and the cloud-based LOOKREC was completed just one year ago. With this in place, we aim to build a wider network, and we are entering a turning point in our entrepreneurship.
In January 2018, the Medical Check Studio Tokyo Ginza Clinic opened in Tokyo; it specializes in health checks for brain diseases, called Brain Docks. We are affiliated with the clinic and provide teleradiological diagnostics for these images. The brain images captured in Tokyo are uploaded to the electronic medical record cloud, and a diagnostic radiologist in Hiroshima or elsewhere performs the first screening. Then a neurosurgeon performs a second screening, something that has rarely been done in the past. Our cloud system has enabled this double-check procedure, and various doctors, even some working abroad, have joined our group from Hiroshima University, Kagoshima University, Tokushima University, and other institutions. Our method is one way to compensate for the shortage of doctors, and we can collaborate with many doctors both in Japan and in other countries.
Furthermore, for Brain Dock-based aneurysm diagnoses, we are affiliated with LPixel Inc., a venture company specializing in image analysis, and together we have begun developing a computer-assisted detection (CAD) system that uses artificial intelligence (AI) trained on real clinical data. The AI-based CAD system has not yet obtained regulatory approval and is still in the developmental stage as collaborative research; however, we hope that AI will enhance the diagnostic ability of radiologists and neurosurgeons in the near future. In magnetic resonance angiography (MRA) scans of the cerebral arteries, it is necessary to evaluate anywhere from 100 to around 200 original images, and we usually search for an aneurysm while rotating three-dimensional images reconstructed by the maximum intensity projection (MIP) method. To ensure diagnostic accuracy we must still confirm the original images, but we aim for AI to take over this laborious diagnostic process, which should further reduce oversights. In other collaborative research, we are developing image-detection technologies for cerebral blood vessel stenosis, brain and cerebrospinal fluid volume metrics, and the quantification of white matter lesions. Because we can obtain a considerable amount of brain imaging data, we plan to build a highly reliable AI-based diagnostic support system by utilizing data from high-quality images.
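The MIP method mentioned above is conceptually simple: for each ray through the volume, only the brightest voxel is kept, so contrast-filled vessels stand out against darker tissue. A minimal sketch in Python with NumPy (the array shape and the single bright "vessel" voxel are illustrative, not from any real scan):

```python
import numpy as np

def mip(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Maximum intensity projection: keep only the brightest voxel along one axis."""
    return volume.max(axis=axis)

# Hypothetical MRA volume: slices x height x width
volume = np.zeros((160, 256, 256))
volume[80, 128, 128] = 1.0  # a single bright voxel standing in for a vessel

# Collapse the slice axis into one 2-D view; rotating the volume before
# projecting yields the other viewing angles used when hunting for aneurysms.
projection = mip(volume, axis=0)
print(projection.shape)      # (256, 256)
print(projection[128, 128])  # 1.0
```

In practice the volume would be resampled at many rotation angles and each projection inspected in turn, which is exactly the laborious step the CAD system is meant to assist.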
Additionally, we have constructed a CAD system for breast cancer using breast MRI. To find lesions during a breast cancer workup and to evaluate their spread, a rapid infusion of contrast medium is used to perform a dynamic study that follows the contrast medium over time as it moves through the vessels in the lesions or mammary glands. Nearly 800 images are taken during the examination: in addition to unenhanced imaging, nearly 200 images are acquired in each of the early, second, and third phases after contrast infusion. In the traditional method, doctors must do the laborious work of finding lesions by synchronizing these numerous images. When a suspected lesion is found, the area is circumscribed to create a time-signal curve (dynamic curve), a diagnostic indicator that captures a characteristic of cancer: the contrast medium leaves quickly once it enters the lesion. In our system, these procedures run automatically for each pixel and voxel to represent the change in signal value, and the curves are divided into several patterns; lesions with rapid in-and-out kinetics (rapid washout), which are considered common in breast cancer, are shown in red on the display, and such a CAD system is very useful for diagnosing cancer. The system also shows promise for evaluating inhomogeneous lesions: even when the cancer does not form a tumor mass, as with ductal carcinoma in situ (DCIS) of the breast, our impression is that the CAD system can depict the lesions with considerable accuracy.
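The per-voxel curve classification described above can be sketched as follows. This is a simplified illustration, not our production system: the thresholds are hypothetical and not clinically validated, and a real pipeline would work on motion-corrected volumes rather than a single hand-typed curve.

```python
import numpy as np

def classify_kinetics(signal: np.ndarray, washout_thr: float = -0.10,
                      persistent_thr: float = 0.10) -> str:
    """Classify one voxel's time-signal curve by its trend after early enhancement.

    signal: intensities at [pre-contrast, early phase, ..., last phase].
    Thresholds are illustrative placeholders.
    """
    early, late = signal[1], signal[-1]
    change = (late - early) / early  # relative change after the early peak
    if change <= washout_thr:
        return "washout"     # rapid in-and-out: the pattern flagged in red
    if change >= persistent_thr:
        return "persistent"  # steadily rising enhancement
    return "plateau"

curve = np.array([100.0, 250.0, 200.0])  # enhances quickly, then falls off
print(classify_kinetics(curve))  # washout
```

Running this for every voxel and coloring the "washout" voxels red reproduces, in miniature, the display logic the paragraph describes.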
Thus, we are trialing two types of CAD systems, for cerebral aneurysms and for breast cancer, using real clinical data, and I realize that I cannot do my job as a diagnostic radiologist without this technology. In addition, we have constructed a CAD system for pulmonary nodules on chest CT scans and an imaging system that uses AI to automatically reconstruct the meniscus of the knee in three dimensions from MRI scans. For the latter, further research and development is planned, comparing the reconstructed structures, which measure only a few millimeters, against arthroscopic data.
As described above, as a diagnostic radiologist I expect to construct AI systems that assist doctors who handle a particularly large number of images daily by identifying, extracting, and quantifying specific structures or areas of disease. For example, cerebral atrophy is a subjective diagnosis, and diagnostic results may differ considerably among neurosurgeons and radiologists; however, by quantifying it automatically, we might manage to establish new diagnostic criteria. Using AI to compare many image series and measure the amount of change, whether detecting lesions or capturing dynamic changes even in normal structures, will be clinically very useful. Finally, AI might eventually make the same qualitative diagnosis of a detected lesion as a diagnostic radiologist, but whether that will happen is still unknown at this stage.
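The idea of turning a subjective impression like atrophy into a number can be illustrated with a toy sketch: given binary brain masks from two scans of the same patient (in reality produced by a segmentation model, not hand-made), the percent change in segmented volume is a simple, reproducible quantity. Everything here, including the voxel size, is hypothetical.

```python
import numpy as np

def volume_change_percent(mask_then: np.ndarray, mask_now: np.ndarray,
                          voxel_ml: float) -> float:
    """Percent change in segmented brain volume between two scans.

    mask_*: binary arrays (1 = brain voxel) from a segmentation step.
    voxel_ml: volume of one voxel in milliliters (from the scan header).
    """
    v_then = mask_then.sum() * voxel_ml
    v_now = mask_now.sum() * voxel_ml
    return 100.0 * (v_now - v_then) / v_then

# Toy masks: the follow-up scan has lost 20% of its brain voxels.
then = np.ones((10, 10, 10))
now = then.copy()
now[:2] = 0
print(volume_change_percent(then, now, voxel_ml=1.0))  # -20.0
```

The clinical value would come from applying the same measurement consistently across many series, which is exactly where an automated system outperforms a visual impression.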
Also, although this is not directly related to radiologic imaging, an auxiliary AI system using natural language processing is in operation for writing radiological diagnostic reports. When a doctor enters keywords or sentences, such as "hepatocellular carcinoma," the system instantly returns analytical results from past data that include the phrase "hepatocellular carcinoma." We can not only quote past reports but also analyze which radiologists used which expressions, including individual differences, which may enable us to standardize reporting methods. Because such supplementary information can also be obtained with AI technology, we may arrive at serendipitous ideas that are useful for future clinical practice.
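The retrieval-and-tally step behind such a reporting assistant can be sketched very simply. This is a minimal keyword-matching illustration with made-up reports and author names; the actual system presumably uses more sophisticated language processing.

```python
from collections import Counter

# Hypothetical past reports: (radiologist, report text)
reports = [
    ("Dr. A", "Hepatocellular carcinoma is suspected in segment 8."),
    ("Dr. B", "Findings consistent with hepatocellular carcinoma."),
    ("Dr. A", "No evidence of hepatocellular carcinoma."),
    ("Dr. C", "Simple hepatic cyst; no sign of malignancy."),
]

def lookup(keyword: str):
    """Return past reports containing the keyword, plus a per-author tally."""
    hits = [(author, text) for author, text in reports
            if keyword.lower() in text.lower()]
    by_author = Counter(author for author, _ in hits)
    return hits, by_author

hits, by_author = lookup("hepatocellular carcinoma")
print(len(hits))           # 3
print(by_author["Dr. A"])  # 2
```

The per-author tally is the piece that supports the standardization idea: it makes visible which phrasings each radiologist favors for the same finding.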
In summary, we are focusing on teleradiology using AI and building a new platform for telemedicine. Many obstacles still need to be overcome to make medical data available on the cloud, but we have established our current foundation on GCP. In the future, we will make our system available across other cloud platforms, including Amazon Web Services and Microsoft Azure, and will upload data to the cloud regardless of whether the medical institution is domestic or overseas. We would like to invite not only radiologists but all doctors involved with radiologic images to help accumulate our knowledge and wisdom here. Because we live in an era in which data from millions of people are readily available, we aim to use high-quality medical data from the excellent medical practice in Japan to quickly construct a clinically useful system in cooperation with the AI engineers with whom we are working.