In big data, Holy Grail = actionable insights. Which scattered pieces of data can we find to feed algorithms in order to extract valuable information, create patterns and improve efficiency and productivity? In other words, it’s not the data, it’s what you do with it.
In the UK alone, the NHS reports over 300 million patient consultations with GPs and 23 million A&E visits each year. In the thirty years since computers, and subsequently electronic health records (EHRs), came into widespread use in health facilities, seas of data points have been amassed and archived. Think of each visit anyone makes to a hospital or treatment center: during every such visit, data on symptoms, lifestyle, race, gender, medical history and more is stored. Now think of those 323 million patient visits in the UK alone and all the data points they generate, multiply that by the years since we’ve had EHRs, add to it the plethora of manually transcribed medical records from decades prior, multiply that by the number of countries there are, adjust for population size and standard deviation and you get, roughly… a headache. And a text with too many commas.
In reality, you get a change of incentives. The massive availability of data at scale, coupled with powerful technological capabilities, makes improving the quality of the healthcare that is delivered a must. From the patient’s and doctor’s perspective, personalising treatment with big data means that, based on the predicted outcome, the right care is provided to the patient at the right time by the right provider. Not only does that save lives, it also saves time and money.
The specific piece of computational arsenal underlying this shift is deep learning, a modern reincarnation of the artificial neural networks invented in the 1960s. A deep network is a collection of simple trainable units organised in layers. The data, or “input”, gets passed through these layers, with each layer hierarchically defining more abstract features of the input. Deep learning can do this without feature engineering, meaning that we don’t have to explicitly write the rules (nor resort to hardcore feature extraction or domain expertise). This is, in a way, similar to how the brain solves problems – by passing queries through various hierarchies of internalised concepts and related questions to find an answer. In the healthcare industry, the possible deep learning applications are countless. That is because, contrary to traditional machine learning (ML) algorithms, deep learning thrives on much more data: it typically needs exposure to very large datasets, often on the order of a million data points, before its layers can reliably pick out edges, concepts and differences.
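To make the “layers” idea concrete, here is a minimal sketch in plain Python of data passing through a two-layer network. The weights are made up purely for illustration – a real network learns them from data during training – but the mechanics are the same: each layer computes weighted sums of its inputs and applies a non-linearity before handing the result to the next layer.

```python
# Minimal sketch of a feed-forward neural network. All weights below
# are made up for illustration; a real network learns them from data.

def relu(x):
    """A common non-linearity: pass positives through, clamp negatives to 0."""
    return max(0.0, x)

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sum + ReLU for each unit."""
    return [relu(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def forward(inputs, network):
    """Pass the input through each layer of the network in turn."""
    for weights, biases in network:
        inputs = layer(inputs, weights, biases)
    return inputs

# A toy 2-layer network: 3 inputs -> 2 hidden units -> 1 output.
network = [
    ([[0.5, -0.2, 0.1],
      [0.3, 0.8, -0.5]], [0.0, 0.1]),   # hidden layer
    ([[1.0, -1.0]], [0.0]),             # output layer
]

print(forward([1.0, 2.0, 3.0], network))  # -> [0.0]
```

“Deep” simply means stacking many such layers, so that early layers detect simple features and later layers combine them into increasingly abstract ones.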
So in general, with smaller datasets and limited computing power, which is what we had in the 1980s and ’90s, traditional ML works better. But with the kind of larger datasets we have in healthcare, the automotive industry, aerospace and so on, deep learning’s neural networks are the way to go.
Below are some of the main use cases of AI in healthcare. Most, if not all, of them must be evaluated with a critical eye, as their applications are imperfect and still underway. We will explore the challenges they bear, as well as their ethical implications, in part II.
DEEP LEARNING IN SCANS AND PATTERN RECOGNITION
Many medical specialists hold that AI in healthcare is not artificial intelligence but rather augmented intelligence. That is because it clears the path for specialists to place their energy in higher-level intellectual tasks, i.e. synthesising diagnostic information rather than spending their time organising and sifting through it. In radiology, AI algorithms can examine images hundreds of times faster than humans. Over the last few years, they have been trained to recognise patterns in medical scans, pathology slides and even skin lesions. They are also able to automatically classify pathology images of tumours and identify which type of cancer each most resembles, with a performance that is as good as, and often better than, that of pathologists in the best academic health centers. Specifically for image recognition, convolutional neural networks (CNNs) are the state-of-the-art deep learning technique.
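The core trick inside a CNN is convolution: a small filter slides across the image and responds strongly wherever its pattern appears. The toy sketch below uses a hand-set vertical-edge filter on a tiny 4×4 “image”; in a real CNN the filter values are learned from thousands of labelled scans, and many filters are stacked in layers.

```python
# Sketch of the core CNN operation: a small filter slides over the
# image and responds where its pattern (here, a vertical edge) appears.
# In a real CNN the filter values are learned, not hand-set.

def convolve2d(image, kernel):
    """Valid (no-padding) 2D convolution of a small kernel over an image."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(kernel[a][b] * image[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

# A 4x4 "image": dark left half (0), bright right half (1).
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]

# Hand-set vertical-edge filter (Sobel-like).
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

feature_map = convolve2d(image, kernel)
print(feature_map)  # strong response where the dark-to-bright edge is in view
```

On a perfectly uniform image the same filter outputs all zeros: the filter fires on edges, not on brightness itself, which is exactly the kind of low-level feature a CNN’s first layer learns before later layers combine them into lesions, masses and other diagnostic patterns.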
AI can take the guesswork out of self-diagnosis. By collecting patient data via apps, text messages or chatbots – which ask patients a series of questions about their symptoms – patients can self-triage, saving both the patient and the provider time. Treatment can begin sooner, and in some instances an unnecessary visit can be avoided altogether. Indeed, many healthcare systems are contending with high rates of patients repeatedly using the emergency department, which drives up healthcare costs and does not lead to better outcomes for these patients. Using predictive analytics, hospitals could reduce the number of ER visits by identifying high-risk patients and offering customised, patient-centric care. However, this is still underway, as more data points are needed to get an accurate picture of the real risks, and of the reasons behind them.
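To show what “identifying high-risk patients” could look like mechanically, here is a toy logistic risk model. Everything here – the feature names, the weights, the bias and the 0.5 threshold – is hypothetical; a real model would be fitted to historical EHR data and validated clinically before use.

```python
import math

# Toy logistic model for flagging patients at risk of repeat ER visits.
# Feature names, weights, bias and threshold are all hypothetical;
# a real model would be fitted to historical EHR data.

WEIGHTS = {
    "er_visits_last_year": 0.8,
    "chronic_conditions": 0.6,
    "missed_appointments": 0.4,
}
BIAS = -3.0
THRESHOLD = 0.5

def risk_score(patient):
    """Logistic function over a weighted sum of features -> probability in (0, 1)."""
    z = BIAS + sum(WEIGHTS[k] * patient.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def flag_high_risk(patients):
    """Return the ids of patients whose predicted risk exceeds the threshold."""
    return [p["id"] for p in patients if risk_score(p) >= THRESHOLD]

patients = [
    {"id": "A", "er_visits_last_year": 5, "chronic_conditions": 2,
     "missed_appointments": 1},
    {"id": "B", "er_visits_last_year": 0, "chronic_conditions": 1,
     "missed_appointments": 0},
]

print(flag_high_risk(patients))  # -> ['A']
```

The point of the sketch is the workflow, not the model: score every patient from routinely collected data, then route the highest-risk ones to proactive, customised care instead of waiting for the next ER visit.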
SURGICAL ROBOTICS

The robot today is just an extension of the surgeon’s hands and eyes. The oft-mentioned example is the da Vinci Surgical System, which incorporates remote-center technology to sense the surgeon’s hand movements and translate them into scaled-down micro-movements. Since 2000, the device has enabled minimally invasive surgeries through smaller and more precise incisions.
What academic health science centres want to reach is a level where the robot actively helps the surgical team explore and deliver more precise procedures – through narrow AI. Intelligent surgical robots with varying degrees of autonomy are already proving in early tests to be the equals of surgeons at some technical tasks, such as locating wounds, suturing and removing tumours. In 2016, the Sheikh Zayed Institute in Washington DC successfully deployed its Smart Tissue Autonomous Robot (STAR) – although not completely autonomous – to suture a pig’s gut with tighter seals and more even spacing between the stitches. This was a feat of programming, because tissue changes shape and moves around during procedures, which makes it especially challenging for a robotic system. Another example would be a robot that can spread multiple tools in vivo through a single incision and autonomously perform a series of delicate tasks, opening new dimensions for caregivers to explore. As a way forward, we are also seeing robotics spread across subspecialities: from its strongholds in urology and cardiology to previously underdeveloped fields like pulmonary or orthopedic surgery (using endoscopy and stereotactics).
IN-HOME CARE AND ASSISTIVE TECHNOLOGY
Ageing populations imply new societal demands and added pressure on social workers. Among those demands is tackling the problem of isolation, loneliness and cognitive decline in elders. Socially assistive robots could address these issues, revolutionising in-home care. One class of social robots – mobile robotic telepresence (MRT) systems – has already been shown to generate positive social interactions with elderly patients. MRTs are basically video screens on wheels, raised to head height, that can be controlled remotely using a simple smartphone app. They allow relatives and social workers to “visit” elderly people more often, even if they live in remote places. Other classes of robots include pet-like companions and general-purpose humanoid service robots (such as Pepper and Care-O-bot) that assist users in various tasks where human-like behaviour and interfaces are desired. The latter class of robots could potentially eliminate the need for long-term acute care facilities.
Primary care could also be revolutionised by smartphone apps designed to assist people concerned about skin ailments. Popularly touted as “skin cancer apps”, SkinVision, UMSkinCheck and MoleMapper, to name a few, use deep neural networks to classify skin lesions, distinguishing between moles, birthmarks and melanomas. It takes time to see a doctor, and these apps not only provide an early diagnosis (which can save lives) but also make the patient part of the solution. This is known as secondary prevention.
DECISION SUPPORT AND BEST PRACTICES
Diagnoses and referrals are two sides of the same coin. It is paramount that patients receive the right diagnosis and get to the right doctors. As a general rule, an AI algorithm can analyse a patient’s symptoms and vital signs, compare them with the patient’s medical history, that of their family and those of the millions of other patients in its EHRs, and help the doctor by suggesting what the causes might be.
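Stripped to its simplest form, that kind of decision support is a ranking problem: score candidate conditions by how well they match the patient’s presentation and show the doctor the top few. The sketch below uses a tiny, entirely hypothetical condition table and plain symptom overlap; real systems weigh symptoms, vitals, history and millions of EHRs.

```python
# Toy decision-support sketch: rank candidate conditions by how many
# of the patient's reported symptoms they match. The condition table
# is hypothetical; real systems draw on far richer data.

CONDITIONS = {
    "influenza": {"fever", "cough", "fatigue", "aches"},
    "common cold": {"cough", "sneezing", "sore throat"},
    "migraine": {"headache", "nausea", "light sensitivity"},
}

def suggest(symptoms, top=2):
    """Return the `top` conditions sharing the most symptoms with the patient."""
    symptoms = set(symptoms)
    ranked = sorted(CONDITIONS,
                    key=lambda c: len(CONDITIONS[c] & symptoms),
                    reverse=True)
    return ranked[:top]

print(suggest({"fever", "cough", "fatigue"}))  # -> ['influenza', 'common cold']
```

Crucially, the output is a suggestion list for the doctor to weigh, not a verdict – which is exactly the augmented-intelligence framing above.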
In pharmacogenomics, the objective is to analyse how an individual’s genetic makeup affects their response to drugs. AI, using deep learning, can help with whole-genome sequencing on patients, looking at every single one of the 3 billion letters in their genome and figuring out what differs from a reference genome. This then allows doctors to refer the patient to the right expert and provide the right drug regimen.
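At its most basic, “figuring out what differs from a reference genome” means comparing two sequences position by position. The toy sketch below does exactly that on made-up ten-letter sequences; real whole-genome pipelines are vastly more involved (read alignment, quality scores, structural variants), but the idea of reporting mismatches against a reference is the same.

```python
# Toy sketch of variant detection: compare a patient's sequence with a
# reference sequence position by position. Sequences are made up; real
# pipelines handle alignment, quality scores and structural variants.

def find_variants(reference, patient):
    """Return (position, reference_base, patient_base) for each mismatch."""
    return [(i, r, p)
            for i, (r, p) in enumerate(zip(reference, patient))
            if r != p]

reference = "ACGTACGTAC"
patient   = "ACGTTCGTAA"

print(find_variants(reference, patient))  # -> [(4, 'A', 'T'), (9, 'C', 'A')]
```

Each reported position is a candidate variant; in pharmacogenomics, known variants at drug-metabolism genes are what guide the choice of expert and drug regimen.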
Written by Nada Fouad