
AI will never replace the doctor. Or will it?

Let me start with the usual and widely accepted narrative: AI is emerging as a major technological disrupter in medicine, crunching vast amounts of data and providing accurate diagnoses and treatments. But it will (and can) never replace a doctor, even in specialties amenable to machine-driven automation such as radiology or dermatology [1]. However, these assumptions are based on the current paradigms of medicine, constrained by the boundaries of our current cognitive abilities. Are we oblivious to a paradigm shift happening in medicine?


The human genome mapping project [2] and the subsequent democratization of the 'omics' fields promised the new paradigm of 'personalized medicine', which never really materialized (at least until now) [3]. AI (used here as an encompassing term including big data analytics and machine learning) can potentially take personalized medicine to the realm of holistic medicine. Time will tell whether this paradigm shift will materialize. But it is important to understand how some of the concepts that we take for granted may get redefined and reconceptualized in the new paradigm (if it happens), just as modern medicine emerged from natural and alternative medical traditions.


The major tenets of modern medicine are diagnosis, prognosis and therapeutics (treatment). Diagnosis is the process of bucketing a given case into a pattern of observations that has been previously characterized, often represented by a recognizable name. Diabetes, hypertension and typhoid fever are examples. The prognosis and the treatment depend on the diagnostic label assigned. Patterns that do not fit into the list emerge from time to time. A pattern resembling pneumonia that emerged recently in Wuhan, China, was labelled COVID-19, and the coronavirus causing it SARS-CoV-2. A common use case of AI in medicine is to assign a given set of observations to one of these named entities (diagnostic decision support systems). The clinical community argues that AI can help a clinician in this process, but cannot replace him or her. One of the main reasons for the clinician's self-belief in irreplaceability is the fact that AI learns from existing labels — the training data set — that the clinicians themselves prepare.


The process of making a diagnosis is to reduce the stochastic observations in the human body into a set of named patterns (diagnoses) that humans can comprehend, identify and utilize. In an AI-dominated world ‘diagnoses’ lose their relevance as the machines can recognize, identify and utilize a potentially infinite number of patterns and entities. Even if ‘diagnoses’ exist, their number is likely to be huge, much beyond the cognitive capabilities of humans.


Currently, the prognosis of any disease state is based on limited observations and limited data points. Big data will extend these limits thereby making prognostic predictions more accurate. Machine learning models that drive such predictions are likely to be at best partially explainable and at worst complete black boxes. However, explainable or not, such prognostic predictors are likely to improve health system optimizations. The role of clinicians is going to be identifying the variables to optimize.


In the therapeutics realm, AI may push us closer to the promised personalized medicine. Traditional clinical research, relying mostly on the 'rigorous' randomized controlled trial (RCT), may lose its relevance in the new paradigm. Some argue that RCTs have already become unsustainable, with long turnover times and mounting costs. With no two humans sharing the same omics profile, the level of abstraction introduced by a statistically significant difference between 'random' treatment and control groups is useful for humans, but not for AI. Emerging methods such as nanotechnology, nanorobotics and 3D printing, combined with advanced predictive analytics, molecular modelling and drug design, would lead to tailored interventions that are created 'just-in-time' for every individual according to his or her needs. This process is likely to be beyond the reach of human comprehension, but human intervention may be needed to maintain the flow of data through the system.

‘Health’ is another concept that is taken for granted as something that everybody can instinctively understand. Health is widely understood as the absence of disease. As disease/diagnosis states become infinite, ‘health’ may need a reconceptualization too. Let us call it Health 3.0 for now. Medicine ceases to be the art of restoring health and becomes the art of optimizing Health 3.0. I do not attempt to provide a framework to define Health 3.0 here, but posit that it will include abstract concepts such as happiness and quality of life, paradoxically beyond the cognitive capabilities of AI.

Clinicians may still be irreplaceable, but in helping AI to define health!

Some of the changes that AI and allied technologies can bring are already visible. The omics fields have introduced several subcategories of existing diagnostic entities [4]. In most cases, clinicians ignore these subtypes, seeing things at a higher and more manageable level. Reinforcement Learning (RL) algorithms can potentially learn from big data that are not labelled by clinicians [5]. RL is closer to cognitive computing (computerized models that simulate human thought), optimizing a 'reward', a concept closer to Health 3.0. Computer-aided drug design is becoming increasingly popular, supplemented by an enormous amount of data derived from electronic medical records [6].
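The idea of an agent optimizing a 'reward' rather than matching a clinician-assigned label can be illustrated with a toy example. The sketch below is a minimal epsilon-greedy bandit in plain Python; the two 'care pathways' and their reward levels are entirely invented and have no clinical meaning:

```python
import random

def run_bandit(rewards, steps=1000, epsilon=0.1, seed=42):
    """Toy epsilon-greedy agent: 'rewards' maps each action to its
    (deterministic) reward; the agent learns which action to prefer
    without ever seeing a labelled 'correct' answer."""
    rng = random.Random(seed)
    estimates = [0.0] * len(rewards)
    counts = [0] * len(rewards)
    for _ in range(steps):
        if rng.random() < epsilon:
            action = rng.randrange(len(rewards))  # explore
        else:
            # exploit the action with the best reward estimate so far
            action = max(range(len(rewards)), key=lambda a: estimates[a])
        counts[action] += 1
        # incremental mean update of the reward estimate
        estimates[action] += (rewards[action] - estimates[action]) / counts[action]
    return estimates

# Two hypothetical 'care pathways' with different (initially unknown) rewards
est = run_bandit([0.2, 0.8])
best = max(range(len(est)), key=lambda a: est[a])
```

After enough steps the agent's estimates converge on the true rewards and it prefers the second pathway. Real clinical RL is vastly more complex, but the label-free, reward-driven learning loop is the same.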

I am neither trying to predict the future impact of AI in medicine nor arguing for or against the role of ‘human’ clinicians. The media and the scientific literature are replete with stories of AI approaching, and in some cases surpassing, clinicians in certain tasks. AI may not be a merely incremental disrupter that changes the way we practice. As paradigms change, some of the questions that we ask today, such as ‘Can AI make the correct diagnosis?’ and ‘Can AI choose the correct treatment?’, may lose relevance. AI may never replace doctors, but it may change what doctors do and may take us a step closer to holistic medicine!

References

  1. Karches KE. Against the iDoctor: why artificial intelligence should not replace physician judgment. Theor Med Bioeth. Published online April 2018:91-110. doi:10.1007/s11017-018-9442-3
  2. Collins FS. The Human Genome Project: Lessons from Large-Scale Biology. Science. Published online April 11, 2003:286-290. doi:10.1126/science.1084564
  3. Chen R, Snyder M. Promise of personalized omics to precision medicine. WIREs Syst Biol Med. Published online November 26, 2012:73-82. doi:10.1002/wsbm.1198
  4. Boyd S, Galli S, Schrijver I, Zehnder J, Ashley E, Merker J. A Balanced Look at the Implications of Genomic (and Other “Omics”) Testing for Disease Diagnosis and Clinical Care. Genes. Published online September 1, 2014:748-766. doi:10.3390/genes5030748
  5. Chen M, Herrera F, Hwang K. Cognitive Computing: Architecture, Technologies and Intelligent Applications. IEEE Access. Published online 2018:19774-19783. doi:10.1109/access.2018.2791469
  6. Qian T, Zhu S, Hoshida Y. Use of big data in drug development for precision medicine: an update. Expert Review of Precision Medicine and Drug Development. Published online May 4, 2019:189-200. doi:10.1080/23808993.2019.1617632
Cite this article as: Eapen BR. (July 7, 2021). CanEHealth.com - AI will never replace the doctor. Or will it?. Retrieved April 28, 2024, from https://canehealth.com/2021/07/ai-will-never-replace-the-doctor-or-will-it/.

Machine Learning in population health: Creating conditions that ensure good health.

Machine Learning (ML) in healthcare has an affinity for patient-centred care and individual-level predictions. Population health deals with health outcomes in a group of individuals and the distribution of those outcomes in the group. Individual health and population health are not divergent, but they are not the same either and may require different approaches. ML in public health applications receives far less attention.

The skills available to public health organizations to transition towards integrated data analytics are limited. Hence, the latest advances in ML and artificial intelligence (AI) have made very little impact on public health analytics and decision making. The biggest barrier is the lack of expertise in conceiving and implementing data warehouse systems for public health that can integrate the health information systems currently in use.

The data in public health organizations are generally scattered across disparate information systems within the region, or even within the same organization. Efficient and effective health data warehousing requires a common data model for integrated data analytics. The OHDSI OMOP Common Data Model (CDM) allows for the systematic analysis of disparate observational databases and EMRs. However, the emphasis is on patient-level prediction. Research on how to extend patient-centred data models into observation-centred population health data models is the need of the hour.
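To make the idea of a common data model concrete, here is a deliberately simplified sketch of the kind of mapping an OMOP-style CDM requires: a source EMR diagnosis record is reshaped into a `condition_occurrence`-style row. The field names follow the OMOP CDM, but the lookup table, the source record layout, and the sample values are invented for illustration:

```python
from datetime import date

# Hypothetical fragment of an ICD-to-OMOP-concept lookup table;
# real mappings use the full standardized OMOP vocabulary
ICD_TO_CONCEPT = {"E11": 201826}  # commonly cited concept for type 2 diabetes

def to_condition_occurrence(emr_row, person_id):
    """Map one source EMR diagnosis record into an OMOP-style
    condition_occurrence row (fields simplified for illustration)."""
    return {
        "person_id": person_id,
        "condition_source_value": emr_row["icd_code"],  # source code kept verbatim
        "condition_concept_id": ICD_TO_CONCEPT.get(emr_row["icd_code"], 0),  # 0 = unmapped
        "condition_start_date": date.fromisoformat(emr_row["diagnosis_date"]),
    }

row = to_condition_occurrence(
    {"icd_code": "E11", "diagnosis_date": "2021-03-15"}, person_id=42)
```

Keeping the source value alongside the mapped concept ID is what lets disparate EMRs be queried uniformly without losing the original coding.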

We are making a difficult yet important transition towards integrated health, providing new ways of delivering services in local communities through local health teams. The emphasis is clearly on digital health, and we need efficient and effective digital tools and techniques. Motivated by the Ontario Health Teams’ digital strategy, I have been working on tools to support this transition.

Hephaestus is a software tool for ETL (Extract-Transform-Load) for open-source EMR systems such as OSCAR EMR and national datasets such as the Discharge Abstract Database (DAD). It is organized into modules to allow code reuse. Hephaestus uses SQLAlchemy for database connections and for auto-mapping tables to classes, and Bonobo for managing the ETL. Hephaestus aims to support common machine learning workflows such as model building with Apache Spark and model deployment using serverless architecture. I am also working on FHIR-based standards for ML model deployments.
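The extract-transform-load flow that such a tool organizes into modules can be sketched in plain Python. Bonobo essentially wires generator functions like these into a graph and runs them; the function names, row layout, and the long-stay threshold below are all invented for illustration and are not Hephaestus code:

```python
def extract():
    """Pull raw rows from a source EMR (hard-coded here for illustration)."""
    yield {"patient": "A", "los_days": 3}
    yield {"patient": "B", "los_days": 9}

def transform(row):
    """Derive a long-stay flag, mirroring a typical DAD-style recode."""
    return {**row, "long_stay": row["los_days"] > 5}

def load(rows, sink):
    """Write transformed rows to the warehouse (a plain list stands in here)."""
    sink.extend(rows)

warehouse = []
load((transform(r) for r in extract()), warehouse)
```

Because each stage is a small, independent callable, the same transform can be reused across EMRs and datasets, which is the point of organizing the ETL into modules.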

Hephaestus is a work in progress, and any help will be highly appreciated. It is an open-source project on GitHub. If you are looking for an open-source project to contribute to for Hacktoberfest, consider Hephaestus!


Creating, serializing and deploying a machine learning model for healthcare: Part 2

This is a series on serializing and deploying machine learning pipelines developed using PySpark. Part 1 is here. It is specific to Apache Spark and is basically notes to myself.

We will be using MLeap for serializing the model. I have added below a brief introduction to MLeap, copied from their website. For more information, please visit the MLeap website.

MLeap is a common serialization format and execution engine for machine learning pipelines. It supports Spark, Scikit-learn and Tensorflow for training pipelines and exporting them to an MLeap Bundle. Serialized pipelines (bundles) can be deserialized back into Spark for batch-mode scoring or the MLeap runtime to power realtime API services.

This series is about serializing and deploying. If you are interested in model building, Susan’s article here is an excellent resource.

In part one we imported the dependencies. The next step is to initialize spark and import the data.

import logging

import findspark
from pyspark import SparkConf
from pyspark.sql import SparkSession

# ConfigParams is a local configuration class holding the paths used below

_logger = logging.getLogger(__name__)
findspark.init(ConfigParams.__SPARK_HOME__)

# Configuration
conf = SparkConf().setAppName('BellSpark')

# SparkSession replaces SparkContext
spark = SparkSession.builder \
    .appName("BellSparkTest1") \
    .config('spark.jars.packages',
            'ml.combust.mleap:mleap-spark-base_2.11:0.9.3,'
            'ml.combust.mleap:mleap-spark_2.11:0.9.3') \
    .config(conf=conf) \
    .getOrCreate()

# Read the DAD csv
df = spark.read.csv(ConfigParams.__DAD_PATH__, header=True, inferSchema=True)

In the above code, you have to set the Spark home and the path to the DAD csv file. Obviously, you can name your app whatever you want. The MLeap packages are loaded into the Spark session.

To keep it simple, we are going to create a logistic regression model. The required variables are selected:

# Select the columns that we need
df = df.select('TLOS_CAT', 'ACT_LCAT', 'ALC_LCAT',
               'ICDCOUNT', 'CCICOUNT')

TLOS_CAT (total length of stay) is the dependent variable (DV) and the rest are independent variables (IVs). Please note that the choice of variables may not be ideal, but that is not our focus.

Now, recode TLOS_CAT to binary as we are going to build a logistic regression model.

# Change all NA to 0
df = df.na.fill(0)

# Recode TLOS_CAT to binary
df = df \
    .withColumn('TLOS_CAT_NEW', F.when(df.TLOS_CAT <= 5, 0).otherwise(1)) \
    .drop(df.TLOS_CAT)

df.printSchema()

We will create and serialize the pipeline next week. I promised to deploy using Java 11 and Spring Boot 2.1. Java 11 was released on September 25, and I feel it can have a huge impact on Java-based EMRs like OSCAR and OpenMRS. More about that story soon on the NuChange Blog!


Creating, serializing and deploying a machine learning model for healthcare: Part 1

Machine Learning (ML) and Artificial Intelligence (AI) are the buzzwords lately, and it is heartening to find local HSPs scrambling to get on the bandwagon. The emphasis is mostly on creating models, which requires technical as well as clinical expertise. The quintessential ‘black-box’ model is a good healthcare analytics exercise, but deploying the model so that it is useful at the bedside belongs to the IT domain.

This article is about creating a simple model using the discharge abstract database (DAD) as the data source and Apache Spark as the framework, serializing it into a format that can be used externally, and building a simple website that deploys the model for users to make predictions. To make this interesting, we will create the website using Java 11 and Spring Boot 2.1, both of which are yet to be released at the time of writing (they will be released by the time we get there). But please note that this is about deploying a model/pipeline created with Spark, which may be overkill for most projects. Here are some good resources if you have small data or a simple model.

https://github.com/mtobeiyf/keras-flask-deploy-webapp

https://towardsdatascience.com/deploying-keras-deep-learning-models-with-flask-5da4181436a2

https://blog.keras.io/building-a-simple-keras-deep-learning-rest-api.html

This post is actually a note to myself as I explore the process. As always the focus is on understanding the process and not on the utility of the model. Feel free to comment below and add your own notes/ideas.

TL;DR the code will be available on our GitHub repository as we progress.

 

First, let us start with a brief description of Apache Spark. Apache Spark is an open-source big-data framework with built-in cluster computing ability. Spark is highly accessible and offers simple APIs in Python, Java, Scala, and R. I have picked Python as I can use the Python interpreter at CC right from the PyCharm IDE. PySpark is the Python library for interacting with Spark; it can be linked to sys.path at runtime using the findspark library. Most machine learning pipelines are available in PySpark. We will be building a simple logistic regression model. The necessary libraries can be imported as below.

import logging

import findspark
import pyspark.sql.functions as F
from pyspark import SparkContext
from pyspark.mllib.classification import LogisticRegressionWithLBFGS
from pyspark.mllib.util import MLUtils

I will be back again with more next week. In the meantime, have a look at the DAD and its data dictionary. As always, the customary disclaimer below:

Read Part 2.

Parts of this material are based on the Canadian Institute for Health Information Discharge Abstract Database Research Analytic Files (sampled from fiscal years 2014-15). However, the analysis, conclusions, opinions and statements expressed herein are those of the author(s) and not those of the Canadian Institute for Health Information.