Categories
Health Research Methodology, Healthcare Analytics, OpenSource

DADpy: The Swiss army knife for the Discharge Abstract Database

The Discharge Abstract Database (DAD) is a Canada-wide database of hospital admission and discharge data, excluding the province of Quebec, maintained by the Canadian Institute for Health Information (CIHI). The data points in DAD include patient demographics, comorbidities coded in the International Statistical Classification of Diseases and Related Health Problems (ICD), interventions coded in the Canadian Classification of Health Interventions (CCI), and the length of stay. A de-identified 10% sample of DAD is available to academic researchers under the Data Liberation Initiative (DLI). DAD is arguably the most comprehensive country-wide discharge dataset in the world.


The Discharge Abstract Database is used to create public reports for hospitals, researchers, and the general public. DAD data has also been used for disease-specific research and analysis, including public health, disease surveillance, and health services research. CIHI provides DAD in the SPSS (.sav) format, with each record having horizontal (wide) fields for 20 comorbidities and 25 interventions. This wide format is not ideal for slicing and dicing the data into visualizations from which clinicians can obtain clinical insights.
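As a minimal sketch of why the wide layout is awkward (the column names such as diag_1 … diag_20 are hypothetical; the real DAD field names differ), reshaping the horizontal diagnosis fields into a long format is a typical first step before any visualization:

import pandas as pd  # pyreadstat is required for pd.read_spss

# Hypothetical wide-format sample; real DAD column names differ.
dad = pd.read_spss("clin_sample_spss.sav")

# Reshape the horizontal diagnosis fields into one row per diagnosis code.
diag_cols = [c for c in dad.columns if c.lower().startswith("diag")]
long_dx = dad.melt(
    id_vars=[c for c in dad.columns if c not in diag_cols],
    value_vars=diag_cols,
    var_name="diagnosis_slot",
    value_name="icd_code",
).dropna(subset=["icd_code"])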

DADpy provides a set of functions for using the DAD dataset for machine learning and visualization. The package does not include the dataset. Academic researchers can request the DAD dataset from CIHI. This is an unofficial repo, and I’m not affiliated with CIHI. Please retain the disclaimer below in forks.

Installation (will be added to PyPI soon):

pip install https://github.com/E-Health/dadpy/releases/download/1.0.0/dadpy-1.0.0-py3-none-any.whl
from dadpy import DadLoad
from dadpy import DadRead

# Path to the folder containing the SPSS .sav file (with the trailing slash)
dl = DadLoad('/path/to/dad/sample/spss/sav/file/')  # clin_sample_spss.sav
dr = DadRead(dl.sample)

# Records with obesity (ICD-10 E66) as a pandas DataFrame
print(dr.has_diagnosis('E66'))
# Records with partial gastrectomy for repair of gastric diverticulum (CCI 1NF80)
print(dr.has_treatment('1NF80'))

# Comorbidities as a dict for visualization
print(dr.comorbidity('E66'))  # Obesity
# Co-occurrence of treatments as a dict
print(dr.interventions('1NF80'))  # Partial gastrectomy for repair of gastric diverticulum

# Get the one-hot-encoded vector for machine learning
dr.vector(dr.has_diagnosis('E66'), significant_chars=3, include_treatments=True)
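As a hedged illustration (not part of the DADpy API itself), the one-hot-encoded output can be passed straight to standard scikit-learn estimators; the assumption here is that dr.vector returns a tabular 0/1 feature matrix such as a pandas DataFrame.

# Sketch only: assumes dr.vector(...) returns a 0/1 feature matrix (e.g. a pandas DataFrame)
from sklearn.cluster import KMeans

X = dr.vector(dr.has_diagnosis('E66'), significant_chars=3, include_treatments=True)

# Group obesity-related records by their comorbidity/intervention profile
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = kmeans.fit_predict(X)
print(labels[:10])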

We use poetry for development, and PRs are welcome; please see CONTRIBUTING.md in the repo. Start by renaming .env.example to .env and adding the path required for the tests to run. Add Jupyter notebooks to the notebook folder and include the disclaimer below.

Disclaimer: Parts of this material are based on the Canadian Institute for Health Information Discharge Abstract Database Research Analytic Files (sampled from fiscal years 2016-17). However, the analysis, conclusions, opinions and statements expressed herein are those of the author(s) and not those of the Canadian Institute for Health Information.

Let us know if you use DADpy for creating interesting Jupyter notebooks.

Categories
Healthcare Analytics, OpenSource, Resources

OSCAR EMR EForm Export (CSV) to FHIR

This is a simple Go (Golang) application that converts a CSV file to a FHIR bundle and posts it to a FHIR server. OSCAR EMR has an EForm export tool that exports EForms to a downloadable CSV file, and this application can load that CSV file into a FHIR server for consolidated analysis. It can be used with any CSV file as long as the columns specified below (see the CSV format section) are present.

Use Cases

This is useful for family practice groups with multiple OSCAR EMR instances. Analysts at each site can use this to send data to a central FHIR server for centralized data analysis and reporting. Public health agencies using OSCAR or similar health information systems can use this to consolidate data collection.

How to build

First, fetch all dependencies with go get. This package includes three tools; build each one separately from the cmd folder with go build:

Fhirpost: The application for posting the CSV file to the FHIR server

Serverfhir: A simple FHIR server for testing (requires MongoDB). We recommend using PHIS-DW for production.

Report: A simple application for descriptive statistics on the CSV file

Format of the CSV file


Using a standard vocabulary such as SNOMED for field names in the EForm is very useful for consolidated analysis.

Each record should have:

demographicNo → The patient ID
dateCreated → The date the record was created
efmfid → The ID of the EForm
fdid → The ID of each form field
(The EForm export CSV of OSCAR typically has all these fields and requires no further processing; a minimal reading sketch is shown below.)
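As a minimal sketch (assuming an export file named data.csv with the columns listed above), the CSV can be inspected with plain Python before posting it:

# Sketch only: data.csv and its column names follow the OSCAR EForm export described above.
import csv
from collections import defaultdict

fields_per_patient = defaultdict(list)
with open("data.csv", newline="") as f:
    for row in csv.DictReader(f):
        # One row per form field (fdid) for a given patient (demographicNo)
        fields_per_patient[row["demographicNo"]].append((row["efmfid"], row["fdid"]))

print(f"{len(fields_per_patient)} unique patients in the export")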

Mapping

  • Bundle with unique patients; all columns are mapped to Observations (a minimal sketch in Python follows this list).
  • The submitter is mapped to a Practitioner.
  • Document-type bundle with the Composition as the first entry.
  • Unique fullUrls are generated.
  • The patient ID is location + demographicNo.
  • Bundle of 1 Composition, 1 Practitioner, 1 or more Patients, and many Observations.
  • Validates against the FHIR R4 schema.
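The actual mapping rules live in fhirmap.go; as a hedged, language-agnostic illustration (written in Python rather than Go, with hypothetical location, field, and value strings), a minimal document-type bundle could be assembled like this:

# Sketch only: the field names and the 'clinicA' location prefix are hypothetical;
# the real rules are implemented in fhirmap.go.
import uuid

def full_url():
    return f"urn:uuid:{uuid.uuid4()}"

patient_url, practitioner_url, composition_url = full_url(), full_url(), full_url()

bundle = {
    "resourceType": "Bundle",
    "type": "document",
    "entry": [
        # Composition must be the first entry in a document-type bundle
        {"fullUrl": composition_url, "resource": {
            "resourceType": "Composition",
            "status": "final",
            "type": {"text": "OSCAR EForm export"},
            "date": "2020-01-01",
            "author": [{"reference": practitioner_url}],
            "title": "EForm export",
        }},
        {"fullUrl": practitioner_url, "resource": {"resourceType": "Practitioner"}},
        {"fullUrl": patient_url, "resource": {
            "resourceType": "Patient",
            "identifier": [{"value": "clinicA-1234"}],  # location + demographicNo
        }},
        # One Observation per CSV column (fdid); only one shown here
        {"fullUrl": full_url(), "resource": {
            "resourceType": "Observation",
            "status": "final",
            "code": {"text": "bloodPressure"},  # hypothetical EForm field name
            "valueString": "120/80",
            "subject": {"reference": patient_url},
        }},
    ],
}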

How to use:

  • Change the settings in .env.
  • You can compile this for Windows, Mac, or Linux. Check the fhirmap.go file and make any desired changes; you should be able to figure out the mapping rules from this file.
  • It reads the data.csv file from the same folder by default (the file can be specified with the -file command-line argument: fhirpost -file=data.csv).
  • Start MongoDB and run serverfhir and fhirpost in separate windows for testing.
  • On Windows, you can just double-click the executables to run them (they close automatically after running).

Privacy and security:
This application does not encrypt the data. Use it only on a secure network.

Disclaimer:
This is an experimental application. Use it at your own risk.

Categories
Research, Resources

McMaster develops tool for COVID-19 battle

This article was first published on Brighter World. Read the original article.

McMaster University researchers have developed a tool to share with the international health sciences community which can help determine how the coronavirus that causes COVID-19 is spreading and whether it is evolving.

Simply put, the tool is a set of molecular ‘fishing hooks’ to isolate the virus, SARS-CoV-2, from biological samples. This allows laboratory researchers to gain insight into the properties of the isolated virus by then using a technology called next-generation sequencing.

The details were published on Preprints.org.

“You wouldn’t use this technology to diagnose the patient, but you could use it to track how the virus evolves over time, how it transmits between people, how well it survives outside the body, and to find answers to other questions,” said principal investigator Andrew McArthur, associate professor of biochemistry and biomedical sciences, and a member of the Michael G. DeGroote Institute for Infectious Disease Research (IIDR) at McMaster.

“Our tool, partnered with next-generation sequencing, can help scientists understand, for example, if the virus has evolved between patient A and patient B.”

McArthur points out that the standard technique to isolate the virus involves culturing it in cells in contained labs by trained specialists. The McMaster tool gives a faster, safer, easier and less-expensive alternative, he said.

“Not every municipality or country will have specialized labs and researchers, not to mention that culturing a virus is dangerous,” he said.

“This tool removes some of these barriers and allows for more widespread testing and analyses.”

First author Jalees Nasir, a PhD candidate in biochemistry and biomedical sciences at McMaster, has been working with McMaster and Sunnybrook Health Sciences Centre researchers to develop a bait capture tool that can specifically isolate respiratory viruses. When news recently broke of COVID-19, Nasir knew he could develop a “sequence recipe” to help researchers to isolate the novel virus more easily.

“When you have samples from a patient, for example, it can consist of a combination of virus, bacteria and human material, but you’re really only interested in the virus,” Nasir said. “It’s almost like a fishing expedition. We are designing baits that we can throw into the sample as hooks and pull out the virus from that mixture.”

Recognizing the urgency of the situation, the decision was made to release the sequences publicly, forgoing the normal practice of peer review or clinical evaluation, to ensure the tool was available to all quickly, said McArthur.

The research team plans to collaborate with Sunnybrook for further testing but also hopes other scientists can quickly perform their own validation.

McArthur added that a postdoctoral fellow in his lab, David Speicher, is currently communicating details of the technology to the international clinical epidemiology community.

“Since we’re dealing with an outbreak, there was no value in us doing a traditional academic study and the experiments,” said McArthur. “We designed this tool and are releasing it for use by others.

“In part, we’re relying on our track record of knowing what we are doing, but we’re also relying on people who have the virus samples in hand being able to do the validation experiment so that it’s reliable.”

The research was funded by the Comprehensive Antibiotic Resistance Database at McMaster.


Categories
HIS

Public Health Data Warehouse on FHIR

The Ontario government is building a connected health care system centred around patients, families and caregivers through the newly established Ontario Health Teams (OHTs). As disparate healthcare and public health teams move towards a unified structure, there is a growing need to reconsider our information system strategy. Most off-the-shelf solutions are pricey, while open-source solutions such as DHIS2 are not popular in Canada. Some of the public health units have existing systems, and it would be too resource-intensive to switch to another system. The interoperability challenge needs an innovative solution, beyond finding the single, provincial EMR.


We have written about the theoretical aspects, especially the need to envision public health information systems separate from an EMR. In this working paper, we propose a maturity model for PHIS and offer some pragmatic recommendations for dealing with the common challenges faced by public health teams. 

Below is a demo project on GitHub from the data-intel lab that showcases a potential solution for a scalable data warehouse for health information system integration. Public health databases are vital for the community for efficient planning, surveillance, and effective interventions. Public health data need to be integrated at various levels for effective policymaking. PHIS-DW adopts FHIR as the data model for storage, with an integrated Elasticsearch stack; Kibana provides the visualization engine. PHIS-DW can support complex algorithms for disease surveillance, such as machine learning methods, hidden Markov models, and Bayesian and multivariate analytics. PHIS-DW is a work in progress and code contributions are welcome. We intend to use Bunsen to integrate PHIS-DW with Apache Spark for big data applications.

Public Health Data Warehouse Framework on FHIR

FHIR has some advantages as a data persistence schema for public health. Apart from its popularity, the FHIR bundle makes it possible to send observations to FHIR servers without the associated patient resource, thereby ensuring reasonable privacy. This is especially useful in the surveillance of pandemics such as COVID-19. Some useful yet complicated integrations with OSCAR EMR and DHIS2 are under consideration. If any of the OHTs find our approach interesting, give us a shout.
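As a minimal sketch of this idea (the endpoint URL and the coding are placeholders, and this is not the PHIS-DW API itself), a de-identified, Observation-only bundle could be posted to a FHIR endpoint like this:

# Sketch only: the server URL and coding below are hypothetical.
import requests

bundle = {
    "resourceType": "Bundle",
    "type": "collection",
    "entry": [{
        "resource": {
            "resourceType": "Observation",
            "status": "final",
            # De-identified surveillance observation: no Patient reference attached
            "code": {"coding": [{"system": "http://loinc.org", "code": "94531-1",
                                 "display": "SARS-CoV-2 RNA panel"}]},
            "valueString": "positive",
        }
    }],
}

resp = requests.post("http://localhost:8080/fhir/Bundle", json=bundle,
                     headers={"Content-Type": "application/fhir+json"})
print(resp.status_code)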

BTW, have you seen Drishti, our framework for FHIR-based behavioural interventions?

Categories
Machine Learning

Machine Learning in population health: Creating conditions that ensure good health.

Machine Learning (ML) in healthcare has an affinity for patient-centred care and individual-level predictions. Population health deals with health outcomes in a group of individuals and the distribution of those outcomes within the group. Individual health and population health are not divergent, but they are not the same either and may require different approaches. ML receives far less attention in public health applications.

The skills available to public health organizations to transition towards integrated data analytics are limited. Hence, the latest advances in ML and artificial intelligence (AI) have made very little impact on public health analytics and decision making. The biggest barrier is the lack of expertise in conceiving and implementing data warehouse systems for public health that can integrate the health information systems currently in use.

The data in public health organizations are generally scattered across disparate information systems within the region or even within the same organization. Efficient and effective health data warehousing requires a common data model for integrated data analytics. The OHDSI – OMOP Common Data Model allows for the systematic analysis of disparate observational databases and EMRs. However, the emphasis is on patient-level prediction. Research on how to move from patient-centred data models to observation-centred population health data models is the need of the hour.

We are making a difficult yet important transition towards integrated health by providing new ways of delivering services in local communities through local health teams. The emphasis is clearly on digital health, and we need efficient and effective digital tools and techniques. Motivated by the Ontario Health Teams’ digital strategy, I have been working on tools to support this transition.

Hephestus is a software tool for ETL (Extract-Transform-Load) for open-source EMR systems such as OSCAR EMR and national datasets such as the Discharge Abstract Database (DAD). It is organized into modules to allow code reuse. Hephestus uses SQLAlchemy for database connections and auto-mapping tables to classes, and Bonobo for managing the ETL pipeline. Hephestus aims to support common machine learning workflows such as model building with Apache Spark and model deployment using serverless architecture. I am also working on FHIR-based standards for ML model deployments.

Hephestus is a work in progress and any help will be highly appreciated. Hephestus is an open-source project on GitHub. If you are looking for an open-source project to contribute to during Hacktoberfest, consider Hephestus!

Categories
OpenSource, Resources

Hephestus: Health data warehousing tool for public health and clinical research

Originally published by Bell Eapen at nuchange.ca on November 3, 2018. If you have some feedback, reach out to the author on Twitter, LinkedIn, or GitHub.

Health data warehousing is becoming an important requirement for deriving knowledge from the vast amount of health data that healthcare organizations collect. A data warehouse is vital for collaborative and predictive analytics. The first step in designing a data warehouse is to decide on a suitable data model. This is followed by the extract-transform-load (ETL) process that converts source data to the new data model amenable to analytics.

The OHDSI – OMOP Common Data Model is one such data model that allows for the systematic analysis of disparate observational databases and EMRs. The data from diverse systems need to be extracted, transformed, and loaded onto a CDM database. Once a database has been converted to the OMOP CDM, evidence can be generated using standardized analytics tools that are already available.

Each data source requires customized ETL tools for this conversion from the source data to the CDM. The OHDSI ecosystem has made some tools available to help with the ETL process, such as White Rabbit and Rabbit-In-a-Hat. However, the health data warehousing process is still challenging because of the variability of source databases in terms of structure and implementation.

Hephestus is an open-source Python tool for this ETL process, organized into modules to allow code reuse between the various ETL tools for open-source EMR systems and data sources. Hephestus uses SQLAlchemy for database connections and auto-mapping tables to classes, and Bonobo for managing the ETL pipeline. The ultimate aim is to develop a tool that can translate the report from the OHDSI tools into an ETL script with minimal intervention. This is a good Python starter project for eHealth geeks.
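This is not Hephestus's actual code, but a minimal sketch of the pattern described above (SQLAlchemy automap feeding a Bonobo graph); the connection string, the demographic table, and its column names are placeholders:

# Sketch only: connection string and table/column names are placeholders.
import bonobo
from sqlalchemy import create_engine
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session

engine = create_engine("mysql+pymysql://user:password@localhost/source_emr")
Base = automap_base()
Base.prepare(engine, reflect=True)        # SQLAlchemy 1.x style: map existing tables to classes
Demographic = Base.classes.demographic    # hypothetical source table

def extract():
    session = Session(engine)
    for row in session.query(Demographic):
        yield row
    session.close()

def transform(row):
    # Map a source row to a simplified CDM-style person record
    yield {"person_id": row.demographic_no, "year_of_birth": row.year_of_birth}

def load(record):
    print(record)  # replace with an insert into the CDM database

graph = bonobo.Graph(extract, transform, load)

if __name__ == "__main__":
    bonobo.run(graph)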

Anyone anywhere in the world can build their own environment that can store patient-level observational health data, convert their data to OHDSI’s open community data standards (including the OMOP Common Data Model), run open-source analytics using the OHDSI toolkit, and collaborate in OHDSI research studies that advance our shared mission toward reliable evidence generation. Join the journey here!

Disclaimer: Hephestus is just my experiment and is not a part of the official OHDSI toolset.
