“Informing individual patient care with Big Data and predictive analytics is the Holy Grail these days”

As an intern at the US Department of Veterans Affairs in the ’80s, Lou Fiore noticed how unusual the organization was in its sophisticated handling of electronic medical records (EMR). Now Executive Director of the VA’s Massachusetts Veterans Epidemiology Research and Information Center and Professor of Medicine at the Boston University School of Medicine, Fiore talks to HealthTech Wire about what Europe could learn – even today – from the VA’s approach to data, and why he has decided to join the program board of the HIMSS Impact 17 event, November 20-21!

What is so special about the work the US Department of Veterans Affairs (VA) is doing in the field of Big Data?

The average insured person in the US stays in any given health plan for no more than three years; it’s not like England’s NHS, for example, where people are continuously covered. This fragments not just the care itself: as people change plans, they typically change physicians and medical record systems as well. The VA, however – being responsible for the care of former soldiers (‘veterans’) – is unique in the US in that it follows patients longitudinally and digitally throughout their lifespan, something that’s virtually impossible in the States outside of it!

As a result, the VA also has a very robust dataset for all its patients, going back many years – it has handled electronic medical records since I was an intern there, back in the ’80s. And that allows for some really cool things in terms of looking back – and then being able to predict forward.

What is the VA working on within the field of Big Data that is particularly relevant to the themes of this conference?

I head up a research team at the VA that uses EMR data to improve the patient care experience. My group has been working on repurposing these data to identify groups of patients with certain features in common – and, by studying how they’ve been treated and how they’re doing, to predict best practices going forward… using predictive analytics and Big Data to inform individual patient care. That, it has to be said, is the overarching goal of many different groups these days, not just my own… I think it’s kind of the Holy Grail!
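To make that concrete, here is a minimal sketch of the ‘look back, then predict forward’ idea: select a cohort of patients who share certain features, then summarize how each treatment they received worked out. Everything here – column names, codes, values – is hypothetical and illustrative, not the VA’s actual data model.

```python
# Illustrative only: identify a cohort of patients with features in common
# and compare outcomes across the treatments they received.
import pandas as pd

# Hypothetical flat extract from an EMR system (all names are made up).
records = pd.DataFrame({
    "patient_id":     [1, 2, 3, 4, 5, 6],
    "age":            [67, 72, 58, 70, 66, 74],
    "diagnosis_code": ["C34", "C34", "C50", "C34", "C34", "C50"],  # ICD-10
    "treatment":      ["A", "B", "A", "A", "B", "B"],
    "outcome":        [1, 0, 1, 1, 1, 0],  # 1 = responded to therapy
})

# "Look back": select patients with features in common –
# here, lung cancer (ICD-10 C34) and age 65 or older.
cohort = records[(records["diagnosis_code"] == "C34") & (records["age"] >= 65)]

# "Predict forward": the response rate per treatment within the cohort
# becomes a prior for the next patient who fits the same profile.
print(cohort.groupby("treatment")["outcome"].mean())
```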

Perhaps the best use case of this among the programs we’ve implemented is precision oncology. This involves taking a newly diagnosed patient’s tumor sample and analyzing the DNA in the cancer cells to identify mutations that are putatively causative of the cancer’s development.

By understanding what mutations the patient has, we can then target the downstream effects of those mutations using emerging targeted therapies. And we can expect better outcomes doing this than with traditional chemotherapy – which is, of course, not patient-specific but general for a whole population of patients. That, broadly, is the emerging field of precision oncology.

The problem we have in this field – and it applies as precision medicine advances across other disciplines – is that once you start learning enough about a patient to treat them as an individual, it becomes increasingly difficult to know what to do for each one: a clinician can’t possibly carry in their head an encyclopedia of individual mutation statuses and how to respond to each. The art of medicine really falls apart at this point, and what you need is computational resources to do what used to be done by the clinician!
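As an illustration of the kind of lookup such computational support might perform, here is a minimal sketch of matching a tumor’s mutations to candidate targeted therapies. The mutation-to-drug pairs are well-known published associations, but the table and function are hypothetical – not an actual VA decision-support system.

```python
# Illustrative only: the sort of encyclopedia a clinician cannot carry in
# their head, encoded as a lookup from mutation to candidate therapies.
ACTIONABLE_MUTATIONS = {
    "EGFR L858R": ["erlotinib", "osimertinib"],   # non-small cell lung cancer
    "BRAF V600E": ["vemurafenib", "dabrafenib"],  # melanoma
    "ALK fusion": ["crizotinib", "alectinib"],    # non-small cell lung cancer
}

def suggest_therapies(tumor_mutations):
    """Map mutations found in a tumor sample to candidate targeted therapies."""
    return {
        mutation: ACTIONABLE_MUTATIONS[mutation]
        for mutation in tumor_mutations
        if mutation in ACTIONABLE_MUTATIONS
    }

# A sequencing report may list several mutations; only some are actionable.
print(suggest_therapies(["EGFR L858R", "TP53 R175H"]))
# -> {'EGFR L858R': ['erlotinib', 'osimertinib']}
```

A real system would, of course, draw on curated knowledge bases and the accumulated outcomes data described above rather than a hard-coded table.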

What’s compelling about precision oncology, as a clinician, is that it makes a huge difference if you can treat a patient with a therapy that is specific to them, based on their tumor’s DNA; the likelihood of a response is far greater than with a general treatment.

So what are the obstacles to achieving the potential of precision medicine in this way?

To access these data, to analyze them, to understand them and to use them effectively in healthcare, you need a marriage of two very distinct worlds: clinical care (or operations) and research. The problems that need to be solved are owned by the clinical care and operations community, while the skills needed to access and understand the relevant data reside in the research community – and those two sides really don’t talk to each other. Clinical care doesn’t know how to use Big Data; meanwhile, researchers don’t know how to make it relevant to clinicians when they do use it and discover things.

What are you hoping the session you’re leading will achieve?

Well, I would hope that the session – “Big Data – what will be the impact on the medical profession and on healthcare provider organizations?” – will provide a forum to discuss some of the themes mentioned above. It will also, I hope, present a coherent discussion of how Big Data could impact patient care from the perspective of both clinicians and healthcare administrators. And it will examine the benefits and challenges of accessing and analyzing real-world data, offering ‘lessons learned’ along with best practices going forward.

We will get an opportunity to talk about both the research and the clinical care uses of Big Data, and to look at what data are available within an EMR, why you would want to access them, and what you can do with them – as a healthcare provider and as a clinician.

And it won’t be just theoretical: we’ll be looking at best-practice examples and at how delegates can make practical changes within their organizations when they return home.