Text Mining Radiology Reports: working group meeting in Edinburgh

Andreas Grivas and Beatrice Alex

At the beginning of March this year, we held a Healtex working group meeting for Text Mining Radiology Reports in the Bayes Centre in Edinburgh. This was an excellent opportunity to bring together teams with a shared focus on improving health services through a better understanding of radiology reports.

All teams brought different insights and thoughts to the table about challenges they face. We discovered that we have a lot of common ground but also differences in the way we process reports or deal with practical issues. The meeting started with several presentations from attendees which gave us a focus for our discussions.

Dr Beatrice Alex, who chaired the meeting, presented ongoing work at the Edinburgh Language Technology Group on text mining brain imaging reports for stroke type and other observations. She presented the EdIE-R system and a comparison with other machine learning methods for the initial step of recognising different types of named entities in brain imaging reports from the Edinburgh Stroke Study (Jackson et al., 2008) and NHS Tayside data. Dr William Whiteley, consultant neurologist, then presented the use case for this work: conducting large-scale epidemiological research using linked electronic healthcare records, e.g. Scotland-wide and in Generation Scotland. Dr Grant Mair, a radiologist, provided many useful practical insights into the process of writing radiology reports and summarising observations, and into the state of technology used in practice (e.g. speech transcription and checking). Dr Honghan Wu, HDR UK Fellow, then presented an overview of SemEHR, a transfer learning system, which he adapted to the same data as processed by EdIE-R.

We also heard from practitioners using text mining and natural language processing for radiology reports and other types of electronic healthcare records. Dr Peter Hall and Paul Mitchell from the Cancer Research UK Edinburgh Centre talked about their plans to use text mining for processing pathology reports and explained that one of their challenges is anonymisation to avoid accidental disclosure. Dr Adrian Parry-Jones, Honorary Consultant Neurologist at Salford Royal NHS Foundation Trust, introduced the group to a care bundle, delivered to clinicians via an app, aimed at reducing mortality in stroke patients. He also proposed ways in which text mining could help direct clinical care. We also heard from Dr Ewen Harrison about the work his group is involved in: they applied a rule-based and a deep learning method to identify mentions of gallstones in MRI scan reports, found the task to be tractable, and achieved high scores against a manually coded validation set. Prof Goran Nenadic, Director of the Healtex network, also informed the group about another Healtex working group focussed on data governance, which provides guidance on governance and data sharing issues.

Some broad topics we discussed were:

Standardising reports and their annotation

  • Making NHS formats for radiology reports consistent (possible applications for NLP/ML)
  • Linking annotations to ontologies, e.g. UMLS or SNOMED CT (possibly extending SNOMED CT UK)
  • Crowdsourcing labels (ethics application, inter-annotator agreement)

Experiment results

  • Comparing rule-based, machine learning and deep learning methods
  • Choosing sample size for statistical power
  • Metrics: the choice between positive predictive value (precision) and sensitivity (recall) depends on the end goal
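The metrics trade-off above can be made concrete with a short sketch. The labels below are invented for illustration; the function simply computes the two standard counts-based definitions (PPV = TP/(TP+FP), sensitivity = TP/(TP+FN)):

```python
def ppv_sensitivity(gold, predicted):
    """Positive predictive value (precision) and sensitivity (recall)
    for binary labels, e.g. whether a report mentions a finding."""
    tp = sum(1 for g, p in zip(gold, predicted) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, predicted) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, predicted) if g == 1 and p == 0)
    ppv = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    return ppv, sensitivity

# Invented example labels: 1 = finding present, 0 = absent
gold      = [1, 1, 0, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0, 1, 0]
ppv, sens = ppv_sensitivity(gold, predicted)  # 0.75, 0.75
```

A system tuned for high PPV avoids false alarms (useful when flagged reports trigger costly follow-up), while one tuned for high sensitivity avoids missed cases (useful for cohort identification in epidemiology).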

Data governance

  • Data access
  • Best practices in creating and sharing data
  • Working with systems on locked down infrastructure

Data privacy and ethics

  • Best practices for data anonymisation
  • Models not replacing human judgement

Tools and knowledge resources

  • UMLS
  • GATE
  • EdIE-R
  • SemEHR
  • Other text mining tools

As an outcome of the meeting, we discussed ways of working together to avoid duplicating effort. We agreed to share our code, systems and rules where possible, and decided to create a mailing list to keep in touch more easily. We also discussed the possibility of a state-of-the-art review on text mining radiology reports; the last comprehensive systematic review of NLP methods and tools supporting practical clinical applications in radiology is that of Pons et al. (2016).

Future goals in terms of text mining technology include extending previous models to work for additional target types and types of scans (transfer learning), as well as exploring summarisation (e.g. see Zhang et al., 2018 on summarising findings of radiology reports) and medical language simplification. Members of our working group will attend the HealTAC 2019 conference on 24–25 April in Cardiff, where we will present the goals of this group. We are also widening participation to other groups and individuals in the UK and worldwide.


We thank Healtex for funding this event.


Dr Beatrice Alex, Chancellor’s Fellow and Turing Fellow at the University of Edinburgh, leading the Edinburgh Language Technology Group


  • Dr William Whiteley, Senior Clinical Fellow and Consultant Neurologist, Centre for Clinical Brain Sciences, University of Edinburgh
  • Dr Grant Mair, Senior Clinical Lecturer in Neuroradiology and Radiologist, Centre for Clinical Brain Sciences, University of Edinburgh
  • Prof Goran Nenadic, Manchester Institute for Biotechnology, School of Computer Science, University of Manchester, Director of the HealTex Network
  • Dr Adrian Parry-Jones, NIHR Clinician Scientist at the University of Manchester and an Honorary Consultant Neurologist at Salford Royal NHS Foundation Trust
  • Dr Ewen Harrison, Senior Lecturer, General Surgery, University of Edinburgh and Consultant HPB / Transplant Surgeon, Royal Infirmary of Edinburgh
  • Cameron Fairfield, Edinburgh Surgery Online Clinical Research Fellow and PhD Student, Clinical Surgery, Royal Infirmary of Edinburgh
  • Dr Riinu Ots, Senior Data Manager, Surgical Informatics, Usher Institute
  • Dr Honghan Wu, HDR UK Fellow, Usher Institute, University of Edinburgh
  • Andreas Grivas, Research Assistant, Edinburgh Language Technology Group, Institute for Language, Cognition and Computation, School of Informatics, University of Edinburgh
  • Richard Tobin, Research Fellow, Edinburgh Language Technology Group, Institute for Language, Cognition and Computation, School of Informatics, University of Edinburgh
  • Paul Mitchell, IT Developer, Cancer Research UK Edinburgh Centre, University of Edinburgh
  • Dr Peter Hall, Senior Clinical Lecturer, Cancer Research UK Edinburgh Centre, University of Edinburgh 


Jackson, C., Crossland, L., Dennis, M., Wardlaw, J. and Sudlow, C. (2008). Assessing the impact of the requirement for explicit consent in a hospital-based stroke study. QJM: Monthly Journal of the Association of Physicians, 101(4), 281–289.

Pons, E., Braun, L.M., Hunink, M.M. and Kors, J.A. (2016). Natural language processing in radiology: a systematic review. Radiology, 279, 329–343. https://pubs.rsna.org/doi/10.1148/radiol.16142770

Zhang, Y., Ding, D.Y., Qian, T., Manning, C.D. and Langlotz, C.P. (2018). Learning to summarize radiology findings. In Proceedings of the Ninth International Workshop on Health Text Mining and Information Analysis, pp. 204–213. Association for Computational Linguistics, Brussels, Belgium. http://aclweb.org/anthology/W18-5623

Programming Historian Lesson on the Edinburgh Geoparser

The Programming Historian lesson on Geoparsing Text with the Edinburgh Geoparser was released yesterday. The Programming Historian site provides novice-friendly, peer-reviewed lessons that help humanists acquire skills in using different digital tools and techniques for research or teaching.

The lesson on the Edinburgh Geoparser is a step-by-step guide on how to download and set up the tool, how to geo-parse a text file and how to extract the geo-location information from the geoparser’s XML output into TSV format. We are hoping that anyone interested in mapping location mentions in text will try it out. We would like to thank the reviewers, Anouk Lang, Sarah Simpkin and Ian Milligan, for all of their useful comments and feedback.
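The XML-to-TSV step can be sketched in a few lines of Python. The fragment below is a hypothetical stand-in for the Geoparser's output, and the element and attribute names (`ent`, `part`, `lat`, `long`) are illustrative assumptions rather than the tool's exact schema; the lesson itself documents the real format:

```python
import csv
import io
import xml.etree.ElementTree as ET

# Hypothetical fragment standing in for the Geoparser's XML output;
# the real element and attribute names may differ from this sketch.
xml_output = """
<document>
  <standoff>
    <ent type="location" gazref="geonames:2650225" lat="55.95206" long="-3.19648">
      <parts><part>Edinburgh</part></parts>
    </ent>
    <ent type="location" gazref="geonames:2653822" lat="51.48158" long="-3.17909">
      <parts><part>Cardiff</part></parts>
    </ent>
  </standoff>
</document>
"""

root = ET.fromstring(xml_output)
buffer = io.StringIO()
writer = csv.writer(buffer, delimiter="\t")
writer.writerow(["place", "latitude", "longitude"])

# Keep only grounded location entities and write one TSV row each
for ent in root.iter("ent"):
    if ent.get("type") == "location" and ent.get("lat"):
        name = " ".join(part.text for part in ent.iter("part"))
        writer.writerow([name, ent.get("lat"), ent.get("long")])

tsv = buffer.getvalue()
```

The resulting TSV can then be loaded directly into mapping tools or spreadsheet software to plot the grounded place names.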

First Edinburgh Geoparser Workshop at the Digital Day of Ideas

Last week Beatrice Alex held the first workshop on using the Edinburgh Geoparser at the Digital Day of Ideas 2016. This was one in a set of hands-on workshops on using different tools and techniques relevant to Digital Humanities research, including visualisation in D3, Drupal, tweeting/blogging for academics and WordPress.

The Edinburgh Geoparser is a language processing tool designed to detect place name references in English text and ground them against an authoritative gazetteer so that they can be plotted on a map. It is operated via the command line. Given the event’s broad audience from the Humanities and Social Sciences, the workshop was targeted at participants with limited command line expertise.

The attendees were able to follow the material and made useful suggestions related to the inner workings of the tool. The workshop slides can be found here.

First Release of the Edinburgh Geoparser

The Edinburgh Geoparser (v1.0) was released under the University of Edinburgh GPL license on 18 December 2015.

It can now be used by other researchers in the field of text mining as well as scholars in the humanities and social sciences who would like to geoparse text and prefer to have more control over the tool.

More information on the Edinburgh Geoparser, its documentation, our publications about it and how to download it can be found here. An online demo of the Geoparser can be tested here.

We have used the Edinburgh Geoparser in many research projects and tailored it to different needs, for example to perform fine-grained geo-referencing for literature set in Edinburgh (Palimpsest), presented in the LitLong interface, to geo-reference volumes of the Survey of English Place Names (DEEP), and to geo-reference large historical collections related to commodity trading in the 19th-century British Empire (Trading Consequences). We adapted the geoparser to the ancient world for the GAP project, with its GapVis interface, and for Hestia Phase 2, which developed the interface further for use in undergraduate study of classical literature in translation. The geoparser has also been used in external research projects, including work by Prof Ian Gregory’s group on geo-referencing 19th-century newspapers.

We welcome suggestions and future collaboration, so please get in touch if you have ideas about how we should develop the software (balex AT staffmail DOT ed DOT ac DOT uk).