
Academic Publications

Academic publications across the topics of Emergency Medicine, Public Health, Artificial Intelligence, Ethics, and Cybersecurity. For articles that are not open access, please email for the full text.

21st Century Medicine and Emerging Biotechnological Syndromes: A Cross-Disciplinary Systematic Review of Novel Patient Presentations in the Age of Technology

Straw I, Rees G, Nachev P.

Now that we experience health and illness through so much technology (e.g. wearables, telemedicine, implanted devices), the medium is redefining our expression of symptoms, the observable signs of pathology and the range of diseases that may occur. Here, we systematically review all case reports describing illnesses related to digital technology in the past ten years, in order to identify novel biotechnological syndromes, map out new causal pathways of disease, and identify gaps in care that have disadvantaged a community of patients suffering from these digital complaints.

Cite as: Straw, I., Rees, G. & Nachev, P. 21st century medicine and emerging biotechnological syndromes: a cross-disciplinary systematic review of novel patient presentations in the age of technology. BMC Digit Health 1, 41 (2023). https://doi.org/10.1186/s44247-023-00044-x

Digital Technologies and Emerging Harms: Identifying the Risks of Technology-Facilitated Abuse on Young People in Clinical Settings

Straw I, Tanczer L.

Our research with safeguarding professionals seeks to (i) identify the emerging trends of technology-facilitated abuse affecting young people, (ii) uncover gaps in current safeguarding guidance, and (iii) form comprehensive recommendations for ensuring best practice when caring for patients harmed by technology. Safeguarding practitioners supporting patients in hospital and community settings were engaged in an initial focus group, which was then followed up with one-to-one interviews. Technology played a role in initiating and intensifying harm in a number of safeguarding cases, for example (1) TikTok challenges leading to an exacerbation of gender-based abuse, (2) the use of GPS tracking systems to enact physical assaults and (3) the creation of deepfakes (synthetic images) to manipulate individuals. Specific demographic groups appeared to be more at risk, including young women and girls, LGBTQ youth, religious and ethnic minorities and children with long hospital stays.

Cite as: Straw I, Tanczer L. 325 Digital technologies and emerging harms: identifying the risks of technology-facilitated abuse on young people in clinical settings. Archives of Disease in Childhood 2023;108:A54.

Medical Cyber Crises and Biotechnological Syndromes: A Multisite Clinical Simulation Study Focused on Digital Health Complaints

Straw I, Dobbin J, Luna Reaver D, Tanczer L.

Biotechnological syndromes refer to the illnesses that arise at the intersection of human physiology and digital technology. Implanted technologies can malfunction (eg, runaway pacemakers, hacked insulin pumps), and consumer technologies can be exploited to impose adverse health effects (eg, technology-facilitated abuse, hacks on epilepsy websites inducing seizures). Through a series of clinical simulation events, our study aimed to (1) evaluate the ability of physicians to respond to biotechnological syndromes, (2) explore gaps in training impeding effective patient care in digital cases, and (3) identify clinical cases due to digital technology arising in the population.

Cite as: Straw, I., Dobbin, J., Reaver, D. L., & Tanczer, L. (2023). Medical cyber crises and biotechnological syndromes: a multisite clinical simulation study focused on digital health complaints. The Lancet, 402, S88.

Safeguarding patients from technology-facilitated abuse in clinical settings: A narrative review

Straw I, Tanczer L. 

Safeguarding vulnerable patients is a key responsibility of healthcare professionals. Yet, existing clinical and patient management protocols are outdated as they do not address the emerging threats of technology-facilitated abuse. Lack of attention to the risk of technology-facilitated abuse can result in clinicians failing to protect vulnerable patients and may affect their care in several unexpected ways. We attempt to address this gap by evaluating the literature that is available to healthcare practitioners working with patients impacted by digitally enabled forms of harm.

Cite as: Straw I, Tanczer L. Safeguarding patients from technology-facilitated abuse in clinical settings: A narrative review. PLOS Digital Health, 4 Jan. 2022.

A Systematic Literature Review of the Use of Computational Text Analysis Methods in Intimate Partner Violence Research

Neubauer L, Straw I, Mariconti E, Tanczer L. 

Computational text mining methods are proposed as a useful methodological innovation in Intimate Partner Violence (IPV) research. Text mining can offer researchers access to existing or new datasets, sourced from social media or from IPV-related organisations, that would be too large to analyse manually. This article aims to give an overview of current work applying text mining methodologies in the study of IPV, as a starting point for researchers wanting to use such methods in their own work.

Cite as: Neubauer L, Straw I, Mariconti E, Tanczer L. ‘A Systematic Literature Review of the Use of Computational Text Analysis Methods in Intimate Partner Violence Research’. Journal of Family Violence, Mar. 2023. https://doi.org/10.1007/s10896-023-00517-7.

When Brain Devices Go Wrong: A Patient with a Malfunctioning Deep Brain Stimulator (DBS) Presents to the Emergency Department

Straw I, Ashworth C, Radford N.

A man in his 50s attended the emergency department with an acute deterioration in his Parkinson’s symptoms, presenting with limb rigidity, widespread tremor, choreiform dyskinesia, dysarthria, intense sadness and a severe occipital headache. After excluding common differentials for sudden-onset parkinsonism (eg, infection, medication change), an error on the patient’s deep brain stimulator was noted. The patient’s symptoms only resolved once he was transferred to the specialist centre so that the programmer could reset the device settings. 

Cite as: Straw I, Ashworth C, Radford N. ‘When Brain Devices Go Wrong: A Patient with a Malfunctioning Deep Brain Stimulator (DBS) Presents to the Emergency Department’. BMJ Case Reports CP, vol. 15, no. 12, Dec. 2022, p. e252305. https://doi.org/10.1136/bcr-2022-252305.

Representational Ethical Model Calibration

Carruthers R, Straw I, Ruffle J, Herron D, Nelson A, Bzdok D, Fernandez-Reyes D, Rees G, Nachev P.

Equity is widely held to be fundamental to the ethics of healthcare. In the context of clinical decision-making, it rests on the comparative fidelity of the intelligence - evidence-based or intuitive - guiding the management of each individual patient. Though brought to recent attention by the individuating power of contemporary machine learning, such epistemic equity arises in the context of any decision guidance, whether traditional or innovative. Yet no general framework for its quantification, let alone assurance, currently exists. Here we formulate epistemic equity in terms of model fidelity evaluated over learnt multidimensional representations of identity crafted to maximise the captured diversity of the population, introducing a comprehensive framework for Representational Ethical Model Calibration. 

Cite as: Carruthers R, Straw I, Ruffle JK, Herron D, Nelson A, Bzdok D, Fernandez-Reyes D, Rees G, Nachev P. Representational ethical model calibration. NPJ Digit Med. 2022 Nov 4;5(1):170. doi: 10.1038/s41746-022-00716-4. PMID: 36333390; PMCID: PMC9636204.

Investigating for Bias in Healthcare Algorithms: A Sex-Stratified Analysis of Supervised Machine Learning Models in Liver Disease Prediction

Straw I, Wu H.

The Indian Liver Patient Dataset (ILPD) is used extensively to create algorithms that predict liver disease. Given the existing research describing demographic inequities in liver disease diagnosis and management, these algorithms require scrutiny for potential biases. We address this overlooked issue by investigating ILPD models for sex bias.

Cite as: Straw I, Wu H. Investigating for bias in healthcare algorithms: a sex-stratified analysis of supervised machine learning models in liver disease prediction. BMJ Health & Care Informatics. 2022;29(1):e100457. doi: 10.1136/bmjhci-2021-100457.

The Automation of Bias in Medical Artificial Intelligence (AI): Decoding the Past to Create a Better Future

Isabel Straw

With the rapid integration of Artificial Intelligence (AI) into the healthcare field, the future care of our patients will depend on the decisions we make now. Demographic healthcare inequalities continue to persist worldwide and the impact of medical biases on different patient groups is still being uncovered by the research community. At a time when clinical AI systems are scaled up in response to the COVID-19 pandemic, the role of AI in exacerbating health disparities must be critically reviewed.

Cite as: Straw, Isabel. ‘The Automation of Bias in Medical Artificial Intelligence (AI): Decoding the Past to Create a Better Future’. Artificial Intelligence in Medicine, vol. 110, Nov. 2020, p. 101965. ScienceDirect, https://doi.org/10.1016/j.artmed.2020.101965.

Artificial Intelligence in Mental Health and the Biases of Language Based Models

Straw I, Callison-Burch C.

We present a study that evaluates bias in existing Natural Language Processing (NLP) models used in psychiatry and discuss how these biases may widen health inequalities. Our approach systematically evaluates each stage of model development to explore how biases arise from a clinical, data science and linguistic perspective. Our primary analysis of mental health terminology in GloVe and Word2Vec embeddings demonstrated significant biases with respect to religion, race, gender, nationality, sexuality and age. Our findings are relevant to professionals who wish to minimize the health inequalities that may arise as a result of AI and data-driven algorithms. We offer primary research identifying biases within these technologies and provide recommendations for avoiding these harms in the future.

Cite as: Straw, Isabel, and Chris Callison-Burch. ‘Artificial Intelligence in Mental Health and the Biases of Language Based Models’. PLOS ONE, vol. 15, no. 12, Dec. 2020, p. e0240376. PLoS Journals, https://doi.org/10.1371/journal.pone.0240376.

Ethical Implications of Emotion Mining in Medicine

Isabel Straw

Emotion mining is a novel technique used in Artificial Intelligence to extract and analyse the emotions of individuals and populations. In recent years these techniques have been applied to the field of Medical Artificial Intelligence (AI). The use of emotion mining has widespread implications for both individual patient health and for population health. Existing medical curriculums and ethical frameworks are not fit for purpose when it comes to these new e-health technologies. A new approach is suggested for appraising medical AI, for both clinicians and policy makers.

Cite as: Straw, Isabel. ‘Ethical Implications of Emotion Mining in Medicine’. Health Policy and Technology, vol. 10, no. 1, Mar. 2021, pp. 191–95. ScienceDirect, https://doi.org/10.1016/j.hlpt.2020.11.006.

Knife Crime in London, UK: A Youth Perspective

Straw I, Thornton M, Hassan F, Fiberesima H, Kokkinos N, Dobbin J.

From 2016 to 2017, the UK saw a 22% increase in crime involving knives and sharp weapons. Hospital admissions for stabbing injuries are now at their highest rate in 7 years, with a shift towards younger victims; the youngest victim was just 13 years old. We aimed to explore young people's knowledge of knife crime and its consequences to inform public health interventions. Participants were recruited voluntarily to join a first-aid session and discussion exploring perceptions of knife crime in an informal setting. We identified three key themes: distrust of public services, a lack of knowledge of the justice system, and differing perspectives on the underlying causes of crime. Anxieties around the role of the police were heightened because of the children's experience within the asylum system. Consistent with previous work, we found common misconceptions around the consequences of knife violence.

Cite as: Straw, Isabel, et al. ‘Knife Crime in London, UK: A Youth Perspective’. The Lancet, vol. 392, Nov. 2018, p. S85. ScienceDirect, https://doi.org/10.1016/S0140-6736(18)32212-8

Sexual Health of Female Asylum-Seeking Teenagers: Identifying Knowledge Gaps, Tackling Perceived Barriers, and Suggestions for Intervention

Straw I, Thornton M, Hassan F, Fiberesima H, Kokkinos N, Dobbin J.

At a time of record global forced displacement, the health needs of migrant populations pose a challenge to our existing health-care systems. Unaccompanied refugee children have specific health-care needs, particularly in terms of sexual health education. Our study aimed to explore health literacy in asylum-seeking girls in the UK to help identify the specific health needs and barriers to care that exist for this population. Our team of doctors, who volunteer with asylum-seeking children in London, conducted a series of educational sessions. The domains that emerged from our analysis included puberty, female anatomy and female genital mutilation, pregnancy, sexually transmitted infections, and relationships. Group discussion demonstrated vast gaps in knowledge of female anatomy, contraception, and sexually transmitted diseases, and, of particular importance, relationships and sexual consent.

Cite as: Straw, Isabel, et al. ‘Sexual Health of Female Asylum-Seeking Teenagers: Identifying Knowledge Gaps, Tackling Perceived Barriers, and Suggestions for Intervention’. The Lancet, vol. 392, Nov. 2018, p. S84. ScienceDirect, https://doi.org/10.1016/S0140-6736(18)32923-4


Grey Literature & Blog Posts

A selection of non-academic publications, informal interviews and blog posts

The Doctor-AI Relationship: Medicine in a Digital World

Isabel Straw

While most articles for Adventure Medic come from far away places, this one concerns another frontier altogether – Artificial Intelligence in medicine and its implications for the patients we care for, at home and overseas. Isabel Straw is an Emergency Medicine doctor, studying for a PhD at University College London on the ‘Artificial Intelligence and Healthcare’ programme. Previously, she worked for the United Nations, and in Syria, with the Syrian American Medical Society. In this fascinating essay, Isabel gives us an insight into the world of medical AI, including some of its ethical dilemmas, as well as her incredible career so far.

Cite as: Straw, Isabel. ‘The Doctor-AI Relationship: Medicine in a Digital World’. Adventure Medic, 15 Apr. 2022, https://www.theadventuremedic.com/features/the-doctor-ai-relationship-medicine-in-a-digital-world/.

I Am yet to Meet a Young Person That Has Not Experienced Some Form of Abuse via Tech

Isabel Straw

Technology-facilitated abuse describes the misuse of digital systems such as smartphones or other Internet-connected devices to monitor, control and harm individuals. In recent years increasing attention has been given to this phenomenon in school settings and the criminal justice system. Yet, an awareness in the healthcare sector is lacking. To address this gap, Dr Isabel Straw and Dr Leonie Tanczer from University College London (UCL) have been leading a new research project that examines technology-facilitated abuse in medical settings.

Cite as: Straw, Isabel. ‘“I Am yet to Meet a Young Person That Has Not Experienced Some Form of Abuse via Tech”’. Bentham’s Gaze, 1 Sept. 2022, https://www.benthamsgaze.org/2022/09/01/i-am-yet-to-meet-a-young-person-that-has-not-experienced-some-form-of-abuse-via-tech/.
