UW News

November 17, 2021

A chatbot can help doctors better understand incoming emergency department patients’ social needs


A row of green chairs in a waiting room. A team led by the University of Washington developed a chatbot that could ask emergency department visitors about social needs, including housing, food, access to medical care and physical safety. (Greg Rosenke/Unsplash)

Americans visit hospital emergency departments nearly 130 million times per year. Although these visits focus on treating acute illness and injury, doctors are increasingly finding that social needs — such as food and housing insecurity — place many patients at higher risk of getting sick and requiring emergency care.

To better serve patients and possibly prevent future emergency department visits, doctors need a way to assess the wider context behind incoming patients’ visits.

A team led by the University of Washington developed a chatbot that could ask emergency department visitors about social needs, including housing, food, access to medical care and physical safety. The team tested it with 41 patients in Seattle and Los Angeles emergency departments. Results show that two groups of patients preferred the chatbot over a standard web form: patients who had less than a middle school level of health literacy and patients who appreciated establishing emotional connections.

The team presented these results in July at the Conference on Conversational User Interfaces 2021.

“A few years ago there was a huge buzz around chatbots, and then people started realizing that maybe they aren’t meant for everything,” said co-senior author Gary Hsieh, a UW associate professor in the human centered design and engineering department. “We have been trying to figure out opportunities where having a chatbot would actually be meaningful and make sense.”

One promising opportunity came from collaborating with emergency department doctors.

“We want to understand the upstream issues that bring people into the emergency department. What are the social needs of the patients that we serve and how can we develop interventions that address these needs?” said co-author Dr. Herbert Duber, associate professor of emergency medicine in the UW School of Medicine. “For many people, including those with low literacy levels, a chatbot makes so much sense for collecting this information.”

The team designed a chatbot named HarborBot, after the hospitals where it was tested. HarborBot takes patients through a social needs survey that was developed by the Los Angeles County Health Agency. This survey asks patients 36 questions related to demographics, finances, employment, education, housing, food and utilities. It also asks questions related to physical safety, legal needs and access to care.

HarborBot is displayed on a tablet as a typical chat window with the patient’s and bot’s conversation showing up in different colored bubbles. HarborBot’s chat bubble shows animated ellipses when the bot is “typing.”

Based on findings from a previous study, the researchers improved the chatbot’s efficiency and social skills.

For efficiency, the researchers:

  • modified how long the bot appeared to be “typing” to match the length of the message it was about to display, so the bot would show the typing indicator for less time before a shorter response (see the sketch after this list)
  • added a question at the beginning of the interaction that would allow patients to stop HarborBot from reading all of its questions and responses aloud
  • placed the patients’ answer options in the same part of the screen so that patients, who were often tired or in pain, could respond without having to move their hands
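
For readers who want a concrete picture, here is a minimal TypeScript sketch of the first of these changes: a typing-indicator delay that scales with the length of the bot’s next message. The function names and timing constants are illustrative assumptions, not values from HarborBot’s implementation.

```typescript
// Hypothetical timing constants; the paper does not publish HarborBot's values.
const MS_PER_CHARACTER = 30;
const MIN_DELAY_MS = 400;
const MAX_DELAY_MS = 3000;

// Typing time scales with message length, clamped so short replies feel
// snappy and long ones don't stall the conversation.
function typingDelayMs(message: string): number {
  const raw = message.length * MS_PER_CHARACTER;
  return Math.min(MAX_DELAY_MS, Math.max(MIN_DELAY_MS, raw));
}

// Show the animated "typing" ellipsis for a length-proportional time,
// then render the bot's chat bubble.
async function showBotMessage(
  message: string,
  render: (text: string) => void
): Promise<void> {
  await new Promise<void>((resolve) => setTimeout(resolve, typingDelayMs(message)));
  render(message);
}
```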

To increase the empathy of the interaction, the team changed the bot’s reactions to better match the content of the questions and patient responses.

“Some of the questions are quite sensitive — there are questions about violence and sexual abuse — and the bot’s original responses said ‘Sure,’ ‘Great’ or ‘Thanks for sharing with us,’” said lead author Rafał Kocielnik, who completed this project as a doctoral student at the UW and is now a postdoctoral fellow at Caltech. “We tried tailoring its responses in a way that made them more appropriate for the content and specific to the patients’ responses, such as ‘That must be stressful, thank you for letting me know.’”
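
This kind of tailoring can be modeled as a per-question lookup rather than a single generic acknowledgment. The TypeScript sketch below is a guess at the structure; only the quoted reply text comes from the article, and every identifier is hypothetical.

```typescript
type Answer = "yes" | "no";

interface Question {
  text: string;
  sensitive: boolean;                       // e.g., safety, violence, finances
  acknowledgments?: Record<Answer, string>; // tailored replies for sensitive topics
}

const resourcesQuestion: Question = {
  text: "Do you have enough resources to pay for the very basics like food, housing, and medical care?",
  sensitive: true,
  acknowledgments: {
    no: "That must be stressful, thank you for letting me know.", // quoted in the article
    yes: "I'm glad to hear that, thank you for telling me.",      // invented for illustration
  },
};

// Fall back to a neutral reply for routine questions.
function acknowledge(question: Question, answer: Answer): string {
  return question.acknowledgments?.[answer] ?? "Got it.";
}
```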

Shown here is a question from the social needs survey (“Do you have enough resources to pay for the very basics like food, housing, and medical care?”) as a web form (left), in the original chatbot (middle) and in the improved chatbot (right). Where the original chatbot replied “Got it” and moved on to the next question, the improved chatbot responds, “That must be stressful, thank you for letting me know.” The improvements are labeled a) through e). a) The chatbot asked people if they wanted to continue hearing it read questions out loud. b) If they said no, the chatbot gave them an option to turn it back on later. c) The chatbot varied the amount of time it spent “typing” based on the length of its response. d) The team fixed the patient response area to one place on the screen. e) The chatbot’s responses were more specific to the context of the questions and the patient’s answers. (University of Washington)

After HarborBot received its upgrades, the researchers tested it at two emergency departments: one at Harborview Medical Center in Seattle and the other at the Harbor-UCLA Medical Center in Los Angeles.

For both locations, the researchers worked evening and overnight shifts (between 8 p.m. and 1 a.m. in Seattle and between 4 p.m. and 4 a.m. in Los Angeles). The teams collaborated with triage nurses to select potential participants. Then the researchers took participants to a visitor room where they could still hear hospital announcements. After the patients signed a consent form, they completed:

  • two surveys to gauge health literacy. One survey asks patients to pronounce health-related terms and the other asks patients to answer questions about the nutrition facts label on a pint of ice cream
  • the social needs survey as both a web form through SurveyGizmo and an interaction with HarborBot. These were given in a randomized order
  • evaluations for both the web form and HarborBot
  • a survey to gauge a patient’s desire for emotional interactions

At the end, the researchers interviewed the participants about the experience.

The team was not surprised to find that many people with low health literacy preferred the HarborBot version of the survey: 17 of the 20 low-literacy participants chose HarborBot, compared with 8 of the 21 high-literacy participants. People who valued emotional connection also liked the chatbot, but these two groups didn’t necessarily overlap.

“We thought maybe people with low health literacy would also be more in need of emotional interaction,” Kocielnik said. “But it turns out, the two groups are not strongly correlated.”

Of the 23 participants who scored high on the emotional interactions questionnaire, 18 chose HarborBot. Meanwhile, only 7 of the 18 participants who scored lower on that questionnaire preferred HarborBot.

This paper received an honorable mention award at the 2021 Conference on Conversational User Interfaces.

“It’s important to understand that chatbots can benefit people in different ways,” said co-author Raina Langevin, a UW doctoral student in human centered design and engineering.

In the future, the team plans to design a survey system that can tailor the experience to each user. For example, it could start out as a chatbot and then, based on how a user answers the questions, shift into more of a standard survey format.
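
As a thought experiment, such an adaptive system might watch simple behavioral signals and switch modes mid-survey. This speculative TypeScript sketch invents both the signals and the thresholds; the team has not published such a rule.

```typescript
type Mode = "chatbot" | "form";

// Invented signals a survey session might track.
interface SessionSignals {
  medianResponseSeconds: number;    // how quickly the patient answers
  dismissedAcknowledgments: number; // taps that skip past the bot's replies
}

// Start conversational; fall back to a compact form for patients who
// appear to be rushing through the questions.
function chooseMode(signals: SessionSignals): Mode {
  const rushing =
    signals.medianResponseSeconds < 2 || signals.dismissedAcknowledgments > 3;
  return rushing ? "form" : "chatbot";
}
```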

“Our vision would be some sort of kiosk people could use while they are waiting. Or even a QR code that people can scan with their own devices and then answer these questions,” Hsieh said. “Ultimately we want to connect people entering emergency departments as smoothly as possible with the resources that they need.”

Andrea Hartzler in the UW biomedical and health informatics department is the other co-senior author on this paper. Additional co-authors are Dr. Callan Fockele and Layla Anderson, both in the UW emergency medicine department; Amelia Wang and Alexander Argyle, both of whom completed this research as UW undergraduate students majoring in human centered design and engineering; Darwin Jones, who completed this research as a UW undergraduate student majoring in biomedical and health informatics; James George, Shota Akenaga and Dr. Kabir Yadav at the Harbor-UCLA Medical Center; and Dr. Dennis Hsieh at Contra Costa Health Plan. This research was funded by the National Institutes of Health.

For more information, contact Kocielnik at rafalko@caltech.edu, Langevin at rlangevi@uw.edu and Hsieh at garyhs@uw.edu. To speak to Herbert Duber, please contact Susan Gregg at sghanson@uw.edu.

Grant number: UL1 TR002319
