User Experience Research

Vital Vision: Computer Vision for First Responders

  • Timeframe: 10 weeks (Fall 2017)
  • Collaborators: Nouela Johnston • Magda Nilges • Michael P. Smith (Instructor)
  • Personal role: Research plan, user interviews, synthesis, concept development

Overview

A field research and design project exploring how computer vision might help first responders in their duties. Our team created a research plan, interviewed experts, designed an intervention and tested its viability.

Research Plan and Activities

To begin the project, we conducted secondary research to formulate a research question: How do first responders assess emergency situations?

Our team then prepared a study guide built around semi-structured interviews and guided storytelling. The interviews were conversational, following a loose script of questions drawn from our secondary research. The guided storytelling activity asked first responders to “walk us through a typical day” and to “walk us through a recent call.” This approach helped us identify both the critical moments of the job and the broader context of the work.

Our participant recruiting began with contacts in our own networks, and we asked these first responders to refer co-workers and friends. Through targeted emails, we contacted 13 first responders and interviewed 10. These participants included EMTs, firefighters, a street medic, and members of the Coast Guard.

First Responder Interviews
Interviews were conducted with first responders in diverse roles.

Research Findings and Synthesis

As recorded interviews accumulated, we organized and began to synthesize our data. Each significant piece of information was written on a sticky note. Together, we grouped the notes into patterns and themes, derived insights from each theme, and formulated actionable design principles.

Evidence board detail
Evidence was gathered on sticky notes and synthesized into patterns, themes, and insights.

We found that first responder roles vary greatly but share many common threads. While time spent responding to emergencies is considerable, first responders experience extensive downtime between calls. The cycle of work contains three main phases:

  • Downtime: First responders use this time for daily training, meetings, equipment maintenance, and “hive mind activities” such as eating, working out, playing games, and building personal connections.
  • En route: The trip to an emergency is a busy stage of a response. Here, first responders decide on appropriate tools and collaborate and communicate with each other and with other departments.
  • Emergency response: Assess the situation, provide aid, and hand off to another agency, department, or officer in charge.

Throughout the work cycle, documentation plays a crucial role. Someone is always in charge of keeping track of decisions, actions, vitals, aid administered, and any unusual activity, and this documentation passes through multiple hands and departments.

A synthesis of the general cycle of first responders’ duties.

Research Insights

Three insights in particular went on to inform our design direction and intervention.

Documentation is a necessary evil

“The people who put the information into the database are the people who went out... they’ve gotten beat up in the elements for how many hours in the cold and wet and now they’re expected to come back and make a thorough report of what they’ve done”

Quality documentation creates a sense of order

“Garbage in, garbage out. If the reports are badly written, they get thrown out”

“So all of that is tracked so on a more macro scale we can see those trends... this is the average response time”

The chain of command limits responsibility and decreases stress

Responders are one part of a larger system, and every type of first responder relies on other teams to get the job done. Every participant mentioned handing a case off to another team or working closely with another branch of government responders to respond to or accurately assess an emergency situation.

There’s a clear need to update documentation systems. Documentation goes through multiple hands using different systems. Nothing about the documentation is universal, yet the systems overlap, leading to redundant, duplicated work. The quality of these tools can be low, with one responder describing the Sun Pro system as “30 years out of date, 30 years ago.” Faulty tools can also lead to confusion, such as two devices reading conflicting vitals. These issues must be resolved by hand for each call.

Design Principles

These insights and others were synthesized into actionable design principles.

Support flexible roles in critical moments. There’s no typical call. Responders have to adapt on the spot.

Support order. There’s a strong hierarchy of command and procedure that needs to be supported.

Be aware of existing processes and procedures. There’s a unique, interdependent system. We can’t reinvent the system - there’s too much to change all at once. Any design solution needs to fit into this larger system.

Do not hinder in critical moments. We need to appreciate how valuable time and attention can be in emergency contexts.

Facilitate clear communication and collaboration. Everyone wants to collaborate, but the problems are often in communication.

With the opportunities uncovered through research, we defined a clear product goal: improve documentation through computer vision. We proposed a device that can track vitals and take notes for documentation purposes.
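
To make the concept more concrete, here is a minimal sketch, not part of the project’s deliverables, of how such a device’s output might be structured: a timestamped, per-incident log of vitals and interventions. All names and fields below are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical data shapes for the concept: each vital sign or action the
# device recognizes becomes a timestamped entry in a per-incident log.
@dataclass
class LogEntry:
    timestamp: datetime
    kind: str           # e.g. "vital", "intervention", "note"
    description: str

@dataclass
class IncidentLog:
    incident_id: str
    entries: list = field(default_factory=list)

    def record(self, kind: str, description: str) -> None:
        self.entries.append(LogEntry(datetime.now(timezone.utc), kind, description))

# Entries the device might capture automatically during a response.
log = IncidentLog("incident-001")
log.record("vital", "heart rate 92 bpm")
log.record("intervention", "tourniquet applied, left leg")
log.record("note", "patient conscious and responsive")
```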

Prototypes and Validation

To arrive at this product and validate our approach, we began by exploring several product concepts. Given the large proportion of first responders’ time spent on documentation, as well as its presence in every participant’s work, we sought to improve this process. A requirement of the project was that any product outcome apply computer vision technology.

Behavioral prototype made with a battery pack and a Raspberry Pi
Behavioral prototype made with a battery pack and a Raspberry Pi computer. This "magic" device helped us discover user behavior without concern for the specific form of the object.

With our design principles in mind, we created a behavioral prototype to explore how a computer vision device could assist in the documentation process. We wanted to know how first responders would use a “magic” device and what functions they would expect it to provide. Our first prototype was nonfunctional but built with plausible hardware: a battery pack, a Raspberry Pi computer, and a webcam. The hardware was meant to look technical enough that people wouldn’t get caught up in how it worked but would take it at face value, letting us see how they responded to it.

User test in progress
Behavioral prototype testing, with the first responder's data recorded in real time in our own handwriting.

We tested the prototype by creating a hypothetical emergency situation in which someone had been stabbed. The first responder used the device to scan the patient, which then provided notes on a tablet. A member of the research team added vitals and injury notes to an image on the tablet in real time, then handed it to our participant. We observed the participant and later asked how the device’s information would be used in his documentation. He expected the prototype to record what was done to the patient, including which drugs were given and what kind of gauze was applied. This information could be handed off to doctors and used both for providing care and for later reports.

Prototype Test Findings

Over several tests of the initial behavioral prototype, first responders valued its potential to minimize documentation work. We saw different expectations and ways of using the prototype. One responder only stabilized the victim and left basic notes for doctors, which concluded his involvement. Another tester had to write up a report of the entire event based on multiple sets of notes; he was more interested in the logging and timekeeping aspects of the design.

Some features were particularly valued.

  • The timekeeping log was most important for documentation and later care; a brief sketch of such a log follows this list. When using a tourniquet, future treatment directly depends on how long the tourniquet has been applied. Removing an old tourniquet suddenly can release dangerously decayed fluids back into the bloodstream, but a more recently applied tourniquet can be removed without concern.
  • One tester wanted to speak to the device and get feedback. Even though the device works automatically, he wanted audible confirmation.
  • Collecting simple physical documentation was a valued add-on. The ability to scan an ID or contact information can save valuable time and aid coordination with other providers.
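
To illustrate the timekeeping point above, here is a minimal, hypothetical sketch of how a timestamped log could answer the question responders cared about most: how long ago a tourniquet was applied. The event data and function are illustrative assumptions, not part of the tested prototype.

```python
from datetime import datetime, timezone

# Hypothetical (timestamp, event) pairs as the device might record them.
events = [
    (datetime(2017, 11, 4, 14, 2, tzinfo=timezone.utc), "tourniquet applied, left leg"),
    (datetime(2017, 11, 4, 14, 9, tzinfo=timezone.utc), "gauze applied, abdomen"),
]

def time_since(events, keyword, now=None):
    """Elapsed time since the most recent event whose description contains `keyword`."""
    now = now or datetime.now(timezone.utc)
    matches = [ts for ts, desc in events if keyword in desc]
    return now - max(matches) if matches else None

# How long has the tourniquet been on? This figure drives later treatment decisions.
elapsed = time_since(events, "tourniquet",
                     now=datetime(2017, 11, 4, 14, 40, tzinfo=timezone.utc))
print(elapsed)  # 0:38:00
```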

The form factor of the first prototype didn’t work: nobody wanted to hold the device, and in the field there’s no good place to set it down. Because medical emergencies can be messy, a small, easy-to-clean device is important. Based on this feedback, the next iteration was made smaller and affixed to the body.

Prototype Version 2
The second behavioral prototype was attached to uniforms. We used a skate bearing with a magnet to hold it in place.

Outcomes

In interviews after testing the behavioral prototypes, we found that saving time was a major benefit. Having more time in the moment is like having another team member. Most importantly, the product allows an emergency responder to focus on the situation rather than on documentation.

The proposed product aids focus, which diminishes miscommunication. By partially automating documentation, a responder can concentrate on the situation at hand. Because the product documents many actions itself, it also reduces the need for handwriting, making communication clearer. Since automated documentation can be made to match any type of form, responders do not have to worry about whether this technology matches what other providers, such as hospitals, use. Rather than create a new technology ecosystem, the product integrates with existing systems.
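
As a sketch of that integration claim, hypothetical and not taken from the project, the same canonical set of log entries could be rendered into whatever layout a receiving department expects. The formats below are purely illustrative.

```python
# Hypothetical canonical entries: (time, type, detail).
entries = [
    ("14:02", "intervention", "tourniquet applied, left leg"),
    ("14:05", "vital", "heart rate 92 bpm"),
]

def render_narrative(entries):
    # Narrative style, e.g. for a run report.
    return " ".join(f"At {t}, {detail}." for t, _, detail in entries)

def render_table(entries):
    # Tabular style, e.g. for a hospital hand-off form.
    header = f"{'TIME':<8}{'TYPE':<14}DETAIL"
    rows = [f"{t:<8}{kind:<14}{detail}" for t, kind, detail in entries]
    return "\n".join([header] + rows)

print(render_narrative(entries))
print(render_table(entries))
```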

Next Steps

With a validated product concept and an understanding of the problem’s context, next steps would include designing specific interactions and planning for the technology. In its current form, this project is a high-level concept, built on the assumption that improvements in computer vision will make the proposed functions possible. In reality, technical limitations will determine which specific tasks can be automated, and the actual level of technical maturity will shape the feature set and the level of interaction between the product and first responders.