UW News

April 18, 2018

Screen reader plus keyboard helps blind, low-vision users browse modern webpages

Browsing through offerings on Airbnb means clicking on rows of photos to compare options from prospective hosts. This kind of table-based navigation is increasingly central to our digital lives – but it can be tedious or impossible for people who are blind or have low vision to navigate these modern webpages using traditional screen readers.

A new approach developed by engineers at the University of Washington and Carnegie Mellon University uses the keyboard as a two-dimensional way to access tables, maps and nested lists. Results to be presented April 25 at the CHI 2018 conference in Montreal find this tool lets blind and low-vision users navigate these kinds of sites much more successfully than screen readers alone.

A mockup shows how a user could press keys to select a top-level menu, submenu, and then click through options on a nested list to book a sightseeing activity through Airbnb. University of Washington

“We’re not trying to replace screen readers, or the things that they do really well,” said senior author Jennifer Mankoff, a professor in the UW’s Paul G. Allen School of Computer Science & Engineering. “But tables are one place that it’s possible to do better. This study demonstrates that we can use the keyboard to bring tangible, structured information back, and the benefits are enormous.”

The new tool, Spatial Recognition Interaction Techniques, or SPRITEs, maps different parts of the keyboard to areas or functions on the screen. A research trial asked 10 people, eight of whom were blind and two of whom had low vision, to complete a series of tasks using their favorite screen reader technology, and then using that technology plus SPRITEs. After a 15-minute tutorial, three times as many participants were able to complete spatial web-browsing tasks within the given time limit using SPRITEs, even though all were experienced with screen readers.

The SPRITEs tool uses keys to navigate a webpage. The top three rows activate menu and submenu items. The keys along the top row and outside edges act as horizontal and vertical coordinates for a table or map. University of Washington

The tool has users press keys to prompt the screen reader to move to certain parts of the website. For instance, the number keys along the top of the keyboard map to menu buttons. Pressing a number key twice opens that menu item’s submenu, and the top row of letters then lets the user select each item in the submenu. For tables and maps, the keys on the outside edge of the keyboard act like coordinates that let the user navigate to different areas of the two-dimensional feature.

Tapping a number key might open an icon for each Airbnb menu option, for example. Then tapping the letter “u” could read out the entry that says whether a host will accept pets. (The Airbnb example illustrates how the system could work; the system’s current implementation is confined to wiki-style webpages.)
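As an illustration of the idea rather than the SPRITEs implementation itself, the minimal sketch below maps number keys along the top of the keyboard to table columns and keys along the left edge to table rows, then announces the selected cell through an ARIA live region so a screen reader can speak it. The specific key assignments and the announce() helper are hypothetical.

```typescript
// Illustrative sketch only: map edge keys to table coordinates and announce the cell.
const COLUMN_KEYS = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "0"]; // top row -> columns
const ROW_KEYS = ["q", "a", "z", "w", "s", "x"];                        // left edge -> rows

let selectedColumn: number | null = null;

// Hypothetical hook into assistive technology: write to an ARIA live region
// so a screen reader reads the text aloud.
function announce(text: string): void {
  const live = document.getElementById("sr-live");
  if (live) live.textContent = text;
}

document.addEventListener("keydown", (event: KeyboardEvent) => {
  const table = document.querySelector("table");
  if (!table) return;

  const columnIndex = COLUMN_KEYS.indexOf(event.key);
  if (columnIndex !== -1) {
    selectedColumn = columnIndex;                      // a top-row key picks a column
    announce(`Column ${columnIndex + 1} selected`);
    return;
  }

  const rowIndex = ROW_KEYS.indexOf(event.key);
  if (rowIndex !== -1 && selectedColumn !== null) {
    const cell = table.rows[rowIndex]?.cells[selectedColumn];
    if (cell) announce(cell.textContent ?? "empty cell"); // an edge key reads that row's cell
  }
});
```

A page using this sketch would only need a table and an element with id "sr-live" marked aria-live="polite"; the user could then jump straight to, say, the "pets allowed" column without stepping through every cell linearly.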

“Rather than having to browse linearly through all the options, our tool lets people learn the structure of the site and then go right there,” Mankoff said. “You can learn which part of the keyboard you need to jump right down and check, say, whether dogs are allowed.”

Most of the test participants couldn’t complete a task such as finding an item in a submenu or finding specific information in a table using their favorite screen reader, but could do so using SPRITEs.

More study participants could complete tasks involving menus, tables and maps by using SPRITEs (orange bars) compared to using a screen reader alone (blue bars). University of Washington

“A lot more people were able to understand the structure of the webpage if we gave them a tactile feedback,” said co-author Rushil Khurana, a doctoral student at Carnegie Mellon University who conducted the tests in Pittsburgh. “We’re not trying to replace the screen reader, we’re trying to work in conjunction with it.”

For straightforward text-based tasks, such as finding a given section header, counting the headings on a page or finding a specific word, participants were able to complete the tasks successfully with either tool.

SPRITEs is one of a suite of tools that Mankoff’s group is developing to help visually impaired users navigate items on a two-dimensional screen. An ethnographic study in 2016 led by doctoral student Mark Baldwin and faculty member Gillian Hayes, both at the University of California, Irvine, observed about a dozen students over four months while they learned to use accessible computing tools, in order to find areas for improvement in screen reading technology.

Now that the team has developed and tested SPRITEs, it plans to make the system more robust for any website and then add it to WebAnywhere, a free, online screen reader developed at the UW. Adding SPRITEs would let users navigate with their keyboard while using the WebAnywhere plugin to read information displayed on a webpage. The team also plans to develop a similar technique that would augment screen-reading technology on mobile devices.

“We hope to deploy something that will make a difference in people’s lives,” Mankoff said.

Other co-authors of the paper presented at the CHI meeting are Duncan McIsaac and Elliot Lockerman at Carnegie Mellon University. The research was funded by the U.S. Department of Health and Human Services.

###

For more information, contact Mankoff at jmankoff@cs.washington.edu.

HHS grant: 90DP5004-01-00
