Presentation Summaries


Overview of Accessibility Issues for Online Learning

Presented by Sheryl Burgstahler, University of Washington

I taught my first online class in 1995, before the internet was widely used. This was a class on adaptive technology for people with disabilities. I taught the class with Professor Norm Coombs, who is blind. We took steps to demonstrate that it is possible to design an online course that's accessible to any potential student, including those with disabilities. Although the digital tools are different and more complex, I strive to reach this goal in the online classes I teach today.

According to the U.S. Department of Justice and the Office for Civil Rights of the U.S. Department of Education, “accessible” means “a person with a disability is afforded the opportunity to acquire the same information, engage in the same interactions, and enjoy the same services as a person without a disability in an equally effective and equally integrated manner.”

There are two approaches to making our campuses accessible: accommodations and universal design (UD). Accommodations are reactive: a product or environment is adapted to make it more accessible to an individual who finds it inaccessible (e.g., captioning a video when a student with a hearing impairment requests it). UD is a proactive approach that makes all aspects of a product or environment as accessible as possible as it is being designed. As defined by North Carolina State University’s Center for Universal Design, UD is “the design of products and environments to be usable by all people, to the greatest extent possible, without the need for adaptation or specialized design.” A building entrance that is technically accessible might have a separate ramp for people who use wheelchairs or cannot use the stairs, while an entrance that is universally designed might have one wide, gently sloping entrance that everyone entering the building uses. Universally designed products and environments are accessible, usable, and inclusive. Universally designed technology builds in accessibility features, is flexible, and is compatible with assistive technology.

Ability exists on a continuum, where all individuals are more or less able to see, hear, walk, read print, communicate verbally, tune out distractions, learn, or manage their health. Regardless of where each of a student’s abilities falls on this continuum, and regardless of whether they disclose a disability or request accommodations, we want to ensure that they have access to the classes we teach and the resources we share.

Postsecondary efforts to include students with disabilities typically focus on accommodations. At the UW, we remediate over 30,000 PDFs and caption over 60 hours of video each quarter as accommodations for students. If faculty designed their classes with universal design in mind, these numbers would shrink because documents would be created in accessible formats and videos would be captioned for the benefit of everyone; students wouldn’t need to be individually accommodated. UD helps more than just people with disabilities: sloped entrances benefit people moving carts, and captions help those learning English or watching videos in noisy environments.

UD values diversity, equity, and inclusion and can be implemented incrementally. Universal design of instruction (UDI) focuses on benefits to all students, promotes good teaching practice, does not lower academic standards, and minimizes the need for accommodations. UDI can be applied to all aspects of instruction, including class climate, interactions, physical environments and products, delivery methods, information resources and technology, feedback, and assessment. For specific tips on designing an accessible course, follow the 20 Tips for Teaching an Accessible Online Course. Other resources can be found in DO-IT’s Center for Universal Design in Education (CUDE).


IT Accessibility

Presented by Terrill Thompson, University of Washington

How do we overcome large barriers? We innovate, and we refine our innovations over time until they’re better and more inclusive. Throughout history, innovation has often initially excluded groups of people. For example, the Gutenberg printing press made mass printing possible in 1452, but print remained inaccessible to people who are unable to see for nearly four centuries (Braille was invented in 1829, and IBM introduced the first electronic screen reader in 1986). Similarly, television appeared in the 1920s, but the first captions for people who are deaf or hard of hearing didn’t appear until 1972, and audio description for people who are blind followed in 1988.

In contrast, HTML included accessibility features from the beginning (e.g., alt text for images, hierarchical heading tags for document structure), demonstrating that it is possible to innovate without erecting barriers.

When we’re creating digital content such as web pages or online documents, we may envision our typical user as an able-bodied person using a desktop computer. In reality, users rely on a wide variety of technologies to access the web, including assistive technologies and mobile devices. Everyone has a unique combination of abilities when it comes to seeing, hearing, or using a mouse or keyboard, and people use a wide variety of technology and software tools to access information online. But are digital learning environments always accessible to or usable by students or instructors using assistive technology? To ensure our digital resources are accessible, designers, developers, and content authors must understand that users are technologically diverse and must familiarize themselves with a few simple accessibility standards, tools, and techniques. One simple test is to try navigating your own online resources (e.g., websites, software, assessment tools) without a mouse (nomouse.org). HTML websites, rich web applications, Microsoft Office documents, and Adobe PDF files can all be accessible to all users, but only if they are designed and created with accessibility in mind.
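Such checks can also be partly automated. Below is a minimal, illustrative sketch in TypeScript (not a full WCAG audit; the selectors and messages are only examples) that flags two of the most common problems on a page: images without alt text and skipped heading levels. It assumes a browser context, such as a page’s bundled scripts or the developer console after compilation.

```typescript
// Minimal, illustrative page check (not a full WCAG audit).
// Flag images that lack an alt attribute.
document.querySelectorAll('img:not([alt])').forEach((img) => {
  console.warn('Image has no alt attribute:', img);
});

// Flag skipped heading levels (e.g., an h2 followed directly by an h4).
const headings = Array.from(
  document.querySelectorAll<HTMLHeadingElement>('h1, h2, h3, h4, h5, h6')
);
let previousLevel = 0;
for (const heading of headings) {
  const level = Number(heading.tagName.charAt(1)); // "H2" -> 2
  if (previousLevel !== 0 && level > previousLevel + 1) {
    console.warn(`Heading jumps from h${previousLevel} to h${level}:`, heading);
  }
  previousLevel = level;
}
```

Automated checks like this catch only a fraction of accessibility issues; keyboard-only navigation and testing with a screen reader remain essential.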

Most students now interact with a learning management system (LMS) for accessing course materials, engaging in class discussions, turning in assignments, completing assessments, etc. Most LMSs have reasonably good accessibility. However, each educator must keep accessibility in mind as they select plug-ins and create or upload course content. Many students and professionals also interact with web conferencing, video, and collaboration tools; these tools, too, need to be accessible and easy to use for all.

The most common guidelines for designing accessible technology are the Web Content Accessibility Guidelines (WCAG), published by the World Wide Web Consortium (W3C). WCAG 2.0 (2008) is organized around four main principles: content must be perceivable, operable, understandable, and robust. Each of these principles is elaborated by more specific guidelines, and those are further defined by testable success criteria, each assigned Level A, AA, or AAA in descending order of priority. WCAG 2.0 Level AA is widely identified in legal settlements, resolutions, and policies as the expected level of accessibility for websites.

If websites include rich, dynamic content (as opposed to static materials), ensuring their accessibility will likely depend on Accessible Rich Internet Applications (ARIA), a W3C specification that supplements HTML with attributes that communicate the roles, states, and properties of user interface elements to assistive technologies. ARIA answers questions like “What is this?”, “How do I use it?”, “Is it on/selected/expanded/collapsed?”, and “What just happened?” The W3C maintains an extensive set of design patterns for common web widgets within its WAI-ARIA Authoring Practices document (ref). Web applications that include any of the components defined by the W3C should implement the recommended design patterns in order to ensure that users encounter consistent, reliable user interfaces. Otherwise, users (especially keyboard users and assistive technology users) have to learn an entirely new interface every time they visit a new website.
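As one concrete illustration, the W3C disclosure (show/hide) pattern pairs a native button with the aria-expanded and aria-controls attributes. The TypeScript sketch below assumes a button and panel with the hypothetical ids syllabus-toggle and syllabus-panel; it is meant to show the shape of the pattern rather than serve as a production-ready widget.

```typescript
// Minimal sketch of the WAI-ARIA disclosure pattern: a native <button>
// toggles a content region and reports its state via aria-expanded.
// The element ids below are hypothetical examples.
const toggle = document.querySelector<HTMLButtonElement>('#syllabus-toggle');
const panel = document.querySelector<HTMLElement>('#syllabus-panel');

if (toggle && panel) {
  // Initial collapsed state, announced to assistive technologies.
  toggle.setAttribute('aria-expanded', 'false');
  toggle.setAttribute('aria-controls', 'syllabus-panel');
  panel.hidden = true;

  // A native button is keyboard-operable by default (Enter/Space),
  // so only a click handler is needed.
  toggle.addEventListener('click', () => {
    const expanded = toggle.getAttribute('aria-expanded') === 'true';
    toggle.setAttribute('aria-expanded', String(!expanded));
    panel.hidden = expanded; // hide when collapsing, show when expanding
  });
}
```

Because the control is a native button, keyboard focus and activation come for free, which is exactly the consistency the design patterns aim to preserve.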

For more information about IT accessibility, consult the following resources:


Cyberlearning for All

Presented by Richard Ladner, University of Washington

What is Cyberlearning? According to the Center for Innovative Research in Cyberlearning (CIRCL), cyberlearning “applies scientific insight about how people learn, leverages emerging technologies, designs transformative learning activities, engages teachers and other practitioners, measures deeper learning outcomes, and emphasizes continuous improvement.” In looking over this description, I found it needed something more, so I’ve added another focus: it supports cyberlearning for all. Cyberlearning is about people, particularly students, and they come with a wide variety of abilities.

I am a professor emeritus at the University of Washington, and I’ve been on the faculty since 1971. I have seen the growth of computer science over the past 48+ years. For the past 15 years, my focus has been on accessibility research and two collections of grants: AccessComputing and AccessCSforAll. My accessibility research in learning has focused on K-12 and college levels, as can be seen in the following projects: Tactile Graphics, ASL-STEM Forum, ClassInFocus, BraillePlay, Blocks4All, and Accessible Computer Science Principles.

There are a lot of students with disabilities. The Individuals with Disabilities Education Act (IDEA) covers about 13% of K-12 students nationally. These students have Individualized Education Programs (IEPs) that establish their educational goals and identify the accommodations they need to reach those goals. In addition to IDEA, about 2% of K-12 students have disabilities covered by Section 504 of the Rehabilitation Act. These students have the same educational goals as mainstream students but require accommodations to ensure access to the curriculum. In total, about 15% of K-12 students in the US have identified disabilities. In Washington State the percentages are higher, with 13.8% IDEA students and 3.2% Section 504 students, adding up to about 17% of the 1.1 million K-12 students in Washington State public schools. In higher education, 11% of undergraduate students and 5.3% of graduate students have disabilities.

The biggest barriers to education are teachers’ and administrators’ attitudes. Students with disabilities were historically excluded, though more recently they have been included through accommodations and the application of Universal Design for Learning (UDL). Nonetheless, the IEP process can lead educators to set a low bar for the educational goals of their students with disabilities. Attitudinal barriers for students with disabilities can come from low expectations and a focus on compliance rather than on welcoming students as part of a diverse student body. Technology is often a barrier because almost all new educational technology, including most cyberlearning tools, is inaccessible to many students with disabilities from the beginning. Cyberlearning should be for all students regardless of disability.

There are multiple design concepts in human-computer interaction to think about when designing a cyberlearning tool. You can design for accessibility using universal design and ability-based design. We also use user-engaged design, which includes three perspectives: user-centered design, participatory design, and design for user empowerment.

  • Universal design aims to make products accessible to the largest group possible.
  • Ability-based design leverages the full range of human potential by creating systems that can adapt to the abilities of the user.
  • User-engaged design recognizes that the intended users of a technology may be different from the designers.

The design cycle has four phases: analysis of the problem to be solved, design of a solution, prototyping, and testing. This cycle is repeated until the problem is solved satisfactorily as judged by the testing. Designs created with the engagement of the intended users are more likely to be adopted. User-centered design involves users only in the testing phase, participatory design involves users in both the design and testing phases, and design for user empowerment involves users in every phase of the design cycle.

User empowerment requires that users have self-determination and the technical education needed to participate fully in the design cycle. Self-determination means that the person with a disability has the power to make change, in this case to solve their own accessibility problem. Education means they have the wherewithal to design, build, and test their solution. Such individuals are not waiting for someone else to solve their accessibility problem; they can do it themselves with the help of allies.

Demographics, equity, and quality all need to be considered when thinking about accessibility. Demographics refers to the large segment of the population that has disabilities. Equity refers to the concept that this large segment should be included and have power. Quality refers to the idea that better solutions often come from diverse approaches to a problem. Disability is one facet of diversity. My closing thought can be stated succinctly: research fields need more people with disabilities because their expertise and perspectives spark innovation.


Autism Glass Project: Expression Recognition Glasses for Autism Therapy

Presented by Aaron Kline, Stanford University

Many students with autism have difficulty reading people’s facial expressions and gauging emotions. The technology in the Autism Glass Project, which works similarly to Google Glass and is connected to a smartphone, reads people’s expressions and feeds that information back to the wearer as a word or emoji. We are also testing different audio feedback options. The technology has difficulty reading facial cues when faces are viewed from different directions or in larger groups.

The technology also records interactions so the wearer can go back, review them, and read the facial expressions again, with their parents or others. The technology is aimed at increasing facial engagement among people with autism. It gives people with autism the tools and empowerment to learn and grow in social situations. There are also options for children to play games built around facial cues and expressions so they can learn in a game setting.

We ran a study in which students wore our technology in social settings. Many participants became more likely to look at people’s faces and engage with facial expressions. Students became more comfortable with the headset after wearing it for a while and weren’t overwhelmed by visual or audio feedback. They expressed a desire for more gamification, feedback and rewards, and personalization. More advanced students also wanted levels and more ways to challenge themselves with the technology. We are now moving to randomized controlled designs in future studies.

Our project team currently does not include any people with autism. In the future, we need to include people with autism in the design, development, and evaluation. As seen in other projects, having students involved in the design of their technology makes them more excited to wear it. Furthermore, we are exploring other uses for this technology, including reading people’s levels of interest in meetings or describing the content of pictures or real-world scenes to someone who is blind.


Cyber Support for Difficulty Resolution to Make Learning More Accessible?

Presented by Prasun Dewan, University of North Carolina Chapel Hill

Accessible cyberlearning should address not only delivery of knowledge but also creation of learning-inducing artifacts. Our research involves systems that (a) allow both textual and visual user interfaces to create artifacts, automatically translating between the two; (b) use machine intelligence to detect task difficulties and communicate this inference to those who can help with the task; and (c) use machine intelligence to automatically recommend solutions to difficulties. Such systems have the potential to increase accessibility for workers and/or helpers with visual impairments, limited motor skills, and autism. Investigating this potential requires getting enough data both for training the machine-intelligence algorithms and for evaluating their impact on task creation and learning.

Our work addresses difficulty resolution and spans two projects:

Difficulty Detection in Programming: We are building a system that uses machine learning to automatically determine if programmers are facing difficulty, conveys this information to interested potential helpers, and provides an environment to offer help with the problem.
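To make the idea concrete, the sketch below shows one way such an inference might be structured. The features, weights, and threshold are hypothetical illustrations, not taken from the project; a real system would learn them from labeled interaction data.

```typescript
// Hypothetical sketch of difficulty detection from an editing session.
// Feature names, weights, and the threshold are illustrative only.
interface SessionFeatures {
  minutesSinceLastSuccessfulBuild: number;
  recentCompileErrors: number;
  editsUndoneRatio: number; // fraction of recent edits that were undone
}

function difficultyScore(f: SessionFeatures): number {
  // Simple logistic score standing in for a trained model.
  const z =
    0.08 * f.minutesSinceLastSuccessfulBuild +
    0.5 * f.recentCompileErrors +
    2.0 * f.editsUndoneRatio -
    3.0;
  return 1 / (1 + Math.exp(-z));
}

// If the score crosses a threshold, notify potential helpers rather than
// interrupting the worker directly.
const features: SessionFeatures = {
  minutesSinceLastSuccessfulBuild: 25,
  recentCompileErrors: 6,
  editsUndoneRatio: 0.4,
};
if (difficultyScore(features) > 0.7) {
  console.log('Worker may be facing difficulty; alerting available helpers.');
}
```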

Difficulty Amelioration in Data Science: Data science involves connecting programs into workflows. Traditionally, this connection has been done using command languages, but because these are considered difficult to learn and use, some modern systems offer visual alternatives. This project is using machine learning to automatically recommend workflow steps to users in difficulty.

Can these cyberinfrastructure projects on ameliorating difficulties make learning and teaching more accessible? We say yes, based on several hypotheses below.

More impact on challenged populations: Our programming studies with the general population found that difficulties were rare (which is to be expected if problems are matched to the workers) but took a long time to resolve. Arguably, those who face atypical challenges will (a) encounter certain kinds of difficulties more often, especially if instruction does not accommodate these challenges, and (b) take longer to resolve difficulties. Hence, digital support for difficulty resolution should have a larger impact on atypical populations.

Second pair of eyes more effective for visually impaired: Our programming studies also show that the vast majority of fixes involved a helper recommending a change to a single line of code, which the workers took much longer to identify on their own. This means that the time required to make the fix was a small fraction of the time required to read the code to find the problem. A second pair of eyes, whether human or machine, should be even more effective for visually impaired programmers using a screen reader to find the “fix needle” in a large “code haystack.”

Difficulty inferences useful for autistic/visually impaired helpers: In a face-to-face programming lab, an autistic or visually impaired helper who has difficulty reading faces to spot confusion can use automatic difficulty detection to find struggling workers who are too shy or flustered to ask for help.

Command languages more useful for visually impaired: A simple workflow composition task of connecting the output of one program to the input of another involves (a) typing a few characters on a single command line, versus (b) interacting with six screens (forms/menus) in a visual system. Consistent with the accessibility principle of ensuring that content is accessible using the keyboard alone, command languages are more appropriate for visually impaired workers who can master them, as they require smaller read/write ratios to perform the same task.

Polymorphic workflow composition more accessible: Based on the accessibility principle advocating multiple ways of obtaining the same knowledge, supporting and translating between text-based and visual user interfaces for workflow composition should increase accessibility by accommodating multiple forms of challenges, and allowing problems to be solved collaboratively by people with different abilities.

Automatic recommendations for visual impairment and motor-skill limitation: Automatic recommendations are more useful for those (a) with limited motor skills, as they do not have to use the keyboard or mouse to enter the recommended information, and (b) with visual impairments, as they do not have to read documentation to determine the recommended information.

Research to investigate these hypotheses faces the problem that it is difficult to get enough subjects from atypical populations to gather (a) training data for developing the machine-learning innovations and (b) usability data for evaluating those innovations. Our expectation is that training data from typical populations will also be useful for predicting and ameliorating difficulties of atypical populations. Longitudinal field studies of a few subjects are one answer to (b).


Sensables: 3D Printed Models for Students with Visual Impairments

Presented by Shiri Azenkot, Cornell Tech

3D models are very important learning tools, and with 3D printing, even more 3D models are available. There is huge potential in using 3D printers to teach, especially to convey visual materials to students with visual impairments. For example, a visually impaired student may be better able to explore a building, a terrain, or a globe through an accessible 3D tactile model. However, a 3D-printed globe can lose information such as country names and the differentiation of countries. So we developed a toolkit that tags models (Markit) and senses them (Talkit). In Markit, you can download a model and attach labels to its different parts. Then, after the model is printed, Talkit uses the device’s camera to recognize the model and read the labels added in Markit. Talkit lets users select a model with the keyboard, recognizes hand gestures, and responds with speech output.

We ran a study to see how teachers could use this technology. Over six weeks, three teachers of visually impaired students developed models with their students: a volcano, a plane, and a small map. They could incorporate sound effects as well as spoken descriptions of what each part of the model is. The images on the screen could also show high-contrast visuals with accompanying descriptions.


Sensory Regulation and Embodied Math Design

Presented by Sofia Tancredi, University of California, Berkeley

Math instruction is moving in exciting new directions. Designers and researchers are recognizing and expanding the use of whole-body movement, gesture, and manipulatives for learning math concepts. This movement is inspired by a paradigm shift in the philosophy of cognition from computational models of cognition (input, processing, and output) to embodied cognition models, which see our bodies and interactions with the environment as centrally constitutive of how we think and learn.

As movement-based learning activities expand, it is important to address the accessibility of such activities to all students. One critical and generally overlooked parameter is that of sensory regulation.

Individuals have different sensory needs in order to attend and learn. Sensory processing exists on a spectrum based on neurological threshold. Individuals with a high threshold are less sensitive to sensory input and need more sensory input to stay regulated. For example, one student with math difficulties whom I worked with in 2010 would become exhausted whenever he tried to work at a desk. However, when this student had access to sensory regulation tools such as a balance board that provided amplified sensory input, he was able to focus and engage with math learning for long stretches. Individuals with a low neurological threshold are more sensitive to sensory input. Sensory differences are associated with ADHD, ASD, mental and emotional disorders (OCD, schizophrenia), and genetic syndromes (Fragile X), and have also been linked to academic performance.

So how might students with diverse sensory regulation needs access embodied math design? Two key questions toward this goal are: (1) How can we both serve students’ sensory regulation needs and include them in learning through movement (that is, give a student a balance board but also have them engage in a walk-the-number-line activity)? (2) How can we accommodate different and often opposing sensory profiles?

I propose that the answer to question 1 lies in the integration of conceptual learning and sensory regulatory affordances of movement, or what I call sensory regulatory embodied mathematics design. An example from a current project is a walk-the-number-line activity adapted to high neurological threshold students through the wearing of ankle weights. In this example, the weights play the dual function of (1) providing regulatory sensory input to the proprioceptive system, and (2) providing sensory input that is relevant to the learning movement. Rather than engaging in competing regulatory and conceptual learning activities, sensory needs can be met harmoniously through task-relevant sensory input. In cyberlearning design, sensory inputs (particularly to the vestibular and proprioceptive sensory systems) might take the form of vibration, whole-body movement, weights, rotation, or orientation changes. These dimensions of movement learning activities need to be adjusted differently for students who need more or less sensory stimulation. Adaptive cyberlearning tools are a promising pathway towards achieving this.

As movement-based cyberlearning activities proliferate, they are poised either to improve or to impede access to learning for students toward both ends of the sensory spectrum. Which occurs depends on our ability to intentionally design the sensory dimensions of learning activities for sensory diversity.


Learning in Sign Language Using a Head-Mounted Display

Presented by Mike Jones, Brigham Young University

Deaf students who primarily learn and communicate in sign language can find it challenging to look at visuals while also watching an interpreter relay the otherwise spoken instruction and information. How can students watch sign language while also looking at models or away from the speaker?

There are foundations in multimedia learning (Mayer, 1998; Mayer, 2005): students learn better when hearing instruction while viewing visuals. Do deaf students learn better when viewing an animation accompanied by sign narration rather than captions? Do deaf students learn better when the signer is closer to the visual aid versus further away?

Students use a head-mounted display in the form of eyewear to see the signer while looking at other visuals. The signer can be anywhere (the same room, another room, pre-recorded, etc.), and the student can watch a presentation or visual aid at the same time. This may be especially helpful in museums, planetariums, or other places of learning outside the classroom that have historically been difficult for deaf students.

We tested various types of equipment and where the signer would be viewed within the display. We studied how split attention affected learning, how the position of the signer mattered, and how the fit of the device affected learning. In a planetarium, we focused on how the signer helped the student understand the material, either through a head-mounted display or projected on the planetarium dome itself.


Signing Avatars and Embodied Learning in Virtual Reality

Presented by Lorna Quandt, Gallaudet University

Signing avatars have the potential to be a powerful communication and accessibility tool. They are programmable, responsive, and iterative, and they can be used to create digital storybooks and online courses and to share content in American Sign Language (ASL) online. Online courses could be enhanced by an avatar that delivers presentations and other help in ASL. Thanks to a recently funded NSF EAGER grant (Signing Avatars & Immersive Learning, SAIL), we are now working on a project to further develop these signing avatars and place them in a virtual reality environment to teach users ASL. This virtual reality environment will create an immersive, embodied learning experience.

Our avatars are designed from actual motion-capture recordings of fluent signers. We use these data to build avatars whose signing looks fluid, rather than the unnatural signing that comes from purely computer-based models. These avatars can be used to teach people ASL in a virtual reality learning environment. The system is based on principles of embodied learning: students learn better when they can use their bodies to learn, and our new ASL learning system will harness this fact to create a better way to learn ASL. Moreover, virtual reality and gesture tracking will allow learners to see their own virtual hands demonstrate ASL from a first-person perspective. In SAIL, a student will be able to interact with virtual teachers and see their own virtual hands sign in response. Currently, SAIL is aimed at teaching ASL to non-signers, but eventually it could expand to a larger population and other applications.


The More Accessible Webinar

Presented by Ray Rose, Online Learning and Accessibility Evangelist

We were asked to do a webinar for the United States Distance Learning Association. We asked them to include real-time captioning, but they said it was too expensive. So we chose to use Google Slides with its automatic transcription.

If you convert a PowerPoint presentation to Google Slides, captions appear as the tool listens to you speak. This means there is no excuse not to have an accessible meeting. If you pair your slides with Google captions, the presentation becomes more accessible; the captions may not be perfect, but they give viewers and listeners more context than they would have otherwise.

There is no extra cost for using Google captions. All you need is a microphone on your computer to pick up your speech and the captions feature turned on. The captions are relatively accurate compared to other auto-captioning services. If you use Zoom or another lecture-recording service, the captions can be saved and recorded as part of the slides, though no separate transcript of the captions is created.