World Café

Participants write their input to a statement on a large post-it note.

Participants were asked to read and write their responses and reactions to seven statements (derived from the previous day and a half of discussions). The statements were rotated around the tables so that participants could add their own responses to the original statement and respond to the responses of others if they wished. Questions focused on what research is needed to resolve the issues highlighted in relation to models and frameworks, what is needed to move practice forward, and how research and practice might inform one another.

Each statement is presented below and followed by some of the responses.

An important part of the solution to accessibility and inclusion is to adopt both a top-down and bottom-up approach.

  • Revolving door of champions/administrators 
  • YES
  • Break that glass ceiling, but I think that’s only possible from the top
  • Who’s the change agent? Spend time convincing them?
  • Clear statement from the “top” might really help those at the “bottom”
  • Reset the pyramid!
  • Absolutely- this is how system change happens. One without the other is only a Band-Aid.
  • Horizontal approach, solidarity not charity
  • Yes. By using two opposite approaches you encourage people at all levels to be involved. This also contributes to more diverse perspectives being expressed and taken into account throughout the process. Follow-up question to think about... When trying to implement a bottom-up approach, how do you get the students who are traditionally in a quieter, more subordinate role to create a push for change? 
  • I believe in bottom-up
  • Give the “bottom” representation
  • Does “bottom” include students and consumers? 
  • Definitely agreed! However, “top” needs to be fully active and engaged with the “bottom” = communication + collaboration between the two
  • Tried the bottom-up model for many years. Ready to move to top-down. Conclusion - you need both as they bring different aspects to the solution to accessibility/inclusion
  • Some schools can do top-down but any school can do bottom-up
  • Value 
  • There can be two tops: 1. Government, 2. Organizational
  • Flexibility, balance  
  • External forces...Government, compliance 
  • Both are important - it can depend on the institution which works best 
  • Of course! There must be highest-level (funder/government) support all the way down to the individuals. Request input on accessibility - the push needs to come from both directions. 
  • We need buy-in
  • (Hierarchy and equality) not only as a chain process, but interlinked and inter-active. Government to leadership to faculty to staff, student to faculty to leadership to government, society use of ICT exchange 
  • I agree. Would also like to see people at top explain their position to those lower down and vice-versa, from bottom up   
  • Resources and external drivers (legal, etc.)

A successful model or framework is one that stimulates a post-secondary education (PSE) institution to transform its systems and processes rather than enables it to carry on doing what it already does.

  • Stimulates positive change
  • Model can be aspirational 
  • A model can be an inspiration 
  • A model should educate and not assume prior knowledge
  • I like what Robert Pirsig discusses in Lila, where he talks about both static and dynamic quality; what needs to stay in function while changing cell structure regularly 
  • Maybe both! 1) Can describe current processes 2) Could be designed to pursue systemic change. So an institution that wants to change should have two models/frameworks to describe what is the status quo and what is desired 
  • With evaluation of results of change
  • Yes! Nothing should be “enabled” - institutions should always aim for fully equal and successful participation of all students. A model should set an example that an institution can follow 
  • When there is buy-in YES!
  • Developing the framework/model can still serve as a source of stimulation for the PSE. The process is important.
  • Do models stimulate change? Or do models change because they respond to what is changing “out there” (e.g., new technology)
  • A model or framework helps to structure process, communication, responsibilities, criteria, indications, measurements, etc. 
  • It is possible that a successful model is in place. Is change needed then?
  • The framework/ model can’t be imposed but must be created within the PSE community itself 
  • Why does it have to be either/or? Why can’t we stimulate the new and support what works? 
  • A model should be used as a guideline for step-by-step best practices 
  • Are there any truly successful models? Aren’t the models all more like theories about how something could work? Who decides what is successful/effective? What if what it already does is more effective than implementing a specific model? 
  • I don’t think we have truly successful, appropriate, and effective models. If there were, our work would already be done.

We need to develop our existing models and frameworks rather than come up with new ones.

  • If our current model worked we wouldn’t be here 
  • Can we use existing models to inspire new models?
  • Needs to adjust to change
  • We need both—Why constrain ourselves either way?
  • What are our existing models?
  • I agree that we should further develop existing models and frameworks BUT we need to respond to missing context and develop new ones 
  • Models need time to grow and mature and take root. We need to reflect and assess whether the growth is still guiding us. We also need to consider the changing audience for the models … the early adopters/innovators vs. the more cautious adopters who are slow to change 
  • HYBRID of old and new
  • Learn from existing and decide then - if needed restart completely new
  • Theory vs. practice > that is the question. We need to respect our different orientations and use them to have a balanced approach. Thus theory and practice = model
  • Can sharing of our good practice create models?
  • Maybe stop worrying about models and abstractions and focus more on daily practice. 
  • We need to develop our existing models to ensure that they are effective. However, using the same model all of the time risks it becoming outdated and irrelevant in the presence of new technology 
  • What comes out of our daily practice? 
  • At what point do we decide an approach is broken? When litigation appears?

Adopting both a reactive and proactive approach to accessibility and inclusion is the best approach.

  • Proactively reactive - anticipate problems even if you are proactive
  • Plan for the proactive. Be ready for the reactive - should be part of the proactive plan
  • Hybrid
  • Some needs in some contexts cannot be anticipated. 
  • Need to plan for reactive processes - need full resource and support
  • Need to monitor and measure reactive processes to see if they should be part of reactive or become part of proactive
  • Who determines what reactive and proactive is? It should be more than black and white! We need grey!
  • Proactive is always better but reactive approaches must be quick - like the U.S. Digital Service or the UK Government Digital Service - a quick turn-around - no waiting lists
  • Proactive is the goal. Reactive is (and probably will always be) a necessity. 
  • There is still a place for both but we should aspire to just having accessibility without remediation or accommodation. 
  • Build in evaluations! Check points. 
  • Reactive comes from not being proactive. 
  • Proactive and reactive approaches are important - Proactive: think of possible issues that could arise. Reactive: thinking on your feet; reacting at the time. You cannot anticipate every possible issue, but some issues can be avoided or have a more effective solution with more forethought. Approaches can be more/less effective for different situations/environments/population. Proactive requires a greater base of knowledge in the beginning on a continuum.
  • Explore the limits of both approaches. What is better done proactively? What is better done in reaction to?
  • Proactive always is the best approach. However, sometimes you will have reactive approaches if the situation is unusual or unexpected. 
  • Navigate bureaucracy
  • Proactivity is #1 and Reactivity is #2 because sometimes it might be too late to react.
  • Learn from the reactive cases for continual improvement. 
  • If not proactive, will always have to be reactive.
  • Who is proactive or reactive? Staff, faculty, student, leadership?
  • Proactive would be best but we will always have to have a reactive approach. 
  • Sometimes a reactive approach can be better targeted at individual requirements
  • Proactive in the wrong direction can bind resources (waste), which would be needed elsewhere (reactive).
  • Proactive approach is not well developed.
  • Rapid response cannot always predict new necessity. 
  • Fire prevention education - fire gets out, run, put it out
  • Yes, UD and AT and other accommodations
  • Technology changes rapidly
  • We have no choice but to have both to address all problems. 

The only models worth having or developing are those that are testable.

  • Every model should be measurable
  • By testable - if you mean validated, that’s not important for me. If you mean usable - that’s critical. For me a model needs to go beyond theory. In fact, some models may come out of practice - thus already “tested.” They are used to “describe” what’s already been done so others can replicate it. 
  • I agree
  • Agreed!
  • Diversify data, not numbers, but other things
  • Need to evaluate outcome of use of model
  • Can we isolate? Can we test if it is hard to isolate? Testing the model and testing the outcomes are two different things. 
  • No: model can serve as an aid for thinking, developing, describing.
  • It is always good to evaluate what you are doing so that you can adapt/improve with feedback
  • Some actions have positive tangible outcomes with no need to accurately measure those. Do we need a 98.2% level of compassion or could we just be content with an unspecified increase in compassion?
  • I’m unclear on the statement. What does an untestable model look like? Wouldn’t any be testable?
  • We need to optimize the model - we need to define the components. Maybe if we can test components against an expected model, it looks like we can evaluate the components?
  • Model components - define - test. 
  • People are not always data
  • Always! Never!
  • Who defines “testable”? How exactly do you determine how well it does on this test? If models are mainly theories, I think it is important to develop any effective models in order to take other ideas/perspectives into account. Maybe a model isn’t testable now, but it may be when built upon or in the future.
  • It depends on how we define models! I think there is an important role for models to expand our vision of what we can be and do… so I disagree. 
  • Sometimes the really important things cannot be tested! Maybe indicators, if at all!
  • Work on one thing at a time vs. all at once. 
  • Can be evaluated?
  • Should be testable otherwise how do we know?
  • Costs effective model - measure success
  • People, technology, user skills
  • Student feedback - qualitative, quantitative
  • What are we testing?
  • Isn’t anything testable with some thought and creativity?
  • Can it be replicated in different settings?
  • Untrue - they’re also good for documenting processes and practices. The practice outputs and outcomes should be measurable.
  • Bizarre
  • Models used as a strategy have to prove their usefulness - not useful? Come in with a different/better model
  • How do we measure “affective” benefits of UDL?
  • “Got no data? It didn’t happen.”
  • Data driven processes needed to measure results; to come up with directions for future development and research.

We don’t need one single model or framework, we need different ones for different problems, contexts or audiences.

  • Comfort level changes based on context
  • Maybe one common framework or template that provides the structure for multiple models to fit into. That way we’ll have a common language for discussion but relevance for different contexts and audiences.
  • Hybrid.
  • An issue with multiple models is choosing between them! Purpose/context/audience needs to be clearly described
  • Standards - customized - localization - possibilities ----> choose what works
  • We need constant reflection and updating of dynamic models/frameworks
  • … but elements of a good process model could work in a variety of contexts
  • A set of models which are compatible. Developmental models <---> Models of operation. Models need to be adjustable to context. XXX model
  • I think this is an important point. We need multiple models to fit different problems… but there has to be a unifying value or theory. They have to be compatible. 
  • Are these changes just different components of a model? But one model unlikely. 
  • Need: micro, meso, macro models with different lenses/foci/contexts; no “one ring to rule them all.” “Best practices”/compliance is a model
  • I like the concept of levels of models. Having one overarching model with the capacity to be adapted in various scenarios can be useful for giving groups a starting point and an idea of how to adjust to meet their specific needs
  • I think “best practices” are building blocks for models but not an actual model
  • I am really not sure. If one model is flexible enough, perhaps one is enough. But a very inflexible model will break like the Tacoma Narrows Bridge! How do we balance being prescriptive enough to be useful, but flexible enough?
  • Some models will be inherently “better” -- who and how that’s decided is also an important consideration
  • Something has to be set to have a model
  • Everyone needs to say the same thing for a model to work
  • I don’t see how you can have just one model - things need to be flexible/adaptable to apply at a variety of places
  • An overarching, broad framework could inform multiple tailored models
  • Would a “meta-model” be an option? When do we drop/delete “unusable” models?
  • How are models developed? What kind of data is used?
  • What’s the focus and the function of models? Who is supposed to use the model?

We need a model that will guide senior managers regarding best practice in relation to policy, strategy and governance. 

  • Well-thought out guidelines that include the opinions/thoughts of people w/ disabilities (instead of/as a model)
  • The administration needs to have human experience to see the impact. 
  • 1st need them to embrace a11y + UD.
  • Senior managers need knowledge and need to be able to expand their knowledge
  • Open minded and willing to change
  • A model or a set of principles
  • Never met a senior manager who developed policies or strategies based on a model of accessibility to IT. Maybe budget, student success, unions, etc. 
  • Do younger/lower-level managers not need any guidance? Why do we need to call out specific managers, rather than implementing training for all managers/workers?
  • Models cannot capture everything, need to convey to management importance of accessibility. Otherwise, they will work around problems with model as a guide. 
  • Worries - becomes “all talk” not action. 
  • Checklist approach
  • Centralized accommodations support implementation
  • Important so that institution-specific drivers can be accounted for. 
  • Staff/HR policy
  • What models are normal/acceptable for this audience? (business/management style of models?)
  • What is a model? Wouldn’t a policy or guideline work?
  • This model has to be embraced by a broader, institutional community.
  • Model is too static a concept in terms of what would help senior managers - process sounds more dynamic and flexible. 
  • Policy, leadership, purchasing, training, remediation, retention, requirements
  • Senior managers seem to be driven more by metrics (success, budget) than models. Accessibility is difficult to measure for the most part. This presents a challenge selling it to execs.  
  • Broad models “so they are applicable”
  • Inform model
  • Department policies
  • Define operationalizing - best practice - policy - governance - strategy
  • Managers training strategies (people)
  • Must be continuous training ---> real outcomes
  • How to measure outcomes
  • I appreciate the focus on practice as it informs policy
  • Needs to be unique to each school.

Overview of World Café Outputs and Discussion on Where to Go Next Regarding Research Plans for the Group

Facilitated by Jane 

Each question is presented below, followed by some of the responses.

What have you gotten out of the last few days, and what are your thoughts on models and their use? Where are they too theoretical and where are they practical? Where does it lie for top-down and bottom-up, or proactive versus reactive? What are the conclusions we’ve come up with?

  • We need more data—the efficacy of models and accessibility should be based on data.
  • What counts as data? Data is not necessarily numbers, it can be observable practice or interviews, or it can be different statistics based on studies. 
  • In the paper I wrote, the 9 models I identified, some were pulling from case studies. Does that count as data? 
  • Making accessibility improvements is difficult, because accessibility is not a switch that can be turned on and off. We don’t know if a product will ever be perfectly accessible. Accessibility is a process. It’s hard to collect data on a process and an improvement.
  • We had a discussion about how data can be measured for users and for technology. There are ways to measure a tool or a website, but measuring people is harder. Separating these two may be necessary.
  • Do we want data to develop models or data to evaluate models? It’s kind of the chicken and the egg scenario.
  • We aren’t necessarily measuring accessibility, but the effectiveness of a model. 
  • A model has some facets to measure. For example, if you’re applying universal design to a course, you could measure how students do in that course versus students who aren’t given universal design in a course.
  • Senior management is not driven by models, but by metrics. Measuring accessibility is difficult. Senior management wants to know the number of accessibility issues within a piece of software, not an abstract discussion about models.
  • If we have a model and we try to implement it, we can measure how well we implement something, but not necessarily the model itself. We can draw a conclusion based on how well the model is implemented, or on the methods of implementation, but not on the model itself.
  • Does this assume the model is working?
  • But if the concept is implemented, then I can measure only that implementation not the actual concept itself. If someone implements it a different way, then it may work better or worse.
  • There are three kinds of data: data that helps us come up with models, data that tests a hypothesis, and data that evaluates how we use and implement a model.

What will you do when you go back to your day job?

  • As a student leading a disabilities group, this has opened my eyes for how we can use technology in our meetings. We’ve never taken the time to see how we can be more inclusive using technology, and now I’ve realized how important it is.
  • We’ve brought together different people in different roles and with different interests.
  • Time for us to really start curating the models that exist. We have a lot of organic models being used without labels or names or instruction. It seems important that we spend some time really analyzing this work and really sketching out what we’re doing and how we’re doing it and where we can improve or change.
  • I noticed that a lot of us are articulating models even though we never saw them as models before. There is a need for curating these “models.”
  • Funding can bring weight to it and help connect institutions. Could an organization like AHEAD help curate models within institutions? Could institutions spend the time sketching out what they are doing and compare them, through an organization like AHEAD? People can often crave a way to have their work be acknowledged, and this could be a way it is done.
  • I would love to take our different practices that are in place and map them together and see how they compare.
  • As a teacher, I have written down notes about my own teaching practices and how to improve those. I have a clearer understanding of where to go in my research, especially from the panel discussions. 
  • I want to talk to my students who are studying ICT and technology and see if I can appeal to them on how they can include accessibility in their own teaching and educational pursuits.
  • From a student perspective, there are many influential universal design models. Please don’t regard a student with a disability as an accessibility specialist—all people with disabilities are different and students with disabilities often don’t know how to accommodate others, let alone even the best way to accommodate themselves.
  • Look at the marriage between ICT and AT: if the ICT doesn’t have accessibility features, it will need AT to use it. For example, I use Zoom or a screen reader when using Google Docs. Sometimes AT can cause ICT to malfunction or crash.
  • We have specific models for UDL/UDI, and for how technology is actually used within a specific class. I want to have guidelines to analyze widely used technologies in conjunction with AT, to better understand how they work and to create data alongside professional judgment/subjectivity.