Part 4: Assessment

Two key components help ensure the efficient and effective assessment of job applicants:

  1. a clear and consistent assessment rubric (i.e., the criteria by which committees and other decision makers evaluate applicants’ qualifications and potential), and
  2. a clear and consistent assessment plan (i.e., the process by which committees evaluate applicants, make selections at each stage of evaluation, and ultimately make recommendations to the voting faculty and/or to leadership with hiring authority).

In addition to ensuring efficiency and effectiveness, assessment rubrics and assessment plans help mitigate the impact of personal and collective biases.


4.1 Bias and the Assessment Process

Research confirms that bias can enter the assessment process at multiple points and in multiple forms. Follow best practices to minimize the effects of specific manifestations of bias in the hiring process:

Early Bird Bias and/or Recency Bias. Beware of over-valuing applications that arrive early or late in the process, or simply giving them more attention.

  • Best practice: Avoid reviewing any applications until the priority deadline, and organize applications by some method other than order of arrival.

Moving Target Syndrome. Beware of changing the requirements for the position as the search proceeds in order to include or exclude particular applicants. And beware of being distracted by interesting or impressive applicants whose qualifications fall outside the advertised position.

  • Best practice: Establish evaluation criteria while writing the job ad and commit to using the assessment rubric at every stage of evaluation. It may be helpful to designate a point during the process to evaluate the usefulness of the assessment criteria and the consistency of their application. How well are the criteria and the process working?

Known Quantity Bias. Internal applicants—current students, recent graduates, post-docs, visiting scholars, instructors, etc.—can be both disadvantaged and advantaged during the hiring process.

  • Best practice: It is important to openly discuss the challenge of maintaining fairness, collegiality, and confidentiality when internal applicants or other well-known applicants are part of the pool. It is also important to openly discuss how the committee and the hiring unit will define and manage potential conflicts of interest. When should members disclose professional and personal relationships with applicants? And under what conditions should members recuse themselves from making evaluations?
  • Best practice: Assign a point person—someone with authority who is not on the hiring committee—as a communication contact for internal applicants. If internal applicants proceed to the interview stage, schedule them early in the process to avoid the appearance that they have an inside advantage.

Resources for defining and managing potential conflicts of interest and potential bias or perception of bias based in professional or personal relationships are available in the Toolkit.

Implicit Bias. All of us are affected by unconscious bias, the stereotypes and preconceptions about various social groups stored in our brains that can influence our behavior toward members of those groups, both positively and negatively, without our conscious knowledge.

One well-documented example is our tendency to feel more comfortable with those we perceive as similar to ourselves (so-called in-group favoritism). And numerous studies show that in situations of evaluation, members of dominant groups are typically rated more highly than others, even when credentials are identical. This occurs regardless of the evaluator’s gender or racial background. “Positive bias” often manifests as favoritism and giving some applicants both more attention and the benefit of the doubt. “Negative bias” often manifests not as overt hostility but rather as a kind of neglect—as an absence of attention or lack of careful consideration. Without thinking, we often ignore the less familiar.

It is therefore crucial to consider the potential impact that implicit bias may have on the evaluation process.

In addition to familiar “demographic” factors that can trigger implicit bias—such as gender, race, ethnicity, age, national origin, and so forth—academia has its own set of triggers.

“Academic” or “professional” factors that can trigger implicit bias against particular applicants, whether or not they meet advertised selection criteria, include:

  • Non-traditional career paths.
  • Non-traditional research interests or methodologies.
  • Degrees from institutions considered less historically prestigious.
  • Prior work experience at institutions considered less historically prestigious.
  • A perceived lack of “fit” with the unit’s historical or current profile (e.g., in terms of gender, age, background, interests, commitments, and so forth).

“Academic” or “professional” factors that can trigger implicit bias in favor of particular applicants, whether or not they meet advertised selection criteria, include:

  • Traditional career paths.
  • Traditional research interests and methodologies.
  • Degrees from institutions considered historically prestigious.
  • Prior work experience at institutions considered historically prestigious.
  • A perceived “fit” with the unit’s historical or current profile (e.g., in terms of gender, age, background, interests, commitments, and so forth). This is sometimes referred to as “cloning”—replicating the historical or current unit profile in new hires.

Implicit bias is more likely to affect our decision making when we are tired, in a hurry, feeling overworked or distracted, or uncertain of exactly what we should do—in other words, under the conditions we often face while serving on search committees. And research shows that bias can be contagious; we are more likely to feel, express, or enact bias after witnessing it in others.

Attention to implicit bias can help committees to acknowledge the value of applicants who are less obviously similar to historical or current colleagues, and thus to consider their positive contributions to the unit and its future. Attention to implicit bias can also encourage committees to openly discuss how members define concepts like “merit,” “quality,” “excellence,” and “potential.” Do committee members assume these and related concepts have singular definitions? And do members think definitions for these concepts should be fixed and unchanging?

Resources and case studies about implicit bias are available in the Toolkit.

4.2 Creating and Implementing an Assessment Rubric

One of our best tools for mitigating potential bias in the hiring process is to establish evaluation criteria before the committee begins reviewing applications—ideally, before or while the committee drafts the job advertisement. An assessment rubric ensures that all applicants are subject to the same evaluation criteria, and that members of search committees apply selection criteria consistently.

If possible, the entire unit should participate in the creation of an assessment rubric to ensure that the unit’s values are reflected in the evaluation criteria. Minimally, the search committee should be assisted by unit leadership.

An assessment rubric also helps the committee and the unit clearly weigh their selection criteria against unit priorities—including the unit’s commitments to diversity, equity, and inclusion. For a particular search, which areas of assessment are considered “must haves” or “deal breakers”?

Some questions to consider:

  • What are the goals for this hire in terms of research, teaching, service, and outreach?
  • How is the unit’s DEI mission a potential factor in each goal?
  • How does the unit weight the various goals for the hire in terms of first and second priorities?
  • What types of evidence will demonstrate achievement or future potential in each area?
  • Does the job ad request materials appropriate to the assessment criteria?

And a caution about DEI assessment criteria:

  • Be careful not to use DEI assessment criteria as a proxy for rating applicants’ identities, based either on their self-disclosures in application materials or on the committee’s assumptions. Assessment criteria should focus on applicants’ knowledge, experience, and expertise, as well as on their potential for future contributions, not on demographic markers.
  • A specific example is a criterion that calls for evaluating an applicant’s ability or potential to serve as a “role model” for students from underrepresented backgrounds. What kinds of evidence in applicants’ submitted materials or in candidates’ responses to interview questions will the committee actually assess?

Committees should consider how many distinct criteria will be used in their assessment. Between 5 and 8 is a manageable range. They should also consider whether a single rubric will be adequate, or whether it will be useful to devise multiple rubrics for the multiple stages of a complex search. Some committees, for instance, find it useful to “scaffold” their rubrics so that they use 2 to 3 criteria in the first round of assessment and then add criteria in subsequent rounds.

And committees should consider what kind of scale to employ. Interfolio uses a “star” rating system; evaluators can assign between 1 and 5 “stars” for each criterion. Committees can also devise their own rating systems outside Interfolio.

Some typical scales include:

  • A simple choice of “Evidence,” “No Evidence,” and “Unknown” or “High,” “Medium,” and “Low” rankings (e.g., using only the first three “stars” in Interfolio). A simple choice may ensure greater consistency in how diverse committee members employ the scale.
  • A more elaborate choice of “Excellent,” “Good,” “Fair,” “Deficient,” and “No Evidence” or similar rankings (e.g., using all five “stars” in Interfolio). Finer distinctions may require “norming” exercises to ensure diverse committee members employ the scale in similar ways.
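
Purely as an illustration of how a scaffolded rubric and a simple scale fit together, here is a minimal sketch recorded in Python. It is not an Interfolio feature or a required tool, and the stage names, criteria, and scale labels are hypothetical placeholders for whatever criteria the unit actually agrees upon.

```python
# Minimal, hypothetical sketch of a "scaffolded" assessment rubric:
# a few criteria in the first round, more in later rounds, each rated
# on a simple three-level scale. Names are placeholders, not required wording.

SCALE = ("No Evidence", "Unknown", "Evidence")

RUBRIC_BY_STAGE = {
    "Round 1 (full applicant pool)": [
        "Meets required qualifications",
        "Research fit with the advertised area",
    ],
    "Round 2 (preliminary interview list)": [
        "Evidence of teaching effectiveness or potential",
        "Potential contributions to the unit's DEI mission",
        "Evidence of service, outreach, or mentoring",
    ],
}

def blank_scoresheet() -> dict:
    """One reviewer's scoresheet for one applicant: every criterion starts unrated."""
    return {
        stage: {criterion: None for criterion in criteria}
        for stage, criteria in RUBRIC_BY_STAGE.items()
    }
```

However the rubric is actually recorded (Interfolio’s star ratings, a spreadsheet, or a shared document), the essential point is that the criteria and the scale are fixed before review begins and applied identically to every applicant.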

A range of sample assessment rubrics are available in the Toolkit.

Open Rank Searches

If the unit is running an “open rank” search (i.e., “assistant or associate,” “associate or full,” or open to all ranks), the committee should consider creating more than one assessment rubric, since different kinds and different levels of achievement may be expected from applicants at different stages of their careers (e.g., in terms of research productivity, range of teaching, amount of student advising, leadership in diversity efforts, outreach activities, or national service).

Using the Assessment Rubric as a Tool for Discussion

Committees may be tempted to use the assessment rubric similarly to how they would use a rubric designed for grading coursework or reviewing grant proposals: to rank applications based on total scores. It is important to stress, however, that the assessment rubric is a tool to help maintain consistency and fairness in the evaluation process, that is, to minimize bias either in favor of or against particular applicants. The rubric is not a substitute for active committee deliberations.

Committee members should come to meetings prepared to discuss the relative merits of specific applicants, and the review process should allow committee members opportunities to discuss any applicants they find have merit, regardless of assigned scores or combined rankings.
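
As one optional, hypothetical illustration of using ratings to structure discussion rather than to rank, the short Python sketch below flags the criteria on which reviewers’ ratings for a given applicant diverge, so the committee can focus its deliberations there instead of comparing total scores.

```python
def divergent_criteria(ratings_by_reviewer: dict) -> list:
    """Return the criteria on which reviewers disagree for one applicant,
    as prompts for committee discussion (not as a ranking device)."""
    all_criteria = next(iter(ratings_by_reviewer.values())).keys()
    return [
        criterion
        for criterion in all_criteria
        if len({ratings[criterion] for ratings in ratings_by_reviewer.values()}) > 1
    ]

# Hypothetical example: two reviewers rate one applicant on a three-level scale.
ratings = {
    "Reviewer A": {"Research fit": "Evidence", "Teaching": "Evidence"},
    "Reviewer B": {"Research fit": "Evidence", "Teaching": "No Evidence"},
}
print(divergent_criteria(ratings))  # -> ['Teaching']
```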

4.3 Creating and Implementing an Assessment Plan

Before any applications are reviewed, the committee should have agreed upon an explicit plan for how it will conduct its business in a fair and consistent manner. Some questions to ask:

  • When will the committee begin reading and assessing applications? As applications come in? Or after the priority deadline?
  • Given the anticipated size of the applicant pool, how many rounds or stages of assessment are likely to be needed? And will a single assessment rubric be appropriate, or will multiple rubrics be needed?
  • Should all committee members read and assess the same materials at the same stage of the search process?
  • How will committee members define and then handle potential conflicts of interest or potential bias or perception of bias, such as a prior relationship with an applicant or with an applicant’s adviser? This issue can be especially challenging if the pool includes internal applicants.
  • By what process will the committee come to a decision about its interview list? Will members vote, for example, or deliberate until they achieve consensus?
  • At what point in the process will the committee review letters of recommendation or contact references? Research suggests that, although they can provide useful information, letters of recommendation often reflect their authors’ idiosyncrasies and biases rather than providing an “objective” assessment.
  • Will the committee conduct preliminary interviews? If so, will these be in person, over the phone, by Zoom, or by some other means?
  • By what process will the committee create its short list of candidates for final interviews? 
  • How will the committee organize final interviews—will they be conducted in person and on campus (the conventional “campus visit”) or in a virtual environment?
  • By what process will the committee make its final assessments and recommendations?
  • How will the committee communicate with candidates and with the larger unit at each stage of the process?

In sum, it is important to consider:

  • At which stage(s) of the assessment process will you apply the assessment rubric or rubrics?
  • How will you ensure that agreed-upon criteria are applied consistently for all applicants at all appropriate stages of the assessment process?
  • How will you work to minimize the potential impact of implicit bias?

4.4 Preliminary Interviews

In many fields it is standard practice to conduct preliminary interviews with a “long” short list—perhaps 8 to 10, or as many as 15 candidates—before determining which 2 to 4 to schedule for final interviews. Preliminary interviews are an efficient way for committees to consider a range of interesting candidates. To help make preliminary interviews consistent, fair, and effective:

  • Avoid offering “courtesy” interviews to applicants who do not meet stated criteria, including internal applicants.
  • Conduct all interviews in the same format and under similar conditions—whether in person, over the phone, or on Zoom—including interviews with internal candidates.
  • Have the same committee members present for all interviews, and ask the same set of standard questions, in the same order.
  • Ask every candidate about their potential contributions to the unit’s diversity, equity, and inclusion mission.
  • Make sure all interview questions comply with federal and state hiring laws and university policies. Some questions are always off limits. A “Chart for Fair and Unfair Pre-employment Inquiries”—which covers everything from asking questions about age to asking questions about disability, marital status, national origin, race, and sexual orientation—is available under the heading “Guidelines for pre-employment inquiries” on the EOAA website, now part of UW HR (hr.uw.edu/eoaa).

A link to the chart of “fair” and “unfair” inquiries, sample interview questions that highlight issues of diversity and inclusion, and a guide to interviewing candidates with disabilities are available in the Toolkit.

4.5 Final Interviews

The set of final interview activities—whether conducted in person on campus or in a virtual environment—is a component of the assessment process, but it is also the beginning of the recruitment process. These activities should involve not only the search committee but also the unit, the college or school, and campus and community allies.

Organizing the Final Interview

Final interview activities—again, whether conducted on campus or virtually—allow candidates to showcase their professional qualities. They are also opportunities for the unit to make finalists feel welcomed and to help finalists imagine themselves as part of a new community.

It is important to clearly distinguish which components of the final interview are part of the search committee’s and the unit’s assessment process. Assessment components typically include some form of job talk, research seminar or presentation, and/or teaching demonstration; meetings with the chair or director, other unit leaders, and graduate students; meetings with relevant unit committees, such as a curriculum committee or a diversity committee; and a meeting with the appropriate dean or chancellor.

  • Provide finalists with a detailed itinerary of all assessment activities, as far in advance as possible. To ensure equitable treatment, itineraries for all finalists should be identical in terms of assessment activities, including itineraries for internal candidates. For example, if one finalist is scheduled to meet with a curriculum or diversity committee as part of their assessment, all finalists should be scheduled to meet with the same curriculum or diversity committee.

The unit’s recruitment activities can take many forms during a final interview.

  • Units may want to introduce finalists to relevant faculty, staff, students, and administrators within and outside the unit with whom they might share research, teaching, service, and/or outreach interests. How can the unit help finalists imagine local professional networks?
  • Units may want to ask finalists if they would like to visit relevant research centers, facilities, or other campus resources, and/or to meet with specific individuals. It is best to create a list of resources finalists can review before the final interview. A sample list of campus resources is available in the Toolkit.
  • Time permitting, units may want to ask finalists if they would like to meet with relevant community partners and resources.
  • Units may want to provide venues for finalists to ask questions they might not feel comfortable asking members of the hiring unit (e.g., about atypical resources for their research, or about partner accommodations, family or medical leave, disability accommodations, resources for childcare or eldercare, unit or campus climate for people from underrepresented backgrounds, and so forth).
  • Units may want to introduce finalists to relevant campus resources for their success, such as Teaching@UW or The Whole U.

It is important to maintain clear and open communication with finalists before, during, and after the final interview, and it is important to be honest about the unit’s expectations for teaching, research, and service, as well as about issues of funding, space, or other resources.

Taking a “Tiered” Approach to Final Interviews

One advantage of conducting final interviews virtually, rather than in person on campus, is that units can interview a larger number of finalists in an efficient and cost-effective way.

In one potential version: The unit invites a first “tier” of, say, 6 candidates for one hour of interview activities each (an initial commitment of 6 hours) and selects 4 of these to advance to the second “tier.” Next, the unit invites the 4 selected candidates for an additional 2 hours of interview activities each (an additional commitment of 8 hours) and selects 2 of these to advance to a third “tier.” Finally, the unit invites the 2 selected candidates to an additional 1 to 2 hours of interview activities each (an additional commitment of 2 to 4 hours) and makes its final selections. The full sequence takes roughly 16 to 18 hours of interview activities: in about the same amount of time as hosting a single candidate on campus, the unit has considered 6.

If appropriate funding streams are available (e.g., discretionary gift budgets), the unit may choose to fund “recruitment visits” for the selected finalist(s) while they make their decision to accept the unit’s offer.

Final Interviews and Internal Candidates

If the list of finalists includes internal candidates, it is important to:

  • Ensure that the itineraries for their assessment activities—whether on campus or virtual—are identical to those of external candidates.
  • Inform internal candidates about all aspects of the final interview process, and be intentional about maintaining fairness, collegiality, and confidentiality.
  • Encourage internal candidates not to attend assessment activities involving the other finalists, such as job talks, teaching demonstrations, interviews, or hiring meetings.

A best practice is to interview internal candidates first in order to avoid any potential perception that internal candidates have an advantage from having seen firsthand or gathered information about the other candidates’ final interviews.

Next section: Recruitment