Two key components help ensure the efficient and effective assessment of job applicants:
1. A clear and consistent assessment rubric (i.e., the criteria by which committees and other decision-makers evaluate applicants’ qualifications and potential); and
2. A clear and consistent assessment plan (i.e., the process by which committees evaluate applicants, make selections at each stage of evaluation, and ultimately make recommendations to the voting faculty and/or to leadership with hiring authority).
Moreover, in addition to ensuring efficiency and effectiveness, assessment rubrics and assessment plans assist in mitigating the impact of personal and collective biases.
4.1 Bias and the Assessment Process
Research confirms that bias can enter the assessment process at multiple points and in multiple forms. Follow best practices to minimize the effects of specific manifestations of bias in the hiring process:
Early Bird Bias and/or Recency Bias. Beware of over-valuing applications that arrive early or late in the process, or simply giving them more attention.
- Best practice: Avoid reviewing any applications until the priority deadline, and organize applications by some method other than order of arrival.
Moving Target Syndrome. Beware of changing the requirements for the position as the search proceeds in order to include or exclude particular applicants. And beware of being distracted by interesting or impressive applicants whose qualifications fall outside the advertised position.
- Best practice: Establish evaluation criteria while writing the job ad and commit to using the assessment rubric at every stage of evaluation. It may be helpful to designate a point during the process to evaluate the usefulness of the assessment criteria and the consistency of their application. How well are the criteria and the process working?
Known Quantity Bias. Internal applicants—current students, recent graduates, post-docs, visiting scholars, instructors, etc.—can be both disadvantaged and advantaged during the hiring process.
- Best practice: It is important to openly discuss the challenge of maintaining fairness, collegiality, and confidentiality when internal applicants are part of the pool. It is also important to openly discuss how the committee and the hiring unit will define and manage potential conflicts of interest. When should members disclose potential conflicts? And under what conditions should members recuse themselves from making evaluations?
- Best practice: Assign a point person—someone with authority who is not on the hiring committee—as a communication contact for internal applicants. If internal applicants proceed to the interview stage, schedule them early in the process to avoid the appearance that they have an inside advantage.
Implicit Bias. All of us are affected by unconscious bias, the stereotypes and preconceptions about various social groups stored in our brains that can influence our behavior toward members of those groups, both positively and negatively, without our conscious knowledge.
One well-documented example is our tendency to feel more comfortable with those we perceive as similar to ourselves (so-called in-group favoritism). And numerous studies show that in situations of evaluation, members of dominant groups are typically rated more highly than others, even when credentials are identical. This occurs regardless of the evaluator’s gender or racial background. “Positive bias” often manifests as favoritism and giving some applicants both more attention and the benefit of the doubt. “Negative bias” often manifests not as overt hostility but rather as a kind of neglect—as an absence of attention or lack of careful consideration. Without thinking, we often ignore the less familiar.
It is therefore crucial to consider the potential impact that implicit bias may have on the evaluation process.
In addition to familiar “demographic” factors that can trigger implicit bias—such as gender, race, ethnicity, age, national origin, and so forth—academia has its own set of factors that can trigger implicit bias.
“Academic” or “professional” factors that can trigger implicit bias against particular applicants, whether or not they meet advertised selection criteria, include:
- Non-traditional career paths.
- Non-traditional research interests or methodologies.
- Degrees from institutions considered less historically prestigious.
- Prior work experience at institutions considered less historically prestigious.
- A perceived lack of “fit” with the unit’s historical or current profile (e.g., in terms of gender, age, background, interests, commitments, and so forth).
“Academic” or “professional” factors that can trigger implicit bias in favor of particular applicants, whether or not they meet advertised selection criteria, include:
- Traditional career paths.
- Traditional research interests and methodologies.
- Degrees from institutions considered historically prestigious.
- Prior work experience at institutions considered historically prestigious.
- A perceived “fit” with the unit’s historical or current profile (e.g., in terms of gender, age, background, interests, commitments, and so forth). This is sometimes referred to as “cloning”—replicating the historical or current unit profile in new hires.
Implicit bias is more likely to affect our decision making when we are tired, in a hurry, feeling overworked or distracted, or uncertain of exactly what we should do—in other words, under the conditions we often face while serving on search committees. And research shows that bias can be contagious; we are more likely to feel, express, or enact bias after witnessing it in others.
Attention to implicit bias can help committees to acknowledge the value of applicants who are less obviously similar to historical or current colleagues, and thus to consider their positive contributions to the unit and its future. Attention to implicit bias can also encourage committees to openly discuss how members define concepts like “merit,” “quality,” “excellence,” and “potential.” Do committee members assume these and related concepts have singular definitions? And do members think definitions for these concepts should be fixed and unchanging?
Resources and case studies about implicit bias are available in the Toolkit.
4.2 Creating and Implementing an Assessment Rubric
One of our best tools for mitigating potential bias in the hiring process is to establish evaluation criteria before the committee begins reviewing applications—ideally, before or while the committee drafts the job advertisement. An assessment rubric ensures that all applicants are subject to the same evaluation criteria, and that members of search committees apply selection criteria consistently.
If possible, the entire unit should participate in the creation of an assessment rubric to ensure that the unit’s values are reflected in the evaluation criteria. At a minimum, the search committee should be assisted by unit leadership and the unit’s diversity committee.
An assessment rubric also helps the committee and the unit weigh selection criteria against unit priorities—including the unit’s commitments to diversity, equity, and inclusion. For a particular search, which areas of assessment are considered “must haves” or “deal breakers”?
Some questions to consider:
- What are the goals for this hire in terms of research, teaching, service, and outreach?
- How is a commitment to diversity, equity, and inclusion a factor in each goal?
- How does the unit weigh these goals in terms of first and second priorities?
- What types of evidence will demonstrate achievement or future potential in each area?
- Does the job ad request materials appropriate to the assessment criteria?
Committees should consider how many distinct criteria will be used in their assessment. Between 5 and 8 is a manageable range. They should also consider whether a single rubric will be adequate, or whether it will be useful to devise multiple rubrics for the multiple stages of a complex search. Some committees, for instance, find it useful to “scaffold” their rubrics, using 2-3 criteria in the first round of assessment and adding criteria in subsequent rounds.
And committees should consider what kind of scale to employ. Interfolio uses a “star” rating system; evaluators can assign between 1 and 5 “stars” for each criterion. Committees can also devise their own rating systems outside Interfolio.
Some typical scales include:
- A simple choice of “High,” “Medium,” and “Low” rankings (e.g., using only the first three “stars” in Interfolio). A simple choice may ensure greater consistency in how diverse committee members employ the scale.
- A more elaborate choice of “Excellent,” “Good,” “Fair,” “Deficient,” and “No Evidence” rankings (e.g., using all five “stars” in Interfolio). Finer distinctions may require “norming” exercises to ensure diverse committee members employ the scale in similar ways.
A range of sample assessment rubrics is available in the Toolkit.
Open Rank Searches
If the unit is running an “open rank” search (i.e., “assistant or associate,” “associate or full,” or open to all ranks), the committee should consider creating more than one assessment rubric, since different kinds and different levels of achievement may be expected from applicants at different stages of their careers (e.g., in terms of research productivity, range of teaching, amount of student advising, leadership in diversity efforts, outreach activities, or national service).
Using the Assessment Rubric as a Tool for Discussion
Committees may be tempted to use the assessment rubric similarly to how they would use a rubric designed for grading coursework or reviewing grant proposals: to rank applications based on total scores. It is important to stress, however, that the assessment rubric is a tool to help maintain consistency and fairness in the evaluation process, that is, to minimize bias either in favor of or against particular applicants. The rubric is not a substitute for active committee deliberations.
Committee members should come to meetings prepared to discuss the relative merits of specific applicants, and the review process should allow committee members opportunities to discuss any applicants they find have merit, regardless of assigned scores or combined rankings.
4.3 Creating and Implementing an Assessment Plan
Before any applications are reviewed, the committee should have agreed upon an explicit plan for how it will conduct its business in a fair and consistent manner. Some questions to ask:
- When will the committee begin reading and assessing applications? As applications come in? Or after the priority deadline?
- Given the anticipated size of the applicant pool, how many rounds or stages of assessment are likely to be needed? And will a single assessment rubric be appropriate, or will multiple rubrics be needed?
- Should all committee members read and assess the same materials at the same stage of the search process?
- How will committee members define and then handle potential conflicts of interest, such as a prior relationship with an applicant or with an applicant’s adviser? This issue can be especially challenging if the pool includes internal applicants.
- By what process will the committee come to a decision about its interview list? Will members vote, for example, or deliberate until they achieve consensus?
- At what point in the process will the committee review letters of recommendation or contact references? Research suggests that, although they can provide useful information, letters of recommendation often reflect their authors’ idiosyncrasies and biases—rather than provide an “objective” assessment.
- Will the committee conduct preliminary interviews? If so, will these be in person, over the phone, by Zoom, or by some other means?
- By what process will the committee create its short list of candidates for final interviews?
- How will the committee organize final interviews—will they be conducted in person and on campus (the conventional “campus visit”) or in a virtual environment?
- By what process will the committee make its final assessments and recommendations?
- How will the committee communicate with applicants and with the larger unit at each stage of the process?
In sum, it is important to consider:
- At which stage(s) of the assessment process will you apply the assessment rubric or rubrics?
- How will you ensure that agreed-upon criteria are applied consistently for all applicants at all appropriate stages of the assessment process?
- How will you work to minimize the potential impact of implicit bias?
4.4 Preliminary Interviews
In many fields it is standard practice to conduct preliminary interviews with a “long” short list—perhaps 8 to 10, or as many as 15 candidates—before determining which 2 to 4 to schedule for final interviews. Preliminary interviews are an efficient way for committees to consider a range of interesting candidates. To help make preliminary interviews consistent, fair, and effective:
- Avoid offering “courtesy” interviews to applicants who do not meet stated criteria, including internal applicants.
- Conduct all interviews in the same format and under similar conditions—whether in person, over the phone, or on Zoom—including interviews with internal candidates.
- Have the same committee members present for all interviews, and ask the same set of standard questions, in the same order.
- Ask every candidate questions about diversity, equity, and inclusion.
- Make sure all interview questions comply with federal and state hiring laws and university policies. (These are available on the EOAA website.)
A guide to “fair” and “unfair” inquiries, sample interview questions that highlight issues of diversity and inclusion, and a guide to interviewing candidates with disabilities are available in the Toolkit.
4.5 Final Interviews
The set of final interview activities—whether conducted in person on campus or in a virtual environment—is a component of the assessment process, but it is also the beginning of the recruitment process. These activities should involve not only the search committee but also the unit, the college or school, and campus and community allies.
Organizing the Final Interview
Final interview activities—again, whether conducted on campus or virtually—allow candidates to showcase their professional qualities. They are also opportunities for the unit to make finalists feel welcomed and to help finalists imagine themselves as part of a new community.
In addition to the traditional elements (the job talk, research seminar, and/or teaching demonstration; meetings with the chair or director, other unit leaders, and graduate students; meals or other casual events with colleagues; a meeting with the appropriate dean or chancellor; and a tour of the campus), final interviews should entail:
- Providing finalists a detailed itinerary, as far in advance as possible. To ensure equitable treatment, all itineraries should be similar, including those for internal candidates.
- Introducing finalists to relevant faculty, staff, students, and administrators within and outside the unit with whom they might share research, teaching, service, and/or outreach interests. How can you help finalists imagine local professional networks?
- Asking finalists if they would like to visit relevant research centers, facilities, or other campus resources, and/or to meet with specific individuals. It is best to create a list of resources finalists can review before the final interview. A sample list of campus resources is available in the Toolkit.
- Time permitting, asking finalists if they would like to meet with relevant community partners and resources.
- Providing venues for finalists to ask questions they might not feel comfortable asking members of the hiring unit (e.g., about partner accommodations, family or medical leave, disability accommodations, resources for childcare or eldercare, unit or campus climate for people from underrepresented backgrounds). The meeting with a dean can be an opportunity for these kinds of questions if it is clear they can be asked in confidence.
- Maintaining clear and open communication with finalists. It is important to be honest about expectations for teaching, research, and service, as well as about issues of funding, space, or other resources.
- Introducing finalists to relevant campus resources for their success.
Taking a “Tiered” Approach to Final Interviews
One advantage of conducting final interviews virtually, rather than in person on campus, is that units can interview a larger number of finalists in an efficient and cost-effective way.
In one potential version: The unit invites a first “tier” of, say, 6 candidates for one hour of interview activities each (an initial commitment of 6 hours) and selects 4 of these to advance to the second “tier.” Next, the unit invites the 4 selected candidates for an additional 2 hours of interview activities each (an additional commitment of 8 hours) and selects 2 of these to advance to a third “tier.” Finally, the unit invites the 2 selected candidates for an additional 1 to 2 hours of interview activities each (an additional commitment of 2 to 4 hours) and makes its final selections. In roughly the same amount of time as hosting a single candidate on campus (16 to 18 hours of interview activities in total), the unit has considered 6 candidates.
If appropriate funding streams are available (e.g., discretionary gift budgets), the unit may choose to fund “recruitment visits” for the selected finalist(s) while they make their decision to accept the unit’s offer.
Final Interviews and Internal Candidates
If the list of finalists includes internal candidates, it is important to:
- Ensure that the itineraries for their interview activities—whether on campus or virtual—are as similar as possible to those of external candidates.
- Inform internal candidates about all aspects of the final interview process, and be intentional about maintaining fairness, collegiality, and confidentiality.
- Encourage internal candidates not to attend public events, such as job talks or open meetings, involving the other finalists.
A best practice is to interview internal candidates first, in order to avoid any perception that they have gained an advantage by observing, or gathering information about, the other candidates’ final interviews.