Other Topics

Mathematics Placement Test Performance Predicts Subsequent Math Course Success at Four-Year Universities. D. McGhee. OEA Report 10-04, 2010 (285K PDF*)

This report describes the results of a study of the relationship between Mathematics Placement Test (MPT) scores and subsequent student performance in mathematics courses at four Washington universities. The results show that MPT total scores can be used to predict performance in college-level courses. In entry- and precalculus-level courses, all three tests produced scores that were significantly correlated with numeric course grade. At the calculus level, MPT-A total score was moderately to strongly related to course grade. The equations derived from logistic regression analyses can be used to estimate the MPT score likely to result in course success, which may be helpful to faculty reviewing or setting placement cut scores.
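The cut-score logic described above can be sketched in a few lines: a fitted logistic model gives the probability of course success as a function of test score, and inverting it yields the score at which that probability reaches a chosen threshold. The coefficients below are hypothetical, for illustration only; they are not values from the study.

```python
import math

def cut_score(b0, b1, p):
    """Invert a fitted logistic model, P(success) = 1 / (1 + exp(-(b0 + b1*x))),
    to find the test score x at which the predicted probability of success is p."""
    return (math.log(p / (1 - p)) - b0) / b1

# Hypothetical intercept and slope, for illustration only.
b0, b1 = -4.0, 0.2
score = cut_score(b0, b1, 0.5)  # score at which predicted success is 50%
```

Faculty setting cut scores could raise the target probability above 0.5 to favor a more conservative placement.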

UW Seattle English Language Proficiency Requirement: Autumn 2009 Cohort. D. McGhee. OEA Report 10-03, 2010 (282K PDF*)

Beginning Autumn quarter 2009, the University of Washington (Seattle) instituted new English Language Proficiency Requirement (ELPR) procedures for incoming undergraduates. The new (interim) policy requires that all entering undergraduate students (both freshmen and transfers) demonstrate English language proficiency prior to enrollment in classes. This report summarizes the English Language Proficiency status of the Autumn 2009 entering cohort as of January 2010.

Intermediate Mathematics Placement Test (MPT-I): Version 11 Development. D. McGhee and J. Peterson. OEA Report 10-02, 2010 (250K PDF*)

This report describes the development of Version 11 of the Intermediate Mathematics Placement Test (MPT-I). The instrument pilot test was conducted during winter quarter 2010 at the University of Washington. Item and total score statistics indicated that MPT-I Version 11 is acceptable for use in the Academic Placement Testing Program. MPT-I Version 11 will be released at the beginning of the 2010-2011 testing season.

General Mathematics Placement Test (MPT-G): 2009 Pilot Study. D. McGhee, N. Lowell, J. Gillmore, and J. Peterson. OEA Report 09-03, 2009 (214K PDF*)

Over the course of the 2008-2009 academic year, the new MPT-G and revised MPT-I were administered to high school and college students throughout the state. Subsequent end-of-course grades were collected to assist in setting a common college readiness cut score. This report describes the pilot procedures and results.

2008 DELNA Screening Pilot. N. Lowell and D. McGhee. OEA Report 08-03, 2008. (359K PDF*)

This report describes the methods and outcomes of a pilot administration of the Diagnostic English Language Needs Assessment (DELNA) Screening carried out during August and September of 2008. The DELNA Screening is a short (17-minute) online test consisting of Vocabulary and Speed-Reading subtests. It was completed by 1,158 students attending selected freshman, transfer, and international summer orientation sessions. Student test scores were combined with demographic and academic data from the UW Student Database and subjected to a series of analyses to determine whether the DELNA Screening would provide an appropriate measure to estimate the English language proficiency of incoming students.

2008 Survey of Community and Technical College Testing Centers. A. Giesbrecht. OEA Report 08-02, 2008. (154K PDF*)

In 2007 the Washington State legislature passed a bill requiring that the Mathematics Placement Tests (MPT) be revised to align with College Readiness Mathematics Standards (CRMS) created by Washington's Transition Mathematics Project (TMP). This report describes a brief survey of test administrators at community and technical colleges to determine whether the newly developed General Math Placement Test (MPT-G) can play a dual placement/college-readiness role at two-year institutions and, further, whether there is a good fit between the current test format and existing testing infrastructure. Survey results indicated that the MPT-G will be of limited usefulness at community and technical colleges because the majority of students require placement into below-college-level mathematics courses not addressed by the MPT-G. Additionally, the format of this test (paper-pencil, pre-scheduled, group) does not fit readily into the regular, daily testing process of two-year institutions, which rely heavily on computer-adaptive, walk-in, and individual testing. However, two-year schools currently administer tests of the same format as the MPT-G as date-specific administrations once or twice a year, and any given institution may elect to offer the MPT-G on the same basis. Students at all community and technical colleges are eligible for testing at existing APTP administrations whether or not the tests are available at their home institution.

General Mathematics Placement Test (MPT-G): Initial Test Development. D. McGhee, J. Peterson, J. Gilmore, and N. Lowell. OEA Report 08-01, August 2008. (462K PDF*)

Over the past several years, there has been increasing concern at both the state and national level about mathematics preparation among high school students. In 2007 the Washington State legislature mandated that the Mathematics Placement Test offered by the Academic Placement Testing Program be modified to align with College Readiness Mathematics Standards developed by the Transition Math Project. This report describes the initial development of the General Mathematics Placement Test (MPT-G) to meet the requirements of House Bill 1906.

Classroom Learning Environment Questionnaire UW College of Education Pilot: AU 2006-SU 2007. D.E. McGhee. OEA Report 07-09, October 2007 (144K PDF*)

This report details the second stage of development of the Classroom Learning Environment (CLE) questionnaire. The aim of the present study was to evaluate the statistical characteristics of the revised instrument using a large sample of classes.

Academic Challenge and Engagement Index (CEI): Development and Validation. D. McGhee, G. Gillmore, and A. Greenwald. OEA Report 07-05, November 2007. (235K PDF*)

In response to expressed concerns regarding the degree of academic challenge posed by UW courses, OEA undertook to develop a single index of challenge and student engagement based on items from Instructional Assessment System (IAS) course evaluation forms. Although a set of items specifically directed at this topic had been added to IAS forms in 1998, we felt that a single index might provide a simpler and more powerful representation for individual courses. Also, because the IAS is used to evaluate a large percentage of courses taught at the UW, the index could provide useful insight into more general student perceptions of UW educational experiences.

Factors Related to Attrition and Retention of Under-Represented Minority Students: National and Regional Trends. S. Lemire and C. Snyder, OEA Report 06-08, 2006. (430K PDF*)

This report summarizes analyses of selected data from files of the National Postsecondary Student Aid Study (NPSAS:04). The NPSAS:04, conducted by the National Center for Education Statistics (NCES) during the 2003-2004 school year, is a comprehensive nationwide study designed to determine how students and their families pay for postsecondary education and to describe characteristics of those enrolled. It captures extensive information on students' educational circumstances, and the resulting dataset affords the opportunity to describe the national context relative to a variety of educational issues. Our purpose is to present a limited number of educational variables, both nationally and regionally, relating to attrition and retention of under-represented minority students.

The Classroom Learning Environment (CLE) Questionnaire: Preliminary Development. D.E. McGhee, N. Lowell, S. Lemire, and W. Jacobson, OEA Report 06-07, 2006. (386K PDF*)

There is strong interest at the University of Washington in providing a positive environment for all faculty, staff, and students. Within the past few years, this Office has been asked to assist in administering two surveys of campus climate and, more recently, an extensive study of Leadership, Community, and Values has been initiated by our new Provost. It is in this context that we were asked by the Dean of the Office of Undergraduate Education to consider ways in which questions relating to issues of diversity could be integrated into ongoing course evaluations. The Office of Educational Assessment maintains a well-established course evaluation system used by most courses and all departments at UW Seattle. Our task was to determine whether and in what way we could capitalize on the capabilities of this system to obtain regular student assessment of classroom climate. In order to do this, we formed an Advisory Council made up of faculty and staff from a variety of programs and offices that work with diverse groups.

Mathematics Placement Test (MPT) Alignment with Washington State College Readiness Mathematics Standards (CRMS). J. Peterson, OEA Report 06-06, 2006. (212K PDF*)

This report provides a basic description of the Math Placement Tests (MPT) currently in use in five of the six public baccalaureate institutions in Washington State. The recently developed College Readiness Mathematics Standards (CRMS) are also described, along with an analysis of the alignment of the MPT to those standards carried out by an external agency, Achieve, Inc. Achieve noted that the MPT were not closely aligned to the CRMS, as would be expected given that the tests are principally designed to place entering college students into first-year mathematics courses rather than to provide a comprehensive assessment of their K-12 mathematics education. Nevertheless, a mapping of the MPT items to the CRMS should be included in a deliberate discussion of the structure and content of future revisions of the tests. For this reason, we undertook a more detailed analysis of the MPT and CRMS alignment, finding more correspondence than reported by Achieve and identifying specific areas on which to focus test redevelopment efforts.

UW Academic Advising Self-Study: Preliminary Report. April 2005. (2,177K PDF*)

In the summer of 2004, the University of Washington (UW) Board of Regents authorized funding to address advising issues at the UW, and the Office of Educational Assessment (OEA) was asked to undertake a self-study of all undergraduate advising activities at the UW. OEA contacted academic advisors, students, and administrators campus-wide to solicit feedback on their experiences with, and perspectives on, academic advising. The results of surveys, interviews, and reviews of existing records provided a rich array of both quantitative and qualitative data.

The Evaluation of General Education: Lessons from the USA State of Washington Experience. G. Gillmore, OEA Report 04-03, 2004. (274K PDF*)

Assessment and accountability are presented as contrasting models for evaluating outcomes in higher education. While both models are concerned with quality and improvement, the accountability model stresses externally imposed evaluation goals, methods, and criteria and comparisons among institutions. The assessment model stresses faculty-determined, institutionally-specific goals and methods with a focus on improvement. This discussion is followed by a description of general education and outcomes that one should consider measuring. Two State of Washington studies, relevant to the evaluation of general education, are presented: one under an accountability model that used standardized tests, and one under an assessment model that used student writing from courses. The report concludes with six prerequisites for evaluating general education.

Some Assessment Findings. 2003 OEA Assessment Group. OEA Report 03-05, 2003. (79K PDF*)

This is the first in a series of dynamic reports created by the OEA Assessment Group to highlight particular findings of interest. Throughout the year we undertake a variety of studies, both large and small, that address a wide range of topics. At times our research is very pointed and at times more general, but there is always the potential to learn something we didn't expect, or to look again from a different angle at things we thought we knew. We will update this page periodically throughout the year to share our findings with you.

Drawing Inferences about Instructors: Constructing Confidence Intervals for Student Ratings of Instruction. D.E. McGhee, OEA Report 02-05, 2002. (82K PDF*)

This report expands upon an earlier discussion of instructor-level reliability of course ratings. Gillmore (2000) previously demonstrated that adequate instructor-level reliability may be obtained when ratings are aggregated across at least seven classes. What was left unexamined, however, was the precision with which one should regard mean ratings. This brief report presents confidence intervals for true scores based on Instructional Assessment System (IAS) data from approximately 4,000 instructors.
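The interval construction described above can be illustrated with a minimal sketch based on classical test theory, in which the standard error of measurement is the observed-score standard deviation times the square root of one minus the reliability. The numbers below are hypothetical, not values from the report.

```python
import math

def true_score_ci(mean_rating, sd, reliability, z=1.96):
    """Confidence interval around an observed mean rating, using the
    classical standard error of measurement, SD * sqrt(1 - reliability)."""
    sem = sd * math.sqrt(1 - reliability)
    return (mean_rating - z * sem, mean_rating + z * sem)

# Hypothetical values, for illustration only: a mean rating of 4.0 on a
# 5-point scale, SD of 0.5, and instructor-level reliability of 0.84.
lo, hi = true_score_ci(4.0, 0.5, 0.84)
```

The width of the interval makes the report's point concrete: even with adequate reliability, a mean rating should be read as a range rather than a precise value.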

University of Washington Parents Survey 2001. G. Garson, OEA Report 02-04, 2002. (132K PDF*)

The fourteenth annual Parents Phone Survey invited parents to express their views on "how things are going" for their daughters and sons. The survey is one part of a larger effort to assess the University's success in establishing and maintaining effective lines of communication with parents through such means as Freshman Convocation and UW News, a publication for parents of University undergraduates. This report describes the methodology and presents the findings from the 2001 Parents Survey.

What Student Ratings Results Tell Us About Academic Demands and Expectations. G.M. Gillmore, OEA Report 01-02, 2001. (61K PDF*)

This report is based on a presentation by Dr. Gerald Gillmore, Director of the UW Office of Educational Assessment, at the Second Campus-wide Forum on Student Expectations and Demands, which took place on April 26, 2001. The purpose of these brief remarks was to present what students tell us about demands and expectations via their evaluations of classes using the Office of Educational Assessment Instructional Assessment System (IAS). The following four facts are discussed:

  1. Students put more effort into classes that demand more effort for them to be successful.
  2. Students tend to prefer more challenging classes over less challenging classes.
  3. The widely held belief that assigning students more work will lead to lower student ratings is not true in and of itself.
  4. Not all faculty are equally demanding; in fact, there are considerable differences among faculty in the amount of time students devote to their courses.

Drawing Inferences about Instructors: The Inter-Class Reliability of Student Ratings of Instruction. G.M. Gillmore, OEA Report 00-02, 2000. (339K PDF*)

The question addressed in this report is whether there is sufficient consistency in student ratings of instructors to support the use of data aggregated over classes for personnel decisions. Instructional Assessment System (IAS) data from over 2,800 instructors teaching over 23,000 classes were analyzed. Results showed adequate instructor-level reliability of ratings when aggregating across about seven classes and especially strong instructor-level reliability when aggregating across 15 or more classes. However, these results assume certain conditions of decision-making and are limited to similar conditions of measurement.
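The relationship between the number of classes aggregated and instructor-level reliability can be sketched with the Spearman-Brown prophecy formula, which gives the reliability of a mean taken over k classes from a single-class reliability. The single-class value below is hypothetical, chosen only to illustrate the shape of the curve; it is not a figure from the report.

```python
def spearman_brown(r1, k):
    """Reliability of a mean rating aggregated over k classes, given a
    single-class reliability r1 (Spearman-Brown prophecy formula)."""
    return k * r1 / (1 + (k - 1) * r1)

# With a hypothetical single-class reliability of 0.45:
r7 = spearman_brown(0.45, 7)    # aggregating over about seven classes
r15 = spearman_brown(0.45, 15)  # aggregating over fifteen classes
```

Under this assumption, reliability rises steeply at first and then flattens, which is consistent with the report's pattern of adequate reliability near seven classes and especially strong reliability at fifteen or more.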

Eventual Majors of Students Who Enrolled in MATH 124 and CHEM 140: A Study of 1992 Entering Freshmen. G.M. Gillmore, OEA Report 99-04, 1999. (221K PDF*)

The graduation majors of students who entered as freshmen in 1992 were categorized into those that required MATH 124 or CHEM 140, those that did not require the course, and those for which the course was an option. Only 33% and 38% of all graduates who passed MATH 124 and CHEM 140, respectively, majored in a field that required the course. Students who scored below 600 on SAT Math were much less likely to major in a field requiring either course, as were students who received a grade below 3.0 in the respective course. Thus, both courses continue to function as gate-keepers.

Student Views of the Calculus Sequence. K. Etzkorn and T. Taggart, OEA Report 98-3, 1998.

At the request of the Department of Mathematics, this study solicited student opinions on the introductory calculus sequence. Focus groups were conducted with a random sample of students currently enrolled in the calculus sequence, and a sample of students who had previously completed the sequence. Scripted questions were based on math faculty concerns. Focus group results were combined with quantitative course evaluations of 50 courses and 97 quiz sections over seven academic quarters. The results suggested several modifications to the calculus sequence including the addition of condensed courses directed at students with previous calculus experience, improvements in instructional materials, and broadening of the calculus curriculum to include material related to a wider variety of majors.

Survey Response Rate and Request for Student Identification. N. Lowell, OEA Research Note 98-N3, 1998. (69K PDF*)

When conducting survey research concerning student experiences and educational outcomes, it is often critical to link questionnaire responses to data collected from other sources. To this end, we request that students provide their student number as they complete their survey forms. This study examined differential response rates of students who were asked to provide identification and those who were not. Requests for identification did not appreciably affect rate of response, but a sizable proportion of students who returned completed surveys did not provide identification. It is recommended that surveys include requests for respondent identification as well as various demographic variables, even though the latter may be available from linked sources. The same pattern of response was found when results were examined by gender and transfer status, but, surprisingly, minority students were more likely to respond when asked to identify themselves than when they were not asked.

Prediction of English Composition Grades. G.M. Gillmore, OEA Research Note 98-N2, 1998.

The general purpose of this study was to determine the extent to which SAT verbal scores predict success in English 131, Freshman Composition. Test scores were compared to course grades for 10,613 students who received grades in English 131 from fall, 1992, through winter, 1998. All analyses pointed to the conclusion that SAT verbal scores do not effectively predict English 131 grades. High school grade point average is a better overall predictor.

Average Grades. G.M. Gillmore, OEA Research Note 98-N1, 1998. (55K PDF*)

This report provides information on average grades at the UW for the 1996-97 academic year, updating OEA Research Note 95-N3. The earlier report noted that grades rose from 1975 to 1987 and then leveled off. Even so, faculty almost unanimously thought that grades were too high and over 80% felt that measures should be taken to reduce them. Students thought that grades were too high, though not to the extent that faculty did. Faculty and students tended to prefer the 4.0 grading system over the alternatives. The present study found that average grades for 1996-97 were essentially equivalent to those for 1994-95.

The Effects of Priority Registration on Missed Classes for University of Washington Softball Players. G.M. Gillmore and A. Few, OEA Report 97-7, 1997.

Student-athletes must be enrolled as full-time students to compete in intercollegiate sports, yet game schedules result in many missed class sessions. This study examined the effect of priority registration on reducing class absences due to conflicting game schedules. Student class schedules were matched with times of scheduled games and travel for 1996, before priority registration was implemented, and 1997, after priority registration was implemented. It did not appear that priority registration made a positive contribution to class attendance or student-athlete performance.

The University of Washington Teaching Portfolio Project. G.M. Gillmore, D. Hatch, and other contributors, OEA Report 97-5, 1997.

Currently, student ratings of instruction are the most widely used method by which teaching is evaluated. In addition, UW faculty are required to undergo periodic peer review. While these two methods are useful components of a thorough evaluation, they don't provide a complete picture. This report summarizes attempts by ten campus units to develop teaching portfolios as an additional source of information on quality of instruction. Awards of $10,000 were made to participating units and this compilation summarizes their efforts.

Four Models of the Relationship between Assessment and Accountability. G.M. Gillmore, OEA Research Note 97-N4, 1997. (35K PDF*)

Assessment and accountability are uneasy bedfellows. In a sense, the legislature has paid us to do assessment but will deduct payment if we fail to meet certain accountability goals. Our constituencies demand both, as well they should. This report outlines four models relating accountability and assessment, along with related underlying assumptions and major problems.

Teaching, Learning and Technology (TLT) Pilot. N. Lowell. OEA Research Note 97-N3, 1997. (95K PDF*)

The University of Washington UWired program, in conjunction with teachers from around the state, has developed the Teaching, Learning and Technology (TLT) Program to teach effective use of educational technology in K-12 classrooms. This report describes an assessment of a four-credit pilot course offered in August 1997, and includes links to the three web-based assessment instruments used. Although participants differed in their prior level of experience with computers, what was taught and how it was taught appeared to have more influence on learning than did previous experience. Lessons were also learned regarding the use of on-line surveys, in particular the importance of limiting the number and length of surveys administered.

A Decade of Formal Assessment at the University of Washington. G. Gillmore and L. Basson, OEA Report 96-7, 1996. (94K PDF*)

This report describes the history, impact, and future directions of formal assessment at the University of Washington. It begins with a discussion of the background and history of the assessment movement in the State. Principles guiding assessment and implementation strategies are described, followed by an overview of the impact of assessment. Specific improvements in curriculum and courses guided or influenced by assessment research are described for departmental majors, writing, quantitative and symbolic reasoning, distribution requirements, special programs, diversity, graduation rates and time to degree, accreditation, and forging links among institutions. The report ends with a discussion of future directions seen as serving three major and interrelated goals: 1) the development and measurement of accountability or performance indicators, 2) the assessment of new and continuing programs to improve their effectiveness, and 3) the contribution of assessment to the University's strategic planning by providing relevant data on quality and efficiency.

Cultural and Ethnic Diversity: The Need for a Requirement. G.M. Gillmore and P.J. Oakes, OEA Report 95-8, 1995.

The Cultural and Ethnic Diversity (CED) Task Force was appointed by the Faculty Senate in order to evaluate the need for and the feasibility of an ethnic and cultural pluralism requirement. This report addresses the need for a CED requirement based on data from three sources: a Cultural Diversity Questionnaire administered to students in twelve classes, questions about the proposed requirement that appeared on the 1994 and 1995 senior surveys, and simulations performed to determine how many 1993-94 graduates would have met the requirement had it been in effect during their tenure. Results from the questionnaire suggest that factors considered important in educating students about cultural and ethnic diversity are addressed and learned in some classes at the UW. Responses to the senior survey questions indicated that roughly one-third of the respondents would have liked more attention paid to issues of pluralism and diversity in their classes, one-third would have liked less, and one-third were neutral. Finally, the simulations showed that between 60% and 80% of the graduates would have met the criteria for a CED requirement, but many of those in the sciences and professional schools would not.

Grades. G.M. Gillmore, OEA Research Note 95-N3, 1995. (176K PDF*)

During 1994-95, the Faculty Senate Council on Academic Standards considered the question of grading and the perceived continuation of grade inflation at the UW. The Office of Educational Assessment subsequently surveyed faculty on grading matters and included questions on grading in the annual survey of seniors. This research note provides some of the results from these surveys as well as data from the UW Registrar's Office on trends in average grades over time, and from the UW Office of Institutional Studies on average grades in various academic units. These data indicate that average grades rose from 1975 to 1987 before leveling off. Both faculty and students think grades are now too high. Over 80% of faculty members feel that measures should be taken to reduce average grades.

Student Instructional Ratings at the University of Washington: Increased Usage and Averages. G.M. Gillmore, OEA Note N-95-2, 1995.

Student ratings usage grew fairly steadily, from 525 classes rated during the 1959-60 academic year to 9,110 classes rated during the 1993-94 academic year. The median of the class averages also rose, from 3.70 in 1976-77 to 3.97 in 1993-94 for the average of items 1-4. The ratings of teaching assistants (TAs) increased more sharply than those of other ranks, suggesting that TA training programs have been effective. More generally, there is some evidence that new generations of faculty may tend to be more effective teachers.

The Effects of Course Demands and Grading Leniency on Student Ratings of Instruction. G.M. Gillmore and A. Greenwald, OEA Report 94-4, 1994. (243K PDF*)

The purpose of this study was to better understand the effects of grades and measures of course difficulty on student ratings of instruction. It was based on ratings from 337 UW classes in fall 1993, using the newly developed Form X. The study found that students' ratings are positively influenced by three factors, in order of importance: student perceptions of the ratio of valuable hours to total hours in the time put into the course, the challenge of the course, and the leniency of grading.

University of Washington Undergraduate Capstone Courses, Practica and Internship Opportunities, and Feedback from Employers. G.M. Gillmore, OEA Report 91-6, 1991. (13,842K PDF*)

As a beginning point for end-of-program assessment, a survey was conducted to develop a compendium of existing capstone courses, internship and practica opportunities, and programs for feedback from employers. Each of these three major categories was further subdivided into subcategories, such as whether there are multiple raters of the final products for capstone courses and whether internships are required (either formal or informal).

Reliability of the Items of the Instructional Assessment System: Forms A-G. N. Lowell and G.M. Gillmore, OEA Report 91-1, 1991. (134K PDF*)

The University of Washington (UW) was among the earliest institutions to systematically evaluate courses using student ratings. The first efforts at the UW were initiated in the 1920s, and over the years different methods of collecting and reporting ratings have been used. The Instructional Assessment System (IAS) was introduced in 1974 and has grown considerably in use since that time. This report presents item means and reliability estimates for IAS evaluation items based on data gathered at the UW main campus during the 1989-90 academic year. These data represent more than 150,000 rating forms, evaluating nearly 7,000 classes.

The Validity and Usefulness of Three National Standardized Tests for Measuring the Communication, Computation, and Critical Thinking Skills of Washington State College Sophomores: General Report. May, 1989. (3,296K PDF*)

In its master plan (December 1987), the Washington State Higher Education Coordinating (HEC) Board recommended that both two-year and four-year institutions conduct a pilot study to evaluate the appropriateness of using standardized tests as a means for measuring the communication, computation, and critical thinking skills of sophomores. The purpose of such a testing program would be for institutions to: a) strengthen their curricula, b) improve teaching and learning, and c) provide accountability data to the public. Over 1,300 sophomore students from public four-year and two-year colleges were tested, with each student taking two of the three national tests studied. Additionally, more than 100 faculty members took shortened versions of the same tests and critiqued them for appropriateness of content and usefulness. The study concluded that the national tests did not provide an appropriate or useful assessment of the communication, computation, and critical thinking skills of sophomores, and added little reliable information about students' academic performance beyond what was already known from admissions test data and student grades. The tests did not reflect specific aspects of the college experience such as credits earned and did not provide an adequate match with curricular content.

*Software capable of displaying a PDF is required for viewing or printing this document. Adobe Reader is available free of charge from the Adobe Web site at http://www.adobe.com/products/acrobat/readstep2.html