Assessment Glossary

Like any academic discipline, assessment has a language of its own. Following are terms typically used in the assessment field.*

Achievement target

An achievement target stipulates the threshold a measurement's findings must reach for an objective to be considered met. A separate achievement target is established for each objective measured by an assessment instrument.

Action plan

An action plan serves program improvement, not just assessment improvement. Action plans are created to address all objectives that are not met in a given year. If all objectives are met, then at least one action plan to enhance the program is still expected.

Assessment and Accountability

Although the terms "assessment" and "accountability" often are used interchangeably, they have important differences. In general, when we assess our own performance, that is assessment; when others assess our performance, that is accountability. In other words, assessment is a set of initiatives we use to monitor the results of our actions and make our own improvements; accountability is a set of initiatives others take to monitor the results of our actions, and to penalize or reward us based on the outcomes.

Assessment

Assessment is the systematic process of determining educational objectives and gathering, analyzing, and using information about student-learning outcomes to make decisions about programs, student progress, and/or accountability.

Benchmark

Benchmarks are established when criterion-referenced performance is used for comparative purposes. Program faculty can use their own data as a baseline benchmark against which to compare future performance. They also can use national standards or data from another program as a benchmark.

Curriculum Map

Also known as "course map," "curriculum alignment," or "assessment audit," a curriculum map provides visual representation of how faculty prepare students to meet a program's established student-learning objectives. The process involves identifying where in the curriculum each student-learning outcome is introduced, developed, and mastered.

Direct measures

Direct measures of student learning require students to display their knowledge, skills, and attitudes for measurement. Objective tests, essays, presentations, portfolios, and classroom assignments are all examples of direct measures. Tools such as student-perception reports or alumni surveys do not meet this criterion because they measure students' perceptions about learning rather than actual student learning.

Evaluation

Broadly covering all potential investigations with formative or summative conclusions about institutional effectiveness, evaluation may include assessment of student learning, but it also includes non-learning-centered investigations (e.g., student satisfaction with recreational facilities or financial-aid packages).

Formative assessment

Formative assessment is used for progressive improvement (at the individual or program level) rather than for final summative decisions or for accountability. This interim process can provide feedback at various points in the academic program to improve teaching, learning, and curricula, and to identify students' strengths and weaknesses.

Goals

Long-term in nature, program goals form the foundation for student-learning assessment. Directly linked to the program's mission, goals stipulate the major principles the program serves (e.g., developing student competence to meet employer demands in the field of practice).

Indirect measures

Indirect instruments such as surveys and interviews ask students to reflect on their learning, attitudes, and skills rather than to demonstrate them. Direct measures are strongly preferred over indirect instruments.

Measurement

Assessment measurement encompasses the systematic investigation of students' performance, whether by direct or indirect means.

Norms

Norms reflect scores on a measure, focusing on the rank ordering of students and not on their performance in relation to set criteria.

Objectives

Short-term in nature and directly linked to the program's goals, student-learning objectives encompass the specific knowledge, skills, and attitudes that students are expected to achieve through their college experience; in other words, they are the expected or intended student-learning outcomes.

Outcomes

Outcomes are the results of learning: the specific knowledge, skills, and attitudes that students actually develop through their college experience. Outcomes are assessment results.

Performance-based assessment

Performance-based assessment involves gathering data through systematic observation of a behavior or process and evaluating that data based on a clearly articulated set of performance criteria.

Rater calibration

Rater calibration is conducted to help ensure that assessment rubrics are applied consistently by the various raters completing them. Calibration sessions involve discussion of the criteria for each element of the rubric, followed by participants' completion of the rubric for the same student artifact, and then discussion and justification of the scores given on various criteria. This process is repeated (usually two or three rounds) until raters' scores for the same examples no longer differ significantly.

Rubrics

Rubrics are scoring instruments that list the criteria for a piece of work, or "what counts" (e.g., purpose, organization, and mechanics are often what count in a piece of writing); they also articulate gradations of quality for each criterion, from highest to lowest.

Summative assessment

Summative assessment is a sum-total or final-product assessment of achievement at the end of a course of study.

Sustainability map

The sustainability map (or matrix) designates the years in which each of a program's SLOs is scheduled for direct measurement. Each program is expected to adhere to assessment experts' recommendations that at least one student-learning objective be measured and reported annually, and that every SLO be measured at least once in every five-year period.

Triangulation

Measures of student learning need to be triangulated, meaning that the results of at least two measures need to point to the same conclusion; one or more of the triangulated measures must be direct.

Value-added

The effects educational providers have on students during their programs of study comprise the value-added dimension of academics. Participation in higher education has value-added impact when student learning and development occur at levels above those produced by natural maturation alone; value added usually is measured as longitudinal change or as the difference between pretest and posttest scores.

* This glossary is adapted, in part, from terms defined in "Assessing Communication Knowledge, Skills, and Attitudes" by Phil Backlund, Timothy J. Detwiler, and Pat Arneson, in A Communication Primer, edited by Phil Backlund and Gay Wakefield (Washington, D.C.: National Communication Association, 2010).