Meta-Assessment
Meta-assessment is the evaluation of our assessment practice. It helps us understand and improve the quality of our assessment at all levels, and it provides feedback to university areas, faculty, and staff on their assessment reports. Considerable time and effort are invested in this process, which is coordinated by Institutional Assessment and Accreditation.
Meta-assessment produces data for multiple uses:
- Feedback reports on the quality of assessment reports for each program of study.
- Summary reports for departments, colleges, and the university on how many reports are being submitted and the quality of those reports.
- Discussion with external stakeholders about accountability.
Since 2016, we have used the Quality Assessment Rubric, adapted from James Madison University's APT Assessment Rubric, to evaluate assessment reports. This comprehensive rubric was developed from best practices and is used, or has been adapted for use, at other institutions. Using a standardized rubric aligned with best practices gives us an opportunity to benchmark our assessment practices and contribute to the national discussion of assessment quality.
The meta-assessment review is conducted each Spring. As data is collected and analyzed, reports are made available on this webpage, and each program of study’s report is shared directly with its department.
Assessment Self-Evaluation and Peer Review
Programs may use the Quality Assessment Rubric self-evaluation tool to evaluate themselves or to engage in peer review. The results of each evaluation are displayed at the end and can be printed, and reviewers can also request to have a summary emailed to them. This is the same tool used by Institutional Assessment and Accreditation for meta-assessment.
Sample Assessment Reports by Quality Rating
Institutional Assessment and Accreditation receives requests to share examples of assessment reports from other programs. The table below is updated during the annual meta-assessment with example reports from each college at each quality level achieved. The reports shown are from 2022.
| College | Mature | Established | Developing |
| --- | --- | --- | --- |
| Agricultural and Life Sciences | Sustainable Food Systems, B.S. | Food & Nutrition, B.S. | Ag. Education, B.S. |
| Health and Human Sciences | Athletic Training, M.S.A.T | Curriculum and Instruction, M.Ed | |
| Letters, Arts, & Social Sciences | Communication, B.A., B.S. | Music Business, B.Mus. | |
| Art & Architecture | | | |
| Business & Economics | Business Economics - General Option, B.S.Bus | Accountancy, M.Acct. | |
| Engineering | Computer Science, B.S.C.S | Biological Engineering, M.S., M.Engr. | |
| Natural Resources | Fishery Resources, B.S.Fish.Res. | Renewable Materials, B.S.Renew.Mat | Enviro. Science, M.S. |
| Science | Math, General, B.S. | Statistics, GR Cert | Math, M.S. |
Reports
Program Reports
Individual feedback reports for each program of study are attached directly to the evaluated Student Learning Assessment Report in Anthology Planning (APR). The feedback report is added at the bottom of the template; Figure 1 shows an example of what this looks like.
General Education Reports
- Oral Communication
- Written Communication
- Mathematical WOK*
- Scientific WOK*
- Humanistic/Artistic WOK*
- Social/Behavioral WOK*

*Way of Knowing
College Reports
- College of Agricultural and Life Sciences Quality Assessment Report, 2023
- College of Art and Architecture Quality Assessment Report, 2023
- College of Business and Economics Quality Assessment Report, 2023
- College of Education, Health and Human Sciences Quality Assessment Report, 2023
- College of Engineering Quality Assessment Report, 2023
- College of Graduate Studies Quality Assessment Report, 2023 — no report available
- College of Law Quality Assessment Report, 2023 — no report available
- College of Letters, Arts and Social Sciences Quality Assessment Report, 2023
- College of Natural Resources Quality Assessment Report, 2023
- College of Science Quality Assessment Report, 2023
Institutional Reports
Resources
Meta-Assessment Rubric Section 1.A & 1.B
Section 1.A and 1.B: This section looks at the student learning outcome statements and evaluates them for student-centered language and an adequate description of who will be assessed, the action that will be assessed (verb precision), and the content or domain that will be assessed.
Creating Learning Outcomes
Boston University
Degree Qualifications Profile (DQP)
Learner-centered framework for what college graduates should be able to do at differing degree levels.
Approaches for Developing Outcomes
University of Nebraska-Lincoln
Writing Effective Learning Objectives
Johns Hopkins Sheridan Libraries: The Innovative Instructor
Bloom’s Taxonomy, Action Speaks Louder
Johns Hopkins Sheridan Libraries: The Innovative Instructor
Bloom’s Taxonomy of Measurable Verbs
California State University Northridge
SLOs, Bloom’s Taxonomy, Cognitive, Psychomotor, and Affective Domains (pages 3-5)
Crafton Hills College
To Imagine a Verb: The Language and Syntax of Learning Outcomes Statements
National Institute for Learning Outcomes Assessment (NILOA)
Meta-Assessment Rubric Section 2
Section 2: This section looks for a related activity for each student learning outcome. This could include listing courses where an outcome is addressed/assessed or uploading a curricular mapping document.
Mapping Learning Outcomes: What You Map is What You See
National Institute for Learning Outcomes Assessment (NILOA)
Opportunity to Learn (planning worksheets)
University of Nebraska-Lincoln
Meta-Assessment Rubric Section 3.A, 3.B, 3.C, 3.D, & 3.E
Section 3.A: This section refers to how well a tool measures what it is supposed to measure, or to what extent an assessment task produces a student work product that represents the domain of the outcome(s) you intend to measure. Things you might report in your plan include: the steps used to develop the instrument; the relationship or correlation between the measure and the student learning outcome; or clarity on training for raters/observers, instructions for test-takers, and instructions for scoring. Types of validity to address include face validity, construct validity, and formative validity.
Why Should Assessment, Learning Objectives, and Instructional Strategies be Aligned?
Carnegie Mellon University
Evaluating Student Learning Assessment of Outcomes
Cosumnes River College
A Primer on the Validity of Assessment Instruments
Journal of Graduate Medical Education
Assessment Techniques
Cañada College
Section 3.B: This section refers to whether the student learning outcome is assessed using a direct measure.
Choose a Method to Collect Data/Evidence
University of Hawaii – Manoa
Direct and Indirect Measures of Student Learning
Indiana University – Purdue University Indianapolis
Using Indirect vs. Direct Measures in the Summative Assessment of Student Learning in Higher Education
Journal of the Scholarship of Teaching and Learning
Measurement
Kansas State University: Office of Assessment
Assessment Method
University of Nebraska-Lincoln
Section 3.C: This section refers to the desired level of student achievement (the criterion for success). The target should be justified; possible ways to do this include comparing students' performance against peers, against an established standard, or against prior years' results.
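As a simple illustration (the numbers here are hypothetical, not targets prescribed by the rubric): if the criterion is that 80% of students will score 3 or higher on a 4-point rubric, and 52 of 60 students do so, the observed rate is 52/60 ≈ 87%, so the criterion is met. The justification would then explain why 80% is an appropriate threshold, for example because it reflects prior years' results or a disciplinary standard.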
Performance Indicator (Criteria for Success)
Central Michigan University: Curriculum and Assessment
Section 3.D: This section asks for information about data collection: who took the assessment (or information about the sample) and how it was administered, including relevant testing conditions such as student motivation.
Section 3.E: This section asks about the reliability of the measure. This might include discussion of inter-rater reliability exercises, rubric calibration, or rubric norming. Ultimately, this is about what is being done to ensure consistency between raters, across sections, or over time. Keep in mind that some subjectivity exists in how psychometric terms are used in student learning outcomes assessment and in many of the available resources. The materials presented here provide information on the type of activity or reporting that is relevant to our work.
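As one illustration of how rater consistency can be quantified (a common option, not something the rubric requires): Cohen's kappa compares the observed agreement between two raters with the agreement expected by chance,

$$\kappa = \frac{p_o - p_e}{1 - p_e}$$

where $p_o$ is the proportion of student artifacts on which the raters agree and $p_e$ is the agreement expected by chance. For example, if two raters agree on 45 of 50 artifacts ($p_o = 0.90$) and chance agreement is $p_e = 0.60$, then $\kappa = (0.90 - 0.60)/(1 - 0.60) = 0.75$, generally read as substantial agreement. Raw percent agreement can also be reported, but it does not account for agreement that would occur by chance.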
Calibrating Multiple Graders
Johns Hopkins Sheridan Libraries: The Innovative Instructor
Quick Guide to Norming on Student Work for Program-Level Assessment
Washington State University, Office of Assessment of Teaching and Learning
Calibration Protocol for Scoring Student Work
Rhode Island Department of Education
The Use of Scoring Rubrics: Reliability, Validity, and Educational Consequences
Education Research Review
Meta-Assessment Rubric Section 4.A, 4.B, & 4.C
Sections 4.A, 4.B, & 4.C: These sections focus on how the assessment results are reported: Are the findings clear? Do they relate to the student learning outcome(s)? Are prior findings discussed in relation to current findings? How are the data/results interpreted?
Analyzing, Interpreting, Communicating and Acting on Assessment Results
Ball State University
Meta-Assessment Rubric Section 6
Sections 6.A & 6.B: This section asks for information about the changes that will be made, based on your assessment activities, to your curriculum, your assessment practices, or both.
Guidelines for Using Results and Interpretations
University of Nebraska-Lincoln
Showing an Impact: Using Assessment Results to Improve Student Learning
National Institute for Learning Outcomes Assessment (NILOA)