Content Validity Protocol
Validity Evidence Needed for Rubric Use and Interpretation (link)
Content validity involves gathering evidence to demonstrate that assessment content fairly and adequately represents a defined domain of knowledge or performance. The purpose of this document is to provide guidance for collecting evidence that documents adequate technical quality of the rubrics used to evaluate candidates in the Cato College of Education at UNC Charlotte.
Establishing Content Validity for Internally-Developed Assessments/Rubric (link)
To establish content validity for internally-developed assessments/rubrics, a panel of experts will be used. While content validity studies that rely on expert panels have some limitations (e.g., bias), this approach is accepted by CAEP. As noted by Rubio, Berg-Weger, Tebb, Lee, and Rauch (2003),
Using a panel of experts provides constructive feedback about the quality of the measure and objective criteria with which to evaluate each item …. A content validity study can provide information on the representativeness and clarity of each item and a preliminary analysis of factorial validity. In addition, the expert panel offers concrete suggestions for improving the measure. (p. 95).
Protocol (Flowchart)
Directions for faculty: click here to watch this video (13:56)
1. Complete the Initial Rubric Review (FORM A) (Google Form link) for each rubric used to officially evaluate candidate performance in the program. Make sure that the “overarching constructs” measured in the assessment are identified (see #3-2 on FORM A). NOTE: A preview of the questions on this form is available as a Word document here
It is recommended that all rubric revisions be uploaded. Copies of all rubrics (if collected electronically) should be submitted to the designated folder on the S: drive. This folder is accessible to program directors (if you need access, please contact Brandi L Lewis in the COED Assessment Office).
To access the S: drive folder to submit Rubrics & Content Validity Results, go to Computer ⇒ Shared Drive (S:) ⇒ coed ⇒ Shared ⇒ Assessment ⇒ Content Validity Results ⇒ select your department ⇒ select the program where the assessment is used. Multiple files may be added.
Save expert responses in the following format: Rubric name (or shortened version)_Expert Last Name_Degree_Program
(example: “STAR Rubric_Smith_BA_CHFD” “Present at State Read Conf_Smith_MEd_READ”)
2. Identify a panel of experts and document the credentials supporting their selection. The review panel should include a mixture of IHE faculty (i.e., content experts) and B-12 school or community practitioners (lay experts). Minimal credentials for each expert should be established by consensus among program faculty; credentials should bear up to reasonable external scrutiny (Davis, 1992).
The panel of experts should include:
- At least 3 content experts from the program/department in the College of Education at UNC Charlotte;
- At least 1 external content expert from outside the program/department. This person could be from UNC Charlotte or from another IHE, as long as the requisite content expertise is established; and
- At least 3 practitioner experts from the field.
TOTAL NUMBER OF EXPERTS: At least seven (7)
3. Create the response form. For each internally-developed assessment/rubric, there should be an accompanying response form that panel members are asked to use to rate the items that appear on the rubric. Program faculty should work collaboratively to develop the response form needed for each rubric used in the program to officially evaluate candidate performance. (See example (link) – faculty may cut and paste from the example to develop their response forms)
- For each item, the overarching construct that the item purports to measure should be identified and operationally defined.
- The item should be written as it appears on the assessment.
- Experts should rate the item’s level of representativeness in measuring the aligned overarching construct on a scale of 1-4, with 4 being the most representative. Space should be provided for experts to comment on the item or suggest revisions.
- Experts should rate the importance of the item in measuring the aligned overarching construct, on a scale of 1-4, with 4 being the most essential. Space should be provided for experts to comment on the item or suggest revisions.
- Experts should rate the item’s level of clarity on a scale of 1-4, with 4 being the most clear. Space should be provided for experts to comment on the item or suggest revisions.
Helpful Notes:
- Faculty are welcome to make an electronic version of this tool, customized to their specific rubrics, to collect reviewer responses. The first row of information (see example) and the 4-level rating scale must be used for each item on each rubric. Qualtrics or a Google Form could be adapted for this purpose (an illustrative sketch of the data such a form would collect follows these notes).
- The COED Office of Assessment can schedule a rubric review session to discuss rubric development and/or content validity protocols. Please contact Dr. Bradley Smith, Director of Assessment and Accreditation.
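The sketch below is a point of reference only: it shows the kind of information an electronic response form would need to capture for each item from each reviewer. The field names and sample values are illustrative assumptions, not an official COED template; see the linked example for the required first-row content.

# Illustrative assumption only, not an official COED template: one reviewer's
# response to a single rubric item, with the three 1-4 ratings and comment
# spaces described above.
item_response = {
    "construct": "Overarching construct (operationally defined on the form)",
    "item_text": "Item wording exactly as it appears on the rubric",
    "representativeness": 4,              # 1-4; 4 = most representative
    "representativeness_comments": "",
    "importance": 3,                      # 1-4; 4 = most essential
    "importance_comments": "",
    "clarity": 4,                         # 1-4; 4 = most clear
    "clarity_comments": "Suggest simplifying the wording.",
}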
4. Create an assessment packet for each member of the panel. The packet should include:
- A letter explaining the purpose of the study, the reason the expert was selected, a description of the measure and its scoring, and an explanation of the response form. An example draft is included (this is just a draft to get you started; faculty are welcome to develop their own letters). (SEE DRAFT EXAMPLE (link))
- A copy of the assessment instructions provided to candidates.
- A copy of the rubric used to evaluate the assessment.
- The response form aligned with the assessment/rubric for the panel member to rate each item.
5. Initiate the study. Set a deadline for panel members to return the response forms to you or to complete the response form online. All expert reviewers should watch this video (7:16) for instructions.
6. Collect the data. Once response data for each internally-developed rubric have been collected from the panel participants, that information should be submitted to the COED Assessment Office. Copies of all forms and/or an Excel file of submitted scores (if collected electronically) should be submitted to the designated folder on the S: drive. This folder is accessible to program directors (if you need access, please contact Brandi Lewis in the COED Assessment Office).
Save expert responses in the following format: Rubric name (or shortened version)_Expert Last Name_Degree_Program
(example: “STAR Rubric_Smith_BA_CHFD” “Present at State Read Conf_Smith_MEd_READ”)
To access the S: drive folder to submit Content Validity Results, go to Computer ⇒ Shared Drive (S:) ⇒ coed ⇒ Shared ⇒ Assessment ⇒ Content Validity Results ⇒ select your department ⇒ select the program where the assessment is used. Multiple files may be added.
7. Once Content Validity Results have been submitted, the COED Assessment Office will generate a Content Validity Index (CVI) for each item. This index will be calculated based on recommendations by Rubio et al. (2003), Davis (1992), and Lynn (1986):
CVI = (the number of experts who rated the item as 3 or 4) ÷ (the total number of experts)
A CVI score of .80 or higher will be considered acceptable.
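For illustration, if six of the seven minimum panel members rate an item as a 3 or 4, that item’s CVI is 6 ÷ 7 ≈ .86, which meets the criterion. The short sketch below is illustrative only (it is not an official COED calculation tool) and assumes each item’s expert ratings are stored as a simple list of 1-4 values:

# Illustrative sketch only: per-item CVI as the proportion of experts
# who rated the item 3 or 4, following the formula above.
def item_cvi(ratings):
    """ratings: one 1-4 rating per expert for a single item."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

ratings = [4, 3, 4, 2, 3, 4, 3]      # example panel of seven experts
print(round(item_cvi(ratings), 2))   # prints 0.86, which meets the .80 criterion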
DRAFT EXAMPLE (link): Establishing Content Validity – Rubric/Assessment Response Form
Additional Resources (link)
References:
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for Educational and Psychological Testing. Washington, DC: American Educational Research Association.
Davis, L. (1992). Instrument review: Getting the most from your panel of experts. Applied Nursing Research, 5, 194-197.
Lawshe, C. H. (1975). A quantitative approach to content validity. Personnel Psychology, 28, 563-575.
Lynn, M. (1986). Determination and quantification of content validity. Nursing Research, 35, 382-385.
Rubio, D.M., Berg-Weger, M., Tebb, S. S., Lee, E. S., & Rauch, S. (2003). Objectifying content validity: Conducting a content validity study in social work research. Social Work Research, 27(2), 94-104.