Visualizing results from an evaluative rubric, Part II by Stephanie Chasteen
From:
American Evaluation Association (AEA)
For Immediate Release:
Dateline: Washington, DC
Wednesday, July 17, 2019

 
Stephanie Chasteen
I’m Stephanie Chasteen, an external evaluator on NSF-funded projects aiming to improve STEM education at the university level. This is part 2 of a 2-part series on evaluative rubrics. In yesterday’s post, I shared how I developed an analytic rubric to evaluate program structures. In today’s post, I’ll share how we produced meaningful visuals of rubric results to inform program organizers.
Lesson Learned:
As a utilization-focused evaluator, my ultimate aim for the rubric is that it be useful for the end-user (busy physics faculty members). Faculty need the rubric results to point to the gaps and strengths of local programs, to help them in action planning.
People are used to a rubric “score,” but I refused to assign points to the rubric levels (e.g., Not Present = 0, Developing = 1): the items are not independent, the levels are not equally spaced, and I found “scoring” programs across such diverse items to be meaningless at best and unethical practice at worst (e.g., could a program with a poor score face funding cuts?).
So, I did two things to make the results easier for my users to interpret:
  1. Created visualizations that make gaps and strengths apparent.
  2. Found a number that could serve users’ desire for a single number representing the results.
Hot Tip:
To develop meaningful visuals, I went straight to Stephanie Evergreen’s “Effective Data Visualization” book to find visuals that help viewers compare numbers. We used a bidirectional bar chart (example shown below), created directly in Excel from the Excel version of the rubric. “Right is good, left is bad” is the advice I give users as they look at these charts. For example, the program below is strong in “Knowledge and Skills for Teaching Physics” and weak in “Recruitment.” A center dotted line anchors the difference between areas that have met the minimum suggested level (“Benchmark”) and those that have not. Darker colors for higher ratings provide additional ease of interpretation, as do other features recommended by Evergreen (such as white space).
[Figure: bidirectional bar chart example]
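For readers who want to prototype a similar chart outside of Excel, below is a minimal sketch in Python using matplotlib. The project’s actual charts were built directly in Excel; the component names and percentages here are made up for illustration, and only the layout (center dotted line, “left is bad, right is good,” darker colors for higher ratings) mirrors the description above.

    import matplotlib.pyplot as plt
    import numpy as np

    # Hypothetical rubric components and the percent of items rated at each
    # level within each component (each set of four values sums to 100).
    components = [
        "Recruitment",
        "Mentoring",
        "Knowledge and Skills\nfor Teaching Physics",
    ]
    not_present = np.array([40, 10, 0])
    developing = np.array([30, 20, 10])
    benchmark = np.array([20, 50, 40])
    exemplary = np.array([10, 20, 50])

    y = np.arange(len(components))
    fig, ax = plt.subplots(figsize=(7, 3))

    # Levels below Benchmark stack leftward from the center ("left is bad").
    ax.barh(y, -developing, color="#c6dbef", label="Developing")
    ax.barh(y, -not_present, left=-developing, color="#deebf7", label="Not Present")
    # Benchmark and Exemplary stack rightward; darker color = higher rating.
    ax.barh(y, benchmark, color="#6baed6", label="Benchmark")
    ax.barh(y, exemplary, left=benchmark, color="#2171b5", label="Exemplary")

    ax.axvline(0, linestyle=":", color="gray")  # dotted center line at Benchmark
    ax.set_yticks(y)
    ax.set_yticklabels(components)
    ax.xaxis.set_major_formatter(lambda x, _: f"{abs(x):.0f}")  # hide minus signs
    ax.set_xlabel("Percent of items")
    ax.legend(fontsize=8, loc="lower right")
    fig.tight_layout()
    plt.show()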
This visualization has worked well. Still, users were asking for a single number to aid interpretation. I decided that I could give them one without compromising my integrity as an evaluator. Since we hope that programs will achieve Benchmark level (and achieving Exemplary level is “gravy”), we decided that providing users with the percent of items rated at least Benchmark (i.e., the percent rated Benchmark plus the percent rated Exemplary) was acceptable and useful.
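As a concrete illustration of that calculation, here is a minimal sketch in Python; the item names and ratings are hypothetical. Note that no points are ever attached to the levels themselves; we only count how many items clear the Benchmark bar.

    # Hypothetical ratings for five rubric items.
    ratings = {
        "Item 1": "Exemplary",
        "Item 2": "Benchmark",
        "Item 3": "Developing",
        "Item 4": "Not Present",
        "Item 5": "Benchmark",
    }

    at_or_above = {"Benchmark", "Exemplary"}
    pct = 100 * sum(level in at_or_above for level in ratings.values()) / len(ratings)
    print(f"{pct:.0f}% of items at Benchmark or above")  # 60% for this sample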
Below is a sample of the resulting visualization. One nice bonus is that this number allows us to compare programs more easily – both program to program, and a single program over time.
Together, these two visualizations have provided rich interpretive information for the users, and for me as an evaluator.
[Figure: chart showing percent of items at Benchmark or higher]
Rad Resources:
  • The project website, http://phystec.org/thriving, has our Excel rubric. You can also see some of the custom visuals created within the PTEPA Rubric User’s Guide on that page.
  • Stephanie Evergreen’s blog has many more tips on effective visuals, but I highly recommend her book (“Effective Data Visualization”) to help you shop for visuals and learn how to create them directly in Excel.
Acknowledgements: We acknowledge funding from NSF-0808790 for development of the rubric and visuals.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
 

About AEA

The American Evaluation Association is an international professional association and the largest in its field. Evaluation involves assessing the strengths and weaknesses of programs, policies, personnel, products and organizations to improve their effectiveness. AEA’s mission is to improve evaluation practices and methods worldwide, to increase evaluation use, promote evaluation as a profession and support the contribution of evaluation to the generation of theory and knowledge about effective human action. For more information about AEA, visit www.eval.org.

 
American Evaluation Association
Washington, DC
202-367-1223