NA TIG Week: Why Journal Articles about Needs Assessment (NA) Aren’t Plentiful? Excerpts from a 2018 AEA Ignite Session by Jim Altschuld
From: American Evaluation Association (AEA)
For Immediate Release:
Dateline: Washington, DC
Monday, June 17, 2019

 
I’m Jim Altschuld, an AEA charter member who has written about needs assessment (NA) for nearly four decades and is Professor Emeritus at The Ohio State University. So why aren’t journal articles about NAs plentiful? Possible answers:
  1. Journal reviewers don’t like NAs, are biased, and reject most articles.
  2. NAs serve internal organization and agency purposes, not publication, so the question isn’t pertinent.
  3. As NA practitioners, we don’t focus on publications.
  *4. Manuscripts are rejected because of weaknesses in instruments and methodological problems.
Why choose *4? Most NA books are primarily ‘how-tos’ and don’t delve beneath the surface into needs assessment methods and the validity of the data collected.
Can we draw legitimate conclusions and implications if validity isn’t there? If we do, would the results be tainted, the fruit of a poisonous tree? I raised 14 issues about double-scaled (what should be, what is) NA surveys in a 2018 Ignite session and discuss two of them (survey pre-structuring, failure to categorize) below.
Issue 1 – Skewness from ‘What Should Be’ Items on NA Surveys (Pre-structuring):
Needs assessors most often do qualitative studies, review the literature, and consult with others to learn what’s important before developing ‘what should be’ items for stakeholder surveys. What’s wrong with that?
Well, items are frequently rated at the high end of the scale, leading to negatively skewed results in which item values sit very close together and aren’t differentiated from one another. Little information is gained. What do we learn from this kind of data, how meaningful are the ‘what should be’ scores, and what sense do we make of discrepancies/gaps measured against them as anchor points? I’m not sure.
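A minimal sketch of the problem, using entirely hypothetical rating data: when pre-structured ‘what should be’ anchors all cluster at the ceiling of a 5-point scale, the gap (discrepancy) scores are driven almost entirely by the ‘what is’ side, and the anchors themselves tell us little.

```python
# Hypothetical mean item ratings on a 5-point scale (not from any real NA survey).
what_should_be = [4.6, 4.8, 4.7, 4.9, 4.7]  # clustered at the ceiling
what_is        = [3.1, 3.4, 2.9, 3.3, 3.0]

# Discrepancy (gap) score per item: 'what should be' minus 'what is'
gaps = [round(wsb - wi, 2) for wsb, wi in zip(what_should_be, what_is)]

# Spread of the anchor points: only 0.3 of a 5-point scale
anchor_spread = round(max(what_should_be) - min(what_should_be), 2)

print("gaps:", gaps)
print("anchor spread:", anchor_spread)
```

With anchors this compressed, ranking items by their gaps is effectively ranking them by current status alone, which is exactly why the meaningfulness of the ‘what should be’ scores comes into question.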
Issue 2 – Failure to Categorize:
Say we have 40 double-scaled items about a topic that can be put into 3-4 categories. If, besides rating the items, respondents also ranked the categories, the categories almost certainly wouldn’t come out equal. Logically, high ‘what should be’ items in the top-ranked category would be more valuable than similarly high items in a lower-ranked one. This offers additional insight into the skewness. Analysis and interpretation of the data become more complicated, but the understanding gained would be worth the cost. It’s also likely that the validity of the instrument (at least its face validity) would improve.
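One way the category rankings could be folded into the analysis, as a hedged sketch: all item names, gap values, and the simple 1/rank weighting below are hypothetical illustrations, not a standard NA procedure.

```python
# Hypothetical double-scaled items: (item, raw gap score, category)
items = [
    ("staff training", 1.4, "Personnel"),
    ("new equipment",  1.4, "Facilities"),
    ("mentoring",      1.1, "Personnel"),
]

# Respondent-ranked categories: 1 = most important (illustrative values)
category_rank = {"Personnel": 1, "Facilities": 3}

# One simple weighting choice: divide each gap by its category's rank,
# so identical raw gaps separate when their categories differ in importance.
weighted = {
    name: round(gap / category_rank[cat], 2)
    for name, gap, cat in items
}
print(weighted)
```

Two items with identical raw gaps (1.4) now diverge: the one in the top-ranked category retains its full weight, while the one in the rank-3 category drops to about a third of it, which is the kind of added differentiation Issue 2 is after.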
Hot Tips:
  • Issue 2’s discussion also applies to Issue 1 and is important for better instrument design.
  • Dig into the literature (see 3 references below) for subtle dimensions of the methods used in collecting NA data. This will improve the validity of assessments and yield findings that are more sound and acceptable for decision making. Only then will we meet the standards of professional journals.
Rad Resources:
The American Evaluation Association is celebrating Needs Assessment (NA) TIG Week with our colleagues in the Needs Assessment Topical Interest Group. The contributions all this week to aea365 come from our NA TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

About AEA

The American Evaluation Association is an international professional association and the largest in its field. Evaluation involves assessing the strengths and weaknesses of programs, policies, personnel, products and organizations to improve their effectiveness. AEA’s mission is to improve evaluation practices and methods worldwide, to increase evaluation use, promote evaluation as a profession and support the contribution of evaluation to the generation of theory and knowledge about effective human action. For more information about AEA, visit www.eval.org.

 
American Evaluation Association
Washington, DC
202-367-1223.