When it comes to evaluating the effectiveness of training, most organizations admit they could do a better job, according to a new study released by the American Society for Training & Development (ASTD). The study, Value of Evaluation: Making Training Evaluations More Effective, found that only about one-quarter of respondents agree their organizations get a “solid bang for the buck” from their training evaluation efforts.
The study, conducted in partnership with the Institute for Corporate Productivity (i4cp), is based on responses from 704 individuals in high-level positions in business, human resources, and learning. Eighty-two percent of respondents worked for companies headquartered in North America, and 40.5 percent were employed by multinational or global organizations.
The study found that the five-level Kirkpatrick/Phillips model of learning evaluation is the most commonly used evaluation tool. Findings show that almost all organizations (92 percent of respondents) use the first level of evaluation, which measures participant reaction. Use of the model drops off dramatically with each subsequent level, with very few organizations (17.9 percent of respondents) using Level 5 evaluation—return on investment for training. Findings also show that for organizations that effectively evaluate at Level 4, which measures business results, there is a positive correlation with marketplace performance.
Other key findings in the report include:
• The Brinkerhoff Success Case Method is the second most widely used evaluation method. About half of respondents used some version of this method, which highlights individual training success stories to communicate the value of learning.
• There are several barriers to the evaluation of learning, including metrics that are seen as too difficult to calculate, difficulty isolating training as a factor that affects behaviors and results, and a lack of leadership interest in training evaluation information.
• An average of 5.5 percent of training budgets is spent on evaluation, and organizations tend to spend the largest share of their evaluation budgets on Level 1 (reaction) evaluations.
Also included in the report are recommended actions for learning professionals:
• Don’t abandon evaluation. Learn to use metrics well, as they are associated with evaluation success and overall organizational success.
• Establish clear objectives and goals to be measured from the outset of a training program. For example, if measuring at Level 3 (behavior change), identify and measure the behaviors that should change before and after training.
• Collect data that is meaningful to leaders. Recognize that this type of data is not primarily found in participant reaction (Level 1) evaluations.
• Identify the key performance indicators to be measured. When evaluating results, focus on metrics such as proficiency and competency levels, customer satisfaction, employee perceptions of training impact, business outcomes, and productivity measures.
• When choosing a learning management system, investigate the evaluation tools available with the system.
The report, Value of Evaluation: Making Training Evaluations More Effective, shows that organizations struggle with evaluating whether their training programs meet business needs and whether they are meaningful to employees and business leaders. By delineating what organizations are currently doing, and by identifying best practices and recommendations for improvement, ASTD hopes this report will help learning professionals and their organizations become more proficient and strategic when evaluating learning.
To access the full report, go to www.astd.org/content/research.