The Official ASTD Blog
Learning Industry News and Opinion

Using Technology to Collect Data

January 31, 2010 20:55 by Patti Phillips

On Wednesday, January 27, President Obama gave his State of the Union Address. When it ended, the pollsters and pundits were in their usual form. If you happened to watch the post-address discussion on CNN, you probably saw John King present the latest Twitter results. That's right: polling using Twitter. John King gave us another example of how technology is aiding us in collecting survey data.

Twitter, Facebook, and LinkedIn, along with SurveyMonkey, SurveyPro, Metrics-that-Matter, and many other technologies, provide an array of opportunities to collect data from colleagues, customers, and the like. While the program evaluation community has embraced technology to make data collection more convenient, less expensive, and more interactive, we often rely on it so heavily that we fail to recognize the error that can creep into results when we depend solely on technology. The types of error most immediately at risk are coverage error and non-response error.

Coverage Error
Coverage error occurs when we collect data and report results only from the group of respondents who have access to the delivery mode we employ. Admittedly, John King's results were never meant to represent the country at large, but consider some of the people his poll missed:

  • People who follow CNN on Twitter, but choose not to tweet.
  • People who don't follow CNN on Twitter.
  • People who don't know about Twitter.
  • People who don't have computers.

Non-Response Error
Non-response error occurs when people do not respond to a survey. With a low response rate, it becomes difficult to draw conclusions from the survey results (a quick response-rate check appears after the list below). People fail to respond to surveys for a variety of reasons, including (but not limited to):

  • Lack of time
  • Lack of interest
  • No incentive
  • No access
  • Too many surveys
  • Too many emails 
  • Technology challenged
  • Technology resistant
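To make the response-rate concern concrete, here is a minimal sketch in Python. The counts are hypothetical placeholders, and the 50 percent threshold is only a rough rule of thumb, not a statistical standard.

```python
# A rough response-rate check -- the counts are hypothetical placeholders.
invited = 500        # surveys sent out
undeliverable = 20   # bounced emails, wrong addresses, and the like
completed = 135      # usable responses received

delivered = invited - undeliverable
response_rate = completed / delivered

print(f"Response rate: {response_rate:.1%} ({completed} of {delivered} delivered)")

# The lower the response rate, the more cautious you should be about
# generalizing, because non-respondents may differ from respondents.
if response_rate < 0.5:
    print("Low response rate -- treat the results as directional, not definitive.")
```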

To take advantage of what technology has to offer for data collection while mitigating coverage and non-response error, consider the following steps, drawn from the work of Don Dillman (2009) and other experts in survey research.

1. Identify your primary mode of data collection for a given survey project.
You may choose technology as your primary mode. If so, steps 2-5 below will use technology. If you choose paper-based or telephone surveys as your primary mode of data collection, use that mode to complete the following steps.

2. Provide pre-notice prior to administering the survey.
This communication will come in the form of an email if you plan to email your survey, a letter or memorandum if you plan to use a paper-based survey, or a brief telephone call if you plan to use the telephone as your primary method of data collection. The purpose of the pre-notice is to advise potential respondents of the importance of the survey. In addition, the pre-notice explains when they will receive the survey, what they can expect in terms of time commitment, the completion timeline, the planned use of the data, and any incentives you are willing to offer for survey completion.

3. Administer the survey.
Three days after the pre-notice has been distributed, send the survey. As part of the survey instructions, explain again the importance of the survey, the time commitment, the completion timeline, the planned use of the data, and the incentives.

4. Administer the survey a second time.
After a week or two, administer the survey a second time, again using your primary mode of delivery. This second distribution serves as a reminder and makes responding convenient by providing the entire survey with instructions.

5. Send a follow-up reminder.
By now, you should have received a large number of surveys. But there are still a few people who need another reminder. So, using your primary mode of delivery, send a reminder to those who have not yet responded.

6. Administer the survey a third time -- using a different delivery method.
This last contact with potential respondents is your opportunity to influence people to respond by approaching the issue from another angle. This time, change your delivery method. If your previous contacts have been electronic, send potential respondents a paper-based survey or place a call to them. By changing the delivery method, you give people who have not had access to (or who chose not to access) your survey an opportunity to respond.
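A minimal sketch of the contact sequence above, assuming email as the primary mode. The day offsets are illustrative (the text suggests three days between pre-notice and survey, and a week or two before the second distribution); adjust them to your own project timeline.

```python
# Minimal sketch of the six-step contact sequence described above.
# The day offsets and modes are illustrative assumptions, not fixed rules.
from datetime import date, timedelta

def contact_schedule(start: date, primary: str = "email", alternate: str = "paper or phone"):
    """Return (step, date, mode) tuples for one survey project."""
    return [
        ("Pre-notice",                  start,                      primary),
        ("Survey, first distribution",  start + timedelta(days=3),  primary),
        ("Survey, second distribution", start + timedelta(days=10), primary),
        ("Follow-up reminder",          start + timedelta(days=17), primary),
        ("Survey, third distribution",  start + timedelta(days=24), alternate),
    ]

for step, when, mode in contact_schedule(date(2010, 2, 1)):
    print(f"{when:%b %d}  {step:<28}  via {mode}")
```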

* * * *

Reference
Dillman, D. A., Smyth, J. D., and Christian, L. M. (2009). Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method, 3rd edition. Hoboken, New Jersey: John Wiley & Sons.

Additional Resources
Alreck, P. L., and Settle, R. B. (1995). The Survey Research Handbook, 2nd edition. New York: McGraw-Hill.

Fink, A. (Series Editor). (2002). The Survey Kit, 2nd edition. Thousand Oaks: Sage Publications.

Trochim, W. M. The Research Methods Knowledge Base, 2nd Edition. Internet WWW page, at URL: <http://www.socialresearchmethods.net/kb/> (version current as of October 20, 2006).


Categories: Books | Evaluation and ROI

Selecting an Evaluation Approach

January 17, 2010 14:39 by Patti Phillips

It's the middle of January, and conference season is upon us. ASTD kicks it off with TK 2010 in Las Vegas. Then it's off to San Diego for Training 2010. And of course the big one, ASTD 2010 ICE, takes place in May in Chicago. Some of you will attend one or more of the 2010 conferences with the intent of finding an evaluation process that fits your needs. To ensure you are clear on those needs, ask yourself the following questions.


1. What purpose will evaluation serve?

The first step toward achieving a goal is clarifying your purpose. Get clear on the purpose you are trying to serve by adopting an evaluation approach. Do you want to justify your spending? Do you want to increase your budget? Are you looking for an approach to help you ensure your team implements the right programs for the right people to meet the right needs? Whatever your purpose, get clear on it. Write a simple purpose statement to stay focused.

2. Who are our stakeholders?

While it may sound simple, it is sometimes surprising to see who gets left off this list.  Think about all of the stakeholders who have a vested interest in your training programs. Among the many stakeholders are the participants, supervisors, senior executives, and the suppliers from whom you purchase programs. There is also your team including designers, developers, performance consultants, and evaluators. Identify them all.

3. What types of decisions do these stakeholders make about our programs?

Your many stakeholders make decisions about your programs routinely. Participants decide whether or not they are going to engage. Supervisors decide whether or not they are going to support participants as they apply what they learn in a program. Executives decide whether or not they are going to continue funding programs. Think about all of the possible decisions being made about your programs.

4. What type of data do our stakeholders need to make these decisions?

Given the types of decisions being made, think about what type of information would help your stakeholders make those decisions. For example, what type of information would supervisors need in order to fully support a program? Would they need to know how a program will help change the work habits of their staff? Or maybe they would need to know how a program is improving the quality of work. Maybe they would want to know what is preventing their staff from being successful with the application of knowledge and skill acquired during a program.

5. What type of data are we providing our stakeholders?

What information do you currently provide your stakeholders about programs? Are you only sharing learning objectives, or do you also report success against those objectives? Are you collecting data post-program to describe to supervisors the barriers and enablers to the application of knowledge, and providing them insights as to how they can better support their teams? Do you describe to them how improvement in quality is directly linked to your program?

6. What are the gaps?

Given your stakeholders, the types of decisions they make about your programs, the types of data they need, and the data you are currently providing, what are you missing? These gaps in your data are among the primary needs your new evaluation approach should fill.

With a clear view of what you need in terms of data, look at other criteria important to your selection. Maybe you want a process that is:

  • Credible
  • Simple
  • Appropriate for a variety of programs
  • Economical
  • Theoretically sound
  • Able to account for all program costs
  • Able to account for other factors
  • Applicable on a pre-program basis

Make your list and pack your bags. Upon arrival at the conference, read your purpose statement, review your list, and attend as many sessions on training evaluation as you can. 

Now you are ready to make your selection.


Categories: Books | Conferences | Evaluation and ROI

Positioning Your Programs for Success in 2010

December 27, 2009 13:42 by Patti Phillips

As 2009 comes to a close and 2010 rolls in, people around the world will reflect on their accomplishments and set goals for the new year. To ensure your training programs support the 2010 goals of your organization, resolve to invest more time than you have in the past in positioning your programs for success. Four simple steps can help:

1.  Clarify Needs
Before deciding to offer a program, be clear on the highest level of need.  This is a need that ultimately leads to making money, saving money, and/or avoiding costs. Examples of these opportunities are customer satisfaction, employee engagement, morale, market share, image, productivity, and operating costs. Once you are clear there, identify lower levels of need that are more specific and that lead you to a program aligned with the ultimate goal. The following series of questions can help you with this process.

  • What areas need improvement that will help your organization ultimately make money, save money, or avoid cost? (Highest Level of Need)
  • What specific business measures would tell you that improvement has been made?  (Business Needs)
  • What needs to happen (or stop happening) in order to improve the above defined measures? (Performance Needs)
  • What is it that people need to know in order to do what you want them to do to address your business needs? (Learning Needs)
  • How best can you deliver knowledge, skill, and/or information people need to know to do what you want them to do? (Preference Needs)

2.  Develop SMART Objectives
Based on the identified needs, develop objectives that reflect each level of need. Be sure your objectives are SMART. Specific, measurable, achievable, realistic, and time-bound objectives representative of stakeholder needs are critical to program success. 

3.  Communicate Program Objectives
This step, while obvious on the surface, is often overlooked. This is particularly true when it comes to participants. Objectives are your positioning power. By communicating specific, measurable objectives reflective of all levels of need, designers, developers, and facilitators know what they need to do to make the program successful. Evaluators know what questions to ask during the evaluation. Supervisors, managers, and senior leaders recognize that the program is on track with their goals. But the group to whom objectives are often not communicated clearly is participants. This is particularly true for objectives beyond those targeting knowledge acquisition. Participants need to know not only what they are going to learn in a program, but also what they are expected to do with what they learn and why they are expected to do it.

4. Evaluate Success
If you want to know whether or not your program is successful, evaluation is a must. Programs for which needs are clear, objectives are SMART, and all stakeholders are in the loop are more likely to drive results. But you will never know how well the program achieved those results, or how to improve the program, without evaluation. The good news is that if you are clear about why a program is being offered and you have set and communicated SMART objectives, evaluation is relatively simple!

Your Assignment

When you return to work, identify a program and work with your team to answer the following questions:

  1. Are the needs for this program clear?
  2. Are the objectives SMART?
  3. Have we communicated the objectives to everyone who needs to know? 

 

Further Reading

Annulis, H., and Gaudet, C. (in press). Developing Powerful Objectives. In Phillips, P. P. (editor), Handbook of Measuring and Evaluating Training. Alexandria: ASTD.

Phillips, J. J. and Phillips, P. P. (2008) Beyond Learning Objectives: Develop Measurable Objectives That Link to the Bottom Line. Alexandria: ASTD. 


Categories: Books | Evaluation and ROI

Hear Don Kirkpatrick talk about the 50th anniversary of the four levels!

December 23, 2009 11:49 by Tora Estep

One of the big projects that I have been working on lately is the ASTD Handbook of Measuring and Evaluating Training, edited by Patti Phillips and forthcoming in 2010. A sub-project of the book is a series of interviews conducted by Rebecca Ray, an award-winning chief learning officer, with some of the legends of the field of training evaluation. These include Robert Brinkerhoff, Jac Fitz-enz, Donald Kirkpatrick, Jack Phillips, Dana Gaines Robinson, and William Rothwell. This morning we completed recording all the interviews, and they will be made available at the ASTD Handbook of Measuring and Evaluating Training webpage early in the new year. (The website will be continually updated with new content, so check back often!)

But, as an early Christmas present, we've got the interview with Donald Kirkpatrick up there now, talking about the 50th anniversary of his four levels of evaluation and many other topics. To listen, click here.



Categories: Books


Don Kirkpatrick Live

December 21, 2009 14:45 by jllorens

As part of the ASTD Handbook of Measuring and Evaluating Training, we have asked Rebecca Ray, senior vice president of global talent management and development at MasterCard, to interview some of the founding voices of the measurement and evaluation field. The first of these interviews is with Donald Kirkpatrick and also celebrates the 50th anniversary of his seminal articles on the four levels of evaluation in T+D.

Read more and listen to the interview.



Categories: T+D


Handbook of Measuring and Evaluating Training: 2010

December 7, 2009 09:44 by Patti Phillips

Learning professionals have a love-hate relationship with measurement and evaluation. On one hand, most people agree that evaluation and the results it yields can provide important information. On the other hand, the act of evaluation seems daunting and outside their core interest in learning and development. But there is no arguing that the call for accountability for resource expenditures is louder than ever. To support learning professionals as they answer this call, ASTD is launching a new handbook, the Handbook of Measuring and Evaluating Training.

This new book addresses the mechanics of evaluation from the perspective of a variety of contributors. It addresses content relevant to the four phases of measurement and evaluation: planning, data collection, data analysis, and reporting. In addition, chapters are included that support implementation of your measurement practice.  Each chapter is written to achieve at least three learning objectives. A knowledge check is included at the end of each chapter to ensure readers gain at least one new insight.

Content is presented in four main parts:

  1. Evaluation Planning
  2. Data Collection
  3. Data Analysis
  4. Measurement and Evaluation at Work  

 

Evaluation Planning 

“Plan your work, work your plan” was a lesson I learned from my dad at a very early age. That explains some of my slow-to-start project executions! The point, however, is that if you spend time planning, execution is simplified. Part I of the new Handbook describes three critical issues in the planning phase:

  • Identifying Stakeholder Needs
  • Developing Powerful Objectives
  • Planning Your Evaluation Project 

Data Collection

Obviously, data collection is an important phase because without data collection, there are no results. The question most often asked when it comes to data collection is, “What is the best technique to collect data?” The answer to this question is simply, “It depends.” There are a variety of ways in which data are collected. How you decide which technique to use depends on the purpose of the evaluation, the type of data, the time available to collect data, cost, organization culture, and other constraints and conveniences. Part II topics include:

  • Using Surveys and Questionnaires
  • Designing Criterion-Referenced Tests
  • Conducting Interviews
  • Conducting Focus Groups
  • Using Action Plans
  • Using the Success Case Method
  • Using Performance Records 

Data Analysis

Many learning professionals are most concerned with this phase of the evaluation process. Of course, without good analysis, it’s pretty tough to explain the results. Data analysis may require descriptive statistics, inferential statistics, content analysis, cause-and-effect analysis, conversion of data to money, cost calculations, and ROI analysis. What you do, and when, depends on the purpose of the evaluation, the type of data, the time available for analysis, cost, organization culture, and other constraints and conveniences. (A brief sketch of the basic ROI arithmetic follows the list below.) Part III, Data Analysis, presents content on:

  • Using Statistics
  • Analyzing Qualitative Data
  • Isolating the Effects of the Program
  • Converting Measures to Monetary Value
  • Identifying Program Costs
  • Calculating the ROI 
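As a point of reference, the ROI calculation itself reduces to simple arithmetic: the benefit-cost ratio divides program benefits by program costs, and ROI expresses net benefits as a percentage of costs. A minimal sketch, with hypothetical figures:

```python
# Basic benefit-cost ratio and ROI arithmetic; the dollar figures are
# hypothetical placeholders, not data from any study.
program_benefits = 240_000   # monetary value of program benefits
program_costs = 80_000       # fully loaded program costs

bcr = program_benefits / program_costs
roi_percent = (program_benefits - program_costs) / program_costs * 100

print(f"Benefit-cost ratio: {bcr:.2f}:1")   # 3.00:1
print(f"ROI: {roi_percent:.0f}%")           # 200%
```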

Measurement and Evaluation at Work

To make evaluation work, results must be put to use. That begins with reporting results that are relevant to stakeholders. Beyond reporting, the data evolving from the evaluation process must inform decisions, and to make evaluation work for the long term, systems must be put into place. Part IV, Measurement and Evaluation at Work, includes topics such as:

  • Reporting Evaluation Results
  • Giving CEOs the Data They Want
  • Using Evaluation Results
  • Implementing and Sustaining a Measurement and Evaluation Practice
  • Selecting Technology to Support Evaluation 

In addition, we’ve included case studies on evaluating technology-enabled learning, leadership development, global sales training, technical training, and training using a simulation component.

Voices 

A final part of the book, Part V, titled Voices, presents summaries of interviews with experts in training measurement and evaluation. Dr. Rebecca Ray, an award-winning chief learning officer and an expert in talent management, conducted in-depth interviews with the experts who paved the way for training measurement and evaluation. Each interview has been converted into a series of podcasts that will be available for download. In addition, we plan to make the entire transcript of each interview available. You will hear from experts such as Don Kirkpatrick, Jack Phillips, Robert Brinkerhoff, Dana Robinson, Jac Fitz-enz, Bill Rothwell, and more.

How to Get the Most out of This Book

We’re excited about this book for many reasons. While the book provides readers with a little information on many aspects of evaluation, it is only the beginning of the learning opportunity. ASTD will be launching a website to support the dissemination of additional content. We’re already in the process of collecting case studies, tools, and examples for download. Through the website you will be able to download the Voices podcasts as well as have access to other resources.

And of course, we have the Evaluation and ROI Blog that you are currently viewing. Through this blog we can share thoughts and ideas on measurement and evaluation.

Call to Action

This initial post announces ASTD’s upcoming Handbook of Measuring and Evaluating Training. The book will be available in spring 2010. In the weeks to come, I'll post on a variety of topics to generate discussion. To get the conversation going, let us hear how you are using measurement and evaluation in your organization!




Categories: Books | Evaluation and ROI

USDA's major reorg driven by focus on performance, results

December 1, 2009 17:30 by Ann Pace

The Agriculture Department's acting chief human capital officer Donald Sanders explains the agency's administrative reorganization in simple terms.

It's all about becoming a performance-driven and results-oriented organization.

Sanders, who spoke recently at the Human Capital Federal Management conference in Arlington, Va., sponsored by Worldwide Business Research, says the Office of Management and Budget is pushing agencies in this direction. OMB has asked agencies to develop three to eight high-performance goals they can achieve in the next 12 to 18 months.

"We are in the midst of transforming how our HR function supports our core mission areas," he says. "One thing we understand in the future the human resources function will have to play a more active role in engaging senior managers and providing them with the technical consulting they need in terms of people strategies, particularly talent management."

Read the full article.



Categories: News | T+D


ASTD: New Study Shows Training Evaluation Efforts Need Help

November 17, 2009 09:42 by Kristen Fyfe

When it comes to evaluating the effectiveness of training, most organizations admit they could do a better job, according to a new study released by the American Society for Training & Development (ASTD). The study, Value of Evaluation: Making Training Evaluations More Effective, found that only about one-quarter of respondents agree their organizations get a “solid bang for the buck” from their training evaluation efforts.

The study, conducted in partnership with the Institute for Corporate Productivity (i4cp), is based on responses from 704 individuals in high-level positions in business, human resources, and learning. Eighty-two percent of respondents worked for companies headquartered in North America, and 40.5 percent were employed by multinational or global organizations.
 
The study found that the five-level Kirkpatrick/Phillips model of learning evaluation is the most commonly used evaluation tool. Findings show that almost all organizations (92 percent of respondents) use the first level of evaluation, which measures participant reaction. Use of the model drops off dramatically with each subsequent level, with very few organizations (17.9 percent of respondents) using Level 5 evaluation, return on investment for training. Findings also show that for organizations that effectively evaluate at Level 4, which measures business results, there is a positive correlation with marketplace performance.
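For readers less familiar with the framework, here is a compact sketch of the five levels. The study summary above names Levels 1, 4, and 5; the labels used here for Levels 2 and 3 follow the commonly cited Kirkpatrick/Phillips terminology.

```python
# Quick reference for the five-level Kirkpatrick/Phillips framework.
# Levels 1, 4, and 5 are named in the study summary above; the Level 2 and 3
# labels follow the commonly cited terminology.
EVALUATION_LEVELS = {
    1: "Reaction -- participant reaction to the program",
    2: "Learning -- knowledge and skills acquired",
    3: "Behavior -- application of learning on the job",
    4: "Results -- business results of the program",
    5: "ROI -- return on investment for training",
}

for level, what_it_measures in EVALUATION_LEVELS.items():
    print(f"Level {level}: {what_it_measures}")
```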

Other key findings in the report include:

• The Brinkerhoff Success Case Method is the second most widely used evaluation method. About half of respondents used some version of this method, which highlights individual training success stories to communicate the value of learning.
• There are several barriers to the evaluation of learning, including metrics that are seen as too difficult to calculate, difficulty isolating training as the factor that affects behaviors and results, and a lack of leadership interest in training evaluation information.
• An average of 5.5 percent of training budgets is spent on evaluation, and organizations tend to spend the largest share of their evaluation budgets on Level 1 (reaction) evaluations.

Also included in the report are recommended actions for learning professionals:

• Don’t abandon evaluation. Learn to use metrics well, as they are associated with evaluation success and overall organization success.
• Establish clear objectives and goals to be measured from the outset of a training program. For example, if measuring at Level 3 (behavior change), identify and measure the behaviors that should change before and after training.
• Collect data that is meaningful to leaders. Recognize that this type of data is not primarily found in participant reaction (Level 1) evaluations.
• Identify the key performance indicators to be measured. When evaluating results, focus on metrics such as proficiency and competency levels, customer satisfaction, employee perceptions of training impact, business outcomes, and productivity measures.
• When choosing a learning management system, investigate the evaluation tools available with the system.

The report, Value of Evaluation: Making Training Evaluations More Effective, shows that organizations struggle with evaluating whether their programs meet the business needs of their organizations and whether they are meaningful to employees and business leaders. By delineating what organizations are currently doing and identifying best practices and recommendations for improvement, ASTD hopes this report will help learning professionals and their organizations become more proficient and strategic when evaluating learning.

To access the full report, go to www.astd.org/content/research.


Categories: ASTD in the News

Report: Direct Link Between L&D Programs and Profitability

November 16, 2009 17:33 by jllorens

Winston-Salem, NC (PRWEB) November 16, 2009 -- According to a recent Aberdeen Group study sponsored in part by SilkRoad technology, inc., the leading provider of talent management solutions, organizations targeting learning and development for managers improve performance, customer retention, and revenue, with 51 percent of the top organizations linking learning and development initiatives directly to changes in profitability and revenue.

The report outlines the key learning and development topics used by top performing companies, discusses the most widely used types of learning technologies, and addresses key learning measurement and management strategies.

Read more.



Categories: News


The Three Rs: Retention, Return on Investment and Recession

September 21, 2009 15:00 by Ann Pace

Results from independent research, published on September 21, show that 67% of graduates surveyed are likely to consider leaving their current employer as the country comes out of recession. Commissioned by the Inspirational Development Group (IDG), provider of bespoke leadership and management programmes, the results offer a snapshot of how graduates are viewed and valued in the workplaces of some of the UK's largest employers, including the NHS, Thomson Reuters and the Lloyds Banking Group.

Focusing on graduates two and a half to three and a half years into their scheme, the report investigates the perceptions and reality of graduate retention, recession impact, and valuation issues for graduate programmes, from both the organisation's and the graduates' perspectives.

Read the full release. 



Categories: Research | The Economy