The Official ASTD Blog
Learning Industry News and Opinion

"The Bottomline on ROI" - Upcoming Webcast With Patti Phillips (ROI Institute)

March 15, 2012 16:55 by jllorens

March 21, 2012 • 2:00 p.m. ET

Presenter: Patti Phillips, President & CEO of the ROI Institute

Business fundamentals teach us that ROI is the ultimate measure of profitability. This simple metric compares the benefits of an investment to the investment itself. ROI has a long history of use in a variety of fields, including learning and development. Join us for this one-hour webinar as Patti Phillips describes ROI in terms of what it is and how it is applied to programs in global organizations. Referencing examples from her most recent books, Measuring ROI in Learning and Development (ASTD, 2012) and Measuring the Success of Coaching (ASTD, 2012), Patti will describe the application of the ROI Methodology to programs such as coaching, Six Sigma training, consultative selling, and others.
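
For readers new to the metric, here is a minimal sketch of the basic math behind ROI and the related benefit-cost ratio (BCR); the figures below are hypothetical, not from the webcast:

    # Minimal sketch of the basic ROI math; all figures are hypothetical.
    program_benefits = 250_000.0  # monetary benefits attributed to the program
    program_costs = 100_000.0     # fully loaded program costs

    bcr = program_benefits / program_costs  # benefit-cost ratio
    roi_pct = (program_benefits - program_costs) / program_costs * 100

    print(f"BCR: {bcr:.2f}")       # 2.50 -> $2.50 in benefits per $1 invested
    print(f"ROI: {roi_pct:.0f}%")  # 150% -> $1.50 in *net* benefits per $1 invested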

Upon completion of this webinar, you will be able to:

  •     Define ROI in terms stakeholders understand
  •     Determine how the ROI Methodology can be applied to a variety of programs
  •     Determine which programs are suitable for evaluation up to ROI


REGISTER NOW



Categories: Evaluation and ROI | Learning & Development

ASTD Archive Image of the Day: Collecting Data for Training, Circa 1982

November 30, 2011 07:22 by aallen


Collecting data for training has been an issue for many years, as evidenced by the image that appeared in the August 1982 issue of Training and Development Journal. Showing value in training programs is more than just collecting actual training costs. In today's business climate, showing value in training means linking all training initiatives to the goals of the company. This 1982 article by Laurence M. Weinstein highlighted a cost framework model that looked at data from Levels 1, 2, and 3.

How would you answer the question, "Do you consider this expenditure worthwhile?" Do you have any particular models or frameworks you use to show value in learning?

For more information about T+D magazine, visit www.astd.org/td.


Categories: Evaluation and ROI | Learning & Development | News | T+D

ASTD Archive Image of the Day:

November 18, 2011 07:32 by aallen

This image appeared in the January 1980 Training and Development Journal. The option bags are sorted by basic assessment methods. Two of the methods--work samples and records and reports--showed up in both bags. The bags are separated for an academic setting (client-centered) and a practitioner's world (other-centered). The article focused on how to select a needs assessment strategy.

Do you have a needs assessment strategy? What criteria do you use?

For more information about T+D magazine, visit www.astd.org/td.


Categories: Evaluation and ROI | News | T+D

Isolating the Effects of Your Program: An Offensive Play

August 29, 2010 12:31 by Patti Phillips

Some people will say that isolating the effects of your programs when measuring and evaluating their results is a defensive move. The same can easily be said of evaluation at large. Anytime the ball is being run toward your goal, you’re on the defense – protecting what is yours. The key is in taking the offense and addressing tough questions before they are asked.

The Tough Questions

Mike Swan is the training manager at a large tire retail company. He piloted a new training initiative in five stores. The purpose of the training was to reduce customer wait time and increase the number of cars serviced per day. Upon completion of the pilot, data showed that customer wait time had gone down and the number of cars serviced per day had increased. Mike shared these data with his Chief Learning Officer (CLO) as well as the Chief Financial Officer (CFO), hoping to receive enough funding to implement the initiative in other stores. The CFO, impressed that there had been improvement in the two measures, asked:

“How much of that improvement is actually due to the program?”

Mike responded that he could not say with any level of certainty, but he said he knew that without the training, the improvement would not have occurred. The CFO asked a second question:

“How do you know?”

When Mike could not answer, the CFO suggested that he find out before he received additional funding. Mike is now playing defense.

The Emotional Debate

Had Mike addressed the isolation issue during the evaluation and presented the positive results so that answers to the tough questions were evident, he might have received funding on the spot. All the executives wanted to know was how much of the improvement was due to the program--a fair question.

Those who argue that you cannot or should not isolate the effects of a program are often uninformed or misinformed. While long a part of the research process, this important step of measurement and evaluation was first brought to light in the training industry in the late 1970s, when Jack Phillips developed the ROI Methodology. It was later incorporated into the first Handbook of Training Evaluation and Measurement Methods, published in the U.S. by Gulf Publishing and authored by Jack Phillips (1983). The book, now going into its fourth edition, is used by training managers and academics worldwide. In spite of the wide application and acceptance of this important step by executives and researchers, the topic of isolating the effects of the program stirs up such emotion that one has to wonder whether there is a fear that the training does not make a contribution.

It is because of this debate and the need for more information that this topic is covered in the ASTD Handbook of Measuring and Evaluating Training. In this chapter, author Bruce Aaron, Ph.D., capability strategy manager for Accenture, describes the importance of isolating the effects of your programs through the evaluation process and some of the approaches organizations often use. As you read the chapter, you will find there are a variety of techniques available.
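
To make the CFO's question concrete, here is a minimal sketch of one commonly used isolation technique, a comparison (control) group. The numbers are hypothetical and only loosely modeled on Mike's tire-store pilot; the chapter itself covers a wider range of approaches:

    # Minimal sketch: isolating a program's effect with a comparison group.
    # All numbers are hypothetical. Pilot stores received the training;
    # comparison stores, matched as closely as possible, did not.
    pilot_before, pilot_after = 40.0, 48.0            # cars serviced per day
    comparison_before, comparison_after = 41.0, 44.0

    pilot_change = pilot_after - pilot_before                  # 8.0
    comparison_change = comparison_after - comparison_before   # 3.0 (other factors)

    # Only the difference between the two changes is credited to the training.
    isolated_effect = pilot_change - comparison_change         # 5.0 cars per day
    print(f"Improvement attributable to training: {isolated_effect:.1f} cars/day")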

The End of the Debate

Will this debate of isolating the effects of the program ever end? That’s like asking the question, will the need for evaluation ever end? Hopefully the answer to both is no. Without debate, there is no research – without research there is no grounding – and without grounding there is no sustainability.

Fortunately, more than ever, individuals responsible for training measurement and evaluation are taking the offense. They are pursuing good evaluation, including isolating the effects of their programs. They plan ahead and can answer the tough questions – before they are asked.


Categories: Books | Celebrity Bloggers | Evaluation and ROI

Get ready for fall with the ASTD Handbook of Measuring and Evaluating Training

August 20, 2010 14:14 by Tora Estep

Summer is drawing to a close, and, although I will miss summer produce from the farmer's market, I look forward to fall. Even more than January 1 does for many people, fall represents for me a chance to start over, improve my skills and knowledge, develop as a person and a professional, maybe take some classes and read some books--just to get better. If you are like me and think of fall as a time of year for improving your processes, increasing your skills, and making sure that your contribution to your organization is more powerful and better appreciated, then one book that may interest you is The ASTD Handbook of Measuring and Evaluating Training, edited by Patti Phillips. The reason I recommend the book is that evaluation represents a method for running a tighter ship, making sure that what you do really has an impact--and the right impact--and tighter ships are what fall is all about, right? (You know, you need a tighter ship because winter's coming, it's cold and windy, storms are on the horizon...eh, well, never mind, I digress.)

The Handbook is a substantive work that provides practical information about everything you need to know about evaluation (or at least, we hope it provides everything--if it doesn't, let us know so that we can fill in any holes at the accompanying website, where you can also get a sample chapter). It tells you how to get started with an evaluation project, emphasizing the importance of identifying what you want a training program to accomplish from the get-go and developing measurable objectives right from the start (basically putting the E in the ADDIE model right up there with the A, so it's kind of the AEDDI model, which sounds sort of Icelandic and neat to me).  

Then it provides lots of practical information about how to collect data using a variety of tools (surveys, tests, interviews, focus groups, action planning, and more) and how to analyze the data once you have it. Another section covers issues surrounding measurement and evaluation, such as technology, how to report the results, what you do with the results once you have them, plus some case studies. And of course, there is the Voices section: a collection of interviews with some of the pioneers in measurement and evaluation: Brinkerhoff, Broad, Fitz-enz, Kaufman, Kirkpatrick, Phillips, Robinson, and Rothwell (you can actually hear the interviews in their entirety here).

One of the values of the book is that it is written by a balanced group of academics and practitioners. You get points of view both from folks who have studied how to do this stuff for ages and from people who work with it every day in their organizations. Just to give you a sense of what's inside, here's a random selection of the kinds of things you can get from the book:

  • knowledge checks and practitioner tips
  • examples of objectives for a software implementation project
  • data collection plan template
  • comprehensive planning tool template
  • a method for determining sample size
  • strategies for improving response rates
  • sample interview protocol
  • sample fishbone diagram
  • steps in the action planning process
  • guidelines for how to work with statistics--basically statistics 101
  • methods for designing control groups
  • methods for converting measures to monetary value
  • introductions to several important calculations (such as BCR, ROI, payback period, net present value, and so on)--and how to use them (see the sketch after this list)
  • formats for reports.
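
As a taste of that last item, here is a minimal sketch of those calculations with hypothetical figures (the handbook's treatment is, of course, more thorough):

    # Minimal sketch of the calculations named above; figures are hypothetical.
    annual_benefits = 90_000.0  # monetary benefits per year
    total_costs = 150_000.0     # fully loaded program costs
    discount_rate = 0.08
    years = 3

    bcr = (annual_benefits * years) / total_costs                          # 1.80
    roi_pct = (annual_benefits * years - total_costs) / total_costs * 100  # 80%
    payback_years = total_costs / annual_benefits                          # 1.7 years

    # Net present value: discount each year's benefits back to today.
    npv = sum(annual_benefits / (1 + discount_rate) ** t
              for t in range(1, years + 1)) - total_costs

    print(f"BCR {bcr:.2f} | ROI {roi_pct:.0f}% | "
          f"payback {payback_years:.1f} yrs | NPV ${npv:,.0f}")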


Categories: Books | Evaluation and ROI

Getting the Most out of ASTD ICE 2010

May 13, 2010 07:44 by Patti Phillips


As you prepare for your trip to Chicago, think about how you can get the highest return on your ASTD ICE investment.  Consider the following questions and corresponding tips:

  1. What can you do before your trip to ensure you get the content you need back on the job?

    • Plan your sessions.
    • Work with your manager to target key learning outcomes.
    • Clear the deck so you don't worry about what you are missing at work.
  2. What can you do during the conference to ensure you get the content you need back on the job?

    • Show up! (That's half the battle)
    • Turn off the BlackBerry or iPhone during sessions.
    • Ask relevant questions.
    • Engage in exercises.
    • Attend sessions with an open mind.
  3. What can you do after the conference to transfer learning into action?

    • Compile all of your notes into one system (do this on the plane ride home if possible!)
    • Meet with your manager to discuss key learning outcomes.
    • Present content to your team.
    • Apply at least one new skill, knowledge area, or piece of information within three days of your return.
    • Follow up with at least one new contact within three days of your return.
Remember, a positive return on investment comes from the benefits gained by applying new knowledge, skills, and information. Without application, there are no results. So develop your learning transfer plan and make the most of ASTD ICE 2010.

See you at the conference!

Join Rebecca Ray and me at the bookstore as we launch the new ASTD Handbook of Measuring and Evaluating Training.



Categories: Books | Celebrity Bloggers | Conferences | Evaluation and ROI | International

Using Action Plans to Align Programs with the Business

February 21, 2010 12:34 by Patti Phillips

Business alignment is an imperative if you are interested in driving a positive return on your organization's training investment. As described in an earlier post, Positioning Your Programs for Success in 2010, the process of achieving alignment includes four steps:

1. Clarify stakeholder needs
2. Develop SMART objectives
3. Communicate program objectives
4. Evaluate success, including the ROI when appropriate

Clarifying the ultimate need, or the payoff need, sets the stage for identifying a correct solution. But it is the business need that defines the specific measures that must improve in order to take advantage of the payoff opportunity. These business needs represent the business measures that will ultimately be converted to monetary value and compared to program costs in the ROI calculation. But what happens when business measures are not so clear? How do you align a program to the business when participants come to the program armed with their own specific business needs? When these situations occur, try using action plans.

What is an action plan?

In Chapter 8 of the upcoming ASTD Handbook of Measuring and Evaluating Training (coming soon!), Holly Burkett describes action plans in detail. But in a nutshell, an action plan is a tool by which participants identify specific actions they will take using content presented in your program to achieve some end. That 'end' is often the improvement in key business measures. Below is an example of an action plan intended to align planned actions with improvement in a business measure and calculate the ROI.

How does action planning work?

To ensure a successful action planning process, follow these steps in their designated time frames.

Before the Program

Prior to the program, set the stage for action planning.

  • Communicate the action plan requirement.
  • As part of pre-work, have participants identify the business measure(s) that need to improve (and that can improve) through the successful application of knowledge, skill, and/or information presented in the program.

During the Program

During the program participants learn more about action planning.

  • Describe the action planning process at the beginning of the program.
  • Teach the action planning process, explaining how the document should be completed.
  • Allow time to develop the action plan.
  • Have the facilitator approve the action plan.
  • Require participants to assign monetary value to their business measures and provide the basis for this value (see Items A, B, C on the right side of the action plan).
  • If time allows, have participants present their action plans to the group.
  • Explain the follow-up process.

After the Program

Post-program activities serve as the follow-up evaluation.

  • Require participants to provide improvement data (Item D on the right side of the action plan).
  • Ask participants to isolate the effects of the program (Item E on the right side of the action plan).
  • Ask participants to provide their confidence level in their estimates (Item F on the right side of the action plan).
  • Collect action plans at a pre-determined follow-up time.
  • Summarize the data and calculate the ROI (a brief sketch of this step follows).
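
To illustrate how Items D, E, and F fit together, here is a minimal sketch of that summarization step. The figures, and the simple conservative-adjustment arithmetic, are hypothetical illustrations:

    # Minimal sketch: summarizing action plan data; all figures are hypothetical.
    # Each participant reports an annualized improvement (Item D), the share of
    # it caused by the program (Item E), and confidence in that estimate (Item F).
    plans = [
        # (annual improvement in $, isolation estimate, confidence estimate)
        (20_000.0, 0.60, 0.80),
        (35_000.0, 0.50, 0.70),
        (12_000.0, 0.80, 0.90),
    ]

    # Discount each claim by both estimates to keep the benefits conservative.
    total_benefits = sum(value * isolation * confidence
                         for value, isolation, confidence in plans)
    program_costs = 25_000.0
    roi_pct = (total_benefits - program_costs) / program_costs * 100

    print(f"Adjusted benefits: ${total_benefits:,.0f}")  # $30,490
    print(f"ROI: {roi_pct:.0f}%")                        # 22%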

* * * *

The action planning process takes effort, but can be a powerful tool for collecting business impact data. In addition, data collected through the action planning process are used to calculate the ROI.

For an example of action planning in action, watch the webcast, Measuring the ROI in Performance Improvement Training, which was conducted on behalf of Learn.com. The actual case study (which serves as the handout referenced in the webcast) is attached below.


02.19.2010_Measuring ROI in Performance Improvement Training_Using Action Plans to Collect Level 4 Data.pdf (582.08 kb)



Categories: Books | Celebrity Bloggers | Evaluation and ROI

Using Technology to Collect Data

January 31, 2010 20:55 by Patti Phillips

On Wednesday, January 27, President Obama gave his State of the Union Address. Upon its completion, the pollsters and pundits were in their usual form. If you happened to watch the post-address discussion on CNN, you probably saw John King provide the latest Twitter results. That's right. Polling using Twitter. John King gave us another example of how technology is aiding us in collecting survey data.

Twitter, Facebook, and LinkedIn, along with SurveyMonkey, SurveyPro, Metrics-that-Matter, and many other technologies, provide an array of opportunities to collect data from colleagues, customers, and the like. While the program evaluation community has embraced technology to make data collection more convenient, less expensive, and more interactive, we often rely on it so heavily that we fail to recognize the potential for error in results that comes from depending solely on technology. The types of error most immediately at risk are coverage error and non-response error.

Coverage Error
Coverage error occurs when we collect data and report results only from the group of respondents who have access to the delivery mode we employ. John King's results were admittedly not representative of the country at large; consider some of the people he missed:

  • People who follow CNN on Twitter, but choose not to tweet.
  • People who don't follow CNN on Twitter.
  • People who don't know about Twitter.
  • People who don't have computers.

Non-Response Error
Non-response error occurs when people do not respond to a survey. With a low response rate, it becomes difficult to draw conclusions from the survey results. People fail to respond to surveys for a variety of reasons, including (but not limited to):

  • Lack of time
  • Lack of interest
  • No incentive
  • No access
  • Too many surveys
  • Too many emails 
  • Technology challenged
  • Technology resistant

In order to take advantage of what technology has to offer in terms of data collection and to mitigate coverage and non-response error, consider the following steps, taken from the work of Don Dillman (2009) and other experts in survey research.

1. Identify your primary mode of data collection for a given survey project.
You may choose technology as your primary mode. If so, then steps 2-5 below will use technology. If you choose paper-based or telephone surveys as your primary mode of data collection, you will use that mode to complete the following steps.

2. Provide pre-notice prior to administering the survey.
This communication will come in the form of an email, if you plan to email your survey; a letter or memorandum, if you plan to use a paper-based survey; or a brief telephone call, if you plan to use the telephone as your primary method of data collection. The purpose of the pre-notice is to advise potential respondents of the importance of the survey. In addition, the pre-notice will explain when they will receive the survey, what they can expect in terms of time commitment, the completion timeline, the planned use of the data, and any incentives you are willing to offer for survey completion.

3. Administer the survey.
Three days after the pre-notice has been distributed, send the survey. As part of the survey instructions, explain again the importance of the survey, the time commitment, the completion timeline, the planned use of the data, and the incentives.

4. Administer the survey a second time.
After a week or two, administer the survey a second time using, again, your primary mode of delivery. This second distribution serves as a reminder and makes it convenient for the audience by providing the entire survey with instructions.

5. Send a follow-up reminder.
By now, you should have received a large number of completed surveys. But there are still a few people who need another reminder. So, using your primary mode of delivery, send a reminder to those who have not yet responded.

6. Administer the survey a third time -- using a different delivery method.
This last contact with potential respondents is your opportunity to influence people to respond by approaching the issue from another direction. This time, change your delivery method. If your previous contacts have been electronic, send potential respondents a paper-based survey or place a call to them. By changing the delivery method, you give people who have not had access to (or who chose not to access) your survey an opportunity to respond.
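
As a quick illustration, here is a minimal sketch of the contact schedule these six steps imply; the exact day offsets are assumptions for the example, not prescriptions from Dillman:

    # Minimal sketch of a contact schedule for the six steps above.
    # The day offsets are illustrative assumptions, not fixed rules.
    from datetime import date, timedelta

    def contact_schedule(start, primary_mode, backup_mode):
        plan = [
            (0,  primary_mode, "pre-notice"),
            (3,  primary_mode, "survey, first distribution"),
            (10, primary_mode, "survey, second distribution"),
            (17, primary_mode, "reminder to non-respondents"),
            (24, backup_mode,  "survey via a different delivery method"),
        ]
        return [(start + timedelta(days=d), mode, what) for d, mode, what in plan]

    for when, mode, what in contact_schedule(date(2010, 2, 1), "email", "paper"):
        print(f"{when}: {what} ({mode})")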

* * * *

Reference
Dillman, D. A., Smyth, J. D., and Christian, L. M. (2009). Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method, 3rd edition. Hoboken, New Jersey: John Wiley & Sons.

Additional Resources
Alreck, P. L. and Settle, R. B. (1995). The Survey Research Handbook, 2nd edition. New York: McGraw-Hill.

Fink, A. (series editor). (2002). The Survey Kit, 2nd edition. Thousand Oaks: Sage Publications.

Trochim, W. M. The Research Methods Knowledge Base, 2nd edition. Internet WWW page, at URL: <http://www.socialresearchmethods.net/kb/> (version current as of October 20, 2006).


Categories: Books | Evaluation and ROI

Selecting an Evaluation Approach

January 17, 2010 14:39 by Patti Phillips

It's the middle of January, and conference season is upon us. ASTD kicks it off with TK 2010 in Las Vegas. Then it's off to San Diego for Training 2010. And of course, the big one--ASTD ICE 2010--takes place in May in Chicago. Some of you will attend one or more of the 2010 conferences with the intent of finding an evaluation process that fits your needs. To ensure you are clear on those needs, ask yourself the following questions.


1. What purpose will evaluation serve?

The first step toward achieving a goal is purpose clarification. Get clear on the purpose you are trying to serve through adopting an evaluation approach. Do you want to justify your spending? Do you want to increase your budget? Are you looking for an approach to help you ensure your team implements the right programs for the right people to meet the right needs? Whatever your purpose for pursuing an evaluation approach, get clear. Write a simple purpose statement to stay focused.

2. Who are our stakeholders?

While it may sound simple, it is sometimes surprising to see who gets left off this list. Think about all of the stakeholders who have a vested interest in your training programs. Among the many stakeholders are the participants, supervisors, senior executives, and the suppliers from whom you purchase programs. There is also your team, including designers, developers, performance consultants, and evaluators. Identify them all.

3. What types of decisions do these stakeholders make about our programs?

Your many stakeholders make decisions about your programs routinely. Participants decide whether or not they are going to engage. Supervisors decide whether or not they are going to support participants as they apply what they learn in a program. Executives decide whether or not they are going to continue funding programs. Think about all of the possible decisions being made about your programs.

4. What type of data do our stakeholders need to make these decisions?

Given the types of decisions being made, think about what type of information would help stakeholders make those decisions. For example, what type of information would supervisors need in order to fully support a program? Would they need to know how a program will help change the work habits of their staff? Or maybe they would need to know how a program is improving the quality of work. Maybe they would want to know what is preventing their staff from being successful with the application of knowledge and skill acquired during a program.

5. What type of data are we providing our stakeholders?

What information do you currently provide your stakeholders about programs? Are you only sharing learning objectives, or do you actually report success against those objectives? Are you collecting data post-program to describe to supervisors the barriers and enablers to application of knowledge, providing them with insights as to how they can better support their teams? Do you describe to them how improvement in quality is directly linked to your program?

6. What are the gaps?

Given your stakeholders, the types of decisions they make about your programs, the types of data they need, and the data you are currently providing, what are you missing? These gaps in your data are the primary needs your new evaluation approach must fill.

With a clear view of what you need in terms of data, look at other criteria important to your selection. Maybe you want a process that:

  • Is credible
  • Is simple
  • Is appropriate for a variety of programs
  • Is economical
  • Is theoretically sound
  • Accounts for all program costs
  • Accounts for other factors
  • Is applicable on a pre-program basis

Make your list and pack your bags. Upon arrival at the conference, read your purpose statement, review your list, and attend as many sessions on training evaluation as you can. 

Now you are ready to make your selection.


Categories: Books | Conferences | Evaluation and ROI

Positioning Your Programs for Success in 2010

December 27, 2009 13:42 by Patti Phillips

As 2009 comes to a close and 2010 rolls in, people around the world will reflect on their accomplishments and set goals for the new year. To ensure your training programs support the 2010 goals of your organization, resolve to invest more time than you have in the past in positioning your programs for success. Four simple steps can help:

1.  Clarify Needs
Before deciding to offer a program, be clear on the highest level of need. This is a need that ultimately leads to making money, saving money, and/or avoiding costs. Examples of these opportunities are customer satisfaction, employee engagement, morale, market share, image, productivity, and operating costs. Once you are clear there, identify lower levels of need that are more specific and that lead you to a program aligned with the ultimate goal. The following series of questions can help you with this process (a brief example follows the list).

  • What areas need improvement that will help your organization ultimately make money, save money, or avoid cost? (Highest Level of Need)
  • What specific business measures would tell you that improvement has been made?  (Business Needs)
  • What needs to happen (or stop happening) in order to improve the above defined measures? (Performance Needs)
  • What is it that people need to know in order to do what you want them to do to address your business needs? (Learning Needs)
  • How best can you deliver knowledge, skill, and/or information people need to know to do what you want them to do? (Preference Needs)
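
One way to keep the cascade straight is simply to write the answers down in order. Here is a minimal sketch with a hypothetical call-center example filled in:

    # Minimal sketch: one hypothetical program's needs, from the payoff
    # opportunity down to delivery preferences.
    needs = {
        "payoff":      "Reduce operating costs in the call center",
        "business":    "Cut average call-handling time from 9 to 7 minutes",
        "performance": "Agents resolve common issues on the first call",
        "learning":    "Agents can use the new knowledge base to diagnose issues",
        "preference":  "Short e-learning modules plus on-the-job coaching",
    }

    for level, need in needs.items():
        print(f"{level:>11} need: {need}")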

2.  Develop SMART Objectives
Based on the identified needs, develop objectives that reflect each level of need. Be sure your objectives are SMART: specific, measurable, achievable, realistic, and time-bound. Objectives representative of stakeholder needs are critical to program success.

3.  Communicate Program Objectives
This step, while obvious on the surface, is often overlooked. This is particularly true when it comes to participants. Objectives are your positioning power. By communicating specific, measurable objectives reflective of all levels of need, designers, developers, and facilitators know what they need to do to make the program successful. Evaluators know what questions to ask during the evaluation. Supervisors, managers, and senior leaders recognize that the program is on track with their goals. But the group to whom objectives are often not communicated so clearly is participants. This is particularly true when it comes to objectives beyond those targeting knowledge acquisition. Participants need to know not only what they are going to learn in a program, but also what they are expected to do with what they learn and why they are expected to do it.

4. Evaluate Success
If you want to know whether or not your program is successful, evaluation is a must. Programs for which needs are clear, objectives are SMART, and all stakeholders are in the 'loop' are more likely to drive results. But you will never know how well the program achieved those results, or how to improve the program, without evaluation. The good news is that if you are clear as to why a program is being offered and you have set and communicated SMART objectives, evaluation is relatively simple!

Your Assignment

When you return to work, identify a program and work with your team to answer the following questions:

  1. Are the needs for this program clear?
  2. Are the objectives SMART?
  3. Have we communicated the objectives to everyone who needs to know? 


Further Reading

Annulis, H. and Gaudet, C. (in press). Developing Powerful Objectives. In Phillips, P. P. (editor), ASTD Handbook of Measuring and Evaluating Training. Alexandria: ASTD.

Phillips, J. J. and Phillips, P. P. (2008). Beyond Learning Objectives: Develop Measurable Objectives That Link to the Bottom Line. Alexandria: ASTD.


Categories: Books | Evaluation and ROI