Syncing In

In today's marketplace, constant change must be met with constant learning and growth.

Everything seemed to have gone smoothly. There were attentive students, a well-considered curriculum and experienced, dedicated instructors. But when USG Corporation completed Customer Relationship Management (CRM) training of 475 people in its sales and marketing organizations, questions lingered. The biggest question was also the most important: Did anyone actually learn anything? "People tend to suffer in silence," said Rick Reese, manager of training for the Fortune 500 manufacturer of construction and remodeling building materials. "We knew that to truly know whether our training was effective, we'd need to build mechanisms into the process to give us that information. If you can get someone to express what his issues are, and walk through it with him, I think that produces a more effective solution."

USG sells gypsum wallboard, joint compound, ceiling tiles, grid components and other building material products. When the company decided to improve its automation of customer relationship management in a project it code-named Genesis, it faced a number of unknowns. Among the most pressing challenges was that USG needed to get field sales and support organizations, product managers, marketing personnel and administrators using the same hardware and software to streamline sales and customer service.

"We had great divergence in skill levels using computers and a diversity of computer systems and software in place," said Reese. "There was no easy way to communicate information. Some had laptops, some had desktops--but not everyone." The company planned a significant investment in new equipment, software and training. To justify those expenses, it had to know the technology would be adopted and used by employees. To close the loop, USG set aside resources to measure the effectiveness of their CRM training. "We were supplying our people with a complete package of hardware and software tools," said Reese. In the end, students received training on no fewer than seven different software operating systems and applications that ran on laptops and handhelds. "Speeding adoption of these tools was critical to the success of Genesis."

The Business Case for Measurement
Dave Basarab, manager of training operations and business for Motorola SPS Organization and Human Effectiveness, summed up the need for post-training evaluation this way: "Evaluation can help you make sound training investment decisions, it can help the training community to ensure that courses are working, it can help operations departments to identify barriers that are preventing skills from being applied and it can help your company be confident that the hours your employees spend in training and the dollars that it invests in its people are time and money well spent."

Let's face it: Any company implementing a large-scale CRM project pays a big price. Beyond significant financial costs, many companies experience short-term productivity declines, turnover among key personnel--even some declines in customer service levels. To mitigate these short-term disruptions, change management experts insist on frequent and honest communication between those implementing the project and those affected by it. Knowing precisely the quality of instruction that is being delivered can be critical to change management success. In this respect, the need for training evaluation goes beyond simply justifying the use of training--or justifying its continuation. In cases like these, post-training evaluation provides a critical feedback loop for ongoing improvements to a corporate training program.

When Dow Corning implemented new CRM technology, the sheer scope of the changes involved persuaded project manager Jackie Herring to include in-depth post-training evaluation.

"Number one, it was a new tool," she said. "And it was a global tool. It wasn't rolled out to just salespeople; a number of other people also got the training. We were crossing multiple functions with a new tool that was vital to our competitiveness. There were behavior changes that needed to be instilled, a lot of new technology and a lot of different kinds of people involved. We thought we could capture some feedback around all of these things by doing significant post-training evaluation." Herring needed to measure both understanding and adoption of the new technology among the 120 initial trainees. This was particularly important because the company planned to use the same curriculum to train as many as 1,500 employees over the next few years.

"The post-training evaluation helped us identify where our training could be improved," she said. "It helped us identify areas that people didn't understand or didn't find of value or benefit. We found out that some of the teams were not using the tool, and it helped us understand why."

Herring said the evaluation showed her that people wanted ongoing training, rather than a one-time event. "We responded in a number of ways. We did lunch-and-learn workshops and found other ways to share best practices. Some people found ways to use the tool that were very effective, and they were able to share them.

"The evaluation results not only helped us improve the training, but also gave us an understanding of the software tweaks that were necessary during rollout," said Herring. "For instance, the evaluation helped us understand a technical problem we were having around synchronizing. We knew about it before the evaluation. But we raised the issue in the evaluation to help us understand how big the problem really was."

While Dow Corning used training evaluation data to improve ongoing training efforts, there are those who believe there is no need to evaluate a one-time-only training event. This raises an obvious question, however: How do you decide to offer training only once if you haven't determined how effective the training was and whether it needs to be offered again? Clearly, you can make the strong argument that some level of evaluation should always occur--only the detail level of the evaluation may vary. The key lies in finding the right level of evaluation required.

How Much Evaluation Is Enough?
As one might guess, there are literally hundreds of different methods, mechanisms, procedures and time frames within which to do post-training evaluation.

Tech Resource Group (TRG) has done training for dozens of Fortune 1000 companies, and we've found that the proper level of post-training evaluation differs from situation to situation. As a result, we've designed a training evaluation approach based on Donald Kirkpatrick's four-level model. The model defines four levels of training evaluation, moving from the simplest and least illuminating to the most complex and in-depth. We use the model to help clients determine how much post-training evaluation is appropriate for their needs.

The first level involves collecting and assessing training participants' reactions using evaluation or feedback forms. Everyone who has ever been to a seminar or training course has been asked to fill one of these out at the close of the session. Usually these short questionnaires focus on the delivery of the training program and on how well participants liked the course.

The second level tells us about the actual learning that took place during the training. In most cases, this means participants must take some form of knowledge quiz both before and after training. These can take the form of simulations, skill practice exercises or projects, the outcomes of which provide data on the quality of learning that has occurred. If the training is designed properly, much of this data can be collected during the training event itself.

Frequently, training is just one component of an overall change management process designed to modify behaviors. Level-three evaluations tell us whether participants actually apply the knowledge and skills they learned. Usually performed 60 to 90 days after training, this type of evaluation requires follow-up visits and observations at the job site, or surveys of trainees and their supervisors.
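Of these levels, the second is the most mechanical to score: with a matched quiz given before and after training, learning can be summarized with simple arithmetic. The sketch below is one minimal way to do it; the normalized-gain formula, function names and sample scores are illustrative assumptions, not figures from any program described here.

```python
# Minimal sketch of a level-two (learning) calculation, assuming each
# trainee takes the same knowledge quiz before and after training and
# scores are recorded as percentages. All names and numbers are invented.

def learning_gain(pre, post):
    """Normalized gain: the share of possible improvement actually achieved."""
    if pre >= 100.0:
        return 0.0  # already at ceiling; the quiz can't show further learning
    return (post - pre) / (100.0 - pre)

def class_summary(trainees):
    """Average pre-score, post-score and normalized gain across a class."""
    n = len(trainees)
    avg_pre = sum(t["pre"] for t in trainees) / n
    avg_post = sum(t["post"] for t in trainees) / n
    avg_gain = sum(learning_gain(t["pre"], t["post"]) for t in trainees) / n
    return {"avg_pre": avg_pre, "avg_post": avg_post, "avg_gain": avg_gain}

if __name__ == "__main__":
    sample_class = [
        {"name": "A", "pre": 40.0, "post": 85.0},
        {"name": "B", "pre": 55.0, "post": 80.0},
        {"name": "C", "pre": 20.0, "post": 70.0},
    ]
    print(class_summary(sample_class))
```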

According to Kirkpatrick, if the end result is to change attitudes, improve skill levels or increase knowledge, then only levels one (reaction) and two (learning) need be performed. However, if one wishes to use the training to change behavior, then he recommends the use of level three (behavior) and possibly level four (results).

Because USG's Genesis project required many new behaviors from trainees, the company invested in level-three post-training evaluation. "You could characterize some of the people as 'hunkering down, waiting for the wave to pass over them,'" said Reese. "Obviously, that was not going to happen. We realized early on that we needed lots of face-to-face follow-up to actually change behaviors. But the more you actually use the system, the better you get at it. That was an enlightenment process that went on last year."

Reese said the company used a number of different data collection devices to determine whether behaviors were actually changing. "There was a follow-up survey to ask if they're using the things we trained them to use," said Reese. "Our server also tells us what applications are being used, how often people call in to synchronize data and what reports are being used, and by whom."
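Server-side signals like the ones Reese describes can be rolled up with very little code. The sketch below assumes a hypothetical CSV export of synchronization events, one row per sync with user, timestamp and application columns; the file name and field layout are assumptions for illustration, not USG's actual system.

```python
# Sketch of level-three adoption tracking from server logs. Assumes a
# hypothetical CSV export named sync_log.csv with columns:
#   user,timestamp,application
import csv
from collections import Counter, defaultdict
from datetime import datetime

def adoption_report(path):
    syncs_per_user = Counter()          # how often each person synchronizes
    apps_per_user = defaultdict(set)    # which applications each person uses
    last_sync = {}                      # most recent sync per person

    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            user = row["user"]
            syncs_per_user[user] += 1
            apps_per_user[user].add(row["application"])
            ts = datetime.fromisoformat(row["timestamp"])
            if user not in last_sync or ts > last_sync[user]:
                last_sync[user] = ts

    for user, count in syncs_per_user.most_common():
        print(f"{user}: {count} syncs, {len(apps_per_user[user])} applications, "
              f"last sync {last_sync[user]:%Y-%m-%d}")

if __name__ == "__main__":
    adoption_report("sync_log.csv")  # hypothetical export file
```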

Measurement Challenges
While measuring attitudinal changes or behavior changes can be challenging, by far the most difficult aspect of training effectiveness to measure is return on investment (ROI), Kirkpatrick's fourth level of evaluation. ROI calculations usually attempt to draw a causal connection between sales force automation training and actual changes in sales at the company.

While this seems straightforward, the devil, as always, lies in the details. When measuring ROI, one can present evidence of the benefits of training, but rarely can one show a cause-and-effect relationship. For example, are increased sales due to the new sales force automation training or to the fact that the economy is doing well?

Depending on which variables one decides to measure--improvements in work output, sales turnaround, cost savings, increases in sales, quality and so on--evaluating ROI can be time consuming, expensive and sometimes ambiguous. Statistically nailing down the notion that a specific training event "caused" a specific sales result requires that the company use a control group. Unless an organization is willing to have a control group that doesn't receive the new software training, there is no way to actually "prove" the ultimate benefit of the training. Only by juxtaposing the control group's performance with that of the newly trained group can one make statistically valid statements regarding ROI.
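As a rough illustration of the control-group idea, the sketch below compares sales changes for a trained group against an untrained control group using a two-sample (Welch's) t-test. The figures are invented, and a real ROI study would also have to account for territory, seasonality and other confounding factors.

```python
# Illustrative control-group comparison; all sales figures are invented.
from scipy import stats

trained_change = [7.2, 5.5, 9.1, 4.8, 6.3, 8.0]   # % sales change, trained reps
control_change = [3.1, 4.0, 2.5, 5.2, 3.8, 2.9]   # % sales change, control reps

# Welch's t-test: is the difference in means larger than chance alone explains?
t_stat, p_value = stats.ttest_ind(trained_change, control_change, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Even a small p-value only says the groups differ; it does not by itself
# prove that training, rather than some other factor, caused the difference.
```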

On the other end of the spectrum, by far the easiest feedback to retrieve is the trainee's reaction to the training. Did trainees like the course, the instructor, class material, methodology, content and the training environment? Were the objectives met and was the training useful to them? These questions can be answered with a simple questionnaire. Unfortunately, the answers don't give much insight into questions such as "Will this training change the way you do your job for the better?"

Common Mistakes
One of the most common mistakes trainers make is to think of training evaluation as something one "adds on at the end." Evaluation should be planned and integrated into the curriculum from the beginning. For instance, if a group of students has some knowledge of the software they are being trained to use, it is critical to establish a baseline of understanding prior to the training event. This may require a quiz or questionnaire filled out by participants before the event.

The results of this pre-event benchmarking give trainers two important tools: a means to evaluate the effects of training and an idea of what level of knowledge the curriculum itself should assume.

On the other side of the coin, you can spend money unnecessarily on pre-event benchmarking. This occurs when employees are going to be taught something completely new--where we can safely assume students' existing knowledge is zero. In such a circumstance, only a post-test need be given to measure learning.
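When a pre-event quiz is warranted, it can also be scored by topic to show what the curriculum may safely assume. The sketch below is a hypothetical example of that kind of baseline report; the topic names, scores and the 60 percent threshold are assumptions, not data from any program described here.

```python
# Sketch of a per-topic baseline report from a hypothetical pre-training quiz.
# Topic names, scores and the 60% threshold are illustrative assumptions.

def baseline_by_topic(responses):
    """Average pre-training score per topic across all participants."""
    totals, counts = {}, {}
    for response in responses:
        for topic, score in response.items():
            totals[topic] = totals.get(topic, 0.0) + score
            counts[topic] = counts.get(topic, 0) + 1
    return {topic: totals[topic] / counts[topic] for topic in totals}

if __name__ == "__main__":
    pre_quiz = [
        {"calendar": 80, "opportunity_tracking": 10, "data_sync": 5},
        {"calendar": 70, "opportunity_tracking": 20, "data_sync": 0},
        {"calendar": 90, "opportunity_tracking": 15, "data_sync": 10},
    ]
    for topic, avg in baseline_by_topic(pre_quiz).items():
        plan = "assume familiarity" if avg >= 60 else "teach from scratch"
        print(f"{topic}: baseline {avg:.0f}% -> {plan}")
```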

Deciding beforehand what data to collect and analyze is very important. As Mark Twain once said, "Collecting data is like collecting garbage, you must know in advance what you are going to do with the stuff before you collect it." Along those lines, here are some questions to ask when planning an evaluation:

  • How will we measure whether the training objectives have been met?
  • Are the training objectives written so that we can measure whether they've been met?
  • Who and what will be evaluated?
  • What is the purpose of the training?
  • Is the training designed to increase knowledge, improve skills, change attitudes or change behavior? The answer to this question can determine what levels of evaluation you perform.
  • When will the evaluations occur?
  • Is a pre-training baseline study necessary?
  • If evaluations will require analysis of behavior changes, what data will we collect to measure those changes?
  • What types of information do we need to know? Is it enough to know whether the participants enjoyed or understood the course material?
  • What do we need to know about the attitudes of training participants?
  • How will we know whether more training is necessary?

In today's marketplace, constant change must be met with constant learning and growth. To meet that need, some industry experts estimate that large companies spend an average of $6,000 per year, per salesperson, on training. Common sense suggests that the companies that do the best job of deploying those investments will succeed over the long term.

While it's vital for sales managers to reinforce training and constantly identify new areas that require training, it is equally vital that training be not just efficient, but effective. As a result, relatively small investments in training evaluation can pay big dividends. After all, there is a lot to know out there. Effective evaluation of sales force training efforts can tell companies some important things about what they still need to learn.
