How Much Did Your Sales Force Really Learn?

When USG Corporation completed Customer Relationship Management (CRM) training of 475 people in its sales and marketing organizations, one troubling question lingered: Did anyone actually learn anything?

Rick Reese, manager of training for the Fortune 500 manufacturer of construction and remodeling building materials, saw a pressing need to monitor the effectiveness of the training. "If you can get someone to express what his issues are, and walk through it with him, I think that produces a more effective solution," he said.

When USG decided to improve its automation of CRM in a project it code-named Genesis, it faced a number of unknowns. Among the most pressing challenges was that USG needed to get field sales and support organizations, product managers, marketing personnel and administrators using the same hardware and software to streamline sales and customer service.

"We had great divergence in skill levels using computers and a diversity of computer systems and software in place," said Reese. "There was no easy way to communicate information."

The company planned a significant investment in new equipment, software and training. But it had to know the technology would be adopted and used by employees. USG set aside resources to measure the effectiveness of its CRM training. In the end, students received training on no fewer than seven different software operating systems and applications that ran on laptops and handhelds. "Speeding adoption of these tools was critical to the success of Genesis," said Reese.

The Business Case for Measurement
Dave Basarab, manager of training operations and business for Motorola SPS Organization and Human Effectiveness, summed up the need for post-training evaluation this way: "Evaluation can help you make sound training investment decisions; it can help the training community to ensure that courses are working; it can help operations departments to identify barriers that are preventing skills from being applied; and it can help your company be confident that the hours your employees spend in training and the dollars that it invests in its people are time and money well spent."

Let's face it: Any company implementing a large-scale CRM project pays a big price. Beyond significant financial costs, many companies experience short-term productivity declines, turnover among key personnel--even some declines in customer service levels. Knowing precisely the quality of instruction that is being delivered can be critical to change management success. In cases like these, post-training evaluation provides a critical feedback loop for ongoing improvements to a corporate training program.

When Dow Corning implemented new CRM technology, the sheer scope of the changes involved persuaded project manager Jackie Herring to include in-depth post-training evaluation.

"Number one, it was a new tool," she said. "And it was a global tool. It wasn't rolled out to just salespeople; a number of other people also got the training. We were crossing multiple functions with a new tool that was vital to our competitiveness. There were behavior changes that needed to be instilled, a lot of new technology and a lot of different kinds of people involved." Herring needed post-training evaluation to measure both understanding and adoption of the new technology among the 120 initial trainees. This was particularly important because the company planned to use the same curriculum to train as many as 1,500 employees over the next few years.

She said the evaluation process "helped us identify areas that people didn't understand or didn't find of value or benefit. We found out that some of the teams were not using the tool, and it helped us understand why." Herring said the evaluation also showed her that people wanted ongoing training, rather than a one-time event, and the company tailored its training program to better fit their needs.

While Dow Corning used training evaluation data to improve ongoing training efforts, some believe there is no need to evaluate a one-time-only training event. This raises an obvious question, however: How do you decide to offer training only once when you haven't determined how effective the training was, or whether it needs to be offered again? Clearly, you can make a strong argument that some level of evaluation should always occur--only the detail of that evaluation may vary. The key lies in finding the right level of evaluation for the situation.

How Much Evaluation Is Enough?
There are hundreds of different methods, mechanisms, procedures and time frames for post-training evaluation.

Tech Resource Group (TRG) has done training for dozens of Fortune 1000 companies, and we've found that the proper level of post-training evaluation differs from situation to situation. As a result, we've designed a training evaluation approach based on Donald Kirkpatrick's four-level model (see sidebar). The model defines four levels of training evaluation, moving from the simplest to the most complex and in-depth. We use the model to help clients determine how much post-training evaluation is appropriate for their needs.

The first level in the model involves collection and assessment of training participants' reactions using evaluation or feedback forms. Usually these short questionnaires focus on the delivery of the training program and on how well participants liked the course.

The second level tells us about the actual learning that took place during the training. In most cases, participants must take some form of knowledge quiz both before and after training. The outcomes provide data on the quality of learning that's occurred, and if the training is designed properly, much of this data can be collected during the training event itself.
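
In practice, a level-two measurement boils down to comparing each trainee's score before and after the event. The short sketch below (in Python) shows one way to compute that gain; the scores and participant names are hypothetical, and neither Kirkpatrick's model nor the companies quoted here prescribe a particular formula.

    # Level-two sketch: compare each participant's quiz score before and
    # after training. Scores and names are hypothetical illustrations.
    pre_scores = {"participant_01": 55, "participant_02": 70, "participant_03": 40}
    post_scores = {"participant_01": 85, "participant_02": 90, "participant_03": 75}

    gains = {p: post_scores[p] - pre_scores[p] for p in pre_scores}
    average_gain = sum(gains.values()) / len(gains)

    print(f"Average score gain: {average_gain:.1f} points")
    for participant, gain in sorted(gains.items()):
        print(f"  {participant}: +{gain}")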

Frequently, training is just one component of an overall process designed to modify behaviors. Level-three evaluations tell us whether participants actually apply the knowledge and skills they learned. Usually performed 60 to 90 days after training, this type of evaluation requires follow-up visits and observations at the job site, or surveys of trainees and their supervisors.

According to Kirkpatrick, if the end result is to change attitudes, improve skill levels or increase knowledge, then only levels one (reaction) and two (learning) need be performed. However, if one wishes to use the training to change behavior, then he recommends the use of level three (behavior) and possibly level four (results).

Because USG's Genesis project required many new behaviors from trainees, the company invested in level-three post-training evaluation. "You could characterize some of the people as 'hunkering down, waiting for the wave to pass over them,'" said Reese. "We realized early on that we needed lots of face-to-face follow-up to actually change behaviors."

Reese said the company used a number of different data collection devices to determine whether behaviors were actually changing. "There was a follow-up survey to ask if they're using the things we trained them to use," said Reese. "Our server also tells us what applications are being used, how often people call in to synchronize data and what reports are being used, and by whom."
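
A rough sketch of how such server data might be rolled up into an adoption report appears below (again in Python); the log format and the weekly sync threshold are illustrative assumptions, not details of USG's actual system.

    # Level-three sketch: flag reps whose server activity suggests they
    # have not adopted the new tools. Log records and the threshold of
    # two syncs per week are assumed for illustration.
    from collections import Counter

    log = [
        ("rep_a", "sync"), ("rep_a", "sync"), ("rep_a", "report_run"),
        ("rep_b", "sync"),
        ("rep_c", "sync"), ("rep_c", "sync"), ("rep_c", "report_run"),
    ]

    WEEKLY_SYNC_TARGET = 2  # assumed adoption threshold
    syncs = Counter(user for user, event in log if event == "sync")

    for user in sorted({user for user, _ in log}):
        status = "adopted" if syncs[user] >= WEEKLY_SYNC_TARGET else "needs follow-up"
        print(f"{user}: {syncs[user]} syncs this week -> {status}")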

Measurement Challenges
By far the most difficult aspect of training effectiveness to measure is return on investment (ROI), Kirkpatrick's fourth level of evaluation. ROI calculations usually attempt to draw a causal connection between sales force automation training and actual changes in sales at the company.

While this seems straightforward, the devil lies in the details, especially when one must show a concrete cause-and-effect relationship. For example, are increased sales due to the new sales force automation training or the fact that the economy is doing well?

Depending on which variables one decides to measure, evaluating ROI can be time consuming, expensive and sometimes ambiguous. Statistically nailing down the notion that a specific training event "caused" a specific sales result requires that the company use a control group. Unless an organization is willing to have a control group that doesn't receive the new software training, there is no way to actually "prove" the ultimate benefit of the training.
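
As a rough illustration, the sketch below estimates ROI by attributing to the training only the difference in sales growth between a trained group and a control group; every figure in it is hypothetical.

    # Level-four sketch: estimate training ROI using a control group.
    # All dollar amounts are made up for illustration.
    training_cost = 250_000              # assumed total program cost
    trained_group_sales_lift = 900_000   # sales growth among trained reps
    control_group_sales_lift = 500_000   # sales growth among untrained reps

    # Attribute only the difference between the groups to the training.
    attributable_benefit = trained_group_sales_lift - control_group_sales_lift
    roi = (attributable_benefit - training_cost) / training_cost

    print(f"Benefit attributable to training: ${attributable_benefit:,}")
    print(f"ROI: {roi:.0%}")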

On the other end of the spectrum, by far the easiest feedback to retrieve is the trainee's reaction to the training. Did trainees like the course, the instructor, class material, methodology, content and the training environment? Were the objectives met and was the training useful to them? These questions can be answered with a simple questionnaire. Unfortunately, the answers don't give much insight into questions such as "Will this training change the way you do your job for the better?"

Common Mistakes
A common mistake trainers make is to think of training evaluation as something one "adds on at the end." Evaluation should be planned and integrated into the curriculum from the beginning. For instance, if a group of students has some knowledge of the software they are being trained to use, it is critical to establish a baseline of understanding before the training event. This may require a quiz or questionnaire that participants fill out beforehand.

The results of this pre-event benchmarking give trainers two important tools: a means to evaluate the effects of training and an idea of what level of knowledge the curriculum itself should assume.

On the other hand, there is little need for such benchmarking when employees are being taught something completely new--where we can safely assume existing knowledge among students is zero. In such a circumstance, only a post-test need be given to measure learning.

Deciding beforehand what data to collect and analyze is very important. As Mark Twain once said, "Collecting data is like collecting garbage, you must know in advance what you are going to do with the stuff before you collect it."

Along those lines, here are some questions to ask when planning an evaluation:

  • How will we measure whether the training objectives have been met?

  • Are the training objectives written so that we can measure whether they've been met?

  • Who and what will be evaluated?

  • What is the purpose of the training?

  • Is the training designed to increase knowledge, improve skills, change attitudes or change behavior? The answer to this question can determine what levels of evaluation you perform.

  • When will the evaluations occur?

  • Is a pre-training baseline study necessary?

  • If evaluations will require analysis of behavior changes, what data will we collect to measure those changes?

  • What types of information do we need to know? Is it enough to know whether the participants enjoyed or understood the course material?

  • What do we need to know about the attitudes of training participants?

  • How will we know whether more training is necessary?

In today's marketplace, constant change must be met with constant learning and growth. To meet that need, some industry experts estimate that large companies spend an average of $6,000 per year, per salesperson, on training. Common sense suggests that the companies that do the best job of deploying those investments will succeed over the long term.

While it's vital for sales managers to reinforce training and constantly identify new areas that require training, it is equally vital that training be not just efficient, but effective. As a result, relatively small investments in training evaluation can pay big dividends. After all, there is a lot to know out there. Effective evaluation of sales force training efforts can tell companies some important things about what they still need to learn.
