Eileen tossed a stack of student evals in the trash can. They were blank.

 

I had just observed her training session and walked up to introduce myself, but she spoke first.

 

“Anybody can get good Smile Sheets,” she said. The unspoken half of that sentence is the conclusion left to the listener: “So why bother?”

 

I’ve heard that line many times over the last twenty years. And judging by her tone, she thought I’d agree with it.  I didn’t agree. And you shouldn’t either.

 

There are two shaky assumptions she wants us to accept here.

 

1. She claimed that “anybody” can get good scores. Of course, the important message there is that she could. Should I assume, then, that Eileen could get good Smile Sheets?

 

No. In fact, when a trainer says this, I assume the opposite – that she can’t get those good Smile Sheets.

 

2. The second assumption is a familiar thinking trap: “If A is true, then B must be true.” Here it is:  “If anybody can get good Smile Sheets, then the scores given to good training and bad training will reveal no helpful distinction.”  This leads us again to the conclusion: “So, why bother?”

 

But here, assumption A isn’t true. There is a very noticeable score difference between good training and bad. It is evident at first glance. Again, she’s wrong.

 

So “Smile Sheets” (a.k.a. Level 1 feedback, for you Kirkpatrick fans) have been misrepresented, and their benefits have gone unrealized, as these unfortunate statements have been repeated over the years. The very words “Smile Sheets” reinforce this intellectually lazy point of view; they mock the gullible fool who takes student evals seriously.

 

Eileen expected to sell me on her cynical point of view.  Sorry. Not buying it.

 

In “Smile Sheets, Part 2” I will make the case that student evals are the most important measurements in training. If training is your field, or even your temporary responsibility, you’ll want to read it and use it to prompt new thinking and more constructive practice in your organization.

Now before we consider the right perspective, let's explore the implications of this wrongheaded assertion that student evals aren't worth looking at.

First, evals are your customer feedback. (Yes, students are customers too.)

All customers have opinions about products they consume. These customer opinions drive buying decisions, which can determine the success or failure of any product. A dissatisfied buyer simply doesn’t buy again. So producers typically want to know all they can about how their customers feel about their products.

 

But Eileen’s “Smile Sheet” comment promotes the opposite idea, which I’ve heard in a few forms. It usually sounds like, “What do our students know about training? We create it. We’re the learning experts. We know much more than they do.” After this, some training jargon is usually thrown in to establish the speaker’s expert bona fides.

 

But this is not how true experts behave.

 

Note the behavior of experts outside of the training world. The people who design and produce cars are experts, for example. They know more about cars than the people who buy cars. Much more.

 

But they realize that car buyers do have one very important piece of information: they know when they have a good car – one that they like. They like driving it. It’s comfortable. It works in their life. They tell their friends about it. And they can tell you exactly how and why, and in what circumstances, this car serves them well.

 

If car designers ignored these customer opinions, they’d fail. So they care deeply about how their customers feel.

 

So evals are customer surveys, designed to measure what works for our students. It seems obvious that we’d care very much about their feelings and opinions. Just like the car designers, if we ignore them, we’ll fail.

Softballs?

But, Eileen might also claim, as others have, that Smile Sheets are just “softball” questions, deliberately intended to encourage positive responses.

All right. Let’s suppose that the questions on the evaluations are foolish.

Maybe they’re easy, self-serving – just fishing for compliments. Then Eileen would have a point, right?

 

No, she wouldn’t. She’s a professional. She could be part of the solution.

 

If she knows that foolish questions don’t help at all, as a trainer, she could easily call attention to this problem. She could help by suggesting that the evaluation questions be fixed. Trainer suggestions like this usually have influence. If you’re the trainer and you can’t get them fixed, my advice is to politely ask for permission to distribute your own evals, so you can improve your own performance.

 

But now let's return to the most disturbing issue here.

There is a widespread belief in the training community that student feedback is somehow beneath serious consideration.

This belief is hypocritical. Training exists to help students in their day-to-day work. Companies invest in training hoping that it will help their people and ease their transition to more productive ways of working. They expect that training will change what their people do.

 

For Eileen to believe that student evals are bound to be worthless, she must believe that her students aren’t smart enough to recognize what will help them.

 

And further, she must also believe that the evals won’t help her, as she can’t possibly learn anything from their unenlightened feedback.

 

But what if her students loved their session? What if they left feeling that her training session was helpful – that it would enhance their day-to-day work? Wouldn’t she at least want to know that?

 

Sadly, whole organizations sometimes believe these “experts” and accept this willfully blind approach. Why? What’s in it for them? Isn’t there some other reason – a valid one – not to distribute and collect these student evals?

 

No. Here’s Eileen’s real reason: she doesn’t want the evals. They might make her look bad.

This is the real motivation of many trainers who disparage the value of student feedback.

 

So before you read “Smile Sheets, Part 2,” and before your next training session, look at the questions you ask on your student evals.

Here’s a start:

  • If the questions are irrelevant or poorly worded, respectfully suggest that they be replaced. Write some improved versions as examples. Do this when you believe it’s needed. You’ll be respected for it.

  • If nobody ever does anything with this student feedback, you should start looking for important patterns and repeated suggestions in the student scores and comments, and then….

  • Do something with the feedback. Initiate.

 

If student response is low, or half-hearted, convince yourself, and then your students, that evals are vital and taken seriously. Start asking your students more emphatically and sincerely for their best feedback on the evals.

 

You'll get it from most of them. Over time you'll see patterns, good and bad, among many similar responses. You'll adjust. And you'll see student responses change.

 

But there will be more. I guarantee that once in a while, probably soon, you'll see an idea so full of promise that you'll be compelled to take hold of it. It will improve the training. And it might even change your way of presenting other training.

 

Later you'll ask yourself, “What if I hadn't read that feedback?”

 

Truth is, some of our students are very smart. You'll realize it. And with a little irony, you'll smile.

 

Smile Sheets, Part 2 soon.