Tuesday, August 21, 2012

Blow it up! Challenge yourself with crazy hypotheticals



Who am I? What am I about? Where am I going? What do I really want?

Questions like these explore the underbelly of our consciousness as we second-guess our origins, personal decisions, and where we fit in the grand scheme. It is generally considered a healthy exercise to question one’s core understanding, to challenge one’s assumptions, and to reassess one’s value system.

We’re a little of two minds about how much questioning we can take. As a society and as media consumers, we gravitate toward creativity and exploration while shunning rigidity and closed-mindedness, yet we also tend to worship stability, consistency, security, and some degree of predictability.

The good news for us is that as imaginative beings, we have a wonderful capacity for conceptual experiments. We don’t actually have to seal a live cat in a box with a radioactive trigger in order to get the gist of Schrödinger’s cat. We can – without lifting a finger – work through the ins and outs of most any scenario we can dream up. For example, I can imagine my life without my current wife, or what it might be like to have been born into serious money, or what my neighbors might say if I painted my house fluorescent pink. The ability to radically question and to think up extreme scenarios can be quite thought-provoking, potentially leading one to make real life changes, but also often affirming the value of a choice or life standard already in place.

The same could be said for business strategy. As with the individual, it is healthy for an organization to run through the same set of questions I posed at the top of this article, replacing “I” with “we”. The answers to these questions provide a framework for how we approach almost any challenge or situation that comes our way. Any good, thriving company already has a clear sense of its identity and where it fits, but the best companies are also adaptable, reassessing as the environment changes. Kentucky Fried Chicken is now KFC, and KFC is in some ways a very different company than Kentucky Fried Chicken was. These sorts of strategic moves don’t have to come from paying millions of dollars to a consulting agency, which will essentially perform this same exercise, afforded the luxury of harshly questioning operations in ways that organization members might fear would taint them in the eyes of peers and leaders.

Whether or not you choose to share your crazy thoughts with others, it is worth your time to examine these fundamental questions about identity and to also perform some mental acrobatics with regard to your organizational strategies. While the exercise may not lead to wholesale change, it will lead to considerations that may put you on a path toward the next important strategic or innovative move. It could be something really crazy like, what if I swapped my Finance team with my HR team? How would their approach to these alternative roles differ? It could be something more practical like, what would I do if tomorrow my budget was cut in half? Or, how would I meet this objective if I didn’t have this particular technology at my disposal?

While both the extreme and less-extreme questions provide value, I’d argue that the more extreme questions yield insight more quickly. I don’t think of this exercise as a brainstorming session. I think of it more like scorched earth but without all the physical carnage.

Saturday, August 4, 2012

Time To Turn Off Your GPS - Your Vision Is Constrained By Your Work

What will you find when you turn off your GPS and just drive?


You type the destination into your navigation system and you're off and driving, confident the technology will get you where you want to go. We do the same in our business development efforts: we set a course, then vet the various technological tools and processes that we believe – from our current experience and understanding – will best get us to the destination. This is a proven approach to getting consistent and mostly predictable results. We have a vision, then we work to fill in the blanks between point A and point B.

These efforts are almost entirely predicated upon the idea that we know where we are capable of going and what we are capable of doing. And in determining our destination and best path, we use some degree of business intelligence (some companies more than others) - targeted analytics and legacy successes - to get to what we define as the best result.

This process is an important part of business across many industries, especially in the area of customer care, wherein we strive to provide the level of service our customers want. We have a couple of ways at our disposal to try to understand what it is they want. We can ask them – and we're lucky if they answer at all, much less honestly and forthrightly. Another method is to closely analyze their behaviors, then de-emphasize process points that lead to undesired behavior (e.g. abandonment) while emphasizing process points that lead to desired behavior (e.g. conversion).

In terms of the tools we make available to ourselves in driving behavior, we narrow our options to only those tools which fit a set of business requirements - requirements that were determined by the process points we've already identified for improvement.

While a productive exercise, it is highly constrained. Even the one explorative aspect above - the analytical process - is constrained. Our technological research is equally constrained because we've already narrowly defined the feature set we're looking for as we drill down to our "best" options.

Have you considered that your customers have a limited sense of what they want? Our experience shows they tend to ask only for what they feel they can get. They are just as constrained in their thinking as we are. Those who aren't will perhaps come to our playground and take the ball from us.

Think about it. Back when the service option was limited to phones and phone reps, before IVR, web, or mobile technology existed, if you asked customers what they want in terms of a great service interaction, they'd say something like, "I want my call answered right away, I want the rep to be nice to me, and I want my problem fixed." They wouldn't say "I don't want to talk to you but instead I want a really great tool that will allow me to help myself, something I can use anywhere I am, at any time."

Customers want this latter option today because they now know they can have it.

This is no different for those of us on the business side of the fence. For example, we put the self-service option out there according to industry best practices, then try to find places where it breaks. We hold a somewhat understandable belief in the absolute business value of self-service as the lower-cost, most convenient route of service.

Following the strategy above, most companies completely missed the boat when an inventive player began differentiating itself by intentionally and dramatically de-emphasizing self-service technology. One might suggest this was merely the insight that a niche of customers longed for the good old days of handshakes and friendly subject-matter experts, and that this company was just following the same find-a-way-to-do-what-the-customer-wants paradigm. That may be true to a degree, but I'll argue that before any company could pull the trigger on this option – spending millions of dollars on a national ad campaign and forgoing the millions in labor savings from self-service – someone had to have the vision to look where no other company was looking, even though every company was getting the same feedback from many of the same customers.

In both cases, web and mobile self-service and later, the resurgence of personalized human service, someone had to go exploring the land that no other company knew existed. The destination wasn't programmed into their GPS systems.

Now the web makes spawning a video conference effortless, yet only a narrow band of companies is even considering it as a potential service option, even though the technology has been available for several years and so many customers Skype regularly with their families and friends. Why? Because customers haven't yet told us this is how they want to do business. What about tomorrow? Who's playing with it? Who's letting customers give it a try?

My belief is that every company should be devoting some amount of resources to exploring, inventing, innovating, and testing stuff even when there is no current business case or requirement providing a destination.

Google, for instance, allows every employee to spend 20% of their time working on projects completely unrelated to their normal workload (that's 20% of labor cost for thousands of employees – a huge investment). Many of Google's most successful initiatives have originated from this program. I am not sure every company needs to go this far to see a benefit, but I do believe that every company, especially those of us in customer care (delivery and technology), must devote resources to doing the same – to playing with lots of toys, and to researching principles, techniques, technologies, approaches, and paradigms, often from completely outside the realm of our given industry.

There is great value in finding something and then asking, "What can I do with this?" This is exactly how scientific developments in new materials come to practical application (rubber, plastic, silicon, etc.). We need to be doing the same.

What will you find when you turn off your GPS and just drive?

Friday, April 27, 2012

A More Scientific Process for Boosting Quality Assurance





Current State

Most contact centers employ a Quality Assurance program, which consists of reviewing customer interactions and grading the performance of an agent or self-serve technology. For this brief article, I am going to focus on live rep interactions. In nearly all cases, the measurement is based upon behavioral performance against well-defined standards and measured on a well-defined scale.

Many companies now supplement – or even place more priority on – Customer Satisfaction (C-SAT) measurement, which is obtained via customer survey either immediately after the interaction or sometime later.

The Problems
There are significant problems with both methods, especially where measurement of employee performance is concerned.

The two biggest issues are:
1. Low sample size
2. High levels of subjectivity

Quality Assurance Review
For a sample to be statistically meaningful, you're typically targeting something on the order of 5-10% of interactions (the exact requirement depends on volume and the confidence you need). The sample for manual interaction review, however, will sit well below 1% because of the time it takes to thoroughly review an interaction, often multiple times. Though over time these measurements will begin to show some consistent, dependable trends, frequency and subjectivity are sore spots not only for the business but also distractions for the employees themselves. The speculation is often, "But why did they pick that call? My other calls are so much better," or "The other Quality Specialist gave me good grades, but this one doesn't."

C-SAT
The sample size for customer surveys typically lands somewhere between 2 and 6% on the high end, and lower in some environments. While it is generally a great practice to survey your customers, this measure is also highly subjective and its results inconsistent. If you ask customers to rate on a scale from 1 to 10, one customer's 9 is not necessarily another customer's 9. And despite how carefully the questions are framed, customers still tend to punish agents when they are dissatisfied with a company policy (as in cases where an agent is required to say "no"). As with the Quality Assurance effort, a center will see some useful trends over time and will probably identify highly substandard interactions quickly, but as a dependable measure of employee performance it falls well short.
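To make the sample-size concern concrete, here is a rough sketch (the figures are illustrative, not from any real center): the uncertainty around a measured satisfaction rate shrinks only with the square root of the number of surveys, so a few dozen surveys per agent leaves a wide band around the "true" score.

```python
import math

def ci_half_width(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% confidence half-width for a measured
    proportion p (e.g. satisfaction rate) from n surveys."""
    return z * math.sqrt(p * (1 - p) / n)

# An agent whose "true" satisfaction rate is 85%:
for n in (20, 100, 1000):
    hw = ci_half_width(0.85, n)
    print(f"{n:5d} surveys -> 85% +/- {hw * 100:.1f} points")
```

At 20 surveys the band is roughly +/- 15 points – wide enough that two very different agents can look identical – which is the statistical case for scoring far more interactions by machine.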

The Solution
We need to increase sample size to well above statistical relevance and remove as much subjectivity as possible.

The way to do this is with the use of Speech or Text Analytics technology, ('speech' for voice and 'text' for chat, email, or social media support interactions). Unfortunately, most of today's analytical solutions are devoid of the most important component - a modeling component - though the availability of this technology is on the rise.

Step 1 - build the model
Run a large sample of both exemplary and poor interactions through the modeling engine. A good modeling engine will be able to compose a model on hundreds if not thousands of statistical attributes.

Step 2 - develop a scoring process against your model
Your modeling engine should be able to analyze a new interaction against the model and assign a score (rewarding matches on positive attributes and penalizing matches on negative ones).
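Commercial speech and text analytics engines are proprietary and far richer than this, but Steps 1 and 2 can be caricatured in a few lines: learn which words are over-represented in exemplary versus poor transcripts, then score a new transcript by summing those weights. All transcripts and weights below are invented purely for illustration.

```python
import math
import re
from collections import Counter

def tokens(text: str) -> list[str]:
    return re.findall(r"[a-z']+", text.lower())

def build_model(good: list[str], poor: list[str]) -> dict[str, float]:
    """Step 1, toy version: weight each word by the (smoothed) log-ratio
    of its frequency in exemplary vs. poor transcripts."""
    g = Counter(w for t in good for w in tokens(t))
    p = Counter(w for t in poor for w in tokens(t))
    vocab = set(g) | set(p)
    g_total = sum(g.values()) + len(vocab)
    p_total = sum(p.values()) + len(vocab)
    return {w: math.log((g[w] + 1) / g_total) - math.log((p[w] + 1) / p_total)
            for w in vocab}

def score(model: dict[str, float], transcript: str) -> float:
    """Step 2, toy version: sum attribute weights; positive leans 'exemplary'."""
    return sum(model.get(w, 0.0) for w in tokens(transcript))

# Invented example transcripts:
model = build_model(
    good=["thanks for calling, happy to help, resolved today",
          "glad to help, your issue is resolved"],
    poor=["cannot help you, call back later",
          "no, that is not possible, call back"],
)
print(score(model, "happy to help, issue resolved"))    # positive: leans exemplary
print(score(model, "cannot help, call back tomorrow"))  # negative: leans poor
```

A real engine would model hundreds or thousands of attributes (acoustics, silence, phrases, sentiment), but the principle is the same: a statistical fingerprint of good versus bad, applied to new interactions.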

Step 3 - calibrate the model
If you plan to use this measurement in concert with your other measurements, the scale should mirror those measurements, i.e. a call the model scores at 90% should align with a personally reviewed interaction scored at 90%.
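One simple way to do this calibration – assuming you have pairs of raw engine scores and human Quality scores for the same interactions; the numbers here are made up – is an ordinary least-squares line mapping the engine's raw output onto the human-review scale:

```python
def fit_line(xs: list[float], ys: list[float]) -> tuple[float, float]:
    """Ordinary least squares: returns (slope, intercept) so that
    slope * raw + intercept tracks the human-review scale."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Invented calibration pairs: (raw engine score, human QA score on a 0-100 scale)
raw = [1.2, 2.5, 3.1, 4.0, 4.8]
human = [62, 71, 78, 85, 93]
slope, intercept = fit_line(raw, human)

def calibrate(r: float) -> float:
    return slope * r + intercept

print(round(calibrate(3.0)))  # a raw 3.0 mapped onto the human 0-100 scale
```

If the relationship between machine and human scores turns out to be non-linear, a piecewise or rank-based mapping would serve the same purpose; the point is only that the two scales must be made comparable before you blend them.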

Step 4 - apply the model at scale
Once you have a model, you will use the engine to grade many interactions. Theoretically, you could even assess every interaction collected in your center (many centers are required by law to record every interaction). For performance measurement, every interaction for each agent can be run against the model in order to come up with an average machine score, which I'll call a "soft score" if I am communicating the process to employees (sounds a bit more warm and fuzzy, right?).
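The aggregation itself is straightforward (agent names and scores below are invented): run every recorded interaction through the scorer and average per agent.

```python
from collections import defaultdict

def soft_scores(scored_interactions) -> dict[str, float]:
    """Average the machine score of every interaction per agent.
    Input: iterable of (agent_id, score) pairs covering ALL interactions,
    not a hand-picked sample."""
    totals = defaultdict(lambda: [0.0, 0])
    for agent, score in scored_interactions:
        totals[agent][0] += score
        totals[agent][1] += 1
    return {agent: s / n for agent, (s, n) in totals.items()}

# Invented data:
print(soft_scores([("ana", 90), ("ana", 80),
                   ("ben", 70), ("ben", 90), ("ben", 80)]))
# {'ana': 85.0, 'ben': 80.0}
```

Because the average covers every interaction rather than a sliver of them, the "why did they pick that call?" objection largely disappears.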

Step 5 - continually refine the process 
Business climates, customer preferences, and standards change, so it will be important to rebuild your model with some regularity. How you use the soft score is up to you, and there are options. The idea I present here is not necessarily to replace your other measurements but to inject more statistical relevance and objectivity into your ongoing efforts. The soft score is probably best used as a given percentage of an overall measurement that includes Quality Assurance monitoring and C-SAT. At the very minimum, this process could be used to calibrate your staple efforts and to find points of inconsistency – among Quality Specialists on the performance-review side, or among products, services, or policies on the business side.
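Using the soft score as "a given percentage of an overall measurement" is just a weighted average. The weights below are assumptions you would tune to your own program, not a recommendation:

```python
def overall_score(qa: float, csat: float, soft: float,
                  weights: tuple[float, float, float] = (0.4, 0.3, 0.3)) -> float:
    """Blend manual QA review, C-SAT, and the machine 'soft score'
    (all assumed to be on the same 0-100 scale) into one number."""
    w_qa, w_csat, w_soft = weights
    assert abs(w_qa + w_csat + w_soft - 1.0) < 1e-9, "weights must sum to 1"
    return w_qa * qa + w_csat * csat + w_soft * soft

# Invented scores for one agent:
print(overall_score(qa=88, csat=80, soft=84))
```

Shifting weight toward the soft score raises objectivity and sample size; shifting it toward QA review preserves human judgment on nuances the model cannot capture.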