Bringing Questions into Focus (Groups)

In a previous post, I wrote at length about my participation in a practicum involving Susan Schreibman’s Versioning Machine, and about the design work that I was doing. However, my role in the practicum includes more than just design; I am to help with outreach as well. Part of outreach is letting an audience know that something exists, but another important part lies in collecting feedback from users. While one of the easiest methods of collecting such feedback is a survey, it is potentially more illuminating—especially when gauging how easy or difficult something is to use—to conduct a focus group.

Conducting a focus group means bringing a number of people together in one place and having them use a product while discussing their experiences or answering specific questions. Such a group generally requires a moderator, both to keep participants on task and to collect the data being generated. Focus groups can be resource-intensive: a moderator must be present for the duration of the group, a location (physical or virtual) for conducting the group needs to be secured, and participants must be willing to donate more time than a simple survey generally requires.

As such, focus groups tend to generate smaller data sets than surveys do, but the trade-off is quite significant. First, focus groups can collect users’ immediate feedback at the very point at which they use a product, rather than requiring them to recall an experience from the past; second, behaviour can be observed that users may not even think to report, such as how long it takes a user to find a particular function or the various things a user tries when first learning to navigate an object; and third, issues may be discovered, or information gathered, that the researchers may not even think to ask about in a survey.

In short, the difference between a focus group and a survey can be seen as a choice between quantity and quality. Generally, when the quality-versus-quantity question is posed, the “correct” choice is expected to be quality, but with data collection that is not a given. Qualitative data can be very useful, but from a statistical standpoint, quantitative data, such as the 1-5 scales seen so often on surveys, is much more easily analysed and can be very powerful in large numbers. The choice between large amounts of quantitative data and small amounts of qualitative data comes down to the purpose for which the data is being collected. In the case of the Versioning Machine, we wanted a small-scale idea of how our new features and interface were received (and how intuitive they were) in order to help guide further development. In a case like this, a focus group is ideal.
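As an aside, the claim that quantitative data is easily analysed can be made concrete in a few lines of code. The sketch below (in TypeScript, with invented ratings; nothing here comes from an actual survey) reduces a column of 1-5 responses to a mean and a frequency table—a mechanical summary that a transcript of focus group comments simply does not admit:

```typescript
// Summarising hypothetical 1-5 survey ratings; every score below is invented.
const scores: number[] = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5];

// One pass gives the mean...
const mean = scores.reduce((sum, s) => sum + s, 0) / scores.length;

// ...and another gives the distribution across the five scale points.
const counts = new Map<number, number>();
for (const s of scores) {
  counts.set(s, (counts.get(s) ?? 0) + 1);
}

console.log(`Mean rating: ${mean.toFixed(2)} (n = ${scores.length})`);
for (let point = 1; point <= 5; point++) {
  console.log(`  ${point}: ${counts.get(point) ?? 0} response(s)`);
}
```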

This blog post will not go into all the particulars of setting up and conducting a focus group—a simple web search for “how to conduct a focus group” yields a wealth of information on that front. Rather, I am going to focus on one area that I feel is of utmost importance when collecting information: how to ask questions. While I am now a digital humanist and have worked in a number of different fields, the piece of paper I received upon completing my undergraduate studies labels me as a psychologist. One of the most critical points impressed upon me while I was working as a psychologist was how careful one must be in asking questions. After all, many of us are conditioned early on in school to believe that every “question” has a correct “answer”, and we as humans have become very good at picking up subtle cues as to what someone asking a question wants to hear in response.

Dr. Robert Cialdini has written a fantastically illuminating book on the ways that subtle cues can affect another person’s behaviour, appropriately titled Influence: The Psychology of Persuasion. The book was required reading during my freshman year, and I have kept its lessons in mind ever since. For research purposes, the factors outlined in Cialdini’s Influence become a list of things to avoid when asking questions. Some of the triggers that can affect responses are surprising: in his introduction, for example, Cialdini describes the research of Ellen Langer, who found that the simple presence of a reason when asking for a favour has a dramatic impact on the likelihood of someone granting that favour—regardless of how meaningful the reason is. Specifically, Cialdini writes that when people waiting in line to use a copy machine were asked “Excuse me, I have five pages. May I use the Xerox machine?”, 60% agreed to let the asker cut in front of them, while when they were asked “Excuse me, I have five pages. May I use the Xerox machine because I have to make some copies?”, the request was met with 93% compliance (Cialdini 4). The simple presence of a clause beginning with “because”, regardless of whether the reason that followed provided any additional information, was sufficient to dramatically alter the responses.

So how, then, does one craft a question to get feedback that accurately represents how someone feels? When I am asking questions to get at what someone thinks or feels, I remember a book I read in high school, Dr. Virginia M. Axline’s Dibs in Search of Self. In the book, a fascinating case study of a troubled young child, therapist Dr. Axline is very careful to phrase her questions to the child Dibs so as not to lead him toward any particular response. Questions can seem very innocuous, but even asking for value judgements using words such as “like” or “enjoy” can prod a user in one direction or another.

It may seem strange to think that a case study on child therapy can have implications for data collection, so perhaps an example is in order. Consider this button that we have on the new Versioning Machine:

[Image: the question mark (“?”) button from the new Versioning Machine interface]

As buttons go, this question mark is extremely straightforward. When I designed the new Versioning Machine’s layout, I intended for the question mark to open tooltips showing how the site works. Much to my surprise, however, I found that in the implementation the question mark button was used instead to bring up bibliographic information about the document being examined in the Versioning Machine. Both can be argued to be perfectly reasonable applications of a question mark: one produces information about the document being used, while the other produces information about the Versioning Machine itself. When I saw how the question mark was being applied, I disagreed with its use: I felt that the question mark should point to help about using the Versioning Machine itself. But I didn’t want my focus group participants to know how I felt about the button’s application before they told me their own impressions—it would be unprofessional, for one, but their feedback would also be unreliable if I primed them to feel a particular way with my questions.
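To make the ambiguity concrete, here is a minimal sketch of the two competing click handlers. This is not the Versioning Machine’s actual code; the element IDs and class names are invented purely for illustration:

```typescript
// Hypothetical sketch only: the element IDs and class names below are
// invented, and this is not the Versioning Machine's actual implementation.

function showInterfaceHelp(): void {
  // My original intent: reveal tooltips explaining how the site works.
  document.querySelectorAll<HTMLElement>(".vm-tooltip").forEach((tip) => {
    tip.hidden = false;
  });
}

function showBibliographicInfo(): void {
  // The behaviour as implemented: show metadata about the current document.
  const panel = document.getElementById("bibliographic-info");
  if (panel) {
    panel.hidden = false;
  }
}

// Both readings of "?" are defensible; which one users expect is exactly
// the kind of question a focus group can answer.
document.getElementById("question-mark-button")
  ?.addEventListener("click", showInterfaceHelp); // or showBibliographicInfo
```

With that ambiguity in mind, here are some of the questions I could have asked about the button, and reasons why they would not have been ideal.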

1. “Don’t you think that the question mark button should point to a help file instead of to information about the document itself?”

This question is terrible for reasons that are, I hope, immediately evident. Any question that begins with “don’t you think” is essentially asking for agreement rather than for an honest opinion. In this case, it immediately tells the listener that the “correct” response is to say that yes, the question mark should point to a help file. It also forces the participant to choose between two alternatives rather than allowing a neutral response. Granted, someone may still take a contrary position if he or she feels very strongly that the question mark button’s use is correct, but a participant with no strong opinion about the button’s function is likely simply to agree with the moderator. Asking a question like this may seem like an obvious mistake, but it does happen. Many years ago I worked for a company that held a focus group to find out what users thought of a product in development, and I sat and cringed as my boss asked a string of leading questions like this one and then left satisfied that the participants completely agreed with his opinions. When the product was finally released, my boss could not understand why the opinions of the wider consumer base were not wildly positive like those of the focus group had been.

2. “Should the question mark button bring up help about the Versioning Machine itself?”

This question takes a neutral tone, and as such is better than question #1, but it still falls short by introducing an idea that may not otherwise have occurred to the focus group participants. I call these “seed ideas”, because they are quiet little thoughts planted in questions that can quickly sprout in participants’ minds. A question like this is often met with “Oh, that’s a good idea!” Wonderful to know, but it doesn’t answer the question of whether the participants found the button’s original function perfectly satisfactory to begin with. And once that new idea has taken root, it’s impossible to back-pedal and get at that initial impression: there’s too much new-idea foliage in the way. There is occasionally a place in a focus group for asking about a possible feature change such as this, but those occasions are few and far between. Questions like this are generally best avoided.

3. “Do you think the question mark button is poorly implemented?”

Here is another leading question. “Poorly implemented” introduces a value judgement, which participants are then expected either to accept or to reject. Furthermore, the question’s negative tone is a subtle cue to participants that they should be finding fault with the question mark button, even if they initially thought it was fine. This can cause participants to second-guess their own initial reactions, leading to responses that do not reflect their true impressions. As with question #1, a participant who feels very strongly that the question mark button is functioning fine as-is will probably still say so, but participants with no strong feelings one way or the other will suddenly feel that they are expected to take sides—and in that case, they are likely to side with the moderator.

4. “Do you like the question mark button?”

This is a very common form for a question to take, but it is nonetheless not ideal. “Do you like…?” is one of those tricky questions that sounds neutral while really introducing a value judgement. It is, essentially, the positive counterpart to the negative stance presented in question #3. People are so used to being asked questions beginning with “Do you like…” that the subtle prod the phrase gives is easy to overlook, but showing someone a question mark button and asking, “Do you like the question mark button?” is like handing someone a plate of strawberries and asking, “Do you like strawberries?” Answering in the negative is a subtle kind of rejection, and social niceties dictate that we generally shouldn’t reject something presented to us unless we have a good reason. This is the kind of question that is likely to be met with “Yeah, I think it’s fine” from people who normally wouldn’t care one way or the other.

5. “What do you think of the question mark button?”

This, finally, is an ideal question. It is specific in that it asks for feedback about a particular element of the design, but it avoids any words that may prime participants to feel a particular way or to have a particular idea. Also, unlike every other question on this list, it cannot be answered with a simple “yes” or “no”—it does not, in and of itself, expect participants to make value judgements, while still providing room for them to do so if they wish. Any time a particular element is being asked about, a question beginning with “What do you think of…” is generally a safe approach. Questions along the lines of #1 to #4 may come into play as follow-up questions once a participant indicates that he or she has a particular opinion, but question #5 is the one that should start the conversation, as it gives no indication of expecting a value judgement at all.

In the case of the Versioning Machine, opinions in the focus groups held so far have been mixed. In the first two sessions, the feeling was very strong that the question mark button was confusing and that any button labelled “?” should bring up help about the interface itself rather than about the document being examined, while in the third session, no fault was found with the button at all. Three data points do not make for conclusive results, but I am at least aware that the question mark button is a potential area of concern. And while I cannot yet decide whether the button’s implementation is ideal, I can be confident that the opinions expressed by the participants are genuine, because of the effort I put into crafting neutral questions.

And getting genuine responses is, after all, what focus groups are all about.

Further Reading

Axline, Virginia M. Dibs in Search of Self. 1964. New York: Ballantine Books, 1986. Print.

Cialdini, Robert B. Influence: The Psychology of Persuasion. Rev. ed. New York: Quill-William Morrow, 1993. Print.

Note: A 2006 revision of this book has been published by Harper Business.
