
Guerilla User Research

I’ve had the privilege of working alongside several fantastic UX Researchers over the last decade. Anytime I had a project where I could get help from one I gladly took the opportunity to learn everything I could.

My understanding at this point is still basic; there’s so much more to learn. But I’ve been able to grasp enough for tactical purposes in order to make decisions on a product.

UX research is a field of study where we spend time trying to understand the needs of customers, then match those needs against the product we’re building to make sure there’s enough of a connection point for the business to fill them.

It’s useful. It helps companies shift and make decisions. It’s also controversial. Many founders I’ve worked with eschew research, seeing it as gum in the works, something to stop everything up and keep the team from building something great.

Following is my basic plan for UX research, and what I’ll typically do when I don’t have access to a great UX researcher on the team.

The example I’ll use is only one way to do it, but it may fit well depending on the type of product (or app) you’re trying to build. It’s specific to a mobile consumer app that already exists, where you’re trying to improve a feature or introduce a new one.

---

For my mobile consumer app I’d like to improve paid trial conversions. My hunch is that a new feature may be enough of an enticement to encourage new users to pay.

First I validate the feature with the team. We talk through it, discuss its technical and design challenges, and analyze whether it’s likely to improve the metric we’re going for.

At this point we have two options: do some initial design and work through some concepts, or talk to a few potential users first.

Because we’re a startup, we’re short on time and money, and speed matters above all else. Either of those options is great, but we’d rather take some designs to users so we have concrete responses we can use to improve the design. In that case I work with the team to design screens and get a feel for the new feature.

We go through some sketches and concepts, and land on a mid-fidelity wireframe. It has good-enough copy, the colors we need, and all the elements on the screen. What it doesn’t have is a pixel-perfect design. We’re not worried about fonts, spacing, a design system, or anything else that would slow us down. We’re looking for clarity. Is it clear what this feature is trying to accomplish?

Next up we need to talk to some users. Since we want to test this on new users, our existing customer base isn’t as helpful. We’re a consumer mobile app, so we have a general appeal to a wide audience.

In this case we could go to a coffee shop with gift cards and ask for 30 minutes of each person’s time. This is probably the ideal case. But we’re a distributed team and want to have multiple team members around the country listen in.

So we create a Craigslist ad (maybe Instagram ad these days) and ask for volunteers to join a short survey in exchange for a gift card. We link them to a Google form and ask them to fill out a few questions.

This is a filter form. We ask if they have experience in the field of our app, ask if they’ve bought anything related to products we are involved with, and then ask some questions about age, gender, and income. All these answers go to a Google Sheet, and we reach out to six interviewees with a link to our Calendly availability, letting them know that a gift card will be shared with them at the end of the survey.

Once we’ve had three responses, we jump on calls with them as quickly as possible. Based on past experience we know as many as half of them will never show, so it’s a bit of a numbers game.

On the calls one member of the team asks questions and the rest turn off cameras and mics. When the person shows (hopefully), we ask permission to record the call, and the speaker (often the person leading product, but it really can be anyone) asks clarifying questions. Generally we try to learn about their background with the industry, whether they’re familiar with our product, and any buying habits related to our product.

Then we let them know that there is no wrong answer, and anything they say will help us improve the designs. As we present designs we remind them to say their thoughts out loud as they walk through the feature.

We ask them to share their screen and send them a link to our Figma prototype, where they can click around on the design and move between pages. If they showed up on a mobile device the process may still work, but if it doesn’t, one of our team will share their screen and click through for them. This isn’t as ideal, but the information can still be useful.

The goal at this point is not to lead them with any specific questions. Asking whether they think the design is amazing would bias the entire interview. Instead we ask them to attempt goals that we think a user might want to accomplish with the feature. As they stumble around—and they almost always do, this is why we do the research—we ask them what they expected a button would do, what they think some words on the screen mean, and continue to remind them to speak out loud.

If they accomplish the task we make note of that, but primarily we’re looking for friction points where they couldn’t accomplish the task; where they got stuck.

If someone else is running the interview I (as the designer) will sometimes update the design live to see if they are more easily able to meet the goal.

We take 30-45 minutes, thank them profusely, then email the gift card.

We then repeat until we’ve done it three to six times.

Six is ideal, but also a lot of time and energy for a small startup. So, to be practical, I suggest founders shoot for three. It’s enough to find the big friction points, and it’s light enough that you can tweak the designs quickly and either do another round of three interviews or ship the feature and see the results in code.

The biggest thing I’m fighting against is not doing research at all. So I’d rather have three interviews than none.

This type of guerilla research can be done in a day, sometimes just a few hours, and is incredibly rewarding. The big risk you run with only three users is that they may heavily skew a feature in a way that leads you down the wrong path. Three users are useful for spotting a trend, which is sometimes enough in a product. Six users is generally a safe point to understand something, but three gives you a gut check.

For my part, just having a single user stumble through buttons and get confused is often enough to make me think about a way to improve accessibility for everyone in the app.

Even after 17 years of design I’m still amazed at the simplest things I can miss in my designs, and have appreciated every user session I’ve had the privilege of running or being part of.

---

I wanted to share this narrative to hopefully help you see how simple this can be. Don’t ask leading questions, try to find users who have interest in your field or product, and ask them to attempt to accomplish the goals of your feature.

Once that’s done look back at the overall metric you’re trying to move, and see if the two connect. Then go build it and see.

After building in code it’s often useful to do another quick test with a few users and see whether there’s any place they stumble.

If finding users is challenging, grab some friends, family members, or colleagues (who did not help build this feature) and run through the interview with them. Fresh eyes are key here.

You’ve got this! Feel free to reach out if you need any help running guerilla research, I love talking about this stuff and can either answer some quick questions, or give a little more guidance if needed.

Next up, I’ll probably share my experience with async user testing (Lyssna.com and UserTesting.com).