How to Get the Bang Out of Your Research Buck: 5 Ways for Startups to Learn About Their Early-Stage Products Without Breaking the Bank

Katerina Schenke, PhD
The Startup

--

It’s the old catch-22: You need money to do research and you need research to make money.

Call it data, call it insights. Whatever you call it, in the information economy, the smarter you are, the better you’ll do. Research on users’ needs inspires development of high-demand products — and, in the case of educational products, highly effective ones. Research on users’ engagement (UX) informs smooth, sticky, and profitable user journeys. Research on products’ impacts empowers compelling claims — claims that can win grants, delight funders, and convert discerning consumers. Design should be iterative and responsive to data, because data = dollars. Data are a “need to have,” not a “nice to have.” So how do you collect data?

You could hire a delightful professional like me — I’d be happy to solve your research pain and support your success while respecting your budget. Let’s talk!

But how do you collect data when your dollars are stretched six ways from Sunday? If I were limited to just one pearl of wisdom, it’d be this: JUST START. Your methods (questions, expectations, storage, etc.) don’t have to be perfect. Some data are better than no data, and the process of collecting data points you toward collecting more and better data. So just start. Yesterday. ;-)

For those who can handle not one but FIVE pieces of sage advice (and I’ve buried a secret somewhere in the middle!), read on.

1. Understand your product

Photo by C D-X on Unsplash

What does your product do? How does it work? Why?

These bedrock ideas should ground your product development. They’ll keep you from “mission/scope creep,” or chasing all the shiny bells and whistles that your product doesn’t need (at least not in Phase 1). Staying scrupulously focused will also streamline user journeys and keep you on schedule, which saves you money.

Importantly, these core ideas should inform investment.

For example, let’s say you’re developing an app that gets people to exercise. How are you getting them to do it — what’s your lever (researchers call this a mechanism)? What’s the thing that will open the floodgates and move users from their baseline of NOT exercising to the promised land of EXERCISING? Note, it might be a series of levers, as in a gear system — think “This leads to that, which leads to that.” (Researchers call this a theory of change or a logic model.)

So let’s say your app runs on accountability — that’s your lever. You believe that boosting users’ sense of accountability to a meaningful community increases the odds that they’ll exercise. All right, then lean into that. Optimize the app’s accountability features (e.g., making pledges, publishing stakes, suggesting team goals). Deprioritize other features (e.g., tracking steps, recommending workouts).
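If it helps to see that written down, a logic model can be as simple as an ordered causal chain. Here’s a minimal sketch in Python for the hypothetical exercise app; every feature and outcome name in it is illustrative, not a real product spec.

    # A hypothetical logic model for the exercise app, written as a causal chain.
    # Every name here is illustrative, not a product spec.
    LOGIC_MODEL = [
        ("user makes a public pledge", "feels accountable to their community"),
        ("feels accountable to their community", "shows up for planned workouts"),
        ("shows up for planned workouts", "exercises more each week"),
    ]

    # Features on this chain (pledges, stakes, team goals) get built first;
    # features off it (step tracking, workout recommendations) can wait.
    for cause, effect in LOGIC_MODEL:
        print(f"{cause} -> {effect}")

Writing the chain out this explicitly makes it obvious which proposed features sit on the causal path and which are decoration.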

These core ideas should also guide your user testing. They make recruitment easier, because you can explain the product simply enough that users will give it a try. And they should shape your investigations.

For example, if your product runs on accountability and users aren’t exercising more three weeks later, then you know where to start digging: accountability. Maybe the “dose” isn’t high enough. Ask about the extent to which users feel accountable. Ask which accountability features they engage with. Get their take on potential modifications — more push notifications, say, or enrolling more meaningful members of their community.

See if you can get your lever to deliver the desired result. And if it just doesn’t — if maximizing accountability, so far as you can tell, after giving it your good old college try, just isn’t driving exercise — then you can stop throwing good money after bad. Take on a redesign, leveraging users’ data to illuminate a more viable lever.
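To make that digging concrete, here’s a toy check in Python. The field names and numbers are entirely made up; the point is the comparison it runs: do users who actually pulled the lever show the outcome?

    # Toy diagnostic with made-up field names and numbers: do users who engaged
    # with the accountability features (the lever) change their weekly workout
    # count more than users who didn't?
    users = [
        {"pledges_made": 3, "workouts_change": 2},
        {"pledges_made": 1, "workouts_change": 1},
        {"pledges_made": 0, "workouts_change": 0},
        {"pledges_made": 0, "workouts_change": 1},
    ]

    def mean(xs):
        return sum(xs) / len(xs)

    engaged = [u["workouts_change"] for u in users if u["pledges_made"] > 0]
    unengaged = [u["workouts_change"] for u in users if u["pledges_made"] == 0]

    # If even the engaged group isn't moving, the lever itself is suspect.
    print(f"engaged: {mean(engaged):+.1f}; unengaged: {mean(unengaged):+.1f}")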

Note, this also spares you from collecting exclusively “meaningless” data. By meaningless, I mean metrics that tell you nothing about engaging, retaining, or satisfying users. People who understand their product won’t ask only about, say, appeal (e.g., liking the colors or icons), and then scratch their heads when the product fails to take off. “I don’t get it, everyone thought it looked great!” People who understand their product can tell what merely contributes to success from what (allegedly) drives it. So these folks recognize which questions pertain to peripheral considerations and can deprioritize them by default and/or drop them at crunch time.

2. Define your research goal

Photo by Markus Winkler on Unsplash

What do you want to find out?

Do you want to know more about your users — gather insights so you can design a solution that people will embrace? (Researchers call this formative research.)

Do you want to discover if your product works — document evidence that it does what it says? (Researchers call this summative research.)

What’s your bar for whether your product works? For folks who want sales-boosting marketing copy, collecting users’ rosy testimonials will do the trick. For folks who need statistically significant proof of effectiveness, a different type of data (collected and analyzed methodically) is required.

Figure out your “why” so you can plan the right process.

3. Start systematically

Photo by Jeswin Thomas on Unsplash

Make data a priority from the get-go. Create a standard survey — which can be super basic! — for collecting data so that you can save yourself future pain in at least two ways:

  1. Whenever you’ve got to hop into an informal user testing session, you’ve got a tool you can quickly grab. No reinventing the wheel!
  2. Whenever you want to get a sense of Feature X, you’ve got a deep data set because you consistently asked about Feature X across every user encounter. You won’t kick yourself for forgetting to ask about it in spontaneous Sessions 1–6, when you were flying by the seat of your pants.

This doesn’t handcuff you to your first approach. Over time, you can add or omit questions, change your process, etc. I do advise that you keep at least a few Qs consistent from Day One so you can get the long view; but otherwise, follow your bliss. The bottom line is, your choices should be deliberate and, once made, carried forward consistently (until you decide to evolve again). This is my hidden secret: by eliminating arbitrariness and operator error from your procedure, you’re doing your team a huge favor.

Of course, once you’ve collected your data, always store it in the same, accessible place. Because if you can’t find your data when you need it, it’s as if you hadn’t collected it in the first place.
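If your team is comfortable with a little scripting, here’s a minimal sketch of what “systematic” can look like in practice, in Python: the same core questions every session, appended to one CSV in one known place. The question wording and the file path are assumptions of mine, purely illustrative.

    # A minimal sketch of systematic collection: fixed core questions from
    # Day One, every session appended to one CSV in one known place.
    import csv
    from datetime import date
    from pathlib import Path

    # Core questions asked in every session (illustrative wording).
    CORE_QUESTIONS = [
        "On a scale of 1-10, how likely are you to recommend this product?",
        "Which feature did you use most this week?",
        "What, if anything, almost made you stop using it?",
    ]

    DATA_FILE = Path("user_research/responses.csv")  # one file, one place

    def record_session(user_id, answers):
        """Append one user's answers, plus the session date, to the shared CSV."""
        DATA_FILE.parent.mkdir(parents=True, exist_ok=True)
        is_new = not DATA_FILE.exists()
        with DATA_FILE.open("a", newline="") as f:
            writer = csv.writer(f)
            if is_new:
                writer.writerow(["date", "user_id"] + CORE_QUESTIONS)
            writer.writerow([date.today().isoformat(), user_id] + answers)

    record_session("user_007", ["8", "team goals", "too many notifications"])

Even if you never script anything, the same discipline applies to a spreadsheet: fixed columns, one file, every session appended.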

4. Don’t overthink it

Photo by Markus Winkler on Unsplash

Before we dive too deep, let’s recognize that not all products’ stakes are the same. A life-saving product, like an inhaler, needs far more rigorous testing than an entertaining or educational physical product, like a fidget toy, or a free or modestly priced digital product, like an app. So let’s limit this conversation to products with lower stakes.

I say, dedicate an afternoon to writing your simple survey, program it in user-friendly Google Forms, and you’re good to go. If you are lucky enough to have some users nearby, test the survey with them to make sure they understand what you are asking. Even “piloting” your survey with one or two users will give you a lot of insight.

Which questions go into a survey?

Start with the questions that really matter to you, the ones that will catch respondents at their “freshest” and/or before they bounce. Put lower-priority questions and demographic information (yes, you should be collecting some information on who your users are) at the end to optimize data quality and avoid stereotype threat.

Write questions that will deliver both numbers and words. Questions that deliver numbers ask about, for example, points on a scale from 1 to 10, how often per week you do X, etc. (Researchers call this quantitative data.) Questions that deliver words ask “Why?”, “How?”, etc., and provide boxes for typed responses. (Researchers call this qualitative data.) Both types of data are valuable for different reasons, so go after both. I must admit, though, that people tend to (wrongly) believe that numbers don’t lie, or at least view them as more objective and factual. So definitely get some numbers.
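As a sketch, a survey mixing the two might look like this; the wording and type labels are mine, purely illustrative:

    # Illustrative survey items mixing quantitative and qualitative questions.
    SURVEY = [
        # Quantitative: numbers you can chart and compare over time.
        {"type": "scale_1_to_10", "text": "How useful was the product this week?"},
        {"type": "count_per_week", "text": "How many times did you use it this week?"},
        # Qualitative: words that explain the numbers.
        {"type": "open_text", "text": "Why did you give that rating?"},
        {"type": "open_text", "text": "How, if at all, has it changed your routine?"},
    ]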

Demographic information is inherently quantifiable. Basic Qs include: name; age (or birth year if you think that people might lie about their age); gender; race/ethnicity; contact info. I’ve found that it’s useful to ask teachers about their number of years teaching and their higher degrees (because this tends to predict teachers’ motivation and excellence). It’s useful to ask parents about their highest level of education attained (because this tends to predict annual household income, which has all sorts of implications).
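For reference, a basic demographic block, asked at the end per the ordering advice above, might look like this; the field names are mine and purely illustrative:

    # Illustrative demographic items, placed at the end of the survey.
    DEMOGRAPHICS = [
        "name",
        "birth_year",      # often more honest than asking age outright
        "gender",
        "race_ethnicity",
        "contact_info",
        "years_teaching",  # for teachers: tends to predict motivation
        "highest_degree",  # for teachers and parents
    ]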

I’ve created these simple Google Forms. Go ahead and use them, share them, and let me know how it went!

5. Don’t rush to measure your long-term outcomes

Photo by Matt Duncan on Unsplash

This is a big topic with serious implications. In a nutshell, improperly measuring your long-term outcomes can tank your entire operation. You may find that your product doesn’t produce the results you’re claiming (or hoping) it does. This might be because you’re too early in product development: you may need to resolve issues around usability and engagement before you can detect effects on the outcomes you care about. In the context of ed tech products, you might have to see if children enjoy your game before you can determine whether they’ve learned something from it. To really dive into why that is and what you can do about it, stay tuned for future blog posts from me on the topic.

And there you have it! Five ways for startups to get the bang out of their research buck. I hope this inspires you to go get those dollar-driving data — or hire me to do it for you!

Katerina Schenke, PhD, is Founder and Principal at Katalyst Methods and cofounder of EdTech Recharge, where she works with educational media companies to design and evaluate games, software, and assessments. She also works with organizations that care about learning, like Facebook, the Connected Learning Lab, and UNICEF, to run research projects that help them improve educational policy and practice. Learn more at katalystmethods.com
