Games User Research Methodology Series: Concept Tests

Player Research
7 min read · Jan 14, 2021

Video game ‘Concept Testing’ is a research approach, specifically tailored for the games industry, to explore design direction very early in development.

It’s a solution for teams wanting to reality-check their game ideas, and be more informed in making big decisions surrounding potential products.

Each development and publishing house has its own take on assessing potential game concepts. Understandably, those assessments usually centre on funding:

Is this a game idea worthy of our effort, time, and money?

But Concept Testing with real players has the potential to deliver plenty more: discovering useful insights about your future audience, and seeding novel game ideas by learning about players’ frustrations and unmet needs. If ‘purchase intent’ and ‘uniqueness’ are allowed to be the entire focus of early-stage research, then it’s a missed opportunity.

In their embryonic state, these first playables, greybox prototypes, even moodboards or visual target footage are the foundations of the player experience being forged.

This is where the power of understanding your player is at its most potent.

At its core, Concept Testing is the act of getting a group of representative players to give useful feedback on assets that are indicative of new game ideas, all in a low-bias and secure environment.

Sample slides with quantitative (left) and qualitative (right) feedback. We always ask a mix of both question types, to probe not only the ‘whats’ but also the ‘whys’.

We tailor our approach to Concept Testing on a per-title basis, maximising the insights that can be gathered from players, including:

  • Informing novel design
  • Prioritising potential features
  • Ranking and rating visual styles
  • Digging into what defines target audiences

Concept Tests can greenlight concepts, inspire design pillars, prioritise features, clarify target audiences, and more.

Product Vision & Risks

The first step in organising a concept test is a workshop among researchers and stakeholders. The first discussion: “What assets could we utilise to communicate our game ideas to players?”.

From our experience, the assets we know work well include:

  • Concept art (characters, levels, setting)
  • Videos (gameplay, trailers, world-setting)
  • Gameplay prototypes (white box or more developed)
  • Branding assets (logos, descriptions)
  • In-game menu mockups
  • In-game stills (with or without UI)
  • Narrative outlines (neutrally worded)

Next discussion: “What else could we learn about our future players, that might push our concept ‘over the horizon’?”.

The answers here are unique to every project and every audience. A good starting objective is to learn about comparable games players haven’t loved, or didn’t stick with, ensuring future development doesn’t fall into the same traps.

A Concept Test’s objectives are built around these facets: exploring what teams want to show and what teams ought to know before committing to design direction.

Purpose-built labs help ensure responses are unique to each participant: their genuine thoughts, uninfluenced by others.

Asking The Right Questions

Games user researchers are expert at shaping assets, questions and topics into a survey or interview guide.

A structured questionnaire can carefully explore game assets or ideas one by one — setting, theme, character art, proposed features, and more — unrolling the concept progressively to gather detailed feedback on each aspect of the game.

Quality question-writing ensures we learn not only whether a game concept appeals to the target audience, but also why it is or is not appealing. A quality survey considers the order in which content is revealed, progressively rolling out different aspects of the game to gather actionable, accurate feedback about each element in turn. And because people innately favour the first and last things they see (primacy and recency effects), asset order is randomised within the survey too.
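That randomisation step can be sketched in a few lines. This is a minimal illustration, not Player Research’s actual survey tooling; the asset names and the idea of seeding by participant id are assumptions for the example.

```python
# A minimal sketch of per-participant randomisation of asset order,
# mitigating primacy/recency bias. Asset names are illustrative.
import random

ASSETS = ["concept art", "gameplay video", "branding", "narrative outline"]

def presentation_order(participant_id: int) -> list[str]:
    """Return a participant-specific shuffled copy of the asset list.

    Seeding with the participant id keeps each order reproducible for
    later analysis, while still varying order across the sample.
    """
    rng = random.Random(participant_id)
    order = ASSETS.copy()
    rng.shuffle(order)
    return order

print(presentation_order(1))
print(presentation_order(2))  # another participant typically sees a different order
```

Seeding per participant is one design choice; a survey platform’s built-in block randomisation achieves the same goal.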

The actual assets presented will be defined by what you need to discover, and what assets you have available.

We always use a combination of quantitative and qualitative data.

It’s good research practice to gather a mix of both: quantitative rating scales to measure participant sentiment, and qualitative free-text fields for participants to give us rich, contextualising responses in their own words.
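One way to picture that pairing is as a survey item that always couples a closed rating with an open follow-up. This is a hypothetical data structure for illustration only; the field names and scale anchors are assumptions, not a real survey schema.

```python
# A minimal sketch of a mixed-methods survey item: every quantitative
# rating is paired with a qualitative follow-up probing the "why".
# Names and the 1-7 scale are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SurveyItem:
    asset: str             # which asset is being shown
    rating_prompt: str     # closed question, answered on a numeric scale
    followup_prompt: str   # open question, answered in the player's own words
    scale: tuple = (1, 7)

item = SurveyItem(
    asset="World-setting trailer",
    rating_prompt="How appealing is this game's setting to you?",
    followup_prompt="In your own words, what makes it appealing or unappealing?",
)

print(item.rating_prompt)    # measures the 'what'
print(item.followup_prompt)  # captures the 'why'
```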

Interviews or discussion groups can be run to collect additional qualitative, conversational data, but only after individual participants have already given their own responses.

The aim in a Concept Test is to gather individual responses and compare them, not capture a vanilla-flavoured consensus of the group.

Other forms of influence are mitigated: players shouldn’t feel led by the opinions of others around them, as in focus groups where ‘groupthink’ is strong. Assets are presented without grandiosity, culling anything that feels like a sales pitch or hyperbole. Where possible, we’ll remove the development studio’s name, to avoid the ‘halo effect’ or fan service.

Running a Concept Test

A concept testing session invites groups of participants from your target audience(s) to our custom-built playtesting labs.

As with all in-person research, participants are sought from the general public and screened to match the target audience. The most important screening criteria are playtesters’ experience with the game’s genre, and their prior purchases of similar titles.

Playtesting in-person isn’t possible in a pandemic, but Concept Testing can be run remotely too.

Our goal in a Concept Test is to gather subjective feedback: opinions and feelings towards a game. And with that data, to identify meaningful trends across the whole group’s opinions.

We recruit a sample of no less than 30 participants for each defined target audience.

Why 30+? This is a balance of research rigour and practical value. Existing academic research and our own experience highlights ~30 as the ‘Goldilocks’ of sample sizes, when seeking data on opinions.

This line in the sand means researchers and stakeholders can build confidence in the conclusions from the group’s data: that it is likely enough to apply across the whole audience that they can reliably make decisions informed by the insight. Fewer participants risk being misleading; more risk being both superfluous and expensive.

When comparing multiple different audience profiles, we recruit larger numbers in multiples of ~30.
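A back-of-the-envelope calculation shows why gains flatten around that mark. This is not the article’s own analysis; the standard deviation below is an assumed spread for a 7-point rating scale, chosen purely to illustrate the 1/√n shrinkage of the standard error.

```python
# Illustrative only: the standard error of a mean rating shrinks with
# 1/sqrt(n), so precision gains taper past roughly 30 responses.
import math

def standard_error(sd: float, n: int) -> float:
    """Standard error of the mean for spread sd and sample size n."""
    return sd / math.sqrt(n)

sd = 1.5  # assumed spread of a 7-point rating scale
for n in (10, 30, 100):
    print(n, round(standard_error(sd, n), 3))
```

Going from 10 to 30 participants buys a noticeable tightening of the estimate; going from 30 to 100 triples the cost for a much smaller gain.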

Making Sense & Making Changes

The next step sounds simple: make some sense of all this structured feedback.

Researchers carefully read and analyse every response from every participant, looking to identify trends and recurrent themes relevant to the research questions, categorising and compiling them using our specialist analysis software.

It’s a process that takes several days to complete, even with years of experience, templates and time-tested approaches.

Compiling these patterns into a report, along with our own commentary, context, and data visualisations, takes another few days.

The result: Detailed feedback on each asset or feature that captures players’ first impressions, combined with the ‘why’ that lets teams react to (or ignore) feedback in confidence.

Furthermore, deeper dives into future players’ wants, needs, and potential opportunities for diversification provide fodder for design inspiration, driving the game design beyond what exists today.

Misusing Concept Testing

This approach of exploring a target audience in detail, and presenting game assets in turn, isn’t a ‘silver bullet’:

Finding a target audience

The old adage ‘a solution in search of a problem’ holds true for video games. Bringing in random people to ‘see who sticks’ is an approach best suited to soft launches, click-through tracking, or otherwise casting a wider net. Concept Testing is for clarifying a defined audience’s opinions, and validating your assumptions about their attitudes.

Predicting future behaviour

People are poor predictors of their own future behaviour. It’s tempting to ask players in Concept Tests to predict it (“Will you play this game? Will you spend in this game?”), but that data can forge false confidence.

Learnability, onboarding or balancing

Concept Tests typically won’t give much useful information on a game’s usability, learnability, onboarding, or balancing. These questions are best answered by studies rooted in detailed observation of play.

Conclusion

A Concept Test is an effective method for feeding player data into early game design direction. It goes beyond asking players “is this a good game?”, delving into the ‘why’ or ‘why not’ of players’ subjective impressions.

Concept Tests allow teams to make more informed decisions on where to take a game concept next: what to prioritise, what to adjust, and even whether a concept should be set aside entirely, each led by rigorous research.

Concept Testing fits neatly into a wider picture of research across the development lifecycle, as part of a mixed-methods strategy that ensures player data is used throughout development.

See our other articles for more information about games user research and the other games user research methods to employ.


Player Research

The premier games user research partner. Where gaming instinct meets scientific insight. PlayerResearch.com