
One of the biggest challenges in product design is making sure that great user research doesn’t just sit on the sidelines. You do the mahi - craft thoughtful archetypes, capture real customer preferences, needs, and behaviours - and then somewhere along the way, that work ends up tucked into a deck that gets referenced less and less.
As a senior experience designer, I’m driven by keeping the customer front and centre. I want the decisions we make, from big feature sets to small design tweaks, to always come back to real needs. So while working with one of our media clients to explore future features, I had a moment where I thought: What if those archetypes weren’t just research artefacts? What if they were part of the team?
So, I decided to build something.
Turning personas into participants
What I ended up with is an AI-powered archetype tool that flips traditional research on its head. It’s a kind of virtual research assistant, designed to keep user needs front and centre throughout the entire design process.
Its job is to reflect the behaviours, needs and attitudes of our key user archetypes and challenge us to think about how new features really address a customer need. If it doesn’t like an idea, it has to say why. And not just ‘nah, don’t like it’; it has to give at least three ways to make the idea better. It’s been a huge help in ideation, especially when we hit a wall and just need something to bounce ideas off.
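For the technically curious, here’s roughly what that behaviour contract can look like when encoded in a system prompt. This is a simplified sketch using OpenAI’s Python SDK; the prompt wording, the archetype placeholders and the critique_idea helper are all illustrative, not our actual setup.

```python
# A minimal sketch of the "say why, plus three improvements" rule.
# The prompt wording and archetype placeholders are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = """You are {archetype_name}, a user archetype grounded in real research.
Stay true to this archetype's behaviours, needs and attitudes:
{archetype_summary}

When a feature idea is put to you:
- Say clearly whether it meets a genuine need for you, and why.
- If you don't like it, never just reject it: give at least three
  concrete ways the idea could be made better for you.
"""

def critique_idea(archetype_name: str, archetype_summary: str, idea: str) -> str:
    """Ask one archetype to critique a feature idea."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT.format(
                archetype_name=archetype_name,
                archetype_summary=archetype_summary)},
            {"role": "user", "content": f"Feature idea: {idea}"},
        ],
        temperature=0.7,  # enough variation to feel like a person, not a spec sheet
    )
    return response.choices[0].message.content
```

Because the rules live in the system prompt, the archetype can’t just say ‘nah’; every rejection has to arrive with suggestions attached.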
Building it (with no coding background)
I’m not a developer. So the build was a bit of an adventure.
I built it inside OpenAI’s playground in a ring-fenced space, starting with a system prompt and some help from my colleague Josh in the web team. From there, I iterated on prompts, tested different models, and trained it with archetypes based on existing user data. We even threw in a few PDFs from earlier research work to help ground its worldview. After that it was a lot of trial and error, tweaking the parameters until we got something that felt right.
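If you’ve never poked around the playground, ‘tweaking the parameters’ mostly means adjusting the same sampling controls the API exposes. Here’s a hedged sketch; the values below are illustrative, not the settings we actually landed on.

```python
from openai import OpenAI

client = OpenAI()

# The playground's sliders map onto these API arguments.
# Values here are illustrative, not the settings we landed on.
response = client.chat.completions.create(
    model="gpt-4o",           # we tried several models along the way
    temperature=0.8,          # higher = more varied, persona-like answers
    presence_penalty=0.3,     # nudge it away from repeating the same points
    max_tokens=600,           # keep critiques focused, not essays
    messages=[
        {"role": "system", "content": "You are one of our user archetypes..."},
        {"role": "user", "content": "What do you think of a 'continue watching' rail on the home screen?"},
    ],
)
print(response.choices[0].message.content)
```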
One of the most surprising moments was when I showed it to Amanda, one of our Principal Designers, and she immediately asked, “Can it give feedback on a wireframe?” Up until then, I’d just been training the model on text. So we did… and the AI got it straight away, giving us feedback on the layout and flow. Suddenly, this wasn’t just a language model; it could analyse visuals too. And that’s when I knew I’d cracked it.
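What makes this possible is that the newer models accept images alongside text in the same conversation. A minimal sketch, assuming a local wireframe.png and a vision-capable model:

```python
import base64
from openai import OpenAI

client = OpenAI()

# Encode the wireframe so it can travel in the request as a data URL.
with open("wireframe.png", "rb") as f:
    wireframe_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # a vision-capable model; the exact model is an assumption
    messages=[
        {"role": "system", "content": "You are one of our user archetypes..."},
        {"role": "user", "content": [
            {"type": "text", "text": "As this archetype, what works and what doesn't in this layout and flow?"},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{wireframe_b64}"}},
        ]},
    ],
)
print(response.choices[0].message.content)
```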
A helpful sidekick (not a human replacement)
There’s been one big question that’s come up a lot: does this replace the need to talk to actual users?
Absolutely not.
As someone who’s spent years doing user research, that was my biggest hesitation at first. But this tool doesn’t bypass the human side of research: it amplifies it. It helps us stress-test ideas so we’re better prepared when we do speak to real people. It sharpens the focus; it doesn’t shortcut the process.
Humans are great at linking unexpected ideas. AI’s not quite there. But it can help us push the weird ones further, or bring them back to earth. This tool helps make ideas stronger before they go in front of users. It keeps the human voice in the room. And it means we can get more out of our conversations with real customers, not less.
What’s next?
We’re using the tool as I’d intended: it’s part of our regular design process, sense-checking feature design against archetypes and staying true to the customer. It helps expand our thinking when we’re starting broad, but it also helps bring things back to the core user need.
We’ve even run focus groups with the archetypes, where they all sit around a virtual table discussing a new feature. It’s oddly wholesome. No snark, no derailments. Just personas working together to make ideas better.
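If you’re wondering how a focus group of personas might be wired up, here’s one hedged way to orchestrate it: each archetype gets its own system prompt, and a simple loop passes the growing transcript around the table. The archetype names, summaries and structure below are made up for illustration; they’re not a record of how our sessions actually run.

```python
from openai import OpenAI

client = OpenAI()

# Illustrative archetype prompts; the real ones are grounded in research data.
ARCHETYPES = {
    "Casual Catch-Upper": "You watch in short bursts and value picking up where you left off...",
    "Binge Planner": "You plan viewing ahead of time and hate losing your place in a series...",
    "Background Browser": "You often have content on while doing other things...",
}

def focus_group(feature: str, rounds: int = 2) -> list[str]:
    """Pass a feature idea around the virtual table for a few rounds."""
    transcript: list[str] = [f"Moderator: What do you all think of this feature? {feature}"]
    for _ in range(rounds):
        for name, persona in ARCHETYPES.items():
            response = client.chat.completions.create(
                model="gpt-4o",
                messages=[
                    {"role": "system", "content": (
                        f"You are {name}. {persona} You are in a focus group "
                        "with other archetypes. Build on what others said and "
                        "suggest concrete improvements.")},
                    {"role": "user", "content": "\n".join(transcript)},
                ],
                temperature=0.8,
            )
            transcript.append(f"{name}: {response.choices[0].message.content}")
    return transcript
```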
As for what’s next, I’d love to give it a face, a proper UI, something themed and interactive. At the moment it lives in a pretty dry developer interface. But it’s ready to evolve.
There’s talk of expanding the archetypes to represent the whole NZ streaming market: not just current customers, but future ones too. Also on the roadmap is pulling in real-time data from APIs (such as viewing behaviour or trending content), so it’s not just responding based on past research but actively learning and adapting in real time.
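To make that roadmap item concrete, here’s one speculative sketch: fetch live signals from a (hypothetical) internal trends endpoint and fold them into the archetype’s context before asking for feedback. The URL, response shape and helper name are all invented for illustration.

```python
import requests
from openai import OpenAI

client = OpenAI()

# Hypothetical internal endpoint; the URL and response shape are assumptions.
TRENDS_URL = "https://internal.example.com/api/trending"

def ask_with_live_context(archetype_prompt: str, idea: str) -> str:
    """Fold live viewing signals into the archetype's context (illustrative)."""
    trending = requests.get(TRENDS_URL, timeout=5).json()  # e.g. a list of titles
    context = "Currently trending titles: " + ", ".join(trending[:10])
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": f"{archetype_prompt}\n\n{context}"},
            {"role": "user", "content": f"Feature idea: {idea}"},
        ],
    )
    return response.choices[0].message.content
```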
Why I’m excited
The problem we set out to solve was simple: good research shouldn’t fade into the background. This tool keeps it alive. And more importantly, it makes it usable.
As designers, we’re great at making strange, surprising connections. That’s something AI still struggles with. But what this tool is great at is refining and stretching ideas once we’ve had them, helping them land more meaningfully with real users.
It’s not here to replace what we do. It’s here to push us, support us, and maybe, occasionally, argue with us. In the friendliest possible way.