How do you figure out what truly matters to users when you've got a long list of features, benefits, or design options - but only a limited sample size and even less time? A lot of UX researchers use Best-Worst Scaling (or MaxDiff) to tackle this. It's a great method: simple for participants, easy to analyze, and far better than traditional rating scales. But when the research question goes beyond basic prioritization - like understanding user segments, handling optional features, factoring in pricing, or capturing uncertainty - MaxDiff starts to show its limits. That's when more advanced methods come in, and they're often more accessible than people think.
- Anchored MaxDiff adds a must-have vs. nice-to-have dimension that turns relative rankings into more actionable insights.
- Adaptive Choice-Based Conjoint goes further by learning what matters most to each respondent and adapting the questions accordingly - ideal when you're juggling 10+ attributes.
- Menu-Based Conjoint works especially well for products with flexible options or bundles, like SaaS platforms or modular hardware, helping you see what users are likely to select together.
- If you suspect different mental models among your users, Latent Class Models can uncover hidden segments by clustering users based on their underlying choice patterns.
- TURF analysis is a lifesaver when you need to pick a few features that will have the widest reach across your audience, often used in roadmap planning.
- If you're trying to account for how confident or honest people are in their responses, Bayesian Truth Serum adds a layer of statistical correction that can help de-bias sensitive data.
- Want to tie preferences to price? Gabor-Granger techniques and price-anchored conjoint models give you insight into willingness-to-pay without running a full pricing study.
These methods all work well with small-to-medium sample sizes, especially when paired with Hierarchical Bayes or latent class estimation, making them a perfect fit for fast-paced UX environments where stakes are high and clarity matters.
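To give a feel for how light the first-pass analysis can be, here is a minimal count-based MaxDiff scoring sketch in Python. The feature names and responses are invented for illustration, and each response simply records which item a participant picked as best and as worst from the subset they were shown; a real study would typically estimate utilities with Hierarchical Bayes or a latent class model rather than raw counts.

```python
# Minimal count-based Best-Worst (MaxDiff) scoring sketch.
# Feature names and responses are illustrative, not real data.
from collections import Counter

responses = [
    # (items shown, item picked as best, item picked as worst)
    (["offline mode", "dark theme", "export to CSV", "SSO login"], "SSO login", "dark theme"),
    (["offline mode", "dark theme", "export to CSV", "SSO login"], "offline mode", "dark theme"),
    (["offline mode", "export to CSV", "SSO login", "audit log"], "SSO login", "audit log"),
]

best = Counter(r[1] for r in responses)
worst = Counter(r[2] for r in responses)
shown = Counter(item for r in responses for item in r[0])

# Best-minus-worst score, normalized by how often each item was shown.
scores = {item: (best[item] - worst[item]) / shown[item] for item in shown}
for item, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{item:15s} {score:+.2f}")
```

The best-minus-worst count, normalized by exposure, is a common sanity check on MaxDiff data before (or alongside) a formal utility estimation step.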
Ways To Conduct User Research In Engineering Design
Explore top LinkedIn content from expert professionals.
Summary
User research in engineering design helps uncover what truly matters to users, enabling teams to create products that meet real needs while navigating complex systems and constraints. This process leverages a variety of methods to gather insights, prioritize features, and validate decisions efficiently.
- Explore advanced methods: Use techniques like MaxDiff, conjoint analysis, or Bayesian Truth Serum to prioritize features, understand user preferences, and identify hidden user segments.
- Use asynchronous tools: Record short videos with targeted questions to gather quick, focused user feedback without the need for lengthy interviews or scheduling conflicts.
- Engage with participants: Strategically choose users for research, ask for comparisons or metaphors, and co-construct diagrams to gain deeper insights into their mental models and system understanding.
What strategies can we use to do #UserResearch about complex systems, particularly ones we're unfamiliar with? I like to use the following strategies:
✅ Pick pilot participants strategically
✅ Take a tour of the system through different POVs
✅ Ask for comparisons and metaphors
✅ Reflect on counterfactuals and rare scenarios
✅ Co-construct research documentation

1️⃣ Pick pilot participants strategically: We are more efficient as researchers when we have a tentative "outline" of what a system COULD look like before we dive into interview sessions. I like to use pilot participants to help brainstorm that outline, so I always try to recruit the following types of folks, because they tend to have a better grasp of how a system works (or doesn't):
💡 Work in operations
💡 Have long tenures in that role or organization
💡 Do "glue work" (to quote Yvonne Lam)

2️⃣ Ask for a "tour" of the system through different perspectives and through progressively more nuanced explanations: For example, "How would you describe X to a new hire who is unfamiliar with the system but has deep expertise in the work?" versus "someone who is more senior and removed from the everyday work?" You can also ask participants to "correct" your "misunderstanding" of the system by presenting them with a lexicon, process map, diagram, etc. that contains a purposeful mistake. Observe what they correct (first) and what elicits an emotional response. I also appreciate Melanie Kahl's approach of asking about:
💡 How things "really" happen
💡 What common misunderstandings they have to constantly correct
💡 What "informal roles" or invisible work enable things to happen

3️⃣ Ask for comparisons and metaphors: Comparisons - whether scenario-based or metaphorical - are a useful way to ground any abstract or complex system description participants offer. But it's important to remember that when asking participants to generate metaphors, you should also ask them to explain HOW and WHY the metaphor fits. The explanation is often more important than the metaphor itself.

4️⃣ Reflect on counterfactuals and rare scenarios: Particularly when interviewing "expert users", asking "what if" questions can surface tacit knowledge, rules and requirements, red tape, and more. I also like this list of discussion points by Arvind Venkataramani:
💡 Where is change easy and where is it difficult
💡 What part or person, if removed, would cause breakage
💡 What happens when this system is shocked or stressed
💡 What is mysterious to them

5️⃣ Co-construct research documentation: Hand over the pen and paper or digital whiteboard and ask them to map out the system themselves. Observe: What do they start with? What do they designate as foundational elements? What do they center versus put on the periphery?

#PracticalEthics #UXResearch #QualitativeResearch #UX #systemsthinking
Too many product teams believe meaningful user research has to involve long interviews, Zoom calls, and endless scheduling and note-taking. But honestly? You can get most of what you need without all that hassle. 🙅♂️

I've conducted hundreds of live user research conversations in early-stage startups to inform product decisions, and over the years my thinking has evolved on the role of synchronous time. While there's a place for real-time convos, I've found async tools like Loom often uncover sharper insights - faster - when used intentionally. 🚀

Let's break down the ROI of shifting to async. If you want to interview 5 people for 30 minutes each, that's 150 minutes of calls - but because two people are on each call (you and the participant), you're really spending 300 minutes of combined time. Now, let's say you record a 3-minute Loom with a few focused questions, send it to those same 5 people, and they each take 5 minutes to write their feedback. That's 8 minutes per person and just 5 minutes once for you: 45 total minutes versus 300, nearly a sevenfold reduction in the time it takes to get hyper-focused feedback. 🕒🔍

Just record a quick Loom, pair it with 1-3 specific questions designed to mitigate key risks, and send it to the right people. This async, scrappy approach gathers real feedback throughout the entire product lifecycle (problem validation, solution exploration, or post-launch feedback) without wasting your users' time or yours.

Quick example: Imagine your team is torn between an opinionated implementation of a feature vs. a flexible, customizable one. If you walk through both in a quick Loom and ask five target users which they prefer and why, you'll get a solid read on your overall user base's mental model. No need for endless scheduling or drawn-out Zoom calls - just actionable feedback in minutes. 🎯

As an added benefit, this approach also lets you go back to users for more frequent feedback, because you're asking for less of their time with each interaction. 🍪

Note that if you haven't yet established rapport with the users you're sending the Looms to, it's a good idea to introduce yourself at the start in a friendly, personal way. And always express genuine appreciation and gratitude in the video - it goes a long way in building a connection and getting thoughtful responses. 🙏

Now, don't get me wrong - there's still a place for synchronous research, especially in early discovery calls when it's unclear exactly which problem or solution to focus on. Those calls are critical for diving deeper. But once you have a clear hypothesis and need targeted feedback, async tools can drastically reduce the time burden while keeping the signal strong. 💡

Whether it's problem validation, solution validation, or post-launch feedback, async research tools can get you actionable insights at every stage for a fraction of the time investment.
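If you want to rerun the back-of-the-envelope math for your own study size, here is a small Python sketch; the durations are the illustrative figures quoted above and can be swapped for your own.

```python
# Rough comparison of combined researcher + participant time for
# synchronous interviews vs. an async Loom-style request.
# All durations are in minutes and are the illustrative figures from the post.

participants = 5

# Synchronous: both you and the participant spend the full call length.
call_length = 30
sync_total = participants * call_length * 2  # 300 minutes combined

# Async: you record one short video; each participant watches it and writes a reply.
recording_time = 5   # once, for you
video_length = 3     # watched by each participant
reply_time = 5       # written by each participant
async_total = recording_time + participants * (video_length + reply_time)  # 45 minutes

print(f"Synchronous: {sync_total} min, async: {async_total} min "
      f"({sync_total / async_total:.1f}x less combined time)")
```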