🔬 How To Run UX Research In B2B and Enterprise. Practical techniques for strict environments, often without access to users.

🚫 Things you typically can't do
1. Stakeholder interviews ← unavailable
2. Competitor analysis ← not public
3. Data analysis ← no data collected yet
4. Usability sessions ← no users yet
5. Recruit users for testing ← expensive
6. Interview potential users ← IP concerns
7. Concept testing, prototypes ← NDA
8. Usability testing ← IP concerns
9. Sentiment analysis ← no media presence
10. Surveys ← no users to send to
11. Get support logs ← no security clearance
12. Study help desk tickets ← no clearance
13. Use research tools ← no procurement yet

✅ Things you typically can do
1. Focus on requirements + task analysis
2. Study existing workflows and processes
3. Study job postings to map roles and tasks
4. Scrape frequent pain points and challenges
5. Use Google Trends for related search queries
6. Scrape insights to build a service blueprint
7. Find and study people with similar tasks
8. Shadow people performing similar tasks
9. Interview colleagues closest to the business
10. Test with customer success and domain experts
11. Build an internal UX testing lab
12. Build trust and confidence first

In B2B, the people buying a product are not always the same people who will use it. As B2B designers, we have to design at least two different types of experiences: the customer's UX (of the supplier) and the employees' UX (of the end users of the product).

In the customer's UX, we typically work within a highly specialized domain, along with legacy-ridden systems and strict compliance and security regulations. You might not speak with the stakeholder, but rather with company representatives, who regulate the flow of data they share to manage confidentiality, IP and risk.

In the employees' UX, it doesn't look much brighter. We can rarely speak with users, and if we do, often there is only a handful of them. Due to security clearance limitations, we don't get access to help desk tickets or support logs, and there are rarely any similar public products we could study.

As H Locke rightfully noted, if we shine the light strongly enough from many sources, we might end up getting a glimpse of the truth. Scout everything to see what you can find. Find people who are the closest to your customers and to your users. Map the domain and workflows in service blueprints. Most importantly: start small and build a strong relationship first.

In B2B and Enterprise, most actors are incredibly protective and cautious, often carefully manoeuvring compliance regulations and layers of internal politics. No stones will be moved unless there is strong mutual trust on both sides.

It can be frustrating, but also remarkably impactful. B2B relationships are often long-term relationships for years to come, allowing you to make a huge impact for people who can't choose what they use and desperately need your help to do their work better.

[continues in comments ↓]

#ux #b2b
Usability Testing Techniques
Explore top LinkedIn content from expert professionals.
-
We recently wrapped up usability testing for a client project. In the fast-paced environment of agency culture, the real challenge isn't just gathering insights, it's turning them into actionable outcomes, quickly and efficiently. Here's how we ensured that no data was lost, priorities were clear, and progress was transparent for all stakeholders:

1️⃣ Organized Documentation: We broke down the barriers and documented everything in an Excel sheet, categorizing all observations into usability issues, enhancement ideas, and general comments. Each issue was tagged with severity (critical, high, medium, low) and frequency to highlight trends and prioritize fixes (one way to structure such a log is sketched after this post).

2️⃣ Action-Oriented Workflow: For high-severity and high-frequency issues, immediate fixes were planned to minimize potential impact. Ownership was assigned to specific team members, with timelines to ensure quick resolutions, in line with our fast-moving development cycle.

3️⃣ Client Transparency: A summarized report was shared with the client, showing the issues identified, the actions taken, and the progress made. This kept everyone aligned and built confidence in our iterative design process.

I've never before felt the level of confidence that comes from having such detailed and well-organized documentation. It not only gave us clarity and streamlined our internal processes, but also empowered us to communicate progress effectively to the client, reinforcing trust and showcasing the value of our iterative approach.

It's a reminder that thorough documentation isn't just about organizing data, it's about enabling smarter, faster decision-making.

In agency culture, speed matters, but so does precision. How does your team balance the two during usability testing?
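A minimal sketch of what a severity-and-frequency issue log like the one described above could look like in code. The field names, severity weights, and example issues are invented for illustration, not taken from the post:

```python
# Hypothetical issue log: each observation is tagged with a severity and a
# frequency, then sorted so critical, widespread issues surface first.
SEVERITY_RANK = {"critical": 4, "high": 3, "medium": 2, "low": 1}

issues = [
    {"id": "U-01", "type": "usability issue", "note": "Users miss the save button", "severity": "high", "frequency": 6},
    {"id": "U-02", "type": "enhancement idea", "note": "Add bulk export", "severity": "low", "frequency": 2},
    {"id": "U-03", "type": "usability issue", "note": "Checkout error message unclear", "severity": "critical", "frequency": 4},
]

# Prioritize by severity first, then by how many participants hit the problem.
backlog = sorted(issues, key=lambda i: (SEVERITY_RANK[i["severity"]], i["frequency"]), reverse=True)

for issue in backlog:
    print(f'{issue["id"]} [{issue["severity"]}/{issue["frequency"]}x] {issue["note"]}')
```

The same sorting logic works just as well as a formula column in the Excel sheet itself; the point is that severity and frequency become explicit, comparable fields rather than prose.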
-
Ever started a conversation by asking someone their social security number? That's what some usability tests feel like. 🕵️♂️🤦♀️

Getting users to open up takes time. And the right questions. Some things I've learned about running genuinely useful usability tests:

🕵️♀️ Tailor your pre-test questions to your research needs
Don't just ask boring stuff like "How old are you?" Think about what background information will actually help you analyze your results. If you're testing a work tool, ask about company size or role. For a dating or networking app, (non-intrusive) questions about their social life might be a better fit.

🤖 Speak human, not robot
Ditch the jargon! Instead of "Did the product's user interface facilitate ease of navigation?", try "Did you find it easy to move around the site?" Your users will thank you for not making them reach for a tech dictionary.

🎭 Go off-script (sometimes)
Your discussion guide is a map, not a cage. If a user says something interesting, follow that thread! The best insights often come from unexpected detours.

🔜 Use clear tasks, not vague instructions
Instead of saying "Explore the website," give specific, realistic tasks. For example, "Imagine you want to set up a new account. Please go through that process and tell me what you're thinking as you do." This approach mimics real-world usage and helps you identify specific pain points in your user journey.

🕳️ Spot the black holes in your UX
Sometimes, the most important thing is what users don't do. If your "revolutionary" filter feature might as well be invisible, ask why.

🤔 Ask 'why'
"Why did you click there?" can reveal more than a hundred assumptive questions. It can also balance out the quantitative questions. If someone rates a feature 2 out of 5, ask what would have made it a 4 or 5. This combination gives you both the data to spot trends and the insights to understand the reasoning behind those trends.

Here are my learnings on what makes a wildly successful usability test + an Airtable question bank that can help: https://bit.ly/4bODMJc

How do you get your users to trust you during a usability test? What's your go-to 'human-ing' warm-up question?

#usabilitytesting
-
It's here! We ran a usability study of Google's AI Mode with 37 people across 250 tasks. What we saw:

• Clicks were rare. Basically 0.
• When clicks happen, they are mostly transactional.
• AI Mode holds attention. In most sessions, people stayed inside the panel.
• 88% interacted with the text.
• Product previews behave like mini PDPs. Quick spec checks, few exits.

I break down the study, methods, and plays in this week's Growth Memo.
-
For a long time, the golden rule in UX research was simple: just test 5 users, and you will catch 80 percent of usability issues. It made sense in early usability testing, when the goal was catching obvious bugs or severe blockers. But today, UX research often asks bigger questions. We explore subtle user preferences, test multiple design variations, predict market behavior, or validate critical flows in products where mistakes can cost millions. Suddenly, 5 users do not seem enough anymore.

As UX research matured, so did the need for smarter ways to plan sample sizes. Recent years have brought more advanced methods that help researchers move beyond rough estimates. For instance, statistical power analysis adapted for UX allows us to calculate sample size based on expected effect sizes, even when working with small or noisy samples. Bayesian approaches are gaining traction too, offering flexible sample planning that updates based on incoming data, letting you stop early when enough certainty is reached.

Sequential and adaptive sampling strategies are another exciting development, especially for usability studies or preference tests. Instead of setting a fixed number in advance, you continue collecting data until you achieve a desired confidence level, making studies faster and more cost-effective.

Risk-based models are also changing how researchers think about participant numbers. Instead of focusing only on detecting problems, they consider the business or design risk of making a wrong decision, adjusting the sample size based on how much uncertainty you can afford.

Another growing trend is mixing qualitative and quantitative sizing in adaptive ways. Some frameworks now combine early qualitative saturation analysis with quantitative validation stages, offering a dynamic approach where the study evolves based on what you learn.

All of these methods offer something critical that the old "5 users" rule does not: they match the sample size to the research goal, the risk involved, and the complexity of the product. If you are running a simple early discovery study, small samples still work well. But if you are testing pricing sensitivity, final designs, or behavioral metrics that inform big decisions, modern UX research demands more.

It is an exciting time because we now have access to Bayesian calculators, sequential stopping rules, risk modeling tools, and mixed-methods planning guides that make our studies not just bigger, but smarter.
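To make two of those ideas concrete, here is a minimal sketch (not from the post): the classic problem-discovery model behind the "5 users" rule, and a simple sequential Bayesian stopping rule for a task success rate. The 0.31 discovery probability is Nielsen's oft-cited average, and the 0.20 credible-interval width is an arbitrary threshold chosen for illustration:

```python
import math
from scipy.stats import beta

def n_for_discovery(p_problem: float, target: float = 0.80) -> int:
    """Classic problem-discovery model: P(seen at least once) = 1 - (1 - p)^n.
    Returns how many participants are needed to reach the target probability of
    observing a problem that affects a share p_problem of users."""
    return math.ceil(math.log(1 - target) / math.log(1 - p_problem))

print(n_for_discovery(0.31))   # 5  -> the origin of the "5 users, ~80%" rule
print(n_for_discovery(0.10))   # 16 -> subtler, rarer issues need far more people

def keep_testing(successes: int, failures: int, max_width: float = 0.20) -> bool:
    """Sequential Bayesian stopping rule for a task success rate: update a
    Beta(1, 1) prior after each session and stop once the 95% credible
    interval is narrower than max_width."""
    a, b = 1 + successes, 1 + failures
    lo, hi = beta.ppf([0.025, 0.975], a, b)
    return (hi - lo) > max_width  # True -> recruit another participant

print(keep_testing(successes=14, failures=3))    # True: interval still too wide
print(keep_testing(successes=70, failures=15))   # False: precise enough, stop
```

The first function shows why small samples are fine for frequent, obvious problems and inadequate for rare ones; the second shows how a stopping rule lets the data, rather than a fixed quota, decide when you are done.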
-
Ever noticed how two UX teams can watch the same usability test and walk away with completely different conclusions? One team swears "users dropped off because of button placement," while another insists it was "trust in payment security." Both have quotes, both have observations, both sound convincing. The result? Endless debates in meetings, wasted cycles, and decisions that hinge more on who argues better than on what the evidence truly supports.

The root issue isn't bad research. It's that most of us treat qualitative evidence as if it speaks for itself. We don't always make our assumptions explicit, nor do we show how each piece of data supports one explanation over another. That's where things break down. We need a way to compare hypotheses transparently, to accumulate evidence across studies, and to move away from yes/no thinking toward degrees of confidence.

That's exactly what Bayesian reasoning brings to the table. Instead of asking "is this true or false?" we ask: given what we already know, and what this new study shows, how much more likely is one explanation compared to another? This shift encourages us to make priors explicit, assess how strongly each observation supports one explanation over the alternatives, and update beliefs in a way that is transparent and cumulative. Today's conclusions become the starting point for tomorrow's research, rather than isolated findings that fade into the background.

Here's the big picture for your day-to-day work: when you synthesize a usability test or interview data, try framing findings in terms of competing explanations rather than isolated quotes. Ask what you think is happening and why, note what past evidence suggests, and then evaluate how strongly the new session confirms or challenges those beliefs. Even a simple scale such as "weakly," "moderately," or "strongly" supporting one explanation over another moves you toward Bayesian-style reasoning. This practice not only clarifies your team's confidence but also builds a cumulative research memory, helping you avoid repeating the same arguments and letting your insights grow stronger over time.
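A minimal sketch of what this Bayesian-style synthesis could look like for the button-placement versus payment-trust example above. The mapping from the verbal scale to likelihood ratios is an illustrative assumption, not an established standard; the point is that priors and evidence strength become explicit and cumulative:

```python
# Odds-form Bayesian updating over two competing explanations.
# H1: drop-off caused by button placement; H2: drop-off caused by payment-trust concerns.

STRENGTH = {"weakly": 2, "moderately": 5, "strongly": 10}  # how much an observation favors one hypothesis

def update_odds(prior_odds: float, observations: list[tuple[str, str]]) -> float:
    """prior_odds = P(H1) / P(H2). Each observation is (strength, favored hypothesis)."""
    odds = prior_odds
    for strength, favors in observations:
        lr = STRENGTH[strength]
        odds *= lr if favors == "H1" else 1 / lr
    return odds

prior = 1.0  # no prior evidence either way
sessions = [("moderately", "H2"), ("weakly", "H1"), ("strongly", "H2")]
posterior = update_odds(prior, sessions)

print(f"posterior odds H1:H2 = {posterior:.2f}")     # 0.04 -> evidence favors H2
print(f"P(H1) = {posterior / (1 + posterior):.2f}")  # ~0.04
```

Because the posterior odds from one study can serve as the prior for the next, this kind of table naturally becomes the "cumulative research memory" the post describes.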
-
User research is great, but what if you do not have the time or budget for it?

In an ideal world, you would test and validate every design decision. But that is not always the reality. Sometimes you do not have the time, access, or budget to run full research studies. So how do you bridge the gap between guessing and making informed decisions? These are some of my favorites:

1️⃣ Analyze drop-off points: Where users abandon a flow tells you a lot. Are they getting stuck on an input field? Hesitating at the payment step? Running into bugs? These patterns reveal key problem areas. (A rough sketch of this follows after the post.)

2️⃣ Identify high-friction areas: Where users spend the most time can be good or bad. If a simple action is taking too long, that might signal confusion or inefficiency in the flow.

3️⃣ Watch real user behavior: Tools like Hotjar | by Contentsquare or PostHog let you record user sessions and see how people actually interact with your product. This exposes where users struggle in real time.

4️⃣ Talk to customer support: They hear customer frustrations daily. What are the most common complaints? What issues keep coming up? This feedback is gold for improving UX.

5️⃣ Leverage account managers: They are constantly talking to customers and solving their pain points, often without looping in the product team. Ask them what they are hearing. They will gladly share everything.

6️⃣ Use survey data: A simple Google Forms, Typeform, or Tally survey can collect direct feedback on user experience and pain points.

7️⃣ Reference industry leaders: Look at existing apps or products with similar features to what you are designing. Use them as inspiration to simplify your design decisions. Many foundational patterns have already been solved; there is no need to reinvent the wheel.

I have used all of these methods throughout my career, but the trick is knowing when to use each one and when to push for proper user research. This comes with time.

That said, not every feature or flow needs research. Some areas of a product are so well understood that testing does not add much value.

What unconventional methods have you used to gather user feedback outside of traditional testing?

_______
👋🏻 I'm Wyatt, a designer turned founder, building in public & sharing what I learn. Follow for more content like this!
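As a rough illustration of point 1️⃣, here is a hypothetical sketch of computing drop-off per funnel step from exported analytics data. The step names and numbers are invented; it assumes your analytics tool can tell you the furthest step each user reached:

```python
from collections import Counter

FUNNEL = ["view_cart", "enter_shipping", "enter_payment", "confirm_order"]

# Furthest step each user reached in the checkout flow (exported from analytics).
furthest_step = ["enter_payment", "view_cart", "confirm_order", "enter_shipping",
                 "enter_payment", "enter_payment", "view_cart", "confirm_order"]

reached = Counter()
for step in furthest_step:
    # A user who reached a given step also reached every earlier step.
    for s in FUNNEL[: FUNNEL.index(step) + 1]:
        reached[s] += 1

for prev, nxt in zip(FUNNEL, FUNNEL[1:]):
    drop = 1 - reached[nxt] / reached[prev]
    print(f"{prev} -> {nxt}: {drop:.0%} drop-off")
```

The step with the largest drop-off is usually the first place worth watching session recordings for (point 3️⃣) or asking support about (point 4️⃣).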
-
During a usability test, ever noticed that users sometimes put on their 'best performance' when they're being watched? You're likely witnessing the Hawthorne effect in action!

It happens to us as well. When working from home, during meetings you're more attentive, nodding more, and sitting up straighter, not just because you're engaged, but because you're aware that your colleagues can see you. This subtle shift in your behaviour due to the awareness of being observed is a daily manifestation of observation bias, also known as the Hawthorne effect.

In the context of UX studies, participants often alter their behaviour because they know they're being observed. They might persist through long loading times or navigate more patiently, not because that's their natural behaviour, but to meet what they perceive are the expectations of the researcher. This phenomenon can yield misleading data, painting a rosier picture of user satisfaction and interaction than is true. When it comes to UX research, this effect can skew results because participants may alter how they interact with a product under observation.

Here are some strategies to mitigate this bias in UX research:

🤝 Build Rapport: Set a casual tone from the start, engaging in small talk to ease participants into the testing environment and subtly guiding them without being overly friendly.

🎯 Design Realistic Scenarios: Create tasks that reflect typical use cases to ensure participants' actions are as natural as possible.

🗣 Ease Into Testing: Use casual conversation to make participants comfortable and clarify that the session is informal and observational.

💡 Set Clear Expectations: Tell participants that their natural behavior is what's needed, and that there's no right or wrong way to navigate the tasks.

✅ Value Honesty Over Perfection: Reinforce that the study aims to find design flaws, not user flaws, and that honest feedback is crucial.

🛑 Remind Them It's Not a Test: If participants apologise for mistakes, remind them that they're helping identify areas for improvement, not being graded.

So the next time you're observing a test session and the participant seems to channel their inner tech wizard, remember: it might just be the Hawthorne effect rather than a sudden surge in digital prowess. Unmasking this 'performance' is key to genuine insights, because in the end, we're designing for humans, not stage actors.

#uxresearch #uxtips #uxcommunity #ux
-
When we run usability tests, we often focus on the qualitative stuff: what people say, where they struggle, why they behave a certain way. But we forget there's a quantitative side to usability testing too. Each task in your test can be measured for:

1. Effectiveness: can people complete the task?
→ Success rate: What % of users completed the task? (80% is solid. 100% might mean your task was too easy.)
→ Error rate: How often do users make mistakes, and how severe are they?

2. Efficiency: how quickly do they complete the task?
→ Time on task: Average time spent per task.
→ Relative efficiency: How much of that time is spent by people who succeed at the task?

3. Satisfaction: how do they feel about it?
→ Post-task satisfaction: A quick rating (1–5) after each task.
→ Overall system usability: SUS scores or other validated scales after the full session.

These metrics help you go beyond opinions and actually track improvements over time. They're especially helpful for benchmarking, stakeholder alignment, and testing design changes. We want our products to feel good, but they also need to perform well.

And if you need some help, I've got a nice template for this! (see the comments)

Do you use these kinds of metrics in your usability testing?
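A minimal sketch of the per-task metrics described above, plus the standard SUS scoring formula. The session data, task name, and one participant's SUS answers are invented for illustration:

```python
from statistics import mean

tasks = {
    "create_account": [  # (completed?, seconds on task, post-task rating 1-5)
        (True, 74, 4), (True, 91, 5), (False, 180, 2), (True, 66, 4), (True, 103, 3),
    ],
}

for name, sessions in tasks.items():
    success_rate = mean(int(done) for done, _, _ in sessions)
    avg_time = mean(secs for _, secs, _ in sessions)
    satisfaction = mean(rating for _, _, rating in sessions)
    print(f"{name}: {success_rate:.0%} success, {avg_time:.0f}s avg, {satisfaction:.1f}/5 satisfaction")

def sus_score(responses: list[int]) -> float:
    """Standard SUS scoring: odd-numbered items contribute (r - 1),
    even-numbered items contribute (5 - r); the total is scaled by 2.5 to give 0-100."""
    total = sum((r - 1) if i % 2 == 0 else (5 - r) for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # one participant's 10 answers -> 85.0
```

Keeping the raw (completed, time, rating) tuples per session makes it easy to re-run the same benchmark after a design change and compare task by task.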