🎙️ “I do feel as though it is a fundamental shift in how we operate as humanity.”

Leisa Reichelt is a big optimist when it comes to AI. In our latest UXR Geeks episode, Leisa expounded on her positive attitude towards artificial intelligence and how she believes our industry will experience what she calls the Contextual Research Renaissance in the upcoming years.

Tune in to learn:
→ What specific changes AI will bring to the UXR industry.
→ How to prioritize research projects and stop being the “quality police.”
→ How researchers can adjust to a faster pace of product development.

Watch a short extract from her talk with Tina Ličková and head over to the comments for links to the full episode! 🔗
UXtweak®
Software Development
UX research platform that allows teams to recruit, conduct, analyze, and share – all in one place.
About us
UXtweak is the user research platform helping teams effortlessly recruit and manage participants, conduct studies, analyze data, and share insights—all in one place. Trusted by top brands, UXtweak has been used to craft 570K+ questions and collect 1.8 million+ responses, helping research teams uncover valuable insights and craft exceptional user experiences.

✨ Start for free today at UXtweak.com

🔧🐝 Why do customer-centered organizations choose UXtweak?

🌍 Recruit globally, reliably
Access participants from 130+ countries with 2,000+ targeting attributes for precise recruitment. Rigorous quality checks and free replacements guarantee reliable feedback so you can focus on insights—not logistics.

🔧 Comprehensive tools for every stage
Test concepts, prototypes, or finished products; conduct live user interviews; and use proven methods like card sorting and tree testing—moderated or unmoderated.

📊 Actionable analytics
Turn raw data into actionable insights with advanced analytics and visualizations. Quickly identify trends, generate reports, and share findings with your team to drive informed decisions.

🤝 Support that gets it
Our customer success team, made up of experienced UX researchers, is available to help you tackle challenges and maximize your research impact.

🎓 Driven by research and community
Founded by researchers, we are committed to advancing UX research and HCI. Explore original studies by our dedicated research team in open-access journals or visit our Research by UXtweak section. We support, educate, and connect the UX community through initiatives like the UX Research Geeks podcast, Women in UX interviews, live events, and educational resources - find out more on our website.
- Website
https://www.uxtweak.com
- Industry
- Software Development
- Company size
- 11-50 employees
- Headquarters
- Bratislava
- Type
- Privately Held
- Founded
- 2019
- Specialties
- UX Research, User Research, Usability Testing, User Testing, Research Recruiting, and UX Testing
Products
UXtweak®
User Research Software
UXtweak is the user research platform helping teams effortlessly recruit and manage participants, conduct studies, analyze data, and share insights - all in one tool trusted by top brands. Start unlocking actionable insights with UXtweak. Request a free demo today!

Why UXtweak?

🌍 Recruit globally, reliably
Access participants from 130+ countries with advanced targeting, ensuring you recruit the right users. Our rigorous quality checks and free replacements guarantee reliable results.

🔧 Run studies your way
Test concepts, prototypes, or finished products; conduct live user interviews; use methods like card sorting and tree testing—moderated or unmoderated.

📊 Analytics that drive action
Turn your data into actionable insights with advanced analytics and visualizations. Easily create and share highlights and reports.

🤝 Support that gets it
Our customer success team of experienced UX researchers is available to help you make the most of your research.
Locations
Primary
Vysoká 26
Bratislava, 81106, SK
Employees at UXtweak®
Updates
UXtweak® reposted this
How do you actually use AI as your UX Research partner?

To answer that, we are bringing in Maria Rosala, Director of Research at Nielsen Norman Group. Next week, Maria is joining us for a live Q&A to cut through the hype and answer your hardest questions on using AI in UXR.

We will be digging into topics like:
• How to use AI as a partner, from planning to synthesis
• What AI can (and can't safely) do
• Practical tips to integrate AI directly into your daily practice

Bring your questions, as we want to make it as helpful for you as possible.

Grab your spot here: https://lnkd.in/dTz83eEq

PS: Sign up even if you can't make it live. The UXtweak® team will send you the recording!
🎙️ "I’m a big optimist when it comes to AI."

In the latest UXR Geeks episode, Tina Ličková talked with Leisa Reichelt, the former Head of Research and Insights at Atlassian. Together they reflected on how AI is changing the UX research industry, the role of UX researchers in the near future, and how we will continue bringing inimitable value to the table by filling the “judgment gap” with deep, longitudinal human observation.

➡️ Swipe to learn the 3 key insights from our conversation with Leisa. Or listen to the full episode - link in the comments 🔗
UXtweak® reposted this
Worth a read: this literature review of 182 studies on synthetic research (currently in pre-print) by people from UXtweak® 👇 https://lnkd.in/dW6WexMZ

Long story short, LLMs are still a long way off from being able to properly simulate real people. They lack important, complex human characteristics and skills such as moral reasoning, express themselves very differently (highly verbose and overly structured), and suffer from so-called "distortions", i.e. differences in biases compared to humans.

Which makes sense, because (stealing a quote here from the authors) "humans are embodied beings that process and act upon sensory input", while "LLMs are predictors of plausible text ... thus lacking a life experience." They simply can't think like we do! 🤖

But perhaps most importantly: these bias distortions are impossible to identify, because the training of LLMs is one big black box 🏴 Yet how can one judge the accuracy or validity of research insights without accounting for their underlying biases (AKA one of the first skills you learn as an aspiring human researcher)?

This is a structural blocker. That LLMs are biased and imperfect is in itself fine; so are humans, in their own way. But not being able to identify what those biases and imperfections are, and hence not being able to account for them, is a problem.

Realistically, though, synthetic research is here to stay. The temptation of easy data - without the hassle of recruitment, the constraints of privacy laws, or having to personally conduct the research - will simply be too great for many to resist. Not to mention the cost-saving opportunities for companies.

But we have to use it responsibly. I endorse the authors' conclusion that synthetic research should be used mainly as a heuristic tool 🤝 Essentially very similar to how an expert can employ a heuristic evaluation of the quality of a product, service, method, or process.
Examples they mention are:
🧐 Explorative studies before engaging with the target audience (for design, I would include benchmarking and brainstorming in this category);
🔎 Critiquing hypotheses (useful for product discovery!);
🔢 Predicting possible effects of variables on complex systems;
🚀 Doing pilot tests before (but not instead of) involving humans.

Any other research use case, in which synthetic users replace humans as the source of insights, as pointed out in the literature review, requires rich, highly contextual research data to train the LLMs ...which obviously totally defeats the purpose: if the creation of a synthetic user requires the recruitment and interviewing of a real person, then what use is the synthetic user? You already have access to that real person for your design project 🤷♂️

#redefineSuccess #design #UX #humanResearch #syntheticResearch #AI #LLM
UXtweak® reposted this
You don't need to be a user researcher to know that humans are weird. But by studying those oddities, our biases and incongruities, you can learn many useful things about people. Can LLMs — with their training on human data — match this?

(This post is the third in my series on LLMs as simulators of human participants, which is the topic of UXtweak®’s recent review of 182 research articles. Check the comments for the link to the first post and the preprint version of our article.)

If a single word could describe LLM-generated participants, it would be “distortion”. Bias by itself might actually be good for simulation, if it matched the biases of humans. The problem is when it doesn’t. Distortion represents biases on top of biases. Below are the prevalent types of distortions that appear left and right in research studying LLM participant simulations:

⬣ Hyper-accuracy. If a question has a by-the-book answer, expect the LLM to just repeat it ad nauseam.

⬣ Hallucinations. If the question has uncertain or subjective answers, expect the LLM to make something up. Quite likely, it will take something out of context and miss the mark. This is not just about factuality — it can also diverge from personas or other information it should stay grounded in.

⬣ Divergence from human patterns. Various complex patterns found in humans, including cognitive biases, public opinions, group dynamics, or differences in personality traits between cultures, just do not manifest in LLMs.

⬣ No latent traits without manipulation. Explicit prompts have a better chance of aligning with humans than implicit ones. Meaning that you essentially need to prime the model to tell you what you want to hear instead of exploring the hidden truth. That is not research.

⬣ Low depth and variability. While elaborate, LLM responses lack the narrative depth and diversity that draw from an authentic human experience. They are unoriginal, homogeneous, repetitive, and shallow.

⬣ Stereotypicality. LLMs reflect caricatures of popular narratives and stereotypes without a sense for nuance.

⬣ Helpfulness, harmlessness, honesty. LLMs are tuned to be useful. Because of this, they act friendly, positive, and sycophantic. This manifests as spinelessness, a high tolerance for issues, and bending of their imposed preferences.

⬣ Temporal lock-in. The data for training LLMs was produced in the past. Some old information is no longer relevant, and the models struggle to catch up to change.

Our article cites references covering these issues in more detail for anybody interested. In my next post, I will discuss something that LLMs actually excel at — believability — and why that isn’t a good thing.

Have you run into any of these distortions in your experimentation?
RSVP for our Live Q&A with Maria Rosala: Using AI as a UX Research Partner 🤖

How can you use AI as a thought partner in UX research without compromising quality? Join our live Q&A with Maria Rosala, Director of Research at NN/g, and find out.

📆 Date: April 30th, 2026
🕔 Time: 12:00 p.m. EST / 6:00 p.m. CET / 9:00 a.m. PST

➡️ Save your spot

What we'll cover:
• How to use AI as a thought partner, from planning to synthesis
• Practical tips for AI-assisted analysis and sense-making
• The biggest misconceptions about AI and bias in research
• What AI can (and can't) do as a research partner

💬 This is a live Q&A format — bring your questions and get answers in real time.
"Using technology has to be geared towards achieving your goals, not replacing them."

In the latest UXR Geeks episode, Tina Ličková talks with Anne Njoroge, a business strategist and UX designer. Together, they explore the silent harms of technology and how products are often designed to be addictive.

Tune in to learn:
→ how technology impacts education and might lead to shorter attention spans
→ how to create guardrails to help protect users’ mental health
→ how to balance business objectives with the philosophy of promoting non-harmful technology

➡️ Swipe to learn the 3 key insights from our conversation with Anne Njoroge. Or listen to the full episode - link in the comments 🔗
Rewatch our webinar: How to Run a 5-Day Research Sprint 🎥

Last week we joined Hannah Knowles for a hands-on webinar on running a 5-day research sprint. Thanks to everyone who attended! If you couldn’t make it, be sure to watch the recording.

What you'll find in the recording:
→ Hannah's 5-day research sprint framework for getting faster answers without sacrificing quality
→ The three rules for scoping research — defining success, answering one question at a time, and balancing your data set
→ A real recruitment case study: going from zero panel to fully recruited participants in one week
→ How to match the scale of your research to the weight of the decision your team is making
→ Practical tips on recruitment channels, sample sizes, and running pilot tests within a sprint

Link in the comments 🔗
UXtweak® reposted this
How do LLMs align with humans from the perspective of cognitive and behavioral psychology? Can they reach similar thoughts and behaviors as people? Or do they just parrot existing information and hallucinate?

This post is the second in my series on UXtweak® Research’s recent review of 182 scientific articles using LLMs as simulators of human participants. (Link to the start of the series here: https://lnkd.in/dkzYCn8H, link to the preprint in comments)

LLMs are based on transformer neural networks. This can create the naive preconception that if we just feed them enough human knowledge, maybe they’ll start thinking like us. However, this is refuted just by looking at the fundamental differences in inputs and outputs. The human brain processes sensory input based on past experiences stored as memories to produce behaviors that help us survive and reproduce. An LLM stochastically generates probable text based on its training dataset to match some fine-tuned user expectations.

Psychologists have tested LLMs through various experiments to see how they compare with humans. While some human-like patterns can emerge, significant differences are evident in:
⬣ Reasoning,
⬣ Decision-making,
⬣ Personality traits,
⬣ Cognitive skills,
⬣ Emotions.

Unlike humans, LLMs seem to always know the correct answer in cognitive reflection benchmarks. They project superior emotional awareness because they have memorized a wide range of emotions. At the same time, theory of mind and causal reasoning tasks can make them flail and lead them to unrepresentative choices. They suppress certain personality traits and emotions in favor of others. Since models lack a body or a lived personal experience, there is no consistency or latent meaning to how they respond. Over the course of a conversation, a model can simply morph into a completely different “person”.

In my next post, I will discuss the downstream effects and the types of distortions that these cognitive misalignments create.
UXtweak® reposted this
Large Language Models (LLMs). How well can they actually simulate human participants? This is the topic of the sweeping review of scientific literature recently completed by our team at UXtweak® Research. This post is the first in a series where I will explain our diverse findings.

With the current push of AI into every area of our lives, slop has become the word of the hour. As an HCI and UX researcher, my professional concern lies in “research slop”. Meaningful research takes time, effort, and skill. To non-researchers — including important stakeholders — a program marketed as “encompassing all human knowledge” can seem a good source of insights. Companies are popping up with solutions that prey on this demand. Is your audience hard to reach? Don’t wish to spend weeks collecting and analyzing data? Just jump straight to conclusions with the magic of AI. ⭐ 🫠

If you’re familiar with research or how AI models work, this likely raises red flags. However, there might also be some cases where LLM-generated data could supplement real research.

Studies on LLM-simulated participants were already out there, but they were scattered across various fields and had very different goals. Our rigorous review, available as a preprint, addresses the need for a comprehensive understanding. Our investigation covers 182 primary studies across social sciences, psychology, HCI, engineering, healthcare, education, economics, and marketing. We compared results, analyzed their methods and how effective they were, and identified patterns and frequent issues. We also interrogated the argument proponents use whenever something doesn’t work: that you just need a more precise prompt or a more powerful, sophisticated model.

In my next post, I will discuss LLMs’ cognitive misalignment: their mechanistic and empirical mismatch with humans from the viewpoint of psychology.