
What’s The Perfect Design Process?

Design process is messy. You might be following a structured approach, but with all the last-minute changes and overlooked details, too often, it takes on a life of its own. And before you know it, you are designing in a chaotic environment full of refinements, final-final deliverables, and missed deadlines.

This article is part of our ongoing series on design patterns. It’s an upcoming part of the video library on Smart Interface Design Patterns 🍣 and is a part of the live UX training as well.

What’s The “Right” Design Process?

Of course, there is no “right-and-only” way to frame a design process. It’s defined by whatever works well for you and for your team. Personally, I tend to rely on 4 design models that seem to fit well with my design work:

  • Double Diamond Process for its comprehensive and reliable methodology for solving problems. In this guide, Dan Nessler breaks down the entire Double-Diamond process into single parts, explaining how exactly it works, step-by-step, in all fine details.

  • Triple Diamond Process for its more realistic approach to the designer’s input across the product’s life cycle. That’s a piece by Adam Gray on why bringing flexibility to the messy reality of the design process is critical to improving planning and involving design work as prototypes are being built.

  • Enterprise Design Thinking Model by IBM for its focus on design maturity and scale, which really helps large organizations. A useful model that helps argue for user research, user-centricity, and rapid low-fidelity prototyping — and how to transfer ownership to design teams at scale.

  • Hot Potato process, for its simplicity in bridging design and development across the entire product lifecycle. Designers and developers throw ideas, mock-ups, and prototypes back and forth continually. Sometimes there are more involved design phases than dev phases, but there is no hand-off, and the entire process is driven by continuous collaboration.

These ways of thinking about the design process have translated into a process that works well for me but has to be adjusted for every project I’m working on. In a nutshell, here’s how it works.

A Process That Works For Me

There is no such thing as enough user research. In every project, I start by involving users as early as possible. I explore all the data we have, interview customer support and the service desk, check for technical debt and design issues, backlog items, and dismissed ideas. I explore organizational charts to understand layers of management. I set the right expectations and seek allies.

From there, I would typically spend weeks or even months in diagrams and spreadsheets and endless docs before drawing a single pixel on the screen. I try to get developers on board early, so they can start setting up the dev environment right away.

I bring in stakeholders and people who have a vested interest in contributing to the success of the project. Voices that need to be heard but are often forgotten. I see my role as a person who needs to bridge the gap between business requirements and user needs through the lens of design.

Then I take a blank piece of paper and start sketching. I sketch ideas. I sketch customer journey maps. I sketch content boxes. I write down components that we will surely need in the product — the usual suspects. I set up a workshop with designers and developers to decide on names. Then developers can go ahead and prototype while designers focus on UI and interaction design.

To make sure I get both sides of the equation right, I draft customer journey maps, brainstorm ideas and prioritize them with the Kano model and Impact ÷ Effort matrix (with developers, PMs, and stakeholders).

I don’t want to waste time designing and building the wrong thing, so I establish design KPIs and connect them with business goals using KPI trees. I get a sign-off on those, and then the interface design starts.

I develop hypotheses. Low-fidelity mock-ups. Speak to developers. Get their feedback. Refine. Throw the mock-ups to developers. Bring them into HTML and CSS. Test hypotheses in usability sessions until we get to an 80% success rate for top tasks. Designers keep refining, and developers keep building out.

Establish a process to continuously measure the quality of design. Track task completion rates. Track task completion times. Track error rates. Track error recovery rates. Track accessibility. Track sustainability. Track performance. In a B2B setting, we track the time customers need to complete their tasks and try to minimize it.

Make them visible to the entire organization to show the value of design and its impact on business KPIs. Explain that the process isn’t based on hunches. It’s evidence-driven design.

Establish ownership and governance. The search team must be measured by the quality of search results for the top 100 search queries over the last two months. People who publish content are owners of that content. It’s their responsibility to keep it up-to-date, rewrite, archive, or delete it.

Refine, refine, refine. Keep throwing new components and user journeys to developers. Stop. Test with users to check how we are doing. Keep going and refine in the browser. Continuously and rigorously test. Launch and keep refining. Measure the KPIs and feed the results into the next iteration of the design.

Admittedly, it is a bit messy. But it helps me stay on track when navigating a complex problem space in a way that delivers measurable results, removes bias and subjectivity from design decisions, and helps deliver user-centric designs that also address business needs.

Wrapping Up

Of course, there is no “right-and-only” way to frame a design process. It’s defined by whatever works well for you and for your team. Explore options and keep them in mind when designing your design process. Whatever you choose, don’t follow it rigidly just for the sake of it, and combine bits from all models to make it right for you.

As long as it works well for you, it’s right. And that’s the only thing that matters.

You can find more details on design patterns in the video library on Smart Interface Design Patterns 🍣 — with a live UX training that’s coming up in September this year.

Further Reading on Smashing Magazine

Meet Success At Scale, A New Smashing Book By Addy Osmani

Today, we are very happy to announce our new book: Success at Scale, a curated collection of best-practice case studies capturing how production sites of different sizes tackle performance, accessibility, capabilities, and developer experience at scale. Case studies are from industry experts with guidance that stands the test of time.

Join Addy Osmani, your curator, as we dive into a nuanced look at several key topics that will teach you tips and tricks that may help you optimize your own sites. The book will also include short interviews with each contributor on what additional lessons, challenges, and tips they have to share some time after each case study was written.

High-quality hardcover. Curated by Addy Osmani. Cover art by Espen Brunborg. Print and eBook shipping in fall 2023. Pre-order the book.

Contents

Each section of the book is filled with case studies from real-world large-scale web applications and services, interviews with the people involved, and key takeaways to help you achieve the same success.

  • Performance includes examples of measuring, budgeting, optimizing, and monitoring performance, in addition to tips for building a performance culture.
  • Capabilities is about bridging the gap between native capabilities and the modern web. You’ll explore web apps, native apps, and progressive web applications.
  • Accessibility makes web apps viable for diverse users, including people with temporary or permanent disabilities. Most of us will have a disability at some point in our lives, and these case studies show how we can make the web work for all of us.
  • Developer Experience is about building a project environment and culture that encourage support, growth, and problem-solving within teams. Strong teams build great projects!
Who This Book Is For

This book is for professional web developers and teams who want to deliver high-quality web experiences. We explore dimensions like performance, accessibility, capabilities, and developer experience in depth. Success at Scale goes beyond beginner material to cover the pragmatic approaches required to tackle these challenges in the real world.

About the Author

Addy Osmani is an engineering leader working on Google Chrome. He leads Chrome’s Developer Experience organization, helping reduce the friction for developers to build great user experiences.

Technical Details
  • ISBN: 978-3-910835-00-9 (print)
  • Quality hardcover, stitched binding, ribbon page marker.
  • Free worldwide airmail shipping from Germany starting in fall 2023.
  • eBook available for download in fall 2023 as PDF, ePUB, and Amazon Kindle.
  • Pre-order the book.
Community Matters ❤️

Producing a book takes quite a bit of time, and we couldn’t pull it off without the support of our wonderful community. A huge shout-out to Smashing Members for the kind, ongoing support. The eBook is and always will be free for Smashing Members as soon as it’s out. Plus, Members get a friendly discount when purchasing their printed copy. Just sayin’! ;-)

More Smashing Books & Goodies

Promoting best practices and providing you with practical tips to master your daily coding and design challenges has always been (and will be) at the core of everything we do at Smashing.

In the past few years, we were very lucky to have worked together with some talented, caring people from the web community to publish their wealth of experience as printed books that stand the test of time. Heather and Steven are two of these people. Have you checked out their books already?

Understanding Privacy

Everything you need to know to put your users first and make a better web.

Add to cart $44

Touch Design for Mobile Interfaces

Learn how touchscreen devices really work — and how people really use them.

Add to cart $44

Interface Design Checklists

100 practical cards for common interface design challenges.

Add to cart $39

Designing Sticky Menus: UX Guidelines

We often rely on sticky headers to draw users’ attention to critical features or calls to action. Think of sidebar navigation, CTAs, sticky headers and footers, “fixed” rows or columns in tables, and floating buttons. We’ve already looked into mobile navigation patterns in Smart Interface Design Patterns, but sticky menus deserve a closer look.

As users scroll, a sticky menu always stays in sight. And typically, it’s considered to be a good feature, especially if the menus are frequently used and especially if we want to speed up navigation.

However, sticky menus also come with a few disadvantages. In his recent article, Sticky Menus Are Problematic, And What To Do Instead, Adam Silver discusses some common usability issues of sticky menus — and how to solve them. Let’s take a closer look.

When Sticky Menus Are Useful

How do we decide if a menu should be sticky or not? This depends on the primary job of a page. If it’s designed to primarily convey information and we don’t expect a lot of navigation, then sticky menus aren’t very helpful.

However, if we expect users to navigate between different views on a page a lot and stay on the page while doing so — as is often the case on long landing pages, product pages, and filters — then quick access to navigation, an A–Z index, or tabs can be very helpful.

Also, when users compare features in a data table, sticky headers help them verify that they are always looking at the right piece of data. That’s where sticky headers or columns can aid understanding. That’s why sticky bars are so frequently used in eCommerce, and in my experience, they improve the discoverability of content and the speed of interaction.

Keep Sticky Headers Small, But Large Enough To Avoid Rage Taps

The downside of sticky menus is that they typically make it more difficult for users to explore the page, as they obscure content. Full-width bars on mobile and desktop are common, but they need to be compact, especially on narrow screens. And they need to accommodate accessible tap sizes to prevent rage taps and rage clicks.

Typically, that means we can’t have more than five items in the sticky bar navigation. The choice of items displayed in the sticky menu should be informed by the most important tasks that users need to perform on the website. If you have more than five items, you probably need to look into some sort of overflow menu, as displayed by Samsung.
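As a rough sketch of that overflow pattern, the logic boils down to capping the visible entries and moving the rest into a “More” menu. The function below is illustrative, not taken from any specific implementation:

```javascript
// Sketch: keep at most `max` entries in the sticky bar. Once the list
// overflows, reserve the last visible slot for a "More" (overflow) menu.
// Illustrative names; rendering the actual menu is up to you.
function splitNav(items, max = 5) {
  if (items.length <= max) {
    return { visible: items, overflow: [] };
  }
  // One slot is taken by the "More" trigger itself.
  return {
    visible: items.slice(0, max - 1),
    overflow: items.slice(max - 1),
  };
}
```

So a six-item navigation would show four items plus a “More” trigger holding the remaining two.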

Whenever users have to deal with forms on a page on mobile, consider replacing sticky menus with accordions. Virtual keyboards typically take up to 60% of the screen, and with a sticky bar in view, filling in a form quickly becomes nothing short of impossible.

Accessibility Issues of Sticky Menus

By their nature, sticky menus always live on top of the content and often cause accessibility issues. They break when users zoom in. They often block the content for keyboard users who tab through the content. They obscure links and other focusable elements. And there is often not enough contrast between the menu and the content area.

Whenever we implement a sticky menu, we need to make sure that focusable elements are still visible with a sticky menu in action. And this also goes for internal page anchors that need to account for the sticky bar with the scroll-padding property in CSS.
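For the anchor issue, the fix essentially means offsetting scroll targets by the height of the sticky bar. Here’s a small sketch; in the browser you would assign the result to `document.documentElement.style.scrollPaddingTop` (or set `scroll-padding-top` directly in CSS), and the extra gap is an illustrative choice, not a value from the article:

```javascript
// Sketch: derive a `scroll-padding-top` value so that in-page anchor
// targets land below the sticky bar instead of underneath it.
// `extraGapPx` is an assumed breathing-room value.
function scrollPaddingFor(headerHeightPx, extraGapPx = 8) {
  return `${headerHeightPx + extraGapPx}px`;
}
```

For example, a 64px sticky bar yields a scroll-padding-top of “72px”.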

Avoid Multiple Scrollbars Of Long Sticky Menus

When sticky menus become lengthy, the last items on the list become difficult to access. We could make them visible with some sort of an overflow menu, but often they appear as scrollable panes, causing multiple scroll bars.

Not only does this behavior cause discoverability issues, but it’s also often a cause for mistakes and repetitive actions on a page. Ideally, we would prevent it by keeping the number of items short, but often it’s not possible or can’t be managed properly.

A way out is to show the menu as an accordion instead in situations when the space is limited, especially on mobile devices. That’s what we do at Smashing Magazine in the checkout, with a button that reveals and hides the contents of the cart when needed.

Partially Persistent Menus

Because sticky menus often take up too much space, we could reveal them when needed and hide them when a user is focused on the content. That’s the idea behind partially persistent headers: as a user starts scrolling down, the menu disappears, but then any scrolling up prompts the menu to appear again.

The issue with this pattern is that sometimes users just want to jump back to a previous section of the page or double-check some details in a previous paragraph, and the menu often gets in the way. Page Laubheimer from NN/Group recommends a slide-in animation of roughly 300–400ms, which preserves the natural feel without being distracting.
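The decision logic behind a partially persistent header is tiny: compare the current scroll position with the previous one. This sketch only decides visibility; the 300–400ms slide-in itself would be handled by CSS transitions, and all names here are illustrative:

```javascript
// Sketch: show the header when the user scrolls up (or sits at the very
// top of the page); hide it while they scroll down toward the content.
function headerVisibility(prevScrollY, currScrollY) {
  if (currScrollY <= 0) return 'visible';        // at the top: always show
  return currScrollY < prevScrollY ? 'visible'   // scrolling up
                                   : 'hidden';   // scrolling down
}
```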

Alternatives To Sticky Menus

In some situations, we might not need a sticky menu after all. We can avoid their downsides with shorter pages, or with lengthy pages that repeat relevant calls to action or navigation within the page.

We could display a table of contents at the top of the page and bring the user’s attention back to it with a back-to-top link at the bottom of the page.

Wrapping Up

Whenever the job of the page is to help users act, save, and compare, or whenever we expect users to rely on navigation a lot, we might consider displaying sticky navigation. Sticky menus are most harmful when there isn’t enough space anyway, as is often the case with forms on mobile devices.

Sticky menus do come at a cost, as we need to account for usability and accessibility issues, especially for zooming, keyboard navigation, and anchor jumps. Add them if you need them, but be careful about enabling them by default.

We need to prioritize what matters and remove what doesn’t. And too often, the focus should lie entirely on content and not navigation.

You can find more details on navigation UX in the video library on Smart Interface Design Patterns 🍣 — with a live UX training that’s coming up in September this year.

Further Resources

Of course, the techniques listed above barely scratch the surface. Here are wonderful articles around sticky headers, from design considerations to technical implementations:

The Best Handoff Is No Handoff

Many companies organize their workflows around projects and departments. Especially in large companies, work often travels from one place to another, often getting stuck between emails and Slack messages, and often “refined” on its never-ending journey between design and engineering teams.

This inevitably brings up the question of the design hand-off: that magical moment when designers are done with their work and developers can take over. Most importantly, that’s where designers must stop working and move on to other work — unless the scope changes or late adjustments creep their way in.

The “No Handoff” Method

Last week, I stumbled upon an interesting article about the no-handoff method, in which Shamsi Brinn shows an alternative to typical design hand-offs: a fluid model where product and engineering teams work on the product iteratively all the time, with functional prototyping as the central method of working together.

With this process, the working prototype is the living spec of the project and a shared language for the team. No more translation is needed because everyone works on the same prototype. The problem space and the solution space are explored by designers and engineers collaboratively, and the entire workflow is organized around the product rather than the company’s internal structure.

The “Hot Potato” Process

This reminded me of the Hot Potato Process by Dan Mall and Brad Frost, where ideas are passed quickly back and forth from designer to developer and back again for the entirety of a product creation cycle — similar to throwing a hot potato back and forth (audio, video).

From my personal experience, I can attest that the best collaboration doesn’t have any handoffs between teams. There, work flows seamlessly from design to engineering and back — with both teams working simultaneously and discussing issues as they occur, during the entire product lifecycle.

There are phases of independent work for sure, but there are also plenty of overlaps for collaborative work, which are opportunities to discuss the progress, explore what is and what isn’t viable and hence avoid lurking issues down the line.

Create As Many Overlaps As Possible

Of course, the process works well for small product teams. But what if a part of the product is outsourced to an external agency? Many companies choose the route of extensive documentation — almost to the last pixel, along with a brief explaining the strategy and the thinking behind the design.

This isn’t enough, though. Design decisions have to be informed by the technical implementation and its limitations. There is no universal language around design patterns and their interaction design either. And not every design detail can be implemented in an accessible and performant way. This is why beautiful mock-ups turn into painfully slow and inaccessible monsters.

We can reduce the risks of hand-offs with dedicated overlaps between designers and engineering teams. With regular check-ins. Weekly reviews. Shared channels for communications. Visibility into the work done. Usability testing of functional prototypes and small but regular design revisions.

Design is teamwork. It involves everybody who contributes to the website — from customer service and marketing to developers and designers. Any overlaps you can create will benefit the teams, their productivity, and ultimately your users.

Wrapping Up

So we want to move away from handoffs. But how do you convince teams to change their workflow entirely? With a small experiment on a small project. Pick a project where you could test the waters and suggest a collaborative process. Ask what designers could do while developers are busy. Ask what developers could do while designers iterate. And enable both teams to work together, at the same time.

Ultimately, success depends on one simple thing: just how well the teams work together. And if they can’t collaborate particularly well, chances are high that a design hand-off won’t make it any better, and a major change in the team culture will need to happen first.

You can find more details on design patterns and UX in the video library on Smart Interface Design Patterns 🍣 — with a live UX training that’s coming up in September this year.

Chocolate, Waffles And Fries: Meet SmashingConf Antwerp 2023 🇧🇪

We’ve been passionate about design & UX for years. We’ve written about design systems and usability, inclusive design and product design, UX research and enterprise UX. And now it’s time to bring it all together. In-person. In a new location. With a new spirit of curiosity and community. And it’s happening this October.

Roll up your sleeves and meet SmashingConf Antwerp 🇧🇪, our shiny new conference for designers & UI engineers who love design, UX and the web. On design systems, usability, product design, accessibility and complex UIs. 1 track, 2 days, 4 full-day-workshops, mysteries and friendly networking. A bright new gem in the heart of Belgium — the land of incredible chocolate, waffles and truly Belgian fries! 🍫 🧇 🍟 🍺

In-Person + Online

€ 499.00 Get your ticket!

Oct 9–11, 2023. Bourla, Antwerp.
Save up to 25% with Smashing Membership.

Online

€ 200.00
Live talks + Behind the scenes

With all video recordings, of course.
Save up to 25% with Smashing Membership.

Ah, perhaps your manager needs a little bit convincing? We’ve got your back! Download the Convince Your Boss PDF to tip the scales in your favor. And we’ve prepared a “letter to the boss” template for you as well. Good luck! 🤞🤞🏼🤞🏾

What Should You Expect?

SmashingConfs have always been about how we work, how we fail and how we succeed. We kindly encourage speakers to share lessons learned and show how they work. Don’t be surprised by speakers sitting down and showing their design process, or setting up a Figma board and designing live with the audience. Check what SmashingConfs are like (+ video).

Don’t expect big-picture talks or abstract concepts — Smashing Conferences are always honest, practical, and passionate. Speakers are also very approachable, and there is enough time for you to ask all your questions and get all the answers, in 1:1 conversations or at round tables.

But it’s also very friendly and inclusive. In fact, we know many attendees by name, and we love the friendships that emerge as people get together and learn together, during the talks and in the workshops. Plus, we design our side events to help everyone take part in meaningful, respectful conversations.

Everybody is welcome. Not everyone has to speak, but everyone has to feel heard.

We don’t choose venues randomly either. We love the contrast of beautiful historical venues and digital craftsmanship. So for Antwerp, we’ve chosen Bourla Schouwburg, an unforgettable neoclassical theatre in the heart of Antwerp, meticulously designed and built in the 1820s and decorated with statues of Apollo and the nine muses (to get your creativity flowing!). The Bourla Theatre will be your home for both conference days.

Finally, we love a good mystery, and we love to have fun! This shows in plenty of side events, morning runs, mysterious riddles, design challenges and walking tours around the chocolate factory and waffle shops.

For Designers and UI Engineers

We’ve designed the conference for UX designers, product designers and interface designers, but it will have plenty of insights for UI engineers and jacks of all trades as well. If you want to boost your design and UX skills, you’ll be in the right place: with insightful talks and practical workshops.

All workshops are hands-on and practical, so you can apply your new skills immediately.

We’ll dive into:

  • design systems,
  • design patterns,
  • usability and UX research,
  • product design and workflow,
  • enterprise UX and complex UIs,
  • inclusive design and accessibility,
  • new design techniques (in Figma, Miro etc.),
  • sustainable and age-friendly design,
  • UX writing.

We will announce the first talks and workshops shortly. The speakers we’ve invited are not just knowledgeable and amongst the best in their fields. They are also excellent speakers and teachers, smart and kind friends, and wonderfully nice and approachable.

Team Tickets? 👫👭

Bring the entire team to SmashingConf, and save some of your training budget along the way as well. With our friendly bundle tickets and team discounts, we’ve got your back! If you want to come with a large team, or are interested in something special, please send us a short email and we’ll get back to you right away!

We Can’t Wait To See You!

As we are getting ready for the event, we couldn’t be more excited to meet you in Antwerp. Let’s boost our design and UX skills, together, and create memorable experiences that will last for a while. ❤️


Product Reviews And Ratings UX: A Designer’s Guide

When we make purchasing decisions, we often rely on expertise from people like us who happen to have purchased that item already. And so we seek user reviews, often looking for a high enough average score, a good number of these reviews, and a good number of ratings.

In fact, 95% of users rely on reviews to make their decisions and learn more about how well a product fits their needs. Unfortunately, too often, the way we design reviews and ratings UX confuses and frustrates users, rather than helps them. Let’s fix that.

This article is part of our ongoing series on design patterns. It’s also a part of the video library on Smart Interface Design Patterns 🍣 and is available in the live UX training as well.

What Customers Search In Reviews

We often assume that the only thing a best-selling product needs is a high number of positive reviews. Positive reviews are indeed needed to build up trust, but users also need to be convinced that the product would work for them. This shows up in a few attributes that customers seek to verify in reviews:

  • The product is indeed of high quality as advertised,
  • They aren’t overpaying: the pricing is fair and acceptable,
  • They’ve chosen the right product that addresses their needs well,
  • People like them (similar age group, experience, needs) were satisfied with the product,
  • They haven’t missed any critical details (accessories not included, different sizing),
  • The risk is low and manageable (simple return policy, easy cancellation process, 30-day money-back guarantee),
  • They know the worst thing that could happen to them after the purchase (poor customer service that’s hard to reach, fraudulent activity that has been flagged).

All these fine little details are quite difficult to decipher from a standalone, beautiful 5-star rating on a landing page. So how can we communicate these details to remove doubts and concerns? Well, we can start with negative reviews, which can often be far more helpful than positive ones.

Reward Negative Reviews

We shouldn’t be surprised by the fact that most customers, when shopping online, meticulously study negative reviews first, and they need enough of those to tackle the concerns and skepticism most customers almost instinctively come with. The more specific negative reviews are, the easier customers can connect the dots and reach that threshold of trust. Especially if all negative reviews simply don’t apply to them.

Most negative reviews focus on specific things that went wrong during the purchase. For most decent products and most decent brands, that isn’t going to be a particularly long list. It might be that customer service hits busy spikes every now and again, or that promised delivery times weren’t met during the holidays, or that there is a high-severity bug in the application that’s being fixed that week. These things, however, wouldn’t be relevant throughout the entire year (hopefully!).

However, it might be that a product is incredibly faulty, confusing, malicious, or intricately complex — and that the cancellation process is designed to keep as many paying customers as possible. Users will watch out for any red flags that might indicate just that. And the lack of negative reviews altogether is one of those red flags, as it might be hiding some critical underlying issues.

Not every negative review is negative, though. It’s worth repeating that customers tend to seek customer feedback from people like themselves on issues that are relevant to them. If negative reviews mostly focus on poor packaging for shipping overseas or low brightness outdoors, it might be less of an issue for a customer ordering in the same country with the intent to use a device at home. These issues are definitely less critical than any notion of “extremely buggy,” “horrible customer support,” “major refund troubles,” or “severe cancellation issues.”

Chances are high that you will need some negative reviews to appear trustworthy; they will, in fact, increase conversion rates and boost customers’ confidence. That’s why you should consider encouraging customers to write honest negative reviews and ask them to be very specific about the parts of the experience they didn’t like. Also, prompt them to explain if the issue has been successfully resolved in the end. To do that, you could reward users with bonus points or coupon codes that they could redeem with their next billing cycle or with their next purchase.

Always Show Decimal Ratings And Number Of Ratings

Another detail that customers often rely on is a precise estimation of the experience that similar customers have had. In that regard, many websites don’t go far enough, though. Frequently, we can spot bright orange stars used as indicators of how great or poor the experience has been, but there are vast differences and nuances between “4-star” and “5-star” experiences, and they somehow get lost between two aggressively rounded scores.

The problem is that stars alone don’t provide enough context, nor do they provide a detailed enough explanation. With this pattern,

  • we can’t easily tell what exactly customers liked or disliked;
  • we can’t tell how satisfied users with a profile similar to ours are;
  • we can’t say what’s particularly good or particularly bad about the product;
  • it’s impossible to distinguish between products with a rating closer to 4.0 and a rating closer to 5.0. The difference between them, however, is vast.

In order for us to provide a slightly better assessment, our rating scores could be a bit more precise. As a general rule, it’s always a good idea to display a decimal average score (e.g., 4.7 out of 5) and the total number of reviews (78 reviews) together. The average decimal score provides a more granular estimate, while the total number of reviews guarantees a high enough number of reviewers contributing to that score.
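Computing that pair — a one-decimal average plus the raw count — is straightforward. A quick sketch, with an illustrative function name and no particular library assumed:

```javascript
// Sketch: turn a list of star ratings into the two numbers worth
// displaying together: a one-decimal average and the total count.
function summarizeRatings(ratings) {
  const count = ratings.length;
  if (count === 0) return { average: null, count: 0 };
  const sum = ratings.reduce((total, r) => total + r, 0);
  // Round to one decimal, e.g. 4.6666… → 4.7
  return { average: Math.round((sum / count) * 10) / 10, count };
}
```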

Bite prominently highlights the total number of reviews, but it doesn’t reveal an average score. However, once a product reaches 19,348 reviews (overwhelmingly positive), displaying an average score might not be as necessary.

In fact, the number of reviews matters significantly. In usability testing, customers “often show greater preference towards some products with 4.5-star averages than some with perfect 5-star ratings due to the number of reviews that these averages are based on”. In fact, 70% of customers would pick the one with the higher number of ratings despite its lower average.

For example, given two otherwise identical products, where one has a rating of 4.5 with 180 reviews and the other has 4.8 with 39 reviews, a majority of customers are likely to choose the first one, despite its lower average. That’s also why it’s critical to combine the number of ratings and the average score, to avoid a bias towards products with a better rating but a lower number of reviews.
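One well-established way to combine the two signals, not prescribed here but widely used for ranking, is a Bayesian (weighted) average that pulls low-count scores towards a site-wide prior. The prior values below are illustrative assumptions:

```python
def bayesian_average(avg, count, prior_avg=4.0, prior_weight=50):
    """Shrink a product's average towards a site-wide prior: scores backed
    by few reviews move strongly towards prior_avg, while scores backed by
    many reviews barely move at all."""
    return (prior_avg * prior_weight + avg * count) / (prior_weight + count)

# 4.5 stars across 180 reviews outranks 4.8 stars across 39 reviews:
print(bayesian_average(4.5, 180))  # ≈ 4.39
print(bayesian_average(4.8, 39))   # ≈ 4.35
```

With this kind of shrinkage, the product with the larger sample wins the ranking, matching the preference customers express in testing.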

Is there a perfect score, by the way? Well, as long as your product’s score is somewhere between 4.5 and 4.89, with a high enough number of reviews (75+), you probably shouldn’t worry too much about its competitiveness. Once a rating score climbs beyond 4.9, customers start getting suspicious; that’s when they slow down and search for traps and flaws that aren’t explicitly stated. Unsurprisingly, that doesn’t help in closing sales.

Why is that? Because in real life, anything sincere with a score above 4.9 is very hard to come by and hence less likely to be true — unless it’s based on a very small sample of biased and dishonest reviews. Don’t feel tempted or pressured to hit that average user rating, though, as it’s not the only detail that customers pay attention to.

Display A Full Rating Distribution Summary

In addition to bright yellow stars, over the years, we’ve learned to rely on distribution summaries for ratings. They explain the relation between high-score reviews vs. low-score reviews and give customers a more granular overview of how good the product is on average for most people. There, we can tell quicker if most customers had an overwhelmingly positive or negative experience.

We do so by looking for specific patterns in the rating distribution summary. As users, we quickly eliminate options with a disproportionate number of low-score reviews. We quickly eliminate options with a disproportionate number of mid-score reviews. And we abandon options without any low-score reviews.

We almost instinctively search for the pattern — the one perfect, trustworthy distribution that is representative of good products. In it, we need to find a lot of positive reviews and enough negative reviews.

Typically this means that a vast majority of ratings will be 5-star-reviews, followed by plenty of 4-star reviews, and eventually followed by a large enough (but not too large!) number of low-score reviews. Most importantly, there must be enough negative reviews to give customers pointers about the most negative experience they might have. Usually, products that match that perfect pattern appear most trustworthy and perform best.
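Under the hood, such a summary is just a per-star histogram normalized to percentages. A minimal sketch, with illustrative names:

```python
from collections import Counter

def rating_distribution(ratings):
    """Percentage of reviews per star, listed from 5 stars down to 1."""
    counts = Counter(ratings)
    total = len(ratings)
    return {star: round(100 * counts[star] / total, 1)
            for star in range(5, 0, -1)}

print(rating_distribution([5, 5, 5, 5, 4, 4, 3, 1]))
# → {5: 50.0, 4: 25.0, 3: 12.5, 2: 0.0, 1: 12.5}
```

Each percentage then maps directly onto the width of one bar in the summary chart.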

In user research, this “perfect” pattern is called the J-shaped distribution pattern. However, as it turns out, it’s not really that perfect and conceals some troubles for businesses.

The J-Shaped Distribution Issues

You might be wondering why that particular rating distribution has such an unusual name. The distribution resembles a capital letter “J”, with a small spike of the most negative reviews (★), a flat middle ground (★★, ★★★), and a high spike of the most positive ones (★★★★★). It is very different from the omnipresent, classic bell curve that we might have expected initially.

In the examples for IMDB scores of “Game of Thrones” and “Avatar: The Way of Water,” you’ll notice that the distribution in the top scores is much clearer for “Game of Thrones” and much less conclusive for “Avatar: The Way of Water.” Despite a stronger negative scoring in the lower ratings, the distribution of “Game of Thrones” is likely to perform much better — both in movies and for any other products.

We can also notice that in both cases, the lowest score (Rating 1) gets a disproportionate amount of attention. That’s the “J”-shaped distribution in action. Why does it happen?

As Sav Sidorov and Nikita Sharaev explain, user feedback is often skewed towards the extremes. Customers who are just satisfied may not take the time to leave a review or rating, but those who are either extremely happy or extremely frustrated are more likely to do so. This makes rating scores very vulnerable to the strongest opinions of the extremes and hence biased and inaccurate.

An alternative design that Sav has suggested includes four options: ↑, ↓, ↑↑, and ↓↓. When providing a review, customers could click or tap multiple times to provide a stronger assessment. This might eliminate the large spread between the extremes.

Another option is to introduce a delay for reviews. As Marianna Alshina noticed, Etsy doesn’t allow its customers to review products straight away. They have to wait for one week to have a “full” experience with the product, and only then leave pictures and their review.

Will it work? Maybe.

Either way, the distribution needs context that explains it. And that’s where breaking the rating distribution summary by product attributes can help.

Break Rating Distribution Summary By Product Attributes

Showing a distribution summary provides a better overview to customers, but it still requires quite a bit of effort to discover how well a specific product attribute performs. For example, a customer might be interested in how long the battery life is or how sophisticated a particular feature is. To learn both, they’d need to sift through dozens or even hundreds of reviews, often without any success.

A useful approach to help out customers there is to break down the distribution summary even further. For each standalone product attribute — appearance, value for money, product quality, battery life, etc. — we could gather and highlight average scores, based on user reviews. When a customer chooses to provide their feedback, we could ask them to evaluate specific qualities of the product and then calculate a separate average score for each attribute.
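In data terms, this amounts to averaging per-attribute scores collected alongside each review. A small sketch, with hypothetical attribute names:

```python
from collections import defaultdict

def attribute_scores(reviews):
    """Average every rated attribute (e.g. 'battery life') across reviews."""
    scores = defaultdict(list)
    for review in reviews:
        for attribute, score in review.get("attributes", {}).items():
            scores[attribute].append(score)
    return {attr: round(sum(s) / len(s), 1) for attr, s in scores.items()}

reviews = [
    {"attributes": {"battery life": 5, "value for money": 4}},
    {"attributes": {"battery life": 4}},
]
print(attribute_scores(reviews))
# → {'battery life': 4.5, 'value for money': 4.0}
```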

On Flipkart (pictured above), customers are encouraged to provide feedback broken down into groups of attributes that are very specific to the product. “Value for Money” is a classic attribute used across all products. Color coding is used to indicate “good” and “bad” reviews, but a slightly different color scheme would be more accessible. Also, unfortunately, customers can’t filter reviews by a particular score.

On Adidas and LL Bean, customers can explore specific qualities of a product in a bit more detail. It can be a position on a scale or yet another distribution summary. Both patterns make it clear to customers how well a product performs in relation to that one specific attribute.

Enhance User Reviews With Suggested Tags

The quality of the experience can’t be measured by product attributes alone. A skillfully designed and manufactured product might not be a good fit for some customers, and that’s impossible to learn from a product description or spec sheet. To help there, we could suggest relevant tags when customers write a review. Typically, it could be something along the lines of “great fit,” “great for kids,” “easy to use,” “love the design,” “beautiful packaging,” etc.

These tags could also be slightly more personal and individual; they could relate to the product but also to the customers who have already purchased that product. The goal here is to gather and surface insights and personal details that would make it easier for prospective customers to relate to published reviews. In other words, we try to help prospective customers find similarities with already existing customers and explore their reviews in that light.

Personal details could be anything from the frequency of use and level of experience to age range and current location. But it would differ depending on the product, of course. For skin products, for example, we could ask about the preferred look, skin type, shade, glow, scent, characteristics of the face, but also texture and typical type of makeup. That’s exactly what Glossier and Sephora do.

These tags, then, could be displayed as additional rating filters — and hence help customers find relevant reviews faster, and perhaps even scores for important attributes, based on feedback from customers like them. And that’s much more valuable than any general feedback based on averages could be.

Highlight Social Proof From Reviews

In addition to suggested tags, one of the questions we could ask existing customers at the very end of the review is if they’d recommend this product to their friends, colleagues, or total strangers. As a result, we could calculate a quite significant number that’s often neglected but can be quite a game-changer: the recommendation score.

Asos (pictured above) highlights that “86% of customers recommend this product.” This is quite different from 5-star ratings or 1-star ratings because even customers who aren’t quite happy with the product and would perhaps give it a 3-star rating might still recommend it to their friends. It might not be a perfect fit for them, but they are quite satisfied with its quality.

Prominently highlighting how many customers actually recommend the product is definitely a good idea. A good threshold to stay above is a 90% recommendation score; yet again, once you climb over 95%, it starts to look a bit suspicious.

To make it even stronger, you could specify the group of customers that has actually recommended the product and allow prospective customers to choose the group that fits them best. There, we could include fine details such as level of experience, frequency of use, type of projects, and others.

A statement such as “86% of customers (5+ years of experience, enterprise-level) recommend this product” would probably perform the strongest for people who fit that particular group.
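Computing such a segmented score is straightforward: count recommendations within a filtered group of reviewers. The field names below (`recommends`, `experience`) are hypothetical:

```python
def recommendation_score(reviews, segment=None):
    """Percentage of reviewers who recommend the product, optionally
    restricted to reviewers matching all key/value pairs in `segment`."""
    if segment:
        reviews = [r for r in reviews
                   if all(r.get(key) == value for key, value in segment.items())]
    if not reviews:
        return 0
    recommending = sum(1 for r in reviews if r["recommends"])
    return round(100 * recommending / len(reviews))

reviews = [
    {"recommends": True,  "experience": "5+ years"},
    {"recommends": True,  "experience": "5+ years"},
    {"recommends": False, "experience": "beginner"},
]
print(recommendation_score(reviews))                              # → 67
print(recommendation_score(reviews, {"experience": "5+ years"}))  # → 100
```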

Provide Quick Summaries Of Pros And Cons

We’ve come quite far already, yet customers still need to filter and explore reviews on their own. We could help them grasp the good bits and the bad bits of a product faster, though. To do that, we could highlight the pros and cons of a product, along with quick summaries of positive and negative reviews.

Ideally, these quick summaries would be generated by analyzing user reviews and listing pros and cons as bullet points. This information is precious yet often not displayed at the right time. For example, it would be a very impactful add-on for a category listing page, a product page, and a feature comparison page, where users land when they try to compare a couple of products side by side.

Allow Users To Mark Helpful Reviews

Not every review is helpful. Some reviews are very generic, others lack important details, and still others focus too much on irrelevant details. To make sure that relevant reviews get surfaced to the top of the reviews list, we could kindly prompt users to mark whether they find a review helpful, for both positive and negative reviews.

Marked reviews — a few of each kind — could then be highlighted at the top of the reviews section, along with the number of users who found them helpful. This dramatically boosts trustworthiness and reduces time to relevance, as highlighted reviews have been validated by other customers. This appears authentic and builds trust.
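When ranking by helpful votes, a raw ratio tends to favor reviews with very few, unanimous votes. One well-known remedy, an assumption here rather than anything this pattern mandates, is to sort by the lower bound of the Wilson score confidence interval:

```python
from math import sqrt

def helpfulness_rank(helpful, total, z=1.96):
    """Lower bound of the Wilson score interval for 'found this helpful'
    votes; deliberately conservative when a review has only a handful
    of votes (z=1.96 corresponds to 95% confidence)."""
    if total == 0:
        return 0.0
    p = helpful / total
    return ((p + z * z / (2 * total)
             - z * sqrt((p * (1 - p) + z * z / (4 * total)) / total))
            / (1 + z * z / total))

# 90 of 100 helpful votes outranks 3 of 3, despite the lower raw ratio:
print(helpfulness_rank(90, 100) > helpfulness_rank(3, 3))  # → True
```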

Provide Helpful Sorting Types And Search Autocomplete

We typically allow users to filter reviews by a particular rating score, but what if a customer is interested in specific details within a particular score range? Once a filter has been applied, we can allow users to sort by specific details of reviews. On Bite, for example, users can sort by the date of review, available photos and videos, as well as highest and lowest ratings and most and least helpful reviews. That’s better.

Additionally, customers on Glossier and Wayfair can sort by reviews that include customer photos and search in reviews, and that’s where an autocomplete feature could be particularly helpful as well. Notice how Wayfair (pictured below) not only displays the tags used for products and customer photos but also highlights how often the keywords have been mentioned in reviews. That’s helpful.

Enhance Reviews With Personal Details

For a review to be convincing, it needs to appear trustworthy, be written by an actual person with credentials and a public social presence, or be recommended by a brand or person that carries some weight. This is why testimonials and logos of big brands work well; this is also why video reviews and customer success stories are quite impactful.

If possible, prompt your users to add some personal details to their reviews, such as:

  • full name,
  • location,
  • age (range),
  • current role or job title,
  • link to a social media account (LinkedIn, Twitter, Instagram),
  • a casual personal photo,
  • a casual personal video (less than 30 sec),
  • a photo of the product,
  • the logo of the brand they work for (if applicable).

One of the most effective ways to build authenticity with a review is to display customer photos prominently next to reviews. Because many customers don’t trust marketing illustrations or can’t reliably visualize product specs, they often rely on customers’ (unedited) product photos to get a more realistic preview of what they should be expecting.

Any personal details are very valuable. Hence, it’s a good idea to always prompt users to upload personal product photos and more details about themselves, and to reward such reviews as well.

Wrapping Up

Getting user ratings right requires quite a bit of work. We need to gather and highlight relevant reviews, present distribution charts with the number of ratings and decimal averages, and allow users to use filters, tags, or search to find reviews faster from people who are very much like them.

Users need:

  • The average score presented with decimal numbers;
  • An overall number of ratings;
  • A full distribution chart of ratings;
  • Customer’s unedited product photos;
  • Ratings for individual features (size, fit, support, etc.);
  • The number of reviews for intermediate values (1.0, 2.0, 3.0, 4.0, 5.0);
  • Tags and filters to find relevant reviews;
  • Personal details that will help customers find people like them in the reviews;
  • The date of when the review was written or published;
  • How many people find it helpful;
  • How many customers (%) recommend the product for purchase.

Sounds like a lot of effort — and it actually is. But the impact of a proper user rating design can be remarkable, driving your sales much further than any other campaign could. And once it’s done, it keeps working for you long-term, building up a community of people who trust your brand and recommend it online and offline. Without any doubt, that’s very much worth the effort.

Meet “Smart Interface Design Patterns”

If you are interested in similar insights around UX, take a look at Smart Interface Design Patterns, our shiny 9h-video course with 100s of practical examples from real-life projects. Design patterns and guidelines on everything from mega-dropdowns to complex enterprise tables — with 5 new segments added every year. Just sayin’! Check a free preview.
