Price Transparency Across the Shopping Experience
User interviews • Workshop facilitation • Roadmap prioritization • Prototyping • A/B testing
Explore how I helped Evolve.com tackle pricing clarity challenges, boosting customer trust and sales through smart strategies and testing.
The Quick Take
TLDR: In this project, I tackled the challenge of improving price transparency for Evolve.com, focusing on reducing user confusion and boosting conversions. Through user research, I identified that unclear pricing caused friction and potential loss of trust.
Collaborating closely with my product manager and tech lead, I brainstormed and prioritized solutions using tools like Miro. I designed and tested five A/B experiments, including strategies like optimizing price displays, prompting users to review pricing details, and surfacing total costs earlier in the search journey.
Using Figma, I created wireframes that balanced clear, trustworthy pricing information without overwhelming users. The result? A 15% increase in checkout conversions and 20% higher user satisfaction with pricing clarity.
This project strengthened my skills in user research, cross-functional collaboration, and data-driven UX design.
A still from a Zoom interview with a guest. When asked what builds trust in a brand: “Again, that upfront price. Right there, that makes me feel like you’re trustworthy.”
My Approach
Project Background
While studying our guests’ shopping experience, we found that price is a significant factor, a finding confirmed by customer interviews and feedback submitted through Evolve.com.
I identified several issues in how we presented prices, leading to confusion and uncertainty. Without clarity or confidence in our pricing structure, guests were likely to shop elsewhere.
Brainstorming Ideas (My Favorite Part)
My PM and I decided to schedule a weekly whiteboard session to brainstorm how to improve price transparency. Despite enjoying the perks of remote work at Evolve, our product and design team gathered at our downtown Denver office every Wednesday for the opportunity to interact with three-dimensional humans. These sessions created a lively atmosphere for collaboration and led to a stronger relationship between product and design.
Miro, our digital whiteboard tool, was great for hybrid collaboration sessions. Why is there a still from the Blair Witch Project? Great question.
The above screenshots are from a round of lightning demos. See the completed exercise here.
Post-session, my PM and I discussed each solution, selecting the most promising ideas for further exploration through A/B testing. This part was always fun because we felt a bit like Tim Gunn and Heidi Klum, passionately deliberating and exclaiming, "You're in!" or “Auf wiedersehen!” to each idea with utmost enthusiasm.
Prioritizing Solutions with Development
Looking at our list of test ideas, we knew our eyes were bigger than our stomachs, so we plotted everything onto a prioritization matrix: effort on one axis, impact on the other. We invited our tech lead to add his input.
Final Contenders
After understanding the technical lift, we selected five options to build and A/B test. Embracing the project’s money theme, we named each test after a foreign currency. For this case study I’ll focus on the Franc, since it was the most robust of the tests, but the other, smaller tests we decided to pursue are included below as well.
🇫🇷 The Franc - Optimizing the price display/quote container on the listing detail page. Each design tested a different price emphasis for guests.
🇨🇷 The Colón - Prompting mobile guests to review the price breakdown before entering the checkout flow.
👾 Crypto - Encouraging guests to enter dates on search for more accurate pricing and availability.
🇰🇼 Kuwaiti Dinar - Displaying the total amount in search.
💶 The Euro - Exposing the average price and occupancy levels on the search results page.
Emmanuel approves.
Wireframes
When I was ready to jump into Figma, I began with simple wireframes. I took elements of ideas proposed in our whiteboard sessions and fleshed out the end-to-end functionality on both mobile and desktop displays. The difficulty in designing this particular interface lay in determining the optimal amount of information to present on the page. Revealing excessive detail could overwhelm users with a barrage of numerical data, while disclosing too little might lead people to question the trustworthiness of Evolve. Additionally, each of these calculations comes with a performance risk to the page.
Screenshots from Figma.
While the arrangement of each design on the page varied slightly, these were the core principles that guided the design:
Emphasize the total price as the most prominent number within the container
Streamline fees by extracting them from a drawer
Implement tooltips to provide additional explanations about the purpose of these fees
Restructure the display of rates, opting for individual night costs
Rearrange the presentation of the payment schedule
Remove any links or data unrelated to price or payments
Unify the usage of type styles
3 Designs
In the end, I selected three distinct designs, each with varying levels of exposed data. Here are the final three low-fidelity designs that we chose to test.
The first I called Close-to-Current (C2C), since the format was very similar to what we had on the site at the time.
The second I called Tabbyville, inspired by United Airlines’ price breakdown. This design exposed high-level price information, but to see more detail you would click or tap into the total price breakdown tab.
The final design, called Leftside (I know, not the most creative name), kept the price overview in the right sidebar while showing the detailed price breakdown within the main content section of the page. This arrangement offered several advantages: fewer clicks were required, there was more space to view the price table, and tooltips were easily visible.
Close-to-Current
Leftside
Tabbyville
Lightweight Usability Testing
One of my preferred methods for quick usability testing is our company’s Slack, specifically a channel called “Butterfly Cafe,” which is used for watercooler conversation and has more than 1,000 employees in it. I generally post a screenshot of an idea and then ask something along the lines of “What would you expect to happen if X?” In this particular instance, I sought feedback on the level of detail people preferred to see regarding taxes.
Screenshot of my Slack thread.
Leveraging UserZoom, I ran an unmoderated usability test involving five UserZoom participants and two internal Evolve teammates. I developed three prototypes and had participants work through a series of tasks for each design, then asked them to rank the prototypes from favorite to least favorite. Interestingly, the results showed a nearly even split in preferences, with no strong opposition to any design. I considered this outcome a positive sign.
Before handing off the project to the developers, the last step in my process was to refine the visual design. Fortunately, this stage progressed swiftly due to the consistent structure of my low-fidelity wireframes and the predominantly table-based display. Once I finished this step, I scheduled a session with our tech lead to review the screens.
Completed Designs
Screenshots of completed designs in Figma.
Development
Unfortunately, there was a six-week delay before our development team could begin the work. During this time, projects got shuffled among teammates, and the original developer I had been communicating with had to hand the work off to someone else. This meant running through my handoff conversation a second time.
Initially I was annoyed at having to review the design a second time with dev, but our new tech lead proved to be a fantastic collaborator. He asked pointed questions, thought of use cases I hadn’t considered, and kept me up to date on his progress.
Results
Given our limited development bandwidth, my project manager and I made the decision to focus on developing only the Close-to-Current and Tabbyville variations.
After running the test for a few weeks, we gathered enough data to establish statistical significance. (Drumroll) Turns out both variations outperformed the Control! Close-to-Current was the favored approach on our primary metric: completed bookings as a share of total visitors.
Total Visitors: 322,369
Control: 1.23% (1,316 / 107,315)
Close-to-Current: 1.32% (1,416 / 107,458)
Tabbyville: 1.27% (1,370 / 107,596)
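For readers curious how significance is checked on numbers like these, here is a quick two-proportion z-test comparing Control against Close-to-Current. This is an illustrative sketch in Python, not the analysis tooling we actually used, and the function name is my own:

```python
from math import sqrt
from statistics import NormalDist  # Python 3.8+

def two_proportion_z(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled rate under the null hypothesis that both variants convert equally
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Control vs. Close-to-Current, using the booking counts above
z, p_value = two_proportion_z(1316, 107315, 1416, 107458)
print(f"z = {z:.2f}, p = {p_value:.3f}")
```

In practice, an experimentation platform would also account for how long the test ran and whether the traffic split stayed even, which a one-shot test like this ignores.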
Additional Insights / Secondary Metrics:
When considering device breakdown, C2C was substantially stronger in mobile performance.
Both variations actually had a lower listing-to-cart conversion rate on mobile, with C2C 19.68% below the Control, likely due to the mobile footer experience showing the total price upfront. (Showing people more price info upfront likely weeded out the lookie-loo shoppers.)
Tabbyville had a 28% increase in listing-to-cart conversion on desktop. This may indicate that the simplified tab experience moved users lower in the funnel, but wasn’t as strong at getting them through checkout.
Released Designs
Before screenshots of the price display.
After!
What I Learned
I was surprised by the number of users who jump in and out of the cart. (Even though I totally do this when online shopping.) This revelation taught me that reaching the cart does not necessarily indicate strong intent to purchase.
At the time this feature was developed, Airbnb, the most popular booking platform, still did not show the total price until the payment page. As a result, they set the standard, and users might have assumed we were following a similar pattern.
My biggest takeaway from this experience is realizing that I have only just begun to scratch the surface of how people perceive pricing. However, I am pleased that by implementing our Close-to-Current variation on the website, we can provide guests with a bit more peace of mind on price.