Let’s connect

Redesigning for trust:
Increased Confidence in AI predictions

What?
Reducing churn by increasing trust for Nexoya.

How?
Solving a major user trust issue: Leading a redesign initiative and creating a new prominent "Validation" feature.

AI

UX/UI Design

B2B

Webapp

Feature

Analytics

PROJECT OVERVIEW

Main Challenge

Solving a key user trust issue: Some Nexoya users did not feel confident enough about the validity of the platform's AI-powered predictions, a key product selling point.

What's Nexoya?

Solution

I led a feature redesign initiative to improve how users understand the AI-powered predictions. A highly iterative, collaborative design process resulted in a new, prominent "Validation" 2nd-level tab.

See before/after design solution

Impact

  • New, prominent "Validation" tab in product.

  • Increased confidence in AI predictions.

  • Increased long-term client retention.

CLIENT

MY ROLE

Lead UX/UI Designer | UX Writer

PROJECT TIMEFRAME

ca. 3 Months

TEAM

CEO, Head of Product, Head of CSM, Head of ML, Developers (2)

TOOLS USED (2)

Figma, Notion

AFFECTED LANGUAGES (1)

🇺🇸 English (American English)

APPROACH/PROCESS

With a clear sense that the current design wasn't meeting user needs, I advocated for the classic Design Thinking methodology as a structured, user-first approach to guide the redesign.

Why Design Thinking?

I use it because it balances empathy, creativity and iteration in a collaborative scale-up environment to find a meaningful solution.

Empathise

Understand the user.
What are the goals, needs, wants and frustrations?

1. Define

Define the problem.

What needs to be solved, and how do we do it?

2. Ideate

  1. Conceptualise designs: Bring ideas to life.

  2. Discuss with team: Review and improve.

3. Refine

Cut out the noise: Iterate and focus on the smaller details.

4. Prototype and test

Create a solid prototype to test with our CSMs.

Does this solution add value?

5. Refine as needed

Incorporate user feedback into the design solution.

6. Deliver

Final check: Roll out and hand over the design solution to the development team.

  1. Define

Define the problem. What needs to be solved, and how do we do it?

1.1 AUDITING THE PREVIOUS DESIGN

The previous design, due to be redone, is shown below (demonstrated with old Figma screens) 👇

1.2 CONDUCTING AND ANALYSING USER RESEARCH

Based on data analysis and qualitative feedback from user interviews, some users were concerned with:


  1. Gain/Loss: What are these statistics, and how accurate are they?

  2. Subscription value: Whether their subscription is worth continuing.

  3. ML-data transparency: How do predictions fare against real results, and how is the data predicted?

Main user friction point below: What is gain/loss? What do these figures mean?

1.3 DEFINING THE SOLUTION: WHAT DO WE TACKLE?

The Head of Product, the Head of Machine Learning (ML) and I came together and identified 3 key themes, the "What" to focus on throughout the design process. They are:

📈

Data Transparency

We must show how we calculate our predictions, and present the data in a scannable way.

AI Validity

How accurate is the ML-predicted data, and how does it match up with real data?

💸

Subscription Value

Highlight the value users are getting from their subscription to the SaaS tool.

Why these 3 themes?

The answer is simple: they directly address specific user pain points uncovered during the research phase of the problem definition stage.


These themes act as a base for the phases of the design. By directly addressing user needs, we tackle key issues that affect business value, with a strong focus on building user trust.

2.1 Ideate: Phase 1

Conceptualise: Bring ideas to life.

I) CONDUCTING AND ANALYSING USER RESEARCH

This design phase is largely conceptual.


It involves building design momentum, working towards something "less wrong" with each iteration.


While it's unlikely these concepts will be used in their purest form, they provide clear insight into what works and what doesn't.

What was the main solution?

Creation of a new 2nd-level category: Performance.
And why?

Creating a new 2nd-level category is a simple way of adding to the product without modifying any existing features, which could potentially throw existing users off.


Moreover, an additional 2nd-level category creates a new, experimentation-friendly canvas.

From this, we could create 3rd-level categories directly addressing the key themes of Data Transparency, AI Validity and Subscription Value.


They are as follows:

📈

Data Transparency

Issue:

Show the user how we calculate our predictions.

Solution:

Creation of 3rd-level sub-categories:

  • Historic trend

  • Performance increase

  • Prediction Accuracy

Why this solution?

AI Validity

Issue:

Provide evidence of AI data vs. real data.

Solution:

Emphasis on an easy-to-scan Prediction Accuracy score, part of the "Prediction Accuracy" category.

Why this solution?

💸

Subscription Value

Issue:

Show the value users are getting by subscribing.

Solution:

Inclusion of an overall "Return on Investment" value (performance increase / subscription fee).

Why this solution?

Experimental in nature, but direct and to the point.
Subscription value = costs saved - subscription cost.
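For the curious, the two value figures above boil down to simple arithmetic. A minimal sketch, with hypothetical numbers and function names of my own (not Nexoya's actual model or pricing):

```python
# Illustrative sketch only: hypothetical figures showing how the two
# subscription-value metrics described above could be computed.

def roi(performance_increase: float, subscription_fee: float) -> float:
    """Return on Investment = performance increase / subscription fee."""
    return performance_increase / subscription_fee

def subscription_value(costs_saved: float, subscription_cost: float) -> float:
    """Subscription value = costs saved - subscription cost."""
    return costs_saved - subscription_cost

# Example: a client saving 12,000/month on a 3,000/month subscription.
print(roi(12_000, 3_000))                 # 4.0 (a 4x return)
print(subscription_value(12_000, 3_000))  # 9000.0
```

The point of surfacing the raw formula in the UI was transparency: users can verify the figure themselves rather than trusting a black box.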

II) INFORMATION ARCHITECTURE

This architecture shows in which categories the solutions are placed.

III) LOW-FI TO MID-FI ITERATIONS

i) Low-fi: Initial sketches

ii) Mid-Fi: Detailing initial sketches in Figma

IV) FINAL FIRST ITERATION: PROTOTYPE

2.2 Ideate: Phase 2

Discuss with the team: Review and improve.

I) TOUCHING BASE WITH KEY STAKEHOLDERS: WHERE ARE WE AT?

The mid-fi prototype had been created to get the ball rolling. It was time to discuss and review with key stakeholders, notably the Head of Product, Head of CSM and Head of ML.

What was the feedback?

The design ticked the key boxes for Data Transparency, AI Validity and Subscription Value - a good job so far.

However, there was some pushback on the UI layout, particularly in a SaaS analytics industry context.


In short, the card-based UI hierarchy was an issue.

Why?


For context: Nexoya users are well acquainted with tools such as Google Analytics and DV360, where navigation hierarchies are small and functional, drawing attention to the analytics charts and tables in the content area. Matching these patterns made sense in the context of user-centred design.

TL;DR

  • The information architecture was good - but we needed to test cards vs. existing switchers.

  • Incorporate existing patterns Nexoya users are familiar with. Create a V2 to test that includes:

    • Existing Nexoya switchers

    • The existing Nexoya funnel-step selection

    • Removal of the "Return on Investment" section (not industry-standard)

II) V1 AND V2, SIDE BY SIDE. READY FOR A/B TESTING

V1

V2

And the winner?

A resounding victory for V2, with nearly 80% of participants opting for it.

III) V2 IN ACTION

3. Refine

Cut out the noise: Iterate and focus on the smaller details.

I) KEY THEMES

To continue the collaborative design approach, the key stakeholders and I sat down to refine the feature further. Given the highly specialised, performance-marketing-based profile of Nexoya's target audience, collaboration with the CSM department was key.

Having spoken with clients and in-house Nexoya users and run unmoderated tests, the CSM team played an invaluable role in providing specific user insights, including quick, easily executable product improvements.

These insights included specific SaaS and performance-marketing industry knowledge, proving highly valuable in keeping the feature user-focussed.

TL;DR - What did we refine?

To summarise, we focussed on:

  1. UX Writing: Microcopy label.
    Microcopy optimisation: Performance increase -> Insights.

Why change microcopy?

  2. Familiarity: Prediction Accuracy.
    Replacement of the total donut-chart score with graph-based analytics and new logic.

Why adapt this?

  3. Learnability: Prediction Accuracy explainer.
    New feature logic means new information for the user to digest.

Why add an explainer?

  4. Consistency: Product categories.
    Placement of Performance and combination with "Overview".

Why this combination?

II) REFINED HIGH-FIDELITY PROTOTYPE: TO TEST WITH USERS

This prototype shows all key refinement changes above, together in one place.

  4. Prototype and test

Does the solution add value?

I) TESTERS

Who participated in the testing?

The testers were divided into two primary groups:

  1. Our internal Customer Success Managers (CSMs), who interact with clients on a daily basis

  2. The clients themselves

II) TESTING RESULTS

Positives ✅

  • Strong foundational design approach proved effective.

  • Solution was very well received by users.

  • Features were intuitive to navigate and use independently.

  • Integration of the familiar Nexoya funnel significantly contributed to usability.

Usability issues ❓

  • Some users found the analytics validation distracting under "Performance".

  • The Overview page felt too information-heavy.

  • High content density added cognitive load.

  5. Refine as needed

Incorporate user feedback into the design solution.

I) SOLUTION

What was to be incorporated into the design?

Separation of "Performance" and "Validation": Creation of a new Validation category.

Why separate the categories?

The original "Overview" category was to be renamed "Performance", and the previous "Performance" category reimagined and renamed "Validation".

By doing this, we addressed the usability issues head-on: some users found the overview page too content-dense. Moreover, it was clear that two user tasks couldn't be combined at once: 1) assessing portfolio performance and 2) validating AI predictions.

Therefore, validating AI predictions required its own distinct category to mirror this user need.

What does the "Validation" category include, and why?

  • 3rd-level category: Achieved / Predicted
    Allows users to clearly compare actual vs. forecasted performance in a focused view.


  • 3rd-level category: Prediction Timeline
    Helps users understand how predictions evolve over time, supporting better context and decision-making.


  • 3rd-level category: Prediction Details
    Provides deeper insights into individual predictions, reducing information overload on the main overview page.


  • All 3rd-level category pages: Prediction Accuracy
    Provides users with a constant, transparent data indicator, while keeping the Validation page consistent.

II) ADAPTED INFORMATION ARCHITECTURE

III) UPDATED DESIGN

  6. Deliver

Final check: Roll out and hand over the design solution to the development team.

I) FINAL CHECKS

Before handing over to the developers, a final check was in order. This included reviewing final copy, checking for consistency, and double-checking with wider members of the team before handoff.

II) DEVELOPED SOLUTION: IT'S LIVE!

RESULTS

While hard data is still being gathered, the impact of this feature is already clear in terms of strategic value:

  • Increased Confidence in AI Predictions:
    Validation tells users their insights are reliable, which directly supports long-term engagement and lowers the risk of churn.


  • Built Trust, Reduced Churn:
    By giving users confidence in their data through the new Validation category, we addressed a key pain point: trust. When users trust what they see, they stick around.


  • Business-Critical Design:
    Trust in analytics is not a “nice to have”. It drives core business metrics. With this release, we didn’t just improve the UX; we used good user experience to drive business value.

Project impact

The project was based on a V2, improving the initial designs based on data and business direction.


As mentioned throughout this case study, the theme of "trust" is invaluable in increasing retention metrics in SaaS. Trust brings transparency, and transparency brings user confidence in maintaining partnerships that benefit both sides. Product design isn't always about creating something new, but about improving what you already have and presenting it in a manageable way.

Project takeaways

My personal takeaway: one thing that stuck with me is the value of a strong "Define" phase of the design process, carried out together with key stakeholders.

By bringing in expertise from different areas of the company, including intricate customer knowledge and awareness of technical limitations, we could create something shippable relatively quickly. Strong foundations save time down the line.


How can I help your business? Let’s chat!

2025, Benjamin Bruton
