From Missed Defects to Measurable Quality: Redesigning Reporting at Exact Systems

Inspectors were losing time and data in a broken paper-and-email hybrid process. I rebuilt it as a step-by-step mobile tool, cutting report time by 49% and improving report accuracy.
“The issue wasn’t the tools – it was lack of structure.”
User interview participant
MY ROLE
Sole UX designer, responsible for the entire process. Worked closely with developers and QA during implementation, ensuring field feedback shaped final iterations.
GOAL + SOLUTION
Built a guided step-by-step flow to help inspectors report faster and with fewer mistakes.
IMPACT
Measured over six weeks post-release using internal metrics: app-based report submission jumped from 18% to 92%, and average report time was cut in half (74s → 38s).

Before the app, there was just confusion

Without a real-time system, even important defect reports were easy to miss or delay.

Exact Systems provides quality control services for the automotive and manufacturing industries, deploying thousands of inspectors to factories.

Their work relies on fast, accurate defect reporting, but the existing email-based process was outdated, error-prone, and frustrating for users.

The goal of this project was to design a mobile tool that would streamline reporting, reduce mistakes, and improve coordination with central teams.

Talking to the people stuck in the chaos

I interviewed inspectors to uncover how a broken system created
confusion, delays, and distrust in the defect reporting process.

To understand what really wasn’t working, I conducted 10 in-depth interviews with quality inspectors from multiple regions at Exact Systems. Patterns quickly emerged around frustration, confusion, and distrust in the old reporting process.

1. Managers had to manually rewrite reports into internal systems.
2. Critical data (e.g. inspection time or order number) was often missing.
3. Workers had no standard form - just emails, messages, or paper.

When a defect was spotted, the real problem was just beginning

From remembering details to writing emails
from scratch, the friction made errors inevitable.

Finding a defect should trigger action - but for inspectors, it triggered uncertainty. I mapped the user journey to understand what actually happened after an issue was spotted, and where the breakdowns occurred.
Key insight:
They had to remember what details to include, switch to email, and manually write it out - every time. No standard, no confirmation, no feedback.

From field problems to product features

I translated user pain points into clear product requirements through
structured workshops, aligning the team around what mattered most.

I facilitated a remote brainstorming session with inspectors, coordinators, and developers to capture pain points and feature ideas. Over 30 ideas landed on the board. I helped structure the session to bridge the gap between field frustrations and what was technically realistic.

Working with the dev team, we used the MoSCoW method to prioritize feasible features that delivered the most impact quickly, while still keeping long-term needs visible on the roadmap.

A key decision: photo uploads vs. speed.
“We debated whether photo uploads should be required - users wanted speed, managers wanted proof.”

In the end, we focused on structured reporting instead: choose a defect type, enter the quantity, and add comments only if needed.

First usability test: the form looked simple, but felt confusing

The first prototype included all key fields, but with no hierarchy
or guidance, users didn’t know where to begin - or if they’d done it right.

To validate the first version of the reporting form, I ran moderated usability tests with six inspectors across two production sites.

Each session followed a scenario-based task: users were asked to report a defect as they would on the job, using a mobile prototype.
I applied the think-aloud method to capture not only their actions, but also their expectations, doubts, and assumptions in real time.

The test helped me uncover where the interface supported users
- and where it failed them.

Test goals:
1. Can users complete a defect report on their own using the prototype?
2. How do they interpret each step in the reporting flow?
3. Where do they hesitate, get confused, or ask for clarification?

What didn't work:
1. No visual hierarchy - everything had the same weight on screen.
2. No progress indicator - users didn’t know how far they were in the process.
3. Users expected a step-by-step flow, not a freeform worksheet.

reporting flow v1

When a promising idea met real-world blockers

I planned component number scanning - but the dev team pushed back.

In early concepts, I proposed letting users scan a component number with the phone’s camera to speed up data entry. During field interviews, the idea got positive feedback - inspectors often dealt with poor handwriting or faded labels, so scanning seemed like a practical shortcut.


But in implementation, two blockers surfaced:
1. Development effort - implementing it would have pushed the release by two full sprints.
2. No data validation - the app didn’t have full access to warehouse databases, so it couldn’t confirm whether scanned numbers were valid.


Instead, I designed a final summary screen showing all entered data, with one-tap access to edit any step. This gave users confidence and control - without adding technical debt.

I broke the flow into steps - and clarity followed

Splitting the form into single-task screens gave users the guidance
they needed - and the confidence they lacked in the first version.

The first prototype tried to do it all at once: select defect, enter quantity, write a comment, assign the executor. It looked simple in Figma. But in testing, users froze.

There was no visual hierarchy. No progress bar. Just a dense worksheet. So I did the opposite of what we thought would make it faster: I slowed it down.
Each step became a separate screen, with a clear call to action and feedback. Instead of guessing what to do next, users moved forward with confidence.
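To make the pattern concrete, here is a minimal Kotlin sketch of the idea - illustrative only, with assumed step names rather than the production code. Each screen owns one input, the flow advances only on valid input, and the step's position drives the progress indicator.

```kotlin
// Step-based reporting flow modeled as a tiny state machine.
// One screen per step; each step validates its own input before advancing.
// Step names and fields are illustrative assumptions, not the production flow.
enum class Step { DEFECT_TYPE, QUANTITY, COMMENT, SUMMARY }

data class FlowState(
    val step: Step = Step.DEFECT_TYPE,
    val defectType: String? = null,
    val quantity: Int? = null,
    val comment: String? = null // optional, per the structured-reporting decision
)

// Advance only when the current step's input is valid; otherwise stay on
// the same screen so it can show inline feedback.
fun next(state: FlowState): FlowState = when (state.step) {
    Step.DEFECT_TYPE ->
        if (state.defectType.isNullOrBlank()) state
        else state.copy(step = Step.QUANTITY)
    Step.QUANTITY ->
        if ((state.quantity ?: 0) <= 0) state
        else state.copy(step = Step.COMMENT)
    Step.COMMENT -> state.copy(step = Step.SUMMARY) // comments may be skipped
    Step.SUMMARY -> state // final screen: review and submit
}

// Drives the progress indicator users were missing in v1.
fun progress(state: FlowState): Float =
    (state.step.ordinal + 1).toFloat() / Step.values().size
```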

The result?
Immediate clarity - and a form that finally felt usable in the field.

reporting flow v2

I tested the new flow - and users just… got it

After splitting the form into steps, users completed reports
nearly twice as fast - and called the experience “easy” and “clear.”

After splitting the form into steps, I ran another round of usability tests with four inspectors. This time, something unexpected happened: no one asked for help.
Even users who had struggled with the first version completed the report in under a minute. The average completion time dropped from 74 seconds to 38, and all testers described the flow as “easy” or “clear.”

What changed?
1. Clear step-by-step guidance
2. Built-in validation at each step
3. Progress feedback after each input

Key insight: The redesign didn’t just improve usability. It built trust. Users felt like the app finally “spoke their language.”

From prototype to production - and one bug that almost broke trust

A critical issue on older Android devices caused offline reports to be lost
- we fixed it fast, but it reminded us how fragile user trust can be.

We rolled out the new flow in phases - but within days, an issue surfaced that we hadn’t caught in testing.

On older Android devices, reports made in offline mode weren’t being saved - despite a “success” message. The problem? Some phones blocked background saving due to battery-optimization settings.

Together with devs, we:
- Switched to SQLite-based local storage (a simplified sketch of the save-then-sync pattern follows this list)
- Added pending-sync labels and visible error toasts
- Updated the UI to include a local “draft saved” confirmation in offline mode, so users could feel reassured even without network access.
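For illustration, here is a simplified Kotlin sketch of the save-then-sync pattern we landed on. It uses the generic sqlite-jdbc driver so it runs anywhere; the production app used Android’s own SQLite stack, and the table and column names here are assumptions, not the real schema. The key property: a report is persisted locally before any upload is attempted, so a dropped connection can never lose it.

```kotlin
import java.sql.Connection
import java.sql.DriverManager
import java.sql.Statement

// Offline-first report store (illustrative schema, not the production one).
class LocalReportStore(dbPath: String) {
    private val conn: Connection = DriverManager.getConnection("jdbc:sqlite:$dbPath")

    init {
        conn.createStatement().use { st ->
            st.execute(
                """CREATE TABLE IF NOT EXISTS reports (
                     id      INTEGER PRIMARY KEY AUTOINCREMENT,
                     payload TEXT    NOT NULL,          -- serialized report
                     synced  INTEGER NOT NULL DEFAULT 0 -- 0 = pending sync
                   )"""
            )
        }
    }

    /** Save locally first; the UI can show "draft saved" as soon as this returns. */
    fun saveDraft(payloadJson: String): Long =
        conn.prepareStatement(
            "INSERT INTO reports (payload) VALUES (?)",
            Statement.RETURN_GENERATED_KEYS
        ).use { st ->
            st.setString(1, payloadJson)
            st.executeUpdate()
            st.generatedKeys.use { keys -> keys.next(); keys.getLong(1) }
        }

    /** Reports still awaiting upload - this drives the pending-sync label. */
    fun pendingReports(): List<Pair<Long, String>> =
        conn.createStatement().use { st ->
            st.executeQuery("SELECT id, payload FROM reports WHERE synced = 0").use { rs ->
                buildList { while (rs.next()) add(rs.getLong("id") to rs.getString("payload")) }
            }
        }

    /** Mark a report as uploaded once the server confirms receipt. */
    fun markSynced(id: Long) {
        conn.prepareStatement("UPDATE reports SET synced = 1 WHERE id = ?").use { st ->
            st.setLong(1, id)
            st.executeUpdate()
        }
    }
}
```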

This fix mattered: 1 in 8 inspectors worked in places with weak or no signal. Catching it early kept trust intact.


Users saw a success message
- but the report silently disappeared.

From prototype to production – and one design assumption that broke real workflows

One inspection, many components – but the flow saved only the last one.

After launch, we discovered that the system logic overwrote previous components if users added multiple entries under the same order.

Together with developers and the product owner, we redefined the report structure so that one report could include multiple components.

I translated real-world workflows into a data model the dev team could implement without increasing cognitive load on users.

Together with devs, we:
- Reframed the report as the unit of work, not the component (see the model sketch after this list)
- Allowed users to add multiple components inside a single report
- Updated the UI to support the step-based flow
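As a rough sketch of the reworked model (Kotlin, with illustrative field names rather than the production schema): the report becomes the parent record and owns a list of component entries, so adding a second component appends instead of overwriting the first.

```kotlin
// Reworked model: the report is the unit of work and owns any number of
// component entries. Field names are illustrative assumptions.
data class ComponentEntry(
    val componentNumber: String,
    val defectType: String,
    val quantity: Int,
    val comment: String? = null // optional, per the structured-reporting decision
)

data class DefectReport(
    val orderNumber: String,
    val inspectorId: String,
    val components: MutableList<ComponentEntry> = mutableListOf()
) {
    // "Add another component" appends - it no longer replaces earlier entries.
    fun addComponent(entry: ComponentEntry) {
        components += entry
    }
}
```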

This issue wasn’t technical – it was a mismatch between how the system worked and how users expected it to work. Fixing it made the product align with real-world workflows.

reporting flow final version

Was it worth it? The numbers said yes - and so did the people

Within six weeks of launch, the share of reports submitted via the app jumped from 18% to 92%,
data quality improved, and trust was restored - with users and managers alike.

To measure success, we tracked usage metrics, report quality, and qualitative feedback from coordinators and managers - all within the first six weeks post-release.

Key results:
- 92% of defect reports now submitted via the app (up from 18%)
- Avg. completion time: 38 seconds per report (down from 74)
- Incomplete/missing reports down by 64%
- 5 team leads reported better traceability during client audits

And what about the remaining 8%?
- They mostly came from two older factory sites still using legacy workflows. We’ve already started onboarding them to the new system.

Roadmap

The work didn’t end with launch. Each release solved real problems - and revealed
new ones. Here’s how the product evolved, and what I learned along the way.

1. Even a small UX change - like breaking a form into steps - can dramatically improve accuracy and reduce friction. Structure builds trust.
2. People don’t just want to complete tasks - they want to feel progress. UX that supports autonomy and agency drives adoption.
3. Personalization isn’t a “nice to have” - it’s essential when serving a large, diverse workforce. Relevance reduces noise and increases trust.
4. The biggest UX wins don’t always come from features. Delivering value beyond daily operations can reframe the entire product.
What I Learned
Working as the only designer on this project gave me a hands-on crash course in end-to-end product work. I had to balance user needs with operational constraints, navigate feedback from field teams, and collaborate directly with developers to make the solution work in real-world conditions - not just in Figma.
I also learned that even simple UX decisions, like breaking one form into steps, can have a massive impact on trust and adoption. Structure reduces friction. Clear flows build confidence. And when people feel understood, they actually want to use the tool - not because they have to, but because it helps them.

I didn’t just fix a broken process - I helped shape a tool that people trust
and actually want to use. That’s the kind of design impact I want to keep making.

See my other case studies

Created a cross-platform soundtrack discovery flow for Apple TV+ that let users save songs without interrupting playback - a missing feature even Apple didn’t solve.
View Case Study
Designed a behavioral UX model to reduce no-shows by 42%, balancing fairness with client accountability without punishing regulars.

View Case Study

Looking for a designer who brings clarity and usability to your product?
I’m open to freelance, full-time, or creative collaborations.

Let’s make something meaningful together.
