Visual defects slipped through every sprint cycle. Off-brand buttons, misaligned fields, and inconsistent spacing reappeared even with bi-weekly team reviews.
By the time we consolidated everything, we had identified three years' worth of backlogged defects. Addressing them would require nearly a full year of development work.
We weren't just missing pixels. We were missing a strategic process.
I approached this as a communication and alignment problem. Designers relied on visual observation, developers thought in systems and code, and QA operated through scripts and test cases. Each group evaluated quality differently.
To close that gap, we needed a shared methodology that could operate in real time and integrate directly into existing workflows, without adding friction or slowing delivery.
I began my career as a graphic designer, trained to obsess over typography, spacing, and composition. Years of studying and archiving imagery sharpened my eye for details others often pass by. That way of seeing became a way of understanding, and it continues to shape how I approach design today.
Designers, developers, and QA all operated under different constraints and priorities. To address the problem at scale, I needed to translate my intuition into something shared and actionable. The challenge was not teaching taste, but systematizing my understanding so it could be clearly communicated and used across the team.
How might we create a shared framework that aligns design, development, and QA around a common way of seeing and evaluating quality before issues reach production?
Functionality: Interactive elements that fail to respond or behave unexpectedly, breaking core user flows.
Missing element: Elements that should exist per design specs but are absent from the implementation.
Content mismatch: Text or media content that doesn't match approved copy or assets.
Design system deviation: Components that deviate from established design system tokens or patterns.
Misalignment: Elements positioned incorrectly relative to the design grid or neighboring components.
Accessibility: Screen reader issues, such as incorrect module sequencing and orientation. Screen readers like JAWS typically read left to right and top to bottom, so misordered markup disrupts the reading flow.
Color: Color values that don't match design specifications or accessibility standards.
Typography: Type that deviates from specified font families, sizes, or weights.
Miscellaneous: Anomalies and otherwise undefined issues; use this category flexibly to fit your project's needs.
Spacing: Margins or padding that don't match design tokens or the spacing scale.
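A glossary like this is most useful when every tool and team references the same category IDs. As a minimal sketch (the IDs and structure here are my own illustration, not the team's actual schema), the taxonomy could be encoded as shared data:

```typescript
// Hypothetical encoding of the defect glossary as shared data, so design,
// development, and QA tooling can all tag issues with the same category IDs.
type DefectCategory = {
  id: string;
  description: string;
};

const DEFECT_GLOSSARY: DefectCategory[] = [
  { id: "functionality", description: "Interactive elements that fail to respond or break core flows." },
  { id: "missing-element", description: "Elements in the design specs but absent from implementation." },
  { id: "content-mismatch", description: "Text or media that doesn't match approved copy or assets." },
  { id: "system-deviation", description: "Components that deviate from design system tokens or patterns." },
  { id: "misalignment", description: "Elements positioned off the grid or relative to neighbors." },
  { id: "accessibility", description: "Screen reader issues such as incorrect reading order." },
  { id: "color", description: "Color values that miss specs or accessibility standards." },
  { id: "typography", description: "Wrong font family, size, or weight." },
  { id: "miscellaneous", description: "Anomalies and otherwise undefined issues." },
  { id: "spacing", description: "Margins or padding off the spacing scale." },
];

// Look up a category by ID when tagging a defect report;
// returns undefined for unknown IDs so callers can flag them.
function categorize(id: string): DefectCategory | undefined {
  return DEFECT_GLOSSARY.find((c) => c.id === id);
}
```

Keeping the glossary as data rather than prose means bug trackers, QA scripts, and design tooling can validate tags against one source of truth.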
Over time, we built a shared index of visual and interaction defects based on real issues we kept seeing in production. Some were common and repeatable, others more specific, but naming them gave teams a faster way to recognize the same problem together.
The glossary helped designers, developers, and QA align on what they were seeing, while also giving designers more exposure to how issues showed up in code. By grounding visual quality in concrete examples, teams could communicate more clearly, spot issues earlier, and move faster with confidence.
While the defect glossary and triage tools improved how teams identified and managed issues, I wanted to prevent many of these problems from reaching development in the first place. To support this, I developed a custom Figma linter plugin that validates design files directly within Figma. It scans for structural inconsistencies, component misuse, and design system deviations, helping teams catch production risks earlier and ensure designs are development-ready before handoff.
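To illustrate the kind of rule such a linter might run: the real plugin walks Figma's node tree through the Plugin API, but the check below is a simplified, self-contained sketch that models frames as plain objects and flags padding values that fall outside an assumed 4px-based spacing scale (the scale values and node shape are my assumptions for illustration).

```typescript
// Simplified sketch of one lint rule: flag padding values that are not
// on the spacing scale. A real Figma plugin would traverse actual nodes
// via the Plugin API; here nodes are modeled as plain objects.
interface FrameLike {
  name: string;
  paddingLeft: number;
  paddingTop: number;
  children?: FrameLike[];
}

// Assumed spacing tokens, in px.
const SPACING_SCALE = [0, 4, 8, 12, 16, 24, 32, 48, 64];

// Recursively collect human-readable issues for the node and its children.
function lintSpacing(node: FrameLike, issues: string[] = []): string[] {
  for (const value of [node.paddingLeft, node.paddingTop]) {
    if (!SPACING_SCALE.includes(value)) {
      issues.push(`${node.name}: padding ${value}px is off the spacing scale`);
    }
  }
  for (const child of node.children ?? []) {
    lintSpacing(child, issues);
  }
  return issues;
}
```

Running a rule like this before handoff turns "does the spacing look right?" from a subjective review question into a deterministic check.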