Context
Government contracting officers deal with a mountain of regulations, policies, and institutional knowledge that change constantly. The Procurement AI Copilot is an AI-powered assistant I designed to help them cut through that complexity -- giving them a fast way to ask questions and get answers grounded in their own agency's documents.
The big design challenge here was trust. These users make high-stakes decisions, so the AI couldn't just give answers -- it needed to show its work. I designed the experience around source citations, confidence indicators, and document selection so users always know where an answer came from.
Key Design Areas
Chatbot Interface
A conversational interface where users ask questions and receive contextually relevant, AI-generated answers drawn directly from their agency's uploaded documents and tailored to its operational needs.
Dashboard Analytics
Key performance indicators including query resolution rates, frequently asked questions, and chat utilization metrics. Helps identify common queries and knowledge gaps for training and process improvement.
Document Management
Centralized document upload process that makes institutional knowledge easily accessible, maintaining an up-to-date repository of information crucial for efficient contracting.
Configurable Settings
Settings to tailor the AI Copilot experience to specific agency needs and procurement workflows.
The Experience
The Procurement AI Copilot landing page with the chat interface and collapsible Document Library

Expanding the Document Library to select which documents the AI references for answers

AI-generated response with structured, contextually relevant answers drawn from selected documents

Source Text citations with expandable references, page numbers, and similarity scoring for transparency
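The similarity scores shown alongside each citation typically come from comparing embeddings of the user's question against candidate document passages. As an illustrative sketch (the product's actual retrieval pipeline isn't described here, and the field names are hypothetical), ranking passages by cosine similarity might look like:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_citations(query_vec, passages):
    """Score each candidate passage against the query and sort
    highest-similarity first, keeping page metadata for display."""
    scored = [
        {**p, "similarity": round(cosine_similarity(query_vec, p["embedding"]), 2)}
        for p in passages
    ]
    return sorted(scored, key=lambda p: p["similarity"], reverse=True)
```

Exposing the score next to each citation is what lets users judge how strongly a passage actually supports the answer.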

Context
Here's the fundamental challenge: CW is supposed to be the source of truth for contract data, but it doesn't support two-way sync with FPDS-NG (the federal procurement reporting system). So when validation errors come back from FPDS-NG, users have to go fix things in CW and re-send -- but that workflow was really confusing before I redesigned it.
These aren't minor annoyances. Errors at this stage can block award releases, delay procurement timelines, and create real compliance risk.
The Problem
Contracting officers could successfully create a CAR (Contract Action Report) from CW, but if validation errors occurred after initial creation, users were forced into a fragmented workflow:
- Errors surfaced late and were difficult to interpret
- It was unclear where fixes needed to be made (CW vs FPDS-NG)
- Users often discovered issues at the point of award release, when time pressure was highest
- There was no clear path to reconcile discrepancies while reinforcing CW as the system of record
The experience increased rework, confusion, and delayed releases.
My Role
I led end-to-end UX design for this feature, partnering closely with product management and engineering. Since January 2025, I've owned the design direction for this enterprise solution at Appian. My responsibilities included:
- Discovery and stakeholder alignment
- User flow mapping across CW and FPDS-NG
- UI design for error states, sync actions, and validation modals
- Edge case documentation and design specs
Behind the Scenes
This wasn't a project where I could just jump into Figma. I spent a lot of time mapping out how data actually flows between CW and FPDS-NG before I designed a single screen.
System Mapping
I mapped every touchpoint between CW and FPDS-NG to understand where data could fall out of sync. This diagram became my reference point for every design decision.
Edge Case Documentation
I documented 15+ error scenarios with my PM -- from partial syncs to timeout failures. Each one needed a different message and recovery path, so I catalogued them all before designing.
Stakeholder Walkthroughs
I walked through each error flow with contracting officers to validate that my language made sense in their world. Their feedback reshaped how I framed error messages entirely.
Design Iteration
I went through multiple rounds of design, testing different information hierarchies in the validation modals. The final version groups errors by severity so users tackle the most critical issues first.
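The severity grouping in the final modal comes down to a simple ordering rule. As a hedged sketch -- the severity taxonomy below is an assumption for illustration, not the product's actual levels -- sorting errors so the most critical issues surface first might look like:

```python
# Assumed severity taxonomy (illustrative, not the product's actual levels).
SEVERITY_ORDER = {"critical": 0, "error": 1, "warning": 2, "info": 3}

def order_by_severity(errors):
    """Sort validation errors so the most critical issues appear first,
    with unknown severities falling to the end of the list."""
    return sorted(
        errors,
        key=lambda e: SEVERITY_ORDER.get(e["severity"], len(SEVERITY_ORDER)),
    )
```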
Design Goals
- Reinforce CW as the source of truth
- Surface validation errors at the moment users can act
- Make errors understandable, scannable, and actionable
- Reduce cognitive load during high-pressure release workflows
- Design for complex system states, not just happy paths
Key UX Decisions
1. Introduce an explicit "Sync CAR" action
Rather than treating updates as a background process, I partnered with my product manager to introduce a clear "Sync CAR" action. This made data transfer intentional and helped users understand when CW data would overwrite CAR values.

2. Surface validation errors immediately after sync
Previously, validation errors were buried in audit views. I identified this as a critical pain point and designed a post-sync validation modal that appears immediately after syncing, keeping users in context and reducing back-tracking.
The modal surfaces:
- Total number of errors upfront
- Clear instructions to resolve issues in CW first
- A clear distinction between CW-related and FPDS-NG-only errors

3. Separate CW vs FPDS-NG errors to clarify ownership
To reduce confusion, errors are separated into tabs:
- CW + FPDS-NG errors (fixable in CW)
- FPDS-NG only errors (informational or external)
This helped users quickly understand which issues they could act on and which required external resolution.
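The tab split is essentially a partition on error ownership. As an illustrative sketch (the `fixable_in_cw` flag is a hypothetical field name, assumed here to come from the backend validation response), the logic behind the two tabs could be:

```python
def split_errors_by_ownership(errors):
    """Partition validation errors into the two post-sync tabs:
    issues the user can resolve in CW vs. FPDS-NG-only (external) issues."""
    cw_fixable = [e for e in errors if e.get("fixable_in_cw")]
    fpds_only = [e for e in errors if not e.get("fixable_in_cw")]
    return {"CW + FPDS-NG": cw_fixable, "FPDS-NG only": fpds_only}
```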

4. Design for scannability and actionability
Error lists were broken down by CAR categories, with:
- Error counts per category
- Collapsible sections
- Clear hierarchy to prevent overwhelming users
This structure reduced cognitive load while still supporting complex validation requirements.
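The category breakdown above can be sketched as a simple grouping step. This is a hedged illustration -- the category values and the `collapsed` flag are assumptions about how the UI might consume the data, not the actual implementation:

```python
from collections import defaultdict

def group_errors_by_category(errors):
    """Group errors by CAR category, with per-category counts and a
    collapsed-by-default flag for the collapsible UI sections."""
    grouped = defaultdict(list)
    for e in errors:
        grouped[e["category"]].append(e)
    return [
        {"category": cat, "count": len(items), "errors": items, "collapsed": True}
        for cat, items in sorted(grouped.items())
    ]
```

Showing the count on each collapsed section lets users gauge the scope of the problem before expanding anything.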

Edge Cases & System Constraints
A major part of this work involved designing for non-ideal states, including partial syncs, timeout failures, and other error scenarios.
I worked through each state with engineering to define precise UX logic that accurately reflects backend rules while maintaining user trust.

Final Experience
The experience I designed allows contracting officers to sync CAR data intentionally, see validation errors immediately after syncing, and resolve in CW the issues they own -- with a clear view of which errors require external resolution.

Impact
While this feature launched incrementally, my design work delivered meaningful UX improvements: clearer error ownership, fewer surprises at award release, and a workflow that reinforces CW as the system of record.
Reflection
This project really solidified something I believe strongly: designing for what happens when things go wrong matters just as much as designing for the happy path. In compliance-heavy enterprise systems, a clear error state can be the difference between a user resolving an issue in minutes versus spending hours stuck.
If I had more time, the next thing I'd tackle is inline validation right within the CW fields -- so users could fix issues without ever leaving the page they're already on.
Why this work matters
Through this project, I demonstrated my ability to:
- Lead design across multiple interconnected enterprise systems
- Navigate complex technical constraints independently
- Translate backend rules into clear, humane UX patterns
- Make high-stakes compliance workflows safer and more understandable