Contract Writing dashboard showing My Workspace with procurement overview and Procurement AI Copilot with document library and chat interface
Enterprise UX · Usability Testing · Government · Appian

Contract Writing

I lead UX for Appian's government procurement solution -- setting the design direction and working across product and engineering to make contracting workflows faster and less error-prone for federal contracting officers.

Role

Lead UX Designer

Timeline

Jan 2025 – Present

Team

Product, Engineering

Platform

Appian

Product Overview

Appian Contract Writing (CW) is a government acquisition management solution that helps federal agencies move through the solicitation and contract creation process more efficiently. It connects data across systems, automates routine workflows, and gives procurement teams a single place to manage everything.

The goal is straightforward: help contracting officers spend less time fighting their tools and more time doing the actual work. CW pulls together data, documents, tasks, and status into one unified interface instead of forcing people to jump between disconnected systems.

By 2026, Contract Writing has scaled to support three major agencies, including the USDA Forest Service, where it now manages $2.5B+ in annual obligations and 5,000+ contracts and agreements.

As the lead UX designer on this product, my role is to own the design direction, partner with product and engineering on strategy, and shape the experience across multiple feature areas.

Key Capabilities

Automate Key Processes

Configurable checklists, tasks, and review processes that automatically assign work based on solicitation and award data.

Increase Speed & Accuracy

Guided data collection wizards, intelligent classification code search, and generative AI capabilities for contract text.

Leverage AI

Procurement AI Copilot for document interaction, summaries, and knowledge assistance throughout the contracting process.

Integrate with Enterprise Tools

Seamless connections to SAM.gov, FPDS-NG, DocuSign, Government Source Selection, and Vendor Management.

Provide Transparency

A single centralized view of contracts combining data, documents, tasks, and status with procurement analytics.

Ensure Compliance

Built-in security, complete audit trails, and support for Other Transaction Agreements (OTs).

Features

Key Design Work

Click into each feature to explore the design process and decisions

Procurement AI Copilot

Context

Government contracting officers deal with a mountain of regulations, policies, and institutional knowledge that changes constantly. The Procurement AI Copilot is an AI-powered assistant I designed to help them cut through that complexity -- giving them a fast way to ask questions and get answers grounded in their own agency's documents.

The big design challenge here was trust. These users make high-stakes decisions, so the AI couldn't just give answers -- it needed to show its work. I designed the experience around source citations, confidence indicators, and document selection so users always know where an answer came from.

Key Design Areas

Chatbot Interface

A conversational interface where users ask questions and receive AI-driven, contextually relevant responses tailored to the agency's unique operational needs, drawing directly from uploaded agency documents.

Dashboard Analytics

Key performance indicators including query resolution rates, frequently asked questions, and chat utilization metrics. Helps identify common queries and knowledge gaps for training and process improvement.

Document Management

Centralized document upload process that makes institutional knowledge easily accessible, maintaining an up-to-date repository of information crucial for efficient contracting.

Configurable Settings

Settings to tailor the AI Copilot experience to specific agency needs and procurement workflows.

The Experience

The shipped Copilot experience spans document selection, conversational Q&A, and source transparency -- all within a single interface.

Chat Interface

The landing state opens with a focused prompt and a collapsible Document Library. Users can scope the AI's context before asking their first question, reducing irrelevant answers.

Procurement AI Copilot landing page showing the main chat interface with prompt 'What question can we help you answer?' and collapsed Document Library sidebar

Landing state -- clean prompt with the Document Library collapsed by default.

Document Library expanded showing selectable PDF documents with checkmarks, alongside the chat interface ready for questions

Expanded Document Library -- users select which documents the AI references.

Responses & Source Transparency

AI responses are structured with clear formatting and followed by expandable source citations. Each source includes the document name, page number, and a similarity score so users can verify answers without leaving the conversation.

Procurement Copilot showing a detailed AI response to 'what is an IDP model?' with bullet points covering Automated Data Extraction, AI-Powered Technology, Iterative Improvement, Integration with Processes, and Human-in-the-Loop Capability

Structured AI response with contextually relevant bullet points drawn from selected documents.

Source Text section showing expandable document references with page numbers and similarity indicators ranging from High to Medium, enabling users to verify AI-generated answers

Source citations with page numbers and similarity scoring for full transparency.
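To make the citation pattern concrete, here's a minimal sketch of the data each citation could carry and how a raw similarity score might map to the High/Medium labels shown in the UI. The field names and thresholds are hypothetical, not the actual Contract Writing implementation.

    // Hypothetical sketch -- field names and thresholds are illustrative,
    // not the actual Contract Writing data model.
    interface SourceCitation {
      documentName: string;  // e.g. "Agency Acquisition Guide.pdf"
      pageNumber: number;    // page the cited passage came from
      similarity: number;    // retrieval similarity score, 0 to 1
    }

    // Map a raw similarity score to the labels users see in the Source Text section.
    function similarityLabel(score: number): 'High' | 'Medium' | 'Low' {
      if (score >= 0.8) return 'High';
      if (score >= 0.5) return 'Medium';
      return 'Low';
    }

    // Example: the one-line citation shown under an AI response.
    function formatCitation(c: SourceCitation): string {
      return `${c.documentName}, p. ${c.pageNumber} (similarity: ${similarityLabel(c.similarity)})`;
    }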

My Influence & Tradeoffs

Championing WCAG Accessibility Standards

Government tools must serve all users, including those with disabilities. I pushed to ensure the entire Procurement AI Copilot followed WCAG accessibility guidelines from the start -- not as an afterthought. That meant proper keyboard navigation for the chat interface, screen reader support for AI responses and citations, sufficient color contrast ratios, and focus management when new content loaded. I made the case for prioritizing this on the backlog, since building it in from day one was far more efficient than retrofitting later -- and it resulted in a more usable experience for everyone.
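As a rough illustration of two of these behaviors -- announcing new responses to screen readers and managing focus when content loads -- here's a minimal DOM sketch in TypeScript. The element IDs and structure are assumptions for the example, not CW's actual markup.

    // Hypothetical sketch of two accessibility behaviors described above.
    // Element IDs and structure are illustrative, not CW's actual markup.

    // A polite live region lets screen readers announce each new AI response
    // without interrupting whatever the user is currently reading.
    const liveRegion = document.getElementById('copilot-live-region');
    liveRegion?.setAttribute('aria-live', 'polite');
    liveRegion?.setAttribute('role', 'status');

    function announceResponse(summary: string): void {
      if (liveRegion) liveRegion.textContent = summary;
    }

    // When a response finishes loading, move focus to it so keyboard and
    // screen reader users land on the new content instead of being stranded.
    function focusNewResponse(responseEl: HTMLElement): void {
      responseEl.setAttribute('tabindex', '-1'); // focusable, but not in the tab order
      responseEl.focus();
    }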

FPDS-NG CAR Sync & Validation

Context

Here's the fundamental challenge: CW is supposed to be the source of truth for contract data, but it has no two-way sync with the Federal Procurement Data System - Next Generation (FPDS-NG). So when validation errors come back from FPDS-NG, users have to fix things in CW and re-send -- a workflow that was genuinely confusing before I redesigned it.

These aren't minor annoyances. Errors at this stage can block award releases, delay procurement timelines, and create real compliance risk.

The Problem

Contracting officers could successfully create a Contract Action Report (CAR) from CW, but if validation errors occurred after initial creation, users were forced into a fragmented workflow:

  • Errors surfaced late and were difficult to interpret
  • It was unclear where fixes needed to be made (CW vs FPDS-NG)
  • Users often discovered issues at the point of award release, when time pressure was highest
  • There was no clear path to reconcile discrepancies while reinforcing CW as the system of record

The result was more rework, more confusion, and delayed releases.

My Role

I led end-to-end UX design for this feature, partnering closely with product management and engineering. My responsibilities included:

  • Discovery and stakeholder alignment
  • User flow mapping across CW and FPDS-NG
  • UI design for error states, sync actions, and validation modals
  • Edge case documentation and design specs

Behind the Scenes

This wasn't a project where I could just jump into Figma. I spent a lot of time mapping out how data actually flows between CW and FPDS-NG before I designed a single screen.

System Mapping

I mapped every touchpoint between CW and FPDS-NG to understand where data could fall out of sync. This diagram became my reference point for every design decision.

Edge Case Documentation

I documented 15+ error scenarios with my PM -- from partial syncs to timeout failures. Each one needed a different message and recovery path, so I catalogued them all before designing.

Stakeholder Walkthroughs

I walked through each error flow with contracting officers to validate that my language made sense in their world. Their feedback reshaped how I framed error messages entirely.

Design Iteration

I went through multiple rounds of design, testing different information hierarchies in the validation modals. The final version groups errors by severity so users tackle the most critical issues first.

Design Goals

1. Reinforce CW as the source of truth
2. Surface validation errors at the moment users can act
3. Make errors understandable, scannable, and actionable
4. Reduce cognitive load during high-pressure release workflows
5. Design for complex system states, not just happy paths

The Experience

The shipped error recovery flow spans sync initiation, validation surfacing, error resolution, and award release -- designed to keep contracting officers confident at every step.

Sync & Validation

Rather than treating updates as a background process, I introduced an explicit "Sync CAR" action that makes data transfer intentional. After syncing, a validation modal appears immediately -- surfacing the total error count, clear instructions to resolve issues in CW first, and a distinction between CW-fixable and FPDS-NG-only errors.

Integrations panel showing FPDS-NG with Draft status and SAM.gov with Not Published status, with a dropdown menu highlighting the Sync CAR action

Explicit "Sync CAR" action -- making data transfer intentional, not automatic.

Validation Run modal showing 43 errors need attention, with Contract Writing and FPDS-NG tabs separating errors by source

Post-sync validation modal -- errors surfaced immediately, grouped by source system.

Error Ownership & Scannability

Errors are separated into tabs -- CW + FPDS-NG (fixable in CW) vs FPDS-NG only (informational or external) -- so users immediately know which issues they can act on. Within each tab, errors are grouped by CAR category with counts and collapsible sections to reduce cognitive load.

Validation Run modal showing Contract Writing tab with 6 errors, separating CW-fixable errors from FPDS-NG only errors

CW tab -- errors the user can resolve directly in Contract Writing.

Validation Run modal showing FPDS-NG tab with 37 errors organized into collapsible categories

FPDS-NG tab -- errors grouped by category with counts and collapsible detail.
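A rough sketch of the grouping logic behind this modal -- splitting errors by which system can resolve them, then bucketing by CAR category with counts. The ValidationError shape and category names are assumptions for illustration, not the shipped data model.

    // Hypothetical sketch -- the error shape and categories are illustrative.
    interface ValidationError {
      id: string;
      category: string;      // CAR category, e.g. "Dates" or "Dollar Values"
      message: string;
      fixableInCW: boolean;  // true = resolvable in Contract Writing
    }

    // Group errors by category so each collapsible section can show a count.
    function groupByCategory(errors: ValidationError[]): Record<string, ValidationError[]> {
      return errors.reduce<Record<string, ValidationError[]>>((acc, err) => {
        (acc[err.category] ??= []).push(err);
        return acc;
      }, {});
    }

    // Split into the two tabs shown in the modal, keeping the total for the header.
    function buildValidationModel(errors: ValidationError[]) {
      return {
        totalCount: errors.length,
        contractWritingTab: groupByCategory(errors.filter(e => e.fixableInCW)),
        fpdsOnlyTab: groupByCategory(errors.filter(e => !e.fixableInCW)),
      };
    }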

Edge Cases & Award Release

A major part of this work involved designing for non-ideal states: CARs that can't be updated if finalized in FPDS-NG, mandatory errors blocking award release, and sync actions that are hidden or disabled depending on system state. I worked through each scenario with engineering to define precise UX logic that reflects backend rules while maintaining user trust.

Release Award dialog showing an Award Release Restricted warning because the CAR is not validated, with a link to view validation errors and an override option

Award release blocked -- clear warning with path to view and resolve validation errors.

CAR Sync Successful confirmation dialog showing the CAR has been successfully updated in FPDS-NG

Success state -- clear confirmation with a link to view the synced CAR in FPDS-NG.
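To show the kind of state-to-UI logic this involved, here's a simplified sketch of how the Sync CAR action and award release eligibility could be derived from system state. The status values and rules are illustrative assumptions, not the exact backend rules.

    // Hypothetical sketch -- status values and rules are illustrative,
    // not the exact CW/FPDS-NG business rules.
    type CarStatus = 'Draft' | 'Pending' | 'Finalized';

    interface CarState {
      status: CarStatus;
      mandatoryErrorCount: number;  // errors that must be resolved before release
      optionalErrorCount: number;   // informational errors that only warn
    }

    // Finalized CARs can't be updated, so the action disappears entirely;
    // an in-flight sync disables it rather than hiding it.
    function syncCarAction(state: CarState): 'hidden' | 'disabled' | 'enabled' {
      if (state.status === 'Finalized') return 'hidden';
      if (state.status === 'Pending') return 'disabled';
      return 'enabled';
    }

    // Mandatory validation errors block award release; optional ones only warn.
    // (The shipped design also provides an explicit override path for authorized users.)
    function canReleaseAward(state: CarState): boolean {
      return state.mandatoryErrorCount === 0;
    }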

My Influence & Tradeoffs

Advocating for Comprehensive Error Handling

The initial proposal was a simple error toast notification -- a reasonable starting point given timeline constraints. However, I saw an opportunity to do more. Government contracting has real compliance consequences, and I believed users deserved clear guidance when something went wrong. I made the case for a full validation modal with error categorization, sharing research showing that ambiguous error states were a major driver of support tickets in similar systems. The team agreed the investment was worthwhile, and post-launch we've seen a measurable drop in user-reported confusion.

Collaborating on Edge Case Coverage

For edge cases like finalized CARs and mandatory vs. optional errors, the team initially considered handling them with generic messages to keep scope manageable. I collaborated with engineering to find a middle ground: distinct UX for scenarios where user actions differed significantly, while keeping implementation pragmatic. This partnership resulted in clearer error states that eliminated the "what do I do now?" confusion without overcomplicating the codebase.

Impact

  • Reduced ambiguity around error ownership and resolution
  • Reinforced CW as the authoritative system of record
  • Improved user confidence during compliance-critical workflows
  • Established a reusable validation pattern across CW

Reflection

1. Error states deserve as much design rigor as happy paths. In compliance-heavy enterprise systems, a clear error state can be the difference between resolving an issue in minutes versus spending hours stuck.

2. Separating error ownership reduces cognitive load. Tabbing errors by system (CW vs FPDS-NG) was a simple structural decision, but it fundamentally changed how confidently users could triage and resolve issues.

3. Next step: inline validation. If I had more time, I'd explore surfacing validation errors directly within CW fields -- so users could fix issues without leaving the page they're already on.