Enterprise UX · Usability Testing · Government · Appian

Contract Writing

I lead UX for Appian's government procurement solution -- setting the design direction and working across product and engineering to make contracting workflows faster and less error-prone for federal officers.

Role

Lead UX Designer

Timeline

Jan 2025 – Present

Team

Product, Engineering

Platform

Appian

Product Overview

Appian Contract Writing (CW) is a government acquisition management solution that helps federal agencies move through the solicitation and contract creation process more efficiently. It connects data across systems, automates routine workflows, and gives procurement teams a single place to manage everything.

The goal is straightforward: help contracting officers spend less time fighting their tools and more time doing the actual work. CW pulls together data, documents, tasks, and status into one unified interface instead of forcing people to jump between disconnected systems.

As the lead UX designer on this product, my role is to own the design direction, partner with product and engineering on strategy, and shape the experience across multiple feature areas.

Key Capabilities

Automate Key Processes

Configurable checklists, tasks, and review processes that automatically assign work based on solicitation and award data.

Increase Speed & Accuracy

Guided data collection wizards, intelligent classification code search, and generative AI capabilities for contract text.

Leverage AI

Procurement AI Copilot for document interaction, summaries, and knowledge assistance throughout the contracting process.

Integrate with Enterprise Tools

Seamless connections to SAM.gov, FPDS-NG, DocuSign, Government Source Selection, and Vendor Management.

Provide Transparency

A single centralized view of contracts combining data, documents, tasks, and status with procurement analytics.

Ensure Compliance

Built-in security, complete audit trails, and support for Other Transaction Agreements (OTs).

Features

Key Design Work


Context

Government contracting officers deal with a mountain of regulations, policies, and institutional knowledge that changes constantly. The Procurement AI Copilot is an AI-powered assistant I designed to help them cut through that complexity -- giving them a fast way to ask questions and get answers grounded in their own agency's documents.

The big design challenge here was trust. These users make high-stakes decisions, so the AI couldn't just give answers -- it needed to show its work. I designed the experience around source citations, confidence indicators, and document selection so users always know where an answer came from.

Key Design Areas

Chatbot Interface

A conversational interface where users ask questions and receive AI-driven, contextually relevant responses tailored to the agency's unique operational needs, drawing directly from uploaded agency documents.

Dashboard Analytics

Key performance indicators including query resolution rates, frequently asked questions, and chat utilization metrics. Helps identify common queries and knowledge gaps for training and process improvement.

Document Management

Centralized document upload process that makes institutional knowledge easily accessible, maintaining an up-to-date repository of information crucial for efficient contracting.

Configurable Settings

Settings to tailor the AI Copilot experience to specific agency needs and procurement workflows.

The Experience

The Procurement AI Copilot landing page with the chat interface and collapsible Document Library

Procurement AI Copilot landing page showing the main chat interface with prompt 'What question can we help you answer?' and collapsed Document Library sidebar

Expanding the Document Library to select which documents the AI references for answers

Document Library expanded showing selectable PDF documents with checkmarks, alongside the chat interface ready for questions

AI-generated response with structured, contextually relevant answers drawn from selected documents

Procurement Copilot showing a detailed AI response to 'what is an IDP model?' with bullet points covering Automated Data Extraction, AI-Powered Technology, Iterative Improvement, Integration with Processes, and Human-in-the-Loop Capability

Source Text citations with expandable references, page numbers, and similarity scoring for transparency

Source Text section showing expandable document references with page numbers and similarity indicators ranging from High to Medium, enabling users to verify AI-generated answers
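To make the similarity indicators concrete, here's a minimal sketch of how a retrieval score could be bucketed into the qualitative labels shown beside each citation. This is illustrative only -- the threshold values and the "Low" tier are my assumptions, not the product's actual logic.

```typescript
// Illustrative sketch: maps a retrieval similarity score (0 to 1) to the
// qualitative label shown next to each Source Text citation.
// The 0.8 / 0.5 cutoffs and the "Low" tier are assumptions.
type SimilarityLabel = "High" | "Medium" | "Low";

function labelSimilarity(score: number): SimilarityLabel {
  if (score >= 0.8) return "High";
  if (score >= 0.5) return "Medium";
  return "Low";
}

// A citation retrieved with score 0.86 would display as "High"
console.log(labelSimilarity(0.86)); // "High"
```

Showing a coarse label instead of the raw score keeps the signal scannable while still letting users judge how much weight to give each source.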

Context

Here's the fundamental challenge: CW is supposed to be the source of truth for contract data, but it doesn't have a two-way sync with FPDS-NG (the federal reporting system). So when validation errors come back from FPDS-NG, users have to fix things in CW and re-send -- and that workflow was deeply confusing before I redesigned it.

These aren't minor annoyances. Errors at this stage can block award releases, delay procurement timelines, and create real compliance risk.

The Problem

Contracting officers could successfully create a CAR (Contract Action Report) from CW, but if validation errors occurred after initial creation, users were forced into a fragmented workflow:

  • Errors surfaced late and were difficult to interpret
  • It was unclear where fixes needed to be made (CW vs FPDS-NG)
  • Users often discovered issues at the point of award release, when time pressure was highest
  • There was no clear path to reconcile discrepancies while reinforcing CW as the system of record

The experience increased rework and confusion, and delayed releases.

My Role

I led end-to-end UX design for this feature, partnering closely with product management and engineering. Since January 2025, I've owned the design direction for this enterprise solution at Appian. My responsibilities included:

  • Discovery and stakeholder alignment
  • User flow mapping across CW and FPDS-NG
  • UI design for error states, sync actions, and validation modals
  • Edge case documentation and design specs

Behind the Scenes

This wasn't a project where I could just jump into Figma. I spent a lot of time mapping out how data actually flows between CW and FPDS-NG before I designed a single screen.

System Mapping

I mapped every touchpoint between CW and FPDS-NG to understand where data could fall out of sync. This diagram became my reference point for every design decision.

Edge Case Documentation

I documented 15+ error scenarios with my PM -- from partial syncs to timeout failures. Each one needed a different message and recovery path, so I catalogued them all before designing.

Stakeholder Walkthroughs

I walked through each error flow with contracting officers to validate that my language made sense in their world. Their feedback reshaped how I framed error messages entirely.

Design Iteration

I went through multiple rounds of design, testing different information hierarchies in the validation modals. The final version groups errors by severity so users tackle the most critical issues first.
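As a rough sketch of the final hierarchy (not the actual implementation), the severity-first ordering could be modeled like this -- the tier names are illustrative assumptions:

```typescript
// Illustrative sketch: ordering error groups so the most critical appear
// first in the validation modal. The severity tiers are assumptions.
type Severity = "blocking" | "mandatory" | "warning";

const severityRank: Record<Severity, number> = {
  blocking: 0,  // prevents award release; shown first
  mandatory: 1, // required fields; shown next
  warning: 2,   // informational; shown last
};

function sortBySeverity<T extends { severity: Severity }>(errors: T[]): T[] {
  // Copy before sorting so the original list isn't mutated
  return [...errors].sort((a, b) => severityRank[a.severity] - severityRank[b.severity]);
}
```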

Design Goals

1. Reinforce CW as the source of truth
2. Surface validation errors at the moment users can act
3. Make errors understandable, scannable, and actionable
4. Reduce cognitive load during high-pressure release workflows
5. Design for complex system states, not just happy paths

Key UX Decisions

1. Introduce an explicit "Sync CAR" action

Rather than treating updates as a background process, I partnered with my product manager to introduce a clear "Sync CAR" action. This made data transfer intentional and helped users understand when CW data would overwrite CAR values.

Integrations panel showing FPDS-NG with Draft status and SAM.gov with Not Published status, with a dropdown menu highlighting the Sync CAR action
2. Surface validation errors immediately after sync

Previously, validation errors were buried in audit views. I identified this as a critical pain point and designed a post-sync validation modal that appears immediately after syncing, keeping users in context and reducing back-tracking.

The modal surfaces:

  • Total number of errors upfront
  • Clear instructions to resolve issues in CW first
  • A clear distinction between CW-related and FPDS-NG-only errors
Validation Run modal showing 43 errors need attention, with Contract Writing and FPDS-NG tabs separating errors by source, and expandable categories revealing specific missing mandatory elements
3. Separate CW vs FPDS-NG errors to clarify ownership

To reduce confusion, errors are separated into tabs:

  • CW + FPDS-NG errors (fixable in CW)
  • FPDS-NG only errors (informational or external)

This helped users quickly understand which issues they could act on and which required external resolution.

Validation Run modal showing Contract Writing tab with 6 errors, separating CW-fixable errors from FPDS-NG only errors across expandable categories
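To make the tab split concrete, here's a minimal sketch of how error ownership could be modeled -- the type and field names are illustrative assumptions, not the product's actual schema:

```typescript
// Illustrative sketch: splitting validation errors returned from FPDS-NG
// into the two tabs users see. Field names are assumptions.
interface ValidationError {
  code: string;
  message: string;
  fixableInCW: boolean; // whether the underlying field is editable in CW
}

function splitByOwnership(errors: ValidationError[]) {
  return {
    contractWriting: errors.filter((e) => e.fixableInCW), // "CW + FPDS-NG" tab
    fpdsOnly: errors.filter((e) => !e.fixableInCW),       // "FPDS-NG only" tab
  };
}
```

The point of the split is that ownership is decided once, upstream, so the UI never asks users to figure out for themselves which system a fix belongs in.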
4. Design for scannability and actionability

Error lists were broken down by CAR categories, with:

  • Error counts per category
  • Collapsible sections
  • Clear hierarchy to prevent overwhelming users

This structure reduced cognitive load while still supporting complex validation requirements.

Validation Run modal showing FPDS-NG tab with 37 errors organized into collapsible categories like Competition, Product Or Service Information, and Relevant Contract Dates with specific error details expanded
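The category breakdown above can be sketched roughly like this -- the data shape is an assumption, and the category names simply mirror those in the screenshot:

```typescript
// Illustrative sketch: grouping errors by CAR category so each collapsible
// section header can show a per-category count.
interface CarError {
  category: string; // e.g. "Competition", "Relevant Contract Dates"
  message: string;
}

function groupByCategory(errors: CarError[]): Map<string, CarError[]> {
  const groups = new Map<string, CarError[]>();
  for (const err of errors) {
    const bucket = groups.get(err.category) ?? [];
    bucket.push(err);
    groups.set(err.category, bucket);
  }
  return groups;
}
// Each entry renders as a collapsible header: "<category> (<count>)"
```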

Edge Cases & System Constraints

A major part of this work involved designing for non-ideal states, including:

  • CAR cannot be updated if finalized in FPDS-NG
  • Mandatory validation errors blocking award release
  • Sync actions hidden or disabled depending on system state
  • Validation success vs partial failure states

I worked through each state with engineering to define precise UX logic that accurately reflects backend rules while maintaining user trust.
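The state-dependent availability of the Sync CAR action can be sketched as a small decision function -- the states and rules here are assumptions distilled from the edge cases above, not the actual backend logic:

```typescript
// Illustrative sketch: deciding whether "Sync CAR" is shown, disabled, or
// hidden based on system state. States and rules are assumptions.
interface CarState {
  finalizedInFpds: boolean;   // finalized CARs can no longer be updated
  hasUnsyncedChanges: boolean; // CW data differs from the CAR
}

function syncCarAction(state: CarState): "enabled" | "disabled" | "hidden" {
  if (state.finalizedInFpds) return "hidden";       // action is impossible
  if (!state.hasUnsyncedChanges) return "disabled"; // nothing to sync
  return "enabled";
}
```

Encoding the rules this explicitly is what let design and engineering agree on exactly when the action should appear, rather than leaving each state to interpretation.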

Release Award dialog showing an Award Release Restricted warning because the CAR is not validated, with a link to view validation errors, an override restriction checkbox, reason for override field, and signature details

Final Experience

The experience I designed allows contracting officers to:

  • Sync updated CW data to an existing CAR
  • Immediately view validation results
  • Clearly understand where fixes must occur
  • Resolve errors earlier in the workflow
  • Proceed with greater confidence during award release
CAR Sync Successful confirmation dialog showing a checkmark icon and message that the CAR has been successfully updated in FPDS-NG, with a View CAR link and the Sync CAR integration option visible in the background

Impact

While this feature launched incrementally, my design work delivered meaningful UX improvements:

  • Reduced ambiguity around error ownership and resolution
  • Reinforced CW as the authoritative system of record
  • Improved user confidence during compliance-critical workflows
  • Established a reusable pattern for validation handling across CW

Reflection

This project really solidified something I believe strongly: designing for what happens when things go wrong matters just as much as designing for the happy path. In compliance-heavy enterprise systems, a clear error state can be the difference between a user resolving an issue in minutes versus spending hours stuck.

If I had more time, the next thing I'd tackle is inline validation right within the CW fields -- so users could fix issues without ever leaving the page they're already on.

Why this work matters

Through this project, I demonstrated my ability to:

  • Lead design across multiple interconnected enterprise systems
  • Navigate complex technical constraints independently
  • Translate backend rules into clear, humane UX patterns
  • Make high-stakes compliance workflows safer and more understandable