AI UX · End-to-End UX Design · User Research · Appian

Appian AI: Generate Sample Data

I designed this feature from scratch -- figuring out how to let developers generate realistic test data with AI while keeping the control and transparency that enterprise teams need.

Role: Lead UX Designer

Team: Product, Engineering, Research

Scope: End-to-end design

Context

Appian is a low-code enterprise platform where developers build complex business applications. A constant pain point? Sample data. Teams need realistic data to validate their UIs, test workflows, and demo to stakeholders -- but creating it was always a manual, tedious chore.

I noticed developers were stuck either hand-typing fake data or copying old datasets that didn't fit their schema. The result: development environments full of "test123" entries, poor early testing, and demos that didn't land. I saw an opportunity to fix this with AI.

The Problem

Creating quality sample data in Appian was one of those tasks everyone hated but nobody had solved. Developers would spend hours manually entering rows of data, and the result was usually low-quality filler like "test123" and "test description." That made it hard to build realistic UIs, validate business logic, or run a convincing demo.

The Solution

I designed a flow where developers can generate high-quality, realistic test data with just a few clicks. It uses Appian's private AI (powered by AWS Bedrock) to look at your data model and generate data that actually matches your schema -- field types, relationships, and all. You pick how many rows you want, hit generate, and you've got usable data. And because data models change, you can regenerate anytime without starting from scratch.

My Role

I was the sole UX designer on this feature, which meant I owned every part of the process from problem framing through launch:

  • Defining the problem and narrowing the scope of work
  • Ideation on possible solutions
  • Interviewing target users for insights
  • Creating interactive prototypes
  • Usability testing and iterating on designs
  • Identifying and designing for edge and error cases
  • Working with developers and product management to implement the feature

Design Process

After gathering requirements, I explored different approaches for where this feature should live within Appian's Data Fabric and how users would actually interact with it. My north star was simplicity -- if generating sample data felt like yet another complex task, nobody would use it.

User Flow

01. User clicks "Generate Sample Data" to launch the flow

02. A wizard pop-up previews sample data based on existing data field names and types

03. Users can use "Advanced Configurations" to generate more relevant data

04. The user's data model is updated with AI-generated sample data

Behind the Scenes

Here's a glimpse of what my process actually looked like. I don't start in high-fidelity -- I start with questions, sketches, and a lot of back-and-forth with engineering and product.

Competitive Analysis

Before drawing anything, I surveyed how other tools (Mockaroo, Faker.js, Retool) handle data generation. I mapped out what worked, what felt clunky, and where our opportunity was -- especially around schema-aware generation, which most tools ignored.

Early Explorations

I explored multiple entry points for the feature -- a standalone tool, inline within record types, or a wizard flow. After whiteboarding with engineering, the wizard approach won out because it kept the cognitive load low and the context clear.

Iteration Cycles

I went through three major design iterations, each informed by developer feedback. The biggest pivot was moving from a "generate and review" model to a "preview and refine" model -- users wanted to see data before committing to it.

Usability Testing

I ran moderated usability sessions with Appian developers, watching where they hesitated, what they skipped, and what confused them. The "Advanced Configurations" panel came directly from testers asking for more control over data diversity.

Design Goals

  • Reduce time and effort required to create usable sample data
  • Maintain user trust in AI-generated outputs
  • Provide transparency without overwhelming users
  • Ensure generated data aligns with schema and application context
  • Design a scalable pattern for future AI features

Research Insights

I ran interviews and usability tests with Appian developers. A few themes kept coming up:

  • Speed matters most early, but accuracy matters before demos
  • Users wanted AI to assist, not override their intent
  • Lack of visibility into how data was generated reduced trust
  • Developers needed easy ways to regenerate or tweak results

These insights shaped the experience around control and iteration, not one-click automation.

[Image: Usability study overview showing four participants (two Appian Developers, two Customer Success), study goals including testing the Generate Sample Data flow and advanced configurations, and key themes, including that users navigated easily but wanted more clarifying language]

Key UX Decisions

1. AI as a collaborator, not a black box

I didn't want the AI to just silently generate data. I designed the flow so users explicitly opt in, see what's influencing the output, and feel like they're working with the AI rather than handing over control to it.

[Image: Appian Data Model view showing an empty Data Preview tab with no data available, with the Generate Sample Data button highlighted as the entry point]

2. Generated data had to actually feel real

If the generated data didn't behave like production data, nobody would trust it. I pushed hard for the AI output to respect:

  • Field types
  • Required vs optional fields
  • Relationships between data objects

I worked with engineering to design UI patterns that reflect these constraints, helping users trust that generated data would behave like real production data.

[Image: Generate Sample Data wizard showing a data preview table with AI-generated records, alongside the Advanced Configurations panel with row quantity, special instructions, and record field selection options]

3. Built for iteration, not one-shot perfection

In testing, I found that users almost never accepted the first result. So rather than treating generation as a single action, I built the experience around iteration:

  • Clear regeneration controls
  • Lightweight editability
  • Fast feedback loops

This reduced frustration and encouraged experimentation.

[Image: Generate Sample Data wizard showing an expanded data preview with 4 rows of realistic AI-generated data, including descriptions and request types, with Advanced Configurations collapsed for a cleaner view]

4. Enough transparency, without the overwhelm

My early concepts showed too much AI detail, and it actually slowed people down. Through iteration, I found the right balance -- surfacing just:

  • What data was generated
  • Where it could be edited
  • What could be regenerated

Without overwhelming users with technical AI explanations.

[Image: Latest Create Sample Data wizard with AI Copilot branding, showing the advanced configuration panel with record fields, field types, and an additional instructions input for tailoring AI-generated data]

Advanced Configuration

I designed the advanced configuration panel to give power users fine-grained control over their sample data without overwhelming first-time users. The panel is hidden by default and can be expanded when needed.

  • Record quantity: Choose 10, 25, or 50 new records to generate at once
  • Related record quantity: For related record types, generate 1, 2, or 3 records per base record
  • Field selection: Select which fields to include; primary key and relationship fields are locked to maintain data integrity
  • Additional instructions: Natural-language instructions to tailor the data (e.g., "Include a range of 50 to 90 percent in the discount column")
  • Refresh all data: Regenerate data for both base and related record types at once
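The options above map naturally onto a small configuration shape. Here is a minimal TypeScript sketch of how it could be modeled -- all type and field names are hypothetical illustrations, not Appian's actual API:

```typescript
// Hypothetical sketch of the advanced configuration options described above.
// Names are illustrative only, not Appian's actual implementation.

type SampleDataConfig = {
  recordQuantity: 10 | 25 | 50;       // new records to generate at once
  relatedRecordQuantity: 1 | 2 | 3;   // related records per base record
  includedFields: string[];           // fields the user chose to include
  lockedFields: string[];             // primary key + relationship fields, always included
  additionalInstructions?: string;    // natural-language tailoring prompt
  refreshAllData: boolean;            // regenerate base and related records together
};

// Locked fields stay included regardless of user selection,
// preserving data integrity as described above.
function effectiveFields(config: SampleDataConfig): string[] {
  return Array.from(new Set([...config.lockedFields, ...config.includedFields]));
}

const config: SampleDataConfig = {
  recordQuantity: 25,
  relatedRecordQuantity: 2,
  includedFields: ["title", "description"],
  lockedFields: ["id", "requestTypeId"],
  additionalInstructions:
    "Include a range of 50 to 90 percent in the discount column",
  refreshAllData: false,
};

console.log(effectiveFields(config));
// locked fields come first: ["id", "requestTypeId", "title", "description"]
```

Constraining quantities to fixed literal values mirrors the panel's preset choices, which keep first-time users from having to reason about arbitrary numbers.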

[Image: Create Sample Data wizard showing a Campaign record with related record types, and an advanced configuration panel with Record Fields selection, Field Types, and an Additional Instructions field with a custom prompt]

Edge Cases & Constraints

I designed for a wide range of non-ideal scenarios that enterprise users would encounter:

  • Related record type dependencies: When a record type depends on related record types that have no data, I designed a clear screen showing which related types need to be populated first, with actionable guidance
  • Access permission issues: If users lack permission to view related record types, the UI surfaces the specific access issue and directs them to their system administrator
  • Fewer records than requested: The AI may return fewer records to avoid generating data that could be sensitive or misused; I designed messaging to explain this transparently rather than leaving users confused
  • Unsupported record type configurations: Custom record fields, one-to-one relationships, and non-integer primary keys are not supported; I designed clear guardrails to communicate these constraints upfront
  • Post-insertion editing: I designed an inline editing experience on the Data Preview page, allowing users to add rows and edit field values after generation with a clear "Write Changes" confirmation pattern
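Taken together, these edge cases behave like ordered pre-flight checks that run before generation starts. A minimal sketch of that ordering, assuming hypothetical type and function names (this is an illustration, not Appian's implementation):

```typescript
// Hypothetical pre-generation guardrail checks for the edge cases
// described above. Names are illustrative, not Appian's code.

type RecordType = {
  name: string;
  hasData: boolean;
  userCanView: boolean;
  hasCustomFields: boolean;
  hasOneToOneRelationship: boolean;
  primaryKeyIsInteger: boolean;
  related: RecordType[];
};

type Guardrail =
  | { kind: "ok" }
  | { kind: "unsupported"; reason: string }
  | { kind: "noAccess"; recordType: string }
  | { kind: "emptyRelated"; recordType: string };

function checkGuardrails(rt: RecordType): Guardrail {
  // Unsupported configurations are surfaced upfront, before anything else.
  if (rt.hasCustomFields) {
    return { kind: "unsupported", reason: "custom record fields" };
  }
  if (rt.hasOneToOneRelationship) {
    return { kind: "unsupported", reason: "one-to-one relationship" };
  }
  if (!rt.primaryKeyIsInteger) {
    return { kind: "unsupported", reason: "non-integer primary key" };
  }
  for (const rel of rt.related) {
    // Permission problems direct users to their administrator.
    if (!rel.userCanView) return { kind: "noAccess", recordType: rel.name };
    // Empty related types must be populated first.
    if (!rel.hasData) return { kind: "emptyRelated", recordType: rel.name };
  }
  return { kind: "ok" };
}
```

The ordering matters: unsupported configurations are caught before related-record checks, so users hit the hard constraints first rather than fixing dependencies for a record type that cannot be populated anyway.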

[Image: Generate Sample Data modal showing the related-record dependency edge case, with an illustration and a message directing users to insert sample data in related records first, and a "Go to record" link for the requestType dependency]

Final Experience

The solution I designed allows developers to:

  • Generate realistic sample data in seconds
  • Understand how data was created
  • Regenerate or edit results easily
  • Validate UIs with higher confidence earlier in the workflow

The experience integrates naturally into existing Appian workflows while introducing AI in a way that feels intentional and safe.

Key Results

  • Increased sales: Enabled the Sales team to prepare for and deliver impactful prospect and analyst demos more quickly, using build-from-scratch live flows
  • Increased Appian's value proposition: Helped developers prepare for and deliver impactful stakeholder demos more quickly
  • Improved developer experience: Sped up application development (e.g., building UIs and business logic) by eliminating the manual data setup task
  • Increased adoption of AI features: Enabled developers to populate record types and related record types with high-quality data for testing AI features like Records Chat in lower environments
  • Improved application quality: Supported testing across the development lifecycle, including unit testing and user acceptance testing

This work also established foundational AI UX patterns that informed future AI-assisted features at Appian.

Reflection

This project taught me something I keep coming back to: AI UX is really about trust, not automation. In enterprise settings, people would rather have a slightly slower workflow that they understand than a fast one that feels opaque. Clarity, reversibility, and control aren't friction -- they're the features.

If I were to take this further, I'd explore:

  • Inline editing during generation
  • Smarter defaults based on past user behavior
  • Deeper visibility into data quality indicators

Why This Work Matters

Through this project, I demonstrated my ability to:

  • Lead the design of responsible AI experiences from concept to launch
  • Balance speed with trust in high-stakes enterprise systems
  • Independently translate complex technical constraints into usable interfaces
  • Establish scalable AI UX patterns adopted by other teams at Appian

Want to discuss this project?

I'd love to walk you through my process and decisions.

Get in Touch