Building a Demo Environment That Lies Convincingly

matterunknown  ·  January 2026  ·  10 min read

The standard security demo environment is a lie that everyone knows is a lie. It has five users. Three of them have obviously suspicious names. The data is clearly synthetic. The findings are neat and comprehensible. A CISO looks at it and thinks: this is interesting technology. They do not think: this is my problem. The demo fails at the moment that matters most.

Building a demo environment that actually lands requires a different approach. The goal is not to show that a security product works. The goal is to make a security leader feel, briefly and viscerally, that they are looking at their own environment. The question you want them to walk away asking is not "how does this work?" but "how do I know this isn't us?"

The specificity problem

Generic demo environments fail because they are generic. They demonstrate capability without creating recognition. A finding that says "terminated user with active access" is informative. A finding that says "Dr. Sarah Chen, Medical Director, terminated March 2023, still holds full EHR admin access including patient record deletion privileges, last authenticated six weeks ago" is alarming.

The difference is specificity. The first finding tells you a category of problem exists. The second finding makes you wonder if your terminated Medical Director still has access to your EHR. If the person you're showing it to is a healthcare CISO — and if you've done your research on their recent organizational changes — the second finding might land very close to home.

This is the design principle behind the demo environment we built. Every account has a name, a title, a department, a history, and a reason the finding exists. The terminated administrator didn't just leave — they were part of a department that was restructured, and in the chaos of the restructuring, the offboarding process fell through. The service account with the orphaned credentials isn't just stale — it was created for a project that was acquired and then abandoned, and the acquiring team never inventoried the access they inherited.

The right amount of mess

Real environments are messy in specific ways. They have the aftermath of acquisitions — systems that were integrated quickly and never properly cleaned up. They have the residue of departed employees whose digital traces were never fully removed. They have service accounts from projects that seemed temporary and became permanent. They have permissions that made sense three reorgs ago and no longer reflect how anyone actually works.

A demo environment that's too clean reads as fake. A demo environment that's too messy reads as overwhelming. The right amount of mess is: enough that it looks like something that evolved rather than something that was designed, but organized enough that the findings are discoverable and comprehensible.

The hardest part of building the environment was getting this ratio right. The first version was too clean. Every finding was obvious. Every user had exactly the right amount of access except for the deliberately misconfigured ones, which were obviously deliberately misconfigured. We had to add entropy — accounts with permission sets that made sense historically but look strange now, configurations that were correct at the time and have drifted, relationships between systems that nobody documented because they seemed obvious to the people who built them.
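One way to add that entropy mechanically is to grant groups for every role an account has ever held and only sometimes clean the old ones up, so permissions accrete the way they do in real organizations. A minimal sketch, assuming a hypothetical role-to-group mapping and a tunable cleanup probability (none of these names come from the actual build):

```python
import random

# Hypothetical mapping: groups granted in each role.
ROLE_GROUPS = {
    "ot_engineer": ["plc_engineering", "scada_readonly"],
    "it_admin": ["it_admins", "helpdesk_tier2"],
    "analyst": ["bi_readers"],
}

def seed_with_drift(role_history: list[str], cleanup_rate: float,
                    rng: random.Random) -> list[str]:
    """Grant groups for every role ever held; old-role groups survive
    unless a simulated offboarding step happened to catch them."""
    groups: list[str] = []
    for i, role in enumerate(role_history):
        is_current = (i == len(role_history) - 1)
        for g in ROLE_GROUPS[role]:
            if is_current or rng.random() > cleanup_rate:
                groups.append(g)
    return groups

# Career: OT engineer who moved to IT; 60% chance any stale group was removed.
print(seed_with_drift(["ot_engineer", "it_admin"], 0.6, random.Random(7)))
```

Running this across a whole population with varied role histories and cleanup rates produces exactly the texture the article describes: permission sets that made sense historically and look strange now.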

The character work

The part that surprised me most was how much the character work mattered. Each account in the environment is a person. Each person has a story. The story explains the finding — not as an excuse, but as context that makes the finding feel real rather than planted.

The IT administrator who is also a member of the PLC engineering group is not a bad actor. They started in OT engineering, transitioned to IT five years ago, and nobody ever removed them from the OT groups they no longer need. The finding is an IT/OT boundary violation with risk score 92. The story is: this is what organizational change looks like when identity governance doesn't keep up with it.

We built backstories for the highest-risk accounts. Not elaborate fiction — just enough to make the finding feel like the result of something real rather than something constructed. It changed how the environment read. A security leader who sees angel.face with IT and OT access doesn't think "demo." They think "we have someone like this."

What AI makes possible

Building this kind of environment manually would take months. The character development, the SQL seeds, the Ansible playbooks, the Keycloak configuration, the findings narratives, the consistency checking across four verticals and seven servers — that's an enormous amount of work that doesn't scale.
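The consistency checking mentioned above can be made concrete with a small validator: every finding must reference an account that actually exists, and the account's state must support the story the finding tells. This is a hypothetical sketch; the field names and finding types are illustrative, not the build's real schema.

```python
from datetime import date

# Hypothetical seed data: users keyed by username, findings referencing them.
users = {
    "schen": {"terminated": date(2023, 3, 15), "last_auth": date(2023, 11, 2)},
    "mruiz": {"terminated": None, "last_auth": date(2026, 1, 5)},
}
findings = [
    {"id": "F-001", "username": "schen", "type": "terminated_active_access"},
]

def check_consistency(users: dict, findings: list[dict]) -> list[str]:
    """Return a list of coherence errors across the seeded environment."""
    errors = []
    for f in findings:
        u = users.get(f["username"])
        if u is None:
            errors.append(f"{f['id']}: references unknown user")
        elif f["type"] == "terminated_active_access" and u["terminated"] is None:
            errors.append(f"{f['id']}: user is not actually terminated")
    return errors

print(check_consistency(users, findings))  # → []
```

Checks like this are what keep hundreds of generated accounts and dozens of findings from contradicting each other, which is exactly the failure mode that makes a demo environment read as fake.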

With AI, it's a different kind of work. The architecture decisions still require human judgment. The strategy — what verticals to build, what findings to highlight, what the narrative arc of the environment should be — is still human work. But the execution — generating consistent, specific data at scale and maintaining coherence across hundreds of user accounts and dozens of findings — is work AI handles well.

What we built is a four-vertical identity environment: healthcare, manufacturing, financial, retail. Seven servers. Over a thousand seeded users. Dozens of named findings, each with a backstory, each technically accurate, each designed to produce a specific recognition in a specific kind of audience. It took weeks, not months, and the result is something that consistently produces the reaction we designed for.

The reaction is not "impressive technology." The reaction is "how do I know this isn't us?" That's the only reaction that matters.