
Beyond Linear: Building Self-Correcting AI Workflows in Optimizely Opal

[Header illustration: a blue-to-orange infinity loop with a central AI figure, surrounded by analytics icons and feedback symbols representing a self-correcting workflow.]

Most AI agent workflows are conveyor belts. Content goes in one end, gets processed step by step, comes out the other. That works fine until it doesn't. And when it doesn't, you get garbage output that sailed through your whole pipeline because nobody taught the system how to catch its own mistakes.

Opal's workflow engine has condition logic. You can branch, retry, and escalate based on what your agents actually produce. I want to walk through a pattern I've been using that makes agent orchestration feel less like automation and more like a system that knows when something's wrong.

The Problem With Straight Lines

Picture a typical content workflow: a brief comes in, a content agent writes something, maybe a validator checks it, and the result gets pushed to a CMP task for human review. Clean enough.

But what happens when the content agent produces output that doesn't conform to your CMS schema? You get a broken import. What happens when the tone drifts off-brand? It lands in someone's review queue with no context about what went wrong. The human becomes the error handler, which is exactly the bottleneck you were trying to get rid of.

More agents won't fix this. Smarter routing will.

Structured Action Responses

[Diagram: Opal workflow with Retry, Proceed, or Alert paths from Schema Validation]

The idea is simple. Have your validation agents return structured action responses. Not just "pass" or "fail," but explicit instructions the workflow can act on. Every validation agent returns a JSON object with an action field:

  • "proceed" means validation passed, move to the next step
  • "retry" means something's off, send it back with feedback
  • "alert" means we've exhausted retries, get a human involved

Three states. That's all it takes to turn a straight line into a self-correcting loop.
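As a concrete sketch, here's what those three responses might look like. The "action" and "feedback" fields come from the pattern above; any other field names are illustrative, not an Opal contract.

```python
# The three structured action responses a validator can return.
# "action" is what the workflow's condition nodes read; "feedback"
# only matters on a retry. The "reason" field is an illustrative extra.

proceed = {"action": "proceed"}

retry = {
    "action": "retry",
    "feedback": "Field 'metaDescription' exceeds 160 characters. See schema rule I-3.",
}

alert = {
    "action": "alert",
    "reason": "Retry limit reached; escalating to the content team.",
}

# Every response carries exactly one of the three allowed actions.
for response in (proceed, retry, alert):
    assert response["action"] in {"proceed", "retry", "alert"}
```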

The Workflow

I'll use a content generation pipeline as the example, but this pattern works anywhere you need quality gates.

Content Brief Interpreter

The workflow starts with an agent that takes a raw content request and turns it into a structured brief. Content type, target audience, SEO requirements, schema constraints. This is the single source of truth for everything downstream.

No condition logic needed here. It's a one-shot transformation. But it matters because everything after it depends on a well-structured brief.
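To make "well-structured brief" concrete, here's a minimal sketch of the shape the interpreter might emit. The field names are assumptions for illustration, not a defined Opal schema.

```python
from dataclasses import dataclass, field

# Hypothetical shape of the structured brief the interpreter produces.
# Field names are assumptions; your CMS and SEO setup will dictate the real ones.
@dataclass
class ContentBrief:
    content_type: str                       # e.g. "blog_post", "landing_page"
    target_audience: str
    seo_keywords: list[str] = field(default_factory=list)
    schema_constraints: dict[str, str] = field(default_factory=dict)

brief = ContentBrief(
    content_type="blog_post",
    target_audience="marketing ops leads",
    seo_keywords=["ai workflows", "content automation"],
    schema_constraints={
        "metaDescription": "max 160 chars",
        "heroImageAlt": "required",
    },
)
```

Everything downstream (generator, validators) reads from this one object, which is what makes it the single source of truth.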

Long-Form Content Generator

A specialized content agent takes the structured brief and writes the actual content. Blog post, landing page, product description, whatever.

Here's the thing that makes this interesting: this agent has two possible inputs. On the first pass, it gets the brief from Step 1. On subsequent passes, it also gets retry feedback from the validator below. The agent's instructions need to handle both scenarios.

Schema Conformance Validator

This is where condition logic earns its keep. This agent checks generated content against your CMS content schema. Field lengths, required fields, allowed values, content type structure. If you've documented your schema rules as numbered items (like "I-7" for image requirements), the validator can reference them in its feedback.

Three possible outcomes:

Content passes. {"action": "proceed"}. On to the next step.

Content fails but the problems are fixable. {"action": "retry", "feedback": "Field 'metaDescription' exceeds 160 characters. Hero image alt text is missing. See schema rules I-3, I-7."}. The workflow loops back to the content generator with those specific notes.

Retries are used up. {"action": "alert"}. A separate agent fires off an email to the content team with the failure details. No silent failures.
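Those three outcomes can be sketched as one function. The specific rules checked here (the 160-character meta description, the required hero alt text) mirror the examples above; a real validator would read them from your versioned schema docs.

```python
# Minimal sketch of a schema conformance check emitting the three-state
# action response. The rules and field names are illustrative examples.
def validate_schema(content: dict, attempt: int, max_retries: int = 3) -> dict:
    problems = []
    if len(content.get("metaDescription", "")) > 160:
        problems.append("Field 'metaDescription' exceeds 160 characters (rule I-3).")
    if not content.get("heroImageAlt"):
        problems.append("Hero image alt text is missing (rule I-7).")

    if not problems:
        return {"action": "proceed"}
    if attempt < max_retries:
        return {"action": "retry", "feedback": " ".join(problems)}
    return {"action": "alert", "feedback": " ".join(problems)}
```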

Brand Guidelines Validator

Content that passes schema validation might still sound wrong. This agent checks voice, tone, terminology, and messaging against brand standards. Same three-outcome pattern.

The retry loop goes back to the same content generator entry point. The content agent just receives different feedback depending on which validator flagged the issue.

CMP Task Creator

Only content that passes both validators reaches this step. It packages the validated content with quality reports from each validation pass and creates a task for human review. The human's job shifts from catching errors to making editorial decisions. That's a meaningful difference.

Why Not Just Retry Forever?

AI agents aren't deterministic. If a content agent can't produce schema-conformant output after two or three tries, attempt number four probably won't fix it either. The problem is likely in the brief, the schema docs, or the agent's instructions. Repeating the same operation won't help.

Cap the retries. Escalate when they're spent. This prevents infinite loops and wasted compute.

I keep the retry limit inside the validator agent's instructions, not in the workflow itself. The agent tracks how many times it's seen the same content and decides whether to retry or alert. The workflow conditions stay simple (they just read the action value), and the decision logic lives where it can be tuned per use case.
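The validator-side decision can be sketched like this. In Opal the attempt tracking lives in the agent's instructions; a module-level dict stands in for that state here, and the names are illustrative.

```python
# Sketch of the retry-vs-alert decision living inside the validator, so the
# workflow conditions only ever read the resulting action value.
_attempts: dict[str, int] = {}  # stand-in for state the agent would track

def decide(piece_id: str, failed: bool, max_retries: int = 3) -> str:
    if not failed:
        _attempts.pop(piece_id, None)  # reset on success
        return "proceed"
    _attempts[piece_id] = _attempts.get(piece_id, 0) + 1
    # Retry while attempts remain; escalate once the cap is hit.
    return "retry" if _attempts[piece_id] < max_retries else "alert"
```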

Setting Up the Condition Nodes

In the Opal workflow builder, you add condition nodes after each validator. Three conditions per validator:

  1. If the response contains "action": "proceed", connect to the next agent
  2. If the response contains "action": "retry", connect back to the content generator
  3. If the response contains "action": "alert", connect to the alerter agent
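The three conditions amount to a simple dispatch table. This toy version mirrors what the condition nodes do; the node names are illustrative.

```python
# Toy dispatch mirroring the three condition nodes: read the action
# value from the validator's response and route to the next node.
def route(validator_response: dict) -> str:
    routes = {
        "proceed": "next_agent",
        "retry": "content_generator",
        "alert": "alerter_agent",
    }
    return routes[validator_response["action"]]
```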

When you look at the finished workflow in the builder, the feedback loops are visible. You can see where content can circle back and where it exits. That helps a lot when you're debugging why a piece took seven minutes instead of thirty seconds.

Where Else This Works

Once you get comfortable with this pattern, you'll find uses for it everywhere.

Image generation. A brand image agent produces an image; a visual QA agent checks dimensions, colors, and composition. Retry sends it back with notes like "background too busy, subject off-center." Alert sends it to a designer.

Translation. A translation agent produces localized content; a cultural review agent checks for sensitivity issues and brand fit in the target market. If the source content itself is the problem, retrying won't help, so that's an alert.

SEO optimization. A content agent writes; an SEO analyst agent checks keyword density, readability, and internal linking. Retry with specific instructions.

Same architecture every time. Generate, validate, branch.

Things I've Learned

Keep the action vocabulary small. Three states cover most situations. If you're adding more, you're probably encoding logic in the workflow that belongs in the agent.

Validator feedback has to be specific. "Content doesn't match brand voice" is useless to a content agent. "Paragraph 3 uses 'utilize' when brand guidelines say 'use.' Section headers are sentence case, brand requires title case." That's feedback the agent can act on.

Version your schema docs. When your validator references "schema rule I-7," that rule needs to exist in an actual, versioned document. Update the schema, update the validator's instructions. They drift apart faster than you'd expect.

Watch your retry rates. If a validator triggers retries on 80% of content, the generator's instructions or the brief structure need work. Don't raise the retry limit. Fix the upstream problem.

Match on the full param and value in conditions. The condition node evaluates against the entire agent response, not a parsed field. If your agent returns a complex object and you just match on "proceed", you might get false positives from that string appearing somewhere else in the response. I ran into this. Matching on the full "action": "proceed" string instead of just the value protects against it.
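The false positive is easy to reproduce with plain string matching. The response below is illustrative; the point is only that the bare word can appear in a feedback string.

```python
# Why matching only the bare value is risky: "proceed" can show up inside
# feedback text, so a substring check on the whole response misfires.
# Matching the full '"action": "proceed"' pattern does not.
raw = '{"action": "retry", "feedback": "Fix headings, then proceed to review."}'

naive_match = "proceed" in raw                  # True: false positive
full_match = '"action": "proceed"' in raw       # False: correct result
```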

Don't skip the alerter. It's tempting to rely on retries for everything. The alerter is your safety net. It's the difference between a content piece quietly failing and your team knowing about it in minutes.

Why This Matters

The technical implementation here isn't complicated. Condition nodes and JSON action responses are straightforward. What changes is how you think about the system.

A workflow that can reject its own output, explain why, and take another pass is a different kind of thing than one that just moves data along. The agents do the work. The conditions enforce standards. Humans handle the parts that actually need human judgment.

That's the split worth aiming for.