Workflows

Build AI workflows, not chatbots

Workflows are reusable AI pipelines: a model, a prompt template, input variables, and a live API endpoint. Create once, deploy instantly, call from anywhere. Pick a template below and ship in minutes.

Workflow Pipeline

Input ({{variables}}) → System Prompt (AI persona) → Model (GPT-5) → Output (REST API)

Avg Latency: 142ms · Success Rate: 99.9% · Models Available: 50+

Anatomy

What makes a workflow

A workflow is the core building block. Instead of wiring API keys and prompt logic into your code, you define everything in the dashboard and get a stable endpoint.

AI Model

GPT-5, Claude, Gemini, Grok, and 50+ more

System Instructions

The AI's personality, rules, and constraints

Prompt Template

User message with {{variable}} placeholders

Input Variables

Dynamic parameters your consumers provide

Model Settings

Temperature, max tokens, and tuning options

Web Search

Optional — ground responses in real-time data

Templates

Start with a template

Real workflow templates you can deploy in minutes. Select one to see the full configuration — system prompt, user template, variables, and recommended model.

Content

Blog Post Generator

Generate SEO-optimized blog posts with custom tone, length, and structure. Perfect for content teams that need to scale output without sacrificing quality.

Recommended model:GPT-5

Input Variables

{{topic}}, {{tone}}, {{word_count}}

System Instructions

You are an expert content writer. Write engaging, SEO-optimized blog posts with clear structure, compelling headers, and actionable insights. Match the requested tone and target word count.

User Prompt Template

Write a blog post about {{topic}} in a {{tone}} tone. Target length: {{word_count}} words. Include an introduction, 3-5 key sections with headers, and a conclusion with a call to action.
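Before the model sees the user prompt, each {{variable}} placeholder is filled with the value the caller supplies. A minimal sketch of that interpolation step (the platform's actual implementation isn't documented here, so `renderTemplate` is a hypothetical helper):

```javascript
// Hypothetical sketch of {{variable}} interpolation. Placeholders with no
// matching value are left intact rather than replaced with "undefined".
function renderTemplate(template, vars) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in vars ? String(vars[name]) : match
  );
}

const template =
  "Write a blog post about {{topic}} in a {{tone}} tone. " +
  "Target length: {{word_count}} words.";

console.log(renderTemplate(template, {
  topic: "AI workflow automation",
  tone: "professional",
  word_count: 1500,
}));
```

The same substitution applies to the system instructions if they contain placeholders.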

How it works

Six steps to production

From zero to live API endpoint in under 10 minutes.

Step 01

Create a workflow

Open the dashboard and click "New Workflow." Give it a name that describes its purpose.

Step 02

Define your inputs

Add dynamic input variables like {{topic}}, {{tone}}, or {{customer_question}} — these become the API parameters.

Step 03

Write your prompts

Craft a system instruction and user template. Use the built-in editor with live preview to iterate fast.

Step 04

Choose your model

Pick from 50+ models — GPT-5 for reasoning, Gemini Flash for speed, Perplexity Sonar for web-grounded answers.

Step 05

Test in the console

Use the Compose tab to test with real inputs. Toggle streaming. Iterate until the output is perfect.

Step 06

Publish & deploy

Hit Publish and your workflow gets a live REST endpoint. Call it from your app — it's live instantly.

Platform

Built for production

Everything you get out of the box with every workflow.

50+ AI Models

GPT-5, Claude Sonnet 4.5, Gemini 2.5 Pro, Grok 3, Llama, DeepSeek, and more. Switch models without changing integration code.

Real-time Streaming

Stream responses token-by-token. Compatible with Vercel AI SDK, React useChat, and any HTTP client.

Production Security

Bearer auth, single-use Redis tokens, server-derived rate limiting, and fail-closed error handling.

Auto-scaling Infra

Edge-deployed with zero cold starts. Auto-scaling serverless functions, 300s max duration.

Web Search

Ground AI responses in real-time web data. Citations with source URLs returned automatically.

Versioning

Branch and iterate on prompts without breaking production. Roll back to any version instantly.

Integration

Call any workflow with one request

Three endpoints cover every integration pattern. Copy, paste, ship.

POST

/api/v1/run/{workflow_id}

Execute a workflow and receive the full result as JSON. Best for server-side use.

GET

/api/v1/token?ttl=60

Generate a single-use streaming token. Use server-side, pass to your client.
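A sketch of the server-side half of that pattern: mint a token, then hand it to the browser. The `/api/v1/token` path and `ttl` parameter come from the endpoint above; the host and the `{ token }` response shape are assumptions.

```javascript
// Hypothetical server-side sketch: mint a single-use streaming token.
// Assumed: the host name and that the response body is { token }.
function tokenUrl(base, ttlSeconds) {
  const url = new URL("/api/v1/token", base);
  url.searchParams.set("ttl", String(ttlSeconds));
  return url.toString();
}

async function getStreamingToken() {
  const res = await fetch(tokenUrl("https://api.aitutor.com", 60), {
    // Keep the secret key on the server; never ship it to the client.
    headers: { Authorization: "Bearer sk_live_..." },
  });
  if (!res.ok) throw new Error(`Token request failed: ${res.status}`);
  const { token } = await res.json();
  return token;
}
```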

POST

/api/v1/run/{workflow_id}/stream?token=...

Stream the response token-by-token. Safe for client-side use.
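On the client, the single-use token goes in the query string and the response body is read incrementally. A sketch assuming plain-text chunks (adapt the decoding if the API frames its stream differently, e.g. as SSE); `streamUrl` and `streamWorkflow` are illustrative names, not platform APIs:

```javascript
// Hypothetical client-side sketch: POST inputs to the streaming endpoint
// and consume the body chunk by chunk via fetch's ReadableStream.
function streamUrl(base, workflowId, token) {
  const url = new URL(`/api/v1/run/${workflowId}/stream`, base);
  url.searchParams.set("token", token);
  return url.toString();
}

async function streamWorkflow(workflowId, token, inputs, onChunk) {
  const res = await fetch(
    streamUrl("https://api.aitutor.com", workflowId, token),
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(inputs),
    }
  );
  if (!res.ok || !res.body) throw new Error(`Stream failed: ${res.status}`);
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    onChunk(decoder.decode(value, { stream: true })); // emit each chunk as it arrives
  }
}
```

Because the token is single-use and short-lived, it is safe to expose in the browser; the secret API key never leaves your server.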

```javascript
const res = await fetch(
  "https://api.aitutor.com/v1/run/wf_blog_gen",
  {
    method: "POST",
    headers: {
      Authorization: "Bearer sk_live_...",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      topic: "AI workflow automation",
      tone: "professional",
      word_count: "1500",
    }),
  }
);

const { output } = await res.json();
```
Pro Tips

Ship better workflows

Practical advice from teams already in production.

1

Be specific in system instructions

Include the AI's role, tone, constraints, output format, and domain knowledge. The more context, the better the output.

2

Use the right model for the job

GPT-5 and Claude Sonnet 4.5 for complex reasoning. Gemini Flash for speed. Perplexity Sonar for web-grounded answers.

3

Design inputs for flexibility

Use descriptive variable names like {{customer_question}} instead of {{input}}. Add optional variables for tone and format.

4

Always stream for user-facing apps

Streaming feels dramatically faster. Generate a token server-side, pass it to the client, and read the response with fetch's ReadableStream.

5

Set spend limits

Configure monthly limits in Settings. The API blocks requests at the cap, preventing surprise charges during traffic spikes.

6

Monitor with Statistics

Track API calls, active models, token usage, success rates, and latency. Use this data to optimize model selection.

Ready to build your first workflow?

Pick a template, customize the prompts, and deploy a live API endpoint in minutes. No credit card required.