n8n AI Workflow Tutorial: Build Your First AI Pipeline
An n8n AI workflow connects a trigger to a chain of nodes that includes at least one call to an AI API — OpenAI, Anthropic, or another language model. The AI processes input data and returns a result (a classification, summary, or generated text) that the workflow routes to a destination. This tutorial covers the core nodes, how to connect OpenAI via HTTP Request, a complete step-by-step workflow build, and four situations where n8n is the wrong tool for the job. If you haven’t used n8n before, start with the n8n tutorial for beginners first.
Most searches for an n8n AI workflow tutorial come from the same place: you’ve been doing some manual task — reading emails, routing support tickets, categorising form responses — and you want to stop. You’ve heard n8n can connect to OpenAI. You want to know which nodes to use and in which order.
That’s exactly what this covers. The n8n AI workflow is not conceptually complicated. A trigger fires. Data flows through nodes. One of those nodes calls an AI API. The response routes somewhere useful. You stop doing the task by hand.
Where it gets interesting — and where most tutorials skip ahead — is in the details: how to structure the prompt inside an HTTP Request node, how to extract the AI’s response with an expression, and how to build the error path before you ship. The happy path works first time. Everything else is what you’ll be debugging at 11pm.
What an n8n AI Workflow Actually Does
A standard n8n workflow moves data. A trigger fires, nodes transform or route that data, and something happens at the end. An n8n AI workflow does the same thing, but somewhere in the chain, a node calls a language model and the model makes a decision.
That decision can be:
- Classification — “Is this support ticket urgent, normal, or low priority?”
- Extraction — “Pull the date, amount, and vendor name from this invoice text.”
- Summarisation — “Condense this 800-word email thread to three bullet points.”
- Generation — “Write a draft reply to this customer complaint in a professional tone.”
- Routing — “Based on this description, which department should handle this?”
n8n has a dedicated AI category in its node library — OpenAI node, AI Agent node, Memory Buffer, and tool integrations like Google Search. These pre-built nodes give you a higher-level interface. But for most workflows, an HTTP Request node pointed directly at the API endpoint is more useful. It shows you the exact request body, the raw response, and the full payload. When something breaks, you know exactly where to look. The native AI nodes abstract that away, which is convenient until it isn’t.
The AI Agent node is worth knowing about separately. It wraps a language model with tool-calling — you give it tools (web search, a calculator, n8n functions) and it decides which tools to use based on the input. For complex research workflows, it’s useful. For straightforward tasks like classifying a form submission, it adds complexity you don’t need. Use the simplest node that handles the job.
Setting Up n8n for AI Workflows
There are two ways to run n8n: n8n.cloud (managed) and self-hosted. For developers building AI workflows, self-hosted gives you full control over credentials, environment variables, and execution limits. For testing, n8n.cloud’s free trial is the fastest way to start.
Self-hosted via Docker takes about five minutes:
docker run -it --rm \
--name n8n \
-p 5678:5678 \
-v ~/.n8n:/home/node/.n8n \
docker.n8n.io/n8nio/n8n
Open http://localhost:5678 and the editor is ready. (I say “five minutes.” If you’ve never configured Docker volumes before, budget twenty. The command is correct; the first-time Docker experience is its own journey.)
| Setup method | Time to first workflow | Best for |
|---|---|---|
| n8n.cloud (free trial) | 2 minutes | Testing, no server needed |
| Docker self-hosted | 5–15 minutes | Production, full control |
| npm global install | 3–5 minutes | Local dev, quick iteration |
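The npm route, if you prefer it, is two commands (assuming a recent Node.js is already installed):
npm install n8n -g
n8n start
As with Docker, the editor comes up at http://localhost:5678.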
Before building any workflow, add your AI API credentials. Go to Settings → Credentials, create a new credential of type “Header Auth” (for manual HTTP Request use) or “OpenAI” (for the native node). n8n stores credentials encrypted and references them by name. You won’t paste API keys into nodes directly — you’ll select the saved credential. This matters when sharing workflow JSON files with teammates who have their own keys.
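For the Header Auth option, the credential boils down to OpenAI's standard bearer header, something like:
Name: Authorization
Value: Bearer YOUR_OPENAI_API_KEY
(YOUR_OPENAI_API_KEY is a placeholder. The real key lives only in the saved credential, never in a node.)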
One thing to set before your first AI workflow runs: the execution timeout. By default, n8n waits indefinitely for a node to respond. An OpenAI API call under load can take 10–30 seconds. Under heavy load or during outages, it can hang. Set a timeout at the node level in the Settings tab of the HTTP Request node. 30 seconds is a sensible default for most chat completion calls.
The Core Nodes for Every n8n AI Workflow
Most n8n AI workflows use the same handful of nodes regardless of what the workflow does. Build familiarity with these five and most tasks become a matter of wiring them together.
| Node | Role in an AI workflow | Notes |
|---|---|---|
| Webhook / Schedule Trigger | Starts the workflow on an event or interval | Webhook for real-time; Schedule for batch jobs |
| Set | Extracts, renames, or reshapes fields | Use this immediately after the AI response to pull out the text cleanly |
| HTTP Request | Calls the AI API (OpenAI, Anthropic, etc.) | Most transparent approach — you see exactly what’s sent and received |
| IF / Switch | Routes the workflow based on the AI’s output | IF for binary decisions; Switch for multi-branch routing |
| Slack / Gmail / Sheets | Delivers the AI’s output to its destination | The endpoint — where the automated result lands |
Three HTTP Request node settings that are not obvious but matter:
- Timeout — set it explicitly, as discussed above
- On Error — set it to continue rather than stop the workflow; essential if you’re processing a batch of items and don’t want one failed API call to abort the rest
- Response format — if you expect JSON from the model, add "response_format": {"type": "json_object"} to the request body. Without this, the model may return JSON wrapped in a markdown code block, and your expression will fail to parse it. (This is a known side effect of assuming JSON is JSON until it isn’t.) A minimal example follows this list.
The Code node belongs in this list for one specific use: transforming data the built-in nodes can’t handle. If you need to reshape a nested API response, run a calculation, or call an API with unusual authentication, the Code node runs JavaScript or Python inline. It’s not needed for most AI workflows. When you do reach for it, it accepts the previous node’s output as $input.all() and must return an array of objects.
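A minimal sketch of that contract: a Code node that flattens the OpenAI response into one clean field (aiOutput is an illustrative name, not an n8n convention):
// n8n Code node, JavaScript. $input.all() returns the previous node's items.
const items = $input.all();

// A Code node must return an array of objects, each with a `json` property.
return items.map((item) => ({
  json: {
    aiOutput: item.json.choices?.[0]?.message?.content ?? '',
  },
}));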
Building a Practical n8n AI Workflow Step by Step
Here is a complete, practical example: a workflow that reads incoming support emails, classifies them by urgency using OpenAI, and routes them to different Slack channels. This pattern covers the core mechanics of every n8n AI workflow — trigger, transformation, AI call, branching, delivery.
The workflow, node by node
- Webhook Trigger — your email tool sends a POST request to n8n’s generated URL whenever a new email arrives
- Set node — extracts subject and body from the incoming payload into clean fields
- HTTP Request node — sends the email content to OpenAI with a classification prompt
- Set node — extracts choices[0].message.content from the OpenAI response into a field called aiClassification
- IF node — checks if aiClassification equals urgent
- Slack node (two branches) — posts to #support-urgent or #support-normal depending on the result
The HTTP Request node body for the OpenAI call:
{
"model": "gpt-4o-mini",
"messages": [
{
"role": "system",
"content": "You classify support emails. Return only one word: urgent, normal, or low."
},
{
"role": "user",
"content": "Subject: {{ $json.subject }}\n\n{{ $json.body }}"
}
],
"max_tokens": 5
}
A few deliberate choices here worth understanding:
- gpt-4o-mini — the cheapest model that handles three-category classification reliably. You don’t need GPT-4o to categorise an email into three buckets. As of early 2026, gpt-4o-mini costs $0.15 per million input tokens. Running this workflow 1,000 times a month with 150-token prompts costs roughly $0.02.
- max_tokens: 5 — caps the response at five tokens. The model returns one word. It doesn’t need to explain itself, and you don’t want to pay for an explanation you’ll ignore.
- System prompt — instructs the model to return exactly one word from a defined set. This makes the IF node’s condition reliable. If you let the model respond freely, you’ll get variations like “This is urgent”, “Urgent ticket”, or “High priority” and your IF condition will miss half of them.
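Once the workflow is active, you can exercise it without waiting for a real email: POST a sample payload to the URL the Webhook node generated (the path below is a placeholder; copy yours from the node):
curl -X POST http://localhost:5678/webhook/support-email \
  -H "Content-Type: application/json" \
  -d '{"subject": "Checkout is down", "body": "Customers cannot pay since 09:00."}'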
I spent about twenty minutes building the first version of this workflow. It’s been running since. The time I’ve not spent manually reading and routing emails is disproportionate to that setup cost — which is roughly the same ratio as every other small automation investment that has actually paid off.
For more complex orchestration with memory and tool use, see the guide to building an n8n AI agent. For the underlying automation foundations, the n8n automation workflow tutorial covers triggers and expressions in depth.
Connecting OpenAI to n8n
The HTTP Request node approach requires you to extract the AI response manually. OpenAI returns completions in this structure:
{
"choices": [
{
"message": {
"content": "urgent"
}
}
]
}
In n8n expressions, you access this with {{ $json.choices[0].message.content }}. If the API call fails or returns an unexpected format, the expression returns undefined and the next node fails silently — or worse, routes incorrectly without error.
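One defensive option, assuming a reasonably recent n8n whose expressions accept modern JavaScript like optional chaining, is to build the fallback into the expression itself:
{{ $json.choices?.[0]?.message?.content ?? "MISSING" }}
A sentinel value like MISSING is something an IF node can route on; undefined mostly just disappears.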
To make the connection robust, add a Set node immediately after the HTTP Request node:
- Create a field called aiOutput
- Set its value to {{ $json.choices[0].message.content }}
- All downstream nodes reference {{ $json.aiOutput }} instead of the nested path
Then add an IF node that checks aiOutput is not empty before the routing logic. This separates API failures from routing failures and makes debugging faster.
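The guard itself is a single condition in the IF node, roughly this shape depending on your n8n version:
Value: {{ $json.aiOutput }}
Operation: is not empty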
The native OpenAI node handles this extraction automatically and surfaces errors more clearly. Its trade-off: it abstracts away the raw request, which makes it harder to adapt for non-standard use cases — fine-tuned models, different providers, streaming responses, or any API that isn’t OpenAI. For developers who want to understand what’s happening at each step, starting with HTTP Request and switching to the native node once the workflow is stable is the approach that produces fewer surprises in production. The n8n AI nodes documentation covers the native nodes in full.
My Take: When the ROI Actually Works Out
Here’s the opinion I’ve formed from building these workflows on real projects: n8n AI workflows are worth the setup cost only when there is a specific, recurring task that currently consumes real time. Not “we could automate this someday” — but “I do this exact thing three times a day and each instance takes ten minutes.”
The framing matters because building a reliable workflow — one that handles errors, retries on API timeout, and notifies you when something breaks — takes several hours the first time. That’s a genuine investment. If the workflow runs twice a week and saves five minutes each time, the maths is not in your favour.
Spending a day on tooling that saves ten minutes a week is not a good trade. The workflows worth building have frequency and meaningful time cost: daily jobs that run dozens of times, content pipelines processing hundreds of items, routing tasks that happen on every form submission. Once you’ve built one n8n AI workflow properly, the pattern repeats. The second takes an hour. The fifth takes twenty minutes. The learning cost is front-loaded.
When NOT to Use n8n for Your AI Workflow
Four situations where n8n is the wrong tool:
1. You need responses under 500 milliseconds
n8n workflows have startup overhead. Triggering a workflow, running through nodes, making an external API call, and returning a result takes 1–3 seconds at minimum — and that’s before the AI model responds. If you need an AI response inline in a user-facing product, build the API call directly in your application backend. n8n is for asynchronous background automation, not synchronous user-facing requests.
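For contrast, a minimal sketch of the inline approach: a direct call from a Node.js backend, assuming Node 18+ (for global fetch) and an OPENAI_API_KEY environment variable:
// Direct call from an application backend: no workflow engine in the request path.
// Error handling and retries omitted for brevity.
async function classifyEmail(subject, body) {
  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'gpt-4o-mini',
      messages: [
        { role: 'system', content: 'Return only one word: urgent, normal, or low.' },
        { role: 'user', content: `Subject: ${subject}\n\n${body}` },
      ],
      max_tokens: 5,
    }),
  });
  const data = await res.json();
  return data.choices?.[0]?.message?.content ?? 'unknown';
}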
2. The task runs once
n8n is an automation platform. Its value is in repetition. If you need to run an AI task one time — summarise this specific document, analyse this particular dataset — write a script or call the API directly. n8n adds a visual layer, credential management, and execution history on top of what is ultimately a JSON POST request. For a one-off job, that overhead has no return.
3. Your team has no one who can maintain it
n8n workflows are visual but not self-documenting. A 25-node workflow running in production with no owner is a liability. If you’re the only person who understands n8n on your team, document every non-obvious step or choose a different tool. Check our n8n vs Make comparison — Make’s interface is more approachable for non-developers who will need to maintain the workflow after you’ve moved on.
4. You need complex persistent state
n8n’s in-workflow memory is limited to the current execution. If your AI workflow needs to maintain rich conversation history across multiple runs, track parallel threads, or coordinate asynchronous jobs with shared state, you’ll need a database or a purpose-built agent framework. n8n’s Memory Buffer node handles basic short-term memory within a single session — anything more complex requires external state management. The n8n AI agent guide covers where n8n’s memory capabilities stop and what to do when you need more.
Conclusion
An n8n AI workflow is a few nodes that call an API and do something with the response. The AI layer is not the hard part. Structuring the prompt for reliable output, extracting the response correctly, and building an error path that actually catches failures — that’s the work.
Key takeaways:
- Use the HTTP Request node to call AI APIs directly — it’s more transparent than native AI nodes for learning and debugging
- Set max_tokens deliberately — let the model generate only what you’ll use
- Extract the AI response into a clean field with a Set node before routing logic — it makes debugging faster
- Add an IF node that checks for empty output before the routing layer — API failures and logic failures look identical without it
- Build workflows for frequent, time-consuming tasks — the setup cost is front-loaded and only pays off with repetition
n8n won’t eliminate all the tedious parts of your workflow. It’ll just run the specific, defined tedious parts automatically, which frees you up to find new tedious parts to resent.
Frequently Asked Questions
What is an n8n AI workflow?
An n8n AI workflow is a standard n8n automation that includes one or more nodes calling an AI API — such as OpenAI or Anthropic — to process data. The AI reads input from a trigger, applies reasoning or transformation (classification, summarisation, generation), and returns output that the workflow routes to a destination like Slack, a database, or an email.
How do I connect OpenAI to n8n?
Add your OpenAI API key under Settings → Credentials, selecting the OpenAI credential type. Once saved, you can reference the credential in the HTTP Request node’s Auth tab or in the native OpenAI node. The HTTP Request approach points to https://api.openai.com/v1/chat/completions with a JSON body containing your model, messages array, and parameters. See the OpenAI Chat Completions API reference for the full request format.
What’s the difference between the OpenAI node and the HTTP Request node in n8n?
The OpenAI node handles authentication and response parsing automatically and surfaces errors in a structured way. The HTTP Request node is a generic API caller — you write the request body manually and extract response fields with expressions. HTTP Request is more transparent and works with any AI provider. The OpenAI node is cleaner for teams who don’t need to customise request parameters.
How do I pass data between nodes in an n8n AI workflow?
n8n passes data using expressions in double curly braces. {{ $json.fieldName }} references the current item’s JSON. After an HTTP Request to OpenAI, the response text lives at {{ $json.choices[0].message.content }}. Place a Set node after the API call to extract this into a clean field — downstream nodes then use {{ $json.aiOutput }} instead of the nested path.
How much does running AI workflows in n8n cost?
n8n itself is free to self-host. The cost comes from AI API calls. With gpt-4o-mini, OpenAI charges $0.15 per million input tokens as of early 2026. A short classification prompt under 200 tokens running 1,000 times a month costs less than a dollar. GPT-4o costs $2.50 per million input tokens — use the smallest model that handles the task correctly.
Can n8n AI workflows run automatically on a schedule?
Yes. Replace the Webhook trigger with a Schedule trigger and define your interval or cron expression. The workflow fires at each scheduled time, runs through all nodes including AI API calls, and delivers output without manual intervention. On self-hosted n8n, the instance must be running continuously — a persistent Docker container handles this. On n8n.cloud, scheduled execution is managed for you.
Do I need to know how to code to build n8n AI workflows?
Not for most workflows. The visual editor handles triggers, routing, and API calls. You’ll need n8n expressions — the {{ $json.field }} syntax — to pass data between nodes, and writing a JSON request body requires understanding the AI API’s format. The Code node accepts JavaScript or Python for complex transformations, but it’s optional. Most classification, summarisation, and routing workflows require no code beyond basic expressions.