
Your First AI Agent in n8n: Inquiry Pre-Qualification in 30 Minutes

A practical blueprint for your first AI agent with n8n: trigger, OpenAI node, CRM lookup, Outlook draft. Includes a working template and the spots where most teams stumble.


So far we've talked strategy: the three paths, OpenAI setup, Azure setup, running Ollama locally. Now we build.

The use case: pre-qualifying inbound customer inquiries. We'll take the example from the pillar guide and build it for real -- with n8n as the orchestrator, OpenAI (or Azure, or Ollama -- doesn't matter) as the language-model backend, a CRM lookup, and an email draft at the end.

If you follow this article, you'll have a working agent at the end. Not a hello-world tutorial example, but something you can drop into your own inquiry pipeline tomorrow.

Why n8n, Not LangChain or Make

Three options for the workflow layer: n8n, Make, LangChain. We're using n8n. The reasoning, briefly:

n8n is open source, self-hostable (important for GDPR), has a graphical UI, and can also run code (JavaScript). You can run it on a 5-euro Hetzner server or have us host it for you. Your data stays entirely with you.

Make (formerly Integromat) is the cloud-only alternative. Slicker, faster to get started, but every byte of data flows through their Swiss servers. For GDPR-sensitive use cases, that's a non-starter.

LangChain is code-first, no visual editor. More power, more complexity, longer development time. If you're a development shop anyway, great. If you're an SMB without a dev team trying to ship something, wrong choice.

For SMBs, n8n is the default -- self-hostable, visual, no code learning curve, data stays with you.

Prep: What You'll Need

Before you start, sort out three things:

n8n is running. Either cloud (n8n.io, from 20 euros/month) or self-hosted. If you self-host: a Hetzner CX22 VM with Docker is enough to start. Docker Compose installation is about an hour of work -- we also host n8n as part of consulting engagements.

API access to an LLM. An OpenAI key, Azure endpoint plus key, or a local Ollama endpoint. Whichever of the three paths you pick, n8n has nodes for all of them.

A source of inquiries. A web form on your site, an IMAP mailbox, a webhook from a marketing tool. We'll use the web form as the example, since that's the most common.

The Workflow at a Glance

Before we start clicking, let's sketch the workflow. An agent is always a sequence of steps with a clear responsibility per step:

  1. Trigger: Webhook receives an inquiry from the contact form
  2. Classification: LLM categorizes it (quote request / support / spam / other)
  3. Routing: spam and "other" stop here; quote requests and support continue
  4. CRM lookup: Check whether the sender is an existing customer
  5. Extraction: LLM pulls out structured fields (service type, scope, urgency)
  6. Reply draft: LLM writes a reply in your company's tone of voice
  7. Handoff: Draft lands as an email draft in the responsible rep's inbox

Seven steps. We'll build them in order.

Step 1: Webhook Trigger

In n8n: New Workflow → Add first step → Webhook.

In the Webhook settings:

  • HTTP Method: POST
  • Path: kontakt-anfrage (your own path)
  • Authentication: Header Auth -- generate a long random string as the secret
  • Response Mode: Last Node (n8n responds once the workflow has completed)

n8n shows you the webhook URL, something like https://your-n8n-instance.com/webhook/kontakt-anfrage. You enter this URL in your contact form as the POST endpoint.

Test: in the n8n editor, click "Test workflow", then send a real inquiry through the form. If n8n receives the webhook, you'll see the data as JSON. If not, the path is wrong or the header-auth secret doesn't match.
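What that JSON looks like depends on your form. A plain HTML form posting JSON might send something like this (the field names name, email, and nachricht are assumptions -- check your own webhook output):

{
  "name": "Max Mustermann",
  "email": "max@example.com",
  "nachricht": "We need our flat roof redone by the end of the month, roughly 120 square meters."
}

n8n nests the POST payload under body, which is why the expressions later in this article reference $json.body.nachricht.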

Step 2: Classification with an LLM

After the Webhook node: + → AI → OpenAI → Message a Model. (If you're using Azure, there's a separate Azure OpenAI node with the same settings.)

Configuration:

Credential: Your OpenAI key (stored once in n8n, then reusable)

Resource: Chat

Operation: Message a Model

Model: gpt-4o-mini (plenty for classification, much cheaper than gpt-4o)

System prompt:

You are a classification assistant for inbound
customer inquiries to a construction company.
Reply only with a JSON object of the form:
{"kategorie": "angebot" | "support" | "spam" | "sonstiges"}
No explanation, no prose.

User message: {{ $json.body.nachricht }} (reference to the message field from the webhook)

Important: set Response Format to JSON Object. Otherwise the model will sometimes return Markdown-formatted answers and the workflow breaks.

Test with the inquiry from earlier. You should get back something like { "kategorie": "angebot" }.
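If you want a safety net between the model and the routing step, you can slot in a Code node that normalizes the output. A minimal sketch, assuming the classifier node sits directly before it and the four categories from the prompt above (if you add this node, the IF condition in step 3 simplifies to {{ $json.kategorie }}):

// n8n Code node (JavaScript, "Run Once for All Items"):
// normalize the classifier output before routing.
const raw = $input.first().json.message?.content;

// With Response Format = JSON Object, content is usually already an object;
// if the model returned a string anyway, parse it defensively.
let parsed;
try {
  parsed = typeof raw === 'string' ? JSON.parse(raw) : raw;
} catch (e) {
  parsed = null; // unparseable output falls through to the fallback below
}

// Anything outside the four known categories is routed as "sonstiges".
const allowed = ['angebot', 'support', 'spam', 'sonstiges'];
const kategorie = allowed.includes(parsed?.kategorie) ? parsed.kategorie : 'sonstiges';

return [{ json: { kategorie } }];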

Step 3: Routing via an IF Node

After the classification node: + → Flow → IF.

Condition: {{ $json.message.content.kategorie }} is not in spam, sonstiges.

The IF node splits the workflow into two paths. On the "false" path (spam/other), the workflow ends. Optional: a logging entry for spam, and moving "other" inquiries to a fallback folder.

On the "true" path, we continue with the next steps.

Step 4: CRM Lookup

This is where things get CRM-specific. We'll show it for a generic REST-API CRM (HubSpot, Pipedrive, and Zoho work similarly).

After the IF node (true path): + → HTTP Request.

Method: GET

URL: https://api.hubapi.com/contacts/v1/contact/email/{{ $('Webhook').first().json.body.email }}/profile

Authentication: Header Auth → Bearer Token (HubSpot Private App Token)

Response Code Setting: enable "Continue on Error" -- if the contact doesn't exist, you'll get a 404, and the workflow should keep running anyway.

In the next step we'll use the existence or absence of this contact.
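To make "exists or not" explicit for the following nodes, a small Code node after the lookup helps. A sketch under two assumptions: the HTTP Request node is named "CRM Lookup", and on a 404 with Continue on Error your n8n version passes an item with an error field (newer versions can also route errors to a separate output -- adjust accordingly):

// n8n Code node: flag whether the CRM lookup found a contact.
const crm = $('CRM Lookup').first().json;

// A hit on HubSpot's legacy v1 profile endpoint carries a vid (contact ID);
// a 404 passed through via Continue on Error carries an error field instead.
const isExistingCustomer = !!crm && !crm.error && crm.vid !== undefined;

// Pull the first name out of the v1 property structure, if present.
const firstName = isExistingCustomer
  ? crm.properties?.firstname?.value ?? null
  : null;

return [{ json: { isExistingCustomer, firstName } }];

The second IF node in step 6 can then simply check {{ $json.isExistingCustomer }}.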

Step 5: Extracting Structured Fields

Second LLM call. This time we extract the important information from the inquiry into a structured format.

+ → AI → OpenAI → Message a Model.

Model: gpt-4o (for more involved extraction)

System prompt:

Extract the following fields from the customer inquiry as JSON:
{
  "leistungsart": "Which service is being requested?",
  "umfang": "Estimated scope in words, or open",
  "dringlichkeit": "hoch" | "normal" | "niedrig",
  "rueckfragen": ["Which information is missing for a
                   binding quote?"]
}
If information is missing: use null, do not guess.

User message: {{ $('Webhook').first().json.body.nachricht }}
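For a sample inquiry like "We need our flat roof redone by the end of the month, roughly 120 square meters -- what would that cost?", the extraction might come back like this (illustrative output, not a guaranteed schema):

{
  "leistungsart": "Flachdachsanierung",
  "umfang": "ca. 120 Quadratmeter Flachdach",
  "dringlichkeit": "hoch",
  "rueckfragen": ["Which roof construction is currently in place?", "Is there a fixed completion date?"]
}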

Step 6: Drafting the Reply

Third LLM call. This is where the agent delivers the actual value: a reply draft that sounds like a human wrote it.

Two variants depending on whether the sender is an existing customer or not. Build it with two separate OpenAI nodes after a second IF node that checks whether the CRM lookup returned a hit.

Example prompt for existing customers:

You are a customer-service rep at [Company Name]. Write a
reply draft to an EXISTING CUSTOMER.
Tone: polite and direct, no marketing-speak.
Greet by first name (from CRM data).
Reference their inquiry concretely.
Ask the follow-up questions if any are listed in the JSON.
Close with a note on response time (24 hours).
Leave the signature open ("Your team at [Company Name]").

Variant for new customers: a slightly different greeting, a one-line introduction to the company in the first sentence, otherwise the same.
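The user message for both variants has to carry the inquiry, the extracted fields, and the CRM result. One way to assemble it, assuming the node names "OpenAI Extraktion" and "CRM Check" (adjust to whatever your nodes are actually called):

Inquiry: {{ $('Webhook').first().json.body.nachricht }}
Extracted fields: {{ JSON.stringify($('OpenAI Extraktion').first().json.message.content) }}
CRM data: {{ JSON.stringify($('CRM Check').first().json) }}

n8n expressions are plain JavaScript, so JSON.stringify works inside the curly braces.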

Step 7: Handing It Off as an Email Draft

Here's the critical point: the agent does not send the reply itself. It places it as a draft in the inbox of the responsible rep.

For Microsoft 365: + → Microsoft Outlook → Draft → Create. You'll need a one-time OAuth authentication against your Microsoft 365 account.

To Recipients: {{ $('Webhook').first().json.body.email }}

Subject: RE: Your inquiry from {{ $now.toFormat('dd.MM.yyyy') }} (n8n date expressions use Luxon, so the method is toFormat, not format)

Body Content Type: HTML

Body: {{ $('OpenAI Antwort-Entwurf').first().json.message.content }}

Save to Sent: off (it's a draft, it shouldn't be sent yet)

Optional: a Slack/Teams notification to the rep that a new draft is ready. + → Slack → Send Message.

What Goes Wrong Early On (and How to Fix It)

From experience -- the five most common issues during the first run:

JSON parse errors. LLMs love wrapping JSON in Markdown code blocks. Fix: Response Format → JSON Object in the OpenAI settings, plus an explicit "JSON only, no code blocks" in the prompt. Both -- one alone isn't enough.

Wrong webhook data structure. Depending on the form provider (Typeform vs. a custom HTML form vs. WordPress Contact Form 7), the JSON structure differs. In the n8n editor you can inspect the most recent webhook call and see exactly where each field lives. Only then write the references.

Hallucinated CRM data. When the CRM lookup turns up nothing (404), the LLM will sometimes still reach for "available data" and invent a first name. Fix: explicitly state in the system prompt "if no CRM record is available, use a neutral salutation" -- and check in code whether the CRM result actually exists.

Cost blowups. If your webhook gets flooded by a spam bot with 10,000 requests per hour, things get expensive fast. Fix: a rate-limit step before the first LLM call (n8n has no native one, but you can roll your own with Static Data) and a hard limit on the OpenAI side.
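A sketch of such a self-rolled limiter as a Code node directly before the first LLM call -- the window and ceiling are arbitrary example values, and note that static data only persists between executions on active workflows, not in editor test runs:

// n8n Code node: crude rate limiter via workflow static data.
const staticData = $getWorkflowStaticData('global');
const now = Date.now();
const windowMs = 60 * 60 * 1000; // 1-hour window
const limit = 100;               // max LLM calls per window -- pick your own ceiling

// Start a fresh window if none is open or the current one has expired.
if (!staticData.windowStart || now - staticData.windowStart > windowMs) {
  staticData.windowStart = now;
  staticData.count = 0;
}

staticData.count += 1;
if (staticData.count > limit) {
  // Fail the execution before any tokens are spent.
  throw new Error('Rate limit exceeded -- request dropped before the LLM call');
}

return $input.all();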

Email draft lands in the wrong mailbox. The default is the mailbox of the user n8n is authenticated as. If you want the draft in a shared mailbox, you have to configure the Outlook node accordingly.

When the Agent Is Ready for Production

Before you flip the switch, three tests:

Replay 20 historical inquiries. Take the last 20 inquiries from your inbox and run them through the workflow. Compare the generated drafts with what your reps actually wrote at the time. Match rate at 70 percent or above? Ready.
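A short Node.js script (18+, save as replay.mjs) can push the saved inquiries through the webhook -- the file layout and field names are assumptions, match them to your form payload:

// Replay historical inquiries against the production webhook.
import { readFile } from 'node:fs/promises';

const inquiries = JSON.parse(await readFile('./historische-anfragen.json', 'utf8'));

for (const inquiry of inquiries) {
  const res = await fetch('https://your-n8n-instance.com/webhook/kontakt-anfrage', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // The header name must match whatever you configured as Header Auth in step 1.
      'X-Webhook-Secret': process.env.WEBHOOK_SECRET,
    },
    body: JSON.stringify(inquiry),
  });
  console.log(inquiry.email, res.status);
}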

Test spam inquiries. Generate a handful of obvious spam texts ("Buy cheap viagra", "Investment opportunity") and check whether the workflow classifies correctly. If not: tighten the classification prompt.

Edge cases. A very long inquiry (with a 5,000-word code snippet inside), a very short one ("need help"), one in a foreign language. What happens? Do the steps fail cleanly or does the LLM produce garbage?

If all three tests come through cleanly, you can go live. But even then: the first 100 live inquiries get reviewed manually before the rep sends them out. That's the human-in-the-loop phase.

What You Can Extend Next

The agent is running. What comes next? Three sensible extensions:

Analyze attachments. If the inquiry includes a PDF (a spec sheet, a blueprint), the agent can read the document, pull out the key points, and weave them into the reply draft. With GPT-4o or Gemini 2.5, this works well.

Confidence score. Extend the classification prompt with a confidence value (0-100). On low confidence: special routing to a senior team member instead of an automatic draft.
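Concretely, the output schema in the classification prompt from step 2 could grow by one field (a sketch -- the threshold is yours to pick):

{"kategorie": "angebot" | "support" | "spam" | "sonstiges",
 "konfidenz": "integer from 0 to 100, how certain you are"}

An IF node after the classifier then routes everything below, say, 70 to the senior team member instead of the draft step.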

Learning the tone. Collect the manual edits from the first 100 inquiries. Drop them into the prompt as few-shot examples. After three months, the agent sounds like your actual team.

Ready-Made Template

We have the workflow described above available as an n8n template (JSON export). When you book our AI consulting, you get the template adapted to your specific CRM and mail environment and deployed live in your n8n instance.

Contact: info@kiba.berlin.

Part 5 (Final) of the series.


This article is part of our comprehensive guide: AI for SMEs — The Complete Guide for Medium-Sized Businesses
