Master n8n debugging and error handling with logs, retries, and alerts. Real examples for SaaS & agencies by Alfaz Mahmud Rizve at whoisalfaz.me.
The Reality of Production Workflows
You’ve built a beautiful n8n workflow. It works in tests. It works when you click “Execute.” But at 2 AM, your client’s data sync fails silently, and you don’t know until they complain three days later.
This is where n8n debugging and error handling separates hobbyists from professionals. Alfaz Mahmud Rizve has seen workflows that looked perfect fail in production because no one set up proper error handling. This post teaches you how to catch, log, and fix those issues before they become disasters.
By the end of Day 7 of the 30 Days of n8n & Automation series, you’ll know:
- How to read n8n debugging logs like a pro
- Why retries matter and how to set them up
- How to send real-time alerts to Slack and email when things break
This isn’t theory. These are patterns Alfaz Mahmud Rizve uses for every production automation at whoisalfaz.me.

Part 1: Understanding n8n Debugging Logs
Every workflow run creates logs. Most people never look at them. That’s a mistake.
Where to find your logs
In n8n:
- Open a workflow and click Execute.
- On the right side, you’ll see output for each node.
- Click on a node’s output to expand it and see the data it received and produced.
- If a node fails, you’ll see a red error message with a stack trace.
That error message is your n8n debugging clue. It tells you:
- Which node failed
- What input it received
- Why it failed (missing field, API error, timeout, etc.)
Real example: API call failure
Let’s say your HTTP Request node fails with:
Error: 401 Unauthorized
This means your API credentials are wrong or expired. Without n8n debugging, you’d spend hours guessing. With it, you know in seconds: re-check your API key.
Pro tip from Alfaz Mahmud Rizve: Always check node output before moving to retries or alerts. 80% of issues are caught here.
The 5 Most Common n8n Debugging Scenarios
When building n8n debugging workflows, expect these errors:
- Missing fields in data
- You expected email, but the data has user_email.
- Fix: Use a Set node to normalize field names early (like you learned in Day 6).
- API rate limits
- API says “too many requests” and rejects your call.
- Fix: Add a retry with exponential backoff (more on this below).
- Timeouts
- External API is slow or down.
- Fix: Increase timeout in HTTP Request node settings, or use retries.
- Authentication failures
- Your credentials expired or were entered wrong.
- Fix: Re-authenticate and test with a small dataset first.
- Unexpected data format
- API changed response structure, or you got null instead of an object.
- Fix: Add error handling in a Try-Catch pattern (covered below).
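The normalization fix for scenarios 1 and 5 can be sketched in an n8n Code node. The field names here (user_email, full_name, organization) are illustrative assumptions — swap in whatever your source actually sends:

```javascript
// Sketch of a normalization step for an n8n Code node.
// Maps inconsistent incoming field names onto the names the rest
// of the workflow expects, and falls back to null instead of undefined.
function normalizeLead(raw) {
  return {
    email: raw.email ?? raw.user_email ?? null,
    name: raw.name ?? raw.full_name ?? null,
    company: raw.company ?? raw.organization ?? null,
  };
}

// Inside an n8n Code node, items arrive as [{ json: {...} }, ...],
// so the node body would be roughly:
// return items.map(item => ({ json: normalizeLead(item.json) }));
```

Doing this in one early node means every downstream node can rely on a single field name, and a missing field shows up as an explicit null instead of a silent undefined.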

Part 2: Retries – Your First Line of Defense
A retry automatically runs a failed node again, usually after a short delay. This catches temporary failures (API hiccup, network blip, temporary overload).
How to set up retries in n8n
- Click on any node.
- Go to Settings (gear icon).
- Enable Retry On Fail.
- Set:
- Max Tries: 3 (good default; adjust based on your API’s behavior).
- Wait Between Tries: 5000 ms (n8n waits a fixed interval between attempts).
Note: the built-in retry uses a fixed wait, not exponential backoff. If you want delays that grow (5s, then 25s, then 125s), build the loop yourself with a Code node that computes the delay and a Wait node that applies it.
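If you want to compute exponential backoff delays yourself (for example in a Code node that feeds a Wait node), the calculation is small. The 5-second base, 5× multiplier, and 5-minute cap below are example values, not n8n defaults:

```javascript
// Sketch: exponential backoff delay with a cap.
// attempt 1 → 5s, attempt 2 → 25s, attempt 3 → 125s,
// never more than capMs so a long outage can't stall the queue forever.
function backoffDelay(attempt, baseMs = 5000, multiplier = 5, capMs = 300000) {
  const delay = baseMs * Math.pow(multiplier, attempt - 1);
  return Math.min(delay, capMs);
}
```

The cap matters: without it, attempt 10 at a 5× multiplier would wait months, not minutes.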
When retries work
Retries are gold for:
- Rate-limited APIs (they recover after a few seconds)
- Flaky network calls
- Temporary timeouts
When retries DON’T work
Don’t waste retries on:
- Auth failures (wrong API key won’t magically fix itself)
- Data validation errors (malformed email won’t become valid on retry)
- Missing required fields
Alfaz Mahmud Rizve’s rule: If a human would get the same error on retry, retries won’t help. You need error handling instead.
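That rule can be turned into a simple status-code check. This is a sketch of one reasonable policy, not an official n8n helper — adjust the code list to your APIs:

```javascript
// Sketch: decide whether a failed HTTP call is worth retrying.
// Transient errors (rate limits, server errors) → retry.
// Client errors (bad auth, bad data) → fail fast and route to error handling.
function isRetryable(statusCode) {
  if (statusCode === 429) return true; // rate limited — recovers after a pause
  if (statusCode >= 500) return true;  // server error — may be a blip
  return false;                        // 400/401/403/422 — a retry gets the same answer
}
```

Wiring this into an IF node lets retryable failures loop back through a Wait node while everything else goes straight to your alerting branch.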
Part 3: Error Handling – Catching and Routing Failures
Retries are good, but they fail eventually. When they do, you need error handling to decide what happens next: log it, notify, or retry a different way.
The Try-Catch pattern
This is the most powerful n8n debugging pattern. Here’s how:
- Wrap risky nodes (API calls, data transforms) in a Try block.
- If it fails, the workflow doesn’t stop; it flows to a Catch block.
- In the Catch block, you log, notify, or retry differently.
How to build Try-Catch in n8n
n8n doesn’t have a visual “Try-Catch” node, but you can simulate it:
- In the risky node’s Settings, set it to continue on error (otherwise a failure stops the workflow and nothing downstream runs).
- Add an IF node after the risky node.
- Check if the node output contains an error field.
- True branch: Route to error handling (log + alert).
- False branch: Continue with success flow.
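The IF check above can also be done in a Code node that splits items into failed and succeeded batches. This assumes the risky node was set to continue on error so failed items carry an error field — the exact shape of that field varies by node, so treat this as a sketch:

```javascript
// Sketch: split n8n items into failed and succeeded groups,
// assuming failed items expose an `error` field on their json payload.
function hasError(item) {
  return Boolean(item.json && item.json.error);
}

function splitByError(items) {
  return {
    failed: items.filter(hasError),
    succeeded: items.filter(item => !hasError(item)),
  };
}
```

The failed group feeds your alert/log branch; the succeeded group continues down the happy path.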
Real workflow example
Let’s build a simple but powerful n8n debugging workflow:
Scenario: Sync leads from a form to your CRM. If the sync fails, log it and alert the team.
Nodes:
- Webhook – receive form data
- HTTP Request – call CRM API to create lead
- IF – check if HTTP response has an error
- True: go to Error Handler
- False: go to Success Handler
- Error Handler branch:
- Set node: create error message
- Email node: send error alert to admin
- Google Sheets node: log failed leads
- Success Handler branch:
- Slack node: notify team of new lead
This workflow ensures no lead is lost – even if the CRM fails, you have a log and know about it.

Part 4: Real-Time Alerts – Slack & Email
When an error happens, silence is your enemy. Real-time alerts keep you in the loop.
Option 1: Slack Alerts (fastest)
Slack is usually the best choice because:
- Instant notification to your phone
- Team can see it immediately
- Easy to thread and discuss
Setup:
- In the error handling branch, add a Slack node.
- Connect your Slack workspace (authorize n8n).
- Select channel (e.g., #errors or #alerts).
- Message format:
Workflow: Lead Sync Failed
Error: API returned 500
Lead Email: {{$json["email"]}}
Time: {{$now.toISO()}}
Action: Check CRM status, retry manually if needed
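That message can be assembled in a Code node before the Slack node. The context field names here (workflow, error, email) are illustrative, not a fixed n8n schema:

```javascript
// Sketch: build the Slack alert text from an error-context object.
// Each fact goes on its own line so the team can scan it at a glance.
function buildAlert(ctx) {
  return [
    `Workflow: ${ctx.workflow} Failed`,
    `Error: ${ctx.error}`,
    `Lead Email: ${ctx.email}`,
    `Time: ${new Date().toISOString()}`,
    `Action: Check CRM status, retry manually if needed`,
  ].join("\n");
}
```

Keeping the message builder in one place means the Slack and email branches can share the same context object instead of duplicating expressions in each node.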
Pro tip from Alfaz Mahmud Rizve: Use message formatting with bullet points so your team can scan it quickly.
Option 2: Email Alerts (persistent record)
Slack is great, but emails create a record and work even if Slack is down.
Setup:
- In error handling, add an Email node (or Brevo if that’s your email provider).
- Recipient: your admin email or support team
- Subject: 🚨 n8n Workflow Error: {{workflow_name}}
- Body:
Workflow: {{workflow_name}}
Error: {{error_message}}
Node: {{failed_node_name}}
Time: {{execution_time}}
Details: {{full_error_stack}}
Next steps:
1. Check logs in n8n dashboard
2. Verify API credentials
3. Retry manually if it's a temporary issue
Option 3: Combined Alerts (best)
For critical workflows (lead sync, billing, client data), send BOTH Slack + Email:
- Slack for immediate awareness
- Email for persistent record
This ensures Alfaz Mahmud Rizve’s principle: visible failures are fixed; hidden failures are disasters.

Part 5: Building Your First Production-Ready Workflow
Let’s bring it all together. Here’s a real n8n debugging workflow you can copy for your SaaS or agency:
The workflow: “Client Onboarding with Error Handling”
Goal: When a deal is won in your CRM, automatically create a Notion page, send an email, and add to a Google Sheet. If anything fails, alert Slack and log it.
Nodes (in order):
- CRM Webhook – trigger when deal status = “Won”
- Set – normalize data (extract name, email, company)
- HTTP Request – call Notion API to create page (with retries: 3, interval: 5s)
- IF – check if Notion call succeeded
- True → go to Step 5a
- False → go to Error Handler
5a. Email – send onboarding email to client
5b. Google Sheets – add record to onboarding log
5c. Slack – notify team “New client onboarded”
Error Handler (if Notion fails):
- Set – create error summary
- Slack – send alert: “Onboarding failed for [client]. Notion API error. Check logs.”
- Email – send summary to admin
- Google Sheets – log the failed record with error details
This workflow ensures:
- ✅ Visibility: Team knows immediately if onboarding fails
- ✅ Recoverability: Nothing is silently lost; all failures are logged
- ✅ Audit trail: Google Sheets has a record of every attempt and error
Alfaz Mahmud Rizve uses this pattern for every client-facing workflow at whoisalfaz.me.

Checklist: Is Your Workflow Production-Ready?
Before you deploy any n8n debugging workflow, check:
- Retries enabled on all API calls (3 retries with exponential backoff)
- Error handling for every risky node (HTTP Request, database writes, API calls)
- Slack alert for critical failures (lead loss, payment failure, data sync broken)
- Email alert sent to admin with full error context
- Logging to Google Sheets or database of all failures
- Manual retry option (a button or workflow to re-run failed records)
- Tested with bad data (test how it handles null fields, API errors, timeouts)
If you can check all of these, your workflow is production-ready.
Common Mistakes (and How to Avoid Them)
Mistake 1: Retrying forever
Don’t set max retries to 100. You’ll waste API quota and look like a bot. 3–5 retries is standard.
Mistake 2: Silent failures
A workflow that fails silently is worse than no workflow. Always add alerts.
Mistake 3: Not logging errors
You forgot what went wrong last week? Always log to Google Sheets or a database.
Mistake 4: Catching errors but doing nothing
If you catch an error, handle it. Slack the team, email admin, or retry differently. Silent catches = hidden disasters.
How This Fits Into 30 Days of n8n & Automation
By Day 7, you now understand:
- How to read n8n debugging logs to find problems fast
- How to use retries for temporary failures
- How to build error handling workflows that catch and notify
- How to send real-time alerts to Slack and email
The next posts in this series will show you how to apply these patterns to real-world automations: lead handling, client onboarding, reporting, and more.
Every workflow you build from here on should follow the patterns you learned in n8n debugging and error handling. This isn’t extra work—it’s the difference between automation that works and automation that creates silent disasters.
Your Next Step
- Open one of your existing workflows.
- Add retries to HTTP Request nodes.
- Add a Slack alert to an error handler.
- Test it by intentionally breaking an API call and watching the alert fire.
Once you see that Slack notification, you’ll understand why Alfaz Mahmud Rizve says debugging and error handling is the most important part of production automation.
Subscribe to the newsletter to get Day 8 tomorrow, where we’ll build your first real-world workflow with everything you’ve learned so far.