How to Use Any Model with n8n Using Open Router

Introduction

In this guide, I’m going to show you how to use any model with n8n using OpenRouter. I’ll also show you how to create an LLM router so that your agent decides which model to use based on your question.

The beauty of these router agents is that they can be dropped in anywhere. You can place one in the middle of any agentic system you have: whenever you need an answer from a dynamically chosen model, just put the router agent before that step.

OpenRouter and n8n

We’re going to be using a recent n8n update that adds support for OpenRouter. OpenRouter gives you access to a vast selection of models from providers including:

  • Anthropic
  • Amazon
  • DeepSeek
  • Google (Gemini)
  • OpenAI
  • Meta (Llama)
  • Microsoft
  • xAI (Grok)

How It Works: The Decider Agent

In my decider agent, I’ve listed three models that OpenRouter has access to, giving each one a model name and a description of its strengths.

  • If I ask a web search question, Perplexity gets called.
  • If I ask an advanced reasoning question, o3-mini gets called.
  • If I ask a coding question, Anthropic’s Claude gets called.

This allows the agent to decide which model to use based on the question.
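To make the decision logic concrete, here is a minimal sketch of the same routing in plain Python. In n8n this decision is made by the router agent via its system prompt, not by keyword matching; the keywords and OpenRouter model IDs below are illustrative assumptions.

```python
# Illustrative-only sketch of the decider logic. In the real workflow an LLM
# makes this choice; the model IDs are assumptions based on OpenRouter naming.

def pick_model(question: str) -> str:
    """Route a question to a model ID using simple keyword heuristics."""
    q = question.lower()
    if any(k in q for k in ("search", "latest", "news", "today")):
        return "perplexity/sonar"             # web search questions
    if any(k in q for k in ("code", "function", "bug", "script")):
        return "anthropic/claude-3.5-sonnet"  # coding questions
    return "openai/o3-mini"                   # default: advanced reasoning

print(pick_model("Search the latest AI news"))   # the web-search model
print(pick_model("Fix this bug in my script"))   # the coding model
```

The real decider is far more flexible than keyword matching, but the shape of the decision is the same: one query in, one model ID out.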

Step-by-Step Guide to Build Your Own LLM Router System

First, add your trigger step. You can use any application, such as Telegram, Slack, or WhatsApp; for this demo, we will start with a simple chat message.

Then, add an AI Agent node. Since it shows up in red, we need to attach a chat model. Go to OpenRouter and follow these steps to create an account:

  1. Sign up for OpenRouter.
  2. Go to Credits and add credits using crypto or a credit card.
  3. Go to Keys and create a new key.
  4. Copy the secret key.
  5. Go back to your router agent in n8n and click OpenRouter Chat Model.
  6. Create a new credential and paste in your API key.

From the OpenRouter model list, you can then select the model that you want. In this example, we will run o3-mini, as it’s cost-effective, fast, and good at reasoning.

After this, add memory by adding a basic Window Buffer Memory node. Now we are ready to create a system prompt for this agent.
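A window buffer memory simply keeps the last N exchanges so the model sees recent context without an ever-growing prompt. A minimal sketch of that behavior (the class name and window size are illustrative, not n8n’s internals):

```python
from collections import deque

class WindowBufferMemory:
    """Keep only the most recent exchanges in a fixed-size window."""

    def __init__(self, window_size: int = 5):
        # Each exchange is a user message plus an assistant reply.
        self.messages = deque(maxlen=2 * window_size)

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})

    def context(self) -> list:
        """Messages to prepend to the next model call."""
        return list(self.messages)

mem = WindowBufferMemory(window_size=2)
for i in range(4):
    mem.add("user", f"question {i}")
    mem.add("assistant", f"answer {i}")
print(len(mem.context()))  # only the last two exchanges are kept
```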

The System Prompt

When you’re using this setup, I would recommend anywhere from 1 to 10 models for the agent to select from. If you add more than that, it can become difficult for the agent to decide.

First, give the agent a role.

  • You are a routing agent, and your job is to take in user queries and decide which model is the best fit for each use case.
  • You will have models you can select from for the user query.

Then, list those models with their model IDs from OpenRouter, as well as their strengths:

  • Perplexity: built-in web search features, citations, the ability to customize sources, and live data from the web.
  • o3-mini-high: a cost-efficient language model optimized for reasoning tasks; excels in science and mathematics. Best when careful, well-thought-out responses are needed for problems with multiple variables or connections.
  • Claude 3.5 Sonnet: strong at coding questions.
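The model list above can be assembled into the system prompt programmatically, which keeps the prompt in sync when you add or remove models. A sketch, where the OpenRouter model IDs are assumptions you should verify on openrouter.ai:

```python
# Model IDs are assumptions based on OpenRouter's naming scheme --
# check the exact IDs on openrouter.ai before using them.
MODELS = {
    "perplexity/sonar": "Built-in web search, citations, live data.",
    "openai/o3-mini-high": "Cost-efficient reasoning; excels in science and math.",
    "anthropic/claude-3.5-sonnet": "Strong at coding questions.",
}

def build_system_prompt(models: dict) -> str:
    """Build a router system prompt: role first, then the model menu."""
    lines = [
        "You are a routing agent. Take in user queries and decide which",
        "model is the best fit for each use case. Available models:",
    ]
    for model_id, strengths in models.items():
        lines.append(f"- {model_id}: {strengths}")
    lines.append("Reply with only the chosen model ID.")
    return "\n".join(lines)

print(build_system_prompt(MODELS))
```

Ending the prompt with “reply with only the chosen model ID” makes the decider’s output easy to map as an expression in the next node.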

Additional Step

This structured output then has to be passed to another agent, which actually answers the query. To do this, add a second AI Agent node. Where it says the user message, define it as a dynamic expression that pulls the user query over from the decider agent.

To make sure the chatbot follows our instructions, attach another OpenRouter Chat Model node. Select it, click Expressions, and map the model field to the model chosen by the decider. We do this so that the chatbot’s model changes dynamically.

After this, add another Window Buffer Memory node and drag in the chat message session ID.
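The hand-off between the two agents boils down to reading two fields out of the decider’s structured output. A sketch in Python, where the field names `model` and `query` are assumptions about your structured output schema:

```python
import json

# What the decider agent might emit as structured output; the field names
# ("model", "query") are assumed, not prescribed by n8n.
decider_output = json.dumps({
    "model": "anthropic/claude-3.5-sonnet",
    "query": "Write a binary search in Python",
})

parsed = json.loads(decider_output)

# In n8n these two reads correspond to expressions like
# {{ $json.model }} (mapped to the chat model) and
# {{ $json.query }} (mapped to the user message).
chat_model = parsed["model"]
user_message = parsed["query"]
print(chat_model, user_message)
```

The key point is that the second agent never hard-codes a model; both the model and the message arrive from the decider at run time.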

Using Multiple Models (10+)

If you want to scale up to a selection of 10 or more models, create a system where the user’s question or command is first sent to a categorizing agent. Instead of deciding which model to use directly, this agent decides which category the query falls into. Each category then has its own set of models, and the query is forwarded to a dedicated agent for that category.

Each of those agents has a list of models for its query type and knows which models are better in which situations. A coding query, for example, goes to the coding agent, which decides which coding model to use and passes the query to a response agent; the answer then comes back to you, the user.
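The two-tier design can be sketched as two functions: one picks a category, the next picks a model within that category. The categories, keywords, and model IDs below are illustrative assumptions; in the real system both steps are LLM agents, not keyword matchers.

```python
# Illustrative two-tier routing: categorizer first, then a per-category pick.
CATEGORY_MODELS = {
    "web_search": ["perplexity/sonar"],
    "reasoning": ["openai/o3-mini", "openai/o3-mini-high"],
    "coding": ["anthropic/claude-3.5-sonnet", "deepseek/deepseek-chat"],
}

def categorize(query: str) -> str:
    """First tier: decide which category the query falls into."""
    q = query.lower()
    if any(k in q for k in ("search", "news", "latest")):
        return "web_search"
    if any(k in q for k in ("code", "bug", "function")):
        return "coding"
    return "reasoning"

def route(query: str) -> str:
    """Second tier: pick a model within the category. A real category
    agent would weigh each model's strengths against the specific query;
    here we just take the first entry."""
    return CATEGORY_MODELS[categorize(query)][0]

print(route("Refactor this function"))  # resolves to a coding model
```

Splitting the decision this way keeps each agent’s menu small, which is exactly why it scales past the 10-model limit of a single decider.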

Conclusion

You can put this into any automation you want: business plans, coding outputs, anything, produced with the best models currently available on the market. Using OpenRouter gives you dynamic outputs while always using the best model for the job.


Thank you for reading this blog post on how to use any model with n8n using OpenRouter. I hope this guide has been helpful in understanding how to set up an LLM routing system to dynamically select the best model for your use case.

To watch a video demonstration of this process, please check out the following YouTube video: https://youtube.com/watch?v=qNdDoeUj6Yg

