Call an AI Agent as a tool
This tool allows an LLM to call an AI Agent. Make sure to specify a name and a description so the LLM can understand what the agent does and decide whether to call it.
type: "io.kestra.plugin.ai.tool.AIAgent"

Examples
Call an AI agent as a tool
id: ai-agent-with-agent-tools
namespace: company.ai

inputs:
  - id: prompt
    type: STRING
    defaults: |
      Each flow can produce outputs that can be consumed by other flows. This is a list property, so that your flow can produce as many outputs as you need.
      Each output needs to have an ID (the name of the output), a type (the same types you know from inputs, e.g., STRING, URI, or JSON), and a value, which is the actual output value that will be stored in internal storage and passed to other flows when needed.

tasks:
  - id: ai-agent
    type: io.kestra.plugin.ai.agent.AIAgent
    provider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
      modelName: gemini-2.5-flash
      apiKey: "{{ kv('GEMINI_API_KEY') }}"
    systemMessage: Summarize the user message, then translate it into French using the provided tool.
    prompt: "{{inputs.prompt}}"
    tools:
      - type: io.kestra.plugin.ai.tool.AIAgent
        description: Translation expert
        systemMessage: You are an expert in translating text between multiple languages
        provider:
          type: io.kestra.plugin.ai.provider.GoogleGemini
          modelName: gemini-2.5-flash-lite
          apiKey: "{{ kv('GEMINI_API_KEY') }}"

Properties
description (string, required)
Agent description
The description is used to tell the LLM what the tool does.
provider (required, non-dynamic)
Language model provider
One of: AmazonBedrock, Anthropic, AzureOpenAI, DashScope, DeepSeek, GoogleGemini, GoogleVertexAI, HuggingFace, LocalAI, MistralAI, OciGenAI, Ollama, OpenAI, OpenRouter, WorkersAI, ZhiPuAI.
configuration (ChatConfiguration, non-dynamic)
Language model configuration
Default: {}
contentRetrievers (array)
Content retrievers
Each item is one of: GoogleCustomWebSearch, SqlDatabaseRetriever, TavilyWebSearch.
Some content retrievers, like WebSearch, can also be used as tools. However, when configured as content retrievers, they are always used, whereas tools are invoked only when the LLM decides to use them.
maxSequentialToolsInvocations (integer or string)
Maximum sequential tool invocations
name (string)
toolAgent name
Set it to a non-default value if you want to use multiple agents as tools in the same task.
systemMessage (string)
System message
The system message for the language model
tools (array, non-dynamic)
Tools that the LLM may use to augment its response
Each item is one of: A2AAgent, AIAgent, CodeExecution, DockerMcpClient, GoogleCustomWebSearch, KestraFlow, KestraTask, SseMcpClient, StdioMcpClient, StreamableHttpMcpClient, TavilyWebSearch.
Definitions
io.kestra.core.models.tasks.retrys.Constant
interval (duration string, required)
type (object, required)
behavior (string)
Default: RETRY_FAILED_TASK. Allowed values: RETRY_FAILED_TASK, CREATE_NEW_EXECUTION.
maxAttempts (integer, >= 1)
maxDuration (duration string)
warningOnRetry (boolean)
Default: false

Mistral AI Model Provider
apiKey (string, required)
API Key
modelName (string, required)
Model name
baseUrl (string)
Base URL
Custom base URL to override the default endpoint (useful for local tests, WireMock, or enterprise gateways).
caPem (string)
CA PEM certificate content
CA certificate as text, used to verify SSL/TLS connections when using custom endpoints.
clientPem (string)
Client PEM certificate content
PEM client certificate as text, used to authenticate the connection to enterprise AI endpoints.
type (object)
Model Context Protocol (MCP) Stdio client tool
command (array, required)
MCP client command, as a list of command parts
env (object)
Environment variables
logEvents (boolean or string)
Log events
Default: false
type (object)
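As a sketch, a stdio MCP client could be attached to the agent's tools list like this. The command and environment values are placeholders, and the type path assumes tool types live under io.kestra.plugin.ai.tool, as AIAgent does:

```yaml
tools:
  - type: io.kestra.plugin.ai.tool.StdioMcpClient
    # Placeholder: any MCP server launchable as a local process
    command: ["uvx", "mcp-server-fetch"]
    env:
      DEBUG: "true"
    logEvents: true
```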
io.kestra.core.models.tasks.retrys.Exponential
interval (duration string, required)
maxInterval (duration string, required)
type (object, required)
behavior (string)
Default: RETRY_FAILED_TASK. Allowed values: RETRY_FAILED_TASK, CREATE_NEW_EXECUTION.
delayFactor (number)
maxAttempts (integer, >= 1)
maxDuration (duration string)
warningOnRetry (boolean)
Default: false

ZhiPu AI Model Provider
apiKey (string, required)
API Key
modelName (string, required)
Model name
baseUrl (string)
API base URL
Default: https://open.bigmodel.cn/
The base URL for the ZhiPu API (defaults to https://open.bigmodel.cn/)
caPem (string)
CA PEM certificate content
CA certificate as text, used to verify SSL/TLS connections when using custom endpoints.
clientPem (string)
Client PEM certificate content
PEM client certificate as text, used to authenticate the connection to enterprise AI endpoints.
maxRetries (integer or string)
Maximum number of request retries
maxToken (integer or string)
The maximum number of tokens returned by this request
stops (array)
With the stop parameter, the model automatically stops generating text when the output is about to contain one of the specified strings or token_ids.
type (object)
Call a Kestra flow as a tool
description (string)
Description of the flow, if not already provided inside the flow itself
Use it only if you define the flow in the tool definition. The LLM needs a tool description to decide whether to call it. If the flow has a description, the tool will use it; otherwise, the description property must be set explicitly.
flowId (string)
Flow ID of the flow that should be called
inheritLabels (boolean or string)
Whether the flow should inherit labels from the execution that triggered it
Default: false
By default, labels are not inherited. If you set this option to true, the flow execution will inherit all labels from the agent's execution.
Any labels passed by the LLM will override those defined here.
inputs (object)
Input values that should be passed to the flow's execution
Any inputs passed by the LLM will override those defined here.
labels (array or object)
Labels that should be added to the flow's execution
Any labels passed by the LLM will override those defined here.
namespace (string)
Namespace of the flow that should be called
revision (integer or string)
Revision of the flow that should be called
scheduleDate (date-time string)
Schedule the flow execution at a later date
If the LLM sets a scheduleDate, it will override the one defined here.
type (object)
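As a sketch, an existing flow could be exposed as a tool the LLM may invoke. The namespace, flow ID, and input values below are placeholders, and the type path assumes tool types live under io.kestra.plugin.ai.tool, as AIAgent does:

```yaml
tools:
  - type: io.kestra.plugin.ai.tool.KestraFlow
    # Placeholders: point these at a real flow in your instance
    namespace: company.team
    flowId: send-notification
    description: Send a notification message to the on-call channel
    inputs:
      channel: "#alerts"
    inheritLabels: true
```

Because the LLM can override inputs and labels, treat the values defined here as defaults rather than guarantees.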
Model Context Protocol (MCP) Streamable HTTP client tool
url (string, required)
URL of the MCP server
headers (object)
Custom headers
Useful, for example, for adding authentication tokens via the Authorization header.
logRequests (boolean or string)
Log requests
Default: false
logResponses (boolean or string)
Log responses
Default: false
timeout (duration string)
Connection timeout duration
type (object)
Call a Kestra runnable task as a tool
Deepseek Model Provider
apiKey (string, required)
API Key
modelName (string, required)
Model name
baseUrl (string)
API base URL
Default: https://api.deepseek.com/v1
caPem (string)
CA PEM certificate content
CA certificate as text, used to verify SSL/TLS connections when using custom endpoints.
clientPem (string)
Client PEM certificate content
PEM client certificate as text, used to authenticate the connection to enterprise AI endpoints.
type (object)
io.kestra.plugin.ai.domain.ChatConfiguration-ResponseFormat
jsonSchema (object)
JSON Schema (used when type = JSON)
Provide a JSON Schema describing the expected structure of the response. In Kestra flows, define the schema in YAML (it is still a JSON Schema object). Example (YAML):
responseFormat:
  type: JSON
  jsonSchema:
    type: object
    required: ["category", "priority"]
    properties:
      category:
        type: string
        enum: ["ACCOUNT", "BILLING", "TECHNICAL", "GENERAL"]
      priority:
        type: string
        enum: ["LOW", "MEDIUM", "HIGH"]
Note: Provider support for strict schema enforcement varies. If unsupported, guide the model toward the expected output structure via the prompt and validate downstream.
jsonSchemaDescription (string)
Schema description (optional)
Natural-language description of the schema to help the model produce the right fields. Example: "Classify a customer ticket into category and priority."
type (string)
Response format type
Default: TEXT
Specifies how the LLM should return output. Allowed values:
- TEXT (default): free-form natural language.
- JSON: structured output validated against a JSON Schema.
OpenRouter Model Provider
apiKey (string, required)
API Key
modelName (string, required)
Model name
baseUrl (string)
Base URL
Custom base URL to override the default endpoint (useful for local tests, WireMock, or enterprise gateways).
caPem (string)
CA PEM certificate content
CA certificate as text, used to verify SSL/TLS connections when using custom endpoints.
clientPem (string)
Client PEM certificate content
PEM client certificate as text, used to authenticate the connection to enterprise AI endpoints.
type (object)
io.kestra.core.models.tasks.retrys.Random
maxInterval (duration string, required)
minInterval (duration string, required)
type (object, required)
behavior (string)
Default: RETRY_FAILED_TASK. Allowed values: RETRY_FAILED_TASK, CREATE_NEW_EXECUTION.
maxAttempts (integer, >= 1)
maxDuration (duration string)
warningOnRetry (boolean)
Default: false

Model Context Protocol (MCP) Docker client tool
image (string, required)
Container image
apiVersion (string)
API version
binds (array)
Volume binds
command (array)
MCP client command, as a list of command parts
dockerCertPath (string)
Docker certificate path
dockerConfig (string)
Docker configuration
dockerContext (string)
Docker context
dockerHost (string)
Docker host
dockerTlsVerify (boolean or string)
Whether Docker should verify TLS certificates
env (object)
Environment variables
logEvents (boolean or string)
Whether to log events
Default: false
registryEmail (string)
Container registry email
registryPassword (string)
Container registry password
registryUrl (string)
Container registry URL
registryUsername (string)
Container registry username
type (object)
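As a sketch, an MCP server running in a container could be wired in like this. The image name is a placeholder for any MCP server image, and the type path assumes tool types live under io.kestra.plugin.ai.tool, as AIAgent does:

```yaml
tools:
  - type: io.kestra.plugin.ai.tool.DockerMcpClient
    # Placeholder image: substitute the MCP server image you actually use
    image: mcp/fetch
    env:
      LOG_LEVEL: "info"
    logEvents: true
```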
Google Custom Search web tool
apiKey (string, required)
API key
csi (string, required)
Custom search engine ID (cx)
type (object)
Ollama Model Provider
endpoint (string, required)
Model endpoint
modelName (string, required)
Model name
baseUrl (string)
Base URL
Custom base URL to override the default endpoint (useful for local tests, WireMock, or enterprise gateways).
caPem (string)
CA PEM certificate content
CA certificate as text, used to verify SSL/TLS connections when using custom endpoints.
clientPem (string)
Client PEM certificate content
PEM client certificate as text, used to authenticate the connection to enterprise AI endpoints.
type (object)
Code execution tool using Judge0
apiKey (string, required)
RapidAPI key for Judge0
You can obtain it from the RapidAPI website.
type (object)
OpenAI Model Provider
apiKey (string, required)
API Key
modelName (string, required)
Model name
baseUrl (string)
API base URL
Default: https://api.openai.com/v1
caPem (string)
CA PEM certificate content
CA certificate as text, used to verify SSL/TLS connections when using custom endpoints.
clientPem (string)
Client PEM certificate content
PEM client certificate as text, used to authenticate the connection to enterprise AI endpoints.
type (object)
SQL Database content retriever using LangChain4j experimental SqlDatabaseContentRetriever. ⚠ IMPORTANT: the database user should have READ-ONLY permissions.
databaseType (string, required)
Type of database to connect to (PostgreSQL, MySQL, or H2)
Allowed values: POSTGRESQL, MYSQL, H2.
Determines the default JDBC driver and connection format.
password (string, required)
Database password
provider (required)
Language model provider
One of: AmazonBedrock, Anthropic, AzureOpenAI, DashScope, DeepSeek, GoogleGemini, GoogleVertexAI, HuggingFace, LocalAI, MistralAI, OciGenAI, Ollama, OpenAI, OpenRouter, WorkersAI, ZhiPuAI.
username (string, required)
Database username
configuration (ChatConfiguration)
Language model configuration
Default: {}
driver (string)
Optional JDBC driver class name; automatically resolved if not provided.
jdbcUrl (string)
JDBC connection URL to the target database
maxPoolSize (integer or string)
Maximum number of database connections in the pool
Default: 2
type (object)
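As a sketch, a SQL content retriever could be configured on the agent task like this. The type path is an assumption (the exact package for retrievers is not shown on this page), and the JDBC URL and credentials are placeholders; remember the database user should be read-only:

```yaml
contentRetrievers:
  # Assumed type path; verify against the plugin's retriever package
  - type: io.kestra.plugin.ai.retriever.SqlDatabaseRetriever
    databaseType: POSTGRESQL
    # Placeholder connection details
    jdbcUrl: jdbc:postgresql://localhost:5432/analytics
    username: readonly_user
    password: "{{ kv('DB_PASSWORD') }}"
    provider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
      modelName: gemini-2.5-flash
      apiKey: "{{ kv('GEMINI_API_KEY') }}"
```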
Web search content retriever for Google Custom Search
apiKey (string, required)
API key
csi (string, required)
Custom search engine ID (cx)
maxResults (integer or string)
Maximum number of results
Default: 3
type (object)
io.kestra.plugin.ai.domain.ChatConfiguration
logRequests (boolean or string)
Log LLM requests
If true, prompts and configuration sent to the LLM will be logged at INFO level.
logResponses (boolean or string)
Log LLM responses
If true, raw responses from the LLM will be logged at INFO level.
maxToken (integer or string)
Maximum number of tokens the model can generate in the completion (response). This limits the length of the output.
responseFormat (ChatConfiguration-ResponseFormat)
Response format
Defines the expected output format. Default is plain text.
Some providers allow requesting JSON or schema-constrained outputs, but support varies and may be incompatible with tool use.
When using a JSON schema, the output will be returned under the key jsonOutput.
returnThinking (boolean or string)
Return thinking
Controls whether to return the model's internal reasoning ('thinking') text, if available. When enabled, the reasoning content is extracted from the response and made available in the AiMessage object. It does not trigger the thinking process itself; it only affects whether the reasoning output is parsed and returned.
seed (integer or string)
Seed
Optional random seed for reproducibility. Provide a positive integer (e.g., 42, 1234). Using the same seed with identical settings produces repeatable outputs.
temperature (number or string)
Temperature
Controls randomness in generation. Typical range is 0.0–1.0. Lower values (e.g., 0.2) make outputs more focused and deterministic, while higher values (e.g., 0.7–1.0) increase creativity and variability.
thinkingBudgetTokens (integer or string)
Thinking token budget
Maximum number of tokens allocated for internal reasoning, such as generating intermediate thoughts or chain-of-thought sequences, allowing the model to perform multi-step reasoning before producing the final output.
thinkingEnabled (boolean or string)
Enable thinking
Enables internal reasoning ('thinking') in supported language models, allowing the model to perform intermediate reasoning steps before producing a final output. This is useful for complex tasks like multi-step problem solving or decision making, but it may increase token usage and response time, and it applies only to compatible models.
topK (integer or string)
Top-K
Limits sampling to the top K most likely tokens at each step. Typical values are between 20 and 100. Smaller values reduce randomness; larger values allow more diverse outputs.
topP (number or string)
Top-P (nucleus sampling)
Selects from the smallest set of tokens whose cumulative probability is ≤ topP. Typical values are 0.8–0.95. Lower values make the output more focused, higher values increase diversity.
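As a sketch, these configuration knobs combine on the agent task like this (the specific values are illustrative, not recommendations; JSON response format support depends on the provider):

```yaml
configuration:
  temperature: 0.2     # focused, mostly deterministic output
  topP: 0.9
  seed: 42             # repeatable outputs with identical settings
  maxToken: 1024
  responseFormat:
    type: JSON
    jsonSchema:
      type: object
      required: ["summary"]
      properties:
        summary:
          type: string
```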
io.kestra.core.models.tasks.Cache
enabled (boolean, required)
ttl (duration string)

io.kestra.core.models.tasks.WorkerGroup
fallback (string)
Allowed values: FAIL, WAIT, CANCEL.
key (string)
WorkersAI Model Provider
accountId (string, required)
Account identifier
Unique identifier assigned to an account
apiKey (string, required)
API Key
modelName (string, required)
Model name
baseUrl (string)
Base URL
Custom base URL to override the default endpoint (useful for local tests, WireMock, or enterprise gateways).
caPem (string)
CA PEM certificate content
CA certificate as text, used to verify SSL/TLS connections when using custom endpoints.
clientPem (string)
Client PEM certificate content
PEM client certificate as text, used to authenticate the connection to enterprise AI endpoints.
type (object)
Azure OpenAI Model Provider
endpoint (string, required)
API endpoint
The Azure OpenAI endpoint in the format: https://{resource}.openai.azure.com/
modelName (string, required)
Model name
apiKey (string)
API Key
baseUrl (string)
Base URL
Custom base URL to override the default endpoint (useful for local tests, WireMock, or enterprise gateways).
caPem (string)
CA PEM certificate content
CA certificate as text, used to verify SSL/TLS connections when using custom endpoints.
clientId (string)
Client ID
clientPem (string)
Client PEM certificate content
PEM client certificate as text, used to authenticate the connection to enterprise AI endpoints.
clientSecret (string)
Client secret
serviceVersion (string)
API version
tenantId (string)
Tenant ID
type (object)
Google VertexAI Model Provider
endpoint (string, required)
Endpoint URL
location (string, required)
Project location
modelName (string, required)
Model name
project (string, required)
Project ID
baseUrl (string)
Base URL
Custom base URL to override the default endpoint (useful for local tests, WireMock, or enterprise gateways).
caPem (string)
CA PEM certificate content
CA certificate as text, used to verify SSL/TLS connections when using custom endpoints.
clientPem (string)
Client PEM certificate content
PEM client certificate as text, used to authenticate the connection to enterprise AI endpoints.
type (object)
Google Gemini Model Provider
apiKey (string, required)
API Key
modelName (string, required)
Model name
baseUrl (string)
Base URL
Custom base URL to override the default endpoint (useful for local tests, WireMock, or enterprise gateways).
caPem (string)
CA PEM certificate content
CA certificate as text, used to verify SSL/TLS connections when using custom endpoints.
clientPem (string)
Client PEM certificate content
PEM client certificate as text, used to authenticate the connection to enterprise AI endpoints.
type (object)
OciGenAI Model Provider
compartmentId (string, required)
OCID of the OCI compartment containing the model
modelName (string, required)
Model name
region (string, required)
OCI region to connect the client to
authProvider (string)
OCI SDK authentication provider
baseUrl (string)
Base URL
Custom base URL to override the default endpoint (useful for local tests, WireMock, or enterprise gateways).
caPem (string)
CA PEM certificate content
CA certificate as text, used to verify SSL/TLS connections when using custom endpoints.
clientPem (string)
Client PEM certificate content
PEM client certificate as text, used to authenticate the connection to enterprise AI endpoints.
type (object)
Call a remote AI agent via the A2A protocol.
description (string, required)
Agent description
The description is used to tell the LLM what the tool does.
serverUrl (string, required)
Server URL
The URL of the remote agent's A2A server
name (string)
toolAgent name
Set it to a non-default value if you want to use multiple agents as tools in the same task.
type (object)
Model Context Protocol (MCP) SSE client tool
sseUrl (string, required)
SSE URL of the MCP server
headers (object)
Custom headers
Useful, for example, for adding authentication tokens via the Authorization header.
logRequests (boolean or string)
Log requests
Default: false
logResponses (boolean or string)
Log responses
Default: false
timeout (duration string)
Connection timeout duration
type (object)
WebSearch content retriever for Tavily Search
apiKey (string, required)
API Key
maxResults (integer or string)
Maximum number of results to return
Default: 3
type (object)
Anthropic AI Model Provider
apiKey (string, required)
API Key
modelName (string, required)
Model name
baseUrl (string)
Base URL
Custom base URL to override the default endpoint (useful for local tests, WireMock, or enterprise gateways).
caPem (string)
CA PEM certificate content
CA certificate as text, used to verify SSL/TLS connections when using custom endpoints.
clientPem (string)
Client PEM certificate content
PEM client certificate as text, used to authenticate the connection to enterprise AI endpoints.
maxTokens (integer or string)
Maximum tokens
Specifies the maximum number of tokens that the model is allowed to generate in its response.
type (object)
WebSearch tool for Tavily Search
apiKey (string, required)
Tavily API Key; you can obtain one from the Tavily website
type (object)
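As a sketch, the Tavily web search tool needs only an API key. The type path assumes tool types live under io.kestra.plugin.ai.tool, as AIAgent does:

```yaml
tools:
  - type: io.kestra.plugin.ai.tool.TavilyWebSearch
    apiKey: "{{ kv('TAVILY_API_KEY') }}"
```

Unlike the content-retriever variant above, the tool form runs only when the LLM decides a web search is needed.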
DashScope (Qwen) Model Provider from Alibaba Cloud
apiKey (string, required)
API Key
modelName (string, required)
Model name
baseUrl (string)
API base URL
Default: https://dashscope-intl.aliyuncs.com/api/v1
If you use a model in the China (Beijing) region, replace the URL with https://dashscope.aliyuncs.com/api/v1; otherwise use the Singapore region URL https://dashscope-intl.aliyuncs.com/api/v1.
The default value is computed based on the system timezone.
caPem (string)
CA PEM certificate content
CA certificate as text, used to verify SSL/TLS connections when using custom endpoints.
clientPem (string)
Client PEM certificate content
PEM client certificate as text, used to authenticate the connection to enterprise AI endpoints.
enableSearch (boolean or string)
Whether the model consults Internet search results when generating text
maxTokens (integer or string)
The maximum number of tokens returned by this request
repetitionPenalty (number or string)
Penalty for repetition in continuous sequences during model generation
Increasing repetition_penalty reduces repetition in model generation; 1.0 means no penalty. Value range: (0, +inf)
type (object)
LocalAI Model Provider
baseUrl (string, required)
API base URL
modelName (string, required)
Model name
caPem (string)
CA PEM certificate content
CA certificate as text, used to verify SSL/TLS connections when using custom endpoints.
clientPem (string)
Client PEM certificate content
PEM client certificate as text, used to authenticate the connection to enterprise AI endpoints.
type (object)
Amazon Bedrock Model Provider
accessKeyId (string, required)
AWS Access Key ID
modelName (string, required)
Model name
secretAccessKey (string, required)
AWS Secret Access Key
baseUrl (string)
Base URL
Custom base URL to override the default endpoint (useful for local tests, WireMock, or enterprise gateways).
caPem (string)
CA PEM certificate content
CA certificate as text, used to verify SSL/TLS connections when using custom endpoints.
clientPem (string)
Client PEM certificate content
PEM client certificate as text, used to authenticate the connection to enterprise AI endpoints.
modelType (string)
Amazon Bedrock embedding model type
Default: COHERE. Allowed values: COHERE, TITAN.
type (object)
HuggingFace Model Provider
apiKey (string, required)
API Key
modelName (string, required)
Model name
baseUrl (string)
API base URL
Default: https://router.huggingface.co/v1
caPem (string)
CA PEM certificate content
CA certificate as text, used to verify SSL/TLS connections when using custom endpoints.
clientPem (string)
Client PEM certificate content
PEM client certificate as text, used to authenticate the connection to enterprise AI endpoints.