
LLM Node

The LLM node calls AI language models to analyze data, generate insights, and make decisions within your workflow. It supports 9 providers and 30+ models — from fast summarizers to deep reasoning engines.

Configuration

| Field | Description |
| --- | --- |
| Model | Select from the dropdown, grouped by provider. Default: Claude Sonnet 4.6. |
| API Credentials | NickAI Credits (default — works with all models) or your own API key for the selected provider. |
| System Prompt | Define the AI's role and behavior. Shapes how the model responds to every request. |
| User Prompt | The main instruction for this execution. Use {{edge_label.field}} to inject data from connected nodes. |
| Temperature | Controls randomness. 0 = deterministic, 0.7 = balanced (default), 2.0 = maximum creativity. |
| Max Tokens | Maximum response length. Default: 4000. Range: 1–8192. |
| Timeout | Maximum wait time in seconds. Default: 60. Range: 1–300. |

Available Models

| Provider | Model | Best for |
| --- | --- | --- |
| Anthropic | Claude Sonnet 4.6 (default) | Complex analysis, structured output |
| | Claude Opus 4.6 | Hardest tasks, 1M context |
| | Claude Sonnet 4.5 | Flagship general-purpose |
| | Claude Opus 4.5 | Deep complex reasoning |
| | Claude Sonnet 4 | Extended thinking / reasoning |
| | Claude Haiku 4.5 | Fast and cost-effective |
| OpenAI | GPT-5.2 | Latest flagship |
| | GPT-5 | Complex analysis |
| | GPT-5 Mini | Efficient general-purpose |
| | GPT-4o | Multimodal / chart analysis |
| | GPT-4o Mini | Fast, low-cost |
| Google | Gemini 3 Pro | Flagship multimodal |
| | Gemini 2.5 Flash | Fast multimodal / vision |
| | Gemini 2.5 Flash Lite | Ultra-fast inference |
| | Gemini 2.5 Pro | Advanced reasoning |
| xAI | Grok 4 | Flagship |
| | Grok 4 Fast | Ultra-fast |
| | Grok 3 | General-purpose |
| | Grok 3 Mini | Lightweight |
| | Grok Code Fast | Code generation |
| DeepSeek | DeepSeek Chat | Conversational |
| | DeepSeek Reasoner | Deep reasoning |
| Qwen | Qwen 2.5 72B | Large-scale analysis |
| | Qwen Coder 32B | Code generation |
| Perplexity | Sonar Pro | Research with web search |
| | Sonar Reasoning | Deep reasoning + search |
| | Sonar | Fast search with citations |
| Kimi | Kimi K2.5 | Visual coding, multimodal |
| | Kimi K2 Thinking | Long-horizon reasoning |
| | Kimi K2 | General-purpose |
| MiniMax | MiniMax M2.5 | Real-world productivity |
| | MiniMax M2.1 | Coding, agentic workflows |
| | MiniMax M2 | Compact, high-efficiency |

Model Selection Guide

Not sure which model to pick? Match the use case to the right category:

| Use case | Recommended models |
| --- | --- |
| Complex analysis, structured output | Claude Sonnet 4.6, Claude Opus 4.6, GPT-5 |
| Visual / chart analysis (image input) | GPT-4o, Gemini 2.5 Flash, Kimi K2.5 |
| Web search + analysis | Sonar Pro, Sonar Reasoning, Sonar |
| Fast, cost-effective | Claude Haiku 4.5, GPT-4o Mini, GPT-5 Mini, Grok 4 Fast |
| Deep reasoning | Claude Sonnet 4 (extended thinking), DeepSeek Reasoner, Kimi K2 Thinking |
| Code generation | Qwen Coder 32B, Grok Code Fast |

Prompt Interpolation

Use double curly braces to inject live data from upstream nodes into your prompts.

| Expression | What it resolves to |
| --- | --- |
| {{price_data.data.prices[0].current}} | Current price from a Price Data node |
| {{price_data.data.prices[0].indicators.rsi}} | RSI value |
| {{portfolio.positions}} | Full positions array from a Portfolio node |
| {{my_function.signal}} | A specific field from a Function node |
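To make the resolution rules concrete, here is a minimal Python sketch of how `{{edge_label.field.path[0]}}` expressions could be resolved against upstream node outputs. This is an illustration of the behavior, not the engine's actual implementation:

```python
import re

def interpolate(template: str, context: dict) -> str:
    """Resolve {{edge_label.path[0].field}} expressions against a dict of
    upstream node outputs. Illustrative sketch only."""
    def resolve(match):
        # Split "price_data.data.prices[0].current" into keys and [index] parts
        parts = re.findall(r"\w+|\[\d+\]", match.group(1))
        value = context
        for part in parts:
            if part.startswith("["):
                value = value[int(part[1:-1])]   # array index
            else:
                value = value[part]              # object key
        return str(value)
    return re.sub(r"\{\{\s*([^}]+?)\s*\}\}", resolve, template)
```

For example, with a Price Data node labeled `price_data` upstream, `interpolate("Price: {{price_data.data.prices[0].current}}", outputs)` substitutes the live price into the prompt text before the model is called.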

Example Prompts

Market Analyst:

System: You are a crypto market analyst. Analyze the provided price data
and technical indicators.

Respond in this exact format:
ACTION: [BUY / SELL / HOLD]
CONFIDENCE: [0-100]%
RATIONALE: [2-3 sentence explanation]

Be conservative — only recommend BUY when multiple indicators align.
User: Analyze BTC/USD right now.

Current price: {{price_data.data.prices[0].current}}
24h change: {{price_data.data.prices[0].changePercent24h}}%
RSI: {{price_data.data.prices[0].indicators.rsi}}

Based on these indicators, what is your recommendation?

Risk Monitor:

System: You are a portfolio risk manager. Review the positions and flag
any concerns.

Respond in valid JSON only:
{"alerts": [{"symbol": "...", "issue": "...", "severity": "low|medium|high"}],
 "overallRisk": "low|medium|high",
 "summary": "..."}
User: Review my current positions:
{{portfolio.positions}}

Cash balance: {{portfolio.cashBalance}}
Net worth: {{portfolio.netWorth}}

Prediction Market Analyst:

System: You are a prediction market analyst. Given the market title,
current probability, and description, assess whether the current
probability represents good value.

Respond in this format:
POSITION: [BUY_YES / BUY_NO / SKIP]
EDGE: [your estimated probability minus market probability]%
REASONING: [1-2 sentences]
User: Evaluate this Polymarket market:
Title: {{polymarket.markets[0].title}}
Probability: {{polymarket.markets[0].markets[0].outcomePrices[0]}}
Description: {{polymarket.markets[0].description}}

Structured Output

Toggle Structured Output to force the model to return a specific JSON schema instead of free-form text. This is useful when you need to feed parsed data directly into Conditional or Function nodes without extra parsing.

Define fields with a name, a type (string, number, boolean, array, or object), and whether they're required. Click the type badge to change a field's type, and expand objects and arrays to add nested properties; the output preview updates as you edit the schema.

When Structured Output is enabled, the LLM returns JSON matching your schema exactly, and temperature is forced to 0. For example, a trading signal schema might produce:

{
  "signal": "buy",
  "confidence": 0.85,
  "reasoning": "RSI below 30 indicates oversold"
}

You can then route the result directly into a Conditional node (e.g., check whether signal equals "buy" and confidence is greater than 0.7) without any extra parsing.
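The downstream routing can be sketched in Python. This mirrors what a Conditional node does with the structured fields; the `signal`/`confidence` names come from the trading signal schema example, and the branch names are illustrative:

```python
import json

def route_signal(llm_output: str) -> str:
    """Route a structured-output response the way a Conditional node would.
    Structured output guarantees the string parses as JSON matching the schema."""
    data = json.loads(llm_output)
    if data["signal"] == "buy" and data["confidence"] > 0.7:
        return "place_order"   # true branch: trigger the trade
    return "no_action"         # false branch
```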


Visual Analysis

Connect a Chart Image node to the LLM to enable visual chart analysis. Vision-capable models (GPT-4o, Gemini, Kimi K2.5) can analyze candlestick patterns, support/resistance levels, and trend direction directly from the chart image.

Example flow: BTC Chart (BINANCE:BTCUSDT) → Analyze Chart (GPT-4o) → Buy Signal? (signal = buy) → Place Order (Buy BTC)

The LLM automatically detects images in the interpolated data — no special configuration needed.
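One plausible way to detect images in interpolated data is to scan the resolved values for data-URI images or image URLs. The following Python heuristic is purely illustrative — the node's actual detection logic is internal and may differ:

```python
import re

def contains_image(value) -> bool:
    """Recursively check interpolated data for image content: base64
    data URIs or URLs with common image extensions. Heuristic sketch only."""
    if isinstance(value, str):
        return (value.startswith("data:image/")
                or bool(re.search(r"\.(png|jpe?g|gif|webp)(\?|$)", value, re.I)))
    if isinstance(value, dict):
        return any(contains_image(v) for v in value.values())
    if isinstance(value, list):
        return any(contains_image(v) for v in value)
    return False
```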


Parsing LLM Output Downstream

The LLM returns a text string by default. To use it in decisions:

  • Simple routing: Connect LLM → Conditional. Set Field to llm.output, Operator to "Contains", Value to BUY. The true branch triggers the trade, false branch sends a notification.

  • Structured parsing: Enable Structured Output on the LLM node itself, or connect LLM → Function node that parses the text into JSON → Conditional on the parsed fields.

  • Multi-model consensus: Run the same data through multiple LLM nodes in parallel, then merge results in a Function node to vote on the final action.
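For the structured-parsing route, a Function node could parse the ACTION / CONFIDENCE / RATIONALE format from the Market Analyst prompt above. A minimal Python sketch (the fallback defaults are a design choice, not prescribed by the platform):

```python
import re

def parse_signal(text: str) -> dict:
    """Parse the ACTION/CONFIDENCE/RATIONALE response format into a dict.
    Falls back to a safe HOLD / 0% result if the model deviates from the format."""
    action = re.search(r"ACTION:\s*(BUY|SELL|HOLD)", text)
    confidence = re.search(r"CONFIDENCE:\s*(\d+)", text)
    rationale = re.search(r"RATIONALE:\s*(.+)", text, re.DOTALL)
    return {
        "action": action.group(1) if action else "HOLD",
        "confidence": int(confidence.group(1)) if confidence else 0,
        "rationale": rationale.group(1).strip() if rationale else "",
    }
```

The Conditional node downstream can then test the parsed `action` and `confidence` fields directly instead of doing substring matching on raw text.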

Example flow: BTC Price → Claude Analysis (Claude Sonnet 4.6) → Contains BUY? (1 rule) → true: Buy BTC / false: Alert: No Signal

Pricing & Credits

When you use NickAI Credits (the default), each LLM call costs credits based on what the provider charges us. The credit cost depends on:

  • Which model you choose — larger, more capable models cost more per token than lightweight ones.
  • How many tokens the request uses — both the input (your prompts + interpolated data) and the output (the model's response) count toward the total.

Longer prompts, bigger context windows, and higher maxTokens values all increase the cost. If you want to minimize credit usage, choose a smaller model (e.g., Claude Haiku 4.5, GPT-4o Mini) and keep your prompts concise.
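The arithmetic is simple: cost scales linearly with input and output tokens at the model's per-token rates. A sketch, where the rates are hypothetical placeholders (check the pricing page for the real numbers):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate: float, output_rate: float) -> float:
    """Estimate the cost of one LLM call. Rates are in dollars per 1M tokens;
    the example rates below are illustrative, not actual pricing."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token reply at $3 / $15 per 1M tokens:
# estimate_cost(2000, 500, 3.0, 15.0) -> 0.0135
```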


Output

| Path | Description |
| --- | --- |
| {llm.output} | The model's response — plain text string, or a parsed JSON object when structured output is enabled |
| {llm.citations} | Array of web search citations (Perplexity models only) |

Next Steps