LLM Node
The LLM node calls AI language models to analyze data, generate insights, and make decisions within your workflow. It supports 9 providers and 30+ models — from fast summarizers to deep reasoning engines.
Configuration
| Field | Description |
|---|---|
| Model | Select from the dropdown, grouped by provider. Default: Claude Sonnet 4.6. |
| API Credentials | NickAI Credits (default — works with all models) or your own API key for the selected provider. |
| System Prompt | Define the AI's role and behavior. Shapes how the model responds to every request. |
| User Prompt | The main instruction for this execution. Use {{edge_label.field}} to inject data from connected nodes. |
| Temperature | Controls randomness. 0 = deterministic, 0.7 = balanced (default), 2.0 = maximum creativity. |
| Max Tokens | Maximum response length. Default: 4000. Range: 1–8192. |
| Timeout | Maximum wait time in seconds. Default: 60. Range: 1–300. |
Available Models
| Provider | Model | Best for |
|---|---|---|
| Anthropic | Claude Sonnet 4.6 (default) | Complex analysis, structured output |
| | Claude Opus 4.6 | Hardest tasks, 1M context |
| | Claude Sonnet 4.5 | Flagship general-purpose |
| | Claude Opus 4.5 | Deep complex reasoning |
| | Claude Sonnet 4 | Extended thinking / reasoning |
| | Claude Haiku 4.5 | Fast and cost-effective |
| OpenAI | GPT-5.2 | Latest flagship |
| | GPT-5 | Complex analysis |
| | GPT-5 Mini | Efficient general-purpose |
| | GPT-4o | Multimodal / chart analysis |
| | GPT-4o Mini | Fast, low-cost |
| Google | Gemini 3 Pro | Flagship multimodal |
| | Gemini 2.5 Flash | Fast multimodal / vision |
| | Gemini 2.5 Flash Lite | Ultra-fast inference |
| | Gemini 2.5 Pro | Advanced reasoning |
| xAI | Grok 4 | Flagship |
| | Grok 4 Fast | Ultra-fast |
| | Grok 3 | General-purpose |
| | Grok 3 Mini | Lightweight |
| | Grok Code Fast | Code generation |
| DeepSeek | DeepSeek Chat | Conversational |
| | DeepSeek Reasoner | Deep reasoning |
| Qwen | Qwen 2.5 72B | Large-scale analysis |
| | Qwen Coder 32B | Code generation |
| Perplexity | Sonar Pro | Research with web search |
| | Sonar Reasoning | Deep reasoning + search |
| | Sonar | Fast search with citations |
| Kimi | Kimi K2.5 | Visual coding, multimodal |
| | Kimi K2 Thinking | Long-horizon reasoning |
| | Kimi K2 | General-purpose |
| MiniMax | MiniMax M2.5 | Real-world productivity |
| | MiniMax M2.1 | Coding, agentic workflows |
| | MiniMax M2 | Compact, high-efficiency |
Model Selection Guide
Not sure which model to pick? Match the use case to the right category:
| Use case | Recommended models |
|---|---|
| Complex analysis, structured output | Claude Sonnet 4.6, Claude Opus 4.6, GPT-5 |
| Visual / chart analysis (image input) | GPT-4o, Gemini 2.5 Flash, Kimi K2.5 |
| Web search + analysis | Sonar Pro, Sonar Reasoning, Sonar |
| Fast, cost-effective | Claude Haiku 4.5, GPT-4o Mini, GPT-5 Mini, Grok 4 Fast |
| Deep reasoning | Claude Sonnet 4 (extended thinking), DeepSeek Reasoner, Kimi K2 Thinking |
| Code generation | Qwen Coder 32B, Grok Code Fast |
Prompt Interpolation
Use double curly braces to inject live data from upstream nodes into your prompts.
| Expression | What it resolves to |
|---|---|
| {{price_data.data.prices[0].current}} | Current price from a Price Data node |
| {{price_data.data.prices[0].indicators.rsi}} | RSI value |
| {{portfolio.positions}} | Full positions array from a Portfolio node |
| {{my_function.signal}} | A specific field from a Function node |
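Under the hood, each {{...}} expression resolves against the upstream node's output by walking object keys and array indices. The real interpolation engine is internal; this is just an illustrative sketch of the resolution logic, with made-up sample data:

```python
import re

def resolve(path: str, data: dict):
    """Resolve a dotted path like 'price_data.data.prices[0].current'
    against a dict of upstream node outputs (illustrative sketch)."""
    value = data
    for part in path.split("."):
        # Peel off the key, then any [index] suffixes
        for token in re.findall(r"\w+|\[\d+\]", part):
            if token.startswith("["):
                value = value[int(token[1:-1])]   # array index
            else:
                value = value[token]              # object key
    return value

def interpolate(prompt: str, data: dict) -> str:
    """Replace every {{...}} expression with its resolved value."""
    return re.sub(r"\{\{(.+?)\}\}",
                  lambda m: str(resolve(m.group(1).strip(), data)),
                  prompt)

upstream = {"price_data": {"data": {"prices": [{"current": 67250.5}]}}}
print(interpolate("Current price: {{price_data.data.prices[0].current}}", upstream))
# Current price: 67250.5
```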
Example Prompts
Market Analyst:

```
System: You are a crypto market analyst. Analyze the provided price data
and technical indicators.

Respond in this exact format:
ACTION: [BUY / SELL / HOLD]
CONFIDENCE: [0-100]%
RATIONALE: [2-3 sentence explanation]

Be conservative — only recommend BUY when multiple indicators align.

User: Analyze BTC/USD right now.
Current price: {{price_data.data.prices[0].current}}
24h change: {{price_data.data.prices[0].changePercent24h}}%
RSI: {{price_data.data.prices[0].indicators.rsi}}

Based on these indicators, what is your recommendation?
```
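A downstream Function node can parse this fixed ACTION / CONFIDENCE / RATIONALE format back into fields. A hedged sketch in Python for illustration (your Function node's language and helper names may differ):

```python
import re

def parse_signal(text: str) -> dict:
    """Parse an ACTION / CONFIDENCE / RATIONALE reply into fields."""
    action = re.search(r"ACTION:\s*(BUY|SELL|HOLD)", text)
    confidence = re.search(r"CONFIDENCE:\s*(\d+)\s*%", text)
    rationale = re.search(r"RATIONALE:\s*(.+)", text)
    return {
        "action": action.group(1) if action else None,
        "confidence": int(confidence.group(1)) if confidence else None,
        "rationale": rationale.group(1).strip() if rationale else None,
    }

reply = "ACTION: BUY\nCONFIDENCE: 72%\nRATIONALE: RSI and MACD both turned up."
print(parse_signal(reply))
# {'action': 'BUY', 'confidence': 72, 'rationale': 'RSI and MACD both turned up.'}
```

Defaulting missing fields to None lets a Conditional node treat a malformed reply as "no signal" instead of crashing the workflow.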
Risk Monitor:

```
System: You are a portfolio risk manager. Review the positions and flag
any concerns.

Respond in valid JSON only:
{"alerts": [{"symbol": "...", "issue": "...", "severity": "low|medium|high"}],
 "overallRisk": "low|medium|high",
 "summary": "..."}

User: Review my current positions:
{{portfolio.positions}}
Cash balance: {{portfolio.cashBalance}}
Net worth: {{portfolio.netWorth}}
```
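Even when asked for "valid JSON only", models sometimes wrap the object in markdown fences or add stray prose, so a Function node should extract the JSON defensively. An illustrative sketch:

```python
import json
import re

def extract_json(text: str) -> dict:
    """Pull the first JSON object out of an LLM reply, tolerating
    markdown fences or surrounding prose."""
    # Prefer a ```json ... ``` fenced block if one is present
    fenced = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
    candidate = fenced.group(1) if fenced else text
    # Otherwise fall back to the outermost braces
    start, end = candidate.find("{"), candidate.rfind("}")
    return json.loads(candidate[start:end + 1])

reply = 'Here is my assessment:\n```json\n{"overallRisk": "medium", "alerts": []}\n```'
print(extract_json(reply)["overallRisk"])
# medium
```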
Prediction Market Analyst:

```
System: You are a prediction market analyst. Given the market title,
current probability, and description, assess whether the current
probability represents good value.

Respond in this format:
POSITION: [BUY_YES / BUY_NO / SKIP]
EDGE: [your estimated probability minus market probability]%
REASONING: [1-2 sentences]

User: Evaluate this Polymarket market:
Title: {{polymarket.markets[0].title}}
Probability: {{polymarket.markets[0].markets[0].outcomePrices[0]}}
Description: {{polymarket.markets[0].description}}
```
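The EDGE this prompt asks for is just the analyst's estimate minus the market's implied probability. A tiny sketch of the arithmetic a downstream Function node might apply (the 5% threshold is an illustrative assumption, not a recommendation):

```python
def decide(estimated: float, market: float, min_edge: float = 0.05) -> dict:
    """Compare an estimated probability to the market's implied
    probability and pick a side only if the edge clears a threshold."""
    edge = estimated - market
    if edge > min_edge:
        position = "BUY_YES"   # market underprices YES
    elif edge < -min_edge:
        position = "BUY_NO"    # market overprices YES
    else:
        position = "SKIP"      # edge too small to act on
    return {"position": position, "edge_pct": round(edge * 100, 1)}

print(decide(estimated=0.62, market=0.48))
# {'position': 'BUY_YES', 'edge_pct': 14.0}
```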
Structured Output
Toggle Structured Output to force the model to return JSON matching a specific schema instead of free-form text. This is useful when you need to feed the result directly into Conditional or Function nodes without extra parsing.

Define fields with a name, type (string, number, boolean, array, or object), and whether they're required. Click the type badge to change it. Expand objects and arrays to add nested properties.

When structured output is enabled, temperature is forced to 0 and the model returns JSON matching your schema exactly. For example, a trading signal schema might produce:

```json
{
  "signal": "buy",
  "confidence": 0.85,
  "reasoning": "RSI below 30 indicates oversold"
}
```

You can then route the result directly into a Conditional node (e.g., check whether signal equals "buy" and confidence is greater than 0.7).
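With structured output on, the downstream check reduces to plain field comparisons, since the JSON needs no parsing. A sketch of the equivalent logic a Conditional node performs (field names taken from the trading signal schema; the 0.7 threshold is illustrative):

```python
def should_trade(result: dict) -> bool:
    """Conditional-node logic: act only on a confident buy signal."""
    return result.get("signal") == "buy" and result.get("confidence", 0) > 0.7

print(should_trade({"signal": "buy", "confidence": 0.85,
                    "reasoning": "RSI below 30 indicates oversold"}))
# True
```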
Visual Analysis
Connect a Chart Image node to the LLM to enable visual chart analysis. Vision-capable models (GPT-4o, Gemini, Kimi K2.5) can analyze candlestick patterns, support/resistance levels, and trend direction directly from the chart image.
The LLM automatically detects images in the interpolated data — no special configuration needed.
Parsing LLM Output Downstream
The LLM returns a text string by default. To use it in decisions:

- Simple routing: Connect LLM → Conditional. Set Field to llm.output, Operator to "Contains", Value to BUY. The true branch triggers the trade; the false branch sends a notification.
- Structured parsing: Enable Structured Output on the LLM node itself, or connect LLM → Function node that parses the text into JSON → Conditional on the parsed fields.
- Multi-model consensus: Run the same data through multiple LLM nodes in parallel, then merge the results in a Function node to vote on the final action.
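The consensus merge can be as simple as a majority vote over each model's parsed action. An illustrative sketch, assuming each upstream LLM's output has already been reduced to BUY / SELL / HOLD:

```python
from collections import Counter

def vote(actions: list[str], quorum: int = 2) -> str:
    """Majority vote across models; stay flat without a quorum."""
    winner, count = Counter(actions).most_common(1)[0]
    return winner if count >= quorum else "HOLD"

print(vote(["BUY", "BUY", "HOLD"]))   # two of three models agree
# BUY
print(vote(["BUY", "SELL", "HOLD"]))  # no quorum, default to HOLD
# HOLD
```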
Pricing & Credits
When you use NickAI Credits (the default), each LLM call costs credits based on what the provider charges us. The credit cost depends on:
- Which model you choose — larger, more capable models cost more per token than lightweight ones.
- How many tokens the request uses — both the input (your prompts + interpolated data) and the output (the model's response) count toward the total.
Longer prompts, bigger context windows, and higher Max Tokens values all increase the cost. If you want to minimize credit usage, choose a smaller model (e.g., Claude Haiku 4.5, GPT-4o Mini) and keep your prompts concise.
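As a back-of-envelope, the cost scales linearly with tokens in each direction. A sketch with purely hypothetical per-token rates (real rates vary by model and are not the numbers shown here):

```python
def estimate_credits(input_tokens: int, output_tokens: int,
                     in_rate: float, out_rate: float) -> float:
    """Rough cost: tokens in each direction times that direction's rate.
    The rates are hypothetical placeholders, not real prices."""
    return input_tokens * in_rate + output_tokens * out_rate

# Hypothetical: a large model priced at 10x the rate of a small one
large = estimate_credits(2000, 500, in_rate=0.010, out_rate=0.030)
small = estimate_credits(2000, 500, in_rate=0.001, out_rate=0.003)
print(large, small)
```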
Output
| Path | Description |
|---|---|
| {llm.output} | The model's response — plain text string or parsed JSON object (when structured output is enabled) |
| {llm.citations} | Array of web search citations (Perplexity models only) |
Replace llm with your node's edge label. If the edge connecting the LLM to the next node is labeled analysis, use {analysis.output}.

Next Steps
- Conditional Node — Route decisions based on LLM output.
- Function Node — Parse or transform LLM responses with custom code.
- Chart Image Node — Generate charts for visual LLM analysis.
- Credentials — Set up your own API keys for each provider.