> This page is part of the [Customer.io documentation](https://docs-customerio.netlify.app). For the complete index, see [llms.txt](https://docs-customerio.netlify.app/llms.txt).
> Last updated: May 12, 2026

# LLM actions: Generate data & decisions with AI

[Beta: This feature is new and we're actively working on it.](/beta-experimental-features/#beta-features)

An **LLM action** lets you prompt a Large Language Model (LLM) to generate and store data for use throughout a campaign. It’s how you use generative AI to enhance your workflows!

![An LLM action in a campaign workflow](https://docs.customer.io/images/llm-action.png)

**Not seeing this AI feature?**

Make sure “Customer.io AI” is enabled in [AI settings](https://fly.customer.io/settings/ai). Reach out to an *Account Admin* if you can’t edit the toggle.

## How it works[](#how-it-works)

LLM actions let you prompt an AI model as a part of a campaign and store the output as attributes so you can use them later in the campaign. You can personalize messages, enrich data, and create conditions to help you reach the right audience.

LLM actions automatically follow your [account-level security settings for AI](#review-your-ai-settings): your Gemini safety settings (if the action uses one of Google’s models) and your compliance prompt.

By default, LLM actions store data as **[journey attributes](/journeys/set-journey-attributes/)** (attributes stored on a journey during a campaign), which expire when people exit your campaign. If you want to use the LLM’s response outside of the campaign, you can change them to **[customer attributes](/journeys/attributes/)** (data stored on your customers’ profiles, like a person’s name, that you can use in messages or conditions across your workflows) instead.

### What data can an LLM action process?[](#llm-data)

| Data | Can an LLM action process it? | What you need to do |
| --- | --- | --- |
| Text in the prompt | Yes | Automatically processed |
| [Your account-level AI settings](https://fly.customer.io/settings/ai): compliance prompt and Gemini safety settings (when you use a Google model) | Yes | Automatically processed |
| [Your workspace’s business context](https://fly.customer.io/workspaces/last/settings/ai-business-profile) | Yes | [Reference it with liquid in the prompt](#add-your-business-context-with-liquid) |
| Customer attributes | Yes | Reference them with liquid in the prompt |
| Journey attributes set earlier in the campaign | Yes | Reference them with liquid in the prompt |
| Data that triggered the campaign | Yes | Reference it with liquid in the prompt |
| Events unrelated to the campaign trigger | No | It can only process events that triggered the campaign |
| Objects or relationships unrelated to the campaign trigger | No | It can only process objects or relationships that triggered the campaign |
| Websites, articles, or other online content | No | N/A; the LLM can’t crawl any sites |
| Media files like images and videos | No | |

Learn more about [adding and previewing liquid in your prompt](#personalize-your-prompt-with-liquid) below.

## Billing: LLM actions use AI credits[](#billing-llm-actions-use-ai-credits)

Unlike other workflow blocks, LLM actions have their own currency: **AI credits**. Each time an LLM action calls a model, it uses AI credits. This includes when a person reaches the action in a campaign and when you use Preview response to test it. The number of credits consumed depends on the model you select, the size of the prompt, and the amount of context sent with the request. See [AI credits](/accounts-and-workspaces/ai-credits/) for details on pricing and what happens when credits run out.

## Ways to use LLM actions[](#ways-to-use-llm-actions)

You can use LLM actions to generate data for use across your workflows. Here are a few use cases you could consider:

*   **Personalized product recommendations**: Pass purchase history and browsing data to suggest relevant products for each person.
*   **Follow-up on purchase based on customer sentiment**: Create message content based on a customer’s experience from purchase to delivery. If sentiment is positive, request review. If sentiment is negative, send a follow-up asking what you could do better.
*   **Classify accounts**: Classify customers based on their companies’ data.
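For instance, the sentiment-based follow-up above could start from a prompt sketch like this (the `delivery_rating` and `open_tickets` attribute names are hypothetical placeholders for your own data):

```liquid
You are a customer experience analyst. Classify this customer's
post-purchase sentiment as "positive" or "negative".

Data to use:
- Delivery rating (1-5): {{customer.delivery_rating}}
- Open support tickets: {{customer.open_tickets}}

Criteria: sentiment is "negative" if the rating is 2 or lower or any
support tickets are open; otherwise it is "positive".

Output: a single value, "positive" or "negative".
```

If sentiment comes back positive, a later branch can request a review; if it comes back negative, a branch can send a follow-up asking what you could do better.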

### Update data from the response of an LLM action[](#update-data-from-the-response-of-an-llm-action)

You can use LLM actions to analyze a customer’s behavior and generate insights that you store on attributes for use later on in your campaign.

To set or update data based on an LLM’s insights, you would follow these steps:

1.  Prompt the LLM to analyze specific customer attributes, trigger data, or data provided in the prompt.
2.  Store the output as a journey or customer attribute, depending on if you want to use the data outside of the campaign.
3.  Create subsequent conditions that target the updated attribute or reference the data in messages using liquid.

### Send a message using content from an LLM action[](#send-a-message-using-content-from-an-llm-action)

**Don’t communicate sensitive information or updates with LLM actions**

If you’re looking to automate personalized messaging at scale, you can use LLM actions to create email content unique to each person moving through your workflow. However, you’ll be sending content that hasn’t been reviewed by your team.

Remember that LLMs can make mistakes, like not quite matching your tone or incorrectly categorizing your data. Don’t communicate sensitive matters with unreviewed, LLM-generated content. **Consider using our [Agent](/ai/ai-assistant/) to generate a template instead.**

To send a message using content from an LLM action, you would follow these steps:

1.  Prompt the LLM action to create copy based on your customer’s data and your content guidelines.
2.  Store the output as a journey attribute, like `body`.
3.  Reference the journey attribute in a subsequent message block.
    *   If the attribute value doesn’t contain liquid syntax, you can reference it as: `{{journey.body}}`.
    *   If the LLM-generated content contains liquid syntax—like `{{customer.first_name}}`—use [`{% render_liquid journey.body %}`](/journeys/liquid-tag-list/?version=latest#render_liquid-latest) so the liquid within the value renders dynamically. If you use `{{journey.body}}` instead, any liquid in the value displays as static text.
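Putting these steps together, a message body using the stored copy might look like this minimal sketch (`body` is the journey attribute from step 2; `first_name` is an example customer attribute):

```liquid
Hi {{customer.first_name | default: "there"}},

{% render_liquid journey.body %}
```

The `default` filter gives the greeting a fallback when `first_name` isn’t set, and `render_liquid` ensures any liquid inside the LLM-generated copy renders dynamically.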

## Set up an LLM action[](#set-up-an-llm-action)

LLM actions are available for campaigns. In the workflow builder, scroll down to *Data*, then drag **Run LLM** onto your campaign’s canvas.

![An LLM action in a campaign workflow](https://docs.customer.io/images/llm-action.png)

1.  Click the block to open its configuration menu, and select **Edit Content** to get started.
    
    ![To the right of the LLM action is the configuration menu with options to edit content or add conditions.](https://docs.customer.io/images/llm-action-config.png)
    
    (Optional) If you only want certain people who trigger the campaign to run the LLM action, you can add **Conditions** here to filter your audience.
    
2.  Consider the type of task it should perform, then choose your **Model**. [Learn more about model types, credit usage, and costs below](#model-choose-the-right-model-for-the-task).
    
3.  Add a **Prompt** to instruct the LLM on what to do and how. The more specific you are, the better the results will be. Learn more about [creating prompts](#prompts) below or [check out our templates](#use-a-template-prompt).
    
    ![The LLM action prompt is a text area where you can add your prompt.](https://docs.customer.io/images/llm-action-prompt-may-11-2026.png)
    
4.  Click the preview button to render the liquid in your prompt with sample data. This shows the data the LLM will process. Learn more about the [data you can reference in LLM prompts](#personalize-your-prompt-with-liquid) below.
    
5.  Generate **Output Fields**—the [journey attributes](/journeys/set-journey-attributes/) you want to create to store data from the LLM response. Learn more about [setting and storing responses](#output-store-the-response-as-attributes) below.
    
6.  Click the **Response** tab to set **fallback values** for each attribute created by your output fields.
    
    If you want this data available outside the campaign, this is also where you can change a journey attribute to a **customer attribute**.
    
7.  Click **Test prompt** to see how the LLM would interpret your prompt. Note, this counts towards your AI credit usage. [Learn more about billing](#billing-llm-actions-use-ai-credits).
    

## Model: Choose the right model for the task[](#model-choose-the-right-model-for-the-task)

When you configure an LLM action, you choose which model processes your data. Different models have different strengths—and different [costs](#billing-llm-actions-use-ai-credits).

*   **Reasoning models** produce higher-quality results for complex tasks but use more credits per run.
*   **Quick models** are faster and more cost-efficient, using fewer credits per run.

Consider the complexity of your task when choosing a model. If you’re doing simple categorization or translation, a quick model may work well. For nuanced analysis or creative content, a reasoning model may produce better results.

When you choose a model, you’ll see a multiplier beside the model name. This represents the credit burn rate compared to the base model. In this example, the Anthropic model uses 10x more credits than our base model, Google’s Gemini 2.5 Flash Lite. [Learn more about credit burn rates](/accounts-and-workspaces/ai-credits/#how-llms-consume-ai-credits).

![The model dropdown shows three use cases. Quick answers is opened and shows two models where one is 10x more than the base model.](https://docs.customer.io/images/llm-action-model-cost.png)

## Prompt: Tell the LLM what to do and how[](#prompts)

When you prompt an LLM action, you should include the following so the LLM has full context on your use case:

*   Define your goal. If you don’t know exactly what you want, the LLM won’t either.
*   Be direct, concise, and specific. Provide any context that’s necessary to achieve your goal, like how and why to evaluate data.
*   Include any attributes you want the LLM to use in its response. See [Personalize your prompt with liquid](#personalize-your-prompt-with-liquid) for more info.
*   Define the structure of your output.

You can learn more about best practices for prompts from the LLM providers.

*   If you choose a Google model to process your prompt, see [Google’s Gemini documentation](https://ai.google.dev/gemini-api/docs/prompting-strategies).
*   If you choose an Anthropic model to process your prompt, see [Anthropic’s Claude documentation](https://platform.claude.com/docs/en/build-with-claude/prompt-engineering/claude-prompting-best-practices).

### Prompt example[](#prompt-example)

Below is an example of how to improve a prompt. Bottom line, you should [test your prompt](#preview-an-llm-action) to gauge whether the output is what you want. But if you’re looking to improve your output quality and make it more consistent, here’s an example that highlights best practices.

| Prompt | Quality | Why |
| --- | --- | --- |
| Account upsell: Compare customer seat utilization to their current plan. | Low | The goal is not clear; there’s only an idea around upselling. The data to use is barely defined and the desired output is absent. |
| Analyze this account’s expansion readiness. Compare their seat utilization `{{customer.seats_used}}` to their current plan `{{customer.plan_name}}`. An account may expand if seat utilization is greater than 80% and they’re not on the highest plan. | Medium | The goal is stated. Some data is identified along with some criteria for evaluation. But the desired output is still absent. |
| ![The prompt gives the LLM action a persona, followed by a goal. Then there are three separate lists showing what data to use to make a decision, how to evaluate criteria, and what the output should look like.](https://docs.customer.io/images/llm-actions-prompt-3.png) | High | The goal and the criteria for being expansion-ready are defined. The prompt includes the data to use and the desired output format. |
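In text form, a high-quality prompt following that pattern might look like the sketch below. It reuses the expansion criteria and attribute names from the medium example; adapt the persona and output fields to your own data:

```liquid
You are a customer success analyst for a B2B software company.

Goal: decide whether this account is ready for an expansion conversation.

Data to use:
- Seats used: {{customer.seats_used}}
- Current plan: {{customer.plan_name}}

Criteria: an account is expansion-ready if seat utilization is greater
than 80% and the account is not on the highest plan.

Output:
- expansion_ready: true or false
- reason: one short sentence explaining the decision
```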

### Use a template prompt[](#use-a-template-prompt)

Click **Use template** to create a prompt based on one of our templates. Each prompt demonstrates key best practices: defining a persona, setting clear guidelines, and specifying the output format.

![Use template is highlighted at the top of the LLM action prompt.](https://docs.customer.io/images/llm-action-prompt-may-11-2026-template.png)

You should adapt them to your business, data, and tone for best results with your audience.

### Review your AI settings[](#review-your-ai-settings)

In your account and workspace settings, you can add context about your company and audience to improve how AI generates responses across your workflows. These settings influence how the agent communicates with you and how AI features like segment generation and email content analysis work.

Go to [Account settings > AI](https://fly.customer.io/settings/ai) to manage compliance and safety settings across all your workspaces. These settings automatically apply to LLM actions.

*   Gemini Safety Settings—Configure safety thresholds for content created by Customer.io’s tools; these only apply to LLM actions when you use a Gemini model.
*   Compliance Prompt—Manage regulatory and policy guidelines.

Go to [Workspace settings > Business context](https://fly.customer.io/workspaces/last/settings/ai-business-profile) to add context about your business, like links and tone preferences, to improve content generated by Customer.io’s AI tools. These settings do not automatically apply to LLM actions, but you can [add in this context with liquid](#add-your-business-context-with-liquid).

## Personalize your prompt with liquid[](#personalize-your-prompt-with-liquid)

You can include specific data in your prompt so the LLM creates an output personalized to your recipient.

| Data type | Liquid keys | Can an LLM action process it? |
| --- | --- | --- |
| Your workspace’s business context | `{{ai_context.<attribute_name>}}` | Yes |
| Customer attributes | `{{customer.<attribute_name>}}` | Yes |
| Journey attributes | `{{journey.<attribute_name>}}` | Yes |
| Campaign trigger data | `{{trigger.<attribute_name>}}`, `{{trigger.<object_type_name>.<attribute_name>}}`, `{{trigger.relationship.<attribute_name>}}`, and `{{event.<attribute_name>}}` | Yes |
| Objects & relationships | Any keys that start with `{{objects...` | No |
| Events | None | No |

Any trigger data available through liquid is accessible to LLM actions; the LLM action can use events, objects, webhooks, and other data that trigger campaigns to generate responses. However, LLM actions cannot access events, objects, or relationships that did not trigger the campaign.

For instance, you could ask an LLM action to generate a message based on event data from the trigger, but you shouldn’t prompt the LLM action to analyze all event data for a person and save its findings to the customer’s profile; the response wouldn’t reflect the full breadth of a person’s activity across your platform.
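For example, for a campaign triggered by a hypothetical `order_delivered` event, the prompt could reference the event’s properties directly (`product_name` and `delivered_at` are example event properties):

```liquid
Write a short, friendly delivery follow-up for this order.
Product: {{event.product_name}}
Delivered on: {{event.delivered_at}}
```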

### Add your business context with liquid[](#add-your-business-context-with-liquid)

If you want LLM actions to take into account your [business context](https://fly.customer.io/workspaces/last/settings/ai-business-profile), you have to explicitly add it to your prompt with liquid. Keep in mind, this takes up extra tokens, so make sure you test it and review the [cost implications](#billing-llm-actions-use-ai-credits).

You’ll use the liquid object `ai_context` with any of the attributes below. For instance, if the output should follow your audience guidelines, add `{{ai_context.audience}}` to the prompt. Click the preview button to see the data the LLM would process, in this case the Audience prompt from workspace settings.

| Prompt for Run LLM action | Preview |
| --- | --- |
| Generate a message following our audience guidelines: `{{ai_context.audience}}`. | Generate a message following our audience guidelines: marketing, product, engineering, and sales teams looking to improve customer engagement, activate users, drive cross-sells/upsells, enhance onboarding, and improve retention through personalized, data-driven communication across multiple channels. |

Some settings, like tone, have nested data that an LLM more easily parses if you explain what each field means. While you can reference `{{ai_context.tone}}`, you’ll get better results if you create guidelines for the different tones you use with your audience:

```liquid
Tone guidelines:
- Formality: {{ ai_context.tone.formality.description }}
- Humor: {{ ai_context.tone.humor.description }}
- Respect: {{ ai_context.tone.respect.description }}
- Energy: {{ ai_context.tone.energy.description }}
```

If you include the entire object `{{ ai_context }}` in your prompt, make sure you test it and [check the cost implications](#preview-your-llm-action-response-preview-an-llm-action). Compare that against the cost of specifying only the attributes you need. The clearer, more concise, and more directed your prompt, the more efficiently an LLM will process it. You may find you don’t need all of your business context to get the results you want.

#### Liquid keys: Basic info & Key links[](#liquid-keys-basic-info--key-links)

The table below includes data available through the `ai_context` object that you’ll find under *Basic info* and *Key links* in your Business context.

| Liquid key | Value type | Description |
| --- | --- | --- |
| `ai_context.audience` | string | Target audience |
| `ai_context.version` | int | Context version |
| `ai_context.workspace_id` | int | Workspace ID |
| `ai_context.account_id` | int | Account ID |
| `ai_context.domain` | string | Sending domain |
| `ai_context.created_at` | timestamp | When context was created |
| `ai_context.updated_at` | timestamp | When context was last updated |
| `ai_context.name` | string | Company name |
| `ai_context.long_description` | string | Long description of the business |
| `ai_context.industry` | string | Industry |
| `ai_context.website_url` | string | Website URL |
| `ai_context.privacy_policy_url` | string | Privacy policy URL |
| `ai_context.terms_of_service_url` | string | Terms of service URL |
| `ai_context.pricing_url` | string | Pricing page URL |
| `ai_context.download_url` | string | Download page URL |

#### Liquid keys: Tone & Voice[](#liquid-keys-tone--voice)

The table below includes data available through the `ai_context` object that you’ll find under *Tone & Voice* in your Business context.

| Liquid key | Value type | Description |
| --- | --- | --- |
| `ai_context.tone.formality.description` | string | System-generated description |
| `ai_context.tone.humor.description` | string | System-generated description |
| `ai_context.tone.respect.description` | string | System-generated description |
| `ai_context.tone.energy.description` | string | System-generated description |
| `ai_context.tone_examples` | string[] | Example text snippets showing brand tone |

You can control the descriptions by changing the sliders under *Tone & Voice* in Business context.

#### Liquid keys: Platform availability[](#liquid-keys-platform-availability)

The table below includes data available through the `ai_context` object that you’ll find under *Platform availability* in your Business context.

| Liquid key | Value type | Description |
| --- | --- | --- |
| `ai_context.platforms.ios.available` | bool | iOS app available |
| `ai_context.platforms.ios.link` | string | iOS app link |
| `ai_context.platforms.android.available` | bool | Android app available |
| `ai_context.platforms.android.link` | string | Android app link |
| `ai_context.platforms.mac.available` | bool | Mac app available |
| `ai_context.platforms.mac.link` | string | Mac app link |
| `ai_context.platforms.windows.available` | bool | Windows app available |
| `ai_context.platforms.windows.link` | string | Windows app link |
| `ai_context.platforms.web.available` | bool | Web app available |
| `ai_context.platforms.web.link` | string | Web app link |
| `ai_context.platforms.browserExtension.available` | bool | Browser extension available |
| `ai_context.platforms.browserExtension.link` | string | Browser extension link |
| `ai_context.platforms.api.available` | bool | API available |
| `ai_context.platforms.api.link` | string | API link |

### Preview liquid in your prompt[](#preview-liquid-in-your-prompt)

If there’s an error with the liquid, like a customer not having the variable set on their profile, the LLM action will fail to run and the person will move on to the next action in the workflow. [To prevent an LLM action from failing due to liquid errors, set fallbacks in the prompt](/journeys/using-liquid/#fallback-for-latest-liquid).
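For example, liquid’s `default` filter supplies an inline fallback when an attribute is missing, so the prompt still renders (assuming a `plan_name` customer attribute):

```liquid
Evaluate this customer's plan: {{customer.plan_name | default: "free"}}
```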

If your prompt includes liquid, make sure you click the preview button to preview your prompt with sample data.

![The preview button is highlighted at the top of the LLM action prompt.](https://docs.customer.io/images/llm-action-prompt-may-11-preview.png)

Check how the prompt renders with customers that do and don’t have the liquid variables to confirm your fallback values render as expected.

Previewing a prompt only renders the liquid; it does not send the prompt through an LLM, so this preview does not spend your AI credits. [Learn more about testing your prompt and billing implications](#preview-an-llm-action).

## Output: Store the response as attributes[](#output-store-the-response-as-attributes)

After you add your prompt, you’ll generate the output—how the LLM will store its response. By default, the LLM action stores data as [journey attributes](/journeys/set-journey-attributes/), which you can use throughout a person’s journey in the campaign, but not once they exit. If you want to use this data outside the campaign, [change them to customer attributes in the Response tab](#move-to-customer).

You can use these attributes in a variety of ways in subsequent actions:

*   Personalize messages with liquid
*   Create branches in your workflow based on the attribute output from the model
*   Build conditions to filter people out of certain actions or messages
*   Use them as inputs for other LLM actions downstream

### Create outputs manually[](#create-outputs-manually)

1.  On the Content tab, click **Add field** under Output Fields.
    
    ![A filled in output field with a name, type and description. The checkbox for required is checked.](https://docs.customer.io/images/llm-action-add-field.png)
    
2.  Add a **Name**. This becomes the key used to reference the output through liquid syntax.
3.  Select a [**Type** of value you want to store](#types-of-values).
4.  Enter a **Description** so you know how to use the output. This is especially helpful if you’re setting customer attributes. This description will appear in your Data Index and help you audit your data in the future.
5.  Select whether the LLM action is required to generate the output.
6.  Click **Save**.

By default, output fields are journey attributes, which expire once a person exits the campaign. If you want to use these attributes outside the campaign, you can [change them to customer attributes in the Response tab](#move-to-customer).

### Generate outputs from your prompt[](#generate-outputs-from-your-prompt)

1.  On the Content tab, click **Generate from prompt** under Output Fields.
2.  Click **Replace** to view the latest output fields.
3.  Review the output: click each field to view the returned name, value type, and description. Modify them as you see fit.
    
    ![A filled in output field with a name, type and description. The checkbox for required is checked.](https://docs.customer.io/images/llm-action-add-field.png)
    
    *   **Name**: The key used to reference the output through liquid syntax.
    *   **Type**: The [type of value you want to store](#types-of-values).
    *   **Description**: A description of the output. This is especially helpful if you’re setting customer attributes. This description will appear in your Data Index and help you audit your data in the future.
4.  Save your changes.

You can also [add fields manually](#create-outputs-manually) alongside generated outputs or [delete items](#delete-output-fields) you don’t want to store.

By default, output fields are journey attributes, which expire once a person exits the campaign. If you want to use these attributes outside the campaign, you can [change them to customer attributes in the Response tab](#move-to-customer).

### Types of values[](#types-of-values)

Each output field has a type of value that defines what the LLM action should store in your attributes.

| Type | Description | Example |
| --- | --- | --- |
| Text | A text string value | “Mark your calendars: the summer solstice is coming!” |
| Number | A number that can include decimals | `3.14` |
| Integer | A whole number (no decimals) | `42` |
| Boolean | A true/false value | `true` |
| Date | A date string (ISO 8601 format) | “2026-03-31” |
| Date and Time | A timestamp string (ISO 8601 format) | “2026-03-31T14:30:00Z” |
| Time | A time string | “14:30:00” |
| List | An array of generated text values | `["Subject line 1", "Subject line 2", "Subject line 3"]` |
| Single Select | One value picked from predefined options | “positive” (from options like `["positive", "negative", "neutral"]`) |
| Multi Select | Multiple values picked from predefined options | `["positive", "neutral"]` (from options like `["positive", "negative", "neutral"]`) |

### Delete output fields[](#delete-output-fields)

To remove output fields stored from an LLM action response, go to the Content tab and click the delete icon beside the field you want to delete. The Response tab will update to reflect the changes.

### Change from journey to customer attributes[](#move-to-customer)

By default, the output fields generated in the Content tab are journey attributes, but you can change that in the Response tab. If you want to take action on the data outside the campaign, then you’ll want to change them to **customer attributes**.

Click the menu beside an attribute to switch types.

![Response tab. Under journey attributes, there's a field expansion_score. To the right, a menu is selected showing Move to customer attribute.](https://docs.customer.io/images/llm-action-switch-type-2.png)

You can’t set or modify events, objects, or relationships with LLM actions. However, you can use a [*Send event* action](/journeys/event-action/) to store events based on customer or journey attributes set by an LLM action.

### Respond to failed LLM actions[](#respond-to-failed-llm-actions)

An LLM action can fail for reasons including:

*   Your account [runs out of AI credits](/accounts-and-workspaces/ai-credits/#purchase-additional-credits)
*   The model returns an error
*   The action times out

**If an LLM action fails, your campaign will retry the action twice.** If the action fails after three attempts, the journey will continue without the attribute updates, which could impact subsequent workflow actions that rely on them.

You can set **fallback values** so any condition or content that references the attributes continues to be evaluated in a way that’s best for your customers. **By default, output attributes do not have fallback values, but you can set them in the Response tab.**

![Response tab. To the right of each attribute name is a field labeled Fallback value.](https://docs.customer.io/images/llm-action-fallback.png)

Consider what’s best for your use case. How should people move through your campaign if the Run LLM action fails?

*   If the LLM action generates email copy, it might make sense to store fallback content so your customers still get the core of your message in a subsequent action, just with less personalization. Otherwise, the email would fail to send altogether, and they’d move on to the next action.
*   If the LLM action is meant to determine whether your customer is likely to upgrade their plan, you might leave the fallback blank so you know the action didn’t update the attribute, and send people down a different path in the workflow when the attribute does not exist.

If a customer or journey attribute is already set and the LLM action should update it, the attribute only updates if the LLM action succeeds or has a fallback value. If the LLM action fails and has no fallbacks set, the attributes remain unchanged; they won’t be cleared or unset.

## Test your prompt[](#preview-an-llm-action)

[After you preview your prompt and confirm there are no liquid errors](#preview-liquid-in-your-prompt), test your prompt to see how the selected LLM interprets it and how many AI credits it uses.

**Testing your prompt costs AI credits**

Each time you test your prompt, it spends your AI credits. [Learn more about AI credit usage and billing](#billing-llm-actions-use-ai-credits).

1.  Select a person from the Sample Data panel that would cause the LLM action to run.
    
2.  Click **Test prompt**. On smaller screens, click the *Preview* tab first. [Remember, each run uses AI credits.](#billing-llm-actions-use-ai-credits)
    
    ![A pop-up modal shows a response from the LLM action including the model used, credits used, and attributes that would update.](https://docs.customer.io/images/llm-action-test-prompt.png)
    
3.  Review the model’s output to verify it meets your expectations.
    
    [Check your credit usage](#billing-llm-actions-use-ai-credits); does your account have enough credits to run the action considering the anticipated size of your audience?
    
If a value is cut off, hover over it to view the full output.
    
4.  Adjust your prompt or model selection if needed and preview the response again.
    

**Test LLM actions with multiple people**

Try testing with several people to make sure your prompts handle a variety of inputs. Check edge cases like missing attributes or unusual values to make sure the LLM returns something useful and uses any fallbacks specified in your liquid.