If you’re new to AI, this guide will help you understand what powers Bolt and how to get the most out of it. We’ll cover what LLMs are, how they work in Bolt, and some tips to save costs and improve results.

What is an LLM?

LLM stands for Large Language Model, and you can think of it as predictive text on steroids. Instead of just guessing the next word like your phone does, it can generate entire answers, explanations, and even working code. This is why it is often called generative AI: it does not simply repeat what it has seen, it creates something new each time based on what you ask. In Bolt, this generative power is what turns your requests into code, designs, or solutions behind the scenes. Bolt is powered by Anthropic’s Claude Agent and Claude Sonnet LLMs.
See Agents to learn more about switching between Claude Agent and v1 Agent (legacy) in Bolt.

How Bolt Uses LLMs

When you give Bolt a request, it sends that request to a powerful underlying LLM, the brain behind the scenes. The LLM breaks your words into tokens, analyzes what you are asking, and predicts the best possible code or answer based on everything it learned during training. This is what makes the result feel so natural and useful.

For example, if you prompt Bolt to “Create a star icon next to names so users can save favorites,” Bolt does more than paste in an existing solution: it generates brand-new code that solves your specific problem. If you add extra details, such as how you want the function to handle errors, the LLM takes those into account and produces a solution even closer to what you need. This process is what lets you go from a plain-English request (a prompt) to working, customized code in seconds.
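To make the star-icon example concrete, here is a minimal sketch of the kind of code such a prompt might produce. This is illustrative only: the function names (`toggleFavorite`, `starIcon`) are made up for this guide, not Bolt’s actual output, and a real answer would likely be a UI component wired into your project.

```typescript
// Illustrative sketch only — not Bolt's actual generated code.

/** Toggle a user name in and out of the favorites set. */
function toggleFavorite(favorites: Set<string>, name: string): Set<string> {
  const next = new Set(favorites);
  if (next.has(name)) {
    next.delete(name); // un-star
  } else {
    next.add(name); // star
  }
  return next;
}

/** Pick the star glyph to render next to a name. */
function starIcon(favorites: Set<string>, name: string): string {
  return favorites.has(name) ? "★" : "☆";
}
```

Notice that adding detail to the prompt (for example, “store favorites per user” or “persist them to local storage”) would change what the model generates, which is exactly why specific prompts produce better results.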
Your data is safe: Bolt never uses your project data to train Claude or any other model.

Prompts

A prompt is simply the message you send to Bolt, usually a question or an instruction. Think of it as telling the AI what you want it to do. This could be something simple like “Change my website color scheme to a dark theme” or something more complex, like a detailed request to build a project management application.

Learning to write good prompts is one of the most important skills for getting great results. This is not just a Bolt-specific trick; it is quickly becoming part of a new wave of tech skills known as prompting or prompt engineering. The clearer and more specific your prompt, the better Bolt can understand your intent and give you a high-quality answer. Strong prompting skills can save you time, reduce costs, and improve the accuracy of the code or content you get back. If you are new to this, start by writing short, direct prompts, then experiment with adding more context or examples to fine-tune the results.
For tips on writing better prompts, check out our Prompt effectively guide.

Context

Context is what the AI can “see” when it responds. This includes:
  • Your current prompt
  • Your chat history with Bolt
  • The code in your project
Bolt uses a large context window, which means it can keep track of a lot at once. But it can’t remember everything forever.
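Conceptually, you can picture the context for each request as one bundle of everything listed above. This is a rough mental model only: the field names below are invented for illustration, and the real payload Bolt sends is an internal detail.

```typescript
// Illustrative only — these field names are made up, not Bolt's real API.
interface RequestContext {
  prompt: string;                       // your current message
  chatHistory: string[];                // earlier messages in the conversation
  projectFiles: Record<string, string>; // file path -> source code
}

// Everything in the bundle counts toward the model's context window.
// Characters are used here as a rough stand-in for tokens.
function contextSize(ctx: RequestContext): number {
  const files = Object.values(ctx.projectFiles).join("");
  return ctx.prompt.length + ctx.chatHistory.join("").length + files.length;
}
```

The takeaway: long chat histories and large projects all consume context, which is why starting a fresh chat or trimming what you ask about can help on big projects.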

Tokens

LLMs process text as tokens, which are small pieces of words. For example, “I love cats!” becomes four tokens: I, love, cats, and !. Every message uses tokens, and there is a limit to how many can be processed at one time. Because token usage affects costs, keeping your prompts focused can save money. If you’d like to learn more about tokens in general, check out this Nvidia article on tokens.
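A toy tokenizer makes the idea visible. Real LLM tokenizers use byte-pair encoding and split text differently (a long word may become several tokens), but the principle is the same: text becomes a sequence of small pieces, and longer prompts mean more pieces.

```typescript
// Toy tokenizer for illustration only — real tokenizers (BPE) differ.
// Splits on word characters, treating each punctuation mark as its own token.
function toyTokenize(text: string): string[] {
  return text.match(/\w+|[^\w\s]/g) ?? [];
}

toyTokenize("I love cats!"); // → ["I", "love", "cats", "!"] — four tokens
```

Counting tokens this way also shows why terse, focused prompts cost less than rambling ones: every extra word adds tokens to both your request and the model’s reply.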

Accuracy and Limitations

LLMs can produce inaccurate or outdated outputs. There are a couple of reasons for this:
  • Training set age: LLMs are trained on massive amounts of data, but they can’t know about anything that happened after that data was collected. When building software, be aware that the LLM may not know the latest versions of the tools and frameworks you’re using.
  • Hallucination: LLMs are probabilistic, not deterministic. They can produce different results from the same prompt, and sometimes they generate plausible-sounding but false information.
It’s important to keep this in mind when building, and always test your applications carefully.

Next Steps