<aside>
Please review the content below carefully. If it does not resolve your issue, we encourage you to explore our Education & Tutorials section before reaching out to Support. If you need us, we’re here to assist you and will do our best to help!
</aside>
Bolt.new uses Anthropic’s Claude 3.5 Sonnet model for inference. We purchase tokens from Anthropic, which defines them as: “the smallest individual units of a language model, and can correspond to words, subwords, characters, or even bytes (in the case of Unicode).” When users interact with Bolt, tokens are consumed in three primary ways: chat messages between the user and the LLM, the LLM writing code, and the LLM reading the existing code to capture any changes made by the user.
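As a very rough illustration (this is not Bolt’s or Anthropic’s actual accounting), a common rule of thumb is that one token corresponds to roughly four characters of English text or code, which gives a sense of how quickly large files add up:

```ts
// Very rough illustration only: real tokenizers do not split by character count,
// but ~4 characters per token is a common ballpark for English text and code.
const APPROX_CHARS_PER_TOKEN = 4;

function estimateTokens(text: string): number {
  return Math.ceil(text.length / APPROX_CHARS_PER_TOKEN);
}

// A 12 KB source file is on the order of ~3,000 tokens each time the AI
// has to read it to stay in sync with your changes.
console.log(estimateTokens("x".repeat(12_000))); // ≈ 3000
```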
Our goal is for Bolt to use as few tokens as possible to accomplish each task, and the team is continually shipping product changes that improve token efficiency.
Below are a number of tips you can currently implement to maximize token efficiency:
Avoid Repeated Automated Error "Fix" Attempts
Continuously clicking the automatic "fix" button can lead to unnecessary token consumption. After each attempt, review the result and refine your next request if needed. Some programming problems are beyond what the AI can solve automatically, so if repeated fix attempts fail, it is a good idea to do some research and intervene manually.
Leverage the Rollback Functionality
Use the rollback feature to revert your project to a previous state without consuming tokens. This is essentially an undo button that can take you back to any prior state of your project, which can save time and tokens if something goes wrong. Keep in mind that there is no "redo" function, so be sure you want to revert before using this feature: the rollback is final, and all changes made after the rollback point will be permanently removed.
Crawl, Walk, Run
Make sure the basics of your app are scaffolded before describing the details of more advanced functionality. For example, get the core pages, navigation, and data flow in place before prompting for features like authentication, payments, or animations.
Use Specific and Focused Prompts
When prompting the AI, be clear and specific. See here for more information on prompting most effectively. Direct the model to focus on certain files or functions rather than the entire codebase, which can improve token usage efficiency. This approach is not a magic fix, but anecdotally we've seen evidence that it helps.
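For example (the file name here is purely illustrative), a broad prompt like "make the site look better" forces the AI to consider the whole project, whereas "update the hero section in src/components/Hero.tsx to use a two-column layout on desktop" gives it a small, well-defined target and far less code to read and rewrite.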
Reduce the Size of Your Project
As your project grows, more tokens are required to keep the AI in sync with your code. Larger projects (and longer chat conversations) demand more resources for the AI to stay aware of the context, so it's important to be mindful of how project size impacts token usage.
One way to accomplish this is to break a large app into smaller chunks and glue them back together outside of Bolt later. For example, separating the backend and frontend into separate projects is a common developer pattern, though this can be challenging for less experienced developers.
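For instance, once the backend lives in its own project, the Bolt project holding your frontend only needs to know the backend’s URL. Here is a minimal sketch, where the base URL and the /api/todos endpoint are hypothetical placeholders for whatever your separated backend exposes:

```ts
// Minimal sketch of a frontend talking to a backend that lives in a separate
// project. The base URL and endpoint below are hypothetical placeholders.
const API_BASE_URL = "https://your-backend.example.com";

export async function fetchTodos(): Promise<unknown[]> {
  const response = await fetch(`${API_BASE_URL}/api/todos`);
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json();
}
```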
ADVANCED USERS ONLY: .bolt/ignore
In every Bolt project, if you open it in StackBlitz you can edit a file called .bolt/ignore, where you can list any files or folders that should be excluded from the AI context window. For example, here is our Vite React starter’s ignore file: https://stackblitz.com/edit/vite-shadcn?file=.bolt%2Fignore. Any files listed there will be completely invisible to the AI and will free up space in the context window. You’ll need to edit the .bolt/ignore file in StackBlitz and then reopen the project in Bolt for the changes to take effect. Please note: hiding files from the AI can have unintended consequences, because the AI is no longer aware of your entire project. This approach is very powerful, but it is only recommended for advanced users who can make informed decisions about what can safely be excluded, and who can understand and resolve any issues that arise from it.
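For illustration only (the paths below are placeholders; see the starter linked above for a real example), a .bolt/ignore file lists one path or glob pattern per line:

```
docs/**
public/assets/**
src/components/ui/**
```

Static assets, generated files, and pre-built UI component libraries are typical candidates, since the AI rarely needs to modify them.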
Advanced Strategy: Reset the AI Context Window
If the AI seems stuck or unresponsive to commands, resetting the AI context window can help. To do this, open your project in StackBlitz, fork it, and then reopen the forked project in Bolt.