About LM Studio
With LM Studio, you can run LLMs on your laptop entirely offline, chat with your local documents, and use models through the in-app Chat UI.
In this guide, we will demonstrate how you can load Selene Mini into LM Studio and start running evals locally!
1. Open LM Studio and access the model catalog by clicking the magnifying glass icon.
Search for the model by typing 'Selene' into the search bar.
2. Choose a suitable quantization format and hit download.
Hover over the icon to the right of the model name to see which quantization is right for your hardware. If the model fits entirely in your GPU's memory, inference will be significantly faster.
At the bottom of this page, you can find a helpful reference table on the different quants from @bartowski.

Select the model and click Download.
3. Once the download is complete, switch to the Chat tab in the sidebar.
In the Select a model to chat with dropdown menu, choose Atla Selene Mini.

4. Start running evals with Selene in the Chat window.
Here's an example prompt you can start with:
For different use cases, we provide the prompts we used for training here. Use these to get the best results from Selene Mini 🤗
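Beyond the Chat UI, LM Studio can also serve a loaded model over an OpenAI-compatible local API (by default at http://localhost:1234/v1), which is handy once you want to script evals instead of running them one at a time. Here is a minimal Python sketch; the model name, grading criterion, and judging prompt are illustrative assumptions, not Atla's official training prompts:

```python
# Sketch: scripting an eval against Selene Mini via LM Studio's local server.
# Assumes the server is running with the model loaded; the model name and
# judging prompt below are placeholders, not Atla's official prompts.
import json
import urllib.request


def build_eval_request(user_input: str, model_response: str,
                       criterion: str, model: str = "selene-mini") -> dict:
    """Assemble a chat-completion payload asking the judge to score a response."""
    prompt = (
        "You are an evaluator. Score the assistant response from 1 to 5 "
        f"against this criterion: {criterion}\n\n"
        f"User input: {user_input}\n"
        f"Assistant response: {model_response}\n"
        "Reply with a short critique followed by 'Score: <1-5>'."
    )
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}


def run_eval(payload: dict, base_url: str = "http://localhost:1234/v1") -> str:
    """POST the payload to LM Studio's local server and return the reply text."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


payload = build_eval_request(
    "What is the capital of France?",
    "The capital of France is Paris.",
    "Factual accuracy",
)
print(payload["messages"][0]["content"])
```

Because the endpoint follows the OpenAI chat-completions format, the same payload also works with any OpenAI-compatible client library pointed at the local base URL.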
---
Quant descriptions
