Model APIs

Built on top of the Lepton platform, we provide a variety of model API services for popular open source models. You can experiment with the models directly on our Built With Lepton, or use the APIs to integrate these models into your own application.

Sample Usage of Mistral-7B with the Model API

1 Install dependencies for the LLM Model API

Our LLM model APIs are fully compatible with OpenAI's API spec, so you can use the OpenAI Python SDK to call them. To begin with, let's install the OpenAI Python SDK.

pip install -U openai

2 Import dependencies and set up the environment variables

Pointing the client at the endpoint hosted by Lepton and supplying your API token completes the setup.

import os
import openai

# Redirect requests to the Mistral-7B endpoint hosted by Lepton,
# authenticating with your Lepton API token.
client = openai.OpenAI(
    base_url="https://mistral-7b.lepton.run/api/v1/",
    api_key=os.environ.get("LEPTON_API_TOKEN"),
)

3 Make chat completion requests

Now let's make a completion request to the model and see the response.

completion = client.chat.completions.create(
    model="mistral-7b",
    messages=[
        {"role": "user", "content": "say hello"},
    ],
    max_tokens=128,
    stream=True,  # stream the reply token by token
)

for chunk in completion:
    # Skip keep-alive chunks that carry no choices.
    if not chunk.choices:
        continue
    content = chunk.choices[0].delta.content
    if content:
        print(content, end="")

This is a simple example of making a completion request to the model. As mentioned above, other state-of-the-art models are available as well. You may check Model APIs for more details or experiment with them on our Built With Lepton.

Usage and billing

Model API usage is shown under Dashboard - Settings - Billing. Usage is billed by the number of tokens processed.

For the pricing of each model, please refer to Pricing Page.
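Since billing is per token, you can make a back-of-the-envelope cost estimate from a request's token counts. The sketch below uses hypothetical placeholder prices, not Lepton's actual rates; always check the Pricing Page for the real numbers.

```python
# Rough cost estimate from token counts. The prices below are
# hypothetical placeholders, NOT actual rates -- see the Pricing Page.
PRICE_PER_M_INPUT_TOKENS = 0.10   # USD per 1M prompt tokens (placeholder)
PRICE_PER_M_OUTPUT_TOKENS = 0.10  # USD per 1M completion tokens (placeholder)

def estimate_cost_usd(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the charge for one request, in USD."""
    return (prompt_tokens * PRICE_PER_M_INPUT_TOKENS
            + completion_tokens * PRICE_PER_M_OUTPUT_TOKENS) / 1_000_000

# A request with 1,200 prompt tokens and 300 completion tokens:
print(f"${estimate_cost_usd(1200, 300):.6f}")  # prints $0.000150
```

In a real integration, the token counts come from the response itself: for non-streaming requests, the OpenAI SDK exposes them as `completion.usage.prompt_tokens` and `completion.usage.completion_tokens`.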

Rate Limit

The rate limit for the Model APIs is 10 requests per minute across all models under the Basic Plan. If you need a higher rate limit, please add a payment method under Settings and upgrade to the Standard Plan. If you are looking for a tailored model API service or have any other questions, please contact us.
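If a request exceeds the limit, the API returns a rate-limit error; a common client-side pattern is to retry with exponential backoff. The helper below is a generic sketch (the function name and defaults are our own, not part of the OpenAI SDK):

```python
import time

def with_backoff(fn, max_retries=5, base_delay=1.0, retry_on=(Exception,)):
    """Call fn(); on a listed exception, wait and retry with exponential backoff.

    With the OpenAI SDK you would typically pass
    retry_on=(openai.RateLimitError,) so only rate-limit errors are retried.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except retry_on:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error to the caller
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...

# Example: wrap the chat completion call made earlier.
# reply = with_backoff(lambda: client.chat.completions.create(...))
```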