Explore Built with Lepton
    Mixtral 8x22B

    Mixtral 8x22B is Mistral AI's latest open model. It sets a new standard for performance and efficiency within the AI community. It is a sparse Mixture-of-Experts (SMoE) model that uses only 39B active parameters out of 141B, offering unparalleled cost efficiency for its size.

    LLM · text
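    If the model is served behind an OpenAI-compatible endpoint (an assumption, not something this page states), a call could look like the sketch below; the base URL, model id, and token variable are placeholders.

    # Minimal sketch: querying a hosted Mixtral 8x22B deployment through an
    # OpenAI-compatible endpoint. base_url, model id, and the env var are
    # placeholders; substitute the values from your own deployment.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://mixtral-8x22b.lepton.run/api/v1/",  # placeholder endpoint
        api_key=os.environ["LEPTON_API_TOKEN"],               # placeholder env var
    )

    response = client.chat.completions.create(
        model="mixtral-8x22b",  # placeholder model id
        messages=[{"role": "user", "content": "Explain what a sparse Mixture-of-Experts model is."}],
        max_tokens=256,
    )
    print(response.choices[0].message.content)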
    WizardLM-2 8x22B

    The WizardLM-2 8x22B is a state-of-the-art large language model, demonstrating highly competitive performance in complex chat, multilingual, reasoning, and agent tasks.

    LLM · text
    DBRX

    DBRX is a large language model trained by Databricks and made available under an open license.

    LLM · text
    Mistral 7B

    The Mistral-7B Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters.

    LLM · text
    Toppy M 7B

    A wild 7B-parameter model that merges several models using the new task_arithmetic merge method from mergekit; a sketch of such a merge configuration follows below.

    LLM · text
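    To illustrate the kind of task_arithmetic merge mergekit performs, the sketch below builds a merge configuration and runs the mergekit-yaml CLI. The model names and weights are placeholders, not Toppy M 7B's actual recipe.

    # Hypothetical mergekit task_arithmetic configuration (placeholder models
    # and weights). Requires `pip install mergekit pyyaml`.
    import subprocess
    import yaml

    config = {
        "merge_method": "task_arithmetic",
        "base_model": "mistralai/Mistral-7B-v0.1",
        "models": [
            {"model": "teknium/OpenHermes-2.5-Mistral-7B", "parameters": {"weight": 0.5}},
            {"model": "HuggingFaceH4/zephyr-7b-beta", "parameters": {"weight": 0.3}},
        ],
        "dtype": "bfloat16",
    }

    with open("merge.yaml", "w") as f:
        yaml.safe_dump(config, f)

    # mergekit's `mergekit-yaml` entry point reads the config and writes the
    # merged weights to the output directory.
    subprocess.run(["mergekit-yaml", "merge.yaml", "./merged-model"], check=True)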
    Samba-CoE v0.3

    Samba-CoE-v0.3 is a Composition of Experts built from a subset of Samba-1, with a total of 69B parameters. It achieves SOTA performance on the OpenLLM Leaderboard tasks.

    LLM · text-to-text · prompt
    JetMoE

    JetMoE is an LLM developed by MyShell, MIT, the MIT-IBM Watson AI Lab, Princeton University, and Lepton AI.

    LLM · text
    Dolphin Mixtral 8x7b

    An uncensored, fine-tuned model based on the Mixtral mixture of experts model that excels at coding tasks.

    LLM · text
    Octopus v2

    An on-device language model for AI agents, much faster than RAG and tailored to Android APIs.

    LLM · text
    Samba-1

    A 1.3T-parameter Composition of Experts that strategically curates expert models from the open-source community to offer state-of-the-art accuracy on a diverse set of enterprise tasks and processes, running securely, privately, and 10x more efficiently than any other model of its size.

    LLM · text-to-text · prompt
    Gemma 7b

    Gemma is a family of lightweight, state-of-the-art open models from Google.

    LLM · text
    IP Adapter FaceID

    IP-Adapter-FaceID uses a face ID embedding from a face recognition model instead of a CLIP image embedding, and additionally uses LoRA to improve ID consistency. It can generate images in various styles conditioned on a face with only text prompts; a sketch of the flow follows below.

    Image · image-to-image
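    A minimal sketch of the flow described above, assuming the reference code from the IP-Adapter project repo: extract a face ID embedding with a face recognition model (insightface here), then condition generation on it. The IPAdapterFaceID wrapper is not part of diffusers, and the base model and checkpoint paths are placeholders.

    # Sketch of the IP-Adapter-FaceID flow (reference-code API assumed).
    import cv2
    import torch
    from insightface.app import FaceAnalysis
    from diffusers import AutoencoderKL, StableDiffusionPipeline
    from ip_adapter.ip_adapter_faceid import IPAdapterFaceID  # from the IP-Adapter repo

    # 1. Face ID embedding from a face recognition model (not a CLIP image embedding).
    app = FaceAnalysis(name="buffalo_l", providers=["CPUExecutionProvider"])
    app.prepare(ctx_id=0, det_size=(640, 640))
    faces = app.get(cv2.imread("face.jpg"))
    faceid_embeds = torch.from_numpy(faces[0].normed_embedding).unsqueeze(0)

    # 2. Condition a Stable Diffusion pipeline on that embedding via the adapter.
    vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)
    pipe = StableDiffusionPipeline.from_pretrained(
        "SG161222/Realistic_Vision_V4.0_noVAE", vae=vae, torch_dtype=torch.float16
    ).to("cuda")
    ip_model = IPAdapterFaceID(pipe, "ip-adapter-faceid_sd15.bin", "cuda")

    images = ip_model.generate(
        prompt="portrait of the person as an astronaut, studio lighting",
        faceid_embeds=faceid_embeds,
        num_samples=1,
        num_inference_steps=30,
    )
    images[0].save("faceid_result.png")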
    Open Dalle

    A unique fusion that showcases exceptional prompt adherence and semantic understanding.

    Image · text-to-image
    Stable Video Diffusion

    Stable Video Diffusion (SVD) Image-to-Video is a diffusion model that takes in a still image as a conditioning frame, and generates a video from it.

    Video · image-to-video
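    A minimal sketch of image-to-video generation with SVD through the diffusers library; the model id, input image, and parameters are illustrative, and this is not a Lepton-specific API.

    # Sketch: image-to-video with Stable Video Diffusion via diffusers.
    import torch
    from diffusers import StableVideoDiffusionPipeline
    from diffusers.utils import load_image, export_to_video

    pipe = StableVideoDiffusionPipeline.from_pretrained(
        "stabilityai/stable-video-diffusion-img2vid-xt",
        torch_dtype=torch.float16,
        variant="fp16",
    ).to("cuda")

    # The still image acts as the conditioning frame for the generated clip.
    image = load_image("input_frame.png").resize((1024, 576))
    frames = pipe(image, decode_chunk_size=8).frames[0]
    export_to_video(frames, "generated.mp4", fps=7)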
    OpenVoice

    A versatile instant voice cloning approach that requires only a short audio clip from the reference speaker to replicate their voice and generate speech in multiple languages.

    Audio · text-to-speech
    Mixtral 8x7b

    The Mixtral 8x7b Large Language Model (LLM) is a pretrained generative Sparse Mixture-of-Experts model.

    LLM · text
    SD-XL Inpainting

    SD-XL Inpainting is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input, with the added capability of inpainting images using a mask.

    Image · image-to-image
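    A minimal sketch of masked inpainting with an SDXL inpainting checkpoint through diffusers; the input files, prompt, and settings are illustrative.

    # Sketch: repaint the masked region of an image with an SDXL inpainting model.
    import torch
    from diffusers import AutoPipelineForInpainting
    from diffusers.utils import load_image

    pipe = AutoPipelineForInpainting.from_pretrained(
        "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",
        torch_dtype=torch.float16,
        variant="fp16",
    ).to("cuda")

    image = load_image("photo.png").resize((1024, 1024))
    mask = load_image("mask.png").resize((1024, 1024))  # white = area to repaint

    result = pipe(
        prompt="a red vintage car parked on the street",
        image=image,
        mask_image=mask,
        num_inference_steps=30,
        strength=0.99,
    ).images[0]
    result.save("inpainted.png")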
    Stable Diffusion XL

    Text-to-image diffusion model capable of generating photo-realistic images given any text input.

    Image · text-to-image
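    A minimal text-to-image sketch with the SDXL base checkpoint through diffusers; the prompt and settings are illustrative, and this is not a Lepton-specific API.

    # Sketch: text-to-image generation with Stable Diffusion XL via diffusers.
    import torch
    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        torch_dtype=torch.float16,
        variant="fp16",
        use_safetensors=True,
    ).to("cuda")

    image = pipe(
        prompt="a photo of an astronaut riding a horse on mars",
        num_inference_steps=30,
        guidance_scale=7.5,
    ).images[0]
    image.save("sdxl_output.png")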
    WhisperX

    Automatic Speech Recognition with Word-level Timestamps (& Diarization).

    Audio · speech-to-text
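    A minimal transcription-and-alignment sketch with the whisperx package; speaker diarization (which requires a pyannote access token) is omitted, and the model size and file name are illustrative.

    # Sketch: ASR with word-level timestamps using whisperx.
    import whisperx

    device = "cuda"
    model = whisperx.load_model("large-v2", device, compute_type="float16")

    audio = whisperx.load_audio("meeting.wav")
    result = model.transcribe(audio, batch_size=16)

    # Align the transcript against the audio to get word-level timestamps.
    align_model, metadata = whisperx.load_align_model(
        language_code=result["language"], device=device
    )
    result = whisperx.align(result["segments"], align_model, metadata, audio, device)

    for segment in result["segments"]:
        for word in segment["words"]:
            print(word.get("start"), word.get("end"), word["word"])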
    Llama2 13b

    Llama 2 is a collection of pretrained and fine-tuned generative text models; this is the 13B pretrained model.

    LLM · text