Meta developed and released the Meta Llama 3 family of large language models (LLMs). The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks.
Mixtral 8x22B is our latest open model. It sets a new standard for performance and efficiency within the AI community. It is a sparse Mixture-of-Experts (SMoE) model that uses only 39B active parameters out of 141B, offering unparalleled cost efficiency for its size.
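The "39B active out of 141B" figure comes from sparse routing: a gating network scores all experts but runs only the top-k per token. A minimal, self-contained sketch of top-2 routing (expert count, dimensions, and the linear gate here are invented for illustration, not Mixtral's actual architecture):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(x, gate_weights, experts, k=2):
    """Route input x to the top-k experts by gate score.

    Only k experts execute per token, so compute scales with the
    k active experts rather than the full expert count.
    """
    # Gate: one score per expert (dot product with a learned row).
    scores = [sum(w * xi for w, xi in zip(row, x)) for row in gate_weights]
    top_k = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    # Renormalize over the selected experts only.
    probs = softmax([scores[i] for i in top_k])
    out = [0.0] * len(x)
    for p, idx in zip(probs, top_k):
        y = experts[idx](x)  # only the selected experts run
        out = [o + p * yi for o, yi in zip(out, y)]
    return out, top_k
```

With 8 experts and k=2, each token pays for 2 expert forward passes, which is the mechanism behind an SMoE's active-parameter count being far below its total.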
The WizardLM-2 8x22B is a state-of-the-art large language model, demonstrating highly competitive performance in complex chat, multilingual, reasoning, and agent tasks.
The Mistral-7B Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters.
A wild 7B-parameter model created by merging several models with mergekit's new task_arithmetic merge method.
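Task-arithmetic merging treats each fine-tuned model as a "task vector" (its parameter delta from a shared base) and adds a weighted sum of those deltas back onto the base. A minimal sketch over flat parameter lists (the weights and toy values below are invented; mergekit applies this tensor by tensor to real checkpoints):

```python
def task_arithmetic_merge(base, finetuned_models, weights):
    """Merge fine-tuned models via task vectors.

    task vector:  t_i   = finetuned_i - base
    merged model: base + sum_i weight_i * t_i
    """
    merged = list(base)
    for model, w in zip(finetuned_models, weights):
        for j, (p, b) in enumerate(zip(model, base)):
            merged[j] += w * (p - b)  # add this model's weighted delta
    return merged
```

Because each model contributes only its delta, merging models that changed disjoint parameters largely preserves both models' behavior.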
Samba-CoE-v0.3 is a Composition of Experts built from a subset of Samba-1, with a total of 69B parameters. It achieves SOTA performance on tasks from the OpenLLM Leaderboard.
JetMoE is an LLM developed by MyShell, MIT, the MIT-IBM Watson Lab, Princeton University, and Lepton AI.
An uncensored, fine-tuned model based on the Mixtral mixture of experts model that excels at coding tasks.
An on-device language model for AI agents that is much faster than RAG and tailored for Android APIs.
A 1.3T-parameter Composition of Experts that strategically curates expert models from the open-source community to offer state-of-the-art accuracy on a diverse set of enterprise tasks and processes, running securely, privately, and 10x more efficiently than any other model of its size.
IP-Adapter-FaceID uses a face ID embedding from a face recognition model instead of a CLIP image embedding, and additionally uses LoRA to improve ID consistency. It can generate images in various styles conditioned on a face, with only text prompts.
Stable Video Diffusion (SVD) Image-to-Video is a diffusion model that takes in a still image as a conditioning frame, and generates a video from it.
A versatile instant voice cloning approach that requires only a short audio clip from the reference speaker to replicate their voice and generate speech in multiple languages.
The Mixtral 8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture-of-Experts model.
SD-XL Inpainting is a latent text-to-image diffusion model capable of generating photo-realistic images from any text input, with the added capability of inpainting images using a mask.
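The mask conditioning works by regenerating only the masked region while preserving the rest of the image, re-blending the two at each step. A toy sketch of the blend itself, over flat pixel lists (plain pixel arithmetic for illustration; the real pipeline performs this in latent space inside the denoising loop):

```python
def blend_inpaint(original, generated, mask):
    """Keep original pixels where mask == 0; take generated pixels where mask == 1.

    Fractional mask values feather the seam between the two regions.
    """
    return [g * m + o * (1.0 - m) for o, g, m in zip(original, generated, mask)]
```

A soft-edged mask (values between 0 and 1 near the boundary) is what lets the inpainted region blend seamlessly into the untouched pixels.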