Learn how to deploy LoRA models from Hugging Face Hub to Friendli Dedicated Endpoints for efficient inference, including a quick guide for FLUX LoRA models.
You can deploy a LoRA model from the Hugging Face Hub with a deploy link of the form:

https://friendli.ai/deploy-model/{hf-model-id}

For example, to deploy the FLUX LoRA model mentioned above, use this link. This launches the deployment workflow, letting you quickly serve and experiment with the model on Friendli.
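As a quick illustration of the deploy-link pattern, the sketch below builds the URL from a Hugging Face model ID. The model ID shown is a placeholder, not the FLUX LoRA model referenced above.

```python
# Build a Friendli deploy link from a Hugging Face model ID.
# "your-org/your-flux-lora" is a placeholder; substitute the real Hub ID.
hf_model_id = "your-org/your-flux-lora"
deploy_url = f"https://friendli.ai/deploy-model/{hf_model_id}"
print(deploy_url)  # -> https://friendli.ai/deploy-model/your-org/your-flux-lora
```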
Sign up for Friendli Suite
Navigate to the Endpoint Creation Page
Select the Base Model
The predibase/tldr_content_gen model is a LoRA adapter trained on the mistralai/Mistral-7B-v0.1 base model, so select mistralai/Mistral-7B-v0.1 here.
Select the LoRA Adapter
Select the predibase/tldr_content_gen adapter.
Experiment with the Deployed Adapter Model
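Once the endpoint is live, you can send requests to the deployed adapter model. Below is a minimal sketch assuming the endpoint exposes an OpenAI-compatible completions API; the API URL, endpoint ID, and token environment variable are placeholders, so substitute the values shown on your endpoint's detail page in Friendli Suite.

```python
import os
import requests

# Minimal sketch: query the deployed adapter with an OpenAI-compatible
# completions request. The URL, endpoint ID, and env var are assumptions;
# copy the real values from your endpoint's detail page.
FRIENDLI_TOKEN = os.environ["FRIENDLI_TOKEN"]  # personal access token (assumed env var)
ENDPOINT_ID = "YOUR_ENDPOINT_ID"               # hypothetical endpoint ID
API_URL = "https://api.friendli.ai/dedicated/v1/completions"  # assumed base URL

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {FRIENDLI_TOKEN}"},
    json={
        "model": ENDPOINT_ID,
        "prompt": "The following article should be summarized:\n...",
        "max_tokens": 256,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```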