Introducing Hosted
The easiest way to deploy an interface for your machine learning model.

We built Gradio to be the fastest way to share your machine learning models with anyone, anywhere. Our interfaces have been used for debugging, validation, demos, and all sorts of collaboration. With Hosted, we're taking it a step further: you can create permanent and persistent links for your interfaces through just your GitHub repo.
- Permanent: The interface will remain live until you decide to stop it. Gradio will handle the compute, and will show you analytics on views, calls, as well as logs so you can monitor any performance issues.
- Persistent: If you want to make changes, just push to your Github repo and update the interface on Gradio. The link remains the same so you don't have to bother anyone with a new URL.
Here's a (live) example of what a Hosted interface looks like:
Why Hosted?
If you've used Gradio, you've probably wanted to share a model with others. But where is your model running? If your laptop goes to sleep, your link goes down. If Colab hangs, your link goes down. You need a permanent home for your link. But deploying takes time and effort, whether it's on your own hardware or in the cloud.
That's where Hosted comes in: if your GitHub repo compiles, you can consider it deployed. We'll take care of all the routing, compute, and logs. Update your link by pushing to the repo.
We only charge $7/month, which puts us amongst the more affordable options.
Note: We're allowing everyone to deploy a free link in February using the FEBRUARY promo code.
Examples of Hosted interfaces (find more in Hub)
Nubia

A "NeUral Based Interchangeability Assessor". NUBIA gives a score on a scale of 0 to 1 reflecting how much it thinks the candidate text is interchangeable with the reference text.
BackgroundMattingV2

Background Matting V2 is a model which removes the background of a photo. It requires both a source image and a background image.
DAN

Unfolding the Alternating Optimization for Blind Super Resolution by Zhengxiong Luo et al., presented at NeurIPS 2020.
How does Hosted work?
Check out our hosted-hello-world repo for a simple example
Step 1 - Create the Launch File
The first step is to create a file that launches the Gradio GUI for your model. Give it a simple name like demo.py. This file should:
- Load the model
- Define a prediction function using the model
- Launch a Gradio interface with the prediction function and appropriate UI components
Note: We only support CPU right now. If you require GPU support, please email beta@gradio.app
Here’s an example from Noise2Same, a state-of-the-art image denoising model:
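The original snippet isn't reproduced here, but a minimal launch file following the three steps above might look like the following sketch. The "model" is a toy stand-in (it just strips asterisks from text); the actual Noise2Same file loads trained denoising weights and operates on images.

```python
# demo.py -- a minimal sketch of a launch file (toy model, not the
# actual Noise2Same code)

try:
    import gradio as gr
except ImportError:
    gr = None  # lets the prediction logic be smoke-tested without Gradio installed


def load_model():
    # 1. Load the model -- here a trivial "denoiser" that strips asterisks;
    #    a real launch file would load trained weights from the repo
    return lambda text: text.replace("*", "")


model = load_model()


def predict(text):
    # 2. Define a prediction function using the model
    return model(text)


if __name__ == "__main__" and gr is not None:
    # 3. Launch a Gradio interface with the prediction function and
    #    appropriate UI components (an image model would instead use
    #    inputs="image", outputs="image")
    gr.Interface(fn=predict, inputs="text", outputs="text").launch()
```

Running `python demo.py` starts the interface locally at a localhost URL.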
You should run this file locally to make sure the interface launches fine.
Step 2 - Publish it on Github
Once your code is working, make a public GitHub repository for your account with all of the necessary files, including the launch file that you just created.
You also need to designate all of the Python libraries your model depends on, along with their versions if necessary. To do this, create a requirements.txt file in the root of your repo. In the case of the Noise2Same repo, this is what the requirements look like:
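The original listing isn't reproduced here, but a requirements.txt for an image-denoising demo might look something like this (package names are illustrative, not the actual Noise2Same dependencies):

```
# requirements.txt (illustrative)
gradio
tensorflow
numpy
Pillow
```

Add `==` version pins to individual lines where your model requires a specific release.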
Once you have added these files to your GitHub project, push your updated repo to GitHub and make sure its visibility is set to public.
Step 3 - Create a Gradio Account
Hosting models permanently requires a Gradio account, which you can sign up for on gradio.app/hosted. If this is your first time creating an account, you will authenticate with your GitHub credentials. Make sure that you sign in with the same GitHub account that you created the repo under.
Gradio charges $7/month for Hosted. Once you provide your billing information, click the “+ Add Repo” button to add your repo.
Note: We're allowing everyone to deploy a free link in February using the FEBRUARY promo code.
Step 4 - Deploy!
When you click "+ Add Repo" you’ll be presented with a form that looks something like this. Select your repository and branch, and type the name of the Gradio app file. Once you’ve put your information in, you’ll see some messages confirming that the appropriate files have been found in the right places:

When you’re ready, click Launch, and we'll take it from there!
- Behind the scenes, Gradio clones your repository into a Python 3.7 container with about 1 GB of RAM and 5 GB of disk space (CPU only), and installs all of your dependencies.
- It then launches the Gradio app from your app file and creates a public URL where your model will be accessible.
- Finally, your latest model is added to the list of models you currently have running, and you have the option to make it visible on Hub for the wider machine learning community, as well as see analytics on views, calls, and logs.
- If there are any errors in the deployment process, you’ll receive an email explaining what went wrong — allowing you to address the issue and try again.
