To install the Gradio Python Client from main, run the following command:
pip install 'gradio-client @ git+https://github.com/gradio-app/gradio@5b2e0cbdd887d71ebb52d7c7efdee4c9643ffc46#subdirectory=client/python'
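To double-check which version of the client was installed, you can print its version string. This is a quick, optional sanity check and assumes a standard Python environment on your PATH:

python -c "import gradio_client; print(gradio_client.__version__)"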
Hugging Face Spaces now offers a new hardware option called ZeroGPU. ZeroGPU is a "serverless" cluster of Spaces that lets Gradio applications run on A100 GPUs for free. These Spaces are a great foundation to build new applications on top of with the Python Gradio client, but you need to take care to avoid ZeroGPU's rate limiting.
ZeroGPU Spaces are rate-limited to ensure that a single user does not hog all of the available GPUs. The limit is controlled by a special token that the Hugging Face Hub infrastructure adds to all incoming requests to Spaces. This token is a request header called X-IP-Token, and its value changes depending on the user who makes the request to the ZeroGPU space.
Let's say you want to create a space (Space A) that uses a ZeroGPU space (Space B) programmatically. Simply calling Space B from Space A with the Python client will quickly exhaust your rate limit, because all of the requests to the ZeroGPU space will carry the same token. To avoid this, we need to extract the token of the user visiting Space A and pass it along when we call Space B programmatically. The following section explains how to do this.
When a user presses enter in the textbox, we will extract their token from the X-IP-Token header of the incoming request and use that header when constructing the Gradio client.
The following hypothetical text-to-image application shows how this is done.
import gradio as gr
from gradio_client import Client

def text_to_image(prompt, request: gr.Request):
    # Extract the per-user rate-limit token from the incoming request headers
    x_ip_token = request.headers['x-ip-token']
    # Forward the token so the ZeroGPU space attributes usage to the visiting user
    client = Client("hysts/SDXL", headers={"x-ip-token": x_ip_token})
    img = client.predict(prompt, api_name="/predict")
    return img

with gr.Blocks() as demo:
    image = gr.Image()
    prompt = gr.Textbox(max_lines=1)
    prompt.submit(text_to_image, [prompt], [image])

demo.launch()
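Note that when you run this app locally rather than on Spaces, the X-IP-Token header will not be present. A minimal variation on the handler above, written as a sketch that assumes the headers object supports dict-style .get access, only forwards the header when it exists:

def text_to_image(prompt, request: gr.Request):
    # The header is only added when the request is routed through Hugging Face Spaces,
    # so fall back to sending no extra headers during local development.
    x_ip_token = request.headers.get("x-ip-token")
    headers = {"x-ip-token": x_ip_token} if x_ip_token else None
    client = Client("hysts/SDXL", headers=headers)
    return client.predict(prompt, api_name="/predict")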