# load_chat

```python
gradio.load_chat(···)
```

### Description

Load a chat interface from an OpenAI-API-compatible chat completion endpoint.

### Example Usage

```python
import gradio as gr
demo = gr.load_chat("http://localhost:11434/v1", model="deepseek-r1")
demo.launch()
```

### Initialization

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `base_url` | `str` | required | The base URL of the OpenAI-API-compatible endpoint, e.g. "http://localhost:11434/v1/" |
| `model` | `str` | required | The name of the model to load, e.g. "llama3.2" |
| `token` | `str \| None` | `None` | The API token or a placeholder string if you are using a local model, e.g. "ollama" |
| `file_types` | `Literal['text_encoded', 'image'] \| list[Literal['text_encoded', 'image']] \| None` | `"text_encoded"` | The file types allowed to be uploaded by the user. "text_encoded" allows uploading any text-encoded file (which is simply appended to the prompt), and "image" adds image upload support. Set to None to disable file uploads. |
| `system_message` | `str \| None` | `None` | The system message to use for the conversation, if any. |
| `streaming` | `bool` | `True` | Whether the response should be streamed. |
| `kwargs` | `Any` | `` | Additional keyword arguments passed to `ChatInterface` for customization. |
### Guides

- [Creating a Chatbot Fast](https://www.gradio.app/guides/creating-a-chatbot-fast)
