[new] add gradio and hf spaces deployments (#1042)

This commit is contained in:
Sidharth Mohanty
2023-12-22 00:00:55 +05:30
committed by GitHub
parent ec8549d0e1
commit 210fe9bb80
8 changed files with 267 additions and 4 deletions


@@ -0,0 +1,59 @@
---
title: 'Gradio.app'
description: 'Deploy your RAG application to gradio.app platform'
---
Embedchain offers a Gradio template to facilitate the development of RAG chatbot applications in just three easy steps.
Follow the instructions given below to deploy your first application quickly:
## Step-1: Create RAG app
We provide a command line utility called `ec` in embedchain that inherits the template for the `gradio.app` platform and helps you deploy the app. Follow the instructions below to create a gradio.app app from the template provided:
```bash Install embedchain
pip install embedchain
```
```bash Create application
mkdir my-rag-app
cd my-rag-app
ec create --template=gradio.app
```
This will generate a directory structure like this:
```bash
├── app.py
├── embedchain.json
└── requirements.txt
```
Feel free to edit the files as required.
- `app.py`: Contains the Gradio app code
- `embedchain.json`: Contains embedchain specific configuration for deployment (you don't need to configure this)
- `requirements.txt`: Contains python dependencies for your application
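The generated `embedchain.json` isn't shown in this diff; based on the deploy dispatch in the CLI, it presumably contains just the provider key (an assumption, sketched here for reference):

```json
{
  "provider": "gradio.app"
}
```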
## Step-2: Test app locally
You can run the app locally with:
```bash Run locally
pip install -r requirements.txt
ec dev
```
## Step-3: Deploy to gradio.app
```bash Deploy to gradio.app
ec deploy
```
This will run `gradio deploy`, which will prompt you with a few questions and deploy your app directly to huggingface spaces.
<img src="/images/gradio_app.png" alt="gradio app" />
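Under the hood (per the CLI changes later in this commit), `ec deploy` for this template is a thin wrapper around the Gradio CLI. A minimal sketch of that behavior (the `build_gradio_deploy_cmd` helper is introduced here only for illustration):

```python
import subprocess


def build_gradio_deploy_cmd():
    # `ec deploy` for the gradio.app template simply delegates to `gradio deploy`
    return ["gradio", "deploy"]


def deploy_gradio_app():
    # Runs the Gradio CLI, which prompts for Space details
    # and pushes the app to huggingface spaces
    subprocess.run(build_gradio_deploy_cmd(), check=True)
```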
## Seeking help?
If you run into issues with deployment, please feel free to reach out to us via any of the following methods:
<Snippet file="get-help.mdx" />


@@ -0,0 +1,103 @@
---
title: 'Huggingface.co'
description: 'Deploy your RAG application to huggingface.co platform'
---
With Embedchain, you can host your apps on huggingface spaces in just three steps, where you can view and share your app with the world.
We support two types of deployment to huggingface spaces:
<CardGroup cols={2}>
<Card title="" href="#using-streamlit-io">
Streamlit.io
</Card>
<Card title="" href="#using-gradio-app">
Gradio.app
</Card>
</CardGroup>
## Using streamlit.io
### Step-1: Create a new RAG app
Create a new RAG app using the following command:
```bash
mkdir my-rag-app
ec create --template=hf/streamlit.io # inside my-rag-app directory
```
When you run this for the first time, you'll be asked to log in to huggingface.co. Once you log in, you'll need to create a **write** token, which you can do from your [huggingface.co settings](https://huggingface.co/settings/token). You'll then be asked to enter the token in the terminal.
This will also create an `embedchain.json` file in your app directory. Add a `name` key to the `embedchain.json` file; this will be the repo name of your app in huggingface spaces.
```json embedchain.json
{
"name": "my-rag-app",
"provider": "hf/streamlit.io"
}
```
### Step-2: Test app locally
You can run the app locally with:
```bash Run locally
pip install -r requirements.txt
ec dev
```
### Step-3: Deploy to huggingface spaces
```bash Deploy to huggingface spaces
ec deploy
```
This will deploy your app to huggingface spaces. You can view your app at `https://huggingface.co/spaces/<your-username>/my-rag-app`; the URL is printed in the terminal once the app is deployed.
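For reference, the upload that `ec deploy` performs (per the CLI changes later in this commit) boils down to a single `huggingface-cli` invocation, with the Space repo name taken from `embedchain.json`. A sketch of that behavior (the `build_hf_upload_cmd` helper is introduced here only for illustration):

```python
import subprocess


def build_hf_upload_cmd(ec_app_name):
    # Upload the current directory to the named Space repo
    return ["huggingface-cli", "upload", ec_app_name, ".", ".", "--repo-type=space"]


def deploy_hf_spaces(ec_app_name):
    # The repo name comes from the "name" key in embedchain.json
    if not ec_app_name:
        raise ValueError("'name' not found in embedchain.json")
    subprocess.run(build_hf_upload_cmd(ec_app_name), check=True)
```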
## Using gradio.app
Similar to streamlit.io, you can deploy a Gradio app to huggingface spaces in just three steps.
### Step-1: Create a new RAG app
Create a new RAG app using the following command:
```bash
mkdir my-rag-app
ec create --template=hf/gradio.app # inside my-rag-app directory
```
When you run this for the first time, you'll be asked to log in to huggingface.co. Once you log in, you'll need to create a **write** token, which you can do from your [huggingface.co settings](https://huggingface.co/settings/token). You'll then be asked to enter the token in the terminal.
This will also create an `embedchain.json` file in your app directory. Add a `name` key to the `embedchain.json` file; this will be the repo name of your app in huggingface spaces.
```json embedchain.json
{
"name": "my-rag-app",
"provider": "hf/gradio.app"
}
```
### Step-2: Test app locally
You can run the app locally with:
```bash Run locally
pip install -r requirements.txt
ec dev
```
### Step-3: Deploy to huggingface spaces
```bash Deploy to huggingface spaces
ec deploy
```
This will deploy your app to huggingface spaces. You can view your app at `https://huggingface.co/spaces/<your-username>/my-rag-app`; the URL is printed in the terminal once the app is deployed.
## Seeking help?
If you run into issues with deployment, please feel free to reach out to us via any of the following methods:
<Snippet file="get-help.mdx" />

BIN
docs/images/gradio_app.png Normal file



@@ -91,7 +91,9 @@
"deployment/modal_com",
"deployment/render_com",
"deployment/streamlit_io",
"deployment/embedchain_ai",
"deployment/gradio_app",
"deployment/huggingface_spaces"
]
},
{


@@ -103,12 +103,39 @@ def setup_streamlit_io_app():
    console.print("Great! Now you can install the dependencies by doing `pip install -r requirements.txt`")


def setup_gradio_app():
    # nothing needs to be done here
    console.print("Great! Now you can install the dependencies by doing `pip install -r requirements.txt`")


def setup_hf_app():
    subprocess.run(["pip", "install", "huggingface_hub[cli]"], check=True)
    hf_setup_file = os.path.join(os.path.expanduser("~"), ".cache/huggingface/token")
    if os.path.exists(hf_setup_file):
        console.print(
            """✅ [bold green]HuggingFace setup already done. You can now install the dependencies by doing \n
`pip install -r requirements.txt`[/bold green]"""
        )
    else:
        console.print(
            """🚀 [cyan]Running: huggingface-cli login \n
Please provide a [bold]WRITE[/bold] token so that we can directly deploy\n
your apps from the terminal.[/cyan]
"""
        )
        subprocess.run(["huggingface-cli", "login"], check=True)
        console.print("Great! Now you can install the dependencies by doing `pip install -r requirements.txt`")
@cli.command()
@click.option("--template", default="fly.io", help="The template to use.")
@click.argument("extra_args", nargs=-1, type=click.UNPROCESSED)
def create(template, extra_args):
    anonymous_telemetry.capture(event_name="ec_create", properties={"template_used": template})
    template_dir = template
    if "/" in template_dir:
        template_dir = template.split("/")[1]
    src_path = get_pkg_path_from_name(template_dir)
    shutil.copytree(src_path, os.getcwd(), dirs_exist_ok=True)
    console.print(f"✅ [bold green]Successfully created app from template '{template}'.[/bold green]")
@@ -120,6 +147,10 @@ def create(template, extra_args):
        setup_render_com_app()
    elif template == "streamlit.io":
        setup_streamlit_io_app()
    elif template == "gradio.app":
        setup_gradio_app()
    elif template == "hf/gradio.app" or template == "hf/streamlit.io":
        setup_hf_app()
    else:
        raise ValueError(f"Unknown template '{template}'.")
@@ -187,6 +218,17 @@ def run_dev_render_com(debug, host, port):
    console.print("\n🛑 [bold yellow]FastAPI server stopped[/bold yellow]")


def run_dev_gradio():
    gradio_run_cmd = ["gradio", "app.py"]
    try:
        console.print(f"🚀 [bold cyan]Running Gradio app with command: {' '.join(gradio_run_cmd)}[/bold cyan]")
        subprocess.run(gradio_run_cmd, check=True)
    except subprocess.CalledProcessError as e:
        console.print(f"❌ [bold red]An error occurred: {e}[/bold red]")
    except KeyboardInterrupt:
        console.print("\n🛑 [bold yellow]Gradio server stopped[/bold yellow]")
@cli.command()
@click.option("--debug", is_flag=True, help="Enable or disable debug mode.")
@click.option("--host", default="127.0.0.1", help="The host address to run the FastAPI app on.")
@@ -204,8 +246,10 @@ def dev(debug, host, port):
        run_dev_modal_com()
    elif template == "render.com":
        run_dev_render_com(debug, host, port)
    elif template == "streamlit.io" or template == "hf/streamlit.io":
        run_dev_streamlit_io()
    elif template == "gradio.app" or template == "hf/gradio.app":
        run_dev_gradio()
    else:
        raise ValueError(f"Unknown template '{template}'.")
@@ -316,12 +360,43 @@ def deploy_render():
    )


def deploy_gradio_app():
    gradio_deploy_cmd = ["gradio", "deploy"]
    try:
        console.print(f"🚀 [bold cyan]Running: {' '.join(gradio_deploy_cmd)}[/bold cyan]")
        subprocess.run(gradio_deploy_cmd, check=True)
        console.print("✅ [bold green]'gradio deploy' executed successfully.[/bold green]")
    except subprocess.CalledProcessError as e:
        console.print(f"❌ [bold red]An error occurred: {e}[/bold red]")
    except FileNotFoundError:
        console.print(
            "❌ [bold red]'gradio' command not found. Please ensure Gradio CLI is installed and in your PATH.[/bold red]"
        )


def deploy_hf_spaces(ec_app_name):
    if not ec_app_name:
        console.print("❌ [bold red]'name' not found in embedchain.json[/bold red]")
        return
    hf_spaces_deploy_cmd = ["huggingface-cli", "upload", ec_app_name, ".", ".", "--repo-type=space"]
    try:
        console.print(f"🚀 [bold cyan]Running: {' '.join(hf_spaces_deploy_cmd)}[/bold cyan]")
        subprocess.run(hf_spaces_deploy_cmd, check=True)
        console.print("✅ [bold green]'huggingface-cli upload' executed successfully.[/bold green]")
    except subprocess.CalledProcessError as e:
        console.print(f"❌ [bold red]An error occurred: {e}[/bold red]")
@cli.command()
def deploy():
    # Check for platform-specific files
    template = ""
    ec_app_name = ""
    with open("embedchain.json", "r") as file:
        embedchain_config = json.load(file)
        ec_app_name = embedchain_config.get("name")
        template = embedchain_config["provider"]
    anonymous_telemetry.capture(event_name="ec_deploy", properties={"template_used": template})
@@ -333,5 +408,9 @@ def deploy():
        deploy_render()
    elif template == "streamlit.io":
        deploy_streamlit()
    elif template == "gradio.app":
        deploy_gradio_app()
    elif template.startswith("hf/"):
        deploy_hf_spaces(ec_app_name)
    else:
        console.print("❌ [bold red]No recognized deployment platform found.[/bold red]")


@@ -0,0 +1,18 @@
import os

import gradio as gr

from embedchain import Pipeline as App

os.environ["OPENAI_API_KEY"] = "sk-xxx"

app = App()


def query(message, history):
    return app.chat(message)


demo = gr.ChatInterface(query)
demo.launch()


@@ -0,0 +1,2 @@
gradio==4.11.0
embedchain


@@ -1,6 +1,6 @@
[tool.poetry]
name = "embedchain"
version = "0.1.40"
description = "Data platform for LLMs - Load, index, retrieve and sync any unstructured data"
authors = [
"Taranjeet Singh <taranjeet@embedchain.ai>",