From 27236bd1b2360b34e23bd0256f4ccb7f3312e0b2 Mon Sep 17 00:00:00 2001
From: Taranjeet Singh
Date: Mon, 1 Jan 2024 09:52:43 -0800
Subject: [PATCH] Improve docs. (#1096)

---
 docs/components/introduction.mdx      | 12 +++
 docs/get-started/quickstart.mdx       | 130 ++++++++++++++------
 docs/logo/dark-rt.svg                 | 10 ++
 docs/logo/light-rt.svg                | 10 ++
 docs/mint.json                        | 56 ++++++-----
 docs/use-cases/chatbots.mdx           | 2 +-
 docs/use-cases/introduction.mdx       | 11 +++
 docs/use-cases/question-answering.mdx | 2 +-
 docs/use-cases/semantic-search.mdx    | 4 +
 9 files changed, 147 insertions(+), 90 deletions(-)
 create mode 100644 docs/components/introduction.mdx
 create mode 100644 docs/logo/dark-rt.svg
 create mode 100644 docs/logo/light-rt.svg
 create mode 100644 docs/use-cases/introduction.mdx

diff --git a/docs/components/introduction.mdx b/docs/components/introduction.mdx
new file mode 100644
index 00000000..a914a277
--- /dev/null
+++ b/docs/components/introduction.mdx
@@ -0,0 +1,12 @@
+---
+title: 🧩 Introduction
+---
+
+## Overview
+
+You can configure the following components:
+
+* [Data Source](/components/data-sources/overview)
+* [LLM](/components/llms)
+* [Embedding Model](/components/embedding-models)
+* [Vector Database](/components/vector-databases)
\ No newline at end of file
diff --git a/docs/get-started/quickstart.mdx b/docs/get-started/quickstart.mdx
index 9250d280..55720c02 100644
--- a/docs/get-started/quickstart.mdx
+++ b/docs/get-started/quickstart.mdx
@@ -1,68 +1,82 @@
 ---
 title: '⚡ Quickstart'
-description: '💡 Start building ChatGPT like apps in a minute on your own data'
+description: '💡 Create a RAG app on your own data in a minute'
 ---
 
-Install python package:
+## Installation
+
+First, install the Python package:
 ```bash
 pip install embedchain
 ```
 
-Creating an app involves 3 steps:
+Once you have installed the package, you can use either of the following, depending on your preference:
 
-
-  ```python
-  from embedchain import App
-  app = App()
-  ```
-
-  Embedchain provides a wide range of options to customize your app. You can customize the model, data sources, and much more.
-  Explore the custom configurations [here](https://docs.embedchain.ai/advanced/configuration).
-  ```python yaml_app.py
-  from embedchain import App
-  app = App.from_config(config_path="config.yaml")
-  ```
-  ```python json_app.py
-  from embedchain import App
-  app = App.from_config(config_path="config.json")
-  ```
-  ```python app.py
-  from embedchain import App
-  config = {} # Add your config here
-  app = App.from_config(config=config)
-  ```
-
-
-
-  ```python
-  app.add("https://en.wikipedia.org/wiki/Elon_Musk")
-  app.add("https://www.forbes.com/profile/elon-musk")
-  # app.add("path/to/file/elon_musk.pdf")
-  ```
-
-  Embedchain supports adding data from many data sources including web pages, PDFs, databases, and more.
-  Explore the list of supported [data sources](https://docs.embedchain.ai/data-sources/overview).
-
-
-
-  ```python
-  app.query("What is the net worth of Elon Musk today?")
-  # Answer: The net worth of Elon Musk today is $258.7 billion.
-  ```
-
-  Embedchain provides a wide range of features to interact with your app. You can chat with your app, ask questions, search through your data, and much more.
-  ```python
-  app.chat("How many companies does Elon Musk run? Name those")
-  # Answer: Elon Musk runs 3 companies: Tesla, SpaceX, and Neuralink.
-  app.chat("What is his net worth today?")
-  # Answer: The net worth of Elon Musk today is $258.7 billion.
-  ```
-  To learn about other features, click [here](https://docs.embedchain.ai/get-started/introduction)
-
-
+
+
+    This includes open-source LLMs like Mistral, Llama, etc.
+    These are free to use and run locally on your machine.
+
+    This includes paid LLMs like GPT-4, Claude, etc.
+    These cost money and are accessible via an API.
+
+
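The choice between the two options above is purely a matter of configuration. As an illustration (not part of this patch — `pick_route` is a hypothetical helper name), the decision that the two sections below walk through can be sketched as:

```python
import os

def pick_route():
    """Hypothetical helper: pick a quickstart route from the keys available."""
    if os.environ.get("OPENAI_API_KEY"):
        # paid route: App() uses OpenAI's LLM and embedder by default
        return "paid"
    if os.environ.get("HUGGINGFACE_ACCESS_TOKEN"):
        # open-source route: App.from_config("mistral.yaml")
        return "open-source"
    raise RuntimeError("Set OPENAI_API_KEY or HUGGINGFACE_ACCESS_TOKEN first.")
```

Either route yields the same `App` interface; only the providers behind it change.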
+
+## Open Source Models
+
+This section gives a quickstart example that uses Mistral as the open-source LLM and Sentence Transformers as the open-source embedding model. These models are free to use; here, Mistral is hosted on Hugging Face, while the embedding model runs on your local machine.
+
+We are using Mistral hosted at Hugging Face, so you will need a Hugging Face token to run this example. It's *free*, and you can create one [here](https://huggingface.co/docs/hub/security-tokens).
+
+
+```python quickstart.py
+import os
+# replace this with your HF key
+os.environ["HUGGINGFACE_ACCESS_TOKEN"] = "hf_xxxx"
+
+from embedchain import App
+app = App.from_config("mistral.yaml")
+app.add("https://www.forbes.com/profile/elon-musk")
+app.add("https://en.wikipedia.org/wiki/Elon_Musk")
+app.query("What is the net worth of Elon Musk today?")
+# Answer: The net worth of Elon Musk today is $258.7 billion.
+```
+```yaml mistral.yaml
+llm:
+  provider: huggingface
+  config:
+    model: 'mistralai/Mistral-7B-v0.1'
+embedder:
+  provider: huggingface
+  config:
+    model: 'sentence-transformers/all-mpnet-base-v2'
+```
+
+## Paid Models
+
+In this section, we will use both the LLM and the embedding model from OpenAI.
+
+```python quickstart.py
+import os
+# replace this with your OpenAI key
+os.environ["OPENAI_API_KEY"] = "sk-xxxx"
+
+from embedchain import App
+app = App()
+app.add("https://www.forbes.com/profile/elon-musk")
+app.add("https://en.wikipedia.org/wiki/Elon_Musk")
+app.query("What is the net worth of Elon Musk today?")
+# Answer: The net worth of Elon Musk today is $258.7 billion.
+``` + +# Next Steps + +Now that you have created your first app, you can follow any of the links: + +* [Introduction](/get-started/introduction) +* [Customization](/components/introduction) +* [Use cases](/use-cases/introduction) +* [Deployment](/get-started/deployment) \ No newline at end of file diff --git a/docs/logo/dark-rt.svg b/docs/logo/dark-rt.svg new file mode 100644 index 00000000..83eb7fc6 --- /dev/null +++ b/docs/logo/dark-rt.svg @@ -0,0 +1,10 @@ + + + + + + + + + + diff --git a/docs/logo/light-rt.svg b/docs/logo/light-rt.svg new file mode 100644 index 00000000..f204d17e --- /dev/null +++ b/docs/logo/light-rt.svg @@ -0,0 +1,10 @@ + + + + + + + + + + diff --git a/docs/mint.json b/docs/mint.json index ca9ce9ad..10c2c0f3 100644 --- a/docs/mint.json +++ b/docs/mint.json @@ -2,8 +2,8 @@ "$schema": "https://mintlify.com/schema.json", "name": "Embedchain", "logo": { - "dark": "/logo/dark.svg", - "light": "/logo/light.svg", + "dark": "/logo/dark-rt.svg", + "light": "/logo/light-rt.svg", "href": "https://github.com/embedchain/embedchain" }, "favicon": "/favicon.png", @@ -41,16 +41,6 @@ "name": "Talk to founders", "icon": "calendar", "url": "https://cal.com/taranjeetio/ec" - }, - { - "name": "Join our slack", - "icon": "slack", - "url": "https://join.slack.com/t/embedchain/shared_invite/zt-22uwz3c46-Zg7cIh5rOBteT_xe1jwLDw" - }, - { - "name": "Join our discord", - "icon": "discord", - "url": "https://discord.gg/CUU9FPhRNt" } ], "topbarLinks": [ @@ -61,7 +51,7 @@ ], "topbarCtaButton": { "name": "Join our slack", - "url": "https://join.slack.com/t/embedchain/shared_invite/zt-22uwz3c46-Zg7cIh5rOBteT_xe1jwLDw" + "url": "https://embedchain.ai/slack" }, "primaryTab": { "name": "Documentation" @@ -70,35 +60,23 @@ { "group": "Get Started", "pages": [ - "get-started/introduction", "get-started/quickstart", + "get-started/introduction", + "get-started/faq", { - "group": "🔗 Integrations", + "group": "🔗 Integrations", "pages": [ "integration/langsmith", 
            "integration/chainlit",
             "integration/streamlit-mistral"
           ]
-        },
-        "get-started/faq"
-      ]
-    },
-    {
-      "group": "Deployment",
-      "pages": [
-        "get-started/deployment",
-        "deployment/fly_io",
-        "deployment/modal_com",
-        "deployment/render_com",
-        "deployment/streamlit_io",
-        "deployment/gradio_app",
-        "deployment/huggingface_spaces",
-        "deployment/embedchain_ai"
+        }
       ]
     },
     {
       "group": "Use cases",
       "pages": [
+        "use-cases/introduction",
         "use-cases/chatbots",
         "use-cases/question-answering",
         "use-cases/semantic-search"
@@ -107,9 +85,11 @@
     {
       "group": "Components",
       "pages": [
+        "components/introduction",
         {
           "group": "Data sources",
           "pages": [
+            "components/data-sources/overview",
             {
               "group": "Data types",
               "pages": [
@@ -143,6 +123,19 @@
         "components/embedding-models"
       ]
     },
+    {
+      "group": "Deployment",
+      "pages": [
+        "get-started/deployment",
+        "deployment/fly_io",
+        "deployment/modal_com",
+        "deployment/render_com",
+        "deployment/streamlit_io",
+        "deployment/gradio_app",
+        "deployment/huggingface_spaces",
+        "deployment/embedchain_ai"
+      ]
+    },
     {
       "group": "Community",
       "pages": [
@@ -240,6 +233,9 @@
   "posthog": {
     "apiKey": "phc_PHQDA5KwztijnSojsxJ2c1DuJd52QCzJzT2xnSGvjN2",
     "apiHost": "https://app.embedchain.ai/ingest"
+  },
+  "ga4": {
+    "measurementId": "G-4QK7FJE6T3"
   }
 },
 "feedback": {
diff --git a/docs/use-cases/chatbots.mdx b/docs/use-cases/chatbots.mdx
index 122abdd1..76807590 100644
--- a/docs/use-cases/chatbots.mdx
+++ b/docs/use-cases/chatbots.mdx
@@ -1,5 +1,5 @@
 ---
-title: 'Chatbots'
+title: '🤖 Chatbots'
 ---
 
 Chatbots, especially those powered by Large Language Models (LLMs), have a wide range of use cases, significantly enhancing various aspects of business, education, and personal assistance.
 Here are some key applications:
diff --git a/docs/use-cases/introduction.mdx b/docs/use-cases/introduction.mdx
new file mode 100644
index 00000000..e908ba64
--- /dev/null
+++ b/docs/use-cases/introduction.mdx
@@ -0,0 +1,11 @@
+---
+title: 🧱 Introduction
+---
+
+## Overview
+
+You can use Embedchain to build the following use cases:
+
+* [Chatbots](/use-cases/chatbots)
+* [Question Answering](/use-cases/question-answering)
+* [Semantic Search](/use-cases/semantic-search)
\ No newline at end of file
diff --git a/docs/use-cases/question-answering.mdx b/docs/use-cases/question-answering.mdx
index 691907cb..f538419b 100644
--- a/docs/use-cases/question-answering.mdx
+++ b/docs/use-cases/question-answering.mdx
@@ -1,5 +1,5 @@
 ---
-title: 'Question Answering'
+title: '❓ Question Answering'
 ---
 
 Utilizing large language models (LLMs) for question answering is a transformative application, bringing significant benefits to various real-world situations. Embedchain extensively supports tasks related to question answering, including summarization, content creation, language translation, and data analysis. The versatility of question answering with LLMs enables solutions for numerous practical applications such as:
diff --git a/docs/use-cases/semantic-search.mdx b/docs/use-cases/semantic-search.mdx
index 37665445..f506e5dd 100644
--- a/docs/use-cases/semantic-search.mdx
+++ b/docs/use-cases/semantic-search.mdx
@@ -1,3 +1,7 @@
+---
+title: '🔍 Semantic Search'
+---
+
 Semantic searching, which involves understanding the intent and contextual meaning behind search queries, is yet another popular use-case of RAG. It has several popular use cases across various domains:
 
 - **Information Retrieval**: Enhances search accuracy in databases and websites
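To make "understanding contextual meaning" concrete, semantic search can be sketched in a provider-agnostic way (a toy illustration, not Embedchain's API): documents and the query are turned into vectors, and results are ranked by cosine similarity. The three-dimensional vectors here are hand-made stand-ins for what a real embedding model would produce.

```python
import math

def cosine(a, b):
    # cosine similarity: dot product divided by the product of vector norms
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# toy "embeddings" — in practice an embedding model produces these vectors
docs = {
    "Elon Musk's net worth and holdings": [0.9, 0.1, 0.0],
    "Review of the Tesla Model 3": [0.2, 0.8, 0.1],
}
query = [0.85, 0.15, 0.05]  # vector standing in for "how rich is Elon Musk?"

# rank documents by similarity to the query and keep the best match
best = max(docs, key=lambda d: cosine(query, docs[d]))
```

Unlike keyword search, nothing here matches on the literal words of the query; the ranking falls out of vector geometry alone.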