[Docs] : Fix typos in docs #920

Merged · 4 commits · Oct 17, 2024
4 changes: 2 additions & 2 deletions docs/docs/gpt-researcher/context/local-docs.md
@@ -11,12 +11,12 @@ export DOC_PATH="./my-docs"
```

Step 2:
- If you're running the frontend app on localhost:8000, simply select "My Documents" from the the "Report Source" Dropdown Options.
- If you're running the frontend app on localhost:8000, simply select "My Documents" from the "Report Source" Dropdown Options.
- If you're running GPT Researcher with the [PIP package](https://docs.tavily.com/docs/gpt-researcher/gptr/pip-package), pass the `report_source` argument as "local" when you instantiate the `GPTResearcher` class [code sample here](https://docs.gptr.dev/docs/gpt-researcher/context/tailored-research).
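
For the PIP-package route, a minimal sketch of that instantiation could look like the following (the query string is hypothetical, and the constructor and `write_report` signatures should be checked against the linked code sample):

```python
# Minimal sketch, assuming the GPTResearcher constructor and report methods
# match the linked tailored-research code sample; the query is hypothetical.
import asyncio
from gpt_researcher import GPTResearcher

async def main():
    researcher = GPTResearcher(
        query="Summarize the onboarding process described in my documents",  # hypothetical query
        report_source="local",  # read context from the folder pointed at by DOC_PATH
    )
    await researcher.conduct_research()   # gathers context from the local docs
    report = await researcher.write_report()
    print(report)

asyncio.run(main())
```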

## Local Docs + Web (Hybrid)

![GPT Researcher hybrid research](./gptr-hybrid.png)

Check out the blog post on [Hybrid Research](https://docs.gptr.dev/blog/gptr-hybrid) to learn more about how to combine local documents with web research.
2 changes: 1 addition & 1 deletion docs/docs/gpt-researcher/context/tailored-research.md
@@ -89,7 +89,7 @@ You can combine the above methods to conduct hybrid research. For example, you c
Simply provide the sources and set the `report_source` argument as `"hybrid"` and watch the magic happen.

Please note! You should set the proper retrievers for the web sources and doc path for local documents for this to work.
To lean more about retrievers check out the [Retrievers](https://docs.gptr.dev/docs/gpt-researcher/search-engines/retrievers) documentation.
To learn more about retrievers check out the [Retrievers](https://docs.gptr.dev/docs/gpt-researcher/search-engines/retrievers) documentation.
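
A hedged sketch of that hybrid setup is shown below; the `RETRIEVER` value and the exact environment-variable names are assumptions, so confirm them against the Retrievers and local-docs pages:

```python
# Hedged sketch of hybrid research: a web retriever plus a local DOC_PATH.
# The "tavily" retriever name is an assumption; check the Retrievers docs.
import asyncio
import os
from gpt_researcher import GPTResearcher

os.environ["RETRIEVER"] = "tavily"    # web search retriever (assumed name)
os.environ["DOC_PATH"] = "./my-docs"  # folder containing your local documents

async def main():
    researcher = GPTResearcher(
        query="Compare our internal benchmark notes with published results",  # hypothetical query
        report_source="hybrid",  # blend local documents with web research
    )
    await researcher.conduct_research()
    print(await researcher.write_report())

asyncio.run(main())
```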


### Research on LangChain Documents 🦜️🔗
6 changes: 3 additions & 3 deletions docs/docs/gpt-researcher/context/vector-stores.md
@@ -68,7 +68,7 @@ query = """
Summarize the essay into 3 or 4 succinct sections.
Make sure to include key points regarding wealth creation.

Include some recommendations for entrepeneurs in the conclusion.
Include some recommendations for entrepreneurs in the conclusion.
"""


@@ -145,11 +145,11 @@ researcher = GPTResearcher(
vector_store=vector_store,
)

# Conduct research, the context will be chunked and store in the vector_store
# Conduct research, the context will be chunked and stored in the vector_store
await researcher.conduct_research()

# Query the 5 most relevant context chunks in our vector store
related_contexts = await vector_store.asimilarity_search("GPT-4", k = 5)
print(related_contexts)
print(len(related_contexts)) #Should be 5
```
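
The construction of `vector_store` is collapsed above. As a hedged illustration, any LangChain-compatible vector store should work; the classes and the `essay_text` variable below are assumptions, not necessarily what the full file uses:

```python
# Hedged sketch of one way to build the vector_store passed to GPTResearcher;
# the full document may use a different store or embedding class.
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

essay_text = open("essay.txt").read()  # hypothetical source document

vector_store = FAISS.from_texts(
    texts=[essay_text],
    embedding=OpenAIEmbeddings(),
)
```

Whichever store you plug in, `asimilarity_search` is the standard async LangChain query method, so the lookup shown in the excerpt works the same way.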
2 changes: 1 addition & 1 deletion docs/docs/gpt-researcher/llms/llms.md
@@ -7,7 +7,7 @@ Current supported LLMs are `openai`, `anthropic`, `azure_openai`, `cohere`, `goo
Using any model will require updating the `SMART_LLM` and `FAST_LLM` env vars. You might also need to include the LLM provider API Key.
To learn more about supported customization options see [here](/gpt-researcher/config).
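
As a hedged illustration of that configuration, the sketch below sets the env vars from Python; the `provider:model` value format and the specific model names are assumptions, so check the config docs for the exact syntax:

```python
# Hedged sketch: set the LLM env vars before GPT Researcher loads its config.
# The "provider:model" value format and model names here are assumptions.
import os

os.environ["SMART_LLM"] = "openai:gpt-4o"        # model for heavier reasoning / report writing
os.environ["FAST_LLM"] = "openai:gpt-4o-mini"    # model for fast summarization steps
os.environ["OPENAI_API_KEY"] = "<your provider API key>"  # placeholder, if the provider needs one
```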

**Please note**: GPT Researcher is optimized and heavily tested on GPT models. Some other models might run intro context limit errors, and unexpected responses.
**Please note**: GPT Researcher is optimized and heavily tested on GPT models. Some other models might run into context limit errors, and unexpected responses.
Please provide any feedback in our [Discord community](https://discord.gg/DUmbTebB) channel, so we can better improve the experience and performance.

Below you can find examples for how to configure the various supported LLMs.