Add evaluate RAG with LlamaIndex #253
base: main
Conversation
@@ -0,0 +1,550 @@
{ |
High level comment: Cool notebook! I like the eval part especially. It'd be awesome to deep dive on that (as another task, another day)
@@ -0,0 +1,550 @@
{ |
Is it worth adding a link or two here that can point users off to relevant pre-reading? E.g. make `LlamaIndex` link to their site, and make `RAG` link to something with background info on RAG?
@@ -0,0 +1,550 @@
{ |
Clear the outputs for the `pip install` cell, and for `llama-index` specifically, can you pin the version? It's <1.0, so they can and have made breaking API changes; pinning would help keep the guide working.
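For reference, a pin could be a requirements-style specifier in the install cell; the version number below is purely illustrative, not a tested recommendation:

```
llama-index==0.9.48
```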
@@ -0,0 +1,550 @@
{ |
Line #6. `"Response": [response.response],`

According to the type annotations, `response` is a string, so it has no `.response` property. Ditto for `response.source_nodes`.
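A minimal sketch of the suggested fix (the string value is a made-up stand-in, and `data` is a hypothetical name for the dict being built on line #6):

```python
# Per the type annotations, `response` is already a plain str, not an object
# with a `.response` attribute, so it can be used directly.
response = "RAG retrieves supporting context before generating an answer."

# Suggested fix: [response] instead of [response.response]
data = {"Response": [response]}
print(data["Response"][0])
```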
@@ -0,0 +1,550 @@
{ |
Line #8. `"Evaluation Result": [eval_result.feedback],`

`eval_result` is a string too, so it has no `.feedback` property.
@@ -0,0 +1,550 @@
{ |
Line #12. `eval_df = eval_df.style.set_properties(`

Could you use the Colab formatter instead? `from google import colab` followed by `colab.data_table.enable_dataframe_formatter()`.
@@ -0,0 +1,550 @@
{ |
Line #9. `display_eval_df(question, llm_response, eval_result)`

I don't think we want to create 1-row dataframes in a loop; this would look increasingly strange as N gets bigger. Can you assemble a single dataframe instead?
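The suggestion above could be sketched like this; the `results` data and column names are hypothetical stand-ins for the notebook's actual loop output:

```python
import pandas as pd

# Stand-in data; in the notebook these triples would come from the query
# engine and the evaluator inside the loop.
results = [
    ("What is RAG?", "RAG combines retrieval with generation.", "YES"),
    ("What does LlamaIndex do?", "It indexes data for LLM querying.", "YES"),
]

# Accumulate one row per question, then build a single DataFrame, instead of
# rendering a separate 1-row DataFrame on every iteration.
rows = [
    {"Query": q, "Response": str(r), "Evaluation Result": str(e)}
    for q, r, e in results
]
eval_df = pd.DataFrame(rows)
print(eval_df.shape)  # one frame with len(results) rows
```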
Marking this pull request as stale since it has been open for 14 days with no activity. This PR will be closed if no further activity occurs.
No description provided.