📣 Pezzo Evaluations Feature - Share your thoughts! #196
arielweinberger started this conversation in Ideas
Our Mission
Pezzo is the first truly open-source and cloud-native LLMOps platform. We are committed to this vision, and for it to succeed, the platform must continue to evolve to meet the needs of AI adopters.
Evaluations
One of the most commonly requested features is Evaluations. In a nutshell, it is the ability to understand whether a prompt performs well or not.
Share your thoughts: how can Pezzo help here? Some possible directions include:
- UIs for humans to manually rate results
- APIs to close the trace loop (a rough sketch of this idea appears below)
- Integrations with analytics systems to track which AI operations convert
- Pezzo automatically validating prompt results against a "validator" prompt
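To make the "close the trace loop" idea more concrete, here is a minimal sketch of what an evaluation-reporting call could look like. Everything here is hypothetical: the `EvaluationReport` shape, the `/api/evaluations` endpoint, and the `reportEvaluation` helper do not exist in Pezzo today and are only meant to illustrate the kind of API being discussed.

```typescript
// Hypothetical shapes only -- none of these types or endpoints exist in Pezzo today.
// The idea: after a prompt execution is traced, a caller (or a human reviewer UI)
// reports a score back against the trace, "closing the loop".

interface EvaluationReport {
  traceId: string; // id of the traced prompt execution
  score: number; // e.g. 0..1, or thumbs up/down mapped to 1/0
  source: "human" | "validator-prompt" | "analytics";
  comment?: string; // optional free-form note from a human reviewer
}

// A minimal client that POSTs the report to a (made-up) evaluations endpoint.
async function reportEvaluation(baseUrl: string, report: EvaluationReport): Promise<void> {
  const res = await fetch(`${baseUrl}/api/evaluations`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(report),
  });
  if (!res.ok) {
    throw new Error(`Failed to report evaluation: ${res.status}`);
  }
}

// Example usage: a "validator" prompt judged this result acceptable.
reportEvaluation("https://pezzo.example.com", {
  traceId: "trace_123",
  score: 1,
  source: "validator-prompt",
  comment: "Output matched the expected JSON schema",
}).catch(console.error);
```

This is only one of several directions; a human-rating UI or an analytics integration could feed the same kind of record.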
📣 Please provide your feedback! Any feedback, even unstructured notes or random ideas, is most welcome.
This is a living document that will be used to capture ideas, community needs, and challenges with regard to this feature.
I will occasionally update the main post based on community feedback, and the goal is to deliver a first functioning version as soon as possible.