Can't specify params when running inference on MLServer #1927
Comments
@svpino this has been fixed for the standard mlflow runtime. Would it be possible to use this endpoint instead of … Note that …
Unfortunately, MLflow requires an …
@svpino is this something that you would like to fix in a PR? We encourage contributions from the community.
I'd love to. I'll try to get to this at some point, but I can't make any promises for now.
When deploying an MLflow model using MLServer, we can't specify `params` as part of the request.

Here is the error returned by the server:
Here is how I'm running the server:
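Something along the lines of the sketch below, which launches the model through MLflow's CLI with the MLServer backend enabled; the model URI and port are placeholders rather than the exact values from my setup:

```python
import subprocess

# Placeholder model URI; substitute the URI of the logged MLflow model.
MODEL_URI = "models:/my-model/1"

# Serve the model with the MLServer backend (requires the `mlserver` and
# `mlserver-mlflow` packages in addition to `mlflow`).
subprocess.run(
    [
        "mlflow", "models", "serve",
        "-m", MODEL_URI,
        "--enable-mlserver",
        "--port", "8080",
    ],
    check=True,
)
```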
Here is an example of the request:
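A request roughly like the following; the payload is illustrative (the input schema depends on the model) and assumes the scoring server is reachable on its `/invocations` route:

```python
import requests

# Illustrative payload: `inputs` follows the model's signature, and `params`
# carries the inference-time parameters the model expects.
payload = {
    "inputs": ["What is MLflow?"],
    "params": {"temperature": 0.1, "max_new_tokens": 64},
}

response = requests.post("http://127.0.0.1:8080/invocations", json=payload)
print(response.status_code)
print(response.text)
```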
It seems that MLServer doesn't support parameters as part of the request, which breaks its compatibility with MLflow models that require these parameters to work.
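For context, here is a minimal sketch of the kind of model affected: a pyfunc model whose `predict` depends on inference-time `params` (all names and values are illustrative). If the serving layer drops `params`, a model like this silently falls back to its defaults.

```python
import mlflow
from mlflow.models import infer_signature
from mlflow.pyfunc import PythonModel


class ParamEcho(PythonModel):
    def predict(self, context, model_input, params=None):
        params = params or {}
        temperature = params.get("temperature", 1.0)
        # model_input may arrive as a list, numpy array, or DataFrame depending
        # on how the model is invoked; treat it as a flat sequence of strings.
        values = model_input.iloc[:, 0] if hasattr(model_input, "iloc") else model_input
        return [f"{v} (temperature={temperature})" for v in values]


# Params are only forwarded to predict() when the logged signature declares them.
signature = infer_signature(
    ["hello"],
    ["hello (temperature=1.0)"],
    params={"temperature": 1.0},
)

with mlflow.start_run():
    mlflow.pyfunc.log_model("model", python_model=ParamEcho(), signature=signature)
```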