Hello everyone,
I was exploring using mlserver to deploy ML models as a REST service, and I noticed an issue: if you want to call an mlserver-deployed model from Python and use its codecs (like the numpy or pandas codecs), you have to add mlserver itself as a dependency of your codebase. That pulls in many transitive dependencies (fastapi, aiokafka, uvicorn, etc.), which significantly bloats the dependency footprint. Would it not be more practical to have a separate mlserver-client package that contains only the codecs and types?
Or how do you currently integrate mlserver with another microservice? Do you manually build the Open Inference Protocol v2 JSON, as in the sketch below?
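For context, this is roughly what the "manual" approach looks like today: constructing the Open Inference Protocol v2 request body by hand with the standard library, without depending on mlserver. The endpoint URL, model name, and input tensor name are placeholders for illustration:

```python
import json
import urllib.request

import numpy as np

# Example payload to send for inference.
batch = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], dtype=np.float32)

# Hand-rolled Open Inference Protocol v2 request body -- the same structure
# the numpy codec would produce, but built without importing mlserver.
v2_request = {
    "inputs": [
        {
            "name": "input-0",            # placeholder input tensor name
            "shape": list(batch.shape),
            "datatype": "FP32",
            "data": batch.flatten().tolist(),
        }
    ]
}

# Placeholder endpoint; adjust host, port, and model name for your deployment.
url = "http://localhost:8080/v2/models/my-model/infer"

req = urllib.request.Request(
    url,
    data=json.dumps(v2_request).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    v2_response = json.load(resp)

# Each output mirrors the input structure: name, shape, datatype, data.
for output in v2_response["outputs"]:
    print(output["name"], output["shape"], output["data"])
```

This works, but every calling service has to reimplement the encoding and decoding logic that the codecs already provide, which is why a lightweight client package would be useful.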