A flexible, high-performance serving system for machine learning models


XGBoost Serving

This is a fork of TensorFlow Serving, extended with support for the XGBoost, alphaFM, and alphaFM_softmax frameworks. For more information about TensorFlow Serving, switch to the master branch or visit the TensorFlow Serving website.


XGBoost Serving is a flexible, high-performance serving system for XGBoost and FM models, designed for production environments. It handles the inference aspect of these models: it takes models after training, manages their lifetimes, and provides clients with versioned access via a high-performance, reference-counted lookup table. XGBoost Serving derives from TensorFlow Serving and is used widely inside iQIYI.

To note a few features:

  • Can serve multiple models, or multiple versions of the same model, simultaneously
  • Exposes gRPC inference endpoints
  • Allows deployment of new model versions without changing any client code
  • Supports canarying new versions and A/B testing experimental models
  • Adds minimal latency to inference time due to an efficient, low-overhead implementation
  • Supports XGBoost servables, XGBoost and FM servables, and XGBoost and alphaFM_softmax servables
  • Supports computation-latency distribution statistics
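The "versioned access via a reference-counted lookup table" idea above can be sketched in miniature. The following is a conceptual Python illustration only, not the repository's actual C++ implementation; all class and method names (`ServableManager`, `ServableHandle`, `load`, `get`, `release`) are hypothetical:

```python
# Conceptual sketch of versioned, reference-counted model access:
# clients ask a manager for a model by name (optionally pinning a version)
# and receive a handle that keeps that version alive until released.
# All names here are illustrative, not part of the XGBoost Serving API.

class ServableHandle:
    """Holds one loaded model version plus a reference count."""
    def __init__(self, name, version, model):
        self.name, self.version, self.model = name, version, model
        self.refcount = 0

class ServableManager:
    def __init__(self):
        self._versions = {}  # (name, version) -> ServableHandle

    def load(self, name, version, model):
        self._versions[(name, version)] = ServableHandle(name, version, model)

    def get(self, name, version=None):
        # No explicit version requested: serve the latest loaded one.
        if version is None:
            version = max(v for (n, v) in self._versions if n == name)
        handle = self._versions[(name, version)]
        handle.refcount += 1  # pin this version while a client uses it
        return handle

    def release(self, handle):
        handle.refcount -= 1

manager = ServableManager()
manager.load("ctr_model", 1, model="xgb-v1")
manager.load("ctr_model", 2, model="xgb-v2")

handle = manager.get("ctr_model")  # resolves to the latest version, 2
print(handle.version, handle.model)  # -> 2 xgb-v2
manager.release(handle)
```

Because old versions stay pinned while handles are outstanding, a new version can be loaded and canaried without disrupting in-flight requests against the previous one.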

Documentation

Set up

The easiest and most straightforward way of building and using XGBoost Serving is with Docker images. We highly recommend this route unless you have specific needs that are not addressed by running in a container.
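As a rough illustration of the Docker route, a build-and-run session might look like the following. The image name, tag, and model paths below are assumptions for illustration, not official artifacts of this repository; the gRPC port 8500 follows TensorFlow Serving convention:

```shell
# Illustrative only: image name and model path are assumptions.
docker build -t xgboost-serving .

# Serve an exported model over gRPC (port 8500 by TensorFlow Serving
# convention), mounting the model directory into the container.
docker run -p 8500:8500 \
  -v /path/to/exported_model:/models/my_model \
  xgboost-serving
```

See the repository's Docker documentation for the exact image name and server flags.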

Use

Export your XGBoost and FM model

To serve an XGBoost and FM model, simply export your XGBoost model, leaf mapping, and FM model.

Please refer to Export XGBoost and FM model for details about the model's specification and how to export an XGBoost and FM model.
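The leaf mapping mentioned above is the glue between the two models: in a GBDT+FM pipeline, the leaf each tree assigns to a sample becomes a one-hot feature for the FM model. This is a conceptual pure-Python sketch of that index arithmetic; the function name is hypothetical, and the actual on-disk format is described in the linked export documentation:

```python
# Conceptual sketch of the leaf-mapping step in an XGBoost + FM pipeline:
# each tree's predicted leaf index becomes a global one-hot feature index
# for the FM model, offset by the leaf counts of all preceding trees.
# Names are illustrative, not the repository's API.

def leaf_indices_to_fm_features(leaf_indices, leaves_per_tree):
    """Map per-tree leaf indices to global one-hot FM feature indices.

    leaf_indices[i]   -- the leaf the sample landed in for tree i
    leaves_per_tree[i] -- the number of leaves in tree i
    """
    features = []
    offset = 0
    for leaf, n_leaves in zip(leaf_indices, leaves_per_tree):
        features.append(offset + leaf)
        offset += n_leaves
    return features

# A sample landing in leaf 2 of tree 0, leaf 0 of tree 1, and leaf 3 of
# tree 2, with trees of 4, 2, and 8 leaves respectively:
print(leaf_indices_to_fm_features([2, 0, 3], [4, 2, 8]))  # -> [2, 4, 9]
```

In practice the per-sample leaf indices come from XGBoost's leaf prediction mode, and the resulting sparse feature indices are what the FM (or alphaFM_softmax) model consumes at serving time.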

Configure and Use XGBoost Serving

Extend

XGBoost Serving derives from TensorFlow Serving and inherits its highly modular architecture: you can use some parts individually and/or extend them to serve new use cases.

Contribute

If you'd like to contribute to XGBoost Serving, be sure to review the contribution guidelines.

Feedback and Getting involved

  • Report bugs, ask questions, or make suggestions via GitHub Issues
