Tools for Serving

Serving of ML models in Kubeflow

Overview

Model serving overview

KFServing

Model serving using KFServing
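
A deployed InferenceService exposes KFServing's v1 HTTP data plane, so it can be queried with a plain HTTP client. The sketch below assumes a cluster with an Istio ingress gateway; the gateway address, service hostname, and model name are placeholders for illustration only.

```python
# Minimal sketch: send a prediction request to a KFServing InferenceService
# over its v1 HTTP data plane. All host and model values are placeholders.
import requests

CLUSTER_IP = "1.2.3.4"  # ingress gateway address (placeholder)
SERVICE_HOSTNAME = "sklearn-iris.default.example.com"  # from the InferenceService status (placeholder)
MODEL_NAME = "sklearn-iris"

resp = requests.post(
    f"http://{CLUSTER_IP}/v1/models/{MODEL_NAME}:predict",
    json={"instances": [[6.8, 2.8, 4.8, 1.4]]},
    headers={"Host": SERVICE_HOSTNAME},  # KFServing routes requests by Host header via Istio
)
print(resp.json())  # e.g. {"predictions": [...]}
```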

Seldon Core Serving

Model serving using Seldon
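
Seldon Core serves models wrapped in a small Python class whose predict method it calls for each request. The sketch below is illustrative: the class name and model loading are placeholders, not anything prescribed by Kubeflow.

```python
# Minimal sketch of a Seldon Core Python model wrapper.
# Seldon's Python server calls predict() on this class for each request.
import numpy as np

class IrisClassifier:
    """Wraps a trained model so Seldon Core can serve it (placeholder logic)."""

    def __init__(self):
        # Load model artifacts here; this sketch keeps an identity "model".
        self.ready = True

    def predict(self, X, features_names=None):
        # X arrives as an array built from the request payload.
        return np.asarray(X)  # replace with a real call such as self.model.predict(X)
```

The wrapped class is typically built into an image with Seldon's source-to-image tooling and deployed through a SeldonDeployment custom resource; see the Seldon Core Serving page for the full workflow.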

BentoML

Model serving with BentoML
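
BentoML packages a trained model together with its serving code as a BentoService. The sketch below follows the BentoML 0.x-era Python API; module paths and decorator names may differ between BentoML releases, so treat it as illustrative rather than exact.

```python
# Rough sketch of packaging a scikit-learn model as a BentoML service
# (BentoML 0.x-era API; names may differ in other BentoML versions).
from bentoml import BentoService, api, artifacts, env
from bentoml.artifact import SklearnModelArtifact
from bentoml.handlers import DataframeHandler

@env(pip_dependencies=["scikit-learn"])
@artifacts([SklearnModelArtifact("model")])
class IrisClassifier(BentoService):
    @api(DataframeHandler)
    def predict(self, df):
        # Delegate to the packed scikit-learn model.
        return self.artifacts.model.predict(df)
```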

NVIDIA Triton Inference Server

Model serving with Triton Inference Server

TensorFlow Serving

Serving TensorFlow models
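
TensorFlow Serving exposes a REST predict endpoint (port 8501 by default), so a SavedModel can be queried with a simple HTTP request. The host and model name below are placeholders.

```python
# Minimal sketch: query a TensorFlow Serving model over its REST API.
# Port 8501 is TF Serving's default REST port; host and model name are placeholders.
import requests

resp = requests.post(
    "http://localhost:8501/v1/models/my_model:predict",
    json={"instances": [[1.0, 2.0, 5.0]]},
)
print(resp.json())  # {"predictions": [...]}
```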

TensorFlow Batch Prediction

See Kubeflow v0.6 docs for batch prediction with TensorFlow models