# KServe Quickstart
This guide explains how to deploy a simple scikit-learn model using KServe, and log its inferences to a Parquet file in S3. It assumes the following components are installed in your cluster:
- KServe
- Knative Eventing, with the Kafka broker
- Kafka, with Schema Registry, Kafka Connect, and the Confluent S3 Sink connector plugin
To get started as quickly as possible, see the environment preparation tutorial, which shows how to set up a full environment in minutes.
First, we will need a Kafka broker to collect all KServe inference requests and responses:
```yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: sklearn-iris-broker
  namespace: default
  annotations:
    eventing.knative.dev/broker.class: Kafka
spec:
  config:
    apiVersion: v1
    kind: ConfigMap
    name: inferencedb-kafka-broker-config
    namespace: knative-eventing
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: inferencedb-kafka-broker-config
  namespace: knative-eventing
data:
  # Number of topic partitions
  default.topic.partitions: "8"
  # Replication factor of topic messages
  default.topic.replication.factor: "1"
  # A comma-separated list of bootstrap servers (these can be inside or outside the Kubernetes cluster)
  bootstrap.servers: "kafka-cp-kafka.default.svc.cluster.local:9092"
```
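Save this manifest to a file (the name `broker.yaml` below is just an example) and apply it:

```shell
kubectl apply -f broker.yaml
```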
Next, we will serve a simple sklearn model using KServe:
```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: sklearn-iris
spec:
  predictor:
    logger:
      mode: all
      url: http://kafka-broker-ingress.knative-eventing.svc.cluster.local/default/sklearn-iris-broker
    sklearn:
      protocolVersion: v2
      storageUri: gs://seldon-models/sklearn/iris
```
Note the `logger` section - you can read more about it in the KServe documentation.
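Apply the manifest (again, the filename is illustrative) and wait for the InferenceService to become ready:

```shell
kubectl apply -f inference-service.yaml

# The service is ready when the READY column shows True
kubectl get inferenceservice sklearn-iris
```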
Finally, we can log the predictions of our new model using InferenceDB:
```yaml
apiVersion: inferencedb.aporia.com/v1alpha1
kind: InferenceLogger
metadata:
  name: sklearn-iris
  namespace: default
spec:
  # NOTE: The format is knative-broker-<namespace>-<brokerName>
  topic: knative-broker-default-sklearn-iris-broker
  events:
    type: kserve
    config: {}
  destination:
    type: confluent-s3
    config:
      url: s3://aporia-data/inferencedb
      format: parquet
  # Optional - only if you want to override column names
  schema:
    type: avro
    config:
      columnNames:
        inputs: [sepal_width, petal_width, sepal_length, petal_length]
        outputs: [flower]
```
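As before, apply the InferenceLogger to the cluster (the filename is an example):

```shell
kubectl apply -f inference-logger.yaml
```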
To send some requests to the model, port-forward the Istio ingress gateway so it can be accessed from your local machine:

```shell
kubectl port-forward --namespace istio-system svc/istio-ingressgateway 8080:80
```
Prepare a payload in a file called `iris-input.json`:

```json
{
  "inputs": [
    {
      "name": "input-0",
      "shape": [2, 4],
      "datatype": "FP32",
      "data": [
        [6.8, 2.8, 4.8, 1.4],
        [6.0, 3.4, 4.5, 1.6]
      ]
    }
  ]
}
```
And finally, you can send some inference requests:

```shell
SERVICE_HOSTNAME=$(kubectl get inferenceservice sklearn-iris -o jsonpath='{.status.url}' | cut -d "/" -f 3)

curl -v \
  -H "Host: ${SERVICE_HOSTNAME}" \
  -H "Content-Type: application/json" \
  -d @./iris-input.json \
  http://localhost:8080/v2/models/sklearn-iris/infer
```
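If the request succeeds, the model should answer with a V2 inference protocol response along these lines (the `id` and predicted values here are illustrative):

```json
{
  "model_name": "sklearn-iris",
  "id": "823248cc-d770-4a51-9606-16803395569c",
  "outputs": [
    {
      "name": "predict",
      "shape": [2],
      "datatype": "INT64",
      "data": [1, 1]
    }
  ]
}
```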
If everything was configured correctly, these predictions should have been logged to a Parquet file in the S3 bucket you configured. You can read it back with pandas:

```python
import pandas as pd

# Reading s3:// paths with pandas requires the s3fs package
df = pd.read_parquet("s3://aporia-data/inferencedb/default-sklearn-iris/")
print(df)
```
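The resulting DataFrame should contain one row per prediction, with the column names configured in the InferenceLogger schema: `sepal_width`, `petal_width`, `sepal_length`, `petal_length`, and `flower`.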