@yuvalif
Last active February 25, 2026 16:01
# channel for rgw to send notifications to
apiVersion: messaging.knative.dev/v1
kind: InMemoryChannel
metadata:
  name: text-channel
---
# subscription for the python-ceph-vectordb app, which listens for notifications from the channel
apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
  name: text-subscription
spec:
  channel:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
    name: text-channel
  subscriber:
    ref:
      apiVersion: v1
      kind: Service
      name: python-ceph-vectordb-text

initial setup

  • start minikube:
minikube start --extra-disks=1 --driver=kvm2
  • clone rook and get into the examples directory:
git clone https://github.com/rook/rook.git
cd rook/deploy/examples
  • deploy the rook operator:
kubectl create -f crds.yaml -f common.yaml -f csi-operator.yaml -f operator.yaml
  • change the ceph image in cluster-test.yaml to the developer build:
image: quay.ceph.io/ceph-ci/ceph:wip-add-lancedb
  • deploy the cluster:
kubectl create -f cluster-test.yaml
  • wait for all cluster pods to be up and running:
kubectl get pods -n rook-ceph
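instead of polling manually, you can block until the pods are ready (a hedged alternative; the timeout is illustrative, and completed one-shot pods, if any, may need to be excluded):

```shell
# wait until every pod in the rook-ceph namespace reports Ready (up to 10 minutes)
kubectl wait --for=condition=Ready pod --all -n rook-ceph --timeout=600s
```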
  • deploy the RGW:
kubectl create -f object-test.yaml
  • expose the RGW as a service outside of the cluster:
kubectl create -f rgw-external.yaml
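before moving on, you can confirm the external service exists (the service name matches the one used for AWS_URL below):

```shell
# the external RGW service should show a NodePort mapping
kubectl -n rook-ceph get service rook-ceph-rgw-my-store-external
```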

test S3

  • create storage class and bucket:
kubectl create -f storageclass-bucket-delete.yaml
kubectl create -f object-bucket-claim-delete.yaml
  • upload an object to the bucket:
export AWS_URL=$(minikube service --url rook-ceph-rgw-my-store-external -n rook-ceph)
export AWS_ACCESS_KEY_ID=$(kubectl -n default get secret ceph-delete-bucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)
export AWS_SECRET_ACCESS_KEY=$(kubectl -n default get secret ceph-delete-bucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)
export BUCKET_NAME=$(kubectl get objectbucketclaim ceph-delete-bucket -o jsonpath='{.spec.bucketName}')
echo "hello world" > hello.txt
aws --endpoint-url $AWS_URL s3 cp hello.txt s3://$BUCKET_NAME
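to verify the upload round-trips, list the bucket and stream the object back to stdout:

```shell
# list the bucket contents, then read the object back
aws --endpoint-url $AWS_URL s3 ls s3://$BUCKET_NAME
aws --endpoint-url $AWS_URL s3 cp s3://$BUCKET_NAME/hello.txt -
```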

test s3vectors

  • install the latest AWS CLI (s3vectors is a new feature)
  • set the signature version for the profile (currently only the s3 variant of SigV4 is supported):
aws configure set default.s3vectors.signature_version s3v4
  • create a vector bucket:
aws --endpoint-url $AWS_URL s3vectors create-vector-bucket --vector-bucket-name my-v-bucket
  • expected reply is:
{
    "vectorBucketArn": "arn:aws:s3vectors:::bucket/my-v-bucket"
}
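the new bucket should also show up when listing vector buckets, and you can then create a vector index in it. This is a sketch: the index name, dimension, and distance metric are illustrative values, and the flag names follow the upstream AWS CLI s3vectors reference:

```shell
# the new vector bucket should appear in the listing
aws --endpoint-url $AWS_URL s3vectors list-vector-buckets

# create a vector index in the bucket (parameters are illustrative)
aws --endpoint-url $AWS_URL s3vectors create-index \
  --vector-bucket-name my-v-bucket \
  --index-name my-index \
  --data-type float32 \
  --dimension 128 \
  --distance-metric cosine
```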

end2end setup

based on the blueprint in https://github.com/thotz/python-vectordbapp-ceph (without using Milvus)

install knative eventing

kubectl create -f https://github.com/knative/eventing/releases/download/knative-v1.21.0/eventing-crds.yaml
kubectl create -f https://github.com/knative/eventing/releases/download/knative-v1.21.0/eventing-core.yaml
kubectl create -f https://github.com/knative/eventing/releases/download/knative-v1.21.0/in-memory-channel.yaml
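before continuing, check that the eventing components are up (knative installs into the knative-eventing namespace):

```shell
# all knative-eventing pods should be Running
kubectl get pods -n knative-eventing
```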

setup channel and subscription

kubectl create -f knative-resources.yaml
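to verify that the channel and subscription became ready (resource names are the ones defined in the YAML at the top; the fully-qualified resource names avoid ambiguity with other CRDs):

```shell
# both resources should report READY=True
kubectl get inmemorychannels.messaging.knative.dev text-channel
kubectl get subscriptions.messaging.knative.dev text-subscription
```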