@ashwathniranjh
Last active March 5, 2026 13:53
GSoC'26-ceph-KafkaSecurity-Evaluation

Tasks Provided:

  1. Build Ceph and run bucket notification tests
  • Successfully built Ceph and ran the bucket notification tests as suggested. Output in the Python server terminal:
ashwathniranjh@ashwathniranjh:~$ python3 server.py 10900
INFO:root:Starting httpd...

INFO:root:POST request,
Path: /
Headers:
Host: localhost:10900
Accept: */*
Content-Type: application/json
Content-Length: 815



Body:
{"Records":[{"eventVersion":"2.2","eventSource":"ceph:s3","awsRegion":"default","eventTime":"2026-03-01T13:24:14.836151Z","eventName":"ObjectCreated:Put","userIdentity":{"principalId":"testid"},"requestParameters":{"sourceIPAddress":""},"responseElements":{"x-amz-request-id":"450f2d3e-f1fb-47ec-96c3-c9238087abfd.4179.8659120375192951466","x-amz-id-2":"4179-default-default"},"s3":{"s3SchemaVersion":"1.0","configurationId":"notif1","bucket":{"name":"fish","ownerIdentity":{"principalId":"testid"},"arn":"arn:aws:s3:default::fish","id":"450f2d3e-f1fb-47ec-96c3-c9238087abfd.4179.1"},"object":{"key":"myfile","size":512,"eTag":"237130b608018ada0d248f8ecc97eff7","versionId":"","sequencer":"FE3DA469E8AC3D32","metadata":[],"tags":[]}},"eventId":"1772371454.842902.237130b608018ada0d248f8ecc97eff7","opaqueData":""}]}

127.0.0.1 - - [01/Mar/2026 13:24:14] "POST / HTTP/1.1" 200 -
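The `server.py` script used above is not included in this gist; a minimal sketch of such an endpoint, assuming it only needs to accept POSTed notification JSON and reply 200, could look like this (all names here are illustrative, not the actual script):

```python
# Minimal sketch of an HTTP endpoint in the spirit of the server.py used above:
# it accepts POSTed bucket-notification JSON, logs it, and replies 200 so RGW
# considers the delivery successful.
import logging
import sys
from http.server import BaseHTTPRequestHandler, HTTPServer

class NotificationHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        logging.info("POST request,\nPath: %s\nBody:\n%s", self.path, body.decode())
        # replying non-2xx would make RGW retry delivery for persistent topics
        self.send_response(200)
        self.end_headers()

def run(port):
    logging.basicConfig(level=logging.INFO)
    logging.info("Starting httpd...")
    HTTPServer(("", port), NotificationHandler).serve_forever()

if __name__ == "__main__" and len(sys.argv) > 1:
    run(int(sys.argv[1]))  # e.g. python3 server.py 10900
```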

  2. Run Kafka tests
  • Configured Kafka; opted for Kafka 2.8.0 in order to use a version that ships with ZooKeeper out of the box instead of KRaft. Output in the Kafka consumer terminal:
ashwathniranjh@ashwathniranjh:/opt/kafka$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic kafkatopic
{"Records":[{"eventVersion":"2.2","eventSource":"ceph:s3","awsRegion":"default","eventTime":"2026-03-01T16:00:00.706814Z","eventName":"ObjectCreated:Put","userIdentity":{"principalId":"testid"},"requestParameters":{"sourceIPAddress":""},"responseElements":{"x-amz-request-id":"753198d5-ffcd-4bbe-8424-f31907f5e9ca.4179.4809856097512179502","x-amz-id-2":"4179-default-default"},"s3":{"s3SchemaVersion":"1.0","configurationId":"notif2","bucket":{"name":"fish","ownerIdentity":{"principalId":"testid"},"arn":"arn:aws:s3:default::fish","id":"753198d5-ffcd-4bbe-8424-f31907f5e9ca.4179.1"},"object":{"key":"myfile","size":512,"eTag":"87320c125c9c980b9980fe38b7110f9a","versionId":"","sequencer":"8062A469D844892A","metadata":[],"tags":[]}},"eventId":"1772380800.713639.87320c125c9c980b9980fe38b7110f9a","opaqueData":""}]}
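For reference, the fields a downstream consumer would typically pull out of one of these records can be sketched as follows (the JSON below is a trimmed-down version of the event shown above; the full record carries the schema-2.2 fields shown in the terminal output):

```python
# Sketch: extracting the commonly used fields from an RGW S3 event record
# (trimmed copy of the "ObjectCreated:Put" event received above).
import json

event = json.loads(
    '{"Records":[{"eventName":"ObjectCreated:Put",'
    '"s3":{"bucket":{"name":"fish"},'
    '"object":{"key":"myfile","size":512,'
    '"eTag":"87320c125c9c980b9980fe38b7110f9a"}}}]}'
)

for record in event["Records"]:
    obj = record["s3"]["object"]
    # e.g. ObjectCreated:Put fish myfile 512
    print(record["eventName"], record["s3"]["bucket"]["name"], obj["key"], obj["size"])
```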
  3. Kafka over mTLS
  • Cherry-picked PR ceph/ceph#61572 by pulling yuvalif's branch and applying the changes; resolved merge conflicts in the documentation file by taking the changes from the PR.
  • Rebuilt Ceph using ninja.
  • Generated SSL certificates for RGW, adjusting the command provided in the Kafka security tests:
cd /path/to/kafka/
KAFKA_CERT_HOSTNAME=192.168.72.128 KAFKA_CERT_IP=192.168.72.128 bash ~/ceph-gsoc/ceph/src/test/rgw/bucket_notification/kafka-security.sh

OUTPUT:

########## create the request in key store 'server.keystore.jks' with SAN=DNS:192.168.72.128,IP:192.168.72.128
Generating 3,072 bit RSA key pair and self-signed certificate (SHA384withRSA) with a validity of 36,500 days
        for: CN=192.168.72.128, OU=Michigan Engineering, O=Red Hat Inc, L=Ann Arbor, ST=Michigan, C=US
########## create the CA 'y-ca.crt'
.+..........+......+.....+....+...+..+...+...+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++*.......+.....+.+...+...........+.+.....+....+...+...........+.......+.........+...+....................+...+.........+...+..........+......+..+...+.......+..+.+.........+..+...+...+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++*..+....+.........+.....+.......+..+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
.......+...+..+....+.........+...............+..+.+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++*.....+..+...+......+...+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++*...+.+..+............+..................+.+.....+.........+......+......+...+...+.........+.+.........+...+...+...+.....+............+.......+..+......+.......+.....+...+...+....+...+..+.+......+......+.....+....+...+..+.........+......+.+.....+.............+...............+............+..+............+..........+........+.+.....+.........+...+....+......+............+...+..+.......+.....+...+...+.......+...........+.+...+......+...........+....+.....+.+.....+..........+.....+.+...+..+..........+.................+......+.+......+.....+...+.+.....+.+.........+........+......+.+.....+......+...............+.........+.+...+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-----
########## store the CA in trust store 'server.truststore.jks'
Certificate was added to keystore
########## create a request '192.168.72.128.req' for signing in key store 'server.keystore.jks'
########## sign and create certificate '192.168.72.128.crt' with SAN=DNS:192.168.72.128,IP:192.168.72.128
Certificate request self-signature ok
subject=C = US, ST = Michigan, L = Ann Arbor, O = Red Hat Inc, OU = Michigan Engineering, CN = 192.168.72.128
########## store CA 'y-ca.crt' in key store 'server.keystore.jks'
Certificate was added to keystore
########## store certificate '192.168.72.128.crt' in key store 'server.keystore.jks'
Certificate reply was installed in keystore
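A quick way to sanity-check a certificate produced this way is to generate a throwaway self-signed certificate with the same SAN layout and inspect it; this sketch uses illustrative /tmp paths rather than the actual files produced by kafka-security.sh (the real ones could be inspected the same way with `openssl x509` on 192.168.72.128.crt):

```shell
# Sketch (hypothetical paths): generate a throwaway self-signed cert with the
# same SAN layout as above, then confirm the SAN actually made it into the cert.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 1 -subj "/CN=192.168.72.128" \
  -addext "subjectAltName=DNS:192.168.72.128,IP:192.168.72.128"
# print the Subject Alternative Name extension
openssl x509 -in /tmp/demo.crt -noout -text | grep -A1 "Subject Alternative Name"
```

Note that `-addext` requires OpenSSL 1.1.1 or later.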

  • Configured the Kafka broker for mTLS by editing server.properties with the parameters suggested in the Kafka security tests:
listeners = PLAINTEXT://192.168.72.128:9092,SSL://192.168.72.128:9093,SASL_SSL://192.168.72.128:9094,SASL_PLAINTEXT://192.168.72.128:9095
#listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set, 
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
advertised.listeners=PLAINTEXT://192.168.72.128:9092,SSL://192.168.72.128:9093,SASL_SSL://192.168.72.128:9094,SASL_PLAINTEXT://192.168.72.128:9095

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# SSL configuration matching the kafka-security.sh script 
ssl.keystore.location=/opt/kafka/server.keystore.jks 
ssl.keystore.password=mypassword 
ssl.key.password=mypassword 
ssl.truststore.location=/opt/kafka/server.truststore.jks 
ssl.truststore.password=mypassword
ssl.client.auth=none

# SASL mechanisms 
sasl.enabled.mechanisms=PLAIN,SCRAM-SHA-256,SCRAM-SHA-512 
sasl.mechanism.inter.broker.protocol=PLAIN 
inter.broker.listener.name=PLAINTEXT

# SASL over SSL with PLAIN mechanism 
listener.name.sasl_ssl.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret" user_alice="alice-secret";

# SASL over SSL with SCRAM-SHA-256 mechanism 
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";

# SASL over SSL with SCRAM-SHA-512 mechanism 
listener.name.sasl_ssl.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";

# PLAINTEXT SASL with PLAIN mechanism 
listener.name.sasl_plaintext.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret" user_alice="alice-secret";

# PLAINTEXT SASL with SCRAM-SHA-256 mechanism 
listener.name.sasl_plaintext.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";

# PLAINTEXT SASL with SCRAM-SHA-512 mechanism 
listener.name.sasl_plaintext.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";
  • Created the topic, using the generated certificate and CA, with the following command:
aws --endpoint-url http://localhost:8000 sns create-topic --name=mtlstopic \
  --attributes='{"push-endpoint": "kafka://192.168.72.128:9093", "persistent": "true", "use-ssl": "true", "ca-location": "/opt/kafka/y-ca.crt", "cert-location": "/opt/kafka/192.168.72.128.crt"}'

Bucket creation: aws --endpoint-url http://localhost:8000 s3 mb s3://fish (the endpoint URL here can stay the same, since it corresponds to the S3/RGW endpoint and not to Kafka).

(The important thing to note above is that the push-endpoint URL needs to point to the address configured for the SSL listener in server.properties.)
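Hand-writing the --attributes JSON is easy to get wrong (a misplaced comma or quote silently breaks the topic creation); one way to avoid that is to build the attributes as a dict and serialize it, as in this sketch (values match the commands above):

```python
# Sketch: build the create-topic attributes programmatically so the JSON
# passed to `aws sns create-topic --attributes=...` is always well-formed.
import json

attributes = {
    "push-endpoint": "kafka://192.168.72.128:9093",
    "persistent": "true",
    "use-ssl": "true",
    "ca-location": "/opt/kafka/y-ca.crt",
    "cert-location": "/opt/kafka/192.168.72.128.crt",
}
# paste the printed string into: --attributes='...'
print(json.dumps(attributes))
```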

  • Restarted Kafka and ZooKeeper to pick up the newly added configuration.
  • Associated the Kafka topic with the bucket:
aws --endpoint-url http://localhost:8000 s3api put-bucket-notification-configuration  --bucket fish \
  --notification-configuration='{"TopicConfigurations": [{"Id": "notif2", "TopicArn": "arn:aws:sns:default::mtlstopic", "Events": []}]}'
  • The Kafka consumer can connect to 192.168.72.128:9092 since, as was mentioned, the consumer need not connect over SSL. Output on uploading a document to the bucket:

OUTPUT:

ashwathniranjh@ashwathniranjh:~$ aws --endpoint-url http://localhost:8000 s3 cp myfile s3://fish
upload: ./myfile to s3://fish/myfile      

output in kafka consumer terminal:

ashwathniranjh@ashwathniranjh:/opt/kafka$ bin/kafka-console-consumer.sh --bootstrap-server 192.168.72.128:9092 --topic mtlstopic
{"Records":[{"eventVersion":"2.2","eventSource":"ceph:s3","awsRegion":"default","eventTime":"2026-03-02T18:30:11.773426Z","eventName":"ObjectCreated:Put","userIdentity":{"principalId":"testid"},"requestParameters":{"sourceIPAddress":""},"responseElements":{"x-amz-request-id":"1f4cfb5a-a6a9-4e45-838f-d3519510a7ae.4179.1335149760644657019","x-amz-id-2":"4179-default-default"},"s3":{"s3SchemaVersion":"1.0","configurationId":"notif2","bucket":{"name":"fish","ownerIdentity":{"principalId":"testid"},"arn":"arn:aws:s3:default::fish","id":"1f4cfb5a-a6a9-4e45-838f-d3519510a7ae.4179.1"},"object":{"key":"myfile","size":512,"eTag":"a6bafa41ba7c04e90650f90026242e74","versionId":"","sequencer":"33D7A569683A8B2E","metadata":[],"tags":[]}},"eventId":"1772476211.780876.a6bafa41ba7c04e90650f90026242e74","opaqueData":""}]}
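The consumer above reads over the PLAINTEXT listener (9092). If it instead needed to read over the SSL listener (9093), a client configuration along these lines could be supplied (the file name is illustrative; it reuses the truststore generated earlier):

```properties
# client-ssl.properties (hypothetical file name)
security.protocol=SSL
# trust the same CA that signed the broker certificate
ssl.truststore.location=/opt/kafka/server.truststore.jks
ssl.truststore.password=mypassword
```

and then passed to the console consumer via: bin/kafka-console-consumer.sh --bootstrap-server 192.168.72.128:9093 --topic mtlstopic --consumer.config client-ssl.properties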
@yuvalif

yuvalif commented Mar 5, 2026

@ashwathniranjh for the mtls part (step 3). please provide the exact details on how to do these steps

@ashwathniranjh
Author

@yuvalif I have made the changes to the above gist itself. Please check and let me know if this suffices.
