a. When creating the endpoint, choose type "Gateway" and service name "com.amazonaws.us-east-1.s3" (change the region to match your bucket's region)
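A Gateway endpoint attaches a full-access policy by default. If you want to restrict it to just this pipeline's bucket, the endpoint policy might look like the sketch below (the bucket name `my-cdc-bucket` is a placeholder, not from this repo):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowS3AccessFromVpc",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-cdc-bucket",
        "arn:aws:s3:::my-cdc-bucket/*"
      ]
    }
  ]
}
```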
Step 3: Create MSK Cluster
a. Cluster type: Provisioned
b. Security group: allow inbound and outbound traffic from everywhere (for better security, restrict this to only the required sources)
c. Authentication type: IAM
Step 4: Create an IAM Role for MSK with the policy given in the repo
a. IAM policy details given here: IAM/MSKConnectorRole.json
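The authoritative policy is IAM/MSKConnectorRole.json in this repo. As a rough sketch, an MSK Connect worker role typically needs `kafka-cluster:*` permissions like the ones below (the cluster ARN, account ID, and cluster name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kafka-cluster:Connect",
        "kafka-cluster:DescribeCluster",
        "kafka-cluster:CreateTopic",
        "kafka-cluster:DescribeTopic",
        "kafka-cluster:WriteData",
        "kafka-cluster:ReadData",
        "kafka-cluster:AlterGroup",
        "kafka-cluster:DescribeGroup"
      ],
      "Resource": "arn:aws:kafka:us-east-1:123456789012:*/your-cluster-name/*"
    }
  ]
}
```

The sink connector's role will additionally need S3 permissions on the target bucket.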
Step 5: Create connector plugins for SQL Server Source and S3 Sink
a. Debezium Connector for SQL Server : https://www.confluent.io/hub/debezium/debezium-connector-sqlserver
b. Kafka-S3-Sink Connector: https://www.confluent.io/hub/confluentinc/kafka-connect-s3
Step 6: Create the source connector
Follow the steps explained in the video to create the source connector.
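The exact values come from the video; as a reference, a Debezium SQL Server source configuration (Debezium 1.x property names; hostname, credentials, database, and table names below are placeholders) looks roughly like this. Note that CDC must already be enabled on the SQL Server database and table:

```json
{
  "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
  "database.hostname": "your-sqlserver-host",
  "database.port": "1433",
  "database.user": "your-user",
  "database.password": "your-password",
  "database.dbname": "your-database",
  "database.server.name": "sqlserver-cdc",
  "table.include.list": "dbo.your_table",
  "database.history.kafka.bootstrap.servers": "your-msk-bootstrap:9098",
  "database.history.kafka.topic": "schema-changes.your-database",
  "tasks.max": "1"
}
```

With this configuration, change events land in topics named `<database.server.name>.<schema>.<table>`, e.g. `sqlserver-cdc.dbo.your_table`.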
Step 7: Create an EC2 instance (optional; we do this only to view the data in the Kafka topic)
a. Create EC2 instance
b. Create an EC2 service role with the IAM permissions given here: IAM/ecRole.json
Step 8: Verify the data in the topic (optional; we do this only to view the data in the Kafka topic)
a. Install Kafka using the commands given here: kafka/kafka_commands.txt
b. Use the console consumer command (#7 in the above file) to view the data in the topic
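Because the cluster uses IAM authentication, the console consumer needs a client properties file referencing the AWS MSK IAM auth library (these are the standard settings from the MSK IAM client documentation; the file name you pass with `--consumer.config` is up to you):

```properties
security.protocol=SASL_SSL
sasl.mechanism=AWS_MSK_IAM
sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
```

The `aws-msk-iam-auth` JAR must also be on the Kafka client's classpath (e.g. in Kafka's `libs/` directory).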
Step 9: Create S3 Sink Connector
Follow the steps explained in the video to create the sink connector.
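Again, use the values from the video; a representative configuration for the Confluent S3 sink connector (topic name, region, and bucket below are placeholders matching the earlier sketches) looks roughly like:

```json
{
  "connector.class": "io.confluent.connect.s3.S3SinkConnector",
  "topics": "sqlserver-cdc.dbo.your_table",
  "s3.region": "us-east-1",
  "s3.bucket.name": "my-cdc-bucket",
  "storage.class": "io.confluent.connect.s3.storage.S3Storage",
  "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
  "flush.size": "100",
  "tasks.max": "1"
}
```

`flush.size` controls how many records are buffered before a file is written to S3; a small value is convenient for testing because objects appear in the bucket quickly.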