Spring Boot + Kafka + Zookeeper
This project uses Java, Spring Boot, Kafka, and ZooKeeper to show how to integrate these services in a Docker Compose setup.
TIP
Just head over to the example repository on GitHub and follow the instructions there.
Zookeeper Docker image
Kafka uses ZooKeeper, so you first need to start a ZooKeeper server if you don’t already have one running.
docker-compose.yml
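The original snippet is not preserved here; a minimal sketch of the ZooKeeper service could look like the following (the wurstmeister/zookeeper image and the port mapping are assumptions; any ZooKeeper image listening on the standard client port 2181 works):

```yaml
version: '2'
services:
  zookeeper:
    # Assumed image; any ZooKeeper image works
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"   # standard ZooKeeper client port
```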
Kafka Docker image
Now start the Kafka server. In docker-compose.yml it can look something like this:
docker-compose.yml
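A sketch of the Kafka service definition (the image name, port, and advertised hostname here are assumptions; adjust them to your environment):

```yaml
  kafka:
    image: wurstmeister/kafka            # assumed image
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: localhost     # assumed; see the "Advertised hostname" section
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181   # points at the zookeeper service
    depends_on:
      - zookeeper
```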
To start the Kafka server with a specific pre-configuration, you use environment variables. Below you can see which environment variables are available for this service.
Broker IDs
You can configure the broker id in different ways:
- Explicitly, using KAFKA_BROKER_ID
- Via a command, using BROKER_ID_COMMAND, e.g. BROKER_ID_COMMAND: "hostname | awk -F'-' '{print $2}'"
If you don’t specify a broker id in your docker-compose file, it will automatically be generated (see https://issues.apache.org/jira/browse/KAFKA-1070). This allows scaling up and down. In this case it is recommended to use the --no-recreate
option of docker-compose to ensure that containers are not re-created and thus keep their names and ids.
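For example, assuming the Kafka service is named kafka in your compose file, scaling without re-creating the existing containers could look like this:

```shell
# Scale to 3 brokers; --no-recreate keeps existing containers
# (and therefore their names and auto-generated broker ids) intact
docker-compose up -d --no-recreate --scale kafka=3
```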
Automatically create topics
If you want to have kafka-docker automatically create topics in Kafka during creation, a KAFKA_CREATE_TOPICS environment variable can be added in docker-compose.yml. Here is an example snippet from docker-compose.yml:
environment:
  KAFKA_CREATE_TOPICS: "Topic1:1:3,Topic2:1:1:compact"
Topic1 will have 1 partition and 3 replicas; Topic2 will have 1 partition, 1 replica, and a cleanup.policy set to compact.
Advertised hostname
You can configure the advertised hostname in different ways:
- Explicitly, using KAFKA_ADVERTISED_HOST_NAME
- Via a command, using HOSTNAME_COMMAND, e.g. HOSTNAME_COMMAND: "route -n | awk '/UG[ \t]/{print $$2}'"
When using commands, make sure you review the “Variable Substitution” section in https://docs.docker.com/compose/compose-file/.
If KAFKA_ADVERTISED_HOST_NAME is specified, it takes precedence over HOSTNAME_COMMAND.
For AWS deployment, you can use the Metadata service to get the container host’s IP:
HOSTNAME_COMMAND=wget -t3 -T2 -qO- http://169.254.169.254/latest/meta-data/local-ipv4
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
JMX
For monitoring purposes, you may wish to configure JMX. In addition to the standard JMX parameters, problems can arise from the underlying RMI protocol used to connect:
- java.rmi.server.hostname - the interface to bind the listening port to
- com.sun.management.jmxremote.rmi.port - the port used to service RMI requests
For example, to connect to a Kafka broker running locally (assuming port 1099 is exposed):
KAFKA_JMX_OPTS: "-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=127.0.0.1 -Dcom.sun.management.jmxremote.rmi.port=1099"
JMX_PORT: 1099
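With those settings in place, a JMX client on the host can attach to the exposed port, for example with JConsole:

```shell
jconsole 127.0.0.1:1099
```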
Spring Boot + Kafka
Then grab the spring-kafka JAR and all of its dependencies - the easiest way to do that is to declare a dependency in your build tool, e.g. for Maven:
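The dependency block was not preserved here; declaring spring-kafka in pom.xml looks like this (when using the Spring Boot parent POM the version is managed for you; otherwise add an explicit version element):

```xml
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
```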
Using plain Java to send and receive a message:
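The original listing is not preserved; a sketch using spring-kafka’s core classes follows (the topic name, group id, and broker address are assumptions, and a broker must actually be running at that address for the program to complete):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CountDownLatch;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.listener.KafkaMessageListenerContainer;
import org.springframework.kafka.listener.MessageListener;

public class PlainJavaKafkaExample {

    public static void main(String[] args) throws Exception {
        String topic = "test-topic";        // assumed topic name
        String brokers = "localhost:9092";  // assumed broker address

        // Consumer side: a listener container that waits for one message
        Map<String, Object> consumerProps = new HashMap<>();
        consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers);
        consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        CountDownLatch latch = new CountDownLatch(1);
        ContainerProperties containerProps = new ContainerProperties(topic);
        containerProps.setMessageListener((MessageListener<String, String>) record -> {
            System.out.println("received: " + record.value());
            latch.countDown();
        });
        KafkaMessageListenerContainer<String, String> container =
                new KafkaMessageListenerContainer<>(
                        new DefaultKafkaConsumerFactory<>(consumerProps), containerProps);
        container.start();

        // Producer side: send one message via KafkaTemplate
        Map<String, Object> producerProps = new HashMap<>();
        producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers);
        producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        KafkaTemplate<String, String> template =
                new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(producerProps));
        template.send(topic, "hello, Kafka");
        template.flush();

        latch.await();   // block until the listener receives the message
        container.stop();
    }
}
```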
Maven will download the needed dependencies, compile the code, and run the unit test case. The result should be a successful build, with logs showing the message being sent and received.
Related articles
CD pipeline examples
Codefresh YAML for pipeline definitions
Creating pipelines
How Codefresh pipelines work