Kafka Quickstart

Getting Started on Kafka (Windows)

1. Start up the ZooKeeper server

Open a command prompt in C:\kafka and enter this command:

.\bin\windows\zookeeper-server-start.bat .\config\zookeeper.properties


ZooKeeper is now running on port 2181.
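If you want to double-check that ZooKeeper is listening, one quick way (assuming netstat is available on your machine) is to look for the port in another command prompt:

netstat -an | findstr 2181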

2. Start the Kafka server

Open a new command prompt and start the Kafka server:

.\bin\windows\kafka-server-start.bat .\config\server.properties

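To confirm the broker came up and is reachable on port 9092, you can query it with the kafka-broker-api-versions tool that ships with Kafka (run from C:\kafka):

.\bin\windows\kafka-broker-api-versions.bat --bootstrap-server localhost:9092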

3. Create a topic

Open a new command prompt in C:\kafka\bin\windows:

kafka-topics.bat --create --bootstrap-server localhost:9092 --topic mytopic

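To verify the topic was created, you can list all topics or describe the new one from the same prompt:

kafka-topics.bat --list --bootstrap-server localhost:9092
kafka-topics.bat --describe --topic mytopic --bootstrap-server localhost:9092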

4. Create a Producer Console

The general form of the command (the .bat and .sh variants take the same arguments) is:

kafka-console-producer.sh --broker-list <BROKER_LIST> --topic <TOPIC_NAME>

On newer Kafka versions --broker-list is deprecated in favour of --bootstrap-server, but both still work.

Open another new command prompt in C:\kafka\bin\windows:

kafka-console-producer.bat --broker-list localhost:9092 --topic mytopic

You should see a > prompt when the producer is ready.
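If you also want to send keyed messages, the console producer supports this through --property flags; a minimal sketch where key and value are separated by a colon:

kafka-console-producer.bat --broker-list localhost:9092 --topic mytopic --property parse.key=true --property key.separator=:

With this running, typing user1:hello publishes a message with key user1 and value hello.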

5. Create a Consumer Console

The general form is:

kafka-console-consumer.sh --bootstrap-server <BOOTSTRAP_SERVER> --topic <TOPIC_NAME>

Open another new command prompt in C:\kafka\bin\windows:

kafka-console-consumer.bat --topic mytopic --bootstrap-server localhost:9092 --from-beginning
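The --from-beginning flag replays every message already stored in the topic instead of only new ones. If you want to see message keys too, or consume as part of a named consumer group, a sketch along these lines should work (mygroup is just an example name):

kafka-console-consumer.bat --topic mytopic --bootstrap-server localhost:9092 --from-beginning --group mygroup --property print.key=true --property key.separator=: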

6. Paste Data to Producer and Observe Consumer

Sample data for use:

{"Name":"Kenneth","Age":"35","Gender":"Male"}
{"Name":"Gemma","Age":"25","Gender":"Female"}
{"Name":"Amy","Age":"15","Gender":"Male"}

Once you paste a row of data into the producer terminal, each line appears on the consumer terminal almost instantaneously.


In Git Bash / Linux

Similar commands, collated from the official Kafka quickstart:

# Download and extract
tar -xzf kafka_2.13-3.7.0.tgz
cd kafka_2.13-3.7.0

# Start the ZooKeeper service
bin/zookeeper-server-start.sh config/zookeeper.properties
# Or start it in the background and hide the logs
bin/zookeeper-server-start.sh config/zookeeper.properties > /dev/null 2>&1 &
# Verify that ZooKeeper is indeed running
ps -ef | grep zookeeper

# Start the Kafka broker service in another terminal
bin/kafka-server-start.sh config/server.properties

# Create a topic in another terminal
bin/kafka-topics.sh --create --topic quickstart-events --bootstrap-server localhost:9092

# Show details of a topic
bin/kafka-topics.sh --describe --topic quickstart-events --bootstrap-server localhost:9092
Topic: quickstart-events        TopicId: z_zqwwkiT6ueXVJxi3EAdA PartitionCount: 1       ReplicationFactor: 1     Configs:
        Topic: quickstart-events        Partition: 0    Leader: 0       Replicas: 0     Isr: 0

# Run Producer Console in another terminal
bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092

# Run Consumer Console in another terminal
bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server localhost:9092

# Write events to the topic from the producer console
This is my first message
This is my second message

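When you are done, you can stop everything and optionally wipe the local data so the next run starts clean. A sketch, assuming the default log directories under /tmp:

# Stop the broker and ZooKeeper (or just Ctrl+C in their terminals)
bin/kafka-server-stop.sh
bin/zookeeper-server-stop.sh

# Optionally delete local state
rm -rf /tmp/kafka-logs /tmp/zookeeper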

Demonstrating Replication Across 3 Brokers

# Create config files for 2 additional brokers
cp config/server.properties config/server-1.properties
cp config/server.properties config/server-2.properties
ls -l config/server*
Enter fullscreen mode Exit fullscreen mode


Update these 3 properties in both the server-1 and server-2 files:

# SERVER-1
# id of the broker
broker.id=1
# broker port
listeners=PLAINTEXT://:9093
# log files directories 
log.dirs=/tmp/kafka-logs-1

# SERVER-2
# id of the broker
broker.id=2
# broker port
listeners=PLAINTEXT://:9094
# log files directories 
log.dirs=/tmp/kafka-logs-2
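If you prefer not to edit the files by hand, the same three changes can be scripted. A rough sketch, assuming the stock server.properties defaults (broker.id=0, log.dirs=/tmp/kafka-logs, and the listeners line still commented out):

# server-1: broker.id=1, port 9093, separate log dir
sed -i 's/^broker.id=0$/broker.id=1/' config/server-1.properties
sed -i 's|^log.dirs=/tmp/kafka-logs$|log.dirs=/tmp/kafka-logs-1|' config/server-1.properties
echo 'listeners=PLAINTEXT://:9093' >> config/server-1.properties

# server-2: broker.id=2, port 9094, separate log dir
sed -i 's/^broker.id=0$/broker.id=2/' config/server-2.properties
sed -i 's|^log.dirs=/tmp/kafka-logs$|log.dirs=/tmp/kafka-logs-2|' config/server-2.properties
echo 'listeners=PLAINTEXT://:9094' >> config/server-2.properties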

Now start all 3 brokers, each in its own terminal:

bin/kafka-server-start.sh config/server.properties
bin/kafka-server-start.sh config/server-1.properties
bin/kafka-server-start.sh config/server-2.properties
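If you don't want to keep three terminals open, the brokers can also be started in the background, the same way ZooKeeper was earlier:

bin/kafka-server-start.sh config/server.properties > /dev/null 2>&1 &
bin/kafka-server-start.sh config/server-1.properties > /dev/null 2>&1 &
bin/kafka-server-start.sh config/server-2.properties > /dev/null 2>&1 &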

Now create a topic with 2 partitions and a replication factor of 3 across the 3 brokers:

bin/kafka-topics.sh --create --topic TestTopicXYZ --partitions 2 --replication-factor 3 --bootstrap-server localhost:9092,localhost:9093,localhost:9094

Lastly, describe the topic to examine the partition assignment:

bin/kafka-topics.sh --describe --topic TestTopicXYZ --bootstrap-server localhost:9092,localhost:9093,localhost:9094

Topic: TestTopicXYZ     TopicId: 65thxHOeTIGSJJ5L_jl-XQ PartitionCount: 2       ReplicationFactor: 3    Configs:
        Topic: TestTopicXYZ     Partition: 0    Leader: 0       Replicas: 0,2,1 Isr: 0,2,1
        Topic: TestTopicXYZ     Partition: 1    Leader: 2       Replicas: 2,1,0 Isr: 2,1,0


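To see the fault tolerance in action, stop one of the brokers and describe the topic again: the stopped broker drops out of the Isr list, and if it was a leader for a partition, another replica takes over. A sketch (replace <PID> with whatever process id your broker shows):

# Find and stop the broker running server-1.properties
ps -ef | grep server-1.properties
kill <PID>

# Describe the topic again and compare Leader/Isr with the output above
bin/kafka-topics.sh --describe --topic TestTopicXYZ --bootstrap-server localhost:9092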

What is Kafka?

  1. Publish/Subscribe systems

With publish/subscribe systems, each event can be processed by multiple consumers listening to the topic, so several consumers can all receive the same messages from the producer (see the sketch after this list).

Pub/Sub systems also introduce the concept of a topic. You can have multiple topics (to categorize your messages), multiple producers writing to each topic, and multiple consumers reading from each topic.

  2. Distributed
    Kafka is a distributed messaging system. A Kafka cluster is made up of more than one Kafka server, and each Kafka server is referred to as a broker. Because a cluster spans multiple brokers, Kafka is a distributed application.

  3. Fault-Tolerant
    In a Kafka cluster, messages are replicated across multiple brokers. A message published to Broker 1 is also replicated to, say, Broker 3, so it is not lost even if Broker 1 goes down.

  4. Language Agnostic
    The data transferred (called messages) are byte arrays, so you can use JSON or a binary format like Avro. Messages are immutable and have a timestamp, a value, and optional key/headers.
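To see the publish/subscribe fan-out from point 1 yourself, start two console consumers on the same topic but in different consumer groups (the group names below are just examples); both will receive every message a producer writes to the topic:

# Terminal A
bin/kafka-console-consumer.sh --topic quickstart-events --bootstrap-server localhost:9092 --group groupA

# Terminal B
bin/kafka-console-consumer.sh --topic quickstart-events --bootstrap-server localhost:9092 --group groupB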

Kafka Architecture

A Kafka ecosystem can have multiple producers/publishers and consists of several components:

  • Producer - Any service that pushes messages to Kafka is a producer.

  • Consumer - Any service that consumes (pulls) messages from Kafka is a consumer. There can be multiple consumer groups in a Kafka ecosystem, and every consumer belongs to a consumer group.

  • Kafka Brokers - The Kafka cluster is composed of a network of nodes called brokers. Each machine/instance/container running a Kafka process is called a broker.

In a Kafka cluster, the individual servers are referred to as brokers. Brokers handle:

  1. Receiving messages - accepting messages from producers
  2. Storing messages - storing them on disk, organized by topic; each message has a unique offset identifying it
  3. Serving messages - sending messages to consumer services when they request them

  • Topics - A Kafka server hosts multiple topics, and every topic can have multiple partitions. Messages in Kafka are stored in topics, which are the core structure for organizing and storing your messages. You can think of a topic as a log file that stores all the messages and maintains their order; new messages are appended to the end of the topic.
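Offsets and consumer groups can be inspected from the command line with the kafka-consumer-groups tool; for example, to see how far a group has read in each partition (replace groupA with whatever group name you used):

bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group groupA
# Output includes CURRENT-OFFSET, LOG-END-OFFSET and LAG per partition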

Offset Explorer

A UI tool for Kafka that helps visualize Kafka clusters, topics, offsets, and more.

Thank you!
