Alejandro Sosa
System Design - Building a Scalable Real-Time Blog Platform with Kafka and Cassandra

Modern web applications must be scalable and responsive in real time. Two technologies that excel in these areas are Apache Kafka and Apache Cassandra. This post explores how to integrate them to build a robust blog platform, driven by data access patterns and designed for future scalability and analytics.


Understanding the Components

Apache Kafka

  • Kafka is a distributed event-streaming platform for building real-time data pipelines and streaming applications.
  • It excels at handling high-throughput, low-latency data feeds.
  • Acts as a real-time messaging system, decoupling data producers from consumers.

Apache Cassandra

  • Cassandra is a distributed NoSQL database designed for handling large amounts of data across many servers.
  • Provides high availability with no single point of failure.
  • Optimized for write-heavy workloads and linear scalability.

Modeling Data Based on Access Patterns

---
title: Blog example
---
erDiagram
    User {
        string firstName
        string lastName
        string userName
        string email
    }
    Post {
        string title
        string content
        int likes
    }
    Comment {
        string content
        int likes
    }
    Category {
        string name
    }
    User only one to zero or more Post : has
    Post one or more to one or more Category : in
    User only one to zero or more Comment : makes
    Post only one to zero or more Comment : has   

Mermaid ER Diagram - Blog Post Example

In Cassandra, data modeling starts with understanding data access patterns, which are essentially the questions your application needs to answer efficiently. Unlike traditional relational databases, Cassandra encourages designing tables around queries to optimize performance.
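To make query-first design concrete, the sketch below models a `posts_by_user` table as an in-memory stand-in, keyed the way Cassandra would partition it. The CQL in the comments and the column names are assumptions for illustration, not schemas taken from this post.

```go
package main

import (
	"fmt"
	"sort"
)

// Hypothetical CQL this sketch mirrors (an assumption, not from the article):
//
//   CREATE TABLE posts_by_user (
//       user_id    text,
//       created_at timestamp,
//       post_id    text,
//       title      text,
//       PRIMARY KEY ((user_id), created_at)
//   ) WITH CLUSTERING ORDER BY (created_at DESC);
//
// The partition key (user_id) answers "all posts by a specific user"
// with a single-partition read; created_at clusters rows newest-first.

type Post struct {
	PostID    string
	Title     string
	CreatedAt int64 // unix seconds, stands in for a timestamp column
}

// PostsByUser is an in-memory stand-in for the table: one map entry
// per partition, a slice of rows per partition.
type PostsByUser map[string][]Post

// Insert appends a row to the user's partition.
func (t PostsByUser) Insert(userID string, p Post) {
	t[userID] = append(t[userID], p)
}

// Query returns the user's posts newest-first, mimicking the
// clustering order a real query would return rows in.
func (t PostsByUser) Query(userID string) []Post {
	rows := append([]Post(nil), t[userID]...)
	sort.Slice(rows, func(i, j int) bool { return rows[i].CreatedAt > rows[j].CreatedAt })
	return rows
}

func main() {
	table := PostsByUser{}
	table.Insert("alice", Post{PostID: "p1", Title: "Hello", CreatedAt: 100})
	table.Insert("alice", Post{PostID: "p2", Title: "Kafka + Cassandra", CreatedAt: 200})
	for _, p := range table.Query("alice") {
		fmt.Println(p.PostID, p.Title)
	}
}
```

The point of the sketch: the table's primary key is chosen from the question ("posts by user"), so the query never scans across partitions.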

Identifying Key Questions

We will walk through the flow of designing the insertion of a post.

Here are some critical questions for our blog platform that dictate the table designs:

  1. How to retrieve all posts by a specific user?
  2. How to retrieve all posts within a specific category?
  3. How to count the total number of posts in each category?
  4. How to track a user's activity and engagement over time?
  5. How to maintain user-specific post counts?
  6. How to manage active posts and their categories?
  7. How to count active posts in each category?
  8. How to log user activities by category?
  9. How to count user activities overall and by category?
  10. How to log and count category-specific activities?

Mapping Questions to Tables

For each question, we design tables to provide efficient answers:

  1. Posts by User

    • Table: posts_by_user
    • Stores posts keyed by user ID for quick retrieval of a user's posts.
  2. Posts by Category

    • Table: posts_by_category
    • Indexes posts by category for efficient category-based queries.
  3. Post Counts by Category

    • Table: posts_count
    • Table: post_count_by_category
    • Maintains counts of posts overall and per category for analytics.
  4. User Activity Tracking

    • Table: user_activity
    • Table: user_activity_by_category
    • Logs user actions and behaviors over time.
  5. User Post Counters

    • Table: user_posts_count
    • Table: user_post_count_by_category
    • Maintains counts of user posts overall and per category.
  6. Active Posts Management

    • Table: active_posts
    • Table: user_active_posts
    • Table: user_active_posts_by_category
    • Table: active_posts_by_category
    • Keeps track of currently active posts and categorizes them accordingly.
  7. Active Post Counters

    • Table: active_posts_count
    • Table: active_posts_count_by_category
    • Maintains counts of active posts overall and per category.
  8. User Active Post Counters

    • Table: user_active_posts_count
    • Table: user_active_posts_count_by_category
    • Maintains counts of a user's active posts overall and per category.
  9. User Activity Counters

    • Table: user_activity_count
    • Table: user_activity_count_by_category
    • Maintains counts of user activities overall and per category.
  10. Category Activity Logs

    • Table: category_activity
    • Records activities within each category for analysis.
  11. Category Activity Counters

    • Table: category_activity_count
    • Maintains counts of activities per category.

This comprehensive mapping results in 20 tables, each optimized to answer specific queries essential for the blog platform's functionality and scalability.
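Because a post can belong to several categories, a single new post fans out into several denormalized rows: one in posts_by_user plus one in posts_by_category per category. A minimal sketch of that fan-out as a pure function (table and field names follow the list above; the `Write` struct is an illustrative stand-in for statements in a batch):

```go
package main

import "fmt"

// Write represents one row destined for one Cassandra table; in a
// real service each Write would become a statement in a gocql batch.
type Write struct {
	Table        string
	PartitionKey string
	PostID       string
}

// FanOutNewPost computes every row a single new post produces:
// one row in posts_by_user plus one row in posts_by_category per
// category -- the denormalization that makes both queries cheap.
func FanOutNewPost(userID, postID string, categories []string) []Write {
	writes := []Write{{Table: "posts_by_user", PartitionKey: userID, PostID: postID}}
	for _, c := range categories {
		writes = append(writes, Write{Table: "posts_by_category", PartitionKey: c, PostID: postID})
	}
	return writes
}

func main() {
	for _, w := range FanOutNewPost("alice", "p1", []string{"kafka", "cassandra"}) {
		fmt.Printf("%s[%s] <- %s\n", w.Table, w.PartitionKey, w.PostID)
	}
}
```

This is the trade Cassandra asks you to make: extra writes at insert time in exchange for single-partition reads at query time.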


High-Level Architecture Overview

To visualize how all the components interact, let's examine the system's high-level architecture using a C4 Container diagram.

Components of the System

  • User: The end-user interacting with the blog platform via web or mobile applications.
  • API Gateway: Manages incoming HTTP requests, handling routing, authentication, and authorization.
  • Write Service: Handles write operations such as creating posts and comments, persisting data to Cassandra, and producing events to Kafka.
  • Read Service: Manages read operations, retrieving data from Cassandra based on various queries.
  • Kafka Cluster: Facilitates asynchronous communication by streaming events between services.
  • Cassandra Cluster: Serves as the main data store for user data, posts, comments, and activity logs.
  • Analytics Processor: Consumes events from Kafka to perform analytics and updates Cassandra with aggregated data.

Architecture Containers Diagram

@startuml
!include https://raw.githubusercontent.com/plantuml-stdlib/C4-PlantUML/master/C4_Context.puml
!include https://raw.githubusercontent.com/plantuml-stdlib/C4-PlantUML/master/C4_Container.puml

LAYOUT_TOP_DOWN()

Person(user, "User", "Interacts with the blog platform via web or mobile.")

System_Boundary(blogPlatform, "Blog Platform") {
    Container(apiGateway, "API Gateway", "Nginx", "Routes incoming HTTP requests to appropriate services and handles authentication and authorization.")

    Container(writeService, "Write Service", "Golang", "Handles all write operations such as creating posts, comments, and logging user activities. Persists data to Cassandra and produces events to Kafka.")

    Container(readService, "Read Service", "Golang", "Handles all read operations like retrieving posts, comments, and personalized recommendations by querying Cassandra.")

    Container(kafka, "Apache Kafka", "Kafka Cluster", "Facilitates asynchronous communication between services by storing and managing event streams.")

    Container(cassandra, "Cassandra Cluster", "Apache Cassandra", "Primary data store for user data, posts, comments, and analytics data. Utilizes separate keyspaces for operational and analytics data.")

    Container(analyticsProcessor, "Analytics Processor", "Apache Spark", "Consumes events from Kafka, processes user activity data, and updates analytics tables in Cassandra.")

    Container(newPostConsumer, "NewPostConsumer", "Golang", "Processes new_post events and inserts data into relevant Cassandra tables.")
    Container(newPostCountersConsumer, "NewPostCountersConsumer", "Golang", "Updates post counts in Cassandra based on new_post events.")
    Container(userPostCountersConsumer, "UserPostCountersConsumer", "Golang", "Updates user post counts in Cassandra based on new_post events.")
    Container(newActivePostConsumer, "NewActivePostConsumer", "Golang", "Handles active post data insertion into Cassandra.")
    Container(logUserActivityConsumer, "LogUserActivityConsumer", "Golang", "Logs user activities into Cassandra.")
    Container(logCategoryActivityConsumer, "LogCategoryActivityConsumer", "Golang", "Logs category-specific activities into Cassandra.")
}

Rel(user, apiGateway, "Uses")
Rel(apiGateway, writeService, "Routes write requests to")
Rel(apiGateway, readService, "Routes read requests to")
Rel(writeService, cassandra, "Writes data to")
Rel(writeService, kafka, "Produces events to")
Rel(kafka, newPostConsumer, "Delivers new_post events to")
Rel(kafka, newPostCountersConsumer, "Delivers new_post_counters events to")
Rel(kafka, userPostCountersConsumer, "Delivers user_post_counters events to")
Rel(kafka, newActivePostConsumer, "Delivers new_active_post events to")
Rel(kafka, logUserActivityConsumer, "Delivers log_user_activity events to")
Rel(kafka, logCategoryActivityConsumer, "Delivers log_category_activity events to")
Rel(newPostConsumer, cassandra, "Inserts data into posts_by_user and posts_by_category")
Rel(newPostCountersConsumer, cassandra, "Updates posts_count and post_count_by_category")
Rel(userPostCountersConsumer, cassandra, "Updates user_posts_count and user_post_count_by_category")
Rel(newActivePostConsumer, cassandra, "Inserts data into active_posts and related tables")
Rel(logUserActivityConsumer, cassandra, "Inserts data into user_activity and related tables")
Rel(logCategoryActivityConsumer, cassandra, "Inserts data into category_activity and related tables")
Rel(kafka, analyticsProcessor, "Delivers events for analytics to")
Rel(analyticsProcessor, cassandra, "Updates analytics tables in")
Rel(readService, cassandra, "Queries data from")
@enduml

Blog Example - C4 Container Diagram


Data Flow Example: Creating a New Post

Let's walk through the process of a user creating a new post and see how Kafka and Cassandra interact in this scenario.

Step-by-Step Process

  1. User Submits a Post

    • The user creates a new post via the platform's interface.
    • The request is sent to the API Gateway.
  2. API Gateway Routes the Request

    • Validates the request and forwards it to the Write Service.
  3. Write Service Processes the Post

    • Handles business logic, such as validating the content.
    • Inserts data into Cassandra:
      • Adds the post to the posts_by_user and posts_by_category tables.
      • Each insertion accounts for all categories associated with the post.
  4. Event Production to Kafka

    • The Write Service produces events to Kafka topics such as new_post.
    • Events contain information about the new post and its associated categories.
  5. Kafka Distributes Events

    • Kafka efficiently distributes events to all subscribed consumers.
  6. Consumers Process Events

    • NewPostConsumer:
      • Inserts data into posts_by_user and posts_by_category.
      • Produces additional events to topics like new_post_counters, user_post_counters, etc.
    • NewPostCountersConsumer:
      • Updates posts_count and post_count_by_category tables.
    • UserPostCountersConsumer:
      • Updates user_posts_count and user_post_count_by_category tables.
    • NewActivePostConsumer:
      • Inserts data into active_posts and related tables.
    • LogUserActivityConsumer:
      • Logs user activities into user_activity and related tables.
    • LogCategoryActivityConsumer:
      • Logs category-specific activities into category_activity and related tables.
  7. Analytics Processor Updates Aggregates

    • Consumes events from Kafka to perform real-time analytics.
    • Updates aggregate data in Cassandra tables like user_activity_count.
  8. Read Service Serves Data to Users

    • When other users request data, the Read Service queries Cassandra.
    • Retrieves posts, counts, and activity logs efficiently via designed tables.
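The steps above can be sketched end-to-end in miniature: a buffered Go channel stands in for the Kafka topic, a mutex-guarded struct stands in for the Cassandra counter tables, and the write service never touches the counters directly. Everything here (event fields, the channel broker, the map-based store) is an illustrative stand-in, not real sarama or gocql code.

```go
package main

import (
	"fmt"
	"sync"
)

// NewPostEvent is the payload the Write Service would publish to the
// "new_post" topic (field names are assumptions).
type NewPostEvent struct {
	UserID     string
	PostID     string
	Categories []string
}

// Store stands in for the Cassandra counter tables.
type Store struct {
	mu                  sync.Mutex
	postsCount          int
	postCountByCategory map[string]int
}

func (s *Store) apply(ev NewPostEvent) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.postsCount++ // mirrors: UPDATE posts_count SET count = count + 1 ...
	for _, c := range ev.Categories {
		s.postCountByCategory[c]++ // one counter row per category
	}
}

// run wires producer -> topic -> consumer and returns the final store.
func run(events []NewPostEvent) *Store {
	topic := make(chan NewPostEvent, len(events)) // stand-in for Kafka
	store := &Store{postCountByCategory: map[string]int{}}

	// Write Service: produce events, never updating counters directly.
	for _, ev := range events {
		topic <- ev
	}
	close(topic)

	// NewPostCountersConsumer: consume and update the counter tables.
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		for ev := range topic {
			store.apply(ev)
		}
	}()
	wg.Wait()
	return store
}

func main() {
	s := run([]NewPostEvent{
		{UserID: "alice", PostID: "p1", Categories: []string{"kafka"}},
		{UserID: "bob", PostID: "p2", Categories: []string{"kafka", "cassandra"}},
	})
	fmt.Println(s.postsCount, s.postCountByCategory["kafka"])
}
```

The decoupling is the point: the producer's job ends once the event is on the topic, and the counters converge asynchronously.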

Sequence Diagrams Explained

To further illustrate the interactions, let's explore how the sequence diagrams fit into the data flow example.

1. New Post Creation

sequenceDiagram
    participant Client as HTTP Client
    participant Server as HTTP Server
    participant Kafka as Kafka Cluster
    participant Consumer as NewPostConsumer
    participant DB as Cassandra Database

    Client->>Server: POST /api/v1/posts (new post data)
    Server->>Kafka: Produce message to "new_post" topic
    Kafka->>Consumer: Deliver message to consumer group
    Consumer->>DB: Batch insert into posts_by_user and posts_by_category tables
    Consumer->>Kafka: Produce message to "new_post_counters" topic
    Consumer->>Kafka: Produce message to "user_post_counters" topic
    Consumer->>Kafka: Produce message to "new_active_post" topic
    Consumer->>Kafka: Produce message to "log_user_activity" topic
    Consumer->>Kafka: Produce message to "log_category_activity" topic

New Post - Sequence Diagram
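The consumer's re-publish step reduces to a pure function: one consumed new_post record yields one outbound record per downstream topic in the diagram. The `Message` struct and the choice of partitioning key are illustrative assumptions.

```go
package main

import "fmt"

// Message is a minimal stand-in for a Kafka record.
type Message struct {
	Topic string
	Key   string // partitioning key; keying by user is an assumption
	Value string
}

// FanOut maps one consumed "new_post" record to the five follow-up
// records the NewPostConsumer produces in the sequence diagram.
func FanOut(postID, userID string) []Message {
	topics := []string{
		"new_post_counters",
		"user_post_counters",
		"new_active_post",
		"log_user_activity",
		"log_category_activity",
	}
	out := make([]Message, 0, len(topics))
	for _, t := range topics {
		out = append(out, Message{Topic: t, Key: userID, Value: postID})
	}
	return out
}

func main() {
	for _, m := range FanOut("p1", "alice") {
		fmt.Println(m.Topic, m.Key, m.Value)
	}
}
```

Keeping this mapping pure makes the consumer trivial to unit-test without a running Kafka cluster.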


2. Post Counters Update

sequenceDiagram
    participant Kafka as Kafka Cluster
    participant Consumer as NewPostCountersConsumer
    participant DB as Cassandra Database

    Kafka->>Consumer: Deliver message to consumer group
    Consumer->>DB: Batch update posts_count and post_count_by_category tables

New Post Counter Consumer - Sequence Diagram


3. User Post Counters Update

sequenceDiagram
    participant Kafka as Kafka Cluster
    participant Consumer as UserPostCountersConsumer
    participant DB as Cassandra Database

    Kafka->>Consumer: Deliver message to consumer group
    Consumer->>DB: Batch update user_posts_count and user_post_count_by_category tables

User Post Counters Consumer - Sequence Diagram


4. New Active Post Creation

sequenceDiagram
    participant Kafka as Kafka Cluster
    participant Consumer as NewActivePostConsumer
    participant DB as Cassandra Database

    Kafka->>Consumer: Deliver message to consumer group
    Consumer->>DB: Batch insert into active_posts, user_active_posts, user_active_posts_by_category, and active_posts_by_category tables
    Consumer->>Kafka: Produce message to "new_active_post_counters" topic
    Consumer->>Kafka: Produce message to "user_active_post_counters" topic

New Active Post Consumer - Sequence Diagram


5. New Active Post Counters Update

sequenceDiagram
    participant Kafka as Kafka Cluster
    participant Consumer as NewActivePostCountersConsumer
    participant DB as Cassandra Database

    Kafka->>Consumer: Deliver message to consumer group
    Consumer->>DB: Batch update active_posts_count and active_posts_count_by_category tables

New Active Post Counters Consumer - Sequence Diagram


6. User Active Post Counters Update

sequenceDiagram
    participant Kafka as Kafka Cluster
    participant Consumer as UserActivePostCountersConsumer
    participant DB as Cassandra Database

    Kafka->>Consumer: Deliver message to consumer group
    Consumer->>DB: Batch update user_active_posts_count and user_active_posts_count_by_category tables

User Active Post Counters Consumer - Sequence Diagram


7. Log User Activity Creation

sequenceDiagram
    participant Kafka as Kafka Cluster
    participant Consumer as LogUserActivityConsumer
    participant DB as Cassandra Database

    Kafka->>Consumer: Deliver message to consumer group
    Consumer->>DB: Batch insert into user_activity and user_activity_by_category tables
    Consumer->>Kafka: Produce message to "log_user_activity_counters" topic

Log User Activity Consumer - Sequence Diagram


8. Log User Activity Counters Update

sequenceDiagram
    participant Kafka as Kafka Cluster
    participant Consumer as LogUserActivityCountersConsumer
    participant DB as Cassandra Database

    Kafka->>Consumer: Deliver message to consumer group
    Consumer->>DB: Batch update user_activity_count and user_activity_count_by_category tables

Log User Activity Counters Consumer - Sequence Diagram


9. Log Category Activity Creation

sequenceDiagram
    participant Kafka as Kafka Cluster
    participant Consumer as LogCategoryActivityConsumer
    participant DB as Cassandra Database

    Kafka->>Consumer: Deliver message to consumer group
    Consumer->>DB: Batch insert into the category_activity table
    Consumer->>Kafka: Produce message to "log_category_activity_counters" topic

Log Category Activity Consumer - Sequence Diagram


10. Log Category Activity Counters Update

sequenceDiagram
    participant Kafka as Kafka Cluster
    participant Consumer as LogCategoryActivityCountersConsumer
    participant DB as Cassandra Database

    Kafka->>Consumer: Deliver message to consumer group
    Consumer->>DB: Batch update the category_activity_count table

Log Category Activity Counters Consumer - Sequence Diagram


These sequence diagrams demonstrate how each Kafka topic interacts with its respective consumer and how data flows into Cassandra. This modular approach ensures that each component has a single responsibility, enhancing maintainability and scalability.


Benefits of This Architecture

Scalability

  • Kafka and Cassandra are both designed to scale horizontally.
  • The architecture handles increased load by adding more nodes to the Kafka cluster and Cassandra ring.

Real-Time Processing

  • Kafka's event streaming enables real-time data processing.
  • Consumers can react to events as they occur, providing up-to-date information.

High Availability

  • Cassandra's replication across multiple nodes ensures no single point of failure.
  • Kafka's distributed nature provides fault tolerance in message processing.

Optimized Queries

  • Designing tables around specific queries allows Cassandra to retrieve data efficiently.
  • Denormalized data models reduce the need for complex joins and enable fast reads.

Key Takeaways

  • Data Access Patterns Drive Design: Start by identifying the questions your application needs to answer. Design your Cassandra tables to provide efficient responses to these queries.

  • Asynchronous Processing with Kafka: Use Kafka to decouple services and handle event-driven processing. This ensures that write operations don't block read operations and vice versa.

  • Denormalization for Performance: Embrace denormalization in Cassandra to optimize read performance. Store data in the format that best suits your query patterns.

  • Scalability and Resilience: Build with technologies that support horizontal scalability and fault tolerance to future-proof your application.

  • Monitoring and Maintenance: Implement robust monitoring for both Kafka and Cassandra to maintain performance and quickly address issues.


Conclusion

By integrating Apache Kafka and Apache Cassandra, you can build a blog platform that is both scalable and capable of real-time data processing. The key lies in modeling your data based on the application's access patterns and leveraging the strengths of both technologies to handle high-throughput workloads. Leveraging advanced data modeling techniques, as demonstrated in Advanced Data Modeling in Apache Cassandra by DataStax Developers, ensures that your Cassandra database is structured efficiently to support diverse query requirements. Additionally, Cassandra builds upon the foundational concepts introduced in Amazon's Dynamo, offering a more flexible approach without the constraints of single table design inherent in DynamoDB. This architectural synergy not only meets current demands but also provides a solid foundation for future analytics and feature expansions.


Remember, the success of such a system hinges on thoughtful design and a clear understanding of how each component interacts within the architecture. By focusing on the core questions your application needs to answer, you can tailor your data models and services to work harmoniously, delivering a responsive and reliable user experience.
