
E-Commerce Microservice

A learning-oriented, production-patterned microservices e-commerce platform in Go, featuring an API gateway, Kafka + RabbitMQ eventing, PostgreSQL replication, and Redis idempotency locks.

Jordan Marcelino, Software Engineer

If you’ve ever tried to learn microservices “properly,” you’ve probably hit the same wall: tutorials show endpoints, but rarely the messy, real-world patterns that actually make distributed systems survivable, such as event buses, idempotency, delayed workflows, replication, and observability.

Learn Go Microservices is a reference implementation of a simple e-commerce platform built in Go, designed to demonstrate production-grade patterns end to end: an API Gateway fronting multiple services, Kafka + RabbitMQ for event-driven workflows, PostgreSQL master–replica setups for CQRS-style reads/writes, and Redis for distributed locking.

Overview

  • What it is: A Go-based microservices e-commerce platform (Auth, Product, Order, Mail) behind an API Gateway.
  • Who it’s for: Engineers learning modern microservice patterns (eventing, CQRS, delayed messages, observability).
  • Primary value: A cohesive, runnable codebase that demonstrates multiple “real patterns” working together.

Background

Microservices are easy to describe and hard to operationalize. The moment you split a system into services, you inherit new requirements: asynchronous workflows, eventual consistency, retries, idempotency, and debugging across service boundaries.

This project is positioned as a learning journey that intentionally includes the infrastructure and system-level concerns most demo apps avoid: message brokers, replication, metrics/traces/logs, and deployment topology.

The Problem

Building a microservices system that feels “real” requires satisfying multiple constraints simultaneously:

  • Coordination: services must react to events reliably without tight coupling.
  • Consistency: data and state transitions must remain correct under retries and partial failure.
  • Time-based workflows: reminders, expirations, and delayed side effects must be robust.
  • Operational visibility: you need metrics + traces + logs to understand distributed behavior.

Most examples cover one or two of these. This project aims to show them working together in a single system.

The Solution

At a high level, the system uses:

  • An API Gateway as the single entrypoint for clients.
  • Four core services (Auth, Product, Order, Mail) with clear responsibility boundaries.
  • Kafka for cross-service domain events (e.g., order/product events).
  • RabbitMQ for asynchronous tasks and delayed messaging (e.g., email verification, payment reminders, expirations).
  • PostgreSQL replication and pgpool-II to support CQRS-style patterns (read replicas).
  • Redis for distributed locking to enforce idempotency on request processing.
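
For reference, here is a minimal sketch of the Redis idempotency lock in Go, assuming the go-redis v9 client; the key prefix, placeholder value, and 30-second TTL are illustrative, not taken from the repo.

// Hypothetical sketch of the Redis idempotency lock; key name and TTL
// are illustrative, not taken from the repo.
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

// acquireIdempotencyLock returns true if this request ID has not been
// processed yet. SET NX is atomic, so concurrent duplicates lose the race.
func acquireIdempotencyLock(ctx context.Context, rdb *redis.Client, requestID string) (bool, error) {
	key := fmt.Sprintf("lock:%s", requestID)
	// NX: only set if the key does not exist; the TTL bounds stale locks.
	return rdb.SetNX(ctx, key, "processing", 30*time.Second).Result()
}

func main() {
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	ok, err := acquireIdempotencyLock(context.Background(), rdb, "req-123")
	if err != nil {
		panic(err)
	}
	if !ok {
		fmt.Println("duplicate request, skipping")
		return
	}
	fmt.Println("lock acquired, safe to process order")
}

Because SETNX either creates the key or fails atomically, a retried request with the same ID is rejected without any extra coordination between service instances.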

Core Features

  • API Gateway
    • JWT validation middleware
    • Rate limiting per service
    • Request/response transformation
    • Prometheus metrics collection
  • Auth Service
    • JWT-based authentication
    • Email verification with a 10-minute expiry
    • Anti-spam protection (1-minute cooldown)
    • Publishes verification events to RabbitMQ for Mail Service
  • Product Service
    • Product lifecycle management
    • Publishes product events to Kafka (create/update)
    • Real-time inventory synchronization via events
  • Order Service
    • Order creation with idempotency via Redis distributed locks
    • Delayed message scheduling for payment reminders (4h/12h/22h) and expiration (24h); see the delayed-publish sketch after this list
    • Event-driven workflows via Kafka and RabbitMQ
    • State transition management persisted in PostgreSQL
  • Mail Service
    • RabbitMQ consumer for async email processing
    • Template-based email rendering
    • Retry mechanism with exponential backoff
    • Development SMTP capture via MailHog
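
The Order Service’s delayed scheduling could be sketched as follows, assuming the rabbitmq_delayed_message_exchange plugin and the amqp091-go client; the exchange name, routing key, and payload are hypothetical, and a TTL + dead-letter-queue arrangement is a common alternative strategy.

// Hypothetical sketch of scheduling delayed payment reminders via the
// RabbitMQ delayed-message exchange plugin; names are illustrative.
package main

import (
	"context"
	"time"

	amqp "github.com/rabbitmq/amqp091-go"
)

func publishDelayed(ch *amqp.Channel, body []byte, delay time.Duration) error {
	return ch.PublishWithContext(context.Background(),
		"order.delayed",    // an exchange declared with type x-delayed-message
		"payment.reminder", // routing key
		false, false,
		amqp.Publishing{
			ContentType: "application/json",
			Body:        body,
			// The plugin reads the delay (in milliseconds) from this header.
			Headers: amqp.Table{"x-delay": delay.Milliseconds()},
		})
}

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		panic(err)
	}
	defer ch.Close()

	// Reminders at 4h/12h/22h and expiration at 24h, as in the Order Service.
	for _, d := range []time.Duration{4 * time.Hour, 12 * time.Hour, 22 * time.Hour, 24 * time.Hour} {
		if err := publishDelayed(ch, []byte(`{"order_id":"123"}`), d); err != nil {
			panic(err)
		}
	}
}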

Advanced (Production-Patterned Learning)

  • Event-driven architecture using both Kafka and RabbitMQ (distinct roles per broker)
  • CQRS-style database topology with master–replica replication and pgpool-II
  • Delayed workflow orchestration via a RabbitMQ delayed-message strategy
  • Full observability stack (Prometheus, Loki, Tempo, Grafana) wired via OpenTelemetry; a minimal tracer bootstrap is sketched after this list
  • Deployment topology including load balancing and clustered services (documented at a high level)
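
A minimal OpenTelemetry bootstrap in Go might look like the following; the OTLP endpoint and tracer name are placeholders, and the repo’s actual wiring may differ.

// Minimal OpenTelemetry tracer bootstrap, assuming an OTLP/gRPC collector
// (e.g., feeding Tempo). Endpoint and names are placeholders.
package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	ctx := context.Background()

	// Export spans over OTLP/gRPC to the collector endpoint
	// (OTEL_EXPORTER_OTLP_ENDPOINT is also honored by default).
	exporter, err := otlptracegrpc.New(ctx,
		otlptracegrpc.WithEndpoint("localhost:4317"),
		otlptracegrpc.WithInsecure(),
	)
	if err != nil {
		log.Fatal(err)
	}

	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exporter))
	defer func() { _ = tp.Shutdown(ctx) }()
	otel.SetTracerProvider(tp)

	// Any instrumented code now reports spans through this provider.
	tracer := otel.Tracer("order-service")
	_, span := tracer.Start(ctx, "create-order")
	span.End()
}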

Architecture

[Architecture diagram: API Gateway in front of the Auth, Product, Order, and Mail services, backed by Kafka, RabbitMQ, PostgreSQL (master–replica), and Redis.]

Tech Stack

  • Languages & Frameworks

    • Go 1.23+
    • Gin web framework
  • Data

    • PostgreSQL 16 with pgpool-II
    • Redis cluster (Bitnami-based)
  • Message Brokers

    • Kafka cluster (Bitnami-based)
    • RabbitMQ 4.0 cluster
  • Infra/DevOps

    • Docker Swarm (deployment-oriented topology)
    • HAProxy for load balancing
    • MailHog SMTP server for development
  • Observability

    • OpenTelemetry
    • Prometheus
    • Grafana
    • Loki
    • Tempo

Getting Started

Prerequisites

  • Docker + Docker Compose
  • GNU Make
  • Postman (to import the provided collection)
  • (Optional) Go 1.23+ if you plan to run services outside containers

Installation

  1. Clone the repository:

    git clone https://github.com/JordanMarcelino/learn-go-microservices.git
    cd learn-go-microservices
    
  2. Start the platform (via Make):

    make
    
  3. Verify services are running:

    docker compose ps
    
  4. Import the Postman collection (see postman/) and run API requests.

  5. Stop everything:

    make compose-down
    

Configuration

Best-practice placeholders (update with actual values from the repo):

  • JWT_SECRET: signing secret for JWT issuance/validation
  • DATABASE_URL (or DB_HOST, DB_PORT, DB_USER, DB_PASSWORD, DB_NAME): PostgreSQL connection
  • REDIS_ADDR (or host/port): Redis cluster endpoint
  • KAFKA_BROKERS: Kafka bootstrap servers
  • RABBITMQ_URL: RabbitMQ connection string
  • OTEL_EXPORTER_OTLP_ENDPOINT: OTLP collector endpoint for traces/metrics/logs
  • SMTP_HOST, SMTP_PORT: SMTP settings (MailHog in dev)
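
One typical way to consume these variables in Go is a small env-loading helper, sketched below; the defaults are illustrative and the repo’s actual names and fallbacks may differ.

// Sketch of reading the placeholder variables above with fallbacks.
package main

import (
	"fmt"
	"os"
	"strings"
)

func getenv(key, fallback string) string {
	if v, ok := os.LookupEnv(key); ok {
		return v
	}
	return fallback
}

type Config struct {
	JWTSecret    string
	DatabaseURL  string
	RedisAddr    string
	KafkaBrokers []string
	RabbitMQURL  string
	OTLPEndpoint string
}

func Load() Config {
	return Config{
		JWTSecret:    getenv("JWT_SECRET", ""),
		DatabaseURL:  getenv("DATABASE_URL", "postgres://localhost:5432/app"),
		RedisAddr:    getenv("REDIS_ADDR", "localhost:6379"),
		KafkaBrokers: strings.Split(getenv("KAFKA_BROKERS", "localhost:9092"), ","),
		RabbitMQURL:  getenv("RABBITMQ_URL", "amqp://guest:guest@localhost:5672/"),
		OTLPEndpoint: getenv("OTEL_EXPORTER_OTLP_ENDPOINT", "localhost:4317"),
	}
}

func main() {
	fmt.Printf("%+v\n", Load())
}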

Run

# Start the full stack
make

# Inspect containers
docker compose ps

# Tail logs (optional)
docker compose logs -f --tail=200

# Shutdown
make compose-down

Usage

Typical workflows to validate the system end-to-end:

  1. Register a user via the Gateway (Auth Service)

    • Expected behavior: user created as unverified, verification event published to RabbitMQ, Mail Service consumes and sends verification email.
  2. Create/update products (Product Service)

    • Expected behavior: product events published to Kafka (product-created, product-updated) and can be consumed by other services for sync.
  3. Create an order (Order Service)

    • Expected behavior:

      • idempotency lock acquired in Redis (e.g., SETNX lock:request_id)
      • order stored as PENDING
      • delayed reminders scheduled via RabbitMQ (4h/12h/22h)
      • expiration scheduled (24h)
      • order.created event emitted to Kafka for Product Service stock reservation (see the publishing sketch at the end of this section)
  4. Observe the system

    • Use the Grafana stack to inspect metrics, traces, and logs correlated across services.
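
As a closing illustration, emitting the order.created event from step 3 might look like this with segmentio/kafka-go; the topic name, payload shape, and broker address are assumptions, not taken from the repo.

// Hypothetical sketch of publishing the order.created event.
package main

import (
	"context"
	"encoding/json"
	"log"

	"github.com/segmentio/kafka-go"
)

type OrderCreated struct {
	OrderID   string `json:"order_id"`
	ProductID string `json:"product_id"`
	Quantity  int    `json:"quantity"`
}

func main() {
	w := &kafka.Writer{
		Addr:     kafka.TCP("localhost:9092"),
		Topic:    "order.created",
		Balancer: &kafka.LeastBytes{},
	}
	defer w.Close()

	payload, err := json.Marshal(OrderCreated{OrderID: "123", ProductID: "p-1", Quantity: 2})
	if err != nil {
		log.Fatal(err)
	}

	// Keying by order ID keeps all events for one order in a single
	// partition, so the Product Service consumes them in order when
	// reserving stock.
	err = w.WriteMessages(context.Background(), kafka.Message{
		Key:   []byte("123"),
		Value: payload,
	})
	if err != nil {
		log.Fatal(err)
	}
}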