
fafnir

Published: November 6, 2025

Last Updated: January 30, 2026

part 1

This project, fafnir, is a scalable, distributed microservice backend for a (simulated) trading platform. I'm mainly coding everything in Go, but I might use Python, Java, or C++ down the line depending on the service.

It's meant for my portfolio and to help me understand modern microservices, system design patterns, and best coding practices for building a scalable distributed application. It is not fully implemented yet, but I have been able to deploy it on Docker, connect multiple containers and services together, and use a bunch of technologies such as Redis, gRPC, GraphQL, Prometheus, and Grafana, to name a few.

Here is a rough sketch of the system architecture as it stands right now:

System High Level Diagram

Here is the related issue on GitHub: Implement NATS + auth validation logic #4

Going forward, I have many more features to implement (check the TODO in the README), and hopefully I'll learn something new along the way.

part 2

Here is the GitHub issue related to this part: kubernetes local implementation #5

I have now implemented Kubernetes as a deployment alternative to Docker Compose. I used something called minikube, which is basically a local Kubernetes cluster for, well, local development and fast learning.

I also installed kubectl: if minikube is the Kubernetes cluster, kubectl is the CLI that the user interacts with the cluster through. I then tested minikube with the following commands:

> minikube start
> minikube kubectl -- get po -A
> minikube dashboard
> kubectl create deployment hello-minikube --image=kicbase/echo-server:1.0
> kubectl expose deployment hello-minikube --type=NodePort --port=8080
> kubectl get services hello-minikube
> minikube service hello-minikube

# visit localhost address

With this, I was able to get started on migrating a multi-container deployment to a multi-node Kubernetes cluster!

Or so I thought 😅

Turns out, it took a bit longer than I expected. I had to dive deeper into Kubernetes concepts like the kubelet, kube-proxy, etcd, services, deployments, and pods. I also needed to learn what a Kubernetes manifest was and how to define and manage cluster resources declaratively.
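For anyone unfamiliar, a manifest is just a YAML file describing the desired state of a resource, which Kubernetes then reconciles toward. Here's a minimal, hypothetical Deployment manifest (not fafnir's actual config; it reuses the echo-server image from the minikube smoke test above):

```yaml
# deployment.yaml — a minimal hypothetical example; names are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-minikube
spec:
  replicas: 2                # desired pod count; Kubernetes keeps it at this number
  selector:
    matchLabels:
      app: hello-minikube
  template:
    metadata:
      labels:
        app: hello-minikube
    spec:
      containers:
        - name: echo
          image: kicbase/echo-server:1.0   # same image as the smoke test above
          ports:
            - containerPort: 8080
```

Applying it with `kubectl apply -f deployment.yaml` is the declarative counterpart of the imperative `kubectl create deployment` command earlier.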

After all that, I revised my rough architecture design more times than I can count, but finally landed on a workable and reasonable diagram:

K8s Infra Design

It definitely took more time and rewrites than I expected, but I learned a ton about Kubernetes in the process. Up next, NATS implementation (lol)

part 3

Here are the GitHub issues related to this part: NATS #11 & Implement JetStream and NATS in K8s #20

I was able to integrate NATS into the project, along with JetStream. Same process as with Kubernetes, just a different system. I had to learn what an event broker was first, then what an event bus was (spoiler alert: they're not the same), then understand what a message queue is, only to realize there's a pile of terminology where some terms overlap and some don't: producers, consumers, publishers, subscribers, fan-in, fan-out, push, pull, subjects, topics, queues, worker groups, streams, ack, nack, saga, ... (did I get them all?).

Definitely a blast though, and credit to the NATS documentation for such an easy developer experience.

From what I understand though, NATS JetStream works like this:

  • Multiple publishers (often microservices) can publish messages (e.g., {product: "phone", cost: "15"}) to a subject such as orders.buy on a NATS server (the broker itself).

  • When JetStream is enabled, if that subject matches the subject filter of a configured stream (for example, a stream named ORDERS configured with orders.*), the message is captured by the stream and persisted according to the stream’s retention and storage policies.

  • Subscribers (other microservices) can then read these persisted messages. Depending on how subscribers are configured, messages may be fanned out to multiple subscribers, or delivered to a queue group where subscribers share the workload as a worker pool.
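To make the fan-out vs. queue-group distinction concrete, here's a small Go sketch that simulates both delivery modes with plain channels. No real NATS involved, and all names are made up for illustration; in actual NATS, a queue group picks which member gets each message itself rather than round-robining like this sketch does:

```go
package main

import "fmt"

type msg struct{ subject, body string }

// fanOut delivers every message to every subscriber,
// like independent NATS subscriptions on the same subject.
func fanOut(msgs []msg, subs []chan msg) {
	for _, m := range msgs {
		for _, ch := range subs {
			ch <- m
		}
	}
	for _, ch := range subs {
		close(ch)
	}
}

// queueGroup delivers each message to exactly one member,
// like a NATS queue group acting as a worker pool.
func queueGroup(msgs []msg, members []chan msg) {
	for i, m := range msgs {
		members[i%len(members)] <- m // round-robin here purely for the sketch
	}
	for _, ch := range members {
		close(ch)
	}
}

// drain collects everything a subscriber received.
func drain(ch chan msg) []msg {
	var out []msg
	for m := range ch {
		out = append(out, m)
	}
	return out
}

func main() {
	msgs := []msg{
		{"orders.buy", `{product: "phone", cost: "15"}`},
		{"orders.buy", `{product: "case", cost: "5"}`},
	}

	// Fan-out: both subscribers see both messages.
	a, b := make(chan msg, 10), make(chan msg, 10)
	fanOut(msgs, []chan msg{a, b})
	fmt.Println("fan-out:", len(drain(a)), len(drain(b))) // fan-out: 2 2

	// Queue group: the two messages are split across the workers.
	w1, w2 := make(chan msg, 10), make(chan msg, 10)
	queueGroup(msgs, []chan msg{w1, w2})
	fmt.Println("queue group:", len(drain(w1)), len(drain(w2))) // queue group: 1 1
}
```

The key takeaway: with fan-out every subscriber gets its own copy, while a queue group shares the stream's messages across its members as a worker pool.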

Here's a rough diagram that I made to show my thoughts:

NATS JetStream Diagram