This is a work in progress. The project is done but this post is not.
The article "Kubernetes for personal projects? No thanks!" refutes the article "Kubernetes: The Surprisingly Affordable Platform for Personal Projects".
While I agree that you need to spend some time learning how to use it, my goal is to show that this can be easier than you think.
Let me introduce a real problem we solved with Kubernetes. Before I worked on this project, I had zero experience with Kubernetes. The solutions we considered are detailed afterwards.
A client asked me to quickly develop a small prototype system that performs batch processing of a list of inputs on multiple machines at the same time.
Usage had to be easy for my client's users. For our prototype, it looks similar to this:
```shell
# Deploy local workers
work.sh --workers=2

# Deploy cloud workers
work.sh --workers=50 --cloud

# Send current folder's data to be processed
send-work-input.sh

# Wait for completion of processing and shut off the system
wait-all-work-and-shut-down.sh
```
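To make the interface above concrete, here is a minimal sketch of what `work.sh` might do, assuming the workers run as a Kubernetes Deployment named `batch-worker` and that two kubectl contexts exist (both names are hypothetical, and the sketch prints the command instead of running it):

```shell
#!/bin/sh
# Hypothetical sketch of work.sh: parse --workers and --cloud, then
# scale a Kubernetes Deployment accordingly.
work() {
  workers=1
  context="docker-for-desktop"   # local cluster shipped with Docker for Windows
  for arg in "$@"; do
    case "$arg" in
      --workers=*) workers="${arg#--workers=}" ;;
      --cloud)     context="cloud-cluster" ;;  # assumed name of the cloud context
    esac
  done
  # Print the kubectl command instead of executing it, so the sketch
  # is safe to try without a cluster.
  echo "kubectl --context=$context scale deployment/batch-worker --replicas=$workers"
}
```

For example, `work --workers=2` prints a `kubectl scale` command targeting the local context, while `work --workers=50 --cloud` targets the assumed cloud context.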
Up to this point, the problem seems doable. The next constraints are the hard part.
How would you design a system that covers all of these requirements? Some component has to make sure cloud workers and local workers can synchronize their processing. Don't forget that this system also has to be able to run locally only.
Hint: Kubernetes can now run on Windows 10 with Docker for Windows...
To be continued.
The rest of this article is a draft that will be completed some day. Contact me if you want more information quickly!
The two main options we considered were Docker Swarm and Kubernetes.
Docker Swarm seemed like a great solution at first, but in my opinion it will soon die.
Our solution is to use Kubernetes.
Did you know you can now run a local Kubernetes cluster on a Windows machine? A recent version of Docker for Windows can run one, but only on Windows 10 (Pro edition or better) with Hyper-V-capable hardware.
The user decides whether they want a local or cloud cluster. If cloud, the queue runs in the cloud; otherwise, the cluster is deployed on the user's machine, which has Docker installed.
Workers can then launch Docker containers and process the queue items.
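A worker's job can be sketched as a simple poll-and-process loop. The sketch below assumes, purely for illustration, a directory-based queue where each file is one work item; the real system would use a proper queue service, and `PROCESS_CMD` stands in for the actual `docker run ...` invocation:

```shell
#!/bin/sh
# Minimal worker-loop sketch. Assumptions (not from the real system):
# - $QUEUE_DIR holds pending work items, one file per item
# - $DONE_DIR receives processed items
# - $PROCESS_CMD is a placeholder for launching the processing container
QUEUE_DIR="${QUEUE_DIR:-/tmp/work-queue}"
DONE_DIR="${DONE_DIR:-/tmp/work-done}"
PROCESS_CMD="${PROCESS_CMD:-cat}"   # placeholder for "docker run ..."

process_queue() {
  mkdir -p "$QUEUE_DIR" "$DONE_DIR"
  for item in "$QUEUE_DIR"/*; do
    [ -f "$item" ] || continue          # skip if the queue is empty
    $PROCESS_CMD "$item" > /dev/null    # hand the item to the processor
    mv "$item" "$DONE_DIR/"             # mark the item as done
  done
}
```

In the real deployment, several replicas of this loop would run at once, which is exactly why the queue needs to be reachable from both local and cloud workers.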
The main problem with this architecture is the networking layer. How can the local workers communicate with a cloud queue? How can the cloud workers communicate with a local queue? Can the queue even be local, considering that our local network's port configuration cannot be changed and we don't have a static IP?
To be continued.