Do we have to migrate our applications running on VMs, physical servers, or EC2 instances to Kubernetes?
This is one of the most common questions I encounter while interacting with developers and customers. Here, I am going to paint a picture to help you decide for yourself whether to go with Kubernetes or not.
Below are some advantages of running applications on the Kubernetes platform:
- Scalability (see the sketch right after this list)
- Immutability
- Cloud-native portability: can be moved from one cloud to another easily
- Observability
- Easy rollout and rollback options
- Optimized infrastructure usage, which saves cost
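As a quick illustration of the scalability bullet, here is a minimal HorizontalPodAutoscaler sketch. It assumes a Deployment named `my-app` already exists; the name, replica range, and CPU target are placeholders, not something prescribed by this article.

```yaml
# Minimal HorizontalPodAutoscaler sketch (illustrative only).
# Assumes a Deployment named "my-app" already exists in the cluster.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU usage crosses 70%
```

With something like this in place, Kubernetes adds or removes replicas between 2 and 10 based on CPU usage, with no change to the application itself.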
There are many types of applications, and they can be classified however you like: monolithic/microservice, stateful/stateless, closed source/open source. Now the question is: how do you decide whether an application can run on Kubernetes? Well, continue reading 🙂
Spoiler alert: Any application which can be containerized can run on Kubernetes
Monolithic or Microservice?
Whether your application follows a monolithic or a microservice architecture, you can certainly leverage the advantages of the Kubernetes platform. If your application is a monolith and working great, you might not need to re-engineer it into microservices just to deploy it on Kubernetes.
It’s really a misconception that only microservice applications can run on Kubernetes.
You can just containerize the monolithic application and run it on top of Kubernetes.
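As a rough sketch of what that can look like, assuming you have already built a container image for the monolith (the image name, labels, and port below are placeholders):

```yaml
# Minimal Deployment + Service sketch for a containerized monolith.
# Image name, labels, and ports are placeholder assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monolith
spec:
  replicas: 3
  selector:
    matchLabels:
      app: monolith
  strategy:
    type: RollingUpdate            # gives the easy rollout mentioned above
  template:
    metadata:
      labels:
        app: monolith
    spec:
      containers:
        - name: monolith
          image: registry.example.com/monolith:1.0
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: monolith
spec:
  selector:
    app: monolith
  ports:
    - port: 80
      targetPort: 8080
```

A rolling update strategy also gives you the easy rollout and rollback mentioned earlier, for example `kubectl rollout undo deployment/monolith` to go back to the previous version.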
When to use microservices?
You may not need to design your application as microservices when it is developed by a single small team or when it only has a limited number of functionalities.
If you are developing an application with more functionality, with a big team and a variety of skill sets, then you can go with a divide-and-conquer strategy, which is essentially microservices.
You can divide your big team into many small teams and give each micro app to one team. Each team can use a tech stack of their choice. In the end, each team builds a container image of their micro app and test runs it on a test Kubernetes cluster.
Make sure there are proper guidelines and an overall architecture for each team to follow, because in the end the micro apps together make up the actual application. This approach can increase the pace of the application development life cycle.
If Kubernetes and containerization technology had never been invented, this approach might not even be possible, because it leads to issues like package dependency conflicts: each team may ship their app with a different version of Node.js, Python, or Go. Sure, you could still try to deploy each micro app on its own VM... guys, think about it 😂
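To make that isolation point concrete, here is a hypothetical sketch of two micro apps owned by two different teams, each shipping its own runtime inside its own image (all names, versions, and registries below are made up for illustration):

```yaml
# Hypothetical example: two teams, two tech stacks, no shared dependencies.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service            # owned by team A, Node.js stack
spec:
  replicas: 2
  selector:
    matchLabels: { app: orders-service }
  template:
    metadata:
      labels: { app: orders-service }
    spec:
      containers:
        - name: orders
          image: registry.example.com/orders-service:2.3   # image built on node:20
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: billing-service           # owned by team B, Python stack
spec:
  replicas: 2
  selector:
    matchLabels: { app: billing-service }
  template:
    metadata:
      labels: { app: billing-service }
    spec:
      containers:
        - name: billing
          image: registry.example.com/billing-service:1.7  # image built on python:3.12
```

Each container carries its own dependencies, so the two teams never have to agree on a runtime version, which is exactly what would be painful on shared VMs.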
We can run databases in containers, but do we have to?
If you go to the CNCF landscape, you can see any number of databases there. That itself shows that a large number of companies are adopting the Kubernetes way of running applications.
You can definitely run a DB on Kubernetes and get all the benefits of the platform. It will make your life easier when you want to scale up your DB, and Kubernetes makes your DB workload more resilient to failures.
Apart from the advantages, I would still like to mention one point: there will be some performance degradation if the storage is not on the same node where the DB pod is running. In most cases, people consider a storage service like Ceph if the Kubernetes cluster is at an edge location, or a managed storage service if it is in the cloud. This creates a trade-off between performance and manageability, although the impact can be negligible depending on the storage hardware you choose.
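For reference, a database on Kubernetes is typically run as a StatefulSet with persistent volumes. Below is a minimal, illustrative sketch; the image, storage class, size, and password handling are assumptions, and in practice you would more likely use an operator, as discussed next:

```yaml
# Minimal StatefulSet sketch for a database with persistent storage.
# Image, storage class, size, and credentials are placeholder assumptions.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16
          env:
            - name: POSTGRES_PASSWORD   # use a Secret instead of a literal in real setups
              value: change-me
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: fast-local    # pick a class backed by local or nearby storage
        resources:
          requests:
            storage: 10Gi
```

The `volumeClaimTemplates` section is what keeps the data around across pod restarts, and the storage class you pick is where the performance trade-off mentioned above shows up.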
How to choose a database?
Choose a database that is Kubernetes friendly; you can find a lot of operators for database engines like MySQL and PostgreSQL. You can simply go to the CNCF landscape and pick a database of your choice from there. One important thing to consider is the community support behind the database you choose.
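To give a feel for the operator approach, here is a sketch of what a database custom resource can look like. The API group, kind, and fields below are hypothetical; the real schema depends entirely on the operator you pick, so check its documentation:

```yaml
# Hypothetical custom resource for a database operator.
# apiVersion, kind, and field names are illustrative only.
apiVersion: databases.example.com/v1
kind: PostgresCluster
metadata:
  name: orders-db
spec:
  replicas: 3                 # operator manages the primary and the replicas
  version: "16"
  storage:
    size: 20Gi
  backup:
    schedule: "0 2 * * *"     # nightly backups handled by the operator
```

The point of an operator is that you declare intent like this, and the operator handles provisioning, failover, and backups for you.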
What about other tools?
Tools we commonly use for message streaming and queuing, message parsing, and so on are also commonly deployed in containers, for example Kafka, NATS, Logstash, and Fluentd.
If you can put it in a container, run it on Kubernetes.
Will there be more skills required than knowing Kubernetes? When you have your application running on Kubernetes, you will also need some tools along with it, for example Prometheus + Grafana + Alertmanager for observability and alerting. Helm is also a nice-to-have tool for application deployment. This list can keep growing as you need more control over the cluster itself.
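For example, if you run the Prometheus Operator (commonly installed via the kube-prometheus-stack Helm chart), exposing your application's metrics usually comes down to a ServiceMonitor like the sketch below; the labels and port name are assumptions about how your app's Service is set up:

```yaml
# ServiceMonitor sketch for the Prometheus Operator.
# Assumes a Service labeled app=my-app with a named port "metrics".
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
    - port: metrics          # named port on the app's Service
      interval: 30s          # scrape every 30 seconds
```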
If you are thinking that you still need skills in a handful of tools to manage an application deployed on Kubernetes, plus the Kubernetes platform itself, and you find yourself in a dilemma about whether to go with Kubernetes or not, let me remind you that thousands of architects have already embraced Kubernetes. This is the right time for you to adopt Kubernetes in your ecosystem and contribute to the community.
Hope you are not confused!!🙂