Concept
In the past, most software applications were large monoliths that ran either as a single process or as a small number of processes spread across a few servers. These legacy systems are still widely used today. They have long release cycles and are updated relatively infrequently. At the end of each release cycle, developers package up the whole system and hand it over to the operations team, who deploy and monitor it. When hardware fails, the operations team manually migrates the affected applications to the remaining healthy servers.
Today, these big monolithic legacy applications are slowly being broken down into smaller, independently running components called microservices. Because microservices are decoupled from one another, they can be developed, deployed, updated, and scaled independently. This makes it possible to change components quickly and frequently, keeping pace with today’s rapidly shifting business requirements.
However, as the number of deployable components increases and data centres grow bigger, it becomes harder to configure, manage, and keep the whole system running. Figuring out where to place each of those components to maximise resource utilisation and minimise hardware costs is even more difficult, and doing all of this by hand is a lot of work. We need automation: automatic scheduling of those components onto our servers, plus automatic configuration, supervision, and failure handling. This is where Kubernetes comes in.
Kubernetes enables developers to deploy their applications themselves and as often as they want, without requiring any assistance from the operations (ops) team. But Kubernetes doesn’t benefit only developers. It also helps the ops team by automatically monitoring and rescheduling those apps in the event of a hardware failure. The focus for system administrators (sysadmins) shifts from supervising individual apps to mostly supervising and managing Kubernetes and the rest of the infrastructure, while Kubernetes itself takes care of the apps.
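As a concrete illustration, here is a minimal sketch of a Kubernetes Deployment manifest (the name `my-app` and the `nginx` image are placeholders, not taken from any particular project). You declare the desired state, apply it once, and Kubernetes keeps the application in that state:

```yaml
# A minimal Deployment: you declare the desired state (3 replicas of the app),
# and Kubernetes keeps it that way. If a container or node fails, Kubernetes
# automatically restarts or reschedules the pods to match this declaration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # placeholder name
spec:
  replicas: 3                 # desired number of running instances
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.25   # placeholder image
          ports:
            - containerPort: 80
```

A developer can apply this with `kubectl apply -f deployment.yaml` without involving the ops team; from then on, Kubernetes itself supervises the app, restarting or rescheduling its pods if a container or server fails.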
Let’s try to understand this with a small example.
Imagine you have to deploy three applications on three different servers (nodes).
- For that, you first have to manually check the resource availability of each server.
- Once you have selected a server, you need to manually deploy the application onto it.
- After deployment, you have to keep monitoring each application.
- If an application goes down for any reason, it remains unavailable until you restart it yourself.
- If a server crashes, you have to manually move its applications to another running server.
- If traffic unexpectedly increases, you have to manually deploy additional instances of the application.
Many more scenarios like these can occur. With only a few applications this can be handled manually, but just think how challenging it becomes with 30+ applications running across your servers.
This is where Kubernetes comes into the picture. Kubernetes eases this difficulty: all of the manual work above can be handled by Kubernetes alone, as the sketch below illustrates.
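To make that concrete, here is a hedged sketch (names like `web-app` are placeholders) showing how each manual step above maps to a Kubernetes feature: resource requests let the scheduler pick a suitable server for you, a liveness probe replaces manual monitoring and restarting, replicas are rescheduled automatically when a node crashes, and a HorizontalPodAutoscaler handles traffic spikes:

```yaml
# Resource requests let the Kubernetes scheduler pick a node with enough
# capacity (replacing the manual resource check), the liveness probe lets
# Kubernetes detect and restart a failed app (replacing manual monitoring),
# and replicas are rescheduled automatically if a node crashes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app               # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: nginx:1.25   # placeholder image
          resources:
            requests:
              cpu: "250m"     # scheduler places the pod on a node with this much free CPU
              memory: "128Mi"
          livenessProbe:      # Kubernetes restarts the container if this check fails
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 15
---
# A HorizontalPodAutoscaler handles the traffic-spike scenario: it adds or
# removes replicas automatically based on observed CPU utilisation.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```

With these two manifests applied, every step in the list above happens automatically: scheduling, monitoring, restarting, rescheduling, and scaling.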
Conclusion
We’ve discussed what Kubernetes actually is. You should now have a basic understanding of how Kubernetes works and how it reduces manual work.
The MtroView team works hard to provide content that helps you understand things in simple terms. We are always happy to receive suggestions for improving our content so it can help more people. Please write to us in the comments or send an email to mtroview@gmail.com.