Kubernetes

Overview

Kubernetes is a platform for working with containers in general, not necessarily Docker. You can use alternatives to Docker to manage the containers, but Kubernetes gives you a few key things on top: it is a platform, and it can be extended.

At its core, what Kubernetes is meant to do is deployments, an easy way to scale, and monitoring.

What is Kubernetes?

By definition, Kubernetes, commonly referred to as “K8s”, is an open-source system for automating deployment, scaling, and management of containerized applications. It aims to provide a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts” and supports a range of container tools, including Docker.

In Kubernetes there is a master node; it is the part of the cluster that knows about the other servers you create, and onto those servers you can deploy containers.
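
For example, once a cluster is running you can list the master (control-plane) node and the worker servers with kubectl; the node names and versions below are illustrative, not real output from any particular cluster:

    # List the nodes in the cluster: the control-plane (master) node plus
    # the worker nodes that containers get deployed onto.
    kubectl get nodes

    # NAME       STATUS   ROLES           AGE   VERSION
    # master-1   Ready    control-plane   10d   v1.29.0
    # worker-1   Ready    <none>          10d   v1.29.0
    # worker-2   Ready    <none>          10d   v1.29.0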

Deployment

The actual process for a Deployment is simple too: you just specify which image a container should be created from and provide the criteria, and Kubernetes creates what is called a Deployment, which represents your application.

You can provide further requirements too, such as the number of replicas required, RAM, etc., and these are held within the Deployment.
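
As a minimal sketch, all of that information lives in a single Deployment manifest; the names (myapp, myrepo/myapp:1.0) and numbers here are placeholders, not values from any real application:

    # deployment.yaml -- image to build containers from, replica count, resources
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 3                     # how many copies of the app to run
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
          - name: myapp
            image: myrepo/myapp:1.0   # the image containers are created from
            resources:
              requests:
                memory: "128Mi"       # RAM the app asks for
                cpu: "250m"           # CPU the app asks for

It is applied with kubectl apply -f deployment.yaml, after which the Deployment controller keeps the running state matching the manifest.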

A Deployment is an ongoing process: it has a Deployment controller, so if an application instance goes wrong, Kubernetes will recognize it and heal itself automatically.

Scaling

If there is more traffic to the application, the application needs to be scaled.

When you deploy an application in Kubernetes Engine, you define how many replicas of the application you’d like to run. When you scale an application, you increase or decrease the number of replicas.

Each replica of your application represents a Kubernetes Pod that encapsulates your application’s container(s).
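
Changing the replica count of a running Deployment is a single command; the name myapp refers to the hypothetical Deployment sketched earlier:

    # Scale up to 5 replicas (Pods); scaling down works the same way.
    kubectl scale deployment myapp --replicas=5

    # Check how many replicas are desired, up to date, and available.
    kubectl get deployment myapp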

Monitoring

In Kubernetes, monitoring and performance analysis is done at the level of the cluster: a metrics pipeline collects signals from the kubelets and the API server, processes them, and exports them via REST APIs or to a configurable time-series storage backend.
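
A quick, hedged way to see those collected signals, assuming a metrics add-on such as metrics-server is installed in the cluster:

    # Show current CPU and memory usage per node and per Pod
    # (requires a metrics pipeline such as metrics-server to be running).
    kubectl top nodes
    kubectl top pods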

Why Kubernetes?

Kubernetes is a really powerful orchestrator with lots of features that facilitate app deployment. Below, the focus is on just a handful of the benefits found most useful.

First, Kubernetes allows easy container management. Actually, in Kube you do not manage containers directly, but pods instead. A pod consists of one or more tightly coupled containers that constitute a deployable object. To make the apps in your pods available to users, Kubernetes introduces an abstraction called a service, which defines a logical set of pods and has an external IP address. The appreciable part of this architecture is that if your pods fail and restart, you don’t have to redeploy your app.
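
A minimal sketch of that service abstraction, reusing the hypothetical app: myapp label from the Deployment above; the ports are placeholders:

    # service.yaml -- one stable, externally reachable address in front of the pods
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
    spec:
      type: LoadBalancer    # request an external IP (on platforms that support it)
      selector:
        app: myapp          # the logical set of pods this service fronts
      ports:
      - port: 80            # port the service exposes
        targetPort: 8080    # port the app listens on inside the pod

Because the service routes to whatever pods currently match the selector, pods can fail and restart without the external address changing.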

The Deployment Controller simplifies a number of complex management tasks (a few illustrative kubectl commands follow the list). For example:

  • Scalability. Software can be deployed for the first time in a scale-out manner across Pods, and deployments can be scaled in or out at any time.
  • Visibility. Identify completed, in-process, and failing deployments with status querying capabilities.
  • Time savings. Pause a deployment at any time and resume it later.
  • Version control. Update deployed Pods using newer versions of application images and roll back to an earlier deployment if the current version is not stable.
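
These tasks map fairly directly onto kubectl rollout subcommands; the Deployment name myapp is again a placeholder:

    # Visibility: query the status of an in-progress deployment.
    kubectl rollout status deployment/myapp

    # Time savings: pause a rollout and resume it later.
    kubectl rollout pause deployment/myapp
    kubectl rollout resume deployment/myapp

    # Version control: roll back to the previous revision if the new one is unstable.
    kubectl rollout undo deployment/myapp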

Second, Kubernetes allows horizontal autoscaling for your pods, so when at some point your app is accessed by a huge number of users, you can tell Kube to replicate your pods and balance the load across them to avoid downtime.
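
A sketch of such autoscaling with a HorizontalPodAutoscaler, assuming the hypothetical myapp Deployment and a working metrics pipeline; the thresholds are illustrative:

    # hpa.yaml -- keep average CPU around 60%, with between 2 and 10 replicas
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: myapp
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: myapp
      minReplicas: 2
      maxReplicas: 10
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 60

The same effect can be requested imperatively with kubectl autoscale deployment myapp --min=2 --max=10 --cpu-percent=60.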

Third, Kubernetes suits both stateless and stateful apps because it allows mounting not only ephemeral but also persistent volumes. It supports a number of storage types (nfs, glusterfs, etc.) and cloud storage systems. A persistent volume’s (PV) lifecycle doesn’t depend on any pod that uses it, so you can keep the data as long as you need it.
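
A minimal PersistentVolumeClaim sketch; the name, size, and access mode are placeholders, and the actual storage backing it depends on the cluster’s storage classes:

    # pvc.yaml -- request 10Gi of persistent storage whose lifecycle is
    # independent of any single pod that mounts it
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myapp-data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi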

Fourth, Kubernetes is an open-source orchestrator that allows customization and supports a number of pre-built solutions that you might need to run your app. In contrast to proprietary solutions, with Kube you get no vendor lock-in, so you can always migrate your apps from one infrastructure to another.

Fifth, you can use a number of webhooks, which are essentially HTTP callbacks that initiate a certain action on a server. For example, if a webhook is configured to automate updating your apps from DockerHub or any other repository, each time you push your code, Kubernetes will automatically pull it and update your pods.
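
In practice the webhook handler (for example, a CI job) typically runs a command like the following against the cluster to roll the new image out; the deployment, container, and image names are hypothetical:

    # Point the running Deployment at the newly pushed image tag;
    # Kubernetes then replaces the pods to match the new spec.
    kubectl set image deployment/myapp myapp=myrepo/myapp:1.1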

The sixth feature, which in our case complements the previous one, is rolling update. Kubernetes allows updating your app in increments, so once a webhook pulls your code to your pods, Kube will scale your pods and update those that are not currently in use while exposing other pods to users.
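
How aggressive the increments are can be tuned in the Deployment spec’s update strategy; a sketch with hypothetical values:

    # Inside the Deployment spec: replace pods gradually, keeping the app serving.
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxSurge: 1           # at most one extra pod above the desired count
        maxUnavailable: 0     # never remove a serving pod before its replacement is ready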

Seventh, canary deployments. A useful pattern when deploying a new version is to first test it in production, in parallel with the previous version, and then scale up the new deployment while simultaneously scaling down the previous one.
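
One common way to sketch this in Kubernetes is to run two Deployments whose pod templates share the label a single Service selects on, and shift traffic by adjusting their replica counts; the names and proportions below are illustrative only:

    # Both Deployments label their pods app: myapp, which the Service selects,
    # so roughly 10% of requests reach the canary version. Scale the two
    # Deployments in opposite directions to complete (or abort) the rollout.
    kubectl scale deployment myapp-stable --replicas=9
    kubectl scale deployment myapp-canary --replicas=1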

Conclusion

Kubernetes remains immensely popular due to its architecture, innovation, and the large open source community around it.

Kubernetes marks a breakthrough for devops because it allows teams to keep pace with the requirements of modern software development. In the absence of Kubernetes, teams have often been forced to script their own software deployment, scaling, and update workflows. Some organizations employ large teams to handle those tasks alone. Kubernetes allows us to derive maximum utility from containers and build cloud-native applications that can run anywhere, independent of cloud-specific requirements. This is clearly the efficient model for application development and operations we’ve been waiting for.

Speaking of Kubernetes, what’s next?

Kubernetes is here for the long haul, and the community driving it is doing a great job – but there’s lots ahead. Experts shared several predictions specific to the increasingly popular Kubernetes platform:

Gadi Naor at Alcide: “Operators will continue to evolve and mature, to a point where applications running on Kubernetes will become fully self-managed. Deploying and monitoring microservices on top of Kubernetes with OpenTracing and service mesh frameworks such as istio will help shape new possibilities.”

Brian Gracely at Red Hat: “Kubernetes continues to expand in terms of the types of applications it can support. When you can run traditional applications, cloud-native applications, big data applications, and HPC or GPU-centric applications on the same platform, it unlocks a ton of architectural flexibility.”

Ben Newton at Sumo Logic: “As Kubernetes becomes more dominant, I would expect to see more normalization of the operational mechanisms – particularly integrations into third-party management and monitoring platforms.”

Carlos Sanchez at CloudBees: “In the immediate future there is the ability to run without Docker, using other runtimes…to remove any lock-in.” [Editor’s note: CRI-O, for example, offers this ability.] “Also, [look for] storage improvements to support enterprise features like data snapshotting and online volume resizing.”

Alex Robinson at Cockroach Labs: “One of the bigger developments happening in the Kubernetes community right now is the increased focus on managing stateful applications. Managing state in Kubernetes right now is very difficult if you aren’t running in a cloud that offers remote persistent disks, but there’s work being done on multiple fronts [both inside Kubernetes and by external vendors] to improve this.”

 
