With the v1.3 release of Navops Command, Navops has added two more cornerstone features to its enterprise-grade functionality set, both ensuring that the workloads and services running in your Kubernetes cluster perform well and get the resources they require. Let us look at these two capabilities in more detail below.
By ‘eviction’ we mean terminating pods within a replica set when a service violates one of Navops Command’s resource-sharing policies and would consume resources more urgently needed by other workloads or services. Consider two replication controllers: one representing a critical application whose demand profile changes over the course of a day, and another comprising a less critical data-analytics application. When demand on the critical application is low, it does not need many resources, and you can safely lend them to the analytics application so it produces results faster. When demand on the critical application peaks, however, you’ll want to recommit those resources to it quickly. Standard autoscaling in Kubernetes is insufficient here because the autoscaler has no notion of the relative importance and service-level requirements of the two applications.
Navops Command’s resource-sharing policies, coupled with its new eviction capability, let you automate this process. The critical application is assigned a resource share, defined by a Navops Command policy, sized to meet its peak demand. Navops Command tolerates the analytics application eating into that share as long as the critical application does not need all of its resources. When the critical application does require additional resources, however, the resulting policy violation triggers the new v1.3 eviction system, which terminates pods in the analytics application to move the cluster back toward the policy’s desired state.
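To make the mechanism concrete, here is a minimal sketch of this kind of policy-driven eviction loop. This is purely illustrative and not the Navops Command API: the `App` structure, the priority and share numbers, and the reclaim strategy are all hypothetical stand-ins for the product's configured resource-sharing hierarchy.

```python
# Illustrative sketch (NOT the Navops Command API): evict pods of
# lower-priority applications when a higher-priority application's
# policy share cannot be met from free capacity.

from dataclasses import dataclass, field

@dataclass
class App:
    name: str
    priority: int   # higher = more critical (hypothetical field)
    share: int      # CPU cores guaranteed by policy
    demand: int     # cores the app currently wants
    pods: list = field(default_factory=list)  # (pod_name, cores)

def evict_to_policy(apps, capacity):
    """Terminate pods of lower-priority apps until each higher-priority
    app can satisfy its demand, up to its policy-defined share."""
    evicted = []
    used = sum(cores for a in apps for _, cores in a.pods)
    for app in sorted(apps, key=lambda a: -a.priority):
        # Cores this app still needs, capped by its policy share.
        needed = min(app.demand, app.share) - sum(c for _, c in app.pods)
        shortfall = max(0, needed - (capacity - used))
        if shortfall <= 0:
            continue
        # Reclaim from the least critical applications first.
        for victim in sorted(apps, key=lambda a: a.priority):
            if victim.priority >= app.priority:
                break
            while victim.pods and shortfall > 0:
                pod, cores = victim.pods.pop()
                evicted.append(pod)
                used -= cores
                shortfall -= cores
    return evicted
```

In the scenario above, when the critical application's demand spikes, the loop pops pods from the analytics application until enough capacity is free, which mirrors the automated adjustment toward the policy's desired state.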
Without Navops Command you would need to monitor the situation closely and re-scale services manually, a tedious and error-prone process that takes additional time and effort. Navops Command avoids these problems completely through automation.
Another key capability in the v1.3 release, ‘utilization-aware scheduling’, makes Navops Command fully aware of the actual, current resources that every pod in the cluster is consuming. This includes not only pods scheduled by Navops Command, but also pods scheduled outside of Navops and pods associated with daemon-set controllers. The resource-utilization metrics captured include CPU and memory, and they can be extended to include any resource you want to manage via Navops Command: available space in a shared storage bucket, disk I/O, or network bandwidth to and from a host, for example. Any consumable resource metric that can be collected can be incorporated into the utilization-aware scheduling policy. Navops Command employs these utilization metrics as a core part of its scheduling decisions; for example, it will not dispatch workloads to nodes where their resource requirements cannot be met. This helps avoid resource contention and the unnecessary termination of pods owing to resource shortages. The result is better-performing applications and much-improved overall utilization. In a cloud-resident cluster, this can translate into lower costs, as the scheduler optimizes workload placement to require fewer nodes.
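The core idea can be sketched as a filter over candidate nodes that compares a pod's requirements against each node's *measured* headroom rather than its declared requests. Again, this is an illustrative sketch, not the Navops Command implementation; the node data, the metric names (including the custom `disk_io` metric), and the request figures are hypothetical.

```python
# Illustrative sketch (NOT the Navops Command implementation):
# filter candidate nodes by measured utilization across any set of
# tracked metrics, standard (cpu, memory) or custom (disk_io).

def schedulable_nodes(nodes, pod_request):
    """Return names of nodes whose measured free capacity covers the
    pod's request for every requested metric."""
    fits = []
    for node in nodes:
        free = {metric: node["capacity"][metric] - node["used"][metric]
                for metric in node["capacity"]}
        if all(free.get(m, 0) >= need for m, need in pod_request.items()):
            fits.append(node["name"])
    return fits

nodes = [
    {"name": "node-a",  # busy on measured CPU despite spare memory
     "capacity": {"cpu": 4000, "memory": 8192, "disk_io": 100},
     "used":     {"cpu": 3600, "memory": 2048, "disk_io": 10}},
    {"name": "node-b",
     "capacity": {"cpu": 4000, "memory": 8192, "disk_io": 100},
     "used":     {"cpu": 1000, "memory": 4096, "disk_io": 80}},
]

# A pod needing 500 millicores, 1 GiB of memory, and a little disk I/O
# fits only on node-b; node-a is excluded by its measured CPU usage.
print(schedulable_nodes(nodes, {"cpu": 500, "memory": 1024, "disk_io": 5}))
# prints ['node-b']
```

A request-based scheduler would happily place the pod on node-a, since its declared allocations leave room; filtering on observed consumption is what avoids the contention described above.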
With these two enhancements, Navops Command further extends its rich set of policy-based controls and automated resource sharing across multiple tenants in a Kubernetes cluster, resulting in higher utilization, lower cost, and reduced administrative effort.