CPU utilization impact of scheduler granularity kernel parameters: RHEL6 vs RHEL7

by Chota Bheem   Last Updated August 10, 2018 13:00 PM

The following kernel parameters show very different behaviour on RHEL6 versus RHEL7, and we are unable to figure out why. Any help is appreciated.

kernel.sched_min_granularity_ns
kernel.sched_wakeup_granularity_ns

Background:

  • The application is already running on RHEL6.
  • It has a low-latency requirement.
  • It has a robustness feature: once latency rises above a pre-defined acceptable threshold, or CPU usage exceeds 85%, it stops processing new requests to avoid overloading itself.

  • We are now trying to deploy it in a RHEL7 virtual environment and cannot utilize the CPU to the extent we could on RHEL6. We can hardly reach 55-60% CPU, and we observe latency spikes beyond the acceptable threshold.

Notes:

  • The application version is identical in both cases (RHEL6/RHEL7).
  • The database and its configuration are also identical.
  • Memory and CPU settings are identical as well.

On RHEL7 we were using a tuned profile that changed the following kernel parameters, which impacts the behaviour:

kernel.sched_min_granularity_ns = 10000000
kernel.sched_wakeup_granularity_ns = 15000000
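For anyone wanting to reproduce this setup, one way to override just these sysctls while keeping the rest of a tuned profile is a small custom child profile. This is a sketch; the profile name "lowlatency-app" is hypothetical, and "virtual-guest" is assumed as the parent since this is a RHEL7 virtual environment (substitute whatever `tuned-adm active` reports):

```
# /etc/tuned/lowlatency-app/tuned.conf  (hypothetical profile name)
[main]
include=virtual-guest

[sysctl]
kernel.sched_min_granularity_ns = 4000000
kernel.sched_wakeup_granularity_ns = 4000000
```

Activate it with `tuned-adm profile lowlatency-app` (requires root).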

If we change these values back to the RHEL6 defaults (kernel.sched_min_granularity_ns = 4000000, kernel.sched_wakeup_granularity_ns = 4000000), CPU usage returns to RHEL6 levels. However, when we apply the RHEL7 tuned values on RHEL6, we see no adverse impact and CPU still scales up to 85-90% as before.
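For reference, this is how we compare and apply the values at runtime (a sketch; requires root, and the change is not persistent across reboots unless also written to a sysctl.d file):

```shell
# Inspect the current scheduler granularity settings
sysctl kernel.sched_min_granularity_ns
sysctl kernel.sched_wakeup_granularity_ns

# Temporarily apply the RHEL6 defaults for testing
sysctl -w kernel.sched_min_granularity_ns=4000000
sysctl -w kernel.sched_wakeup_granularity_ns=4000000

# Persist across reboots (file name is an example)
cat > /etc/sysctl.d/99-sched-granularity.conf <<'EOF'
kernel.sched_min_granularity_ns = 4000000
kernel.sched_wakeup_granularity_ns = 4000000
EOF
```

Note that an active tuned profile may re-apply its own sysctl values, so on RHEL7 the tuned profile itself should be adjusted rather than relying on manual sysctl writes alone.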

So we are looking for the reason why the same parameters behave so differently on RHEL6 versus RHEL7.

Thanks in advance.
