The following kernel parameters show very different behaviour between RHEL6 and RHEL7, and we are not able to figure out why. Any help is appreciated.
The application has a robustness feature: once latency rises above a pre-defined acceptable threshold, or CPU usage exceeds 85%, it stops accepting new requests in order to avoid overloading itself.
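For context, the throttling logic is roughly equivalent to the sketch below (the function name and the latency threshold are illustrative, not our actual code; only the 85% CPU threshold comes from the description above):

```python
LATENCY_THRESHOLD_MS = 200.0   # illustrative pre-defined latency threshold
CPU_THRESHOLD_PCT = 85.0       # CPU threshold from the description above

def should_accept(latency_ms: float, cpu_pct: float) -> bool:
    """Reject new requests once latency or CPU usage crosses its threshold."""
    return latency_ms <= LATENCY_THRESHOLD_MS and cpu_pct <= CPU_THRESHOLD_PCT

print(should_accept(50.0, 60.0))   # normal load -> True
print(should_accept(50.0, 90.0))   # CPU above 85% -> False
```

So on RHEL7 the throttling kicks in well before the CPU is saturated, which is the problem described below.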
We are now trying to deploy it in a RHEL7 virtual environment and cannot utilize the CPU to the extent we could on RHEL6. We can hardly reach 55-60% CPU usage, and we observe latency spikes beyond the acceptable threshold.
On RHEL7 we were using a tuned profile that changes the following kernel parameters, which affects the behaviour:
kernel.sched_min_granularity_ns = 10000000
kernel.sched_wakeup_granularity_ns = 15000000
If we change these values back to the RHEL6 defaults (
kernel.sched_min_granularity_ns = 4000000
kernel.sched_wakeup_granularity_ns = 4000000
), then CPU usage returns to RHEL6 levels.
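For anyone wanting to reproduce this: the current values can be read from /proc/sys/kernel. A minimal sketch for checking them, assuming the kernel still exposes these tunables there (newer kernels have moved them out of /proc/sys):

```python
from pathlib import Path

def read_sched_tunable(name: str):
    """Return the tunable's value in ns, or None if the kernel doesn't expose it."""
    path = Path("/proc/sys/kernel") / name
    try:
        return int(path.read_text().strip())
    except (FileNotFoundError, PermissionError, ValueError):
        return None

for name in ("sched_min_granularity_ns", "sched_wakeup_granularity_ns"):
    print(name, "=", read_sched_tunable(name))
```

On a live system the values can be changed (as root) with e.g. `sysctl -w kernel.sched_min_granularity_ns=4000000`, or persisted through a custom tuned profile.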
However, when we set the same (RHEL7) values on RHEL6, we see no adverse impact, and CPU usage still scales up to 85-90% as before.
So we are looking for the reason why the same parameters behave so differently on RHEL6 versus RHEL7.
Thanks in advance.