Friday, August 10, 2018

Zabbix upgrade to v4, some pure load tests

As a final part of this work I want to upgrade everything to the v4 line, to be able to support the changes for as long as possible with minimum effort (and to act as an early adopter and bug fixer for Zabbix).

I have also decided to run some load tests on a pure, untouched, virgin Zabbix.

I took a clean v4.0.0 alpha 9 and applied the ClickHouse changes.

By the way, I saw some minor Elasticsearch changes, but what is more important, they seem to have implemented file offloading of events, history, trends, and perhaps something else.

That is valuable. Perhaps if I had found this earlier, the ClickHouse offloading would have been implemented outside of the main server process, as a file parser. On the other hand, the current implementation is in-process, efficient, and also frontend-compatible.
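For reference, the file offloading mentioned above is configured through the real-time export parameters in zabbix_server.conf (introduced in the 4.0 line). A minimal sketch — the directory path here is an assumption, and the exact export file naming may differ between builds:

```
# zabbix_server.conf — real-time export of history, trends and events
# (Zabbix 4.0+). The server writes newline-delimited JSON files into
# ExportDir, rotating each file once it reaches ExportFileSize.
ExportDir=/var/lib/zabbix/export
ExportFileSize=1G
```

An external consumer (a file parser, as discussed above) can then tail these files and load the data into another backend without touching the server process.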

The reason to do the load test is to find out whether Zabbix out of the box might be able to do what we want it to do.

This time I've tried to fetch as much SNMP traffic as possible:

pic: (1) the system is 44% loaded, but (2) it is collecting only 21-22k new values per 5 seconds, i.e. roughly 4.2-4.4k NVPS. (3) Pollers spend 10-12 seconds waiting for configuration locks, while under normal conditions a poller process spends 4-5 seconds per 50-60 values. In total, (4) there are 3500(!) poller threads running, and they have also exhausted most of the memory.
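The NVPS figure follows directly from the per-interval counter in the screenshot (the numbers below are taken from it; the frontend refresh interval is 5 seconds):

```shell
# Convert a "new values per refresh interval" counter into NVPS.
values_per_interval=21500   # midpoint of the observed 21-22k
interval_seconds=5
nvps=$((values_per_interval / interval_seconds))
echo "$nvps"
```

This prints 4300, which is where the "roughly 4-4.5k NVPS" estimate comes from.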

I couldn't make it run more than 4k threads out of the box, but assuming linear CPU scaling, it should be able to grow to 8k threads after some sysctl fixes and gather 12k NVPS in the best case.
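The sysctl fixes I have in mind would be along these lines — the parameter names are real Linux sysctls, but the values are guesses for this workload, not something I have verified against the 8k-thread target:

```
# Raise kernel-side thread/process limits (run as root).
sysctl -w kernel.pid_max=4194304     # more PIDs available for threads
sysctl -w kernel.threads-max=100000  # global thread cap
# Per-user limit also has to follow, e.g. in limits.conf or:
# ulimit -u 100000
```

Memory, not thread count, may still be the first wall, given that 3500 pollers already exhausted most of it.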

Then I decided to test pure accessibility-check speed again. As I mentioned before, I could achieve 28k NVPS on the same machine with a pure 1-packet accessibility check on the fixed ICMP module, which uses nmap instead of fping.
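To make the comparison concrete, the two approaches boil down to something like the following (the target-list filename is made up; exact flags in the patched module may differ):

```
# fping: one ICMP echo per host, one process invocation per batch
fping -c1 -q -f targets.txt

# nmap: a single ping sweep over the same list,
# no DNS resolution (-n), no port scan (-sn)
nmap -sn -n -iL targets.txt
```

The win with nmap comes from sweeping the whole list in one run instead of paying per-host scheduling overhead.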


OK, pure fping. The system is quite loaded: load average is 466 :), no idle time, 10-11k NVPS performance.

So, Zabbix alone, out of the box, cannot do what we need, and there is a reason to apply the fping and SNMP patches, since they really make it possible to collect 15 times more data on the same hardware.



