On 06/04/2014 08:12 AM, Carlos E. R. wrote:
On 2014-06-04 12:04, Anton Aylward wrote:
The indexing of the database did all the hard work :-)
All alarms, analysis, and reports were made from that SQL database. In the multi-machine environment it was the only way to deal with all the info.
The syslog was handled by the InfoSec department, not by operations. Part of the reason for the database was to be able to trace activity that spanned machines, routers, and firewalls.
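To give the flavour of it, here is a minimal sketch of the sort of cross-machine question such a database makes cheap. The table name, columns, and the sqlite3 backend are my own assumptions for illustration; the real system was a proper SQL server.

    import sqlite3

    conn = sqlite3.connect("syslog.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS events (
                        ts       TEXT,     -- ISO-8601 timestamp
                        host     TEXT,     -- machine, router or firewall name
                        severity INTEGER,
                        message  TEXT)""")
    conn.execute("CREATE INDEX IF NOT EXISTS idx_events_host_ts ON events (host, ts)")

    # "Which hosts saw authentication failures in the same hour?"  The sort
    # of question grep across a dozen machines cannot answer easily, but an
    # indexed table can.
    rows = conn.execute("""
        SELECT substr(ts, 1, 13) AS hour, host, count(*) AS hits
          FROM events
         WHERE message LIKE '%authentication failure%'
         GROUP BY hour, host
         ORDER BY hour, hits DESC""").fetchall()

    for hour, host, hits in rows:
        print(hour, host, hits)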
There is simply no way that a human could analyse that much data, so a human-readable text file was irrelevant.
True.
And I worked with a product that automatically collected text logs from ancient machines (that logged to a printer or to internal files), scanned them, and converted all of that into a central structured database, which was then used to generate alarms and alert technicians in the central control center. Quite an expensive product, I believe.
So yes, binary, database-backed system logs do make a lot of sense.
However, in that same control center, I often used plain zgrep (or rather, cgrep) on an alternative Linux server that also collected the same text logs, in order to quickly search for issues, mostly issues that were not clearly defined.
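For what it's worth, that kind of quick-and-dirty search over rotated, compressed logs is easy to reproduce anywhere. A small Python sketch follows; the paths and the pattern are illustrative assumptions, and zgrep itself is of course simpler:

    import glob, gzip, re

    pattern = re.compile(r"(error|fail|refused)", re.IGNORECASE)

    # Scan every rotated, gzipped log in turn and print matching lines.
    for path in sorted(glob.glob("/var/log/messages*.gz")):
        with gzip.open(path, "rt", errors="replace") as fh:
            for line in fh:
                if pattern.search(line):
                    print(f"{path}: {line.rstrip()}")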
I did mention 'swatch', which uses perl to do the grepping. It's useful in that it can deal with things in close proximity in the logs, but it is not a cross-correlator or 'deep pattern' tool.
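The proximity idea is simple enough to sketch. Here is a toy Python version: flag when one pattern turns up within a few lines of another. The patterns, window size, and input file are assumptions for illustration; swatch itself does far more, and this is nothing like a real cross-correlation engine.

    import re, sys

    PAT_A = re.compile(r"Failed password")
    PAT_B = re.compile(r"session opened")
    WINDOW = 5   # how many lines apart still counts as "close proximity"

    last_a = None  # line number of the most recent PAT_A hit
    with open(sys.argv[1], errors="replace") as fh:
        for lineno, line in enumerate(fh, 1):
            if PAT_A.search(line):
                last_a = lineno
            elif PAT_B.search(line) and last_a is not None and lineno - last_a <= WINDOW:
                print(f"lines {last_a}-{lineno}: failed login closely followed by a session open")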
However, the problem was that the central machine and its database were very slow and difficult to handle.
I've observed that in some products I've tested, not just syslog tools. They look fine at the trade show/exhibition, but under real loads....
The interface was via web browsers and Java or JavaScript (I don't remember which, sorry). Java, probably. Our desktop machines, with about 64 MiB of RAM, bent under that load. So we technicians usually bypassed the system if we could - but the idea of the organized database was absolutely correct, it was just incorrectly implemented.
Yes, that sums it up well. Databases are intensive applications. Used in a bank to watch for all manner of attacks, frauds, and other chicanery, they are very, very different from home or small-network applications. Syslog databases run full out, non-stop. If your database or your h/w has any weaknesses or errors, then they will be shown up. And yes, Java and JavaScript simply won't cut it.
So if systemd adds a binary log, and it is not enforced, I see no issue with it. Welcome! Actually, that log is better for finding issues with systemd itself and its services.
Indeed. I've used it to trace problems with booting with proprietary drivers.
The only verified big issue with the persistent systemd journal that I know about is that searching it is currently horribly slow if the disk is magnetic. Just printing the log to the screen makes the disk head move madly. There have been reports here of it taking hours, not even minutes.
But those are implementation issues. The idea is sound.
And yes, I've seen poorly implemented DBs do that. Try doing proper SQL queries with a dBase-style database! I don't think it's simply magnetic vs. SSD. With ext4 you can preallocate and make sure the database - whatever database - is 'all in one place'. Even granted that, there could be many reasons for heavy disk activity. One can't make sweeping assertions, which many contributors to this thread have been doing. I'd ask WHY there was all that disk activity. What is the layout on disk of that log file? Sequential records? (That is often a problem with databases that log time-sequential records: hit index, hit file, hit index, hit file; and the records are stored for the convenience of space packing rather than speed of access.)
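On the preallocation point, the mechanics are straightforward. A sketch, where the path and size are my own illustrative choices; on Linux/ext4 os.posix_fallocate ends up as an fallocate() call, reserving the file's blocks up front so later appends don't scatter extents across a magnetic disk:

    import os

    path = "/var/log/journal-preallocated.db"   # illustrative path, not a real journal file
    size = 512 * 1024 * 1024                    # reserve 512 MiB up front

    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    try:
        # Ensure the blocks are actually allocated on disk, not just promised.
        os.posix_fallocate(fd, 0, size)
    finally:
        os.close(fd)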
In fact, you can also tell rsyslog and others to write the traditional syslog into a binary MySQL database! Nothing new here.
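(The rsyslog feature referred to is its MySQL output module, ommysql. As a rough stand-in for the idea - text log lines in, indexed rows out - here is a tiny Python analogue using SQLite; the file path, the lack of any parsing, and the one-column schema are all assumptions for illustration, not how rsyslog actually stores records.)

    import sqlite3

    conn = sqlite3.connect("messages.db")
    conn.execute("CREATE TABLE IF NOT EXISTS messages (raw TEXT)")

    # Load an existing plain-text syslog into the table, one row per line.
    with open("/var/log/messages", errors="replace") as fh:
        conn.executemany("INSERT INTO messages (raw) VALUES (?)",
                         ((line.rstrip("\n"),) for line in fh))
    conn.commit()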
Indeed.