On 2017-03-06 09:26, Stefan Seyfried wrote:
Hi Carlos,
On 06.03.2017 05:18, Carlos E. R. wrote:
Example on machine with SSD, both journal and syslog:
cer@Isengard:~> journalctl --disk-usage
Archived and active journals take up 856.0M on disk.
cer@Isengard:~> time journalctl | wc -l
733258   <======
It would actually be interesting to use "wc -c" to see how much actual data comes out.
cer@Isengard:~> time journalctl | wc -c
57953583

real    0m31.913s
user    0m26.989s
sys     0m7.521s
cer@Isengard:~> journalctl --disk-usage
Archived and active journals take up 864.0M on disk.

More than ten times.
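The overhead ratio implied by the two numbers above can be computed directly (assuming the "856.0M" that journalctl reports means MiB):

```shell
# Overhead ratio: on-disk journal size vs. exported plain text.
# Assumption: journalctl --disk-usage's "856.0M" is MiB (2^20 bytes).
disk_bytes=$((856 * 1024 * 1024))   # from: journalctl --disk-usage
text_bytes=57953583                 # from: journalctl | wc -c
awk -v d="$disk_bytes" -v t="$text_bytes" 'BEGIN { printf "%.1fx\n", d / t }'
# → 15.5x
```

So "more than ten times" is, if anything, an understatement here.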
I was very surprised to find my journal being 1.4GB while containing only about 150MB worth of log data... But this might of course be my fault for not tuning the journal properly and just using the defaults.
You are right. But what tuning?
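For what it's worth, the relevant knobs live in /etc/systemd/journald.conf. A minimal sketch (the values below are illustrative examples, not recommendations):

```
# /etc/systemd/journald.conf -- example values, not recommendations
[Journal]
SystemMaxUse=200M        # cap total disk usage of persistent journals
MaxRetentionSec=1month   # drop entries older than this
Compress=yes             # per-object compression (the default)
```

After editing, restarting systemd-journald applies the limits, and "journalctl --vacuum-size=200M" (or --vacuum-time=) trims already-archived journal files immediately. None of this changes the per-entry overhead, though; it only bounds the total size.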
It always takes about the same time. Almost all RAM is cache, and the machine is not busy. Almost all the time spent reading the journal is user CPU time, not sys. So it is time spent decoding the files, not reading them.
Yes, journal reading is nowadays, especially when running from SSD, clearly CPU bound.
Syslog has about the same lines as the journal, compressed in much less size (16 megs), and reads much faster. I don't understand why the journal is so big in size; there must be redundancy of some sort.
Look above, on my home systems, it has about a 10x overhead:
About the same on mine. More than ten, actually.
server:~ # journalctl --disk-usage
Archived and active journals take up 1.3G on disk.
server:~ # time journalctl -m | wc -c
150653116
real 0m24.173s user 0m22.660s sys 0m9.015s
(Hot caches). On my notebook, it's even worse:
susi:~ # journalctl --disk-usage
Archived and active journals take up 792.6M in the file system.
susi:~ # time journalctl -m | wc -c
5061219
real 0m1,724s user 0m1,696s sys 0m0,448s
That's 790MB for 3 weeks worth of logs.
Mine starts November 26.
susi:~ # du -sh /var/log/journal/ /var/log/
793M    /var/log/journal/
101M    /var/log/
So everything else (including ~20MB of YaST2 logs, ~25MB of zypp/zypper, etc.) takes 101M for over a year of logs, while the journal takes almost 800MB for 3 weeks.
I'm very impressed.
Yes, similarly on my desktop machine last time I looked.
My interpretation is that it has to read, decompress, and sort.
Compression is probably done per record, not per file. And only the text. I hope.
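If compression really is per record, that alone would explain much of the bloat: each compressed record pays the container's fixed header overhead, and short log lines barely compress on their own. A quick sketch of the effect using gzip as a stand-in (gzip is not what journald uses internally; this only illustrates per-record vs. whole-stream compression):

```shell
# Compare compressing each line separately vs. the whole stream at once.
# gzip here is only a stand-in to illustrate the per-record penalty.
printf 'systemd[1]: Started Session %d of user root.\n' $(seq 1 200) > /tmp/lines.txt

# Whole stream compressed in one go: repetition across lines is exploited.
whole=$(gzip -c /tmp/lines.txt | wc -c)

# Each line compressed on its own: fixed header/trailer overhead per record,
# and no cross-line redundancy to exploit.
per_record=0
while IFS= read -r line; do
  n=$(printf '%s\n' "$line" | gzip -c | wc -c)
  per_record=$((per_record + n))
done < /tmp/lines.txt

echo "whole-stream: $whole bytes, per-record: $per_record bytes"
```

On near-identical log lines the per-record total comes out many times larger than the whole-stream size, which matches the observation that syslog's per-file compression gets 16MB where the journal needs hundreds.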
Well, it looks like the compression is not very effective... :-)
Not if there is some redundancy...

--
Cheers / Saludos,
Carlos E. R. (from 42.2 x86_64 "Malachite" (Minas Tirith))