Looking into installing a "caching DNS" server on openSUSE Leap 15.3, mainly to play nice with the anti SPAM "black list" providers, which are currently rejecting requests "blocked due to usage of an open resolver". Rumor has it that installing a "properly configured caching DNS server" is the fix du jour. Never done this. Never. Honest. Is "BIND" still the way to go?
joe a wrote:
Looking into installing a "caching DNS" server on openSUSE Leap 15.3, mainly to play nice with the anti SPAM "black list" providers, which are currently rejecting requests "blocked due to usage of an open resolver".
Rumor has it that installing a "properly configured caching DNS server" is the fix du jour.
Never done this. Never. Honest. Is "BIND" still the way to go?
BIND is certainly the traditional way, but I would suggest you take a look at e.g. dnsmasq. Much simpler for what you want to do. -- Per Jessen, Zürich (10.0°C)
On 1/6/2023 10:31 AM, Per Jessen wrote:
joe a wrote:
Looking into installing a "caching DNS" server on openSUSE Leap 15.3, mainly to play nice with the anti SPAM "black list" providers, which are currently rejecting requests "blocked due to usage of an open resolver".
Rumor has it that installing a "properly configured caching DNS server" is the fix du jour.
Never done this. Never. Honest. Is "BIND" still the way to go?
BIND is certainly the traditional way, but I would suggest you take a look at e.g. dnsmasq. Much simpler for what you want to do.
Thanks, but what I have read so far recommends against dnsmasq, I think because it "only forwards". But I could be wrong as to the reason, this early in my waking period.
joe a wrote:
On 1/6/2023 10:31 AM, Per Jessen wrote:
joe a wrote:
Looking into installing a "caching DNS" server on openSUSE Leap 15.3, mainly to play nice with the anti SPAM "black list" providers, which are currently rejecting requests "blocked due to usage of an open resolver".
Rumor has it that installing a "properly configured caching DNS server" is the fix du jour.
Never done this. Never. Honest. Is "BIND" still the way to go?
BIND is certainly the traditional way, but I would suggest you take a look at e.g. dnsmasq. Much simpler for what you want to do.
Thanks, but what I have read so far recommends against dnsmasq, I think because it "only forwards". But I could be wrong as to the reason, this early in my waking period.
Yes, dnsmasq also caches, as it should. Try googling "dnsmasq caching dns server" -- Per Jessen, Zürich (9.4°C) Member, openSUSE Heroes (2016 - present) We're hiring - https://en.opensuse.org/openSUSE:Heroes
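For scale, a caching-forwarder dnsmasq setup can be as small as the following sketch; the listen address, the 192.0.2.53 upstream and appending straight to /etc/dnsmasq.conf are illustrative assumptions, not something confirmed in this thread:

# install dnsmasq and give it a minimal caching-forwarder configuration
sudo zypper install dnsmasq
sudo tee -a /etc/dnsmasq.conf >/dev/null <<'EOF'
# answer only on the loopback interface
listen-address=127.0.0.1
# ignore /etc/resolv.conf and forward to a hypothetical upstream resolver
no-resolv
server=192.0.2.53
# keep up to 10000 answers in the cache
cache-size=10000
EOF
sudo systemctl enable --now dnsmasq
# then point the system resolver at it, e.g. "nameserver 127.0.0.1"
# in /etc/resolv.conf (or via netconfig)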
On 1/6/2023 10:25 AM, joe a wrote:
Looking into installing a "caching DNS" server on openSUSE Leap 15.3, mainly to play nice with the anti SPAM "black list" providers, which are currently rejecting requests "blocked due to usage of an open resolver".
Rumor has it that installing a "properly configured caching DNS server" is the fix du jour.
Never done this. Never. Honest. Is "BIND" still the way to go?
Using YAST, I do not find BIND, but do find pdnsd and unbound, when searching for "caching DNS". Since I find "unbound" listed as an option when looking into how to resolve the rejection, perhaps that is the way to go?
Unbound works just as well. I actually see it used with many mail setups. On 1/6/23 16:31, joe a wrote:
On 1/6/2023 10:25 AM, joe a wrote:
Looking into installing a "caching DNS" server on openSUSE Leap 15.3, mainly to play nice with the anti SPAM "black list" providers, which are currently rejecting requests "blocked due to usage of an open resolver".
Rumor has it that installing a "properly configured caching DNS server" is the fix du jour.
Never done this. Never. Honest. Is "BIND" still the way to go?
Using YAST, I do not find BIND, but do find pdnsd and unbound, when searching for "caching DNS".
Since I find "unbound" listed as an option when looking into how to resolve the rejection, perhaps that is the way to go?
joe a wrote:
On 1/6/2023 10:25 AM, joe a wrote:
Looking into installing a "caching DNS" server on openSUSE Leap 15.3, mainly to play nice with the anti SPAM "black list" providers, which are currently rejecting requests "blocked due to usage of an open resolver".
Rumor has it that installing a "properly configured caching DNS server" is the fix du jour.
Never done this. Never. Honest. Is "BIND" still the way to go?
Using YAST, I do not find BIND, but do find pdnsd and unbound, when searching for "caching DNS".
The BIND package is called "named". -- Per Jessen, Zürich (9.5°C)
On 2023-01-06 10:25, joe a wrote:
Looking into installing a "caching DNS" server on openSUSE Leap 15.3, mainly to play nice with the anti SPAM "black list" providers, which are currently rejecting requests "blocked due to usage of an open resolver".
I run a resolver on pfSense and haven't noticed that. However, it also has a cache.
joe a composed on 2023-01-06 10:25 (UTC-0500):
Looking into installing a "caching DNS" server on openSUSE Leap 15.3, mainly to play nice with the anti SPAM "black list" providers, which are currently rejecting requests "blocked due to usage of an open resolver".
Rumor has it that installing a "properly configured caching DNS server" is the fix du jour.
Never done this. Never. Honest. Is "BIND" still the way to go?
The way to go does not include 15.3: <https://lists.opensuse.org/archives/list/announce@lists.opensuse.org/thread/OCJDZIP63AUG4TW4W5JKR6TVWZ6N2TMT/> 15.4 was released 7 months ago. -- Evolution as taught in public schools is, like religion, based on faith, not based on science. Team OS/2 ** Reg. Linux User #211409 ** a11y rocks! Felix Miata
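For completeness, the usual zypper route from 15.3 to 15.4 looks roughly like this; it assumes the standard repo files (back them up first) and that nothing third-party pins the old release:

# point the standard repos at the release variable instead of a fixed 15.3
sudo cp -r /etc/zypp/repos.d /etc/zypp/repos.d.bak
sudo sed -i 's/15\.3/${releasever}/g' /etc/zypp/repos.d/*.repo
# refresh against 15.4 and run the distribution upgrade
sudo zypper --releasever=15.4 refresh
sudo zypper --releasever=15.4 dup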
On 06-01-2023 16:25, joe a wrote:
Looking into installing a "caching DNS" server on openSUSE Leap 15.3, mainly to play nice with the anti SPAM "black list" providers, which are currently rejecting requests "blocked due to usage of an open resolver".
Rumor has it that installing a "properly configured caching DNS server" is the fix du jour.
Never done this. Never. Honest. Is "BIND" still the way to go?
You can try "pdns-recursor".
On 1/6/23 09:25, joe a wrote:
Looking into installing a "caching DNS" server on openSUSE Leap 15.3, mainly to play nice with the anti SPAM "black list" providers, which are currently rejecting requests "blocked due to usage of an open resolver".
Rumor has it that installing a "properly configured caching DNS server" is the fix du jour.
Never done this. Never. Honest. Is "BIND" still the way to go?
Note "open resolver" is a catch-all used by spamhaus in a number of circumstances, see: https://www.spamresource.com/2021/10/be-careful-using-spamhaus-with-open.htm... Setup BIND, Small learning curve to begin, but then it just runs -- forever (almost). Added benefit is you can disable your router dhcp and move that to your server and do dynamic updates of your DNS zone files from dhcp -- really convenient and quite a bit more secure than relying on consumer grade routers to do it. You will have a copy of the Bind8ARM (administrator reference manual), but you can access on-line at https://kb.isc.org/docs/ At one point in time Yast had a DNS module that could help you setup your /etc/named.conf and your forward and reverse zone files (I don't know if that has been kept current). Give it a go, if not, I've just configured it all by hand for the last 20 years. There are relatively few config options for the /etc/named.conf for use with a simple bind setup. Then you just need your forward and reverse zone files (of which you can use a template and just modify for your setup) If you get stuck, you can drop another reply and I'm happy to include the basics of each. Worth reading up a bit on how the setup works, then the examples will make more sense. A relatively short walk-though is https://wiki.archlinux.org/title/BIND (don't worry about chrooting BIND and ignore DNS over TLS or HTTPS and the dnssec signed zones for now, if you don't need IPv6, you use the same setup, but include listen-on-v6 { none; }; as a config option) You will be glad you set up BIND and will likely enjoy it enough you finish setting up dhcp with dynamic updates. (especially if you have kids or clients that expect internet service when then bring a new device in). They get an address and your DNS zones are updated providing forward/reverse A and PTR records -- automagically... Good luck and let us know how it goes. -- David C. Rankin, J.D.,P.E.
On 2023-01-07 09:50, David C. Rankin wrote:
On 1/6/23 09:25, joe a wrote:
Looking into installing a "caching DNS" server on openSUSE Leap 15.3, mainly to play nice with the anti SPAM "black list" providers, which are currently rejecting requests "blocked due to usage of an open resolver".
Rumor has it that installing a "properly configured caching DNS server" is the fix du jour.
Never done this. Never. Honest. Is "BIND" still the way to go?
Note "open resolver" is a catch-all used by spamhaus in a number of circumstances, see:
https://www.spamresource.com/2021/10/be-careful-using-spamhaus-with-open.htm...
«If you do use any of Spamhaus's DNSBLs, though, make sure you're not doing it via a public DNS resolver or via any DNS server that is attempting a high volume of queries against Spamhaus without being registered with them. If you do, you risk the queries triggering blocks simply due to the sheer volume of DNS traffic Spamhaus is receiving. Meaning you'll end up blocking mail that wasn't spam and that you probably didn't mean to block.»

So, what they mean is that you have to register with them and probably pay.

«Let me be clear: I strongly recommend AGAINST using public DNS servers to query Spamhaus DNSBLs. In my testing of various common public DNS servers, I saw problems. In particular, Spamhaus intermittently rejects queries from Quad9's public DNS servers with the "open resolver" error, and in the case of Google Public DNS, Alternate DNS, Yandex and Fourth Estate's public resolvers, all queries resulted in NXDOMAIN (no DNS result found) even for IP addresses that I know were listed on one or more Spamhaus DNSBLs.»

Ok, so they want to send those queries directly to Spamhaus instead, and the rest to our upstream DNS resolver?

The article doesn't say what to do.
Setup BIND,
Small learning curve to begin, but then it just runs -- forever (almost). Added benefit is you can disable your router dhcp and move that to your server and do dynamic updates of your DNS zone files from dhcp -- really convenient and quite a bit more secure than relying on consumer grade routers to do it.
dnsmasq also does both, and is easier to configure and handle.
You will have a copy of the Bind8ARM (administrator reference manual), but you can access on-line at https://kb.isc.org/docs/
At one point in time Yast had a DNS module that could help you setup your /etc/named.conf and your forward and reverse zone files (I don't know if that has been kept current). Give it a go, if not, I've just configured it all by hand for the last 20 years. There are relatively few config options for the /etc/named.conf for use with a simple bind setup. Then you just need your forward and reverse zone files (of which you can use a template and just modify for your setup)
The default configuration of bind was a caching resolver; it needed nothing done. But sending all queries to the root servers is not correct, either. -- Cheers / Saludos, Carlos E. R. (from 15.4 x86_64 at Telcontar)
On 1/7/23 03:16, Carlos E. R. wrote:
So, what they mean is that you have to register with them and probably pay.
My read was they know a large mail org or list will do a lot of lookups, and they are asking that if you do that, you register the resolver address with them so they know the rapid-fire DNSBL lookups are legit and not some DDoS attack. The whole block list game has been rather murky over the past 10 years or so. More have come and gone than I can count (but spamhaus is still there) -- David C. Rankin, J.D.,P.E.
On 2023-01-07 10:36, David C. Rankin wrote:
On 1/7/23 03:16, Carlos E. R. wrote:
So, what they mean is that you have to register with them and probably pay.
My read was they know a large mail org or list will do a lot of lookups, and they are asking that if you do that, you register the resolver address with them so they know the rapid-fire DNSBL lookups are legit and not some DDoS attack.
Undoable for me (who doesn't use Spamhaus, anyway), on a dynamic address.
The whole block list game has been rather murky over the past 10 years or so. More have come and gone than I can count (but spamhaus is still there)
Yeah -- Cheers / Saludos, Carlos E. R. (from 15.4 x86_64 at Telcontar)
Carlos E. R. wrote:
https://www.spamresource.com/2021/10/be-careful-using-spamhaus-with-open.htm...
«If you do use any of Spamhaus's DNSBLs, though, make sure you're not doing it via a public DNS resolver or via any DNS server that is attempting a high volume of queries against Spamhaus without being registered with them. If you do, you risk the queries triggering blocks simply due to the sheer volume of DNS traffic Spamhaus is receiving. Meaning you'll end up blocking mail that wasn't spam and that you probably didn't mean to block.»
So, what they mean is that you have to register with them and probably pay.
Only when you exceed a certain amount of traffic, and the threshold is easily high enough to accommodate private usage.
Ok, so they want to send those queries directly to Spamhaus instead, and the rest to our upstream DNS resolver?
The article doesn't say what to do.
"make sure you're not doing it via a public DNS resolver" Any semi-competent DNS admin will know what to do. Otherwise, you can just switch of the Spamhaus queries. (in postfix or spamassassin or whatever else you're using).
But sending all queries to the root servers is not correct, either.
That isn't how it works - your DNS asks the root servers where to send queries for "x.x.x" and then sends queries as directed. -- Per Jessen, Zürich (9.1°C) Member, openSUSE Heroes (2016 - present) We're hiring - https://en.opensuse.org/openSUSE:Heroes
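That root-referral walk is easy to watch with dig (from the bind-utils package), which will follow the delegation chain itself when asked to trace:

# start at the root servers and follow the referrals down to the answer
dig +trace www.opensuse.org
# compare with a plain cached/forwarded lookup against the local resolver
dig @127.0.0.1 www.opensuse.org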
On 2023-01-07 10:59, Per Jessen wrote:
Carlos E. R. wrote:
https://www.spamresource.com/2021/10/be-careful-using-spamhaus-with-open.htm...
«If you do use any of Spamhaus's DNSBLs, though, make sure you're not doing it via a public DNS resolver or via any DNS server that is attempting a high volume of queries against Spamhaus without being registered with them. If you do, you risk the queries triggering blocks simply due to the sheer volume of DNS traffic Spamhaus is receiving. Meaning you'll end up blocking mail that wasn't spam and that you probably didn't mean to block.»
So, what they mean is that you have to register with them and probably pay.
Only when you exceed a certain amount of traffic, and the threshold is easily high enough to accommodate private usage.
Ok, so they want to send those queries directly to Spamhaus instead, and the rest to our upstream DNS resolver?
The article doesn't say what to do.
"make sure you're not doing it via a public DNS resolver"
Any semi-competent DNS admin will know what to do. Otherwise, you can just switch off the Spamhaus queries (in postfix or spamassassin or whatever else you're using).
But sending all queries to the root servers is not correct, either.
That isn't how it works - your DNS asks the root servers where to send queries for "x.x.x" and then sends queries as directed.
I know that (I have done it), but this contradicts the "best practice" of sending queries to your ISP DNS. So, IMHO, we need a configuration that sends queries to our ISP DNS, except those of Spamhaus. -- Cheers / Saludos, Carlos E. R. (from 15.4 x86_64 at Telcontar)
Carlos E. R. wrote:
So, IMHO, we need a configuration that sends queries to our ISP DNS, except those of Spamhaus.
Correct and that is easily done. In your own DNS server, you just add a separate zone config for "zen.spamhaus.org", and clear any forwarders config. That is no big deal in any of the popular dns servers - bind, unbound, dnsmasq or powerdns.

I think this ought to suffice for bind: (from memory)

  zone "zen.spamhaus.org" { forwarders {}; }

-- Per Jessen, Zürich (9.6°C) Member, openSUSE Heroes (2016 - present) We're hiring - https://en.opensuse.org/openSUSE:Heroes
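Spelled out a little more, with global forwarding to the ISP plus the per-zone override, the named.conf fragments might look like this; the 192.0.2.53 forwarder is a placeholder and the snippet is a sketch of BIND's forward-zone mechanism rather than a tested config:

cat <<'EOF' > ~/spamhaus-zone-example.conf
// send everything to the (hypothetical) ISP resolver by default
options {
        forwarders { 192.0.2.53; };
        forward only;
};
// ...except the Spamhaus zone: an empty forwarders list in a forward
// zone cancels the global forwarders, so named recurses on its own
zone "zen.spamhaus.org" {
        type forward;
        forwarders { };
};
EOF
# after merging into named.conf and reloading, 127.0.0.2 is Spamhaus's
# documented always-listed test entry, so this should return an answer:
dig +short 2.0.0.127.zen.spamhaus.org @127.0.0.1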
On 2023-01-07 11:35, Per Jessen wrote:
Carlos E. R. wrote:
So, IMHO, we need a configuration that sends queries to our ISP DNS, except those of Spamhaus.
Correct and that is easily done. In your own DNS server, you just add a separate zone config for "zen.spamhaus.org", and clear any forwarders config. That is no big deal in any of the popular dns servers - bind, unbound, dnsmasq or powerdns.
I think this ought to suffice for bind: (from memory)
zone "zen.spamhaus.org" { forwarders {}; }
Right. I don't imagine right now how to do that in dnsmasq, though. AFAIK, it doesn't do zones. Not related, perhaps, but the other day I processed 81000 emails, which included spamassassin. It took 2217m to do, which means 36.7 mails per minute. I suspect that spamassassin is that slow because it does some online query(s), but I don't know which. -- Cheers / Saludos, Carlos E. R. (from 15.4 x86_64 at Telcontar)
Carlos E. R. wrote:
On 2023-01-07 11:35, Per Jessen wrote:
Carlos E. R. wrote:
So, IMHO, we need a configuration that sends queries to our ISP DNS, except those of Spamhaus.
Correct and that is easily done. In your own DNS server, you just add a separate zone config for "zen.spamhaus.org", and clear any forwarders config. That is no big deal in any of the popular dns servers - bind, unbound, dnsmasq or powerdns.
I think this ought to suffice for bind: (from memory)
zone "zen.spamhaus.org" { forwarders {}; }
Right.
I don't imagine right now how to do that in dnsmasq, though. AFAIK, it doesn't do zones.
Four-letter-acronym that implies "consult the documentation" :-)

Yes it does. dnsmasq (from my openSUSE infra config):

  server=/infra.opensuse.org/nn.nn.nn.nn

That says to direct queries for that zone to nn.nn.nn.nn - I expect there is a way to say "only direct queries" too.
Not related, perhaps, but the other day I processed 81000 emails, which included spamassassin. It took 2217m to do, which means 36.7 mails per minute. I suspect that spamassassin is that slow because it does some online query(s), but I don't know which.
Check your spamassassin config ? in /var/lib/spamassassin/version - somewhere like that. If you grep '^tflags.*net', I think that'll give you all the tests that rely on network access. -- Per Jessen, Zürich (8.9°C) Member, openSUSE Heroes (2016 - present) We're hiring - https://en.opensuse.org/openSUSE:Heroes
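Concretely, against the rule files that sa-update keeps under /var/lib/spamassassin, something along these lines lists the rule names flagged as network tests (the channel directory name is the usual sa-update layout and may differ):

# collect the names of rules whose tflags include "net"
grep -rh '^tflags' /var/lib/spamassassin/*/updates_spamassassin_org/*.cf \
    | grep -w net | awk '{print $2}' | sort -u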
On 2023-01-07 at 13:01 +0100, Per Jessen wrote:
Carlos E. R. wrote:
On 2023-01-07 11:35, Per Jessen wrote:
Carlos E. R. wrote:
So, IMHO, we need a configuration that sends queries to our ISP DNS, except those of Spamhaus.
Correct and that is easily done. In your own DNS server, you just add a separate zone config for "zen.spamhaus.org", and clear any forwarders config. That is no big deal in any of the popular dns servers - bind, unbound, dnsmasq or powerdns.
I think this ought to suffice for bind: (from memory)
zone "zen.spamhaus.org" { forwarders {}; }
Right.
I don't imagine right now how to do that in dnsmasq, though. AFAIK, it doesn't do zones.
Four-letter-acronym that implies "consult the documentation" :-)
Sure, I wasn't there yet. Housecleaning day ;-)
Yes it does.
dnsmasq: (from my openSUSE infra config)
server=/infra.opensuse.org/nn.nn.nn.nn
That says to direct queries for that zone to nn.nn.nn.nn - I expect there is a way to say "only direct queries" too.
Ah, yes. I have

  server=/scar.opensuse.org/...

I think my confusion is from recollecting that you can not define a zone, at least when I looked.
Not related, perhaps, but the other day I processed 81000 emails, which included spamassassin. It took 2217m to do, which means 36.7 mails per minute. I suspect that spamassassin is that slow because it does some online query(s), but I don't know which.
Check your spamassassin config ? in /var/lib/spamassassin/version - somewhere like that. If you grep '^tflags.*net', I think that'll give you all the tests that rely on network access.
Sure, but that doesn't say which one is slow.

cer@Telcontar:~> rpm -q spamassassin
spamassassin-3.4.5-12.13.1.x86_64
cer@Telcontar:~> l /var/lib/spamassassin/
total 32
drwxr-xr-x 8 root root 4096 Apr 14 2021 ./
drwxr-xr-x 129 root root 4096 Nov 29 22:37 ../
drwxr-xr-x 3 root root 4096 Sep 25 2010 3.002005/
drwxr-xr-x 3 root root 4096 Dec 24 2016 3.003002/
drwxr-xr-x 3 root root 4096 Aug 8 2019 3.004001/
drwxr-xr-x 3 root root 4096 Apr 11 2021 3.004002/
drwxr-xr-x 3 root root 4096 Dec 29 21:12 3.004005/
drwxr-xr-x 4 root root 4096 Jan 3 2020 compiled/
cer@Telcontar:~>

No version 3.4 there, but 3.004005 was updated recently. Searching for "spamhaus" finds files, notably "20_dnsbl_tests.cf", then "25_uribl.cf", and 30_text_de.cf, or fr, nl, pl, pt.

Anyway, grepping on /var/log/mail-202301*.xz doesn't find "spamhaus" or "open resolver", so that is not my problem. The only dnsmasq error I see is:

<3.4> 2023-01-03T17:05:15.907095+01:00 Telcontar dnsmasq 2179 - - Maximum number of concurrent DNS queries reached (max: 150)

which is smack on when I was processing that batch of mail:

<2.6> 2023-01-03T17:05:14.585077+01:00 Telcontar spamd 1353 - - spamd: processing message <fb931af7-a47d-4ab9-4782-27185208ba7a@sweet-haven.com> for cer:1000
<2.6> 2023-01-03T17:05:15.676857+01:00 Telcontar spamd 1353 - - spamd: clean message (-3.2/5.0) for cer:1000 in 1.1 seconds, 17182 bytes.
<2.6> 2023-01-03T17:05:15.677000+01:00 Telcontar spamd 1353 - - spamd: result: . -3 - BAYES_00,HEADER_FROM_DIFFERENT_DOMAINS,HTML_MESSAGE,MAILING_LIST_MULTI,NICE_REPLY_A,RCVD_IN_DNSWL_MED,RCVD_IN_ZEN_BLOCKED_OPENDNS,RDNS_NONE,SPF_HELO_NONE,SPF_PASS scantime=1.1,size=17182,user=cer,uid=1000,required_score=5.0,rhost=127.0.0.1,raddr=127.0.0.1,rport=50892,mid=<fb931af7-a47d-4ab9-4782-27185208ba7a@sweet-haven.com>,bayes=0.000000,autolearn=disabled
<2.6> 2023-01-03T17:05:15.678778+01:00 Telcontar spamd 4907 - - prefork: child states: BI
<2.6> 2023-01-03T17:05:15.698558+01:00 Telcontar spamd 4907 - - spamd: handled cleanup of child pid [1353] due to SIGCHLD: exit 0
<2.6> 2023-01-03T17:05:15.701015+01:00 Telcontar spamd 4907 - - spamd: server successfully spawned child process, pid 1369
<2.6> 2023-01-03T17:05:15.702027+01:00 Telcontar spamd 4907 - - prefork: child states: II
<2.6> 2023-01-03T17:05:15.812734+01:00 Telcontar spamd 1369 - - spamd: connection from 127.0.0.1 [127.0.0.1]:50902 to port 783, fd 6
<2.6> 2023-01-03T17:05:15.813877+01:00 Telcontar spamd 1369 - - spamd: setuid to cer succeeded
<2.6> 2023-01-03T17:05:15.824538+01:00 Telcontar spamd 1369 - - spamd: processing message <8f2c9cba-1e0b-2f29-0cde-687a2533f0be@gmail.com> for cer:1000
<2.6> 2023-01-03T17:05:20.769119+01:00 Telcontar spamd 1369 - - spamd: clean message (-99.4/5.0) for cer:1000 in 5.0 seconds, 55080 bytes.
<2.6> 2023-01-03T17:05:20.769380+01:00 Telcontar spamd 1369 - - spamd: result: . -99 - BAYES_20,DKIM_ADSP_CUSTOM_MED,DKIM_INVALID,DKIM_SIGNED,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM,HEADER_FROM_DIFFERENT_DOMAINS,HTML_MESSAGE,MAILING_LIST_MULTI,MIME_HTML_ONLY,RCVD_IN_DNSWL_MED,RCVD_IN_ZEN_BLOCKED_OPENDNS,RDNS_NONE,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,URIBL_DBL_BLOCKED_OPENDNS,URIBL_ZEN_BLOCKED_OPENDNS,USER_IN_WELCOMELIST,USER_IN_WHITELIST scantime=5.0,size=55080,user=cer,uid=1000,required_score=5.0,rhost=127.0.0.1,raddr=127.0.0.1,rport=50902,mid=<8f2c9cba-1e0b-2f29-0cde-687a2533f0be@gmail.com>,bayes=0.054831,autolearn=disabled
<2.6> 2023-01-03T17:05:20.777952+01:00 Telcontar spamd 4907 - - prefork: child states: BI
<2.6> 2023-01-03T17:05:20.798632+01:00 Telcontar spamd 4907 - - spamd: handled cleanup of child pid [1369] due to SIGCHLD: exit 0

You can see a message that took 5 seconds to process.

-- Cheers, Carlos E. R. (from openSUSE 15.4 x86_64 at Telcontar)
Carlos E. R. wrote:
Yes it does.
dnsmasq: (from my openSUSE infra config)
server=/infra.opensuse.org/nn.nn.nn.nn
That says to direct queries for that zone to nn.nn.nn.nn - I expect there is a way to say "only direct queries" too.
Ah, yes. I have
server=/scar.opensuse.org/...
I think my confusion is from recollecting that you can not define a zone, at least when I looked.
We are sliding into off-topic, but a zone is merely a DNS term for a domain or part of one. Look up "auth-zone" in the dnsmasq docu.
Check your spamassassin config ? in /var/lib/spamassassin/version - somewhere like that. If you grep '^tflags.*net', I think that'll give you all the tests that rely on network access.
Sure, but that doesn't say which one is slow.
All DNS is slow :-)
Anyway, grepping on /var/log/mail-202301*.xz doesn't find "spamhaus" or "open resolver", so that is not my problem.
The only dnsmasq error I see is:
<3.4> 2023-01-03T17:05:15.907095+01:00 Telcontar dnsmasq 2179 - - Maximum number of concurrent DNS queries reached (max: 150)
That could quite likely delay things for you. Either increase that limit or process less mails concurrently. Depending on your objective. -- Per Jessen, Zürich (9.3°C) Member, openSUSE Heroes (2016 - present) We're hiring - https://en.opensuse.org/openSUSE:Heroes
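The ceiling in that log message is dnsmasq's dns-forward-max, which defaults to 150; raising it is a one-liner (the value 300 is just an example):

# allow more outstanding forwarded queries, then restart dnsmasq
echo 'dns-forward-max=300' | sudo tee -a /etc/dnsmasq.conf >/dev/null
sudo systemctl restart dnsmasq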
On 2023-01-07 15:11, Per Jessen wrote:
Carlos E. R. wrote:
Yes it does.
dnsmasq: (from my openSUSE infra config)
server=/infra.opensuse.org/nn.nn.nn.nn
That says to direct queries for that zone to nn.nn.nn.nn - I expect there is a way to say "only direct queries" too.
Ah, yes. I have
server=/scar.opensuse.org/...
I think my confusion is from recollecting that you can not define a zone, at least when I looked.
We are sliding into off-topic, but a zone is merely a DNS term for a domain or part of one. Look up "auth-zone" in the dnsmasq docu.
Ok. I simply remember that years ago I wanted to define my own faked local domain and couldn't. With bind I can.
Check your spamassassin config ? in /var/lib/spamassassin/version - somewhere like that. If you grep '^tflags.*net', I think that'll give you all the tests that rely on network access.
Sure, but that doesn't say which one is slow.
All DNS is slow :-)
Anyway, grepping on /var/log/mail-202301*.xz doesn't find "spamhaus" or "open resolver", so that is not my problem.
The only dnsmasq error I see is:
<3.4> 2023-01-03T17:05:15.907095+01:00 Telcontar dnsmasq 2179 - - Maximum number of concurrent DNS queries reached (max: 150)
That could quite likely delay things for you. Either increase that limit or process less mails concurrently. Depending on your objective.
No, I was processing them sequentially. One at a time. One of those mails provoked a massive DNS query. In fact, I wish I could process several emails at a time, would speed these jobs. It took two whole days. -- Cheers / Saludos, Carlos E. R. (from 15.4 x86_64 at Telcontar)
Carlos E. R. wrote:
On 2023-01-07 15:11, Per Jessen wrote:
Anyway, grepping on /var/log/mail-202301*.xz doesn't find "spamhaus" or "open resolver", so that is not my problem.
The only dnsmasq error I see is:
<3.4> 2023-01-03T17:05:15.907095+01:00 Telcontar dnsmasq 2179 - - Maximum number of concurrent DNS queries reached (max: 150)
That could quite likely delay things for you. Either increase that limit or process less mails concurrently. Depending on your objective.
No, I was processing them sequentially. One at a time. One of those mails provoked a massive DNS query.
Ah okay, DNS poison. Yeah, those do happen and they are meant to do exactly what you experienced, slow down processing.
In fact, I wish I could process several emails at a time, would speed these jobs. It took two whole days.
Is there any reason you can't? -- Per Jessen, Zürich (9.1°C) Member, openSUSE Heroes (2016 - present) We're hiring - https://en.opensuse.org/openSUSE:Heroes
On 2023-01-07 15:35, Per Jessen wrote:
Carlos E. R. wrote:
On 2023-01-07 15:11, Per Jessen wrote:
Anyway, grepping on /var/log/mail-202301*.xz doesn't find "spamhaus" or "open resolver", so that is not my problem.
The only dnsmasq error I see is:
<3.4> 2023-01-03T17:05:15.907095+01:00 Telcontar dnsmasq 2179 - - Maximum number of concurrent DNS queries reached (max: 150)
That could quite likely delay things for you. Either increase that limit or process less mails concurrently. Depending on your objective.
No, I was processing them sequentially. One at a time. One of those mails provoked a massive DNS query.
Ah okay, DNS poison. Yeah, those do happen and they are meant to do exactly what you experienced, slow down processing.
I can find out that email. Guessing that it is the one that took 5 seconds to process, that is <8f2c9cba-1e0b-2f29-0cde-687a2533f0be@gmail.com>

Located it. It is one of mine to the offtopic list in "text/html" format (i.e., HTML only) with a translated article and lots of links (many to wikipedia ;-) ). _Not_ a DNS bomb :-D
In fact, I wish I could process several emails at a time, would speed these jobs. It took two whole days.
Is there any reason you can't?
Yes.

The command is:

  formail -s procmail ./.procmail_r_gmx < alpine_r_gmx

And the spam rule is this:

  :0f
  | /usr/bin/spamc -s 25000000

  :0 a:
  * ^X-Spam-Status: Yes
  $HOME/Mail/zap_spam_gmx_lists

This method does one at a time. I can not think of a way to make it send more. Of course, one at a time does very clear logs. -- Cheers / Saludos, Carlos E. R. (from 15.4 x86_64 at Telcontar)
Carlos E. R. wrote:
In fact, I wish I could process several emails at a time, would speed these jobs. It took two whole days.
Is there any reason you can't?
Yes.
The command is: formail -s procmail ./.procmail_r_gmx < alpine_r_gmx
Ah, I thought we were talking about a postfix config, sorry.

To speed it up, I would be tempted to split the mailbox into individual mails first, then run something like

  find <maildir> -type f | xargs -P8 procmail ./.procmail_r_gmx

(not tested of course) I don't have any procmail installed, I don't know if it accepts the mailfile name as input instead of stdin. If it doesn't, this little helper script might be useful

  #!/bin/sh
  exec procmail ./.procmail_r_gmx <$1

  find <maildir> -type f | xargs -P8 helperscript

-- Per Jessen, Zürich (8.6°C) Member, openSUSE Heroes (2016 - present) We're hiring - https://en.opensuse.org/openSUSE:Heroes
Per Jessen wrote:
If it doesn't, this little helper script might be useful
#!/bin/sh
exec procmail ./.procmail_r_gmx <$1
find <maildir> -type f | xargs -P8 helperscript
Sorry, either add "-n1" to xargs or amend the script:

  #!/bin/sh
  while test -n "$1"
  do
      procmail ./.procmail_r_gmx <$1
      shift
  done

-- Per Jessen, Zürich (7.4°C) Member, openSUSE Heroes (2016 - present) We're hiring - https://en.opensuse.org/openSUSE:Heroes
On 2023-01-07 21:00, Per Jessen wrote:
Carlos E. R. wrote:
In fact, I wish I could process several emails at a time, would speed these jobs. It took two whole days.
Is there any reason you can't?
Yes.
The command is: formail -s procmail ./.procmail_r_gmx < alpine_r_gmx
Ah, I thought we were talking about a postfix config, sorry.
To speed it up, I would be tempted to split the mailbox into individual mails first, then run something like
find <maildir> -type f | xargs -P8 procmail ./.procmail_r_gmx
(not tested of course) I don't have any procmail installed, I don't know if it accepts the mailfile name as input instead of stdin.
No, it doesn't. That's what formail does.

Oh, the folders are in mbox format, not maildir.

Yes, certainly, I can manually divide the email into 4 folders, say, then process them. I wonder if formail can be convinced to send to 4 destinations in round-robin; that would be nice.
If it doesn't, this little helper script might be useful
#!/bin/sh
exec procmail ./.procmail_r_gmx <$1
find <maildir> -type f | xargs -P8 helperscript
Needs formail. And mbox.

But my next test will be what I mentioned on the other thread: tell dnsmasq to send to bind on another machine which does the root query route, and reprocess to find out the time it takes. Maybe process just a thousand mails (including the ones already found to be spam).

And another run doing

  RCVD_IN_ZEN 0
  RCVD_IN_XBL 0
  RCVD_IN_PBL 0

If that runs faster, it may be all I need.

Not now, I am procrastinating. I should be out walking. I see my doctor pointing a finger. Thou will walk! -- Cheers / Saludos, Carlos E. R. (from 15.4 x86_64 at Telcontar)
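For that second run, zeroing those rules can go into the SpamAssassin user prefs; a sketch that assumes the rule names above match the installed rule set (check the .cf files or spamassassin --lint first):

# disable the Spamhaus DNSBL scores for the test run
cat >> ~/.spamassassin/user_prefs <<'EOF'
score RCVD_IN_ZEN 0
score RCVD_IN_XBL 0
score RCVD_IN_PBL 0
EOF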
Carlos E. R. wrote:
On 2023-01-07 21:00, Per Jessen wrote:
Carlos E. R. wrote:
In fact, I wish I could process several emails at a time, would speed these jobs. It took two whole days.
Is there any reason you can't?
Yes.
The command is: formail -s procmail ./.procmail_r_gmx < alpine_r_gmx
Ah, I thought we were talking about a postfix config, sorry.
To speed it up, I would be tempted to split the mailbox into individual mails first, then run something like
find <maildir> -type f | xargs -P8 procmail ./.procmail_r_gmx
(not tested of course) I don't have any procmail installed, I don't know if it accepts the mailfile name as input instead of stdin.
No, it doesn't.
I did have a suspicion. Okay.
Oh, the folders are in mbox format, not maildir.
Yes, I am well aware, formail only deals with mbox.
If it doesn't, this little helper script might be useful
#!/bin/sh
exec procmail ./.procmail_r_gmx <$1
find <maildir> -type f | xargs -P8 helperscript
Needs formail. And mbox.
Yes. So? It doesn't need anything you haven't been already using in your single-threaded version. -- Per Jessen, Zürich (7.2°C) Member, openSUSE Heroes (2016 - present) We're hiring - https://en.opensuse.org/openSUSE:Heroes
On 2023-01-08 10:01, Per Jessen wrote:
Carlos E. R. wrote:
On 2023-01-07 21:00, Per Jessen wrote:
Carlos E. R. wrote:
In fact, I wish I could process several emails at a time, would speed these jobs. It took two whole days.
Is there any reason you can't?
Yes.
The command is: formail -s procmail ./.procmail_r_gmx < alpine_r_gmx
Ah, I thought we were talking about a postfix config, sorry.
To speed it up, I would be tempted to split the mailbox into individual mails first, then run something like
find <maildir> -type f | xargs -P8 procmail ./.procmail_r_gmx
(not tested of course) I don't have any procmail installed, I don't know if it accepts the mailfile name as input instead of stdin.
No, it doesn't.
I did have a suspicion. Okay.
Oh, the folders are in mbox format, not maildir.
Yes, I am well aware, formail only deals with mbox.
If it doesn't, this little helper script might be useful
#!/bin/sh
exec procmail ./.procmail_r_gmx <$1
find <maildir> -type f | xargs -P8 helperscript
Needs formail. And mbox.
Yes. So? It doesn't need anything you haven't been already using in your single-threaded version.
But "find" will not work. My 80000 mails are all in a single file. Maybe (formail): -n [maxprocs] Tell formail not to wait for every program to finish before starting the next (causes splits to be processed in parallel). Maxprocs optionally specifies an upper limit on the number of concurrently running processes. -s The input will be split up into separate mail messages, and piped into a program one by one (a new program is started for every part). -s has to be the last option specified, the first argument following it is expected to be the name of a program, any other arguments will be passed along to it. If you omit the program, then formail will simply concatenate the split mails on stdout again. See FILENO. However, if that works, the logs are undecipherable. Choose poison. -- Cheers / Saludos, Carlos E. R. (from 15.4 x86_64 at Telcontar)
Carlos E. R. wrote:
On 2023-01-08 10:01, Per Jessen wrote:
Oh, the folders are in mbox format, not maildir.
Yes, I am well aware, formail only deals with mbox.
If it doesn't, this little helper script might be useful
#!/bin/sh
exec procmail ./.procmail_r_gmx <$1
find <maildir> -type f | xargs -P8 helperscript
Needs formail. And mbox.
Yes. So? It doesn't need anything you haven't been already using in your single-threaded version.
But "find" will not work. My 80000 mails are all in a single file.
Hello. Are you speed reading again? I wrote:
To speed it up, I would be tempted to split the mailbox into individual mails first, then run something like ...
-- Per Jessen, Zürich (7.2°C) Have worked in the anti-spam business for eighteen years
On 2023-01-08 12:57, Per Jessen wrote:
Carlos E. R. wrote:
On 2023-01-08 10:01, Per Jessen wrote:
Oh, the folders are in mbox format, not maildir.
Yes, I am well aware, formail only deals with mbox.
If it doesn't, this little helper script might be useful
#!/bin/sh
exec procmail ./.procmail_r_gmx <$1
find <maildir> -type f | xargs -P8 helperscript
Needs formail. And mbox.
Yes. So? It doesn't need anything you haven't been already using in your single-threaded version.
But "find" will not work. My 80000 mails are all in a single file.
Hello. Are you speed reading again? I wrote:
To speed it up, I would be tempted to split the mailbox into individual mails first, then run something like ...
Oh. Split into 80000 individual mails... One per file, I assume. How? The idea is so strange that it did not register. Convert the mail folder to maildir?

It seems more feasible to me to divide the folder manually with Alpine into five folders, say, and process them concurrently. That's definitely doable, but I don't see how to automate the dividing action. But even doing it manually would save time and would be worth it.

Google doesn't find how to split an mbox automatically, only manually. It can be done with formail and procmail, and a rule that sorts based on some criteria procmail understands, like "from". I can not imagine doing it by a count criterion.

Found something:

<https://stackoverflow.com/questions/28110536/how-to-split-an-mbox-file-into-n-mb-big-chunks-using-the-terminal>

This script might do it:

  awk 'BEGIN{chunk=0} /^From /{msgs++;if(msgs==1000){msgs=0;chunk++}}{print > "chunk_" chunk ".txt"}' mbox

This is another version:

  BEGIN{chunk=0;filesize=0;}
  /^From /{
      if(filesize>=40000000){   #file size per chunk in byte
          close("chunk_" chunk ".txt");
          filesize=0;
          chunk++;
      }
  }
  {filesize+=length()}
  {print > ("chunk_" chunk ".txt")}

Another method (I suspected it):

  formail -100 -s <google.mbox >import-01.mbox
  formail +100 -100 -s <google.mbox >import-02.mbox
  formail +200 -100 -s <google.mbox >import-03.mbox

I like this one. But not yet a method to divide into a number of, say, five files. Maybe count the number of mails in the folder, and then calculate the break points for the recipe above. Could be scripted. -- Cheers / Saludos, Carlos E. R. (from 15.4 x86_64 at Telcontar)
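That last idea can be scripted directly on top of the formail recipe: count the messages, work out the chunk size, and loop. A sketch, with the mbox name and chunk count made up:

#!/bin/bash
# split an mbox into N roughly equal mboxes using formail's skip/limit options
mbox=google.mbox
chunks=5
total=$(grep -c '^From ' "$mbox")
per=$(( (total + chunks - 1) / chunks ))
for i in $(seq 0 $((chunks - 1))); do
    skip=$((i * per))
    # formail -N takes the first N messages, +M skips M messages first
    if [ "$skip" -eq 0 ]; then
        formail -"$per" -s < "$mbox" > "import-0$i.mbox"
    else
        formail +"$skip" -"$per" -s < "$mbox" > "import-0$i.mbox"
    fi
done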
Carlos E. R. wrote:
On 2023-01-08 12:57, Per Jessen wrote:
Hello. Are you speed reading again? I wrote:
To speed it up, I would be tempted to split the mailbox into individual mails first, then run something like ...
Oh. Split into 80000 individual mails... One per file, I assume. How?
formail <file.mbox -s /bin/sh 'cat >maildir/$FILENO.eml'

-- Per Jessen, Zürich (9.1°C) Been processing billions of emails over the last eighteen years
Per Jessen wrote:
Carlos E. R. wrote:
On 2023-01-08 12:57, Per Jessen wrote:
Hello. Are you speed reading again? I wrote:
To speed it up, I would be tempted to split the mailbox into individual mails first, then run something like ...
Oh. Split into 80000 individual mails... One per file, I assume. How?
formail <file.mbox -s /bin/sh 'cat >maildir/$FILENO.eml'
Sorry, small omission:

formail <file.mbox -s /bin/sh -c 'cat >maildir/$FILENO.eml'

-- Per Jessen, Zürich (8.1°C) Member, openSUSE Heroes (2016 - present) We're hiring - https://en.opensuse.org/openSUSE:Heroes
On 2023-01-08 14:43, Per Jessen wrote:
Per Jessen wrote:
Carlos E. R. wrote:
On 2023-01-08 12:57, Per Jessen wrote:
Hello. Are you speed reading again? I wrote:
To speed it up, I would be tempted to split the mailbox into individual mails first, then run something like ...
Oh. Split into 80000 individual mails... One per file, I assume. How?
formail <file.mbox -s /bin/sh 'cat >maildir/$FILENO.eml'
Sorry, small omission:
formail <file.mbox -s /bin/sh -c 'cat >maildir/$FILENO.eml'
Ok. Still, I think I prefer the recipe to split in just a small number of files, either with awk or formail. -- Cheers / Saludos, Carlos E. R. (from 15.4 x86_64 at Telcontar)
On 1/8/23 5:45 AM, Carlos E. R. wrote:
On 2023-01-08 14:43, Per Jessen wrote:
Per Jessen wrote:
Carlos E. R. wrote:
On 2023-01-08 12:57, Per Jessen wrote:
Hello. Are you speed reading again? I wrote:
To speed it up, I would be tempted to split the mailbox into individual mails first, then run something like ...
Oh. Split into 80000 individual mails... One per file, I assume. How?
formail <file.mbox -s /bin/sh 'cat >maildir/$FILENO.eml'
Sorry, small omission:
formail <file.mbox -s /bin/sh -c 'cat >maildir/$FILENO.eml'
Ok.
Still, I think I prefer the recipe to split in just a small number of files, either with awk or formail.
Except that's not maildir format. maildir is one message per file.

I've been an mbox user forever (sendmail is my preferred MTA. Let's not get into MTA wars) and have recently found that there IS stuff I can't do with mbox but can with maildir... sub folders for instance.

Ya, I know... Thunderbird can do it sort of but imap servers (dovecot) don't like it at all... Or maybe they do and I just don't know the correct incantation
On Sunday, 2023-01-08 at 08:08 -0800, Bruce Ferrell wrote:
On 1/8/23 5:45 AM, Carlos E. R. wrote:
On 2023-01-08 14:43, Per Jessen wrote:
Per Jessen wrote:
Carlos E. R. wrote:
On 2023-01-08 12:57, Per Jessen wrote:
Hello. Are you speed reading again? I wrote:
> To speed it up, I would be tempted to split the mailbox into > individual mails first, then run something like ...
Oh. Split into 80000 individual mails... One per file, I assume. How?
formail <file.mbox -s /bin/sh 'cat >maildir/$FILENO.eml'
Sorry, small omission:
formail <file.mbox -s /bin/sh -c 'cat >maildir/$FILENO.eml'
Ok.
Still, I think I prefer the recipe to split in just a small number of files, either with awk or formail.
Except that's not maildir format. maildir is one message per file.
Correct. I do not want maildir :-)
I've been an mbox user forever (sendmail is my preferred MTA. Let's not get into MTA wars) and have recently found that there IS stuff I can't do with mbox but can with maildir... sub folders for instance.
Ya, I know... Thunderbird can do it sort of but imap servers (dovecot) don't like it at all... Or maybe they do and I just don't know the correct incantation
Yes, Thunderbird renames the sub-folder, as:

  something.sbd/   <--- subfolder
  something        <--- folder
  something.msf    <--- folder index

-- Cheers, Carlos E. R. (from openSUSE 15.4 x86_64 at Telcontar)
From: "Carlos E. R." <robin.listas@telefonica.net> Date: Sun, 8 Jan 2023 14:45:20 +0100 On 2023-01-08 14:43, Per Jessen wrote:
Sorry, small omission:
formail <file.mbox -s /bin/sh -c 'cat >maildir/$FILENO.eml'
Ok. Still, I think I prefer the recipe to split in just a small number of files, either with awk or formail. -- Cheers / Saludos, Carlos E. R. (from 15.4 x86_64 at Telcontar) Here's another option: https://git.sr.ht/~rgrjr/rgrjr-scripts/tree/master/item/mbox2maildir.pl You still get 80000-odd individual message files (and you should count to make sure you have the right number), but long ago I stopped trusting mbox format for large delivery tasks like this. -- Bob Rogers http://www.rgrjr.com/
On 2023-01-08 19:39, Bob Rogers wrote:
From: "Carlos E. R." <> Date: Sun, 8 Jan 2023 14:45:20 +0100
On 2023-01-08 14:43, Per Jessen wrote:
> Sorry, small omission: > > formail <file.mbox -s /bin/sh -c 'cat >maildir/$FILENO.eml'
Ok.
Still, I think I prefer the recipe to split in just a small number of files, either with awk or formail.
Here's another option:
https://git.sr.ht/~rgrjr/rgrjr-scripts/tree/master/item/mbox2maildir.pl
You still get 80000-odd individual message files (and you should count to make sure you have the right number), but long ago I stopped trusting mbox format for large delivery tasks like this.

Well, I am not going to change to maildir, as mbox works well for me, but thanks :-)
-- Cheers / Saludos, Carlos E. R. (from 15.4 x86_64 at Telcontar)
On 1/8/23 10:39 AM, Bob Rogers wrote:
From: "Carlos E. R." <robin.listas@telefonica.net> Date: Sun, 8 Jan 2023 14:45:20 +0100
On 2023-01-08 14:43, Per Jessen wrote:
> Sorry, small omission: > > formail <file.mbox -s /bin/sh -c 'cat >maildir/$FILENO.eml'
Ok.
Still, I think I prefer the recipe to split in just a small number of files, either with awk or formail.
-- Cheers / Saludos,
Carlos E. R. (from 15.4 x86_64 at Telcontar)
Here's another option:
https://git.sr.ht/~rgrjr/rgrjr-scripts/tree/master/item/mbox2maildir.pl
You still get 80000-odd individual message files (and you should count to make sure you have the right number), but long ago I stopped trusting mbox format for large delivery tasks like this.
-- Bob Rogers http://www.rgrjr.com/
PERL?! YAY!!!!! (and the crowd goes wild)
On Sun, 8 Jan 2023 14:45:20 +0100, "Carlos E. R." <robin.listas@telefonica.net> wrote:
On 2023-01-08 14:43, Per Jessen wrote:
Per Jessen wrote:
Carlos E. R. wrote:
Oh. Split into 80000 individual mails... One per file, I assume. How?
formail <file.mbox -s /bin/sh -c 'cat >maildir/$FILENO.eml'
Still, I think I prefer the recipe to split in just a small number of files, either with awk or formail.
export xyzzyN=5
formail <file.mbox -s /bin/sh -c \
'cat >> mboxdir/$(expr $FILENO % $xyzzyN).mbox'

I don't remember for sure, though, whether concatenating the outputs of formail makes valid mbox files.

It is a good idea to check that your mail folder files have consistent line termination (newlines), lest your text processing tools get confused. The attached script, 'eol_pattern_count', will report the different EOL sequences in a file. -- Robert Webb
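A rough stand-in for that kind of check, counting just bare-LF versus CR+LF line endings (this is not the attached eol_pattern_count script, only the same idea in one awk call):

# count how many lines end in LF only and how many in CR+LF
awk '{ if (/\r$/) crlf++; else lf++ }
     END { printf "LF only: %d\nCRLF:    %d\n", lf, crlf }' file.mbox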
On 2023-01-09 07:14, Robert Webb wrote:
On Sun, 8 Jan 2023 14:45:20 +0100, "Carlos E. R." <robin.listas@telefonica.net> wrote:
On 2023-01-08 14:43, Per Jessen wrote:
Per Jessen wrote:
Carlos E. R. wrote:
Oh. Split into 80000 individual mails... One per file, I assume. How?
formail <file.mbox -s /bin/sh -c 'cat >maildir/$FILENO.eml'
Still, I think I prefer the recipe to split in just a small number of files, either with awk or formail.
export xyzzyN=5
formail <file.mbox -s /bin/sh -c \
'cat >> mboxdir/$(expr $FILENO % $xyzzyN).mbox'
Hum!

I don't understand this code. How does it work? :-?

It fails, though:

cer@Telcontar:~/tmp/Robert_Webb> l
total 1419072
drwxr-xr-x 2 cer users 51 Jan 9 12:40 ./
drwxr-xr-x 148 cer users 8192 Jan 9 12:37 ../
-rw-r--r-- 1 cer users 1453105485 Jan 9 12:37 in_ex_R_GMX_01
-rwxr-xr-x 1 cer users 108 Jan 9 12:40 run*
-rwxr-xr-x 1 cer users 107 Jan 9 12:39 run~*
cer@Telcontar:~/tmp/Robert_Webb> export xyzzyN=5
cer@Telcontar:~/tmp/Robert_Webb> formail < in_ex_R_GMX_01 -s /bin/sh -c \
'cat >> mboxdir/$(expr $FILENO % $xyzzyN).mbox'
/bin/sh: mboxdir/$(expr $FILENO % $xyzzyN).mbox: No such file or directory
/bin/sh: mboxdir/$(expr $FILENO % $xyzzyN).mbox: No such file or directory
/bin/sh: mboxdir/$(expr $FILENO % $xyzzyN).mbox: No such file or directory
/bin/sh: mboxdir/$(expr $FILENO % $xyzzyN).mbox: No such file or directory
/bin/sh: mboxdir/$(expr $FILENO % $xyzzyN).mbox: No such file or directory
/bin/sh: mboxdir/$(expr $FILENO % $xyzzyN).mbox: No such file or directory
...
^C
cer@Telcontar:~/tmp/Robert_Webb>
I don't remember for sure, though, whether concatenating the outputs of formail makes valid mbox files.
It is a good idea to check that your mail folder files have consistent line termination (newlines), lest your text processing tools get confused. The attached script, 'eol_pattern_count', will report the different EOL sequences in a file.
cer@Telcontar:~> time eol_pattern_count Mail/_File/in_ex_R_GMX_01
23510637 N
[... followed by counts of assorted CR/control-character EOL sequences, garbled in the archive ...]

real 1m0,868s
user 1m2,271s
sys 0m1,575s
cer@Telcontar:~>

-- Cheers / Saludos, Carlos E. R. (from 15.4 x86_64 at Telcontar)
On Mon, Jan 9, 2023 at 2:46 PM Carlos E. R. <robin.listas@telefonica.net> wrote: ...
cer@Telcontar:~/tmp/Robert_Webb> export xyzzyN=5
cer@Telcontar:~/tmp/Robert_Webb> formail < in_ex_R_GMX_01 -s /bin/sh -c \
'cat >> mboxdir/$(expr $FILENO % $xyzzyN).mbox'
/bin/sh: mboxdir/$(expr $FILENO % $xyzzyN).mbox: No such file or directory
-c is a formail flag, so it looks like -c is consumed by formail and it calls /bin/sh with a single argument, 'cat >> mboxdir/$(expr $FILENO % $xyzzyN).mbox'.
Carlos E. R. wrote:
On 2023-01-09 07:14, Robert Webb wrote:
On Sun, 8 Jan 2023 14:45:20 +0100, "Carlos E. R." <robin.listas@telefonica.net> wrote:
On 2023-01-08 14:43, Per Jessen wrote:
Per Jessen wrote:
Carlos E. R. wrote:
Oh. Split into 80000 individual mails... One per file, I assume. How?
formail <file.mbox -s /bin/sh -c 'cat >maildir/$FILENO.eml'
Still, I think I prefer the recipe to split in just a small number of files, either with awk or formail.
export xyzzyN=5
formail <file.mbox -s /bin/sh -c \
'cat >> mboxdir/$(expr $FILENO % $xyzzyN).mbox'
Hum!
I don't understand this code. How does it work? :-?
It's a slight variation of what I posted yesterday, but instead of individual files, it creates mailbox files.
It fails, though:
You need to create the mboxdir first. -- Per Jessen, Zürich (5.6°C) Member, openSUSE Heroes (2016 - present) We're hiring - https://en.opensuse.org/openSUSE:Heroes
On 2023-01-09 14:45, Per Jessen wrote:
Carlos E. R. wrote:
On 2023-01-09 07:14, Robert Webb wrote:
On Sun, 8 Jan 2023 14:45:20 +0100, "Carlos E. R." <> wrote:
On 2023-01-08 14:43, Per Jessen wrote:
Per Jessen wrote:
Carlos E. R. wrote: > Oh. Split into 80000 individual mails... One per file, I assume. > How?
formail <file.mbox -s /bin/sh -c 'cat >maildir/$FILENO.eml'
Still, I think I prefer the recipe to split in just a small number of files, either with awk or formail.
export xyzzyN=5
formail <file.mbox -s /bin/sh -c \
'cat >> mboxdir/$(expr $FILENO % $xyzzyN).mbox'
Hum!
I don't understand this code. How does it work? :-?
It's a slight variation of what I posted yesterday, but instead of individual files, it creates mailbox files.
It fails, though:
You need to create the mboxdir first.
Ah! A directory named "mboxdir" (I said I did not understand the code). Ok, Trying. At least, no error. [...] Ok, worked. almost 5 minutes to produce 5 files totalling the same number of bytes. Interesting! This saved me some hours of coding and testing :-) I still don't understand how it works, but it works :-) -- Cheers / Saludos, Carlos E. R. (from 15.4 x86_64 at Telcontar)
Carlos E. R. wrote:
On 2023-01-09 14:45, Per Jessen wrote:
Carlos E. R. wrote:
On 2023-01-09 07:14, Robert Webb wrote:
On Sun, 8 Jan 2023 14:45:20 +0100, "Carlos E. R." <> wrote:
On 2023-01-08 14:43, Per Jessen wrote:
Per Jessen wrote: > Carlos E. R. wrote: >> Oh. Split into 80000 individual mails... One per file, I >> assume. How?
formail <file.mbox -s /bin/sh -c 'cat >maildir/$FILENO.eml'
Still, I think I prefer the recipe to split in just a small number of files, either with awk or formail.
export xyzzyN=5
formail <file.mbox -s /bin/sh -c \
'cat >> mboxdir/$(expr $FILENO % $xyzzyN).mbox'
Hum!
I don't understand this code. How does it work? :-?
It's a slight variation of what I posted yesterday, but instead of individual files, it creates mailbox files.
It fails, though:
You need to create the mboxdir first.
Ah! A directory named "mboxdir" (I said I did not understand the code). Ok, Trying. At least, no error. [...]
Ok, worked. almost 5 minutes to produce 5 files totalling the same number of bytes. Interesting!
This saved me some hours of coding and testing :-)
I still don't understand how it works, but it works :-)
formail <mboxfile -s command

The above reads a mailbox file and splits it into individual mails, which are then piped to "command", one by one.

/bin/sh -c 'cat >> mboxdir/$(expr $FILENO % $xyzzyN).mbox'

the expression takes the FILENO (as produced by formail) modulus 5, so it becomes one of these:

/bin/sh -c 'cat >> mboxdir/0.mbox'
/bin/sh -c 'cat >> mboxdir/1.mbox'
/bin/sh -c 'cat >> mboxdir/2.mbox'
/bin/sh -c 'cat >> mboxdir/3.mbox'
/bin/sh -c 'cat >> mboxdir/4.mbox'

'cat' just copies stdin to stdout, appending the email to the mbox file. -- Per Jessen, Zürich (5.6°C) Member, openSUSE Heroes (2016 - present) We're hiring - https://en.opensuse.org/openSUSE:Heroes
On 2023-01-09 15:45, Per Jessen wrote:
Carlos E. R. wrote:
On 2023-01-09 14:45, Per Jessen wrote:
Carlos E. R. wrote:
On 2023-01-09 07:14, Robert Webb wrote:
On Sun, 8 Jan 2023 14:45:20 +0100, "Carlos E. R." <> wrote:
On 2023-01-08 14:43, Per Jessen wrote: > Per Jessen wrote: >> Carlos E. R. wrote: >>> Oh. Split into 80000 individual mails... One per file, I >>> assume. How? > > formail <file.mbox -s /bin/sh -c 'cat >maildir/$FILENO.eml'
Still, I think I prefer the recipe to split in just a small number of files, either with awk or formail.
export xyzzyN=5
formail <file.mbox -s /bin/sh -c \
'cat >> mboxdir/$(expr $FILENO % $xyzzyN).mbox'
Hum!
I don't understand this code. How does it work? :-?
It's a slight variation of what I posted yesterday, but instead of individual files, it creates mailbox files.
It fails, though:
You need to create the mboxdir first.
Ah! A directory named "mboxdir" (I said I did not understand the code). Ok, Trying. At least, no error. [...]
Ok, worked. almost 5 minutes to produce 5 files totalling the same number of bytes. Interesting!
This saved me some hours of coding and testing :-)
I still don't understand how it works, but it works :-)
formail <mboxfile -s command
The above reads a mailbox file and splits it into individual mails, which are then piped to "command", one by one.
That part I know, and have used myself.
/bin/sh -c 'cat >> mboxdir/$(expr $FILENO % $xyzzyN).mbox'
the expression takes the FILENO (as produced by formail) modulus 5, so it becomes one of these:
formail produces FILENO? What is FILENO? That's the part I don't understand.

Ah.

  -s   The input will be split up into separate mail messages, and piped
       into a program one by one (a new program is started for every part).
       -s has to be the last option specified, the first argument following
       it is expected to be the name of a program, any other arguments will
       be passed along to it. If you omit the program, then formail will
       simply concatenate the split mails on stdout again. See FILENO.
...
ENVIRONMENT
  FILENO   While splitting, formail assigns the message number currently
           being output to this variable. By presetting FILENO, you can
           change the initial message number being used and the width of
           the zero-padded output. If FILENO is unset it will default to
           000. If FILENO is non-empty and does not contain a number,
           FILENO generation is disabled.

Ah. I see now.
/bin/sh -c 'cat >> mboxdir/0.mbox'
/bin/sh -c 'cat >> mboxdir/1.mbox'
/bin/sh -c 'cat >> mboxdir/2.mbox'
/bin/sh -c 'cat >> mboxdir/3.mbox'
/bin/sh -c 'cat >> mboxdir/4.mbox'
'cat' just copies stdin to stdout, appending the email to the mbox file.
Yes, of course, I see now. I was stuck on FILENO, couldn't see where it came from. And I should have seen the mboxdir part. -- Cheers / Saludos, Carlos E. R. (from 15.4 x86_64 at Telcontar)
On Monday, 2023-01-09 at 15:54 +0100, Carlos E. R. wrote:
On 2023-01-09 15:45, Per Jessen wrote:
Carlos E. R. wrote:
> Still, I think I prefer the recipe to split in just a small number > of files, either with awk or formail.
export xyzzyN=5
formail <file.mbox -s /bin/sh -c \
'cat >> mboxdir/$(expr $FILENO % $xyzzyN).mbox'
...
Ah. I see now.
I'm coding that into my script, and for some reason that I don't see, it doesn't work.

The script is, removing irrelevant parts, this:

#!/bin/bash -x
function parallel_gmxTT() {
INPUTFILE=alpine_r_gmx
OUTPUTFILES=processing.$INPUTFILE
CPUCORES=2
FILENO=1 #Dummy
echo $INPUTFILE $OUTPUTFILES $CPUCORES
echo $OUTPUTFILES.$(expr $FILENO % $CPUCORES).mbox
ls -l $INPUTFILE "$OUTPUTFILES"*
echo
formail < $INPUTFILE -s /bin/bash -c 'cat >> $OUTPUTFILES.$(expr $FILENO % 2).mbox'
echo
ls -ltr | tail
#formail -s procmail ./.procmail_r_gmx_test LOGFILE=$HOME/Mail/procmail_alpine_gmx_test.00.log < alpine_r_gmx
}
cd ~/Mail
case "$1" in
gmxTT)
time parallel_gmxTT
;;
esac

When running this, I get:

cer@Telcontar:~> time alpine_procmail gmxTT
+ cd /home/cer/Mail
+ case "$1" in
+ parallel_gmxTT
+ INPUTFILE=alpine_r_gmx
+ OUTPUTFILES=processing.alpine_r_gmx
+ CPUCORES=2
+ FILENO=1
+ echo alpine_r_gmx processing.alpine_r_gmx 2
alpine_r_gmx processing.alpine_r_gmx 2
++ expr 1 % 2
+ echo processing.alpine_r_gmx.1.mbox
processing.alpine_r_gmx.1.mbox
+ ls -l alpine_r_gmx 'processing.alpine_r_gmx*'
ls: cannot access 'processing.alpine_r_gmx*': No such file or directory
-rw-r--r-- 1 cer users 164113 Jan 9 21:04 alpine_r_gmx
+ echo
+ formail -s /bin/bash -c 'cat >> $OUTPUTFILES.$(expr $FILENO % 2).mbox'
+ echo
+ ls -ltr
+ tail
drwxr-xr-x 8 cer users 8192 Jan 9 20:45 _Lists
-rw-r--r-- 1 cer users 164113 Jan 9 21:04 alpine_r_gmx
-rw------- 1 cer users 1053687 Jan 9 21:07 procmail_alpine_gmx_test.log
-rw------- 1 cer users 37301 Jan 9 21:15 zap_spam_gmx_lists_test_04
-rw------- 1 cer users 14142363 Jan 9 21:15 zap_not_spam_gmx_lists_test_03
-rw------- 1 cer users 14089275 Jan 9 21:15 zap_not_spam_gmx_lists_test_02
-rw------- 1 cer users 14128684 Jan 9 21:16 zap_not_spam_gmx_lists_test_04
-rw------- 1 cer users 36489 Jan 9 21:19 zap_spam_gmx_lists_test
-rw------- 1 cer users 135377 Jan 9 21:20 zap_not_spam_gmx_lists_test
-rw-r--r-- 1 cer users 0 Jan 9 21:39 procmail_alpine_gmx_test.00.log

real 0m0,046s
user 0m0,040s
sys 0m0,010s

The formail code is not working.

If I do the script commands manually in another terminal, it works:

cer@Telcontar:~> cd Mail
cer@Telcontar:~/Mail> export INPUTFILE=alpine_r_gmx
cer@Telcontar:~/Mail> export OUTPUTFILES=processing.$INPUTFILE
cer@Telcontar:~/Mail> export CPUCORES=2
cer@Telcontar:~/Mail> export FILENO=1
cer@Telcontar:~/Mail> echo $INPUTFILE $OUTPUTFILES $CPUCORES
alpine_r_gmx processing.alpine_r_gmx 2
cer@Telcontar:~/Mail> echo $OUTPUTFILES.$(expr $FILENO % $CPUCORES).mbox
processing.alpine_r_gmx.1.mbox
cer@Telcontar:~/Mail> l processing.alpine_r_gmx.1.mbox
ls: cannot access 'processing.alpine_r_gmx.1.mbox': No such file or directory
cer@Telcontar:~/Mail> ls -l $INPUTFILE "$OUTPUTFILES"*
ls: cannot access 'processing.alpine_r_gmx*': No such file or directory
-rw-r--r-- 1 cer users 164113 Jan 9 21:04 alpine_r_gmx
cer@Telcontar:~/Mail> formail < $INPUTFILE -s /bin/bash -c 'cat >> $OUTPUTFILES.$(expr $FILENO % 2).mbox'
cer@Telcontar:~/Mail> ls -ltr | tail
-rw------- 1 cer users 1053687 Jan 9 21:07 procmail_alpine_gmx_test.log
-rw------- 1 cer users 37301 Jan 9 21:15 zap_spam_gmx_lists_test_04
-rw------- 1 cer users 14142363 Jan 9 21:15 zap_not_spam_gmx_lists_test_03
-rw------- 1 cer users 14089275 Jan 9 21:15 zap_not_spam_gmx_lists_test_02
-rw------- 1 cer users 14128684 Jan 9 21:16 zap_not_spam_gmx_lists_test_04
-rw------- 1 cer users 36489 Jan 9 21:19 zap_spam_gmx_lists_test
-rw------- 1 cer users 135377 Jan 9 21:20 zap_not_spam_gmx_lists_test
-rw-r--r-- 1 cer users 0 Jan 9 21:39 procmail_alpine_gmx_test.00.log
-rw-r--r-- 1 cer users 77256 Jan 9 22:12 processing.alpine_r_gmx.0.mbox <====
-rw-r--r-- 1 cer users 86857 Jan 9 22:12 processing.alpine_r_gmx.1.mbox <====
cer@Telcontar:~/Mail> ls -l $INPUTFILE "$OUTPUTFILES"*
-rw-r--r-- 1 cer users 164113 Jan 9 21:04 alpine_r_gmx
-rw-r--r-- 1 cer users 77256 Jan 9 22:12 processing.alpine_r_gmx.0.mbox
-rw-r--r-- 1 cer users 86857 Jan 9 22:12 processing.alpine_r_gmx.1.mbox
cer@Telcontar:~/Mail> rm "$OUTPUTFILES"*
cer@Telcontar:~/Mail>

What am I missing?

This line must have some error:

formail < $INPUTFILE -s /bin/bash -c 'cat >> $OUTPUTFILES.$(expr $FILENO % 2).mbox'

This other line (a previous version) also does not work:

+ formail -s /bin/bash -c 'cat >> $OUTPUTFILES.$(expr $FILENO % $CPUCORES).mbox'
expr: syntax error: missing argument after ‘%’
expr: syntax error: missing argument after ‘%’
expr: syntax error: missing argument after ‘%’
expr: syntax error: missing argument after ‘%’
expr: syntax error: missing argument after ‘%’
expr: syntax error: missing argument after ‘%’
expr: syntax error: missing argument after ‘%’
expr: syntax error: missing argument after ‘%’
expr: syntax error: missing argument after ‘%’
expr: syntax error: missing argument after ‘%’
expr: syntax error: missing argument after ‘%’
+ echo

-- Cheers, Carlos E. R. (from openSUSE 15.4 x86_64 at Telcontar)
On Monday, 2023-01-09 at 22:34 +0100, Carlos E. R. wrote:
On Monday, 2023-01-09 at 15:54 +0100, Carlos E. R. wrote:
On 2023-01-09 15:45, Per Jessen wrote:
Carlos E. R. wrote:
...
Ah. I see now.
I'm coding that into my script, and for some reason I don't see, it doesn't work.
The script is, removing irrelevant parts, this:
#!/bin/bash -x
function parallel_gmxTT() {
INPUTFILE=alpine_r_gmx
OUTPUTFILES=processing.$INPUTFILE
CPUCORES=2
FILENO=1 #Dummy
Found it.
export INPUTFILE=alpine_r_gmx
export OUTPUTFILES=processing.$INPUTFILE
export CPUCORES=2
export FILENO=1 #Dummy
Otherwise, those are not seen inside the second bash shell.
formail < $INPUTFILE -s /bin/bash -c 'cat >> $OUTPUTFILES.$(expr $FILENO % 2).mbox'
formail < $INPUTFILE -s /bin/bash -c 'cat >> $OUTPUTFILES.$(expr $FILENO % $CPUCORES).mbox'
-- Cheers, Carlos E. R. (from openSUSE 15.4 x86_64 at Telcontar)
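A quick illustration of why the export matters here (a minimal sketch with a made-up variable FOO, nothing project-specific): formail runs the -s program as a separate process, and a plain shell variable assignment is not passed into a child's environment, while an exported one is.

FOO=1;        /bin/bash -c 'echo "FOO is: $FOO"'   # prints "FOO is: "  - plain assignment, the child cannot see it
export FOO=1; /bin/bash -c 'echo "FOO is: $FOO"'   # prints "FOO is: 1" - exported, the child inherits it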
On Monday, 2023-01-09 at 22:45 +0100, Carlos E. R. wrote:
On Monday, 2023-01-09 at 22:34 +0100, Carlos E. R. wrote:
On Monday, 2023-01-09 at 15:54 +0100, Carlos E. R. wrote:
On 2023-01-09 15:45, Per Jessen wrote:
Carlos E. R. wrote:
...
Ah. I see now.
I'm coding that into my script, and for some reason I don't see, it doesn't work.
Found it.
Well, my script is working. It processes in parallel. With 8 processes, it runs at 163 mails/minute (with a folder with 1000 emails + 3 possible spams).
threads   speed (mails/minute)
8         163
12        189
I see the CPU load at 5% per spamd. I'm guessing I can increase the number of threads, so I'm going to try 120 :-D First, I have to increase the number of children of spamd, which is now 16, to 160 in /etc/sysconfig/spamd:
SPAMD_ARGS="-d -c --max-children=160 --max-conn-per-child=1"
threads   speed (mails/minute)
8         163
12        189
120       246
Not a ten times increase; something must be limiting the speed somewhere. Not CPU load, it was going at 20% total. Nor RAM: I have 64 G, but I didn't look while it was running. Threads had very different speeds:
········· finished thread 112 in 23 s (8 emails, 114219 bytes)
········· finished thread 28 in 240 s (9 emails, 128526 bytes)
-- Cheers, Carlos E. R. (from openSUSE 15.4 x86_64 at Telcontar)
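The full script is not posted, but a minimal sketch of how such a parallel feed might look, reusing the names that appear earlier in the thread (the processing.alpine_r_gmx.N.mbox splits and the .procmail_r_gmx_test rc file); the loop itself is only a guess at the structure, not the author's code:

CPUCORES=8
for n in $(seq 0 $((CPUCORES - 1))); do
    # each worker pushes one split through procmail (and hence spamd) in the background
    formail -s procmail ./.procmail_r_gmx_test < processing.alpine_r_gmx.$n.mbox &
done
wait    # block until every worker has finished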
On 2023-01-10 01:12, Carlos E. R. wrote:
On Monday, 2023-01-09 at 22:45 +0100, Carlos E. R. wrote:
On Monday, 2023-01-09 at 22:34 +0100, Carlos E. R. wrote:
On Monday, 2023-01-09 at 15:54 +0100, Carlos E. R. wrote:
On 2023-01-09 15:45, Per Jessen wrote:
Carlos E. R. wrote:
...
SPAMD_ARGS="-d -c --max-children=160 --max-conn-per-child=1"
threads   speed (mails/minute)
8         163
12        189
120       246
Not a ten times increase; something must be limiting the speed somewhere. Not CPU load, it was going at 20% total. Nor RAM: I have 64 G, but I didn't look while it was running.
Threads had very different speeds:
········· finished thread 112 in 23 s (8 emails, 114219 bytes)
········· finished thread 28 in 240 s (9 emails, 128526 bytes)
SPAMD_ARGS="-d -c --max-children=160 --min-children=120 --max-conn-per-child=1" Now does 312 mails per minute. That's better, but not really fast. ········· finished thread 59 in 16 s (8 emails, 100818b) ········· finished thread 15 in 189 s (9 emails, 113853b) (rbl is off) -- Cheers / Saludos, Carlos E. R. (from 15.4 x86_64 at Telcontar)
Carlos E. R. wrote:
Now does 312 mails per minute. That's better, but not really fast.
5 per second - that's very decent speed. Don't forget, it is perl and you haven't switched off DNS, only the rbl lookups. -- Per Jessen, Zürich (5.2°C) Member, openSUSE Heroes (2016 - present) We're hiring - https://en.opensuse.org/openSUSE:Heroes
On Mon, 9 Jan 2023 12:45:54 +0100, "Carlos E. R." <robin.listas@telefonica.net> wrote:
On 2023-01-09 07:14, Robert Webb wrote:
On Sun, 8 Jan 2023 14:45:20 +0100, "Carlos E. R." <robin.listas@telefonica.net> wrote:
On 2023-01-08 14:43, Per Jessen wrote:
Per Jessen wrote:
Carlos E. R. wrote:
Oh. Split into 80000 individual mails... One per file, I assume. How?
formail <file.mbox -s /bin/sh -c 'cat >maildir/$FILENO.eml'
Still, I think I prefer the recipe to split in just a small number of files, either with awk or formail.
export xyzzyN=5
formail <file.mbox -s /bin/sh -c \
  'cat >> mboxdir/$(expr $FILENO % $xyzzyN).mbox'
Hum! I don't understand this code. How does it work? :-?
It is the same as Per Jessen's command, except instead of using $FILENO to number the individual files, my modification uses the result of 'expr $FILENO % 5', which only produces numbers from 0 through 4, repeating as $FILENO increments, so there will only be five output files. The other modification is the appending redirection operator so that each output file will contain multiple messages.
It fails, though:
cer@Telcontar:~/tmp/Robert_Webb> l total 1419072 drwxr-xr-x 2 cer users 51 Jan 9 12:40 ./ drwxr-xr-x 148 cer users 8192 Jan 9 12:37 ../ -rw-r--r-- 1 cer users 1453105485 Jan 9 12:37 in_ex_R_GMX_01 -rwxr-xr-x 1 cer users 108 Jan 9 12:40 run* -rwxr-xr-x 1 cer users 107 Jan 9 12:39 run~* cer@Telcontar:~/tmp/Robert_Webb> export xyzzyN=5 cer@Telcontar:~/tmp/Robert_Webb> formail < in_ex_R_GMX_01 -s /bin/sh -c \ > 'cat >> mboxdir/$(expr $FILENO % $xyzzyN).mbox' /bin/sh: mboxdir/$(expr $FILENO % $xyzzyN).mbox: No such file or directory /bin/sh: mboxdir/$(expr $FILENO % $xyzzyN).mbox: No such file or directory /bin/sh: mboxdir/$(expr $FILENO % $xyzzyN).mbox: No such file or directory /bin/sh: mboxdir/$(expr $FILENO % $xyzzyN).mbox: No such file or directory /bin/sh: mboxdir/$(expr $FILENO % $xyzzyN).mbox: No such file or directory /bin/sh: mboxdir/$(expr $FILENO % $xyzzyN).mbox: No such file or directory [...]
Although I didn't try it before, now I have, and it works (wasn't expecting that!):
robert@poppies:FILENO> mkdir mboxdir
robert@poppies:FILENO> ln -s blockbuster-misc.mbox in_ex_R_GMX_01
robert@poppies:FILENO> export xyzzyN=5
robert@poppies:FILENO> formail < in_ex_R_GMX_01 -s /bin/sh -c \
'cat >> mboxdir/$(expr $FILENO % $xyzzyN).mbox'
robert@poppies:FILENO> grep -c '^From ' mboxdir/*
mboxdir/0.mbox:2
mboxdir/1.mbox:2
mboxdir/2.mbox:1
mboxdir/3.mbox:1
mboxdir/4.mbox:1
robert@poppies:FILENO> grep -c '^From ' in_ex_R_GMX_01
in_ex_R_GMX_01:7
You do need to create the mboxdir directory, but your error messages show another problem. The command substitution: $(expr $FILENO % $xyzzyN) is not being evaluated. The errors I got when trying this without creating mboxdir were the same as yours except the command had been reduced to a number from 0 to 4. Does your shell do that style of command substitutions? But, maybe your shell checks mboxdir before trying to evaluate the command. -- Robert Webb
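Since the thread also mentions doing this split with awk instead of formail, here is a minimal awk sketch of the same round-robin split. It assumes mboxdir already exists, the folder starts with a standard "From " separator line (as an mbox does), and five output files are wanted; file.mbox is a placeholder name:

awk '
    BEGIN { n = -1 }                                  # no message seen yet
    /^From / { n++ }                                  # a new message starts at each "From " line
    { print >> ("mboxdir/" (n % 5) ".mbox") }         # append every line to the file for message-number mod 5
' file.mbox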
On Mon, 9 Jan 2023 12:45:54 +0100, "Carlos E. R." <robin.listas@telefonica.net> wrote:
On 2023-01-09 07:14, Robert Webb wrote:
[...] It is a good idea to check that your mail folder files have consistent line termination (newlines), lest your text processing tools get confused. The attached script, 'eol_pattern_count', will report the different EOL sequences in a file.
cer@Telcontar:~> time eol_pattern_count Mail/_File/in_ex_R_GMX_01 23510637 N 9 � 1 ��N 142 � 85 � 90 � 11 �N 14 � 1 �N 61 � 2 �� 5508 � 10 �N 137 �� 47 ��� 202 ���� 1 ����� 13 ����� 5 ������ 14 ������� 4 ��������� 2 ���������� 9 ����������� 1 ������������ 6 ��������������� 2 ���������������� 6 ����������������� 2 ������������������� 3 �������������������� 1 ��������������������������������� 3 ���������������������������������� 1 ����� 2 � 2 � 8 � 2 � 6 �N 1 �� 1 � 1 � 2 � 41 � 95 � 62 � 153 � 2 � 3 � 21 � 1 � 24 � 2 � 4 � 49 � 1 � 1 �� 147 � 1 ��
real 1m0,868s user 1m2,271s sys 0m1,575s cer@Telcontar:~>
Not ok. That should all be non-empty sequences of zero or more 'R's, possibly followed by an 'N' or "<eof>". It is a locale issue. If those sequences of odd characters shown (not N or R), which I am seeing as question marks inside a hexagon (Unicode U+FFFD: Replacement Character), represent invalid UTF-8 byte sequences, then the 'sed' command is unable to match and remove them. [1] Anyway, setting LC_ALL=C in the script (instead of LC_COLLATE=C) seems to fix it, although I also replaced the first two sed expressions with "equivalent" 'tr' commands. Also, the script now separately counts the EOL sequences in each of the files given as arguments (or stdin). [2] Can you try it now, Carlos? [1] 'info sed', search for 'locale' [2] Attached script (updated): eol_pattern_count -- Robert Webb
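The attached eol_pattern_count script is not reproduced in the thread. Purely as an illustration of the counting idea described here (sort | uniq -c over per-line terminator patterns), and not the attached script, a minimal sketch that distinguishes CRLF from LF endings, assuming GNU awk and the C locale; file.mbox is a placeholder:

LC_ALL=C awk '{ print (/\r$/ ? "RN" : "N") }' file.mbox | sort | uniq -c
# a clean mbox with Unix line endings prints a single line such as "23510666 N";
# any "RN" lines would indicate CRLF-terminated lines mixed in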
On 2023-01-15 11:16, Robert Webb wrote:
On Mon, 9 Jan 2023 12:45:54 +0100, "Carlos E. R." <robin.listas@telefonica.net> wrote:
On 2023-01-09 07:14, Robert Webb wrote:
[...] It is a good idea to check that your mail folder files have consistent line termination (newlines), lest your text processing tools get confused. The attached script, 'eol_pattern_count', will report the different EOL sequences in a file.
cer@Telcontar:~> time eol_pattern_count Mail/_File/in_ex_R_GMX_01 23510637 N 9 � 1 ��N 142 �
...
1 �� 147 � 1 ��
real 1m0,868s user 1m2,271s sys 0m1,575s cer@Telcontar:~>
Not ok. That should all be non-empty sequences of zero or more 'R's, possibly followed by an 'N' or "<eof>". It is a locale issue. If those sequences of odd characters shown (not N or R), which I am seeing as question marks inside a hexagon (Unicode U+FFFD: Replacement Character), represent invalid UTF-8 byte sequences, then the 'sed' command is unable to match and remove them. [1] Anyway, setting LC_ALL=C in the script (instead of LC_COLLATE=C) seems to fix it, although I also replaced the first two sed expressions with "equivalent" 'tr' commands.
Also, the script now separately counts the EOL sequences in each of the files given as arguments (or stdin). [2]
Can you try it now, Carlos?
Wait... you say "match and remove them". If the script does modification to my folder, I don't want to run it.
[1] 'info sed', search for 'locale' [2] Attached script (updated): eol_pattern_count--
Cheers / Saludos, Carlos E. R. (from 15.4 x86_64 at Telcontar)
On Sun, 15 Jan 2023 12:02:48 +0100, "Carlos E. R." <robin.listas@telefonica.net> wrote:
On 2023-01-15 11:16, Robert Webb wrote:
On Mon, 9 Jan 2023 12:45:54 +0100, "Carlos E. R." <robin.listas@telefonica.net> wrote:
On 2023-01-09 07:14, Robert Webb wrote:
[...] It is a good idea to check that your mail folder files have consistent line termination (newlines), lest your text processing tools get confused. The attached script, 'eol_pattern_count', will report the different EOL sequences in a file.
cer@Telcontar:~> time eol_pattern_count Mail/_File/in_ex_R_GMX_01 23510637 N 9 � 1 ��N 142 �
...
1 �� 147 � 1 ��
real 1m0,868s user 1m2,271s sys 0m1,575s cer@Telcontar:~>
Not ok. That should all be non-empty sequences of zero or more 'R's, possibly followed by an 'N' or "<eof>". It is a locale issue. If those sequences of odd characters shown (not N or R), which I am seeing as question marks inside a hexagon (Unicode U+FFFD: Replacement Character), represent invalid UTF-8 byte sequences, then the 'sed' command is unable to match and remove them. [1] Anyway, setting LC_ALL=C in the script (instead of LC_COLLATE=C) seems to fix it, although I also replaced the first two sed expressions with "equivalent" 'tr' commands.
Also, the script now separately counts the EOL sequences in each of the files given as arguments (or stdin). [2]
Can you try it now, Carlos?
Wait... you say "match and remove them". If the script does modification to my folder, I don't want to run it.
I meant, to remove them from sed's internal buffer, the "pattern space". No files are modified. The script only reads from file(s) or stdin, and outputs to stdout. -- Robert Webb
On 2023-01-15 14:18, Robert Webb wrote:
On Sun, 15 Jan 2023 12:02:48 +0100, "Carlos E. R." <robin.listas@telefonica.net> wrote:
On 2023-01-15 11:16, Robert Webb wrote:
On Mon, 9 Jan 2023 12:45:54 +0100, "Carlos E. R." <robin.listas@telefonica.net> wrote:
On 2023-01-09 07:14, Robert Webb wrote:
[...] It is a good idea to check that your mail folder files have consistent line termination (newlines), lest your text processing tools get confused. The attached script, 'eol_pattern_count', will report the different EOL sequences in a file.
cer@Telcontar:~> time eol_pattern_count Mail/_File/in_ex_R_GMX_01 23510637 N 9 � 1 ��N 142 �
...
1 �� 147 � 1 ��
real 1m0,868s user 1m2,271s sys 0m1,575s cer@Telcontar:~>
Not ok. That should all be non-empty sequences of zero or more 'R's, possibly followed by an 'N' or "<eof>". It is a locale issue. If those sequences of odd characters shown (not N or R), which I am seeing as question marks inside a hexagon (Unicode U+FFFD: Replacement Character), represent invalid UTF-8 byte sequences, then the 'sed' command is unable to match and remove them. [1] Anyway, setting LC_ALL=C in the script (instead of LC_COLLATE=C) seems to fix it, although I also replaced the first two sed expressions with "equivalent" 'tr' commands.
Also, the script now separately counts the EOL sequences in each of the files given as arguments (or stdin). [2]
Can you try it now, Carlos?
Wait... you say "match and remove them". If the script does modification to my folder, I don't want to run it.
I meant, to remove them from sed's internal buffer, the "pattern space". No files are modified. The script only reads from file(s) or stdin, and outputs to stdout.
Ah! Ok.
cer@Telcontar:~> time eol_pattern_count Mail/_File/in_ex_R_GMX_01
=== 1453105485 Mail/_File/in_ex_R_GMX_01
23510666 N
real 0m14,095s
user 0m11,859s
sys 0m1,384s
cer@Telcontar:~>
There was a long delay between printing the "23510666 N" line and the blank line.
Times for a second run, with the mbox hopefully cached:
real 0m6,795s
user 0m11,235s
sys 0m1,193s
-- Cheers / Saludos, Carlos E. R. (from 15.4 x86_64 at Telcontar)
On Fri, 20 Jan 2023 14:13:43 +0100, "Carlos E. R." <robin.listas@telefonica.net> wrote:
On 2023-01-15 14:18, Robert Webb wrote:
On Sun, 15 Jan 2023 12:02:48 +0100, "Carlos E. R." <robin.listas@telefonica.net> wrote:
On 2023-01-15 11:16, Robert Webb wrote:
On Mon, 9 Jan 2023 12:45:54 +0100, "Carlos E. R." <robin.listas@telefonica.net> wrote:
On 2023-01-09 07:14, Robert Webb wrote:
[...] It is a good idea to check that your mail folder files have consistent line termination (newlines), lest your text processing tools get confused. The attached script, 'eol_pattern_count', will report the different EOL sequences in a file.
cer@Telcontar:~> time eol_pattern_count Mail/_File/in_ex_R_GMX_01 23510637 N 9 � 1 ��N 142 �
...
1 �� 147 � 1 ��
real 1m0,868s user 1m2,271s sys 0m1,575s cer@Telcontar:~>
Not ok. That should all be non-empty sequences of zero or more 'R's, possibly followed by an 'N' or "<eof>". It is a locale issue. If those sequences of odd characters shown (not N or R), which I am seeing as question marks inside a hexagon (Unicode U+FFFD: Replacement Character), represent invalid UTF-8 byte sequences, then the 'sed' command is unable to match and remove them. [1] Anyway, setting LC_ALL=C in the script (instead of LC_COLLATE=C) seems to fix it, although I also replaced the first two sed expressions with "equivalent" 'tr' commands.
Also, the script now separately counts the EOL sequences in each of the files given as arguments (or stdin). [2]
Can you try it now, Carlos?
Wait... you say "match and remove them". If the script does modification to my folder, I don't want to run it.
I meant, to remove them from sed's internal buffer, the "pattern space". No files are modified. The script only reads from file(s) or stdin, and outputs to stdout.
Ah! Ok.
Your run below shows all 23510666 lines of your file are terminated by newline.
cer@Telcontar:~> time eol_pattern_count Mail/_File/in_ex_R_GMX_01
=== 1453105485 Mail/_File/in_ex_R_GMX_01 23510666 N
real 0m14,095s user 0m11,859s sys 0m1,384s cer@Telcontar:~>
There was a long delay between printing the "23510666 N" line and the blank line.
That's interesting. Almost nothing is happening after that line is printed, which is the last output of the script. The rest is printed by 'time'. The core of the script is the 'filter' function, which is just a single pipeline of several commands. The "23510666 N" line is the entire output of the pipeline. The last commands of the pipeline are: ... | sort | uniq -c 'uniq' can only output "23510666 N" after reading all 23510666 identical lines output by 'sort'. All of the commands in the pipeline have completed all of their i/o at that point. Following the pipeline in 'filter', the only commands executed are a 'shift' and the final, false, test of a 'while' loop. So what accounts for the delay? Is it allocated memory being freed?
A second run times, with mbox hopefully cached:
real 0m6,795s user 0m11,235s sys 0m1,193s
It is quicker, as you expected, but those times are strange. I always thought that the real (elapsed) time had to be greater than or equal to the sum of the user and sys (kernel) times. man pages: time(1), time(1p), times(2) -- Robert Webb
On 2023-01-21 02:33, Robert Webb wrote:
On Fri, 20 Jan 2023 14:13:43 +0100, "Carlos E. R." <> wrote:
On 2023-01-15 14:18, Robert Webb wrote:
On Sun, 15 Jan 2023 12:02:48 +0100, "Carlos E. R." <> wrote:
...
Your run below shows all 23510666 lines of your file are terminated by newline.
cer@Telcontar:~> time eol_pattern_count Mail/_File/in_ex_R_GMX_01
=== 1453105485 Mail/_File/in_ex_R_GMX_01 23510666 N
The above comes out almost instantly.
real 0m14,095s user 0m11,859s sys 0m1,384s cer@Telcontar:~>
There was a long delay between printing the "23510666 N" line and the blank line.
That's interesting. Almost nothing is happening after that line is printed, which is the last output of the script. The rest is printed by 'time'.
But the script is doing something all that time. Maybe you could modify the script to print timestamps to find out where.
The core of the script is the 'filter' function, which is just a single pipeline of several commands. The "23510666 N" line is the entire output of the pipeline. The last commands of the pipeline are:
... | sort | uniq -c
'uniq' can only output "23510666 N" after reading all 23510666 identical lines output by 'sort'. All of the commands in the pipeline have completed all of their i/o at that point. Following the pipeline in 'filter', the only commands executed are a 'shift' and the final, false, test of a 'while' loop. So what accounts for the delay? Is it allocated memory being freed?
I have no idea.
A second run times, with mbox hopefully cached:
real 0m6,795s user 0m11,235s sys 0m1,193s
It is quicker, as you expected, but those times are strange. I always thought that the real (elapsed) time had to be greater than or equal to the sum of the user and sys (kernel) times.
Good point, I did not notice.
cer@Telcontar:~> time eol_pattern_count Mail/_File/in_ex_R_GMX_01
=== 1453105485 Mail/_File/in_ex_R_GMX_01
23510666 N
real 0m11,933s
user 0m11,996s
sys 0m1,274s
cer@Telcontar:~>
cer@Telcontar:~> time eol_pattern_count Mail/_File/in_ex_R_GMX_01
=== 1453105485 Mail/_File/in_ex_R_GMX_01
23510666 N
real 0m6,780s
user 0m11,254s
sys 0m1,049s
cer@Telcontar:~>
cer@Telcontar:~> /usr/bin/time eol_pattern_count Mail/_File/in_ex_R_GMX_01
=== 1453105485 Mail/_File/in_ex_R_GMX_01
23510666 N
11.11user 1.05system 0:06.66elapsed 182%CPU (0avgtext+0avgdata 8836maxresident)k
0inputs+182048outputs (0major+2911minor)pagefaults 0swaps
cer@Telcontar:~>
-- Cheers / Saludos, Carlos E. R. (from 15.4 x86_64 at Telcontar)
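To act on the earlier suggestion of printing timestamps inside the script to see where the delay happens, a minimal sketch, assuming bash and GNU date; the helper name ts is purely illustrative:

ts() { printf '%s %s\n' "$(date +%T.%3N)" "$*" >&2; }   # print a millisecond timestamp plus a label on stderr
ts "before pipeline"
# ... the existing pipeline here, e.g. ... | sort | uniq -c ...
ts "after pipeline"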
On Sat, 21 Jan 2023 01:33:27 +0000 (UTC) Robert Webb <webbdg@verizon.net> wrote:
A second run times, with mbox hopefully cached:
real 0m6,795s user 0m11,235s sys 0m1,193s
It is quicker, as you expected, but those times are strange. I always thought that the real (elapsed) time had to be greater than or equal to the sum of the user and sys (kernel) times.
man pages: time(1), time(1p), times(2)
It depends on how many cores the programs use. And pipes can be inherently parallel: each stage of a pipeline is its own process, potentially running on its own core. The CPU usage of 182% in Carlos' subsequent mail gives the game away.
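A quick way to see the effect for yourself (a minimal sketch, assuming GNU coreutils on a multi-core machine; the exact numbers will differ):

time sh -c 'head -c 200M /dev/urandom | gzip -1 | gzip -d > /dev/null'
# the compressor and decompressor run concurrently on different cores,
# so the summed user+sys time can exceed the real (wall-clock) time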
On 2023-01-07 at 11:35 +0100, Per Jessen wrote:
Carlos E. R. wrote:
So, IMHO, we need a configuration that sends queries to our ISP DNS, except those of spamhouse.
Correct and that is easily done. In your own DNS server, you just add a separate zone config for "zen.spamhaus.org", and clear any forwarders config. That is no big deal in any of the popular dns servers - bind, unbound, dnsmasq or powerdns.
I think this ought to suffice for bind: (from memory)
zone "zen.spamhaus.org" { forwarders {}; }
Produces "Reload failed for Berkeley Internet Name Domain (DNS)." It is not: zone "zen.spamhaus.org" in { forwarders {}; either. If I try "restart" instead of "reload", same error after a minute. Wait, I should have looked directly at syslog: Starting name server BIND /etc/named.conf:192: missing ';' before end of file Doh! <3.6> 2023-01-07T23:10:52.046862+01:00 Isengard systemd 1 - - Starting Berkeley Internet Name Domain (DNS)... <3.6> 2023-01-07T23:10:52.188848+01:00 Isengard named.init 7738 - - Starting name server BIND /etc/named.conf:189: zone 'zen.spamhaus.org': type not present zone "zen.spamhaus.org" { forwarders {}; }; Ok, I get it. But what type is it? Not "master". Not "hint"? Ah, "forward". zone "zen.spamhaus.org" { type forward; forwarders {}; }; Sorry, I haven't touched bind in ages. Ok, now it loads and answers: Isengard:~ # host -v zen.spamhaus.org Trying "zen.spamhaus.org" ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 42326 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0 ;; QUESTION SECTION: ;zen.spamhaus.org. IN A ;; AUTHORITY SECTION: zen.spamhaus.org. 10 IN SOA need.to.know.only. hostmaster.spamhaus.org. 2301072217 3600 600 432000 10 Received 98 bytes from 127.0.0.1#53 in 1223 ms Trying "zen.spamhaus.org" ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2528 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0 ;; QUESTION SECTION: ;zen.spamhaus.org. IN AAAA ;; AUTHORITY SECTION: zen.spamhaus.org. 10 IN SOA need.to.know.only. hostmaster.spamhaus.org. 2301072215 3600 600 432000 10 Received 98 bytes from 127.0.0.1#53 in 91 ms Trying "zen.spamhaus.org" ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 6010 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0 ;; QUESTION SECTION: ;zen.spamhaus.org. IN MX ;; AUTHORITY SECTION: zen.spamhaus.org. 10 IN SOA need.to.know.only. hostmaster.spamhaus.org. 2301072217 3600 600 432000 10 Received 98 bytes from 127.0.0.1#53 in 43 ms Isengard:~ # Isengard:~ # dig zen.spamhaus.org ; <<>> DiG 9.16.6 <<>> zen.spamhaus.org ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 4826 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ; COOKIE: 08d98c0cc6e0915c0100000063b9f0a5fb28be3dc07dfded (good) ;; QUESTION SECTION: ;zen.spamhaus.org. IN A ;; AUTHORITY SECTION: zen.spamhaus.org. 10 IN SOA need.to.know.only. hostmaster.spamhaus.org. 2301072220 3600 600 432000 10 ;; Query time: 79 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Sat Jan 07 23:22:29 CET 2023 ;; MSG SIZE rcvd: 137 Isengard:~ # I assume it is working as intended. However, I think this will not work towards "checking spam", because now I will become the "open" DNS server. That is, I'm not a registered client, and I am on a dynamic IP. But this part is solved. Now dnsmasq (in telcontar): server=/zen.spamhaus.org/192.168.1.16 Telcontar:~ # systemctl restart dnsmasq.service Telcontar:~ # host -v zen.spamhaus.org Trying "zen.spamhaus.org" ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 47703 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0 ;; QUESTION SECTION: ;zen.spamhaus.org. IN A ;; AUTHORITY SECTION: zen.spamhaus.org. 10 IN SOA need.to.know.only. hostmaster.spamhaus.org. 
2301072229 3600 600 432000 10 Received 98 bytes from 127.0.0.1#53 in 52 ms Trying "zen.spamhaus.org" ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 53295 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0 ;; QUESTION SECTION: ;zen.spamhaus.org. IN AAAA ;; AUTHORITY SECTION: zen.spamhaus.org. 10 IN SOA need.to.know.only. hostmaster.spamhaus.org. 2301072229 3600 600 432000 10 Received 98 bytes from 127.0.0.1#53 in 56 ms Trying "zen.spamhaus.org" ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12133 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0 ;; QUESTION SECTION: ;zen.spamhaus.org. IN MX ;; AUTHORITY SECTION: zen.spamhaus.org. 10 IN SOA need.to.know.only. hostmaster.spamhaus.org. 2301072229 3600 600 432000 10 Received 98 bytes from 127.0.0.1#53 in 60 ms Telcontar:~ # systemctl restart dnsmasq.service Telcontar:~ # dig zen.spamhaus.org ; <<>> DiG 9.16.33 <<>> zen.spamhaus.org ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1971 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ; COOKIE: c59dc7c34cf5d1a50100000063b9f290a22169586500a43d (good) ;; QUESTION SECTION: ;zen.spamhaus.org. IN A ;; AUTHORITY SECTION: zen.spamhaus.org. 4 IN SOA need.to.know.only. hostmaster.spamhaus.org. 2301072229 3600 600 432000 10 ;; Query time: 0 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Sat Jan 07 23:30:40 CET 2023 ;; MSG SIZE rcvd: 137 Telcontar:~ # Telcontar dnsmasq doesn't seem to be querying isengard bind (at 192.168.1.16), does it? I can do it directly and Isengard responds: Telcontar:~ # host zen.spamhaus.org 192.168.1.16 Using domain server: Name: 192.168.1.16 Address: 192.168.1.16#53 Aliases: Telcontar:~ # Unrelated, Isengard bind spits these: <3.6> 2023-01-07T23:17:56.336226+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2001:503:ba3e::2:30#53 <3.6> 2023-01-07T23:17:56.336985+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2001:500:a8::e#53 <3.6> 2023-01-07T23:17:56.337545+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2001:7fd::1#53 <3.6> 2023-01-07T23:17:57.007172+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2001:500:40::1#53 <3.6> 2023-01-07T23:17:57.007811+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2001:500:c::1#53 <3.6> 2023-01-07T23:17:57.008359+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2001:500:48::1#53 <3.6> 2023-01-07T23:17:57.008813+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2001:500:e::1#53 <3.6> 2023-01-07T23:17:57.009182+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2001:500:f::1#53 <3.6> 2023-01-07T23:17:57.009519+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2001:500:b::1#53 <3.6> 2023-01-07T23:17:57.131889+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2001:610:510:188:192:16:188:181#53 <3.6> 2023-01-07T23:17:57.132550+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2400:cb00:2049:1::a29f:1823#53 <3.6> 2023-01-07T23:17:57.133188+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2400:cb00:2049:1::a29f:191b#53 <3.6> 2023-01-07T23:17:57.247180+01:00 Isengard 
named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a03:b0c0:3:e0::283:a00e#53 <3.6> 2023-01-07T23:17:57.247805+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a02:418:6a04:178:209:46:143:a6b0#53 <3.6> 2023-01-07T23:17:57.248315+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2001:19f0:6801:8f:ac20:b6:61ab:d20f#53 <3.6> 2023-01-07T23:17:57.248845+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a00:1650:1000:0:f9:cd5a:79:3ee8#53 <3.6> 2023-01-07T23:17:57.249372+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a03:f80:972:193:182:144:102:f612#53 <3.6> 2023-01-07T23:17:57.249875+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a05:d014:1bf:db01:45c8:f4d6:6f50:360c#53 <3.6> 2023-01-07T23:17:57.250369+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2001:1470:8000:c::39#53 <3.6> 2023-01-07T23:17:57.250861+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a05:d012:fa9:1301:660d:4cdd:ea59:f5f1#53 <3.6> 2023-01-07T23:17:57.251316+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a05:f480:2800:129f:da1e:3a4f:abe5:3481#53 <3.6> 2023-01-07T23:17:57.251787+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a00:12a8:8000::fff0:3#53 <3.6> 2023-01-07T23:17:57.252253+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2001:19f0:6c01:11d8:c0:7f06:93:9b1b#53 <3.6> 2023-01-07T23:17:57.252625+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a05:9406::62#53 <3.6> 2023-01-07T23:17:57.253057+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a03:b0c0:1:d0::257b:e00e#53 <3.6> 2023-01-07T23:17:57.253493+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a02:9a8:7c00:3f1:0:1d:2b:7a#53 <3.6> 2023-01-07T23:17:57.253934+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a05:f480:2000:1246:9998:cf46:6cf8:1e7a#53 <3.6> 2023-01-07T23:17:57.254435+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2001:6a8:3dc0:0:c1:7408:3a:9dd2#53 <3.6> 2023-01-07T23:17:57.254990+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a05:9403::26e#53 <3.6> 2023-01-07T23:17:57.255472+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2001:4b98:dc0:205:1a:de:ba:c1e#53 <3.6> 2023-01-07T23:17:57.255854+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2001:4b7a:0:21:bad:f00d:bad:1dea#53 <3.6> 2023-01-07T23:17:57.256282+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a05:9404::e0#53 <3.6> 2023-01-07T23:17:57.257050+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2001:678:230:2194:194:104:0:140#53 <3.6> 2023-01-07T23:17:57.257787+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a03:f80:ed15:149:154:152:122:efa6#53 <3.6> 2023-01-07T23:17:57.258266+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2001:19f0:7402:29d:e0fa:79:3a09:80d0#53 <3.6> 2023-01-07T23:17:57.258671+01:00 Isengard 
named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a03:f80:3991:192:71:26:25:d0c6#53 <3.6> 2023-01-07T23:17:57.339010+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/MX/IN': 2a01:4f8:c17:aba4:d9:900b:3e:10f6#53 <3.6> 2023-01-07T23:17:57.339909+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/MX/IN': 2a00:1a48:13e0:20:a7:b903:3d:6aca#53 <3.6> 2023-01-07T23:17:57.340712+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/MX/IN': 2a00:1650:1000:0:f9:cd5a:79:3ee8#53 <3.6> 2023-01-07T23:17:57.341453+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/MX/IN': 2a03:f80:ed15:149:154:152:122:efa6#53 <3.6> 2023-01-07T23:17:57.342043+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/MX/IN': 2a03:f80:972:193:182:144:102:f612#53 <3.6> 2023-01-07T23:17:57.342620+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/MX/IN': 2001:7c0:0:77::4#53 <3.6> 2023-01-07T23:17:57.343174+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/MX/IN': 2001:4b98:dc0:205:1a:de:ba:c1e#53 <3.6> 2023-01-07T23:22:29.497465+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2a01:4f8:c0c:c146:e9:4cfb:9d:7a92#53 <3.6> 2023-01-07T23:22:29.499081+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2a05:9403::26e#53 <3.6> 2023-01-07T23:22:29.499675+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2001:4b98:dc0:205:1a:de:ba:c1e#53 <3.6> 2023-01-07T23:29:03.401911+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2001:7c0:0:77::4#53 <3.6> 2023-01-07T23:29:03.479242+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2001:19f0:6c01:11d8:c0:7f06:93:9b1b#53 <3.6> 2023-01-07T23:29:03.479925+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2001:19f0:6801:8f:ac20:b6:61ab:d20f#53 <3.6> 2023-01-07T23:29:03.480441+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a05:9403::26e#53 <3.6> 2023-01-07T23:29:03.544637+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/MX/IN': 2a05:d014:1bf:db01:45c8:f4d6:6f50:360c#53 Obviously, I don't have an external IPv6 address. - -- Cheers, Carlos E. R. (from openSUSE 15.4 x86_64 at Telcontar) -----BEGIN PGP SIGNATURE----- iHoEARECADoWIQQZEb51mJKK1KpcU/W1MxgcbY1H1QUCY7n09hwccm9iaW4ubGlz dGFzQHRlbGVmb25pY2EubmV0AAoJELUzGBxtjUfVn4sAnibWJuvtwA6Qk4jG8gQO QfovrJ/6AJsEkT7aiz9ZdUQw0fMd/p/yyVXzEg== =ZMa4 -----END PGP SIGNATURE-----
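For reference, the working fragment that this post arrives at, reformatted with the reasoning as comments (a sketch of just the zone clause, not a complete named.conf):

zone "zen.spamhaus.org" {
    type forward;
    forwarders {};   // empty forwarder list: overrides any global forwarders,
                     // so named resolves this zone itself by normal recursion
};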
Carlos E. R. wrote:
I assume it is working as intended.
However, I think this will not work towards "checking spam", because now I will become the "open" DNS server. That is, I'm not a registered client, and I am on a dynamic IP.
You are guessing wildly - why don't you just try it first? An "open resolver" is a DNS resolver that permits recursive queries from anywhere. However, why don't you just disable the Spamhaus tests for your mass-testing? RBLs are largely worthless on old mails (maybe recall what the 'R' means). Anyway, we're way off-topic. Feel free to bring the topic up elsewhere. -- Per Jessen, Zürich (7.8°C) Member, openSUSE Heroes (2016 - present) We're hiring - https://en.opensuse.org/openSUSE:Heroes
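For the "just disable the Spamhaus tests" route, SpamAssassin has a switch for exactly this. A minimal sketch, assuming the site-wide config lives in /etc/mail/spamassassin/local.cf (the usual location, but check your setup):

# /etc/mail/spamassassin/local.cf
skip_rbl_checks 1     # turn off all DNSBL/RBL lookups, e.g. for local mass-check runs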
On 2023-01-08 10:19, Per Jessen wrote:
Carlos E. R. wrote:
I assume it is working as intended.
However, I think this will not work towards "checking spam", because now I will become the "open" DNS server. That is, I'm not a registered client, and I am on a dynamic IP.
You are guessing wildly - why don't you just try it first? An "open resolver" is a DNS resolver that permits recursive queries from anywhere.
According to David, it can be other things. To them, the Google 8.8.8.8 DNS server is an "open resolver". «If you do use any of Spamhaus's DNSBLs, though, make sure you're not doing it via a public DNS resolver or via any DNS server that is attempting a high volume of queries against Spamhaus without being registered with them. If you do, you risk the queries triggering blocks simply due to the sheer volume of DNS traffic Spamhaus is receiving. Meaning you'll end up blocking mail that wasn't spam and that you probably didn't mean to block.» <https://www.spamresource.com/2021/10/be-careful-using-spamhaus-with-open.html>
However, why don't you just disable the Spamhaus tests for your mass-testing, RBLs are largely worthless on old mails. (maybe recall what the 'R' means).
I want to try both and see what happens, for curiosity.
Anyway, we're way off-topic. Feel free to bring the topic up elsewhere.
Well, I still have a problem with dnsmasq in telcontar not sending queries to bind in isengard. In telcontar dnsmasq I have:
server=/zen.spamhaus.org/192.168.1.16
but queries on that domain are NOT sent to that local server. Why? (the data is in my previous post)
-- Cheers / Saludos, Carlos E. R. (from 15.4 x86_64 at Telcontar)
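One way to see where those queries actually go (a debugging sketch, not from the thread; log-queries is a standard dnsmasq option, and /etc/dnsmasq.conf is the assumed config path):

# /etc/dnsmasq.conf
log-queries            # log every query and the upstream server it is forwarded to

# then restart and watch the log while testing:
systemctl restart dnsmasq.service
journalctl -u dnsmasq.service -f &
dig 2.0.0.127.zen.spamhaus.org @127.0.0.1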
Carlos E. R. wrote:
On 2023-01-08 10:19, Per Jessen wrote:
Carlos E. R. wrote:
I assume it is working as intended.
However, I think this will not work towards "checking spam", because now I will become the "open" DNS server. That is, I'm not a registered client, and I am on a dynamic IP.
You are guessing wildly - why don't you just try it first? An "open resolver" is a DNS resolver that permits recursive queries from anywhere.
According to David, it can be other things. To them, the Google 8.8.8.8 DNS server is an "open resolver".
The Google and the Cloudflare resolvers are open resolvers to _everyone_ . That is their whole raison d'être. -- Per Jessen, Zürich (7.4°C) Have worked in the anti-spam business for eighteen years
On 1/7/2023 5:40 PM, Carlos E. R. wrote:
On 2023-01-07 at 11:35 +0100, Per Jessen wrote:
Carlos E. R. wrote:
So, IMHO, we need a configuration that sends queries to our ISP DNS, except those of spamhouse.
Correct and that is easily done. In your own DNS server, you just add a separate zone config for "zen.spamhaus.org", and clear any forwarders config. That is no big deal in any of the popular dns servers - bind, unbound, dnsmasq or powerdns.
I think this ought to suffice for bind: (from memory)
zone "zen.spamhaus.org" { forwarders {}; }
Produces "Reload failed for Berkeley Internet Name Domain (DNS)."
It is not:
zone "zen.spamhaus.org" in { forwarders {};
either.
If I try "restart" instead of "reload", same error after a minute. Wait, I should have looked directly at syslog:
Starting name server BIND /etc/named.conf:192: missing ';' before end of file
Doh!
<3.6> 2023-01-07T23:10:52.046862+01:00 Isengard systemd 1 - - Starting Berkeley Internet Name Domain (DNS)... <3.6> 2023-01-07T23:10:52.188848+01:00 Isengard named.init 7738 - - Starting name server BIND /etc/named.conf:189: zone 'zen.spamhaus.org': type not present
zone "zen.spamhaus.org" { forwarders {}; };
Ok, I get it. But what type is it? Not "master". Not "hint"? Ah, "forward".
zone "zen.spamhaus.org" { type forward; forwarders {}; };
Sorry, I haven't touched bind in ages. Ok, now it loads and answers:
Isengard:~ # host -v zen.spamhaus.org Trying "zen.spamhaus.org" ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 42326 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
;; QUESTION SECTION: ;zen.spamhaus.org. IN A
;; AUTHORITY SECTION: zen.spamhaus.org. 10 IN SOA need.to.know.only. hostmaster.spamhaus.org. 2301072217 3600 600 432000 10
Received 98 bytes from 127.0.0.1#53 in 1223 ms Trying "zen.spamhaus.org" ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2528 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
;; QUESTION SECTION: ;zen.spamhaus.org. IN AAAA
;; AUTHORITY SECTION: zen.spamhaus.org. 10 IN SOA need.to.know.only. hostmaster.spamhaus.org. 2301072215 3600 600 432000 10
Received 98 bytes from 127.0.0.1#53 in 91 ms Trying "zen.spamhaus.org" ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 6010 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
;; QUESTION SECTION: ;zen.spamhaus.org. IN MX
;; AUTHORITY SECTION: zen.spamhaus.org. 10 IN SOA need.to.know.only. hostmaster.spamhaus.org. 2301072217 3600 600 432000 10
Received 98 bytes from 127.0.0.1#53 in 43 ms Isengard:~ #
Isengard:~ # dig zen.spamhaus.org
; <<>> DiG 9.16.6 <<>> zen.spamhaus.org ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 4826 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ; COOKIE: 08d98c0cc6e0915c0100000063b9f0a5fb28be3dc07dfded (good) ;; QUESTION SECTION: ;zen.spamhaus.org. IN A
;; AUTHORITY SECTION: zen.spamhaus.org. 10 IN SOA need.to.know.only. hostmaster.spamhaus.org. 2301072220 3600 600 432000 10
;; Query time: 79 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Sat Jan 07 23:22:29 CET 2023 ;; MSG SIZE rcvd: 137
Isengard:~ #
I assume it is working as intended.
However, I think this will not work towards "checking spam", because now I will become the "open" DNS server. That is, I'm not a registered client, and I am on a dynamic IP.
But this part is solved. Now dnsmasq (in telcontar):
server=/zen.spamhaus.org/192.168.1.16
Telcontar:~ # systemctl restart dnsmasq.service Telcontar:~ # host -v zen.spamhaus.org Trying "zen.spamhaus.org" ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 47703 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
;; QUESTION SECTION: ;zen.spamhaus.org. IN A
;; AUTHORITY SECTION: zen.spamhaus.org. 10 IN SOA need.to.know.only. hostmaster.spamhaus.org. 2301072229 3600 600 432000 10
Received 98 bytes from 127.0.0.1#53 in 52 ms Trying "zen.spamhaus.org" ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 53295 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
;; QUESTION SECTION: ;zen.spamhaus.org. IN AAAA
;; AUTHORITY SECTION: zen.spamhaus.org. 10 IN SOA need.to.know.only. hostmaster.spamhaus.org. 2301072229 3600 600 432000 10
Received 98 bytes from 127.0.0.1#53 in 56 ms Trying "zen.spamhaus.org" ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12133 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
;; QUESTION SECTION: ;zen.spamhaus.org. IN MX
;; AUTHORITY SECTION: zen.spamhaus.org. 10 IN SOA need.to.know.only. hostmaster.spamhaus.org. 2301072229 3600 600 432000 10
Received 98 bytes from 127.0.0.1#53 in 60 ms Telcontar:~ # systemctl restart dnsmasq.service Telcontar:~ # dig zen.spamhaus.org
; <<>> DiG 9.16.33 <<>> zen.spamhaus.org ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1971 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ; COOKIE: c59dc7c34cf5d1a50100000063b9f290a22169586500a43d (good) ;; QUESTION SECTION: ;zen.spamhaus.org. IN A
;; AUTHORITY SECTION: zen.spamhaus.org. 4 IN SOA need.to.know.only. hostmaster.spamhaus.org. 2301072229 3600 600 432000 10
;; Query time: 0 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Sat Jan 07 23:30:40 CET 2023 ;; MSG SIZE rcvd: 137
Telcontar:~ #
Telcontar dnsmasq doesn't seem to be querying isengard bind (at 192.168.1.16), does it?
I can do it directly and Isengard responds:
Telcontar:~ # host zen.spamhaus.org 192.168.1.16 Using domain server: Name: 192.168.1.16 Address: 192.168.1.16#53 Aliases:
Telcontar:~ #
Unrelated, Isengard bind spits these:
<3.6> 2023-01-07T23:17:56.336226+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2001:503:ba3e::2:30#53 <3.6> 2023-01-07T23:17:56.336985+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2001:500:a8::e#53 <3.6> 2023-01-07T23:17:56.337545+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2001:7fd::1#53 <3.6> 2023-01-07T23:17:57.007172+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2001:500:40::1#53 <3.6> 2023-01-07T23:17:57.007811+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2001:500:c::1#53 <3.6> 2023-01-07T23:17:57.008359+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2001:500:48::1#53 <3.6> 2023-01-07T23:17:57.008813+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2001:500:e::1#53 <3.6> 2023-01-07T23:17:57.009182+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2001:500:f::1#53 <3.6> 2023-01-07T23:17:57.009519+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2001:500:b::1#53 <3.6> 2023-01-07T23:17:57.131889+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2001:610:510:188:192:16:188:181#53 <3.6> 2023-01-07T23:17:57.132550+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2400:cb00:2049:1::a29f:1823#53 <3.6> 2023-01-07T23:17:57.133188+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2400:cb00:2049:1::a29f:191b#53 <3.6> 2023-01-07T23:17:57.247180+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a03:b0c0:3:e0::283:a00e#53 <3.6> 2023-01-07T23:17:57.247805+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a02:418:6a04:178:209:46:143:a6b0#53 <3.6> 2023-01-07T23:17:57.248315+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2001:19f0:6801:8f:ac20:b6:61ab:d20f#53 <3.6> 2023-01-07T23:17:57.248845+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a00:1650:1000:0:f9:cd5a:79:3ee8#53 <3.6> 2023-01-07T23:17:57.249372+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a03:f80:972:193:182:144:102:f612#53 <3.6> 2023-01-07T23:17:57.249875+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a05:d014:1bf:db01:45c8:f4d6:6f50:360c#53 <3.6> 2023-01-07T23:17:57.250369+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2001:1470:8000:c::39#53 <3.6> 2023-01-07T23:17:57.250861+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a05:d012:fa9:1301:660d:4cdd:ea59:f5f1#53 <3.6> 2023-01-07T23:17:57.251316+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a05:f480:2800:129f:da1e:3a4f:abe5:3481#53 <3.6> 2023-01-07T23:17:57.251787+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a00:12a8:8000::fff0:3#53 <3.6> 2023-01-07T23:17:57.252253+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2001:19f0:6c01:11d8:c0:7f06:93:9b1b#53 <3.6> 2023-01-07T23:17:57.252625+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a05:9406::62#53 <3.6> 
2023-01-07T23:17:57.253057+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a03:b0c0:1:d0::257b:e00e#53 <3.6> 2023-01-07T23:17:57.253493+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a02:9a8:7c00:3f1:0:1d:2b:7a#53 <3.6> 2023-01-07T23:17:57.253934+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a05:f480:2000:1246:9998:cf46:6cf8:1e7a#53 <3.6> 2023-01-07T23:17:57.254435+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2001:6a8:3dc0:0:c1:7408:3a:9dd2#53 <3.6> 2023-01-07T23:17:57.254990+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a05:9403::26e#53 <3.6> 2023-01-07T23:17:57.255472+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2001:4b98:dc0:205:1a:de:ba:c1e#53 <3.6> 2023-01-07T23:17:57.255854+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2001:4b7a:0:21:bad:f00d:bad:1dea#53 <3.6> 2023-01-07T23:17:57.256282+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a05:9404::e0#53 <3.6> 2023-01-07T23:17:57.257050+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2001:678:230:2194:194:104:0:140#53 <3.6> 2023-01-07T23:17:57.257787+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a03:f80:ed15:149:154:152:122:efa6#53 <3.6> 2023-01-07T23:17:57.258266+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2001:19f0:7402:29d:e0fa:79:3a09:80d0#53 <3.6> 2023-01-07T23:17:57.258671+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a03:f80:3991:192:71:26:25:d0c6#53 <3.6> 2023-01-07T23:17:57.339010+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/MX/IN': 2a01:4f8:c17:aba4:d9:900b:3e:10f6#53 <3.6> 2023-01-07T23:17:57.339909+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/MX/IN': 2a00:1a48:13e0:20:a7:b903:3d:6aca#53 <3.6> 2023-01-07T23:17:57.340712+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/MX/IN': 2a00:1650:1000:0:f9:cd5a:79:3ee8#53 <3.6> 2023-01-07T23:17:57.341453+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/MX/IN': 2a03:f80:ed15:149:154:152:122:efa6#53 <3.6> 2023-01-07T23:17:57.342043+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/MX/IN': 2a03:f80:972:193:182:144:102:f612#53 <3.6> 2023-01-07T23:17:57.342620+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/MX/IN': 2001:7c0:0:77::4#53 <3.6> 2023-01-07T23:17:57.343174+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/MX/IN': 2001:4b98:dc0:205:1a:de:ba:c1e#53 <3.6> 2023-01-07T23:22:29.497465+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2a01:4f8:c0c:c146:e9:4cfb:9d:7a92#53 <3.6> 2023-01-07T23:22:29.499081+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2a05:9403::26e#53 <3.6> 2023-01-07T23:22:29.499675+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2001:4b98:dc0:205:1a:de:ba:c1e#53 <3.6> 2023-01-07T23:29:03.401911+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/A/IN': 2001:7c0:0:77::4#53 <3.6> 2023-01-07T23:29:03.479242+01:00 Isengard named 8083 - - 
network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2001:19f0:6c01:11d8:c0:7f06:93:9b1b#53 <3.6> 2023-01-07T23:29:03.479925+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2001:19f0:6801:8f:ac20:b6:61ab:d20f#53 <3.6> 2023-01-07T23:29:03.480441+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/AAAA/IN': 2a05:9403::26e#53 <3.6> 2023-01-07T23:29:03.544637+01:00 Isengard named 8083 - - network unreachable resolving 'zen.spamhaus.org/MX/IN': 2a05:d014:1bf:db01:45c8:f4d6:6f50:360c#53
Obviously, I don't have an external IPv6 address.
-- Cheers, Carlos E. R. (from openSUSE 15.4 x86_64 at Telcontar)
To stick my nose back in here, can I apply to unbound some of the comments above and elsewhere (including other threads here) about bind config? I'm finding the unbound docs seem to be for v 1.17.xx, while LEAP 15.4 provides 1.6.xx. I attempted to use some of the log config "tags" to enable more detailed logging, expecting problems, and unbound-checkconf did not like them at all. Makes me reluctant to try anything else I find in the "official docs". I'm presuming the numbering convention here says ".17" is (much) newer than ".6". That said, all I really want to do is to setup the type of nameserver that is hyped as resolving the "open resolver" issue to allow re-enabling of spamhaus checks. Certainly I can live without that, but, well, I need something to do you know. I'm gathering that setting up a "zone" for the spamhaus requests as a "non forwarding zone" is what needs done? Just seems I have to infer a lot which, as a reluctant DNS meddler, I don't have hard won knowledge there.
On Sunday, 2023-01-08 at 12:07 -0500, joe a wrote:
On 1/7/2023 5:40 PM, Carlos E. R. wrote:
On 2023-01-07 at 11:35 +0100, Per Jessen wrote:
Carlos E. R. wrote:
...
To stick my nose back in here, can I apply to unbound some of the comments above and elsewhere (including other threads here) about bind config?
Sure, although I know nothing about unbound.
I'm finding the unbound docs seem to be for v 1.17.xx, while LEAP 15.4 provides 1.6.xx. I attempted to use some of the log config "tags" to enable more detailed logging, expecting problems, and unbound-checkconf did not like them at all. Makes me reluctant to try anything else I find in the "official docs". I'm presuming the numbering convention here says ".17" is (much) newer than ".6".
That said, all I really want to do is to setup the type of nameserver that is hyped as resolving the "open resolver" issue to allow re-enabling of spamhaus checks. Certainly I can live without that, but, well, I need something to do you know.
I'm gathering that setting up a "zone" for the spamhaus requests as a "non forwarding zone" is what needs done? Just seems I have to infer a lot which, as a reluctant DNS meddler, I don't have hard won knowledge there.
I'm going to do a bit of testing myself, but I am afraid that unless you are a registered client, you simply cannot use Spamhaus. It works, I think, by registering your fixed IP with them. I read the output from SpamAssassin as meaning that SA is configured to do the testing nonetheless, and if it says "open resolver", to ignore the test result. In that case the best thing to do will be to disable the testing in SA. I will, nonetheless, test with that different configuration of bind, without using forwarding, to see what happens. In my case I'm trying to improve the speed of SA, not to clear out spam.
-- Cheers, Carlos E. R. (from openSUSE 15.4 x86_64 at Telcontar)
On 1/8/2023 1:42 PM, Carlos E. R. wrote:
On Sunday, 2023-01-08 at 12:07 -0500, joe a wrote:
On 1/7/2023 5:40 PM, Carlos E. R. wrote:
El 2023-01-07 a las 11:35 +0100, Per Jessen escribió:
Carlos E. R. wrote:
...
To stick my nose back in here, can I apply to unbound some of the comments above and elsewhere (including other threads here) about bind config?
Sure, although I know nothing about unbound.
I'm finding the unbound docs seem to be for v 1.17.xx, while LEAP 15.4 provides 1.6.xx. I attempted to use some of the log config "tags" to enable more detailed logging, expecting problems, and unbound-checkconf did not like them at all. Makes me reluctant to try anything else I find in the "official docs". I'm presuming the numbering convention here says ".17" is (much) newer than ".6".
That said, all I really want to do is to setup the type of nameserver that is hyped as resolving the "open resolver" issue to allow re-enabling of spamhaus checks. Certainly I can live without that, but, well, I need something to do you know.
I'm gathering that setting up a "zone" for the spamhaus requests as a "non forwarding zone" is what needs done? Just seems I have to infer a lot which, as a reluctant DNS meddler, I don't have hard won knowledge there.
I'm going to do a bit of testing myself, but I am afraid that unless you are a registered client, you simply can not use spamhaus. It works, I think, registering your fixed IP with them.
AHA!!! Years ago I registered with them and quit using them due to one thing or another. Not sure if I changed providers in that time (they merged and re-merged several times). Ah, yes, when they did one of their upgrades, they changed my static block. I AM getting old, it seems. Thanks, I will certainly need to check that.
Good luck on the configuration issues, we may be on parallel paths.
On 1/8/2023 12:07 PM, joe a wrote:
. . .
To stick my nose back in here, can I apply to unbound some of the comments above and elsewhere (including other threads here) about bind config?
I'm finding the unbound docs seem to be for v 1.17.xx, while LEAP 15.4 provides 1.6.xx. I attempted to use some of the log config "tags" to enable more detailed logging, expecting problems, and unbound-checkconf did not like them at all. Makes me reluctant to try anything else I find in the "official docs". I'm presuming the numbering convention here says ".17" is (much) newer than ".6".
That said, all I really want to do is to setup the type of nameserver that is hyped as resolving the "open resolver" issue to allow re-enabling of spamhaus checks. Certainly I can live without that, but, well, I need something to do you know.
I'm gathering that setting up a "zone" for the spamhaus requests as a "non forwarding zone" is what needs done? Just seems I have to infer a lot which, as a reluctant DNS meddler, I don't have hard won knowledge there.
Now this is working as expected.
Found this aid at spamhaus.org for a quick test of resolver functionality:

  dig 2.0.0.127.zen.spamhaus.org +short

Success is indicated by a reply of 127.0.0.10, 127.0.0.4, 127.0.0.2. A response of 127.255.255.254 indicates an open resolver. Running "dig @some.open.dns.server 2.0.0.127.zen.spamhaus.org +short" should produce the open-resolver response.
Hope that helps someone resolve (sorry, hard to resist) their issue.
Unbound worked, essentially, "out of the box", with minimal configuration. Still, why LEAP appears so far behind the "current version" of unbound is curious.
joe a.
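A small sketch of that same check wrapped in shell, assuming the new resolver listens on 127.0.0.1 and using only the return codes quoted above:

  #!/bin/sh
  # query the Spamhaus test entry through the local resolver
  result=$(dig @127.0.0.1 2.0.0.127.zen.spamhaus.org +short)
  case "$result" in
      *127.255.255.254*) echo "still seen as an open/public resolver" ;;
      *127.0.0.2*)       echo "OK: direct resolution works, DNSBL checks can be re-enabled" ;;
      *)                 echo "no listing returned - is the local resolver running on 127.0.0.1?" ;;
  esac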
On 2023-01-09 18:34, joe a wrote:
On 1/8/2023 12:07 PM, joe a wrote:
. . .
To stick my nose back in here, can I apply to unbound some of the comments above and elsewhere (including other threads here) about bind config?
I'm finding the unbound docs seem to be for v 1.17.xx, while LEAP 15.4 provides 1.6.xx. I attempted to use some of the log config "tags" to enable more detailed logging, expecting problems, and unbound-checkconf did not like them at all. Makes me reluctant to try anything else I find in the "official docs". I'm presuming the numbering convention here says ".17" is (much) newer than ".6".
That said, all I really want to do is to setup the type of nameserver that is hyped as resolving the "open resolver" issue to allow re-enabling of spamhaus checks. Certainly I can live without that, but, well, I need something to do you know.
I'm gathering that setting up a "zone" for the spamhaus requests as a "non forwarding zone" is what needs done? Just seems I have to infer a lot which, as a reluctant DNS meddler, I don't have hard won knowledge there.
Now this working as expected.
Found this aid at spamhaus.org for a quick test of resolver functionality: "dig 2.0.0.127.zen.spamhaus.org +short"
Success indicated by a reply of 127.0.0.10 127.0.0.4 127.0.0.2
Response of 127.255.255.254 indicated open resolver.
Can use "dig @some.open.dns.server 2.0.0.127.zen.spamhaus.org +short" should produce the open resolver response.
cer@Telcontar:~> dig 2.0.0.127.zen.spamhaus.org

; <<>> DiG 9.16.33 <<>> 2.0.0.127.zen.spamhaus.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 47764
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 8edd52062c112efd0100000063bc6d9f7d3b14252aeb0070 (good)
;; QUESTION SECTION:
;2.0.0.127.zen.spamhaus.org.    IN      A

;; ANSWER SECTION:
2.0.0.127.zen.spamhaus.org. 60  IN      A       127.0.0.10
2.0.0.127.zen.spamhaus.org. 60  IN      A       127.0.0.2
2.0.0.127.zen.spamhaus.org. 60  IN      A       127.0.0.4

;; Query time: 87 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Mon Jan 09 20:40:14 CET 2023
;; MSG SIZE  rcvd: 131

cer@Telcontar:~>

:-)
Hope that helps someone resolve (sorry, hard to resist) their issue.
Unbound worked, essentially, "out of the box", with minimal configuration.
Still, why LEAP appears so far behind the "current version" of unbound is curious.
No, it is expected. -- Cheers / Saludos, Carlos E. R. (from 15.4 x86_64 at Telcontar)
On 1/9/2023 2:45 PM, Carlos E. R. wrote:
On 2023-01-09 18:34, joe a wrote:
. . .
Hope that helps someone resolve (sorry, hard to resist) their issue.
Unbound worked, essentially, "out of the box", with minimal configuration.
Still, why LEAP appears so far behind the "current version" of unbound is curious.
No, it is expected.
Yeah, but what's provided is 1.6.8, while the current version on the unbound site is 1.17.0. That's a big gap.
On 2023-01-10 03:56, joe a wrote:
On 1/9/2023 2:45 PM, Carlos E. R. wrote:
On 2023-01-09 18:34, joe a wrote:
. . .
Hope that helps someone resolve (sorry, hard to resist) their issue.
Unbound worked, essentially, "out of the box", with minimal configuration.
Still, why LEAP appears so far behind the "current version" of unbound is curious.
No, it is expected.
Yea, but provided is 1.6.8, current on unbound site is 1.17.0
That's a big gap.
As expected. Leap has to do that; that's the documented goal. Leap cannot use the most recent version and has to stick to the version that Leap .0 had. -- Cheers / Saludos, Carlos E. R. (from 15.4 x86_64 at Telcontar)
On 1/10/2023 3:35 AM, Carlos E. R. wrote:
On 2023-01-10 03:56, joe a wrote:
On 1/9/2023 2:45 PM, Carlos E. R. wrote:
On 2023-01-09 18:34, joe a wrote:
. . .
Hope that helps someone resolve (sorry, hard to resist) their issue.
Unbound worked, essentially, "out of the box", with minimal configuration.
Still, why LEAP appears so far behind the "current version" of unbound is curious.
No, it is expected.
Yea, but provided is 1.6.8, current on unbound site is 1.17.0
That's a big gap.
As expected. Leap has to do that, that's the documented goal. Leap can not use the recent version and has to stick to the version that Leap .0 had.
I must protest. I can see it being a point or two behind, but a gap from 6 to 17 is a bit much, IMHO. That's assuming unbound is counting one at a time from 6 through 17 for releases.
* joe a <joea-lists@j4computers.com> [01-10-23 15:08]:
On 1/10/2023 3:35 AM, Carlos E. R. wrote:
On 2023-01-10 03:56, joe a wrote:
On 1/9/2023 2:45 PM, Carlos E. R. wrote:
On 2023-01-09 18:34, joe a wrote:
. . .
Hope that helps someone resolve (sorry, hard to resist) their issue.
Unbound worked, essentially, "out of the box", with minimal configuration.
Still, why LEAP appears so far behind the "current version" of unbound is curious.
No, it is expected.
Yea, but provided is 1.6.8, current on unbound site is 1.17.0
That's a big gap.
As expected. Leap has to do that, that's the documented goal. Leap can not use the recent version and has to stick to the version that Leap .0 had.
I must protest. I can see it being a point or two behind, but a gap of from 6 to 17 is a bit much, IMHO. Assuming unbound is counting one at a time from 6 through 17 for releases.
There's a well-known saying about "assumpting". You could run Tumbleweed and not be as far behind. -- (paka)Patrick Shanahan Plainfield, Indiana, USA @ptilopteri http://en.opensuse.org openSUSE Community Member facebook/ptilopteri Photos: http://wahoo.no-ip.org/piwigo paka @ IRCnet oftc
On 2023-01-10 21:05, joe a wrote:
On 1/10/2023 3:35 AM, Carlos E. R. wrote:
On 2023-01-10 03:56, joe a wrote:
On 1/9/2023 2:45 PM, Carlos E. R. wrote:
On 2023-01-09 18:34, joe a wrote:
. . .
Hope that helps someone resolve (sorry, hard to resist) their issue.
Unbound worked, essentially, "out of the box", with minimal configuration.
Still, why LEAP appears so far behind the "current version" of unbound is curious.
No, it is expected.
Yea, but provided is 1.6.8, current on unbound site is 1.17.0
That's a big gap.
As expected. Leap has to do that, that's the documented goal. Leap can not use the recent version and has to stick to the version that Leap .0 had.
I must protest. I can see it being a point or two behind, but a gap of from 6 to 17 is a bit much, IMHO. Assuming unbound is counting one at a time from 6 through 17 for releases.
Protest all you like, but nevertheless, that's the foundational intention of Leap :-P
Leap is based on (SUSE) SLES, that is, it shares packages with it: source, and in some/many cases binaries. SLES does not do upgrades. They start with a version and maintain it for years, doing only security updates and those updates that are strictly necessary. They try, intentionally, to keep the libraries at the same version, so that there are no surprises over a span of about 5 years.
There are exceptions. KDE/Plasma comes from the community, not from SUSE, so it is modern.
That's how it is. It doesn't matter if we like it or not. OK, if you don't like it, you can change to Factory aka Tumbleweed, where everything is new and gets updated almost daily. The total reverse of Leap. There is no middle ground in openSUSE.
After Leap 15.x, there will be no Leap. There will be something else called ALP, I think, and people like me will have to migrate to Ubuntu, Debian, Mageia... who knows.
-- Cheers / Saludos, Carlos E. R. (from 15.4 x86_64 at Telcontar)
On Tue, Jan 10, 2023 at 09:20:11PM +0100, Carlos E. R. wrote:
On 2023-01-10 21:05, joe a wrote:
On 1/10/2023 3:35 AM, Carlos E. R. wrote:
On 2023-01-10 03:56, joe a wrote:
On 1/9/2023 2:45 PM, Carlos E. R. wrote:
On 2023-01-09 18:34, joe a wrote:
. . .
Hope that helps someone resolve (sorry, hard to resist) their issue.
Unbound worked, essentially, "out of the box", with minimal configuration.
Still, why LEAP appears so far behind the "current version" of unbound is curious.
No, it is expected.
Yea, but provided is 1.6.8, current on unbound site is 1.17.0
That's a big gap.
As expected. Leap has to do that, that's the documented goal. Leap can not use the recent version and has to stick to the version that Leap .0 had.
I must protest. I can see it being a point or two behind, but a gap of from 6 to 17 is a bit much, IMHO. Assuming unbound is counting one at a time from 6 through 17 for releases.
Protest all you like, but nevertheless, that's the foundational intention of Leap :-P
Leap is based, aka shares packages, source and in some/many cases binary, with (SUSE) SLES. SLES does not do upgrades. They start with a version and maintain it for years, doing only security updates and those updates that are strictly necessary. They try, intentionally, to keep the libraries at the same version, so that there are no surprises along a time lapse of about 5 years.
There are exceptions. KDE/Plasma comes from the Community, not from SUSE, so it is modern.
That's how it is. It doesn't matter if we like it or not. Ok, if you don't like it, you can change to Factory aka Tumbleweed, which every thing is new and gets updated almost daily. The total reverse of Leap.
There is no middle term in openSUSE.
We occasionally do these SLE version updates of packages on business demand.
After Leap 15.x, there will be no Leap. There will be something else called Alp, I think, and people like me will have to migrate to Ubuntu, Debian, Mageia... who knows.
ALP is still in development and planning, and something looking like a regular old-school distribution is still planned. Ciao, Marcus
On 2023-01-10 21:42, Marcus Meissner wrote:
On Tue, Jan 10, 2023 at 09:20:11PM +0100, Carlos E. R. wrote:
On 2023-01-10 21:05, joe a wrote:
...
After Leap 15.x, there will be no Leap. There will be something else called Alp, I think, and people like me will have to migrate to Ubuntu, Debian, Mageia... who knows.
ALP is still in ongoing development and planning and something looking like a regular old school distribution is still planned.
That's very good news (the old school distribution). Thanks. -- Cheers / Saludos, Carlos E. R. (from 15.4 x86_64 at Telcontar)
On 1/10/23 12:20, Carlos E. R. wrote:
After Leap 15.x, there will be no Leap. There will be something else called Alp, I think, and people like me will have to migrate to Ubuntu, Debian, Mageia... who knows.
Don't be so hasty, Carlos. Maybe ALP will be fine, if not an improvement. I'm actually looking forward to it a bit. Regards, Lew
On 2023-01-10 22:15, Lew Wolfgang wrote:
On 1/10/23 12:20, Carlos E. R. wrote:
After Leap 15.x, there will be no Leap. There will be something else called Alp, I think, and people like me will have to migrate to Ubuntu, Debian, Mageia... who knows.
Don't be so hasty, Carlos. Maybe ALP will be fine, if not an improvement.
I'm actually looking forward to it a bit.
I don't. The other day I read that it needs Internet just to run or install.
<https://lists.opensuse.org/archives/list/users@lists.opensuse.org/message/IOEIVXKVS3LHUX3IEKUQP54BNL2M7S7C/>
Then it runs on containers, not packages; some sort of virtualisation that I don't understand. That needs a lot of RAM and more powerful machines.
-- Cheers / Saludos, Carlos E. R. (from 15.4 x86_64 at Telcontar)
On 1/10/23 15:13, Carlos E. R. wrote:
On 2023-01-10 22:15, Lew Wolfgang wrote:
On 1/10/23 12:20, Carlos E. R. wrote:
After Leap 15.x, there will be no Leap. There will be something else called Alp, I think, and people like me will have to migrate to Ubuntu, Debian, Mageia... who knows.
Don't be so hasty, Carlos. Maybe ALP will be fine, if not an improvement.
I'm actually looking forward to it a bit.
I don't. The other day I read that it needs Internet to just run or install.
I can't imagine that it would need an Internet connection to "run". Install, maybe. I've got many machines that run stand-alone, but I always manage to get an Internet connection when installing to pull down all the updates. Then I disconnect.
Then it runs on containers, not packages; some sort of virtualisation that I don't understand. That needs a lot of ram and more powerful machines.
Again, let's wait and see. I don't fully understand systemd either but it's usable. Adapt and "Have a lot of fun." Regards, Lew
On 11.01.2023 02:13, Carlos E. R. wrote:
Then it runs on containers, not packages; some sort of virtualisation that I don't understand. That needs a lot of ram and more powerful machines.
If you do not understand it, how do you know it needs a lot of RAM?
On 2023-01-10 22:17:31 Andrei Borzenkov wrote:
|On 11.01.2023 02:13, Carlos E. R. wrote:
|> Then it runs on containers, not packages; some sort of virtualisation
|> that I don't understand. That needs a lot of ram and more powerful
|> machines.
|
|If you do not understand it, how do you know it needs a lot of RAM?
My understanding is that each container contains, in addition to the application itself, all of the libraries that the application uses. The rationale is that it will alleviate the dependency issues. However, multiple containers duplicating libraries will cause bloat in both RAM and on disk. Leslie -- Platform: Linux Distribution: openSUSE Leap 15.4 x86_64
On Wed, Jan 11, 2023 at 8:07 AM J Leslie Turriff <jlturriff@mail.com> wrote:
On 2023-01-10 22:17:31 Andrei Borzenkov wrote:
|On 11.01.2023 02:13, Carlos E. R. wrote:
|> Then it runs on containers, not packages; some sort of virtualisation
|> that I don't understand. That needs a lot of ram and more powerful
|> machines.
|
|If you do not understand it, how do you know it needs a lot of RAM?
My understanding is that each container contains, in addition to the application itself, all of the libraries that the application uses.
It may. It does not mean it must. Actually this is not true even for existing use cases (there are common base containers that provide common libraries). And it certainly does not depend on software delivery methods. Nothing stops packaging all required libraries as part of an application RPM package.
The rationale is that it will alleviate the dependency issues. However, multiple containers duplicating libraries will cause bloat in both RAM and on disk.
So nobody knows how the software for ALP will be packaged but everyone automatically assumes the worst.
On 2023-01-11 02:47:02 Andrei Borzenkov wrote:
|On Wed, Jan 11, 2023 at 8:07 AM J Leslie Turriff <jlturriff@mail.com> wrote: |> On 2023-01-10 22:17:31 Andrei Borzenkov wrote: |> > |On 11.01.2023 02:13, Carlos E. R. wrote: |> > |> Then it runs on containers, not packages; some sort of |> > |> virtualisation that I don't understand. That needs a lot of ram and |> > |> more powerful machines. |> > | |> > |If you do not understand it, how do you know it needs a lot of RAM? |> |> My understanding is that each container contains, in addition to |> the application itself, all of the libraries that the application uses. | |It may. It does not mean it must. Actually this is not true even for |existing use cases (there are common base containers that provide |common libraries). And it certainly does not depend on software |delivery methods. Nothing stops packaging all required libraries as |part of an application RPM package. | |> The rationale is that it will alleviate |> the dependency issues. However, multiple containers duplicating |> libraries will cause bloat in both RAM and on disk. | |So nobody knows how the software for ALP will be packaged but everyone |automatically assumes the worst.
More often than not, pessimists get happy surprises. :-) Leslie -- Platform: Linux Distribution: openSUSE Leap 15.4 x86_64
On 2023-01-11 02:47:02 Andrei Borzenkov wrote:
|On Wed, Jan 11, 2023 at 8:07 AM J Leslie Turriff <jlturriff@mail.com> wrote: |> On 2023-01-10 22:17:31 Andrei Borzenkov wrote: |> > |On 11.01.2023 02:13, Carlos E. R. wrote: |> > |> Then it runs on containers, not packages; some sort of |> > |> virtualisation that I don't understand. That needs a lot of ram and |> > |> more powerful machines. |> > | |> > |If you do not understand it, how do you know it needs a lot of RAM? |> |> My understanding is that each container contains, in addition to |> the application itself, all of the libraries that the application uses. | |It may. It does not mean it must. Actually this is not true even for |existing use cases (there are common base containers that provide |common libraries). And it certainly does not depend on software |delivery methods. Nothing stops packaging all required libraries as |part of an application RPM package. | |> The rationale is that it will alleviate |> the dependency issues. However, multiple containers duplicating |> libraries will cause bloat in both RAM and on disk. | |So nobody knows how the software for ALP will be packaged but everyone |automatically assumes the worst.
My understanding is that the main idea of containers is to avoid the problem of different applications requiring different versions of libraries. I might also mention that, IIUC, the kernel also provides some sort of container implementation. Is this what this discussion is about? And is that different from the Docker, Flatpak, etc. user-space containers? Leslie -- Platform: Linux Distribution: openSUSE Leap 15.4 x86_64
On Thu, Jan 12, 2023 at 9:02 AM J Leslie Turriff <jlturriff@mail.com> wrote:
On 2023-01-11 02:47:02 Andrei Borzenkov wrote:
|On Wed, Jan 11, 2023 at 8:07 AM J Leslie Turriff <jlturriff@mail.com> wrote: |> On 2023-01-10 22:17:31 Andrei Borzenkov wrote: |> > |On 11.01.2023 02:13, Carlos E. R. wrote: |> > |> Then it runs on containers, not packages; some sort of |> > |> virtualisation that I don't understand. That needs a lot of ram and |> > |> more powerful machines. |> > | |> > |If you do not understand it, how do you know it needs a lot of RAM? |> |> My understanding is that each container contains, in addition to |> the application itself, all of the libraries that the application uses. | |It may. It does not mean it must. Actually this is not true even for |existing use cases (there are common base containers that provide |common libraries). And it certainly does not depend on software |delivery methods. Nothing stops packaging all required libraries as |part of an application RPM package. | |> The rationale is that it will alleviate |> the dependency issues. However, multiple containers duplicating |> libraries will cause bloat in both RAM and on disk. | |So nobody knows how the software for ALP will be packaged but everyone |automatically assumes the worst.
My understanding is that the main idea of containers is to avoid the problem of different applications requiring different versions of libraries.
The main idea of containers is to provide an isolated environment for a running process. Having an isolated environment simplifies having different versions of libraries. But having different versions of libraries is and has always been possible without containers: static linking, private rpath, etc. Just look at Mozilla or Chrome.
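Those non-container mechanisms are easy to see on any system; the binary paths below are purely hypothetical examples, not anything shipped by openSUSE:

  # dynamically linked against the system libraries:
  ldd /usr/bin/someapp
  # an application that carries private library copies advertises it
  # via RPATH/RUNPATH entries in its dynamic section:
  readelf -d /opt/vendorapp/bin/app | grep -E 'RPATH|RUNPATH'
  # a fully static binary has no shared library dependencies at all:
  file /usr/bin/statictool        # reports "statically linked"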
I might also mention that IIUC the kernel also provides some sort of container implementation. Is this what this discussion is about? And is that different from the Docker, FlatPack, etc. userSpace containers?
Docker, Flatpak, LXC, etc. are all using the same kernel facilities to implement containers. They differ in management options, image delivery format, etc.
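That kernel facility can be poked at directly with util-linux, no container runtime involved; a minimal sketch:

  # start a shell in fresh mount/UTS/IPC/network/PID/user namespaces -
  # in effect a bare-bones "container" with no image format at all
  unshare --mount --uts --ipc --net --pid --fork --user --map-root-user /bin/bash
  # inside that shell:
  echo $$            # prints 1 - it is PID 1 of its own PID namespace
  ip link show       # only "lo", in a brand new, empty network namespace
  hostname demo      # changes the hostname only inside the new UTS namespace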
On 2023-01-12 00:58:11 Andrei Borzenkov wrote:
|On Thu, Jan 12, 2023 at 9:02 AM J Leslie Turriff <jlturriff@mail.com> wrote: |> On 2023-01-11 02:47:02 Andrei Borzenkov wrote: |> > |On Wed, Jan 11, 2023 at 8:07 AM J Leslie Turriff <jlturriff@mail.com> wrote: |> > |> On 2023-01-10 22:17:31 Andrei Borzenkov wrote: |> > |> > |On 11.01.2023 02:13, Carlos E. R. wrote: |> > |> > |> Then it runs on containers, not packages; some sort of |> > |> > |> virtualisation that I don't understand. That needs a lot of |> > |> > |> ram and more powerful machines. |> > |> > | |> > |> > |If you do not understand it, how do you know it needs a lot of |> > |> > | RAM? |> > |> |> > |> My understanding is that each container contains, in |> > |> addition to the application itself, all of the libraries that the |> > |> application uses. |> > | |> > |It may. It does not mean it must. Actually this is not true even for |> > |existing use cases (there are common base containers that provide |> > |common libraries). And it certainly does not depend on software |> > |delivery methods. Nothing stops packaging all required libraries as |> > |part of an application RPM package. |> > | |> > |> The rationale is that it will alleviate |> > |> the dependency issues. However, multiple containers duplicating |> > |> libraries will cause bloat in both RAM and on disk. |> > | |> > |So nobody knows how the software for ALP will be packaged but |> > | everyone automatically assumes the worst. |> |> My understanding is that the main idea of containers is to avoid |> the problem of different applications requiring different versions of |> libraries. | |The main idea of containers is to provide an isolated environment for |a running process. Having an isolated environment simplifies having |different versions of libraries. But having different versions of |libraries is and has always been possible without containers. Static |linking, private rpath etc. Just look at mozilla or chrome. | |> I might also mention that IIUC the kernel also provides some |> sort of container implementation. Is this what this discussion is |> about? And is that different from the Docker, FlatPack, etc. userSpace |> containers? | |Docker, flatpak, lxc etc are all using the same kernel facilities to |implement containers. They differ in management options, image |delivery format etc.
Ah, thank you for the clarification. Leslie -- Platform: Linux Distribution: openSUSE Leap 15.4 x86_64
On 2023-01-12 07:58, Andrei Borzenkov wrote:
On Thu, Jan 12, 2023 at 9:02 AM J Leslie Turriff <> wrote:
On 2023-01-11 02:47:02 Andrei Borzenkov wrote:
|On Wed, Jan 11, 2023 at 8:07 AM J Leslie Turriff <> wrote: |> On 2023-01-10 22:17:31 Andrei Borzenkov wrote: |> > |On 11.01.2023 02:13, Carlos E. R. wrote: |> > |> Then it runs on containers, not packages; some sort of |> > |> virtualisation that I don't understand. That needs a lot of ram and |> > |> more powerful machines. |> > | |> > |If you do not understand it, how do you know it needs a lot of RAM? |> |> My understanding is that each container contains, in addition to |> the application itself, all of the libraries that the application uses. | |It may. It does not mean it must. Actually this is not true even for |existing use cases (there are common base containers that provide |common libraries). And it certainly does not depend on software |delivery methods. Nothing stops packaging all required libraries as |part of an application RPM package. | |> The rationale is that it will alleviate |> the dependency issues. However, multiple containers duplicating |> libraries will cause bloat in both RAM and on disk. | |So nobody knows how the software for ALP will be packaged but everyone |automatically assumes the worst.
My understanding is that the main idea of containers is to avoid the problem of different applications requiring different versions of libraries.
The main idea of containers is to provide an isolated environment for a running process. Having an isolated environment simplifies having different versions of libraries. But having different versions of libraries is and has always been possible without containers. Static linking, private rpath etc. Just look at mozilla or chrome.
So, the reverse of what we have done for decades, sharing libraries among applications to save resources. That's memory bloat. -- Cheers / Saludos, Carlos E. R. (from 15.4 x86_64 at Telcontar)
On 2023-01-11 05:17, Andrei Borzenkov wrote:
On 11.01.2023 02:13, Carlos E. R. wrote:
Then it runs on containers, not packages; some sort of virtualisation that I don't understand. That needs a lot of ram and more powerful machines.
If you do not understand it, how do you know it needs a lot of RAM?
Any virtualisation bloats RAM use. -- Cheers / Saludos, Carlos E. R. (from 15.4 x86_64 at Telcontar)
Carlos E. R. wrote:
On 2023-01-11 05:17, Andrei Borzenkov wrote:
On 11.01.2023 02:13, Carlos E. R. wrote:
Then it runs on containers, not packages; some sort of virtualisation that I don't understand. That needs a lot of ram and more powerful machines.
If you do not understand it, how do you know it needs a lot of RAM?
Any virtualisation bloats rams use.
That is way too general. No, it doesn't. Virtualise any physical machine and you end up using exactly the same amount of RAM. -- Per Jessen, Zürich (6.9°C) Member, openSUSE Heroes (2016 - present) We're hiring - https://en.opensuse.org/openSUSE:Heroes
On Wed, Jan 11, 2023 at 3:31 PM Per Jessen <per@jessen.ch> wrote:
Carlos E. R. wrote:
On 2023-01-11 05:17, Andrei Borzenkov wrote:
On 11.01.2023 02:13, Carlos E. R. wrote:
Then it runs on containers, not packages; some sort of virtualisation that I don't understand. That needs a lot of ram and more powerful machines.
If you do not understand it, how do you know it needs a lot of RAM?
Any virtualisation bloats rams use.
That is way too general. No, it doesn't. Virtualise any physical machine and you end up using exactly the same amount of RAM.
Also regarding "more powerful machines": containers are using exactly the same existing kernel facilities as "standard" applications. Every process in Linux already runs in some namespace and is using some mount points. There is zero additional overhead after the process has been started as part of a container.
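That is easy to verify on a plain system; every ordinary process already carries a full set of namespace references:

  # namespace handles of the current shell - present whether or not
  # anything "containerish" is installed
  ls -l /proc/self/ns
  # list the network namespaces currently in use and the processes in them
  lsns -t net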
On 2023-01-11 13:48, Andrei Borzenkov wrote:
On Wed, Jan 11, 2023 at 3:31 PM Per Jessen <> wrote:
Carlos E. R. wrote:
On 2023-01-11 05:17, Andrei Borzenkov wrote:
On 11.01.2023 02:13, Carlos E. R. wrote:
Then it runs on containers, not packages; some sort of virtualisation that I don't understand. That needs a lot of ram and more powerful machines.
If you do not understand it, how do you know it needs a lot of RAM?
Any virtualisation bloats rams use.
That is way too general. No, it doesn't. Virtualise any physical machine and you end up using exactly the same amount of RAM.
Also regarding "more powerful machines". Containers are using exactly the same existing kernel facilities as "standard" applications. Every process in Linux already runs in some namespace and is using some mount points. There is zero additional overhead after the process has been started as part of container.
Can it run the same load as I do now? A two-core Celeron laptop with just 4 GiB of RAM, which is swapping when both Firefox and Thunderbird are running, plus LibreOffice and a PDF reader at times? -- Cheers / Saludos, Carlos E. R. (from 15.4 x86_64 at Telcontar)
On 2023-01-11 13:31, Per Jessen wrote:
Carlos E. R. wrote:
On 2023-01-11 05:17, Andrei Borzenkov wrote:
On 11.01.2023 02:13, Carlos E. R. wrote:
Then it runs on containers, not packages; some sort of virtualisation that I don't understand. That needs a lot of ram and more powerful machines.
If you do not understand it, how do you know it needs a lot of RAM?
Any virtualisation bloats rams use.
That is way too general. No, it doesn't. Virtualise any physical machine and you end up using exactly the same amount of RAM.
You have both the RAM dedicated to the virtual machine and the RAM for the host. You can have, for instance, a host running two virtual machines: one running DNS, another running Postfix. So you have a total of three machines, each with their own RAM and CPU, instead of a single one running the lot. There are advantages, but RAM use is not one of them. -- Cheers / Saludos, Carlos E. R. (from 15.4 x86_64 at Telcontar)
Carlos E. R. wrote:
On 2023-01-11 13:31, Per Jessen wrote:
Carlos E. R. wrote:
On 2023-01-11 05:17, Andrei Borzenkov wrote:
On 11.01.2023 02:13, Carlos E. R. wrote:
Then it runs on containers, not packages; some sort of virtualisation that I don't understand. That needs a lot of ram and more powerful machines.
If you do not understand it, how do you know it needs a lot of RAM?
Any virtualisation bloats rams use.
That is way too general. No, it doesn't. Virtualise any physical machine and you end up using exactly the same amount of RAM.
You have both ram dedicated to the virtual machine and to the host.
Oh my, I was not aware. :-)
You can put, for instance, a host having two virtual machines: one running dns, another running postfix. So you have a total of three machines, each with their ram and cpu, instead of a single one running the lot.
In other words, if you are intent on wasting RAM, you can.
There are advantages, but ram use is not one.
I never suggested it was, but you suggested "any virtualisation bloats RAM use", which is simply wrong, whichever way you turn it. -- Per Jessen, Zürich (7.4°C) Member, openSUSE Heroes (2016 - present) We're hiring - https://en.opensuse.org/openSUSE:Heroes
On Wed, Jan 11, 2023 at 4:19 PM Carlos E. R. <robin.listas@telefonica.net> wrote:
On 2023-01-11 13:31, Per Jessen wrote:
Carlos E. R. wrote:
On 2023-01-11 05:17, Andrei Borzenkov wrote:
On 11.01.2023 02:13, Carlos E. R. wrote:
Then it runs on containers, not packages; some sort of virtualisation that I don't understand. That needs a lot of ram and more powerful machines.
If you do not understand it, how do you know it needs a lot of RAM?
Any virtualisation bloats rams use.
That is way too general. No, it doesn't. Virtualise any physical machine and you end up using exactly the same amount of RAM.
You have both ram dedicated to the virtual machine and to the host.
A container is not a virtual machine.
On 2023-01-11 14:37, Andrei Borzenkov wrote:
On Wed, Jan 11, 2023 at 4:19 PM Carlos E. R. <> wrote:
On 2023-01-11 13:31, Per Jessen wrote:
Carlos E. R. wrote:
On 2023-01-11 05:17, Andrei Borzenkov wrote:
On 11.01.2023 02:13, Carlos E. R. wrote:
Then it runs on containers, not packages; some sort of virtualisation that I don't understand. That needs a lot of ram and more powerful machines.
If you do not understand it, how do you know it needs a lot of RAM?
Any virtualisation bloats rams use.
That is way too general. No, it doesn't. Virtualise any physical machine and you end up using exactly the same amount of RAM.
You have both ram dedicated to the virtual machine and to the host.
Container is not a virtual machine.
I have not yet seen a proper explanation, for dummies, of what it will be. -- Cheers / Saludos, Carlos E. R. (from 15.4 x86_64 at Telcontar)
Containers are namespaced processes; Linux even calls them that. The memory overhead is negligible - but of course you will run libraries side by side, not shared. On 11.01.23 at 14:47, Carlos E. R. wrote:
On 2023-01-11 14:37, Andrei Borzenkov wrote:
On Wed, Jan 11, 2023 at 4:19 PM Carlos E. R. <> wrote:
On 2023-01-11 13:31, Per Jessen wrote:
Carlos E. R. wrote:
On 2023-01-11 05:17, Andrei Borzenkov wrote:
On 11.01.2023 02:13, Carlos E. R. wrote:
> Then it runs on containers, not packages; some sort of
> virtualisation that I don't understand. That needs a lot of ram and
> more powerful machines.
>
If you do not understand it, how do you know it needs a lot of RAM?
Any virtualisation bloats rams use.
That is way too general. No, it doesn't. Virtualise any physical machine and you end up using exactly the same amount of RAM.
You have both ram dedicated to the virtual machine and to the host.
Container is not a virtual machine.
I have yet not seen a proper explanation of what it will be, for dummies.
On 2023-01-11 15:44, Bernd Ritter wrote:
Containers are namespaced processes, Linux even calls them so.
Means nothing to me, sorry.
The memory overhead is neglectable - but of course you will run libraries side by side and not shared.
So I will get a container for the system, at least, another for Thunderbird with its libraries, another for Firefox with its libraries, another for LibreOffice with its libraries, another for evince with its libraries, or foxit with its libraries. Not promising for an already slow machine with just 4 GiB.
On 11.01.23 at 14:47, Carlos E. R. wrote: ...
I have yet not seen a proper explanation of what it will be, for dummies.
-- Cheers / Saludos, Carlos E. R. (from 15.4 x86_64 at Telcontar)
Another description is lightweight virtualization, but it's not virtualization. The processes don't know about each other, which from a security standpoint is exactly what one wants. I'm not a big fan of Flatpak, but Flatpak has versioned flatpaks for some shared programs, so it's shared libraries again - somehow, at least. On 11.01.23 at 15:51, Carlos E. R. wrote:
On 2023-01-11 15:44, Bernd Ritter wrote:
Containers are namespaced processes, Linux even calls them so.
Means nothing to me, sorry.
The memory overhead is neglectable - but of course you will run libraries side by side and not shared.
So I will get a container for the system, at least, another for Thunderbird with its libraries, another for Firefox with its libraries, another for LibreOffice with its libraries, another for evince with its libraries, or foxit with its libraries.
Not promising for an already slow machine with just 4GiB.
On 11.01.23 at 14:47, Carlos E. R. wrote: ...
I have yet not seen a proper explanation of what it will be, for dummies.
Hello Carlos E. R., on 11-01-2023 at 15:51 you wrote:
On 2023-01-11 15:44, Bernd Ritter wrote:
Containers are namespaced processes, Linux even calls them so.
Means nothing to me, sorry.
On 11.01.2023 17:44, Bernd Ritter wrote:
Containers are namespaced processes, Linux even calls them so. The
Correct.
memory overhead is neglectable - but of course you will run libraries side by side and not shared.
Wrong. You make the usual mistake of confusing program execution technology with software delivery method. They are completely orthogonal. You can use containers on an RPM-based distribution, and you can package each dependency as a separate container image, just like RPM does it, so they are shared.
That would mean we would have 5000 RPM packages AND 5000 containers each? That's some kind of overhead I didn't imagine. On 11.01.23 at 17:42, Andrei Borzenkov wrote:
On 11.01.2023 17:44, Bernd Ritter wrote:
Containers are namespaced processes, Linux even calls them so. The
Correct.
memory overhead is neglectable - but of course you will run libraries side by side and not shared.
Wrong. You make usual mistake of confusing program execution technology with software delivery method. They are completely orthogonal. You can use containers on RPM based distribution and you can package each dependency as separate container image just like RPM does it so they are shared.
On Wed, 11 Jan 2023 at 00:13, Carlos E. R. <robin.listas@telefonica.net> wrote:
Then it runs on containers, not packages; some sort of virtualisation that I don't understand. That needs a lot of ram and more powerful machines.
Not so. Containers need _fewer_ resources than virtual machines.
I wrote a series of articles explaining virtualisation about a decade back. I predicted containers would be the Next Big Thing on Linux about 3 years before Docker. :-)
You might find them helpful. Here are the component articles:
http://www.theregister.co.uk/Print/2011/07/11/a_brief_history_of_virtualisat...
http://www.theregister.co.uk/Print/2011/07/14/brief_history_of_virtualisatio...
http://www.theregister.co.uk/Print/2011/07/18/brief_history_of_virtualisatio...
http://www.theregister.co.uk/Print/2011/07/21/brief_history_of_virtualisatio...
http://www.theregister.co.uk/Print/2011/07/25/brief_history_of_virtualisatio...
(Part 3 is the one about containers)
It was also a Kindle e-book, my first book published... but it's gone now, sadly.
-- Liam Proven ~ Profile: https://about.me/liamproven Email: lproven@cix.co.uk ~ gMail/gTalk/FB: lproven@gmail.com Twitter/LinkedIn: lproven ~ Skype: liamproven UK: (+44) 7939-087884 ~ Czech [+ WhatsApp/Telegram/Signal]: (+420) 702-829-053
On 2023-01-12 00:02, Liam Proven wrote:
On Wed, 11 Jan 2023 at 00:13, Carlos E. R. <> wrote:
Then it runs on containers, not packages; some sort of virtualisation that I don't understand. That needs a lot of ram and more powerful machines.
Not so. Containers need _less_ resources than virtual machines.
I wrote a series of articles explaining virtualisation about a decade back. I predicted containers would be the Next Big Thing on Linux about 3 years before Docker. :-)
You might find them helpful.
Here are the component articles:
http://www.theregister.co.uk/Print/2011/07/11/a_brief_history_of_virtualisat...
http://www.theregister.co.uk/Print/2011/07/14/brief_history_of_virtualisatio...
http://www.theregister.co.uk/Print/2011/07/18/brief_history_of_virtualisatio...
http://www.theregister.co.uk/Print/2011/07/21/brief_history_of_virtualisatio...
http://www.theregister.co.uk/Print/2011/07/25/brief_history_of_virtualisatio...
(Part 3 is the one about containers)
It was also a Kindle e-book, my first book published... but it's gone now, sadly.
OK, yes, I read them all. Interesting read indeed. I see why this is a big thing for servers, and why you say that containers need fewer resources than virtual machines. I have a little hesitation about security: would a virus in one propagate to the others? They share files, libraries, code, even RAM. But never mind. It is clearer to me than before that I am not interested in this technology for myself. I only run one machine at a time, mostly. -- Cheers / Saludos, Carlos E. R. (from Elesar, using openSUSE Leap 15.4)
participants (21)
- Andrei Borzenkov
- Bernd Ritter
- Bob Rogers
- Bruce Ferrell
- Carlos E. R.
- Dave Howorth
- David C. Rankin
- Erwin Lam
- Felix Miata
- Georg Pfuetzenreuter
- Harrie Baken
- J Leslie Turriff
- James Knott
- joe a
- Lew Wolfgang
- Liam Proven
- Marcus Meissner
- Patrick Shanahan
- Per Jessen
- Per Jessen
- Robert Webb