2021-01-13 15:46, Neal Gompa wrote:
On Wed, Jan 13, 2021 at 8:43 AM Stephan Kulow <coolo@suse.de> wrote:
On 13.01.21 at 14:31, Neal Gompa wrote:
On Wed, Jan 13, 2021 at 8:26 AM Stephan Kulow <coolo@suse.de> wrote:
On 13.01.21 at 14:08, Neal Gompa wrote:
I do not think so. Running scripts is just another node to execute in the "to-do DAG". A %post just needs to run (sometime) after the installation of its package, and, if another package B requires A, A's %post may need to be ordered before B's installation. But that should be all. We serialize because we don't know what a script *does*. Anyone can do *anything* in a script, so it is unsafe to attempt parallelization with arbitrary scripts.
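The ordering constraint described above can be sketched with Python's stdlib topological sorter. This is only an illustration, not RPM's actual solver: the package names A and B and the node labels are made up, and each scriptlet is simply modeled as one more node in the dependency graph.

```python
# Minimal sketch of the "to-do DAG": a %post node depends on its package's
# install node, and a dependent package's install node depends on that %post.
from graphlib import TopologicalSorter

dag = TopologicalSorter()
dag.add("install A")                 # A's payload installation, no prerequisites
dag.add("%post A", "install A")      # A's %post runs sometime after A is installed
dag.add("install B", "%post A")      # B requires A, so B installs after A's %post
dag.add("%post B", "install B")

order = list(dag.static_order())
print(order)
# A valid order: install A, %post A, install B, %post B
```

Nodes with no path between them would come out in the same "ready" group and could, in principle, run in parallel; the chain above has no such slack because B depends on A.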
Installing packages is a pretty unsafe operation to begin with :)
I must say I can follow Jan's argumentation more than yours.
You don't know what the script is doing, which means you don't know if you're creating a race condition between two package installs. If two independent packages are modifying the same file at the same time, what is the result? This is just one example of the problems that parallel arbitrary script execution can cause.
Well, you can still serialize the scriptlet part and parallelize the payload decompression and file installation.
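A rough sketch of that split, with made-up package names and stub bodies standing in for the real decompression and scriptlet work: the payload phase is independent per package and runs in a thread pool, while the scriptlet phase runs strictly one at a time.

```python
# Sketch: parallel payload unpacking, serial scriptlet execution.
# unpack_payload/run_scriptlet are illustrative stand-ins, not RPM APIs.
from concurrent.futures import ThreadPoolExecutor

def unpack_payload(pkg):
    # Decompression and file installation: independent per package,
    # so these calls are safe to overlap.
    return f"{pkg}: files installed"

def run_scriptlet(pkg):
    # An arbitrary script: we cannot prove it is race-free, so keep it serial.
    return f"{pkg}: %post done"

packages = ["A", "B", "C"]

with ThreadPoolExecutor() as pool:
    unpacked = list(pool.map(unpack_payload, packages))   # parallel phase

scripted = [run_scriptlet(p) for p in packages]           # serial phase

print(unpacked)
print(scripted)
```

Note that `pool.map` preserves input order in its results, so the serial phase still sees packages in the intended sequence.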
If we didn't have %pretrans and %pre scriptlets, I would agree with you. :(
Most packages don't use scriptlets – these should be quite safe for parallelization (or installing several packages at once). The remaining packages (with scriptlets) could be serialized. -- Regards, Mindaugas
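The partitioning proposed above can be sketched in a few lines. The package table here is invented for illustration; the only point is the split: packages without scriptlets go into a batch that may install in parallel, the rest stay in a serial queue.

```python
# Sketch of partitioning packages by whether they carry scriptlets.
# The package-to-scriptlet mapping is a made-up example.
packages = {
    "zlib":    [],                   # no scriptlets: safe to parallelize
    "bash":    [],
    "glibc":   ["%post"],            # has a scriptlet: keep serial
    "systemd": ["%pre", "%post"],
}

parallel_batch = [p for p, scripts in packages.items() if not scripts]
serial_queue   = [p for p, scripts in packages.items() if scripts]

print(parallel_batch)
print(serial_queue)
```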