Xen wrote:
On Mon, 28 Sep 2015, Anton Aylward wrote:
It does now after many years. I get the impression from what I hear at various meetings and conferences that it is intended for sysadmins and the like, not for the end user.
Personally I detest it. Even opening a PowerShell window is rife with problems (you can hardly read what it says). I seriously don't want to script on Windows. I never could. I'm a guy who has used Windows almost every day of my life since 3.11; I have programmed in ASM, Pascal, Basic, Java, PHP, a bit of Python, and Bash, and I *cannot* program in PowerShell. I don't want to, either. Some hold that it is very powerful and blah blah.
Just call me sick (I am), but programming in Bash at least makes me feel like there is hope in life after all :p.
Ditto on that -- I use bash+cygwin to do most of my sysadmin stuff on Windows when I can -- including full dumps of the registry -- FROM which I've been able to restore a previous installation's configuration (including installed programs).

Ex: Win7 fell over and died when it couldn't find an "xxx.fon" file that it had on its critical file list. If the OS had been 'up', it would have been trivial to fix, but it wouldn't boot w/o that file, and since it was an LSI-BIOS-RAID0 using 4 disks, there wasn't any way I could easily take the "system disk" and install it in a linux system to restore that 1 file. At the time, due to some other bug, I had no recent sys-image backup, so I couldn't restore from an image either.

In the Win7DVD repair console I was able to rename the old windir to windows2, then installed a pristine, unmodified version of win7 into windows. Then I could copy any updates from the old win7 dir to the new, and restore registry settings from the registry dumps -- so all my previously installed programs (still on disk outside of the windir) got all of their registry settings restored. All programs were restored to working order except a few Adobe progs which didn't like some of the changed guids -- those were fixed w/a call to Adobe support (who issued new licenses when I explained I couldn't deactivate the old licenses because they were on the windows installation that had 'died' -- so no chance to deactivate them beforehand). Since it was a Dell machine that was BIOS-licensed, Windows came up and thought it wasn't licensed, until I went to the system info page -- at which point it refreshed the license through the BIOS and re-activated.

Another time, to upgrade the internal SSDs, I was able to add an external disk and make a disk image using 'dd' from /dev/sdc -> /dev/sdX. Made sure I could boot Win from /dev/sdX -- then installed the new SSD array (faster + 33% more space) and copied the image back from sdX -> sdc...

At the time the old cygwin tools were 32-bit and couldn't be run from a windows rescue image -- which I had to work around. But the new cygwin-64 tools run native and run just fine from a win7-64 repair console, so maintenance is that much easier now.
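(The post doesn't say which tool produced those registry dumps; one way to get a full text export from a cygwin bash prompt is Windows' own regedit /e -- a sketch, with the hive and output path only as examples:)

    # Export the whole HKLM\SOFTWARE hive to a text .reg file.
    # regedit wants a Windows-style path, hence cygpath; needs admin rights.
    regedit /e "$(cygpath -w ~/software-hive.reg)" 'HKEY_LOCAL_MACHINE\SOFTWARE'

Restoring is the reverse: regedit /s software-hive.reg merges the settings back in silently.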
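(The 'dd' imaging step is just a raw block copy -- a minimal sketch, where the device names are placeholders you'd verify with lsblk before running anything like this:)

    # Raw block-copy the old system disk to the external one.
    # PLACEHOLDERS: check the real device names with lsblk first!
    src=/dev/sdc   # old system disk (as in the story above)
    dst=/dev/sdX   # external disk -- X is a placeholder, not a real device
    dd if="$src" of="$dst" bs=4M conv=noerror,sync
    sync           # flush buffers before unplugging anything

Then boot from the copy once to make sure it works before touching the original array.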
It's when you get to use pipes and filters that it gets powerful. Even something as simple as piping the output of 'find' through xargs and grep.
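(A minimal example of the kind of pipeline meant here -- the paths and pattern are just for illustration:)

    # Find all *.conf files under /etc changed in the last week,
    # then list the ones that mention "nameserver".
    # -print0/-0 keep filenames with spaces intact across the pipe.
    find /etc -name '*.conf' -mtime -7 -print0 | xargs -0 grep -l 'nameserver'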
That is really the only power of Linux.
There is hardly anything more powerful than that.
It's not linux, but the unix tools available on linux OR on cygwin64 -- which you can run under win7's repair console.
I think pipes (and named pipes) should in some way remain, or become, the way of interprocess communication. No matter how it is implemented, the pipe should be the future.
Not always ideal if you are using multiple processes -- even in linux, it's not possible to do 2-way communication over a single pipe (a pair of pipes, yes, but you can't easily set up 2-way communication between processes in standard shell or bash -- see the coproc sketch below).

Example (bit long): I have a script that checks for "duplicate product" RPMs in a dir (same product, differing versions)... It is designed to compare the versions between duplicate-named rpms and keep only the latest version. To do that reliably, I need to split off the 'ver+rel' of each rpm to compare on -- not something you can reliably do from the rpm filename alone. Thus I had to call 'rpm -qp' w/--queryformat on each rpm to get its actual N+V+R... Some of the dirs, like x86_64+noarch (added together), have over 23,000 entries *without* duplicates; pulling updates into the same pool, one can end up with >30K entries.

To speed things up I split the alpha-sorted list among some 'N' number of processors (N determined by experimentation, as it is also disk-bound, but the disks are RAID10 so they can handle some parallelism) -- so I needed a 'momma' process to spawn 'N' children. Children were combined by ~N**.5 collectors (having 9 children vying w/each other to talk to momma created too much contention), so results were collected and merged by intermediate procs -- but even merging 3 children required 2-way communication, which isn't easily done w/pipes... so the children ended up using shared memory to create files in memory, with the collectors reading from shared memory and using sockets to talk to momma. It would have been really painful to try to do that with pipes alone, since they only buffer ~8k/pipe -- there would have been a lot of overhead in process switching. Shell wasn't really up to manipulating shared memory, reliable signal delivery, or sockets, so it ended up in perl. I had the util use a "Recycling Bin" concept to "delete" old versions -- 1 bin/device -- so a delete really becomes a 'rename' (no file copy involved).

In another program involving parallelism (a wav->flac converter that worked on 1 album-directory at a time), I didn't know how long each step would take (vs. above, where I assumed roughly equal time/step and could assign #steps/child), so in the wav->flac case I just spawned off as many workers as I had CPUs, then used semaphores to manage the # of workers: as each conversion finished, it would release a semaphore and another conversion would start -- so I was able to keep all processors constantly busy (a shell version of that pattern is sketched below). It usually took (takes) < 1 minute to convert an album to flac using the highest compression settings. In that case a momma process just doles out 1 file at a time to a worker, and the workers use direct file I/O -- it ended up in perl as well, since it could convert to flac or mp3 based on args. An earlier version used shell, but blindly spawned off conversion processes for all of the tracks in a folder at the same time -- then waited for completion. That usually worked pretty well, though often with more thrashing when there were more tracks than CPUs.

Shell is good for many things... I wrote a first snapshot generator in shell... but it wasn't fun to maintain or extend, so it went to perl... still not that fun to extend, but at least it was more reliable. Shell doesn't do as good a job handling signals -- especially in bash 4.3, where user signal handlers stopped being asynchronous and are now only handled upon pressing a key in an input line (piss-poor design for use in automation: requiring user key-presses in order to handle async events like signals).
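(Re: the 'pair of pipes' aside above -- bash does ship one builtin for exactly that, coproc (bash >= 4.0). It only wires up a single child, so it wouldn't have helped with the many-children merging, but as a minimal sketch of 2-way pipe communication, using bc as a stand-in worker:)

    # coproc gives us a pipe in each direction to one child:
    # ${CALC[0]} reads from the child, ${CALC[1]} writes to it.
    coproc CALC { bc -l; }

    echo "2^16" >&"${CALC[1]}"      # request down one pipe...
    read -r answer <&"${CALC[0]}"   # ...reply back up the other
    echo "2^16 = $answer"

    wfd=${CALC[1]}
    exec {wfd}>&-                   # close our write end -> bc sees EOF
    wait "$CALC_PID"

It's still just one ~8k-buffered pipe each way to a single child, which is why the sockets+shared-memory route wins once several processes need to talk to each other.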
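(And the 'release a semaphore, start another conversion' scheduling can be approximated in plain bash with background jobs plus 'wait -n' (bash >= 4.3) standing in for the semaphore -- a rough sketch, with the flac flags and the *.wav layout only as assumptions:)

    #!/bin/bash
    # Keep one flac conversion per CPU in flight, starting a new one
    # the moment any worker finishes (the 'semaphore' here is wait -n).
    slots=$(nproc)
    running=0
    for f in *.wav; do
        flac -8 --silent "$f" &    # -8 = highest compression
        (( ++running ))
        if (( running >= slots )); then
            wait -n                # block until ANY background job exits
            (( running-- ))
        fi
    done
    wait                           # drain the stragglers

Same idea as the perl version: no fixed per-child assignment, so a new job starts as soon as any worker frees up and the CPUs stay busy.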