[OT SED AWK GREP] How to remove matching lines?
Howdy,

I have a file that is a list of hostnames from two different Apache servers. I'd like to get rid of all duplicates with sed or awk. For example, after combining the two lists and running sort on it, I get the following:

abc.com
asdf.net
asdf.net
qwerty.org
sdfg.net
sdfg.net

I want to get rid of all duplicates, such as "asdf.net" and "sdfg.net" in this example. Can this be done?

----------------------------------------------------
Jonathan Wilson
System Administrator
Cedar Creek Software http://www.cedarcreeksoftware.com
Central Texas IT http://www.centraltexasit.com
On Wed, 2 May 2001, Jonathan Wilson wrote:
I have a file that is a list of hostnames from two different Apache servers. I'd like to get rid of all duplicates with sed or awk. For example, after combining the two lists and running sort on it, I get the following: I want to get rid of all duplicates, such as "asdf.net" and "sdfg.net" in this example.
uniq filename

Or, if you are running sort on the file anyway, use:

sort -u filename

regards,
davej.
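A quick sketch of both commands against the sample list from the original message (the file name hosts.txt is an assumption):

```shell
# Recreate the sorted, combined list from the question (file name is hypothetical)
printf 'abc.com\nasdf.net\nasdf.net\nqwerty.org\nsdfg.net\nsdfg.net\n' > hosts.txt

# uniq collapses adjacent duplicate lines, so the input must already be sorted
uniq hosts.txt

# sort -u sorts and deduplicates in one step, so pre-sorting is unnecessary
sort -u hosts.txt
```

Both commands print the four unique hostnames; they only agree like this because the input is already sorted, which is why uniq alone is enough here.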
* Jonathan Wilson
Howdy,

I have a file that is a list of hostnames from two different Apache servers. I'd like to get rid of all duplicates with sed or awk. For example, after combining the two lists and running sort on it, I get the following:

abc.com
asdf.net
asdf.net
qwerty.org
sdfg.net
sdfg.net

I want to get rid of all duplicates, such as "asdf.net" and "sdfg.net" in this example.

Can this be done?
Of course -- it's Linux :-) I would use Perl, though. Just put every hostname in a hash table: for each line you go like this, $hostnames{$newhost} = 0. When done, the keys of the hash are your hostnames, with no duplicates. The following is untested, just to give an idea:

while (<>) {
    chomp($_);
    $hostnames{$_} = 0;   # using the line itself as a hash key discards duplicates
}
foreach $key (keys(%hostnames)) {
    print $key . "\n";
}

-- 
Mads Martin Joergensen, http://mmj.dk
"Why make things difficult, when it is possible to make them cryptic
and totally illogic, with just a little bit more effort."
                                             -- A. P. J.
Howdy,

I have a file that is a list of hostnames from two different Apache servers. I'd like to get rid of all duplicates with sed or awk. For example, after combining the two lists and running sort on it, I get the following:

abc.com
asdf.net
asdf.net
qwerty.org
sdfg.net
sdfg.net

I want to get rid of all duplicates, such as "asdf.net" and "sdfg.net" in this example.
Actually, it's a little simpler than that: do a unique sort.

sort -u thisfile > thatfile.out

Or save a step:

cat list1 list2 | sort -u > outfile

You could read it as: concatenate list1 and list2, do a unique sort on the result, and store it in 'outfile'.
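Since the subject line asks about awk specifically, it is worth noting a common awk idiom that removes duplicates without sorting at all, keeping the first occurrence of each line in its original order (a sketch; the file name list.txt is an assumption):

```shell
# Hypothetical combined host list (file name is an assumption)
printf 'abc.com\nasdf.net\nasdf.net\nqwerty.org\nsdfg.net\nsdfg.net\n' > list.txt

# seen[$0]++ counts occurrences of each whole line; the pattern is true
# (and the line is printed) only the first time, when the count is still 0
awk '!seen[$0]++' list.txt
```

Unlike uniq, this works even when duplicates are not adjacent, at the cost of holding every distinct line in memory.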
-----Original Message-----
From: Mads Martin Jorgensen [mailto:mmj@suse.com]
Sent: Wednesday, May 02, 2001 10:35 AM
To: Jonathan Wilson
Cc: suse-linux-e@suse.com
Subject: Re: [SLE] [OT SED AWK GREP] How to remove matching lines?
* Jonathan Wilson
can this be done?
Of course -- it's Linux :-) I would use Perl, though. Just put every hostname in a hash table: for each line you go like this, $hostnames{$newhost} = 0. When done, the keys of the hash are your hostnames, with no duplicates. The following is untested, just to give an idea:

while (<>) {
    chomp($_);
    $hostnames{$_} = 0;
}
foreach $key (keys(%hostnames)) {
    print $key . "\n";
}

-- 
Mads Martin Joergensen, http://mmj.dk
"Why make things difficult, when it is possible to make them cryptic
and totally illogic, with just a little bit more effort."
                                             -- A. P. J.

-- 
To unsubscribe send e-mail to suse-linux-e-unsubscribe@suse.com
For additional commands send e-mail to suse-linux-e-help@suse.com
Also check the FAQ at http://www.suse.com/support/faq and the
archives at http://lists.suse.com
participants (5)
- Adilson Guilherme Vasconcelos Ribeiro
- Dave Jones
- jennifer moter
- Mads Martin Jørgensen
- wilson@claborn.net