* Jonathan Wilson
Howdy,
I have a file that's a list of hostnames from two different Apache servers. I'd like to get rid of all duplicates with sed or awk. For example, after combining the two lists and running sort on the result, I get the following:
abc.com
asdf.net
asdf.net
qwerty.org
sdfg.net
sdfg.net
I want to get rid of all duplicates, such as "asdf.net" and "sdfg.net" in this example.
Can this be done?
Of course -- it's Linux :-)  I would use Perl, though. Just put every hostname into a hash: for each line you go like this, $hostnames{$newhost} = 0. Hash keys are unique, so when you're done, the keys of the hash are your hostnames with no duplicates. The following is untested, just to give an idea:

    while (<>) {
        chomp;
        $hostnames{$_} = 0;
    }
    foreach $key (keys %hostnames) {
        print $key . "\n";
    }

-- 
Mads Martin Joergensen, http://mmj.dk
"Why make things difficult, when it is possible to make them cryptic
and totally illogic, with just a little bit more effort."
                                                         -- A. P. J.
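For what it's worth, the standard tools the original poster asked about handle this too. A minimal sketch (the /tmp filename is just a placeholder for illustration):

    # Build a sample file like the one in the question.
    printf 'abc.com\nasdf.net\nasdf.net\nqwerty.org\nsdfg.net\nsdfg.net\n' > /tmp/hosts.txt

    # sort -u sorts and drops duplicate lines in one pass.
    sort -u /tmp/hosts.txt

    # awk alternative: print each line only the first time it is seen,
    # without requiring the input to be sorted.
    awk '!seen[$0]++' /tmp/hosts.txt

Both print each hostname exactly once; the awk version also preserves the original order of first appearance, which sort obviously does not.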