Mailinglist Archive: opensuse (3637 mails)

RE: [SLE] [OT SED AWK GREP] How to remove matching lines?
  • From: "jennifer moter" <jmoter@xxxxxxxxxxxxxxxx>
  • Date: Wed, 2 May 2001 14:08:38 -0700
  • Message-id: <FMEDJHFGKCPKANGDENIOCEELCEAA.jmoter@xxxxxxxxxxxxxxxx>
Actually, it's a little simpler than that:
do a unique sort.

sort -u thisfile > thatfile.out

or save a step:

cat list1 list2 | sort -u > outfile

You could read it as:
concatenate list1 and list2, do a unique sort on the result, and store it in
outfile.
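Since the subject line asks about awk: an awk one-liner can also drop
duplicates, and unlike sort -u it needs no sort and keeps the first
occurrence of each line in its original order (a sketch, using the same
placeholder file names as above):

```shell
# seen[$0]++ evaluates to 0 (false) the first time a line appears and to a
# nonzero value afterwards, so !seen[$0]++ is true exactly once per distinct
# line -- and awk's default action for a true pattern is to print the line.
awk '!seen[$0]++' list1 list2 > outfile
```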

-----Original Message-----
From: Mads Martin Jorgensen [mailto:mmj@xxxxxxxx]
Sent: Wednesday, May 02, 2001 10:35 AM
To: Jonathan Wilson
Cc: suse-linux-e@xxxxxxxx
Subject: Re: [SLE] [OT SED AWK GREP] How to remove matching lines?

* Jonathan Wilson <wilson@xxxxxxxxxxx> [May 02. 2001 10:25]:
> Howdy,
> I have a file that's a list of hostnames from 2 different Apache servers.
> I'd like to get rid of all duplicates with sed or awk. For example, after
> combining the two lists and running sort on it I get the following:
> I want to get rid of all duplicates, such as "" and "" in
> this example.
> Can this be done?

Of course -- it's Linux :-) I would use Perl, though. Just put every
hostname in a hash table: for each hostname you read, go like this:
$hostnames{$newhost} = 0. When done, the keys of the hash table are
your hostnames, with no duplicates. The following is untested, just to
give an idea:

while (<>) {
    chomp;                  # strip the trailing newline
    $hostnames{$_} = 0;     # hash keys are unique, so duplicates collapse
}

foreach $key (keys %hostnames) {
    print $key . "\n";
}
Mads Martin Joergensen,
"Why make things difficult, when it is possible to make them cryptic and
totally illogic, with just a little bit more effort."
-- A. P. J.

To unsubscribe send e-mail to suse-linux-e-unsubscribe@xxxxxxxx
For additional commands send e-mail to suse-linux-e-help@xxxxxxxx
Also check the FAQ at and the
archives at
