John Andersen said the following on 01/03/2014 03:21 PM:
Even a "stale" cache is probably good enough in the case of monster big sites using round-robin dns schemes and very short TTLs. Those sites are still likely to have all those servers running.
Indeed. Look back and you'll see that the set of sites I found using dig and the set of sites James Knott found were slightly different, just a few minutes apart, even though he's only a few miles down the road from me. That's round robin for you :-(

But I could equally well ignore the TTL on the records I found and use those same records half an hour, or half a day, later. The site will still be there. From my POV it doesn't matter. From the POV of Yahoo, Google or Amazon it DOES matter: they need to balance the load not only across their servers but also across the incoming pipes.
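If anyone wants to watch that happening, here's a quick sketch (assuming the third-party dnspython 2.x module is installed; the hostname is only an example) that resolves the same name a few times and prints the A records and their TTL, so both the rotation and the short TTL are easy to see:

    # Illustrate round-robin DNS: resolve the same name repeatedly and
    # watch the order (and sometimes the set) of A records change while
    # the TTL stays short. Requires the third-party dnspython package.
    import time
    import dns.resolver

    NAME = "www.yahoo.com"  # example only; any big round-robin site will do

    for i in range(3):
        answer = dns.resolver.resolve(NAME, "A")
        addresses = [rdata.address for rdata in answer]
        print(f"query {i + 1}: TTL={answer.rrset.ttl}s  {addresses}")
        time.sleep(1)  # pause so the resolver has a chance to rotate

Running plain dig a few times in a row shows the same thing, of course; the sketch just makes the rotation easy to eyeball in one go.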
In a sense such sites pervert the whole concept of DNS by using it to load balance by round-robin.
In just the same way, RFC1918 addresses pervert the whole concept of peer-to-peer addressing that is the basis of the Internet. But sometimes accommodations are needed to get acceptable performance.