Hi,

Can anyone please explain the following behaviour? Consider:

#include <stdio.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/select.h>
#include <sys/time.h>
#include <netinet/in.h>

int main( int argc, char **argv )
{
    int s;
    fd_set dread;
    fd_set dwrite;
    struct timeval to;

    to.tv_sec  = 3;
    to.tv_usec = 0;

    s = socket( AF_INET, SOCK_STREAM, 0 );

    FD_ZERO(&dread);
    FD_ZERO(&dwrite);
    FD_SET(s, &dread);

    if (select(s + 1, &dread, &dwrite, (fd_set *) 0, &to) < 0) {
        fprintf(stderr, "Error in select(): errno=%d\n", errno);
        return errno;
    }

    return 0;
}

On a (SuSE i386 Linux) glibc-2.1.3-27 machine this blocks for the full 3-second timeout, select() returns 0, and the 'dread' fd_set is cleared.

On a (SuSE i386 Linux) glibc-2.2.2-67 machine this returns immediately, select() returns 1, and the 'dread' fd_set is left untouched (i.e. 's' is still marked readable).

The program above is a toy that reproduces my problem. Calling select() immediately on an unconnected socket 's' may not be a meaningful thing to do, but I'd like to know what the correct semantics are and why they have changed.

Cheers

Andy

*********************************************************************
* Andrew Cheadle                  email:  a.cheadle@doc.ic.ac.uk    *
* Department of Computing         http://www.doc.ic.ac.uk/~amc4/    *
* Imperial College                                                  *
* University of London                                              *
*********************************************************************