I'm seeing a fairly reproducible problem in which the same file descriptor is unintentionally used concurrently by two threads.
I understand that open file descriptors are shared by all threads, but somehow it looks like calling socket() in one thread can return a file descriptor that's already in use by another thread.
Here's an example (a little complicated, but bear with me):
This is just stderr debugging output from my app, which is running 3 threads at this time. The output is in chronological order. The thread number is shown to the left; my comments are prefixed with >>> :
 using fd 16 for /tmp/ibwd.1
 state-transition: 2->3
 writing to left ( 0 bytes left): 250 Ok: queued as 29C9E4637
 state 3; left event LEFT_RSET: [6 bytes] RSET
 all done <message-id>: 2995 bytes
 writing to right ( 0 bytes left): RSET
 state-transition: 3->1
 state 1; right event RIGHT_354: [37 bytes] 354
 using fd 14 for /tmp/ibwd.1
>>> above, thread#1 has opened a file and got fd 14 for it.
 state-transition: 1->2
 crm114[25958, 9 categories] stage done, 0.509 seconds.
 connected to juggernautd(14) at localhost:787
>>> above, thread#2 opens a TCP connection to another app and ALSO uses fd 14.
 state 2; left event LEFT_EOM: [3 bytes] .
 write_string(): unable to write to 14; error 9: Bad file descriptor
>>> above, thread#1 chokes when trying to write to fd 14.
Each thread is independent of the others, except during allocation of work. Are there any special precautions I need to take when doing IO in a multi-threaded environment?
/Per Jessen, Zürich