I'm afraid everyone in this thread expects something that *buffered I/O* cannot provide. Note everyone :-) Opening a file using "fopen" creates a FILE structure with a buffer attached to it - unless the file to be opened turns out to be an interactive device. If you really want to use the high-level (3) routines (fopen, fprintf, fread, fwrite, ...) _and_ want to ensure that no buffering inside the library gets in your way, you need to, at least, call "setbuf (fp, NULL)" to ensure *no* stdio buffering will happen. BUT, that will not change anything wrt/ HOW the underlying kernel deals with (non-synchronous) writes going to the file - which is what you've seen. IF you want to change this, you need to make sure your application uses a proper protocol for synchronous I/O; did you try something like
fd = open ("path", O_CREAT | O_TRUNC | O_RDWR | O_DIRECT, 0644); fp = fdopen (fd, "w+");
(proper checking of return codes assumed!)?
Still, according to the "open(2)" manual page:
O_DIRECT Try to minimize cache effects of the I/O to and from this file. In general this will degrade performance, but it is useful in special situations, such as when applications do their own caching. File I/O is done directly to/from user space buffers. The I/O is synchronous, i.e., at the completion of a read(2) or write(2), data is guaranteed to have been transferred. Under Linux 2.4, transfer sizes, and the alignment of user buffer and file offset must all be multiples of the logical block size of the file system. Under Linux 2.6 alignment to 512-byte boundaries suffices. A semantically similar interface for block devices is described in raw(8).
there is absolutely NO guarantee that any data smaller than 512 bytes will appear in the file in the order you intended. If you really need that, there is no other way than to "do your own caching".
Getting back to the original poster's question, "fprintf" is using "write" at _some_ point in time; exactly when it does depends on things like the size of the associated buffer...

On Friday 31 March 2006 7:37 am, Manfred Hollstein wrote the above, and I agree pretty much with his statement. The OP was opening a file for writing, then performing a fork where both the parent and child were writing to the file simultaneously. I took the
program that the OP submitted and ran it on SuSE 10 (32-bit) and on
RHEL 4 (64-bit IA64). Both had the same successful result, without the
overwriting that the OP experienced. (In both cases, a 2.6 kernel.)
The issues that come into play are that the buffers for stream I/O (e.g. FILE)
are user-space buffers that are flushed based on a number of rules. The
flush might occur in the fprintf itself, or when the user calls fflush(3), or when
the user closes the file. But there are also kernel buffers, and some
data can remain in those kernel buffers for a short time before it is
physically written. This is a function of the driver and of when the sync(2)
system call commits the buffers.
--
Jerry Feldman