On 03/12/2016 02:23 AM, jdd wrote:
> On 12/03/2016 01:15, Felix Miata wrote:
> According to what others have said as well, the journal is not the cause of your problem.
I agree. It's a metadata mechanism. It *might* impact performance very slightly. But that isn't your problem, is it?
> So you should first define what the problem really is. I understand you can't record everything you want, but is this a single recording, or the sum of a number of recordings?
There was the suggestion that the application was somehow using temp files, caching, or otherwise dynamically consuming space. I don't know about "common", but this is an easy trick with UNIX: open a file, then delete it, or rather delete its directory entry. So long as the application has an open handle on it, the OS won't actually free the space used. In effect the application has a self-deleting temporary file; a minimal sketch of the trick is at the end of this message. Better than what many others seem to do, leaving 'litter' in /tmp, /var/tmp and /usr/tmp! So while the application is running it is 'consuming' the free space, and the named file you actually want written can run out of room.

I note you have plenty of free inodes, but large files also need indirect blocks, and those count as structural metadata. I don't know how ext4 deals with such super-large files in terms of indirection blocks. I raise this because I read this:

<quote src="http://kernelnewbies.org/Ext4#head-7c5fd53118e8b888345b95cc11756346be4268f4">
2.4. Extents

Traditional Unix-derived filesystems like Ext3 use an indirect block mapping scheme to keep track of each block used for the blocks corresponding to the data of a file. This is inefficient for large files, especially on large file delete and truncate operations, because the mapping keeps an entry for every single block, and big files have many blocks -> huge mappings, slow to handle.

Modern filesystems use a different approach called "extents". An extent is basically a bunch of contiguous physical blocks. It basically says "The data is in the next n blocks". For example, a 100 MB file can be allocated into a single extent of that size, instead of needing to create the indirect mapping for 25600 blocks (4 KB per block). Huge files are split into several extents.

Extents improve the performance and also help to reduce the fragmentation, since an extent encourages continuous layouts on the disk.
</quote>

Perhaps the problem is that the disk is 'fragmented', so that a single extent of adequate size, even if not for the whole file, cannot be created.

I do note the existence of an application called "E4rat" which 'rationalizes', aka optimizes, the layout of files on an ext4 FS. I don't know how much extra space it requires; I recall seeing the disk compressor for MS-Windows working very slowly when it didn't have much free space :-( I don't know if layout is an issue.

https://wiki.archlinux.org/index.php/E4rat

Have you looked into preallocation? This is supposed to be a beneficial feature of ext4.
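To make the preallocation idea concrete, here is a minimal Python sketch; the path and size are invented for illustration. On Linux, posix_fallocate() asks the filesystem to reserve all the blocks up front, so a recorder either gets its space before it starts writing or fails cleanly with ENOSPC, rather than part-way through a recording:

import os

# Hypothetical recording file and size, purely for illustration.
path = "/video/recording.ts"
size = 2 * 1024**3              # ask for 2 GiB up front

fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
try:
    # Reserve the blocks now; if the filesystem can't provide them,
    # this raises OSError (ENOSPC) before any data is written.
    os.posix_fallocate(fd, 0, size)
finally:
    os.close(fd)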
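And, going back to the self-deleting temporary file trick I mentioned at the top, here is the same sort of sketch; the filename is again made up:

import os

# Create a scratch file, then remove its directory entry.
f = open("/tmp/scratch.dat", "w+b")
os.unlink("/tmp/scratch.dat")   # the name is gone; 'ls' won't show it

# The kernel keeps the blocks allocated while the handle stays open,
# so this write consumes free space that 'df' sees shrinking even
# though no named file is growing anywhere.
f.write(b"\0" * (64 * 1024 * 1024))
f.flush()

f.close()                       # only now is the space really freed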