[opensuse] filesystem freespace
On EXT3 and EXT4, is journal space allocation a component of any of the df output? IOW, if space consumed is 98% and the journal is the default size, can or does the journal prevent consumption of that last 2%?

Reason for asking: I have a STB that according to blkid is using EXT3, a partition using the entirety of the "2TB" HD, for recordings. Even though there is far more freespace on the filesystem, 25991708 1K blocks (or more, subject to deleting test recording attempts), it's truncating recordings to about 13000K or 14000K, within a minute of beginning any recording. I'm trying to figure out how much space ultimately is or will be available before space is truly exhausted.

Can an EXT3 filesystem be "converted" to EXT2, IOW, dispense with the journal? How about changing the size of the journal to line up closely to the size of a typical recording file (somewhere in the 2G to 12G range).

Some datapoints:

# blkid /dev/sda1
/dev/sda1: LABEL="wd20azbme2t" UUID="f6c1d5e1-9390-44a0-b751-af13f7bef403" TYPE="ext3"

# mount | grep sda1
/dev/sda1 on /media/hdd type ext4 (rw,relatime,data=ordered)

# uname -a
Linux azboxme 3.9.2-opensat #1 PREEMPT Wed May 20 12:25:28 CEST 2015 mips GNU/Linux

# grep sda1 /etc/fstab
#

# tune2fs -l /dev/sda1
tune2fs 1.42.9 (28-Dec-2013)
Filesystem volume name:   wd20azbme2t
Last mounted on:          /media/hdd
Filesystem UUID:          f6c1d5e1-9390-44a0-b751-af13f7bef403
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Filesystem flags:         signed_directory_hash
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              122101760
Block count:              488378134
Reserved block count:     0
Free blocks:              6501398
Free inodes:              122100290
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      907
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   256
RAID stride:              1
RAID stripe width:        1
Filesystem created:       Sat Aug 11 02:17:02 2012
Last mount time:          Fri Mar 11 14:49:48 2016
Last write time:          Fri Mar 11 14:49:48 2016
Mount count:              127
Maximum mount count:      -1
Last checked:             Thu Sep 3 04:42:25 2015
Check interval:           0 (<none>)
Lifetime writes:          882 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               128
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      50196500-be68-443f-b91b-a748a5ab3497
Journal backup:           inode blocks

-- Felix Miata *** http://fm.no-ip.com/
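A hedged aside on the original question: the ext3/ext4 journal lives in a reserved inode (inode 8) and is allocated once, at mkfs time, so its blocks are already counted as used and it does not grow into the remaining free space. Assuming the usual e2fsprogs tools are present (they may not all be on a stripped-down STB image), the journal size and the root-reserved block count can be checked roughly like this; the grep patterns are only illustrative:

# dumpe2fs -h /dev/sda1 | grep -i journal
# debugfs -R "stat <8>" /dev/sda1 | grep -i size
# tune2fs -l /dev/sda1 | grep -i reserved

Recent dumpe2fs versions print a "Journal size:" line; on older ones, the Size that debugfs reports for inode 8 is the journal size. The reserved block count is what df silently subtracts from "Available" (it is already 0 here, per the tune2fs output above).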
On 2016-03-12 01:15, Felix Miata wrote:
Can an EXT3 filesystem be "converted" to EXT2, IOW, dispense with the journal?
Yes.

mke2fs -t ext4 -O ^has_journal /dev/device

at format time. I use that on USB sticks. Method as commented on
http://www.sysresccd.org/Sysresccd-manual-en_How_to_install_SystemRescueCd_o...

But I doubt the journal is the reason of your problem.

-- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
Carlos E. R. composed on 2016-03-12 01:58 (UTC+0100):
Felix Miata wrote:
Can an EXT3 filesystem be "converted" to EXT2, IOW, dispense with the journal?
Yes.
mke2fs -t ext4 -O ^has_journal /dev/device
That looks destructive. I meant what I wrote, *converted*, dispensing with the journal from what without a journal would be an EXT2 filesystem. I have to maintain backward compat here. The STBs here do not all have kernels that know EXT4.
at format time. I use that on USB sticks. Method as commented on http://www.sysresccd.org/Sysresccd-manual-en_How_to_install_SystemRescueCd_o...
But I doubt the journal is the reason of your problem.
(I wouldn't be having this space problem if I was able to get rid of the commercials from nearly 2TB of accumulated h264 .ts files. :-( )

-- Felix Miata *** http://fm.no-ip.com/
On 12.03.2016 04:22, Felix Miata wrote:
Carlos E. R. composed on 2016-03-12 01:58 (UTC+0100):
Felix Miata wrote:
Can an EXT3 filesystem be "converted" to EXT2, IOW, dispense with the journal?
You can disable the journal; you cannot make ext2 out of ext4.
Yes.
mke2fs -t ext4 -O ^has_journal /dev/device
That looks destructive. I meant what I wrote, *converted*, dispensing with the journal
tune2fs -O ^has_journal /dev/device
from what without a journal would be an EXT2 filesystem.
No, it would not.
I have to maintain backward compat here. The STBs here do not all have kernels that know EXT4.
ext4 is a bit more than ext2 + journal.
at format time. I use that on USB sticks. Method as commented on http://www.sysresccd.org/Sysresccd-manual-en_How_to_install_SystemRescueCd_o...
But I doubt the journal is the reason of your problem.
(I wouldn't be having this space problem if I was able to get rid of the commercials from nearly 2TB of accumulated h264 .ts files. :-( )
On 2016-03-12 02:22, Felix Miata wrote:
(I wouldn't be having this space problem if I was able to get rid of the commercials from nearly 2TB of accumulated h264 .ts files. :-( )
If they are MPEG transport streams, try ProjectX to remove them. If it works, it is fast, as it does not recode.

-- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
Carlos E. R. wrote:
On 2016-03-12 01:15, Felix Miata wrote:
Can an EXT3 filesystem be "converted" to EXT2, IOW, dispense with the journal?
Yes.
mke2fs -t ext4 -O ^has_journal /dev/device
at format time.
Actually, if you're formatting, surely "mkfs -t ext2" would be easier :-), but Felix did ask about _converting_ though. I googled "convert ext3 to ext2" and stumbled over this

tune2fs -O ^has_journal <ext3-device>.

-- Per Jessen, Zürich (6.0°C)
http://www.hostsuisse.com/ - dedicated server rental in Switzerland.
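For the record, a sketch of the conversion sequence being discussed, using the device from Felix's post. The filesystem must be unmounted first, and since his tune2fs listing shows needs_recovery, tune2fs will insist on an fsck to replay the journal before the feature can be cleared:

# umount /media/hdd
# e2fsck -f /dev/sda1                 # replay/clear the journal; required while needs_recovery is set
# tune2fs -O ^has_journal /dev/sda1
# e2fsck -f /dev/sda1                 # sanity check after removing the feature

After that, the filesystem should be mountable as plain ext2 (the well-known ext3-minus-journal trick); the remaining feature flags in Felix's listing are ones the ext2 driver understands.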
On 2016-03-12 11:57, Per Jessen wrote:
Carlos E. R. wrote:
Actually, if you're formatting, surely "mkfs -t ext2" would be easier :-), but Felix did ask about _converting_ though.
The article I pointed to says that ext4 has extents, so ext4 without journal has advantages over ext2: «You could also use ext2 but it does not support extents, and then it requires more accesses to read/write large files to the disk..»
I googled "convert ext3 to ext2" and stumbled over this
tune2fs -O ^has_journal <ext3-device>.
-- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
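On the extents point: a quick, hedged way to see how a given file is actually laid out is filefrag from e2fsprogs, which reports how many extents (or, on a non-extent ext3 filesystem, contiguous block runs) a file occupies. The path and filename here are just placeholders:

# filefrag /media/hdd/movie/recording.ts

A badly fragmented multi-gigabyte recording would show thousands of extents; a well-laid-out one only a handful per gigabyte.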
Felix Miata wrote:
On EXT3 and EXT4, is journal space allocation a component of any of the df output? IOW, if space consumed is 98% and the journal is the default size, can or does the journal prevent consumption of that last 2%?
Reason for asking: I have a STB that according to blkid is using EXT3, a partition using the entirety of the "2TB" HD, for recordings. Even though there is far more freespace on the filesystem, 25991708 1K blocks (or more, subject to deleting test recording attempts), it's truncating recordings to about 13000K or 14000K, within a minute of beginning any recording. I'm trying to figure out how much space ultimately is or will be available before space is truly exhausted.
Can an EXT3 filesystem be "converted" to EXT2, IOW, dispense with the journal? How about changing the size of the journal to line up closely to the size of a typical recording file (somewhere in the 2G to 12G range).
Usually the journal is of a "fixed" size that doesn't change as you write files out. The journal is used to hold vital but temporary information before the actual file has been written to disk and "sync'ed" (often, closed). That way, if the disk crashes before your data is written to disk and the OS structures have recorded that fact, the journal can zero out any blocks that are allocated to you but never had that recorded in the journal (so you won't accidentally get someone else's data in your file).

Your numbers don't add up. You are saying the recordings are truncating @ ~13M, but the disk has about 25383M (or 24G) free. I.e. out of 2048G, you have 24G free when you start -- that would be slightly over 1% of free space. Most OSes won't allow you to allocate more than 90-95% -- you have less than 1.2% free, which is *horrible*. For my busy disks, I try to always keep freespace >20-25%, but even on disks infrequently written to, I try not to go over 80-85% usage.

Second bit about your numbers. Even though you have 24G free, you are saying your files stop recording at a tiny fraction of that. If those numbers are real -- what rate are you recording?

The problem is that when you make the disks too full, it takes the OS longer and longer to find free space, which will usually be spread out all over the disk. Your application may even be at fault, maybe writing the recordings to a temporary file before writing them to the final destination. If your recording is consuming, say, 5MB/s, the OS might not be able to keep up and end up returning "no space" before you've actually run out of space -- or it may be your application is unable to buffer enough before the OS returns from a write call -- and your application aborts the recording when it realizes it "lost" part of the recording.

OTOH, if you are recording slow stuff, like 8kb/s (<1KB/s; Note: b=bits, B=Bytes), it may be writing the file to temp-space first to reduce destination fragmentation. Windows Explorer has done this since, at least, XP for file downloads -- first downloading to tmp space, then copying completed downloads to the final destination.

Glancing at your datapoints, it looks like your disk is "too full" to perform well enough to record. I'd suggest recording to a disk that has >25% free space, and then, if you want, copying it to the over-filled disk -- that might work. But better would be never to use over 90% (<75-80% would be better if you care about performance) of your disk space in a partition. When a normal 100+MB/s I/O disk gets to 99% full, its write speed may really be down to 5MB/s or less.

good luck -l
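One hedged way to test the point about write speed collapsing on a nearly full disk is to time a large sequential write on the STB itself. The path and file name below are placeholders (the mount point is the one shown on the STB), it needs a few GB free, and the test file should be removed afterwards:

# dd if=/dev/zero of=/media/hdd/ddtest.bin bs=1M count=2048 conv=fdatasync
# rm /media/hdd/ddtest.bin

GNU dd prints the achieved rate when it finishes; a BusyBox dd, which is what many STB images ship, may not print it and may not know conv=fdatasync, in which case prefixing the command with time (and using conv=fsync or a plain sync afterwards) gives a rough equivalent. If the result is well above the recording bitrate, raw throughput is probably not what is truncating the recordings.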
Linda Walsh composed on 2016-03-11 18:40 (UTC-0800): Thanks much for the reply Linda! :-)
Your numbers don't add up. You are saying the recordings are truncating @ ~13M, but the disk has about 25383M (or 24G) free. I.e. out of 2048G, you have 24G free when you start -- that would be slightly over 1% of free space. Most OSes won't allow you to allocate more than 90-95% -- you have less than 1.2% free, which is *horrible*. For my busy disks, I try to always keep freespace >20-25%, but even on disks infrequently written to, I try not to go over 80-85% usage.

Second bit about your numbers. Even though you have 24G free, you are saying your files stop recording at a tiny fraction of that.
The current situation is complicated by unforeseen consequences tangling with foreseeable consequences and snowballing. I would have plenty of freespace if I could ever find a video editor that works to free up the 31% of space wasted by commercials. Gobs of time have been wasted trying to work around the consequences.

These STBs do not support the larger filesystem sizes that >2TB disks and GPT offer. And they don't come with a full complement of the tools Linux users are used to having at their disposal. Time keeps disappearing at obscene rates trying to cope.

I know we don't want filesystems with nearly zero freespace, but here there is just no way around that in the foreseeable future. Adding space means piling complications on top of complications WRT not only the STB's storage, but also the backup/restore situation. These STBs make backup/restore a horribly tedious and lengthy process, not to mention keeping track of what is to be found where.

I need to know how much of the space I see free I can actually use. 98%-99% was temporary, a result of trying to figure out the hard way how much could be used: making recordings until the machines refused to make any more. But stopping while free space many times the size of a typical recording is still available smells like some other problem is involved. My current thinking is that the multiplicity of test recordings named *instant record.ts is proving to be yet another complication and shortcoming of the STB software.

If numbers don't add up, it's on account of mixups between different tools, nada vs. K vs. M vs. G vs. T, cmdline vs. mc, etc. Maybe this will help:

# df -h /dev/sda1
Filesystem      Size  Used  Available  Use%  Mounted on
/dev/sda1       1.8T  1.7T  75.4G      96%   /media/wd20azbme2t

# ls -lrth *.ts | tail -n7
2.4G  Dec 24 17:29  bigBang0711-201512241700GDMX10wd.ts
2.4G  Jan 29 18:29  bigBang0516-201601291800GDMX10we.ts
3.0G  Feb  7 17:59  bigBang0510-201602071730GDMX10wd.ts
3.1G  Feb 14 17:59  bigBang0516-201602141730GDMX10wd.ts
2.7G  Mar  4 17:59  bigBang0705-201603041730GDMX10we.ts
1.0G  Mar 11 22:00  20160311 2145 - GDMX09EV - instant record.ts
1.9G  Mar 11 22:29  20160311 2200 - GDMX09EV - instant record.ts
If those numbers are real -- what rate are you recording?
Modest for HD source, usually less than 5G/hour.
...Your application may even be at fault, maybe writing the recordings to a temporary file before writing them to the final destination...
I've been unable to find any evidence of use of temporary files WRT recording.
consuming, say 5MB/s, the OS might not be able to keep up and end up returning "no space" before you've actually run out of space -- or it may be your application is unable to buffer enough space before the OS returns from a write call -- and your application aborts the recording when it realizes it "lost" part of the recording.
Many attempts to record, according to filenames and timestamps, have aborted within a minute of start. However, I believe what is happening is that the #####K files are all the result of the STB first creating a many times larger recording, then cutting off a huge tail. UI behavior is different for recordings that do not get truncated vs. those that do. Hitting the stop button on whole recordings returns the UI to ready state instantly. With those truncated, the UI displays a busy indicator for a considerable period instead.

AFAICT, there is no "user" in the system. Everything WRT recording, and elsewhere, seems to be owned by root.root. /dev/sda1 is internal, data use only (recordings, settings backups, and my own backups of various configs).

The last 3 recordings, all made after deleting the multiplicity of test recordings, and bringing freespace back to 96%, have completed normally, so I seem to be back to trying to figure out exactly how much space can actually be used. Many times on Linux PCs I've found it possible to get df to show 0% free on / without the system locking up, at least, before systemd and journald usurped predictable, tried and true.

-- Felix Miata *** http://fm.no-ip.com/
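Given that recordings abort while tens of gigabytes still show as free, it might be worth checking whether that free space is badly fragmented. e2fsprogs ships e2freefrag, which prints a histogram of free-extent sizes; it may not be on the STB image, but the disk could be checked from a PC (the device name is the one seen on the STB and may differ when the disk is attached elsewhere):

# e2freefrag /dev/sda1

If the largest free extents are only a few megabytes, the allocator has to scatter every new multi-gigabyte recording across many small holes, which is exactly the slow-when-nearly-full behaviour Linda described.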
On 12/03/2016 04:54, Felix Miata wrote:
freespace if I could ever find a video editor that works to free up the 31% of space wasted by commercials.
Carlos has a thread on the subject right now, but removing commercials only frees a small amount of space
the STB's storage
Which STB do you mean? Any reference? There are so many different ones...
# df -h /dev/sda1
Filesystem      Size  Used  Available  Use%  Mounted on
/dev/sda1       1.8T  1.7T  75.4G      96%   /media/wd20azbme2t

# ls -lrth *.ts | tail -n7
2.4G  Dec 24 17:29  bigBang0711-201512241700GDMX10wd.ts
2.4G  Jan 29 18:29  bigBang0516-201601291800GDMX10we.ts
3.0G  Feb  7 17:59  bigBang0510-201602071730GDMX10wd.ts
(...)
Well, that answers one of my questions: several recordings. So you have to move some elsewhere.

jdd
jdd composed on 2016-03-12 08:33 (UTC+0100):
Felix Miata composed:
freespace if I could ever find a video editor that works to free up the 31% of space wasted by commercials.
Carlos has a thread on the subject right now, but removing commercials only frees a small amount of space
My mistake. Not 31%, but 29%. Here in the USA, program time within a "60 minute" program averages about 42 minutes. That 18 minutes is no small amount of filesystem space wasted.
the STB's storage
Which STB do you mean? Any reference? There are so many different ones...
e.g. those that use Enigma2 for DVB on Linux. I have a bunch of different STBs. The one at immediate issue was manufactured by a now defunct manufacturer, so those of us using them are stuck with whatever the FOSS community can manage without any benefit of the manufacturer's build info, with 3.9.2 or older kernels, and drivers that never had their more nefarious bugs removed. Other hardware isn't an option due to absence of crucial hardware functionality.
# df -h /dev/sda1
Filesystem      Size  Used  Available  Use%  Mounted on
/dev/sda1       1.8T  1.7T  75.4G      96%   /media/wd20azbme2t

# ls -lrth *.ts | tail -n7
2.4G  Dec 24 17:29  bigBang0711-201512241700GDMX10wd.ts
2.4G  Jan 29 18:29  bigBang0516-201601291800GDMX10we.ts
3.0G  Feb  7 17:59  bigBang0510-201602071730GDMX10wd.ts
(...)
Well, that answers one of my questions: several recordings. So you have to move some elsewhere.
That's exactly what I don't want, complicating the already too complicated with artificial categorization and backups to track, when what I really want requires less space, not more, and I'm not talking exclusively of filesystem space. Physical logistics are a big problem here.

3.0G is only about 4% of 75.4G, so a whole bunch more 3G or smaller files ought to fit before space truly runs out or a commercial removal solution is found to resolve the space problem.

-- Felix Miata *** http://fm.no-ip.com/
Le 12/03/2016 09:09, Felix Miata a écrit :
2.4G Dec 24 17:29 bigBang0711-201512241700GDMX10wd.ts
That's exactly what I don't want, complicating the already too complicated with artificial categorization and backups to track
name seems to be self explanatory
what I really want requires less space, not more, and I'm not talking exclusively of filesystem space. Physical logistics are a big problem here.
3.0G is only about 4% of 75.4G, so a whole bunch more 3G or smaller files ought to fit before space truly runs out or a commercial removal solution is found to resolve the space problem.
At 130 euros for a 4.5 TB hard drive, I certainly won't take time compressing files... For me, removing commercials is necessary when one wants to watch a file often, because they're boring, not to save size.

In France we have what is called (I translate) "backup TV", which means one can watch a TV show for a week after it was on air through any internet box. So I hardly record anymore, but having a daughter working in LA, I know the USA is often far behind other countries in this respect.

jdd
jdd composed on 2016-03-12 09:59 (UTC+0100):
Felix Miata composed:
2.4G Dec 24 17:29 bigBang0711-201512241700GDMX10wd.ts
That's exactly what I don't want, complicating the already too complicated with artificial categorization and backups to track
name seems to be self explanatory
I have no idea what you're trying to communicate. Filenames have nothing to do with issues here. Every character in that filename has meaning.
what I really want requires less space, not more, and I'm not talking exclusively of filesystem space. Physical logistics are a big problem here.
3.0G is only about 4% of 75.4G, so a whole bunch more 3G or smaller files ought to fit before space truly runs out or a commercial removal solution is found to resolve the space problem.
At 130 euros for a 4.5 TB hard drive, I certainly won't take time compressing files... For me, removing commercials is necessary when one wants to watch a file often, because they're boring, not to save size.
It doesn't seem like you comprehended much of what I wrote. Compat is a major issue. HDs >2T are not an option. Consuming more physical space with more hardware is no small problem. There are more reasons than file size to remove commercials. Compression was already built into the streams. More compression will not be any material option.

-- Felix Miata *** http://fm.no-ip.com/
On 12/03/2016 10:33, Felix Miata wrote:
jdd composed on 2016-03-12 09:59 (UTC+0100):
Felix Miata composed:
2.4G Dec 24 17:29 bigBang0711-201512241700GDMX10wd.ts
That's exactly what I don't want, complicating the already too complicated with artificial categorization and backups to track
name seems to be self explanatory
I have no idea what you're trying to communicate.
categorization, you said? isn't that done with file name?
It doesn't seem like you comprehended much of what I wrote.
Maybe. Maybe you should say what goal you want to achieve, rather than starting from a solution that can't work...

Compat is a major issue.

? USB hard drives are compatible with any system, and VLC reads nearly everything, including .ts files.

HDs >2T are not an option.

But why? Can't you copy to another medium with any of your boxes?

Consuming more physical space with more hardware is no small problem. There are more reasons than file size to remove commercials. Compression was already built into the streams. More compression will not be any material option.

Wrong. MP4 compresses much more than the MPEG-TS you have, but you probably can't do it on the box. And you have no choice: either compress more or take more room... what else? Grabbing 5% more will be good for two days...

jdd
On 2016-03-12 12:09, jdd wrote:
Wrong. MP4 compresses much more than the MPEG-TS you have, but you probably can't do it on the box
And, they don't play (well) on several TV sets. To play them back on my STB, I have to "expand" them back.

When I want to keep a movie I recorded on the, what do you call it, STB?, I copy it to an external disk to free space on the STB (which in my case uses FAT or EXT2). I use two main methods:

- via FTP from the STB to a computer. It is slow, 10 Mb/s it seems.
- disconnecting the disk from the STB and connecting it to the computer.

Once on the computer, I use ProjectX to trim the recordings. No recoding. On some, I use ffmpeg to remove black areas and recode to a smaller size.

-- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
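For the trimming itself, an ffmpeg stream-copy cut does much the same as ProjectX's no-recode trim: it drops everything outside the given window without touching the video or audio data. A hedged example (the timestamps and file names are placeholders; with -c copy the cut can only land near keyframes, so expect a second or two of slack at each edit point):

ffmpeg -i recording.ts -ss 00:02:10 -to 00:44:30 -c copy trimmed.ts

Removing several commercial breaks from one recording means cutting each segment you want to keep and then joining them, which ffmpeg's concat demuxer can also do without recoding.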
Felix Miata wrote:
The last 3 recordings, all made after deleting the multiplicity of test recordings, and bringing freespace back to 96%, have completed normally, so I seem to be back to trying to figure out exactly how much space can actually be used. Many times on Linux PCs I've found it possible to get df to show 0% free on / without the system locking up, at least, before systemd and journald usurped predictable, tried and true.
On a data disk, not used by the OS, you can (at least on 'xfs') get it to 0 as root... er... hmm... ok, not exactly:
sudo lvcreate -C y -L 1G -n FillMe /dev/Data
sudo ./mkfs-xfs-raid with_data /dev/Data/FillMe
cmd = mkfs.xfs -i projid32bit=0 -d su=64k,sw=4 -s size=4096 -L with_data -f /dev/Data/FillMe
meta-data=/dev/Data/FillMe       isize=256    agcount=8, agsize=32752 blks
         =                       sectsz=4096  attr=2, projid32bit=0
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=262016, imaxpct=25
         =                       sunit=16     swidth=64 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=1605, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
sudo mkdir /mnt/fillme
sudo mount /dev/Data/FillMe /mnt/fillme/ -t xfs -o defaults,nodiratime,swalloc,largeio,logbsize=256k,barrier
sudo chown -R law.law /mnt/fillme
df /mnt/fillme
Filesystem               Size   Used  Avail  Use%  Mounted on
/dev/mapper/Data-FillMe  1018M  33M   985M   4%    /mnt/fillme
\df /mnt/fillme
Filesystem               1K-blocks  Used   Available  Use%  Mounted on
/dev/mapper/Data-FillMe  1041644    33152  1008492    4%    /mnt/fillme

\df -B 4096 /mnt/fillme
Filesystem               4K-blocks  Used  Available  Use%  Mounted on
/dev/mapper/Data-FillMe  260411     8288  252123     4%    /mnt/fillme
xfs_mkfile -p 1008476k x
\df -h .
Filesystem               Size   Used   Avail  Use%  Mounted on
/dev/mapper/Data-FillMe  1018M  1018M  16K    100%  /mnt/fillme
--- The closest I could come to a full disk was 16K short. Oh well... 16K out of 1G, I am guessing, but might also get all but 16K out of 1T as well... But NOTE: If I allocated lots of little 4K files, I wouldn't get close to that amount, since each file needs at least 1 block to hold its space pointers:
rm x
df -k .
Filesystem               1K-blocks  Used   Available  Use%  Mounted on
/dev/mapper/Data-FillMe  1041644    33152  1008492    4%    /mnt/fillme
echo -n 1 >x
df -k .
Filesystem               1K-blocks  Used   Available  Use%  Mounted on
/dev/mapper/Data-FillMe  1041644    33176  1008468    4%    /mnt/fillme
ll -hs x
4.0K -rw-rw-r-- 1 1 Mar 15 13:03 x
---- The indirect blocks don't appear to be part of its size to "du" or "ls", but they still get allocated out of "df"'s figure. Note, even 'root' can't allocate that final 4K space (tried)... But it looks like to allocate the entire 1G, it only took 1 block of overhead (I'm assuming the above block table could be held in 1 block, though I might be wrong...?)

L8r -l
On 2016-03-15 21:10, Linda Walsh wrote:
The closest I could come to a full disk was 16K short. Oh well... 16K out of 1G, I am guessing, but might also get all but 16K out of 1T as well...
Once I did some tests, and XFS was the filesystem that would "waste" less space. Thus I used it for my backup DVDs. Yes, you can "format" DVDs with XFS, or anything. No need to use ISO images. -- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
On 2016-03-12 03:40, Linda Walsh wrote:
I'd suggest recording to a disk that has >25% free space, and then, if you want, copying it to the over-filled disk -- that might work. But better would be never to use over 90% (<75-80% would be better if you care about performance) of your disk space in a partition. When a normal 100+MB/s I/O disk gets to 99% full, its write speed may really be down to 5MB/s or less.
I'd suggest using XFS for the final destination. You can fill them up to capacity, and as the allocation of metadata areas is dynamic, you don't waste space there if the files are huge, as is the case.

-- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
On 12/03/2016 01:15, Felix Miata wrote:
I have a STB that according to blkid is using EXT3, a partition using the entirety of the "2TB" HD, for recordings.
According to what others have said as well, the journal is not the cause of your problem. So you should better define what the problem really is. I understand you can't record all that you want, but is this a single recording? Is it the sum of a number of recordings? Is it not possible to move files to an external disk, or to use a bigger USB disk (I have 5 TB USB disks at home)? Any system needs some sort of disk caching, so a large amount of free space is mandatory.

jdd
On 03/12/2016 02:23 AM, jdd wrote:
On 12/03/2016 01:15, Felix Miata wrote:
According to what others have said as well, the journal is not the cause of your problem.
I agree. It's a metadata mechanism. It *might* impact performance very slightly. But that isn't your problem, is it?
So you should better define what the problem really is. I understand you can't record all what you want, but is this a single recording? is this a sum of a number of recordings?
There was the suggestion that the application was somehow using temp files, caching or otherwise dynamically using space. I don't know about "common", but this is an easy trick with UNIX: open a file, then delete it, or rather delete its directory entry. So long as the application has an open handle on it, the OS won't actually free the space used. In effect the application has a self-deleting temporary file. Better than what many others seem to do, leaving 'litter' in /tmp and /var/tmp and /usr/tmp! So while the application is running it's 'consuming' the free space, and then it is unable to write the named file.

I note you have plenty of free inodes, but large files also need indirect blocks; that counts as structural metadata. I don't know how ext4 deals with these super-large files in terms of indirection blocks. I raise this because I read this:

<quote src="http://kernelnewbies.org/Ext4#head-7c5fd53118e8b888345b95cc11756346be4268f4">
2.4. Extents

The traditionally Unix-derived filesystems like Ext3 use an indirect block mapping scheme to keep track of each block used for the blocks corresponding to the data of a file. This is inefficient for large files, specially on large file delete and truncate operations, because the mapping keeps a entry for every single block, and big files have many blocks -> huge mappings, slow to handle.

Modern filesystems use a different approach called "extents". An extent is basically a bunch of contiguous physical blocks. It basically says "The data is in the next n blocks". For example, a 100 MB file can be allocated into a single extent of that size, instead of needing to create the indirect mapping for 25600 blocks (4 KB per block). Huge files are split in several extents. Extents improve the performance and also help to reduce the fragmentation, since an extent encourages continuous layouts on the disk.
</quote>

Perhaps the problem is that the disk is 'fragmented' so that a single extent of adequate size, perhaps not for the whole file, cannot be created.

I do note the existence of an application called "E4rat" which 'rationalizes', aka optimizes, the layout of files on an ext4FS. I don't know how much extra space it requires; I recall seeing the disk compressor for MS-Windows working very slowly if it didn't have much free space :-( I don't know if layout is an issue.

Have you looked into preallocation? This is supposed to be a beneficial feature of ext4FS.

https://wiki.archlinux.org/index.php/E4rat

-- 
A: Yes.
> Q: Are you sure?
>> A: Because it reverses the logical flow of conversation.
>>> Q: Why is top posting frowned upon?
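On the preallocation idea: on filesystems that support it (ext4 with extents, XFS), space for a recording can be reserved up front in one go with fallocate from util-linux, which gives the allocator the best chance of finding large contiguous runs. A hedged sketch with placeholder names; note that Felix's volume has no extents feature, and plain ext3 (or the ext4 driver on a non-extent filesystem) will normally just refuse the call:

fallocate -l 8G "/media/hdd/movie/preallocated recording.ts"

Whether the STB's recording software could be made to do anything like this is another question entirely.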
participants (7)
- Andrei Borzenkov
- Anton Aylward
- Carlos E. R.
- Felix Miata
- jdd
- Linda Walsh
- Per Jessen