On Sat, Apr 27, 2013 at 6:45 AM, Claudio Freire wrote:
I mean, I have 3 GB of RAM, but it uses about 40%, and it still uses swap.
There's your problem then. I don't think OBS workers have swap.
Hi Claudio, I'm an absolute beginner at shell, so I may be mistaken. Here's the `top -d 1 -pid <pid>` output, check it:

While compiling data:

    PID USER     PR NI VIRT  RES  SHR  S %CPU %MEM TIME+   COMMAND
  27447 marguer+ 20 0  1050m 1.0g 2596 R 89.9 33.6 4:52.83 python

When finished and writing data:

    PID USER     PR NI VIRT  RES  SHR S %CPU %MEM TIME+   COMMAND
  27447 marguer+ 20 0  1376m 1.2g 772 D  1.0 41.5 6:10.79 python

I seem to have mistaken the percentage of memory usage for absolute memory usage... So a single PID can take 1.2 GB. But it ran two PIDs on my dual-core laptop. Does that mean that on a 4-core VM it will eat 1.2 GB × 4 = 4.8 GB, or 50% more, 9.6 GB of memory?
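As a sanity check on that reading of `top`: %MEM is RES as a fraction of total RAM, so on the 3 GB laptop from this thread the numbers line up with the RES column. A quick sketch (the function names here are mine, purely for illustration):

```python
# Hedged sketch: converting top's %MEM column to absolute usage,
# assuming the 3 GB of total RAM mentioned earlier in the thread.
total_ram_mb = 3 * 1024  # 3 GB laptop RAM

def mem_pct_to_mb(pct):
    """Convert top's %MEM column to approximate megabytes of RES."""
    return total_ram_mb * pct / 100.0

# 41.5 %MEM on a 3 GB machine is roughly the 1.2 GB RES top reports:
print(round(mem_pct_to_mb(41.5)))  # -> 1275 (MB), i.e. about 1.2 GB

def estimate_total_gb(per_job_gb, jobs):
    """Rough upper bound if each parallel job peaks independently."""
    return per_job_gb * jobs

print(estimate_total_gb(1.2, 4))  # -> 4.8 (GB) for a 4-way parallel build
```

This assumes each parallel job peaks at the same time, so it is a worst-case estimate, not a guaranteed requirement.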
I don't see any input files in models. But from the code, RAM usage seems to be linear in the input size, except maybe for the trie. I don't think the sorting phase can use more than 10 times the size of the input files. Does it die before building the trie?
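For context on why the sorting phase's memory grows with the input: holding every N-gram in memory to sort it is linear in the file size, whereas an external merge sort keeps memory bounded by sorting fixed-size chunks to temp files and merging them. A minimal sketch of that technique (this is NOT sortlm.py's actual algorithm, just an illustration of the bounded-memory alternative):

```python
# Minimal external merge sort sketch: sort a large text file line-by-line
# in bounded memory. Chunks of `chunk_lines` lines are sorted in RAM and
# spilled to temp files, then k-way merged with heapq.merge.
import heapq
import os
import tempfile

def external_sort(in_path, out_path, chunk_lines=100_000):
    chunk_paths = []
    with open(in_path) as f:
        while True:
            # Take at most chunk_lines lines; zip(range(...), f) stops
            # without consuming an extra line from f.
            lines = [line for _, line in zip(range(chunk_lines), f)]
            if not lines:
                break
            lines.sort()
            with tempfile.NamedTemporaryFile("w", delete=False) as tmp:
                tmp.writelines(lines)
                chunk_paths.append(tmp.name)
    # Merge the sorted runs; heapq.merge streams, so memory stays small.
    chunk_files = [open(p) for p in chunk_paths]
    with open(out_path, "w") as out:
        out.writelines(heapq.merge(*chunk_files))
    for cf in chunk_files:
        cf.close()
    for p in chunk_paths:
        os.remove(p)
```

With a 150 MB input, peak memory is then set by `chunk_lines`, not by the file size; the trade-off is extra disk I/O for the spill files.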
It takes data/models/text3/data.arpa (150 MB) as input. I don't know how to distinguish the load phase from the sort phase, but OBS kills it here:

  [   71s] /usr/bin/python -B ../../tools/sortlm.py \
  [   71s]     ./text3/data.arpa sorted3/data
  [ 8016s] [ 7998.603176] Out of memory: Kill process 5630 (python) score 259 or sacrifice child
  [ 8016s] [ 7998.604144] Killed process 5630 (python) total-vm:1357352kB, anon-rss:836216kB, file-rss:0kB

The actual result should be:

  /usr/bin/python -B ../../tools/sortlm.py \
      ./text3/data.arpa sorted3/data
  reading N-grams
  min cost = -5.847901
  writing 1-gram file
  writing 2-gram file
  writing 3-gram file
  /usr/bin/python -B ../../tools/genfilter.py \
      sorted3/data.2gram \
      sorted3/data.2gram.filter \
      12 8
  /usr/bin/python -B ../../tools/genfilter.py \
      sorted3/data.3gram \
      sorted3/data.3gram.filter \
      10 8
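One way to reproduce the worker's cap locally instead of waiting 8000 seconds for the OOM killer: lower the process's own address-space limit, so an oversized allocation fails fast with MemoryError. A Linux-only sketch, assuming the standard `resource` module; the 1.4 GB figure is my choice, picked to sit just above the total-vm:1357352kB reported in the log:

```python
# Sketch: cap this process's address space (Linux, via RLIMIT_AS) so a
# runaway allocation raises MemoryError instead of triggering the kernel
# OOM killer. The 1.4 GB figure is an assumption based on the OOM log.
import resource

limit = 1400 * 1024 * 1024  # ~1.4 GB of virtual memory
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
resource.setrlimit(resource.RLIMIT_AS, (limit, hard))

try:
    buf = bytearray(2 * 1024 * 1024 * 1024)  # a 2 GB allocation must fail now
    hit_limit = False
except MemoryError:
    hit_limit = True
    print("allocation refused under the limit")

# Restore the original soft limit.
resource.setrlimit(resource.RLIMIT_AS, (soft, hard))
```

Running sortlm.py under such a limit (or under `ulimit -v` in the shell) would show whether it dies while reading N-grams or later, which answers Claudio's question about which phase blows up.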
If so... you're SOL I'd say. The trie is another project based on a C library, and tries aren't known for their compactness. You'll just have to request enough RAM to build.
What is the most memory a single build can request without harming the others?

Greetings,
Marguerite
--
To unsubscribe, e-mail: opensuse-buildservice+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse-buildservice+owner@opensuse.org