On Thu, Apr 7, 2011 at 4:13 PM, phanisvara das <listmail@phanisvara.com> wrote:
On Fri, 08 Apr 2011 01:33:19 +0530, David C. Rankin <drankinatty@suddenlinkmail.com> wrote:
On 04/07/2011 05:07 AM, Dave Howorth wrote:
If you care about page presentation, you should be storing page images such as PDF rather than .odt, IMHO. :-P
Cheers, Dave
Agreed,
But consistency in document creation (or re-creation in this instance) is one of the fundamental concepts of a word-processor. It should be able to process the same words and come up with the same spacing every time.
for me it's enough if it comes out with similar, acceptable spacing in any direction. higher precision needs i'd consider 'layout,' not 'word processing.' but that's just splitting hairs, and my opinion, based on my requirements.
When we are talking about thousands and thousands of pages, duplicating everything in PDF is a bit costly :p
I know these hiccups occur, so it was only a matter of time before this issue was found. The documents at issue are all based on the normal OO template. The letterhead was just the simplest example, but longer documents based on the normal template are affected just the same. It will be interesting to find out just what caused this change.
Me and git bisect have a trying relationship. The big issue here will be the multiple builds of Libre -- that's no little program :) Hopefully the smart guys at the document foundation can narrow down the cause without me having to dedicate a box and god knows how long compiling to find the update that caused the change (fingers crossed :)
i've never really done this bisecting trick, but thought it shouldn't be necessary to compile each version you want to compare; just install them, look at the result, and compare the source. when it's clear which change / commit is responsible, one can narrow in on that particular version. no need to build all of them.
I don't think bisecting lets you do that. You give it a known-good git revision and a known-bad one. It picks a revision halfway between those and compiles it. You test it and tell git whether it is good or bad. It then picks a revision halfway between the new known-good and known-bad pair and repeats. So it's a log2 algorithm for finding the checkin that broke things: 1 million checkins take only 20 compile/test cycles, since 2**20 = (2**10)**2 = 1024**2 = 1024 * 1024 ~= 1,000,000.

Thus with a huge number of checkins, it is relatively fast to get to the answer. The trouble is that with only 500 checkins, it still takes 9 compile/test cycles. Not so good, but still a lot better than doing all 500.

Greg
--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
For additional commands, e-mail: opensuse+help@opensuse.org
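(The log2 arithmetic above can be checked with a short sketch. This is just an illustration of the worst-case step count, assuming one compile/test cycle per halving; `bisect_steps` is a made-up helper name, not anything git itself provides.)

```python
import math

def bisect_steps(n_checkins: int) -> int:
    """Worst-case number of compile/test cycles git bisect needs
    to isolate the single bad checkin among n_checkins, since each
    cycle halves the remaining suspect range."""
    return math.ceil(math.log2(n_checkins))

print(bisect_steps(1_000_000))  # 20 cycles for a million checkins
print(bisect_steps(500))        # 9 cycles for only 500 checkins
```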