Hella.Breitkopf@varetis.de wrote:
Steffen Dettmer had some Perl code to decode this; to convert the hex value to an IP address you may use the following one-liner:
perl -e '$a=shift; printf "%d.%d.%d.%d\n", map hex, substr($a,6,2), substr($a,4,2), substr($a,2,2), substr($a,0,2);' <string>

(Note the explicit hex(): a string like "0xff" does not numify in Perl, so the bytes have to be converted explicitly.)
[this works at least on Intel archs]
And that's the point: The output in the logfile is architecture dependent.
[snip]
fffffea9 = 169.254.255.255
Err, and here starts my problem: ff ff fe a9, read from left to right, is in my universe 255.255.254.169 (but a9feffff *is* 169.254.255.255). The script does it backwards ...
These messages are produced by the call

    printk(KERN_WARNING "martian source %08x for %08x, dev %s\n", saddr, daddr, dev->name);

in /usr/src/linux/net/ipv4/route.c, where saddr and daddr are 32-bit integer variables containing the IP addresses in network byte order. But this printk call interprets them as if they were in host byte order. Therefore, if host byte order and network byte order differ, the bytes of the IP addresses are printed "from right to left". Network byte order is big endian, while host byte order on Intel is little endian, so on an Intel machine the output has the "wrong" order.

Eilert

--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Eilert Brinkmann -- Universitaet Bremen -- FB 3, Informatik
eilert@informatik.uni-bremen.de - eilert@tzi.org
http://www.informatik.uni-bremen.de/~eilert/