On Tuesday 28 March 2006 11:44 am, Verdi March wrote:
Hi,
is there a standard rule on widening conversions between signed and unsigned types? I found a reference at this URL: http://developers.sun.com/prodtech/cc/articles/ILP32toLP64Issues.html which also seems to be what gcc implements. Can anybody confirm this? TIA.
#include <stdio.h>

int main(void)
{
    unsigned int ui = 0xcabcdef2;

    /* An unsigned 32-bit int assigned to a signed 64-bit long long:
       sign extension does not occur because ui is unsigned. */
    long long ll_ui = ui;

    /* An unsigned 32-bit int assigned to an unsigned 64-bit long long:
       again, no sign extension. */
    unsigned long long ull_ui = ui;

    /* i is a signed 32-bit int with the high-order bit set. */
    int i = 0xcabcdef2;

    /* When i is assigned to ll_i, sign extension occurs BECAUSE i is
       signed, whether or not the destination is signed or unsigned. */
    long long ll_i = i;
    unsigned long long ull_i = i;

    printf("ui=%x ll_ui=%llx ull_ui=%llx\n", ui, ll_ui, ull_ui);
    printf("i=%x ll_i=%llx ull_i=%llx\n", i, ll_i, ull_i);
    return 0;
}
Execution result:
ui=cabcdef2 ll_ui=cabcdef2 ull_ui=cabcdef2
i=cabcdef2 ll_i=ffffffffcabcdef2 ull_i=ffffffffcabcdef2
Absolutely.
First, the rules are in the C standards (C89 and C99). I wrote a white
paper on this a couple of years ago:
http://h21007.www2.hp.com/dspp/files/unprotected/32bitto64bit_whitepaper.pdf
Basically, the C language rules can be confusing.
Take:
int i;
unsigned int ui;
long long ll_i;
ll_i = i + ui;
In this case, the usual arithmetic conversions convert 32-bit i to unsigned int
before it is added to 32-bit ui, so the result is a 32-bit UNSIGNED expression.
That unsigned expression is then widened (without sign extension) and assigned
to ll_i.
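To make that concrete, here is a minimal sketch (the initializer values -1
and 0 are my own illustrative choices, not part of the original example):

#include <stdio.h>

int main(void)
{
    int i = -1;            /* illustrative value: all bits set */
    unsigned int ui = 0;   /* illustrative value */
    long long ll_i;

    /* The usual arithmetic conversions turn i into an unsigned int,
       so i + ui is an unsigned 32-bit expression with value
       0xffffffff.  Widening that unsigned value to long long does
       not sign extend. */
    ll_i = i + ui;

    /* Prints ll_i=ffffffff (4294967295), not -1. */
    printf("ll_i=%llx (%lld)\n", ll_i, ll_i);
    return 0;
}

Had i been widened before the addition, as in ll_i = (long long)i + ui, the
sign extension would happen first and the result would be -1 on a typical
ILP32 or LP64 target.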
I explain some of the rules in my white paper.
--
Jerry Feldman