I've been programming in C for... (quick arithmetic) roughly 25 years now, and yet, there are still things to learn. For instance, I decided to move my code for Milhouse back from my AMD64 Linux box to my MacBook for a little "mobile hacking" over the next week. I quickly found that unlike gcc on the Linux box, gcc on this Mac still thought that "long" variables were 32 bits. Various counters in Milhouse are 64 bit values, as are the hash values used in the transposition table, and I quickly found that every place where I had previously used "%ld" as a format string had to be changed to "%lld". Grumble!
You see, here's the annoying thing about C: you know that a short can hold a char, an int can hold a short, and a long can hold an int, but you don't actually know how many bits any of them has without peeking with sizeof. Luckily, the C99 standard requires an include file, stdint.h, with typedefs for types of specific widths, so if you really want a 32 bit unsigned integer, you can use the type uint32_t and be reasonably sure that it will work. Such was the state of my knowledge a couple of days ago.
But here's the thing: I didn't know any way to generate the right format string for a particular size of data value. On my AMD64 box, %ld is used for 64 bit values; on my Mac, I need to use %lld.
But apparently this was all thought of by the C99 standards committee. They created an include file called inttypes.h, which contains defines that tell you what format is needed. For example, PRIu64 is the code for a 64 bit unsigned integer value. On my Mac, it expands to "ll" "u", and the compiler is nice enough to cat adjacent string literals together. Therefore, to print such a value, you need a line like:
printf("%" PRIu64 "\n", sixtyfourbitvalue);
Sure, it's ugly. You'd think they would at least include the
% in the macro. But it does work. I'm tidying up all my code to play by these nice rules.