So I tried to find an authoritative source for you on this, but I believe it depends on the vendor.
When it comes to Linux (and iOS, it seems), you can assume that long == machine word size. A machine word is the unit your architecture uses to address memory: on a 32-bit Linux or iOS system, a long is 32 bits wide; on a 64-bit Linux or iOS system, it's 64 bits wide. You'll notice I'm hedging by referring specifically to Linux and iOS. Windows behaves differently, and I've seen enough anecdotes to suggest that there are in fact many different interpretations of how wide int and long should be. This Wikipedia article is instructive.
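If you want to see what your own toolchain does, a quick sizeof dump makes the data model obvious. Here's a minimal C sketch (on a 64-bit Linux or iOS build you'd typically expect long to come out at 8 bytes, on 64-bit Windows at 4 bytes; the exact results depend on your compiler and target):

```c
#include <stdio.h>

/* Print the widths of the common integer types so the data model
 * (ILP32, LP64, LLP64, ...) of this particular build is visible. */
int main(void)
{
    printf("int:       %zu bytes\n", sizeof(int));
    printf("long:      %zu bytes\n", sizeof(long));
    printf("long long: %zu bytes\n", sizeof(long long));
    printf("void *:    %zu bytes\n", sizeof(void *));
    return 0;
}
```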
In the case of your SO example, on a 32-bit machine it just so happened that UInt32 and unsigned long took up the same number of bits. Hooray! On a 64-bit machine, unsigned long happens to take 64 bits, whilst UInt32, as the name suggests, still takes 32 bits.
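To make that mismatch concrete, here's a sketch using uint32_t from stdint.h as a stand-in for UInt32 (an assumption on my part; the point is only the width difference). On a 32-bit build both types are 4 bytes and nothing is lost; on an LP64 build unsigned long is 8 bytes, so the high bits are silently thrown away when you narrow:

```c
#include <stdint.h>
#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned long big = ULONG_MAX;   /* 0xFFFFFFFF on ILP32, 0xFFFFFFFFFFFFFFFF on LP64 */
    uint32_t narrow = (uint32_t)big; /* truncates whenever unsigned long is wider */

    printf("sizeof(unsigned long) = %zu, sizeof(uint32_t) = %zu\n",
           sizeof(unsigned long), sizeof(uint32_t));
    printf("big = %lu, narrow = %u\n", big, (unsigned)narrow);
    return 0;
}
```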
Now, where it gets a bit weird is that the unsigned long _(a 64 bit value on y