Curiosity.

(I’m a new programmer… shhhhh)

When looking at other people’s code, specifically with bitwise operations, I see people using hexadecimal instead of decimal.

( _data & 0x80) versus ( _data & 128)

In the programs I’ve run, it doesn’t make a difference. Is that it? Does it just boil down to preference, or am I missing something?

Thanks!

In code the result is the same. Hex is used because it makes it easier to see where the bits are: each hex digit maps to exactly four bits, so you can read the bit positions straight off the mask. Binary is even clearer, but it gets long quickly, so hex is the usual compromise.
So it’s not just about preference, it’s code readability.
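A quick sketch to show what that means in practice (the value 0xB5 is just a made-up example byte; all three masks below are the exact same number, only the notation differs):

#include <cstdio>
#include <cstdint>

int main() {
    uint8_t _data = 0xB5;             // example value: 1011 0101 in binary

    bool hi_dec = _data & 128;        // decimal: you have to work out which bit 128 is
    bool hi_hex = _data & 0x80;       // hex: each digit is 4 bits, so 0x80 is clearly the top bit of a byte
    bool hi_bin = _data & 0b10000000; // binary: the bit position is spelled out literally, but it's long

    printf("%d %d %d\n", hi_dec, hi_hex, hi_bin); // prints 1 1 1 -- identical results
    return 0;
}

All three lines compile to the same thing; hex just keeps the mask short while still lining up with the bits.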