In the default cmock_config file, the types uint8_t, uint16_t, and uint32_t are mapped to HEX8, HEX16, and HEX32 respectively.
When using macros like UNITY_TEST_ASSERT_EQUAL_HEX8, UNITY_TEST_ASSERT_EQUAL_HEX16, and UNITY_TEST_ASSERT_EQUAL_HEX32, passing maximum unsigned values (e.g., 0xFFFF, 0xFFFFFFFF) results in incorrect and misleading outputs due to implicit casting to signed types.
Although these macros are used for asserting hexadecimal representations of unsigned values, the internal casting first converts them to UNITY_INT16 or UNITY_INT32, which are signed types. This causes values like 0xFFFF (65535) and 0xFFFFFFFF (4294967295) to be interpreted as -1.
```c
#define UNITY_TEST_ASSERT_EQUAL_HEX16(expected, actual, line, message) \
        UnityAssertEqualNumber((UNITY_INT)(UNITY_INT16)(expected), \
                               (UNITY_INT)(UNITY_INT16)(actual), \
                               (message), \
                               (UNITY_LINE_TYPE)(line), \
                               UNITY_DISPLAY_STYLE_HEX16)
```

```c
#define UNITY_TEST_ASSERT_EQUAL_HEX32(expected, actual, line, message) \
        UnityAssertEqualNumber((UNITY_INT)(UNITY_INT32)(expected), \
                               (UNITY_INT)(UNITY_INT32)(actual), \
                               (message), \
                               (UNITY_LINE_TYPE)(line), \
                               UNITY_DISPLAY_STYLE_HEX32)
```
Using:

```c
UNITY_TEST_ASSERT_EQUAL_HEX16(0xFFFF, actual_value, __LINE__, "Max 16-bit value");
```

results in:

```
(UNITY_INT)(UNITY_INT16)(0xFFFF) → (UNITY_INT)(int16_t)(65535) → -1
```

So the actual value is printed as -1, not 0xFFFF, which breaks the expected behavior.
To make test outputs consistent and correct when dealing with maximum unsigned values, consider the following improvements:
- For HEX-style macros (HEX8, HEX16, HEX32), avoid casting to signed integer types like UNITY_INT16 or UNITY_INT32.
- Replace UnityPrintNumber() with UnityPrintNumberUnsigned() when the style is explicitly unsigned.
- In UnityAssertEqualNumber, consider accepting UNITY_UINT when unsigned styles are used.