2026-04-22
A colleague wrote this utility function to check whether a given index falls within a valid range of a buffer. It's used in a network packet parser to validate offsets before reading data. Simple bounds check — what could go wrong?
#include <stdio.h>
#include <stdlib.h>

int is_valid_offset(int offset, size_t buf_len) {
    if (offset < buf_len - 1) {
        return 1; // valid: room for at least 1 byte read
    }
    return 0; // out of bounds
}

void parse_field(const unsigned char *buf, size_t buf_len, int offset) {
    if (is_valid_offset(offset, buf_len)) {
        printf("Reading byte at offset %d: 0x%02x\n", offset, buf[offset]);
    } else {
        printf("Offset %d is out of bounds (buf_len=%zu)\n", offset, buf_len);
    }
}

int main(void) {
    unsigned char packet[] = {0xDE, 0xAD, 0xBE, 0xEF};
    parse_field(packet, 4, 2);  // should succeed
    parse_field(packet, 4, -1); // should be rejected
    parse_field(packet, 0, 0);  // should be rejected (empty buffer)
    return 0;
}
Compile and run it. You'd expect the second and third calls to be rejected cleanly. The second one is rejected, but only by accident, as we'll see below. The third one sails right through the "valid" path and reads from a zero-length buffer: an out-of-bounds read.
There are actually two interacting signed/unsigned problems here, both rooted in C's implicit conversion rules.
Bug #1: Comparing signed against unsigned. When you compare an int against a size_t (which is unsigned), C's "usual arithmetic conversions" silently convert the signed value to unsigned. So offset = -1 becomes a gargantuan unsigned number: exactly SIZE_MAX, by the rules of unsigned conversion. In this particular check the conversion happens to work in our favor: SIZE_MAX is not less than 3, so the -1 call against a 4-byte buffer is rejected. But that rejection is luck, not design. The code never actually checks the sign of offset, and the accidental protection evaporates the moment the second bug comes into play.
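To see the conversion in isolation, here's a minimal sketch. converted_less_than is a hypothetical helper (not part of the parser) that evaluates the comparison exactly the way the buggy guard does:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical helper: evaluates `offset < limit` the same way the buggy
   check does. The int operand is silently converted to size_t, so a
   negative offset becomes a huge unsigned value before the comparison. */
int converted_less_than(int offset, size_t limit) {
    return offset < limit; // usual arithmetic conversions apply here
}
```

Calling converted_less_than(-1, 3) returns 0, because (size_t)-1 is SIZE_MAX and SIZE_MAX < 3 is false. The rejection is a side effect of the wraparound, not a deliberate sign check.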
Bug #2: Unsigned integer underflow when buf_len is 0. The expression buf_len - 1, where buf_len is a size_t equal to 0, wraps around to SIZE_MAX. The guard effectively becomes (size_t)offset < SIZE_MAX, which is true for every converted offset except SIZE_MAX itself. So the -1 case, which converts to exactly SIZE_MAX, is still rejected even for empty buffers. But offset = 0 with buf_len = 0 passes, since 0 < SIZE_MAX is true. You'd read from an empty buffer.
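The wraparound itself is easy to demonstrate on its own. buggy_limit below is a hypothetical stand-in for the buf_len - 1 subexpression from the guard:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical stand-in for the `buf_len - 1` subexpression in the buggy
   guard. size_t arithmetic is modular, so subtracting 1 from 0 does not
   go negative; it wraps all the way around to SIZE_MAX. */
size_t buggy_limit(size_t buf_len) {
    return buf_len - 1; // wraps to SIZE_MAX when buf_len == 0
}
```

With buf_len = 0, buggy_limit returns SIZE_MAX, so 0 < buggy_limit(0) is true (the bogus pass) while (size_t)-1 < buggy_limit(0) is false (the accidental rejection).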
The core lesson: never subtract from an unsigned value without guarding against underflow first, and never compare signed values against unsigned ones without an explicit cast or a preliminary sign check.
Here's the corrected version:
int is_valid_offset(int offset, size_t buf_len) {
    if (offset < 0) {
        return 0; // reject negative offsets before any unsigned math
    }
    if (buf_len == 0) {
        return 0; // reject empty buffers before subtracting
    }
    if ((size_t)offset < buf_len) {
        return 1;
    }
    return 0;
}
The fix checks the sign of offset first, while it's still an int, and checks for empty buffers before doing any unsigned arithmetic. Only then does it cast to size_t for the actual comparison. This ordering matters: every guard eliminates a class of inputs before the next comparison can be poisoned by implicit conversions. Note that the comparison is now (size_t)offset < buf_len rather than offset < buf_len - 1, which also repairs a quiet off-by-one in the original: the old check rejected offset = buf_len - 1, the last valid byte of the buffer.
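A quick sanity harness, with the corrected function reproduced so the snippet stands alone, confirms the behavior across all three classes of input:

```c
#include <stddef.h>

// Corrected checker, reproduced from above so this compiles on its own.
int is_valid_offset(int offset, size_t buf_len) {
    if (offset < 0) {
        return 0; // reject negative offsets before any unsigned math
    }
    if (buf_len == 0) {
        return 0; // reject empty buffers before subtracting
    }
    if ((size_t)offset < buf_len) {
        return 1; // in-bounds: offset can be read as a 1-byte field
    }
    return 0;
}
```

In-range offsets (including the last byte, buf_len - 1) are accepted; negative offsets, offsets at or past the end, and anything against an empty buffer are rejected.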
Compilers can warn about this (-Wsign-compare, which GCC enables via -Wextra for C, and via -Wall for C++), but many codebases suppress or ignore these warnings. In security-critical code like packet parsers, a single missed signed/unsigned mismatch is a CVE waiting to happen.
