Melissa was trying to figure out why an old application wasn't writing out a data file when commanded to do so. It was an implemented feature, it should work, it had worked at one point, but it wasn't working now.
She traced through the C code and found this:
void writeconf(void)
{
    int fd;
    char buf[100];
    int len;
    int loop;

    snprintf(&buf[0], sizeof(buf), "%d\n", confval);
    for(loop=0; loop<10; loop++) //try 10 times to open file
    {
        fd=open("/tmp/conf", O_WRONLY|O_CREAT);
        if(fd != -1)
        {
            len=write(fd, &buf[0], strlen(buf));
            close(fd);
            break; //Ok, ready to exit
        }
    }
}
The purpose of this code is to write a single integer out to /tmp/conf.
So, first, we use snprintf to write that integer out to a buffer. This line alone piles on the WTFs. &buf[0] isn't wrong, but it's weird. In an expression, an array name in C decays to a pointer to its first element, and &buf[0] explicitly takes the address of that first element. They evaluate to the same pointer. I don't understand why they went the long way around, unless they didn't understand what arrays are.
Speaking of, buf doesn't need to be 100 characters long, but that's a minor thing. The real "yikes" is confval itself: a global variable accessed from inside this function, which hints at many, many other bad choices.
Then we loop. We try to open the file up to ten times. If we fail, we just try again. No waiting, no timeouts, just… try it ten times, as fast as the loop can spin.
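If you genuinely expect transient open failures, the conventional fix is to wait between attempts. A minimal sketch of what that might look like; the attempt count, the 100 ms delay, and the 0644 mode are arbitrary choices, not from the original:

```c
#include <fcntl.h>
#include <unistd.h>

/* Sketch: retry open() with a short pause between attempts,
   instead of hammering it ten times back-to-back.
   Returns a file descriptor, or -1 if every attempt failed. */
int open_with_retry(const char *path)
{
    for (int attempt = 0; attempt < 10; attempt++) {
        int fd = open(path, O_WRONLY | O_CREAT, 0644);
        if (fd != -1)
            return fd;                /* success */
        usleep(100 * 1000);           /* back off 100 ms, then retry */
    }
    return -1;                        /* still failing; let the caller decide */
}
```

Note that this sketch also passes a mode argument to open(), which O_CREAT requires; the original code omits it.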
Once we open it, we write the data, again using the &buf[0] idiom. And… do nothing to check that we successfully wrote anything out. We just hope for the best, close the file descriptor, and exit.
This didn't answer why the file wasn't getting written, but it definitely explained why no one was seeing errors. This doesn't do any meaningful error handling. If it can't open the file, it just gives up after ten tries. If it can't write to the file, it doesn't notice.
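For contrast, a version that actually notices failures might look like this sketch. The function name, the return-code convention, the O_TRUNC flag, and the perror reporting are all assumptions for illustration, not the application's actual design:

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Sketch: writeconf with the missing checks added. Every syscall
   result is inspected, and failures are reported and propagated.
   Returns 0 on success, -1 on failure. */
int writeconf_checked(int confval)
{
    char buf[16];  /* ample for a decimal int plus newline */
    int len = snprintf(buf, sizeof(buf), "%d\n", confval);

    /* O_CREAT requires a mode; O_TRUNC discards any stale contents. */
    int fd = open("/tmp/conf", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd == -1) {
        perror("open /tmp/conf");
        return -1;
    }
    if (write(fd, buf, (size_t)len) != len) {
        perror("write /tmp/conf");  /* short write or hard error */
        close(fd);
        return -1;
    }
    if (close(fd) == -1) {
        perror("close /tmp/conf");
        return -1;
    }
    return 0;
}
```

Even a silently ignored -1 return from a helper like this would at least give a debugger something to find; the original gives Melissa nothing at all.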