When Sergey L. showed up for his first day at his new job, he wasn't really sure what he'd be working on. The hiring manager wasn't very specific. "Database skills are very important," he told Sergey. "You'll be our first real sysadmin maintaining some stuff that a bunch of consultants set up."

Sergey was the first sysadmin the company had ever hired in its five-year history, and no one was really sure what to do with him or where he fit in the environment. As such, he didn't really have a boss. He had a team of bosses. Specifically, everyone in the company.

Sergey had to fix and maintain solutions to various problems, but was often trapped in a never-ending series of meetings. One of these meetings was a weekly status update, in which he had to explain his tasks to the whole staff. I'm sure they were as riveted hearing about SAN storage, RAID arrays, and Active Directory password security policies as I would be hearing about bidirectional deprecation balance sheet amortization.

Sergey's progress was slow, understandably. As soon as he'd be hitting his stride with a task, he'd be summoned into a meeting by one of his many bosses. Or by a flurry of instant messages. Not that he'd installed an IM client on his own; it was mandated by the company. So when anything went wrong, he'd get a tidal wave of IMs from various departments throughout the company.

So Sergey's first day kind of sucked. His second day, he feared, would be worse. As his trembling hand typed his password, one of his bosses (or, as you might refer to them, fellow employees) said "come on, we're going to the colo facility."

The company was having problems with its RAID array that held the database. The database contained all their CRM data, client data, employee data, sales data, everything. If the database server went down, everyone would have to stop working until it was back up.

The load on the system remained consistently high, so it fell to Sergey to resolve bottlenecks and kill processes for queries that stopped responding. Periodically the database would become corrupt and necessitate a restore from backup. The restore process took a long time, due to the monumental size of the database. Sergey could only guess that there were problems with the software that manipulated the database, but that wasn't his job to fix.
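The article never names the database engine, but the triage routine it describes looks roughly like this sketch: scan a MySQL-style process list for queries that have been running too long and flag them for a `KILL`. All the names and rows below are illustrative, not from Sergey's actual system.

```python
# Hypothetical sketch of query triage against a MySQL-style
# "SHOW PROCESSLIST" result (the engine is an assumption): find
# queries that have exceeded a time threshold and return their
# ids so they can be terminated with "KILL <id>".

def stuck_queries(processlist, max_seconds=300):
    """Return (id, info) pairs for active queries running longer than max_seconds."""
    return [
        (row["id"], row["info"])
        for row in processlist
        if row["command"] == "Query" and row["time"] > max_seconds
    ]

# Example process list: one query stuck for 20 minutes, one healthy,
# one idle connection that should not be killed.
rows = [
    {"id": 7, "command": "Query", "time": 1200, "info": "SELECT * FROM crm_orders"},
    {"id": 8, "command": "Query", "time": 3, "info": "SELECT 1"},
    {"id": 9, "command": "Sleep", "time": 9999, "info": None},
]
print(stuck_queries(rows))  # [(7, 'SELECT * FROM crm_orders')]
```

The point of filtering on `command == "Query"` is that long-lived `Sleep` connections are normal; only genuinely stuck statements are candidates for killing.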

The database was backed up nightly, and each run took about eight hours. Sometimes, though, the backup process would spill over into the next day and cause major problems. Why? Because client applications would poll the database frequently. And sometimes, said client software would send commands to alter the database's metadata. That's right; the client programs were firing off ALTER TABLE commands. If a user changed an option that affected a table's structure while the backup was running, it wreaked havoc on the backup process.
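To make the anti-pattern concrete, here is a hedged reconstruction of what that client code presumably looked like: a user-facing option toggle that issues DDL instead of updating a row. The names are hypothetical, and SQLite stands in for the unnamed production database.

```python
# Hedged reconstruction of the anti-pattern: changing a user option
# rewrites the table's schema at runtime, so a backup reading the
# table mid-run sees its structure change underneath it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clients (id INTEGER PRIMARY KEY, name TEXT)")

def enable_option(conn, option_name):
    # The WTF: a settings change fires ALTER TABLE instead of an
    # UPDATE against a settings row. (It also splices the option name
    # straight into the SQL, which is its own problem.)
    conn.execute(f"ALTER TABLE clients ADD COLUMN {option_name} INTEGER DEFAULT 0")

enable_option(conn, "newsletter_opt_in")
cols = [row[1] for row in conn.execute("PRAGMA table_info(clients)")]
print(cols)  # ['id', 'name', 'newsletter_opt_in']
```

The sane design stores per-client options as rows (or columns defined once by a migration), so runtime traffic never touches the schema and an eight-hour backup sees a stable table structure from start to finish.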

Ultimately, Sergey failed his mission of "fixing" the database. He was spending too much time in meetings, killing queries that ran too long, and patching backup problems, and all the hardware they'd thrown at the issue had a negligible impact. Sergey left the company, and, keeping in touch with his former coworkers (or rather, bosses), he learned that he'd been blamed for all of the database's problems.
