It's the last week of the year, and as per tradition, we look back at some of the best stories of the year. We start with this charmer, from February, a… little slice of brillant architecture. --Remy

"Why aren't we using microservices?"

It was an odd way to start a meeting, but it got Mr. TA's attention. TA was contracting with a client, and sitting in a meeting with their senior architects. TA and one of his peers exchanged a glance and a small eye-roll. They knew exactly what had happened: Alvin, the senior architect, had learned about a new fad.

The application that TA's team was working on, and the core product which the client offered, was a pretty simple data-driven application. It was, at its core, one SQL Server database and a simple front-end, and could easily run reliably off of a moderately powerful server. With co-location and rackspace, it cost them a few thousand a month to operate.

But the senior engineers heard about the cloud. And they wanted everything in the cloud. And they wanted all the latest features that Azure had to offer. Which meant they were spending hundreds of thousands a month to host the application.

"It doesn't matter why we aren't using microservices," the senior architect went on. "What matters is that we're getting left behind. If we want to operate at webscale and provide maximum advantage to our users, with always-on reliability, we need to be using microservices. And don't worry: I have a plan for the transition."

Alvin pulled up a PowerPoint slide entitled "Initech's Microservice Migration".

Alvin's microservice plan called for dividing the application up into 13 modules. The boundaries were arbitrary, with only the vaguest sense of putting common functionality into the same microservice. Most of the proposed microservices weren't single-purpose: each contained multiple unrelated pieces of functionality, making them not exactly micro.

But that wasn't the biggest issue. Alvin had a vision of 13 different microservices, and each microservice needed to have its own database. After all, each microservice was supposed to be independently deployable from any other. So their single SQL Server database got split into 13 different databases.

Of course, this created all sorts of new problems. The "Admin" "micro"service (which contained a few hundred endpoints, once again, stretching the definition of "micro") had all the admin tables siloed off into its own database. But every other "micro"service needed to access the data in those admin tables.

Now, if this all stayed in one database, it'd be easy to do joins. Heck, even if this were just an on-prem SQL Server deployment, cross-database queries are completely doable. But in Azure, your only option is to use the "Linked Tables" feature, which means creating an object for each remote table you want to access. Since you have 13 databases, all needing several tables from every other, you can see how this quickly spirals out of control in terms of complexity.
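To see how fast that complexity compounds, here's a back-of-envelope count. The "5 tables per remote database" figure is an illustrative guess; the story only says each database needed "several" tables from every other one.

```python
# Rough count of the linked-table objects Alvin's design demands.
# Assumption: each database needs 5 tables from each of the other 12.
databases = 13
tables_per_remote = 5  # illustrative guess, not from the story

# Every database defines one remote-table object per table it pulls
# from each of the other databases.
external_objects = databases * (databases - 1) * tables_per_remote
print(external_objects)  # 780
```

Seven hundred eighty remote-table definitions to maintain, each of which silently breaks whenever someone alters the underlying table's schema.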

"But that's not a problem," Alvin explained, when TA pointed out this problem. "We're using microservices, which means we scale horizontally."

"What do you mean by that?"

"We just add more resources to the cloud, and let our microservices collaborate," Alvin said.

"I'm not clear how that solves the problem."

"That's why I'm the architect."

It took several weeks of back-and-forth to get Alvin to explain his brillant architecture. What Alvin intended was to have every microservice fetch the related data by talking to other microservices. All the joins would just be done in the application layer, and any performance problems could be solved by going "webscale", which is to say: throwing money into a pit and setting it on fire to please the great god Azure.
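For the curious, Alvin's pattern looks something like the following sketch. The service calls are stubbed with static data, and every name here is hypothetical; in production each `fetch_*` function would be an HTTP round-trip to another microservice.

```python
# Sketch of the "join in the application layer" pattern. All names and
# data are hypothetical stand-ins for calls to the Admin and Shipping
# microservices.

def fetch_users():
    # Real code: something like requests.get(ADMIN_URL + "/users").json()
    return [{"UserName": "alice", "Region": "EU"},
            {"UserName": "bob", "Region": "US"}]

def fetch_shipments():
    # Real code: a call to the Shipping microservice
    return [{"ShipmentId": 1, "UserName": "alice"},
            {"ShipmentId": 2, "UserName": "alice"},
            {"ShipmentId": 3, "UserName": "bob"}]

def shipments_with_regions():
    """What used to be a single indexed SQL JOIN, now done by hand:
    two full fetches over the network, then an in-memory hash join."""
    users_by_name = {u["UserName"]: u for u in fetch_users()}
    return [{**s, "Region": users_by_name[s["UserName"]]["Region"]}
            for s in fetch_shipments()]

print(len(shipments_with_regions()))  # 3
```

Every filter, sort, and aggregate the database used to do for free now costs two service calls and a pile of application memory, which is exactly where the "webscale" bill comes from.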

Despite Mr. TA's protests, that was the direction everyone marched off in. When it rapidly became clear that this was non-viable, Alvin adapted.

"So, to boost performance, we'll replicate a few tables between databases."

Replication meant an initial bulk copy, followed by updating the "micro"service responsible for those tables to perform every write multiple times, once in each database holding a copy. Unfortunately, due to the mess Alvin had made of things, the databases had lost referential integrity, which meant they couldn't leverage foreign key constraints to protect the data.
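The failure mode of that dual-write scheme is easy to demonstrate. This sketch is hypothetical (class names, databases, and the failure are all invented), but it shows what happens when one write in the fan-out fails and there's no distributed transaction to fall back on.

```python
# Hedged sketch of dual-write "replication": the owning service writes
# the same row into every database holding a copy. All names invented.

class FlakyDb:
    """Stand-in for a remote database connection that may be down."""
    def __init__(self, name, fail=False):
        self.name, self.fail, self.rows = name, fail, []

    def insert(self, row):
        if self.fail:
            raise ConnectionError(f"{self.name} unreachable")
        self.rows.append(row)

def replicate_insert(row, databases):
    """No distributed transaction: a failure partway through leaves
    the copies permanently diverged."""
    written = []
    for db in databases:
        try:
            db.insert(row)
            written.append(db.name)
        except ConnectionError:
            pass  # no rollback, no retry queue -- the copies just drift
    return written

dbs = [FlakyDb("admin"), FlakyDb("billing", fail=True), FlakyDb("shipping")]
print(replicate_insert({"ShipmentId": 1}, dbs))  # ['admin', 'shipping']
```

Two databases now say the shipment exists and one says it doesn't, and nothing in the system will ever notice or reconcile that.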

The worst "replicated" table was the one for tracking shipments. In the original, "source of truth" location, it was designed with a slew of NVARCHAR columns with names like UserDefinedField01, UserDefinedField27, and UserDefinedField112. There was an additional lookup table that applications could use to map those fields to UI elements, but that didn't exactly help all the other "micro"services that wanted that data. So Alvin set up a replication process that normalized that data into a more traditional database schema in the other 12 databases that wanted it. Unfortunately, the normalization wasn't the same in each of those remote databases, making the maintenance of replication a gigantic pain.

At this point, it should surprise no one to learn that Alvin also just used NVARCHAR fields for basically everything, even for columns that had absolutely no reason to be text. For example, the Users table, quite reasonably, had an autonumbered UserId field. It also had a UserName field. Every table that related (or "related", in the databases that didn't contain user information) used UserName as the foreign key.
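Why that's a landmine is easy to show. This sketch uses Python's built-in sqlite3 in place of SQL Server, and the table and column names are simplified stand-ins, but the failure mode is the same: with a mutable string as the "foreign key" and no constraint enforcing it, a simple rename silently orphans every dependent row.

```python
# Sketch of the string-foreign-key problem, using sqlite3 as a stand-in
# for SQL Server. Table names simplified; no real schema implied.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Users (
    UserId   INTEGER PRIMARY KEY AUTOINCREMENT,  -- the key nobody used
    UserName TEXT NOT NULL
);
CREATE TABLE Shipments (
    ShipmentId INTEGER PRIMARY KEY AUTOINCREMENT,
    UserName   TEXT NOT NULL  -- string "foreign key", nothing enforces it
);
""")
conn.execute("INSERT INTO Users (UserName) VALUES ('alvin')")
conn.execute("INSERT INTO Shipments (UserName) VALUES ('alvin')")

# A routine rename quietly breaks the relationship:
conn.execute("UPDATE Users SET UserName = 'alvin.architect' "
             "WHERE UserName = 'alvin'")

orphans = conn.execute("""
    SELECT COUNT(*) FROM Shipments s
    LEFT JOIN Users u ON u.UserName = s.UserName
    WHERE u.UserId IS NULL
""").fetchone()[0]
print(orphans)  # 1 shipment now belongs to a user who doesn't exist
```

With a numeric UserId as the key, the rename would have been a non-event; with UserName, it's silent data corruption spread across 13 databases.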

Eventually, Alvin's microservice migration limped across the arbitrarily defined finish line. Alvin presented management with a self-congratulatory project retrospective that highlighted his leadership, his technical vision, and how they were now well positioned for the future by using the latest techniques.

Interestingly, this retrospective completely ignored the costs of the migration, and the fact that to maintain the same level of performance as the old architecture, they had to spend almost 75% more on cloud services.
