As a Python developer, SEO was far outside of Ian’s toolbox, more in the realm of expensive social media consultants. However, when his friend Alec asked for help, he knew he couldn’t turn him down.

Alec worked at LightBarn, a lighting supply company, and was overseeing their SEO optimizations. Alec explained that no one could actually find the company’s website with relevant keywords on popular search engines. “I looked all the way to page 150!” Alec said. “I don’t get it. We have plenty of inbound links.” Alec had worked for months writing a blog on the company’s site, and his posts were routinely linked to by other industry sites.

[Image: cookies being cut out of rolled-out cookie dough, by Anna reg]

“It’s not my forte,” Ian explained, “but I’m sure I could get LightBarn up to page five, at least.” Ian suggested a 20-hour contract to improve the site’s page rank. If he couldn’t get LightBarn to page five of a typical keyword search, the contract would be terminated. It was win-win for LightBarn, and he’d be doing his friend Alec a big favor.

Oddities

Testing the site for any obvious culprits, Ian noticed that LightBarn’s site was full of oddities. Shopping carts would mysteriously lose items. Search results would be returned in arbitrary orders. He would click on one product and another, unrelated product would be displayed instead.

Ian mentioned these to Alec, who shrugged them off. “None of that should affect our page rank. Sure, every website has bugs, but as long as Google can spider our pages it doesn’t care.”

“You sure you don’t want my help pinning some of those issues down?” Ian offered. “I mean, that’s my day job.”

“No, let’s stick with the contract language,” Alec replied. To Ian’s ears, it sounded like Alec cared more about the company’s page rank than whether the site actually worked for users. Ian decided to raise the issue again once the existing contract was finished.

But first, he would have to think like a search engine.

Think Like A Bot

Spider bots, Ian knew, are simple programs. They perform an HTTP request for a URL like a typical web browser, scan the returned page content for links, add any new matches to a table to scan later, and move on to the next URL. Some employ more sophisticated techniques, such as detecting whether a page’s content is genuine or merely stuffed with keywords to spam search results, but at its core a spider bot just recursively requests pages.
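The core loop described above can be sketched in a few lines of Python. This is a minimal illustration, not Ian’s actual code; the `fetch` parameter stands in for whatever HTTP client the crawler uses, so the traversal logic can be shown on its own:

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin


class LinkExtractor(HTMLParser):
    """Collect href targets from <a> tags in a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(start_url, fetch, max_pages=100):
    """Breadth-first crawl: fetch a page, queue any links not yet seen.

    `fetch(url)` must return the page body as a string; it could wrap
    urllib, a stub, or anything else.
    """
    seen = {start_url}
    queue = deque([start_url])
    pages = {}
    while queue and len(pages) < max_pages:
        url = queue.popleft()
        html = fetch(url)
        pages[url] = html
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)  # resolve relative links
            if absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return pages
```

The `seen` set is what keeps the recursion from looping forever on sites that link back to themselves.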

So Ian wrote one himself. Using Python’s http package, he programmed it to accept several arguments from the command line, such as the starting URL and which browser identification string to send. When he finished, he gave it LightBarn’s homepage URL as the starting point and let it run.
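A plausible shape for that command-line front end, using only the standard library (the argument names and default User-Agent string here are assumptions for illustration, not taken from Ian’s tool):

```python
import argparse
import urllib.request


def build_parser():
    """Command-line options: starting URL plus the browser ID to present."""
    p = argparse.ArgumentParser(description="Minimal spider bot")
    p.add_argument("start_url", help="URL of origin for the crawl")
    p.add_argument("--user-agent", default="IansSpider/0.1",
                   help="browser identification string to send")
    return p


def fetch(url, user_agent):
    """Fetch one page with the given User-Agent.

    Note that no cookie jar is attached: like most bare-bones spiders,
    this client never sends cookies back -- exactly the behavior that
    exposed LightBarn's problem.
    """
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req) as resp:
        charset = resp.headers.get_content_charset() or "utf-8"
        return resp.read().decode(charset)
```

Saving each response body to disk, as Ian did, is then a one-line addition to the crawl loop.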

Just over an hour later, Ian sifted through the saved page files. All of them were under 1KB, way too small for a site like LightBarn, with hundreds of lines of embedded JavaScript and lots of display-centric markup. Opening a few files at random, he noticed that they all contained just one line of text:

Please enable cookies and try again
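Spotting those pages can be automated. A sketch of the kind of filter Ian might have run over the saved files; the 1 KB threshold and the warning text come from the story, while the function name is hypothetical:

```python
def flag_cookie_walls(pages, threshold=1024):
    """Return URLs whose saved body is tiny and contains the cookie warning.

    `pages` maps URL -> page body string, as produced by a crawl.
    """
    return [
        url
        for url, body in pages.items()
        if len(body) < threshold and "Please enable cookies" in body
    ]
```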

Crash Diet

Ian explained his test to a bemused Alec later that day. “After I saw that, I disabled cookies on my browser to make sure it wasn’t just some issue with the http package. When I tried that, every page just displayed the text ‘Please enable cookies and try again.’”

“But we need cookies for session storage,” Alec said. “Otherwise the shopping cart won’t work.”

“You should only display that message on the shopping cart page. You don’t need it across your whole site. It’s no wonder search engines don’t rank your site: that warning is the only thing their spiders ever see.”
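The fix Ian proposed amounts to gating the cookie check by path rather than applying it site-wide. A minimal sketch of that idea; the path prefixes are hypothetical, since the story never names LightBarn’s URLs:

```python
# Hypothetical paths where a session (and therefore cookies) is required.
SESSION_PATHS = ("/cart", "/checkout")


def needs_cookies(path):
    """Only session-dependent pages should demand that cookies be enabled.

    Product listings, blog posts, and other read-only pages render fine
    without a session, so spiders (which send no cookies) can index them.
    """
    return path.startswith(SESSION_PATHS)
```

With a check like this in place, a cookie-less request to a product page gets real content, and only the cart still shows the “Please enable cookies” message.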

Alec consented to Ian’s fix. A couple of weeks later, Ian noticed that LightBarn was appearing on page three or four of relevant searches across several popular search engines. Delighted, Alec offered to pay Ian in full for his services.

“Actually, I’d like to fix some of those other bugs I found at the start of the project,” Ian said. “As a friend, you may as well get your money’s worth.”