II. Thou shalt not worship shininess
10 Commandments of Scale, Theo Schlossnagle, Surge 2010

A few months ago I attended Node Summit in San Francisco, a conference dedicated to success stories around node.js implementations in large architectures. There were a couple of very educational war stories (particularly the so-called "boring" Walmart Black Friday case study, presented by Eran Hammer), but this rant is not going to be about how awesome node.js is. This rant is going to be about bad decisions: the ones that are still being made in the world of technology, driven by false premises, wrong reasons, and buzzword bingo.

With the increased availability of new technology choices, people tend to make the mistake of jumping on the "hot" technology without a good reason. Reading industry blogs and aggregators, engineers and managers alike hear about "the next big thing" and jump on whatever that is without considering the consequences (or the need), justifying the move with a lot of, well, bullshit. For example, I've seen one of the largest media companies in the world decide to replace all of its existing web platforms, serving hundreds of millions of people, with Django. A week after the release of version 1.0. Without a single engineer familiar with Django. Or with Python. After recently investing a significant amount into their current architecture, which ran like a well-oiled machine. The "official" reason? The success behind the launch of The Washington Post's new, also Django-based, platform. Never mind that it was the only (relative) production success at scale at the time. Never mind that The Washington Post hired one of the creators of Django for that initiative, to ensure it succeeded. Never mind all that: one success story was enough to pitch the raw idea to, and get approval from, management. There is a saying: if a man jumps off a 10-story building and survives without a scratch, what do you call it? An accident. What if he dusts himself off, climbs back up and jumps again with the same result? A coincidence. But what if he does it a third time? A habit. So make your decisions based on repeatable success, not on a one-hit wonder.

Another saying that always comes to mind when I see poor technology decisions is the one we've all heard from our parents as kids: "If everyone jumped off the bridge, would you just follow?" I had a conversation over a few drinks with the founder of a relatively successful startup, who was looking to add profile management to his suite of online applications. We went pretty deep into an architecture design discussion, talking about the best way to leverage Single Sign-On (SSO) and tie the profile into the multitude of offered services. Eventually the conversation turned to the technology stack of choice. He told me that node.js was a strong candidate (for all the right reasons), but to my surprise, he also thought MongoDB was the right fit. Given that his application suite was already using an RDBMS, and that the data to be stored was highly relational, why would anyone introduce a new, non-relational storage component to the list of existing tools in this situation? His answer: "I heard that node.js works well with MongoDB." Now, to be fair, that is a better response than "I read on that blog from that guy who said that MongoDB is webscale," but not by much. I'm not hating on MongoDB (in this particular case). In fact, at OmniTI we run some of the largest MongoDB clusters in the world; but when it comes to our high-volume node.js deployments, supporting 100,000 requests/sec, those have been backed by Riak and PostgreSQL. Why? Because those technologies provided a better solution to the particular problems at hand. Always select each architecture component to fit the role it is going to play; don't just go with the newest or shiniest solution, or the one that people claim is the best. Which brings me to my next point.
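To make "fit the role" a bit more concrete, here is a minimal sketch of what highly relational profile data looks like when you let a relational store do the relational work. It assumes a node.js service, the standard pg driver for PostgreSQL, and entirely made-up table and column names (users, sso_identities, service_grants); the specifics are illustrative, not a recommendation.

```typescript
// Hypothetical sketch only: the table and column names are invented for
// illustration. The point is that the relationships (user -> SSO identity ->
// service grants) live in the database, not in application glue code.
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// One join answers "which services does the person behind this SSO subject
// have?", a question a document store would push back into the application
// (or force you to denormalize and keep in sync by hand).
export async function profileWithServices(ssoSubject: string) {
  const { rows } = await pool.query(
    `SELECT u.id, u.display_name, s.service_name, s.granted_at
       FROM sso_identities i
       JOIN users u          ON u.id = i.user_id
       JOIN service_grants s ON s.user_id = u.id
      WHERE i.subject = $1`,
    [ssoSubject]
  );
  return rows;
}
```

None of this says MongoDB is the wrong choice everywhere; it says the shape of the data, not the pairing you heard about, should pick the store.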

People (yes, even technical ones) often make their technology choices based on marketing materials that usually have very little to do with reality. There are the infamous MongoDB performance benchmarks, supporting the company's claim that it was the fastest database, that were run without actually writing the data. Yeah. A database that doesn't guarantee data storage. That makes sense. Once you turn on guaranteed writes, performance plummets. Yet people make assumptions based solely on those product marketing numbers, negatively impacting whole organizations, from IT to finance. Consider this: a company with an architecture of a couple of hundred servers needed filesystem encryption on all of its application and data storage nodes to comply with security regulations. They selected a "promising" product, installed it, configured it according to the vendor's recommendations, and saw about a 25% performance drop, impacting core business flows. Not surprisingly, the company found this unacceptable. In conversations, they mentioned that they had expected a 4% performance decline. A strangely precise number. As it turned out, the company had never run filesystem encryption on any instance of the application, not even in development (not that it would have been sufficient or comparable); the performance expectation came straight from the marketing pamphlet that the vendor provided. This harsh realization came after the company had invested money in the product, made promises to its own investors, and set a budget for hardware, all based on a theoretical number in a brochure. Pro tip: as a general rule, the only performance numbers that matter are production performance numbers. "It worked fast on my laptop" is not a valid argument for... well, anything. Neither are benchmark numbers printed in a marketing brochure. Or on a bathroom wall, for that matter.
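For the curious, the benchmark gap described above largely comes down to write concern. Here is a rough sketch using the standard mongodb Node.js driver, with placeholder connection string, database, and collection names: with w: 0 the driver never waits for the server to acknowledge the write, which is roughly what those headline numbers measured; with w: 1 and journaling enabled you get the durability you actually wanted, and the performance you should actually plan for.

```typescript
// Hypothetical sketch: contrasts fire-and-forget writes with acknowledged,
// journaled writes. All names are placeholders; the API shown is the
// standard "mongodb" Node.js driver.
import { MongoClient } from "mongodb";

async function main() {
  // "Fastest database" mode: the driver does not wait for the server to
  // confirm the write, so errors and lost data go unnoticed.
  const fireAndForget = new MongoClient("mongodb://localhost:27017", {
    writeConcern: { w: 0 },
  });

  // Durable mode: wait for acknowledgement and for the write to reach the
  // journal. This is the number that belongs in a capacity plan.
  const durable = new MongoClient("mongodb://localhost:27017", {
    writeConcern: { w: 1, j: true },
  });

  await fireAndForget.connect();
  await durable.connect();

  await fireAndForget.db("bench").collection("events").insertOne({ t: Date.now() });
  await durable.db("bench").collection("events").insertOne({ t: Date.now() });

  await Promise.all([fireAndForget.close(), durable.close()]);
}

main().catch(console.error);
```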

What comes next is the justification of a technology or product selection, which brings me back to Node Summit, which, in turn, inspired this rant. A very positive takeaway was that large companies are adopting node.js and contributing back, which lends validity to the technology itself and promises good things to follow. But there were also presenters and panelists giving all the wrong reasons for node.js selection and adoption, which cheapens the achievements, as impressive as they may be. You really cannot take seriously any testimonial on the awesomeness of a decision to switch to node.js if it is based on the fact that "now, it takes much less than 18 months to change the background color for all the web properties." Similarly, if you claim that your application is faster in node.js without a CDN than its previous incarnation was with a CDN, I cannot take you seriously. In case it's not obvious: the problem was never the original technology selection; the problem is the decision-making process and the poor architectural decisions it produced. You can't blame a technology for your poor decisions. Similarly, you can't praise a technology because you didn't repeat the same mistakes the second time around. All that means is that you've learned something from your own mistakes. Or hired smarter people.

The "shininess factor" and pseudo math both do damage, and in the tech industry, you need to keep up with everything. But, with the number of different technologies that hit the market every month (if not day), each backed by media hype and promoted by investors, it is easy to fall into a trap of buying into a promise of a product instead of the product itself. A word of advice: don't base your decision on success stories but rather on the stories of failures. Those are told by people who had to deal with similar choices, experienced the pains firsthand, had to overcome them, and can provide the only factoid that matters—production experience and performance. So do your research, challenge assumptions, and remember the wise words of Bruce Lee: mistakes are always forgivable, if one has the courage to admit them.