How do we decide whether a technology or practice is effective? Go to conferences, read blogs, pick up a book or two? We listen to the speakers, read the lessons learnt by the bloggers, and trawl through the detailed texts where authors offer up their opinions. This is the techie way.
So ingrained is this method that in interviews we ask candidates “how do you learn about new stuff?” and tick off a mental list with those three at the top. This is how we improve.
Well, all of the above are deeply flawed ways to ensure that what you learn is actually correct. They are littered with confirmation bias, appeals to authority, anecdotal evidence and a host of other critical-thinking and logical faux pas.
What are we really doing? We’re listening to the people we like the sound of, the ones who confirm our ideas - who tell us Vi is better than Emacs, or tabs are better than spaces, or Apple better than Android. We’re taking their word as gospel, often with very little critical scrutiny (mainly because they tell us what we already agree with), and then holding it up as truth. It’s hardly balanced. It’s hardly going to give us the best answers.
Techies are really bad at this. When Ruby on Rails first came out, everyone was pointing to blog posts about 30% productivity improvements. At the time, anyone who pointed out that they’d heard it all before was dismissed as a cynic who had fallen behind the trend. Then, a few years later, people started pointing to blog posts that suggested failure, claiming that Scala was the thing because Twitter had abandoned Ruby for it. We’re seeing the same pattern again with the recent spate of bloggers talking about abandoning NoSQL for MySQL.
Why do we do this? Because we’re suffering from all those previously mentioned flaws. We cherry-pick the evidence (mostly anecdotal) to support whatever our case is at the time. We often overstate the case at first and then overstate the failures later. And even when the case is valid, we extrapolate the lessons from one particular environment and try to crowbar them into our own, even if the situations are radically different. Sometimes we do the opposite: we ignore the information coming out of projects similar to ours because it doesn’t reflect our position. Want evidence? How many of us truly hold up PHP as an acceptable technology, despite the fact that several of the world’s biggest sites (Facebook, Wikipedia, the BBC) use it?
These are hardly the actions of thoughtful professionals.
This judgement is a little unfair; we are using the best tools we have. But our current methods are not sufficient for making the best decisions. We need better data and a wider lens on the global state of things. It’s great that companies and organisations are becoming more open about their tech choices, but to understand whether an individual project’s experience generalises we need to become better at aggregating the data across projects. For every blog claiming that Ruby increased a team’s productivity by 30%, we need the blogs that tell of time lost ripping it out because it didn’t perform or was too buggy or whatever. For every large web project that claims success with ASP.NET MVC, we need the ones that demonstrate the benefits of PHP. Then, as individuals, we need to drill in, look at the factors behind those decisions, and decide whether they apply to our own project.
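To make that aggregation idea concrete, here is a minimal sketch, in Python with entirely invented data, of what tallying experience reports per technology and then drilling into their contexts might look like. The Report type, its fields and every entry are illustrative assumptions, not real survey data or a real API.

```python
# A toy sketch of the aggregation idea: tally hypothetical experience
# reports per technology instead of trusting any single anecdote.
# All data below is invented for illustration.
from collections import Counter
from dataclasses import dataclass


@dataclass
class Report:
    technology: str
    outcome: str   # "success" or "failure"
    context: str   # the factors behind the decision


reports = [
    Report("Ruby on Rails", "success", "small team, greenfield web app"),
    Report("Ruby on Rails", "failure", "high-traffic site, ripped out over performance"),
    Report("PHP", "success", "very large site, mature tooling"),
]

# Count the claims on both sides so a lone 30% productivity story is
# weighed against the reports of the same technology being abandoned.
tallies = Counter((r.technology, r.outcome) for r in reports)
for (tech, outcome), count in sorted(tallies.items()):
    print(f"{tech}: {count} {outcome} report(s)")

# Then drill into the contexts to judge which ones resemble your project.
for r in reports:
    print(f"{r.technology} ({r.outcome}): {r.context}")
```

The point isn’t the code; it’s the shape of the exercise: count the claims on both sides first, then read the contexts to decide which of them actually look like your situation.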
Once we can obtain a view that is more balanced and less biased (though not necessarily accurate - it’s still anecdotal evidence, remember, just more of it), we, as developers, will be able to make more informed and, hopefully, better decisions for our individual situations.