OpenSource and the Free Software Foundation have been around for a long time, and starting in the early 90s more options became available. Linux became a somewhat common word, and the use of free software became, step by step, widespread not only for home use but in universities and corporations. This post is not about the history of OpenSource but about the concepts I subscribe to in my decision to use OpenSource projects.
There is no free lunch.
Indeed there is not. If you think that free-to-download software, be it an operating system or an application, costs nothing to run, then I have a bridge in Brooklyn that I want to sell to you. But the cost is indeed less. There is no cost for the software itself. In most cases you pay for the distribution, which is the media, the packaging, and the shipping. And if you do not even want to pay for that, you can always download it from the Internet; the cost there is bandwidth.
So, what other, more important cost is there? People. And these people cost more than FTEs focused on Windows. (We could talk about Sun Solaris, or AIX, or HP-UX, which are not free and are also versions of Unix, and which enter into the people-cost discussion, but since those are not OpenSource we will ignore them.) A decent Unix sysadmin with deep knowledge of Linux or FreeBSD runs, depending on experience, at no less than $65K/y, and that would be for someone entry level. A comparable sysadmin for Windows will cost $45K/y. So yes, the person is more expensive, but for every 10 Windows servers you need 1 FTE to administer them, while for Unix-based systems you need 2 for every 40 or 50. The math works out that to fully support a cluster of 50 Windows servers you need to spend $225K/y on people, not including license fees. For the same Unix cluster, $130K/y, and there are no license fees.
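The staffing math above can be sketched in a few lines. This is purely illustrative: the salaries and servers-per-admin ratios are the post's own assumptions, not industry data.

```python
import math

# Assumptions taken from the paragraph above -- not real market data.
WINDOWS_SALARY = 45_000        # $/year per Windows sysadmin
UNIX_SALARY = 65_000           # $/year per Unix/Linux sysadmin
WINDOWS_SERVERS_PER_FTE = 10   # 1 FTE per 10 Windows servers
UNIX_SERVERS_PER_FTE = 25      # 2 FTEs per ~50 Unix servers

def staffing_cost(servers: int, servers_per_fte: int, salary: int) -> int:
    """Yearly people cost to run `servers` machines, rounding FTEs up."""
    ftes = math.ceil(servers / servers_per_fte)
    return ftes * salary

print(staffing_cost(50, WINDOWS_SERVERS_PER_FTE, WINDOWS_SALARY))  # 225000
print(staffing_cost(50, UNIX_SERVERS_PER_FTE, UNIX_SALARY))        # 130000
```

Note that the license fees the post mentions would come on top of the Windows figure, widening the gap further.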
To me, transparency is a primary element of security, and transparency here means being able to audit the code. As a CTO, one of my roles is to be mindful of the company's assets, both digital and physical, so having access to the code to verify its security is important. Do I look at the code each time I install a server? No, but once upon a time I did. And I try to follow up on security patches; even when I do not, one of my two guys – not five – will.
But that is at the OS level. At the application level it works the same way. The code of any OpenSource application you install is available for you to scrutinize. Moreover, you are encouraged to do so and to report any findings … especially in a public way. Transparency in this case is about the lack of secrecy. The software is secure because it is built securely and because security exploits are widely communicated for the community's benefit. It is also good to provide a security patch along with the disclosure, but not obligatory. You are encouraged to contribute.
I am big on reusability. Reusability means several things to me, but most importantly from a business point of view – not to be confused with the purely technical meanings of reusability – it means expediency. If I can start a project 50% done by basing it on an OpenSource project, then why not? I have the code and I can modify it. It means getting to market faster. It means less development cost. It means starting from a more-or-less stable code base and, if done right, retaining that stability.
In one of my previous posts I discussed buy vs. build, and the gist was that if it is core, you build. How does starting from 50% done relate to that? By the nature of OpenSource you OWN the modifications, but you need to disclose that the base is OpenSource.
But reusability does not need to apply to entire projects. Maybe parts of a given project are exactly the components you are looking for. Reusability is core to OpenSource, in all sorts of ways.
Quality is a hard concept in OpenSource and can be argued both ways, against and in favor.
Against: A software-producing company has a dedicated QA team assuring the quality of the product. They know the product inside and out.
In Favor: Thousands of users are proxies for QA teams. And thousands of users can find more issues than a few dedicated QA guys.
It is not about the quality of the dedicated team. I have worked with dedicated QA teams throughout my career and some of the folks on those teams were the best; but you cannot argue with sheer numbers and probability. The more people looking at the code or using it, the better the chances of finding obscure issues.
Moreover, many of the bugs found are reported with the solution included – even the code that fixes the bug – which accelerates the rollout of the fix. Quality in OpenSource is a point of pride the community takes very seriously.
Community is an important aspect of OpenSource. Without the community there would be no OpenSource, and without the community there would be no quality in OpenSource projects. The community helps by providing solutions to your specific problem or to similar ones.
I used to be a Red Hat beta tester for JVMs. Since I was a Java developer and used Red Hat, I always installed the Blackdown JDK and tested it as part of my regular development duties. I encountered problems and posted them, and if somebody had a solution they would share it with me. I did the same: every other day, as I posted issues, I would scan the posts and provide answers if I had them. To a lesser extent I did the same with FreeBSD, which I also used.
With closed-source software, the wait for an answer is longer, even in the cases where you paid for ongoing support.
And this is where things get a little dicey. Is OpenSource more stable than closed source? Part of the answer resides with arguments similar to those in the QA portion above, and part of it with the community argument as well. I would concede that in the early days of OpenSource only a small percentage of projects were highly robust, but that is in the past.
Machines running Linux or FreeBSD, at least in my experience, have stayed up error-free, even in the face of configuration changes, longer than machines running Windows, Solaris, HP-UX, etc. Part of the robustness comes from being able to use commodity computing hardware. There is something to be said for using the lowest common denominator in terms of hardware.
Another element that brings robustness is, as stated above, transparency. Thousands of people looking at the code and contributing does make a dent – in the positive sense – toward stability and robustness. But robustness also comes from choices; as a CTO I have to choose what is best for my company, and in some cases that does imply using bleeding-edge components.
Robustness is a choice, and OpenSource allows you that choice.
There are other arguments, including philosophical ones with which I totally agree, but these are the most important to me as a CTO: cost, transparency, reusability, quality, community, and robustness.