On open source security

In the current debate about trusting software that uses cryptographic techniques, the position that Open Source software is inherently more trustworthy than commercial software is repeated over and over again. While I personally think that the free software movement has done a great deal to advance the state of computing, and is among those accomplishments of the last century the human race can actually be proud of, I do not follow the reasoning that the mere availability of a cryptographic product's source code says anything about its trustworthiness, particularly about the security or quality of its cryptographic processes.

One of the main differences between medium-sized software endeavors in commercial software companies and the open source world is the adherence to processes. Commercial software companies have a rather clear chain of responsibility and defined allocation of resources. This enables them to define specific processes that need to be adhered to in order to build their application. Potentially, this includes various stages of review and validation (from basic design drafts to implementation specifics) or testing (from source-level unit tests to functional tests of parts and, ultimately, the full application). Most free software projects do not have the luxury of developers who contribute test suites, or of sharp minds with the time to review design specifications and understand the impact a specific change might have.

Serious testing and quality assurance take time. The process of software testing is resource intensive (either you need qualified, good testers, or you need developers who keep the testing suite in sync with the product). And the release schedule needs to accommodate testing; this means longer release cycles, slowing down overall development.

Whilst I am not saying that all commercial software vendors stick to a rigid set of processes that ensure their quality, I think that they are in a better position to actually follow through on such processes if they choose to align themselves with such goals.

Also, as anybody who is involved in computer programming in a serious fashion will gladly tell you, discovering bugs is hard work. It takes dedication (even stubbornness) to hunt through code to find those situations where it doesn’t behave as intended. Many a software product has suffered delays because there were still critical bugs to be resolved. In the same vein, it is even harder to spot changes that are deliberately introduced to thwart specific aspects of the product whilst leaving most everything else intact. It requires very detailed knowledge of the programming language and tools in use, of the desired outcome, a deep understanding of the algorithms involved, and a good set of tools to validate and verify that things are as expected.
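To illustrate how small such a deliberate change can be: the sketch below (the function and names are my own, purely illustrative) verifies an HMAC tag, and a one-line substitution turns the constant-time comparison into an ordinary `==`. Every functional test still passes, yet the comparison now leaks timing information to an attacker forging tags byte by byte.

```python
import hashlib
import hmac


def verify_mac(key: bytes, message: bytes, tag: bytes) -> bool:
    """Check a SHA-256 HMAC tag against a message."""
    expected = hmac.new(key, message, hashlib.sha256).digest()
    # The sabotage: `==` compares byte by byte and stops at the first
    # mismatch, so response times reveal how many leading bytes of a
    # forged tag are correct. The intended line was
    #     return hmac.compare_digest(expected, tag)
    # and any test suite that only checks accept/reject behavior
    # passes either way.
    return expected == tag
```

A reviewer reading the diff sees a comparison replaced by another comparison with identical results; only someone who knows why `compare_digest` exists will object.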

This brings us to yet another topic: trusting your contributors. What motivates a person to contribute changes to the application? What kind of skill set do they have; how deep is their understanding of the techniques and tools? Open source projects seldom do deep background checks on their contributors, or have much outside information about the people behind pull requests.

The next topic in that context is trusting your tools. The compiler that you use daily: are you certain that it does not alter your algorithms as it transforms your source into another format? The libraries that you link against (maybe even dynamically, extending your trust to any future changes in those libraries): what guarantees that you know all their functions, all of their side effects? Who is more likely to invest the significant resources required to build a trusted toolchain?

Of course, having source code available offers a number of options to the users of that software. That includes deep and detailed inspection and audits. But I think the reality is that only very few, very select products are ever placed under such scrutiny. And even then, the results can only be applied to one very specific version, in one very specific configuration. Any changes would have to be subjected to a similar regimen to have any significance in establishing the trustworthiness of a codebase.
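The point that audit results bind to one exact version can be made concrete with a pinned digest: the audited artifact is identified byte for byte, and any rebuild, patch, or configuration change produces a different hash and falls outside what was audited. A minimal sketch, assuming you have the digest the auditors published (the helper names here are my own, and the pinned value in a real deployment would come from the audit report):

```python
import hashlib
import hmac


def sha256_file(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def matches_audited_release(path: str, pinned_digest: str) -> bool:
    """True only if the file is byte-for-byte the audited artifact."""
    # compare_digest avoids leaking match length through timing,
    # though for a local check this is mostly a good habit.
    return hmac.compare_digest(sha256_file(path), pinned_digest)
```

Note what this does and does not establish: it tells you that you hold the same bytes the auditors examined, nothing more; it says nothing about the quality of the audit itself, which is exactly the second question raised below.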

Would you hazard a guess at the percentage of code in the free operating system you’re using that has been subjected to such scrutiny? And how do you know you can trust the entity that did those audits?

Unless you have clear answers to both questions, open source software is not more trustworthy than closed source software. Not less, either. But also: not more.

One thought on “On open source security”

  1. I do not see the picture as dark as that: do not overrate “processes” in so-called “professional software development”.

    Closed Source allows for pretending there were security audits, with no one able to check independently (no, TÜV badges do not impress me). Open Source, on the other hand, allows for independent and repeatable audits. It is not that these audits really have to take place, but that they could happen at any time, unexpectedly, and by gifted people from academia. And while it takes experts to do that, there are some, and the number of security-related pieces of software is limited. So while I cannot audit the software I use myself, I have solid trust that someone more capable than me already did, and would speak out loudly if something fishy appeared.

    In most companies, there is no culture of speaking out about things found in audits; they are merely fixed silently (if at all) in some future release.

    I myself prefer an Open Source product that is not audited, but could be at any time without the permission of any involved party, over a Closed Source product where I have to believe promises.

    But you are right that Open Source products are not immune to tinkering, and never will be.
