Once upon a time – oh, say fifteen years ago – the terms open standards and open source software (OSS) were often used interchangeably. Not because they were the same thing, but because many people weren't sure what either really was, let alone the differences between them. That was unfortunate, because at that time the two had little in common, and were developed for very different purposes.
Recently, many people (especially OSS developers) have begun referring to the software they develop as “a standard.” This time around they’re a lot closer to being right.
So, what’s going on here? And is it a good thing?
The answer to the last question is mostly yes. Sometimes it’s even all good, as in the case of Kubernetes, the cloud container orchestration software hosted by the Cloud Native Computing Foundation (CNCF) under the stewardship of The Linux Foundation (disclosure: the LF is a client of mine). We’ll come back to that example a little later.
First, though, let’s talk about what open standards and open software are, what they’re used for, and how they fit together. To do that, we need to talk about proprietary software, too.
Back in the bad old days, just about all software was proprietary, meaning that the customer received only binary code under a license agreement that prohibited reverse engineering. That made the software a black box, and it locked the customer into a single vendor’s products indefinitely, unless a way could be found to transfer the customer’s data over to a new vendor’s system. The solution to that quandary was found in standards.
Simply stated, the most common type of standard is created to provide a way for different systems and products to work together – that is, to “interoperate.” Often, a standard will be accompanied by a test suite that allows interoperation to be confirmed and certified, giving customers assurance that the hardware and software they want to “plug and play” together will in fact work as expected.
Interoperability standards are what permit your Lenovo laptop to run Linux as well as Windows. They also allow you to send files between all types of computers and devices via a cable or a WiFi or Bluetooth connection. All told, there are over 250 standards implemented in every laptop. Overall, many thousands of hardware, software, wireless and chip standards had to be developed to enable the technology-driven world we inhabit today.
Now let’s talk about open source software, and why it exists. OSS is generally available to anyone, who can then mostly use and change it as she sees fit (subject to certain restrictions and obligations, in the case of “copyleft” licenses like the GPL). Theoretically, we may someday live in a world where all software is open software, and where everyone uses exactly the same open software. In that homogeneous state of software nature, the need for “horizontal” standards (i.e., standards that allow different programs of the same type, like operating systems, to interoperate or be substituted for one another) would disappear.
Of course, we don’t live in such a world, and likely never will, since not every type of software is likely to attract a collaborative community. Nor would it be a good thing if there was no competition between developers of similar software, as innovation would likely stagnate. Instead, we live in a heterogeneous world, where most systems run both proprietary and open source software, both of which are constantly improving and being challenged by new programs. So, the need for standards – or some other means for different software programs to interoperate – remains.
Still, a lot has changed. But it hasn’t been easy, due to the fundamental philosophical incompatibility between open standards, which only provide value where the user does not depart from them, and OSS, where, by definition, the user has the right to change whatever they want. One result of this dissonance is that many OSS developers have been hostile to open standards. But without standards, how can their software interoperate with lots of other software programs?
The initial solution to this quandary was for the separate communities supporting different software programs within the same “stack” to work closely together (with standards, such collaboration is largely unnecessary, since the standard provides a blueprint for interoperability that the developers of any software can follow independently).
The first highly successful software stack benefiting from such close collaboration was the multi-layer stack built around the Apache web server (the “LAMP” stack of Linux, Apache, MySQL, and PHP). More recently, stacks have emerged in areas such as storage, network virtualization, containers, cloud computing, and more. Programs in each of these stacks need to work together, and we speak of the core programs in such stacks as being “closely coupled.” The result is something like the integration of software we see in every type of Apple device.
This might sound like a sensible and superior solution except for one thing, and that’s the potential for a lack of diversity and competition. Designing in and maintaining close coupling between layers in a stack where each layer includes multiple product alternatives is time-consuming, so in practice only a few programs in each layer receive that treatment. Granting that favored status is up to the technical leadership within each layer. Not surprisingly, the same meritocratic approach that dominates within each program development community also applies between communities. As a result, each new open software community strives to generate the kind of technical street cred that will earn it one of those hard-won, favored places upstream and downstream in its own particular stack.
That brings us up to the present, which would be a good time to summarize where the last few decades of software development and interoperability have led us before we take a crack at predicting what the future may hold. Here we go:
- 1946 – c. 1981: Proprietary hardware rules; software comes bundled with the hardware or is custom developed by or for the user. IBM dominates, and its competitors use open standards as a way to try to break that dominance.
- 1982 – c. 2000: Mainframe, mini, and Apple hardware are proprietary and locked; “Wintel” hardware/software platforms are open in the sense that multiple manufacturers can build and sell them and ISVs are broadly able to develop and sell software to run on those platforms. Standards permit many products to interoperate, and a degree of freedom exists to switch from one vendor’s system to another, although not, usually, to abandon the Wintel platform entirely. Microsoft has replaced IBM in its dominance of the most important platforms (desktops and servers), and its competitors are developing UNIX variants in their struggle to break that dominance.
- 2000 – c. 2016: The rise of Linux and other open software breaks the stranglehold of traditional proprietary vendors in many areas; software within a stack is tightly coupled and vendors compete above and around the OSS level.
- 2017: The advent of what we might call “open hybridization,” by which I mean the well-considered marriage of open source software and open standards in an effort to reap the benefits of both worlds.
To get an idea of what that might mean, let’s take a look at Kubernetes (remember Kubernetes?).
What’s been going on at Kubernetes is this: instead of simply choosing upstream winners and losers, the Kubernetes commercial supporters and development community have defined the areas where Kubernetes needs to interoperate with other products, and then written the APIs (application programming interfaces) that developers of other products can code to in order to achieve interoperability. In other words, the Kubernetes developers have written open standards for the interfaces between their open software and the other open and closed software that Kubernetes users may also want to run.
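To make that concrete, here is a minimal, illustrative sketch of what “publishing an interface others can code to” looks like in practice. This is not the actual Kubernetes CRI, CNI, or CSI definition; the package, type, and method names below are hypothetical, chosen only to show the pattern.

```go
// A minimal, illustrative sketch (NOT the actual Kubernetes CRI, CNI, or CSI
// definitions): a project publishes an interface, and any third-party
// implementation of that interface can interoperate with the core system.
// All names in this package are hypothetical.
package containerapi

import "context"

// ContainerRuntime is a hypothetical interface a container-management project
// might publish so that competing runtimes can plug in without needing to be
// hand-picked as a "favored" upstream dependency.
type ContainerRuntime interface {
	// CreateContainer creates (but does not start) a container and returns its ID.
	CreateContainer(ctx context.Context, spec ContainerSpec) (id string, err error)
	// StartContainer starts a previously created container.
	StartContainer(ctx context.Context, id string) error
	// StopContainer stops a running container within the given grace period (seconds).
	StopContainer(ctx context.Context, id string, gracePeriodSeconds int64) error
	// RemoveContainer deletes a stopped container.
	RemoveContainer(ctx context.Context, id string) error
}

// ContainerSpec is a deliberately simplified description of what to run.
type ContainerSpec struct {
	Name  string
	Image string
	Env   map[string]string
}
```

The point of the pattern is the same as that of a written standard: the interface, not a private agreement between two development teams, becomes the blueprint that any vendor can implement.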
In order to achieve confidence among those users, Kubernetes has taken yet another page out of the open standards playbook, and launched a certification program as well. That way, distributors can prove to their customers that their Kubernetes product will indeed be able to deliver the interoperability that the user expects.
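As a hedged illustration of how such a certification works, here is what a conformance-style check against the hypothetical interface above might look like. The real Certified Kubernetes program runs a far larger suite of tests, but the idea is the same: any implementation that passes the same published checks can be certified as interoperable.

```go
// A hypothetical conformance-style check (NOT the real Certified Kubernetes
// suite, which is far larger): any implementation of the ContainerRuntime
// interface sketched above can be run through the same lifecycle checks, and
// passing them is what a certification mark would attest to.
package containerapi

import (
	"context"
	"testing"
)

// runConformance exercises the basic create/start/stop/remove lifecycle that
// every conforming implementation is expected to support.
func runConformance(t *testing.T, rt ContainerRuntime) {
	ctx := context.Background()

	id, err := rt.CreateContainer(ctx, ContainerSpec{Name: "probe", Image: "example/image:1.0"})
	if err != nil {
		t.Fatalf("CreateContainer failed: %v", err)
	}
	if err := rt.StartContainer(ctx, id); err != nil {
		t.Fatalf("StartContainer failed: %v", err)
	}
	if err := rt.StopContainer(ctx, id, 30); err != nil {
		t.Fatalf("StopContainer failed: %v", err)
	}
	if err := rt.RemoveContainer(ctx, id); err != nil {
		t.Fatalf("RemoveContainer failed: %v", err)
	}
}
```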
The result is that the Kubernetes ecosystem has the potential to become larger, more varied, and more competitive, because new Kubernetes developers will find it easier to enter the marketplace, integrators will find it easier to integrate Kubernetes into their package offerings, and users will enjoy wider choice and easier platform maintenance.
All of which, we will doubtless agree, is a very good thing.
The emergence of such a strategy on the part of Kubernetes is representative of what might be seen as the maturation of a marketplace that is constantly seeking the best means to deliver a broad range of interoperable products. Such products protect customers from vendor lock-in, as well as from abandonment if their vendor goes under or changes its technological direction. For over a century, that goal was achieved through the development and adoption of open standards. More recently, it has been partially accomplished through the development of open software.
Now (finally), users can benefit from the marriage of both of these open worlds.
Best of all, that’s being achieved in a way that puts the capabilities of each approach to best advantage, while avoiding their respective constraints. That’s exactly what a logical and well-considered approach should bring.
And make no mistake about this: there’s an important lesson here that both open standards and open source collaborations would ignore at their peril. Those collaborative consortia and foundations that take an agnostic approach, utilizing both open source and open standards disciplines to provide solutions, will be the ones that succeed. Conversely, those that remain rooted in one discipline to the exclusion of the other will find that the most interesting and important projects go elsewhere.