First, the good news: last week, Google, Microsoft, Twitter and Facebook announced the Data Transfer Project, inviting other data custodians to join as well. DTP is an initiative that will create the open source software necessary to allow your personal information, pictures, email, etc. to be transferred directly from one vendor’s platform to another, and in encrypted form at that. This would be a dramatic improvement over the current situation where, at best, a user can download data from one platform and then try to figure out how to upload it to another, assuming that’s possible at all.
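To give a flavor of what that service-to-service model looks like, here is a minimal Python sketch. DTP itself is written in Java and its real adapter interfaces differ; the Exporter/Importer names, the Record model, and the toy XOR "encryption" below are all illustrative inventions, not the project's actual API.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from itertools import cycle
import secrets

@dataclass
class Record:
    """One portable item, e.g. a photo or contact, in a vendor-neutral form."""
    kind: str
    payload: bytes

class Exporter(ABC):
    """Reads a user's data out of the source platform."""
    @abstractmethod
    def export(self, user_id: str) -> list[Record]: ...

class Importer(ABC):
    """Writes records into the destination platform."""
    @abstractmethod
    def import_records(self, user_id: str, records: list[Record]) -> None: ...

def _xor(data: bytes, key: bytes) -> bytes:
    # Stand-in for real encryption; DTP specifies proper key handling and
    # authenticated encryption. A toy XOR keystream keeps this sketch
    # dependency-free. Never use this for anything real.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

def transfer(user_id: str, source: Exporter, dest: Importer) -> None:
    """Move data service-to-service; the user never downloads anything."""
    key = secrets.token_bytes(32)  # ephemeral per-transfer key
    in_flight = [Record(r.kind, _xor(r.payload, key))
                 for r in source.export(user_id)]   # encrypted leaving source
    restored = [Record(r.kind, _xor(r.payload, key))
                for r in in_flight]                 # decrypted at destination
    dest.import_records(user_id, restored)
```

The appeal of the design is that each platform only needs one exporter and one importer written against the shared data model, rather than a separate pairwise converter for every other platform it might exchange data with.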
So what’s the bad news, and what does a hammer have to do with it?
As old readers will know and new ones can tell from the left column, one of the things I do besides lawyering is writing satirical, political cybersecurity thrillers - four to date, with a fifth due out within a couple of months. Recently, Tantor Media offered me a contract to bring out the first three titles in audio. Tantor is an imprint of the largest publisher of audiobooks in the world, and I was delighted to say yes. Now the first title, The Alexandria Project: A Tale of Treachery and Technology, is available at Audible, Amazon, and everywhere else audiobooks are sold.
My, my, what a difference a decade makes. Or for some, maybe not.
Ten years ago, Microsoft was led by Steve Ballmer, who very much viewed open source as the barbarian at the software giant’s gates. The feeling was emphatically reciprocated by most in the free and open source software (FOSS) community, which viewed Microsoft as a threat to the very existence of FOSS. And if Ballmer had had his way back then, they would probably have been right.
Ask any journalist to pick an adjective to use in connection with standards development and the answer will invariably be "boring." But according to a recent New York Times article (yes, it also used that word - as well as "wonky"), the process of creating standards just became a whole lot more interesting - at least when it comes to the blockchain. The reason? A standards working group may have been infiltrated by state actors bent on embedding security flaws into the very standards being created for the purpose of preventing attacks.
There’s a belief in some open source circles that standards can be consigned to the ash heap of history now that OSS development has become so central to information technology. While it’s true that many use cases that would once have called for open standards can today be addressed with OSS, that approach can’t solve every problem. Most obviously, while resolving interoperation issues through real-time collaboration among upstream and downstream projects may meet the need within the same stack, it doesn’t help that stack communicate with other software.
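To make the distinction concrete, consider a toy sketch. The record format below is invented purely for illustration, but the point stands for any agreed wire format: two stacks that share no code at all can still interoperate, so long as both conform to the same published specification.

```python
import json

# A hypothetical "standard" record format, invented for illustration.
# Neither side needs the other's code, only agreement on this shape.

def emit_reading(sensor_id: str, celsius: float) -> str:
    """Producer in stack A: serialize to the agreed wire format."""
    return json.dumps({"version": 1, "sensor": sensor_id, "temp_c": celsius})

def accept_reading(wire: str) -> tuple[str, float]:
    """Consumer in stack B: parse and validate against the same spec."""
    msg = json.loads(wire)
    if msg.get("version") != 1:
        raise ValueError("unsupported version")
    return msg["sensor"], float(msg["temp_c"])

print(accept_reading(emit_reading("greenhouse-3", 21.5)))
```

Shared code keeps one stack coherent; a shared specification is what lets unrelated stacks talk to each other.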
Blockchain technology is an architecture where collaboration on software alone will often not suffice to meet the challenge at hand.
When it comes to the blockchain, most people fall into one of two camps: the hand-wavers who think the blockchain will disrupt and benefit the world as profoundly as the Internet did, and those who are scratching their heads, unable to see how that could be possible. I confess that I fall more into the second camp than the first, but I do recognize that blockchain technology can provide a far better tool for tackling some challenges than any we’ve had to work with before.
I identified just such a challenge many years ago when the Internet was really taking off, and suggested that individuals needed to seize control of their personal information before commercial interests ran off with it instead, locking it away inside proprietary databases. The date of that article? February 2004, the same month that a little Web site called Facebook went live. The problem then was, and still is, that the critical keys to avoiding data lock-in are standards, and the process that develops those standards wasn’t (and still isn’t) controlled by end users.
Here's how I posed the challenge in that article:
Have you ever wondered what it would be like to read a book as it’s written? Or better yet, be able to make suggestions as the book develops and see your ideas help shape the result? Well, here’s your chance. If you’re already a Friend of Frank at my author site, or want to become one, that’s what you’re invited to do. As the book evolves, I’ll ask for your advice, and answer any questions you may have. I’ll also give you the inside scoop about how and why each chapter is written as it is. Sound interesting? Great, because I’ve just posted the Prologue and First Chapter below. To read the future chapters for free as they’re posted, all you have to do is become a Friend of Frank.
Once upon a time – oh, say fifteen years ago – the terms open standards and open source software (OSS) were often used interchangeably. Not because they were the same thing, but because many people weren't sure what either really was, let alone the differences between them. That was unfortunate, because at that time the two had little in common, and were developed for very different purposes.
Recently, many people (especially OSS developers) have begun referring to the software they develop as “a standard.” This time around they’re a lot closer to being right.
So, what’s going on here? And is it a good thing?
Those who have followed the spread of open source software (OSS) know that a bewildering thicket of OSS licenses was created in the early days. They also know that although the Open Source Initiative was formed in part to certify which of these documents should be permitted to call itself an “open source software license,” that didn’t mean that each approved license was compatible with the others. Ever since, it’s been a pain in the neck to vet code contributions to ensure that an OSS user knows what she’s getting into when she incorporates a piece of OSS into her own program.
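One convention that has eased the vetting burden is the SPDX-License-Identifier header, a standardized one-line license declaration placed at the top of a source file. Here is a minimal sketch of the idea: the allowlist is a hypothetical policy invented for illustration, and real vetting tools (such as scancode or FOSSology) do far more than match a header, since license compatibility is much subtler than a flat list.

```python
from pathlib import Path
import re

# Licenses this hypothetical project's policy allows. Any real policy
# would be set by counsel; compatibility is more subtle than a flat list.
ALLOWED = {"MIT", "Apache-2.0", "BSD-3-Clause"}

SPDX_RE = re.compile(r"SPDX-License-Identifier:\s*([\w.\-+]+)")

def check_tree(root: str) -> list[tuple[str, str]]:
    """Flag source files whose declared SPDX license is absent or off-policy."""
    problems = []
    for path in Path(root).rglob("*.py"):
        head = path.read_text(errors="ignore")[:2048]  # headers live up top
        match = SPDX_RE.search(head)
        if match is None:
            problems.append((str(path), "no SPDX identifier"))
        elif match.group(1) not in ALLOWED:
            problems.append((str(path), f"disallowed license {match.group(1)}"))
    return problems

if __name__ == "__main__":
    for path, why in check_tree("."):
        print(f"{path}: {why}")
```

Even a crude check like this shows why machine-readable license declarations matter: without them, every incoming contribution means a human reading license text.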
In the intervening years, more and more entities – private, public and academic – have decided to make public the increasingly large, various and valuable data sets they are producing. One resulting bonanza is the opportunity to combine these data sets in order to accomplish more and more ambitious goals – such as informing the activities of autonomous vehicles. But what if the rules governing these databases are just as diverse and incompatible as the scores of OSS licenses unleashed on an unwitting public?
My latest book, The Turing Test, is out, and the first reviews are in. Here are a few samples from the reviews (all five-star) posted at Amazon so far:
Beyond any shadow of doubt, 'The Turing Test' is a worthy addition to the Frank Adversego series and more than satisfied my every expectation ... For me, 'The Turing Test' is a stealthier creature. It packs its punches in a different but equally effective manner, delivering a terrific tension and suspense that ebbs and flows throughout a lengthy narrative peppered with twists, turns and shocking surprises ...