If you haven't delved into the work of independent (i.e., self-published) authors yet, you're missing something. Freed from the formulaic constraints of what a traditional publisher thinks will sell, self-published authors are producing astonishingly original works of fiction. Last month, I reviewed a book that epitomizes that trend.
Titled Skin Cage, it's written in the first person from a most unusual perspective. The choice of that viewpoint, as well as the degree to which the author succeeded in accomplishing what can only be described as a challenging task, left me more than usually interested in conducting an interview. Happily, Nico Laeser said yes.
Now that’s an intriguing question, isn’t it? Just about every other computerized process has proven to be vulnerable, and as voting becomes even more technology-based, it becomes increasingly vulnerable as well. Computer systems are generic processing hosts, and to a computing platform, data is simply data. The fact that certain information tallies votes rather than credit card transactions does not make it any harder to hack. Moreover, the U.S. has a long history of documented voting fraud, so there’s no reason to assume that politicians, and their backers, have suddenly become paragons of virtue. Indeed, there’s plenty of evidence to the contrary.
When you come down to it, the only thing that’s different today is that altering votes might be easier, and that those motivated to do so may be harder to catch. So why aren’t we hearing more about that risk?
Another good question. But before we explore it, let’s add a few more observations to the pile.
Unicode marks the most significant advance in writing systems since the Phoenicians
James J. O'Donnell, Provost, Georgetown University
What is 11 1/8" x 8 3/4" x 2 1/4" and weighs 7.89 pounds? Among other things, the hardbound copy of the Unicode Standard 4.0, the Oxford English Dictionary of computerized language characters, numbers and symbols, contemporary and archaic, mainstream and obscure. The home of Khmer Lunar codes, Ogham alphabets and Cyrillic supplements. An alphanumeric expression of the means of human communication.
Me, October 26, 2003, Savoring the Unicode
Two weeks ago, I got a call from a reporter who had stumbled on two pieces I wrote in praise of new releases of the Unicode, the first in 2003 (on the occasion of the release of Unicode 4.0, referred to above), and the second in 2006, two releases later. The reason for the call was the release of Unicode version 8.0 by its stewards, the Unicode Consortium.
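The breadth those releases kept adding is easy to demonstrate for yourself. As a minimal sketch (using Python's standard `unicodedata` module; the specific code points are my own illustrative picks from the blocks mentioned above, not anything cited in the posts), you can ask the Unicode Character Database for the official name of characters from the Ogham, Cyrillic Supplement, and Khmer Symbols blocks:

```python
import unicodedata

# Illustrative code points from blocks the 2003 piece mentions:
samples = [
    "\u1681",  # Ogham block (U+1680-U+169F)
    "\u0500",  # Cyrillic Supplement block (U+0500-U+052F)
    "\u19E0",  # Khmer Symbols block (U+19E0-U+19FF), lunar-date symbols
]

for ch in samples:
    # unicodedata.name() returns the character's official Unicode name
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
```

Every assigned character in the standard carries a unique formal name like these, which is part of what makes the printed volume read like a dictionary of the world's writing systems.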
Well, it really is a great feeling to push that final “submit” button after you’ve uploaded the cover, the file, and all of the metadata and other information that Amazon asks for. And behold – only an hour later, my second book, titled The Lafayette Campaign, a Tale of Deception and Elections, magically appeared online. What a great feeling.
Now, don't everyone just run out and order it at the same time. Well, ahem, on the other hand, who's stopping you?
For some years now, I've been a Fellow of a European think tank called the OpenForum Academy, which focuses on all things open: open standards, open source, open data, open research, and so on. It's an affiliate of a non-profit called OpenForum Europe, which advocates for the same causes before the legislature and agencies of the European Union and those of its constituent states. The EU Parliament, as well as governmental agencies and legislatures in the U.K. and elsewhere, have been actively engaged on these topics, and have welcomed this input.
OFE Academy is made up principally of an invited group of academics, journalists, technical experts and others that are recognized for their leadership and expertise in the area of openness (you can find a list of them here). Recently, the Academy launched a Fellow interview series, and this week the interviewee happens to be me. Below I've pasted in a few outtakes from the much longer interview, which you can find here.
Once upon a time, standards were standards and open source software was open source software (OSS), and the only thing people worried about was whether the copyright and patent rules relating to the standards would prevent them from being implemented in OSS. Actually, that was complicated enough, but it seems simple in comparison now that OSS is being included in the standards themselves. Now what?
If this sounds unusual and exotic, it isn’t. In fact, code has been creeping into standards for years, often without the keepers of the intellectual property rights (IPR) policies governing the standards even being aware of it.
Last July, the UK Cabinet Office adopted a rule requiring government purchasers to limit their technology acquisitions to products that implement an established list of “open standards.” Last week, Sweden took another step down the same road as it further refined a list of information and communications technology (ICT) standards. That list currently comprises sixteen standards. A posting at the European Commission EU Joinup Web site reports that other standards are to be added this year.
It takes something truly ridiculous to make me write an out-and-out rant. Still, every now and then I read something that I can’t avoid responding to, because of the degree to which it misrepresents reality in an area I both care about and am knowledgeable in. Yesterday I had that experience when I read an article contending that proprietary eBook formats are good rather than bad, and that while “someday” we may have a truly interoperable eBook format, for now we should just sit back and appreciate proprietary formats in this area.
What rubbish.
Most engineers are aware that patent owners can sue those that infringe their patents. It may surprise them, however, to know that a patent owner can also sue someone for merely “inducing” another to infringe their patent. Luckily, in both cases, the patent owner only has a right to sue if the other party acted “knowingly.”
As you might expect, the circumstances and facts that are deemed to prove knowledge are the subject of much litigation and many legal opinions. Recently, the U.S. Supreme Court added another decision to the pile, and a distinction that the court drew on this question may surprise you. It should also particularly concern open source software developers, for reasons I’ll return to below.
For all its benefits, one aspect of open source software does cause headaches: understanding the legal terms that control its development and use. For starters, scores of licenses have been created that the Open Source Initiative recognizes as meeting the definition of an “open source license.” While the percentage of these licenses that are in wide use is small, there are significant and important differences between many of the popular ones. Moreover, determining what rights a license grants sometimes requires referring to what the community thinks it means (rather than to its actual text), and sometimes to the context in which the license is used.
Rather like interpreting the applicability of the U.S. Constitution to modern life, except that there is no Supreme Court available to call the coin toss when people disagree.