
  • In a blog posting earlier this year, Ian Murdock, Sun Microsystems’ vice president of cloud computing strategy and Debian GNU/Linux founder, wondered what might emerge as the cloud equivalent of the Linux distribution.

    Murdock pointed out that the cloud computing world today resembles the early days of Linux, during which dabblers with a surplus of time and motivation could assemble and integrate their way to a Linux platform.

    However, it wasn’t until the Linux distributions emerged and whisked away much of the code-cobbling grunt work that the platform became broadly useful.

    Certainly, the assortment of different Web-based services out there could be made more useful through a bit of Linux distributor-like massaging, but a menu of mashed-up Web services won’t add up to a platform comparable to Linux.

    The problem with that mashed-up picture–and the root, I suspect, of much of the anti-cloud sentiment that emanates from those without anything Web 2.0 to sell–is that it offers too little provision for packing everything up and moving it behind your firewall, or behind your laptop cover, if that’s what you’d prefer.

    Lately, the term “private cloud” has been turning up quite a bit, and, in many cases, I’ve seen the phrase met with derision, as though people who wish to organize their workloads a la Amazon or consume software in the style of Salesforce without entering into a blood pact with such an organization don’t “get it.”

    The power of Linux as a platform lies in its capacity not only for taking on almost any task, but also for attacking those tasks almost anywhere. Certainly, you’d be hard-pressed to achieve the level of operational efficiency of an Amazon or a Google, but that’s no reason not to seek a more portable future for the cloud.

    When I consider what might emerge as the cloud-era equivalent of the Linux distribution, the most likely candidate seems to me to be the Linux distribution itself.

    And, as I learned during a talk by Red Hat CTO Brian Stevens at last month’s IDC Cloud Computing Forum West, the Linux world’s most prominent distributor is working toward bringing just such a reality to pass.

    Through a collection of separate projects, clustered around Red Hat’s oVirt virtualization management project, the company has been working toward extending Linux’s layers to encompass a complete cloud reference implementation.

    At the IDC event, Stevens roughly outlined his company’s plans to make this reference implementation available in the form of a publicly accessible test cloud running on Red Hat-hosted machines. At the same time–which, according to Stevens, should be around this summer–Red Hat will make the reference implementation available in a downloadable form suitable for installing on a pair of servers on one’s test bench.

    As Stevens hastened to point out, Red Hat is not out to become a cloud provider itself; rather, the company views its plans to advance the state of cloud-building as the logical next step in the platform continuum that Red Hat, Ian Murdock and others began in the early ’90s.

    Since it began life as a bare kernel intended for educational purposes, Linux has steadily accrued higher-level stack layers, which now include the capacity for hosting virtual instances of itself or other operating system environments.

    It stands to reason that Linux should continue scaling up, into a building block for any number of private, public or test clouds, each bearing its own set of the slight adaptations through which all technologies evolve.

    UPDATE: Just a few hours after I posted this, I caught sight of Mark Shuttleworth’s Ubuntu 9.10 name and goals announcement message. One of the goals for that release, and for the 9.04 version that will precede it, is cloud-building capabilities along the lines of what Red Hat has in mind:

    What if you want to build an EC2-style cloud of your own? Of all the
    trees in the wood, a Koala’s favourite leaf is Eucalyptus. The
    Eucalyptus project, from UCSB, enables you to create an EC2-style cloud
    in your own data center, on your own hardware.

  • Rules: Once you’ve been tagged, you are supposed to write a note with 20 tech-related things, facts, habits or ideas about yourself. At the end, you will tag no one, since you should have forsworn chain letters years ago.

    1. In 1998, I spent way too much of my meager salary on a Psion 5 handheld computer. The maddening orphaning of that sweet piece of hardware made me appreciate the vendor-emancipating goodness of open source platforms.
    2. I purchased my first home router (one of those timeless blue Linksys numbers) because the home OS love of my life at the time, BeOS, wouldn’t work with my DSL provider’s PPPoE.
    3. Be’s focus shift away from the desktop operating system market, and its subsequent sale to Palm, meant the mothballing of BeOS and gave me another reason to appreciate open platforms.
    4. About a year later, I stepped up to a wireless (and, then, for me, relatively costly) Linksys router, even though my apartment was more than small enough for an Ethernet cable to stretch wherever I might have roamed.
    5. I so loved following the development arc of Windows XP that once XP was finished, I lost interest in Windows in favor of Linux, an OS that’s always in active (and public) development.
    6. The fact that OS X is forbidden from running on non-Apple hardware (not even virtual hardware, running atop an Apple machine) annoys me to no end.
    7. I’m a big fan of unlicensed spectrum. The innovation we’ve seen in the 2.4GHz portion of the spectrum (including WiFi and Bluetooth) makes a great case for expanding the wireless commons.
    8. I love handheld computers, and spent the early part of my career at eWEEK writing (perhaps too much) about them. However, as the Internet has grown more important to me, and the pace (and affordability) of mobile Internet access has stagnated, I have begun to lose interest in mobile devices.
    9. The fact that my iPod Touch is the best PDA I’ve ever seen or used, combined with the fact that my Touch is arbitrarily tethered to iTunes, infuriates me.
    10. I thought that Apple’s 1984 Mac Superbowl commercial was extremely creepy, and I don’t understand how it was that the locked-down Mac, then or now, was supposed to represent freedom.
    11. I loved the fact that DOS would run on crazy no-name whitebox clones. It’s the whitebox that’s actually the computer for the rest of us.
    12. I loved DOS for its relative openness, and it’s due to my love of openness in computing that I consider myself a PC (installed with Linux).
    13. My current home computer traces its lineage to the AMD K6-powered box that I ordered over the Internet from iDOT.com in 1998, although no parts from that ancestor remain in use.
    14. I’m still annoyed at all the asinine world-changing buzz/fluff/vapor that preceded the launch of the Segway/Ginger/It.
    15. When I’m at the command line, I often think of the “ps -ax | grep [whatever]” process-info-locating tidbit that my former colleague/hero Tim Dyck taught me when I was getting started with Linux.
    16. The Windows feature that I would most like to have on Linux is Powershell. The Get-Member cmdlet is perfect for my poking around style of learning.
    17. In my opinion, the geek community site Slashdot has the best commenting system of any site on the Web, and even better, the code that runs Slashdot is available as open source.
    18. Unfortunately, the Slash project is probably the worst-packaged open source project I’ve ever tried to install, beginning with a dependency on Apache 1.3, which practically no Linux distributions ship any more.
    19. I used to have a huge luggable PC with no hard disk, two 5.25″ drives, and a tiny amber-colored display. On a trip to visit family in Los Angeles, I lugged it along and spent most of my time playing King’s Quest III. I was always a bit unsure how to pronounce the name of the evil wizard, Manannan.
    20. I still play video games, in those limited minutes when my wife’s away and my kids are asleep. I opt for pro wrestling games on my refurbished Playstation 2.
  • I just wrapped up a review of OpenSolaris 2008.11, which, among other things, represents the most recent fruit of Sun Microsystems’ intermittent and arguably quixotic efforts to field a viable desktop alternative to Microsoft’s Windows.

    Over the past several years, I’ve reviewed quite a few of these desktop challengers–perhaps too many, considering the slender combined market share of non-Windows desktop options.

    What keeps me forging on in this coverage is my belief that despite the heavy buzz that surrounds cloud computing and all things Web, the desktop (in all its incarnations from the netbook to the workstation) is far from dead.

    Microsoft has it exactly right when officials there talk about software plus services as the way forward for computing. For all the amazing attributes of the Web as an application platform, it’s no substitute for the PC’s local storage, memory and computing power potential.

    No, the desktop hasn’t reached its end of life, but the desktop does appear to have shifted into maintenance mode. These days, the center of application innovation has moved to the Web, where SAAS (software as a service) is all the rage, and Facebook is achieving eye-popping penetration rates among mainstream users.

    Cruise over to Amazon.com’s software department and scroll through the top 150 or so best-selling titles. Putting games aside, the list is dominated by familiar old faces: office suites, tax software, image editing, anti-virus.

    You’ll also find a handful of virtualization applications, an exception that proves the innovation void rule, since the primary purpose of products such as VMware Fusion is to enable users to run, albeit clumsily, those same old Windows applications on OSes too small to get a native port.

    Along the same exception-proving lines is the undisputed innovation king of the desktop, the Web browser, the makers of which are chugging steadily along toward turning the browser plus Web into a sort of desktop OS of its own.

    Now, it’s not that all the desktop application ideas have been used up, or that the browser is somehow superior as a desktop application platform.

    What’s missing from the desktop world, but alive and well on the Web, is the sort of fierce competition that arises from an open platform that is governed by standards, but accessible to a diversity of hardware and software components at every layer of the stack. As a result, Web application builders can mix and match the components of their choice to build their works on the back end, and tap a client-side market that includes every sort of device that can support a Web browser.

    In contrast, the desktop is, for the most part, a closed platform, where a single company calls the shots for the majority of the platform stack on about 90 percent of desktops.

    I submit that while the Web is indeed an attractive application platform, the level of Web and cloud activity that we now see is artificially inflated, as efforts that would otherwise be trained on the desktop are channeled instead into the Web.

    Diversity is the antidote for the innovation funk in which the desktop finds itself, and the desktop stakeholder with the greatest ability to encourage that diversity (as well as the party with the most to gain from a desktop revival) is Microsoft.

    I’ve written about an open-source Windows as a means for expanding and strengthening the platform. I believe Microsoft could help achieve some of the same ends by focusing higher up the stack, on the company’s managed code technology.

    Microsoft could work with Novell and the open-source Mono project to make Linux and related Mono-supported operating systems into first-class hosts for applications developed with the framework.

    The effort would require enough licensing changes and patent assurances to make .NET/Mono palatable for inclusion in any open-source OS distribution, and to allay developer suspicions of a possible intellectual property trap. However, the move would be a boon to application builders and users alike.

    Most importantly, the move could give the desktop, as a platform, the shot in the arm it needs to retake its place on the leading edge of computing innovation.

  • For the past few months, I’ve been using Twitter, the microblogging service that you may well be sick of hearing about.

    Right off the bat, I was attracted to Twitter because the 140 characters per entry that Twitter allows add up to about as much blog as I’m capable of mustering at most times.

    For me, microblogging just for the sake of it hasn’t been engaging enough to hold my interest, but what has kept me coming back to Twitter has been the growing network of people I follow.

    I love to dip my Twitter cup into the stream of chatter from the people who build and use the products and services that I test and write about at eWEEK. It’s also interesting to hear what other media types are talking about, although too much of that can lead to head-bursting echo chamber periods where my peers make exactly the same comments about Steve Jobs for days on end.

    But the main problem with a big and bountiful network of Twitter friends is that once your friends list grows beyond a fairly small number of people, it gets really tough to pay attention to what people are saying.

    Things get even worse when you follow megaprolific Tweetsmiths who tend to flood your Twitter stream with their chatter, or if you follow news organizations that post links to all of their stories on Twitter. This weekend, I followed the Huffington Post for about 10 minutes before deciding that I couldn’t afford to pay attention to all of the site’s output–I’d never get to see Tweets from anyone else!

    It’s too bad, because another really great thing about Twitter is the amount of standardization the service imposes on content producers. Everyone gets 140 characters, and any images or videos or sound files or advertisements have to live behind an embedded link.

    In contrast, my Google Reader is a mess of differently formatted entries, some containing only a headline, some with long, picture-laden, full-sized stories, some with way more advertisements than actual content, some with embedded videos or podcasts.

    When I fire up Reader and set out to peruse my many RSS subscriptions, I’m constantly context-switching, stopping for a moment to figure out what I’m looking at, rather than mechanically scanning for what’s interesting.

    Recently, while checking out the renovations that Google made to Reader in late 2008, I spied the heading “Stay connected to friends & family” within the application’s feed browsing interface. One of the drop-down options under the heading was Twitter–you can type in the name of a Twitter user and consume that user’s status updates through RSS. You can also subscribe to someone’s status update RSS feed right from his or her page using the familiar orange radar wave graphic in your browser’s address bar.

    I figured that if I followed each of my Twitter friends through Google Reader, I could take advantage of Google’s “auto-sort” view mode, which prioritizes feeds with fewer entries to prevent them from being drowned out by fatter feeds. Perfect for the Huffington Post scenario, right?

    What’s more, the RSS route could allow me to organize my Twitter friends into groups, and even to unsubscribe from people without unfollowing them. It hasn’t been an issue for me, as far as I know, but apparently lots of people get hurt if they don’t get a follow, which has led some of Twitter’s cooler kids to wish for a “fake follow” feature.

    Now, I wasn’t about to enter the names of each of my Twitter friends in Google Reader’s little drop-down input box, nor was I going to click the RSS button for each of my more than 300 friends. Twitter does offer an RSS feed that covers all of the people you follow, but one big friend feed wouldn’t work with Google’s auto-sort.

    I hit the Web and found a Perl script called twitter-followers-to-opml, which talks to Twitter, fetches a list of the RSS status feeds of your followers, and spits the list out into OPML, which you can use to populate your feed reader.

    I was looking for the feeds of my friends rather than my followers (although there’s significant overlap between the two lists), so I had to make a small change to the script, and swap out followers() for friends() near the top of the script. Also, since the Net::Twitter Perl library on which the script depends only fetches your last 100 friends, I had to figure out how to pass the page=2, page=3, etc., arguments to friends().
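    In case the script link goes dead again, here’s a rough Python sketch of the same idea. Everything in it is illustrative rather than a copy of the Perl original: the feed URL pattern is an assumption based on Twitter’s old per-user RSS endpoints, and you’d feed it your own list of friends’ screen names (gathered page by page, as described above).

```python
# Rough sketch of the twitter-followers-to-opml idea: given a list of
# Twitter screen names, emit an OPML outline of their status RSS feeds,
# ready for import into Google Reader or another feed reader.
# The URL pattern below is an assumption modeled on Twitter's old RSS
# endpoints; a real script would also have to page through the friends
# API 100 entries at a time to collect the names in the first place.
from xml.etree import ElementTree as ET

FEED_URL = "http://twitter.com/statuses/user_timeline/{name}.rss"

def friends_to_opml(screen_names):
    """Build an OPML document string from a list of screen names."""
    opml = ET.Element("opml", version="1.0")
    ET.SubElement(ET.SubElement(opml, "head"), "title").text = "Twitter friends"
    body = ET.SubElement(opml, "body")
    for name in screen_names:
        ET.SubElement(body, "outline", type="rss", text=name,
                      xmlUrl=FEED_URL.format(name=name))
    return ET.tostring(opml, encoding="unicode")

if __name__ == "__main__":
    print(friends_to_opml(["timdyck", "somefriend"]))
```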

    NOTE: At the time that I’m writing this post, the link to the script I’m talking about is dead. I’ll check back later and update the post. In any case, the script seems fairly straightforward, and probably wouldn’t be too tough for an actual hacker (in other words, not me) to throw together. Alternatively, Twitter could make life easier on everyone and add an OPML export option itself.

    UPDATE: The script page is live again.

    I imported all my Twitter friend feeds into Google Reader and tagged them all “twitter,” so I could read them at once. Before I left the office yesterday I marked all the items in my Twitter folder as read and came into the office today to find over 1,000 new, auto-sorted tweets waiting for me.

    I like the way that Google Reader lets me scroll down through the tweets, marking them read as I go, and automatically refreshing the list. I scan my tweets, and middle-click the ones that pique my interest for follow-up.

    So far, so good. I’m going to boost my Twitter friend network, refreshing my OPML file to pick up the new friends as I go, and we’ll see how well this Reader/Twitter combo works for keeping the flow useful.

    For the past few weeks, I’ve been running OpenSolaris 2008.11 on my main work notebook, in part because I’m working on a review of the OS, and in part because Linux’s rough edges have grown a bit too smooth to support the desktop tweaking and fiddling with which I like to sidetrack myself.

    One of the roughest edges I’ve found on my OpenSolaris installation is the system’s font rendering within the Firefox Web browser. Many of the pages I come across on the Internet render with chunky, pixelated-looking fonts that remind me of Linux+Mozilla back when I was a fresher-faced analyst based out of eWEEK’s one-time Medford location on the shores of the Mystic River.

    The typical prescription for bad-looking Firefox fonts is a visit, within Firefox, to the Advanced Fonts dialog that lives at Edit > Preferences > Content > Fonts & Colors > Advanced, where you can bar Web pages from using any fonts but the particular Sans Serif, Serif, and Monospace font trio you ordain. I never like to do this, however, because this workaround tends to break Web page designs, squeezing and reshuffling everything as if you were browsing on a device with a too-small display.

    Desirous of smoother-looking fonts, without page layout corruption, I hit the Internet in search of other OpenSolaris users who shared my complaint. I found a few references to the lack of font-rendering magic in the FreeType library that ships with OpenSolaris.

    Due to potential conflicts with patents held by Apple and by Microsoft, the FreeType project ships its namesake library with subpixel rendering and its bytecode interpreter switched off. Apparently, certain savvy OpenSolaris users compile their own FreeType libraries to route around this patent tomfoolery, and so that’s what I did — although to no avail.
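    For the record, the recompilation in question typically meant switching on two build options that FreeType left commented out by default. The macro names below reflect the ftoption.h of FreeType releases from that era; check your own version’s header for the exact names before building:

```c
/* include/freetype/config/ftoption.h -- uncomment before building.      */
/* Both options were disabled by default for patent reasons at the time: */
/* subpixel rendering (Microsoft ClearType patents) and the TrueType     */
/* bytecode interpreter (Apple hinting patents).                         */
#define FT_CONFIG_OPTION_SUBPIXEL_RENDERING
#define TT_CONFIG_OPTION_BYTECODE_INTERPRETER
```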

    I couldn’t discern much of a difference with my recompiled library, except for much worse-looking fonts in VirtualBox, which is based on the Qt framework, while the rest of the GNOME-based OpenSolaris desktop is built on GTK.

    As far as I could tell, my problems boiled down to having and using the right fonts. Most Web pages specify the fonts they want to use, and in many cases, the specified fonts are ones that aren’t available on Linux or OpenSolaris, at least not available out of the box. For instance, Google wants to use Arial, a Microsoft font that’s freely downloadable, but that carries certain redistribution restrictions that make it tough to bundle with open source operating system distributions.

    On Ubuntu and other Linux distributions, there’s a package called msttcorefonts, which includes a script for downloading a core set of Microsoft fonts from their approved home at Sourceforge, one by one, and then extracting the fonts from their EXE archives using a separate utility called cabextract.

    There doesn’t appear to be any such package available for OpenSolaris at this time, but it’s relatively easy to copy the Microsoft fonts you need over to the .fonts directory in your home folder, and then view the Web, for the most part, as its designers intended.

    Alternatively, there’s a set of fonts that emulate Arial, Times New Roman, and Courier New well enough so that document and Web page layouts designed with the horizontal sizes of these fonts in mind will come out looking as intended.

    These fonts, dramatically dubbed Liberation Sans, Liberation Serif, and Liberation Mono, were made available in 2007 by Red Hat, but have been limited somewhat in their usefulness by the fact that Web pages aren’t calling for Liberation Sans, they’re calling for Arial or Helvetica. As I mentioned earlier, you can force Firefox to use the libre trio for all pages, but not all pages are designed with Arial, Times New Roman, and Courier in mind.

    I imagined that there must be some sort of font substitution option available for OpenSolaris and for Linux, and, sure enough, there is. You can place a file, named .fonts.conf, in your home directory, stocked with font-blocking and font-replacement instructions.

    The file below, for instance, will block the font Arial and replace it with Liberation Sans:

    <?xml version="1.0"?>
    <!DOCTYPE fontconfig SYSTEM "fonts.dtd">
    <fontconfig>
      <selectfont>
        <rejectfont>
          <pattern><patelt name="family"><string>Arial</string></patelt></pattern>
        </rejectfont>
      </selectfont>

      <alias>
        <family>Arial</family>
        <prefer>
          <family>Liberation Sans</family>
        </prefer>
      </alias>
    </fontconfig>

    You can paste in additional sections of the file to mandate other font swaps, and this trick works just as well on Linux as on OpenSolaris.

    Now, even after I’d liberated the Liberation fonts, my OpenSolaris font adventures were not quite at an end. That’s because many of the pages I encountered during my time with OpenSolaris mandated the font Lucida, which ships with OpenSolaris, but which rendered horribly for me no matter what I did.

    To make matters worse, many Sun employee blogs specify Lucida as their font. I imagine that Lucida renders beautifully on the OS X notebooks on which most folks at Sun appear to do their computing, but on Sun’s own OpenSolaris, the font is a mess.

    Strangely enough, at least a third or so of the fonts available for use on OpenSolaris through the GNOME font-setting dialog rendered very poorly on my test system. However, unglamorous chores like culling ill-suited fonts from a distribution (Ubuntu, for instance, does not ship with Lucida) are part of the honing and tightening process that a young distribution must undergo.

    Fortunately, banishing Lucida was easy enough to do on my own, with my now-trusty .fonts.conf file.
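    The Lucida-banishing stanza follows the same pattern as the Arial example shown earlier; note that pointing the alias at Liberation Sans is my own choice of stand-in, not anything the system mandates:

```xml
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <selectfont>
    <rejectfont>
      <pattern><patelt name="family"><string>Lucida</string></patelt></pattern>
    </rejectfont>
  </selectfont>

  <alias>
    <family>Lucida</family>
    <prefer>
      <family>Liberation Sans</family>
    </prefer>
  </alias>
</fontconfig>
```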

  • More than a few times now, I’ve heard it said that our new president, Barack Obama, will be an open-source president. Owing to the many meanings of “open,” this catchy tagline has been used in a lot of different contexts, most of which relate to transparency in government.

    There are, however, indications that the Obama administration is taking a close look at open source in the form most familiar to us, as a model for software development and licensing.

    According to a recent BBC report, former Sun Microsystems CEO Scott McNealy has been tapped by Obama’s administration to prepare a report on the use of open-source software in government.

    Should the government boost its use of open-source software? It seems obvious that if the government can satisfy its IT needs more efficiently through open source, it should do so. As taxpayers, we want to see the dollars we send to Washington stretched as far as possible, and the fact is that for many workloads, open-source platforms and applications can serve just as well or better than proprietary alternatives.

    However, as recent debates over industry bailouts and stimulus packages remind us, government spending decisions must be guided by more than bargain-hunting concerns.

    We must also consider what the impact of fewer government dollars will be on the software industry, much of which is wedded to proprietary licensing and business models. With customers cutting back on spending and software companies enduring layoffs alongside companies in most other sectors of our economy, it’s easy to argue that the drop in money spent on software licenses that would come with a larger open-source approach would prove taxing for the tech sector.

    However, the lack of licensing fees doesn’t free open-source software from the deployment costs that come along with any sort of software. Customization, integration and management all represent opportunities to make money.

    What’s more, where open-source software is lacking, the government can pay to have the software extended to suit its needs, a scenario for which open source is particularly well-suited.

    Certainly, there’s nothing preventing the government from commissioning proprietary software vendors to extend their wares to suit the nation’s needs, but sticking to the open road gives the government the opportunity to get a lot more bang for its (our) buck.

    That’s because dollars devoted to enhancing open-source platforms and applications to better suit the government’s operational needs double as infrastructure investments–software building blocks that can enable companies to deliver value higher up the stack and invent new employment and profit-generating engines.

    The gamut of Web-generation businesses from search and social media to SAAS (software as a service) and the cloud could not exist without their open-source software foundations. Future tech industries–and the customers who will come to depend on them–will manage to reach higher, innovate faster and operate more efficiently through the sort of down-stack commoditization that open source enables.

    To be sure, any significant government shift toward open source would prove disruptive to proprietary software makers as well, but fortunately the open-source arena is accessible to all comers, and a move to openness is well within these companies’ power.

    The software industry incumbents that opt to embrace open source–even if only to the extent that federal dollars make it worth their while–are arguably in the best place to profit from the new sorts of businesses that can get off the ground once more platform and standards pieces can be taken for granted.

    As I discussed in my last few posts on Microsoft and open source, there can be lucrative roads to openness even for companies that seem least likely to embrace the model.

    McNealy’s own Sun Microsystems has made dramatic strides toward embracing open source over the past several years, a fact that McNealy will no doubt cite in his recommendations.

  • As somebody who enjoys blowing away his notebook computer to install a new operating system every six weeks or so, I have a special appreciation for the way that software as a service lets me leave my key applications and data, accessible and undisturbed, in the cloud.

    At least, “accessible and undisturbed” describes the way that things are supposed to be with SAAS, when the chain of components from browser to operating system to client hardware to Internet connectivity to the black box of your SAAS provider’s systems remains intact and performing as intended.

    I just wrapped up a pair of stories in which I’ve attempted to flesh out the issues involved in achieving SAAS reliability, a combination of uptime and acceptable performance that can be much tougher to ensure with services that you don’t directly control:

    In my research on the topic, it seemed to me that the discussion of measuring and ensuring SAAS performance, at least when approached from a customer perspective, is still in a relatively immature state. Most of what I encountered dealt with the perspective of the SAAS builder or vendor.

    My findings boiled down to three main elements:

    • defining your expectations at the beginning of your SAAS evaluation process;
    • discussing reliability and architecture in depth with your prospective SAAS provider; and
    • testing the services, both through user piloting and through use of monitoring and performance tools that do not require intimate access to the SAAS application or infrastructure (which, by definition, you won’t have with SAAS).
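    As a rough illustration of that last point, a black-box check boils down to timing requests to the service from outside the provider’s network and grading the samples against the expectations you defined up front. Here’s a minimal Python sketch of the grading half; the thresholds and sample data are invented for illustration, and a real monitor would of course gather the samples from live probes:

```python
# Minimal sketch of grading black-box SAAS probe results against
# expectations defined up front. Thresholds here are invented; a real
# monitor would collect (ok, latency_ms) samples by timing periodic
# requests from outside the provider's network.

def grade_samples(samples, min_uptime=0.999, max_p95_ms=2000):
    """samples: list of (ok, latency_ms) tuples from periodic probes."""
    uptime = sum(1 for ok, _ in samples if ok) / len(samples)
    latencies = sorted(ms for ok, ms in samples if ok)
    # 95th-percentile latency over successful probes
    p95 = latencies[max(0, int(len(latencies) * 0.95) - 1)]
    return {"uptime": uptime, "p95_ms": p95,
            "meets_sla": uptime >= min_uptime and p95 <= max_p95_ms}

if __name__ == "__main__":
    probes = [(True, 300)] * 98 + [(True, 2500), (False, 0)]
    print(grade_samples(probes))
```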

    I would be interested to learn about how eWEEK’s readers approach the problem of ensuring reliability of services, and to fold that feedback into a future update of this story. Also, beyond reliability, what are your primary SAAS concerns?

  • In my previous post, I began making a case for an open-source Windows, and several commenters have weighed in on why such a Windows licensing shift would/could/should never happen. One of the more common points of contention involves a fear of forking–the idea that an open-source Windows would be too fragmented.

    One of the most beloved aspects of Windows is the unifying force it has applied to the personal computing world. Out of a sea of competing standards and code mired in the bickering and short-sightedness of competing hardware vendors came Microsoft, and its clone-embracing business model.

    DOS/Windows wasn’t tied to any particular brand of hardware, the way that Apple’s OS X is and the way that Sun’s Solaris and assorted other proprietary Unixes were, and that hardware promiscuity is just what the budding PC world needed to thrive.

    So, if Microsoft open-sourced its operating system, wouldn’t we lose that unifying goodness? Wouldn’t the Windows world lose its center of gravity and go flying apart in all directions, tearing apart the PC ecosystem on which so many of us depend?

    No. If Windows were open-source software, the center of gravity of the Windows world would still reside at 1 Microsoft Way because, in any open-source project, the mantle of leadership rests with those who write the code or with those who pay for the code to be written. Open source or no, Microsoft’s position at the center of Windows development is secure. If you want Windows, now or in some Bizarro Superman alternate open-source reality, you’ll know where to get it.

    That’s not to say an open-source Windows would be fragmentation-free. Some level of forking and fragmentation is essential for the open-source model to work its magic. For Linux, there are 1,001 different kernel-modifying projects out there, with individual efforts emerging, dying and merging back into the canonical kernel tree all the time.

    This is how Linux came to acquire the SELinux security framework and Xen hypervisor technologies, and how the upstart OS will continue to pick up innovations until some other platform manages to seize the open innovation crown.

    Now, just because there are a ton of separate, conflicting Linux projects at various states of maturity out there doesn’t mean that Linux-consuming organizations and individuals ever have to embrace or even think about these projects. If you’re wary of fragmentation and you want to use Linux, you go to a Linux distributor and pick up an operating system that meets your needs. The distributor gets to worry about the fragmentation, and you worry about your workloads.

    Linux began life with a kernel developed by a group of volunteers in one place, a toolchain developed in another place, a graphical layer developed somewhere else, and a few desktop environments created still elsewhere, but even with these fragmented origins, a small number of distributions claim the bulk of popularity.

    If Windows went open source, what could possibly spring up to unseat Microsoft as the operating system’s chief distributor?

    Rather than cracking its core, fragmentation in an open-source Windows ecosystem would occur at the edges, as projects emerged to enhance or otherwise modify parts of the OS. Among users for whom those modifications scratched their Windows itch, these projects would develop a following. For most Windows users, however, the integration and management costs of running nonstandard builds of Windows would keep them attached to the mainline distribution, as defined by Microsoft.

    When any of these Windows fork projects developed enough of a following, both in the wild and within the walls of Microsoft’s campuses, Microsoft would integrate the projects back into mainline Windows, where typical users would consume the enhancements in an orderly and fragmentation-free manner.

    As I wrote in a previous column in which I called on Sun to open-source Java–an event that has come and gone without any massive Java implosion–Microsoft need not fear the fork, and neither should you.

  • In my previous post, I wrote about how Microsoft’s attitude toward open-source software has evolved, encouragingly, from outright hostility to cordial coexistence, and about how the company might maintain and extend its platform leadership position by moving beyond simple tolerance to aggressive adoption of open source.

    Obviously, Microsoft isn’t getting knocked off its perch any time soon–Windows has burrowed deeply enough into our computing landscape that Microsoft could probably switch off its development engines and coast on its momentum for another 15 years or so.

    However, there’s no question that Microsoft faces some very real challenges to its platform throne, the most daunting of which is the Web, where a seemingly omnipresent Google is working on relocating computing’s center of gravity toward the browser and away from Windows or any other particular operating system platform.

    It’s easy to see how the shift toward the Web has buoyed Microsoft’s smaller rivals, including an ascendant Apple that’s mounting a consumer electronics-based charge on computing, and an assortment of Linuxes that are oozing into all manner of new computing products.

    One of the interesting aspects of Google’s platform strategy is the idea that what’s good for the Web is good for Google, and the company puts this philosophy into practice through a range of initiatives aimed at expanding and bolstering the Web, including investments in ventures such as Meraki and Clearwire, lobbying efforts around U.S. wireless spectrum allocation, and cold hard code such as its innovative, open-source Google Chrome browser.

    Returning to Microsoft and its own platform stronghold, it’s clear that what’s good for Windows is good for Microsoft. Though it may sound crazy, I contend that the best move Microsoft could make to broaden the reach and strengthen the core of the Windows platform would be to release the operating system as open source.

    Releasing Windows under an open-source license would benefit the platform in two major ways. First, an open-source Windows could be had for free, which would mean more legitimate Windows seats around the world and fewer barriers to upgrading to the latest version of the operating system. The result would be a larger and more modern network of Windows nodes at which ISVs could target sales of their Windows applications.

    Second, a move to open-source Windows would inject an enormous amount of vitality and innovation into the platform, as the legions of user organizations, vendors and developers now invested in Windows could take the platform in new directions, the way that a much smaller community of stakeholders now does–to great effect–in the Linux community.

    It’s fair to ask how, if Windows could be had for free, Microsoft would make money. For starters, I’m not suggesting that Microsoft open-source 100 percent of what it now calls Windows. Rather, Microsoft could divide Windows into its separate operating-system and bundled-application elements, restrict its open-sourcing efforts to the operating system side of the equation, and sell client and server distributions of Windows with proprietary Microsoft applications layered atop the open-source platform base.

    This sort of division would help preserve existing Windows sales among customers who find value in Windows’ current platform-plus-bundled-applications incarnations, while freeing Windows to wriggle into the cracks between business models that only Linux can now reach.

    As matters now stand, you can build and run a business on Windows, but there’s a definite floor and ceiling to the range of businesses where Windows fits best. The very smallest startups–the garage guys–and very largest operations–the Googles and Facebooks–are driven, without fail, to choose the low-friction licensing and development flexibility of open-source platforms, and I can’t see this trend changing.

    Open-sourcing Windows would be no small technical feat, and for Microsoft, the philosophical barriers to the move might prove difficult to surmount. It may seem like a gamble, but I say going all in with an open-source Windows is just the ticket to keep the platform relevant and alive for years to come.

  • Recently, I met with members of Microsoft’s Platform Strategy Group, including the group’s director, Robert Duffner, to talk about their company’s activities around—and evolving stance toward—open-source software.

    After assuming an initially hostile position toward open source, Microsoft has adopted a sort of experimental approach—the company has developed a pair of bona fide open-source software licenses, maintains an open-source project repository at codeplex.com and is working to make popular open-source components work with the Microsoft product family.

    For instance, Duffner cited Microsoft’s work toward embracing PHP—the “P” in the popular open-source LAMP stack of Linux, Apache, MySQL and PHP—by tackling the barriers to adoption of WISP (Windows, IIS, SQL Server, PHP), Microsoft’s alternative stack. To this end, Microsoft has been optimizing PHP to run well on Windows and helping adapt in-demand open-source applications, many of which are hard-coded to use the popular MySQL database, to work with Microsoft’s SQL Server.
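    The porting work described above largely comes down to isolating database-specific assumptions behind a single interface, so an application written against MySQL can also target SQL Server. A minimal sketch of that idea follows (the class and function names here are illustrative, not taken from any Microsoft or PHP project), showing two of the small syntax differences such an adaptation has to paper over:

```python
# Illustrative portability layer: hide per-database quirks (identifier
# quoting, query parameter markers) behind one backend interface, so
# application code need not be hard-coded to a single database.

class Backend:
    """Minimal interface each database backend implements."""
    placeholder = "%s"  # parameter marker; varies by driver

    def quote_ident(self, name: str) -> str:
        raise NotImplementedError

class MySQLBackend(Backend):
    placeholder = "%s"              # style used by common MySQL drivers

    def quote_ident(self, name: str) -> str:
        return f"`{name}`"          # MySQL delimits identifiers with backticks

class SQLServerBackend(Backend):
    placeholder = "?"               # style used by ODBC-like drivers

    def quote_ident(self, name: str) -> str:
        return f"[{name}]"          # SQL Server uses square brackets

def build_select(backend: Backend, table: str, column: str) -> str:
    """Build one query portably instead of hard-coding MySQL syntax."""
    return (f"SELECT {backend.quote_ident(column)} "
            f"FROM {backend.quote_ident(table)} "
            f"WHERE id = {backend.placeholder}")

print(build_select(MySQLBackend(), "users", "name"))
# → SELECT `name` FROM `users` WHERE id = %s
print(build_select(SQLServerBackend(), "users", "name"))
# → SELECT [name] FROM [users] WHERE id = ?
```

    Real migrations involve far more than this—SQL dialect differences, data types, stored procedures—but the pattern is the same: once the database-specific pieces sit behind one seam, switching from MySQL to SQL Server becomes a configuration change rather than a rewrite.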

    These efforts will certainly bear dividends for Microsoft’s PHP-loving stakeholders, but no amount of integration and optimization work will address the primary barrier to a LAMP-like ascent for WISP: open source.

    Open-source development and distribution will continue to expand because, between functionally similar open-source and proprietary platforms, the open options offer developers and architects a fundamentally better deal: You own the code and are free to spin up a thousand copies of MySQL with no more licensing burden than a single instance.

    For now, Microsoft is operating, tentatively, at the margins of open source, with an agenda marked more by toleration than adoption of open source. I contend that if Microsoft approached open source aggressively, the company could solidify its prominence in computing potentially for decades to come.

    In many cases, the best option for a Microsoft customer will be MySQL. MySQL runs on Windows, so Microsoft might do well to extend its embrace beyond PHP to include MySQL. After all, if certain customers are destined to choose an open-source alternative to SQL Server, Microsoft might as well make money on it and maintain the customer relationship.

    If Microsoft chose to become a MySQL distributor, Microsoft’s deep roots in the market, along with its support, services, learning, events and channel resources, could enable the company to extract more money from MySQL than anyone else, including Sun Microsystems.

    Alternatively, Microsoft could address the MySQL challenge not by joining MySQL, but by beating it at its own open-source game. An open-source-minded Microsoft could release SQL Server as open source. A project like this, which I’ll call OpenSQL, would be no small effort, but Microsoft has the examples set by past open-sourcing efforts, such as Netscape’s Navigator-to-Mozilla and Sun’s Solaris-to-OpenSolaris migrations, to help guide the way.

    Also challenging would be the business model changes required by an OpenSQL effort, but I believe that a well-executed project would deliver a healthy market-share boost for Microsoft and its channel partners to tap with their service and support offerings.

    The move would be a major boon for the company’s developer ecosystem, the members of which would be free to weave Microsoft’s database product into more of their applications, and focus their resources higher up the stack.

    Speaking of moving up the stack, we could move another letter to the left and make a similar set of arguments regarding IIS, Apache and what a hypothetical OpenIIS could do to solidify the “I” in WISP.

    As for the W, that will have to wait until my next column.