
  • Between the foundational free software components and licenses that the Free Software Foundation has made possible (chief among them, respectively, the GNU Compiler Collection and the GPL), the FSF has laid much of the technological and legal groundwork that underlies free and open-source software as we know it today.

    One of the features of free software licensing that pleases me most, and one I feel most grateful to the FSF for helping to build, is the flexibility that free software stakeholders enjoy for dealing with vendors or projects that prove unable or unwilling to suit user needs. Free software stakeholders may simply take the code and branch out on their own. With enough backing, these forked projects can overtake their forebears, which is what happened when the XFree86 project gave way to X.org.

    Soon, I believe, the FSF may find itself in the odd position of being likewise jettisoned by a large and important part of its user base, over its refusal to respect the needs and desires of its own stakeholders. The FSF is preparing to release version 3 of its popular and influential GPL license, complete with new provisions that would require vendors of certain types of devices to enable end users to run modified versions of the GPL-licensed software that drives those devices.

    The issue, which goes by the name of “Tivoization,” seems really to cheese off the FSF, which holds that vendors such as Tivo should not be able to foist DRM and other user-unfriendly controls on device end users. The FSF views Tivoization as a loophole left open by the GPL2, and one that needs closing in the GPL3.

    Linus Torvalds and many of the Linux kernel developers disagree.

    For a somewhat interesting debate on whether the GPL would be a freer license with or without Tivoization restrictions, you can read this exchange from the Linux kernel mailing list, in which Linux project leader Linus Torvalds and Alexandre Oliva, a Red Hat compiler engineer who’s active in the FSF, go back and forth on the topic.

    While I understand the arguments on both sides, I tend to agree with Linus and the kernel developers on this one.

    Even though I’m the sort of person who gets a kick out of modifying the software on black box consumer devices, adding new rules to the GPL to govern how device vendors interact with their users seems like opening a Pandora’s box of confusion. The GPL3 drafts I’ve read so far talk of exceptions for particular sorts of devices–it’s OK, for instance, for medical devices to bar software modifications, and it’s OK for security systems not intended for home use to bar modifications. Do free and open-source software developers really want to take on the additional policing efforts that would be required to hold vendors’ feet to the fire on these potentially complex use-case scenarios?

    The frustrating thing about this somewhat esoteric Tivoization flap is that there really is a need for a new GPL version–at its core, the license upgrade is aimed at clarifying the controls and rights that already exist in the GPL2, particularly those surrounding software patents.

    This is a worthwhile goal, since a lack of clarity surrounding software patents currently casts something of a shadow over open-source software, with Microsoft co-opting Linux distributors such as Novell, Xandros and Linspire (each of which packages and sells the work of upstream free software developers) to cast doubt on the legality of the code they redistribute.

    The GPL3 should either shelve its anti-Tivoization restrictions altogether, or, better still, spin those controls off into a separate license, as the FSF sagely did with the Affero clause that set out to redefine software distribution to include offering Internet applications up for public use. If open-source developers wish to crack down on what you could well call “Googleization” (Google’s Web applications are built on open source, but Google doesn’t share its code), then these developers can opt for the Affero General Public License.

    The good news is that even if the FSF sticks to its guns and remains unmoved by the concerns of the GPL’s most important project, the Linux developers needn’t ever move to the GPL3, nor need anyone else. Linus and the rest of the kernel team could conceivably excise the offending Tivoization portions of the GPL3 and move ahead with a license that benefits from the modernization and clarification work that the FSF has done while lacking unpalatable and arguably overreaching added controls.

    What’s more, since it was a lack of clarity surrounding patents that, in part at least, stood in the way of Sun selecting the GPL2 as a license for OpenSolaris, perhaps Sun would be interested in making available some attorneys to help with the revisions. Maybe Linus Torvalds and Jonathan Schwartz should talk it over during their upcoming dinner.

    After all, the FSF may be too interested in perfecting its vision of free to make room at the table for its VIP guests, but the nature of open source leaves those guests free to set out a new spread for themselves.

  • Dell’s customer feedback-driven initiative for preloading Linux on some of the machines it sells is moving forward with a full head of steam.

    It’s been only a handful of months since the OEM began fielding Web-borne requests to add the open-source operating system to its preloaded platform mix, and Dell is already a few weeks into filling orders for the penguin-loving public. It’s too early to judge the success or failure of Dell’s mainstream Linux foray.

    For one thing, the PC maker has not yet disclosed how many Linux aficionados have purchased one of the three Dell models on which the company is preloading Ubuntu Linux. For another, I’ve not yet tested one of these machines myself. However, I can see enough from my Web browser-based vantage to answer the questions around price, selection, positioning and support options that had curbed my enthusiasm when I last covered this topic.

    Price: Ubuntu Linux is free, and Windows Vista is not, so it stands to reason that Dell’s Ubuntu machines should cost less than its Vista machines do. Sure enough, Dell’s Ubuntu-powered XPS 410n costs about $50 less than an equally outfitted, Vista-driven XPS 410.

    For those who’d prefer to install their own operating system, there are machines that ship with a FreeDOS disk. A Dimension E520N that’s outfitted to match Dell’s XPS 410 and 410n costs $110 less than the Vista model.

    Selection: I’d wondered whether Dell might offer Ubuntu Linux on a segment of its machines too narrow to appeal to potential buyers, but I feel satisfied by the company’s three Ubuntu machines, which include a notebook and one model each from Dell’s budget and high-performance desktop lines. Considering the customization options available for these systems, these three Ubuntu systems cover a respectable amount of ground.

    Positioning: I’m also impressed with the way Dell is positioning its Ubuntu systems. Sure, the sentence “Dell recommends Windows Vista Home Premium” is still plastered on every corner of the OEM’s site–including those now devoted to Ubuntu. However, Dell has done a good job of explaining Linux to newcomers and of laying out the pros and cons of running the free operating system on your PC.

    In particular, I’m impressed with the five-minute “Linux 101” video that’s available for viewing on Dell’s Ubuntu launch page. In the future, I might even send friends or family who ask me about Linux (yes, that does sometimes happen) to Dell’s Ubuntu page for a quick primer.

    Support: Dell had announced that Canonical, Ubuntu’s primary sponsor company, would offer optional support for an additional fee, which made me wonder how much better off customers would be buying from Dell rather than loading up Ubuntu themselves. However, Dell does support the hardware for its Ubuntu systems, and these machines ship with a couple of extra disk partitions to facilitate this support.

    As with Dell’s Windows machines, there’s a partition loaded with Dell diagnostic tools. There’s another partition that carries Ubuntu install media, and Dell has configured the boot menus of its Linux machines to include an option for reinstalling the operating system.

    I’m also rather pleased with Dell’s new Linux wiki, which offers pointers to all of the company’s Linux efforts and resources, along with concise but complete information on the three models that Dell ships with Ubuntu. Dell’s Linux wiki also offers workarounds for bugs–there’s a particularly annoying-looking one that rendered some customers’ machines unbootable after their first kernel upgrade. A certain number of kinks are to be expected, however, and what I’m paying closest attention to is how Dell deals with them.

    Does the solid shape of Dell’s consumer Linux effort so far mean that we’ll soon see Dell expand its desktop Linux focus to the enterprise? Linux providers like Canonical, Novell and Red Hat are going to have to put more work into connecting the management dots to make this happen, but for its part, Dell appears to be ready.

  • As my colleague Steven J. Vaughan-Nichols is reporting, Intuit is opting to get a bit cozier with Linux. It’s an eye-catching announcement, considering that lukewarm Linux support from Windows-centric application vendors like Intuit remains one of the biggest strikes against the open-source operating system as a mainstream desktop platform.

  • I’ve been burned more than once by lighter-than-laptop computing devices that have failed to fulfill their promise. Still, I can’t help but be excited about Palm’s Foleo mobile companion.

    The 2.5-pound device, which Palm announced recently and plans to begin shipping later this summer, will sport the display and keyboard of a typical notebook computer and the battery life and instant on/off capability of a handheld device.

    This mix of features is what I’ve long sought in a mobile computer: something that’s functional enough to write and browse with, but light enough to carry everywhere without stopping to calculate whether the added load is worth the typically short run-time delivered by notebook batteries.

    As I mentioned, however, I’ve been disappointed before. Take my IBM z50 Workpad, a pleasantly shrunken Thinkpad that weighed about 2.5 pounds, and carried a very nice keyboard and a passably large 640×480-pixel display. However, despite its form-factor attributes, the z50 fell far short of its promise, and was discontinued very shortly after it began shipping.

    The z50’s biggest problems were software related–and the same goes for most other devices of this type. For starters, the z50 shipped with the lousy Windows CE 2.11, which sported so-so Pocket Office applications, a terrible version of Pocket Internet Explorer and not much else.

    Back in 1999, when the Workpad z50 first shipped, the market for mobile device software was small compared to that for desktop and notebook computers, and things haven’t changed too much in the years since.

    With so many different form factors and resource profiles, it’s tough to build applications that will run on many different devices, and there’s so much change in the mobile device space that users are in constant danger of seeing their devices drop out of application support matrices.

    For the z50, there was never an upgrade path to access the improvements that Microsoft subsequently made to Windows CE, as IBM discontinued the product almost immediately, and Microsoft dumped the MIPS architecture that powered the z50 in favor of a focus solely on ARM chips.

    Now, I was aware of the z50’s software failings when I picked one up in 2001 for about a fifth of what it originally had sold for, but I had plans to work around WinCE. I ran NetBSD on the z50 for a time, but the arrangement never worked well enough to make it past the hacky-experimental stages with me.

    The fact that I was able to load a real operating system onto my z50 didn’t mean the device was capable of running that operating system well. After all, these sorts of devices are, by definition, underpowered compared to regular notebooks, which means that even if you’re able to get your hands on some decent software, you can’t necessarily expect your super-mobile computer to run the software well.

    I don’t expect Palm’s forthcoming mobile companion to boast a broad catalog of made-for-Foleo applications, and even though the product’s Linux innards will likely allow for more software flexibility than we’ve seen from other devices of this sort, I’m not banking on that flexibility either.

    Rather, what makes me most optimistic that the Foleo might succeed where other products have failed is the Foleo’s apparent fitness as a simple terminal for Web-based applications.

    Even if the rest of the software that ships with the Foleo ends up stinking, as long as the device gets the browser part right, it’ll offer users a way around the resource, application and upgrade limitations that have held back its predecessors.

    Based on what Palm has announced about the Foleo so far, it looks as though the device will have what it takes to do the Web well: The Foleo will ship with a Web browser from Opera; with a 1024×600-pixel display that will render Web pages without weird reformatting; and with Bluetooth and Wi-Fi radios for Internet connectivity.

    In order for the Foleo really to shine, its browser will need to offer some sort of offline application support, the likes of which Google is beta-testing now in the form of its Gears project. Opera has stated that it’s working on offline application support, but we’ll have to wait and see how that shakes out.

  • The Internet has been abuzz lately with talk of a potential pairing between Google and Salesforce.com–a line of speculation that I find particularly intriguing.

    For one thing, with the amount of ink that’s been spilled over Google-DoubleClick and Microsoft-aQuantive, and what it all means for the future of advertising, it’s refreshing for once to ponder a Google rumor that might actually hold some relevance for enterprise IT.

    Ad wars fatigue aside, what I find most interesting about a potential Google-Salesforce deal–either in the form of a blockbuster acquisition or a strategic alliance–are the compelling new sorts of services and products that might come out of it.

    I’ve become a fan of hosted applications, such as the task list keeper at Todoist.com and the programs that make up Google Apps for Your Domain, and I look forward to seeing other applications in the SAAS (software as a service) mold emerge to meet my needs. Google has recently adopted the concise “search, ads and apps” as a mission statement. So far, the company has gotten off to a promising start on the “apps” part of the motto, which is the only one of the three areas in which Google doesn’t enjoy a dominant position on the Web.

    In particular, Google’s enterprise SAAS roots are still awfully shallow. If Google is out to carve itself a piece of the enterprise applications market, there doesn’t appear to be any sharper implement to address this task than Salesforce, which has grown to be practically synonymous with SAAS.

    However, beyond the reputation for enterprise SAASiness that Salesforce could bring to Google, I think that the pair could go a long way toward blazing new trails for hosted applications.

    The most frequently cited drawbacks to hosted applications are security and uptime concerns. However, for a significant number of companies, particularly small to midsized concerns, it’s not clear that the security and uptime they can assure for themselves wouldn’t fall short of what an established SAAS provider could offer.

    Moving forward, I believe that a bigger SAAS concern for companies will be a shortage of customization opportunities–if letting your apps live in Google’s data center means being limited to running only what Google offers, the possibilities of these apps will remain limited.

    One path forward for Google could resemble Amazon’s EC2, or Elastic Compute Cloud, the very cool service in which the online retailer rents out some of its considerable data center capacity for running arbitrary virtual machines. During my recent tests of EC2, I couldn’t help but wonder when Google would get into the act.

    Rather than actually host Xen machines, as Amazon does, or, on a higher level of abstraction, host grid applications in the way that Sun’s now doing, Google could find in Salesforce a way to offer customers customization opportunities that better match the sort of simplicity for which Google strives.

    At its Dreamforce conference last fall, Salesforce announced a platform, called Apex, for building applications that run on the Salesforce infrastructure and integrate with Salesforce’s existing CRM (customer relationship management) applications. Marrying Apex with Google’s infrastructure would take some work–the first thing that comes to mind is that Google is a MySQL shop, while Salesforce sports an Oracle database back end. Also, Apex is still rather young, and it remains to be seen how many developers will opt to code for a platform without a clear path for migrating off of it.

    I’m not suggesting that Salesforce won’t manage to take Apex to great heights on its own, or that Google couldn’t eventually come up with a framework like this on its own, but a match between the two could take both providers’ offerings to the next level.

  • The other night, I was watching an episode of Nova on PBS about the present and future of solar power and the major changes in the way we generate and distribute energy that future solar advances will trigger. Predictably enough, the whole thing got me thinking about free and proprietary software and the friction we’re witnessing between these two models in the form of Microsoft’s recent indictments of Linux and open source in the court of public opinion.

    The ball of fire that floats in our sky has been providing our planet with free energy for as long as the Earth’s been spinning, but with our current technologies, the most effective way for us to tap this energy is by burning the fossil fuels that stored the Sun’s rays a very long time ago. When we eventually develop the technologies required to make efficient use of the abundant solar energy in which we’ve always bathed, we’ll see our power generation capabilities grow radically broader.

    It’ll seem much less attractive to incur the expense of mining or pumping fossil fuels out of the ground, transporting them to a processing facility, burning them for electricity and pumping that juice to our neighborhoods once we figure out how, for instance, to power our homes by coating them with photovoltaic nanotube paint. This future model of energy production will prove particularly attractive to the parts of the world that have never had the resources to build the sort of energy infrastructure that drives the developed world.

    While the technologies required to harness the Sun’s abundance remain, for now, on the horizon, the means to tap the equally unbounded intellectual potential of people around the world has already been invented–particularly where software is concerned. Just as new energy technologies will cut back on–and, in time, will likely erase–the need for massive, centralized power production infrastructures, the Internet is already dissolving the requirement that software be developed at and distributed from sprawling corporate campuses.

    Not surprisingly, the companies that have counted on collecting cash from every person who consumes software–chief among them Microsoft–are regarding these changes with no small measure of discomfort. Microsoft, having amassed the means to tap the unlimited store of human knowledge in a way that hadn’t been possible for just anyone, is watching new technologies threaten to open those same stores of power and profit to everyone.

    When faced with such fundamental changes to the environment in which they do business, the power companies of tomorrow, and the proprietary software companies of today, can either determine how to adapt their business models to maintain their relevance, or they can fight to force these new realities into the old, familiar channels through which they’ve profited in the past.

    Unfortunately, while Microsoft clearly understands that the software landscape is changing, and while the company has taken some steps to better understand and interface with free software, Microsoft seems to think that it can deal with free software by forcing it into a proprietary software mold. Microsoft’s deal with Novell is meant to direct companies to consume free software in the form of Novell’s SUSE Linux Enterprise distribution, which, like Windows, comes with per-system license fees and restrictions on unfettered redistribution.

    Microsoft is hoping that its FUD (Fear, Uncertainty and Doubt) campaign surrounding a set of unspecified, unchallenged software patents will convince companies to treat free software as if it were not free, and therefore not nearly as threatening to Microsoft’s Windows monopoly. Along similar lines, Microsoft has been wrangling with the EU and other government bodies that are out to reduce their dependence on proprietary standards and protocols, pushing to license its de facto standards in such a way that free software could not incorporate them.

    When Microsoft representatives state that everyone must play by the same rules, as they often have during recent months, what the company means is that the business and technological realities under which it built its empire shouldn’t be allowed to change. However, just as the appeal of decentralized solar power will, once technologically feasible, prove irresistible, so too will the tide of free software that’s already begun rolling in prove too powerful to turn back.

  • Earlier this month at JavaOne, Sun made good on its pledge to release Java 2 Standard Edition as free software, a move that should mark the start of a beautiful new relationship between Linux and Java.

    Until now, licensing conflicts between Sun’s Java and prominent Linux distributions such as Debian or Fedora made it tough for Java to bond with Linux and its users in the same way that the LAMP triumvirate of Python, PHP and Perl does.

    The GNOME community has debated for some time whether to move from C as its primary development language to a higher-level language such as Java or C#, but concerns over Java’s licensing or possible future patent hostility from Microsoft have stalled those efforts.

    As long as developers couldn’t count on Java being built in and distributable under the same general terms as the rest of Linux, Java could never be a central part of Linux-based operating systems, or of the foundational projects–such as the GNOME desktop environment–that complement the Linux kernel.

    However, even though Java’s joining the big happy GPL family, there’s much more work to do before we’ll see the free software floodgates open for Java. The day that Sun made its JavaOne announcement, I cruised the Internet to take the temperature of Java for the Linux desktop, and I came up with some surprisingly tepid readings.

    The closest thing to an “open-source desktop developers, your Java is ready” welcome mat I could find was a February 2007 tutorial from Sun on writing Java software for GNOME using the java-gnome bindings project.

    GNOME boasts an array of these language bindings projects, which enable developers to build applications in their chosen tongue while plugging into GNOME’s native interface elements and system services.
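
    To give a sense of what these bindings buy a developer, here’s a minimal sketch of a GNOME “hello world” written against java-gnome. I’m working from the java-gnome 4.0 API as I understand it, so treat the package, class and method names below as illustrative assumptions rather than gospel:

    ```java
    // Minimal java-gnome sketch: open a window containing a label, and
    // quit the main loop when the window is closed. Names assume the
    // java-gnome 4.0 API (org.gnome.gtk); treat them as illustrative.
    import org.gnome.gdk.Event;
    import org.gnome.gtk.Gtk;
    import org.gnome.gtk.Label;
    import org.gnome.gtk.Widget;
    import org.gnome.gtk.Window;

    public class HelloGnome {
        public static void main(String[] args) {
            Gtk.init(args);                     // initialize GTK before any other calls

            final Window window = new Window();
            window.setTitle("Hello, GNOME");
            window.add(new Label("Java, meet GNOME"));

            // End the GTK main loop when the user closes the window.
            window.connect(new Window.DeleteEvent() {
                public boolean onDeleteEvent(Widget source, Event event) {
                    Gtk.mainQuit();
                    return false;
                }
            });

            window.showAll();                   // realize and display the window
            Gtk.main();                         // hand control to the GTK main loop
        }
    }
    ```

    The particulars matter less than the shape: the developer gets native GNOME windows, widgets and event wiring without leaving Java, which is exactly the on-ramp the rest of this post finds missing.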

    I searched the package repositories on my Ubuntu workstation for packages with a dependency on the java-gnome bindings, and turned up zero applications built atop the project–that’s out of 21,371 packages in the Ubuntu repositories.

    To compare, the same sort of search based on the equivalent GNOME bindings package for Mono, the open-source implementation of Microsoft’s .Net Framework, turned up 21 packages, including the excellent Tomboy notes application that’s now an official component of the GNOME desktop.

    It turns out that the java-gnome bindings project is currently in a very unstable state–the version of the software that ships with Ubuntu and other Linux distributions is largely neglected and is considered incomplete.

    There’s a new version of the bindings in the works, but it’s not yet ready for developers to use. According to the project’s Web site, the new leaders of the java-gnome project are “now looking to secure the revenue necessary to fund the work to make the new Java bindings for GTK and GNOME a reality.”

    According to posts on GNOME’s release team mailing list, there’s a great deal of uncertainty within the project as to when a usable set of GNOME bindings for Java will again be available.

    Granted, the sort of complete, freely licensed Java that’s required to host a new generation of Java-based GNOME applications has only recently become available, but considering that Sun announced that it would be freeing Java over a year ago, and that it’s been months since the company shared its choice of GPL as a license for the project, the state of limbo in which the java-gnome project finds itself is awfully puzzling.

    In order for Java to become a viable language for open-source software, java-gnome stakeholders need to step forward to provide developers with the tools they need to choose Java for their works.

    I can’t imagine a bigger stakeholder than Sun, which calls its own GNOME desktop implementation the Java Desktop, even though the software as shipped by Sun has nearly nothing to do with Java.

    Now–better late than never–seems like a good time to transform this appellation from a branding device to a reality.

  • The small steps that Dell has taken toward offering desktop and notebook PCs preloaded with Ubuntu 7.04 could mean a giant leap forward for the viability of desktop Linux.

    Linux preloads from Dell would give computer buyers who aren’t out to install their own operating system an opportunity to choose Linux in the same way they choose Windows–by buying a system, taking it out of the box and starting to use it. Ubuntu on Dell would mean that seasoned Linux users could buy one of these systems with the knowledge that the hardware would work with their operating system of choice–even if they prefer some other distribution, hardware that works with one Linux distribution can be made to work with any distribution.

    As I opined in this space a few weeks ago, Ubuntu makes the most sense as a mainstream desktop Linux option. Ubuntu is polished and popular, and isn’t encumbered by the yearly update fees that come with the Red Hat Enterprise Linux and SUSE Linux Enterprise options with which OEMs, including Dell, have so far flirted. “Free” is a big part of Linux’s appeal, but fleeing Windows’ activation routines and Genuine Advantage software for Red Hat Enterprise’s entitlements and installation numbers doesn’t feel awfully free.

    However, before we go out and rent the convertible Cadillac from which Michael Dell and Ubuntu chief Mark Shuttleworth will Grand Marshal the 2007: Year of Desktop Linux parade, there remain a good many questions to be answered. Until more details emerge, it’s tough to gauge to what extent the Linux-loving hordes that flooded Dell’s Ideastorm suggestion site with comments will prove as liberal with their wallets as they were with their mouse clicks. In particular, we know fairly little about the nature of Dell’s Ubuntu offerings-to-be, including key details such as price and the selection of hardware that Dell plans to Ubuntu-enable.

    Pricing is going to get a lot of attention, since the idea that every PC comes with a “Windows tax” that’s levied whether or not purchasers plan on running a free OS alternative really sticks in the craw of the desktop Linux rooters at whom Dell is aiming this initiative. However, it isn’t clear whether the freeness of Ubuntu will translate into lower PC costs. There’s broad speculation that between licensing discounts from Microsoft and monies paid for stocking PCs with teaser software, Windows preloads pay for themselves.

    There’s also the contention that Ubuntu, as a new platform for Dell, will cost more to support. However, based on what we’ve heard so far, Dell will be providing support only for hardware, with software support being shunted to the community, to optional paid support contracts with Shuttleworth’s company, Canonical, or to another support vendor. There are certainly other costs for Dell to support an additional platform, but the support-optional status of Dell Ubuntu boxes will bolster customers’ lower-price expectations.

    In addition to fair-seeming pricing, the success of Ubuntu on Dell will depend on whether Dell offers up attractive hardware for sale with Linux. My Linux Watch colleague Steven J. Vaughan-Nichols has reported that the systems under consideration as Linux targets are among Dell’s budget machines. Cordoning off Ubuntu to the low-powered portion of Dell’s lineup would mean cutting off potential sales. A better course would be to offer Ubuntu on a cross-section of Dell’s system types.

    Dell has moved boldly so far–the time that’s elapsed between the launch of the Ideastorm site and Dell’s Ubuntu announcement has been much shorter than I’d imagined. If Dell continues to move decisively, and if it executes well, the company has an opportunity to tap into new markets and claim a substantial point of differentiation over its PC rivals. There’s opportunity, also, for the Linux community to vote with its wallets and demonstrate, not only to Dell but to the entire ISV and IHV community, that there’s money to be made from free.

  • eWEEK’s Peter Galli is reporting that Dell has joined the Microsoft-Novell Axis of Patent FUD. Dell already offers Novell’s SUSE Linux Enterprise Server for sale through its Web site, so the deal isn’t a particularly momentous one.

    Rather, it seems that the primary focus of the agreement is to provide Microsoft with a new outlet for unloading the SLES certificates it purchased from Novell. Perhaps more importantly, the Dell-Microsoft-Novell agreement opens fresh opportunities for generating foreboding press quotes about Linux’s allegedly perilous intellectual property standing.

    Consider this quote from an Associated Press story, in which Microsoft’s claim that businesses are worried about the patent standing of Linux is reported back as if it were fact:

    “The concession is meant to address concerns of corporate users who have been reluctant to use Linux because they feared Microsoft might retaliate with patent-infringement claims.”

    And then there’s this story from the Boston Globe, in which a single paragraph reports both that “Novell … agreed to compensate Microsoft for Linux software features that Microsoft claims to have patented” and that “Novell officials … denied that the payment constituted an admission that Linux contains illegal portions of Microsoft code.”

    Which is it? Did Novell agree to compensate Microsoft for the patents of which SLES runs afoul, or does Novell maintain that its Linux transgresses no such IP? It really doesn’t matter which way you take it–what matters is that the press plant, on Microsoft’s behalf, seeds of doubt regarding Linux’s IP foundation.

    Should companies running Linux worry that Microsoft may sue them? I think there’s exactly zero chance of that happening, first, because such a patent war would be extremely bad business for Microsoft, and second, because the Supreme Court’s recent smackdown of the lax tests for obviousness under which many of Microsoft’s spooky patents were granted has probably gone a long way toward neutering these threats.

    Finally, consider this. If Dell truly takes seriously Microsoft’s disingenuous patent FUD campaign against Linux, why would Dell open its customers to the threat of litigation by preloading a Linux operating system that falls outside of Microsoft’s protection?

    The answer, of course, is that no matter how often the vague patent concerns of nameless customers are paraphrased in news articles, Dell isn’t taking these threats seriously. Neither am I.

  • This week, Microsoft set loose “Longhorn” Server Beta 3, and the release is brimming with evidence of the hard work Microsoft has put into polishing the configuration wizards that work to abstract away the knotty details of the applications and services these interfaces front.

    Of course, the yellow brick road to the configuration wizard isn’t the only or best path to abstracting away complexity. Moving forward, the name of the game in IT abstraction is virtualization, a fact that Microsoft has acknowledged by taking steps to make Longhorn play better both as a guest and a host for server virtualization. Based on this late beta, however, it’s looking like Longhorn will fall short on its virtualization promises.

    Most glaringly, there’s the fact that Longhorn is set to ship without the built-in Windows Hypervisor that initially had been slated to bake virtualization right into Windows Server.

    I think that Microsoft would’ve done better to have built the open-source Xen hypervisor into Windows rather than to have built its own hypervisor. It seems that since the Xen hypervisor sits in its own layer below its guest instances, Microsoft could’ve shipped that GNU GPL (General Public License) component while maintaining proprietary licensing for the rest of its operating system.

    Granted, building your own hypervisor is a hefty task, and I’m ready to accept for now that Microsoft has good reason to chart its own course here, even if that means lagging behind its server OS rivals with integrated virtualization support. What’s more puzzling, though, are the holes in Microsoft’s strategies for making Longhorn Server a better virtualization guest.

    One of Longhorn’s long-expected new features is support for deploying the OS in trimmed down “core” configurations that will reduce overhead and attack surface by leaving out superfluous code.

    Why, for instance, should you have to tell Windows Server not to worry about updating to an Internet Explorer 7 revision that you’ll never be running on that machine anyway? These lower-overhead configurations are particularly important for virtualized servers, which spend the bulk of their time running headless.

    However, unlike most Linux distributions, for which you can deploy a minimal configuration and then pull in what components you need to support your applications, Longhorn Core supports only a handful of Windows roles, such as those for file services or Active Directory.

    As a result, you won’t be able to deploy your own applications on Longhorn Core, nor will ISVs be able to use Longhorn Core to deploy thin, Windows-based software appliances.

    The relative lack of rigor in Windows’ software management framework also means that the new command-line version of Longhorn’s server management tool won’t work on Longhorn Core because the tool depends on the .Net Framework, and the .Net Framework depends on too broad a swath of the full-blown Windows OS to run on the server core. Microsoft’s new command-line interface, PowerShell, won’t run on Longhorn Core, either, because PowerShell also depends on the .Net Framework.

    The virtual software appliance approaches now being pushed and facilitated by the likes of rPath and its appliance-producing rBuilder platform, VMware and its VMTN (VMware Technology Network), and Amazon.com and its Elastic Compute Cloud stand a solid chance of displacing the general-purpose OS platform model to which Windows Server has been designed to conform.

    I used to think that licensing represented the biggest roadblock to Microsoft’s participation in this model, but with the virtualization-friendly changes the company has already made to Windows Server licensing, Microsoft has demonstrated that it’s willing to adjust its business models to adapt to new virtual realities. However, without technology changes to match its business moves, Microsoft’s going to have a tough time keeping up with Linux in this space.