
  • My Microsoft Watch colleague Joe Wilcox is reporting today on some rather eye-catching Apple/Microsoft numbers:

    “Here’s a big number: 20 percent of Microsoft Office’s U.S retail sales are the Mac version, according to NPD. Here’s another: Mac users account for 10 percent of retail Windows Vista Business and Ultimate sales.”

    It doesn’t surprise me that Office for the Mac sells well. There aren’t many alternatives to Office for OS X—Apple’s recently rounded-out iWork ’08 is still rather young, and OpenOffice.org doesn’t yet fit as neatly with OS X as it does with Linux or Windows.

    What’s more, like the rest of us, Mac users live in a thoroughly Microsoft-dominated computing environment, so picking up a copy of Office along with your shiny new Mac probably strikes switchers as a safe bet.

    What I find more interesting is the volume of Windows Vista buyers who appear to be picking up a copy of Windows to run under VMware Fusion or Parallels Desktop as a sort of mega-lifeline to Windows.

    In his post, Joe asks whether it’s smart for Microsoft to aid Apple’s cause by continuing to build software for OS X. I contend that Microsoft would be better off if it did just the opposite—start worrying less about owning users’ computing environments and start focusing on selling people what they want on the platforms of their choice.

    If OS X, with its estimated 5 percent market share, can account for 20 percent of Office’s retail sales, and if a significant share of Vista purchases are going to users who wish to run the OS simply as a virtualized application layer, it’s worth asking how many dollars Microsoft is leaving on the table by limiting its cross-platform efforts.

    I know that I’ve been enticed at times by Microsoft Office, just not enough to quit using Linux on my home and work systems. For instance, when Office 2003 came out, I was enamored with InfoPath, but I couldn’t get past the application’s platform limitations.

    As a result, I more or less forgot about InfoPath, and today, people are wondering why no one’s heard of the application.

  • Today I attended a Sun Microsystems Chalk Talk on the company’s virtualization plans. The talk centered on two upcoming products from Sun, which ride together under the xVM banner.

    The product duo consists of Sun xVM Server, which is Sun’s long-awaited (by me, at least) Xen hypervisor implementation, and xVM Ops Center, which is a management product for xVM Server instances.

    Sun’s been doing a lot of great work in virtualization, but until very recently the relevance of Solaris’ virtual virtues has been limited to companies running Solaris applications–a significant population, to be sure, but too many enterprises depend on Windows and Linux to allow for wholesale adoption of Sun’s technologies.

    The BrandZ enhancements that came in the recently shipped update to Solaris allow for running Linux applications within Solaris Containers, but BrandZ is stuck emulating the Linux 2.4 kernel-based RHEL 3, and the technology offers nothing for Windows server shops.

    However, the Xen-based xVM will allow companies to choose Solaris without rejecting their existing x86 operating systems. If Sun can team xVM Server with an effective management layer (and xVM Ops Center does look promising), then it can earn the opportunity to win back those who’ve forgotten about Solaris in favor of the operating system’s less mature and arguably less capable Linux and Windows rivals.

    For enterprises, Sun’s xVM will mean yet another option for server virtualization to sit alongside VMware’s ESX Server, Microsoft’s “Viridian” and a gaggle of other virtualization products based on the open-source Xen hypervisor project, including Citrix’s XenEnterprise, Virtual Iron’s eponymous product, 3Tera’s AppLogic, and the built-in virtualization platforms from Red Hat, SUSE, Mandriva and pretty much any other Linux distribution that opts to implement it.

    We didn’t get to see a whole lot of xVM at the Chalk Talk — for instance, we were shown a Flash movie of the code in action rather than given a proper demo of the running code. However, the xVM code entered the OpenSolaris project back on Sept. 19, and if I had time to run through the byzantine OpenSolaris build process, I could, presumably, try it out for myself right away. Instead, I think I’ll wait for the Nevada build 75-based Solaris Express Community Edition to hit Sun’s FTP servers to take my first look at xVM.

    ***

    On an unrelated note, I was struck by how many of the Sun employees in the room were running Macs. The presentation we were shown was driven by the X11 version of OpenOffice.org.

    On the other hand, for the journalists and analysts in attendance, Windows seemed to reign, with the exception, of course, of my own Foresight Linux-powered notebook.

    One of the truest tests of Sun’s upcoming Linux-like Solaris respin, the one Sun’s calling “Indiana,” will be the extent to which Sun’s own employees prove willing to give up OS X in favor of Solaris.

  • Symantec has been turning heads with its suggestion that whitelisting might be a better way forward for ensuring the security of PCs than the blacklisting approach that current A/V products, such as those from Symantec, adopt.

    My colleagues Jim Rapoza and Larry Seltzer have recently weighed in on the idea: Jim doesn’t like it, and Larry is characteristically skeptical of it.

    I found it interesting that Jim cited potential discrimination against open-source software as a drawback to application whitelisting, since this is essentially the model on which popular Linux distributions have been built for years now.

    Linux distributions such as Ubuntu or Foresight or OpenSUSE consist of a core set of operating system components and a handful of oft-used applications, much like what ships with Windows, plus a library of other, optional applications that sit in networked repositories.

    If an application resides in the repositories of your Linux distributor, that piece of software has undergone some sort of vetting process. The vetting differs from distro to distro, and most Linuxes include packages with graduated levels of vetting. Ubuntu, for instance, gives the packages in its “main” component a higher level of testing and support than those in its “universe” or “multiverse” components.

    For most distributions, these packaged applications are signed with the distributor’s cryptographic keys, which gives users confidence that the packages are coming from a source they’ve chosen to trust.
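
    To make the whitelisting idea concrete, here’s a minimal sketch of a hash-based application allowlist check, written in Python purely for illustration (the allowlist path is invented, and real package managers do much more than this, including verifying package signatures against the distributor’s keys):

        import hashlib
        import sys

        # Hypothetical allowlist: one SHA-256 hex digest per line for each
        # binary the distributor (or the local admin) has vetted.
        ALLOWLIST_FILE = "/etc/approved-apps.sha256"

        def sha256_of(path):
            """Return the SHA-256 hex digest of a file, read in chunks."""
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    digest.update(chunk)
            return digest.hexdigest()

        def is_whitelisted(path):
            """True if the file's digest appears in the allowlist."""
            with open(ALLOWLIST_FILE) as f:
                approved = {line.strip() for line in f if line.strip()}
            return sha256_of(path) in approved

        if __name__ == "__main__":
            target = sys.argv[1]
            if is_whitelisted(target):
                print(target, "is on the allowlist; OK to run")
            else:
                print(target, "is NOT on the allowlist; refusing to run")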

    The downside of this application whitelisting approach is that sometimes, the applications or the application versions you want aren’t available in your distribution’s repositories. In these cases, you must package the applications yourself (and take on the vetting yourself, as well) or turn to others who’ve done the packaging work (and decide whether to trust those packagers).

    Is it a bummer not to be able to install any application you find floating out on the Internet? It depends on how highly you value the integrity of your systems. It’s the classic battle of Security vs. Convenience.

    One thing’s for sure. If you think you can skip through the Internet bending over to pluck and install any shiny app you see, you’re going to get bitten.

    Is application whitelisting a total solution? I don’t think such a thing is possible. However, I contend that traditional A/V cannot, never could and never will clean up after app install promiscuity the way that people wish it would, so better app vetting and a true commitment to least privilege models is the only way forward.

    Application whitelisting works for Linux. If Symantec can bring it to Windows, I say more power to them.

    What say you?

  • San Francisco free WiFi is dead. Long live San Francisco free WiFi!

    Earthlink, the ISP and erstwhile municipal WiFi build-out partner for various US cities, has hit upon some rough financial times–rough enough that the company has recently opted to slash half its workforce and scale back dramatically on its Muni WiFi ambitions.

    My home base of San Francisco is one of those US cities, and when word came out that SF’s deal with Earthlink and Google to roll out free wireless Internet to every corner of the city had fallen apart, I was feeling pretty disappointed.

    The state of wireless data access right now leaves quite a bit to be desired. The last thing that cell phone carriers want is to deliver the sort of service I’d like to consume: a plain pipe to the Internet that I can access through any computing device I choose. 802.11b/g networking is perfect for this–it’s broadly supported among devices and operating systems, and, through the magic of unlicensed spectrum, you can do it all without tangling with the FCC.

    Even though the Earthlink/Google deal threatened to saddle SF with a Muni WiFi monopolist for a term of 16 years, and would’ve meant free WiFi at a throttled rate, with broadband speeds reserved for those who paid $22 a month to Earthlink for the service, I was feeling optimistic about the deal.

    Earthlink’s service would have offered an alternative to the cable/telco duopoly to which the city’s broadband options are now constrained, and the free access would’ve allowed me to pop online to check my mail or otherwise fiddle with the Internet from any quarter of the city in which I live and work.

    The potential benefits were less clear for Earthlink. While the firm would be the sole enhanced service provider for the network, there was no guarantee that enough San Franciscans would become subscribers to make the investment worthwhile for Earthlink.

    A better way forward would be for those of us in San Francisco who would benefit from a citywide wireless network to build one of our own. As luck would have it, the city has the opportunity to do just that, with the help of a Google-funded startup named Meraki that’s out to expand Internet access through community-deployed WiFi mesh networking.

    Meraki is now distributing its wireless mesh repeaters to individuals in San Francisco, some of whom will feed the network by sharing a portion of their available bandwidth. Meraki’s mesh model looks interesting, and it appears to have the potential to deliver the city the free, open-ended wireless network we seek, leaving San Francisco’s local government free to train its attention and its dollars on longer-term broadband goals–like extending fiber as broadly through the city as possible.

  • Are you waiting for Service Pack 1 before deploying Windows Vista? Based on Microsoft’s raft of Windows announcements this morning, it looks as though companies sold on a “better SP1 than sorry” deployment strategy will be hanging tight until Q1 of 2008.

    Here’s the word, straight from the keyboards of Microsoft’s Windows PR team:

    “Windows Vista SP1 beta will be released in a few weeks to a moderate sized audience. At this time, SP1 will contain changes focused on addressing specific reliability and performance issues, supporting new types of hardware, and adding support for several emerging standards. Microsoft is targeting first quarter of 2008 for Windows Vista SP1 but will collect customer feedback from our upcoming beta process before setting a final date.”

    Naturally, however:

    “Microsoft encourages organizations not to wait for SP1 but instead deploy Windows Vista today in order to benefit from improved security, management, and deployment benefits.”

    Is Windows Vista worth skipping the wait? That might depend on whether or not you’re a volume license customer. When XP came out, Microsoft chose to spare customers who purchase Windows en masse from the phone-home software activation schemes that came along with retail XP copies. For Vista, no customer is spared the additional management tasks that accompany activation.

    I keep telling people that if I ran Windows on my desktops, I’d probably be running Vista, but considering how often I tear down and build up my systems, the added hassles of activation might have driven me back to XP. I can tell you that activation hassles are a big part of why eWEEK Labs tends to default to XP for our tests that require Windows.

    I’ll be testing the Vista SP1 beta once it becomes available, so stay tuned. Meanwhile, if you’re among those who are sticking with Windows XP, there’s a Service Pack on the way for you as well:

    “Microsoft will be releasing Windows XP SP3 to customers and partners in the next few weeks. It is a standard practice to release a service pack as a release nears end-of-life for the convenience to our customers and partners. Windows XP SP3 is rollup of previously released updates for Windows XP including security updates, out-of-band releases, and hotfixes. It will also contain a small number of new updates. This should not significantly change the Windows XP experience.”

    Rounding out this morning’s Windows announcements is a confirmation of the Windows Server 2008 delay that my colleague Joe Wilcox predicted early (and often):

    “Windows Server 2008: Microsoft’s first priority is to deliver quality products to their customers and therefore Windows Server 2008 is now slated to release to manufacturing (RTM) in the first quarter of 2008. … For more information on this, please see today’s Windows Server blog posting at http://blogs.technet.com/windowsserver/default.aspx.”

    Which service pack are you waiting for?

  • Last week I came across a video presentation of Ariel Shamir and Shai Avidan’s “Seam Carving for Content-Aware Image Resizing,” a method for resizing images by slicing out or padding uninteresting strips of pixels.

    If you haven’t seen this presentation on YouTube already, watch it here right now. I’ll wait.

    OK. Wasn’t that sweet?

    Personally, I’d love to use the method for stretching out photos I’ve taken in portrait orientation to fill my desktop wallpaper properly. When I watched this video, I was wishing that this resizing effect was available as a Gimp plug-in, or in some other form in which I could easily access it.
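
    If you’re curious what the method actually does, the core idea is simple enough to sketch: compute an energy map, use dynamic programming to find the lowest-energy vertical seam, slice it out, and repeat until the image is narrow enough. Here’s a rough, unoptimized Python illustration of that loop, using NumPy and Pillow (my own sketch of the technique, not the researchers’ code):

        import numpy as np
        from PIL import Image

        def energy_map(gray):
            """Gradient-magnitude energy: how 'interesting' each pixel is."""
            gx = np.abs(np.diff(gray, axis=1, append=gray[:, -1:]))
            gy = np.abs(np.diff(gray, axis=0, append=gray[-1:, :]))
            return gx + gy

        def find_vertical_seam(energy):
            """Cheapest top-to-bottom path, one pixel per row."""
            h, w = energy.shape
            cost = energy.copy()
            for y in range(1, h):
                left = np.roll(cost[y - 1], 1)
                right = np.roll(cost[y - 1], -1)
                left[0] = np.inf
                right[-1] = np.inf
                cost[y] += np.minimum(np.minimum(left, cost[y - 1]), right)
            # Backtrack from the cheapest pixel in the bottom row.
            seam = np.zeros(h, dtype=int)
            seam[-1] = int(np.argmin(cost[-1]))
            for y in range(h - 2, -1, -1):
                x = seam[y + 1]
                lo, hi = max(0, x - 1), min(w, x + 2)
                seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
            return seam

        def remove_seam(img, seam):
            """Drop one pixel per row, shrinking the width by one."""
            h, w, c = img.shape
            keep = np.ones((h, w), dtype=bool)
            keep[np.arange(h), seam] = False
            return img[keep].reshape(h, w - 1, c)

        def carve(src, target_width, dst):
            img = np.asarray(Image.open(src).convert("RGB"))
            while img.shape[1] > target_width:
                gray = img.mean(axis=2)
                img = remove_seam(img, find_vertical_seam(energy_map(gray)))
            Image.fromarray(img).save(dst)

        # e.g. carve("photo.jpg", 500, "narrower.jpg")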

    Then, yesterday, the Content-Aware Image Resizing video turned up on Slashdot, along with some very interesting reader comments. Two different commenters claimed to have implemented the resizing method themselves, and they both provided links to their source code.

    I headed over to the blog site of one of these commenters, an Australian programmer named Andy Owen, downloaded his code (which Andy released under the GPL3), and compiled it. The code worked pretty well.

    I tried this out on a picture I took of the Golden Gate Bridge, which I first scaled down to 640 pixels across (to save time) and then converted to BMP (to comply with Andy’s app).

    [Image: fullsize.jpg]

    Owen’s app only downsizes, and it only accepts 24-bit BMP files as input. What’s more, the app removes just one pixel-wide strip per run, so I had to cook up a shell script to run the app a bunch of times in a row to achieve a noticeable result.
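
    For what it’s worth, a Python rendering of that kind of driver loop might look something like this (the binary name, its argument order and the file naming below are placeholders, not Owen’s actual interface):

        import os
        import shutil
        import subprocess

        CARVER = "./seamcarve"      # placeholder path to the compiled binary
        CURRENT = "working.bmp"
        NEXT = "working-next.bmp"
        STRIPS_TO_REMOVE = 203      # 640 - 437 pixels of width

        shutil.copy("fullsize.bmp", CURRENT)
        for i in range(STRIPS_TO_REMOVE):
            # Assume each run reads one 24-bit BMP, removes a single
            # one-pixel-wide seam and writes the result to a second BMP.
            subprocess.run([CARVER, CURRENT, NEXT], check=True)
            os.replace(NEXT, CURRENT)
            print("removed strip", i + 1, "of", STRIPS_TO_REMOVE)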

    I Content-Awarefully Resized my picture to 437 pixels across. The picture came out looking pretty good, although the suspension cables of the bridge picked up some artifacts.

    [Image: resized.jpg]

    Another Slashdot commenter said that he knows a grad student of one of the presenters, and that the method was developed as a Gimp plugin initially, but that the project switched to a Windows-based application to pull off the live resizing you see in the video.

    Given this alleged background for the plug-in, and the quick implementation work of Andy Owen — and perhaps others — maybe I’ll soon have this functionality for my image manipulator of choice after all.

  • When Microsoft announced its plans to build a brand-new hypervisor into a future version of Windows Server, it seemed to me that a much simpler path to baking virtualization into Windows would be to join the ranks of vendors developing and shipping products around the open-source Xen hypervisor project.

    Microsoft must have judged that relying on an outside source–and a GPL-licensed one, at that–for a piece of technology as central as a hypervisor would be too risky or uncomfortable, leading the Redmondians to opt instead to go it alone.

    However, as the slipping ship dates for Microsoft’s home-baked hypervisor, Viridian, demonstrate, rolling a new hypervisor is no small task. What’s more, once Viridian does go live, the difficulty of convincing customers to entrust production machines to an unproven new technology threatens to stall Microsoft’s virtual ambitions unacceptably.

    Enter Citrix, which followed in a long tradition of making technology bets on Microsoft’s behalf by announcing an acquisition of XenSource, the company started by the founders of the Xen project to commercialize the technology.

    While I typically associate Xen with Linux–since Linux is the platform on which Xen was born and on which Xen is most often deployed–the folks at XenSource have their aim focused most keenly on Windows. On the Citrix investor call this morning, XenSource President and CEO Peter Levine summed that focus up well: “Our product focus is to provide the best Microsoft Windows virtualization experience on the market.”

    For Citrix, the move means entry into the server virtualization space, as well as a rather prominent seat at the open-source community table. The XenSource purchase is akin to the big leap into the Linux community that Novell undertook when it purchased Ximian and SUSE back in 2003.

    As with Novell’s Linux pickups, the biggest impact of the Citrix deal for XenSource will be the broadened customer reach that the company’s Xen-based products will enjoy as they tap into the Citrix network of some 5,000 channel partners.

    Also like the Novell/Linux deal, the Citrix move will probably spur concern from some that the open-source Xen might start moving in a less open direction. So far, however, the new partners seem to be saying the right things.

    The press release heralding the acquisition notes:

    “Under Peter’s leadership, Citrix is also committed to maintaining and growing its support for the Xen open-source community, led by XenSource co-founder and Xen project leader, Ian Pratt. Between now and the close of the acquisition, XenSource will work with the key contributors to the Xen project to develop procedures for independent oversight of the project, ensuring that it continues to operate with full transparency, fairness and vendor neutrality–principles that are critical to the continued role of Xen as a freely available open-source industry standard for virtualization.”

    For now, I don’t see a reason to doubt Citrix’s intentions. It’s the ecosystem that has sprouted up around Xen–buy-in from large open-source-oriented outfits such as Red Hat, Novell, and Sun; from smaller proprietary software vendors such as Virtual Iron and 3Tera; from hardware makers such as AMD and Intel; and from a host of non-commercial projects–that has built Xen into the promising rival to virtualization’s current king, VMware, that it is today.

  • As my Linux-Watch colleague Steven J. Vaughan-Nichols is reporting, SCO’s four-and-a-half year crusade to undermine the credibility of the Linux platform is now in its final throes.

    I realize that this choice of phrase doesn’t exactly convey supreme confidence that Linux is ready to leave its intellectual property FUD troubles behind, but that’s because the loudest voice in the Fear, Uncertainty and Doubt chorus–Microsoft–is singing louder than ever.

    The good news is that SCO appears finally to be finished. As Joe LaSala, Novell’s senior vice president and general counsel put it, “The court’s ruling has cut out the core of SCO’s case and, as a result, eliminates SCO’s threat to the Linux community based upon allegations of copyright infringement of Unix.”

    So, as I surmised back in 2003 (before I made the conscious decision to quit playing into SCO’s pump-and-dump stock scheme by continuing to comment on the case), the reason that SCO refused to lay its cards on the table was that SCO was bluffing all along.

    Not only did SCO never own the Unix IP that sat at the heart of its suit, but SCO knew that it didn’t own the IP (a point that Slashdot commenter MikePlacid lays out rather well here).

    However, I’m not yet ready to hoist a mission accomplished banner in the battle to clear Linux’s name, because the most damaging element of SCO’s attack still hangs over Linux. As long as Microsoft keeps crowing about the 234 Microsoft software patents on which Linux allegedly infringes, it can still be said that unresolved IP issues dog the Linux platform.

    I see three routes toward clearing up this IP cloudiness:

    1. Microsoft can announce that companies and developers are free to pursue the technologies that best suit their needs without fear of possible future litigation.

    2. Microsoft can identify the particular patents that Linux allegedly infringes, and give the Linux community the opportunity to resolve the issues.

    3. Novell, IBM and the rest of the Linux-embracing, patent-wielding IT vendor community can pledge to wage all-out patent war on Microsoft if the firm ever makes good on its shadowy patent suit threats.

    Does the recent court decision make you feel more comfortable about the Linux platform, or do you believe that the Penguin’s legal troubles are far from over?

  • As my LinuxDevices.com colleague Henry Kingman reported this morning, Hewlett-Packard has announced plans to acquire thin-client vendor Neoware.

    This looks like a smart move for HP. Endpoint security issues, including lost, stolen or otherwise subverted client systems, remain a major problem for enterprises, and thin clients offer one of the best solutions for limiting client exposure.

    Neoware’s m100 thin-client notebook, in particular, has caught my eye, as it seems to deliver the sort of light weight, long battery life and smart network-resource leveraging I’ve been looking for.

    What’s more, I’m looking forward to seeing where HP takes Neoware’s Linux-based product lines. Neoware’s Linux-powered thin clients offer connectivity to Microsoft’s Terminal Services, but Version 1.5 of the Linux-compatible RDesktop RDP client currently lacks support for RDP 6, the version of the protocol to which Vista and Windows Server 2008 now default.

    HP could get off to a good start with its Neoware stewardship by stepping forward to provide technical and monetary resources to the RDesktop project to bring RDP 6 support to Linux.

  • Recently, there’s been a great deal of hand-wringing over the possibility that Microsoft’s Office Open XML document format might be ratified as an ISO standard.

    The OOXML standardization critics point out that an ISO standard for office documents already exists, in the form of the already-ratified OpenDocument format, and that the existence of multiple overlapping ISO-standard document formats would be confusing and costly for governments, companies and individuals.

    The critics also have pointed out that the specification was crafted by Microsoft not as a standard but simply as a means of representing its legacy Office file formats in XML, and doing so in a way that wards off rival format implementers by including various Office and Windows dependencies.

    Without question, OOXML falls far short of being a universal office document exchange format. Considering Microsoft’s enormous backward-compatibility commitments, I’d go so far as to say OOXML’s own authors would probably agree ODF would be a superior format on which to base a new application.

    Now, does OOXML really "deserve" to join the ranks of 16,000-plus existing ISO standards? Probably not. But, to paraphrase a favorite movie quote of mine, I suspect that in matters like these, deserving sometimes has nothing to do with it. In this situation, I don’t think it matters much, anyhow.

    For one thing, I’m fairly certain that ISO rejection of OOXML will not prompt Microsoft to adopt ODF for Office. I’m also pretty confident the lack of ISO certification for OOXML would do nothing to dissuade current Office users from continuing to run the suite. Microsoft’s Office franchise has been doing rather well without standards-body-recognized formats so far, and many believe that even without specs, the popularity of Microsoft’s formats render them standard enough for government work already.

    What’s more, with or without the ISO’s blessing, OOXML is substantially more open than are Microsoft’s legacy binary formats.

    As a user of OpenOffice.org on Linux who works in a mostly Microsoft-formatted world, I’m somewhat of a stakeholder in the ODF-vs.-OOXML horse race, and I’d like to see OpenOffice.org take advantage of this marginal boost in openness. In particular, I’d like to see the vendors and projects that back ODF and OpenOffice.org attack the 6,000-page OOXML spec that Microsoft has offered to standardization bodies, with less focus on teasing out its ISO inadequacies and more on identifying ways to improve support for Microsoft’s legacy Office formats.

    As for government lobbying, ODF supporters would do better to encourage governments to ensure future document accessibility by archiving documents as PDFs–a format that Office, OpenOffice.org and any applications with printing capabilities can target equally well.

    Given the level playing field of PDF, all comers–be they ODF, OOXML or neither–can be judged not on their format alone but on the mix of functionality, platform support and cost that best matches the task at hand.