
  • Back in January, I wrote a column, “I want more from Firefox,” in which I described how my growing affection for Web applications was coming into conflict with my growing impatience with the immaturity of Web browsers as application hosts.

    Isolation between the Web pages and apps running atop my browser is what I sought from Firefox, for purposes both of security and of reliability. Shortly after I wrote that column, I managed to achieve a measure of the isolation I sought by running Gmail under Prism, a Mozilla Labs project for running individual Web apps in their own processes.

    Prism ensures that runaway Flash ads and required browser restarts won’t disrupt my email and instant messaging, and as a bonus, it’s done a good job of keeping my webmail sessions separate from my wife’s on our shared home computer.

    I tried to effect some further isolation through Novell’s AppArmor security software (which now ships along with Ubuntu), but Prism proved too challenging a target for my limited profile-building prowess. Here’s hoping that sandboxing Firefox or Prism with AppArmor becomes a priority for Novell or Canonical in a future release.

    I also flirted for a time with the NoScript extension for Firefox, which presents users with a dizzying array of JavaScript trust choices to make, constantly, while they browse the Web. While I don’t doubt that, in the right hands, NoScript can deliver safer browsing by preventing the scripts of evil-doers from running, I found the stream of script policy decisions too taxing, and I disabled the extension.

    Now it’s nine months later, and judging from roadmaps studded with nuggets like Firefox’s Ubiquity and Internet Explorer’s Web Slices, it seems that the industry’s major browser purveyors are more focused on mashing up separate sites than on keeping them isolated from each other.

    And yet, more or less everything I asked for back at the beginning of the year has been delivered, not by Firefox, but by the firm that builds the apps that prompted me to want more from my Web browser in the first place–Google.

    I read Google’s introductory Chrome comic book with choir-like head nodding. The new browser may not take on all the use cases at which Internet Explorer and Firefox are aimed, but the project’s focus on delivering for Web browsers the sort of application-hosting maturity that NT brought to Windows and OS X brought to the Mac is just what the Web application space needs.

    What’s more, Google’s decision to release its work under an open source license permissive enough for even Microsoft to adopt should create an app-hosting tide to raise all our Web-faring ships. At least, I hope so, since Google’s Chrome is for now a Windows-only affair.

  • During my recent interview with Red Hat CEO Jim Whitehurst, I was struck by his assertion that if you don’t need–and aren’t getting–bulletproof uptime from your desktop operating system, then it doesn’t make sense to be paying for it.

    He has a good point.

    The fundamental job of an operating system is running applications and managing hardware. There are both free and for-a-fee operating system options, which, given requisite hardware and application maker support, perform their core go-between task similarly well. If this is the case, and you’re paying for a particular client desktop, are you getting your money’s worth?

    Whitehurst was referring specifically to the mainstream consumer desktop market, which Red Hat has steadfastly declined to enter, but I think it’s fair to extend the analysis to enterprises as well.

    After all, the enterprise desktop is not so different from the consumer desktop, neither in usage models (where the consumer desktop’s job is arguably the more challenging and diverse) nor in what actually makes up either sort of desktop.

    Windows clients–in all their multiplying home, enthusiast, business or enterprise-oriented SKUs–are more or less the same. On the Linux side of things, there are greater distinctions between home and business versions than exist in the Windows world, but these mostly come down to support and a slower update pace.

    At this point, I don’t think that the enterprise desktop Linux offerings from Red Hat or Novell are sufficiently superior to their community Linux counterparts to extract license fees out of the majority of organizations and individuals that opt for Linux.

    What’s more, while Windows now enjoys a massive head start in hardware and software maker adoption, I don’t think that Windows, in its current vein of development, is sufficiently superior to free alternatives to hold them off forever. The limp uptake of Vista, even months past Service Pack 1, demonstrates how little customers value Microsoft’s recent client efforts.

    Getting the desktops we pay for means tighter lockdown and more granular permissions; it means no-hassle data encryption, both on-device and as that data moves out into the network; it means support for stateless configurations, with sessions that flow from one device to the next as users’ needs and locations change.

    As matters stand, these sorts of solutions can be built through a host of add-ons and consulting engagements. However, moving forward in a world in which basic operating system functionality has become a commodity, Microsoft, Red Hat and other platform vendors will be–and should be–expected to deliver enterprise-class functionality for the enterprise-size licensing dollars they seek.

  • Our own Scott Ferguson is reporting today on the licensing snafu stemming from the ESX Server 3.5 Update 2 that VMware began shipping on Aug. 1:

    VMware released an alert Aug. 12 to warn customers and partners about problems with an update to the 3.5 version of VMware ESX and ESXi virtualization products. The update is causing disruptions and virtual machines are failing to power on. VMware has posted a temporary fix and is working to fix the update.

    Three things about this on-premises outage jump to mind:

    1. I just missed it. I downloaded this update last Friday, but I hadn’t yet installed it on the ESX Server I use for testing in our lab. I’d been holding off on upgrading the ESX box from Version 3.0 of the product because the updated version of the Virtual Infrastructure client that the 3.5 release requires had regressed on 64-bit Windows compatibility. That regression has since been fixed.

    I’m generally a fan of prompt and even automatic updates–after all, when things go wrong with updates, we can always rely on virtualization to snapshot us back into action. Unless, of course, it’s your virtualization platform that gets broken. You win this time, partisans of update conservatism.

    2. Whether you believe they’re necessary or not, mechanisms designed to lock you out of the software running on your hardware are a major pain in the ass.

    These things exist solely to enforce the business models of the companies that implement them, and while that’s not necessarily a bad thing, vendors had better make doubly sure that the features they employ to enforce their licenses remain as transparent as possible to users.

    General absence of arbitrary lockout mechanisms: another reason to love open-source software.

    3. Catastrophic service outages are not the province of the cloud alone. Looking out at the headlines that Google’s been grabbing for its recent Gmail outages, you’d think that no one’s self-hosted e-mail or other key services ever went down, or that makers of on-premises software never push down far-reaching failures to their customers.

    Unless you’re hosting your own services, writing your own platforms, designing your own hardware, running your own network cables and generating your own electricity, you’re subject to the potential mistakes of your trusted providers. We must remind ourselves to plan accordingly.

  • Lately there’s been a lot of chatter about an operating system merger made in tech echo chamber heaven: Symbian plus Android.

    Most recently, my colleague over at the Storage Station recounted a conversation with Nokia Forum Director Tom Libretto that called to mind a familiar movie scene: “So you’re telling me there’s a chance!”

    Symbian is the popular mobile operating system developed by Nokia and others, the exclusive rights to which Nokia recently purchased from its partners before pledging to release the OS under an open-source license.

    Android is the still-unreleased open-source Linux+Java mobile operating system that Google has been assembling to form the guts of the magical, years-salivated-over GPhone.

    Hang on, did you notice that both of the sentences above include the phrases “open source” and “mobile operating system”? Oh man, these guys would be CRAZY not to merge, right?

    Wrong. Here’s why:

    1. There’s nothing to gain. The functional overlap between Android and Symbian must be close to 100 percent, yet under the hood the two are entirely separate codebases. How do you merge two completely distinct operating systems? The Android+Symbian chatter is akin to arguing that Windows and OS X could plausibly be merged. The work required to rip out parts of each system to make way for the overlapping bits of the other would take forever, and what would be the point?

    2. Symbian is NOT an open-source operating system–at least not yet. Open sourcing an operating system takes a long time. Sun announced plans to open source Solaris in 2004. It took a year for Sun to release some of its code under the OpenSolaris banner, and it wasn’t until this year that Sun released the first truly ready-to-use incarnation of OpenSolaris. And even the 2008 incarnation of OpenSolaris isn’t billed as production-ready.

    Android is already late. There’s no way that Google is going to hold up Android for four months, let alone four years, to wait for Nokia to dot and cross its IP i’s and t’s, and do so for nothing.

    Now, just because there’s a million-to-one chance that Symbian and Android might merge doesn’t mean that Nokia and Google can’t collaborate on the mobile OS front. Both Symbian and Android could greatly benefit from a measure of application platform standardization–different systems that run the same apps, perhaps with Java as a common language between the two systems.

    Am I wrong? Does a Symbian/Android merger have a Lloyd Christmas’ chance in Aspen of occurring?

  • Looking out at yesterday’s Amazon S3 (Simple Storage Service) outage through his Microsoft Watch-colored glasses, my colleague Joe Wilcox views the hosted storage slip-up as a selling point for Microsoft’s Software-plus-Services twist on cloud computing.

    The Software-plus-Services pitch goes something like this: Rather than jump into cloud-based services with both feet, organizations and individuals should pursue a blended strategy, based on traditional on-premises software, complemented by hosted services where appropriate.

    The Software-plus-Services strategy makes a lot of sense, and organizations investigating whether to shift vital systems from an on-premises to a hosted model shouldn’t allow themselves to get so caught up in cloud excitement that they overlook the relative immaturity of hosted services.

    With all that said, however, it’s important to keep in mind that the tag line “Software-plus-Services” doesn’t tell the whole story. Sitting behind that familiar and friendly word, “software,” is a chain of significantly stickier concerns. A more accurately descriptive slogan might be, “Software-plus-Hardware-plus-Power-plus-Bandwidth-plus-Real-Estate-plus-Management-plus-Services.”

    When you take into account everything that’s required for a business to host its own software—particularly for a startup out to break into a market, or an established player looking to avoid being bumped out of its place—putting up with a certain amount of downtime can be viewed as a cost of staying in business.

  • Back in March, when Apple unveiled the details of its eventual iPhone 2.0 upgrade, I opined that the company was on its way to seizing a slice of an enterprise smart-phone market in which the BlackBerry and the Treo currently reign.

    Now that I’ve tested the 2.0 firmware myself, I do still believe that the iPhone will become a popular enterprise device. For instance, I can report that the iPhone works quite well with Exchange-based e-mail, contacts and calendars, and that the new Cisco VPN client worked for me without a hitch.

    As my fellow labsman Andrew Garcia has outlined, Apple’s management tools, while leaving much to be desired, do indicate an encouraging change in direction for a firm that often seems allergic to considering enterprise needs.

    As with all Apple products, embracing the iPhone means relinquishing to The Steve some of the control and flexibility that organizations have come to expect. Treos and BlackBerry devices come with carrier and device options that mirror the diversity of the PC market, in contrast to the locked-down, single-source rigidity that marks the Mac side of the market.

    What makes iPhone 2.0 different from the Mac, however, is that while Macs offer up more or less the same functionality as do PCs, only wrapped in a sort of leather-bucket-seat veneer, the new iPhone balances its locked-down aspects with something unique and worthwhile: the App Store–a software management framework that’s absent not just from Treo and BlackBerry devices, but from Macs and Windows PCs as well.

    By making available to all iPhone and iPod Touch users an official networked repository of Apple-vetted applications, the App Store lets these users purchase, download, install and update new software to extend the functionality of their devices without having to locate, decide to trust and execute transactions with a sea of separate software developers.

    Now, I would prefer it if the App Store framework offered the option of connecting to non-Apple software repositories in addition to the officially sanctioned channel. In some cases, I might not want Apple injecting itself between me and my software vendor. For instance, if I’m running an application from Oracle or Salesforce.com, I want to make sure that important security updates don’t get stuck in some Apple vetting queue behind Crazy Magic Monkey Explosion IV and an assortment of 45 different tip calculators.

    What’s more, while Apple does currently provide a route through which applications that large businesses develop in-house may be installed on iPhones, that process lacks the networked delivery virtues of the official App Store channel.

    And then there’s the question of applications that Apple is unwilling to host in its repositories. As I write this, some of the applications that I quite liked using on my hacked iPod Touch are not yet, and may never be, available from the App Store. For instance, while there’s a decent AOL Instant Messenger client available in the App Store, there are no IM clients that can handle multiple services.

    With that said, I think it’s important to point out that the iPhone has already grown significantly more open than it was at its debut, when the only third-party applications welcome on the device were those piped through its Safari Web browser. Here’s hoping that in time, with customer encouragement, Apple might loosen its grip further.

    Last week in this space, I criticized Microsoft for continuing to burn cycles on superficial add-ons, such as multitouch support in Windows Seven, while more significant pain points for Windows customers remain under-addressed.

    As I see it, Microsoft is busying itself tacking up fanciful moldings around its flagship product while the Windows through which millions of paying customers access their hardware devices and software applications remain smudged and, in some places, cracked.

    The best example of this misplaced focus relates to the undisputed No. 1 reason why organizations and individuals continue to choose Windows above all other platforms: access to Windows’ massive software catalog.

    If you’ve deemed OS X or Linux unsuitable for your needs, chances are the misfit comes down to your need for software that runs only on Windows. And yet it’s poor software management that’s at the root of most users’ Windows woes, including the malware issues that keep Windows customers ever on Orange Alert.

    The trouble is that the sea of available Windows software contains both beneficial and harmful applications. It’s very easy for users to compromise the security of their systems and of their data by installing malware, or by failing to install security patches.

    Since installing applications can broadly affect a Windows system, installation rights are reserved for system administrators, who are presumably better qualified to vet applications before installing them, and to ensure that the software stays up to date once they’ve installed it.

    However, relatively few Windows users, whether in businesses or homes, have access to system administrators, and the businesses that do have these IT resources would be much better served by focusing them on core business needs.

    There is no shortage of products and services intended to help fill the software installation and update holes with which Windows is riddled, but if we’re ever to see a fundamental improvement in Windows application management, Microsoft must get involved.

    I find it hard to believe that Microsoft views media players and movie-editing software as important enough to bundle with Windows, and yet the best effort that Microsoft has so far mustered toward improving the state of software management is to throw up a red warning shield if a PC lacks anti-virus software.

    I’d like to see Microsoft expand Windows Update into a service that ISVs can plug into to centralize application updating, and through which customers can benefit from the efforts of application-vetting firms such as Bit9.

    Rather than vet and update applications one-by-one, users and administrators should be able to select from one or more trusted application vetting services, and configure their Windows systems to enable regular users to install applications and subsequent updates from these pre-vetted catalogs.
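    To make that idea concrete, here’s a minimal sketch, in Python, of how a client-side agent might consult a set of trusted vetting catalogs before letting a standard user run an installer. Everything in it is hypothetical: the catalog URLs, the JSON format and the function names are my own invention for illustration, not a description of any existing Microsoft or Bit9 interface.

        # Hypothetical sketch: check an installer against one or more trusted
        # application-vetting catalogs before allowing a non-admin install.
        # The catalog URLs and the JSON layout are invented for illustration.
        import hashlib
        import json
        import sys
        import urllib.request

        # Catalogs an administrator has chosen to trust (hypothetical URLs).
        TRUSTED_CATALOGS = [
            "https://vetting.example.com/approved.json",
            "https://corp-catalog.example.org/approved.json",
        ]

        def sha256_of(path):
            """Hash the installer so approval is tied to the exact bits on disk."""
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    digest.update(chunk)
            return digest.hexdigest()

        def is_approved(installer_path):
            """Return True if any trusted catalog lists this installer's hash."""
            fingerprint = sha256_of(installer_path)
            for url in TRUSTED_CATALOGS:
                with urllib.request.urlopen(url) as resp:
                    catalog = json.load(resp)  # e.g. {"sha256": ["abc123...", ...]}
                if fingerprint in set(catalog.get("sha256", [])):
                    return True
            return False

        if __name__ == "__main__":
            if is_approved(sys.argv[1]):
                print("Listed in a trusted catalog; a standard user may install it.")
            else:
                print("Not vetted; hold for administrator review.")

    The real work, of course, would lie in the vetting services themselves and in wiring a check like this into the operating system’s installer machinery, which is exactly why Microsoft’s involvement matters.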

    Perhaps best of all, a bulked-up Windows software management framework could serve the goals both of improving the experiences of millions of Windows users, and of allowing Microsoft to continue chasing Apple’s iPhone–which is on track to get its own pre-vetted software installation service next month.

    When considering alternatives to Microsoft’s Office productivity suite, one of the most important issues to evaluate is how well Office rivals such as OpenOffice.org handle Microsoft’s ubiquitous binary file formats.


    Over the past few years, eWEEK Labs has approached the MS Office to OpenOffice.org file format fidelity issue several times. Our conclusions haven’t changed much since 2004, when Anne Chen and I helped one of our corporate partners test the productivity suite pair for themselves:

    “Although OpenOffice.org does a good job of handling Microsoft Office file formats, small formatting inconsistencies will require reworking of complex documents.”

    While the phrase “small formatting inconsistencies” still sums up the situation fairly accurately, organizations and individuals out to bring the open source suite into their application mix could use a more rigorous means of measuring OpenOffice.org’s handling of MS Office formats.

    That’s why, when Adobe briefed me on Acrobat 9, I was particularly interested in Acrobat’s new “compare documents” feature, which analyzes two PDF documents and calls out the inconsistencies between them.

    I grabbed a Word-formatted reviewer’s guide document from Microsoft’s Web site, opened it up in Word 2007, and printed it to a PDF using Acrobat 9.

    Next, I opened the document in OpenOffice.org 2.4 and used Acrobat 9 to print it to a PDF document. I could have used OpenOffice.org’s built-in PDF export function, or Office 2007’s plugin-based PDF exporter, but I opted to stick with Acrobat in order to minimize inconsistencies that the differing PDF exporters might have introduced.

    I fired up Acrobat 9 (I tested with a beta version of the software) and pointed the application’s compare documents feature at my Office- and OpenOffice.org-rendered PDF documents. The result? Good fidelity overall, but various inconsistencies remained. This time, however, I had Acrobat 9 on hand to point the inconsistencies out to me.

    For instance, right on the first page of the document, OpenOffice.org rendered a 935 by 227 pixel logo at 936 by 234 pixels–a formatting inconsistency that resulted in a slightly misplaced logo, but one that I would have had a tough time putting my finger on without Acrobat 9’s help.

    Another odd, slight inconsistency came in the document’s table of contents, in which OpenOffice.org rendered 146 periods between the section name and page number, where Office had rendered 145 periods.

    I also downloaded a test version of the upcoming OpenOffice.org version 3, and compared that version’s Word document rendering to that of OpenOffice.org 2.4. Both versions appeared to render my test Word document exactly the same–a result that Acrobat’s compare function confirmed.

    Since support for Microsoft’s new Office Open XML formats is one of the new features in OpenOffice.org 3.0, I fetched another document from Microsoft’s Web site, this time in the DOCX format, and cheffed up some PDFs to gauge the open-source suite’s OOXML chops. This time, the formatting differences were much more pronounced and included misplaced images and jumbled bullet lists.

    I expect to see OpenOffice.org 3.0 improve its handling of OOXML documents as it moves closer to its release. I’ll be testing the suite’s OOXML capabilities as subsequent test releases emerge, and I expect that I’ll be using Acrobat 9 to help with those tests.
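    For those without an Acrobat license, a rough approximation of this comparison can be scripted with open-source tools. The sketch below played no part in my actual testing; it assumes you’ve already produced one PDF per suite (as described above) and that poppler’s pdftoppm utility and the Python Imaging Library are installed. It simply rasterizes each PDF page by page and reports which pages differ.

        # Rough stand-in for Acrobat 9's "compare documents" feature:
        # rasterize two PDFs with pdftoppm (poppler) and flag pages whose
        # rendered pixels differ. Assumes pdftoppm and PIL/Pillow are installed.
        import os
        import subprocess
        import tempfile
        from PIL import Image, ImageChops

        def rasterize(pdf_path, out_dir, dpi=150):
            """Render each page of a PDF to PNGs named page-1.png, page-2.png, ..."""
            prefix = os.path.join(out_dir, "page")
            subprocess.run(["pdftoppm", "-png", "-r", str(dpi), pdf_path, prefix],
                           check=True)
            # Filename sort is good enough for short documents like this one.
            return sorted(f for f in os.listdir(out_dir) if f.endswith(".png"))

        def compare_pdfs(pdf_a, pdf_b):
            with tempfile.TemporaryDirectory() as dir_a, \
                 tempfile.TemporaryDirectory() as dir_b:
                for name_a, name_b in zip(rasterize(pdf_a, dir_a),
                                          rasterize(pdf_b, dir_b)):
                    img_a = Image.open(os.path.join(dir_a, name_a)).convert("RGB")
                    img_b = Image.open(os.path.join(dir_b, name_b)).convert("RGB")
                    if img_a.size != img_b.size:
                        print(f"{name_a}: page sizes differ {img_a.size} vs. {img_b.size}")
                        continue
                    bbox = ImageChops.difference(img_a, img_b).getbbox()
                    if bbox is None:
                        print(f"{name_a}: identical")
                    else:
                        print(f"{name_a}: differences within region {bbox}")

        if __name__ == "__main__":
            compare_pdfs("office-render.pdf", "openoffice-render.pdf")

    A pixel-level diff like this is far blunter than the itemized inconsistencies Acrobat reported to me, but it’s enough to tell you which pages deserve a closer look.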

    For a walkthrough of my Acrobat-fueled Office vs. OpenOffice.org file format adventures, see our slide show, here.

  • If you asked a thousand people what Microsoft could do to Windows to improve the product, would even one of them describe a yearning to use his or her fingers to move objects around on a Windows desktop?

    And yet, as demonstrated at the recent D6 conference, Microsoft has chosen this feature, multitouch support for the Windows shell, as the seed from which excitement about the forthcoming Windows Seven is supposed to grow. In the near future, Windows users will be able to use multiple fingers to move items around on their desktops, spin their family photos and play an on-screen piano. Super.

    Adding multitouch to Windows probably seems to Microsoft like a pretty safe bet–Apple’s iPhone sports that sort of interface, and people love the iPhone. If Windows becomes fingers-friendly, people should start loving Windows, right?

    The catch is that Apple’s lovable features arrive in the firm’s products in some context of usefulness–at least as understood by Apple’s customers. Multitouch on the iPhone makes sense because it allows users to ditch their styli. The fancy-looking “pinch to zoom” functionality is key to making the most of the unit’s tiny display. On a notebook or a desktop, why bother?

    Along similar lines, consider Apple’s hardware-accelerated, compositing desktop feature, Quartz Extreme, which debuted back in 2002. Apple’s user interface came with fancy graphical effects, and its hardware came with 3-D-enabled graphics adapters. During the initial Quartz Extreme demos, Jobs demonstrated how offloading graphics chores from the CPU to the GPU could free up the central processor for other work.

    Compare this with Microsoft’s Windows Vista and its own compositing, hardware-accelerated Aero Glass interface. Before Vista, Windows hadn’t shipped with fancy graphical effects, and most computers running Windows did not come with 3-D-enabled graphics adapters.

    So, where Quartz Extreme meant that OS X could do its thing better, and offer users a way of getting more out of their existing hardware, Vista offered users a desktop facelift, one that required new hardware purchases.

    Rather than train all of its attention on chasing Steve Jobs and churning out dim shadows of Apple’s products (the same goes for the pursuit of Google online), Microsoft must refocus on the reasons why millions continue to choose Windows, and set about honing that value proposition.

    Organizations and individuals don’t choose Windows for eye candy, and they don’t choose Windows to smash their comfortable UI paradigms. Windows draws its strength from its massive software and hardware ecosystem, and this is where Microsoft must shore up its efforts.

    For instance, users choose Windows because they consider Windows to be the platform most likely to support arbitrary peripheral hardware. So, if a printer manufacturer refuses to write a Vista driver for a 3-year-old printer, then Microsoft should write the driver itself.

    Before you retort that Microsoft couldn’t possibly afford to take on these sorts of development responsibilities, consider that this is exactly how the Linux and open-source crowd operates, and does so with fewer resources than Microsoft boasts. Honestly, how many hardware drivers could have been written for the price Microsoft paid for the ubiquitous “The Wow Starts Now” billboards that heralded Vista’s release?

    Users choose Windows because Windows boasts the biggest software catalog of any platform. However, it’s becoming increasingly difficult for individuals and organizations to know which applications they should trust enough to install on their systems, and which might carry malware that hasn’t yet made it onto hopelessly reactive badware lists.

    What’s more, once users have installed applications on Windows, they’re forced to contend with a system tray full of separate little software update applets, each with its own update schedule and alerting routine.

    Microsoft could serve its users much better by offering some sort of centralized software management facility for Windows, complete with mechanisms for vetting applications (or for enabling third parties to vet applications) and for pushing down updates consistently.

    Printer drivers may not be as sexy as multitouch (depending, I suppose, on what you’re printing), but no one is buying Windows for sexiness. We’re over here trying to get some work done. I suggest that Microsoft leave the candy coatings to the aftermarket, and get back to business.

    When Microsoft, sometime in the first half of 2009, makes good on its recent pledge to roll full support for the OpenDocument format into a second service pack for Office 2007, my reaction will be, “It’s about time.”

    In the meantime, we’re left to ponder why Microsoft has changed its mind about embracing ODF, and what the change will mean for the organizations and individuals that create and consume office productivity documents.

    Microsoft is citing customer demand to explain its change of heart. While I doubt that many customers asked for ODF in particular, it’s clear that there’s sufficient demand for better document format standards in general. Microsoft believed that it could satisfy these demands by crafting its own format, and by pushing it through the ISO standards process.

    It seems, however, that Microsoft underestimated the amount of work required to forge its own alternative to ODF, and overestimated the return on investment that the Redmond giant would enjoy for its Office Open XML efforts; so far, that return has been meager. By shipping an Office 2007 product that defaults to a brand-new XML-based format, Microsoft has managed to annoy a broad swath of its customers without appeasing the subset who are calling for open formats.

    For the majority of customers, who don’t particularly care about the new format, the switch to OOXML means jumping through hoops either to reconfigure their Office 2007 installations to default to Microsoft’s binary Office formats, or to install add-on software to OOXML-enable previous Office versions.

    Here I’m reminded of Office 2007’s other major feature, the Ribbon interface, which requires users to change the way they work in order to push more Office features to the surface and make it clearer to everyone what great value they’re getting out of running a fat-client productivity suite.

    For the customers who do care about open formats, OOXML does not–and probably cannot–fit the bill. The version of OOXML that ships with Office 2007 is not even the same version of the format that managed (through much controversy) to earn ISO’s stamp of approval. Indeed, the differences between the on-paper OOXML and the one that lives in Office are great enough that Microsoft has stated that Office won’t support the standardized version until the next iteration of Office ships, at a date that remains to be determined.

    Since most Office users would be happy to continue using Microsoft’s old binary formats, and since those for whom open standards are important would probably prefer ODF or PDF formats anyhow, I won’t be surprised if OOXML quietly dies before that future Office iteration ever sees the light of day.