  • “Do more with less” has been the official mandate for IT departments everywhere for some time now, and considering our economic climate, that refrain will ring more loudly than ever in the year to come.

    However, before we return from the holiday break and set ourselves to work busily doing more of the same, I think it’s worth examining the areas in which we can accomplish more by doing less. In particular, I’m thinking of the mainstream desktop, that marvel of the late 20th century that, when studded with useful applications, serves as the tool belt of the modern knowledge worker.

    The problem is that companies are spending an inordinate amount of time and money fiddling with their workers’ tool belts, which means that companies are left with fewer resources to spend on the applications with which knowledge workers create value and fewer opportunities for IT departments to focus on contributing to their company’s bottom line.

    With all the time we spend on deployment, patching, malware scanning, backup, personality migration, license management and the like, you might think that the creation, care and feeding of individual desktops–each one snowflake-like in its uniqueness–is the end goal of companies’ IT departments. To be sure, there’s a product to buy for every client management ill under the sun, but the piling-on of more products to plaster over the architectural deficiencies of our desktops simply amounts to more tool belt fiddling.

    The fact that Windows Vista failed to excite users was interpreted as a failure for Microsoft, but perhaps the real failure is in the premise that a client operating system should be exciting at all. It seems to me that for a piece of software with the clear mission of managing hardware and hosting applications, silent and reliable operation should be considered the pinnacle of success.

    We’re working on a feature for the first issue of 2009 about desktop virtualization, which promises to reduce client management burdens by enabling companies to relocate their tough-to-manage physical desktop instances to somewhat less tough-to-manage virtual instances. While the trend toward ferrying our desktop workloads from one spot to another is a modest step, I do believe it’s a step in the right direction.

    In the end, our goal should be to relegate the client operating system to the background, ceding the spotlight to our data, our identities and our applications.

  • Microsoft recently announced plans to discontinue OneCare, the company’s consumer-oriented, subscription-based anti-malware product. Instead, Microsoft will offer a free-of-charge anti-malware offering called Morro.

    I know that conventional wisdom, certain government and industry regulations, and Windows’ “Danger, Will Robinson” Security Center alert shield all disagree with me, but I’m not convinced that anti-virus products (as we know them) are even worth what Microsoft plans to charge for Morro.

    That’s because no matter how much you pay (or don’t pay) in anti-virus licensing fees, these products carry considerable costs.

    First, as anyone who’s regularly used anti-virus software has experienced, the scanning, updating and heuristics functions of these products add up to significant system overhead. Who among us has never stepped out to grab a cup of coffee or chat idly by the water cooler while Windows cranks through some ill-timed system scan?

    Second, anti-virus products add considerable update and maintenance overhead to the systems on which they’re used. The blacklisting approach employed by traditional anti-virus, which checks files against constantly changing (and never fully comprehensive) signature databases, requires frequent updates to operate.

    What’s more, the anti-virus software itself must be updated, lest it become a vector for attack itself. I know of one company in particular at which unpatched anti-virus software was subverted in just this way.

    And while there are freely available anti-virus products out there, huge sums of licensing dollars are spent on these products each year, and management of these licenses by administrators with plenty of other CALs and seats and entitlements to wrangle doesn’t come for free, either.

    Finally, the costliest characteristics of traditional anti-virus products—which purport to follow helpfully behind users, cleaning up any messes that occur along the way—are the false sense of security and the poor administrative practices they enable.

    Anti-virus products are an integral part of the admin-rights-by-default assumptions around which the Windows ecosystem has long been organized. The fact is that as long as users are willing and able to run software that they have no reason to trust, we’ll continue to have malware problems.

    The solution to the malware problem is tighter lockdown, beginning with a clearer division between user and administrator roles than we’re currently accustomed to. Microsoft has begun to promote this division with User Account Control in Vista. However, UAC must be paired with whitelisting policies that prevent regular users from running arbitrary, untrusted applications.
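
    To make the contrast concrete, here is a minimal sketch in Python of the difference between the blacklisting approach described above and a whitelisting policy. The hash values and helper names are hypothetical placeholders, and real products distribute and sign these lists centrally; the point is simply that a blacklist must enumerate every bad binary ever seen, while a whitelist only has to enumerate the applications an organization has decided to trust.

    ```python
    import hashlib

    # Hypothetical hash sets for illustration only; real products manage,
    # sign and distribute these lists centrally.
    KNOWN_BAD_HASHES = {"placeholder-bad-hash"}        # blacklist: every known threat
    APPROVED_APP_HASHES = {"placeholder-trusted-hash"} # whitelist: only what IT has vetted

    def file_hash(path):
        """Return the SHA-256 digest of an executable on disk."""
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def blacklist_allows(path):
        # Anything not yet known to be bad is allowed to run.
        return file_hash(path) not in KNOWN_BAD_HASHES

    def whitelist_allows(path):
        # Only explicitly approved applications are allowed to run.
        return file_hash(path) in APPROVED_APP_HASHES
    ```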

    Rather than persist in the Sisyphean struggle to spot and quarantine bad applications, user organizations must take control of the applications they allow onto their endpoints, and security vendors must build out the products and services that facilitate this control.

    If you think I’m undervaluing anti-virus, I’d love to hear you tell me why.

  • Earlier this month, Microsoft CEO Steve Ballmer made blogosphere headlines by mentioning that Microsoft might look at embracing Webkit, the open-source Web browser rendering engine that powers Apple’s Safari and Google’s Chrome.

    I think that a Microsoft move to Webkit—not only for the company’s mobile platforms but for the full-size version of Internet Explorer—makes great sense and would yield dividends for users, for developers and for Microsoft itself.

    Rendering Web pages properly is the No. 1 job of a Web browser, and inconsistencies among different browsers can mean bad experiences for users and major hassles for Web developers and designers—who get the treat of papering over these wrinkles.

    If Microsoft joined Apple and Google in building its browsers around Webkit, we’d see more consistent rendering among popular browsers and, therefore, happier users and developers.

    Now, rendering inconsistencies can be (and, in many cases, certainly have been) viewed as opportunities for user and developer lock-in: Use and develop for Internet Explorer, or we’ll break your tables and send you off to sleep with the phishes.

    However, I believe that Microsoft has come to understand that pursuing competitive differentiation through standards chicanery is no way to win customers and partners—especially not on the Web.

    Microsoft’s decision to ship IE 8 in a standards-compliant-by-default mode, with the option of switching over to an “old IE” compatibility mode, is a good sign, as is the thoroughly HTTP- and XML-based make-up of the company’s new Azure cloud platform.

    Now, if a browser engine that “renders different” is a clear liability instead of a competitive advantage, then what’s the point of Microsoft paying to develop and maintain one?

    Microsoft can devote its IE rendering engine resources toward improving and extending the up-stack, differentiation-bearing parts of IE.

    Yes, Webkit is open-source software, but the project’s LGPL license permits its use in proprietary applications, so using Webkit won’t force Microsoft to open source anything.

    Microsoft would get to allocate its resources more efficiently, demonstrate that its open-source talk is in earnest, help assure greater rendering consistency for users and make life easier for developers.

    And Webkit should only be the beginning. There are a lot of open-source component resources out there, and the pieces that can take care of business for Microsoft while enabling the company to maintain healthy differentiation deserve a hard look.

  • A few weeks ago, managed hosting provider Rackspace bolstered its cloud hosting division with a pair of major new acquisitions—cloud storage vendor JungleDisk and virtual server provider Slicehost.

    I was struck by the announcements Rackspace made that day, but the part of the event that stuck most stubbornly in my head was the old news about the company’s messaging service offerings.

    The messaging services arm of Rackspace, called Mailtrust, serves up e-mail, contacts and calendar hosting via the familiar Microsoft Exchange platform, as well as through a less-well-known messaging option, called Noteworthy. This second option is based on the IMAP protocol, and works with Outlook and other IMAP-savvy mail clients.
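
    Because IMAP is an open standard, any IMAP-capable client or script can reach a hosted mailbox such as Noteworthy’s without vendor-specific plumbing. As a rough sketch using Python’s standard imaplib module (the server name and credentials below are placeholders), connecting and counting messages takes only a few lines:

    ```python
    import imaplib

    # Placeholder host and credentials for illustration only.
    HOST, USER, PASSWORD = "imap.example.com", "user@example.com", "secret"

    conn = imaplib.IMAP4_SSL(HOST)           # open an SSL-protected IMAP session
    conn.login(USER, PASSWORD)
    conn.select("INBOX", readonly=True)      # open the inbox without marking anything read

    status, data = conn.search(None, "ALL")  # retrieve the list of message IDs
    print("Messages in INBOX:", len(data[0].split()))

    conn.logout()
    ```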

    According to Rackspace officials, the company’s Exchange and Noteworthy services are integrated with each other well enough so that customers can deploy a mixture of the two solutions for users with relative seamlessness—while saving about 60 percent on the mailboxes they shift from Exchange to Noteworthy.

    Rackspace Chief Strategy Officer Lew Moorman summed up the arrangement with an ear-catching handle: the Exchange tax loophole.

    Under this scheme, a company can deploy mailboxes on Exchange for users who require Exchange-only features, such as the product’s mobile device synchronization support. For users who don’t require this functionality, a company can issue lower-cost mailboxes that still work with existing desktop clients.

    Now, I can’t tell you just how easy Rackspace makes it to flip from mailboxes hosted on one service to the other, nor can I tell you just how well the two services’ calendar, contacts and mail facilities actually mesh, because eWEEK Labs hasn’t tested these scenarios.

    However, Rackspace’s tax loophole concept illustrates how open standards and cloud hosting can converge to enable companies to cut costs without sacrificing the functionality that matters to users.

    I see opportunities for departments to cut back on their software licensing costs in a similar way—by using application, desktop and presentation virtualization technologies to make available pools of apps for workers to consume.

    For example, if a user can get his or her job done by using OpenOffice.org Writer rather than by tying up an available Microsoft Word 2007 license, that chargeback can be saved for another departmental project. Similarly, if a user is going to be working with an application served over XenApp or Terminal Services, then why not stream down a free Linux client to host the RDP viewer instead of tying up a full-blown Windows license for the job?

    Of course, deployment-lubricating technologies such as virtualization aren’t enough to realize these sorts of scenarios. Open standards—IMAP, in the Rackspace case—are absolutely vital to enabling these models.

    So you can throw license loophole-enabling onto the pile of reasons to demand open standards in every product or service you select.

  • Over the past year or so, I’ve been pretty breathless in my enthusiasm for cloud computing, and my enthusiastic writings around the topic have prompted a pile of reader mail containing many valid concerns over a possibly cloudy IT future.

    I’ll admit that I’m an unabashed cloud cheerleader–I view it as an IT game-changer with the potential to dissolve enough of the friction associated with new technology initiatives to enable IT departments to act with more bottom line-enhancing agility.

    You spin up just enough virtual infrastructure to serve a new project. If the project works out, fantastic. If it does not, you turn off the cost spigot by flipping the off switch on that virtual infrastructure.
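
    As a minimal sketch of that spigot, assuming Amazon’s EC2 service and the boto3 Python library (the machine image ID below is a placeholder), provisioning and de-provisioning collapse into a couple of API calls:

    ```python
    import boto3

    # Assumes AWS credentials are already configured; the AMI ID is a placeholder.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Spin up just enough infrastructure to serve the new project.
    resp = ec2.run_instances(ImageId="ami-00000000", InstanceType="t2.micro",
                             MinCount=1, MaxCount=1)
    instance_id = resp["Instances"][0]["InstanceId"]

    # ... the project runs, and either proves itself or doesn't ...

    # If it doesn't pan out, flip the off switch and stop paying.
    ec2.terminate_instances(InstanceIds=[instance_id])
    ```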

    If capital cost hurdles for new projects are lower, and if the sunk costs for failed projects are minimized, I think we’ll see more, better-tailored IT initiatives at companies of all sizes, which is good news both for organizations and for the IT industry.

    Now, that’s the breathlessly enthusiastic vision, but if cloud computing is to realize that vision, then the horde of platform, virtualization, remote hosting, and software as a service vendors that are all hitching their wagons to the cloud are going to have to fill in the gaps that eWEEK’s readers have rightly pointed out.

    Many of the reader concerns I’ve fielded boil down to wariness over surrendering control of organizations’ applications and data to some Web-based business: How can I trust these providers with my business? What happens to my data if the provider goes belly-up? What happens when I wish to take my business elsewhere?

    In order to allay the control concerns, vendors must demonstrate at least as much (and, to enable the fantastically agile IT scenario I sketched out, even more) data and application portability in the realm of the clouds as their customers now experience on the ground.

    For instance, if an organization’s servers are cranking along on Amazon’s Elastic Compute Cloud service, and Amazon undergoes some system-wide outage, there must be a clear path through which those loads can shift to another piece of the cloud, be it some other EC2-like service, an on-premises data center, or some mixture of the two.

    For software as a service providers, the portability question is even stickier, because a SaaS provider cannot be parted from its software as simply as you can move a virtual machine from one host to the next.

    Certainly, the portability story for both of these infrastructure strategies remains a bit, um, too cloudy to garner wholesale acceptance, and portability is far from the only cloud concern–security, offline accessibility and regulatory requirements present additional challenges.

    With that said, I don’t plan on handing in my Lando Calrissian Cloud City decoder ring any time soon, so I’ll be relying on our readers’ continued cloud-wary feedback to help keep me grounded.

  • The impending announcement of Microsoft’s cloud operating system at the company’s Professional Developers Conference has me thinking about how the struggle between open source and proprietary software models will play out in the cloud.

    There’s been much chatter about how the relocation of code from one’s premises to the virtual skies might threaten, render irrelevant or somehow derail the growth of open-source software by upsetting its natural licensing boundaries and advantages.

    First, while most cloud offerings are currently built out of open-source pieces–chief among them the Xen hypervisor and Linux operating system–the fact that the open-source licenses that govern these and most other free components do not require cloud services providers to share their contributions could pose a challenge to open source from within.

    However, as projects such as Linux and Xen move forward and accrue improvements, the cost to proprietary extension-minded cloud providers of either forgoing community-driven improvements or of taking on the ever-larger integration load of synchronizing their changes with the mainline will work to keep most cloud implementations open.

    More interesting to consider is the way that licensing issues around proprietary software, especially those related to redistribution and usage metering, will fade in importance for end users once the software and hardware resources that host it come to be bundled into common utility services.

    For instance, by bundling its software with cloud-based hosting, Microsoft can offer its customers a much simpler licensing picture, in which the tricky pricing-by-projected-usage models now enforced awkwardly by Client Access Licenses and the like can shift to pricing based on actual usage.
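
    As a back-of-the-envelope illustration of that shift, the Python snippet below compares paying up front for projected seats with paying for the usage actually consumed; every price and usage figure in it is hypothetical.

    ```python
    # Hypothetical figures for illustration only.
    projected_seats = 500
    cal_price_per_seat = 40.0             # up-front, per-seat (CAL-style) licensing

    actual_active_users = 320
    hours_used_per_user = 60
    price_per_user_hour = 0.02            # metered, pay-for-what-you-use pricing

    cal_style_cost = projected_seats * cal_price_per_seat
    usage_based_cost = actual_active_users * hours_used_per_user * price_per_user_hour

    print(f"Projected-usage (CAL-style) cost: ${cal_style_cost:,.2f}")
    print(f"Actual-usage (metered) cost:      ${usage_based_cost:,.2f}")
    ```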

    I imagine that Microsoft’s proprietary cloud offerings will achieve their share of success, and they will play at every level of the cloud computing stack, from slices of space in the company’s new data centers, to Hyper-V-powered virtualization, to a Windows Server-based platform, to higher-level stack layers such as SQL Server Data Services.

    However, these achievements will no more push open-source software out of the cloud than open source has pushed proprietary software out of its ancestral on-premises homes. In fact, the realities that prompted Amazon to build its cloud offerings out of Xen and Linux rather than, say, VMware and Windows, are as relevant today as they ever were.

    Namely, open-source software is available for the owning to whoever wishes to take it up, and it grows in value, rather than declines, with every new interested owner it accrues. For this reason, open source will continue to be a first option for building new platforms and services, whether they be cloud-based or not.

  • For the past six years or so, my office productivity suite of choice has been OpenOffice.org. In that time, I’ve watched the suite progress slowly but steadily toward the goal of being “just as good” as Microsoft Office.

    And yet, for my needs, the free software suite has been Office-like enough since Version 1.0. In fact, considering that I’ve spent the past six years using a Linux desktop, which Microsoft Office does not support, OpenOffice.org has been better than good enough.

    That doesn’t mean, however, that I lack for office productivity pain points. For one thing, I’m not happy with the way that fat-client suites such as Office and OpenOffice.org tend to strand my documents on whatever machine I’ve used to create them.

    I want to be able to start writing a review on the docked notebook at my office, add to it from the desktop I use when I’m in our lab and give it a read-through from my home PC–all without having to install office suites and configure network shares on every machine from which I may wish to access and modify my documents.

    Web-based productivity applications, such as those from Google and AdventNet’s Zoho, directly address my application and document siloing pain points. The applications work just as well on Linux as on Windows or most any other platform, and they do a good job with Microsoft and OpenOffice.org document formats.

    Given the rise of these Web-based alternatives and the progress they’re making toward erasing their traditional drawbacks through offline browser support and JavaScript performance improvements, it’s worth asking whether OpenOffice.org will continue to matter as we move forward.

    I believe that it will, but continued relevance is going to require that Sun Microsystems and the rest of OpenOffice.org’s stakeholders shake things up a bit.

    When I think back on my Linux desktop circa 2002, OpenOffice.org is probably the least improved, least innovative and slowest-moving major component of the lot.

    During that same period, my Web browser went from Mozilla 1.0 to Firefox 3.0, with a major architecture overhaul in between. Today, no one would describe Firefox as merely “good enough.” Mozilla’s browser efforts haven’t just managed to pile up user share; they’ve created a platform on which various other products and projects are now built.

    To be sure, through its influence on and implementation of the Open Document Format, OpenOffice.org has played an essential role in the standardization required to make a productivity application platform possible.

    Now, I’m looking to Sun and the contributors of the OpenOffice.org project to give themselves permission to blow up the suite and aim its successor not at Office’s taillights, but on the fat-client-optional roads that Microsoft is unwilling or unable to travel.

    For more on the state of OpenOffice.org, check out my reviews of OpenOffice.org 3.0 and Lotus Symphony 1.1 (registration required).

    You can also check out slide shows of OpenOffice.org 3.0 and of Symphony 1.1 (registration not required).

  • Earlier this week, I had the opportunity to get my hands on Research In Motion’s much-anticipated touch-screen device, the BlackBerry Storm.

    The new device, which had been known in rumor mill circles as the Thunder, offers up an ingenious solution to the thumb keyboard versus virtual keyboard dilemma: The Storm’s touch-screen is built atop a mechanical apparatus that turns the whole thing into one big button.

    I spent a bit of time tapping away on the Storm’s new 480-by-360-pixel display, and I found that the screen-button mechanism was balanced well enough so that no matter what part of the display I pressed, it felt as though I was hitting a real button, centered wherever I was pressing.

    The Storm sports an accelerometer, just like the iPhone, and when held in portrait mode the Storm’s virtual keyboard appears in RIM’s two-letters-per-key SureType mode. In landscape mode, the Storm’s keyboard switches to QWERTY mode.

    As impressed as I was with the traveling touch-screen (RIM is calling it ClickThrough), I was disappointed that the Storm ships without a Wi-Fi radio. The RIM folks who briefed me offered two reasons for the missing Wi-Fi:

    1. Because of the Storm’s cavalcade of different radios (Bluetooth, GPS, quad-band EDGE, single-band UMTS/HSPA, dual-band CDMA/EVDO Rev A), there’s just no room for Wi-Fi.

    2. Who needs Wi-Fi, when Verizon Wireless’ network is so fantastic? (I should mention that two people from Verizon Wireless were in attendance at the briefing as well.)

    As great as Verizon Wireless may be, and as effective as that guy with glasses and his army of pole workers who act out family connectivity scenarios and follow subscribers around may be, I want Wi-Fi in my smart phone.

    Still, even with the missing Wi-Fi, I think that RIM has an impressive device on its hands, and I’m looking forward to seeing how the hordes of BlackBerry thumb-keyboard enthusiasts take to the new display.

  • Recently, Cameron Sturdevant and I waded into the world of application whitelisting–a set of products and technologies aimed at ensuring the integrity of Windows clients by enforcing control over which applications are allowed to run.

    I think that whitelisting, when combined with diligent paring of user and application privileges, can go a long way toward granting workers leave to worry less about whether they are “security idiots” (to borrow a bit of Jim Rapoza’s phraseology) and focus more on getting their jobs done.

    However, where Web-based applications are concerned, the client security road map is much less clear, and, as Jim points out in his column this week on clickjacking, there’s no shortage of new Web-based routes through which code-wielding ne’er-do-wells can exploit our machines.

    As I’ve written recently, today’s Web browsers lack the plumbing to support the same sort of interapplication isolation that full-blown operating systems provide, but projects such as Google’s Chrome indicate that we’re at least moving in the right direction.

    Less promising is the current state of affairs around whitelisting on the Web. Application whitelisting relies on knowing where the code you run on your clients comes from, and opting to trust or not trust these code sources.

    On a client PC, even one with a large number of installed applications, it’s not too tough to go through and make reasonably informed decisions about which code to trust. On a Web page, this sort of trust audit is immensely more challenging, as snippets of script come from all over the place.

    Load up the NoScript extension for Firefox (which implements script whitelisting) and take a browse through your typical array of sites; you’ll find scripts and objects from Web analytics firms, advertising companies, providers of social networking widgets and numerous other partner firms.
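
    To get a feel for how sprawling that trust audit really is, here is a rough sketch, using only Python’s standard library, that lists the third-party hosts whose scripts a single page pulls in; the URL is a placeholder for whatever page you care to inspect.

    ```python
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse
    from urllib.request import urlopen

    PAGE = "https://www.example.com/"   # placeholder: substitute any page you want to audit

    class ScriptSources(HTMLParser):
        """Collect the host names of every external script referenced by a page."""
        def __init__(self):
            super().__init__()
            self.sources = set()

        def handle_starttag(self, tag, attrs):
            if tag == "script":
                src = dict(attrs).get("src")
                if src:
                    self.sources.add(urlparse(urljoin(PAGE, src)).netloc)

    parser = ScriptSources()
    parser.feed(urlopen(PAGE).read().decode("utf-8", errors="replace"))

    page_host = urlparse(PAGE).netloc
    third_party = sorted(host for host in parser.sources if host and host != page_host)
    print("Third-party script hosts:", third_party or "none found")
    ```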

    It would be nice to assume that the Web locations you’ve chosen to visit–and, therefore, to trust–monitor the assemblage of content, counters and ads as seriously as does your software vendor, but I can’t believe this is the case.

    We may need to move away from the Frankenstein-ian nature of today’s Web and introduce more control, more coherence and more specialization into the distribution end of the Web apps model–sort of a UPS or FedEx for Web apps.

    These distributors could gather together all the elements that constitute a Web application, apply sound vetting practices and serve them up under common domains, preferably along with an SSL certificate.

    I’m not calling for an end to the open Internet, but I admit that the rise of a trusted tier of sites could have a chilling effect on those outside of the system.

    However, unless we get a handle on the sources of our Web applications, the promising cross-platform application model that the Web can enable will have a tough time thawing the OS monoculture that defines today’s client computing landscape.

  • Recently, eWEEK Labs has been putting a handful of high-profile smartphones through their paces, which has led us to consider what elements would comprise the ideal business smartphone.

    While it’s easy to get caught up in the physical characteristics of a device, there’s more to an effective device than the slimness of its chassis or the thumb-friendliness of its miniature keyboard.

    As with any computing device, a smartphone is only as good as its software, and the most suitable smartphones shine not only for their out-of-the-box bits, but for their amenability to expansion through third party applications.

    When I reviewed Apple’s iPhone 3G recently, I was nearly impressed enough by the device’s combination of physical virtues, excellent bundled applications, and newfound openness to third party works to brand the popular handset as the best of its breed. Indeed, when my colleague Joe Wilcox described the iPhone 2.0 launch as the start of a compelling new platform, I was hard pressed to disagree.

    After further consideration, what tempered my enthusiasm for the iPhone was its oddly aggressive attachment to Apple’s music store front end, iTunes. It’s hard to expect IT administrators to deploy the heavy-set, consumer-oriented application for all of their iPhone users, but since iTunes is the sole route through which critical system and bundled application updates can reach the device, there’s no way around this requirement.

    Considering that Apple’s 2.0 firmware release ran alongside the debut of some fledgling enterprise deployment tools for the iPhone, I expected that Apple would, in time, untether its impressive smartphone from iTunes to better meet enterprise needs. Failing that, I assumed, third parties would emerge to help emancipate the iPhone from iTunes.

    I appear to have assumed incorrectly, not for lack of enterprising developers willing to cut the iTunes cord, but because of Apple’s unwillingness to allow it.

    Apple, which reserves the right to ban malicious or otherwise inappropriate applications from its users’ iPhone devices, recently vetoed an app on the grounds that it duplicates iTunes’ podcast-fetching capabilities.

    In fact, far from merely duplicating iTunes functionality, the offending application, called Podcaster, improves on iTunes significantly by allowing users to download podcasts without syncing with a desktop client.

    If Apple is serious about making the iPhone into an enterprise smartphone contender, excellent hardware and slick bundled applications won’t be enough to get the job done.

    Apple must also submit to loosening its grip on the iPhone enough to let the platform’s developer community take the device in new directions–even if these directions threaten to weaken the client software beachhead that iTunes helps establish on users’ PCs.