Was I Too Tough on RHEL?

In my recent review of Red Hat’s Red Hat Enterprise Linux 5, I gave the distribution’s brand-new Xen virtualization features a bit of a hard time with regard to the limitations of its management tools. Relative to the products of VMware, the current market/mind share leader in x86 server virtualization, Red Hat’s Xen implementation has a decidedly do-it-yourself nature–less pointing and clicking and more configuration file editing and documentation digging.
During an e-mail exchange about the review, a reader challenged me on whether it was fair to criticize RHEL 5’s graphical user interface limitations and remarked on his disgust at finding how many tasks in VMware are point-and-click-oriented. Disgust seems to me like a pretty extreme reaction to a software interface, but I think that people chafe at the idea that an inferior product with a newbie-oriented interface–the archetype of which is Windows–might trump an arguably technologically superior option that greets you instead with a blinking command-line cursor and a trove of config files.
As the reader also pointed out, GUIs typically make things easy by limiting flexibility, and having your hands tied isn’t fun, even if the binding is being done by a friendly wizard. To be sure, I felt my own GUI-borne pain during my testing of VMware’s Virtual Infrastructure 3, which regresses significantly from VMware’s typically good cross-platform compatibility record by supporting only Windows for its VI3 management client. Along similar lines, while testing Virtual Iron 3.5, I felt the pinch of lacking direct control of the virtualization nodes that Virtual Iron governs instead through the graphical interface to its management server.
However, the fact that GUIs come with their own limitations can’t completely eclipse the advantages that they can provide. For instance, I value the discoverability of GUIs–when I sit down in front of a new application, a nice GUI makes it much easier to poke around and figure out what the application is capable of and to try things out and get an idea of what works and what doesn’t. What’s more, using a well-designed GUI means not having to memorize commands and arguments and not having to worry about making config file syntax errors.
Developing fluency in the command-line arcana that govern your applications of choice is a worthwhile goal, as such fluency enables you to speed, gurulike, through your tasks; to do so remotely just as easily as locally; and to script chains of operations for future use. There’s no denying, though, that a requirement for command-line gurudom presents a limitation on flexibility in its own way, as steep learning curves–be they for proprietary or free software–contribute to product lock-in.
Rather than see a winner crowned in this age-old GUI vs. CLI (command-line interface) debate, I’m on the lookout for product interfaces that blend the benefits of the GUI and CLI approaches. For instance, I was impressed by the Web interface for the ZFS (Zettabyte File System) in Sun’s Solaris 10, update 6/06, with which I could point and click my way through ZFS operations while having the GUI helpfully display the CLI equivalents of the commands I executed. Another promising approach is that offered up by Microsoft’s new Windows PowerShell CLI, the object-oriented nature of which lends itself to the sort of discoverability I value in graphical interfaces. PowerShell’s Get-Member command made it easy for me to figure out what other commands could do for me and how I could use them.
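To give a flavor of what I mean, here’s the sort of poking around that PowerShell invites. (This is a minimal illustration rather than output captured from my testing; both commands are stock PowerShell.)

    # Ask PowerShell to describe what process objects can do:
    Get-Process | Get-Member -MemberType Method

    # Discover which cmdlets deal with Windows services at all:
    Get-Command -Noun Service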
So, was I too tough on RHEL 5? Considering that the raison d’etre for Red Hat, as a company and a product, is to herd the best of the free software that’s available out there into easier-to-use, better-tested and more manageable configurations, and considering that users of server virtualization are almost certainly more interested in manipulating the contents of their virtual containers than in fussing with the containers themselves, I say no.
What do you say?

Latest GPL Draft Is a Step in the Right Direction

After a few months’ delay–during which the Free Software Foundation mulled over how to make the world safe for GNU-manity in the face of Microsoft and Novell’s patent, collaboration and baby-seal-clubbing accord–there’s a new draft of the GNU General Public License out for comment.
For now, the most promising thing to report about the draft is that the leader of GPL 2’s most prominent project doesn’t hate it.
While that sounds like pretty faint praise, the fact that Linus Torvalds–the guy who founded and who still heads the Linux kernel project–is greeting the new draft with at least guarded openness is a big step in the right direction for the GPL update effort. After all, no matter how much work goes into the update process, GPL 3’s value ultimately must be judged by the size and quality of the free software commons that the license helps delineate.
From the start of the process, however, the danger has been that the FSF might reach too far and produce a GPL so focused on ensuring software freedom for everybody that next to nobody would choose it for his or her work. For example, GPL 3 was to take on the so-called SAAS (software as a service) loophole, in which someone could create and distribute under the GPL an Internet application, which some other party could then modify and serve to the public over the Web without having to release the changes to the code.
The previous GPL 3 draft offered optional provisions that would define serving an application over the Web as distribution and would require those serving the code to release their changes. In a nod toward clarity in the license, however, the FSF has split those licensing options into a separate license, the Affero GPL, with which GPL 3 will be compatible. The result is a leaner, clearer GPL 3 that fits with the expectations of the current GPL crowd while holding out a new licensing option for those who choose it.
Less clarifying is the FSF’s revised approach to combating “TiVo-ization,” in which a party distributes software (in the case of TiVo, Linux) to users in a device or appliance, duly offers his or her modified code for download, but prevents users from running modified code on their device or appliance. In the previous GPL 3 draft, the FSF called for vendors to cough up encryption keys to free these appliances, and this was one thing that rubbed Torvalds and others the wrong way. The current draft takes a different and perhaps more palatable-sounding tack, but tracing through its conditions and provisos leaves me doubtful that it will stand up to greater scrutiny.
With only one discussion draft left before the license is set to go gold, I’m concerned that the FSF may be spending too much time puzzling over what I’m tempted to call unasked questions of software freedom, and that these exercises in defining what constitutes such freedom might be obscuring the bigger picture.
For instance, Sun Microsystems has spoken of potentially licensing its OpenSolaris project under GPL 3. Such a move presents a great opportunity to expand the pool of GPL-licensed code and the reach of free software, provided that the FSF can coax the Linux kernel project to move to GPL 3 as well. With both platforms available under the same license, there’d be all sorts of new code-mingling opportunities that aren’t now possible under those projects’ incompatible CDDL (Common Development and Distribution License) and GPL 2 licenses.
If the FSF were to achieve nothing more with GPL 3 than the internationalization, clarification and software-patent-proofing tweaks required to bring the 16-year-old GPL 2 into sync with today’s software landscape and attract new participants to the community, the license would represent a considerable leap forward for free software.

The Year of OpenSolaris

At the end of 2006, ZDNet blogger Paul Murphy made what I thought at the time was a poor prediction: that 2007 would see Sun’s OpenSolaris eclipse Linux in the size and activity of its developer community, and that all OS development projects, save Windows, would adopt OpenSolaris’ organizational structure and licensing provisions.

Now that we’re a few months into 2007, I still think that the prediction–if judged by the metric of whether it’s likely to come true–was a lousy one. While Solaris is an awfully compelling OS, and while I’m convinced that the OpenSolaris effort is for real, I think that OpenSolaris has about as much of a chance of pushing Linux to the sidelines this year as Linux has of knocking Windows off the mainstream desktop. That’s not to say that either of these scenarios couldn’t happen eventually, but twelve months is a pretty tight timeline.

Measures of accuracy aside, the prediction scores pretty well as a piece of writing, because I find that my thoughts often return to it, particularly when Sun makes a move that strikes me as either beneficial or detrimental to the forecast’s eventual fulfillment.

Sun appears to have taken a step in the right direction recently when the firm hired Ian Murdock, the “ian” of Debian GNU/Linux, to fill the fanciful-sounding role of Chief Operating Platforms Officer. In a blog entry on the topic, Murdock described his new job as “head(ing) up operating system platform strategy,” which sounds like a good perch from which to address my number one Solaris peeve: software packaging.

I’m holding out hope because the Debian distribution that Murdock helped found sports an excellent software packaging system, one that makes my life a lot more pleasant as I conduct the many software installations and updates and the OS patching and testing that fill a product reviewer’s work (and often home) hours. A case in point: I have reviewed significantly fewer test releases of Solaris (since OpenSolaris was born, there have been many) than I would’ve liked, mostly because each review means downloading a few GB of CD images and running through the installer program.

In contrast, when I’m following the test cycle of Debian or of its popular child distro, Ubuntu, I can track the process continuously, and the cost of reviewing the latest code is typing a few words at a command line, with which I direct my test machine to go fetch and install all the latest software from the network repositories I’ve selected. It’s not just for testing, either–if the system I’m using is for production, I choose stable repositories instead, and install security updates or new applications using the same tools.
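Concretely, those “few words at a command line” look something like the following on a Debian or Ubuntu system. (A sketch: which branch you track, stable or testing, is governed by the repository lines in /etc/apt/sources.list.)

    # Refresh the package indexes from my selected network repositories:
    apt-get update

    # Fetch and install everything that has changed since the last sync:
    apt-get dist-upgrade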

While there are a couple good Solaris volunteer packaging efforts out there, I think it’ll take a software management system overhaul to bring Solaris’ software tools up to the level to which Linux users, administrators and developers have become accustomed. Without an overhaul of this sort, I can’t see OpenSolaris overtaking Linux.

There’s actually already an OpenSolaris distribution, called Nexenta, that combines the Solaris kernel with all the userland applications of Ubuntu Linux, including Debian’s slick package management system. I see a lot of potential in the approach, but the project could really benefit from more support from Sun. So far, however, it’s been important to Sun that OpenSolaris distributions sink or swim on their own, and there’s been no indication whether the firm might someday bring Debian’s packaging tools into canonical versions of Solaris. Whether or not Sun opts to pursue a Debian tools tack for Solaris, it’d make sense to contribute some resources to the Nexenta project.

Another, and potentially better, route to Linux-challenging packaging tools for Solaris could be Conary, the software management tool that organizes rPath Linux and a host of derivative distributions based on rPath’s rBuilder platform. Despite all my good experiences with Debian’s software tools, I’ve found that creating and managing new packages is quite a bit easier with Conary. Again, it’d be smart for Sun to stow its “sink or swim” concerns and to devote some resources toward building a Conary-based distribution of OpenSolaris.
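For those who haven’t seen it, a Conary recipe is essentially a small Python class; as I recall from my testing, a minimal one looks roughly like the sketch below. (The package name and archive URL are hypothetical, and I’m reconstructing the recipe format from memory rather than from a live rPath system.)

    class HelloWorld(PackageRecipe):
        name = 'helloworld'
        version = '1.0'

        def setup(r):
            # Fetch the source archive, then drive the familiar
            # configure/make/install steps through Conary's helpers:
            r.addArchive('http://example.com/helloworld-1.0.tar.gz')
            r.Configure()
            r.Make()
            r.MakeInstall()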

None of this will likely be enough to turn the open source operating system world on its head in the next eight months, but if Sun keeps making smart moves around OpenSolaris, such a reversal is certainly within reach.

Red Hat Needs to Lighten Up

When I learned that Red Hat Enterprise Linux 5, a big release to which I’ve been looking forward for some time, was coming out on March 14, one of the first thoughts that crossed my mind was, “Great, when’s CentOS 5 coming out?”
Even though Red Hat has always been very nice about providing us with entitlements to test its products, entitlements are a major pain to mess with. Sometimes our entitlements expire, and I have to head over to the Red Hat Network to unentitle some machines in order to entitle others. Frankly, from our perspective as a testing organization, and as a group of people who often build up and tear down systems in different combinations, RHEL is actually more of a pain to work with than is Windows, for which there were (until Vista, at least) volume license copies that we could use flexibly and without expiration.
Even better are free Linux distributions, which you can get in all sorts of forms and from all sorts of locations. Debian is my favorite example of deployment flexibility: I download a small netinstall image, from which I boot a virtual or physical system, choose a network mirror that’s close to me and pull down just the packages I need, in their up-to-date form.
Just because Debian is much more pleasant to deploy, however, doesn’t mean that I get to ignore RHEL, which is probably the most important Linux distribution around, in terms of hardware and software certifications and in terms of its prominence among the enterprise infrastructures that our readers are running.
Fortunately, there’s CentOS, an open source project that takes the source RPMs that Red Hat diligently offers up for public download, strips out Red Hat’s trademark-encumbered artwork, and improves–significantly–on RHEL by returning to it the flexibility that free distributions like Debian enjoy.
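The mechanics behind a rebuild like CentOS’s aren’t exotic; anyone with the standard rpmbuild tool can reconstitute binary packages from Red Hat’s published sources. (A hedged sketch; the package name here is a hypothetical stand-in.)

    # Rebuild a binary RPM from one of Red Hat's published source RPMs:
    rpmbuild --rebuild somepackage-1.0-1.el5.src.rpm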
The big drawback to CentOS, however, is that while CentOS, practically speaking, really is RHEL, CentOS isn’t RHEL enough for Red Hat to support or furnish services for it. What’s more, running RHEL by some other name puts you in an unclear support situation with ISVs who’ve certified their products for RHEL–just ask Oracle, which in recent weeks has been expressing consternation over certification and its own RHEL rebrand.
Back when Red Hat first divided its free, support-optional Red Hat Linux product into the free, bleeding-edge and community-supported Fedora and the metered, stable and Red Hat-backed RHEL, it probably made good business sense to bid adieu to any customers unwilling to pay per system. If you wanted a Red Hat distribution with a long support term and a stable development arc, you had no other choice–whether or not you planned on consuming the support for which you’d be paying when you bought Red Hat’s free software.
However, now that RHEL may be had for free in the form of CentOS, does it really make sense for Red Hat to maintain its self-imposed separation from customers who want a support-optional way to run RHEL? While it’d certainly make life easier for me and for others with needs similar to mine if Red Hat let itself loosen up again, I contend that it’d be Red Hat itself that’d stand to benefit most from the move. For one thing, by allowing CentOS to stand between itself and a growing segment of its user community, Red Hat is missing out on important feedback, bug reporting and mindshare–which may sound fluffy, but it’s the stuff of which Red Hat’s dominant status in the Linux world was built.
Also, Red Hat is allowing its brand to become watered down–as I mentioned above, there’s currently uncertainty regarding the support status of rebranded versions of RHEL. However, as time goes on, and rebrands like CentOS and Oracle’s Unbreakable Linux prove themselves to be truly compatible with RHEL, it’s hard to imagine ISVs turning down the dollars of companies running these clones. Speaking of turning down dollars, Red Hat’s decision to cede a growing portion of its market to CentOS means closing doors to services money from customers who want to run RHEL, but do so with more of the flexibility to which free software is heir.
To those who counter that Red Hat can’t stay afloat without requiring all RHEL users to pay for support contracts, whether they want them or not, I say that if Red Hat support delivers real value, then Red Hat has nothing to worry about. If, however, Red Hat’s health truly relies on leveraging customers to buy something they don’t actually need, then the Linux giant is destined for a fall.

Pondering Linux Preloads

In recent weeks, there’s been quite a bit of buzz surrounding Linux and its chances for earning a spot as a preloaded option on the client PCs sold by major computer OEMs.
Buzzing most loudly has been Dell’s IdeaStorm customer suggestions site, which has turned up a ton of support for the notion–even though many are discounting this support as the ditto-headed diggs of Linux zealots who can’t be counted on to put their credit cards where their clicks are.
Can Linux preloads contribute meaningfully to the sales and success of a major PC OEM? I believe they can, provided that these OEMs keep in mind the mantra that’s driven Linux to where it is today: free.
Now, lest you chide me by pointing out that acquisition fees are only one part of a cost calculation, and so on: it’s not the monetary outlay that I find most costly.
Beyond acquisition costs, the greatest benefits of a free-as-in-free operating system, versus a proprietary OS like Windows, are related to flexibility: no hassles when getting updates (WGA); no hassles for reinstalling your OS (recovery disks); if you want to put your OS on a second machine, you’re allowed; if you want to give a copy to your friend, you’re allowed.
One of the problems with the limited forays into Linux preloading we’ve seen from major OEMs so far is that these initiatives have focused on Linux distributions, which, like Windows, carry per-system fees.
What’s tougher still is that, unlike Windows, these distributions–namely, RHEL (Red Hat Enterprise Linux) and SLED (SUSE Linux Enterprise Desktop)–require that new fees be paid each year to keep up with security and bug-fix updates.
Obviously, there’s a place for RHEL and SLED and their pricing structures, but as client operating systems, I don’t believe they have what it takes to displace Windows while asserting per-system licensing limits similar to those with which Microsoft saddles users.
The free versions of Red Hat’s and Novell’s Linux operating systems–Fedora and OpenSUSE, respectively–aren’t particularly well-suited to go up against Windows, either, due to the very short life cycles of both distributions.
Red Hat or Novell could extend support terms for their per-system-fee-free distributions, but as long as these companies hope to bring in license dollars from their enterprise distributions, it would seem that neither will be inclined to do so.
I’ll offer to Dell, Hewlett-Packard and Lenovo the same advice I tried to hand Oracle when Larry Ellison and company were shopping for a Linux distribution to ingest: Look to Ubuntu.
In my opinion, Ubuntu Linux is the only distribution ready to go head-to-head with Windows as an OEM preload option. In its LTS (Long-Term Support) incarnation, Ubuntu Linux is slated to receive updates for three years.
Ubuntu’s Debian GNU/Linux foundation is strong, with excellent software management tools and an effective model for organizing volunteer resources that has helped Ubuntu amass what’s probably the broadest catalog of ready-to-install software of any distribution.
As with Oracle–which ended up simply rebranding RHEL–Dell and its OEM rivals seem tied for now to the enterprise distribution track.
Dell has talked about certifying some of its systems to run SLED, which seems tailored less to spur new sales than it is to anger Microsoft as little as possible.
Still, the work that Dell does to make its machines run well with SUSE will, through the workings of free software, raise the tide of Linux compatibility overall.
Time will tell whether it’s Dell–or one of its rivals–that ends up profiting best from that tide.

NAC Is Whack?

Network access control schemes seem to be all the rage, with IT heavyweights and smaller players alike pushing them full force.
While there’s no disputing that these NAC initiatives are aimed at worthwhile goals, what remains to be seen is whether and to what extent these initiatives are worth the time and money that enterprises must lay out to implement them.
In other words, is NAC all it’s cracked up to be? I have my doubts, specifically concerning the portion of NAC involving network endpoint assessment–the proposition that by quizzing a client about its operating system, patch level and anti-virus signature currency, it’s possible to determine whether that client may be trusted.
My initial concern, as a user of desktop Linux and a proponent of keeping one’s client platform options open, is that NAC could erect new barriers to running non-Windows operating systems. I see the technology landscape teeming with all sorts of new clients, and if using networked services is to require not only the capability to talk across the network but also the ability to satisfy some necessarily narrow-minded health monitor, many of those potential networked clients could be kept out.
Of course, it’s the right and, certainly, the responsibility of administrators of a well-managed enterprise infrastructure to exert control over the users and clients that access their networked services. However, it seems to me that for a company’s well-managed clients, NAC’s security posture checking is redundant–a managed system won’t be running unvetted software, and administrators of these machines will already have the authority to enforce vulnerability and anti-virus updates.
For systems that an IT organization does not manage tightly, such as the personal systems of telecommuters or the laptops of partners’ employees, NAC can’t be enough to offer acceptable health guarantees. Who knows what malware might lurk on your employees’ home machines, regardless of what the client reports about its own health?
What’s more, the managed clients of your partners won’t necessarily be managed under the same policies you’ve chosen to mandate. For instance, what happens when your idea of system health equals a completely patched system, and one of your partners has held back a particular patch due to some incompatibility it introduces with one of their key applications?
The answer, in both the partner-policy-conflict and the unsupported-client scenarios, is that you’d create exceptions. With the constant stream of OS patches and anti-virus signature updates–and the unforeseen conflicts among them–NAC seems to me like an ongoing policy-writing nightmare. At the very least, it sounds like more work than already overworked IT departments are probably prepared to assume.
Rather than pursue assurances of health that you can’t completely trust from endpoints that you can’t completely control–at the expense of implementing, deploying and managing NAC policies, software and hardware for an as-yet-unproven return–companies would do better to focus on hardening their clients and servers to better withstand the malware that will inevitably worm its way into their networks.
On the server side, companies would do well to focus on the security functionality that’s beginning to flow from niche trusted operating systems into mainstream platforms such as Linux and Solaris. On the client side, companies should tighten their management grip over the systems they own. Where that’s not possible, companies should explore methods of carving out reasonably secure beachheads within otherwise unmanaged clients, such as through virtual machines or terminal services.
Yes, it’d be nice if there were a way to sniff at a client and arrive, automagically, at a state of confidence in which that client could be trusted. Not even the most ardent supporter of NAC would suggest that the technology is currently capable of such a feat. Until such assurances can be reliably obtained, let’s worry less about implementing trustability and prepare ourselves instead for suspicion.

Look Out, Microsoft

We in the computer trade press remain ever poised to chronicle the next major battle between Microsoft and whatever company, governing body or new concept seems even remotely positioned to challenge the Redmond giant. During the past few years, Google has been one of our favorite such challengers, even though Google hasn’t yet directly struck at Microsoft’s productivity application and client operating system core.
However, with Google’s announcement on Feb. 22 of a premium version of its Web-based messaging, calendar, word processor and spreadsheet application suite, all those Google versus Microsoft headlines–and Steve Ballmer chair-throwing anecdotes–take on a whole new flavor. While Google officials are saying that the company’s new, $50-per-user-per-year offering is targeted at companies not currently using Office, I believe that the suite threatens not only to cost Microsoft some Office and Exchange Server licensing dollars, but also to potentially destabilize Microsoft’s Windows desktop monopoly.
Initially, the most obvious roadblock to Google application adoption will be a perceived paucity of features. There’s a conventional wisdom that every desktop in a given office needs a baseline of productivity applications, and that that baseline is defined by Microsoft Office. But rather than attempt to clone Microsoft’s most popular products and enter into a futile feature-list-length race, Google is suggesting a new baseline.
Take, for instance, Google’s rather trim word processor. An online word processor stuffed with as many features as Microsoft Word wouldn’t have worked for Google, anyway. And as everyone–including Microsoft–agrees, most people don’t use the vast majority of any Office application’s features.
Microsoft arguably doesn’t have the freedom to deliver a lighter-weight Word that would do what most people need it to do, so the company has spent most of its time during the last several years working to make it easier for people to stumble across features they weren’t missing in the first place. Google, on the other hand, can announce right off the bat that users of its suite shouldn’t expect full Office parity, and that if your users need functionality that the Google suite doesn’t offer, you can spend the licensing dollars for Word on those users.
While the difference in cost between Google’s application offering and Microsoft Office and Exchange Server licensing is significant, what’s really disruptive about Google’s approach is what it has to offer individuals and small businesses: All of Google’s applications are freely available individually, and the standard version of Google’s application suite, which is limited to 25 users per domain, is free as well. While these free services lack the premium edition’s 99.9 percent uptime pledge for Gmail and lack support for add-ons to enable directory integration and e-mail archive management, they do offer solid entry points to Google’s application suite at all points of the consumer-to-enterprise continuum.
It’s been Microsoft’s knack for hooking users all along that continuum that’s set it apart from its competition–Windows support for things like video games may have no direct bearing on your enterprise, but you’d better believe that it impacts the size of the IT administrator talent pool with Windows familiarity.
With pricing and distribution–Google apps await all comers at every browser, on every platform, anywhere that’s connected to the Internet–Google can manage to readjust users’ application expectations. Indeed, I certainly don’t miss continually e-mailing myself stories as I move from work to home or between computers in our lab–and my attachment to that “feature” has been enough to heavily curtail my use of OpenOffice.org in recent months.
Of course, the same Web foundation that provides for the cross-platform support and ever-presence of Google’s apps remains the suite’s most unavoidable drawback–while Google apps are waiting for you wherever there’s Internet access, we’re still far from enjoying solid Internet access everywhere. I’ve been rather pleased with the online titan’s app efforts so far, but to really impress me–and keep Steve Ballmer throwing chairs, either figuratively or literally–Google’s got to figure out how to conquer offline as well.

Microsoft, Novell Have Much to Prove

Microsoft and Novell have been making a big deal of their big deal to work together to soothe customers’ cross-platform pain points. But it remains to be seen whether and how far the deal will end up extending beyond the realm of press releases and presentation slide decks.
In fact, vaporous cooperation pledges aren’t even the worst of what might come of the Microsoft-Novell deal. Many in the open-source world fear that the deal’s patent pledges represent a route through which Microsoft could litigate GPL-licensed software projects into submission. This fear was real enough for Jeremy Allison–who, as one of the primary developers of Samba, certainly knows a thing or two about Microsoft, Linux and proprietary protocols–to tender his resignation from Novell.
For now, I’m prepared to set aside such fears. Microsoft knows that patent wars are bad for business, unless you happen to be in the business of pumping out pleadings. However, the collaboration initiatives that Novell and Microsoft have so far trumpeted–initiatives that deal with Web services, virtualization and document formats–haven’t convinced me that we’re at the dawn of the new era of interoperability.
Web services without interoperability is a non-starter, and neither Microsoft nor Novell commands this space fully enough to get away with anything less. Similarly, in the virtualization space, Novell and Microsoft are both upstarts looking to take a piece out of VMware, and OS agnosticism has marked VMware’s wares from early on. If the virtualization offerings of a Microsoft and Novell collaboration can’t play nice with Linux and Windows alike, neither company’s virtualization initiatives are going anywhere.
While perhaps more enticing, the new duo’s announcements regarding document format compatibility can only go so far. Document formats are tied inextricably to the applications that create them, so compatibility can never be 100 percent complete unless you’re running the same versions of an application. Even different versions of the same Microsoft Office application will have format compatibility issues.
I contend that if Microsoft and Novell want to demonstrate their respective willingness to ease the cross-platform concerns of their customers, they would get the biggest bang for the buck by taking on a problem I’ve heard neither mention. I’m calling on the pair to teach Novell’s Evolution groupware client to speak to Microsoft’s Exchange Server in the same language that Outlook speaks: MAPI (Messaging API).
While it’s now possible to access Exchange Server quite well from any client on any platform via IMAP, IMAP is a mail-only solution. And if you’re not going to use Exchange’s calendaring functionality, why use Exchange at all?
There is a plug-in for Evolution, called the Exchange Connector, that provides access to both the e-mail and calendaring functionality of Exchange, but the Connector doesn’t work for many users. The trouble is that the Connector communicates with Exchange over the same channels as does Outlook Web Access, rather than through the MAPI interface that Outlook uses. This makes Evolution, at best, a second-class citizen as far as Exchange is concerned.
I’ve personally experienced enough ups and downs with the Connector to quit using it to access my own Exchange Server mailbox. I’m not alone: The lack of solid support for Exchange from a Linux mail client has been enough, for years now, to stall desktop Linux deployments in Exchange shops.
Until now, it arguably would have been naive to expect Microsoft, which has worked hard to shore up its client operating system monopoly, to participate in granting Linux clients full access to Exchange. But, as we’ve been told, the Redmond giant is out to help customers solve their cross-platform problems.
So, does the Novell/Microsoft deal really merit the “historic” label that many have attached to it, or was the accord only so much hype? Microsoft and Novell, your customers are in pain–now’s the time to deliver.

Vista: Permission Granted

Among early adopters of Microsoft’s freshly minted Windows Vista operating system, the strongest reactions so far seem not to revolve around the system’s fancy new looks or its handy search facilities, but rather around Vista’s knack for asking permission to carry out operations that require administrative privileges.
Summing up the annoyance felt by many Vista users so far, my colleague, Microsoft Watch’s Joe Wilcox, recently suggested that if Vista were a car, flicking your turn signal would prompt a pop-up reminding you to look both ways before turning out into traffic.
In some cases, Vista could certainly keep its concerns to itself. For example, if I trust an application enough to install it, it stands to reason that I trust the application enough to allow it to talk over the ports it’s designed to use. So Vista’s firewall needn’t bug me about cracking a hole in my local firewall.
I believe that Joe’s automobile turn signal analogy says more about the unrealistic expectations of Windows users than it does about any nannyish-ness on Vista’s part.
Flicking on your turn signal is a well-defined use for your car–in the same way that flipping through your applications menu, changing your desktop wallpaper or firing off an e-mail with the Windows Mail client are well-defined uses of your Windows machine. These sorts of operations won’t trigger a security prompt in Vista, even though they can possibly get you into trouble. For all its rumored overprotectiveness, Vista won’t intervene to prevent you from sending a drunken, angry e-mail to your boss, for instance.
However, when it comes to the sorts of actions for which Vista will ask permission–such as installing some application or plug-in you’ve found on the Internet, bringing down your firewall or disabling those pesky UAC (User Account Control) prompts altogether–it’s appropriate that Vista applies the brakes.
The operations Vista asks about fundamentally modify your machine and can lead toward your PC behaving in ways that you didn’t intend. To use the car analogy again, they’re more like undertaking a do-it-yourself windshield replacement or popping in a fuel injection system you bought on eBay than they are like using your turn signal. You wouldn’t expect to fundamentally modify your car without knowing what you’re doing–or allow someone you don’t trust to do the same–and expect that everything would work just fine. So why should users expect the same from their operating systems?
In defense of Windows users who are beginning to chafe under the yoke of appropriate rights management, Microsoft has pretty much trained us to behave in this way by doing way too little to enable and encourage sane management practices for its operating systems.
With Vista, Microsoft has begun to change its ways, and now Windows users must learn to change their ways, too. For starters, if you don’t want Windows bugging you about the potentially destabilizing effects of what you (or your end users) are doing, start getting used to the idea that willy-nilly software installation and system modifications aren’t every user’s computing birthright. As annoying as it may sound, these sorts of activities must be undertaken with much more care than most of us are accustomed to according them.
Microsoft can make things easier for its users by taking a page out of the software management playbooks of Linux distributions, which typically offer a framework of network-accessible repositories of cryptographically signed packages. These packages can be self-hosted, hosted by the Linux provider or hosted by trusted vendors, yet they are accessible with the same set of software management tools. In OpenSUSE, for example, it’s possible to grant a regular user the right to install packages from preset repositories, which can help strike a balance between self-service and IT department vetting.
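To make that model concrete, here’s roughly what the pattern looks like on a Debian-style system. (The repository URL and key file are hypothetical stand-ins; the commands themselves are the standard APT tools.)

    # A /etc/apt/sources.list entry pointing at a vendor's repository:
    deb http://packages.example.com/debian stable main

    # Trust the vendor's package-signing key, then install through
    # the same standard tools used for everything else:
    apt-key add vendor-key.asc
    apt-get update && apt-get install some-vendor-app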
I’d like to see Microsoft work with software vendors to extend Windows Update to offer similar functionality. IT departments could bless trusted repositories from which regular users could install applications and updates without sacrificing safety or requiring elevated rights. I can imagine third-party certification bodies emerging to offer companies and individuals a much larger catalog of checked-out software than they could manage to vet themselves. Such a service might be a good value-add for OEMs to extend to their customers, as well.
None of this will save you from sending that ill-advised e-mail–or from blindly changing lanes, for that matter–but we should at least be able to expect that our machines act as we intend them to.

What Do You Mean “New” Evil Empire?

While cruising through Slashdot this afternoon, I came upon an item in which Rolling Stone blogger Charles Coxe asks whether Apple is becoming the new Evil Empire.

Once but the student (see their classic 1984 ad, their PC vs. Mac ads and oh, everything else that’s ever come out of their mouth), it seems that little ol’ Apple finally could be turning into the Master.

The 1984 ad was then and still is extremely creepy–it was meant to lash out at some imagined IBM monoculture, when in fact the PC/DOS/Windows ecosystem was much more “free” than the Mac side of things. On the PC side, you got to choose your hardware and fire your supplier if it irked you too much.
All that gear, unburdened from OS/hardware lock-in, was the primordial soup from which Linux and open source eventually sprang. On the Mac side, you get what Apple gives you, and you’re happy about it, because giant billboards tell you you’re on the side of Gandhi and Muhammad Ali.
Again, creepy and just a bit EEEvil…
And what about Apple’s “you’re not a journalist” crusade, waged not just against the sorts of creative professionals to whom Apple’s supposed to cater, but specifically against those creative professionals who work to crank the Apple buzz machine?
However, being an evil empire and being viewed as an evil empire are two different things. As long as Apple keeps up the sharp marketing, and as long as Apple’s market share remains low enough so that most people interact with Apple only through those advertisements, I think Microsoft’s rep as keepers of darkness will remain safe.