
Exploring LotusLive Symphony

Today at its Lotusphere 2011 event in Orlando, IBM announced a tech preview of LotusLive Symphony, a Web-based office app duo meant to extend its Lotus Symphony productivity suite. I reviewed the desktop-bound edition of Symphony a few months back, and Andrew Garcia took on the LotusLive online collaboration service.

It’s interesting to see IBM add a Web app component to Symphony, much as Microsoft has done with its own Office 2010 and Office Web Apps offerings. I tend to use a combination of Web and desktop-native apps in my daily work–both platforms offer certain advantages, so why not take advantage of both?

After about 20 minutes of wandering within LotusLive Symphony–a few minutes of which I’ve embedded below for your viewing–I’d say that IBM’s new Web apps, which include word processing and spreadsheet components, are off to a decent enough start.

I tried the new Web-based Symphony out with one of Microsoft’s Word-formatted reviewer guides–these tend to be heavily formatted, so they offer a decent test of an application’s Office-format fidelity chops. I found various small formatting issues, which is about what I’ve come to expect. One feature that caught my eye was a task assignment capability I’ve not seen in competing Web-based products.

What’s your take on Web-based versions of existing desktop apps? Me-too feature of the new decade, wise embrace of the Web, or something else?

Fedora’s NOTABUG Bug Gives Linux Users a “You’re Holding it Wrong” Moment of Their Own

About four years ago, I wrote a blog post (since lost, apparently, to the sands of blog platform migration) entitled “What Is Fedora’s Prime Directive?” At issue, more or less, was whether it was appropriate for the Fedora project to push an Xorg modification that stood to deliver benefits to users of open source graphics drivers at the cost of disrupting the systems of closed-source graphics driver users.

Less important to me than the particulars of that issue was the way the project reacted to the problem, and what the dustup meant for the ongoing questions around the mission of Fedora. While that particular dustup is long gone, some of the basic questions remain, as highlighted last week in a flareup around a particular Fedora 14 bug involving Adobe’s Flash plugin.

In short, the developers behind glibc, the C library on which most, if not all, Linux distributions depend, changed the library’s implementation of a particular function in pursuit of potential performance benefits on certain processors. The change, while in keeping with the intended, and well-documented, use of the function, caused problems for various sloppily coded software components, most prominent of which was Flash.
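
The function at issue was glibc’s memcpy(), whose newly optimized implementation may copy backward on certain processors. Here’s a minimal sketch in C–my own illustration, not code from the bug report–of the kind of misuse the change exposed: calling memcpy() with overlapping buffers has always been undefined behavior, and memmove() is the call that’s specified to handle overlap.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char buf[] = "abcdefghij";

        /* Goal: shift the first five bytes two places to the right.
         *
         * WRONG: memcpy() with overlapping source and destination is
         * undefined behavior. Older glibc builds happened to copy
         * front-to-back and masked the mistake; the newer, optimized
         * implementation may copy backward and scramble the data.
         *
         *     memcpy(buf + 2, buf, 5);
         *
         * RIGHT: memmove() is specified to handle overlapping regions. */
        memmove(buf + 2, buf, 5);

        printf("%s\n", buf);   /* prints "ababcdehij" */
        return 0;
    }

The 64-bit Flash plugin apparently leaned on the old front-to-back copying behavior, which is why the change surfaced as garbled audio rather than as an outright crash.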

A bug was filed by a Fedora user who experienced problems with Flash on Fedora 14, and several other users who encountered it, among them Linus Torvalds himself, chimed in on the discussion. Also, many tweets were fired to and fro on the issue, though I don’t think anyone offered up a pithy hashtag on the subject for our convenience.

You can check out this short YouTube video I recorded to see the issue in action. In the video, I’m running a brand-new instance of 64-bit Fedora 14 from the project’s LiveCD; I cruise to Red Hat’s Web site, encounter content that requires Flash to view, install Flash, and hit the issue called out in the bug report. You’ll need to have your sound turned on–this is an audio issue. The borked sound begins about halfway through.

The bug itself was marked CLOSED NOTABUG, the rationale being that the developers of the Flash plugin and other affected applications were in the wrong for using the function inappropriately, so it was their problem, not Fedora’s. Simple. Done and done.

Of course, part of the point of a Linux distribution is that it acts as a buffer between upstream projects and individual users. Yes, we could all roll our own Linux-based OSes from thousands of different open source projects, and deal with the integration ourselves, but performing these chores is how Red Hat earns its money, and it’s how Fedora–while a free, community-supported project rather than a product–earns its mindshare.

That is, of course, unless what many Fedora critics have long claimed is true–that Fedora is simply a bleeding-edge test bed for the release that Red Hat gets paid for, a release that Red Hat would not likely push out into the world with a broken Flash implementation, regardless of who was to blame for the brokenness.

It’s perfectly fine for Fedora to be just such a distribution–YMMV, caveat emptor, etc. However, the Fedora Project presents itself as much more than this. Here’s the tag line from the fedoraproject.org Web site:

Fedora is a fast, stable, and powerful operating system for everyday use built by a worldwide community of friends. It’s completely free to use, study, and share.

I don’t mean to suggest that the Fedora project or Red Hat is acting in bad faith, or hiding its true intentions. Rather, the explanation now, as in past years, seems to me to be that as an open source project, Fedora houses a mixture of the attitudes and motivations of its contributors, with a tilt toward those of Red Hat, as Red Hat is the project’s biggest presence.

For Red Hat, it makes sense to push the envelope with Fedora, and to allow its slower-moving enterprise releases to benefit from what often amounts to creative destruction in the Fedora cycles.

Now, less clear to me is whether this arrangement pays adequate dividends for individual backers and users of Fedora (who may want to use Flash without implementing wacky workarounds), particularly given the presence of plenty of Fedora alternatives. Fortunately for those individuals, the switching costs between different Linux distribution options are fairly low, so they can vote with their feet.

Check out my review of Fedora 14, and if you’re in the mood for a walk down memory lane, take a peek at this slide gallery we created of the first 13 Fedora Linux releases.

Should the Enterprise Turn Its Back to the Mac?

Lately, I’ve had Mac on the brain—a state that stems in part from P. J. Connolly’s coverage of Microsoft’s Office 2011 for the Mac, from Apple’s recent “Back to the Mac” event at its Cupertino headquarters, and from Apple’s disclosure that the increasingly consumer-oriented company plans to drop its most enterprise-oriented product, the Xserve.

In particular, I’ve been considering whether Apple’s insistence on tight(ening) control of its hardware, software, and third-party application stack makes sense in an enterprise context.

The Apple event offered the public an early peek at the upcoming version of the company’s operating system, OS X 10.7. The new release, which is set to ship next summer under the code name “Lion,” caught my attention in a way that no OS X release has since Apple embraced Intel’s x86 processors.

I’ve been consistently lukewarm in my reception of most OS X releases because, for me, the tight control that Apple demands over OS X and OS X Server hasn’t come with enough benefits to outweigh the limitations.

For instance, how can an enterprise IT administrator take seriously a server OS that’s banned, arbitrarily, from partaking in virtualization, the biggest server technology of the past several years? In a world where server workloads are going virtual and headed for the clouds, it isn’t worth investing one’s time in an OS banned from running on alternate platforms.

What’s catching my eye about Lion, however, is Steve Jobs’ promise to make OS X more iOS-like in its function and management. While OS X has failed to sufficiently outshine its more flexible rivals, iOS is an altogether different matter.

When Apple’s mobile platform debuted, it blew away the competition and far outshone every other smartphone or handheld computer out there. And while Apple’s rivals have certainly upped their game, iOS continues to impress me.

For instance, Apple’s strategies for managing the limited resources of mobile devices, by suspending and resuming applications as needed and by offering up a set of background APIs, intrigue me. After all, mobile devices aren’t the only sort of computing clients that suffer from hardware limits, and throwing more RAM at our problems isn’t always feasible.

What’s more, Apple’s App Store model for vetting and deploying applications, while certainly limiting, could actually work well in an enterprise setting–provided, of course, that an enterprise’s own administrators could control the system and make their own vetting choices.

As a Linux aficionado, I cringe at the hood-welded-shut nature of Apple’s products, but in a managed PC setting, I’m a huge fan of client control technologies such as SELinux and application whitelisting. The client-as-appliance model stands to save IT time fiddling with desktops and free up resources better directed at driving business value.

Obviously, I’m making some leaps here. As currently situated, Apple and its platforms are not really tailored for, or aimed at, enterprises. With that said, news of enterprise uptake of the iPad keeps coming, and, through reported support deals with Unisys and local resellers, Apple appears to be taking business sales and support more seriously.

Even if Apple opts not to embrace the enterprise (a continued holdout against virtualization would be a dealbreaker), a more iOS-like OS X could still deliver more locked-down and tightly manageable clients for companies, through the same forces of competitive osmosis that have led smartphone vendors to remake their wares in the iPhone’s image.

New Life for OpenOffice.org

From an IT columnist perspective, Oracle’s acquisition of Sun Microsystems is a gift that keeps on giving. As the enterprise software giant works its way through digesting Sun’s many hardware platforms, software products, intellectual property holdings, and open source communities, there’s no shortage of fresh topics to cover.

Last week, another such topic presented itself, when a group of vendors and individuals launched LibreOffice, a fork of the OpenOffice.org productivity suite that Sun first shipped in 2002. The group also announced the creation of the nonprofit Document Foundation to maintain the breakaway office suite project.

Where Sun Microsystems had been bent on building itself out as a grand steward of open source projects, Oracle has pursued a spartan approach toward the projects it inherited from Sun. For instance, Oracle has continued development on Solaris while allowing the open source project OpenSolaris to sink beneath the waves.

Similarly, Oracle hasn’t ignored OpenOffice.org—the company has selected “Oracle Open Office” as the name for the commercial version of the suite, in lieu of the StarOffice moniker that Sun used for its paid edition.

Given Oracle’s ambivalent stance toward running open source projects for their own sake, and taking into account the central role of OpenOffice.org on the Linux desktop, it’s not surprising that major Linux and OpenOffice.org distributors Canonical, Novell and Red Hat—all of which are supporting the Document Foundation—would prefer a more certain home for the suite.

It’s not that OpenOffice.org under Oracle appears at risk of moving backward—development on the suite seems to me to have continued at more or less the same pace as before the Sun acquisition.

The trouble is that as a project, OpenOffice.org needs to do more than maintain its pace. Even with Sun’s open source enthusiasm, development on the project merely crept along—particularly when compared with Mozilla’s Firefox. For instance, while the suite’s chief target, MS Office, has begun making a move to the Web, OpenOffice.org showed no such signs under Sun’s watch.

Instead, the Web-ward growth that was absent from Sun’s roadmap has indeed begun, in the form of an upcoming Oracle Cloud Office product that’s no more open (in terms of source or of development) than are the Web office offerings from Microsoft, Google, and Zoho.

“Fork” is often treated as a dirty word in open source circles, but project splits can be very effective, as was the case in 2004, when the X.org project split off from XFree86, the project that developed the graphics layer for Linux and its cousins, and delivered a much-needed shot of life into those OSes.

I’m hoping that the creation of the Document Foundation and of the LibreOffice project will enable the developers and backers of the code base to blow up the project and remake it into something faster-moving, more broadly accessible and more relevant.

VMware/SUSE: Stacking the Deck in Favor of Enterprises

The recent wave of vertical integration among enterprise IT vendors appears headed for another crest: There are reports in The Wall Street Journal and elsewhere that Novell is preparing to split itself in two, selling its SUSE Linux operations to a strategic buyer and the balance of its properties to a private equity investor.

The most likely suitor for SUSE Linux appears to be VMware, which has been busily amassing middleware and application acquisitions such as Zimbra and SpringSource to layer on top of its virtualization stack. While the rumor that some tech titan might add an enterprise Linux distribution to its holdings is always within earshot, a VMware/SUSE pairing makes particular sense.

Despite its stack-building efforts, VMware has been without an operating system layer. But in recent months, VMware has cozied up to SUSE, with deals around bundling SUSE Linux with vSphere and building virtual appliances atop Novell’s Linux platform.

More than just adding new product pieces, a SUSE pickup would leave VMware better equipped to compete with and interoperate with its chief virtualization rivals. Novell’s SUSE virtualization solutions are based on Xen, which powers the products of Citrix, Amazon, Oracle and others; SUSE’s engineers are no strangers to the KVM and libvirt technologies that drive Red Hat’s virtualization products; and SUSE has maintained a considerable effort around interoperability with Microsoft’s Hyper-V products.

If Novell’s SUSE holdings include its open-source .NET implementation, Mono, that piece could provide an additional plank in VMware’s competitive platform.

While it’s easy to get carried away thinking about the potential synergies that major acquisitions might bring, for enterprise IT customers, the sort of vertical integration that Oracle and VMware have pursued must be greeted with some measure of concern. With consolidation comes the specter of reduced competition, and, potentially, loss of focus, as large organizations work to digest their acquisitions.

With regard to a potential VMware/SUSE matchup, the good news for enterprises is that the virtualization, operating system and middleware layers in play are matched with a certain amount of standardization, which is bolstered in places by solid open-source reference implementations. This standardization promises to preserve play among the layers, and make the specter of vertical integration less ominous.

This dynamic, whereby vertical integration exists alongside enough standardization to allow customers to choose among the layers, is reflected in the cover story of Monday’s eWEEK print issue. In it, Wayne Rash examines the ways in which enterprises are achieving unified communications success without swallowing entire unified product stacks from their UC providers.

As with the stack layers in play in a potential VMware/SUSE matchup, standardization and a demand for interoperability in the UC landscape work as a balance against vendor inclinations to differentiate themselves into silos. Of course, neither product space is an interoperability nirvana, but offerings from each class are delivering real results for customers, and that’s what’s most important.

Requiem for a Wave

Google announced Aug. 4 its intention to kill off its Wave project before the end of this year, citing poor user uptake. Out in the twitterverse (or at least the bit of it that I follow), the move has been met with broad approval, even rejoicing — which I don’t quite understand. If you don’t like Wave, don’t use it, right?

[Screenshot: a Wave that eWEEK Labs used to brainstorm the 2011 edit calendar]
I saw promise in Wave — at least in the technology behind Wave, although I was never quite satisfied with the way that Google implemented it. From the first time I saw it, at a pre-release demo at Google’s San Francisco campus less than one year ago, Wave has looked like an in-development project, something on the road to being a product, but clearly not yet there.

For instance, one of the most frequently cited use cases for Wave was as a sort of e-mail/instant messaging replacement, which didn’t sound bad — I know that my e-mail is severely overloaded, and could use a next-gen upgrade. The trouble was that Google left us without any e-mail-to-Wave migration or integration path — if the thing was to supplant e-mail, just how was that supposed to work?

Another problem with Wave, specifically as an e-mail and IM replacement, is that where I can choose from many different e-mail servers, hosted by many different providers and in many different ways, there was only Google’s Wave. While the bits behind Wave are open source, Google might have attracted more Wave uptake if it had released a reference implementation of the Wave server for the community to take up, hack on and combine with other projects.

When I left that Wave demo a year ago, I was brimming with ideas for a Wavey future. I reached back into my notes to pull out this idea, for a Wave-powered forum software project:

Forums can be great for finding support — answers to all sorts of questions — but as they grow, they can be very difficult to digest and to participate in. If an answer has been found, it could be in the middle of a 7 page thread, with the pages before dealing with refining the question and so on, and the latter pages dealing with additional questions from people for whom the fix didn’t work.

The info in the forums has real value, great value, but it needs curation, needs pruning, it needs to start small, with a question, and grow as more information comes in, and as people suggest potential fixes, and so on, and then shrink as the answer and the question are both well defined, and maybe grow again as new wrinkles emerge, or as new people maybe ask a variation on that question, and then shrink, etc.

And then there’s the question of people asking questions that have been answered elsewhere, which is a sort of pollution of the forums, and other people simply telling them to search the forums, which is also forum pollution.

In our wavy example, it could be ok to have people ask their redundant questions, maybe in a special wave for that purpose, and bots sitting in the wave could reply suggesting (via search) waves where those questions may have been answered. The questions could time out (another bot could handle this) and you’d lose that pollution, and the guys telling people to search would have been replaced by the auto searches.

I can’t say that I’ve been a heavy–or even moderate–user of Wave, but we did try a few things with the service, like using Wave+embedding for a couple of event liveblogs. Also, the labs guys and I fired up a Wave to brainstorm about eWEEK’s 2011 edit calendar (see pic, above).

The open-source licensing and the pledge from Google to integrate some Wavy bits into its other apps mean that we haven’t necessarily seen the last of Wave, and I’m glad of that. I do wish, however, that Google had given this innovation more of an opportunity to succeed.

Open Source Software: All or Nothing at All?

Over the past year or so, there’s been a lot of discussion in open source software circles around so-called open core software business models, in which the “core” of a product is freely available under an open source license, typically with a “community edition” label, while some set of features is withheld from the free version and made available in one or more proprietary-licensed “enterprise editions.”

The specific features that an open core vendor holds back depend on the product, but the typical dividing line is that if you’re an enterprise running the application in a production setting, you’ll want the enterprise edition.

The open core vendors say, more or less, “We believe in the value of open source, but this code doesn’t write itself, and we’re trying to make some money here. This open core deal is our game plan for paying our costs and making that money.”

The open core detractors say, more or less, “Part-way open source isn’t open source at all. You’re enticing customers with open source branding, only to pull a bait and switch with your lame crippleware.”

Now, I’m a pretty big fan of open source software — I can’t tell you offhand how the system I’m running scores on the Virtual Richard M. Stallman test (there doesn’t appear to be a vrms package available for Fedora 13), but with the exception of some codecs, hardware driver blobs, and Web-based applications, my home and work computers run open source software, from the core to the edge.

And not only am I a fan of the software itself, but I’m a fan of the open source model. I think it’s a great way to get things done. Openness means that a solution to the problem at hand could come from anywhere. For instance, I don’t know who’s responsible for writing the drivers that have turned the Linux-incompatible multifunction printer I bought four years ago (foolishly, without researching it first) into a Linux-compatible MFP, but since the Web site of my printer’s maker remains silent on Linux support, I’m pretty sure it wasn’t them.

It’s due in part to that open source fandom, and in part to the enterprise product focus that comes with working at eWEEK for as long as I have, that I have a tough time getting upset at the open core vendors. Enterprise software can be very cool, but very expensive. In nearly all cases, enterprise applications could be of use to many more organizations than can afford to buy them. The applications that have been popping up under open core licensing schemes promise to expand access to worthwhile enterprise technologies to many more organizations.

Now, if the community version of an application doesn’t do everything that the enterprise edition does, does that make the community option “crippleware”? I’d say that depends on what you’re looking to do with the software.

Several months ago I reviewed an open core application, Talend Open Studio, which I’ve since used in a couple of small projects–for which, incidentally, my budget was zero. Call it crippleware if you like, but it worked for me.

There’s nothing strange about any open source software company presenting customers with the “free for enthusiasts, developers or small projects, but if you’re using it in production, you’re really going to want to get the paid version” line of marketing. Isn’t that exactly how Red Hat’s sales pitch goes?

I don’t mean to equate Red Hat with these open core vendors–the fact that Red Hat makes all of its works available under a free software license, thereby opening the door for clone challengers like CentOS and Oracle Unbreakable Linux, is a major differentiator, and a big reason why Red Hat looms so large in the industry.

The bigger reason why Red Hat looms large is that Red Hat’s software solves organizations’ problems, and this is the best way to judge the open core vendors. Is the community edition too crippled to be useful? Then I won’t use it, and I won’t recommend it. It’s not as though there’s a shortage of lousy, overpromise/underdeliver software out there.

As long as the software that an open core vendor labels as open source is indeed open source, I don’t have a problem with open core. My rule of thumb for whether something is open source goes something like this: “if I can’t fork your code into a new product and compete with you, it isn’t open.”

Particularly in product classes where there’s no inexpensive or open source option at all, an offering that combines an open core with an optional closed crust is certainly better than nothing. At best, this software can put enterprise technologies within reach for more organizations.

At worst, we can all ignore it.

OpenGoo vs. Google Apps: Host It Your Way

This week I’ve been testing out OpenGoo, an open source online office project that’s meant to provide a more open alternative to Google Apps.

Specifically, the code that comprises OpenGoo is freely accessible, and, as a plain old LAMP application, OpenGoo gets to leave the confines of its makers’ firewall and live in your data center, or desktop, or hosting service of choice.

As I’ve written in the past, I’m a big fan of Google’s Apps. However, I’m also a fan of reserving the right to fire any of your suppliers, and to do so, ideally, without disturbing adjacent layers of your stack: Swap Intel for AMD, IBM for HP, Xen for VMware, Red Hat for SUSE, Domino for Zimbra, and so on.

With something like Google Apps, every layer of the stack, all the way up to your data, is under Google’s control. Of course, that’s the point of SaaS–someone else manages and serves up the application, and you get to focus on taking care of business.

As I see it, the perfect application would be available in SaaS form from multiple hosting providers, as well as in commercially supported on-premises and self-supported open source incarnations.

Perfection is a pretty tall order, but we’re starting to see open source Web applications that offer the sort of deployment flexibility I’m looking for, even if they don’t nail all the features as well as better-established online incumbents do.

OpenGoo is certainly not perfect, but the project is progressing at a promising clip. I’ll be keeping tabs on OpenGoo as it continues to develop. For now, check out my review and slide gallery of OpenGoo, and go demo the suite for yourself. I’d love to hear what you think of it — drop me a line in the comments below, ping me on Twitter or Identi.ca.

Next up on my review docket is another SaaS/on-premise/open source challenger in the enterprise application space: SugarCRM.

Don’t Wait for Google to Upgrade Your Gmail Security

Brian Prince is reporting today that Google is considering enforcing SSL encryption by default for its Google Apps users. It’s a good idea–as eWEEK Labs’ own Andrew Garcia discussed recently, your on-the-go applications can have an awful lot to say about you.

In fact, enforcing HTTPS encryption for Google Apps by default is such a good idea that you shouldn’t wait for Google to implement it. Whether you administer a Google Apps domain or are an individual user of the service, you should enable the mandatory SSL (Secure Sockets Layer) encryption yourself, right now.

Here’s how you do it:


  • Step One: Log in to the administrative control panel for your Google Apps domain.
  • Step Two: Open your domain’s general settings.
  • Step Three: Enable the option that requires your users to connect to Gmail (and the rest of your Google Apps services) over SSL.
  • Step Four: Click the “Save changes” button.
  • Step Five: Take a look at the General Settings page within one of your managed Gmail accounts, and confirm that your users have no choice but to connect via SSL.


If you’re a regular Gmail user, or if you have a Google Apps account without an established SSL connection policy, you can use this same settings page radio button to upgrade Gmail security on your own.

Desktop Linux: I’m Here for the Apps

The best and the worst attributes of Linux as a desktop operating system involve acquiring and maintaining software applications. For me, the positives outweigh the negatives, making Linux the best desktop operating system option I’ve encountered, and the one I choose at work and at home.

If Linux is to pile up more desktop adherents, the vendors and communities that back the open-source platform need to work together to accentuate those positives and shrink down the negative aspects of getting and managing software on Linux.

The GOOD: Software packages are your friends

All the bits that make up a typical Linux distribution, from the kernel to the little applets on the task bar, are divvied up into software packages, each of which contains metadata about what its bits are supposed to do, where they came from and what other packages they require to operate.

As a result, Linux is modular enough to squeeze into all sorts of vessels, and to integrate code contributions from many different sources without becoming unmanageable. In fact, as long as you stick to software that’s been packaged up for your Linux distribution of choice, there’s no better platform for staying in control of what runs on your system.

In order to keep my Linux desktop up-to-date, I don’t have to spend my time clicking on a succession of different vendors’ system tray-embedded update applets, reiterating to Adobe, Apple, et al., that I still don’t want to install that Yahoo toolbar or take them up on that QuickTime up-sell opportunity.

The BAD: Linux and proprietary software don’t mix well (usually)

The desktop market share of Linux is pretty small compared with that of the Mac or Windows, which tends to make the platform an unattractive development target for software vendors. Making things worse, Linux’s slender share is divided among many different Linux distributions, many of which require their own separately built packages.

As a result, even those software vendors that do support Linux often end up producing script-based installers that work on multiple Linux distributions, but that leave these systems in a less manageable state by working outside of Linux’s various software packaging frameworks.

When you’re dealing with open-source software, the companies and communities that build Linux distributions are able to grab the source and package it up for easy consumption, thereby tackling the packaging and distribution work on behalf of the upstream developers. This doesn’t work so well with proprietary software, in part because the code is closed, and in part because volunteer packagers prefer to donate their time to free software projects.

There are exceptions, however. When I fire up a fresh Ubuntu install and cruise to YouTube.com, the system offers to download Adobe’s Flash Player, and when the inevitable security issues emerge, my proprietary Flash Player updates occur alongside all my free software updates, and there’s not an unwanted tool bar offer to be seen.

The UGLY: Open-source software gone wild

When the source code is freely licensed and available, it’s much easier for someone to package. However, someone still has to take the time to do the packaging–just because a given Linux distribution could offer packages for an open-source application doesn’t mean that the distribution will offer them.

Significantly complicating matters is the fact that it’s much easier to configure, compile and install an open-source application than it is to package that application. Sure, you have to install some development tools and root out various needed libraries, but it usually isn’t too tough to go from a source code tarball to a running application.

The ugliness accrues as you move forward, and parts of your system are updated without any regard for the dependencies of the roll-your-own open source apps living on your machine. When security updates for your self-compiled apps come down, they won’t get applied unless you keep an eye out for them, fetch the code anew, and repeat the configure, compile, install procedure. Forget about managing this process on multiple machines.

This is what gives me pause about Novell’s SLED line of Linux desktops and its constrained selection of software packages. Why should we mess with unpackaged apps, or spend time packaging them ourselves, when other options are available with much more expansive package catalogs?

THE WAY FORWARD:

1. Everybody Hug

Ubuntu is (and has been for a couple of years now) my desktop Linux distribution of choice because Ubuntu offers a very large catalog of packaged, ready-to-install applications. Ubuntu owes its large application catalog, in large part, to the Debian project, which has been cranking out software packages on a volunteer basis for years.

I’d love to see Novell and Red Hat figure out a way to work with the Debian project to reuse the packaging work that its members are doing, and thereby broaden the range of software packages available for easy installation. It would take some work to translate the Debian packaging efforts to Novell’s and Red Hat’s RPM-based distributions, but Novell already has a project underway, the OpenSUSE Build Service, that’s capable of building packages for SUSE, Red Hat, and Ubuntu-based distributions.

2. Proprietary Apps are People, Too

Just as important as maximizing the work that’s already being done around packaging open-source applications, major Linux distributors must make it easier for proprietary software vendors to package their wares for Linux.

I know that many in the open-source community have an allergic reaction to proprietary software, but if open platforms such as Linux are to realize their potential, they must host proprietary applications just as well as, or better than, proprietary platforms do.

Again, Novell’s OpenSUSE Build Service seems to offer a decent foundation for moving forward. While the OpenSUSE-hosted version of the service is limited to open-source software, Novell makes the service available for download and self-hosting, so proprietary software vendors could use this code to produce packages for multiple distributions, particularly if Canonical, Red Hat and other Linux vendors began pitching in to help streamline the process.

3. Write Once, Run Anywhere, for Real

Considering the chicken-and-egg issues that desktop Linux faces regarding the relatively small market opportunity it offers to ISVs, I find it surprising that distributors haven’t put more effort into advancing the state of Web application delivery and management on Linux.

I’m a heavy user of Google’s hosted mail service, which I run within a site-specific browser, Mozilla Prism, in order to protect myself from evil-doing scripts that might take advantage of the sorry state of inter-tab isolation in most of today’s Web browsers, and to isolate my mail session from things such as runaway Flash ads in adjacent browser tabs.

As far as I’m aware, however, Ubuntu is the only Linux distribution with a ready-to-install package for Prism. What could be more enterprise-appropriate (I’m looking at you, SLED) than providing for isolation between running Web applications? And more than simply packaging Prism, or something like it, why don’t any Linux distributions employ one of their built-in privilege management frameworks, such as SELinux or AppArmor, to offer enhanced Web browser isolation right out of the box?