a blog

  • A couple of days ago, my colleague Andrew Garcia forwarded me what seems like the 20th plea I’ve seen over the past couple of years to call or write my representatives in Congress to Save Net Radio. This latest plea came from Pandora.com, a fantastic Internet-based radio station that holds more claim on my online hours than any single Web purveyor this side of Google.

    If you haven’t heard of Pandora, and if you like music, go check it out after you read this post. Pandora draws on a huge database that’s packed with detailed information about individual songs. Pandora pairs the information in its database with your song preferences to deliver custom radio stations that play songs you may never have heard of, but that mesh well with your stated tastes.
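    Pandora’s Music Genome Project data and matching algorithm are proprietary, but the general approach can be sketched with a toy content-based recommender: score each song on a handful of attributes, then rank the rest of the catalog by similarity to a song the listener likes. The songs, attribute names, and scores below are all invented for illustration:

    ```python
    import math

    # Toy sketch of attribute-based song matching; Pandora's real
    # attributes and scoring are proprietary, and these songs and
    # numbers are made up for illustration.
    CATALOG = {
        "Song A": {"tempo": 0.9, "distortion": 0.8, "vocals": 0.3},
        "Song B": {"tempo": 0.85, "distortion": 0.75, "vocals": 0.35},
        "Song C": {"tempo": 0.2, "distortion": 0.1, "vocals": 0.9},
    }

    def similarity(a, b):
        """Cosine similarity between two attribute dictionaries."""
        keys = set(a) | set(b)
        dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
        norm = (math.sqrt(sum(v * v for v in a.values()))
                * math.sqrt(sum(v * v for v in b.values())))
        return dot / norm if norm else 0.0

    def recommend(liked_song, catalog):
        """Rank the other songs in the catalog by similarity to a liked song."""
        target = catalog[liked_song]
        others = [s for s in catalog if s != liked_song]
        return sorted(others,
                      key=lambda s: similarity(target, catalog[s]),
                      reverse=True)

    print(recommend("Song A", CATALOG))  # prints ['Song B', 'Song C']
    ```

    The point is that once songs are described by attribute vectors, “more like this” is just a nearest-neighbor query, which is what makes the license-filtering idea later in this post technically plausible.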

    Like every other Internet-based radio station, Pandora has been operating under the threat of large increases in the rates it must pay in exchange for playing copyrighted music over the Internet. You can check out my colleague Jim Rapoza’s explanation of the Web radio threat, and his own call to help save Web radio, in his column, “Pump It Up for Web Radio.”

    While I love Pandora, and while I dislike the trend toward ever-broadening copyright restrictions, I think the time has come for Pandora to redirect a share of its “Save Us” efforts toward stepping up and saving itself.

    As luck would have it, just as the software world offers individuals an antidote to restrictive licensing practices and to the ill-conceived laws that enforce these practices, so too does the world of music and the arts have an available–if less-recognized–antidote to the copyright mischief of the RIAA and its ilk: Creative Commons licensing.

    There’s a certain amount of music out there–and that amount is growing all the time–that’s specifically licensed to allow for redistribution and derivation and free play over the Internet airwaves. The idea is that since obscurity is a band or musician’s worst enemy, it’s worthwhile for artists to choose to license their works more liberally than has been the norm, and some artists have begun embracing this model.

    As a fan of the ideas around free software and free culture, I’ve often set out to find liberally licensed music to consume. I figure that ours is an attention economy, and that I should make an effort to reward the musicians who are embracing Creative Commons licensing by investing a portion of my attention in their content.

    The trouble is that I know music through brand names–Lee Dorsey, MC Hammer, Lee Greenwood (not really). I don’t know the names of the artists that license their music in a way that’s immune to the Kill Net Radio forces, and while I can look up lists of artists through the Creative Commons Web site, I don’t have the time to spend hours filtering through these artists to separate the wheat from the techno.

    If only there were a way for me to tell a Net Radio station which artists and songs I do like, and for that station to suggest other, more liberally licensed artists and songs … oh, that’s right, there is–or, there could be: Pandora.

    Unfortunately, even though the success of liberally licensed music is absolutely key to Pandora’s continued survival, and even though I’m definitely not the first person to conceive of using Pandora’s music database in this way, Pandora does not offer a means of connecting users with RIAA-proof music.

    In order to help itself in its ongoing music licensing struggles, Pandora doesn’t have to dump “proprietary” music altogether. It could simply turn its excellent music-locating prowess to more liberally licensed music and give listeners the option of filtering by those licenses.
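    Pandora exposes no such filter, but the mechanics of license-aware filtering are simple once each track carries a license tag. A minimal sketch, in which the song records and the set of “liberal” licenses are both hypothetical:

    ```python
    # Sketch of the license filter proposed above; Pandora offers no
    # such feature, and these song records and license tags are
    # invented for illustration.
    SONGS = [
        {"title": "Song A", "artist": "Band X", "license": "CC BY-SA"},
        {"title": "Song B", "artist": "Band Y", "license": "All rights reserved"},
        {"title": "Song C", "artist": "Band Z", "license": "CC BY-NC"},
    ]

    # Licenses a hypothetical license-filtered station would accept.
    LIBERAL_LICENSES = {"CC BY", "CC BY-SA", "CC BY-NC"}

    def playable(songs, allowed=LIBERAL_LICENSES):
        """Return only the tracks whose license is in the allowed set."""
        return [song for song in songs if song["license"] in allowed]

    for song in playable(SONGS):
        print(song["title"], "-", song["license"])
    ```

    The hard part, of course, isn’t the filter–it’s building and maintaining the license metadata for the catalog in the first place.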

    There’s precedent for the idea that, given the presence of friendlier options, restrictive licensing schemes can be made to bend through market pressure.

    Creative Commons founder Lawrence Lessig used to give a talk on free culture (he’s hung up his copyright cleats to pursue corruption in government) in which he told the story of how BMI came into being. As Lessig tells it, ASCAP raised rates 448 percent between 1931 and 1939. BMI came along offering a slate of less attractive music to license and managed to win over a majority of broadcasters. (For more, check out Lawrence Lessig’s free culture presentation here; the ASCAP/BMI story comes in around the 5:44 mark.)

    I find it too audacious to hope that Congress will get around to right-sizing our country’s runaway IP regulation regime any time soon, so if the businesses that run Internet radio stations and the listeners who frequent them want to see this form of enjoying music continue, they should begin marshaling their (in Pandora’s case, considerable) resources toward solving the problem themselves.

  • While gathering support contract pricing information for my Ubuntu 8.04 review, I noticed a somewhat surprising item listed among the benefits of paying Canonical for a Linux distribution the company gives away for free:

    Protect your business against IP infringement claims

    The Ubuntu Assurance from Canonical covers your business for claims of intellectual property infringements arising from your use of Ubuntu. The Ubuntu Assurance is included in Canonical Support contracts for eligible customers. This offering is designed to safeguard your business and make deploying Ubuntu even easier through warranties from Canonical and an indemnification offering.

    The notion that companies and individuals might risk lawsuits for running applications that infringe on copyrights or patents gained popularity when SCO began threatening to run down Linux end users in retaliation for upstream IP violations that SCO refused to detail.

    The idea was (and still is) that since open-source licenses explicitly disclaim liability for SCO-style attacks, and since most open-source software projects don’t have the resources to pay out lawsuit judgments anyhow, open-source software is riskier for companies than proprietary software would be.

    Unless, of course, your open-source software provider was an IT titan with a big sack of patents (and lawyers) slung over its back. During the early days of the phantom SCO menace, Sun was quick to point out that it offered indemnification for this sort of thing, and even appeared to bolster SCO’s claims by paying the company for expanded rights to some of the Unix IP over which SCO (wrongly) asserted ownership.

    Red Hat responded to the SCO business by challenging the infringement accuser to prove its claims, and by assuring customers that direct lawsuits for running Red Hat-distributed software were a threat too remote to worry about.

    Since then, Red Hat appears either to have lost faith in its own line about the unlikelihood of infringement lawsuits, or to have bent to customer demands for protection, because the Linux leader now pledges to indemnify its fee-paying customers.

    Novell has taken a lot of heat for its deal with Microsoft, which many view as having legitimized vaguely defined infringement threats against open-source software. Red Hat and Canonical have maintained their distance from such a deal, but does the fact that both companies offer their customers indemnification against those sorts of threats further legitimize them?

    In other words, is the threat that open-source software indemnification protects against real, or isn’t it?

    I recently came across a great Q&A on the topic at RedMonk’s Web site. In the final analysis, Stephen O’Grady’s answer is a definite maybe:

    Q: Is it safe to say, then, that you are skeptical of the value of indemnification?

    A: Yes. I don’t dismiss it, of course, because larger enterprises are right to lower their attack surface through such mechanisms. But it is not–to me–a feature worth either paying a premium for or altering a buying decision about.

    Historically, the probability that you will require indemnification is minimal, and the future prospects are low in an environment that tends to favor vendors settling such matters amongst themselves.

    Considering that open-source software and processes are serving an increasingly prominent role in the IT industry landscape, and that actual lawsuits against open-source end users haven’t been materializing, I don’t think that companies or individuals running open source without service-fee-based indemnification are in any particular danger.

    Maybe I’m wrong–if I get served for running Linux without an annual service contract, I’ll be sure to write about it.

  • Today while trolling around on Slashdot I came across this open-source development flareup tidbit:

    Slashdot | Pidgin Controversy Triggers Fork
    “Pidgin, the premier multi-protocol instant messaging client, has been forked. This is the result of a heated, emotional, and very interesting debate over a controversial new feature: As of version 2.4, the ability to manually resize the text input area has been removed; instead, it automatically resizes depending on how much is typed. It turns out that this feature, along with the uncompromising unwillingness of the developers to provide an option to turn it off, annoys the bejesus out of very many users.”

    Last week or so, I’d read about this Pidgin fork, somewhat lamely named Funpidgin, and I even visited the project’s Web site to take a peek. I skimmed over the project page, didn’t understand the point of the fork, chalked it up to wacky open-source developer intransigence and moved on.

    As it turns out, I ran into Pidgin’s new No-Input-Box-Resizing-for-You “feature” a few weeks ago while testing Ubuntu and Fedora. I was annoyed that I couldn’t resize the input box as I’m wont to do in Pidgin, thought it was an obvious bug and ignored it, expecting that the issue would be fixed by the time that Ubuntu 8.04 and Fedora 9 shipped.

    The Pidgin developers should listen to their users, plenty of whom have weighed in against the pointless resize-restriction. As for me, it’s been a little while now since I’ve used Pidgin regularly, since I’ve taken to instant messaging through Gmail.

    Gmail’s IM interface isn’t great–there’s no per-buddy or per-group status-setting, it limits me to Jabber or AIM networks, and, just like Pidgin, it won’t let me resize my input box. Unlike Pidgin, however, my chat logs get to live in the cloud, where they’re accessible (and searchable) from wherever I am. Lately, I’ve prized that convenience over the customization options that a fat IM client affords.

    Pidgin’s newest “feature” tips the fat vs. thin calculus further in Gmail’s favor.

  • Now that Windows Vista Service Pack 1 has enjoyed a few weeks in the limelight in which to entice the “wait-for-SP1” IT shops to jump to Microsoft’s latest and greatest client operating system, it’s time to introduce the OS upgrade we’ve all been waiting for: Ladies and gentlemen, put your hands together for Windows XP SP3.

    Today we are happy to announce that Windows XP Service Pack 3 (SP3) has released to manufacturing (RTM). Windows XP SP3 bits are now working their way through our manufacturing channels to be available to OEM and Enterprise customers.

    We are also in the final stages of preparing for release to the web (i.e. you!) on April 29th, via Windows Update and the Microsoft Download Center. Online documentation for Windows XP SP3, such as Microsoft Knowledge Base articles and the Microsoft TechNet Windows XP TechCenter, will be updated then. For customers who use Windows XP at home, Windows XP SP3 Automatic Update distribution for users at home will begin in early summer.

    Microsoft released the third service pack for its now-venerable XP operating system to manufacturing this morning, which is good news for the many organizations that have yet to find a reason to undertake a migration to Vista.

    XP SP3 is a rather modest upgrade, one that falls much more in line with XP’s first service pack than with the security-feature-packed SP2 release, but the new service pack stands as an important reminder that while XP will soon leave the retail channel, the operating system on which most organizations have come to depend is still very much supported by its maker.

    The biggest new feature in XP SP3 is the addition of support for Windows Server 2008’s Network Access Protection, which is aimed at enforcing network health by determining policy compliance on systems that access your network. Whether XP SP3 will be interpreted by Windows sites as a green light for NAP deployments remains to be seen, but it’s clear that NAP would be dead on arrival without support for Microsoft’s biggest platform.

    In my initial tests of XP SP3, in an update-from-SP2 scenario (clean-install-from-slipstreamed-media tests are on the way), I haven’t run into any problems, nor have I experienced any performance increases or slowdowns.

    I can report, however, that my favorite feature from Windows Vista has made its way back to XP. As with Vista, XP may now be installed without providing an activation key during the installation process–a major boon for testers who need to spin up XP installations for brief periods of time before blowing them away.

  • Recently, I came across a blog post about how to install a LiveCD version of Red Hat’s upcoming Fedora 9 release onto a USB stick, leaving space on the stick for data to persist between reboots.

    Impressed by the persistent USB LiveCD fun and partition encrypting installer improvements, I chose to throw caution to the wind and load up Fedora 9 Beta on my main notebook, replacing the Hardy Heron Beta install I’d been running–quite stably–for several weeks.

    Read on for the testing details, but the bottom line for Fedora 9 is more or less the same as with previous Fedora versions: Fedora can indeed be used for almost anything, but its primary purpose is to serve as a leading-edge development platform for Red Hat’s initiatives. As Red Hat confirmed very clearly last week, providing a mainstream desktop/notebook operating system is not one of its product goals.

    While I’ve very recently called on Red Hat (and Novell) to address mainstream Linux users more directly, I can certainly respect their decision to focus on their bread-and-butter products. What’s more, even if they aren’t productizing it, Red Hat’s desktop and notebook work does continue, and is definitely evident in other, more end-user-focused distributions–such as Ubuntu, to which I returned after spending two weeks with Fedora 9.

    Spending Time with Fedora 9

    As I mentioned above, it was Fedora 9’s support of USB stick-based persistent LiveCD deployments that enticed me to download the in-development distro, and when I tried it out for myself, it worked great. Fedora 9 booted up from the 2GB USB stick I used, and, by virtue of solid-state storage, did so a lot more quickly than CD-based LiveCD images do. I used my portable Fedora 9 system to browse the Internet. I downloaded some things, and I installed a piece of software, too. I tried rebooting the system, and, sure enough, the changes I’d made did persist.
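    For anyone who wants to try the same setup, the path at the time ran through Fedora’s livecd-tools package and its livecd-iso-to-disk script. A sketch, in which the ISO file name and the /dev/sdb1 device are examples only–verify the device (with lsblk or fdisk -l, say) before running this, since it writes to the stick:

    ```shell
    # Install the live-USB tooling (provides livecd-iso-to-disk).
    su -c 'yum install livecd-tools'

    # Write the live image to the stick's first partition, reserving a
    # 512MB persistent overlay so changes survive reboots. The ISO name
    # and /dev/sdb1 are examples; double-check the device first.
    su -c 'livecd-iso-to-disk --overlay-size-mb 512 \
            Fedora-9-Beta-Live-i686.iso /dev/sdb1'
    ```

    The overlay size is a trade-off: whatever you reserve for persistence is space the stick can’t use for ordinary file storage.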

    Next, while perusing the Fedora 9 Beta release notes, I saw that Fedora 9 now offers a partition encryption option at install time. It’s been possible to set up a Linux system with encrypted partitions for some time now, but only Debian and Ubuntu had implemented it as an install-time option.

    Installation on my trusty ThinkPad T60 was smooth, and the encryption part boiled down to checking off an option box. I booted up and keyed in my encryption passphrase. Everything seemed to work, including my wireless network.

    I hit the Web, and found that Flash and Java didn’t work properly for me. For Flash, I hit the same bug that Linus did–YouTube videos wouldn’t play. As for Java, the StatTracker applet for my Yahoo-hosted fantasy basketball league wouldn’t load.

    I was prepared for these snags, however, since I know that Red Hat and Fedora are focused on pushing the all-free software envelope wherever possible. As a result, Firefox came pre-configured with the LGPL (Lesser General Public License) Swfdec-Mozilla plug-in, in lieu of a pointer to Adobe’s Flash download page.

    I downloaded Flash from Adobe, and I put off downloading Java from Sun. Sadly, at the time I was testing Fedora, I was already more or less out of contention in my fantasy basketball league.

    Next, I applied all the available updates (since Fedora 9 was still in development, there were many). I restarted, and found that my X server wouldn’t start. Annoyingly, Red Hat’s usually great fail-safe display mode didn’t kick in, either. I had trouble with this when I tested Fedora 8, and the fix required me to twiddle with a config file to force my way into fail-safe mode. After I got my X back, I installed the RadeonHD driver that my notebook requires, and I was back in business.

    I turned next to installing the KVM virtualization software and Red Hat’s virt-manager virtualization management tool. I used the virt-manager tool to create a virtual machine, and I hit another familiar snag. As in my tests last year of RHEL (Red Hat Enterprise Linux) 5, I found that by policy, SELinux (Security Enhanced Linux) wouldn’t allow the virtualization application to read files in my home directory, where I’d instructed the setup tool to create the hard drive image file for my VM.

    As I wrote in that RHEL 5 review:

    RHEL 5’s SELinux policy expects images to live in /var/lib/xen/images, and images stored elsewhere–such as ours–require re-labeling to avoid SELinux errors.

    While this was a good opportunity to see RHEL 5’s new SELinux Troubleshooter in action, it would have been helpful if RHEL’s virtualization manager had given us a heads up about this earlier.

    I ended up tossing SELinux into permissive mode, which records and flags the activities it’s been instructed to deny, but doesn’t deny them. Even though SELinux can be a pain, I’m a believer in MAC (mandatory access control) enhancements for operating systems, and I’m rooting for them to succeed.
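    Both workarounds are standard SELinux administration. A sketch of the two options–the image path is an example, and the exact type label (virt_image_t here) varies across policy versions, so check your own policy before relabeling:

    ```shell
    # Option 1: relabel the disk image so the policy permits the access.
    ls -Z /home/user/vm.img                  # inspect the current label
    chcon -t virt_image_t /home/user/vm.img  # type name varies by policy

    # Option 2 (what I did): flip SELinux into permissive mode, which
    # logs would-be denials without actually denying anything.
    setenforce 0
    getenforce                               # prints "Permissive"
    ```

    Relabeling is the better citizen of the two, since it leaves enforcement on for everything else on the system.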

    Feature-wise, the combination of KVM and virt-manager is nowhere near as good as VMware Workstation, but it serves the same basic functions, and it’s free. The Windows XP VM that I created under Fedora 9 performed well, and the fact that KVM is built into the system is a major plus. One of my pet VMware peeves is having to compile and recompile drivers, and the fact that KVM is part of the Linux kernel means never having to think about drivers.

    Fedora 9 may no longer power my primary notebook, but I’m not done with it. I’m looking forward to digging into the distribution’s PolicyKit and PackageKit frameworks, and spending time with the much-needed FreeIPA project, which rides along with this release of Fedora. Stay tuned.

  • Gartner made news April 9 by contending that Windows is in danger of collapsing under its own weight. According to Gartner analysts Michael Silver and Neil MacDonald, radical changes to Windows are required. Their prescription: a more modular Windows.

    Windows is a massive piece of software, and even though it’s not presented as such externally, the operating system is made up of many separate parts. Making the seams between those parts more obvious and providing a way for components to be swapped in or out with ease would make for a more flexible and manageable Windows.

    Microsoft agrees with this assessment–and, in fact, Microsoft has been agreeing for the last 10 years or so.

    Not long after I started at PC Week (which soon became eWEEK), I took a briefing with members of Microsoft’s Windows team about the future client operating system that would eventually be called Windows XP. The biggest take-away of the conversation was that Windows was going modular.

    During the lead-up to Windows Vista, one of the key differences between Vista and the Windows releases that came before was that Vista was to be modular.

    Now, Windows 7 is on the horizon, and the word on everyone’s lips is MODULAR.

    Yet, even though Microsoft has been talking about modularity in Windows for the past 10 years, Windows remains more monolithic than modular, and that applies to Windows in both its client and its server incarnations.

    Windows Server 2008 demonstrates some progress toward modularity, with a Server Core configuration that strips away much of what makes up a full Windows installation, but a stripped-down operating system isn’t necessarily a modular one. It’s not what you take away, it’s how you take it away and how you’re able to add it back.

    For instance, Windows Server 2008’s Server Core cannot host .Net applications, because even though Microsoft has been talking about modularity in Windows, and about managed code through .Net, for years now, Microsoft has not bothered to package .Net modularly–the framework has so many dependencies throughout Windows that only the full Windows Server installation will do.

    Meanwhile, the open-source implementation of Microsoft’s .Net, Novell’s Mono, is packaged very modularly, and can operate happily on a stripped-down instance of Linux, with the lower overhead and reduced attack surface that Microsoft touts for its Windows Server 2008 Server Core.

    If Microsoft is serious about making Windows more flexible, the company needs to follow the lead of Linux–Windows needs a software packaging framework that breaks every component in the Windows code base into manageable chunks that customers, ISVs and Microsoft’s own development teams can manipulate and extend to serve their needs.

  • I came across an interesting item on OSNews today — a link to a Computerworld story in which Terri Forslof, manager of security response at TippingPoint, explains why Ubuntu Linux was the only OS left standing at the pwn2own contest her firm sponsored at CanSecWest.

    “There was just no interest in Ubuntu,” said Terri Forslof, manager of security response at 3Com Corp.’s TippingPoint subsidiary, which put up the cash prizes awarded at the contest last week at CanSecWest…. “It was actually a lack of interest” on the part of the PWN to OWN contestants, Forslof said. “[Shane Macaulay’s] exploit would have worked on Linux. He could have knocked it over. But [the contestants] get a lot more mileage out of attacks on the Mac or Windows,” she continued.

    The story doesn’t mention how many people attempted to hack the Ubuntu machine at PWN 2 OWN. Does anyone out there have this information? If indeed Ubuntu was “ignored” during the contest, I imagine that someone in attendance must have noticed.

    While short on details about how many attacks Ubuntu faced compared to Vista and OS X, the story was particularly long on complimentary quotes about how the mighty prowess of Windows Vista SP1 made the OS tougher to crack than PWN 2 OWNers had perhaps expected. Hooray for Vista.

    Back at OSNews, I found the contribution of one commenter particularly interesting:

    “Prior to joining TippingPoint, Terri was a Security Program Manager for the Microsoft Security Response Center, focused on driving the resolution of security vulnerabilities within Microsoft products.”

    http://dvlabs.tippingpoint.com/team/tforslof

    Were you at CanSecWest? Just how ignored was Ubuntu?

    UPDATE: My colleague Ryan Naraine tells me that only four contestants had signed up for the PWN2OWN challenge, so:

    1. It’s entirely possible that Ubuntu was indeed completely ignored;
    2. It’s pretty sad that with so few participants, both Mac and Vista went down, anyway;
    3. I must remind myself to pay less, or at least more guarded, attention to PWN2OWN next year.

  • Today, eWEEK’s Clint Boulton is reporting on the latest efforts to save the Sprint-Clearwire nationwide WiMax wireless data network scheme.

    Back in July, Sprint and Clearwire struck up an arrangement in which the two companies would work together to build out and maintain a nationwide wireless network based on the years-old, often-demoed-but-too-seldom-spotted-in-the-wild WiMax technology.

    In November, the deal fell through, although both companies maintained that they would continue on with their individual WiMax efforts, with potential inter-network roaming agreements.

    Now, according to a report in the Wall Street Journal, Sprint and Clearwire are in talks with Comcast, Time Warner Cable, and Bright House Networks–the country’s first, second and sixth largest cable TV providers–to resuscitate the national WiMax network.

    According to the report, Intel and Google are each prepared to make significant investments in the scheme, as well.

    Setting aside Google and Intel, both of which stand to benefit from any development that connects more people to the Internet and gives people more reasons to buy computer equipment, it appears sensible enough for this particular group of wireless and cable companies to push a third major broadband option to ride alongside cable and DSL.

    That’s because while individual cable providers enjoy duopolist or monopolist status in their particular fiefdoms, they don’t enjoy the same national reach as do the major cellular carriers. Sprint may be a major cellular carrier, but Sprint lacks the fixed broadband assets that AT&T and Verizon boast.

    However, while the WiMax effort from Sprint et al would appear to help cover the companies’ broadband bases, and while I’d love to see another broadband option emerge, I’m not convinced that a national WiMax network will manage to succeed.

    The trouble is that all the telephone, cable, satellite, and wireless companies are in the same business–that of data delivery. If today’s bit barons are paying attention, they should be able to see that the average consumer of data delivery services is spending his or her money very inefficiently:

    You pay for a landline;
    You pay for wireless voice;
    You pay for broadband Internet at home;
    Your company pays for Internet connectivity;
    You pay for cable or satellite TV;
    You (may) pay for wireless data;
    You (may) pay for Wi-Fi hotspot access;
    You (may) pay for satellite radio.

    Every one of these services can be, or already is, provided over the Internet, and it won’t be long before businesses and consumers start figuring out ways to shorten their lists of data-service bills without giving up too much of what they require.

    Today, the average data services consumer could probably cut his or her data bills significantly with a bit of technological cleverness–the sort of cleverness that comes in pre-packaged, ready-to-use form in the technology products of the near future.

    A data services shake-out is coming, and it’s not clear to me whether the wireless, long-range, mobility-amenable benefits of WiMax can outweigh the technology’s woeful lack of penetration.

    In my opinion, the much more promising route to inexpensive, ever-present broadband runs through unlicensed spectrum, perhaps in the form of the “white space” scheme that strange bedfellows Google and Microsoft are pushing.

    Of course, this route is currently far more vaporous than WiMax, but the openness of unlicensed models to all comers means that the risks associated with new products are spread more broadly, and there’s more opportunity for organic solutions to develop for the problems that pop up along the way.

  • Apple’s announcement yesterday that it plans to add support for Microsoft’s Exchange groupware server on the iPhone and the iPod Touch devices has gotten me thinking about Exchange support (or lack thereof) on other platforms, such as Linux and, strangely enough, Apple’s own OS X. It’s possible now to link up pretty much any mail client on any platform with Exchange via IMAP, but in order to access all the non-mail data that makes Exchange worthwhile, you need to find another route.

    As eWEEK Labs’ own Tiffany Maleshefski detailed in her coverage of Microsoft’s latest release of Office for the Mac, the software giant’s Mac Business Unit opted not to include full Exchange support in Office 2008 because to do so would have been too difficult. At issue, apparently, was the complexity of the MAPI (Messaging API) interface through which Outlook on Windows communicates with Exchange. It seems to me that in the face of long-term, loudly expressed customer demand, there must have been a way for Microsoft to bring its own messaging API to one of its own products. I suppose, though, that when there isn’t a will, it doesn’t matter whether there’s a way.

    In any case, with Exchange support for an Apple platform on the way in the form of ActiveSync for the iPhone, I wonder whether iPhone’s elder sibling, OS X, might be allowed to become Exchange-fluent via ActiveSync as well. And, if OS X can get connected to Exchange through ActiveSync, perhaps Linux clients could be allowed to do the same. Right now, apart from mail-only IMAP, the best way for a Linux user to link up with Exchange is through the Exchange Connector plug-in that ships with Novell’s Evolution groupware client, but even though Novell and Microsoft are now pals, MAPI support remains a Windows-only affair.

    Here’s another possibility: Depending on how Microsoft defines “high-volume,” Exchange might fall under the firm’s recent interoperability initiative, in which Microsoft pledges to work toward enabling openness and interoperability for those who wish to teach their applications to talk to Microsoft’s high-volume products.

    Related Stories:

    Commentary: iPhone Goes Enterprise, Treos and BlackBerrys Go Away?

    Commentary: Microsoft’s Interop Forecast Is Partly Cloudy

    Review: Office 2008 Leaves Mac Users Wanting

    Review: Exchange Expansion

    Commentary: Mac Is Hard

    Commentary: Microsoft, Novell Have Much to Prove

    Commentary: Giving Up on Linux?

    Commentary: Novell Continues to Buy Open Source

    Commentary: Is Novell Committed to Open Source?

    Come this June, in enterprises across the country, I expect that Treos will begin to wither in the eyes of one-time loyalists, and that erstwhile thumb-keyboard addicts will start to judge their BlackBerrys to be significantly sourer. That’s because June is the month in which Apple has promised to ship version 2.0 of the firmware that drives its popular but so-far solidly consumer-focused iPhone and iPod Touch devices–a release that embraces the enterprise and third-party applications.

    The software update, which will be freely available for current iPhone users, and offered to iPod Touch owners for a nominal but as-yet-undisclosed fee, addresses the enterprise functionality gaps in the current firmware by adding support for Microsoft Exchange via ActiveSync, a Cisco VPN client, and a set of enterprise policy enforcement tools, including remote device wiping. In addition, iPhone 2.0 will allow for client-side installation of third-party applications through an Apple-run service called the App Store.

    Apple’s iPhone will be far from the first mobile device to offer the enterprise connectivity and management features that Steve Jobs announced today. However, from a hardware perspective, the iPhone and the iPod Touch are, by far, the most impressive mobile devices I’ve ever laid hands on.

    As for software, my biggest qualm about Apple’s platform has been its resistance to third-party applications. While the version of the Safari Web browser that ships with the iPhone is certainly very impressive, it’s not until you “jailbreak” one of these devices and begin trying out the various underground applications that you see their full potential.

    Of course, the trouble with the jailbreak route, which takes advantage of a security vulnerability in a previous version of Safari to load an application installer (one that seems very similar to the official App Store tool that Apple announced today), is that you can’t fully trust the applications to which you gain access. When I hacked my own iPod Touch to admit new software onto the device, I was introduced to a “community sources” package from the “official” installer repository, through which I could install still other software sources, which were further removed from the initial jailbreak/installer developer team I’d decided to trust by hacking my device in the first place.

    Under the software distribution scheme that Apple announced today, developers will pay $99 to join the program, which will be the sole sanctioned source of iPhone applications. I imagine that there will always be a way to install unsanctioned applications, but I’m hoping that Apple executes well enough with its App Store to keep me on the software repository straight and narrow.

    It’s not clear what, if anything, Apple will do to vet these applications, but I’m pleased to see that all apps offered through the Apple service will be signed by a certificate that should at least ensure that the applications you choose to install will come from the source you believe they’re coming from. Beyond the binary-signing, I’d like to see Apple move away from allowing all applications to run as root, which is the current state of iPhone application permissions.

    And what about Palm, RIM, and their cellular carrier overlords? Here’s hoping that Apple’s incursion into these firms’ enterprise terrain proves incentive enough to prompt significant new hardware advances–and leverage enough for Palm, RIM and others to force the wireless oligarchs to allow these advancements to come to market.