
  • Today, Google rolled a much-anticipated new component into its family of online applications: Google Sites.

    The new service is the fruit of Google’s 2006 purchase of hosted wiki provider JotSpot, and I’ve been waiting for some time now to see what the search giant would do with its purchase, and how well it would integrate it with the rest of the Google Apps suite.

    I’ve only spent a short time with Sites so far, but the service looks impressive. It’s easy to edit pages, and all the standard wiki bits appear to be in place. As for integration, I was able to insert calendar items, documents, presentations and spreadsheets from Google Apps, as well as items from other Google properties, such as YouTube videos. I could also insert some of the spiffy new spreadsheet-backed forms I’ve made since Google debuted its simple form builder earlier this month.

    More important than those Web app tie-ins is the way that Google’s recently launched Team Edition dissolves initial deployment barriers by letting people at a company get started with a Google Apps account without going whole-hog and migrating their e-mail system to Google.

    Things like Web-based forms (and to a certain extent, even wikis) are no big deal–they’re easy to build, not too tough to host, and there are plenty of effective ways to pull out and use the data they collect. The trouble is that while many individuals, workgroups and organizations need simple data collection tools, and while basically anyone can throw together an application to do the collecting, who really wants to spend their time on this?

    I certainly do not wish to take on another IT commitment, and our company’s IT department is way too busy working on other projects to support my ad hoc application needs.

    For instance, I’ve been looking for a better way to accept product review pitches from the vendors that eWEEK Labs covers. Right now, companies and their public relations reps e-mail us their pitches–way too many to process, respond to or act on effectively, particularly since the pitches don’t always include all the information we need and aren’t always sent to the most pertinent contact in our group.

    No problem, I tell myself, I’ll chef up a Web-based form with fields for the info we want and categorization to route the pitches to the appropriate labs analysts, and I’ll link it up somehow to a MediaWiki instance hosted from our lab. The rub, of course, has been figuring out how to chef up that simple system with no new budget and no new system administration responsibilities.
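
    To make that concrete, here’s a minimal sketch in Python of the routing piece of such a form handler; the category names and analyst inboxes are invented for illustration, not our actual beat assignments:

        # A hypothetical sketch of the routing logic behind such a pitch form;
        # the categories and analyst inboxes are invented for illustration.
        ROUTES = {
            "storage": "storage-analyst@example.com",
            "security": "security-analyst@example.com",
            "networking": "networking-analyst@example.com",
        }
        DEFAULT = "labs@example.com"

        def route_pitch(pitch):
            """Return the inbox that should receive a pitch, based on its category field."""
            return ROUTES.get(pitch.get("category", ""), DEFAULT)

        # A submitted form would arrive as a dict of field names to values.
        print(route_pitch({"product": "SuperNAS 3000", "category": "storage"}))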

    A combo like Google Sites and Google Forms promises to address business needs like these, and do it for free, to boot.

    Here are some screens I snapped during my Sites & Forms safari:

    your own Site is a click away

    Enter your Google Site vitals and just click through with the default theme–you can change it (and choose from a lot more options) later.

    requisite ugly themes

    The editing tools appeared rich enough. I didn’t see how to create a link for an as-yet-nonexistent page, though, nor did CamelCase appear to engender links.

    big and rich editing

    I wanted to embed a spreadsheet-backed form, so I headed over to Google Docs to create one.

    build a form

    I created a list-type field for my simple form.

    building a list

    I could fill out my form while embedded in my Sites site. I could also fill out the form right from my e-mail client.

    my embedded form

    I circled back to Google Apps to check my Form traps and found my lone entry waiting in my spreadsheet.


    That’s about 10 minutes with Google’s newest App citizens. I say so far, so good. What do you think of Google’s recent Apps developments?

    When I read about the recent Princeton University paper on subverting hard drive encryption by fishing for encryption keys in system RAM, I got to wondering about the vulnerability of my own Ubuntu-powered notebook computer.

    After all, support for out-of-the-box hard drive encryption is one of the reasons why I opt for Ubuntu for my primary work machine. Ubuntu inherits this capability from Debian, which introduced the security feature in its Etch release. Surprisingly, Fedora and OpenSUSE still lack support for configuring hard drive encryption at install time (you can, however, hack your way to hard drive encryption on these or any other Linux system).

    I was interested to find that the Web site at which the Princeton paper is hosted offers up a “try-it-at-home” experiment in memory remanence–the phenomenon that forms the foundation of the encryption exploit.

    When I saw that this story had popped up again, this time as a justification for Apple’s characteristically hardware-restrictive practice of soldering the MacBook Air’s RAM into place, I thought I’d try the experiment on my own system and report the results.

    As directed in the experiment guide, I cheffed up a small Python program to fill my RAM with a recognizable keyword: Argon (the pirate’s favorite chemical element). I ran the program, watched GNOME’s system monitor panel applet as my RAM filled up, and then held down my power button to brusquely shut off my system.
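
    The program amounted to just a few lines. Here’s a minimal sketch along the same lines (the chunk size is arbitrary, and the code I actually ran differed in its details):

        # Fill RAM with a recognizable keyword until allocation fails.
        # A sketch only: on Linux, the OOM killer may end the process
        # before a MemoryError is ever raised.
        KEYWORD = b"Argon\n"

        chunks = []
        try:
            while True:
                # Each iteration pins another ~64MB of keyword-stuffed RAM.
                chunks.append(KEYWORD * (64 * 1024 * 1024 // len(KEYWORD)))
        except MemoryError:
            print("filled %d chunks before running out of memory" % len(chunks))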

    I then booted back up, typed in my encryption passphrase, and typed “sudo strings /dev/mem > Desktop/mem.txt” into a terminal to see if any of that pirate keyword booty still lurked in my 3GB of RAM. Here’s what I found:


    No Argon to be found, which bodes well, at least initially. The Princeton researchers offer on their site a handful of reasons why my RAM might have been wiped:

    If you don’t see any copies of the pattern, possible explanations include (1) you have ECC (error-correcting) RAM, which the BIOS clears at boot; (2) your BIOS clears RAM at boot for another reason (try disabling the memory test or enabling “Quick Boot” mode); (3) your RAM’s retention time is too short to be noticeable at normal temperatures. In any case, your computer might still be vulnerable — an attacker could cool the RAM so that the data takes longer to decay and/or transfer the memory modules to a computer that doesn’t clear RAM at boot and read them there.

    And, of course, my ThinkPad doesn’t sport soldered-in RAM, so evil-doers might be able to drop my notebook into a chill chest, pop out the RAM, and go journeying through my sensitive review notes and voluminous personal musings about getting better organized.

    If I figure out a way to make this experiment work, I’ll be sure to update you.

  • Today Microsoft laid out a major new interoperability initiative that’s meant to “increase the openness of its products and drive greater interoperability, opportunity and choice for developers, partners, customers and competitors.”

    During the press conference that Microsoft executives Steve Ballmer, Ray Ozzie, Bob Muglia and Brad Smith held this morning, much was made of the pains Microsoft is taking to include the open-source software community in the new interop initiative.

    However, the legal environment surrounding interoperability between Microsoft’s products and the open-source applications that have sprung up to rival Redmond’s proprietary wares is scarcely less murky today than it was yesterday.

    To Microsoft’s credit, the firm did make available for download almost 800MB of Windows Server and Windows Communication protocol specifications (in PDF format, no less) to join the 20MB of Office binary format specifications that Microsoft made available on Feb. 15.

    Unlike those Office binary format specifications, which are covered under Microsoft’s we-pledge-not-to-sue-you Open Specification Promise, the Windows Server and Communication protocols are covered under a different, somewhat twisty promise that reminds me of the scene from Pulp Fiction where Vincent lays out for Jules the rules surrounding Amsterdam hash bars:

    Me: Okay, so tell me again about the Windows protocols.

    Microsoft: Okay, watcha wanna know?

    Me: Open-source apps can interoperate with Windows now, right?

    Microsoft: Yeah, it’s legal, but it ain’t 100 percent legal. I mean, you can’t just develop an open-source app that interoperates with Windows and start using it or selling it. I mean, we want you to use these protocols, but only in certain designated ways.

    Me: Example?

    Microsoft: Yeah, it breaks down like this, okay, it’s legal to develop open-source software with the Windows protocols, it’s legal to distribute those apps, but only for noncommercial purposes.

    If you pay for our as-yet-undisclosed patent license, it’s legal to sell or use those apps, but, but that doesn’t matter, because … get a load of this … we have no intent to sue people who infringe on these patents.

    But we might change our minds.

    So where does that leave open-source developers who wish to build products that work well with Windows?

    If I were developing open-source software, or if I were looking to build a business on open-source software, and if I allowed my applications to become entwined with Microsoft’s 30,000 pages of no doubt very useful specifications, I’d feel (to use another Pulp Fiction reference) like Marvin, riding along with Vincent and Jules, with Vincent’s handgun waving around casually in my face.

    And you remember what happened to Marvin.

    On the brighter side of today’s big news, Microsoft has pledged to detail the specific patents attached to the protocols and specifications it’s released today. That hasn’t happened yet, but when it does, it will give open-source developers something really useful to sink their teeth into.

    Rather than rack up legal fees trying to figure out what counts as commercial versus noncommercial distribution, or how it’s possible to legitimately label an application with patent-based distribution encumbrances as “open,” the open-source community can get to work routing around the intellectual-property Maginot Line that Microsoft is trying to erect against its most vital competitors.

    And I’ve got to say it: Royale with Cheese.

  • For the past several months, anyone who’s asked me about the latest big new thing in IT has gotten an earful about Amazon’s Elastic Compute Cloud, or EC2. With this service, which I reviewed last year, you pay Amazon 10, 40 or 80 cents an hour (depending on the RAM, storage and CPU) and you get a virtualized server running in one of Amazon’s data centers.
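
    The back-of-the-envelope math on those posted rates is simple enough to run yourself (the instance labels here are shorthand, not Amazon’s official names):

        # What the posted EC2 hourly rates work out to over a 30-day month.
        rates = {"small": 0.10, "large": 0.40, "xlarge": 0.80}  # dollars per hour
        hours_per_month = 24 * 30

        for size, rate in rates.items():
            print("%s: $%.2f for a full month" % (size, rate * hours_per_month))
        # small: $72.00, large: $288.00, xlarge: $576.00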

    This flexible machine hosting, combined with software appliances that bundle an operating system with your application of choice, lets you worry about your business workloads, and leaves all the details around physical machine maintenance, connectivity, and power consumption to a firm that probably boasts more data center management expertise than yours does.

    It’s a compelling model, and one that offers all companies, but small companies in particular, a route for scaling up their Web-based businesses very quickly. EC2 also gives larger companies with their own data center capacity the option of turning to “the cloud” to handle spikes in traffic, rather than requiring these businesses to over-provision to meet occasional capacity crunches.

    Right now, however, there’s a big asterisk floating alongside EC2. The service, which is based on technology from the open-source Xen hypervisor project, is a Linux-only proposition. This works out just fine for businesses built on the Linux, Apache, MySQL and Perl/PHP/Python (LAMP) stack, but EC2 doesn’t offer much to firms that rely on Windows Server applications.

    While this Windows support asterisk signals a limitation of EC2, I contend that this bit of rhetorical punctuation would be best interpreted as a wake-up call to Microsoft. After all, I’m not the only one who’s taken note of EC2. Red Hat has teamed with Amazon to make Red Hat Enterprise Linux available in Amazon’s cloud, complete with tools to help manage mixed in-the-cloud and on-premises server deployments. While many businesses currently rely on Windows Server, I’m sure that Red Hat and other Linux platform vendors would be only too happy to help customers migrate their Windows-anchored applications to Linux-based, cloud-ready alternatives.

    As luck would have it, Microsoft now boasts its own virtualization platform technology, in Windows Server 2008’s Hyper-V. Building out a Windows Compute Cloud (WC2) service would give Microsoft the opportunity to demonstrate that its new hypervisor is capable of powering a major virtualized infrastructure–a sorely needed proof point for the brand-new Windows component. What’s more, a WC2 platform would be a major boon to Microsoft’s channel partners, particularly those selling to SMB customers for whom hosting their own on-premises servers is an unwelcome diversion from running their businesses.

    Cloud computing services certainly present challenges, such as dealing with the added latency and security issues that come with entrusting machines to remote providers. And, of course, there’s the specter of service outages, to which users of Amazon’s EC2 and Simple Storage Service offerings were treated for a few hours just last week.

    To some extent, however, we’re all headed cloudward for our computing, and our vendors will either lead us there, or be left scrambling to follow–Microsoft included.

    Wireless handset and infrastructure giant Nokia has announced plans to acquire Trolltech, a purveyor of application frameworks for desktops and mobile devices. Trolltech is perhaps best known for the Qt framework, which forms the core of the open-source KDE (K Desktop Environment).

    For Nokia, the primary motivation behind the Trolltech pickup appears to be Qtopia, Trolltech’s application platform for Linux-based mobile devices, consumer electronics and embedded devices. Trolltech has racked up a respectable stable of Qtopia-based devices, but the platform hasn’t exactly become a household name.

    Now that Nokia is backing Trolltech and Qtopia, we may see this mobile platform begin attracting more of the attention that’s so far been reserved for its elder sibling in the desktop application space, Qt.

    Speaking of which, Nokia isn’t a desktop applications company, and this apparent mismatch has many wondering what’s going to happen to Qt, and what’s going to happen to KDE.

    Nokia has said, as acquirers always say, that life for Trolltech’s existing products and customers will go on as before. Just how closely Nokia intends to adhere to the status quo remains to be seen.

    For now, Nokia seems to be beginning its relationship with KDE in good faith, and has announced plans to become a patron of the project.

    It seems to me that not only will Qt (and by extension, KDE) continue to fare well under Nokia’s stewardship, but that the Trolltech acquisition may be opening the door to a whole new class of crossover notebook/smart-phone devices.

    There’s a world of Web-based applications out there, and I’m on the lookout for lightweight, low-cost, long-battery-life devices that can keep me computing with those applications wherever I go.

    I’m not the only one, either. As I’m certain Nokia noted, the current top seller in Amazon’s Computers and PC Hardware category is Nokia’s Linux-powered N800 Internet Tablet PC. No. 2 on the list is the Asus Eee 4G Micro Laptop PC, which is also Linux-powered.

    Apple’s iPhone has demonstrated that it’s possible to wring much more functionality out of a small-form-factor device than the previous decade of incrementally improving wireless devices has indicated.

    Unfortunately, Apple’s slick-looking but fundamentally conservative MacBook Air seems to demonstrate that Apple isn’t yet ready to follow up on the iPhone with computing devices that break the typical notebook mold.

    However, while Apple has a profitable Macintosh notebook business to protect, Nokia faces no such encumbrance, and, armed with its new Trolltech assets, might find itself in the perfect position to deliver us the sort of next-generation computing devices we need to bid adieu to today’s bloated client paradigm.

  • Today, Sun Microsystems turned heads by announcing plans to lay down seven and a half percent of its current market capitalization to acquire open-source database vendor MySQL AB.

    Why did Sun do it? Look no further than the other major acquisition announced today, in which Oracle declared victory in its months-old bid to purchase middleware giant BEA Systems.

    The growth of Web 2.0 companies is stoking demand for the database-anchored software stacks on which these companies depend. As Sun’s Alan Packer outlined in a Dec. 10 blog post, this increased demand, plus fast-moving multicore processor advances, plus slow-moving licensing reforms from entrenched database players, equals dramatically rising costs for proprietary database products.
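
    The arithmetic behind that equation is easy to sketch. The per-core fee and core counts below are invented for illustration; the shape of the curve is the point:

        # Illustrative only: per-core licensing meets multicore servers.
        per_core_fee = 10000  # hypothetical proprietary license fee, dollars per core

        for cores in (2, 4, 8, 16):  # successive server generations
            print("%2d cores: $%d in license fees" % (cores, cores * per_core_fee))
        # The server count stays flat while the database bill octuples.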

    In the face of these costs, more companies are looking to open-source database alternatives. How can database incumbents avoid losing dollars to open-source upstarts? Packer offers four strategies, but let’s look at strategy one:

    “Strategy 1: Resistance is Futile – You Will Be Assimilated. Picking off your competitors can get a lot easier when they are open-source companies, because most of them struggle to address a major discrepancy between their penetration and their annual revenue.”

    “Note that Oracle has already made some raids across the border, having acquired InnoBase, maker of InnoDB, MySQL’s most popular transactional engine, and Sleepycat Software, maker of Berkeley DB, another transactional engine used with MySQL. In response, MySQL has scrambled to introduce Falcon, a transactional database engine of its own.

    “Any of the major proprietary database companies could reasonably play the role of the Borg in this scenario, though, since all of them have very deep pockets. MySQL is probably the most vulnerable to takeover, since it’s privately held. PostgreSQL may be more difficult to silence, since it is developed by an active community rather than a single company.”

    In the face of today’s news, those fears of seeing MySQL picked off by a proprietary database vendor go a long way toward illuminating Sun’s acquisition logic.

    Of course, Sun didn’t purchase MySQL simply to make the world safe for open-source software. The MySQL pickup gives Sun yet another seat at the open-source table, and a particularly choice one, considering the ever-broadening role that the LAMP stack is playing among Web 2.0 companies. It’s tough to ignore that the biggest Web 2.0 hitter of all, Google, is a major MySQL shop, and one that’s recently become active in the MySQL community.

    What’s more, by bringing MySQL into its fold, Sun is also improving its chances of shifting that acronym to SAMP: Solaris, Apache, MySQL and Perl/PHP/Python.

    Finally, and also in the context of proprietary database price pressure, Alan Packer’s post offers some good insight on what’s behind Oracle’s BEA pickup:

    “Strategy 3: Revenue Pull-Through. Include the database as a bundle with other pieces of your software stack. Focus the customer’s attention on buying something else, and chances are they won’t notice or won’t care that they’ve bought your database as well.”

    If you’ve been paying attention to Oracle, this shouldn’t be a newsflash — from hypervisor to operating system to middleware to enterprise apps, Oracle’s out to own a head-to-toe software stack, with support and licensing all tied up into what must end up looking like an omnibus spending bill.

    So Sun’s forging ahead to reclaim a software-and-services-fueled spot as the dot in dot com, and to do so on the strength of open source. Oracle’s out to show Slashdot that they picked the wrong company figurehead to dress up as Locutus.

    What’s the better route forward?

  • Apple’s subnotebook wunder-machine is nigh. I’ve been waiting for an ultralight Web and writing machine for a long time now. So, should I run out and pant outside my local Apple store until the first units arrive?

    Based on the information available right now, let’s weigh the pros and cons …

    Pros:

    The MacBook Air is slim and light.

    The unit’s multitouch-equipped touch pad looks interesting, but I’d have to try it out for myself to gauge its worth.

    The MacBook Air offers up to five hours of battery life.

    Cons:

    Limited connectivity. There’s only one USB port, and USB hubs never seem to work as well as built-in ports do. There’s no built-in Ethernet port, either.

    Disposable? The battery is apparently not user-replaceable, nor is the RAM, nor is the hard drive. It’s one thing to sell a disposable $300 mobile device; it’s quite another to sell a disposable $1,800 to $3,000 notebook. Who among us has never swapped a notebook battery or upgraded our RAM?

    Five hours of battery life sounds great, but I assume that’s for a MacBook Air outfitted with a solid-state drive, which adds $1,000 to the cost of the unit. I’d rather pay less for a device with much less storage. I can keep more data up in the AIR, right?

    My Snap Judgment (subject to change):

    If my checkbook were fat enough, I’d probably preorder the MacBook Air immediately. Of course, in the land of “ifs,” I’d probably also buy myself a whole branch of Fry’s Electronics to use as a personal playground of gadgetry and wires.

    However, as things currently stand, I’ll probably wait for a few generations of MacBook Air notebooks to pass by (along with several friendly price drops) before I pick up one of these very slick-looking machines.

    How does the MacBook Air strike you?

  • Back when Windows XP was in development, I wrote a column titled, “Ding, Dong, the Witch Is Dead (Almost)!”

    I was writing about how Windows 98 was soon to be done in by a more stable, more secure version of Windows, and about how the new version would, alongside OS X and Linux, usher in an era in which applications would be more sanely isolated from one another. No longer would we have to worry about a single application crashing and taking down our whole system.

    Lately, though, I’ve been displeased to find that the misbehavior of certain applications I use is visited upon other, totally unrelated applications, leading to crashes, system resource problems and even potential security breaches on the machines I use. The problem is that a growing number of the applications I rely on are served up to me through my Web browser, and compared with operating systems, Web browsers do a lousy job of playing host to applications.

    Case in point: A few months ago, while reading a post on a security blog, I carelessly clicked on a proof-of-concept exploit of a Google cross-site scripting vulnerability. Without realizing it, I’d allowed this code to configure my Gmail account to forward all messages to the author of the POC. Google fixed the gap, but didn’t do much to advertise it to its users, and any unintended forwarding setups persisted after the fix.

    Fortunately, I was too lame to get a golden ticket to the then-invitation-only Gmail service until every possible permutation of my name had been claimed by someone else, so I only use that account as a destination for mailing list messages and quasi-junk mail. In any case, the exploit writer closed his e-mail account fairly quickly under the strain of mail from more inattentive Gmail users than he’d perhaps anticipated.

    Sure, I should have been clicking more carefully, but does computing in the software-as-a-service world have to mean settling for crude isolation between my blog reading and e-mail management applications?

    Even if I amp up my script-running vigilance–I’ve been getting acquainted with the NoScript plug-in for Firefox–I’ll still have to worry that some Flash ad on a Web site in tab one will demolish the performance of the online word processor I’m using in tab two, or even crash my whole browser session.

    Software as a service is turning Web browsers into the operating systems of the Internet. If they know what’s good for them, Google, Salesforce.com, et al. will start working more closely with groups such as the Mozilla Foundation to deliver browsers that can serve as the credible application hosts we require.

    If you are a command line guru, you call upon your zypper, yum, conary or apt-get from the terminal, and you awk-sed-grep your way to what you’re after.

    For me, unless I know exactly what package I want–and I often don’t–I typically turn to Synaptic, the graphical package manager that graces my Ubuntu notebook. Synaptic is a really nice application, and I’ve spent untold hours nerdily sifting through the massive software catalog that Ubuntu inherits from Debian.

    There are six packages in the repositories that match a search for “Software Defined Radio.” They hail from the GNU Radio project, and it pleases me to know that when I finally get around to playing with SDR and GNU Radio, they’re waiting just a few clicks away.

    I’ve also used Synaptic to resolve unasked questions, like when I tried to gauge the health of the java-gnome bindings project by searching for packages with a java-gnome dependency.

    Like I said, nerdily.

    I’ve mentioned Ubuntu’s simpler Add/Remove Applications tool favorably in reviews before, but I don’t usually use it myself, since I think of myself as a Power User.

    It turns out, though, that the simpler tool launches faster and searches faster, too. I needed a color picker to help me come up with a hex color code with which to customize a Web application. Just after I kicked off a search for “color picker” in Synaptic, I flipped over to the Add/Remove program, typed in the same search and still beat good old Synaptic.

    So even we Power Users (circa Windows 2000) can learn to benefit from simpler tools–at least some of the time.

    And happily, during the brief link hunt that I carried out to come up with the apt-get et al. how-to links from the first paragraph, I learned a couple of new terminal tricks. I guess I’m off to play with apt-cache policy now, and inch imperceptibly toward command line gurudom. Nerdily.

  • I received an iPod Touch for Christmas, and before I loaded it up with any MP3s or set a single Safari bookmark, I “jailbroke” the device, thereby opening it to all sorts of handy community-supplied applications, including, as Ryan Naraine is reporting today, potentially malicious code:

    Security Watch – Apple – Malicious iPhone (Prank) Trojan is Eye-Opener
    According to warnings from two anti-virus vendors, a malicious iPhone software package circulating on the Web could cause legitimate third-party applications to be nuked if the Trojan is uninstalled from iPhones.

    Even though the Trojan that Ryan wrote about wasn’t all that malicious–an application that messes with its neighbors upon uninstall sounds more like shoddy packaging than naughty pranksterism–the fact is that a jailbroken iPhone or iPod Touch is a malware outbreak waiting to happen.

    The screenshot to the right says it all: When you’re running anything on an iPhone, you’re doing it as the superuser. I imagine that when Apple decides to officially open its superfly devices to third-party applications, it will rectify the run-as-root situation, since full-sized OS X handles this pretty well.

    In addition, I’d like to see the software development community members whose apps populate the Linux-like Installer.app repositories on my iPod Touch implement a code signing framework such as the ones that Ubuntu, Red Hat and others provide. You may not be able to tell for sure if the app you’re installing will do what it’s supposed to do, but at least you can feel confident about where it came from.
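
    As a sketch of what such a framework buys you, here’s roughly the check a signed-repository client performs, assuming each package ships with a detached GPG signature and the repository’s public key is already in your keyring (the file names are hypothetical):

        # A minimal sketch of verifying a package against a detached GPG
        # signature; assumes the repository's signing key is already trusted.
        import subprocess

        def verify_package(package_path, signature_path):
            """Return True if gpg accepts the detached signature for the package."""
            result = subprocess.run(
                ["gpg", "--verify", signature_path, package_path],
                capture_output=True,
            )
            return result.returncode == 0

        # Hypothetical file names, for illustration.
        if verify_package("coolapp_1.0.ipk", "coolapp_1.0.ipk.sig"):
            print("signature checks out: we know who built this package")
        else:
            print("unsigned or tampered: refuse to install")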

    eWEEK Labs’ mobile and wireless expert, Andrew Garcia, was too sensible to leave his iPhone jailbroken, but I plan to keep my iPod Touch hacked. Without third-party apps, the Touch is a slick MP3 player with a Web browser, but with the app doors open, it’s the best handheld computer I’ve ever used.