Looking Ahead to oVirt 3.1

We’re about one week away from the release of oVirt 3.1, and I’m getting geared up by sifting through the current Release Notes Draft, in search of what’s working, what still needs work, and why one might get excited about installing or updating to the new version.

Web Admin

In version 3.1, oVirt’s web admin console has picked up a few handy refinements, starting with new “guide me” buttons and dialogs sprinkled throughout the interface. For example, when you create a new VM through the web console, oVirt doesn’t automatically add a virtual disk or network adapter to the VM. You add these elements through a secondary settings pane, which is easy to overlook, particularly when you’re getting started with oVirt. In 3.1, a “guide me” window now suggests adding the NIC and disk, with buttons that take you to the right places. These “guide me” elements work similarly elsewhere in the web admin console, for instance directing users to sensible next actions after creating a new cluster or adding a new host.

Storage

Several of the enhancements in oVirt 3.1 involve the project’s handling of storage. This version adds support for NFSv4 (oVirt 3.0 supported only NFSv3), along with the option of connecting external iSCSI or Fibre Channel LUNs directly to your VMs (as opposed to connecting only to disks in your data or ISO domains).
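If you plan to back a storage domain with NFSv4, it’s worth confirming the export mounts cleanly before pointing oVirt at it. A minimal sanity check from a host, assuming a placeholder export of nfs.example.com:/exports/data:

```shell
# Verify the export is reachable over NFSv4 (hostname and path are placeholders)
mkdir -p /tmp/nfstest
mount -t nfs4 nfs.example.com:/exports/data /tmp/nfstest && echo "NFSv4 mount OK"
umount /tmp/nfstest
```

If the nfs4 mount fails but a plain `-t nfs` mount succeeds, the server is likely only exporting NFSv3.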

oVirt 3.1 also introduces a slick new admin console for creating and managing Gluster volumes, plus support for hot-pluggable disks (as well as hot-pluggable NICs). I’ve had mixed success with the Gluster and hotplug features in my tests so far; there appear to be wrinkles left to iron out in the component stacks that power these features.
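The new console drives Gluster for you, but it can help to know roughly what it’s doing underneath. A sketch of the equivalent Gluster CLI steps, with placeholder host names and brick paths:

```shell
# Join a second Gluster host to the pool (host names are placeholders)
gluster peer probe gluster2.example.com

# Create a two-way replicated volume from one brick on each host
gluster volume create data-vol replica 2 \
    gluster1.example.com:/bricks/data gluster2.example.com:/bricks/data

# Start the volume and confirm its status
gluster volume start data-vol
gluster volume info data-vol
```

Running the CLI directly is also a handy way to cross-check state when the web console and the Gluster layer disagree.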

Installer

One of the 3.1 features that most caught my eye is proof-of-concept support for setting up a complete oVirt 3.1 install on a single server. The feature, which is packaged as “ovirt-engine-setup-plugin-allinone,” adds the option of configuring your oVirt engine machine as a virtualization host during the engine-setup process. I’ve had mixed success with this option in my tests; sometimes, the local host configuration part of the setup fails on me.
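For reference, the all-in-one path boils down to two commands; the package name comes from the release notes draft, so verify it against your repository:

```shell
# Install the all-in-one plugin alongside the engine setup tooling
yum install ovirt-engine-setup-plugin-allinone

# Run the interactive setup; with the plugin installed, it offers to
# configure this machine as a virtualization host as well
engine-setup
</imports>
```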

Even when the engine-setup step hasn’t worked for me, I’ve had no trouble adding my ovirt-engine machine as a host by clicking the “Hosts” tab in the web admin console, choosing the menu option “New,” and filling out information in the dialog box that appears. All the Ethernet bridge fiddling required in 3.0 (see my previous howto) is now handled automatically, and it’s easy to tap the local storage on your engine/host machine through the “Configure Local Storage” menu item under “Hosts.”

Another installer enhancement offers the option of using a remote PostgreSQL database server to store oVirt configuration data, as an alternative to the default locally hosted database.
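Before pointing engine-setup at a remote database, it’s worth confirming the engine machine can actually reach and authenticate against it. A quick check with the standard psql client, where the host, user, and database names are placeholders:

```shell
# Confirm remote PostgreSQL connectivity from the engine machine
# (db.example.com, engine user/db are placeholders; prompts for a password)
psql -h db.example.com -U engine -d engine -c 'SELECT version();'
```

A failure here (timeout, authentication error) usually points at pg_hba.conf or listen_addresses on the database server rather than at engine-setup itself.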

oVirt 3.1 now installs with an HTTP/HTTPS proxy that makes oVirt engine (the project’s management server) accessible on ports 80/443, versus the 8080/8443 arrangement that was the default in 3.0. This indeed works, though I found that oVirt’s proxy prevented me from running FreeIPA on the same server that hosts the engine. Not the end of the world, but engine+identity provider on the same machine seemed like a good combo to me.
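A quick way to confirm the new proxy arrangement is to probe the ports from another machine; “engine.example.com” is a placeholder for your engine host:

```shell
# The proxy should answer on the standard web ports now
curl -sI  http://engine.example.com/  | head -n 1
curl -skI https://engine.example.com/ | head -n 1
```

On a 3.0-style install, only the 8080/8443 equivalents would respond.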

Along similar lines, oVirt 3.1 adds support for Red Hat Directory Server and IBM Tivoli Directory Server as identity providers, neither of which I’ve tested so far. I’m interested to see if the 389 directory server (the upstream for RHDS) will be supported as well.

7 thoughts on “Looking Ahead to oVirt 3.1”

  1. Hi, I did several installs on CentOS 6.2 and Fedora 17 with 3.1 but it never works..
    engine-setup completes without any error, but when I try to access the web console, the page just loads without any login, and there’s no error like “page not found”..

    [root@testing ~]# rpm -ql |grep ovirt
    rpm: no arguments given for query
    [root@testing ~]# rpm -qa |grep ovirt
    ovirt-engine-restapi-3.1.0-3.15.el6.noarch
    ovirt-engine-tools-common-3.1.0-3.15.el6.noarch
    ovirt-log-collector-3.1.0-11alpha.el6.noarch
    ovirt-image-uploader-3.1.0-11alpha.el6.noarch
    ovirt-engine-jbossas711-1-0.x86_64
    ovirt-engine-userportal-3.1.0-3.15.el6.noarch
    ovirt-engine-setup-3.1.0-3.15.el6.noarch
    ovirt-engine-backend-3.1.0-3.15.el6.noarch
    ovirt-engine-config-3.1.0-3.15.el6.noarch
    ovirt-engine-dbscripts-3.1.0-3.15.el6.noarch
    ovirt-engine-3.1.0-3.15.el6.noarch
    ovirt-engine-sdk-3.1.0.4-1.el6.noarch
    ovirt-iso-uploader-3.1.0-11alpha.el6.noarch
    ovirt-engine-genericapi-3.1.0-3.15.el6.noarch
    ovirt-engine-webadmin-portal-3.1.0-3.15.el6.noarch
    ovirt-engine-notification-service-3.1.0-3.15.el6.noarch
    [root@testing ~]# netstat -atnp
    Active Internet connections (servers and established)
    Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
    tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 859/rpcbind
    tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 1103/engine-service
    tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1071/sshd
    tcp 0 0 0.0.0.0:662 0.0.0.0:* LISTEN 884/rpc.statd
    tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 1094/postmaster
    tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1188/master
    tcp 0 0 0.0.0.0:8443 0.0.0.0:* LISTEN 1103/engine-service
    tcp 0 0 0.0.0.0:892 0.0.0.0:* LISTEN 1045/rpc.mountd
    tcp 0 0 0.0.0.0:4447 0.0.0.0:* LISTEN 1103/engine-service
    tcp 0 0 0.0.0.0:2049 0.0.0.0:* LISTEN -
    tcp 0 0 0.0.0.0:32803 0.0.0.0:* LISTEN -
    tcp 0 0 0.0.0.0:8009 0.0.0.0:* LISTEN 1103/engine-service
    tcp 0 0 127.0.0.1:5432 127.0.0.1:57484 ESTABLISHED 1483/postgres
    tcp 0 0 127.0.0.1:57484 127.0.0.1:5432 ESTABLISHED 1103/engine-service
    tcp 0 0 127.0.0.1:46795 127.0.0.1:8009 ESTABLISHED 1212/httpd
    tcp 0 0 192.168.1.98:22 192.168.1.100:42485 ESTABLISHED 1386/sshd
    tcp 0 0 127.0.0.1:46793 127.0.0.1:8009 ESTABLISHED 1211/httpd
    tcp 0 0 127.0.0.1:8009 127.0.0.1:46795 ESTABLISHED 1103/engine-service
    tcp 0 0 127.0.0.1:45694 127.0.0.1:47501 TIME_WAIT -
    tcp 0 0 127.0.0.1:8009 127.0.0.1:46793 ESTABLISHED 1103/engine-service
    tcp 0 0 :::111 :::* LISTEN 859/rpcbind
    tcp 0 0 :::80 :::* LISTEN 1205/httpd
    tcp 0 0 :::22 :::* LISTEN 1071/sshd
    tcp 0 0 :::662 :::* LISTEN 884/rpc.statd
    tcp 0 0 ::1:5432 :::* LISTEN 1094/postmaster
    tcp 0 0 ::1:25 :::* LISTEN 1188/master
    tcp 0 0 :::443 :::* LISTEN 1205/httpd
    tcp 0 0 :::892 :::* LISTEN 1045/rpc.mountd
    tcp 0 0 :::2049 :::* LISTEN -
    tcp 0 0 :::32803 :::* LISTEN -
    tcp 0 0 ::ffff:192.168.1.98:80 ::ffff:192.168.1.100:40401 TIME_WAIT -
    [root@testing ~]#

  2. Something happened with JBoss:

    2012-08-04 11:01:52,114 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-2) MSC00001: Failed to start service jboss.deployment.subunit."engine.ear"."engine-scheduler.jar".STRUCTURE: org.jboss.msc.service.StartException in service jboss.deployment.subunit."engine.ear"."engine-scheduler.jar".STRUCTURE: Failed to start service
    at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1767) [jboss-msc-1.0.2.GA.jar:1.0.2.GA]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) [rt.jar:1.6.0_24]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) [rt.jar:1.6.0_24]
    at java.lang.Thread.run(Thread.java:679) [rt.jar:1.6.0_24]
    Caused by: java.lang.IllegalStateException: Container is down
    at org.jboss.msc.service.ServiceContainerImpl.install(ServiceContainerImpl.java:508) [jboss-msc-1.0.2.GA.jar:1.0.2.GA]
    at org.jboss.msc.service.ServiceTargetImpl.install(ServiceTargetImpl.java:201) [jboss-msc-1.0.2.GA.jar:1.0.2.GA]
    at org.jboss.msc.service.ServiceControllerImpl$ChildServiceTarget.install(ServiceControllerImpl.java:2228) [jboss-msc-1.0.2.GA.jar:1.0.2.GA]
    at org.jboss.msc.service.ServiceTargetImpl.install(ServiceTargetImpl.java:201) [jboss-msc-1.0.2.GA.jar:1.0.2.GA]
    at org.jboss.msc.service.ServiceControllerImpl$ChildServiceTarget.install(ServiceControllerImpl.java:2228) [jboss-msc-1.0.2.GA.jar:1.0.2.GA]
    at org.jboss.msc.service.ServiceBuilderImpl.install(ServiceBuilderImpl.java:307) [jboss-msc-1.0.2.GA.jar:1.0.2.GA]
    at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:150) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1811) [jboss-msc-1.0.2.GA.jar:1.0.2.GA]
    at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1746) [jboss-msc-1.0.2.GA.jar:1.0.2.GA]
    ... 3 more

    2012-08-04 11:01:53,290 INFO [org.apache.coyote.ajp.AjpProtocol] (MSC service thread 1-2) Pausing Coyote AJP/1.3 on ajp-0.0.0.0-8009
    2012-08-04 11:01:53,349 INFO [org.apache.coyote.ajp.AjpProtocol] (MSC service thread 1-2) Stopping Coyote AJP/1.3 on ajp-0.0.0.0-8009

      1. Thank you!!!

        I need an operational oVirt to show at an event next week..
        I think with my laptop (i5 with 8GB) I can create a manager with two nodes; I want to show live migration and other features.. There will be other products there (VMware and Microsoft..) and I want to show the face of the open source solution!!

      2. Hi Rino: On a single laptop, you (probably) won’t be able to put together an oVirt setup capable of live migration. Each oVirt host that runs VMs must have the hardware extensions for virtualization. It’s possible to set up an oVirt install that’s all virtual: you can add the hosts and set up storage, and show what everything looks like, but not run VMs. I say “probably” because it’s *possible* to do this with nested KVM, but in my experience, that’s tough to set up and very unstable.

      3. I can install on a desktop with just Local Host Data (no Gluster or POSIX fs).
        And it works.. at least I can show it.. 🙂
        But on my laptop (i5 with 8GB) I’m still having issues with a lot of random errors..
        But reading your post I learned a lot 🙂
        Maybe I will use your video to show to the public 🙂
        Thanks
