Open your virtual datacenter

Start using oVirt now »

Manage your virtualized networks

oVirt manages virtual machines, storage and virtualized networks.

Easy to use web interface

oVirt is a virtualization platform with an easy-to-use web interface.

KVM based virtualization management

oVirt is powered by the Open Source you know - KVM on Linux.

Current release: 4.0.0 (2016-06-23)

Read the release notes Get started with oVirt 4.0.0

oVirt News

oVirt 4.0 is released!

On behalf of the oVirt community, I am pleased to announce a major new release today, oVirt 4.0. This latest community release, now ready for download, has several new features, including a brand-new dashboard management and monitoring system; enhanced container support; faster live migration speeds; and a new direct-to-disk image uploader.

As the upstream development project for Red Hat Enterprise Virtualization, oVirt’s integrated virtualization enables cost savings for enterprises without the need to re-develop applications to conform to cloud platforms' APIs. oVirt also shares services with Red Hat’s cloud solutions including RDO, Red Hat's community OpenStack distribution, as well as stronger container support that integrates tools from Project Atomic, Red Hat's robust container management tool set.

New features highlighted in this release are:

New Administration Portal: Our UX team has created a stunning new dashboard to monitor and control your datacenter, based on feedback from oVirt users. With it, administrators can assess the health of their datacenters and virtual machines at a glance.

Improved Live-Migration Performance: Much faster host-to-host migration speeds, with policies that are now completely customizable.

Improved Image Features: In previous versions of oVirt, VM images needed to be uploaded via command line. With oVirt 4.0, these images can now be selected and uploaded to the oVirt instance right from within the web-based portal. oVirt 4.0 also now enables importing libvirt VMs using the virt-v2v tool.

Container Support: Support for Atomic guest OS machines is included, with reports available about containers running in them.

New oVirt Node: The just-enough operating system version of oVirt has been revamped and includes a Cockpit-based management system.

Developer Improvements: The new API v4 is cleaner and provides improved performance, and a Ruby SDK is now included in oVirt.

Other exciting new features in this release of oVirt include the capability to deploy additional hosts for Hosted Engine directly from the web portal, as well as improvements to Gluster hyper-converged setups, per-interface MAC anti-spoofing, and Fibre Channel over Ethernet (FCoE) support via a VDSM hook.

A complete list of oVirt 4.0 features is available in the oVirt 4.0 Release Notes. oVirt 4.0 is a big step forward in improving the virtual datacenter management experience: it makes what is already available in oVirt faster and more powerful than ever, while adding improvements that will take the platform to the next level of virtual machine management in an increasingly DevOps-oriented IT environment.

Download the latest version of oVirt today!

View article »

Modifying oVirt-generated ifcfg files

oVirt uses a bridge-based setup to configure networks on the managed hosts. The setup process generates and maintains network interface configuration files (ifcfg files), which define the network devices used by oVirt. Should an outside party change any of these files, oVirt will try to restore them to the desired state, to keep the network configuration intact. There are, however, situations in which a user wants to intentionally introduce permanent changes into some of these files and prevent oVirt from overwriting them. A VDSM hook script can be used to do so.

Let's look at an example where the user wants to add the following entries to the 'ens11' network interface:

USERCTL=yes
ETHTOOL_OPTS="autoneg on speed 1000 duplex full"

A VDSM hook invoked before ifcfg file modification can be used to accomplish this. The hook script should be placed inside the "/usr/libexec/vdsm/hooks/before_ifcfg_write/" directory on the VDSM host, and VDSM must have execute permission for it. Every time an ifcfg configuration is changed, VDSM checks this directory and executes each script it finds there. The script receives a JSON dictionary as input. The dictionary contains two elements:

- ifcfg_file - the full path of the ifcfg file to be written
- config - the contents of the ifcfg file to be written

For example:

{
    "config": "DEVICE=ens13\nHWADDR=52:54:00:d1:3d:c8\nBRIDGE=z\nONBOOT=yes\nMTU=1500\nNM_CONTROLLED=no\nIPV6INIT=no\n",
    "ifcfg_file": "/etc/sysconfig/network-scripts/ifcfg-ens13"
}

Modified ifcfg file contents (under the "config" entry) can be returned, and will be used by VDSM as the new ifcfg file content. If nothing is returned, VDSM will use the unmodified content.
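The round trip can be simulated outside of VDSM. The sketch below is a hypothetical illustration (the helper name before_ifcfg_write and the sample values are made up; only the two-key dictionary shape comes from the hook contract above):

```python
import json

def before_ifcfg_write(hook_data):
    """Apply the same logic a hook script would: append entries for ens11."""
    if "ens11" in hook_data["ifcfg_file"]:
        hook_data["config"] += (
            "USERCTL=yes\n"
            'ETHTOOL_OPTS="autoneg on speed 1000 duplex full"\n'
        )
    return hook_data

# The JSON dictionary VDSM would feed the hook on stdin:
payload = json.dumps({
    "ifcfg_file": "/etc/sysconfig/network-scripts/ifcfg-ens11",
    "config": "DEVICE=ens11\nONBOOT=yes\n",
})

result = before_ifcfg_write(json.loads(payload))
print(result["config"])
```

The original DEVICE and ONBOOT lines are preserved and the two new entries are appended, which is exactly what the real hook does before handing the dictionary back to VDSM.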

A sample hook script will look as follows:


import hooking

# Read the JSON dictionary passed by VDSM on stdin
hook_data = hooking.read_json()

ifcfg_file = hook_data['ifcfg_file']
config_data = hook_data['config']

# Adding to the ens11 ifcfg file:
# USERCTL=yes and ETHTOOL_OPTS="autoneg on speed 1000 duplex full"
if 'ens11' in ifcfg_file:
    config_data += "USERCTL=yes\nETHTOOL_OPTS=\"autoneg on speed 1000 duplex full\"\n"
    hook_data['config'] = config_data

# Hand the (possibly modified) dictionary back to VDSM
hooking.write_json(hook_data)

Following is a description of the hook script.

Reading in the JSON data passed by VDSM:

hook_data = hooking.read_json()

Getting the value of the new ifcfg file content:

config_data = hook_data['config']

Getting the name of the ifcfg file which will be modified:

ifcfg_file = hook_data['ifcfg_file']

Modify the content of the ifcfg file:

config_data += "USERCTL=yes\nETHTOOL_OPTS=\"autoneg on speed 1000 duplex full\"\n"
hook_data['config'] = config_data

Write the content of the ifcfg file back to VDSM:

hooking.write_json(hook_data)

View article »

Advanced users authentication, using Kerberos, CAS SSO and Active Directory

I have an environment where hard-coded passwords are avoided; we prefer to use Kerberos. We also provide SSO for the web UI using CAS, and we use Active Directory as the user backend.

So I wanted an oVirt installation that uses Kerberos for API authentication. For the web UI, Kerberos is not always the best solution, so I wanted to integrate it with our CAS.

The Apache part was easy to set up. It needs an external module, auth_cas_module, which can be found at Apache's CAS module project. It builds without special tweaks with

make install

I will show only a subset of the whole Apache setup, and only the authentication-related parts:

# The CAS modules
LoadModule authz_user_module      /usr/lib64/httpd/modules/
# Needed because auth_cas_module forgets to link openssl
LoadModule ssl_module            /usr/lib64/httpd/modules/
LoadModule auth_cas_module       /usr/lib64/httpd/modules/

# For the kerberos authentication on the API
LoadModule auth_gssapi_module    /usr/lib64/httpd/modules/
LoadModule session_module        /usr/lib64/httpd/modules/
LoadModule session_cookie_module /usr/lib64/httpd/modules/

CASLoginURL https://sso/cas/login
CASValidateSAML On
CASValidateURL https://sso/cas/samlValidate

<VirtualHost *:443>

    RequestHeader unset X-Remote-User early

    <LocationMatch ^/api($|/)>
        RequestHeader set X-Remote-User %{REMOTE_USER}s

        RewriteEngine on
        RewriteCond %{LA-U:REMOTE_USER} ^(.*@DOMAIN)$
        RewriteRule ^(.*)$ - [L,P,E=REMOTE_USER:%1]

        AuthType GSSAPI
        AuthName "GSSAPI Single Sign On Login"
        GssapiCredStore keytab:.../httpd.keytab
        Require valid-user
        GssapiUseSessions On
        Session On
        SessionCookieName ovirt_gssapi_session path=/private;httponly;secure;
    </LocationMatch>

    <LocationMatch ^/(ovirt-engine($|/)|RHEVManagerWeb/|OvirtEngineWeb/|ca.crt$|engine.ssh.key.txt$|rhevm.ssh.key.txt$)>
        AuthType CAS
        Require valid-user
        CASAuthNHeader X-Remote-User
    </LocationMatch>

</VirtualHost>

The file httpd.keytab contains the Kerberos keys for the HTTP service principal. In my setup, the realm used for Linux machines is different from the Active Directory domain, and a trust was established between them, so the keytab is created using MIT Kerberos.

It was generated using the following kadmin commands:

addprinc -randkey HTTP/VHOST@REALM
addprinc -randkey HTTP/FQDN@REALM
ktadd -k .../httpd.keytab -e aes128-cts-hmac-sha1-96:normal -e aes256-cts-hmac-sha1-96:normal HTTP/VHOST@REALM
ktadd -k .../httpd.keytab -e aes128-cts-hmac-sha1-96:normal -e aes256-cts-hmac-sha1-96:normal HTTP/FQDN@REALM

Kerberos can be surprising when resolving principals, and different HTTP clients use different methods. Some request a ticket directly for the name in the Host header; others use the reverse DNS of the IP used for the connection. So if Apache is configured with a virtual host, principals for both the virtual host and the FQDN pointed to by the reverse of the IP should be created and added to the keytab.

The authn file in /etc/ovirt-engine/extensions.d/ is:

ovirt.engine.extension.name = apachesso-authn
ovirt.engine.extension.bindings.method = jbossmodule
ovirt.engine.extension.binding.jbossmodule.module =
ovirt.engine.extension.binding.jbossmodule.class =
ovirt.engine.extension.provides =
ovirt.engine.aaa.authn.profile.name = apachesso
ovirt.engine.aaa.authn.authz.plugin = DOMAIN-authz
config.artifact.name = HEADER
config.artifact.arg = X-Remote-User

And the authz file in /etc/ovirt-engine/extensions.d/ is:

ovirt.engine.extension.name = DOMAIN-authz
ovirt.engine.extension.bindings.method = jbossmodule
ovirt.engine.extension.binding.jbossmodule.module =
ovirt.engine.extension.binding.jbossmodule.class =
ovirt.engine.extension.provides =
config.profile.file.1 = ..../aaa/

I had some difficulties with the AD backend. A straightforward solution would have been:

include = <>

vars.domain = DOMAIN
vars.user = BINDDN
vars.password = BINDPWD
vars.forest =

pool.default.auth.simple.bindDN = ${global:vars.user}
pool.default.auth.simple.password = ${global:vars.password}
pool.default.serverset.type = srvrecord
pool.default.serverset.srvrecord.domain = ${global:vars.domain}

pool.default.ssl.startTLS = true
pool.default.ssl.truststore.file = .../domain.jks
pool.default.ssl.truststore.password = 
# Only TLSv1.2 is secure nowadays
pool.default.ssl.startTLSProtocol = TLSv1.2

# Long timeouts should be avoided
pool.default.connection-options.connectTimeoutMillis = 500

But it fails. We have a special setup with about 100 domain controllers, and only two of them can be reached from the oVirt engine. So my first try was to define them directly in the configuration file:

pool.default.serverset.type = failover
pool.default.serverset.failover.1.server =
pool.default.serverset.failover.2.server =

But that fails too: ovirt-engine was still trying a lot of unreachable domain controllers. After some digging, I found that another part of the LDAP extension uses a different serverset; I don't know why it doesn't reuse the default pool. It's called pool.default.dc-resolve (it should be called pool.dc-resolve, as it's not the default pool but a custom one), so I added to my configuration:

pool.default.dc-resolve.serverset.type = failover
pool.default.dc-resolve.serverset.failover.1.server =
pool.default.dc-resolve.serverset.failover.2.server =

It worked well, but there is a better solution, as Ondra Machacek pointed out to me. In Active Directory, there is something called a "site", with a subset of all the domain controllers in it. It can be found under CN=Sites,CN=Configuration,DC=DOMAIN,....

To list them:

ldapsearch -H ldap://somedc -b CN=Sites,CN=Configuration,DC=DOMAIN -s one -o ldif-wrap=no cn

The information to write down is the cn returned.

You get a list of all sites; just pick the right one, remove all the serverset configuration, and add:

pool.default.serverset.srvrecord.domain-conversion.type = regex
pool.default.serverset.srvrecord.domain-conversion.regex.pattern = ^(?<domain>.*)$
pool.default.serverset.srvrecord.domain-conversion.regex.replacement = GOOD_SITE._sites.${domain}

The name GOOD_SITE._sites.${domain} doesn't exist in the DNS by itself, so to check that your regex is good, query the SRV record instead:

dig +short _ldap._tcp.GOOD_SITE._sites.${domain} srv

It should return only reachable domain controllers.
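The domain-conversion above is a plain regex substitution applied to the SRV lookup domain. As a sanity check of the pattern, the same rewrite can be tried in Python (a hypothetical illustration, not oVirt code; ${domain} becomes a named backreference):

```python
import re

# Mirrors pool.default.serverset.srvrecord.domain-conversion.regex.pattern
pattern = r"^(?P<domain>.*)$"
# Mirrors the replacement, with ${domain} as a Python named backreference
replacement = r"GOOD_SITE._sites.\g<domain>"

converted = re.sub(pattern, replacement, "example.com")
print(converted)  # GOOD_SITE._sites.example.com
```

Whatever domain the extension would have queried is wrapped into the site-scoped name, which is then used for the _ldap._tcp SRV lookup shown above.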

So the final file in /etc/ovirt-engine/aaa/ was:

include = <>

vars.domain = DOMAIN
vars.user = BINDDN
vars.password = BINDPWD
vars.forest =

pool.default.auth.simple.bindDN = ${global:vars.user}
pool.default.auth.simple.password = ${global:vars.password}
pool.default.serverset.type = srvrecord
pool.default.serverset.srvrecord.domain = ${global:vars.domain}

pool.default.ssl.startTLS = true
pool.default.ssl.truststore.file = .../domain.jks
pool.default.ssl.truststore.password = 
pool.default.ssl.startTLSProtocol = TLSv1.2

pool.default.connection-options.connectTimeoutMillis = 500

pool.default.serverset.srvrecord.domain-conversion.type = regex
pool.default.serverset.srvrecord.domain-conversion.regex.pattern = ^(?<domain>.*)$
pool.default.serverset.srvrecord.domain-conversion.regex.replacement = GOOD_SITE._sites.${domain}

With this setup, my Python client can connect to ovirt-engine using a Kerberos ticket, and web users are authenticated using CAS. And there is no need to duplicate the user base.

View article »

Up and Running with oVirt 3.6 and Gluster Storage

In November, version 3.6 of oVirt, the open source virtualization management system, hit FTP mirrors featuring a whole slate of fixes and enhancements, including support for storing oVirt's self hosted management engine on a Gluster volume.

This expanded Gluster support, along with the new "arbiter volume" feature added in Gluster 3.7, has allowed me to simplify (somewhat) the converged oVirt+Gluster installation that's powered my test lab for the past few years.

Read on to learn about my favored way of running oVirt, using a trio of servers to provide for the system's virtualization and storage needs, in a configuration that allows you to take one of the three hosts down at a time without disrupting your running VMs.


I want to stress that this converged virtualization and storage scenario is a bleeding-edge configuration. Many of the ways you might use oVirt and Gluster are available in commercially-supported configurations using RHEV and RHS, but at this time, this oVirt+Gluster mashup isn't one of them. What's more, this configuration is not "supported" by the oVirt project proper, a state that should change somewhat once this Self Hosted Engine Hyper Converged Gluster Support feature lands in oVirt.

If you're looking instead for a simpler, single-machine option for trying out oVirt, here are a pair of options:

  • oVirt Live ISO: A LiveCD image that you can burn onto a blank CD or copy onto a USB stick to boot from and run oVirt. This is probably the fastest way to get up and running, but once you're up, this is definitely a low-performance option, and not suitable for extended use or expansion.

  • oVirt All in One plugin: Run the oVirt management server and virtualization host components on a single machine with local storage. The setup steps for AIO haven't changed much since I wrote about it two years ago. This approach isn't too bad if you have limited hardware and don't mind bringing the whole thing down for maintenance, but oVirt really shines brightest with a cluster of virtualization hosts and some sort of shared storage.

Read More »

My Devconf.CZ 2016 experience

On the first weekend of February I had the pleasure of attending DevConf.CZ 2016, which took place in the wonderful city of Brno, Czech Republic. It's a relaxed, young and vibrant conference and it was fun and rewarding from my perspective. Here's a disorganized personal summary:

I'd like to thank our project's current community manager Mikey Ariel, and former community manager Brian Proffitt for once again, just one week after FOSDEM, leading the effort of representing the oVirt project and community in this event.


View article »

Welcome to the new website!

As part of our efforts to upgrade the website and improve the community experience, we migrated the oVirt website from a MediaWiki site to a static site, authored in Markdown and published with Middleman. This was a major project that took more than 6 months and involved many contributors from all aspects of the project.

I'd like to take this opportunity to thank all the people who were involved with this migration, from content reviewers to UX designers and Website admins who gave their time and brain power to make this happen.

The old MediaWiki site is still available in read-only mode, and will be taken offline once we fix some pending issues, including handling PDF files and such.

What's new?

The new Website is full of improvements and enhancements, check out these highlights:

  • Source content is now formatted in Markdown instead of MediaWiki. This means that you can create and edit documentation, blog posts, and feature pages with the same Markdown syntax you know.
  • The Website is deployed with Middleman and stored on GitHub. This means that you can make changes to content with the same GitHub contribution workflow that you know (fork, clone, edit, commit, submit pull request). We even have an "Edit this page on GitHub" link at the bottom of every page!
  • New layout and design, from breadcrumbs to sidebars and an upgraded landing page.
  • Automatic redirects from the old MediaWiki site. This means that if the wiki page exists in the new website, previously-released URLs will redirect to that page. If the page was removed, the Search page will open with the page title auto-filled in the search box.
  • Hierarchical content structure. This means that instead of flat Wiki-style files, the deployed Website reflects an organized source repo with content sorted into directories and sub-directories.
  • Official oVirt blog! This first post marks the beginning of our new blog, and we welcome contributions. This means that if you solved a problem with oVirt, want to share your oVirt story, or describe a cool integration, you can submit a blog post and we will provide editorial reviews and help publish your posts.
  • Standardized contribution process. The GitHub repo now includes a contribution guide that you can use to learn how to add and edit content on the website. We welcome pull requests!

Known Issues

Despite our best efforts, there are still a few kinks with the new website that you should be aware of:

  • Attempting to navigate to the site without the www. prefix leads to a redirect loop. We have a ticket open with OpenShift, our hosting service, to fix this.
  • Only http is available. We also have a ticket with OpenShift to add SSL and enable https.
  • Home page and Download page are still being upgraded by our UX team; expect some cool new changes soon!
  • Feature pages look-and-feel is still under construction. You can still edit and push feature pages as usual.

What's Next

Even though the Website is live, the work is hardly over. We'd like to ask for your help in:

  • Reviewing content for anything obsolete or outdated; each page in the new website includes a header toolbar with metadata from the original wiki page for your convenience
  • Submitting blog posts or any other content that you wish to share with the oVirt community
  • Reporting bugs and proposing enhancements, for example broken links or missing pages

We hope you will enjoy the new oVirt Website, looking forward to your feedback and contributions!

View article »
See all blog posts

Upcoming events

Case Study

Universidad de Sevilla

When one of the largest universities in Spain needed a virtualization solution to host their virtual desktop interface program, UDS Enterprise helped the institution find a virtualization solution that delivered superior flexibility at a much lower cost than proprietary solutions.

That solution would be oVirt. Today, more than 3,000 students use this virtual desktop infrastructure, with the prospect of the rest of the student body participating as the program grows.

Read the full Universidad de Sevilla case study

View all case studies

Packed with Features

  • Choice of stand-alone Hypervisor or install-on-top of your existing Linux installation
  • High availability
  • Live migration
  • Load balancing
  • Web-based management interface
  • Self-hosted engine
  • iSCSI, FC, NFS, and local storage
  • Enhanced security: SELinux and Mandatory Access Control for VMs and hypervisor
  • Scalability: up to 64 vCPU and 2TB vRAM per guest
  • Memory overcommit support (Kernel Samepage Merging)
  • Developer SDK for ovirt-engine, written in Python

View all features…

Community is Key

Everyone is encouraged to join the oVirt community, and help us bring our open source software to virtual datacenters worldwide.

The oVirt community gathers in many places around the globe. Keep track of the latest happenings in the oVirt community, including new release announcements, and share your thoughts and links on virtualization-related topics on these social media channels: