Installing Ubuntu on Hyper-V with Nested Virtualization

Hello, this post is a follow-up to my previous post: Hyper-V Nested Virtualization Setup


    In the last post, we left off having completed the Hyper-V environment configuration for passing the CPU virtualization extensions through to the guest operating system. Now we will install Ubuntu 22.04.3 LTS and enable virtualization on it so that we can host QEMU network images from within our Windows desktop instance.

    Ubuntu Install in Hyper-V Environment

    Picking up where we left off, we are ready to load the virtual machine's operating system, so let's see what that looks like. First, open 'Hyper-V Manager' with administrator privileges if it isn't open already. Next, select the server VM and click 'Start' so that we can complete the installation dialogue.

    Starting the virtual machine for the first time!

    Once you have started the boot of the VM, we need to 'Connect' to the virtual machine so that we can monitor the bootup and installation process and provide the necessary inputs for a successful completion.
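    If you prefer the command line, the same start-and-connect steps can be run from an elevated PowerShell session on the host. This is just an equivalent sketch; swap 'Ubuntu223' for your own VM name.

    # Boot the VM, then open the console viewer to watch the installer
    Start-VM -Name "Ubuntu223"
    vmconnect.exe localhost "Ubuntu223"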

    Once connected, you should see the “GNU GRUB” boot menu for Ubuntu, and you can simply select the default option to try or install the operating system.

    Provided you have 'secure boot' and a 'TPM' enabled, and you created the virtual machine as a 'Generation 2' VM, you may receive an error message when you connect to the virtual machine. This means your UEFI settings for the DVD-ROM are not quite correct.

    The following images illustrate the error that presents and how to resolve it once you have 'shut off' the VM instance and it is no longer actively running. If you see the 'Start PXE over IPv4.' splash screen, the DVD-ROM UEFI settings in the VM definitely need correcting.

    As you can see, the error message is due to the certificate template association being set to 'Microsoft Windows' by default. It needs to be 'Microsoft UEFI Certificate Authority' for the secure boot options to work with Ubuntu's boot loader. Functional secure boot is also a minimum requirement for operating the new JUNOS images and those of other vendors. I also pass TPM functionality through so that it does not need to be virtualized at the higher layers.
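    The images above walk through the fix in the Hyper-V Manager GUI, but the same changes can be made from an elevated PowerShell session while the VM is off. This is a sketch of the equivalent commands; swap 'Ubuntu223' for your own VM name.

    # Switch the Secure Boot template from the Windows default to the
    # Microsoft UEFI Certificate Authority template, which trusts Ubuntu's boot loader
    Set-VMFirmware -VMName "Ubuntu223" -SecureBootTemplate "MicrosoftUEFICertificateAuthority"

    # If you hit the 'Start PXE over IPv4.' screen, move the DVD drive to the front of the boot order
    Set-VMFirmware -VMName "Ubuntu223" -FirstBootDevice (Get-VMDvdDrive -VMName "Ubuntu223")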

    Now we are going through the Ubuntu installation process. There are countless ways to configure your operating system during this setup; I am just giving a mostly generic example of how this can be done. Only you will know the requirements and expectations of your environment, so please be sure to deviate where necessary for your specific situation.

    Choose the options that are appropriate for your application, and once you are through with the installation process, open a terminal window. My personal preference is to perform all updates and upgrades via the CLI terminal session, as I find it more expedient than using the GUI, but to each their own. Once you have upgraded the software you desire, it is time to get virtualization going on this Ubuntu instance.
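    For reference, this is the usual pair of commands for bringing a fresh installation up to date from the terminal:

    # Refresh the package index, then apply all available upgrades
    sudo apt update && sudo apt upgrade -y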

    Verify Guest OS Sees vCPU

    First, let’s verify that the Ubuntu installation sees the processors.

    # Count the processor entries that advertise hardware virtualization support (Intel VT-x 'vmx' or AMD-V 'svm')
    egrep -c '(vmx|svm)' /proc/cpuinfo

    If this returns a '0', you should double-check that you have completed the PowerShell command, referencing the virtual machine name, that passes the virtualization extensions through to the guest. The following code box has the command for getting this done. Before running it, please make sure that your VM instance is in an OFF/shut-down state, then open a PowerShell instance with administrator privileges on your host system. Be sure to replace the 'Ubuntu223' VMName in the command string with your appropriate VM name.

    Allowing CPU control pass-through to guest OS

    Set-VMProcessor -VMName "Ubuntu223" -ExposeVirtualizationExtensions $true
    Enable nested virtualization in your host system via PowerShell session with elevated privileges

    Once you have completed this, you can spin the VM back up and re-run the egrep command against '/proc/cpuinfo' to verify that you now see the correct vCPU count, confirming the virtualization extensions have been passed through to the guest OS.
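    If you want a quick sanity check on that number, compare the flag count against the total vCPUs the guest reports; on a typical setup the two should match:

    # The flag count from egrep should equal the vCPU total reported by nproc
    egrep -c '(vmx|svm)' /proc/cpuinfo
    nproc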

    KVM Installation in the Guest OS

    Once you have verified access to the virtual CPUs, it is time to get KVM functionality going and verified on the guest operating system. First, we will verify that the system is ready to perform virtualization by running the 'cpu-checker' utility, which will need to be installed first. Then, once 'cpu-checker' has confirmed virtualization functionality, we will install the virtualization libraries. So let's get started on verifying we are ready for virtualization.

    # The following command will install the cpu-checker package.
    sudo apt install -y cpu-checker

    # Once installed, run this command to verify virtualization functionality.
    kvm-ok

    # The expected output is:
    INFO: /dev/kvm exists
    KVM acceleration can be used

    Install KVM QEMU on Ubuntu

    And now for where the magic happens! Let's install the virtualization libraries and management tools that most Linux-based virtualization platforms require. Within your guest OS, open a terminal session and run the following command string.

    # The following string installs all of the packages at once; the -y flag auto-accepts the prompts.
    sudo apt install -y qemu-kvm virt-manager libvirt-daemon-system virtinst libvirt-clients bridge-utils

    That command string installs six different libraries/modules that are necessary for performing, and for serving/hosting, virtualization. So let's break these down into the individual modules and put a purpose to each one.

    qemu-kvm – An open-source emulator and virtualization package that provides hardware emulation; see the module's documentation for the full details.

    virt-manager – A GTK-based graphical interface for managing virtual machines via the 'libvirt' daemon, meaning this module lets you manage and visualize your virtual machines through a GUI within the guest OS. If you are performing a 'headless' install or do not intend to use the GUI management application, this module does not need to be installed.

    libvirt-daemon-system – A package that provides the configuration files required to run the libvirt daemon. The libvirt libraries are extensively configurable, so if you have special needs, check the module documentation to make sure you are using the best configuration for your use case.

    virtinst – A set of command-line utilities for provisioning and modifying virtual machines. It supports both text-based and graphical installations, using VNC or SDL graphics or a text serial console.

    libvirt-clients – A set of client-side libraries and APIs for managing and controlling virtual machines & hypervisors from the command line.

    bridge-utils – A set of tools for creating and managing bridge devices. At this point, your guest OS should be connected to the default hypervisor bridge for internet access. It is advisable to set up a secondary bridge, separate from the NAT bridge, for management or inter-VM communication; a sketch follows after this list.
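    With the packages in place, a quick post-install check is worthwhile. The following is a minimal sketch assuming the default Ubuntu package configuration; the secondary-bridge portion uses hypothetical names ('br1' for the bridge, 'eth1' for an assumed second NIC), so substitute your own.

    # Verify the libvirt daemon is active (Ubuntu enables it automatically on install)
    systemctl status libvirtd --no-pager

    # Allow your user to manage VMs without sudo; log out and back in to apply
    sudo usermod -aG libvirt,kvm "$USER"

    # An empty table here confirms the client tools can reach the daemon
    virsh list --all

    # Show current bridge devices; libvirt's default NAT network appears as 'virbr0'
    brctl show

    # Hypothetical secondary bridge for management/inter-VM traffic (not persistent
    # across reboots; use netplan or similar for a permanent configuration)
    sudo brctl addbr br1
    sudo brctl addif br1 eth1
    sudo ip link set br1 up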

    Phase 2 Completed!

    Alright, that pretty much wraps up phase 2 of running QEMU VMs within a Windows environment. At this point, the example Ubuntu guest operating system is capable of running QEMU virtual machines. It can run Docker if desired, host a containerlab instance or GNS3, and run some vendor images natively, which is what I will go over in my next blog post: setting up a JUNOS 'development' or 'lite-mode' MX QEMU image within our nested virtualization, which anyone within the allowed countries list can download from the Juniper.net website and try for themselves.
