Deploying IP Fabric Virtual Machine (VM)

All VM images are available at https://releases.ipfabric.io/images/. Access is restricted to registered customers only. Please contact our sales representative if you are interested in a trial of IP Fabric.

Important

Please remember that IP Fabric uses CLI access (SSH or Telnet) to connect to devices for collecting data. It’s important to place the VM in the proper network segment to prevent high ACL or firewall configuration overhead.

OVA Distribution Details

The appliance is built on top of Debian 12, which has been officially supported since ESXi version 8.0.

The minimal required Virtual Hardware Version is vmx-20, supported by ESXi 8.0, Fusion 13.6, Workstation Pro 17.x, and Workstation Player 17.x. For details, see the VMware KB articles 1003746 and 2007240.

This system type is required because we need the virtio/paravirtualized drivers for storage and network.

Note that we also have requirements for the processor itself – see Hardware Requirements. These cannot be expressed through the OVA image definition.

If you do not use the virtio/paravirtualized drivers for storage and network, performance will be degraded.

Setting Up the VM From Scratch – Importing VMDK Image

Importing the VMDK image is the recommended approach.

If you do not have access to an ESXi host for importing the OVA, you can import the disk (VMDK) and set up the machine manually. Ensure the following are configured correctly:

  • Virtual Hardware Version is at least vmx-20
  • virtio/paravirtualized drivers for storage and network

See the detailed instructions in Deploying VM on VMware ESXi Using VMDK Image.

Deploying Through vSphere or VxRail – Converting SHA256 OVA Image to SHA1

You may experience problems deploying through vSphere/VxRail, which refuses the SHA256 version of our OVA image. The image can be converted to SHA1 (see the VMware KB article below). However, when creating a virtual machine from the SHA1-based OVA image, you may experience problems with importing because of unsupported “hardware”. In this case, please see the next paragraph about deploying manually.

VMware’s KB article on converting OVA images: “The OVF package is invalid and cannot be deployed” error when deploying the OVA (2151537)
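For reference, a minimal sketch of the manual conversion, assuming the standard OVA layout (an OVA is a plain tar archive); the file names are placeholders, and the KB article above also documents using VMware OVF Tool:

    # Unpack the OVA (a tar archive containing the .ovf descriptor, .mf manifest, and disks):
    tar -xvf ipfabric.ova

    # Rewrite the manifest with SHA1 checksums instead of SHA256:
    for f in *.ovf *.vmdk; do
      printf 'SHA1(%s)= %s\n' "$f" "$(sha1sum "$f" | cut -d' ' -f1)"
    done > ipfabric.mf

    # Repack; the .ovf descriptor must be the first file in the archive:
    tar -cvf ipfabric-sha1.ova ipfabric.ovf ipfabric.mf ipfabric-disk1.vmdk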

Importing SHA1-Based OVA Image

Importing the SHA1-based OVA image might lead to unexpected results, such as wrong hardware assignments or degraded performance. During the import through vSphere, you may encounter the following warning:

operation not supported on this object

This warning indicates that vSphere itself cannot deploy the OVA image with the required hardware configuration. However, when the same OVA image is deployed directly through ESXi, no warnings appear while creating the virtual machine.

Deploying VM on VMware

Deploying VM on VMware vSphere Using OVA Image

  1. Deploy the OVA to your vSphere environment as described in Deploy an OVF or OVA Template.
  2. Edit VM settings and adjust according to your network size as described in the Operational Requirements section.
    1. Change CPU count.
    2. Change memory size.
    3. Add a new empty virtual disk or resize the main system disk.
  3. Power on the VM and complete IPF CLI Config.
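If you automate vSphere deployments, the OVA can also be deployed from the command line with the open-source govc tool; a minimal sketch, assuming GOVC_URL and credentials are exported in the environment, with a placeholder file name:

    # Deploy the OVA into the target vSphere environment:
    govc import.ova -name=IP_Fabric ipfabric-<x.y.z+build>.ova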

Invalid OVF checksum algorithm: SHA256

Importing the OVA on older vSphere/ESXi hosts may result in an error stating that the OVF checksum algorithm is invalid. Please refer to Deploying Through vSphere or VxRail – Converting SHA256 OVA Image to SHA1 on how to resolve the issue.

Deploying VM on VMware ESXi Using VMDK Image

  1. Go to https://releases.ipfabric.io/images/, select the folder with the highest version number, and download the ipfabric-<x.y.z+build>.vmdk file.

  2. Log in to the VMware ESXi web interface.

  3. Select Virtual Machines and click Create / Register VM.

  4. A New virtual machine dialog appears. In its 1st step Select creation type, select Create a new virtual machine:

    VMware ESXi - Create a new virtual machine

  5. In the 2nd step Select a name and guest OS:

    1. Specify the VM’s Name.

    2. In the Compatibility field, select at least ESXi 8.0 virtual machine, which corresponds to the Virtual Hardware Version 20 (vmx-20). Refer to Virtual machine hardware versions for mapping between Virtual Hardware Versions and ESXi versions.

    3. In the Guest OS family field, select Linux.

    4. In the Guest OS version field, select Debian GNU/Linux 12 (64-bit).

    Deploying VMDK on Older ESXi Systems

    It is possible to deploy VMDK on older ESXi systems that do not support Debian 12. Please choose the latest available Debian (64-bit) version. For the best experience, use a supported version of ESXi.

  6. In the 3rd step Select storage, keep the default settings.

  7. In the 4th step Customize settings:

    1. Remove the automatically added hard disk:

      VMware ESXi - Remove disk

    2. Change the SCSI Controller to VMware Paravirtual (PVSCSI) and the Adapter Type under Network Adapter to VMXNET 3:

      VMware ESXi - Change storage and network

    3. Click Add hard disk, select Existing hard disk, and import the downloaded ipfabric-<x.y.z+build>.vmdk file:

      VMware ESXi - Add disk

    Unsupported and/or invalid disk type while importing VMDK

    The disk format of the VMDK file may be incompatible with your ESXi version. To convert it to a compatible format, please refer to the VMware documentation for detailed instructions, or see the sketch below.
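    As a hedged example, one common way to convert the disk is cloning it with vmkfstools from an ESXi shell (this assumes SSH access to the host; file names are placeholders):

      # Clone the disk into a thin-provisioned format supported by this host:
      vmkfstools -i ipfabric-<x.y.z+build>.vmdk -d thin ipfabric-converted.vmdk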

  8. Power on the VM and complete IPF CLI Config.

Deploying VM on Hyper-V

The qcow2 disk image file can be converted to different formats. Using this method, we will create a VHDX usable on Microsoft Hyper-V and manually create a new VM.

  1. Download ipfabric-*.qcow2 from the official source.

  2. Convert the qcow2 image to VHDX. (Be sure to change the filenames in the command examples below.)

    • Windows instructions:
      1. Download the QEMU disk image utility for Windows.
      2. Unzip qemu-img-windows.
      3. Run: qemu-img.exe convert ipfabric-<*>.qcow2 -O vhdx -o subformat=dynamic ipfabric-<*>.vhdx
    • Linux instructions:
      1. Install qemu-utils: sudo apt install qemu-utils
      2. Run: qemu-img convert -f qcow2 -o subformat=dynamic -O vhdx ipfabric-<*>.qcow2 ipfabric-<*>.vhdx
  3. Create a new Hyper-V virtual machine and specify its Name and Location:

    Hyper-V - Specify Name and Location

  4. In the Specify Generation step, select Generation 1:

    Hyper-V - Specify Generation

  5. Assign memory. (Check requirements in the Operational Requirements section.)

    Hyper-V - Assign Memory

  6. Configure networking:

    Hyper-V - Configure Networking

  7. Connect a virtual hard disk:

    Hyper-V - Connect Virtual Hard Disk

  8. Verify the Summary and click Finish:

    Hyper-V - Summary

  9. Wait for the VM to be created.

  10. Edit the VM CPU settings. (Check requirements in the Operational Requirements section.)

    Hyper-V - VM Settings

    Hyper-V - VM Settings - Hardware - Processor

  11. Optionally, increase the disk size based on the Operational Requirements section.

    • Extend the system disk or add a new empty virtual disk if necessary.

  12. Close the VM Settings window.

  13. Start the VM.

Deploying VM on Nutanix

  1. Go to https://releases.ipfabric.io/images/, select the folder with the highest version number, and download the ipfabric-<x.y.z+build>.vmdk file.

  2. Import the ipfabric-<x.y.z+build>.vmdk file to the Nutanix hypervisor and follow Nutanix’s official documentation – Nutanix import OVA and Quick tip how to deploy a VM from OVF to AHV. (A command-line sketch follows this list.)

  3. Edit the VM hardware settings and adjust according to the network environment size. (Check requirements in the Operational Requirements section.)

    1. Change CPU count.
    2. Change memory size.
    3. Extend the system disk or add a new empty virtual disk if necessary.
  4. Power on the VM and complete IPF CLI Config.
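The disk can also be imported from the command line through the AHV Image Service. The sketch below is an assumption-level illustration only: it assumes CLI access to a CVM, the exact acli parameters may vary by AOS version, and the URL and container name are placeholders:

    # On a Nutanix CVM: create a disk image from the VMDK hosted on a reachable web server:
    acli image.create ipfabric source_url=http://<your-webserver>/ipfabric-<x.y.z+build>.vmdk \
      container=<container> image_type=kDiskImage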

Deploying VM on KVM

We currently have the limitation that drives need to be presented as /dev/sdX. Linux hypervisors usually use the virtio-blk driver, which is represented as /dev/vdX in the guest system. To overcome this limitation, use virtio-scsi as the drive controller, as reflected in the virt-install command below.

  1. Download qcow2 system disk to your KVM hypervisor.

  2. If necessary, resize the qcow2 disk so it corresponds to your network’s needs. Use the following command:

    qemu-img resize ipfabric-disk1.qcow2 100G # (up to 1000G for 20 000 devices)
    
  3. Deploy the VM to your hypervisor with the virt-install utility by issuing the following command (choose CPU and RAM size according to the size of your network):

    virt-install --name=IP_Fabric --disk path=<path to the disk>.qcow2,bus=scsi --controller type=scsi,model=virtio-scsi --graphics spice --vcpus=4 --ram=16384 --network bridge=virbr0 --import
    
    • This command deploys a new virtual machine named IP_Fabric with the qcow2 system disk attached through a virtio-scsi controller, 4 CPU cores, and 16 GB of RAM, and connects the VM to the network through the virbr0 bridge. (If your machine has a different bridge interface name, or you want to connect the VM directly through the device’s network card, change the --network parameter.)
    • This command also starts up the VM.
  4. Additionally, you can create and attach a new empty virtual disk if needed, as sketched below.
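A minimal sketch of creating and attaching such a disk, assuming the VM is named IP_Fabric as above (the path and the 200G size are placeholders):

    # Create a new empty qcow2 disk:
    qemu-img create -f qcow2 /var/lib/libvirt/images/ipfabric-data.qcow2 200G

    # Attach it to the VM as a second SCSI disk, persisting across reboots:
    virsh attach-disk IP_Fabric /var/lib/libvirt/images/ipfabric-data.qcow2 sdb \
      --driver qemu --subdriver qcow2 --targetbus scsi --persistent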

Deploying VM on VirtualBox

Warning

Deploying IP Fabric on VirtualBox is currently not officially supported – it is not tested, and we cannot guarantee that it will work.

  1. Download the OVA image.

  2. Import the OVA image via File → Import Appliance…:

    VirtualBox - Import Virtual Appliance

  3. In the next step of the Import Virtual Appliance guide:

    1. Set CPU and RAM as per the hardware requirements for your use-case.

    2. Set the Network Adapter to Paravirtualized Network (virtio-net).

    3. Keep the Import hard drives as VDI option checked for importing the disk image in the default VirtualBox format. (Otherwise, the disk image will be imported as VMDK, the default format of VMware.)

    VirtualBox - Import Virtual Appliance - Appliance Settings

  4. Right-click the newly created virtual machine and select its Settings…

  5. In the System section, select ICH9 as the Chipset:

    VirtualBox - VM Settings - System

  6. In the Display section, select VMSVGA as the Graphics Controller:

    VirtualBox - VM Settings - Display

    • Or set it to what VirtualBox suggests when an invalid Graphics Controller is selected:

    VirtualBox - VM Settings - Display - Invalid settings detected

    Warning

    When an invalid Graphics Controller is selected, it can lead to issues in the virtual machine and even on the host machine.

  7. In the Storage section, select virtio-scsi as the Controller Type:

    VirtualBox - VM Settings - Storage

  8. In the Network section, select Bridged Adapter and re-check in Advanced that the Adapter Type is Paravirtualized Network (virtio-net):

    VirtualBox - VM Settings - Network

  9. Start the VM.
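For repeatable setups, most of the adjustments above can also be scripted with VBoxManage. A minimal sketch, assuming the imported VM is named “IP Fabric” and the host bridge interface is eth0 (both placeholders); the storage controller type (step 7) is easiest to change in the GUI:

    # Chipset, graphics controller, and bridged virtio NIC (steps 5, 6, and 8 above):
    VBoxManage modifyvm "IP Fabric" --chipset ich9 --graphicscontroller vmsvga \
      --nic1 bridged --bridgeadapter1 eth0 --nictype1 virtio

    # Start the VM without opening a GUI window:
    VBoxManage startvm "IP Fabric" --type headless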

Deploying VM on Azure

Uploading IP Fabric Disk File

The first step of deploying to Azure requires creating a VHD file from the qcow2 image, uploading it to a blob storage container, and then creating an Image to use for a Virtual Machine.

  1. Log in to the Microsoft Azure Portal and create or use an existing Resource Group.

    In the Microsoft Azure documentation, a resource group is defined as:

    … a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group. You decide how you want to allocate resources to resource groups based on what makes the most sense for your organization. Generally, add resources that share the same lifecycle to the same resource group so you can easily deploy, update, and delete them as a group.

    Please follow the instructions in Create resource groups.

    Create a Resource group

  2. Create or use an existing Storage Account for the IP Fabric VM.

    A storage account is an Azure Resource Manager resource. Resource Manager is the deployment and management service for Azure. For more information, see Azure Resource Manager overview and Creating Storage Account.

    Create a Storage account

  3. Create or use an existing Blob Storage Container.

    Azure Blob Storage allows you to store large amounts of unstructured object data. You can use Blob Storage to gather or expose media, content, or application data to users. Because all blob data is stored within containers, you must create a storage container before you can begin to upload data. To learn more about Blob Storage, read the Introduction to Azure Blob Storage.

    Create a Blob Storage container

  4. Convert the IP Fabric-provided qcow2 image to VHD using QEMU. The recommended way to convert the image:

    qemu-img convert -f qcow2 -o subformat=fixed,force_size -O vpc ipfabric-6-3-1+1.qcow2 ipfabric-6-3-1+1.vhd
    

    QEMU Version

    Please use qemu-img version 2.6 or higher. According to the Azure documentation:

    There is a known bug in qemu-img versions >=2.2.1 that results in an improperly formatted VHD. The issue has been fixed in QEMU 2.6. We recommend using either qemu-img 2.2.0 or lower, or 2.6 or higher.

    You may check the qemu-img version that you are using with:

    qemu-img --version
    
  5. Upload the VHD image to the storage account blob container created earlier, using Azure Storage Explorer.

    Upload the VHD image

    Blob Type

    When uploading the VHD image to Azure, make sure to select Page Blob as the Blob Type. Azure images can only be created from a Page Blob source.

    VHD Upload

    For uploading the VHD image, please use Azure Storage Explorer (a desktop app) instead of the Azure web UI. If you upload the VHD image via the Azure web UI, you might encounter the following error:

    The specified cookie value in VHD footer indicates that disk ‘ipfabric-6-3-1+1.vhd’ with blob https://…/vhd/ipfabric-6-3-1+1.vhd is not a supported VHD. Disk is expected to have cookie value ‘conectix’.
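    Alternatively, if you prefer a command-line workflow, the upload can be done with the Azure CLI; a minimal sketch with placeholder account and container names (note --type page, required for the same Page Blob reason):

    az storage blob upload \
      --account-name <storage-account> \
      --container-name <container> \
      --name ipfabric-6-3-1+1.vhd \
      --file ipfabric-6-3-1+1.vhd \
      --type page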

Sizing IP Fabric VM

Prior to creating the IP Fabric image, you need to know the type of server required. Azure Regions offer different server sizes, so performing this step first ensures that you select the correct Region in the next step.

IP Fabric Hardware Requirements

  1. Check the IP Fabric Hardware Requirements documentation.
  2. Record the number of CPUs.
  3. Record the RAM requirements.

Azure VM Finder

For this example, we will use a minimum of 16 CPUs and 32 GB of memory.

  1. Please visit the Azure Find your VM website.
  2. Select Find VMs by workload type.
  3. Select all for Workload type and click Next.
  4. Enter minimum and maximum CPU and RAM values.
    1. vCPU: min 16, max 24
    2. RAM: min 32 GB, max 56 GB
  5. Select Premium SSD for Disk Storage.
  6. Data Disk can be left as default as IP Fabric does not use a separate disk for data.
  7. Under Operating system: To use a custom VM image, select Linux and then CentOS to see VM availability and pricing information.
  8. Select your preferred Region(s).
  9. Under the Recommended Virtual Machine(s), find an Instance with either an Intel or AMD processor that will suit your needs.
  10. Record the Instance and Region names you would like to use for the deployment.

Creating Image

Search Images

Search and select Images in the portal’s search bar, and then Create a new Image.

Create an Image from VHD

  1. Select the correct Subscription and Resource group.
  2. Name the image.
  3. Select the Region that was recorded from Azure VM Finder.
  4. Set OS type to Linux.
  5. Set VM generation to Gen 1.
  6. Browse the Storage blob to find and select your uploaded VHD.
  7. Set Account type to Premium SSD.
  8. Set Host Caching to Read/write.
  9. Set Key management to Platform-managed key.
  10. Optional: Add custom Tags.
  11. Select Review + create, wait for validation, and then click Create.
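The same image can also be created with the Azure CLI. A minimal sketch, assuming the blob URL of the VHD uploaded in the previous section (all names are placeholders):

    az image create \
      --resource-group <resource-group> \
      --name ipfabric-image \
      --os-type Linux \
      --hyper-v-generation V1 \
      --storage-sku Premium_LRS \
      --source https://<storage-account>.blob.core.windows.net/<container>/ipfabric-6-3-1+1.vhd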

Creating VM

After creating the Image, go to the Resource and select Create VM:

Create VM

Basics

VM Details

Basics Continued

  1. Fill out the required Project details and Instance details sections:

    1. Select the correct Subscription and Resource group.

    2. Name the virtual machine.

    3. Select an Availability Zone.

    4. Using the information in Sizing IP Fabric VM, select the appropriate instance size.

  2. Specify an Administrator account using Password authentication with a secure password.

    Username

    Username must not be autoboss, osadmin, or root. Optionally, use the default azureuser.

    SSH Public Key

    Specifying SSH public key authentication will disable SSH password authentication for the entire VM, requiring either:

    • Manually editing /etc/ssh/sshd_config to enable password authentication for the osadmin user (see the sketch after this list).
    • Using the configured key(s) to SSH into the VM anytime CLI access is required (most secure).
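    A minimal sketch of that sshd_config change, re-enabling password login only for the osadmin user (an illustration, not an official procedure; adjust to your security policy):

    # Run as root on the VM; append a per-user override, then restart the SSH daemon:
    printf 'Match User osadmin\n    PasswordAuthentication yes\n' >> /etc/ssh/sshd_config
    systemctl restart sshd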
  3. Inbound port rules > Public inbound ports should be set to None.

  4. Set Licensing > License type to Other.

Disks

VM Disks

  1. Enabling Encryption at host is recommended if it is available.

  2. Select the OS disk size based on the resource requirements matrix.

  3. OS disk type can be Premium SSD (locally-redundant storage) or Premium SSD (zone-redundant storage).

Networking

VM Networking

  1. Select or create a new Virtual network and Subnet.

  2. Please see Network security groups for information on securing access to your VM.

Public IP

IP Fabric contains sensitive information about your network, so it is highly recommended to use private networks only.

Other Configuration Options

  1. Management: Can be left to defaults.
  2. Monitoring and Advanced:
    • This is outside the scope of a normal IP Fabric deployment.
    • Installing Extensions may impact the application, and future upgrades could remove these from the VM.
    • If required, please reach out to your Solution Architect to explore options.
  3. Tags: Optional, assign custom tags to the resources being created.

Review + Create

Ensure validation passed and click Create.
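If you deploy with the Azure CLI instead of the portal, a minimal sketch roughly equivalent to the steps above (the image name, VM size, and disk size are placeholders; pick the instance recorded in Sizing IP Fabric VM):

    az vm create \
      --resource-group <resource-group> \
      --name ipfabric-vm \
      --image ipfabric-image \
      --size Standard_D16s_v5 \
      --admin-username azureuser \
      --admin-password '<secure-password>' \
      --os-disk-size-gb 200 \
      --public-ip-address "" \
      --nsg ""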

Post Deployment

  1. Connect to the IP Fabric VM via SSH with the username created during the deployment:

    # password authentication:
    ssh azureuser@ip_address
    
    # SSH public key authentication:
    ssh -i identity-file.pem azureuser@ip_address
    
  2. Run IPF CLI Config:

    sudo ipf-cli-config -a
    

Console Access

Please note that the Azure serial console might not be accessible for setting the osadmin password in IPF CLI Config. In that case, please contact the IP Fabric Support team or your Solution Architect. We can connect to the appliance via SSH with the default/factory osadmin password (that is overwritten during IPF CLI Config) and run IPF CLI Config manually.