Deploying Trend Micro Virtual Network Sensor on Proxmox VE

Posted on 2025-07-28 (updated 2025-10-17) by beaker

Abstract

This document describes the process of deploying the Trend Micro Vision One Virtual Network Sensor on a Proxmox VE system.

Officially, Trend Micro only supports (as of this writing) the following virtualization platforms and cloud service providers for VNS:

  • Hosted Virtual Machines
    • Microsoft Hyper-V (VHDX)
    • KVM (QCOW2)
    • VMware ESXi / vCenter (OVA)
    • Nutanix AHV (QCOW2)
  • Cloud Service Providers
    • Amazon Web Services (AWS)
    • Microsoft Azure
    • Google Cloud

Both the KVM and Nutanix options generate the same vns_meta.iso and vns_system.qcow2 files. The qcow2 image works on both because Nutanix uses KVM for AHV and CentOS for Acropolis OS (AOS). The only difference is that the KVM package includes a deployment script for RHEL-based distributions.

You will need to set up two network bridge interfaces: one for management and one for data (traffic monitoring/TAP/SPAN).

FYI: The Trend Micro Service Gateway itself is a stripped-down version of Rocky Linux.

For an example of how to configure a Mellanox switch for monitor sessions, I have this guide:

Configuring a Passive Tap / Monitor Port: Mellanox SX1024 (Onyx) Network Switch

Download Virtual Network Sensor

You can fetch the appliance from your Vision One console under the Network Security section in the Network Inventory subgroup. Click the big blue “Deploy Virtual Network Sensor” button to begin the process.

From here, select the “Nutanix AHV” platform (or KVM – doesn’t matter).

Set the Admin password for the appliance. This is kinda cool, because they generate the image on the fly using your account information. I imagine dozens of Oompa-Loompas vigorously assembling the image while floating on an Azure cloud while you wait for the download link.

Your downloaded file will be a zip archive (for example, VirtualNetworkSensor_nutanix_image.1.0.1504.zip) containing the VM as a qcow2 image (“vns_system.qcow2”) and a metadata file named “vns_meta.iso”. Extract this zip file, and remember where you put the contents. Don’t laugh. It happens.
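
On a Linux or macOS workstation, extraction is a one-liner (archive name from the example above – yours will differ):

$ unzip VirtualNetworkSensor_nutanix_image.1.0.1504.zip

This leaves vns_system.qcow2 and vns_meta.iso in the current directory.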

Upload to PVE

Log in to your Proxmox web interface. Navigate to Datacenter > Node > local (Node) – or wherever you set up VM storage. My node is named “rizzo”, and its storage is a directory on a local RAID array (thin pools are too restrictive for uploads) called “rizzo-raid5”:

For storage options, you should see a content type called “Import.” If it isn’t there, edit your storage entry under Datacenter and make sure it is selected (it isn’t by default). It should look something like this (after uploading the qcow2 file – I’m jumping ahead with this screenshot):
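
If you prefer the shell, the same change can be made with pvesm on the node. A sketch using my storage name – note that --content replaces the whole list, so include every content type the storage already serves (check /etc/pve/storage.cfg first):

# pvesm set rizzo-raid5 --content images,iso,import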

Click Upload, find your qcow2 file, then upload it.

You’ll also have a metadata image named vns_meta.iso in that archive. We need to upload that to the “ISO Images” bin like this:

Make sure you use the meta ISO that came with this downloaded instance – because it is the brains behind the operation to connect to your Vision One environment.
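
If you’d rather skip the web uploader, you can also scp both files straight into the storage directory on the node. The paths below assume my directory storage, and pve-node is a placeholder hostname:

$ scp vns_system.qcow2 root@pve-node:/mnt/pve/rizzo-raid5/import/
$ scp vns_meta.iso root@pve-node:/mnt/pve/rizzo-raid5/template/iso/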

Create a New Virtual Machine

Now go back to your node, right-click on it, and select “Create VM”.

Sizing Reference:
Mbps     Virtual CPUs    Virtual Memory    Storage
100      2               8 GB              50 GB
500      4               12 GB             50 GB
1000     6               18 GB             50 GB
2000     8               24 GB             100 GB
5000     16              36 GB             150 GB
10000    26              48 GB             200 GB
Virtual Appliance Specifications
  • General
    • VM ID: your choice
    • Name: your choice
    • Start at boot: Checked
  • OS
    • Storage: wherever your ISO files are stored
    • ISO image: vns_meta.iso (the file you uploaded)
    • Type / Guest OS: Linux / 6.x – 2.6 Kernel (last I checked, this image was based on CentOS Stream 8)
  • System (Note: This is not a UEFI w/ EFI image)
    • Machine: q35
    • BIOS: Default (SeaBIOS)
    • SCSI Controller: VirtIO SCSI Single
    • Add TPM: Unchecked
  • Disk
    • Bus/Device: VirtIO Block
    • Storage: same as VM
    • Disk size (GiB): see table above – this is just a dummy disk anyway, and will be replaced with the actual qcow2 image later
  • CPU
    • Sockets: 1
    • Cores: see table above
    • Type: host
  • Memory
    • Memory (MiB): see table above
      • Mental math! The web UI expects MiB, not GB. I have a 10GbE monitor port, so according to the table above, I need 48GB RAM. 48 x 1024 = 49152 MiB
  • Network
    • Bridge: vmbr0 (or whatever your management network is – we’ll add the monitor port later)
    • Firewall: Unchecked
  • Confirm
    • Double check everything, then click Finish without checking the “Start after created” option
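
For reference, the whole wizard boils down to a single qm create call. Here is a rough sketch sized for the 1000 Mbps row of the table above – the VMID (400), name, and storage are from my setup, so substitute your own values:

# qm create 400 --name tm-vns-1 --onboot 1 --ostype l26 \
      --machine q35 --bios seabios --scsihw virtio-scsi-single \
      --sockets 1 --cores 6 --cpu host --memory 18432 \
      --virtio0 rizzo-raid5:50,format=qcow2 \
      --cdrom rizzo-raid5:iso/vns_meta.iso \
      --net0 virtio,bridge=vmbr0,firewall=0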

Configure and Add Network Monitor Bridge Interface

For your PVE node, go to the Network section under System in the node’s left navigation menu. Click Create and select Linux Bridge.

The assumption is that you have a NIC on the PVE host connected to the monitor/SPAN/TAP port of your switch (or other TAP device). Mine is a Mellanox ConnectX-4 40GbE QSFP connected to a Monitor Session port on a Mellanox SX1024 network switch, and it is named enp67s0np0 in this example.
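
Before wiring the NIC into a bridge, it’s worth confirming that it is up and showing link (interface name from my system – swap in yours):

# ip link show enp67s0np0

The flags should include LOWER_UP once the interface is up and the mirror port has link. Then fill in the bridge settings: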

  • Name:
    • vmbr01 (whatever you like, my bridges use vmbrXX for taps and vmbrX for subnets)
  • Autostart:
    • Checked
  • Bridge ports:
    • enp67s0np0 (in my system)

Click Create, then Apply Configuration.

Linux Bridge Creation Example

That’s not all! We have one little trick to make sure all packets get seen.

SSH into your Proxmox node (or use the web UI shell). Edit the /etc/network/interfaces file (with vi of course – we’re not animals), and look for the bridge we just created (i.e. vmbr01). We need to append four very important lines to the stanza – the last four lines in my example below:

auto vmbr01
iface vmbr01 inet manual
      bridge-ports enp67s0np0
      bridge-stp off
      bridge-fd 0
      bridge-vlan-aware yes
      bridge-vids 2-4094
      bridge-ageing 0
      up ip link set $IFACE promisc on

These lines do the following for us:

  • Enables VLAN awareness on the bridge
  • Defines the allowed VLAN ID range on the bridge
  • Sets the MAC address aging (or ageing in the UK, which is where this feature was probably developed!) time in the bridge’s forwarding database (FDB) to 0
  • Issues a shell command that runs when the interface is brought up, enabling promiscuous mode on the bridge interface (vmbr01), allowing it to see all traffic – not just frames destined to its MAC

If you’re logged in via SSH, enter this command to apply the networking changes:

# ifreload -a

If you’re logged in via the Proxmox web UI shell, this one feels more complete (but it will kill any SSH connections):

# systemctl restart networking
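
Either way, once the configuration is reloaded you can confirm the bridge picked up the new settings (bridge name from my example):

# ip -d link show vmbr01
# bridge vlan show dev vmbr01

The flags line should include PROMISC, the detailed output should show vlan_filtering 1 and ageing_time 0, and the bridge vlan command lists the VLAN range allowed on the bridge itself.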

Now back to the web management UI. Open the VM’s Hardware options and click the Add -> Network Device button. Select the monitor bridge you created, uncheck Firewall, and click Add.
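
If you prefer the shell, the same thing can be done with a single qm call – a sketch using my VMID and monitor bridge (net1 is the second NIC, after the management net0):

# qm set 400 --net1 virtio,bridge=vmbr01,firewall=0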

Your hardware settings should look something like this when finished:

Replace Dummy Disk with Virtual Network Sensor QCOW2 Image

We’ll need to SSH into the PVE node for this. You could also access the node’s shell by clicking the “>_ Shell” button on the navigation menu, but SSH feels more natural to me.

Navigate to the storage directory (VMID is the one you set at creation):

# cd /var/lib/vz/images/<VMID>/

On my system, it is: cd /mnt/pve/rizzo-raid5/images/400/

Move your uploaded QCOW2 file to this directory and rename it to match the VM’s expected disk name:

# mv /var/lib/vz/import/vns_system.qcow2 vm-<VMID>-disk-0.qcow2

On my system, it is: mv /mnt/pve/rizzo-raid5/import/vns_system.qcow2 vm-400-disk-0.qcow2
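
Before booting, it doesn’t hurt to sanity-check the replacement disk and have Proxmox re-read its size into the VM config (VMID 400 is from my example):

# qemu-img info vm-400-disk-0.qcow2
# qm rescan --vmid 400

The first command should report the file format as qcow2; the second updates the disk size shown in the VM’s hardware settings.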

Start and Configure Virtual Appliance

Go back to the node in Proxmox and click on the VM to select it. Navigate to the “>_ Console” and click “Start Now” – you will soon be presented with a login prompt (well, not that soon – some initialization has to happen first):

Within the VNS console, log in to the Command Line Interface (CLI) with the admin account and password you set when downloading the appliance:

Type enable and press Enter to enable the administrative commands. The command prompt changes from > to #. These are the basics of what to configure to get started:

# configure hostname <hostname>
# configure network primary ipv4.static <ip> <submask> <gateway> <dns1> [dns2]
# connect

Example configuration session:

> enable
# configure hostname tm-vns-1
# configure network primary ipv4.static 192.168.2.6 255.255.255.192 192.168.2.1 192.168.6.9
# connect
Trend Vision One  : good
Network Inventory : processing
Network Analytics : good
Service Gateway   : Service Gateway is not paired (I'm using a direct connection)
# exit

If it grabbed an IP address via DHCP without asking you (rude!), it may take a little while for your static IP change to register in Vision One. Either way, after about 10-15 minutes it should have synced properly with your Vision One environment.

Check the Vision One Network Security -> Network Inventory page for the appliance status and then Network Security -> Network Overview for activity. It may take a while for things to populate, so wait for (or create!) an attack for something interesting. Have fun!
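
If nothing shows up, a quick sanity check (assuming tcpdump is installed on the PVE node) is to sniff the monitor bridge and make sure mirrored traffic is actually reaching the host – bridge name from my example:

# tcpdump -ni vmbr01 -c 20

This prints the first 20 frames seen on the bridge without name resolution; if it sits silent, revisit the SPAN/TAP configuration on the switch.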

Helpful Links

  • Virtual Network Sensor FAQ
  • Virtual Network Sensor CLI commands
  • Virtual Network Sensor system requirements
  • Ports and URLs used by Virtual Network Sensor