Every public cloud runs on dedicated bare metal servers with a hypervisor layer. When you provision a cloud VM, you are renting a slice of someone else’s dedicated hardware. Running that hypervisor layer yourself on InMotion bare metal or unmanaged dedicated hardware gives your team the same capability: direct hardware access, full control over VM density,…
Proxmox VE
Proxmox Virtual Environment is the practical choice for most teams building a private cloud on a single dedicated server or small cluster. It is open source, Debian-based, and ships with a web UI that manages both KVM virtual machines and LXC containers from the same interface. The enterprise subscription adds repository access and support contracts, but the community edition is fully functional for production use.
Proxmox handles VM live migration between nodes, shared storage configuration, high availability clustering, and the Proxmox Backup Server integration that makes VM snapshot backups genuinely straightforward. For teams that want to run a private cloud without hiring a dedicated VMware administrator, Proxmox is the right starting point.
VMware vSphere / ESXi
VMware ESXi remains the enterprise standard in organizations with existing VMware infrastructure, certified integrations, and teams holding VMware certifications. The licensing model changed significantly after Broadcom’s acquisition of VMware in 2023, which pushed many organizations to evaluate Proxmox and KVM alternatives more seriously. For organizations already committed to the VMware ecosystem, ESXi on dedicated bare metal remains a valid choice. For teams starting fresh, Proxmox or KVM are worth evaluating first on cost grounds.
KVM with libvirt
Linux KVM (Kernel-based Virtual Machine) is the hypervisor layer underneath both RHEL’s virtualization stack and many cloud providers’ infrastructure. libvirt provides the management API; virt-manager or Cockpit provide basic GUIs. For teams comfortable with Linux administration and infrastructure-as-code tooling (Terraform, Ansible), KVM with libvirt offers more flexibility than Proxmox at the cost of a less integrated management experience.
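As a minimal sketch of what that workflow looks like, a single guest can be defined and started through libvirt from the command line. The VM name, ISO path, bridge name, and sizes below are placeholders, not values from this article:

```
# Define and boot a KVM guest via libvirt; names, paths, and sizes are
# illustrative placeholders.
virt-install \
  --name dev-vm-01 \
  --memory 8192 \
  --vcpus 4 \
  --disk size=50,pool=default \
  --cdrom /var/lib/libvirt/images/debian-12.iso \
  --os-variant debian12 \
  --network bridge=br0 \
  --graphics none --console pty,target_type=serial
```

From there, virsh (list, start, console, shutdown) covers day-to-day operations, and the same definitions can be driven from infrastructure-as-code tooling.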
VM Density Planning on 192GB RAM
The practical question when provisioning a private cloud node is how many VMs fit. The answer depends entirely on VM workload profiles.
Baseline density estimates assume no memory overcommitment. Proxmox and KVM both support memory ballooning and overcommitment, which let you provision more memory to VMs than physically exists by banking on the VMs not using their full allocations simultaneously. For development environments, 2x overcommitment is reasonable. For production database VMs, never overcommit.
Keep roughly 16-24GB of physical RAM outside of VM allocation for the hypervisor, storage caching (the host OS page cache for VM disk images), and any management services running on the bare metal host.
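A minimal sketch of how this looks per VM in Proxmox (the VM ID and sizes are hypothetical): the memory value is the ceiling, the balloon value is the floor the VM can be shrunk to when the host is under memory pressure, and the host reserve is simply memory you never hand out.

```
# Give VM 101 a 16GB ceiling that can balloon down to 8GB under host
# memory pressure; VM ID and sizes are example values. For production
# database VMs, set --balloon 0 to disable ballooning entirely so the
# full allocation is always available.
qm set 101 --memory 16384 --balloon 8192

# Sanity-check host headroom: on a 192GB node, reserving roughly 16-24GB
# for the hypervisor and page cache leaves about 168-176GB for VMs.
free -g
```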
CPU Oversubscription Ratios
The AMD EPYC 4545P provides 16 cores and 32 threads. CPU oversubscription ratios define how many vCPUs you provision relative to physical threads (a quick way to total the vCPUs already provisioned on a node is sketched after this list):
1:1 ratio (32 vCPU total): Appropriate for production VMs running consistent workloads. No VM ever waits for CPU time.
2:1 ratio (64 vCPU total): Safe for mixed environments where development and production VMs coexist. Development VMs typically sit idle.
4:1 ratio (128 vCPU total): Suitable for development-only environments with bursty but non-simultaneous workloads. Unacceptable for production.
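To see where a Proxmox node sits against these ratios, the per-VM configs under /etc/pve/qemu-server can be summed. This is a rough sketch that assumes the default of one socket per VM, since total vCPUs per VM is cores multiplied by sockets:

```
# Sum the 'cores' line from every VM config on this node and compare the
# total to the 32 hardware threads; assumes single-socket VM definitions.
grep -h '^cores:' /etc/pve/qemu-server/*.conf \
  | awk '{total += $2} END {printf "provisioned vCPUs: %d (%.1f:1 vs 32 threads)\n", total, total/32}'
```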
On Proxmox, the host dashboard’s CPU usage metric shows aggregate load across all guests: when the sum of VM CPU demand exceeds 100% of host capacity, VMs start waiting for CPU time, which shows up as steal time inside the guests. Monitor this on new deployments before committing to a production VM density.
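Beyond the dashboard, a quick way to confirm whether tasks (including VM vCPU threads) are actually stalling on CPU is the kernel’s pressure stall information on the host. This assumes a kernel with PSI available; add psi=1 to the kernel command line if the files are missing:

```
# Non-zero, growing avg10 values under "some"/"full" indicate tasks are
# waiting for CPU time rather than running.
cat /proc/pressure/cpu
```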
Storage Configuration for VM Fleets
VM Disk Images on NVMe
NVMe storage as the backend for VM disk images changes the performance profile of every VM on the host. VM disk I/O still passes through the hypervisor layer, but the underlying NVMe throughput means one VM running a write-heavy database workload does not noticeably degrade the other VMs on the same host.
In Proxmox, create a local-lvm storage pool pointing at the NVMe drive. This uses LVM-thin provisioning, which allocates disk space from the NVMe pool on demand rather than pre-allocating full VM disk sizes. A fleet of VMs each provisioned with a 50GB disk may consume only 200GB of NVMe space in total if most of those disks hold sparse data.
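A sketch of what that looks like from the CLI, assuming the installer has already created the pve volume group with a thin pool named data on the NVMe array (these names mirror the Proxmox defaults but are assumptions for this example):

```
# Register the LVM-thin pool with Proxmox as 'local-lvm' for VM disks and
# container volumes; volume group and pool names follow the default layout.
pvesm add lvmthin local-lvm --vgname pve --thinpool data --content images,rootdir

# Verify thin-pool usage: the Data% column shows real consumption versus
# the pool's size, which is where overprovisioning headroom is tracked.
lvs pve
```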
RAID Configuration for VM Storage
InMotion uses mdadm RAID 1 (software mirroring) across the dual NVMe drives. This gives the VM storage pool redundancy: a single NVMe drive failure does not lose VM data while awaiting replacement. For a private cloud hosting production VMs, this baseline protection is important.
The RAID 1 configuration provides 3.84TB of usable NVMe storage for VM disk images. For a fleet of 15 VMs averaging 200GB provisioned disk per VM, that is 3TB of provisioned capacity. With LVM-thin overprovisioning, actual utilization will typically be 40-60% of provisioned capacity, leaving comfortable headroom.
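If you are assembling the mirror yourself on an unmanaged server rather than having it preconfigured, the mdadm steps look roughly like this. The device names are assumptions, and the resulting array is intended as the physical volume backing the VM storage pool:

```
# Mirror the two NVMe drives and check sync status; device names are
# placeholders - confirm them with lsblk before running anything destructive.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
cat /proc/mdstat            # watch the initial resync
mdadm --detail /dev/md0     # confirm both members are active and clean
```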
Proxmox Backup Server
Proxmox Backup Server (PBS) runs as a service on the hypervisor host or a separate machine and handles deduplicated incremental VM backups. A 20-VM environment with 100GB average VM disk usage produces roughly 2TB of unique data. With deduplication, PBS typically stores 3-5 daily backups in under 3TB of space, depending on VM change rates.
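Once PBS is running, its datastore gets attached to the Proxmox VE node as a storage target. The hostname, datastore name, and fingerprint below are placeholders; the fingerprint is shown on the PBS dashboard, and the password for the backup user still needs to be supplied (via --password or afterwards in the GUI):

```
# Attach a PBS datastore as backup storage on the PVE host; server,
# datastore, user, and fingerprint are example values.
pvesm add pbs pbs-backups \
  --server pbs.example.internal \
  --datastore vm-backups \
  --username backup@pbs \
  --fingerprint 'aa:bb:cc:...:ff'
```

Scheduled jobs can then target this storage from the Datacenter backup configuration in the web UI.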
Premier Care’s 500GB backup storage volume supplements local PBS storage for off-server copies of critical VM backups.
Network Configuration and VLAN Isolation
Isolating VM groups from each other at the network layer is a core private cloud requirement, particularly when development, staging, and production VMs share the same physical host.
In Proxmox, network bridges (vmbr0, vmbr1, etc.) map to physical NICs or VLANs. Creating separate bridges for each environment group and assigning VMs to their respective bridge provides Layer 2 isolation. VMs on the production bridge cannot directly communicate with VMs on the development bridge without going through a router or firewall VM.
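A sketch of /etc/network/interfaces with one bridge per environment. The NIC name, VLAN IDs, and addresses are assumptions, and this presumes the upstream switch port trunks those VLANs:

```
# /etc/network/interfaces (excerpt) - separate bridges for production and
# development, each riding its own VLAN on the same physical NIC.
auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    bridge-ports eno1.10      # production VLAN 10
    bridge-stp off
    bridge-fd 0

auto vmbr1
iface vmbr1 inet manual
    bridge-ports eno1.20      # development VLAN 20
    bridge-stp off
    bridge-fd 0
```

After editing, ifreload -a (from ifupdown2, which Proxmox uses) applies the change without a reboot.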
For multi-server clusters, a 10Gbps network port provides the inter-node bandwidth needed for live VM migration and shared storage access without competing with VM network traffic on a congested 1Gbps link.
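In a cluster, that dedicated link can be pinned as the migration network in /etc/pve/datacenter.cfg so live migrations never ride the public interface. The subnet below is a placeholder for whatever the 10Gbps back-end network uses:

```
# /etc/pve/datacenter.cfg - send live-migration traffic over the
# back-end network (CIDR is an example value).
migration: secure,network=10.10.10.0/24
```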
Cost Comparison: Private Cloud vs. Cloud VMs
The cost comparison becomes clear when you price the cloud equivalent of a private cloud VM fleet.
The crossover happens quickly. A team consistently running more than 5 cloud VMs reaches the point where private cloud on dedicated hardware is cheaper per VM. At 15 VMs on an Extreme server, the cost per VM is roughly $23 per month vs. $52-138 per AWS VM depending on instance type.
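Worked out for the 15-VM example above, using straight arithmetic on the per-VM figures quoted (illustrative only; actual server and instance pricing varies):

```
# Monthly fleet totals at 15 VMs, from the per-VM numbers above.
echo $(( 15 * 23 ))    # private cloud:  ~$345/month
echo $(( 15 * 52 ))    # AWS, low end:   ~$780/month
echo $(( 15 * 138 ))   # AWS, high end:  ~$2070/month
```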
Managed vs. Unmanaged for Virtualization
Running a full hypervisor stack requires root access to the physical server. InMotion’s managed dedicated servers include OS management from the APS team, but the hypervisor configuration itself remains the customer’s responsibility. For Proxmox deployments specifically, that means InMotion manages the underlying server while VM administration stays with the customer.
For teams that want the physical server managed (hardware monitoring, OS updates, network configuration) while controlling their own VM layer, managed dedicated is the right model. For teams that want full unmanaged access to configure the base OS and hypervisor stack independently, InMotion’s bare metal servers provide that foundation.
Getting Started with Proxmox on InMotion
Order an Extreme or Advanced dedicated server based on required VM count
Request Proxmox VE installation from InMotion APS at provisioning time
Configure local-lvm storage pool on NVMe volume for VM disk images
Set up network bridges and VLAN tagging for environment isolation
Install Proxmox Backup Server for scheduled VM snapshot backups
Add Premier Care for OS-level management and 500GB off-server backup storage
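As a quick sanity check once those steps are done, a first VM can be created from the node’s shell. The VM ID, ISO filename, and sizes below are placeholders:

```
# Create a small test VM on the local-lvm pool and production bridge;
# VM ID, ISO name, and sizes are example values.
qm create 100 \
  --name test-vm \
  --memory 4096 --cores 2 \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci \
  --scsi0 local-lvm:32 \
  --ide2 local:iso/debian-12.5.0-amd64-netinst.iso,media=cdrom \
  --boot order='scsi0;ide2' \
  --ostype l26
qm start 100
```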
Most teams running Proxmox on InMotion hardware find that, within the first month, the VM density a single server supports roughly doubles the efficiency of their previous cloud spend. The management overhead of a private cloud is real, but it is substantially lower than commonly assumed, particularly with Proxmox’s unified web interface.
