MayaNAS On-Prem (qcow2/KVM)
Deploy MayaNAS on your own infrastructure using the qcow2 virtual machine image with KVM, libvirt, or Proxmox.
Prerequisites
- Linux host with KVM/QEMU support (kvm-ok should report that KVM acceleration can be used)
- libvirt and virt-install (or Proxmox VE)
- At least 4 vCPUs and 8 GB RAM for the VM
- Network connectivity (bridge or NAT)
- A local or network-attached SSD for metadata (recommended)
- S3-compatible object storage backend (MinIO, Ceph RGW, or cloud S3/GCS)
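The prerequisites above can be verified up front. A minimal pre-flight sketch (the exact tool list is an assumption; adjust it for your distribution and hypervisor choice):

```shell
# Pre-flight check for a KVM host (sketch; tool names assumed, adjust as needed)
for tool in qemu-img virt-install kvm-ok; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done

# KVM acceleration is usable when /dev/kvm exists and is writable
if [ -w /dev/kvm ]; then
  echo "/dev/kvm: ok"
else
  echo "/dev/kvm: not available"
fi
```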
Step 1: Download the qcow2 Image
Obtain the MayaNAS qcow2 image from ZettaLane. The image contains the full MayaNAS stack pre-installed.
```shell
# Example (replace URL with actual download link)
wget https://releases.zettalane.com/mayanas/mayanas-latest.qcow2
```

Step 2: Deploy the VM
Option A: virt-install (libvirt)
```shell
virt-install \
  --name mayanas-node1 \
  --ram 8192 \
  --vcpus 4 \
  --disk path=/var/lib/libvirt/images/mayanas-node1.qcow2,format=qcow2 \
  --import \
  --os-variant rocky9 \
  --network bridge=br0 \
  --graphics none \
  --console pty,target_type=serial
```

Option B: Proxmox VE
- Upload the qcow2 to Proxmox storage
- Create a new VM (Linux, 4+ vCPUs, 8+ GB RAM)
- Import the qcow2 as the boot disk:
```shell
qm importdisk <VMID> mayanas-latest.qcow2 local-lvm
```

- Attach the imported disk as SCSI and set it as the boot device
- Start the VM
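Taken together, the Proxmox steps map to a short qm command sequence. The script below only prints the candidate commands for review rather than running them; the VMID, storage name, bridge, and resulting disk name are illustrative assumptions:

```shell
#!/bin/sh
# Print (not run) a candidate Proxmox import sequence for review.
# VMID, storage, bridge, and disk naming below are illustrative assumptions.
VMID=9001
IMAGE=mayanas-latest.qcow2
STORAGE=local-lvm

cat <<EOF
qm create $VMID --name mayanas-node1 --memory 8192 --cores 4 --net0 virtio,bridge=vmbr0
qm importdisk $VMID $IMAGE $STORAGE
qm set $VMID --scsihw virtio-scsi-pci --scsi0 $STORAGE:vm-$VMID-disk-0
qm set $VMID --boot order=scsi0
qm start $VMID
EOF
```

Review the printed commands against your storage and network names before pasting them into the Proxmox shell.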
Option C: QEMU direct
```shell
qemu-system-x86_64 \
  -enable-kvm \
  -m 8192 \
  -smp 4 \
  -drive file=mayanas-latest.qcow2,format=qcow2 \
  -nic bridge,br=br0 \
  -nographic
```

Step 3: First Boot Configuration
On first boot, the VM obtains an IP address via DHCP (or you can configure a static IP from the console).
- Find the VM IP from your DHCP server or hypervisor console
- SSH into the VM:
```shell
ssh mayanas@<VM_IP>
```

- Run initial setup (if not already configured):
```shell
sudo /opt/mayastor/config/standalone_setup.sh
```
Step 4: Configure Object Storage Backend
MayaNAS needs an S3-compatible backend. Configure it via the Web UI or CLI:
```shell
# Example: Configure MinIO backend
export MAYANAS_S3_ACCESS_KEY="minioadmin"
export MAYANAS_S3_SECRET_KEY="minioadmin"
export MAYANAS_S3_BUCKET="mayanas-data"
export MAYANAS_S3_ENDPOINT="http://minio.local:9000"
```

Supported backends:
- MinIO — self-hosted S3-compatible storage
- Ceph RGW — S3 gateway for Ceph clusters
- AWS S3 — direct cloud backend
- GCS — via S3-compatible interop API
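Before starting services against any of these backends, it can help to sanity-check that all four variables are actually set. A minimal sketch, reusing the example MinIO values from above (the values are placeholders, not real credentials):

```shell
# Sketch: export the backend settings (example values) and verify none are empty
export MAYANAS_S3_ENDPOINT="http://minio.local:9000"
export MAYANAS_S3_BUCKET="mayanas-data"
export MAYANAS_S3_ACCESS_KEY="minioadmin"
export MAYANAS_S3_SECRET_KEY="minioadmin"

for var in MAYANAS_S3_ENDPOINT MAYANAS_S3_BUCKET MAYANAS_S3_ACCESS_KEY MAYANAS_S3_SECRET_KEY; do
  eval "val=\${$var}"
  if [ -n "$val" ]; then
    echo "$var: set"
  else
    echo "$var: MISSING"
  fi
done
```

An empty variable here usually means a typo in the export lines, which is cheaper to catch now than after the first failed write to the bucket.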
Step 5: Access the Web UI
```
http://<VM_IP>:2020
```

Login: admin / default password (check /opt/mayastor/config/ or set during setup).
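After boot it can take a little while for the UI to come up, so polling the port before logging in avoids confusion. A small sketch (192.0.2.10 is a placeholder address; port 2020 is from this guide):

```shell
# Poll the Web UI port a few times before giving up (address is a placeholder)
VM_IP=192.0.2.10
for i in 1 2 3; do
  if curl -sf --connect-timeout 2 -o /dev/null "http://$VM_IP:2020"; then
    echo "Web UI reachable"
    break
  fi
  echo "attempt $i: Web UI not reachable yet"
  sleep 1
done
```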
Step 6: Add Metadata Disk (Optional)
For better performance, attach an SSD to the VM and configure it as the metadata disk:
- Attach an SSD disk to the VM (via hypervisor)
- The disk appears as /dev/vdb or /dev/sdb inside the VM
- Configure it in the MayaNAS Web UI under Storage settings
Step 7: Create NFS/SMB Shares
Use the Web UI or CLI to create shares, then mount from clients:
```shell
sudo mount -t nfs <VM_IP>:/<POOL>/<SHARE> /mnt/data
```

HA Deployment (2-Node Cluster)
For high availability, deploy two VMs and configure clustering:
- Deploy two VMs following steps 1–4 above
- Ensure both VMs can reach each other on the network
- Configure clustering via the Web UI or use the cluster setup script:
```shell
sudo /opt/mayastor/config/cluster_setup.sh
```

- The VIP will float between the two nodes for automatic failover
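Because the VIP floats between nodes, clients should mount through it rather than through either node's own address, so a failover is transparent to them. A hypothetical /etc/fstab entry (the VIP address and pool/share names are illustrative assumptions):

```
192.0.2.100:/tank/share1  /mnt/data  nfs  defaults,_netdev  0 0
```

The _netdev option delays the mount until networking is up, which matters for network filesystems mounted at boot.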
Troubleshooting
```shell
# Check MayaNAS logs
tail -f /opt/mayastor/logs/mayanas-terraform-startup.log

# Service status
systemctl status mayastor

# Check ZFS pool status
zpool status

# Check NFS exports
showmount -e localhost
```