
MayaNAS Overview

MayaNAS is a multi-protocol cloud storage platform built on OpenZFS, deployed on standard cloud VMs (GCP, AWS, Azure) with object storage as the bulk capacity tier. NFS, SMB, S3, and SFTP file/object protocols share a single OpenZFS pool with consistent identity and snapshots. iSCSI and NVMe-oF block protocols are also supported. Two nodes serve clients simultaneously through floating VIPs with sub-60-second failover.

  • Six protocols on one pool — NFS v3/v4, SMB 3.1.1, S3, SFTP, iSCSI, NVMe-oF
  • OpenZFS special vdev architecture — metadata and small blocks on local NVMe SSD (50 GB default, sub-millisecond latency); bulk data on cloud object storage at multi-GB/s throughput
  • Active-Active HA (default) — both nodes serve clients simultaneously with floating VIPs. Sub-60-second failover with zero client evictions on the surviving node
  • Active-Passive HA — single shared VIP that migrates on failure. Simpler client configuration; one node idle as standby
  • Active Directory + Native Windows ACLs — one-command domain join configures SSSD, winbind, Kerberos, and Samba. NFS Kerberos (krb5/krb5i/krb5p), SMB security=ADS. Native NTFS ACLs via vfs_zfsacl. Windows Previous Versions in Explorer via vfs_shadow_copy2
  • Snapshot mirroring for DR — asynchronous incremental ZFS replication between clusters. Encrypted SSH transport, on-the-wire compression, resumable. Cross-region, cross-cloud, on-prem capable. No per-TB license
  • Customer-controlled bucket — your data lives in your GCS/S3/Azure Blob bucket under your IAM. Walk-away cost is bandwidth
  • Cloud object storage backend — GCS, S3, Azure Blob, MinIO, Wasabi, Cloudflare R2, on-prem Ceph
  • Web UI on port 2020 for management (login: admin)
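The snapshot-mirroring feature above is built on standard OpenZFS incremental replication. MayaNAS automates this; the sketch below only illustrates the underlying mechanism, with hypothetical pool, dataset, snapshot, and host names:

```shell
# Pool/dataset/host names are placeholders; MayaNAS drives this for you.
zfs snapshot tank/projects@mirror-20240601              # point-in-time snapshot
zfs send -c -i tank/projects@mirror-20240531 \
    tank/projects@mirror-20240601 |                     # compressed incremental stream
  ssh dr-node zfs receive -F tank/projects              # encrypted SSH transport
```

`zfs send -c` ships blocks in their on-disk compressed form, which is what makes cross-region transfer cheap; an interrupted stream can be resumed with a receive token (`zfs send -t`).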
| Type | Nodes | VIPs | Use Case |
|---|---|---|---|
| Active-Active HA | 2 | 2 VIPs (one per node) | Default — production, both nodes serve clients simultaneously |
| Active-Passive HA | 2 | 1 shared VIP | Production with simpler failover model |
| Single | 1 | None | Dev/test, evaluation |
| Cloud | Marketplace | Terraform |
|---|---|---|
| GCP | GCP Marketplace | Terraform |
| AWS | AWS Marketplace | Terraform |
| Azure | Azure Marketplace | Terraform |
| On-Prem | qcow2/KVM | — |

MayaNAS uses the OpenZFS special vdev pattern to combine NVMe-class metadata performance with object-storage capacity economics:

  1. Compute instance — runs the MayaNAS storage engine
  2. Special vdev (local pd-ssd / EBS / Premium SSD) — 50 GB default, holds all pool metadata (directory entries, file attributes, ZFS bookkeeping) and small-block files. Sub-millisecond directory listings and stat/lookup
  3. Cloud object storage (GCS/S3/Azure Blob) — durable bulk data tier via objbacker.io. Petabyte-scale capacity at standard object-storage rates
  4. ZFS ARC — RAM cache for hot blocks, repeat-access at line rate
  5. Floating VIPs — alias IP on GCP, custom route on Azure, ENI on AWS
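The special vdev layout above maps onto standard OpenZFS pool syntax. MayaNAS provisions the pool automatically at deployment; this sketch, with hypothetical device paths, just shows the shape of the pattern:

```shell
# Device paths are illustrative; MayaNAS builds the pool for you.
zpool create tank \
  /dev/objbacker0 \                 # bulk data vdev backed by object storage
  special /dev/nvme0n1              # local NVMe SSD: all metadata + small blocks

# Also route small files (here, up to 32K) into the special vdev,
# so directory listings AND small-file reads stay on local NVMe.
zfs set special_small_blocks=32K tank
```

With this layout, stat/lookup traffic never touches the object tier, which is why directory operations stay sub-millisecond even with petabytes behind them.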

Clients mount NFS/SMB/S3/SFTP/iSCSI/NVMe-oF shares using the VIP address. On failover, the failed node’s VIP migrates to the surviving node in under 60 seconds.

MayaNAS is not a ZFS-only system. It uses OpenZFS for file/object workloads where its snapshot, clone, and checksum features matter, and supports LVM, mdadm RAID, DRBD, or raw block devices for block workloads (iSCSI, NVMe-oF) where they are the right fit. Best tool per workload, not per product.
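For the block-protocol side, clients attach through the same floating VIP using standard initiator tooling. A sketch with open-iscsi, where the VIP address and target IQN are placeholders:

```shell
# VIP address and target IQN are placeholders for illustration.
iscsiadm -m discovery -t sendtargets -p 10.0.0.50:3260    # list targets at the VIP
iscsiadm -m node -T iqn.2024-01.io.example:vol0 \
  -p 10.0.0.50:3260 --login                               # attach the LUN
lsblk                                                     # new block device appears
```

Because the session targets the VIP rather than a node IP, the initiator reconnects to the surviving node after failover without reconfiguration.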

After deployment on any cloud:

  1. Wait ~2 minutes for the cluster to auto-configure after Terraform/Marketplace finishes provisioning infrastructure. The cluster sets up one OpenZFS pool, one ZFS dataset, and exports it to your VPC subnet automatically — no admin guide required
  2. SSH to the instance (user varies by cloud: mayanas on GCP, ec2-user on AWS, azureuser on Azure)
  3. Web UI at http://<instance-ip>:2020 (or via SSH tunnel: ssh -L 2020:localhost:2020 user@instance)
  4. Password is auto-generated — retrieve via terraform output or instance metadata
  5. Mount shares via NFS, SMB, S3, SFTP, iSCSI, or NVMe-oF using the floating VIP
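Step 5 above with concrete commands for three of the protocols. The VIP address, export path, share name, bucket name, and S3 port are placeholders; check the Web UI for the actual values in your deployment:

```shell
# All names/addresses below are placeholders for your deployment.

# NFS: 'hard' keeps retrying through the sub-60-second VIP failover
sudo mount -t nfs -o vers=4.1,hard 10.0.0.50:/tank/projects /mnt/projects

# SMB from a Linux client (Windows clients map \\10.0.0.50\projects)
sudo mount -t cifs //10.0.0.50/projects /mnt/smb -o username=alice

# S3 against the same pool, using the VIP as a custom endpoint
aws s3 ls s3://projects --endpoint-url http://10.0.0.50:9000
```

All three paths land in the same OpenZFS pool, so a file written over NFS is immediately visible over SMB and S3.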