
MayaNAS on GCP Marketplace

Deploy a MayaNAS HA NAS cluster directly from the GCP Marketplace using Google Cloud’s Infrastructure Manager (Terraform).

Prerequisites:

  • A GCP project with billing enabled
  • Compute Engine API enabled
  • Cloud Storage API enabled
  • IAM permissions: Compute Admin, Storage Admin, Network Admin, Service Account User
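
Both APIs can be enabled ahead of time with gcloud (the project ID is a placeholder):

```sh
# One-time setup: enable the required APIs in the target project
gcloud services enable compute.googleapis.com storage.googleapis.com \
    --project=PROJECT_ID
```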
To deploy:

  1. Go to GCP Marketplace
  2. Search for MayaNAS HA Storage
  3. Click Deploy

The marketplace wizard presents the following sections:

| Field | Description | Default |
|---|---|---|
| Deployment Name | Unique name for the cluster (used for all resource naming) | (none) |
| Deployment Type | Active-Active HA or Active-Passive HA | Active-Active HA |
| Field | Description | Default |
|---|---|---|
| Primary Zone | GCE zone for the primary node | (none) |
| Machine Type | VM instance type | n2-standard-4 |
| Field | Description | Default |
|---|---|---|
| Boot disk type | Disk type for OS | pd-balanced |
| Boot disk size | Size in GB | 20 |
| Field | Description | Default |
|---|---|---|
| Metadata disk type | SSD type for metadata caching | pd-ssd |
| Metadata Disk Size (GB) | Size of each metadata disk (50 GB suits 1-10 TB pools; ZFS metadata is roughly 0.3-0.5% of pool size) | 50 |
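
The sizing guidance above works out simply: at the upper 0.5% bound, a 10 TB pool needs about 50 GB of metadata disk. A hypothetical helper (not part of MayaNAS) that applies that rule with the 50 GB default as a floor:

```sh
# Hypothetical sizing helper: ~0.5% of pool size in GB, minimum 50 GB
metadata_disk_gb() {
  pool_tb=$1
  gb=$(( pool_tb * 5 ))        # 0.5% of pool size, with 1 TB = 1000 GB
  [ "$gb" -lt 50 ] && gb=50    # never go below the 50 GB default
  echo "$gb"
}
```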
| Field | Description | Default |
|---|---|---|
| Cloud Storage Size | Logical storage size (e.g., 1T, 500G, 2T) | 1T |

Valid range: 100G–999G or 1T–1000T. Charges are based on actual GCS usage.
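
The documented range can be expressed as a small check. This is an illustrative helper, not part of MayaNAS:

```sh
# Illustrative validator for the documented size range:
# 100G-999G or 1T-1000T
valid_storage_size() {
  case "$1" in
    [1-9][0-9][0-9]G) return 0 ;;                             # 100G-999G
    [1-9]T|[1-9][0-9]T|[1-9][0-9][0-9]T|1000T) return 0 ;;    # 1T-1000T
    *) return 1 ;;
  esac
}
```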

| Field | Description | Default |
|---|---|---|
| Allow Web UI Access (Port 2020) | Opens the firewall for the Web UI | true |
| Source IP ranges for Web UI | CIDR ranges allowed to reach the Web UI | 0.0.0.0/0 |

Click Deploy and wait for Infrastructure Manager to provision all resources. This typically takes 3–5 minutes for VM, disk, and bucket provisioning. After provisioning completes, the cluster auto-configures itself in under 2 minutes — one OpenZFS pool, one ZFS dataset, exported to your VPC subnet. No admin guide required, no manual setup.
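
If you prefer the CLI, provisioning progress can also be polled with Infrastructure Manager; the deployment name, region, and project here are placeholders, and this assumes the gcloud infra-manager commands are available in your gcloud version:

```sh
# Poll the Infrastructure Manager deployment until its state reaches ACTIVE
gcloud infra-manager deployments describe DEPLOYMENT_NAME \
    --location=REGION --project=PROJECT_ID --format="value(state)"
```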

The Web UI password is auto-generated. Retrieve it from the deployment outputs or from within the instance:

```sh
curl -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/attributes/mayanas-cloud_user_password
```

Access the Web UI:

```sh
# Option 1: Direct access (if port 2020 is open)
# Open http://<EXTERNAL_IP>:2020 in a browser

# Option 2: SSH tunnel (recommended for production)
gcloud compute ssh INSTANCE_NAME --zone=ZONE --project=PROJECT -- -L 2020:localhost:2020
# Then open http://localhost:2020
```

Login: admin / password from deployment output.

To SSH into an instance:

```sh
gcloud compute ssh mayanas@INSTANCE_NAME --zone=ZONE --project=PROJECT_ID
```

After creating a share in the Web UI:

```sh
# Active-Passive (single VIP)
sudo mount -t nfs <VIP_ADDRESS>:/<CLUSTER>-pool/<SHARE_NAME> /mnt/data

# Active-Active (each node has its own VIP)
sudo mount -t nfs <VIP1>:/<CLUSTER>-pool-node1/<SHARE> /mnt/node1
sudo mount -t nfs <VIP2>:/<CLUSTER>-pool-node2/<SHARE> /mnt/node2
```
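
To make a mount survive client reboots, the same Active-Passive mount can go in /etc/fstab (placeholders as above; _netdev defers mounting until the network is up):

```sh
# /etc/fstab entry for the Active-Passive share
<VIP_ADDRESS>:/<CLUSTER>-pool/<SHARE_NAME>  /mnt/data  nfs  defaults,_netdev  0  0
```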

The VIP is automatically assigned from the 10.100.x.0/24 range using a deterministic region-based algorithm. The VIP address is shown in the deployment outputs.
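
MayaNAS's actual assignment algorithm is not documented here; as a purely illustrative sketch of how a deterministic region-to-VIP mapping can work (hypothetical function, not MayaNAS code), one can hash the region name into the third octet:

```sh
# Purely illustrative: derive a stable per-region VIP in 10.100.x.0/24
region_vip() {
  region=$1
  node=${2:-1}
  # Hash the region name (cksum is deterministic) into octet 0-255
  octet=$(( $(printf '%s' "$region" | cksum | cut -d' ' -f1) % 256 ))
  echo "10.100.${octet}.${node}"
}
```

The same region always yields the same address, which is what makes the outputs reproducible across redeployments.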

The deployment provisions:

  • 1–2 Compute Engine instances (depending on deployment type)
  • GCS bucket(s) for data storage
  • SSD persistent disk(s) for metadata
  • Service account with Compute/Storage/Network Admin roles
  • Firewall rules (SSH, NFS 2049, optional Web UI 2020)
  • VIP alias IP range for HA failover
Troubleshooting commands (run on an instance):

```sh
# Check startup logs
tail -f /opt/mayastor/logs/mayanas-terraform-startup.log

# Check service status
systemctl status mayastor

# Check cluster status
mayacli cluster status
```
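
If a share fails to mount, you can also confirm the cluster is exporting it from any NFS client (assuming the showmount utility from nfs-utils is installed):

```sh
# List the exports advertised by the cluster VIP
showmount -e <VIP_ADDRESS>
```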