
MayaScale on GCP Marketplace

Deploy a 2-node MayaScale active-active storage cluster from the GCP Marketplace. MayaScale always deploys two nodes with server-side replication and automatic failover.

Prerequisites:

  • A GCP project with billing enabled
  • Compute Engine API enabled
  • IAM permissions: Compute Admin, Storage Admin, Network Admin, Service Account User

To launch the deployment:

  1. Go to GCP Marketplace
  2. Search for MayaScale Composable Storage
  3. Click Deploy
| Field | Description | Default |
| --- | --- | --- |
| Deployment Name | Unique cluster identifier (lowercase, numbers, hyphens) | |
| Performance & Availability Policy | Determines instance type, SSD count, and network config | Zonal Medium |

The performance policy is the key decision. It auto-configures everything else:

| Policy | Write/Read IOPS | Machine Type | SSDs/Node | Capacity/Node |
| --- | --- | --- | --- | --- |
| Zonal Basic | 75K / 100K | n2-highcpu-4 | 1 | 375 GB |
| Zonal Standard | 130K / 380K | n2-highcpu-8 | 2 | 750 GB |
| Zonal Medium | 200K / 700K | n2-highcpu-16 | 4 | 1.5 TB |
| Zonal High | 350K / 1.2M | n2-highcpu-32 | 8 | 3 TB |
| Zonal Ultra | 800K / 2M | n2-highcpu-64 | 16 | 6 TB |

Regional Policies (Cross-Zone HA, <2ms Latency)

| Policy | Write/Read IOPS | Machine Type | SSDs/Node | Capacity/Node |
| --- | --- | --- | --- | --- |
| Regional Basic | 63K / 130K | n2-highcpu-4 | 1 | 375 GB |
| Regional Standard | 126K / 400K | n2-highcpu-8 | 2 | 750 GB |
| Regional Medium | 207K / 780K | n2-highcpu-16 | 4 | 1.5 TB |
| Regional High | 333K / 1.2M | n2-highcpu-32 | 8 | 3 TB |
| Regional Ultra | 765K / 2M | n2-highcpu-64 | 16 | 6 TB |

Choose Zonal for maximum performance, Regional for zone-failure resilience.
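As a quick sanity check on the tiers above, per-node capacity is simply the local SSD count times 375 GB. Assuming the 2-node cluster replicates every write to both nodes (an assumption based on the active-active, server-side-replication design, not a figure from the tables), usable cluster capacity equals one node's capacity:

```python
# Sketch: per-node capacity from the policy tables above.
# Assumption: 2-way server-side replication means usable cluster
# capacity equals a single node's capacity.
GB_PER_SSD = 375  # each local NVMe SSD contributes 375 GB

ssds_per_node = {"Basic": 1, "Standard": 2, "Medium": 4, "High": 8, "Ultra": 16}

def usable_capacity_gb(policy: str) -> int:
    """Usable cluster capacity in GB under the replication assumption."""
    return ssds_per_node[policy] * GB_PER_SSD

for policy in ssds_per_node:
    print(f"{policy:8s} {usable_capacity_gb(policy):>5d} GB usable")
```

The 375 GB multiple matches every row of both tables, which is why a policy fully determines capacity once the SSD count is fixed.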

| Field | Description | Default |
| --- | --- | --- |
| Primary Zone | Zone for the primary node (secondary auto-selected) | |
| Machine Type | Leave empty for auto-selection from policy | (automatic) |

| Field | Description | Default |
| --- | --- | --- |
| Enable Web UI Public Access (Port 2020) | Opens firewall for the management UI | false |

Click Deploy. Infrastructure Manager provisions all resources in 3–5 minutes.

  • 2 Compute Engine instances with local NVMe SSDs
  • Dedicated backend network (10.200.0.0/24) with MTU 8896 jumbo frames
  • Service account with Compute/Storage/Network Admin roles
  • Firewall rules (SSH, backend replication, optional Web UI)
  • Placement policy (zonal) or cross-zone distribution (regional)
  • TIER_1 networking enabled for 30+ vCPU machines

The admin password is shown in the deployment outputs. It can also be retrieved from within either instance via the metadata server:

```sh
curl -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/attributes/mayascale-cloud_user_password
```
```sh
# Direct (if enabled)
http://<EXTERNAL_IP>:2020

# SSH tunnel (recommended)
ssh -L 2020:localhost:2020 mayascale@<EXTERNAL_IP>
# Then open http://localhost:2020
```

Log in as admin with the password from the deployment output.

```sh
ssh mayascale@<EXTERNAL_IP>
```

From a client VM in the same VPC:

```sh
# Install the NVMe CLI
sudo apt install nvme-cli   # Debian/Ubuntu
sudo yum install nvme-cli   # RHEL/Rocky

# Discover available targets
nvme discover -t tcp -a <VIP_ADDRESS> -s 4420

# Connect to a volume
nvme connect -t tcp \
  -n nqn.2019-05.com.zettalane:mayascale-data-node-1 \
  -a <VIP_ADDRESS> \
  -s 4420

# Verify the connection
nvme list
lsblk
```

The volume appears as /dev/nvmeXn1. Format and mount:

```sh
sudo mkfs.ext4 /dev/nvme1n1
sudo mkdir -p /mnt/data
sudo mount /dev/nvme1n1 /mnt/data
```
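To remount automatically after a reboot, an /etc/fstab entry can be added. This is a sketch, not MayaScale-specific guidance: `_netdev` defers the mount until networking is up, and `nofail` keeps boot from hanging if the NVMe/TCP connection has not yet been re-established.

```sh
# Hypothetical /etc/fstab entry (adjust the device or use its UUID):
# /dev/nvme1n1  /mnt/data  ext4  _netdev,nofail  0 2
```

Note that the fstab entry only handles the mount; the NVMe/TCP connection itself must also be restored at boot.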

For policies with 2+ SSDs per node, multiple volumes are available across both VIPs:

```sh
# Volume 1 via primary VIP
nvme connect -t tcp -n nqn.2019-05.com.zettalane:mayascale-data-node-1 -a <VIP1> -s 4420

# Volume 2 via secondary VIP
nvme connect -t tcp -n nqn.2019-05.com.zettalane:mayascale-data-node-2 -a <VIP2> -s 4421
```
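Connections made with `nvme connect` do not survive a reboot. One common approach (a sketch using standard nvme-cli behavior, not documented MayaScale guidance) is to list the targets in `/etc/nvme/discovery.conf` and re-run `nvme connect-all` at boot, for example from a systemd unit:

```sh
# /etc/nvme/discovery.conf — one discovery target per line (assumed VIPs):
# -t tcp -a <VIP1> -s 4420
# -t tcp -a <VIP2> -s 4421

# Reconnect to everything listed:
sudo nvme connect-all
```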
```sh
# Check startup logs (only the primary node has a startup script)
tail -f /var/log/syslog | grep mayascale

# Check NVMe-oF target status
nvmet status

# Check cluster health
ssh mayascale@<NODE1_IP> "mayacli cluster status"
```