# MayaScale on GCP Marketplace
Deploy a 2-node MayaScale active-active storage cluster from the GCP Marketplace. MayaScale always deploys two nodes with server-side replication and automatic failover.
## Prerequisites

- A GCP project with billing enabled
- Compute Engine API enabled
- IAM permissions: Compute Admin, Storage Admin, Network Admin, Service Account User
## Step 1: Find MayaScale in GCP Marketplace

- Go to GCP Marketplace
- Search for MayaScale Composable Storage
- Click Deploy
## Step 2: Configure Deployment

### Deployment Configuration

| Field | Description | Default |
|---|---|---|
| Deployment Name | Unique cluster identifier (lowercase, numbers, hyphens) | — |
| Performance & Availability Policy | Determines instance type, SSD count, and network config | Zonal Medium |
The performance policy is the key decision. It auto-configures everything else:
### Zonal Policies (Same-Zone, <1ms Latency)

| Policy | Write/Read IOPS | Machine Type | SSDs/Node | Capacity/Node |
|---|---|---|---|---|
| Zonal Basic | 75K / 100K | n2-highcpu-4 | 1 | 375 GB |
| Zonal Standard | 130K / 380K | n2-highcpu-8 | 2 | 750 GB |
| Zonal Medium | 200K / 700K | n2-highcpu-16 | 4 | 1.5 TB |
| Zonal High | 350K / 1.2M | n2-highcpu-32 | 8 | 3 TB |
| Zonal Ultra | 800K / 2M | n2-highcpu-64 | 16 | 6 TB |
### Regional Policies (Cross-Zone HA, <2ms Latency)

| Policy | Write/Read IOPS | Machine Type | SSDs/Node | Capacity/Node |
|---|---|---|---|---|
| Regional Basic | 63K / 130K | n2-highcpu-4 | 1 | 375 GB |
| Regional Standard | 126K / 400K | n2-highcpu-8 | 2 | 750 GB |
| Regional Medium | 207K / 780K | n2-highcpu-16 | 4 | 1.5 TB |
| Regional High | 333K / 1.2M | n2-highcpu-32 | 8 | 3 TB |
| Regional Ultra | 765K / 2M | n2-highcpu-64 | 16 | 6 TB |
Choose a Zonal policy for maximum performance, or a Regional policy for resilience to a zone failure.
### Compute Resources

| Field | Description | Default |
|---|---|---|
| Primary Zone | Zone for primary node (secondary auto-selected) | — |
| Machine Type | Leave empty for auto-selection from policy | (automatic) |
### Network & Access

| Field | Description | Default |
|---|---|---|
| Enable Web UI Public Access (Port 2020) | Opens firewall for management UI | false |
## Step 3: Deploy

Click Deploy. Infrastructure Manager provisions all resources in 3–5 minutes.
## Post-Deployment

### What Gets Deployed

- 2 Compute Engine instances with local NVMe SSDs
- Dedicated backend network (10.200.0.0/24) with MTU 8896 jumbo frames
- Service account with Compute/Storage/Network Admin roles
- Firewall rules (SSH, backend replication, optional Web UI)
- Placement policy (zonal) or cross-zone distribution (regional)
- TIER_1 networking enabled for 30+ vCPU machines
### Retrieve Credentials

The admin password is shown in the deployment outputs. From within the instance:

```shell
curl -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/attributes/mayascale-cloud_user_password
```

### Access the Web UI

```shell
# Direct (if enabled)
http://<EXTERNAL_IP>:2020

# SSH tunnel (recommended)
ssh -L 2020:localhost:2020 mayascale@<EXTERNAL_IP>
# Then open http://localhost:2020
```

Login: admin / password from deployment output.
### SSH Access

```shell
ssh mayascale@<EXTERNAL_IP>
```

### Connect NVMe-oF Client

From a client VM in the same VPC:

```shell
# Install NVMe CLI
sudo apt install nvme-cli   # Debian/Ubuntu
sudo yum install nvme-cli   # RHEL/Rocky

# Discover available targets
nvme discover -t tcp -a <VIP_ADDRESS> -s 4420

# Connect to volume
nvme connect -t tcp \
  -n nqn.2019-05.com.zettalane:mayascale-data-node-1 \
  -a <VIP_ADDRESS> \
  -s 4420

# Verify connection
nvme list
lsblk
```

The volume appears as /dev/nvmeXn1. Format and mount:

```shell
sudo mkfs.ext4 /dev/nvme1n1
sudo mount /dev/nvme1n1 /mnt/data
```
### Multiple Volumes

For policies with 2+ SSDs per node, multiple volumes are available across both VIPs:
```shell
# Volume 1 via primary VIP
nvme connect -t tcp -n nqn.2019-05.com.zettalane:mayascale-data-node-1 -a <VIP1> -s 4420

# Volume 2 via secondary VIP
nvme connect -t tcp -n nqn.2019-05.com.zettalane:mayascale-data-node-2 -a <VIP2> -s 4421
```

### Troubleshooting

```shell
# Check startup logs (only the primary node runs the startup script)
tail -f /var/log/syslog | grep mayascale

# Check NVMe-oF target status
nvmet status

# Check cluster health
ssh mayascale@<NODE1_IP> "mayacli cluster status"
```