examples: Connect multiple Nutanix sites and migrate VMs between them #68
How are you expecting the example to look? Should the steps be collected in a README?
Yes, I would imagine the artifact of this looking like:
I am able to provision two clusters, but I'm unable to connect to the bastion host: `Vasubabus-MacBook-Pro:nutanix-clusters vasubabu$ ssh -i`
If you provisioned the node in the last 24 hours, you can use console.equinix.com to see the root password, and you should be able to log in with that. Once logged in, check whether your SSH key is included in `~/.ssh/authorized_keys`. The Nutanix cluster is provisioned via the bastion host using this SSH key, so if the cluster provisioned successfully, this key was working at some point.
This is not the bastion public IP; this is the SOS user@host. Either the output variable is misnamed, includes the wrong value, or what you copy/pasted into the comment is not accurate.
In the demo https://www.youtube.com/watch?v=aUD26EJmtIc&t=30s the protection policy is already configured. Do you have any example steps for creating a protection policy?
I don't have background on how to configure Nutanix protection policies; docs.nutanix.com and the community are where I would search.
How can we avoid this common virtual_ip_address being shared by both clusters? Would using different subnet addresses solve the problem?
@codinja1188 yes, using a different subnet for each cluster solves it.
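One way to sanity-check the advice above is to confirm the two cluster subnets do not overlap, which guarantees each cluster's virtual IP is distinct. A minimal sketch using Python's stdlib `ipaddress` — the subnet values and VIP offsets here are illustrative, not taken from the repo:

```python
import ipaddress

# Hypothetical per-cluster subnets; the real values would come from
# each cluster's cluster_subnet input.
cluster_a = ipaddress.ip_network("192.168.96.0/22")
cluster_b = ipaddress.ip_network("192.168.100.0/22")

# Non-overlapping subnets mean the clusters cannot end up sharing
# the same virtual IP address.
assert not cluster_a.overlaps(cluster_b)

# Picking the same host offset in each subnet still yields distinct VIPs.
vip_a = cluster_a.network_address + 10  # 192.168.96.10
vip_b = cluster_b.network_address + 10  # 192.168.100.10
assert vip_a != vip_b
```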
Observed one new issue: I am trying to create a remote config in cluster1, but unfortunately the cluster is not accessible. Can you help me determine which IP is accessible externally?
The bastion public IPs are the only public addresses in either cluster. If both clusters share the same VRF, the nodes in both clusters should be able to reach each other by adding an OS-level route to the Metal Gateway for the whole VRF CIDR (not just the part of the VRF assigned to their cluster).
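To illustrate why the route must cover the whole VRF CIDR rather than one cluster's slice, here is a small sketch with Python's `ipaddress` (the node address is hypothetical):

```python
import ipaddress

vrf = ipaddress.ip_network("192.168.96.0/21")         # whole VRF CIDR
cluster_a = ipaddress.ip_network("192.168.96.0/22")   # cluster A's slice
node_in_b = ipaddress.ip_address("192.168.100.50")    # a node in cluster B

# A route covering only cluster A's /22 does not include cluster B's nodes...
assert node_in_b not in cluster_a

# ...but a route for the full /21 VRF CIDR covers nodes in both clusters.
assert node_in_b in vrf
```

On a Linux node the corresponding route would look something like `ip route add 192.168.96.0/21 via <metal-gateway-ip>` (gateway address depends on your reservation).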
Looks like the DHCP lease failed to be created. Can you check and point me to the failure?
@codinja1188 is the code that ran into this error available somewhere? Can you push it to #71? I can't tell without seeing the code, but since all subnet references within the module are derived from cluster_subnet, that is where I would look first.
The problem could be that both clusters live in the same VLAN. This would mean that DHCP services from both bastion nodes can offer competing address space (whoops). We'll have to use separate VLANs and Gateways per cluster. We can still share one VRF across the two VLANs, and they will have the Layer 3 routing we need.

```mermaid
graph TD
    Internet[Internet 🌐]
    A[Common VRF: 192.168.96.0/21]
    subgraph ClusterA["Cluster A"]
        direction TB
        A1[VLAN A]
        A2[VRF IP Reservation A<br>192.168.96.0/22]
        A3[Gateway A]
        A4[Bastion A<br><DHCP, NTP, NAT>]
        A5[Nutanix Nodes A]
    end
    subgraph ClusterB["Cluster B"]
        direction TB
        B1[VLAN B]
        B2[VRF IP Reservation B<br>192.168.100.0/22]
        B3[Gateway B]
        B4[Bastion B<br><DHCP, NTP, NAT>]
        B5[Nutanix Nodes B]
    end
    A -->|192.168.96.0/22| A1
    A1 --> A2
    A2 --> A3
    A3 --> A4
    A4 --> A5
    A -->|192.168.100.0/22| B1
    B1 --> B2
    B2 --> B3
    B3 --> B4
    B4 --> B5
    Internet --> A4
    Internet --> B4
```
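The VRF split shown in the diagram above can be computed mechanically: one /21 divides into exactly two /22 reservations, one per cluster VLAN. A quick check with Python's `ipaddress`:

```python
import ipaddress

# The common VRF from the diagram.
vrf = ipaddress.ip_network("192.168.96.0/21")

# Split it into equal /22 reservations, one per cluster VLAN.
reservations = list(vrf.subnets(new_prefix=22))

# Yields 192.168.96.0/22 (Cluster A) and 192.168.100.0/22 (Cluster B).
assert len(reservations) == 2
```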
Here are more details to understand the issue. Terraform outputs:

- Cluster1 bastion network configuration:
- Cluster1 Nutanix AHV host:
- Cluster1 Nutanix CVM node:
- Cluster2 bastion network configuration:
- Cluster2 Nutanix AHV host:
- Cluster2 Nutanix CVM node:
Here is the PR to create the common VRF between the clusters: #79
The problem we discussed is that the 192.168.96.0/21 network needs to be known in both Cluster A and Cluster B, specifically so that the netmask is known as /21 in both clusters. For example, the bastion nodes should be 192.168.96.2/21 and 192.168.100.2/21. The DHCP advertisements should have gateway addresses of either .96.1 or .100.1 (either will work); ideally we use the one specific to the /22 range for each cluster. The DHCP range for each cluster should be limited to addresses within the /22 for that cluster, but we need to be careful that the subnet is /21.
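The addressing plan described above can be checked with a short script: the bastion interfaces carry the /21 netmask and therefore land in the same network, while each DHCP pool stays inside its cluster's /22. The pool bounds below are illustrative, not from the repo:

```python
import ipaddress

vrf = ipaddress.ip_network("192.168.96.0/21")

# Both bastions use the /21 netmask, so they share one on-link network.
bastion_a = ipaddress.ip_interface("192.168.96.2/21")
bastion_b = ipaddress.ip_interface("192.168.100.2/21")
assert bastion_a.network == vrf and bastion_b.network == vrf

# Cluster A's DHCP pool (hypothetical bounds) must stay inside its /22,
# even though the advertised netmask is /21.
cluster_a = ipaddress.ip_network("192.168.96.0/22")
pool_a = (ipaddress.ip_address("192.168.96.10"),
          ipaddress.ip_address("192.168.99.250"))
assert all(addr in cluster_a for addr in pool_a)
```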
As you suggested, I used a common gateway (192.168.96.1) between Cluster A and Cluster B.

- Cluster A (bastion):
- Cluster A (Nutanix AHV host):
- Cluster A (CVM host):

Observations on Cluster B:

- Cluster B (bastion):
- Cluster B (Nutanix host):
- Cluster B (CVM host):

Unfortunately, the common gateway IP (192.168.96.1) is not pingable from Cluster B. Help me here.
Here are the changes I applied:
My run-through (
On a subsequent
Per previous conversations with @codinja1188, this may be due to specifying an even number of
- Create an example/ which creates two Nutanix clusters. For the purposes of the demo and limitations on availability, these sites may be in the same physical location.
- Set up a protection policy between those clusters.
- Create a VM in one of the clusters and migrate it to the other.
https://www.youtube.com/watch?v=aUD26EJmtIc&t=30s
This may not be a fully automatable example; it may be supported by example Terraform that creates the multiple sites and any network resources needed to connect those environments securely and reliably (such as a Fabric connection joining the VLANs, or extending different VRF ranges to each cluster). The README.md will go over what is automated and what needs to be done manually.
The example/README.md would take advantage of much of the same instruction provided in https://equinix-labs.github.io/nutanix-on-equinix-metal-workshop/.