The same applies to cloud-api. We currently have 10.244.0.0/16 hard-coded in metal-core for route-maps that allow announcements from within this network, and in cloud-api, where this network is specified for pod CIDR usage.
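For illustration, here is a minimal Go sketch of what making that prefix configurable instead of hard-coded could look like. The `METAL_CORE_OVERLAY_CIDRS` variable and the `overlayCIDRs` helper are hypothetical names invented for this sketch, not existing metal-core configuration:

```go
package main

import (
	"fmt"
	"net"
	"os"
	"strings"
)

// overlayCIDRs is a hypothetical sketch: instead of hard-coding 10.244.0.0/16,
// metal-core could read the allowed overlay prefixes from configuration.
// METAL_CORE_OVERLAY_CIDRS is an assumed name, not an existing setting.
func overlayCIDRs() ([]*net.IPNet, error) {
	raw := os.Getenv("METAL_CORE_OVERLAY_CIDRS") // e.g. "10.244.0.0/16,10.96.0.0/12"
	if raw == "" {
		raw = "10.244.0.0/16" // fall back to today's hard-coded default
	}
	var nets []*net.IPNet
	for _, c := range strings.Split(raw, ",") {
		_, n, err := net.ParseCIDR(strings.TrimSpace(c))
		if err != nil {
			return nil, fmt.Errorf("invalid overlay cidr %q: %w", c, err)
		}
		nets = append(nets, n)
	}
	return nets, nil
}

func main() {
	nets, err := overlayCIDRs()
	if err != nil {
		panic(err)
	}
	fmt.Println(nets) // prefixes that the generated route-maps would allow
}
```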
In practice this is a user-defined network in which the user wants to start services in an overlay network that is only visible and routable within the boundaries of their private network.
We could consider allowing users to specify such networks without any hard checks on whether this network, or multiple private overlay networks, overlaps with existing networks.
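As a rough sketch of what "no hard checks" could mean in code, assuming networks carried some overlay flag (the `networkSpec` type and its `PrivateOverlay` field are invented for illustration and are not metal-api fields):

```go
package main

import (
	"fmt"
	"net/netip"
)

// Hypothetical relaxed validation: overlap with existing networks is only
// rejected for regular networks, while networks flagged as private overlays
// may overlap freely.
type networkSpec struct {
	Prefix         netip.Prefix
	PrivateOverlay bool
}

func validate(candidate networkSpec, existing []networkSpec) error {
	if candidate.PrivateOverlay {
		// user-defined overlay: skip hard overlap checks
		return nil
	}
	for _, e := range existing {
		if candidate.Prefix.Overlaps(e.Prefix) {
			return fmt.Errorf("%s overlaps existing network %s", candidate.Prefix, e.Prefix)
		}
	}
	return nil
}

func main() {
	existing := []networkSpec{{Prefix: netip.MustParsePrefix("10.0.0.0/8")}}
	overlay := networkSpec{Prefix: netip.MustParsePrefix("10.244.0.0/16"), PrivateOverlay: true}
	fmt.Println(validate(overlay, existing)) // <nil>: overlapping overlay is accepted
}
```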
Regarding the technical implementation of this... are there any ideas?
I think this issue is pretty urgent.
To me it does not sound easy to do without making the API harder to understand for end users.
We need to keep in mind that the metal-api does not know anything about clusters or Kubernetes.
Yes, technically this should be handled like any other network, but it must be "attachable" to any cluster/machine. We could add it by default to all machines, but that sounds crazy, doesn't it?
The current workaround (splitting the 10.244.x.x network into smaller pieces, still hard-coded into metal-core) restricts Kubernetes clusters to a maximum of 4094 pods and 4094 services.
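For reference, the arithmetic behind that limit, assuming the split is into /20 chunks (which is what the 4094 figure implies):

```go
package main

import "fmt"

// Back-of-the-envelope arithmetic for the workaround above: each /20 chunk
// holds 2^(32-20) = 4096 addresses, of which 4094 are usable once the network
// and broadcast addresses are reserved.
func main() {
	const chunkBits = 20
	addresses := 1 << (32 - chunkBits) // 4096 addresses per /20
	usable := addresses - 2            // 4094 usable pod or service IPs
	chunks := 1 << (chunkBits - 16)    // 16 such chunks fit into one /16
	fmt.Println(addresses, usable, chunks)
}
```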