Init cluster failed due to discovery container issue #250
Comments
What's your k8s version?
We recently added support for K8s 1.14+; #249 was showing the same error. Are you using the latest master branch?
Hello guys! Thanks for your quick replies :-)
`[root@booger CPU-Manager-for-Kubernetes]# oc version`
So 1.13 at the moment.
Yes, I was using the master branch. Should I use remotes/origin/cmk-release-v1.3.1 instead?
What is the exact command that the "discover" container is executing? Is it provided by this pod definition?
I tried out the latest master branch with K8s 1.13 and didn't run into this issue. Does your master branch include the latest commit for K8s 1.14 support? Yes, that's the command run by the discover container. @mJace I haven't tested with 1.10, but I imagine so.
Yes, it has this commit: cc50f8f |
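A quick way to verify that a checkout actually contains that commit:

```sh
# Exits 0 (and prints the message) when commit cc50f8f is an
# ancestor of the current HEAD.
git merge-base --is-ancestor cc50f8f HEAD && echo "commit present"
```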
@oglok Does the environment variable NODE_NAME exist in your pod? I also tested your repo's cmk image in my k8s v1.14 cluster, and it works.
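For reference, the usual way a pod gets NODE_NAME is through the Kubernetes downward API; a minimal sketch of the env entry (the surrounding container spec is omitted):

```yaml
# Expose the name of the node this pod is scheduled on as NODE_NAME
# (standard Kubernetes downward API field reference).
env:
- name: NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
```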
Hey! I couldn't catch the "discover" container, but the "install" one has that env var:
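If the discover container exits too quickly to exec into, its env entries can still be read from the pod spec; a sketch, where the pod name is a placeholder:

```sh
# Read the env entries from the pod spec instead of a live shell;
# <discover-pod> is a placeholder for the actual pod name.
kubectl get pod <discover-pod> -o jsonpath='{.spec.containers[0].env}'
```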
I've run the cmk-discover pod manifest manually and got a shell into it. It has the NODE_NAME var set, and then it fails with the same trace as before:
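For anyone trying to reproduce this, the manual check might look like the following (the manifest file name and pod name are assumptions):

```sh
# Launch the discover pod by hand, then confirm NODE_NAME is set;
# the file name and pod name here are assumptions.
kubectl apply -f cmk-discover-pod.yaml
kubectl exec cmk-discover-pod -- env | grep NODE_NAME
```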
Any clue, guys?
@oglok Did you manage to resolve the issue with the discover pod? I am facing the same issue.
Original issue description:

Having this cluster-init pod definition (a representative sketch is included below):
The image is stored in my own Quay registry, to keep it somewhere easily accessible.
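For orientation, a minimal cluster-init pod definition modeled on CMK's stock example looks roughly like this; the image reference, service account, and host list are assumptions, not the poster's actual manifest:

```yaml
# Sketch of a cluster-init pod modeled on CMK's stock example;
# the image, service account, and host list are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: cmk-cluster-init-pod
spec:
  serviceAccountName: cmk-serviceaccount
  restartPolicy: Never
  containers:
  - name: cmk-cluster-init
    image: quay.io/<user>/cmk:v1.3.1
    command: ["/cmk/cmk.py"]
    args: ["cluster-init", "--host-list=node1,node2,node3"]
```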
The install container places the cmk binary on the workers under /opt/bin. However, I'm getting the following trace in the discover container:
I'm not sure what command is being run, but I can do something like this on the worker nodes:
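A sketch of that kind of manual check (the discover subcommand and conf dir are assumptions based on CMK's documented CLI; NODE_NAME is exported because the thread above suggests discover needs it):

```sh
# Run the installed binary by hand on a worker node; the subcommand,
# conf dir, and NODE_NAME requirement are assumptions.
export NODE_NAME=$(hostname)
/opt/bin/cmk discover --conf-dir=/etc/cmk
```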