Improve pipe status when the pod is full of exceptions #4977
Comments
I think the error is cascaded from the Integration (which is the custom resource in charge of running the application) to the Pipe. We'll have a look to ensure this is still the case, thanks for reporting.
I had a look and this is what's happening: the Integration, and therefore the Pipe, have a very short window, likely a few seconds while the Pod is trying to restart, during which they are in the Running state, but they eventually turn to the correct error state.
I think this is the expected behavior, in the sense that the Integration is running successfully until the Pod crashes. I am not sure if we can change this behavior, and if that would make sense, considering that the Pipe and Integration eventually set their status correctly. Wdyt @lburgazzoli ?
When the user uses a startup probe, the Integration won't turn as running until the condition is reached Closes apache#4977
I applied a fix that should close this issue. However, it only works when the user defines a readiness probe (via the health trait, for instance). The Integration (hence, the Pipe) won't be moved to the running phase until the Pod Ready condition is reached.
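For reference, here is a minimal sketch of how a user could enable the health trait on a Pipe so that the operator adds probes to the generated Pod. The annotation key follows the trait.camel.apache.org/<trait>.<property> convention; the exact trait properties available depend on the Camel K version, so treat this as an assumption rather than the verified configuration for this fix.

```yaml
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-s3-source-pipe              # name taken from this issue's report
  annotations:
    # Assumption: enabling the health trait makes the operator add readiness/liveness
    # probes to the Integration Pod, so the Integration (and hence the Pipe) is only
    # marked Running once the Pod's Ready condition is actually reached.
    trait.camel.apache.org/health.enabled: "true"
spec: {}                                 # source and sink omitted for brevity
```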
I think it all depends on the reason for which the Pod fails, i.e.: what was the reason for this Ready -> Error loop?
@lburgazzoli no, it was more a general problem in the order in which we were monitoring the Integrations. I found the root cause and applied a fix that is under review right now. Thanks.
ah ok, going through my backlog of notifications so I'm not fully up to date yet :)
Requirement

As a developer, I want to see from k get pipes.camel.apache.org whether things are ok.

Problem

I have a broken Kamelet in a Pipe; see apache/camel-kamelets#1785 for details. The Pod for that aws-s3-source-pipe is not working, since it is full of exceptions. Now, checking the pipe status, it is still reported as ready. In fact the Pipe is not ready, since its Pod throws exceptions, and it should be set to ready:false, with a reason for that, like other Kubernetes resources do. Some text from the exception could be used as the reason. But in order to learn more about the error I currently have to check the log of the Pod, and do not see this at a kube-native level (e.g. via the CRs).
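To make the request concrete, the following is an illustrative sketch of the kind of status the Pipe could expose in this situation. It is not the operator's actual status schema; the reason value and message are hypothetical placeholders.

```yaml
status:
  phase: Error
  conditions:
    - type: Ready
      status: "False"
      reason: PodCrashLooping                        # hypothetical reason value
      message: "<short excerpt of the exception from the Pod log>"
```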
Proposal

Reflect the exception: do not set the status to READY, but report that it is NOT ready, with the exception details as the reason.

Open questions
No response