Is your feature request related to a problem? Please describe.
Right now, when you lock a cluster or resource, you don't have an easy way to determine what will change when you unlock it. In the puppet/chef world you would do a dry run and get a list of every resource the run intends to change, which you can review before doing a real run of the agents.
Describe the solution you'd like
In an ideal world we would run a command/API call and get detailed output listing every resource and what specifically will change on the next run of the controllers. This would match the puppet/chef dry-run behavior.
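For illustration only, here is the kind of output such a dry run could produce. The command name and output format below are made up; nothing like this exists today:

```
$ razeectl dry-run --cluster prod-east        # hypothetical command
Deployment/web-frontend   image: app:1.4.2 -> app:1.5.0
ConfigMap/web-config      data.LOG_LEVEL: "info" -> "debug"
2 of 14 resources would change on the next controller run.
```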
Describe alternatives you've considered
After talking with @alewitt2, it seems a doable version of this is to have the controllers continue to process things as usual, but when a cluster or resource is locked the controller would just record that it "wants" to change a resource instead of actually changing it. End users could then take that list, compare the current resource against whatever is providing the newest version, and figure out the changes for themselves.
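A minimal sketch of that "record instead of apply" alternative, assuming a reconcile-style controller. Every name here (`Resource`, `PendingChange`, `fetchDesired`, `fetchCurrent`, `isLocked`, `apply`) is hypothetical and not part of any existing Razee API:

```typescript
interface Resource {
  kind: string;
  namespace: string;
  name: string;
}

interface PendingChange {
  resource: Resource;
  observedAt: string; // when the controller first deferred the change
}

// What the controller "wants" to change while the lock is held.
const pendingChanges: PendingChange[] = [];

// Hypothetical stubs so the sketch is self-contained; a real controller
// would talk to the cluster and the source of truth instead.
async function fetchDesired(r: Resource): Promise<unknown> { return { image: "app:1.5.0" }; }
async function fetchCurrent(r: Resource): Promise<unknown> { return { image: "app:1.4.2" }; }
async function isLocked(r: Resource): Promise<boolean> { return true; }
async function apply(desired: unknown): Promise<void> { /* apply to cluster */ }

function deepEqual(a: unknown, b: unknown): boolean {
  return JSON.stringify(a) === JSON.stringify(b);
}

async function reconcile(resource: Resource): Promise<void> {
  const desired = await fetchDesired(resource); // newest spec from the source
  const current = await fetchCurrent(resource); // what is on the cluster now

  if (deepEqual(desired, current)) return; // nothing would change

  if (await isLocked(resource)) {
    // Locked: record the intent instead of applying, so users can diff
    // current vs. desired before unlocking.
    pendingChanges.push({ resource, observedAt: new Date().toISOString() });
    return;
  }

  await apply(desired); // normal, unlocked path
}
```

The list in `pendingChanges` is the user-facing artifact: it names what would change without computing the diff itself, which keeps the controller change small at the cost of pushing the actual comparison onto the user.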