Unable to pass DNS challenge with Caddy 2.8+ #42
Wildcard DNS challenge stopped working after update to Caddy 2.8.

The minimum reproducible setup:

Caddy config:

Dockerfile:

Logs:

Everything passes fine with Caddy 2.7.6. Any suggestions are appreciated.

Comments
Same issue here. I tried re-issuing my AWS keys, but AWS is reporting that they are "not used". I think for some reason the plugin is not presenting the credentials.
I am wondering if we just need to bump the Caddy version, since there were so many breaking changes (see line 6 in 8e49e75).
It looks like it is related to this issue: libdns/route53#235 (comment), which in turn is related to aws/aws-sdk-go-v2#2370 (comment).
There is a bug in the version of the AWS SDK that libdns/route53 currently uses, so instead use a fork that has the SDK version bumped. Related:

* aws/aws-sdk-go-v2#2370 (comment)
* libdns/route53#235
* caddy-dns/route53#42
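For anyone unsure how to build against such a fork, here is a minimal sketch using xcaddy's module-replacement syntax; the fork path and branch are placeholders, not the actual fork referenced in this thread:

    # Build Caddy 2.8.4 with this plugin, swapping libdns/route53 for a fork.
    # "yourname/route53@sdk-bump" is a placeholder; substitute the real fork.
    xcaddy build v2.8.4 \
      --with github.com/caddy-dns/route53 \
      --with github.com/libdns/route53=github.com/yourname/route53@sdk-bump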
Ran into the same issue with a single individual domain, not a wildcard. The fix that ryantiger685 mentions worked for me. Looks like the PRs in that repository need to get merged to fix this officially.

Edit: Just tested a wildcard and that's working with this fix as well.
Just ran into this as well after upgrading Caddy to v2.8.4.
Could you test this with the latest version and the following config?

    {
      "module": "acme",
      "challenges": {
        "dns": {
          "provider": {
            "name": "route53",
            "wait_for_propagation": true
          }
        }
      }
    }
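For readers using a Caddyfile instead of JSON, the equivalent would look roughly like the snippets later in this thread; the site address here is a placeholder:

    example.com {
        tls {
            dns route53 {
                wait_for_propagation true
            }
        }
    }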
FWIW, I'm using a Dockerfile to build https://github.com/lucaslorentz/caddy-docker-proxy with this plugin, and simply rebuilding the container with the latest release of this plugin and Caddy 2.8.4 was enough to solve the DNS challenge problem described in this thread, although I am not using a wildcard domain. I did not need to use the wait_for_propagation option.
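A minimal sketch of such a Dockerfile, following the multi-stage build pattern caddy-docker-proxy documents; the image tags are illustrative:

    # Build stage: compile Caddy with caddy-docker-proxy and the route53 plugin.
    FROM caddy:2.8.4-builder AS builder
    RUN xcaddy build \
        --with github.com/lucaslorentz/caddy-docker-proxy/v2 \
        --with github.com/caddy-dns/route53

    # Runtime stage: copy the custom binary over the stock one.
    FROM caddy:2.8.4-alpine
    COPY --from=builder /usr/bin/caddy /usr/bin/caddy
    CMD ["caddy", "docker-proxy"]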
Yes, this works! Just tested with a new domain. Feels good removing all the hacks :)

This may be unrelated but just to note: I did get a new error from Route 53 when I added the wildcard domain:

    {
      "level": "error",
      "ts": 1719515037.2461495,
      "logger": "tls.obtain",
      "msg": "will retry",
      "error": "[*.stage.foo.bar.com] Obtain: [*.stage.foo.bar.com] solving challenges: presenting for challenge: adding temporary record for zone \"foo.bar.com.\": exceeded max wait time for ResourceRecordSetsChanged waiter (order=https://acme-staging-v02.api.letsencrypt.org/acme/order/152473533/17457386443) (ca=https://acme-staging-v02.api.letsencrypt.org/directory)",
      "attempt": 4,
      "retrying_in": 300,
      "elapsed": 546.902648806,
      "max_duration": 2592000
    }

Edit: I manually deleted the TXT record from Route 53, restarted Caddy, and the wildcard domain works! Not sure what happened here the first time, but it might just have been something on my end. These two were the first errors, which led me to do the extra troubleshooting:

    {
      "level": "error",
      "ts": 1719514555.4299963,
      "logger": "tls.issuance.acme.acme_client",
      "msg": "cleaning up solver",
      "identifier": "stage.foo.bar.com",
      "challenge_type": "dns-01",
      "error": "deleting temporary record for name \"foo.bar.com.\" in zone {\"\" \"TXT\" \"_acme-challenge.stage\" \"wEz6Z5Ta1vy5Z9ebcVcfyZTmptaYdfc-QtYRA_wV6Bs\" \"0s\" '\\x00' '\\x00'}: exceeded max wait time for ResourceRecordSetsChanged waiter"
    }

    {
      "level": "error",
      "ts": 1719514643.3972101,
      "logger": "tls.issuance.acme.acme_client",
      "msg": "cleaning up solver",
      "identifier": "*.stage.foo.bar.com",
      "challenge_type": "dns-01",
      "error": "deleting temporary record for name \"foo.bar.com.\" in zone {\"\" \"TXT\" \"_acme-challenge.stage\" \"JvKk2qrEWpbsgvZ06rU1GKc28NKvKAxP_gwc-j1IVGA\" \"0s\" '\\x00' '\\x00'}: operation error Route 53: ChangeResourceRecordSets, https response error StatusCode: 400, RequestID: d4277a4b-bef0-423b-bfef-8e68495ea501, InvalidInput: Invalid XML ; javax.xml.stream.XMLStreamException: org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 248; cvc-complex-type.2.4.b: The content of element 'ResourceRecords' is not complete. One of '{\"https://route53.amazonaws.com/doc/2013-04-01/\":ResourceRecord}' is expected."
    }
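If you need to hunt down leftover challenge records like the one manually deleted above, here is a hedged AWS CLI sketch; the hosted zone ID is a placeholder:

    # List TXT records in the zone whose name contains "_acme-challenge".
    aws route53 list-resource-record-sets \
      --hosted-zone-id Z0123456789ABCDEFGHIJ \
      --query "ResourceRecordSets[?Type=='TXT' && contains(Name, '_acme-challenge')]"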
fwiw, the plugin can take the value from the
@kdevan The
@aymanbagabas Hi! Just to clarify, we should be setting
We still get the "exceeded max wait time for ResourceRecordSetsChanged waiter" error. We have set wait_for_propagation to true and max_wait_dur to 120. Anyone else still having this issue?
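For reference, here is what that combination looks like in Caddyfile form, using the option names that appear in this thread; the values mirror this report and are illustrative:

    dns route53 {
        wait_for_propagation true
        max_wait_dur 120
    }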
The only way to get it working for me with a wildcard certificate was this:

    *.mydomain.tld {
        tls {
            dns route53 {
                region "ca-central-1"
                wait_for_propagation true
            }
        }
    }

Importantly, setting
For anyone also having trouble: I finally made this work by removing "wait_for_propagation true" from the Caddyfile, and it worked right away.

    tls {
There was a bug with

If

EDIT: I've updated the readme to indicate that defining AWS_REGION and AWS credentials is required.
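Given the readme note above, here is a sketch of the standard AWS SDK environment variables the plugin's credential chain picks up; the values are placeholders:

    export AWS_REGION=us-east-1
    export AWS_ACCESS_KEY_ID=AKIAEXAMPLEKEYID
    export AWS_SECRET_ACCESS_KEY=example-secret-access-key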
Amazing, this was unexpected! This new requirement of an AWS region totally brought down my whole set of reverse proxies, including my cloud, when the certs needed to be renewed. As soon as I saw that region error I came here. One question: which region? Does it even matter? Do I use the one I see in the AWS console?

Below is what is working for me now for wildcards. My IAM credentials are environment variables. As others mentioned, sometimes old _acme records don't get cleaned out, so I delete them via the AWS console. If I feel like I need a clean slate (recreating all the certs), I delete all the Caddy settings/certs and restart. At least for Arch they can be found at

    tls <redcat>@gmail.com {
        dns route53 {
            max_retries 10
            region "us-east-1"
            wait_for_propagation true
        }
        resolvers 8.8.8.8 1.1.1.1
    }