
BUG: #23 seems to have broken etl jobs with a local dev deployment #24

Open
delocalizer opened this issue Dec 24, 2024 · 0 comments

My setup

Using gen3-helm to deploy to a kind cluster on my laptop.

Current master gen3-spark: https://quay.io/repository/cdis/gen3-spark/manifest/sha256:1e235e924d23df8db32e88fde6a6a228a41ca1b8070fd3c6ee933c96c2ad3275
Current stable gen3-spark: https://quay.io/repository/cdis/gen3-spark/manifest/sha256:f9ab04742d907bf8f02b8bc3add6e6eb53637739134fdd9b57c6eaf5a9faa4e7

What I observe

If I use the default gen3-spark image in values.yaml:

```yaml
etl:
  image:
    spark:
      repository: quay.io/cdis/gen3-spark
      tag: master
```

and try to create an etl job, I see this in gen3-spark pod logs:

```
/bin/bash: line 5: python: command not found
...
ERROR: Cannot set priority of namenode
```

and etl jobs always fail, with no spark service listening on port 9000.

If I revert to the stable tag, the gen3-spark pod starts fine and etl jobs succeed.
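For anyone hitting the same failure, a minimal sketch of the values.yaml override that reverts to the working image (this assumes only the tag needs to change; pinning the sha256 digest linked above would also work if the chart supports digest references):

```yaml
etl:
  image:
    spark:
      repository: quay.io/cdis/gen3-spark
      tag: stable  # known-good image; master currently fails with "python: command not found"
```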
