Version 2 Releasing Soon #321
Comments
Very exciting and a lovely Christmas surprise! It will be interesting to see how it runs on my Orange Pi 5 Plus and the Raspberry Pi 5 that arrived this week. I imagine performance will be much the same since it relies on other projects' tools for a lot of the functionality. I guess that's a good excuse to create a fresh instance. That being said, I've struggled to get chats to download on the RPi 5 but not the OPi 5+, which is odd (I've tinkered a lot to try to get it working). I imagine this has already been requested, but any chance we could get progress bars, or at least logs, for the video move? It's less of an issue when using an NVMe drive, but when I've run Ganymede on slow boards from micro SD cards, it can be a long process with the larger VODs. Thanks for your work dude, and Merry Christmas!
A further update on the issues I mentioned with my Raspberry Pi 5: it seems to be fine on Ubuntu but not Raspberry Pi OS. I was scratching my head, reinstalled the OS multiple times, and toyed with the compose file for hours (replicating three different working YAMLs from other machines I've tested, etc.). Very weird indeed. I wonder whether it's a temporary bug in Raspberry Pi OS that will get patched down the line, or whether some dependency is missing from a clean install.
Strange that it's not working on Raspberry Pi OS. Is that OS 32-bit? I know you can install 64-bit Ubuntu on the newer Pis, so maybe that's why it works there. Any errors in the logs? You should be able to exec into the container.
I'm surprised downloads work fine on the SD card; that must be slow! I'll look into implementing a progress bar and transfer speed on the queue page.
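Something like this should get you a shell or the logs (a sketch; `ganymede-api` as the compose service/container name is an assumption, adjust to your setup):

```bash
# Shell into the running API container (service name is an assumption)
docker compose exec ganymede-api /bin/sh

# Or follow the container logs from the host
docker logs -f ganymede-api
```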
Yeah, very odd indeed! It's 64-bit Raspberry Pi OS; I tried the server image, the recommended image, and the full version, all with the same result: the chat download would error out but the video would process fine. I would have suspected it being Debian-based, except I have it running fine on other SBCs running DietPi, which is also Debian-based as far as I'm aware. I will say that all of these instances were deployed via Docker; it would be interesting to see how it responds to a local deployment.

They can be a tad slow via an SD card, that's for sure! At one point I was running it on an Orange Pi Zero 3, where a 10-hour VOD could take 22 hours to render the chat! It was fine for my use case at the time of just archiving for a friend who wanted their streams backed up. Pretty impressive that a $15 SBC can manage it at all, not gonna lie.

I did want to ask: is there a way to allow Ganymede to utilise more resources? I understand an out-of-the-box deployment can run two jobs, but it would be nice to let it rip through a single job quickly when resources allow. On my Orange Pi 5 Plus it only ends up using 35-45% of the CPU when running from an NVMe (a bit of a performance boost over the SD card on the OPiZ3: a 2-hour VOD takes about 45-50 minutes). I'm not sure whether this is how Ganymede is coded to work, whether some fiddling is required with my Docker setup, or whether a local deployment would use all available resources.

One last suggestion, as I have been rambling: is it worth having a user-submitted performance spreadsheet where people can report what render speeds they get on different hardware? I have a hunch people might be curious what experience to expect from different hardware, and it could guide their choice if they're buying something dedicated to this task (a Twitch friend saw my instance, thought it was super cool, and I had to do a bit of manual benchmarking for them to make their decision). It could be a Google Docs spreadsheet linked on the main page.
Let me know if you find out any more regarding the RPi OS issue. I can help out if you're able to manually run the chat download.
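For reference, a manual chat download with the upstream TwitchDownloaderCLI looks roughly like this (a sketch; the VOD ID is a placeholder):

```bash
# Download a VOD's chat as JSON using TwitchDownloaderCLI (from lay295/TwitchDownloader)
TwitchDownloaderCLI chatdownload --id 123456789 -o chat.json
```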
I've tried to get the chat render to go faster but had no luck. I've even tried using hardware acceleration while encoding and could not get it to go any faster. You may want to post an issue in the upstream repository https://github.com/lay295/TwitchDownloader asking whether the ffmpeg process can be made faster (consume more resources if possible).
Sounds great, feel free to post an issue or discussion to get it started and I can pin it for others to see and contribute.
Cheers dude, I'll let you know once I get a chance to do more testing. I've actually got the last RPi OS install lying about on the first SSD I tested with, so I just need to dig it out and swap the drive, or do a manual install. Interesting that you tried to fiddle with it and didn't manage to speed it up yourself; I presume that was a local deployment too? I'll try to get the testing done first and then potentially look into posting an issue in the upstream repository. Once I've got some benchmarks complete I'll go about it! Hopefully I can find a non-sub-walled VOD with a nicely timed length (such as 1, 2, or 4 hours, and not 3:23:26, for example).
Version 2 has been released. If you encounter any issues please open a new issue or post here.
Hi @Zibbp, thanks for the new release. I tried to upgrade to it but I'm facing an issue with the config file.
After this, it shows a few logs about workers but nothing more, and the container is not ready.
Does the JSON file look normal? Are the permissions on the `/data` mount correct?
The JSON file looks normal. I generated it multiple times and the error always comes back. Current config.json that fails:

[config.json contents omitted]
Hmm, I was able to start the container with the provided JSON config. Can you add `entrypoint: "cat /data/config.json"` to your API service?

```diff
       - TEMPORAL_URL=temporal:7233
     volumes:
       - ./data:/data
       - ./logs:/logs
       - ./vods:/vods
     ports:
       - 4800:4000
+    entrypoint: "cat /data/config.json"
```

As the JSON is valid, I'm thinking that it's trying to read an empty file.
I use Kubernetes, so I used command & args instead. Here's what I got from inside the API container logs:

[container logs omitted]
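An ad-hoc way to run the same check on Kubernetes is kubectl exec (a sketch; `deploy/ganymede-api` is a placeholder for the actual Deployment name):

```bash
# Print the config as the API container sees it (deployment name is an assumption)
kubectl exec deploy/ganymede-api -- cat /data/config.json
```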
I went back to v1.4.3 for the API server only, to see if I could reproduce the same thing, and got an error related to the database. Turning "db_seeded" from false to true in the config and restarting the container got it up and running.
I'm starting to think it may be related to the package used for the config. Still not sure why it's only affecting you. I've created a new branch that drops the config package to an older version, slightly newer than the one running in v1. Can you pull #333, build the image locally, and give it a try?
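If it helps, fetching and building a PR branch locally is roughly this (a sketch; assumes Docker, the Zibbp/ganymede repo, and a Dockerfile at the repo root):

```bash
# Fetch PR #333 and build a local image (repo URL and tag name are assumptions)
git clone https://github.com/Zibbp/ganymede.git && cd ganymede
git fetch origin pull/333/head:pr-333 && git checkout pr-333
docker build -t ganymede:pr-333 .
```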
Just tried with the image I built, and unfortunately I still face the same config error.
Can you share your Kubernetes manifests? It must be something in there, then.
Here is my manifest:

[Kubernetes manifest omitted]

I use an NFS mount to keep the data. The config file is also there and is always mounted correctly in the container (the data is also saved as-is when I shut down the container). I've been able to start the API server by going into the container manually, turning db_seeded to true in the config, and then starting the API by hand.
I'm not sure the problem is coming from my setup. The issue seems to happen during initialization: it loads the config but fails, and it also rewrites the config file even though it loaded it.
I've finally been able to fix my issue. I commented out this part of the code to stop the refreshConfig function from doing anything to my configuration (which was newly created anyway). Now my container starts 100% of the time. I went back to v2.0.0 to make sure this was the real fix and not something else, and I got the config error again; switching back to my image made the error go away.

```go
} else {
	log.Info().Msgf("config file found at %s, loading", configPath)
	err := viper.ReadInConfig()
	// Rewrite config file to apply new variables and remove old values
	//refreshConfig(configPath)
	log.Debug().Msgf("config file loaded: %s", viper.ConfigFileUsed())
	if err != nil {
		log.Panic().Err(err).Msg("error reading config file")
	}
}
```
Quick question: I had a queue built up since it got stuck while I was sick. After upgrading I still see the queue of 70-some items, but I don't see any workflows running. Is there a way to import the old queue so these items are processed with the new Temporal workflow?
Unfortunately not; you will need to re-archive the VODs to get them processed by the new system. If you have the list of IDs, you can create a simple bash loop around the archive request (see the sketch after the example below).

```bash
curl --request POST \
  --url http://IP:4800/api/v1/archive/vod \
  --header 'Content-Type: application/json' \
  --cookie 'access-token=<ACCESS TOKEN COOKIE VALUE>' \
  --data '{
    "vod_id": "twitch_vod_id",
    "quality": "best",
    "chat": true,
    "render_chat": true
  }'
```
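A minimal loop sketch around that request (assumes the IDs live one per line in a file called `vod_ids.txt`; host, port, and cookie value as above):

```bash
# Re-archive every VOD ID listed in vod_ids.txt (file name is an assumption)
while read -r vod_id; do
  curl --request POST \
    --url http://IP:4800/api/v1/archive/vod \
    --header 'Content-Type: application/json' \
    --cookie 'access-token=<ACCESS TOKEN COOKIE VALUE>' \
    --data "{\"vod_id\": \"${vod_id}\", \"quality\": \"best\", \"chat\": true, \"render_chat\": true}"
done < vod_ids.txt
```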
Not sure how commenting out the refreshConfig call fixes that, though.
For some reason, if you check the logs I provided in my first post, it loads the config twice based on the "config file loaded" log. In the code, the debug message is placed after the function, and since my config was being edited every time the container started, I commented it out to see if it would change anything. I'm not sure what runs it twice, but commenting out the function did fix my issue.
It works for me:

[startup logs omitted]

It does error out (every single time I start the container), but the retry allows the API server to start. A timing issue, maybe?
Restarting the container makes it work, but then I get the database error instead (I have to set db_seeded to true manually).
The config being loaded twice is expected; the second is the worker process loading the config. I'm assuming this is causing some issues, as the API and worker are trying to load and modify the config file at the same time. I'm working on an update to the branch now.
I've pushed some more changes to #334 if you want to give it another try. I've added a delay to starting the worker process, which should hopefully resolve the config file conflicts. I've also updated the worker to not refresh the config, which should fix the issue.
The config error is completely gone now, that's good. Tested both with and without a config, and it works. The only thing remaining is the db error when creating a new config; it should set db_seeded to true automatically.
Pushed a new commit to the branch that detects whether the DB should be seeded based on whether any users exist. I've also removed the config option, as it will no longer be needed. Testing this on my side now.
All good for me, it seems. All the issues are now fixed. Thank you!
Sweet, thanks for helping troubleshoot! I'll publish a new release shortly.
Not entirely sure if this is connected, but the 'to not break existing installs' part doesn't seem to be working, at least not for me. I'm running on a Synology NAS and haven't changed anything in the config/docker-compose for months.
Edit: Adding the new sections to the docker-compose sorted things out, but the fallback didn't seem to work, for me at least.
Ya, I ended up not bundling the ephemeral Temporal server in the API container, to make it easier on myself and keep everyone running the same setup.
I have a question about this update: how can I restart part of the process for an item in the queue, since the "restart" button is not there anymore? I had to restart Ganymede at some point, and some of the items in the queue no longer have a workflow: some still need to convert the video, others need to move the converted video, and some need to start the download.
That's because the queue items are over a day old and workflow items currently get removed after a day. I've got a fix ready to keep workflows around significantly longer in the Temporal image, but I need to make some changes to the API and frontend first. You'll need to re-archive the queue items.
Alright, I will do the remaining tasks by hand and manually set the ones with a completed download as completed (I don't want to redownload 20+ GB per VOD; thank you for logging the executed ffmpeg commands, btw).
I'm working on a version 2 release and hope to have it out by Christmas🎄.
Features
New Queue System
I'm overhauling the queue system to use Temporal workflows. This lets me easily create tasks and compose them into workflows for an easier and more robust queue system. The new system includes automated retries and hopefully fewer one-off queue issues. Using Temporal also introduces the idea of distributed worker nodes, so if you have multiple systems you can balance load between them. Distributed workers will not be available in the next release; maybe in the future.
The v2 release will include some changes to the docker-compose file to accommodate the Temporal server that needs to run. To avoid breaking existing installs, I'm planning to run an ephemeral Temporal server in the main API container until users can update their compose file. This ephemeral server will not persist data between restarts of the API container, so I advise everyone to update their compose file with the proper service (available when v2 is released).
Frontend Updates
I gave the frontend a fresh coat of paint for the dark theme. I've also updated all of the packages to their latest versions and fixed any bugs that came with that.
Other Features
Chapters/Categories
I'm hoping to get #317 into this release, utilizing the new queue system.
Planned Features
Thumbnails
In a future release I'm planning to add support for generating video thumbnails with FFmpeg, similar to the previews shown when scrubbing a YouTube video. Generating the thumbnails is slow and resource-intensive, so it will be opt-in. Initially it will probably be manual, requiring users to click a button to generate thumbnails for a video, to test the functionality out.
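For a rough idea of what's involved, plain FFmpeg can already do this (an illustration only, not the planned implementation; the interval and sizes are arbitrary):

```bash
# Extract one preview frame every 10 seconds, scaled down for scrubbing
mkdir -p thumbs
ffmpeg -i vod.mp4 -vf "fps=1/10,scale=160:-1" -q:v 5 thumbs/thumb_%04d.jpg

# Or tile the frames into sprite sheets (10x10 frames per sheet)
ffmpeg -i vod.mp4 -vf "fps=1/10,scale=160:-1,tile=10x10" -q:v 5 sprite_%03d.jpg
```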