From c8493f045e515da4cdd980a498f3a6e642f23856 Mon Sep 17 00:00:00 2001
From: Bertrand Richard
Date: Thu, 25 Jan 2024 13:19:11 +0100
Subject: [PATCH] Anat3: added MRQ details

---
 _posts/2024-01-01-anatomy-3.md | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/_posts/2024-01-01-anatomy-3.md b/_posts/2024-01-01-anatomy-3.md
index 25e19d1e6e6e6..1f2da4e3bf975 100644
--- a/_posts/2024-01-01-anatomy-3.md
+++ b/_posts/2024-01-01-anatomy-3.md
@@ -123,11 +123,15 @@ One little caveat is that this doesn't handle cars already being on the overtaki
 
 Since this scenario is fully-autonomous with no user input, and the target hardware is a single computer with three monitors, we can render the scenario into a video instead of having it run in realtime. This has multiple benefits: improved visual quality, perfect framerate, etc.
 
-I really wanted to try out the new [nDisplay Movie Render Queue pipeline](https://dev.epicgames.com/community/learning/tutorials/9VX5/unreal-engine-export-ndisplay-renders-using-mrq), which sounds exactly like what I want. Even though we have a single computer, we still have three displays, so being able to correctly configure viewports would be great.
+This part turned out to be quite complex, and for a rather unexpected reason: sound!
 
-However... It seems that [Movie Render Queue](https://docs.unrealengine.com/5.3/en-US/render-cinematics-in-unreal-engine/) doesn't support [spatialized sounds](https://forums.unrealengine.com/t/cant-render-out-sound-from-ue5-using-render-queue-or-movie-scene-capture-reliably/577782), which obviously is an issue for us, as all our sounds are spatialized. We could switch to 2D audio for Ego sounds, but that would still be problematic for other vehicles.
+At first, I really wanted to try out the new [nDisplay Movie Render Queue pipeline](https://dev.epicgames.com/community/learning/tutorials/9VX5/unreal-engine-export-ndisplay-renders-using-mrq), which sounds exactly like what I want. Even though we have a single computer, we still have three displays, so being able to correctly configure viewports would be great.
 
-So I went back to the legacy renderer, which is way less fancy, doesn't have nDisplay support, but at least works with spatialized audio. I could only use it for audio and still use MRQ for video, but I'm too lazy to set this up.
+However... It seems that [Movie Render Queue](https://docs.unrealengine.com/5.3/en-US/render-cinematics-in-unreal-engine/) doesn't support [spatialized sounds](https://forums.unrealengine.com/t/cant-render-out-sound-from-ue5-using-render-queue-or-movie-scene-capture-reliably/577782), which obviously is an issue for us, as all our sounds are spatialized.
+
+So I went back to the [legacy renderer](https://docs.unrealengine.com/4.26/en-US/AnimatingObjects/Sequencer/Workflow/RenderAndExport/RenderMovies/), which is way less fancy, doesn't have nDisplay support, but at least works with spatialized audio. Or at least, that's what I thought? I didn't have much time to investigate the issue, but the generated sound track was captured *after* the end of the level sequence, which meant the sound was pretty useless since it didn't actually record anything happening during the video.
+
+In the end, I decided to switch back to Movie Render Queue and de-spatialize all ego sounds so that they would be recorded. That meant losing all the other sounds, but since those only come from non-ego vehicles, it's not that big of an issue for this highway scenario.
 
 And as usual: clicking buttons to render, package or build isn't my thing, so our [Discord bot](/whats-new-2023-05/#bots) learned the new `!render` command to render a [Sequence](https://docs.unrealengine.com/5.3/en-US/unreal-engine-sequencer-movie-tool-overview/). I could have gone a bit further and had it take the scenario CSV file as an argument, so that researchers could just type `!render my_scenario.csv` and get a video without ever touching Unreal. But for now, it's definitely not worth implementing that, so maybe for the next project!
 
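
For anyone curious what the ego-sound de-spatialization mentioned in the patch could look like, here is a minimal sketch using Unreal's editor Python API. It assumes the ego vehicle is an actor labeled `Ego` in the level and that flipping each audio component's `allow_spatialization` flag is enough; both are illustrative assumptions, not the setup actually used in the post.

```python
# Minimal sketch (assumptions: actor label "Ego", allow_spatialization property).
# Run from the Unreal editor's Python console; not the post's actual implementation.
import unreal


def despatialize_ego_sounds():
    actor_subsystem = unreal.get_editor_subsystem(unreal.EditorActorSubsystem)
    for actor in actor_subsystem.get_all_level_actors():
        if actor.get_actor_label() != "Ego":  # assumed label for the ego vehicle
            continue
        for audio in actor.get_components_by_class(unreal.AudioComponent):
            # Play the ego sounds as plain 2D audio so Movie Render Queue's
            # audio output actually captures them.
            audio.set_editor_property("allow_spatialization", False)


despatialize_ego_sounds()
```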
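
Similarly, here is a rough sketch of what the Discord bot's `!render` command could look like, assuming a discord.py bot that simply shells out to Movie Render Queue's command-line mode. The project, map, sequence, and preset paths are placeholders, and the exact MRQ arguments should be double-checked against Epic's command-line rendering docs.

```python
# Hypothetical sketch of a !render command; paths, token, and flags are placeholders.
import asyncio

import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.message_content = True
bot = commands.Bot(command_prefix="!", intents=intents)

# Command line for an unattended Movie Render Queue render (placeholder paths).
RENDER_CMD = [
    "UnrealEditor-Cmd.exe",
    "C:/Projects/Simulator/Simulator.uproject",
    "/Game/Maps/Highway",
    "-game",
    "-LevelSequence=/Game/Cinematics/HighwaySequence",
    "-MoviePipelineConfig=/Game/Cinematics/RenderPreset",
    "-Unattended",
    "-log",
]


@bot.command()
async def render(ctx: commands.Context):
    """Kick off a render and report back when it finishes."""
    await ctx.send("Starting render...")
    process = await asyncio.create_subprocess_exec(*RENDER_CMD)
    code = await process.wait()
    await ctx.send("Render finished!" if code == 0 else f"Render failed ({code}).")


bot.run("DISCORD_BOT_TOKEN")  # placeholder token
```

A `!render my_scenario.csv` variant would just add a command argument for the CSV path and pass it along to the scenario setup before launching the render.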