VRAM Budget adjustments #3378
Conversation
akleshchev commented Jan 9, 2025 (edited)
- Instead of reducing bias whenever not over budget, start reducing it only once there is at least 10% free memory to use, to avoid memory immediately going back over budget. Previously the bias was reduced as soon as there was any free space.
- Intel UHD on an 8 GB PC was using 3.5 GB of shared VRAM, which is far too generous given the viewer's RAM needs. For Intel, capped the VRAM budget reported by DX9 to 25% of system RAM.
Force-pushed from 876d94b to dc456ae
```
@@ -4681,6 +4682,23 @@ void LLWindowWin32::LLWindowWin32Thread::checkDXMem()

    // Alternatively use GetDesc from below to get adapter's memory
    UINT64 budget_mb = info.Budget / (1024 * 1024);
    if (gGLManager.mIsIntel)
```
More a general question: What about Intel's discrete GPU cards? Isn't there a more general distinction required?
I suspect those would already have an accurate gGLManager.mVRAM, but without a physical one it's hard to be sure. Worst case, 25% is still a reasonable limit.
Potentially we can subtract desc.SharedSystemMemory to get the real memory in the case of a discrete GPU (more than a GB of real memory => discrete), but I'm not comfortable doing that without hardware to test on. Otherwise I'm not sure how to detect those without hardcoding GPU names.
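The heuristic being proposed could be sketched as below. The `AdapterDesc` struct is a mock of the relevant DXGI_ADAPTER_DESC fields (in the viewer they would come from IDXGIAdapter::GetDesc), and the 1 GB cutoff is the rough, untested threshold from the comment above:

```cpp
#include <cstdint>

// Mock of the DXGI_ADAPTER_DESC fields relevant to the heuristic.
struct AdapterDesc
{
    std::uint64_t DedicatedVideoMemory; // bytes of real on-card memory
    std::uint64_t SharedSystemMemory;   // bytes borrowed from system RAM
};

// More than ~1 GB of dedicated (non-shared) memory => likely a discrete GPU.
// Integrated GPUs report little or no dedicated memory and rely on the
// shared pool instead.
bool looks_discrete(const AdapterDesc& desc)
{
    const std::uint64_t ONE_GB = 1024ull * 1024ull * 1024ull;
    return desc.DedicatedVideoMemory > ONE_GB;
}
```

As the comment says, this is exactly the kind of check that needs real hardware to validate: some integrated parts carve out a nominal dedicated block, so the cutoff may misclassify edge cases.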
Or mIsIntel should only be true for integrated GPUs - if there is a way to detect that.
That's not the purpose of mIsIntel. AMD has built-in GPUs too, and those need the same treatment in case the OS/BIOS is overeager with shared memory. The problem is detection.
What about slowing down the bias decrease? Currently it evaluates and decreases the bias every frame, AFAIK.

That is indeed too fast: it changes multiple times before memory usage has a chance to respond. I will check it, but might leave it for later.

You can probably pull the check for the evaluation timer outside of

I think it should be constrained not by a timer but by framerate: the higher the framerate, the more textures get processed, so the faster the bias should adjust. A timer isn't the right solution here. FREE_PERCENTAGE_TRESHOLD in the decrement should already halve the decrease rate compared to ExtraFPS. For now I will keep things as is and let QA test it, but will keep an eye on bias in heavy regions.
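The per-frame update being debated might look roughly like this. All names and the step size are assumptions for illustration; the point is that running the update every frame couples adjustment speed to framerate, while the 10% free-memory margin creates a dead band where the bias holds steady:

```cpp
#include <algorithm>
#include <cstdint>

// Illustrative per-frame bias update (names and constants assumed).
// Higher FPS means more calls per second, so the bias converges faster,
// matching the rate at which textures are actually being processed.
float update_bias(float bias, std::uint64_t used_mb, std::uint64_t budget_mb)
{
    const float STEP = 0.01f;           // assumed per-frame step
    const float FREE_THRESHOLD = 0.10f; // assumed, cf. FREE_PERCENTAGE_TRESHOLD
    if (used_mb > budget_mb)
    {
        bias += STEP; // over budget: discard more aggressively
    }
    else if (used_mb < budget_mb * (1.0f - FREE_THRESHOLD))
    {
        bias -= STEP; // comfortably under budget: relax the bias
    }
    // Inside the 10% margin: hold steady so usage can settle.
    return std::max(bias, 0.0f);
}
```

A timer would decouple the update rate from how much texture work each frame does, which is the commenter's objection: at high FPS the system reacts to more data per second and can afford to adjust faster.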