As the title says, the texture memory calculation is currently a big, inconsistent mess:
  1. LLViewerTexture::isMemoryForTextureLow() and LLViewerTexture::getGPUMemoryForTextures():
Before the PBR release, the former was used in LLViewerTexture::updateClass() for a more aggressive memory reduction when "texture memory" ran low; now it is only used in the Lag-Meter floater. The latter function is only used in LLViewerTexture::isMemoryForTextureLow(), and itself uses LLWindow::getAvailableVRAMMegabytes() to determine the amount of available GPU memory.
  1. LLWindow::getAvailableVRAMMegabytes():
This method is platform-dependent. In short, its result is only more or less correct on Windows, while on macOS it is basically a wild guess. "Wild" is actually a good description of what is going on on either platform.
Let's talk about Windows first:
The total amount of VRAM on the GPU is queried via DirectX - and might just override what was initially detected by querying WMI. The same query also returns the amount of memory already in use. If that amount is not returned for some reason, it is estimated as LLImageGL::getTextureBytesAllocated() * 2 / 1024 / 1024 - remember this, it will be important later. Next, a reserve of the total VRAM is calculated that should remain available to other processes: 2GB if the GPU has more than 4GB of total VRAM, and half of the total VRAM in any other case - remember this as well. The available VRAM is then calculated as: total VRAM - reserve - VRAM used.
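Put together, the Windows logic boils down to something like this minimal sketch (function and parameter names are made up for illustration; texture_bytes_allocated stands in for LLImageGL::getTextureBytesAllocated()):

```cpp
// Minimal sketch of the Windows calculation described above - not the
// actual viewer code, just the arithmetic it performs.
int availableVRAMMegabytesWindows(int total_vram_mb,   // DirectX/WMI total
                                  int dx_used_vram_mb, // from DirectX, -1 if not reported
                                  long long texture_bytes_allocated)
{
    int used_mb = dx_used_vram_mb;
    if (used_mb < 0)
    {
        // DirectX did not report the used amount: estimate it from the
        // viewer's own texture allocations (bytes -> MB, times two)
        used_mb = (int)(texture_bytes_allocated * 2 / 1024 / 1024);
    }

    // Reserve kept free for other processes: 2GB if the GPU has more
    // than 4GB in total, half of the total VRAM otherwise
    int reserve_mb = (total_vram_mb > 4096) ? 2048 : total_vram_mb / 2;

    return total_vram_mb - reserve_mb - used_mb;
}
```

Note that under this rule a 4GB GPU starts out with at most 2GB considered available, before any usage is even subtracted.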
Now let's switch to macOS:
On macOS, the available VRAM is only estimated, based on the total VRAM and the VRAM already used. The latter is estimated - using the same formula we just saw on Windows - as LLImageGL::getTextureBytesAllocated() * 2 / 1024 / 1024. Apparently a reserve is not deemed necessary on macOS, and the available VRAM is simply calculated as: total VRAM - VRAM used.
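In sketch form, under the same naming assumptions as before:

```cpp
// Minimal sketch of the macOS estimate described above - again not the
// actual viewer code; texture_bytes_allocated stands in for
// LLImageGL::getTextureBytesAllocated().
int availableVRAMMegabytesMacOS(int total_vram_mb, long long texture_bytes_allocated)
{
    // The used VRAM is always estimated from the viewer's own texture
    // allocations - there is no query of what the GPU actually reports
    int used_mb = (int)(texture_bytes_allocated * 2 / 1024 / 1024);

    // No reserve for other processes
    return total_vram_mb - used_mb;
}
```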
Now that we have more or less scientifically determined how much VRAM is currently still available, we would assume that the result of LLWindow::getAvailableVRAMMegabytes() is used in the texture pipeline to determine the discard level of the textures the viewer displays, correct?
WRONG! Apart from the use in LLViewerTexture::getGPUMemoryForTextures() mentioned initially, LLWindow::getAvailableVRAMMegabytes() is used in only one other location: the texture console, for informational purposes. So what is actually used in the texture pipeline to determine the discard level of textures? Well, this brings us to:
  1. LLViewerTexture::updateClass():
Here, we see some more magic happen: First, the total amount of VRAM used by the viewer is calculated/estimated as the sum of LLImageGL::getTextureBytesAllocated() / 1024.0 / 512.0 - which is the same as the LLImageGL::getTextureBytesAllocated() * 2 / 1024 / 1024 we saw before - and LLVertexBuffer::getBytesAllocated() / 1024.0 / 512.0. The total amount of VRAM on the GPU is needed as well. Of course other applications should also get some reserve, so they are granted 512MB - and if you remember what we found out earlier, this reserve is completely separate from the reserve calculated before, if there even was a reserve at all. Anyway... no matter what, the viewer always claims at least 768MB for itself, which leads to a minimum assumed total VRAM of 768MB. Based on this calculated total VRAM and the estimated VRAM usage of the viewer, an over-usage percentage is calculated that determines by how much the discard level of textures has to be increased. You might have noticed that the memory used by vertex buffers is suddenly taken into account here, while previously it was not.
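As a minimal sketch of that calculation (illustrative names again; the 512MB reserve and the 768MB minimum are the values described above):

```cpp
#include <algorithm>

// Minimal sketch of the over-usage calculation in
// LLViewerTexture::updateClass() as described above - not the actual
// viewer code; the parameters stand in for
// LLImageGL::getTextureBytesAllocated(), LLVertexBuffer::getBytesAllocated()
// and the total VRAM of the GPU.
float textureOverUsePercent(long long texture_bytes_allocated,
                            long long vertex_bytes_allocated,
                            float total_vram_mb)
{
    // Estimated viewer VRAM usage: textures plus vertex buffers, each
    // converted as bytes / 1024 / 512 (same as * 2 / 1024 / 1024)
    float used_mb = (float)texture_bytes_allocated / 1024.f / 512.f
                  + (float)vertex_bytes_allocated / 1024.f / 512.f;

    // Grant other applications a 512MB reserve, but never assume less
    // than 768MB of total VRAM for the viewer itself
    float target_mb = std::max(total_vram_mb - 512.f, 768.f);

    // The resulting over-usage percentage drives how far the texture
    // discard levels get raised
    return std::max((used_mb - target_mb) / target_mb, 0.f);
}
```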
  1. Honorable mention: RenderMaxVRAMBudget debug setting:
This setting overrides the total VRAM reported and can be used to cap the amount of VRAM the viewer actually uses. The setting description says that a restart is required. However, a restart is only required to get a "correct" display in the Lag Meter floater and the texture console, because its value gets passed into LLWindowManager::createWindow() at startup and affects the result of LLWindow::getAvailableVRAMMegabytes() - and even that only on Windows, since the setting is not part of the macOS implementation. For the texture discard level calculation, however, it takes effect instantly, because it is taken into account in LLViewerTexture::updateClass().
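A minimal sketch of how the override presumably plugs into the discard calculation (the assumption that a value of 0 means "no override" is mine; it is not spelled out above):

```cpp
// Minimal sketch of the RenderMaxVRAMBudget override - illustrative
// names, and the "0 means no override" convention is an assumption,
// not taken from the viewer code.
float effectiveTotalVRAMMegabytes(unsigned render_max_vram_budget_mb,
                                  float detected_total_vram_mb)
{
    return (render_max_vram_budget_mb == 0)
        ? detected_total_vram_mb            // use the GPU's detected total
        : (float)render_max_vram_budget_mb; // user cap, effective immediately
}
```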
So, here's a list of questions:
  1. What is the point of LLViewerTexture::isMemoryForTextureLow(), LLViewerTexture::getGPUMemoryForTextures() and LLWindow::getAvailableVRAMMegabytes() if they only serve informational purposes?
  2. Why is the data displayed fundamentally different from the data the viewer's decisions are based on?
  3. If the displayed data should actually reflect what the viewer is really doing, how about having one calculation that is used throughout the viewer?
  4. Why is this a total mess?