Upscaled Textures to at Least 2048x2048!
in progress
sylar Oppewall
It would be logical, with the new PBR and Enhanced Lighting, to have increased texture sizes: 2048 x 2048, or ideally 4096 x 4096! It would add so much more detail, beauty, and depth to the realism!
ty in advance!
sylar Oppewall
Update: wow, I am so glad that this suggestion got traction and has been implemented!
I can’t wait to see the gorgeous, rich details in everything from our sexy skins to clothing and architecture! With PBR and 2048 textures! Xo Sil
Beware Hax
Please make it so that if a viewer does not support 2048 textures (or changes a setting to disable them), it receives a downscaled 1024 texture from the asset system.
Pazako Karu
Beware Hax A texture discard level debug option already exists, although it seems to be a bit finicky, and it's not a permanent change. A full implementation of this as a setting (max texture resolution) would be ideal.
josecarlos Balan
I am on the beta grid in the region "rumpus room 2048", trying to upload 2048 px textures with the latest viewer "Second Life Release 7.1.5.8472515256 (64bit)", but I always get an error. Is that normal? Do you know when this capability will be available on the main grid? I have several projects to finish and I don't know whether it would be advisable to wait to be able to use this texture size.
elfteam6redleader Resident
Still wish that APNG request from long ago had gotten traction. That said, my bigger concern would be loading times and bandwidth. Bandwidth for SL assets probably doesn't grow on trees, and sending larger assets can eat into that depending on the content, resulting in tighter margins for LL. Maybe 2048x2048/4096x4096 textures could be a premium-member feature to help make up for their added server costs? Maybe even a slider called "maximum texture size" for everyone, so that users with less bandwidth/VRAM could choose to see smaller textures? Just adding thoughts.
Send Starlight
elfteam6redleader Resident That is why APNG would have been a bad choice: it uses the DEFLATE algorithm, which is to say it is essentially a zip file, which isn't great at compressing images but is great at not losing any quality. APNG would have increased their bandwidth and storage issues, not alleviated them. The reason I had suggested animated WebP images instead is that it is a format that can pack images tighter than JPEG, and while lossy, you can barely tell visually, and it has animation support. Facebook/Meta and other companies, for example, have already converted all their images from JPEG/PNG to WebP to take the pressure off their servers.
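A quick illustration of the size argument (just a sketch, assuming Pillow built with libwebp support; "example.png" is a placeholder file name, not anything from SL's pipeline): save the same image as lossless PNG and as lossy WebP and compare the resulting file sizes.

```python
# Compare PNG (DEFLATE-based, lossless) vs. lossy WebP output size for one image.
# Assumes Pillow with WebP support; file names here are placeholders.
import os
from PIL import Image

img = Image.open("example.png").convert("RGB")

img.save("example_out.png", optimize=True)    # lossless PNG
img.save("example_out.webp", quality=80)      # lossy WebP, usually visually close

print("PNG :", os.path.getsize("example_out.png"), "bytes")
print("WebP:", os.path.getsize("example_out.webp"), "bytes")
```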
primerib1 Resident
Send Starlight Or, rather than WebP, use JPEG-XL, which has "progressive resolution" baked into the format itself.
Send Starlight
primerib1 Resident Well, WebP is fully patent-free and free software, whereas we know that MS just grabbed a patent on the core algorithm in JPEG-XL: https://www.theregister.com/2022/02/17/microsoft_ans_patent/ Also, Google refused to support JPEG-XL in Chrome, which means most people can't view it without special software and/or Safari.
GIMP has poor support for JPEG-XL (only 8-bit lossless), and Krita has a problem where it flattens the layers when animations are made with JPEG-XL, etc. Whereas Krita and GIMP fully support all the features in WebP, as do all major browsers, since it is part of the standard. I like JPEG-XL, but it didn't get implemented properly across the board, which is unironically the same problem JPEG 2000 had, and LL bet on that too. Broad support, and tools that can edit and load the format properly, are important for a platform that caters to creators, where people need to be able to work with the images.
Also, WebP is slightly faster to decode and 2x to 10x faster to encode than JPEG-XL. However, JPEG-XL is a good replacement for AVIF, as the latter is quite a bit slower. Having said all that, I really like JPEG-XL's JPEG recompression. But I think the rest of the arguments weigh against it.
Dark Nebula
This is just going to encourage creators to use 2048x2048 for every single part of their item. If more creators actually used a UV map that wasn't a single texture for each piece, I'd be for it.
Send Starlight
Dark Nebula Maybe that could be solved in the Mesh Uploader, where it could automatically downscale those textures if the item exceeds the allowed number of faces.
Also, a new inventory file type called a Texture Atlas could be provided that could be used across linksets. The Atlas could auto-unwrap a mesh, linkset, or set of linksets and tightly pack them into a merged UV map in a single high-resolution texture, providing optimal use of the 4K texture feature, if that is also implemented. Substance Painter has a similar auto-unwrap feature. This could allow even older products to be optimized.
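Just to make the packing idea concrete, here is a minimal sketch (assuming Pillow; the face texture file names are hypothetical) of merging four 1024x1024 face textures into a single 2048x2048 atlas and recording the UV scale/offset each face would need:

```python
# Pack four 1024x1024 face textures into one 2048x2048 atlas and record how
# each face's 0..1 UVs would be remapped into its quadrant of the merged map.
# Assumes Pillow; input file names are hypothetical.
from PIL import Image

faces = ["wood.png", "metal.png", "glass.png", "trim.png"]
atlas = Image.new("RGB", (2048, 2048))
uv_remap = {}

for i, name in enumerate(faces):
    tile = Image.open(name).convert("RGB").resize((1024, 1024))
    col, row = i % 2, i // 2
    atlas.paste(tile, (col * 1024, row * 1024))
    # new UV = old UV * 0.5 + offset (using the image's top-left origin here)
    uv_remap[name] = {"scale": 0.5, "offset": (col * 0.5, row * 0.5)}

atlas.save("atlas_2048.png")
print(uv_remap)
```

A real implementation would of course need smarter rectangle packing and would have to rewrite the mesh UVs, but this is the core of what a merged atlas buys you: one texture fetch instead of four.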
In my mind, the Texture Atlas could even be used by the consumer and applied to objects they don't personally have permission to edit but that are copyable, since they're not modifying the textures, just requesting that they be optimized. Creators making new objects could package their newer products with these atlases, and customers could also apply the concept to older products. Or, alternatively, LL could just apply it to all objects automatically instead. Or clients could make a request to the server automatically when they notice an unoptimized linkset that hasn't been processed yet. I guess the implementation details could be left up to LL on how to go about it best.
Doing all this optimization without an external Texture Atlas file would unfortunately kind of break the Export DAE feature, because you would end up with a merged atlas that isn't as easy to paint as the original uploaded model(s), so hopefully you didn't lose the original source files. Unless, that is, the export feature were redesigned to fetch a cached unoptimized version from the server, or something along those lines.
Whereas if, instead, the optimization were tied to the Texture Atlas inventory file, it would be convenient in case you wanted to re-download your original full mesh, because the original linkset object file for the mesh(es) would still point to the non-optimized version. Essentially, the Texture Atlas file acts as a hint to the server that you want the optimized edition. You could drag the atlas file into the region to rez the optimized linkset. In fact, I guess it could just be called an AtlasLinkset, because it represents a different linkset that is stored on the server with the optimization already applied, technically speaking.
Applying this only to copyable objects means that the atlas wouldn't allow anyone to bypass copy restrictions, though unfortunately it would also mean that the optimization couldn't be applied to old gachas. Unless, of course, the atlas file somehow flagged the original associated object when it was rezzed and then removed it from the inventory as well, in which case it could be applied to no-copy items. Though they'd really have to test the heck out of that.
Right-clicking the AtlasLinkset in the Inventory and clicking Edit would open a floater window where you could drag in any number of linkset objects from your inventory and have them all share a single UV map. If they wanted to put an arbitrary limit on the number of linkset objects, they could just provide, say, four slots or something like that. Though it'd be kind of nice if instead it were an ever-expanding list that could grow to encompass as many linksets as you want to drop in there, with an upper limit on the list, or a detectable limit on how much could be unwrapped into a single UV map.
In the case where it can't all fit into one map, the atlas might even be allowed to generate two or three, but it would fiercely try to keep it to one. A slider option called something like "Pack %" could be provided to let the creator of the atlas guide how lenient it should be.
Sorry if this is hard to read, I don't know how to explain it better than this. >.<
Send Starlight
I'm for this. Unfortunately, my JIRA request for 4K has been lost; in it I provided a way around the issues a lot of people have with this feature.
Essentially, my JIRA entry requested that the client negotiate with the server and let it know it can't handle 4K, either because of hardware or network issues or because the user opted out of 4K in preferences. The server can then automatically downscale the texture on the fly and cache it server-side at a lower resolution, e.g. 512x512. If it's been a long while since that texture was requested, e.g. weeks or months (the interval is up to LL, to mitigate server storage costs), then the server can automatically delete the old lower-resolution cache file for that texture to free up space at the Lab.
This completely solves the performance and networking issues everyone has with the feature while allowing 4K for the users who want it and can handle it. Effectively, the client can auto-dial in the tightness of the resolution.
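To make the idea a bit more concrete, here is a rough sketch (assuming Pillow and a simple on-disk cache; the paths, names, and eviction interval are hypothetical, not anything from LL's actual asset system) of the downscale-and-cache behaviour described above:

```python
# Serve a texture no larger than the resolution the client negotiated for,
# caching each downscaled copy on disk and evicting copies that haven't been
# requested in a long while. Assumes Pillow; paths and limits are hypothetical.
import os
import time
from PIL import Image

CACHE_DIR = "downscale_cache"
MAX_IDLE_SECONDS = 60 * 24 * 3600   # e.g. roughly two months

def get_texture(texture_path, max_side):
    """Return a path to a cached version of the texture no larger than max_side."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    cached = os.path.join(CACHE_DIR, f"{max_side}_{os.path.basename(texture_path)}")
    if not os.path.exists(cached):
        img = Image.open(texture_path)
        img.thumbnail((max_side, max_side), Image.LANCZOS)  # keeps aspect ratio
        img.save(cached)
    return cached

def evict_stale_cache():
    """Delete cached downscales that haven't been accessed in MAX_IDLE_SECONDS."""
    if not os.path.isdir(CACHE_DIR):
        return
    now = time.time()
    for name in os.listdir(CACHE_DIR):
        path = os.path.join(CACHE_DIR, name)
        if now - os.path.getatime(path) > MAX_IDLE_SECONDS:
            os.remove(path)
```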
LL will have to sit down and have a meeting about feasibility: the proper intervals to put on storage times, and how to implement the feature in a cost-effective way. But I think it can be managed. Also, if they're worried about cost, they could make 4K a premium-account feature, and even premium accounts could turn it off if they don't want it. Since there are far more non-premium users, they would all stay on non-4K and put less burden on LL.
Also, my other Amazon Glacier (lower-cost) archival feature request was lost from JIRA; paired with the 4K feature, it would also help mitigate costs and make this all possible. That feature request was: detect textures that haven't been accessed in years (the amount of time is up to LL to decide) and use Amazon Glacier to move them into cold storage. Cold storage costs mere pennies compared to active storage; Amazon states it is 68% cheaper than the usual storage costs.

The only problem with cold storage is that it takes time to unfreeze (1–5 minutes), so you'd have to display an indicator letting people know a texture is being unfrozen, maybe the texture itself says "BEING UNFROZEN" or something on it. This would rarely happen; people would literally have had to not view a texture for several years for it to have been frozen.

Why this is important is that right now LL has a massive amount of stored data that it is financially buckling under. If they can move a large amount of content that hasn't been touched in years into Glacier, it can mitigate so much cost that not only could they hire more engineers, it would also cover the cost of storing 4K textures and even the cost of the caching idea I mentioned above. They'd want to set up an AI that can go through the images and make sure lewds aren't included in the cold storage, however.
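For what it's worth, one way to get "archive what hasn't been accessed in ages" behaviour on AWS is S3 Intelligent-Tiering's archive tiers, which move objects based on last access rather than object age. A minimal sketch (assuming boto3; the bucket name and day thresholds are hypothetical, since I obviously don't know how LL's asset storage is actually laid out):

```python
# Configure S3 Intelligent-Tiering so objects that go unaccessed long enough
# are moved to archive tiers automatically. Assumes boto3; the bucket name is
# hypothetical and the day thresholds are just examples.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_intelligent_tiering_configuration(
    Bucket="sl-texture-assets",   # hypothetical bucket
    Id="archive-cold-textures",
    IntelligentTieringConfiguration={
        "Id": "archive-cold-textures",
        "Status": "Enabled",
        "Tierings": [
            # No access for ~6 months: move to the Archive Access tier.
            {"Days": 180, "AccessTier": "ARCHIVE_ACCESS"},
            # No access for ~2 years: move to the Deep Archive Access tier.
            {"Days": 730, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
        ],
    },
)
```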
My WebP JIRA entry was also lost. If they implemented that, storage space for all images would be lower, network bandwidth usage would be lower, and we would also get animated WebP instead of having to convert GIFs to textures with third-party tools.
Kristy Aurelia
One thing I'd really like to know is how the viewer is going to handle partial textures.
It was mentioned during CCUG/TPVD meetings that the viewer now only requests up to an X-resolution texture if it does not need a higher-resolution version, which is great in theory... but what happens in this case:
The viewer decides it wants at most a 512x512 texture due to low memory, graphics settings, etc., and the source texture is 2048x2048. Thanks to the JPEG 2000 format, that is fine: the viewer partially decodes the texture and uses it. However, what about all the compression artefacts? 512x512 has 16x fewer pixels than 2048x2048, and compression generally optimizes for space usage, meaning that if you did download the 2048x2048 texture and downsampled it to 512x512, the end result would look a lot better than a partially decompressed image built from 512x512 worth of data.
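The two paths being compared can be sketched like this (purely an illustration, assuming the glymur, Pillow, and numpy libraries and a hypothetical "texture_2048.jp2" file; this is not how the viewer actually decodes JPEG 2000):

```python
# Compare a partial JPEG 2000 decode at reduced resolution against a full
# decode followed by a high-quality downsample. Assumes glymur + Pillow + numpy;
# the input file name is hypothetical.
import glymur
import numpy as np
from PIL import Image

jp2 = glymur.Jp2k("texture_2048.jp2")   # hypothetical 2048x2048 RGB source

# Path 1: partial decode -- only enough data for a 512x512 image is decoded.
partial = jp2[::4, ::4]

# Path 2: full 2048x2048 decode, then a high-quality downsample to 512x512.
full = jp2[:]
downsampled = np.asarray(Image.fromarray(full).resize((512, 512), Image.LANCZOS))

# Rough comparison of how different the two results are, per channel.
diff = np.abs(partial.astype(int) - downsampled.astype(int))
print("mean abs difference:", diff.mean())
```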
primerib1 Resident
Kristy Aurelia This is why LL should migrate from JPEG 2000 to JPEG-XL, which has progressive resolution baked into the format and encoding. See this: https://www.youtube.com/watch?v=UphN1_7nP8U
Skyler Ghostly
Not to cause strife, but those opposed for the sake of VRAM are the biggest contributors to Second Life falling behind. The introduction of PBR is a start towards a modern engine, but it is still a long way off. Any new feature isn't going to be ideal for everyone, but it's either progress with loss or stagnation with loss.
The Valve Source Engine supports 2048, and that was in 2004 ... Second Life must move with the times or be left behind.
Tetsuryu Vlodovic
Count me among the opposed.
However, what I would suggest is at least the possibility of reallocating the texture area, so you could, for example, upload a texture that is 2048 pixels wide but only 512 pixels tall. Useful for panoramas!
Toothless Draegonne
Seems a lot of arguments against this are actually arguments for it. You don't want a ton of VRAM used up in textures, and yet people are just using a ton of 1024s on different faces to get what they want.
You say that the texel density is too high, without counting mipmaps and discard levels that could be set per viewer to whatever maximum detail level you like.
Then there are the ways in which 2K and 4K textures could be used to increase efficiency. Why have separate 1K textures for each and every part of a thing when you can have a single high-res map with everything intelligently UV-mapped onto it? Only one texture to load, and half your entire scene is taken care of.
Yes, bad creators will always make bad things, and they will continue to make bad things regardless. This should not be used as a reason to not implement changes that could, in the hands of people who know how to work with them, result in regions actually running and loading faster than if they were limited to 1K textures.
If your worries are based on trying to negotiate the insanity of the Madlands with a 2GB video card, well, that's already a lost cause and always has been. It's always been in your power to lower your viewer's max detail levels. Nothing will change that.