✨ Feature Requests

  • Search existing ideas before submitting
  • Use support.secondlife.com for customer support issues
  • Keep posts on-topic

Thank you for your ideas!
Improved Texture Fast Cache system
Hi, I would like to propose two ideas for improving the Texture Fast Cache system. The first is a simpler design, while the second is a little more elaborate.

Existing System

The Texture Fast Cache currently works by saving a texture of 16x16 or smaller to a 16x16 buffer, along with a header describing the source texture:

  • Width
  • Height
  • Number of components (RGB = 3, RGBA = 4)
  • Discard level of the scaled-down image

Each entry is stored in a fixed 1,028-byte buffer. Raw image data is written to the entry when a texture is loaded again after the first time, as part of the texture fetch system. The Fast Cache has a fixed capacity of 1024x1024 entries, but the file starts empty and keeps growing by appending new entry data. The data is read back when an existing texture is loaded again, either after first login and load from a region, or after a region unloads, the textures are purged from memory, and they are requested again from cache. The system goes through the process of creating a brand-new texture for the 16x16 image, then loads the actual target texture at the desired discard level. Various locks protect the file.

Proposal #1

I propose that we make the file a fixed size: pre-allocate all 1024x1024 entries and write a zeroed-out file to disk. There are then two options I propose to give to the user.

Disk-Based (Memory-Mapped) Texture Fast Cache

Create a memory-mapped pointer to the file and assign the mapped data directly to the LLRawImage, so that the 16x16 image data never has to be loaded, allocated, or de-allocated: the texture data is the mapped memory itself. This saves disk IO, reduces RAM usage, and avoids copying data to load the 16x16 raw image. Writes can be done directly by mapping the next entry to its fixed offset and writing the data to memory.
The underlying operating system handles flushing the mapped memory back to the file system. The file would be a little more than 1 GB on disk.

Memory-Based Texture Fast Cache

Load the entire fast cache file into memory and give direct access to the fixed-size data. A background process could then commit the data to the fast cache file, either periodically or after a set number of updates. This bypasses disk IO and background virtual-memory writes, reducing the latency of the Fast Cache, but at the cost of keeping the cache in system RAM. It would take around 1 GB of system memory as well as 1 GB of disk space.

The first option would require an updated LLAPR file class that supports memory maps (already submitted as a PR) and possibly 64-bit file support.

Proposal #2

This is similar to Proposal #1 in that there is a memory-mapped version and a memory-based version. But in this case, instead of writing a throw-away 16x16 image that has to go through the same initialization process as the actual texture being loaded, I propose we save the MAX_DISCARD_LEVEL texture to a Fast Cache Body file and store the header in a Fast Cache Header file. The header would store the same information as today, plus an offset into the Fast Cache Body from which to load the texture data. This way, the lowest tier of the texture cache would be in ready-to-read format and could be memory-mapped when the LLRawImage is created, saving memory copies and JPEG 2000 decoding, with the information needed for the texture already in place. This would also help prevent VRAM fragmentation, since we would not be allocating and de-allocating 16x16 images all the time.
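Proposal #1's fixed-size, pre-allocated layout can be sketched as follows. This is a minimal, hypothetical Python sketch (the viewer itself is C++, and the header format, field sizes, and class names here are illustrative assumptions, not the viewer's actual format):

```python
import mmap
import struct

# Illustrative layout: a small fixed header plus a worst-case 16x16 RGBA
# payload per entry. Field sizes are assumptions for the sketch.
HEADER_FMT = "<iiii"                    # width, height, components, discard level
HEADER_SIZE = struct.calcsize(HEADER_FMT)
PAYLOAD_SIZE = 16 * 16 * 4              # worst-case 16x16 RGBA
ENTRY_SIZE = HEADER_SIZE + PAYLOAD_SIZE
NUM_ENTRIES = 1024                      # the proposal uses 1024x1024 entries (~1 GB)

class FastCache:
    def __init__(self, path):
        # Pre-allocate the whole file once so every entry lives at a fixed offset.
        with open(path, "wb") as f:
            f.truncate(ENTRY_SIZE * NUM_ENTRIES)
        self._f = open(path, "r+b")
        self._mm = mmap.mmap(self._f.fileno(), ENTRY_SIZE * NUM_ENTRIES)

    def write_entry(self, index, width, height, comps, discard, pixels):
        # Writing is just a memcpy into the mapping; the OS flushes it to disk.
        off = index * ENTRY_SIZE
        self._mm[off:off + HEADER_SIZE] = struct.pack(
            HEADER_FMT, width, height, comps, discard)
        start = off + HEADER_SIZE
        self._mm[start:start + len(pixels)] = pixels

    def read_entry(self, index):
        off = index * ENTRY_SIZE
        width, height, comps, discard = struct.unpack_from(HEADER_FMT, self._mm, off)
        start = off + HEADER_SIZE
        # A zero-copy view of the pixel data, analogous to pointing an
        # LLRawImage at the mapped region instead of copying it out.
        pixels = memoryview(self._mm)[start:start + width * height * comps]
        return (width, height, comps, discard), pixels

    def close(self):
        self._mm.close()
        self._f.close()
```

Because `read_entry` returns a view into the mapping rather than a copy, loading the 16x16 image allocates nothing; that is the IO and RAM saving the proposal describes.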
The memory version of the same design would have two allocated buffers: the larger fixed-size buffer read from the Fast Cache on disk, and a second buffer where new data is stored in a vector that can grow over time. On logout, the vector data would be committed to the Texture Fast Cache body and header files. This split is partly because each texture may have different MAX_DISCARD_LEVEL dimensions: a 2Kx2K texture at discard 0 would need a 64x64-pixel body entry, 1Kx1K would need 32x32, and 512x512 and smaller would need 16x16. Because the sizes are mixed, the header must store an offset to locate each entry.

One option is to keep a separate fast cache body file per MAX_DISCARD_LEVEL entry size (64x64, 32x32, and 16x16), all referenced from the same header. Another option is a fixed 64x64-pixel entry size, with the header using 64x64-sized offsets; some entries could then be grouped together into an atlas-like image to better utilize the space.
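Because Proposal #2 mixes 64x64, 32x32, and 16x16 entries, the header must carry an explicit byte offset into the body file. A hypothetical sketch of that bookkeeping (the size table follows the proposal; the function and record layout are illustrative, not the viewer's actual format):

```python
# Map a texture's full resolution to the size of its MAX_DISCARD_LEVEL copy,
# per the sizes given in the proposal: 2K keeps 64x64, 1K keeps 32x32,
# 512 or smaller keeps 16x16.
def body_entry_dim(full_dim):
    if full_dim >= 2048:
        return 64
    if full_dim >= 1024:
        return 32
    return 16

def append_entry(headers, body, width, height, comps, discard, pixels):
    """Append pixel data to the body blob; the header records where it lives."""
    offset = len(body)
    body.extend(pixels)
    # Header record: source texture info plus the byte offset (and length)
    # into the Fast Cache Body where this entry's pixels start.
    headers.append((width, height, comps, discard, offset, len(pixels)))
    return offset
```

With the offset and length in the header, a reader can map or seek straight to an entry regardless of how the differently sized entries are interleaved.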
0 votes
server and region reorganization
Hiya everyone, I wrote feedback some weeks ago about the too-high number of disconnects and crashes in SL, and we were told during the round table meeting that this will be fixed and LL is now working seriously on it. But our sailors haven't stopped thinking about the reasons for the disconnects and crashes, and we are now quite sure that LL will never solve the problem with software changes alone.

To explain why, we need to go back some years and look at when the disconnects started to become a frustrating problem. LL moved SL to Amazon servers about 6-7 years ago and changed the crossing process for that, and the disconnects began. During the 6-9 month test period I had at least 10 crashes EVERY DAY and was very close to leaving SL, like a lot of friends did. It became less over the years but NEVER stopped, and a lot of new friends are leaving SL frustrated.

LL is doing a great job making SL much more beautiful and exciting, with new connected continents, much better graphics like PBR, and many more things. But that is like giving us a Porsche or Bentley to drive while forgetting to improve the roads, so we are still driving on dirt tracks at only 25 km/h. For SL this means: our sailors cut their graphics settings to a minimum, decrease draw distance and view angle to a minimum, reduce the scripts on their active avatar to a minimum, and more and more are using the old default avatar for racing to reduce the crash risk. For a while we thought it worked, but the conditions didn't really change much.

We talk a lot about our races afterwards in our bar, and the idea I want to write about now grew out of those talks. Our problem is the organization of the servers. There are about 5 regions on one server, and they are randomly recombined with every weekly rolling restart.
That means, for example, that regions A, M, G, O, and W are on server 1 this week, while next week those 5 regions can be on servers 1, 5, 8, 15, and 20, and server 1 hosts regions H, D, P, Y, and B. The regions are all equipped differently, so the servers perform differently every week. In my last feedback I wrote that some regions in our race area are statistically worse than others and we crash there more often, but we didn't understand why that changed weekly. Now we know.

I am not a server expert, but I am a shopping expert, and I can tell you that buying all your stuff in one big shop is much faster than buying it in 5 different shops. The 5 shops can be a bit cheaper, but when time is my priority, I am willing to pay a bit more and use the time I save for nicer things I enjoy more.

Here is our idea: keep neighboring regions like A, B, C, D, and E always on server 1, and regions F, G, H, I, and J always on server 2, and so on. Then I don't have to change servers when I cross from region A to B, C, D, or E. Server changes happen far less often, because neighboring regions share a server and are not reorganized weekly. Crossings win time and become much safer, and if I still crash in some regions, the reason is easier to analyze.

I suggest trying this idea first on a small area like the Blake Sea; if you need to test it on an RC channel first, I think those regions should all be on an RC channel anyway. Our racing community can support tests in our race area, which is all on the Main channel, and I will collect the feedback from our community there for you. This would be a game changer, and all your awesome work to make SL more beautiful could be enjoyed much more safely. Isn't that a great feeling? Cheers, Bianca♥
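The core of the idea can be illustrated with a toy model (a hypothetical sketch; the region numbering and assignment functions are made up for illustration): with a locality-preserving region-to-server assignment, a route through consecutive regions crosses a server boundary far less often than with a scattered assignment.

```python
def stable_server(region, regions_per_server=5):
    # Locality-preserving assignment: consecutive regions share a server.
    return region // regions_per_server

def scattered_server(region, num_servers=5):
    # Worst-case scattered assignment: neighboring regions land on
    # different servers, as a random weekly reshuffle can produce.
    return region % num_servers

def server_changes(route, assign):
    """Count how many region crossings on the route also change servers."""
    return sum(1 for a, b in zip(route, route[1:]) if assign(a) != assign(b))

# Sailing straight through ten adjacent regions:
route = list(range(10))
```

In this toy model the stable assignment turns 9 region crossings into a single server change, while the scattered assignment makes every crossing a server change, which is the difference the proposal is after.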
31 votes · tracked
Vulkan Support – Future-Proofing Second Life for Better Performance & Graphics
🔴 Summary 🔴

Second Life has come a long way, but OpenGL is becoming outdated. To ensure SL remains visually competitive and runs smoothly on modern hardware, I propose that Linden Lab begin development on Vulkan support as a long-term goal. This transition would greatly improve performance, reduce crashes, and allow SL to take full advantage of modern GPUs.

🟡 Why Vulkan? 🟡

✅ Better FPS & Performance – Vulkan is designed for multi-core CPUs and modern GPUs, meaning higher frame rates and less lag in complex environments.
✅ More Stability & Fewer Crashes – Vulkan gives the application explicit control over memory and synchronization, which can reduce viewer crashes and graphical glitches.
✅ Future-Proofing Second Life – OpenGL's development has slowed, while Vulkan is the industry standard for new and upcoming graphics engines.
✅ Improved Graphics Potential – Vulkan supports advanced rendering features that could enhance lighting, shadows, reflections, and materials in SL.

🟢 How This Transition Could Work Smoothly 🟢

Instead of a sudden shift, I suggest a gradual development plan (2025-2030):
1️⃣ 2025-2026: Linden Lab researches Vulkan feasibility and starts experimental development.
2️⃣ 2027-2028: An optional Vulkan beta mode is introduced for testing and optimization, running alongside OpenGL.
3️⃣ 2029-2030: Vulkan becomes the default renderer, with OpenGL as a fallback for older systems.
4️⃣ Community Engagement: Regular updates from Linden Lab on progress, plus support for third-party viewers adapting to Vulkan.

🔵 Why Start Now? 🔵

Even though this transition will take years, starting early ensures SL stays ahead rather than falling behind other virtual worlds. A well-planned Vulkan integration could attract new users while making SL smoother for current residents. If you agree, please upvote and share your thoughts in the comments! Let's show Linden Lab that the community is ready for a modern and optimized Second Life! 👍 💬
13 votes · tracked