📃 SLua Alpha

General discussion and feedback on Second Life's SLua Alpha
Rework function delays in SLua
Rather than having built-in "freezes" in certain functions like object rez or llEmail, which are trivially circumventable by link-messaging another script in the same prim and having that other script swallow the delay (effectively providing a 'burstable' limit on these calls), these could be handled better in SLua by reworking how the delays are implemented: perhaps a per-prim or per-script setting where the calls can be made at "no cost" up to some specific rate, after which they stall or fail. (Compare HTTP requests: there is no function delay, but there is still an overall limit on how many you can make.)

The workaround of having multiple 'worker' scripts that just listen for a link message and then execute the call to llEmail etc. has always been a bodge, but also a rather necessary one if you don't want your main server script to become unresponsive regularly. That matters for a simultaneous multi-user server-style script (e.g. one providing a region-wide service to all players): the main script cannot be stalled without noticeable impact on the players, and yet some of these delayed functions are needed for operation. (A sketch of the pattern follows below.) I made a longer post here: https://community.secondlife.com/forums/topic/530018-of-functions-with-delays/

Since so much is changing in the Lua-implemented stuff, we could use this opportunity to redefine or rework how these function delays work, only for pure Lua scripts compiled in the new Lua mode, while legacy LSL-to-Lua, LSL Mono, and LSL classic scripts keep their previous delay behaviour. This would avoid what is essentially a "gotcha" that new scripters need to know how to work around before they can build a performant system that uses these delayed calls. It would also avoid the wasted overhead of multiple worker scripts and the time lost in link messaging (especially since every worker script receives every link message even though only one of them will take up the work).

It would be a great and rare opportunity to revisit a rather archaic piece of the scripting language's implementation, one that doesn't really work as advertised anyway once you know about the worker-script bypass; it's not often that breaking changes can be brought in at a new language checkpoint. If this isn't clear enough please let me know and I'll give some better examples, but I assume the trick is fairly widely known.
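To make the bypass concrete, here is a minimal sketch of the worker-script pattern described above, assuming the SLua alpha's convention of exposing LSL events as global Lua functions; the channel number and message packing are my own illustration:

-- Worker script: one of several copies in the same prim that exist purely
-- to absorb the 20-second sleep ll.Email imposes on whichever script calls it.
local EMAIL_JOB = -990001  -- arbitrary link-message channel shared with the main script

function link_message(sender, num, str, id)
    if num ~= EMAIL_JOB then return end
    -- str packs "address|subject|body"; a real system would also need a way
    -- for exactly one idle worker to claim each job.
    local addr, subject, body = string.match(str, "^([^|]+)|([^|]+)|(.*)$")
    if addr then
        ll.Email(addr, subject, body)  -- this worker sleeps for 20 s; the main script does not
    end
end

The main script just fires a link message, which has no function delay:

ll.MessageLinked(LINK_THIS, EMAIL_JOB, "user@example.com|Hello|Message body", "")

And roughly what the proposed replacement semantics could look like, sketched as a token bucket (the numbers are placeholders, not proposed limits; a 0.05/s refill is one sustained call per 20 s, mirroring llEmail's current delay):

-- Token-bucket sketch of "free up to some rate, then stall or fail".
local bucket = { tokens = 10.0, max = 10.0, refillPerSec = 0.05, last = 0.0 }

local function mayCall()
    local now = ll.GetTime()
    bucket.tokens = math.min(bucket.max, bucket.tokens + (now - bucket.last) * bucket.refillPerSec)
    bucket.last = now
    if bucket.tokens >= 1 then
        bucket.tokens -= 1  -- burst capacity: up to 10 back-to-back calls at no cost
        return true
    end
    return false            -- over the rate: the sim could stall or fail the call here
end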
3 · Feature
Add table flattening to ll.* functions that accept long lists.
SLua has no means of concatenating tables, which makes several functions designed for LSL incredibly unwieldy to use. A common practice, for instance, is to loop a function call, building up a list of parameters by appending them to an existing list, and then, once done, send it all to SetLinkPrimitiveParamsFast.

SLua also struggles somewhat for memory and hits out-of-memory errors on lists that aren't incredibly long compared to LSL. This is because a table's memory allocation suddenly doubles as it grows, which for many users is an enormous footgun: tabl[#tabl+1] = 1 taking their script from 64 KB memory to OOM is incredibly counterintuitive. This leaves SLua looking no better than LSL to many scripters for a very common task in SL.

Until either better, more 'Lua' APIs exist or table memory allocation can be changed, it would be nice if the functions could accept nested tables and flatten them.

Proposal:

ll.SetLinkPrimitiveParamsFast(1, {
    PRIM_COLOR, {1, vector(1,0,0), 1.0},
    {PRIM_TEXT, "Text", {vector(1,1,1)}, 1.0},
    PRIM_SIZE, vector(1,1,1)
})

-- OR

ll.SetLinkPrimitiveParamsFast(1, {
    {PRIM_COLOR, 1, vector(1,0,0), 1.0},
    {PRIM_TEXT, "Text", vector(1,1,1), 1.0},
    {PRIM_SIZE, vector(1,1,1)}
})

would get flattened to:

ll.SetLinkPrimitiveParamsFast(1, {
    PRIM_COLOR, 1, vector(1,0,0), 1.0,
    PRIM_TEXT, "Text", vector(1,1,1), 1.0,
    PRIM_SIZE, vector(1,1,1)
})

This would make using SetLinkPrimitiveParams a lot less painful, and would also help with things like:
- LinkParticleSystem
- RezObjectWithParams
- SetAgentEnvironment
- HTTPRequest

It would also help alleviate the memory impact of LONG tables for SetLinkPrimitiveParams, by allowing a scripter to break up their long list into several smaller ones and send them together. Not always, but ten tables of 8 keys each come to 2,560 bytes, compared to one table of 80 keys taking 4,096, given the memory doubling tables do as you expand them.

There is, of course, one major caveat to this: the flattened table, while it exists, needs to be counted against the script that's running >.>
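For illustration, a minimal user-side sketch of the flattening semantics proposed above; the flatten helper is my own, not an existing SLua API, and it assumes SLua's vector and rotation values are native types rather than plain tables, so only real tables get unpacked:

-- Recursively unpack nested tables into one flat parameter list.
-- Sketch only: the proposal is for the ll.* functions to do this natively.
local function flatten(input, output)
    output = output or {}
    for _, v in ipairs(input) do
        if type(v) == "table" then
            flatten(v, output)        -- nested table: unpack its contents
        else
            output[#output + 1] = v   -- number, string, vector, etc.: keep as-is
        end
    end
    return output
end

-- Until something like this is native, the scripter pays the cost themselves:
ll.SetLinkPrimitiveParamsFast(1, flatten({
    {PRIM_COLOR, 1, vector(1, 0, 0), 1.0},
    {PRIM_TEXT, "Text", vector(1, 1, 1), 1.0},
    {PRIM_SIZE, vector(1, 1, 1)},
}))

Doing it in user code also illustrates the caveat above: the flattened copy exists alongside the nested originals and counts against script memory, which is exactly why native flattening inside the ll.* call would be preferable.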
8 · Feature
Script Memory Limits Change
The present system of limiting script memory on a per-script basis gives scripters an incentive to create often incredibly inefficient workarounds when they want or need more memory than the limit allows, such as creating a large number of slave scripts and passing data back and forth via ll.MessageLinked. Simply increasing the limit to something more reasonable could alleviate this somewhat, but it doesn't really address the underlying problem: how to responsibly allocate memory per creation, whether that creation is a single prim or a collection of linksets.

I therefore propose the following alternative:

- Give each linkset a parameter adjustable by anyone with modify rights. Call it the Linkset Memory Limit (LML).
- Default it to a reasonable value, say 1 MB.
- Add this parameter to the collection of items that affect the linkset's LI, at a reasonable cost, say 1 LI/MB. (I don't know if that's actually reasonable, but 1 LI/MB does make a mentally convenient conversion factor.)
- Per current practice, let the final LI be the max of the per-item LIs.
- Each script in the linkset then draws memory from the common pool defined by the LML.

The simplest implementation of this feature at the script level would probably be to change ll.SetMemoryLimit to be capped by the currently available memory in the LML pool, with an argument of -1 meaning grab it all. This would allow a script to reserve needed memory and to free it when no longer needed. Adding ll.GetLML, ll.GetFreeLML, and possibly ll.SetLML (see discussion below), together with the existing parcel prim count functions, would constitute a sufficient set of functions (a usage sketch follows at the end of this post).

It might be possible to automate dynamically allocating memory to scripts from the common pool as needed, but I'm not prepared to say how desirable that would be. It would likely improve the use of the common memory pool, but it could greatly complicate the impact and debugging of out-of-memory conditions, and I'm not sure how it would interact with attempts by a script to reserve needed memory in advance.

This new feature would give scripting creators the ability to make efficient use of as much memory as they might need, with a clearly visible cost that prospective object owners can see and budget against their available resources. It also gives owners with linkset modify rights the ability to adjust this cost within the limits of the scripts' ability to adapt. It entirely eliminates the creator's incentive to use inefficient workarounds that are costly to sim resources and performance, and it gives creators a reasonable incentive to limit their use of memory in order to improve the market appeal of their products and/or the land impact of creations they use themselves. It doesn't particularly affect the case for using HTTP and external servers for very large or shared datasets when HTTP delays are acceptable.

Since the LI of worn objects is not currently counted against any budget, keeping worn object impact under control would require a per-agent LI limit as well. This limit would probably apply only to the LML LI, though for grins one might possibly make the case for applying it to other LIs as well (no doubt with much cussing from our more elaborately decked-out friends). A higher agent LML LI limit could become another perk of higher account levels.

Since dividing and merging of linksets usually occurs at creation time, and we want creators to manage this process, I would not recommend elaborate algorithms for managing the allocation of LML on merge or divide. It probably suffices on merge to keep the LML of the root of the merged object, and on division to give objects with a new root the default LML.

Allowing a script to change the LML of an object would mean that the script could dynamically change the LI of the object. This has both pros and cons. Pros: a single script could adapt the object to present limits, or set the LML of a newly divided/merged/created object to reasonable values. Cons: it removes from the owner complete control over the LI of their rezzed-out objects, and it makes marketplace data about LI less reliable. Perhaps a reasonable compromise would be to limit such functions to objects owned by the parcel owner, per other parcel functions. The prospects and limitations of any ll.SetLML function definitely need more discussion.

It would be nice if this feature also applied to LSL scripts, either when compiled to Mono as well as to the Lua VM, or possibly just when compiled to the Lua VM. If automatically applied to existing content, it would probably break a significant number of existing inefficient workarounds and other instances where the number of 64 KB scripts in the object exceeds 16. (Beware of furniture with nPose and AvSitter systems and many seats.)
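To make the shape of that API concrete, here is a hypothetical usage sketch; ll.GetLML, ll.GetFreeLML, and the -1 convention for ll.SetMemoryLimit are proposed names from this post, not existing functions:

-- Hypothetical API from the proposal above: reserve memory from the
-- linkset's common LML pool instead of a fixed per-script limit.
local pool = ll.GetLML()      -- total Linkset Memory Limit for this linkset, in bytes
local free = ll.GetFreeLML()  -- pool memory not yet reserved by other scripts

local WANTED = 256 * 1024     -- this script would like 256 KB

if free >= WANTED then
    ll.SetMemoryLimit(WANTED) -- reserve only what we need from the pool
else
    ll.SetMemoryLimit(-1)     -- proposed meaning: grab all remaining pool memory
    ll.OwnerSay("LML pool nearly exhausted: " .. free .. " of " .. pool .. " bytes free")
end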
8 · Feature · tracked