📃 SLua Alpha

General discussion and feedback on Second Life's SLua Alpha
lljson encode and decode functions with vectors, quaternions, and UUIDs.
Right now, vectors and quaternions encode to strings in the "<...>" format, and those strings decode back as plain strings; likewise, UUIDs decode to strings. This is a major nuisance to scripters when decoding tables with embedded vectors, quaternions, or UUIDs, requiring special decoding functions in each and every script that uses them. The underlying problem is that at present there is no way to differentiate between an encoded vector, quaternion, or UUID and the equivalent string value; i.e., vector(1,2,3) and "<1,2,3>" both encode to "<1,2,3>".

I'd like to propose a variation on encode/decode (call them lljson.pack and lljson.unpack, or better, perhaps, add an optional EncodingType argument to encode and decode) that works as follows. When a vector or quaternion is encountered, encode it as now. Encode UUIDs by adding the same <> delimiters around the current UUID string. When a string starting with < and ending with > is encountered, encode it with an extra < and > added at each end, and decode such strings by removing the added < and >. Vectors, quaternions, and UUIDs can then be uniquely identified by the undoubled delimiters and the appropriate internal format, and decoded directly to the appropriate type, as sketched below.

This would allow the encoding and decoding of all SLua types in tables without special intervention by scripters, significantly simplifying scripting such operations and greatly improving performance (one optimized pass through the data in C, rather than one in C followed by one of random scripter quality in SLua) when passing tables between scripts, or when storing and retrieving tables in Linkset Data. The only reason for keeping encode and decode as they are now is compatibility with external JSON operations, and even then, if vectors, quaternions, or UUIDs are involved, the proposed operations would likely be superior, since it would be necessary to make accommodations for these types on the remote end anyway.
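To make the string-escaping rule concrete, here is a minimal pure-Lua sketch of the proposed round trip for string values (escapeString and unescapeString are illustrative names, not part of the current lljson API):

    -- Sketch of the proposed rule, assuming it were applied to each string
    -- value during encode/decode. Only strings that could be mistaken for an
    -- encoded vector, quaternion, or UUID are touched.
    local function escapeString(s)
        if string.sub(s, 1, 1) == "<" and string.sub(s, -1) == ">" then
            return "<" .. s .. ">"   -- "<1,2,3>" becomes "<<1,2,3>>"
        end
        return s
    end

    local function unescapeString(s)
        if string.sub(s, 1, 2) == "<<" and string.sub(s, -2) == ">>" then
            return string.sub(s, 2, -2)  -- strip the added delimiters
        end
        return s  -- an undoubled "<...>" is a vector, quaternion, or UUID
    end

Under this rule, a decoder that sees a single pair of delimiters knows the value is a vector, quaternion, or UUID, and anything else is an ordinary string.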
13 · planned
Table lengths, concatenation, and probably other things!
Currently there are surely a lot of little parity problems. These are just some that I've noticed so far:

    local a = {1, 2, 3, 4}
    local b = {1, 2, nil, 4}
    local c = {1, 2, d = 3, 4}
    ll.OwnerSay(`a {#a}`)
    ll.OwnerSay(`b {#b}`)
    ll.OwnerSay(`c {#c}`)

a and b will return 4, but c will return 3. Getting the length of a table with dictionary elements requires looping through the entire table with pairs() each time, or a kludgy metamethod to make it work how one would 'expect', when a simple table.len() would be useful:

    local Tbl = {}
    Tbl.__index = Tbl
    Tbl.__len = function(len)
        local incr = 0
        for _ in pairs(len) do
            incr = incr + 1
        end
        return incr
    end

Likewise, list concatenation is a major thing in SL, allowing one to build up lists for llSetLinkPrimitiveParams, for instance, to save on function calls. This is entirely impossible in Lua: merging even two tables requires your own function, or another kludge, to say nothing of matching the functionality of SL, where multiple lists can be strung together easily:

    Tbl.__concat = function(a, b)
        if type(b) == "table" then
            for _, v in ipairs(b) do
                a[#a + 1] = v
            end
        else
            a[#a + 1] = b
        end
        return a
    end

As these are probably minor, basic functions that would get a lot of use, I feel like small things like this should start being added to the language, to prevent them from needing to be recreated in every script. The performance would also be higher if they were implemented in C on top of that. Lua is advertised as being an extremely tiny language, but that can be a little detrimental when we have to write even basic helper functions ourselves in our limited 64 KB of memory.
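In the meantime, merging can at least be pushed into the standard library rather than a hand-written loop; a minimal sketch, assuming SLua exposes Luau's standard table.move (appendList is just an illustrative helper name):

    -- Append every array element of src onto the end of dst.
    -- table.move(src, first, last, destIndex, dst) copies the range in one call.
    local function appendList(dst, src)
        return table.move(src, 1, #src, #dst + 1, dst)
    end

    local params = {1, 2, 3}
    appendList(params, {4, 5, 6})
    -- params is now {1, 2, 3, 4, 5, 6}

This still isn't as convenient as LSL's list + list, which is the point of the request, but it avoids a per-element loop in interpreted code.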
3 · tracked
Script Memory Limits Change
The present system of limiting script memory on a per-script basis gives scripters an incentive to create often incredibly inefficient workarounds when they want or need more memory than the limit allows, such as creating a large number of slave scripts and passing data back and forth via ll.MessageLinked. Simply increasing the limit to something more reasonable could alleviate this somewhat, but it doesn't really address the underlying problem: how to responsibly allocate memory per creation, whether that creation is a single prim or a collection of linksets. I therefore propose the following alternative.

Give each linkset a parameter, adjustable by anyone with modify rights; call it the Linkset Memory Limit (LML). Default it to a reasonable value, say 1 MB. Add this parameter to the collection of items that affect the linkset's LI, at a reasonable cost, say 1 LI/MB. (I don't know whether that cost is actually reasonable, but 1 LI/MB does make a mentally convenient conversion factor.) Per current practice, let the final LI be the max of the per-item LIs. Each script in the linkset then draws memory from the common pool defined by the LML.

The simplest implementation of this feature at the script level would probably be to change ll.SetMemoryLimit so it is capped at the memory currently available in the LML pool, with an argument of -1 meaning grab it all. This would allow a script to reserve needed memory and to free it when no longer needed. Adding ll.GetLML, ll.GetFreeLML, and possibly ll.SetLML (see discussion below), together with the existing parcel prim count functions, would constitute a sufficient set of functions. It might be possible to automate dynamically allocating memory to scripts from the common pool as needed, but I'm not prepared to say how desirable this might be. It would likely improve utilization of the common memory pool, but it could greatly complicate the handling and debugging of out-of-memory conditions, and I'm not sure how it would interact with attempts by a script to reserve needed memory in advance.

This new feature would give scripting creators the ability to make efficient use of as much memory as they might need, with a clearly visible cost that prospective object owners can see and budget against their available resources. It also gives owners with linkset modify rights the ability to adjust this cost within the limits of the scripts' ability to adapt. It entirely eliminates the creator's incentive to use inefficient workarounds that are costly to sim resources and performance, and it gives creators a reasonable incentive to limit their use of memory in order to improve the market appeal of their products and/or the land impact of creations they use themselves. It doesn't particularly affect the case for using HTTP and external servers for very large or shared datasets when HTTP delays are acceptable.

Since the LI of worn objects is not currently counted against any budget, keeping worn-object impact under control would require a per-agent LI limit as well. This limit would probably apply only to the LML LI, though for grins one might possibly make the case for applying it to other LIs as well (no doubt with much cussing by our more elaborately decked-out friends). A higher agent LML LI limit could become another perk of higher account levels.

Since dividing and merging of linksets usually occurs at creation time, and we want creators to manage this process, I would not recommend elaborate algorithms for managing the allocation of LML on merge or divide. It probably suffices on merge to keep the LML of the root of the merged object, and on division to give objects with a new root the default LML.

Allowing a script to change the LML of an object would mean that the script could dynamically change the LI of the object. This has both pros and cons: pros such as allowing a single script to adapt the object to present limits, or to set the LML of a newly divided/merged/created object to reasonable values; cons such as removing the owner's complete control over the LI of their rezzed-out objects and making marketplace data about LI less reliable. Perhaps a reasonable compromise would be to limit such functions to objects owned by the parcel owner, per other parcel functions. The prospect and limitations of any ll.SetLML function definitely need more discussion.

It would be nice if this feature also applied to LSL scripts, either when compiled to Mono or to the Lua VM, or possibly just when compiled to the Lua VM. If automatically applied to existing content, it would probably break a significant number of existing inefficient workarounds and other instances where the number of 64 KB scripts in the object exceeds 16. (Beware of furniture with nPose and AvSitter systems and many seats.)
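For concreteness, a hedged sketch of how a script might use the proposed API (ll.GetFreeLML and the pool-based semantics of ll.SetMemoryLimit are proposals from this post, not existing SLua functions):

    -- Hypothetical usage under the proposal; none of the pool semantics exist today.
    local need = 256 * 1024                  -- bytes wanted for a large job
    if ll.GetFreeLML() >= need then
        ll.SetMemoryLimit(need)              -- reserve memory from the linkset pool
        -- ... perform the memory-hungry operation ...
        ll.SetMemoryLimit(64 * 1024)         -- shrink back; the rest returns to the pool
    else
        ll.OwnerSay("Not enough free linkset memory; raise the LML or free some.")
    end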
8 · tracked