Scripting Features

  • Search existing ideas before submitting
  • Use support.secondlife.com for customer support issues
  • Keep posts on-topic
Thank you for your ideas!
[Feature Request] llLinksetDataWriteWithValidation()
Linkset Data (LSD) allows multiple scripts in the same linkset to read, create, update, and delete shared data. We currently have no LSD function that validates a write against the entry's previous value. Experience KVP updates do have a validation protocol with feedback, which can address race conditions between scripts in the same linkset, between different objects in the same region, or between objects grid-wide. The following function would satisfy the same need for LSD:

integer llLinksetDataWriteWithValidation(string name, string value, string pass, string original_value)

A new failure code constant, LINKSETDATA_VALIDATION, with value 6, will be needed for this function.

Functionality:

* If name does NOT exist, value is NOT an empty string, and original_value is an empty string, the entry is created and 0 is returned; otherwise 6 is returned.
* If name exists, value is NOT an empty string, original_value matches name's current value, and pass matches name's current pass, the entry is updated and 0 or 5 is returned; otherwise 6 is returned.
* If name exists, value is an empty string, original_value matches name's current value, and pass matches name's current pass, the entry is deleted and 0 is returned; otherwise 6 is returned.
* Other typical check failures (empty name, name not found, pass mismatch, etc.) return the same values they do today, before any validation handling is done.

The server already performs these checks in the background to determine which integer value to return from the existing LSD functions; there is simply no validation protection today. This feature adds validation protection for creates, updates, and deletes, which is a must in an environment where multiple scripts can modify an entry at the same time and create race conditions. Because it takes a pass argument, this function also becomes a de facto all-in-one write function for creates, updates, and deletes, something even KVP currently doesn't offer.
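A minimal usage sketch under the proposed semantics. llLinksetDataWriteWithValidation and the value-6 return code are hypothetical (they are this request); llLinksetDataReadProtected is the existing protected read, and the "visits" entry name and pass are just placeholders:

```lsl
// Hypothetical sketch: llLinksetDataWriteWithValidation() and the
// LINKSETDATA_VALIDATION (6) return code are the proposal above and do not
// exist yet. llLinksetDataReadProtected() is the existing protected read.

string PASS = "secret";

// Compare-and-swap style increment of a protected counter entry: retry
// whenever another script modified the entry between our read and our
// validated write.
integer incrementCounter(string name)
{
    integer result = 6; // LINKSETDATA_VALIDATION (proposed)
    while (result == 6)
    {
        string original = llLinksetDataReadProtected(name, PASS);
        string updated  = (string)((integer)original + 1);
        result = llLinksetDataWriteWithValidation(name, updated, PASS, original);
    }
    return result; // 0 (LINKSETDATA_OK) on success
}

default
{
    touch_start(integer n)
    {
        incrementCounter("visits");
    }
}
```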
6 · tracked

Touch Pointer Capture
There is an improvement that could be made to touch events to greatly extend their capabilities. This is a preliminary specification and suggestion that can be refined later via community/Linden feedback.

New functions

* integer llCaptureTouch(integer detected, integer mode)
* llReleaseTouchCapture(integer handler)
* integer llHasTouchCapture()

llCaptureTouch would be called within touch_start (and perhaps the touch event could be allowed too) to start touch capture. During touch capture the viewer would continue to pass touch events from across any surface, even outside the prim the touch started from.

Background

This feature is heavily influenced by my web development career; see https://developer.mozilla.org/en-US/docs/Web/API/Element/setPointerCapture . Pointer capture lets an element receive pointer events from outside its own boundaries, which is useful for many applications. A basic one would be a slider prim capturing touch events outside its boundaries, a problem that has plagued many scripted HUDs/UIs. Another would be draggable HUDs. As a longtime SL user, I am also especially looking forward to touch events across any in-world surface, which has significant applications for improving user interactions. For example, a user could start a drag to move furniture around a house while the script uses raycasting to check for walls so the furniture can place itself in an ideal spot, with the furniture actively showing a "ghost" representation of itself while the user is still dragging. This is a common, intuitive interaction seen in many games and other 3D applications.

Spec

## For llCaptureTouch, the integer mode constant is a bitfield of

* CAPTURE_WORLD
* CAPTURE_SCREEN
* CAPTURE_CONFIRM_TAP
* CAPTURE_PASSTHROUGH

## So how does a user know the pointer capture is ongoing...

* Change the cursor to a "dragging" cursor similar to web browsers (Jakob’s Law -- familiarity / existing mental model)
* Apply the "dragging" cursor to existing grab behaviours as well, to emphasize it

## ...and how do they exit it?

* (Default / intended behaviour) When the user stops holding down the mouse button and the touch_end event fires, the pointer capture is released
* The script can call llReleaseTouchCapture to release pointer capture (should scripts receive a touch_end here? I think yes, fire touch_end)
* Pressing ESC should be a familiar escape hatch: it always forces release of pointer capture and fires touch_end
* In CAPTURE_CONFIRM_TAP mode, however, the user needs to touch a second time to stop capture

What coordinate system is used by default?

If the touch event started from an in-world object, coordinates are absolute WORLD coordinates, as if you touched any in-world prim normally. If the touch event started on a HUD attachment, coordinates are SCREEN coordinates. There are three ways SCREEN coordinate capture could be implemented:

1) Simplest: pass mouse screen coordinates directly along and ignore any raycasting against HUD attachments (e.g. stop updating llDetectedTouchFace etc.; only llDetectedTouchPos is needed). The use case in a HUD is a bit different, and there isn't really a "world" to raycast against in a HUD.
2) Raycast only against the HUD the touch started from -- current behaviour, but send only mouse coordinates outside its boundaries / where the raycast fails to hit any other prim of the linkset.

3) Raycast against all HUD attachments -- this could allow HUDs to snap against the bounding box of another HUD, or allow for very interesting HUD-to-HUD interactions. For example, a temp-on-attach experience HUD could show the inventory loot of a dead monster or loot chest, and the user could drag items into an RPG/Minecraft-like inventory game HUD. The llDetected* data could be enough to work out the inventory slot grid and then communicate the item handover to the game HUD.

Privacy Concerns: I highly recommend 3) because it is extremely useful, but as a consequence it could also reveal which HUDs are attached, which is a privacy concern. The viewer could expose only select information, but that could also hinder intentional use cases like cross-HUD game inventories. Other checks could be applied, such as only letting scripts in the same experience see each other's full info. Another option is to allow only 1) when an in-world object requests pointer capture with screen coordinates, and 3) only when the capture started from a HUD; but that might prevent use cases like intentionally dragging a game object into a HUD (such as an inventory game HUD), so again, perhaps only within the same experience.

Overriding the coordinate system

## CAPTURE_WORLD

Using CAPTURE_WORLD allows a HUD to override the default coordinate system and instead use WORLD coordinates for captured touch events, e.g. dragging an item from a HUD to drop it into the world, showing a ghost representation during capture as a preview.

## CAPTURE_SCREEN

CAPTURE_SCREEN, on the other hand, would allow an in-world object to capture touch events in SCREEN coordinates, for example to drag an item into an inventory game HUD, among other use cases.

CAPTURE_CONFIRM_TAP?

I was thinking that another mode of touch capture could require a second click to confirm, and only then release pointer capture.

By default, without confirm tap, the behaviour is as follows (a script sketch is given at the end of this section):

* User holds down the mouse -- touch_start is fired
* Script calls llCaptureTouch; pointer capture is initiated
* All touches outside the prim boundary are passed to touch events
* User lets go of the mouse -- touch_end is fired and pointer capture is released

With confirm tap, the cursor changes to something like a hand with a secondary icon, e.g. like Location Select -- https://learn.microsoft.com/en-us/windows/win32/menurc/about-cursors . The behaviour is as follows:

* User holds down the mouse -- touch_start is fired
* Script calls llCaptureTouch(0, CAPTURE_CONFIRM_TAP)
* All touches outside the prim boundary are passed to touch events
* User lets go of the mouse -- touch_end is fired and pointer capture continues; touch() events still continue for ghosting/preview purposes
* User clicks again to confirm the location -- touch_start / touch_end are fired and pointer capture is released. llHasTouchCapture returns true only in touch_start in this case (otherwise it is false, as pointer capture in any other case is not possible when a touch has just started)

Confirm tap is just an optional extension of the feature. It is meant for potentially better accessibility (holding down the mouse is not necessarily easy for everyone) and alternative usability (click once on a button, it shows a furniture ghost outline snapping to the floor and avoiding intersecting walls, click once again to confirm the furniture location).
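A sketch of the default (non-confirm-tap) flow for an in-world draggable object. llCaptureTouch and CAPTURE_PASSTHROUGH are the proposal and do not exist; the ghost prim at link 2 is an illustrative assumption, and everything else is existing LSL:

```lsl
// Hypothetical sketch of the default capture lifecycle described above.
// llCaptureTouch() and CAPTURE_PASSTHROUGH are the proposal and do not exist
// yet; every other call is existing LSL.
// Assumes link 2 is a transparent "ghost" prim used as the drag preview.

default
{
    touch_start(integer n)
    {
        // Start capturing this toucher's pointer; pass raycasts through
        // ourselves so positions report the surface underneath (the floor).
        // The returned handle would only matter for an early
        // llReleaseTouchCapture(handle).
        integer handle = llCaptureTouch(0, CAPTURE_PASSTHROUGH);
    }

    touch(integer n)
    {
        // With capture active this keeps firing even after the pointer
        // leaves this prim; drive the ghost preview from the touch position.
        vector pos = llDetectedTouchPos(0);
        if (pos != TOUCH_INVALID_VECTOR)
        {
            llSetLinkPrimitiveParamsFast(2,
                [PRIM_POSITION, (pos - llGetPos()) / llGetRot()]);
        }
    }

    touch_end(integer n)
    {
        // Default mode: releasing the mouse ends the capture; commit the move.
        vector pos = llDetectedTouchPos(0);
        if (pos != TOUCH_INVALID_VECTOR)
        {
            llSetRegionPos(pos);
        }
    }
}
```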
Privacy Concerns: A different cursor should be used for good UX, and on-screen instructional text could be shown, similar to how mouselook shows instructions. The concern is a rogue script, such as a malicious vendor script, using this to track mouse coordinates. On-screen text and a cursor indicator should help show that a script is still capturing touches.

How does a script know how/if pointer capture was released, to avoid a confusing intermediate state? The viewer/sim could have a timeout on confirm tap mode, or it could be open-ended until the agent leaves the region or logs out. Scripts could also have a way to tell how capture was released on touch_end; I was thinking llHasTouchCapture could return a constant instead of a boolean, e.g.:

* CAPTURE_NONE (no capture happened / initial state)
* CAPTURE_ACTIVE (actively capturing touches)
* CAPTURE_CANCELLED (user pressed ESC / cancelled capture)
* CAPTURE_RELEASED (released via llReleaseTouchCapture)
* CAPTURE_END (touch capture ended successfully, by default by letting go of the mouse button / normal touch_end, or on the second click in confirm tap mode)

CAPTURE_PASSTHROUGH?

In some cases, a touch capture started from an object might want raycasts to pass through the object itself. For example, a piece on a chessboard no longer cares about touches on itself, only about touches on the board below it. This avoids hacky raycast workarounds such as setting the piece invisible / out of the way / phantom. Another example is moving furniture to another location: you want to raycast onto the floor of the building rather than against the furniture where it currently sits, especially if the user is only making a small adjustment.

Multi-users

llCaptureTouch could take an integer detected argument, since touch events can report multiple detected users at once, to indicate which user's touches to capture, matching the touch event's detected index. llReleaseTouchCapture could require an integer handler returned by llCaptureTouch, similar to listen handles. This is only needed if we want to support multiple captures by different users, for example a strategy board game that multiple users interact with at the same time.
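A sketch of that multi-user handle bookkeeping, assuming the proposed signatures (llCaptureTouch returning a handle per detected toucher, llReleaseTouchCapture taking one); the strided-list bookkeeping itself is ordinary LSL:

```lsl
// Hypothetical sketch: one capture per toucher, tracked as a strided list
// of [avatar key, capture handle]. llCaptureTouch()/llReleaseTouchCapture()
// are the proposal and do not exist yet.

list gCaptures;

default
{
    touch_start(integer n)
    {
        integer i;
        for (i = 0; i < n; ++i)
        {
            integer handle = llCaptureTouch(i, CAPTURE_WORLD);
            gCaptures += [llDetectedKey(i), handle];
        }
    }

    touch_end(integer n)
    {
        integer i;
        for (i = 0; i < n; ++i)
        {
            integer idx = llListFindList(gCaptures, [llDetectedKey(i)]);
            if (idx != -1)
            {
                // Redundant in the default flow (capture ends with touch_end),
                // but shows how one user's capture could be cancelled early.
                llReleaseTouchCapture(llList2Integer(gCaptures, idx + 1));
                gCaptures = llDeleteSubList(gCaptures, idx, idx + 1);
            }
        }
    }
}
```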
Usecases

The use cases go way beyond this list, but here are a few off the top of my head:

* Game HUD with an inventory grid (RPG, Minecraft, modern Resident Evil, or Diablo-like) -- what happens when a user wants to drag an item from the inventory grid to another window, another HUD, or even drop it on the ground?
* Game object dragged into that inventory grid -- what if the game wants to allow users to drag a game object into their inventory?
* Strategy game board -- there are rezzed game pieces; a touch could start from a piece or from the board, and tracking touches across different objects can carry different intentions and meanings
* Ghosting/preview -- showing a ghost/preview of a drag operation is a very common design pattern in 3D worlds / games. Because the script can track touches continuously, it can show a preview of an action based on where the user is dragging -- see https://www.youtube.com/watch?v=_zxU1khDXcU
* Smart furniture placement -- a hypothetical way to place furniture smartly like a game would, showing a ghost preview, snapping to walls and floors, moving away to avoid intersections using raycasts, and showing the preview location
* HUD to drop an element after confirming its location: https://www.youtube.com/watch?v=N-Qur11cvYQ
* Tool HUDs that can avoid the workaround of fullscreen prims on the HUD to capture screen location: https://www.youtube.com/watch?v=9mPc_9yX2mM
* Reliably tracking screen coordinates for HUDs, since it takes a moment before a prim can resize itself to fullscreen to capture all input: https://www.youtube.com/watch?v=ZAP-PZJC7v4

Feel free to submit new use cases following the format above ("name -- single sentence if possible").

Wishlist while we are enhancing touch

* integer llDetectedMeta(integer d) -- a bitfield indicating whether the SHIFT and/or CTRL key is held down. This is a very common UI design pattern for extending functionality. For example, while dragging in capture mode, the SHIFT key could be held to enable snapping to a grid when placing furniture smartly with a script, among many other use cases. llDetectedMeta could be used outside of capture mode for normal touch events too, enhancing the SL platform as a whole. Alternative name: llDetectedKeyboardMeta. A small sketch is given below.
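A sketch of how the llDetectedMeta wishlist item might be used; the function itself and the META_SHIFT / META_CTRL bit values are assumptions for illustration, not part of any existing API:

```lsl
// Hypothetical sketch: llDetectedMeta() and the META_SHIFT / META_CTRL bit
// values are assumptions for illustration; none of this exists yet.

integer META_SHIFT = 0x1;
integer META_CTRL  = 0x2;

// Snap a dragged position to a 0.5 m grid, but only while SHIFT is held.
vector applySnap(vector pos, integer meta)
{
    if (meta & META_SHIFT)
    {
        float grid = 0.5;
        pos.x = llRound(pos.x / grid) * grid;
        pos.y = llRound(pos.y / grid) * grid;
    }
    return pos;
}

default
{
    touch(integer n)
    {
        vector target = applySnap(llDetectedTouchPos(0), llDetectedMeta(0));
        llOwnerSay("Preview position: " + (string)target);
    }
}
```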
5 · tracked

Add function to get a list of objects in the region
Sensors and listeners are limited in what objects they can detect, and both may require extensive setup and overhead to pick up all objects of interest in a region. I suggest a new function to retrieve a list of UUIDs of objects that the owner has permission to return, whether at the region or parcel level, or the owner's own objects otherwise.

list llGetObjectList(list filters, integer start, integer count)

Returns a list of object keys matching the provided filters. The list should be in the same order that the simulator stores object data, which presumably has the oldest objects first and the newest objects last.

Some possible filters:

* OBJECT_LIST_FILTER_BY_NAME, string pattern
* OBJECT_LIST_FILTER_BY_OWNER_KEY, key id
* OBJECT_LIST_FILTER_BY_PARCEL, vector pos
* OBJECT_LIST_FILTER_BY_TYPE, integer type

pattern would ideally use the same format as llLinksetDataFindKeys. type would ideally align with the object types used by llDetectedType.

Objects owned by estate managers always have permission to return any object in the region, so they should be allowed to see a list of all objects in the region, with or without filters. Objects owned by the same resident/group as parcel(s) in the region have the same privilege. Objects granted object-return permission by a group role on group-owned land require the owner to be present in the region to return objects, so the same behavior should be expected for this function.

Applications:

* Updating all instances of an object in a region, without requiring a listener or even an active script in the objects
* Finding and returning objects with high resource consumption (land impact, script memory, etc.)
* Finding one's own objects when you don't have permission to return objects for the parcel/region
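A hypothetical usage sketch, assuming the proposed signature and filter constants above (none of which exist yet); the "Vendor" name pattern is a placeholder, while llGetObjectDetails, OBJECT_NAME, and OBJECT_PRIM_EQUIVALENCE are existing LSL:

```lsl
// Hypothetical sketch: llGetObjectList() and the OBJECT_LIST_FILTER_*
// constants are the proposal above and do not exist yet.
// llGetObjectDetails(), OBJECT_NAME and OBJECT_PRIM_EQUIVALENCE exist today.

// Page through the region's objects owned by a given resident whose names
// start with "Vendor", reporting each one's land impact.
reportVendors(key owner)
{
    integer start = 0;
    integer page = 100;
    integer count = page;
    while (count == page)
    {
        list batch = llGetObjectList(
            [OBJECT_LIST_FILTER_BY_OWNER_KEY, owner,
             OBJECT_LIST_FILTER_BY_NAME, "^Vendor"], // llLinksetDataFindKeys-style pattern
            start, page);
        count = llGetListLength(batch);
        integer i;
        for (i = 0; i < count; ++i)
        {
            key id = llList2Key(batch, i);
            list details = llGetObjectDetails(id, [OBJECT_NAME, OBJECT_PRIM_EQUIVALENCE]);
            llOwnerSay(llList2String(details, 0) + " -- land impact " + llList2String(details, 1));
        }
        start += page;
    }
}

default
{
    touch_start(integer n)
    {
        reportVendors(llGetOwner());
    }
}
```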
9 · tracked