Add GDPR and European law compliance for data & privacy, bots, etc.
under review
Woolfyy Resident
Second Life is not compliant with European law / the GDPR, with bots redirecting groups to Discord, or systems having links to HTTP servers, etc. Legally, compliance must be opt-in, with clear consent from the user, i.e. a full explanation of the use and visibility of data going outside, plus a record of when, and to what, each user consented.
It is time for Linden Lab to regulate this, or it could face serious fines as well as legal constraints from Europe. In the same way, the recent laws signed by President Biden could also result in serious legal problems.
Ideally there should also be a logo or similar to explicitly indicate that a group is linked to Discord or any other external medium.
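The "tracing of when and what got consent" requirement asked for above could be modeled as a small consent ledger. A minimal Python sketch of the idea; the class, the avatar name, and the purpose string are all hypothetical, not anything LL actually implements:

```python
from dataclasses import dataclass
import time

@dataclass
class ConsentRecord:
    user: str          # avatar name (hypothetical identifier)
    purpose: str       # what the data will be used for
    granted_at: float  # Unix timestamp of the opt-in

class ConsentLedger:
    """Records who opted in, to what, and when; nothing is implied by default."""
    def __init__(self):
        self._records = {}

    def grant(self, user, purpose):
        self._records[(user, purpose)] = ConsentRecord(user, purpose, time.time())

    def has_consent(self, user, purpose):
        # Opt-in by default: absence of a record means "no".
        return (user, purpose) in self._records

ledger = ConsentLedger()
ledger.grant("Woolfyy Resident", "relay group chat to Discord")
assert ledger.has_consent("Woolfyy Resident", "relay group chat to Discord")
assert not ledger.has_consent("Someone Else", "relay group chat to Discord")
```

The point of the timestamp is precisely the "tracing" part: each grant can later be shown to a regulator as evidence of when, and for what, consent was given.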
Gwyneth Llewelyn
Second Life — both the *product* and the *viewer* — *is* compliant with European law and the European General Data Protection Regulation (GDPR). Please stop claiming the contrary, unless you wish to present proof in the form of a court ruling on the subject. To the best of my ability, I couldn't find any. In Europe, entities, public or corporate, are also *innocent until proven guilty* :)

The GDPR is more concerned with *informing* users about what happens to their data, and less with enforcing what can legally be done or not.

Ultimately, what is *forbidden* (and that's the reason why fines are applied to non-compliant applications, or to the companies developing them) is failing to inform users what happens with their data. In other words: it's not forbidden to send a user's private data to a third party (which may do lots of things with that data, many possibly not entirely legal... that is irrelevant) so long as the user is informed about it. What *is* forbidden is to do it anyway without the user's explicit consent — that's what happened in the past with the many fines applied to non-compliant products, services, and companies.

Linden Lab states very clearly what is done with your data when you use the SL Official Viewer to connect to the SL Grid(s).
Now, if a *bot* sends your data, or some of your data, to unknown third parties, without allowing you to explicitly give it consent, then the bot is not complying with the GDPR. For instance, it used to be usual for event hosts holding discussion events in text chat to ask permission from all participants to post the chat log somewhere (on the community forums, for instance, or by copying it onto a notecard to distribute in-world). Sometimes, one or two users would not give that permission, and, as a consequence, some chat logs would be redacted to edit out those users' comments.

With 'bots and other similarly automated tools, it might be harder to ask for permission to send a resident's data elsewhere. *But that is a problem for the 'bot owner, not Linden Lab.* The only thing that LL is supposed to do, in this circumstance, is to facilitate the ability for users to opt in to whatever real-world data is being captured by the 'bot itself.

It is questionable whether a 'bot may access other residents' *real-world* data. That's one of those cases where there would have to be actual litigation to get a court's view on the application of the law. My guess is that 'bots cannot capture any "real-world" data, so they're fine — after all, the LL ToS states that, once you're in the public areas of SL, you automatically grant everybody a license to view whatever content is produced. Technically speaking, text is also "produced content"; therefore, whatever you type in SL is supposed to be publicly available to everybody else in SL.
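The "redacted chat log" practice described above is easy to picture as code. A minimal sketch, with hypothetical names, of filtering a log down to the participants who explicitly agreed to publication:

```python
def redact_log(messages, consented):
    """Keep only lines from participants who explicitly agreed to publication.

    messages: list of (speaker, text) tuples; consented: set of speaker names.
    """
    return [(who, text) for who, text in messages if who in consented]

log = [("Ada", "hello"), ("Bob", "hi"), ("Ada", "shall we post this?")]
public = redact_log(log, consented={"Ada"})
# Bob never gave permission, so his lines are edited out before posting.
assert public == [("Ada", "hello"), ("Ada", "shall we post this?")]
```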
Now, the argument is... what if I *capture* such text and send it someplace else where the LL ToS does not apply, such as Discord (but it could also be WhatsApp, or a non-LL wiki, or a personal blog...)?

Theoretically, again, the special license automatically granted to the content you create in SL *only applies to SL*. It might be argued that capturing text chat (or recording voice chat!) and sending it somewhere else without explicit permission is, indeed, a violation of the other person's copyright (but not necessarily their *privacy*). That's why the current procedure is "ask first, post later, restrict to those who have explicitly given consent". The same obviously applies to textures, 3D models, etc., with an even more special exception carved out for snapshots and machinima, which *can* be posted "outside SL" without requiring special authorisation, so long as the scene is entirely captured on publicly accessible land.

That said... it's not up to SL to control what you do with content generated inside SL and brought *outside* SL. What LL's ToS says is only this: to join SL, you need to agree to a special set of mutual licensing between all participants in the same virtual environment, which is only valid *inside* that virtual environment. Outside it, the usual legislation applies, according to jurisdiction; in other words, copyright and authorship laws regulate what can and cannot be done with content extracted from SL. And that ought to apply to text, too, since text *is* protected in most jurisdictions under the auspices of copyright & authorship laws.

Again, note that none of the above is related to *privacy* laws. It's just creative-content law.

And, I'm sorry to contradict you, but there are no privacy laws applicable to *public* content — i.e. what you write in SL, which is publicly written on a *public* channel, is, well, *public*. Using that content inside SL does not relinquish your copyright and authorship rights over what you've written; the ToS only activates an automatically granted license to your content for other SL users while they're connected to the virtual environment. What happens beyond that is not regulated by the ToS. It's just copyright law. And, unless given special permission, you theoretically cannot copy that text (even if attributing the "authors" of such text!) and post it anywhere else. That most certainly includes Discord.
A 'bot that just copies *publicly written words* from one chat group to an external system — be it Discord, be it something else — is *not* violating any *privacy* laws. It's "only" violating copyright law, if the 'bot was not explicitly given permission for that. It is also irrelevant whether there are obvious mechanisms to tell the 'bot to *only* copy text from residents who have given permission (there are, but let's assume there aren't, for the sake of the argument!). If the 'bot cannot be trusted to only copy authorised content, it is not only in violation of real-world laws, but also in contempt of the ToS, and, as such, should be suspended/banned from SL altogether.

The 'bot owner, beyond that, would also be liable to be sued for content piracy in the real world, and, assuming that such a litigation case actually reached a court, LL would respond to a request by the court to give out the name and address of the user owning the 'bot in question — so that person would be the target of a lawsuit (in some jurisdictions, piracy may even be a *crime*, so it might be more serious than just paying a fine; it would depend on the jurisdiction and the gravity of the offense).

That said, to comply with international copyright & authorship laws, group chats would only need a checkbox saying, "I authorise all text on this group to be sent elsewhere, outside the environment of Second Life". Such a checkbox would probably need to be linked to a web page (or potentially a notecard or similar mechanism) where the actual usage of the text outside SL is described. And, according to European law, such a checkbox *must* be opt-in by default — you *cannot* assume that people "automatically" grant others the right to use their content, even if such content is "merely text". Doing so would, indeed, violate a lot of laws — and remember, it's not only Europe that has such laws; these days most of the Western world (and, in the US, especially California!) has some legislation regulating content-related rights in the digital world.

Now, where does the GDPR enter this equation?
Imagine, for a moment, that the 'bot would, indeed, get permission (via the checkbox) to copy someone's chat to Discord. But what the 'bot owner would *not* tell anybody was that their 'bot could actually hack into other people's computers, figure out their address/location, their computer fingerprint, and a plethora of other data which is not publicly accessible — just harvested by the 'bot "because it can" — and, worse than that, a constant stream of such data would be sent to a third party not confined to the European Union (such as Discord).

Now *that* is a privacy violation! In such a scenario, if discovered, the 'bot owner would be liable to the full extent of the GDPR, and I agree that they would have to pay hefty fines for such a violation, and — who knows? — possibly even face jail time (again, depending on the degree of the offense and the jurisdiction). *The GDPR was enacted precisely to prevent 'bad actors' from sending non-public data about European citizens to third parties without their explicit knowledge and full permission.*

This is what happens, indeed, when Apple or Google capture non-public data (such as location information) without their users' explicit knowledge and without their explicit permission. The same also happens when web pages use cookies or any other similar tracking mechanisms which extract non-public data from individuals, without telling them what they're doing, what they're going to do with that data, or where they store it.

In the 'bot scenario, it's clear that the 'bot owner is forced, by the GDPR, to *fully disclose* what data is being gathered — beyond the text chat, that is — how it is being processed, where it is archived, who is given access to it, and so forth. *And all that must be written in advance, before the user clicks on the checkbox — it cannot be otherwise.* That's the "informed consent" bit — people *must* know what they're committing to when they use a service, since it's very easy to capture non-public data outside the scope of such a service and funnel it into data-crunching algorithms.

So... no 'bots capturing chat without permission, period. No 'bots capturing any non-public data about users, period. That's a no-no, and liable not only to lawsuits, but even (possibly) to criminal indictment.

But what about Linden Lab? How are they a "concerned party" in all of this, and thus liable to the threats you've made about terrible things that may happen to LL if they "fail to comply with the GDPR"?
It's simple, really. LL *would* be accused of violating the GDPR if — and only if — the Second Life Viewer, or anything inside the SL Grid, could be easily used to extract such non-public data and send it to third parties without the user's knowledge (much less their permission!).

But we're talking about a 'bot, *not* the SL Viewer.

Also, we're not talking about the *text* sent to Discord. That, as said, is covered under copyright & authorship laws, and, as said, the checkbox plus a page/notecard explaining what will happen to the text would suffice. Instead, we're talking about a 'bot that has the possibility of penetrating the remote computers of all participants in the text chat and extracting data from them, *without* their knowledge. This is actually even worse than a mere "privacy violation" — it's invasion of property, a cybercrime in most jurisdictions.

Needless to say, such 'bots don't exist :-) (but we're talking hypotheticals here).
But what if things are slightly less obvious? What if the 'bot owner claims that they are *not* directly attacking any computers, but instead tapping into a stream of data captured via the SL Grid, which — for some weird reason — already does that on the 'bot owner's behalf? In other words: let's suppose that the communication between the viewer and the SL Grid somehow sends personal data from the user to LL's SL Grid and, at the same time, also retrieves such personal data *from* other SL residents. The "normal" viewer could hide all of that inside its code, and regular users would never know; a 'bot, however, could tap directly into the communications and extract all of it for later storage.

It is conceivable that, in such a scenario, Linden Lab would be given fair warning to close down such "information leaks" being so easily accessible to third parties, and possibly a reasonable amount of time to fix their code. After that time, if LL continued to ignore the issue, they would be in contempt of European law, and could not only be fined but, more likely, have their services made unavailable in Europe until they agreed to comply.

This is, indeed, what companies such as Apple, Microsoft, Google, etc. have faced. They *did* capture personal information from their users to do tracking & profiling; they did *not* tell anyone what was going on; when the European authorities informed them that they were violating the law, they ignored such 'threats' until, well, they had to face a court of law, present their arguments (the hardest thing to explain was not *how* the data was captured, nor even how it was being processed, but rather: "why didn't you tell your users what you were doing with their data?" — the answer being, most likely, that they didn't want their users to know, and *that* is what is highly forbidden, or even criminal, under European law), and face the consequences of the sentence.

Therefore, it could be argued that LL, by refusing to shut down a data stream containing private data, was *at least* negligent, but could even be considered an accomplice of the 'bot owner — a facilitator, so to speak, of the 'bot owner's illegal activity. And, as such, LL would be liable to lawsuits against them, even if they weren't the actual party performing the privacy breaches. They weren't — but if they made it too easy and refused to change it, then they would also be at least partially responsible.

This is, for instance, why Apple and Google are not very fond of allowing apps that show pornography to their users, to give a stupid example. Pornography for adults is *mostly* legal across many jurisdictions. However, there is always the possibility that a child borrows their parents' mobile phone and gets caught watching pornography — truly nothing that hasn't been done in the past! Apple and Google are *not* the "criminals" here, for selling/showing pornography to minors. However, they made it too easy for minors to grab their parents' mobile phones and simply use them that way. It would be like having guns in the house but giving the children the keys to them. These companies could, in theory, be accused of being negligent, and be required — by a court of law — to make sure that no child can ever watch pornography, even on their parents' mobile phone. Some jurisdictions allow parents to explicitly give permission to their children to watch pornography, but not all do. And, to make matters even worse, both Apple and Google receive fees from pornographic content sold through their stores, so, in a sense, Apple and Google are financially benefitting from *allowing* pornography apps in their stores — which makes them also *partially* responsible for the "crime". As such, to avoid any problems, pornography is banned from the Apple and Google shops — even if, under certain circumstances, it could legitimately be shown or sold. It's just too dangerous for Apple & Google to expose themselves to the risk that "something might go wrong".

There is a reason for the "Disneyfication" of a lot of products and services available via the Internet — it's the safest way to guarantee that no laws are ever broken, even if the content providers or infrastructure providers are not really part of the scheme.
Anyway, I digress. Let's go back to Second Life, and look at the *reality* of what's going on there.

*Fact: Linden Lab doesn't capture any private data from their users and send it to other viewers.*

It's not merely a question of repeating what LL already claims on their ToS and related articles. *We have the source code of the viewer.* If such data were being extracted and sent either way, we would *know*. 'Bots are unable to capture data that doesn't exist. So, that fact can be easily demonstrated by just looking at the code; or, conversely, any alleged negligence and/or malicious intent on LL's part is easily disproven by noting that LL's code does not allow any private data to be extracted and transferred by their viewers. That should be enough, in a court of law, to show that LL is not liable for any "leak" of the private user data they store; and since we have the source code of *all* versions of SL (well, almost all, I'd say), LL could even prove *retroactively* that no data had been captured and/or leaked in the past using their viewer.

But what about malicious TPVs, or, for that matter, malicious 'bots?
TPVs are easier to deal with: since the source code for SL is under the GPL, so must TPVs be, which means that *their* source code is auditable as well. Indeed, to be listed as an *official* TPV, there are several requirements that have to be met. Those who do *not* comply with such strict requirements — by tampering with the code! — can be excluded from the SL Grid, at the Lab's discretion, without even a warning, because wherever the ToS rules, it is absolute :)

We all know about 'fake' TPVs 'pretending' to be legitimate which allow all sorts of malicious things — and which, inevitably, *will* be banned from the SL Grid. While one may argue that some malicious TPVs *may* connect to the grid for a period of time before LL bans them, it's a fact that LL does all it can to prevent that from happening and, as quickly as it is able to, closes the doors on the culprits. That is a "reasonable" response which would please any court of law. It's like saying, "look, I always lock the doors of the bank, and have a security system installed with cameras and connections to security providers, and all secret documents from my clients are hidden in a safe, of which only I know the combination, and there are always guards on the premises — nevertheless, the thieves managed to get through all that, bribing a guard and blowing up the safe with dynamite during a heavy thunderstorm, so nobody noticed the noise." Even if those clients were to sue the bank for negligence in protecting their most secret documents, the bank would be acquitted, by showing all the precautions it had taken to avoid being robbed. Alas!... Thieves, sometimes, are cleverer (or have access to tools, bribes, etc. which cannot be foreseen). All the bank can do is follow all established business practices in safeguarding access to the vaults, but... there is no security solution that works 100% of the time.

Linden Lab is in the same position. Sure, we know that there continue to be malicious TPVs available for download from "certain" websites (which will be up one day and gone the next), and that there are a few malicious users downloading them, connecting to the SL Grid, and violating the ToS in every possible way. But they won't be able to do it for long. As soon as a new malicious TPV is released and tested, LL's security team starts its internal procedures to validate that specific TPV, tries to understand how to correctly identify it, and blocks it as soon as possible. That's all we can demand from LL; there will *always* be malicious actors who manage to get an edge over LL's own teams and still get their viewers to connect. They're just not many, and they can't log in forever; as time passes, it becomes more and more difficult to 'crack' into LL's grid and bypass all its security, and many malicious developers have simply given up, defeated, because the time and cost of constantly developing ways to bypass LL's security grows and grows, and the results are probably not worth all that effort.

There is, however, a catch. The above assumes that malicious TPVs (or 'bots) can, effectively, *retrieve private RL data from other residents*. Now, this is *impossible* — you can only retrieve what the SL Grid allows you to retrieve, and such private data *never* even comes close to the grid (it's stored on servers unconnected to the grid). There are no bugs to exploit, because the data is not there in the first place.

It would be like claiming that you could devise a malicious TPV that, once you log in to the grid, enables you to steal from the bank accounts of any resident who happens to be in your field of sight. But that's impossible, because the information about payment methods is *not* on the grid. Griefers and black-hat hackers can *claim* that their viewer has such a mechanism built into it, but such claims are simply *lies* to fool the gullible — there is simply *no way* to capture data that doesn't exist in the first place!

Granted, it is *arguable* that *some* malicious TPVs might be able to induce their users to enter payment data into them, which would then be forwarded to the criminals. Again, this is something that *can* be boasted about, but is very likely untrue. And the reason is simple: you don't enter any payment data into the SL Viewer. What you do, instead, is enter your payment data into LL's *website*, which the viewer can then tap into to emit a L$ buy order. But the data *never* leaves that special LL webserver — which is not connected to the grid anyway. What is being claimed, therefore, is that a malicious TPV *also* includes a fake link to a fake page which looks exactly like LL's own page, and then uses simple phishing techniques to lure innocent victims into typing their payment data there. But that will *only* work for the person actually using the malicious TPV. *Their* data might be compromised. But not everybody else's.

Thus, the scenario of the 'bot copying text *as well as private data* to Discord, or someplace else where neither the ToS nor (potentially) the GDPR applies, is an impossible one. The 'bot cannot copy data that isn't available. There is not even a requirement to give such a 'bot special permission to access your private data, because it cannot access it anyway. All it *can* access is what a legitimate viewer can, and that means whatever can be captured from the stream of communication data between the viewer and the server. 'Bots can only "see" what a normal, regular user can "see" in their own viewers; not "more". There is no "more" to see!

That's why I'm often so confused by any claims that "LL is violating the GDPR by allowing 'bots to extract private data from their users". They are *not* — such data is not stored on the grid, so even the most devious hacker could not retrieve what is not there. It's not as if every avatar carries with them a magic notecard where all their private data is stored, and, somehow, a 'bot is able to extract that notecard from *other people's* inventory without telling anybody. While the whole notion of "opening someone else's inventory" is also impossible, even if it weren't, there is no such notecard — nor any kind of asset, visible or otherwise, where real-world, private data is stored, in any possible way.

It is also often argued that 'bots can, thanks to their magic properties, know the IP addresses of other users, and, through those addresses, easily figure out their RL locations. Again, that's a fallacy, because the SL Grid communication protocol does *not* include IP addresses from other users. Sure, you get your *own* IP address, and those of whatever grid servers (simulators and asset servers) have been contacted by *your* 'bot or viewer. But nothing is transmitted from *other* users.

It is conceivable that LL does, indeed, store such IP addresses for its own statistical purposes (after all, the ToS says so, and we've agreed to it). But it is one thing to have malicious crackers attacking the statistics servers and accessing LL's databases — i.e., LL being the victim of a serious cybercrime. It is quite another to claim that, somehow, such information can be magically retrieved through a 'bot. It cannot. That data doesn't exist on the grid, and, as such, cannot be retrieved by a 'bot.
We all know it's possible to capture people's IP addresses using media-on-a-prim, or a media stream, or even by innocently asking a gullible user to click on a link to a website somewhere, where the malicious actors can retrieve both the IP address and the UUID (and name) of the avatar. That's all true and correct. But Linden Lab is not responsible for what happens *off-world*. All they do is explain that, by opening any such links, you're no longer on the SL Grid, nor on any of LL's own servers, and, as such, the ToS doesn't apply. And they made it so that all these options are opt-in — you cannot be *forced* to click on a link you don't want. That's as far as LL's responsibility goes — duly informing users that whatever they connect to outside SL is *not* protected under the ToS.

So, what exactly *is* that secret private data that is somehow being captured by other residents and their malicious 'bots, and that would possibly fall under the auspices of the GDPR (or other laws)? Well, all I can think of — based on my own circle of acquaintances, reading the forums, and so forth — is that residents are not aware that their current grid location can be tracked (perhaps not with extreme precision, but it *can*), and that, to the astonishment of many, a 'bot can "know" what attachments you're wearing on your avatar. Oh, and don't forget the mysterious UUID, which anyone can capture in SL!

Let's analyse this, step by step.
Aye, indeed, you can track an avatar without their knowing about it. Perhaps deep inside your fantastic mesh head there is a script which periodically 'pings' an external server with your location, so that the head's content creator knows which popular areas you're visiting, in order to figure out where to put more ads or something. While I have zero evidence of the existence of such scripts, it's *possible* they exist.

A similar technique is the visitor counter. Counters can be placed inside *any* prim and made invisible; no avatar will know the visitor counter is there, unless they know where to search for it. All major shops — and possibly almost all venues out there — generally want an idea of how well-attended their shop or event is, and that means having visitor counters. Visitor counters do not merely register that "someone is here". They can capture the avatar's precise location, their name, and whether they're wearing scripted attachments, for instance. And by cleverly combining arrays of visitor counters, you can get a pretty good idea of how a specific avatar moves across the grid. Visitor counters are ubiquitous, and collect vast amounts of statistics, all the time.
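The "combining arrays of visitor counters" point can be sketched in a few lines. This is a hypothetical illustration in Python, not actual LSL or anything any real counter product does; names, regions, and coordinates are invented:

```python
class VisitorCounter:
    """Hypothetical in-world counter: logs (who, when) at a fixed spot."""
    def __init__(self, region, position):
        self.region, self.position = region, position
        self.sightings = []   # list of (avatar, timestamp)

    def detect(self, avatar, t):
        self.sightings.append((avatar, t))

def movement_trail(counters, avatar):
    """Merge an array of counters into a rough, time-ordered trail for one avatar."""
    trail = [(t, c.region, c.position) for c in counters
             for who, t in c.sightings if who == avatar]
    return sorted(trail)

a = VisitorCounter("ShopNorth", (12, 40)); a.detect("Ada", 100)
b = VisitorCounter("ShopSouth", (80, 10)); b.detect("Ada", 250); b.detect("Bob", 90)
assert movement_trail([a, b], "Ada") == [(100, "ShopNorth", (12, 40)),
                                         (250, "ShopSouth", (80, 10))]
```

Each counter on its own only sees "someone is here"; it's the merge step that turns scattered sightings into a trail.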
Even better than visitor counters are 'bots. All viewers list the avatars that are in the region — on the mini-map, on the list of nearby people in chat, by triggering alerts when certain people log in or out near your avatar, and so forth. In order to track all of that, the viewer *does* store a list of avatars in the region, and that means a 'bot can *also* retrieve that list — usually very quickly. It's actually much less laggy to capture that information via a 'bot (which covers the whole region in a matter of seconds — and doesn't even need to *move* for that!) than by using arrays of visitor counters spread across a region. 'Bots are just specialised viewers, and retrieve that information as easily as any viewer does.

The lack of understanding of the kind of data the viewer requires to render scenes also leads to some misinformation. When the statistics/metrics 'bots were engaged in a huge scandal in 2023, one of the things that shocked residents most was that 'bots could know what you're wearing! Why — obviously! How else would *viewers* be able to render your avatar, if they didn't know what is attached to it? This is the kind of thing that never crosses people's minds — how does the viewer *know* how to render scenes? The most obvious things are somehow rarely mentioned. While it's true that the viewer can only retrieve certain information (and not more!), it's also true that, in order to work *at all*, a viewer *needs* some information.

That also reminds me of how people were accused of "stealing full-rez textures" — presumably using a malicious viewer and/or a 'bot and/or some kind of weird device — when, in fact, nobody *needs* to "steal" textures. They're retrieved by the viewer and saved in your cache. Obviously! How could the viewer know how to apply textures to some object or mesh, if it didn't have the textures? And, naturally, if you want your object to be rendered with high-rez textures, then those have to be downloaded first — and archived for later re-use. You just need to go through your cache and pick the textures you want, in the fullest possible resolution. SL cannot work otherwise!

And if a texture is just another file on your disk... why, then you can copy it and re-upload it as your own. It's sad, but that's what is known as the "analogue hole": if you can see something on *your* computer, it means it has been downloaded to it, in some way — and if you know where it is stored, you can always make a copy.

I remember it being argued that textures should *never* be stored on disk *unencrypted*, from where they can so easily be "pirated". Instead, they should be encrypted with a password that nobody knows but... but whom? Because, ultimately, whatever is encrypted on disk has to be decrypted before being sent to texture memory; that means that, whatever encryption mechanism (and key) is used to encrypt the textures, the reverse process has to be in place *somewhere*, since you cannot display an *encrypted* texture. You need the decrypted version for that. And that decryption happens "somewhere" — in memory. So you can always write some application which retrieves the texture from memory as easily as from disk — encrypted or not — since, at some moment in the rendering pipeline, all textures *must* be in place, and that means they must have been decrypted first...

Last but not least... UUIDs. It is often claimed that UUIDs are some sort of "magic private number" identifying an avatar, and that collecting them is "a privacy issue".
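The futility of at-rest texture encryption can be demonstrated with a toy cipher. The XOR below is a stand-in for any real encryption scheme; the point is only that the renderer must hold the plaintext at some stage, so it can always be captured there:

```python
# Toy XOR "encryption" standing in for any at-rest texture encryption scheme.
def xor(data, key):
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

texture = b"RAW-PIXELS"          # hypothetical raw texture data
key = b"secret"
on_disk = xor(texture, key)      # the encrypted cache file
assert on_disk != texture        # unreadable at rest...

in_memory = xor(on_disk, key)    # ...but the renderer must decrypt it to display it
assert in_memory == texture      # plaintext is back, and can be dumped from memory
```

Whatever cipher replaces the XOR, the last two lines stay true: decryption has to happen somewhere before rendering, and that "somewhere" is reachable.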
Well, it *would* be, if such a key were, in fact, a "secret" of some sort. But as far as asset content & storage is concerned, a UUID is just an entry in a table — it references a certain bit of content. Every item in SL must therefore have a UUID, and anything that needs to address such content will need its UUID.

What that means is that the UUID is *not* some sort of secret that must be hidden at all costs. Rather, almost all LSL scripts operate upon UUIDs; you can get other avatars' UUIDs through scripting, but also via many TPVs, which list them on the avatar's profile. The "panic of 2007" — when residents learned that people were, gasp!, collecting lists of UUIDs for nefarious purposes — is long gone (hint: the "nefarious purpose" was actually a simple and easy way to spam avatars with unsolicited IMs; LL put an end to that mostly by limiting how many IMs can be sent per object per unit of time; nobody is saying that being spammed is welcome, and I'm obviously glad we don't get such spam any more, but it's hardly a terrible conspiracy to "steal" our most closely guarded secrets).

That said...
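The "UUID is just a table entry" point is easy to show concretely. A minimal sketch using Python's standard `uuid` module; the asset table and its fields are invented for illustration:

```python
import uuid

# A hypothetical asset table: the UUID is just the lookup key, not a secret.
assets = {}
tex_id = uuid.uuid4()
assets[tex_id] = {"type": "texture", "name": "wood_grain"}

# Anyone who needs to reference the content needs — and therefore sees — its UUID.
assert assets[tex_id]["name"] == "wood_grain"
```

Knowing the key tells you nothing private about whoever created or owns the content; it only lets you address the content that the system was going to serve you anyway.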
What, exactly, are the "privacy issues" that you always mention — on this feedback issue as well as on others?
Are you merely confusing copyright/authorship issues — which do require explicit consent to be given! — with actual privacy issues, or are you talking about something completely different?

Since the target here seems to be Discord, what exactly are you afraid that Discord might find out about in-world group chat that would be a violation of your privacy, or of anybody else's? What can 'bots find that raises privacy concerns and which, for some reason, the SL Viewers cannot? (Name one!)
While your concern for LL's legal status is admirable — nobody wants LL to be the victim of endless lawsuits in different courts — why do you presume that Linden Lab is not compliant with the GDPR? Note that this is not the same as saying that individual users, through their activity in SL, might or might not be compliant. In other words, where is LL essentially benefiting (financially or otherwise) by deliberately ignoring some circumstances where the GDPR would apply, or, through their inaction, appearing to condone such circumstances?

One thing is "demanding" that a group chat with a 'bot that exports text to Discord or other platforms have, indeed, an opt-in checkbox, and a space for a link or for some text (not unlike the Estate Covenants) which clearly identifies, as you so well put it, "what is used, shared, for what and with a mandatory opt-in + related TOS". Of those things, only one is obvious, i.e. what is shared, namely, text (there is nothing else to be shared). To be fully compliant, the 'bot-enabled group must clearly state what happens to the text after it leaves the SL Grid via the 'bot.

Granted, one might argue that a few things are possible...
- LL would only be non-compliant with the GDPR if they refused to allow any information to be added regarding the destination of the text. In other words, if the group owners, in the group description, clearly state what happens to the text, and direct potential group members to the terms of service applying specifically to that group, then the group would be compliant — after all, joining a group is opt-in, and defining group rules is the job of the owner (not LL). Where LL interferes is if the group uses a 'bot to send text elsewhere without telling anyone.
- LL could automatically flag a group that has at least one 'bot active in it. This would allow the 'bot's identity to be kept secret (a 'bot is just a regular account, after all) while still alerting the remaining users to the possibility that the group owners are not necessarily "playing fair" or telling the whole story.
- Groups could be limited to non-'bots only by their owners. That would make those groups automatically "safe", so to speak. If a group allows 'bots to join, such information would be clearly visible, and it would be up to residents to decide if they want to join the group or not.
In other words...
- If a group owner clearly explains that, by joining the group chat, a resident automatically grants permission for their text to be 'exported', via a 'bot, to a different platform, and provides a link that explains exactly what gets exported, under which circumstances, and what happens next, then the group will be fully GDPR-compliant, and LL wouldn't need to worry (neither would anyone in the group). The option to join the group or not would be up to individual residents. Also, joining a group doesn't automatically mean that one is forced to participate in group chat; if one disagrees with the policy, but nevertheless joins the group, one can always remain silent — the choice is in the hands of the resident. This assumes that the group ToS clearly explains that the membership of a resident is not 'exported' anywhere unless the member actively types anything in group chat (i.e., there are no "surprise leaks").
- None of the above involves LL in any way. It's up to the group owners to comply. The only thing that LL can do is shut down a group which knowingly lied to its members, 'forgetting' to tell them that all group text chat was being copied to Discord — LL would technically not need to do so pro-actively (legally speaking, there would have to be a complaint from a European resident to the authorities, against the group owners, and a wait until the courts decide what to do — if the decision is "immediate group shutdown", then LL would have to comply), but they could. They could even add a few clauses to their ToS to make that more explicit, although it is implied (i.e., no illegal activity may be conducted by residents while using LL's services; if you're found doing something illegal, you get kicked out; however, arguably, LL is only bound to the laws of the Great State of California — at least as far as the ToS is concerned).
- If LL introduces additional mechanisms to make it easier for residents to be fully compliant with the GDPR (not only in group chat but also in other cases, e.g. public chat), at their discretion, that is a Good Thing and should be encouraged; nevertheless, all that is required from LL is that they do not remove existing facilities that are being used to comply with the GDPR. Here is a stupid example: imagine that a group uses a URL for residents to read about their GDPR-compliant group ToS. But now LL decides that links in group descriptions are "security hazards" (or something like that) and therefore removes them automatically. This makes the group owners unable to comply with the GDPR, since the available space in the group description is not enough (and group notices cannot be made "sticky", AFAIK). In that case, one could argue that LL is somehow deliberately preventing its users from being GDPR-compliant, and for that, they would be liable themselves.
- That said, in all these cases, I personally prefer to avoid listing all possible things individually. Today, Discord is popular, so let's make a rule about Discord. Tomorrow, perhaps it's Threads or Bluesky, which work slightly differently, so let's change that. The week after next, US President Biden and European Commission President von der Leyen decide to unify the US and European rules regarding privacy, as well as to establish a common dictionary of terms for describing such privacy issues (what is considered to be an issue and what is not), and that requires a new revision of the ToS — again.
Instead, the challenge is for LL's legal team to make an overarching statement that LL is aware of the many national and international laws regarding safety, privacy, copyright/authorship, protection of minors and so forth, and will endeavour to comply with all of them (so long as they aren't contradictory!) and to give residents the required tools so that they, in turn, are also compliant with such laws. And then, of course, each case must be judged on its own merits, and it is up to LL to "decide" what their policy will be regarding issues raised now and in the future. They already use that approach for a lot of things, the latest of which has to do with the ever-conflicting issue of age play vs. paedophilia... the first being "allowed", or at least "tolerated", the second being absolutely forbidden (since it's a crime in most of the world).
- Similar to the form for DMCA claims (which are concerned with copyright & authorship), LL could have a form for GDPR issues (which are concerned with privacy issues, real or perceived).
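The opt-in and consent-tracing scheme described in the bullets above could be modelled roughly as follows — a minimal sketch in Python, assuming a hypothetical bridge 'bot; the class and method names are mine, not any real bridge's API. The point is that each consent is recorded with a timestamp and policy version (so the operator can later show when, and to what, each member agreed), and nothing is relayed for members who never opted in:

```python
from datetime import datetime, timezone

class ConsentGatedRelay:
    """Toy model of a GDPR-style opt-in gate for a chat-export 'bot:
    text is forwarded to the external platform only for members who
    explicitly consented, and each consent is timestamped against a
    specific policy version for later audit."""

    def __init__(self, policy_version):
        self.policy_version = policy_version
        self.consent_log = {}  # member -> (policy_version, UTC timestamp)

    def record_consent(self, member):
        """Log that this member opted in to the current policy, and when."""
        self.consent_log[member] = (self.policy_version,
                                    datetime.now(timezone.utc))

    def relay(self, member, text):
        """Return the text to export, or None if the member never opted in."""
        if member not in self.consent_log:
            return None  # no consent on file: nothing leaves the grid
        return f"[{member}] {text}"

relay = ConsentGatedRelay(policy_version="group-tos-v1")
relay.record_consent("Alice")
print(relay.relay("Alice", "hello"))  # -> [Alice] hello
print(relay.relay("Bob", "hi"))       # -> None
```

A real bridge would of course also need to handle consent withdrawal and re-consent when the policy version changes, but the audit trail (who, when, which version) is the part the GDPR's "informed consent" logic cares about.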
Don't forget...
... the reason why Google, Apple, Microsoft, Facebook & friends are under the EU's scrutiny is mostly because they never felt the necessity of telling their users what real-world data is being collected and what is being done with it — namely, profiting from selling such data... Facebook's desire to jump on the Metaverse bandwagon was mostly because they were angry at Google and Apple, which, in an effort to remain GDPR-compliant, required all apps on their stores to disclose what data they collected and what they did with it — something that Facebook did not want to do, accusing the two companies of acting as a "duopoly" by imposing their own rules over hardware only they controlled. Thus, Facebook wanted to sell their own device — the Oculus — where they would implement their own rules instead. Suffice to say that this was enough for the EU to start taking a good look at what Facebook does now
on the Web...

... I still wonder how exactly Google, Microsoft, Yahoo and a few others "get away" without disclosing that they thoroughly comb through their users' mailboxes in order to extract profiling data, which gets resold, or at least used for providing better-targeted ads through their own ad agencies. Sure, recently, I have seen some new popups from Google, explaining that we have to give them explicit permission to track us across all their platforms, but their explanations about what exactly they do are not described in what I would consider "clear" language. Apple may do the same as the others, but they usually escape notice, because they don't sell or give away their users' data to anyone (and Apple, as a company, is not susceptible to bribery; they have way too much cash in the bank, besides their market value in shares, obviously). That doesn't mean that Apple doesn't use that data for themselves — regardless of what they do with the data, they must clearly state what data is being collected and for what purpose (which they only do... when a gun is pointed directly at their heart, I mean, wallet).

But I digress. The point is that it's not fair to compare LL with Big Tech, which play by their own rules unless they're curbed by legislation. LL does comply with pretty much every piece of legislation out there :) and does so willingly, sometimes erring on the side of caution, just to be absolutely sure they cannot be seen as deliberately trespassing upon any laws, willingly or not.

That said — I think it would be fair to give a more detailed description of the immensity of data that is being sent by every viewer to LL's servers, which LL vaguely refers to as "metrics" or "telemetry data". It is highly likely that this is, indeed, all LL does with the data. But I personally don't know. I only know that the data is being sent, because we can see how it's done in the code. And it's definitely quite a lot. To the best of my knowledge, while I have gladly accepted every ToS ever created by LL since mid-2004, I don't exactly recall anything specifically mentioning what data they collect from my computer and what they do with such data. Probably it's buried deep in their documentation, and it's only my fault for being lazy and not having dug deep enough...

Woolfyy Resident
Gwyneth Llewelyn No time to read your ten-kilometre-long posts for nothing, except to learn that, contrary to you, I have more than 20 years of practice in international law and I know what I am saying... knowing, too, that you already demonstrated, in another thread, that you don't even know what hacking is.
No time to lose on my side... have a good night!
FYI, my expertise, including in international law, is paid for in real money and not in L$... and I don't take any client for less than USD 100,000... SL for me is a relaxing hobby, mostly building and scripting... and your posts end up being not relaxing but boring.
Law is based on precise facts, not on approximate, supposed or interpreted ones depending on what you think you read or understood... Moreover, I'm not going to explain the meaning of opt-in and so on, knowing that I am French, and France has some of the most severe laws in the field of privacy and so on. Due to US abuses of privacy around the use of data in AI, it is going to get even harder...
Spidey Linden
under review
Zandrae Nova
I would recommend a checkbox in the group tab where owners can check "Group chat linked to a third-party service", and displaying that in the group info.
Woolfyy Resident
Zandrae Nova Legally, and according to European law, LL would need to do as Apple did with its latest iPhone marketplace information: state what is used and shared, and for what purpose, with a mandatory opt-in + related ToS.
Typically, bots, HTTP data going outside SL, Discord linked in the wild to SL groups, etc. are totally illegal in their current form, especially with regard to what is called "informed consent".
Any lawyer can confirm this, and in case of legal action would have a 100% chance of winning.
Zandrae Nova
Woolfyy Resident Ah I see. Thanks for the information.