Through Facebook’s looking-glass: Smart glasses and creepy technologies
On 9 September 2021, Facebook (FB) launched Ray-Ban Stories, the first iteration of their controversial smart glasses. Designed in collaboration with the eyewear company Ray-Ban, the glasses have the same design as Ray-Ban’s well-known Wayfarer model and incorporate two 5-megapixel cameras and open-air speakers, which add only negligible weight, according to their creators. This allows the wearer to take pictures, make short (30-second) videos, listen to music and take calls. Currently, the Ray-Ban Stories can be bought within the European Union (EU) in Ireland and Italy.
While such glasses might herald the next step in the wearables market, they are also a next step for creepy technologies and surveillance societies. A mundane object enshrined in mainstream culture, such as the Ray-Ban frames, suddenly becomes a potential tool for surveillance. To determine whether these smart glasses represent a real threat to human rights in general, and to privacy and data protection in particular, it is necessary to look at the risks triggered by their design and use.
Ray-Ban Stories’ privacy policy
According to the product’s privacy page, several features were incorporated during the design phase to protect privacy and personal data. First, the photos and videos taken by the glasses are not automatically accessed by FB or uploaded to any profile. The glasses synchronise with the ‘Facebook View’ app, which allows users to upload and edit the pictures. Facebook claims that the glasses can store up to 500 photos or 30 short videos in encrypted form, but provides no information about data security protocols beyond those already in place for unauthorised access to a FB account – for instance, if the glasses are stolen.
Second, according to FB, there are two ‘actions’ that alert people in the vicinity of the wearer that somebody is using the Ray-Ban Stories. To take a picture or record a video, the wearer has to press a small button placed on the right side of the frame, or give a voice command. This causes a tiny capture LED light to turn on, warning other people. However, as both the Irish and Italian Data Protection Authorities (DPAs) have already criticised, whereas in the case of a picture or video taken by a mobile phone it is relatively easy for the subject to notice such activity, in the case of FB’s glasses the quotidian gesture of touching the frame will largely remain unseen. Contrary to FB’s claim that their LED raises more awareness than mobile phones’ cameras do, I can attest as a spectacle wearer myself that people often adjust their glasses, regularly touching the frames and their faces in general.
The LED light has also drawn criticism for being very discreet. It is not clear at what distance it can be perceived by the human eye, let alone by visually impaired people. Stories’ voice command might be the most privacy-friendly feature, but it is an optional one, and the distance from the person potentially being filmed remains an issue.
The Irish and Italian DPAs have called on Facebook to mount a public campaign raising awareness around persons wearing Ray-Ban glasses with a white LED on, as this signals they might be taking photos or recording. Otherwise, those who are not familiar with such technology will not pay attention to the LED. It should be noted that DPAs are independent governmental agencies tasked with overseeing the implementation of data protection legislation through investigation and remedial powers. They offer professional advice on data protection issues and manage complaints about violations of the General Data Protection Regulation (GDPR) and other national legislation.
Legal analysis
Apart from the abovementioned joint communication by the Italian and Irish DPAs, Italy’s DPA has posed a series of questions to FB about the glasses’ compatibility with privacy laws. These questions concern the legal basis for Facebook to process personal data; measures to safeguard bystanders, especially children; mechanisms to anonymise the collected information; and the features of the voice assistant used to operate the glasses. Some answers to these questions can be found in FB View’s Data Policy. However, the policy also reveals some disturbing details: FB collects information on the amount of time spent filming videos, as well as transcriptions and audio recordings of spoken interactions with Facebook Assistant, including any background sound during such interactions and any inadvertent invocation of the Assistant.
This preliminary analysis shows that Ray-Ban Stories belong squarely in the category of creepy technologies and do indeed pose a threat to the rights to privacy and data protection. Moreover, they potentially put other rights at risk, such as freedom of assembly. As with reports on the use of facial recognition technology during recent protests, it is not difficult to imagine that widespread use of Ray-Ban Stories might deter some people from attending gatherings or demonstrations, to avoid being photographed by someone wearing some cool, and ostensibly harmless, Ray-Ban shades.
What next?
This first iteration of smart glasses by Facebook is just that: the first. Future iterations might include the incorporation of virtual or augmented reality features and the extremely controversial use of facial recognition technology.
Additionally, although FB’s product will attract a great deal of mainstream attention, only five days after the Stories launch the Chinese company Xiaomi released its own smart glasses. It will be worth analysing how the rights to privacy and data protection are addressed in this Asian manufacturer’s device policy.
The functionalities offered by smart glasses are not new; they very much remind us of the traditional spy cameras placed inside ordinary objects such as pens. However, the fact that Big Tech firms are betting on the smart glasses market will require attentive monitoring by EU legislators, policymakers, privacy advocates and DPAs. Issues such as consent (Art. 7, GDPR), how the use of this technology affects the rights of the data subject (Chapter 3, GDPR) or the potential use of Ray-Ban Stories by law enforcement authorities in a similar way to body cameras (see the Law Enforcement Directive) must be supervised and enforced by the competent institutions. Otherwise, we may move to an even more surveilled and monitored society that makes Alice’s crazy looking-glass world pale in comparison.
Natalia Menéndez González is a PhD researcher in the EUI’s Law Department. Her thesis focuses on the legal implications of facial recognition technology (FRT) empowered by AI. She is concurrently investigating Facebook’s use of FRT, the privacy impact of FRT during the COVID-19 emergency and ethical implications of natural language processing models. Her relevant publications are here.