Smart glasses: what's old is new again?
Written by Martin Biéri
-
11 May 2026

For more than a decade, smart glasses seemed confined to a niche audience and limited uses. Yet recently, these devices have begun to win over an increasingly broad public.
Glasses that take us back in time: what’s new since 2012?
In a striking parallel, smart glasses feature prominently in Time magazine’s list of the top innovations of 2025… but also in its 2012 list! Smart glasses are, in fact, not a new topic. Until recently, they were primarily associated with Google Glass, launched in 2013 and widely regarded as a commercial failure in the specialized media (such as the ZDNet article “Google Glass: timeline of a failure” – in French), in marketing textbooks (for example, L'essentiel du management de l'innovation, Albéric Tellier, 2022), and in the scientific literature (Understanding controversies in digital platform innovation processes: The Google Glass case, Klein et al., 2020). Subsequent launches shifted away from the general public toward professional users, including Google Glass Enterprise Edition and Snapchat Spectacles (see, for example, Snap launches new Spectacles that are more powerful but not suited to the general public).
The specialized media outlet The Verge produced a podcast in late 2025 on the history of Google Glass, describing it as follows: “It was a revolution, and also a problem: Google made face devices extremely uncool, and its early user base was so off-putting they became collectively known as ‘Glassholes’” (a decidedly pejorative portmanteau of “glasses” and “asshole,” used to describe the rude behavior of some wearers - wording softened by LINC).
Over the past two years, the number of smart glasses sold (across all brands) has been steadily increasing. According to Counterpoint Research, global sales rose from 378,000 units before 2023 to around 1 million in 2023, and nearly 3 million in 2024. This trend appears to be confirmed by recent news from Meta: shipments of the new Meta Ray-Ban Display, paired with the Neural Band, which were expected to reach the European market at the beginning of 2026, have been put on hold to first meet demand in the United States. No specific timeline has been given for the other countries initially targeted in Meta’s “international strategy,” including France. According to Bloomberg, the delay is also attributed to European regulations on batteries (which must be removable) and on AI. Moreover, according to figures released by EssilorLuxottica, 7 million Ray-Ban and Oakley Meta glasses were sold in 2025, representing a market share of around 60% (Le Monde, February 2026).
This recent success of smart glasses has also reignited momentum at Google, which has in turn announced consumer models for 2026 in three different formats: a first model capable of taking photos and videos and interacting with an AI assistant (based on its Gemini agent); a second model featuring an augmented-reality display that allows navigation similar to a smartphone or computer; and a third project (Project Aura), in which the “computer” component is separate, in the form of a small puck, in order to keep the glasses lightweight while still offering augmented-reality features. Among Chinese manufacturers, similar devices can be found from Xiaomi, notably with its AI Glasses, as well as from Alibaba and its Quark AI. (Notably, the two articles reporting on these new ambitions share a common angle: “taking on Meta.”) One of the most symbolic illustrations of this trend is the prominence of these new devices at the major global technology event, the Consumer Electronics Show 2026 (CES), and the “Sino-American battle” among manufacturers. Samsung is also entering the race, with an announcement of its own glasses planned for late 2026 - and has chosen the Android XR operating system, notably as part of a partnership with Google.
Early analyses suggest that this renewed enthusiasm for the smart glasses market will not, in the short term, offset the investments planned for the development of these glasses and headsets. For Meta, however, this could represent a “bet” on the end of smartphones and an opportunity for Mark Zuckerberg’s company to compete directly with Apple and Google - a strategy it had already pursued with its Oculus devices.
The newsletter The Glitch by L’Atelier BNP Paribas (issue no. 259), dedicated to the Consumer Electronics Show 2026, provides an overview of the various applications of smart glasses, particularly “in the service of accessibility for people with visual impairments or blindness”. Among the examples highlighted are the glasses developed by .lumen, which can “guide” users (via multiple cameras and operating locally); SeeHaptic, which relies on a kind of “spatial braille”; and Hapware, which translates nonverbal communication into vibrations through a connected bracelet.
What’s fueling this excitement?
Several hypotheses can be put forward (and they are not mutually exclusive!):
- The stylistic dimension: by partnering with eyewear brands (Ray-Ban and Oakley), Meta has managed to integrate its smartphone-sync technology (for listening to music, for example) and the ability to take photos or record videos into everyday - or even “fashionable” - glasses. However, this stylistic angle had already been explored, as had similar functionalities, notably through collaborations with designers (such as fashion designer Diane von Furstenberg and Google Glass in 2014) or adoption by celebrities (as documented by Time magazine the same year). In a similar vein, Google had announced a comparable partnership with EssilorLuxottica - the owner of Ray-Ban and Oakley - in 2015… but perhaps too late? That partnership never ultimately materialised; similar ones are now expected with two other brands, Warby Parker and Gentle Monster.
- The technical dimension: in addition to improved performance and features (photo, video, battery life), it is worth highlighting manufacturers’ integration of mainstream AI models that can be queried directly by voice command. This option is very recent and does not apply to devices sold before autumn 2025. These glasses are also part of broader developments around virtual reality headsets and the metaverse. Overall, with the exception of the AI chatbot, Google Glass already had a similar operating principle (side buttons and voice commands) and comparable battery life: Meta announced 6 hours of use before recharging for its first generation of glasses, while Google Glass lasted around 5 hours in typical continuous use.
- On future projections, it can be noted that the discourse has slightly changed: while people still speak of a revolution, the tone is more measured. “This latest innovation (…) reveals a new horizon and (…) completely redefines the concept of ‘hands-free’” (Francesco Milleri, CEO of EssilorLuxottica), whereas in 2012 the talk was about the end of screens and smartphones. The new glasses are now presented more as a new interface complementing existing devices (such as computers and smartphones), rather than replacing them, offering a new connected object that allows users to play music, read messages, launch GPS navigation, interact with an AI, and so on.
- Price and accessibility: the $1,500 price tag of Google Glass sparked considerable debate when they were released in 2013. In 2026, the cheapest models (from lesser-known brands) now start at around €50, while “premium” models are priced between €350 and €500.
- More broadly, it is also necessary to situate the use of these glasses within a much more “intensive” form of digital consumption than in 2013, increasingly centered on photo and video channels (with the emergence of new social networks, streaming platforms, live content, stories, shorts, etc.), particularly through influencers. In the same way that wearable devices have become more present in everyday life than they were 13 years ago, starting with smartwatches, AI has followed a similar trajectory: it is now embedded in mainstream usage and directly integrated into these new connected devices, which are therefore also “smart” in a new sense. These practices and technological shifts may have helped overcome the issue identified in Klein et al. (2020, already cited): “Many digital innovations (such as Google Glass) are ‘solutions in search of a problem’: the problem to be solved and the type of value that can be created by a (often disruptive) digital innovation is not clear to the actors involved”.
- Finally, a last theoretical hypothesis, which brings together all the previous elements, is that we have moved from the era of the “innovators” to that of the “early adopters”, according to the innovation diffusion curve (Everett Rogers, 1962—and also discussed in another chapter of L'essentiel du management de l'innovation by Albéric Tellier, mentioned earlier). This model describes the adoption of innovations over time through several stages: innovators, early adopters, early majority, late majority, and laggards. The social acceptability of these devices may therefore have shifted to a new stage over nearly 15 years.
What has been the public’s reception of the glasses?
At the beginning of 2026, LINC launched a survey with Harris Interactive – Toluna to explore how French people perceive smart glasses, in a context marked by the growing spread of these devices. The survey was conducted online from January 22 to 29, 2026, among a sample of 2,128 people representative of the French population aged 18 and over.
See the dedicated article on the survey and its results
Smart glasses: an innovation with an ambiguous status
- Very few people have ever tried this type of glasses (9%) or report owning them (1%);
- The profile most interested in these glasses: men aged 25–49, from higher socio-economic categories, and living in urban areas;
- While the “innovative” aspect is associated with smart glasses, they are also seen as gadgets;
- Privacy risks are clearly identified (67%), as are health risks (65%), although these devices are also considered useful as an “assistive” tool for certain disabilities (78%), for hands-free use (59%), or for specific activities such as sports or healthcare (57%).
Limited interest among certain segments of the population
Just over one third of respondents (36%) say they would like to acquire smart glasses, while nearly two thirds (62%) say the opposite. This suggests a relatively significant potential market (for comparison, 34% of respondents own a tablet and 25% a smartwatch - although it is important to distinguish between the intention to acquire and actual ownership). Those willing to acquire them highlight technological advances (real-time translation, GPS, photo and video capabilities, AI). Among those not interested, the main reason is lack of perceived usefulness, followed by price; data security is a concern for 22% of this group.
The specific case of photos and videos: distrust and concerns
- When it comes to the use of glasses for photos and videos, mistrust and concerns prevail, particularly regarding issues linked to image rights and consent (57%), but also AI and deepfakes (37%). Data theft or leakage is mentioned by 34% of respondents, and 26% cite concerns related to facial recognition;
- The intrusive nature of the glasses is widely acknowledged: 81% of respondents believe that the risk of people being photographed or filmed without their consent is higher than with a smartphone;
- Nearly 9 out of 10 people say they would find it disturbing to be photographed or filmed without consent and would like to be informed when this occurs, although no single notification method is clearly preferred (light indicator, sound alert, multiple signals at once, or smartphone notification).
A previous survey and a qualitative study from the perspective of smart glasses users
In 2013, a similar survey was conducted in the United States by BiTE Interactive, with some figures reported by ZDNet: “only” 10% of respondents were willing to wear this gadget of the future. In more detailed adoption forecasts, 38% did not want to own Google Glass even if the price were affordable, 45% feared that this new device would be uncomfortable to wear and socially inappropriate, and 44% simply did not find the product appealing.
Another perspective, from a German research team (D. Bhardwaj, A. Ponticello et al., 2024), also helps to examine the experience of people wearing these devices, in relation to those around them: “We present the first study focusing on the user perspective and examine privacy issues related to wearing camera-equipped glasses in the presence of bystanders”. The results suggest that visual indicators of photo or video capture are often ineffective (depending on distance, location, lighting conditions, but also on third parties’ understanding of how these signals work), and also highlight a certain “burden” or discomfort experienced by users themselves, as they try to avoid intruding on the privacy of passers-by.
Associated risks, the glass half empty
Some argue that these so-called “smart glasses” are ultimately no more intrusive than a smartphone, and that doomsayers are merely reproducing the techno-panic that accompanied the first cameras. This is notably the argument put forward by Jeff Jarvis on his blog Buzz Machine: I see you: The technopanic over Google Glass (March 7, 2013). While it is true that these glasses rely on sensors already found in smartphones (camera, microphone, etc.), they differ in that they see everything the wearer sees, whereas a phone only sees what its user chooses to show it. This shift in perspective is all the more significant because some of these new devices can be permanently connected, while others are merely “connectable” and cannot necessarily transmit data autonomously. Here emerges the specificity of “wearable computing” and ambient computing: glasses see what I see, a watch can take my pulse or measure my skin temperature… And tomorrow, perhaps, sensors will continuously measure my brain waves.
IP Reports no.2, Le corps, nouvel objet connecté, 2014
Early use conflicts…
The first consumer-oriented smart glasses quickly attracted criticism regarding the intrusive nature of these new devices. Privacy risks emerged very early in their use, as already noted by LINC in a 2016 review of Snap Spectacles: “The Wall Street Journal journalist Joanna Stern, who tested the glasses, ends up dedicating 75% of her article to the issue of privacy and to the varied reactions of people discovering they were being filmed”.
Indeed, much of the criticism focused on the intrusive nature of these devices, as well as on voice control in public spaces (inevitably raising questions of intimacy, misuse, or technical malfunctions), as illustrated by a sketch from the satirical show Saturday Night Live or by the depiction of what it means to be a “glasshole” (see definition above) on the news site Mashable. This context notably pushed Google to publish a list of “Do’s and Don’ts” in response, in addition to already implemented measures such as voice activation and a red indicator light signalling video recording.
For next-generation glasses such as Meta Ray-Ban glasses, this aspect has also been taken into account in the design: a light indicator turns on like a flash when taking photos and remains lit for the duration of video recording, in order to signal these actions to people around the wearer. For some models, an audio signal is also played. While glasses do not invent covert photography or video recording, one could argue that embedding photo and video systems into an everyday object (prescription or sunglasses) in a more discreet form may contribute to the increase of such phenomena, especially when capture can be triggered directly via a button placed on one of the arms of the glasses. This concern was already raised in 2013, as shown in an article from Le Nouvel Obs titled “Google Glass: an arrest filmed without the suspects’ knowledge”, in which the journalist writes: “The filmed scene is of limited interest, except that it shows how passers-by do not react while being filmed. Some seem puzzled, surely more by the frame of the glasses than by the red light indicating recording.” It is also worth noting a first judicial decision in Philadelphia at the end of March 2026 concerning these glasses: a formal ban on the use of smart glasses inside “First Judicial District buildings, courthouses, or offices, even for people who have a prescription”.
… their misuse
The effectiveness of this indicator becomes even more questionable when one sees tutorials and forum posts proliferating online explaining how to get rid of these warning lights. The stated motivations vary: filming at night (or in the dark) without the lights turning on… or more openly, avoiding having to inform people at all (sometimes accompanied by remarks encouraging a form of “voyeurism”). Some users resort to black tape (even if it is not so easy to “trick” the glasses), or even to drilling in order to cover or disable the LED. The comments and replies to such content (posts or videos) clearly reveal the tension surrounding these new devices, oscillating between gratitude and condemnation of practices perceived as violating privacy (see for example here). It is also worth noting that commercial products already exist that allow users to block these lights (such as light-blocking covers).
It is also possible to develop more technically sophisticated forms of misuse, namely extending the glasses’ capabilities (currently constrained by their designers) with other technologies, such as facial recognition. For instance, students at Harvard University developed glasses embedding PimEyes, an image search engine based on facial recognition, making it possible to identify people encountered in the street. These glasses were also tested in a video by tech content creator Amy Plant for educational purposes; she explicitly highlights, from the introduction onward, the issues associated with such use, concluding: “any generation that doesn’t put its face on social media… well, they’re right.” Such concerns become even more significant when these technologies are used in law enforcement operations, as reported by the specialized media outlet 404 Media in the case of a US officer using recordings from Meta Ray-Ban glasses during an operation - a blurring of boundaries, with a personal device used in an official public mission. The New York Times also reports that Meta is considering (according to an internal company memo) adding a facial recognition system to its glasses, initially for visually impaired users, and later more broadly for the general public.
However, it is also interesting to note the existence of alternative projects, such as the Ban-Rays project: smart glasses… designed to detect other smart glasses equipped with a camera. To do so, the developers rely on two approaches: one based on the analysis of optical signals and the detection of light reflections from cameras, and another based on network analysis (Bluetooth), although the latter appears less effective. A similar, more consumer-oriented project is already available on some app stores: Nearby Glasses, which is also based on Bluetooth signals emitted by glasses.
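The Bluetooth approach described above essentially amounts to matching advertised device names against a list of known camera-glasses models. Here is a minimal Python sketch of that heuristic, assuming hypothetical scan data; the name prefixes are illustrative assumptions, not the actual logic of Ban-Rays or Nearby Glasses:

```python
# Illustrative name prefixes only - real products may advertise
# different names, or no name at all.
CAMERA_GLASSES_PREFIXES = ("Ray-Ban Meta", "Spectacles", "Xiaomi AI Glasses")

def flag_camera_glasses(advertisements):
    """Return the (name, address) advertisements whose device name
    suggests camera-equipped glasses.

    `advertisements` is a list of (name, address) pairs as collected
    by a BLE scan. A real detector would also have to cope with
    devices that advertise no name, randomise their MAC address, or
    simply stay silent - one reason this approach is less effective
    than optical detection.
    """
    hits = []
    for name, address in advertisements:
        if name and name.startswith(CAMERA_GLASSES_PREFIXES):
            hits.append((name, address))
    return hits

if __name__ == "__main__":
    # Hypothetical scan results for demonstration.
    seen = [
        ("Ray-Ban Meta Wayfarer", "AA:BB:CC:DD:EE:01"),
        ("JBL Flip 6", "AA:BB:CC:DD:EE:02"),
        (None, "AA:BB:CC:DD:EE:03"),
    ]
    for name, addr in flag_camera_glasses(seen):
        print(f"possible camera glasses nearby: {name} ({addr})")
```

As the comments note, the weaknesses of this sketch mirror the limits reported for the Bluetooth approach itself: it only works when the glasses actively advertise a recognisable identity.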
On this issue, it is also worth noting the open letter signed by 75 associations defending digital rights and fighting for the rights of women and minorities, addressed to Meta, or more precisely to Mark Zuckerberg, calling for the abandonment of the facial recognition feature in smart glasses, given the risks for individuals, particularly for vulnerable populations and minorities.
And the same “classic” risks associated with connected devices more broadly
Articles published at the beginning of 2026 highlighted issues related to data labelling by “digital labor,” particularly in Kenya. These workers, tasked with helping to “improve the product” - especially the accuracy of AI systems and object recognition in the real world - receive images captured by users of smart glasses. In doing so, they may come across highly personal content, including intimate moments: “nude bodies,” bank card numbers, personal files, and more, sometimes revealing (at best) inappropriate use of the glasses by certain users. According to Kenyan workers interviewed in an investigation by the Swedish newspaper Svenska Dagbladet (see link above), Meta’s blurring algorithm can sometimes fail and does not always properly obscure faces.
This annotation issue is not new, nor specific to glasses: a similar scandal occurred in 2019 involving Siri, when audio excerpts were sent to annotators in Ireland and could include intimate moments, an issue that resurfaced in 2025. In the case of glasses, Meta responded that no data from European smart glasses is sent to Kenya - an answer that leaves aside other issues related to downstream processing: access to poorly “anonymised” recordings, sometimes linked to inappropriate use of the glasses, questions of international data transfers, and the use and oversight of subcontractors. Similarly, it can be assumed that European users’ data may one day also be processed for such purposes.
Another point is that these glasses - especially the more advanced models - are deeply integrated into their manufacturer’s ecosystem, whether for photo retrieval (which must go through the Meta app, for example) or use of the chatbot. In the case of Meta Ray-Ban glasses, users must install the Meta AI application and maintain an internet connection, which means that voice recordings, photos, and videos are by design sent to Meta’s servers to generate responses. As with voice assistants (see from page 37), this raises several issues, starting with the reuse of data for service improvement purposes.
For more information and to follow the actions of the CNIL on smart glasses, see the article “Smart glasses: the CNIL calls for vigilance” published on the CNIL website, as well as its forthcoming action plan.