Most users perceive data from the mesh as augmented reality—information overlaid on the user’s physical senses. For example, computer-generated graphics will appear as translucent images, icons, or text in the user’s field of vision. While visual AR data—called entoptic data—is the most common, other senses may also be used. AR input includes acoustic sounds and voices, odors, tastes, and even tactile sensations. This sensory data is high-resolution and seemingly “real,” though it is usually presented as something ghostly or otherwise artificial so as not to be confused with real-world interactions (and also to meet safety regulations).
User interfaces are customized to the user’s preferences and needs, both graphically and content-wise. Filters allow users to access the information they are interested in without needing to worry about extraneous data. While AR data is typically placed in the user’s normal field of vision, entoptics are not actually limited by this and may be viewed in the “mind’s eye.” Nevertheless, icons, windows, and other interaction prompts can be layered, stacked, toggled, hidden, or shifted out of the way if necessary to interact with the physical world.
Every mesh user represents themselves online via a digital avatar. Many people use digital representations of themselves, whereas others prefer more iconic designs. This may be an off-the-shelf look or a customized icon. Libraries of avatars may also be employed, enabling a user to switch their representation according to mood. Avatars are what other users see when they deal with you online — i.e., how you are represented in AR. Most avatars are animated and programmed to reflect the user’s actual mood and speech, so that the avatar seems to speak and have emotions.
Entoptic tags are a way for people to “tag” a physical person, place, or object with a piece of virtual data. These e-tags are stored in networks local to the tagged item, and move with the item if it changes location. E-tags are viewable in AR, and can hold almost any type of data, though short notes and pictures are the most common. E-tags are often linked to particular social networks or circles within that network, so that people can leave notes, reviews, memorabilia media, and similar things for friends and colleagues.
Since reality can be overlaid with entoptics of hyper-real quality, modern users can “skin” their reality by modifying their perceptual input. Environments around them may be modified to fit their particular tastes or mood. Need your spirits boosted? Pull up a skin that makes it seem like you’re outdoors, with the sun shining down, the sounds of gentle surf in the background, and butterflies drifting lazily overhead. Pissed off? Be comforted as flames engulf the walls and thunder grumbles ominously in the distance. It is not uncommon for people to go about their day accompanied by their own personal soundtrack that only they can hear. Even olfactory and taste receptors can be artificially stimulated to experience sensations like the smell of roses, fresh air, or freshly-baked pastries. Though such stimulation was originally developed to make “space food” less distasteful and to counter space-induced cabin fever among those who weren’t born in space, vast archives of aromas, tastes, and environments are now available for download.
Skins do not need to be kept private; they may also be shared with others via the mesh. Tired of your cramped habitat cubicle? Decorate it with a custom-themed skin and share it with visitors to make them feel more comfortable. Found a new music track that livens up your day? Share it with others around you, so they can nod to the same beat.
Skinning can also be used for the opposite effect. Any undesired aspect of reality can be edited out, veiled, or censored by modern software programs or muses that engage in real-time editing. Tired of looking at someone’s face? Add them to your killfile, and you’ll never have to acknowledge their presence again. AR censorware is also common in some communities with strict religious or moral convictions.