1. CONTEXT
Brain-computer interfaces today

The idea of using our mind to operate machines or communicate with other people is old. It doesn't sound absurd - if we are able to interpret light waves and emit sound waves and memorise things and dream, why shouldn't we be able to send wireless signals that are understandable by other people and machines? It wouldn't be strange if we were born with this ability. Maybe we are, but we haven't been able to develop it enough yet.

In the meantime, we are trying to understand how our brain works, and there are at least two ways of doing so. The first one is by using the brain alone, our intellect, without any external help. Whether the brain is capable of understanding itself is one of the oldest philosophical questions. In any case, such introspection would be subjective, and it wouldn't serve to explain all brains. The other way of exploring our brain is with the help of technology, which we conceive using our brain, with the limitations that this entails. It is like creating a mirror to see our brain reflected, to understand how it works. Today, this mirror reflects very little; there is a lot that we don't see yet.

We have developed technologies that allow us to read and interpret brain waves. Algorithms can transform certain signatures from those brain waves into computer actions. However, those technologies are not able to read and interpret our brain precisely. We can also say that we are not trained to communicate with the technology available today. Technology is still far from enabling brain control for the activities we use a computer for: being able to write an email or to make a search query with our thoughts seems a long way down the road.

The interest in getting better at reading and deciphering our brain waves is threefold:

  • Learn how our brain works to create computers that think alike (Cognitive computing, involving neuroscience, computing, nanotechnology)
  • Learn how our brain works and communicates with our organs to create better prosthetics (Mind-controlled prosthetics, involving neuroscience, medicine, robotics)
  • Learn to interpret our brain activity to be able to use machines with thought (Brain-computer interfaces, involving neuroscience, computing, design)

As a designer, I have a special interest in the last point. Research in mind-computer interfaces is mainly driven by neuroscientists and computer scientists. Other fields like design, education or social sciences seem to be waiting for the technology to be mature enough to enter the playground - but by then, the rules of the game may already be set.

This article is a collection of thoughts about the future of mind-computer interfaces, from the design perspective.

An emerging design space

During the last decade, design has blurred its boundaries with technology and the human sciences. There is not only more understanding between an engineer, a designer and an ethnographer, but there are also Interaction Designers who see the benefit of being a bit of all three.

Design schools have given technology and human factors an important role in their curricula, and they value applicants with such backgrounds for master's degrees. Neuroscientists may be the next to enter the equation, welcomed in design schools and taking part in product design teams.

Designers, too, will become more interested in tinkering with brain interfaces, exploring new types of sensors, pulling and trying to understand data from the brain, and designing for it. A hybrid of interaction designer and neuroscientist could lead a new branch of design.

Although there is plenty of literature about how our brain works (at least as far as we know) and how we are reading and interpreting it today, I haven't found any material written from an interface design perspective. This article collects some ideas exploring the design territory, in design terms, from a designer's point of view. It is not about sensors and prosthetics, brain waves and decoding algorithms. It is about approaching the mind as a design space:

  • How can we [people] shape our mind to better interface with computers?
  • How might we [designers] help people explore, learn and use new mind capabilities to perform computer actions with their brain?
  • What are the essential skills, capabilities, mental functions and processes involved in a mind-computer interaction?
  • How can the evolution of brain-computer interfaces redefine the human-machine relationship?
  • What behaviors and social dynamics will emerge from the extensive use of mind-computer interfaces?

From an interface design perspective, because the mind is people's last private space, I want to think that this is not all about designing interfaces to be uploaded to people's minds. I prefer to imagine the designer's role as helping users create their own mental interfaces, and designing the products and services that will connect to them.

A new language

"Push the cube with your mind!"

This is one of the first things you may be told when trying a brain-computer interface: to push a cube on a screen with your mind. It is a good start for exploring mental actions and seeing how the computer reacts to them.

In some ways, it is like learning how to drive in the early 20th century, when cars did not follow any manufacturing standard, and neither did the roads. As cars became more popular, we developed ways to teach driving so that people could gradually automate actions, we created a convention of signs, a specific terminology, etc. This same process will be needed to establish a framework in which we can design, teach, learn, practice, use and talk about mental interfaces.

This article uses certain terminology based on design terms and the activities we use computers for today: communicate, expand our knowledge, consume media, etc. In the future, we may do these things differently, and we may do different things too.

2. THE MENTAL SPACE

Our brain (human processor) has a variety of interfaces to communicate with everything around us. Our body is the container of all the interfaces: our hands, eyes, ears, etc. Those are human interfaces. They can be input interfaces (ears), output (voice), or both (eyes, fingers).

Likewise, a microprocessor (computer processor) has a variety of interfaces to communicate with everything around it. A screen, a mouse, a wireless chip, etc. Those are computer interfaces. They can be input, output or input/output interfaces too.

When human and computer interfaces understand each other and interact, we call them human-computer interfaces. When we want to click a button on the screen with a mouse, our brain uses the hand as output interface and the eyes as input interface, while the computer processor uses the screen as output interface and the mouse as input interface.

In fact, the hand and the arm also provide feedback about their movement back to the brain, known as sensory feedback.

We will be able to perform similar actions using our mind and a brain-computer interface.

In mind-computer communication, when we push with our mind in order to move a cube on the screen, we are creating a mental interface. The mental interface acts as the hand - it is a human interface that talks to a computer interface (an EEG headset for instance). While the hand exists in the physical space, the mental interface exists in the mental space.

Every time we need to perform an action with our mind, our brain creates an interface in our mental space to be able to perform that action. It is a temporary interface that exists for the period of time we need to perform the action.

Like the hands, the mental space belongs to each individual, and it cannot be created by anyone other than its owner.

Mind ergonomics

Designing tools to extend our body's capabilities requires taking into account the variety of subjects who will use these tools. Ergonomics and human factors provide models and standards to design equipment and devices that fit the human body and its cognitive abilities. The design of a computer mouse or an EEG headset takes into account the features of the group of users that will use the device. A similar set of data models and conventions will be required once mind-computer interfaces become more popular.

As an example, in order to design a pair of gloves for a three-year-old girl we can use some ergonomics data: the average hand length for that age (10.7 cm) will set the standard glove size, and the deviation (0.7 cm) can set the other sizes and/or the stretching capabilities of the material. Likewise, if we need to design a mind-machine interface for the same girl to practise writing numbers on a computer screen using her mind, we will want to know about her abilities to create and navigate mental interfaces at this age. Studies of the mental capabilities of a wide variety of the population will shape mental space and mental interface models - the ergonomics of the mind. Knowing how the environment, or more specifically visual, auditory and other stimuli, can affect the user and her mental space will also be decisive in order to design solid mind-machine interfaces.

As with any other product or service, there are other decisions that will also apply when designing mind-computer interfaces: whether to design for average or extreme users, to include or exclude the impaired, to value efficiency over providing a pleasurable experience, etc. These are all decisions in which design and business, among other interests, need to compromise and take responsibility.

A multidimensional space

A space is defined by n dimensions. Depending on the mental actions we need to perform, we will configure a mental interface with one, two, three or n dimensions.

In the example of pushing the cube with the mind, the action can be performed using the dimension focus. In this case the user operates in a uni-dimensional mental space, creating a uni-dimensional mental interface that helps him focus: an imaginary tunnel, for instance. As he moves further down the tunnel, his focus dimension increases, pushing the cube on the screen forward.

The mental actions that we are able to recognise today are categorised in two types of dimensions:

  • Motion: forward, left, shrink, etc.
  • Emotion: excitement, focus, etc.

Thoughts that involve motion generate ideomotor impulses, very similar to the ones generated to move parts of our body, and fairly easy to detect using a brain-computer interface.

A combination of better technology and more experience in controlling our mental states will increase the types of dimensions we can use.

These are some other potential types of dimensions:

  • Brainwaves
  • Aesthetic characteristics: smooth, more blue
  • Evocative qualities: more dynamic, more like Dalí
  • Temporal: later, in the past
  • Semantic: tell me about "the theory of relativity"

Acting on our brainwaves alone may never be possible. It would be like generating a perfect 440 Hz sine wave using our vocal cords.

Using dimensions that have a direct translation to what computer sensors can detect (e.g. brainwaves) makes the computer's job easy, although we are not trained to modify our brainwaves independently. Using dimensions that are closer to the human layer of perception or meaning (e.g. colors or words) would make our job easy, but computers are not yet able to capture those dimensions.

Mental transitions

Our mental state is in permanent transition due to external stimuli and inner tensions. Variations in our environment, emotions, drugs, etc. are agents of mental change. We can classify mental transitions along three aspects:

  • Awareness: a mental transition can be conscious or unconscious.
  • Voluntariness: a mental transition can be intentional or non-intentional.
  • Origin: a mental transition can be self-induced, or externally induced.

A dream is a non-intentional, self-induced mental transition. It can be conscious or unconscious.

Meditation is a conscious, intentional, self-induced mental transition.

Hypnosis is an unconscious, externally induced mental transition. It can be intentional or non-intentional.

A mental action is a conscious, intentional, self-induced mental transition performed to accomplish a specific goal, e.g. pushing the cube to increase a number.

Some mental transitions can be captured by a mind-computer interface to trigger computer actions. In order to create more efficient mind-computer interactions, algorithms will need to get better at differentiating between conscious/unconscious and intentional/non-intentional mental transitions. Sensing the environment or analysing physical features like facial expression can help filter the signal from the brain.
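
As a minimal sketch of how such filtering could look - the MentalTransition record and its boolean flags are illustrative assumptions, standing in for whatever the decoding algorithms can actually infer - an interface could let only mental actions through:

    from dataclasses import dataclass

    @dataclass
    class MentalTransition:
        conscious: bool      # awareness
        intentional: bool    # voluntariness
        self_induced: bool   # origin
        signature: str       # decoded pattern, e.g. "push"

    def is_mental_action(t: MentalTransition) -> bool:
        # A mental action is a conscious, intentional, self-induced
        # mental transition (see the classification above).
        return t.conscious and t.intentional and t.self_induced

    def actionable(stream):
        # Keep only the transitions that should trigger computer actions.
        return [t for t in stream if is_mental_action(t)]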

We can also get better at reaching specific mental states by:

  • Learning how to make unconscious mental transitions, conscious.
  • Learning how to self-induce mental transitions that are normally externally induced.
  • Optimising the mental transitions' routes by finding distinguishable paths that don't interfere with other trigger points in the mental space; creating interfaces that are far apart from one another in the mental space.
  • Reducing the transition time. A fright is a good example of a fast transition, externally induced in this case. With practice or with the help of external stimuli we may be able to jump from one point to another in the mental space.
  • Being able to perform more than one action at the same time by using multiple mental interfaces simultaneously.

Two-way mind-machine communication

We interact with our environment using our five senses, distributed within our body interface. We use these senses - mostly three: sight, touch and hearing - to communicate with computers. However, there is the possibility of communicating with computers bypassing those senses, using solely our brain - a sixth sense, or sensor if we like.

There are examples of people being able to see or hear through electrical impulses applied to their brain, as well as cases of applying electrical impulses to other organs that then send the signal to the brain.

Being able to write to the brain brings a whole new set of opportunities (and concerns). Two-way communication allows the mental space to be artificially modified:

  • Inducing a specific mental state or mental transition.
  • Loading a mental interface in the user's mental space.

Today there are many applications that can be intrusive - with the user's prior consent (e.g. alerts, alarms, calls, notifications), without the user's agreement (e.g. advertisement) and even without the user's awareness (subliminal messages). Two-way mind-machine communication creates a deeper level of intrusion, but with similar characteristics. We may or may not be conscious of those intrusions. We also may or may not be responsible for the machine performing those intrusions.

Some possibilities when using two-way mind-machine interfaces:

  • Send feedback to the user's brain about an action.
  • Deliver silent alerts and notifications.
  • Seamless passwords: in order to validate the user, send a signal to the user's brain and measure the reaction - it can unequivocally identify the user (see the sketch after this list).
  • Spatial connection: create a positional link between the user and the machine so they are able to locate each other, in distance and direction, similar to how we use our ears to locate objects that make sound.
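
A hedged sketch of the seamless-password idea from the list above. The device object and its two methods are hypothetical placeholders, and user matching is reduced to a simple correlation against the reaction trace recorded at enrolment:

    def correlation(a, b):
        # Naive normalised similarity between two reaction traces.
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        sa = sum((x - ma) ** 2 for x in a) ** 0.5
        sb = sum((y - mb) ** 2 for y in b) ** 0.5
        return cov / (sa * sb) if sa and sb else 0.0

    def authenticate(device, enrolled_trace, threshold=0.9):
        # 1. Send a known stimulus to the user's brain (hypothetical API).
        device.send_stimulus("challenge")
        # 2. Measure the involuntary reaction to it.
        trace = device.read_reaction()
        # 3. The reaction itself acts as the password.
        return correlation(trace, enrolled_trace) >= threshold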

From the computer's point of view, the user can become a new peripheral, an auxiliary device. It could use the human brain as processing power, to take human-reasoned decisions, or as a sensor.

3. MENTAL INTERFACES

A mental interface is a human interface created by a machine user to perform a mental transition that can be sensed by the machine through a mind sensor. A mental interface is the human side of a mind-machine interface.

Mind sensor is a more generic term than brain-computer interface - there may be other ways besides brainwave reading to capture mental states and thoughts.

Like tangible or graphical user interfaces, good mental interfaces will have to find a balance between two main aspects:

  • Being easy and intuitive for the user to understand and operate. Good affordances, short learning curve, appealing, rewarding and forgiving.
  • Being technologically feasible. In this case a combination of the hardware and the algorithms required to read the user's mental actions.

For example, it is easy and intuitive for the user to think of the color green or the color red as a 'yes' or a 'no'. However, the technology today cannot easily distinguish between green and red thoughts, so we could use push and pull instead. This is an example of the compromise between user friendliness and technological capabilities.

Mind-computer interfaces won't operate alone, at least initially. As happens today, human-computer communication will most likely be a combination of interfaces - visual, auditory, tangible and mental. With the introduction of mind-computer interfaces, we will see the role of the other types of interfaces evolve, to complement or support the experience.

As with other interfaces, the designer's job will be to:

  • Understand when it is meaningful to operate a machine with a mental interface, and define a combination of interfaces (visual, tactile, etc.) that adequately complement it for the purpose of the interaction.
  • Create the right language so users understand what they can do, and how they can do it.
  • Define actions that are easy to learn and perform, aligned with the user's intention - a natural mental user interface.
  • Create systems that evolve with the user and his experience on using mental interfaces.

As an example, this could be the reasoning of an interface designer creating the mental button for sending an email: "The user has to focus until he reaches X units in order to send the email. But for some users it is easier to imagine a big green dot than to focus. The green dot, though, is generally more difficult for the computer to identify, so it may not always work. Maybe focus is the default option, and in addition, the user can use the green dot. Maybe the user can customise the color of the dot. Or choose another shape. Or maybe the user can record a preferred mental interface. As long as the computer distinguishes the mental transition clearly..."

Human-computer interfaces are intuitive when the interaction is aligned with the purpose. Because mind-computer interfaces read the unfiltered mind, we want mental interfaces to be as close to natural thought as possible.

Learning how to use mental interfaces

Mental interfaces exist only inside an individual, who is the only one able to create those interfaces and operate them (considering one-way mind-machine communication). Mental interfaces are hidden - you can guess what may be happening in someone's mind while they use a mind-computer interface, but it won't be explicit enough to learn by imitation.

Learning how to drive can be done by imitation and repetition. Learning how to do an addition involves methods for the student to understand how it works, methods to practise, and methods to become more efficient at performing the operation. Learning how to use mental interfaces is closer to the second example, requiring methods that promote exploration and self-discovery.

There are many examples that show a variety of strategies to learn how to use an interface:

  • Training period before use.
  • Learning on demand. Instruction book.
  • Progressive learning embedded in the use, discovery.
  • Letting the user discover the rules rather than explaining them explicitly, as good game design often does.

Whatever strategy is followed by a specific application that uses a mind-computer interface, ideally there should be a learning process on both sides - the user and the application - in order to discover each other's abilities and understand one another.

A unique, evolving interface for each user

Customisation in human-computer interfaces is a variable designers and engineers have to deal with - allowing the user to skin a visual interface, re-arrange icons in the dock, create keyboard shortcuts for frequent actions, etc. Giving freedom to the user doesn't necessarily make the interface more efficient or simple. Taking away this freedom isn't a sign of either of the two, though. It comes down to making the right design decisions for the target user, with no specific formula.

In the case of one-way mind-computer interfaces, the user has full control of the mental interface. Only the external input (visual interface, sound, etc.) can induce the user to use a certain mental interface in a certain way. Because of the different background and nature of each user (left-brained / right-brained, for instance), users will recognise the optimal mental interfaces only after a period of exploration. Imagine a mail client that suggests that the user create a mental interface with an imaginary envelope fading away in order to send the email. The user can learn how to use that interface, but if the application allows a degree of exploration, the user may find it easier to use a static image of a blue post box as a mental interface, or to imagine his hand throwing the letter like a frisbee (translating into ideomotor commands that are easy to capture by a brain-computer interface).

Some applications that use a brain-computer interface have a training mode, which allows recording a baseline for the idle state, and mental transitions to be associated with different computer actions.

In order to customise the user interface, the application could suggest different mental interfaces for the user to try, and evaluate which one feels more natural. The extreme case is to record a mental interface - the user is given some freedom to create his own mental interface, with the condition that it generates distinctive brain signals. While it would be easier for designers and engineers to force all users to use a similar mental interface, good mind-computer interfaces should be tolerant of different mental interfaces, giving the user the flexibility to design the best interface for each action.
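
As an illustration of such a training mode, a minimal calibration sketch, assuming the mind sensor is reduced to a single numeric signal: record the idle baseline, then accept a recorded mental interface only if its signal is distinctive enough:

    def mean(xs):
        return sum(xs) / len(xs)

    def record_baseline(idle_samples):
        # Idle state: average level and spread of the signal.
        mu = mean(idle_samples)
        sigma = mean([(x - mu) ** 2 for x in idle_samples]) ** 0.5
        return mu, sigma

    def is_distinctive(action_samples, baseline, k=3.0):
        # Accept the user's recorded mental interface only if it sits
        # at least k standard deviations away from the idle baseline.
        mu, sigma = baseline
        return abs(mean(action_samples) - mu) > k * sigma

    baseline = record_baseline([0.9, 1.1, 1.0, 0.95, 1.05])
    print(is_distinctive([2.4, 2.6, 2.5], baseline))  # True: usable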

Besides taking into consideration the variety of users, brain-computer interfaces should support, and also make use of, the evolution of a single user to improve the synergy with the machine. Computer interfaces should evolve as users get faster at navigating the mental space and more experienced at creating adequate mental interfaces.

Users may develop ways to automate mental transitions, creating mental shortcuts and workarounds to optimise their mental actions. Understanding how the mind-computer interface works may give the user ideas for hacking his mind and/or the computer interface. Users may operate the computer with mental interfaces that were never considered, or use a certain mental interface to make the computer do something it wasn't intended for.

Natural mental interfaces

Natural interfaces are those based on natural - not artificial - elements or gestures, which makes them intuitive, easy to adopt, and able to disappear over time. Natural mental interfaces will be those that induce mental actions aligned with the user's natural mental state at the moment of the action.

For example, let's imagine someone who is informed that he has won a free ticket to his favourite band's concert. The visual interface asks him to accept or reject the free ticket. A natural mental interface in this case will be one that uses excitement as a dimension, as the user is likely to be excited at the moment of performing the mental action. It wouldn't be natural to activate the "Yes, I want that free ticket" using the relax dimension - although it could make a good self-control game.

In some cases we will want our emotions to be in harmony with the nature of the mental interface - the interaction will flow, it will feel very natural. In other cases we may want the mental space to be distant from the variables that relate to our emotions.

Mental grid

Elements in a graphical user interface are normally arranged in a grid. Grids help organise those elements, set them in hierarchy and present them in an aesthetic manner. They also help users create mental models of the interactions, and help designers adopt certain conventions of interface design.

Likewise, while designing mental interfaces, the interface designer - and the user - may need to organise the mental space in a grid: respecting blank space, distributing poles or trigger points equidistant from one another, etc. Since there is no tangible common ground between the interface designer and the user, setting some standards will help build a model that supports designing and using interfaces. There will be an unspoken language, an agreement, between the designer and the user, as happens with web design. With extended use of mental interfaces, the user will develop expectations of usage that will be satisfied if the designer has respected the standards.

Mental grids may be grounded both in mind ergonomics and in technological feasibility. For example, let's imagine that for a specific action the user needs to arrange two coloured areas as a mental interface. For the user it might be easier to create two rectangles than any other shape. For the computer to see those rectangles, it might be easier if they are spread apart from each other.

The user will benefit from knowing that any time two colors need to be used as a mental interface, this is the optimal mental grid to arrange them in. There is an obvious benefit for the designer in being aware of this type of convention too - understanding the constraints of an interface (especially if it is sitting in the user's head) is key to having control over the design of the overall experience. Otherwise it is like asking a game designer to design a game without knowing whether it will be played with a keyboard or a two-button gamepad.

Metaphors

Metaphors in human-computer interfaces make use of previous knowledge or mental models the user already has, to facilitate the understanding and use of those interfaces. Many of the metaphors used in GUIs are references to the physical world. Some of these metaphors have evolved or disappeared in favour of interfaces based completely on flat, digital references. There are also gestural metaphors based on natural gestures, such as swiping with a finger to browse pictures on a screen - as you would with photographs on a table.

While designing mental interfaces, some of the traditional metaphors may still be useful, as well as new metaphors that reference elements from a GUI.

Using touch-screen gestures as a mental interface may work well since they will trigger ideomotor commands, which are relatively easy to detect with a brain-computer interface. We could imagine our hand swiping from left to right to move images on the screen with our mind.

Visual language

Today, most instructions on how to use a mind-computer interface use words to describe the mental actions. While words might be descriptive enough for simple actions, visual language can be more powerful for conveying mental states or transitions, ranging from the very literal to the more evocative. This visual language will help us better communicate, learn and use mental interfaces.

We should differentiate between the visual representation of a mental action (the what, e.g. focus) and of a mental interface (the how, e.g. focusing on a white dot in a black space):

  • Representation of mental actions, regardless of the mental interface used by the user to perform such action.

  • Representation of mental interfaces to serve as suggestion for the user to perform a specific mental action, or to learn how to create a mental interface.

    While this type of representation could be presented in the third person, it seems more adequate to present it in the first person, as it would be seen from the perspective of the user's mental space.

    Some examples of representation of mental interfaces:

    • Focus
    • Accept / OK (one variant could be based on ideomotor commands)
    • Sending an email

    A mental interface may be a sequence, and so should be its representation.

These representations may be useful to indicate which mental interfaces are better sensed and interpreted by the hardware and the algorithms that compose the mind-computer interface. They may be used in documentation or in the GUI that supports the mind-computer interaction.

There might be a third type of visual representation for the mental dimensions, to indicate which dimensions the application or the hardware is compatible with.

The new role of GUIs

Screen based interfaces are at the center of human-computer interactions today. From desktop computers to smartphones, the screen is the main output interface, and it has also become the main input for portable devices, enabling direct manipulation of the elements on the screen with a tangible connection.

Graphical user interfaces play different roles in a human-computer interaction. A GUI:

1. Informs the user about which actions can be performed.
2. Suggests how to perform such actions. Affordances are the how embedded in the interface itself.
3. Can be the element to perform the action.
4. Gives feedback about the user actions and computer processes.

If we take the example of a button:

1. You can confirm. 2. by clicking. 3. here. 4. You are clicking me now / Done.

Using mind-machine interfaces, those responsibilities and phases of the interaction will be split between the mind-computer interface and the graphical user interface and/or other interfaces in play. A one-way mind-computer interface working as input needs a complementary interface to inform the user about (1) the what, (2) the how, and (4) the feedback. In this case, the GUI can become a more evocative interface, supporting the mental activity. Only bi-directional mind-computer interfaces can provide interactions that rely purely on mind-computer communication.

Similarly to what happens today with speech-to-text input, mind-computer interfaces will be introduced progressively into everyday products and services. The complementary GUI may inform when a mind-computer interaction is available.

Decision, intention and confirmation

User-computer interactions are composed of a series of cycles through their interfaces.

The cycles may involve different interfaces, and they always involve the brain and the computer processor, which interpret, when their turn comes, what happened during the rest of the cycle.

Cycles can be broken down into three main phases:

  • Decision: the user is shown some options and takes a decision.
  • Intention: the user moves towards the interface that will serve to communicate the decision.
  • Confirmation: the user communicates his decision to the computer.

Taking the interaction of 'clicking the Yes button to confirm a transaction' as an example, the cycle runs through those three phases: the user decides to confirm, moves the pointer towards the button (intention), and clicks it (confirmation).

In cases like typing words on a keyboard, the intention is not communicated, but it can be inferred using an auto-completion system.

Of the three steps the user follows when clicking the Yes button (decision, intention, confirmation), the computer only reacts to some.

When the user moves the mouse pointer towards Yes, there is an intention that is not being used by the computer to take action. It will only take action upon a mouse click. Hovering is another approximation towards the final action - it is like opening a safety switch cover that allows the button to be pressed. Up until that point, the user can change his mind.

Mind-computer interfaces could follow this convention, using mental transitions as intention and a specific mental action to confirm, like a click. However, the interface may take advantage of being much closer to the user's thought, interpreting and acting upon the user's intention, or even upon the initial decision thought, if there is a signature associated with it that the computer is able to identify.

While it may be difficult for the computer to learn the user's intention by analysing the movement of the mouse, mental interfaces with polarised mental triggers may provide the computer with a clearer hint of the user's purpose.

An aspect that has a big impact on the overall experience of an interaction is how forgiving it is of the user's mistakes, or how flexible it is to changes of mind. There are two approaches to that:

  • Impulsive: the computer reacts to every user input but allows correction (backspace, ctrl+Z), e.g. using a keyboard.
  • Reflective: the computer only takes action after the user confirms an intention, e.g. point and click to accept a transaction: pointing is an intention but doesn't trigger any computer action; clicking triggers the computer action.

Mind-computer interfaces will probably combine these strategies to balance effectiveness, safety, forgiveness and redundancy.
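
A sketch of how the two strategies could coexist in a single interface; the event names and the undo history are illustrative assumptions:

    state = {"text": "", "pending": None}
    history = []

    def handle(event, value=None):
        # Impulsive: act on every input, but keep an undo history.
        if event == "type":
            history.append(dict(state))
            state["text"] += value
        elif event == "undo" and history:
            state.clear()
            state.update(history.pop())
        # Reflective: store the intention, act only on confirmation.
        elif event == "point":
            state["pending"] = value
        elif event == "confirm" and state["pending"]:
            state["text"] += state["pending"]
            state["pending"] = None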

Mental buttons

A mental button is a place in our mental space that, after being activated and detected by a brain-computer interface, triggers an action in the machine. These triggers are configured in the machine - of all our mental actions, the machine reacts to some of them according to certain logic gates.

Mental triggers have a 'hit area', a range that needs to be designed, as if it were a screen button.

In mental interfaces, how comfortable the user feels activating a trigger can notably affect performance.

Examples:

To activate A (e.g. to accept), you could use the variable F (e.g. focus) in different ways, from less to more complex (sketched in code after this list):

  • Cross the value F=100
  • Reach the value F=100 and stay for a second in a close range, 95 < F < 105
  • Jump exactly to the value F=100
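
The three activation rules, sketched in code; the sampling rate, the tolerances and the 'jump' test are assumptions:

    def crossed(samples, level=100):
        # Variant 1: activate as soon as F crosses the level.
        return any(f >= level for f in samples)

    def held_in_range(samples, level=100, tol=5, hold=10):
        # Variant 2: reach the level and stay within 95 < F < 105
        # for `hold` consecutive samples (e.g. one second at 10 Hz).
        run = 0
        for f in samples:
            run = run + 1 if abs(f - level) < tol else 0
            if run >= hold:
                return True
        return False

    def jumped_exactly(samples, level=100, tol=1):
        # Variant 3: land (almost) exactly on the level coming from
        # far away, rather than approaching it gradually.
        for prev, f in zip(samples, samples[1:]):
            if abs(f - level) <= tol and abs(prev - level) > 10 * tol:
                return True
        return False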

When designing physical or on-screen buttons, we give them an adequate size depending on our fingertips, the type of device we're using, etc. In a GUI, a tiny button is difficult to see and difficult to point at, while a larger button is, in general, more comfortable - it will be similar with mental triggers. Moreover, the hit area may change as the user becomes more experienced with the interface.

The trigger may have different states. Like a screen button, besides defining its edges, a mental trigger needs to give the user feedback that confirms the intention - the hover or the click.

Mental triggers may also change position in the mental space, depending on the user's idle state. There are buttons in today's interfaces whose position is relative to the mouse pointer - the right-click menu is an example. Likewise, mind-computer interfaces may use the idle state of the user's mind as a relative zero point from which to configure the trigger points.

There may be another case, when the user is in a mental state that is not adequate for the action he wants to perform. Let's imagine the user needs to focus in order to perform an action - but the user is already at a high level of focus. There are different options from here:

  • The computer doesn't take into account the idle mental state of the user and performs the action automatically.
  • The user is asked to bring the focus level down before performing any action.
  • The user is offered another interface, using the same dimension (defocus) or another dimension (push forward, for instance).

The user's mental state can highly influence the performance of mind-machine interfaces. It seems obvious that the computer will have to take the user's mental state into account and adjust the interface in real time. It seems equally important to inform the user about his current mental state, so he can learn and adjust the mental interfaces.

4. OTHER ASPECTS

Feedback

One of the roles of computer interfaces is to provide feedback to the user, as well as prompt confirmation of actions that require a double check. A traditional process of feedback and confirmation using a mind-computer interface would be:

1 - The user performs a mental action.
2 - The computer receives the signal and sends a confirmation query back to the user.
3 - The user confirms the action.
4 - The computer performs the action.

There is a natural and measurable brain reaction to visual stimuli, known as the N1 component.

The user is conscious of this feedback and confirmation process. However, there might be an opportunity to perform a much more seamless loop by using the user's natural reaction to external stimuli:

1 - The user performs a mental action.
2 - The computer receives the signal and sends a specific stimulus back to the user as feedback.
3 - The user reacts differently (positive/negative variance) to the stimulus depending on whether or not the feedback is aligned with his intention. This reaction is captured by the mind-computer interface.
4 - The computer performs the action if the user's reaction is positive, and returns to the confirmation status if it's negative.

In this case, the user's side of the loop (step 3) is performed by the subconscious.
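
A sketch of this seamless loop, assuming a hypothetical interface object whose read_reaction() returns a positive value when the measured reaction is aligned with the user's intention:

    def seamless_confirm(interface, action, max_retries=3):
        for _ in range(max_retries):
            # 2. Send a specific stimulus back to the user as feedback.
            interface.send_feedback(action)
            # 3. Capture the involuntary (subconscious) reaction.
            reaction = interface.read_reaction()
            # 4. Act on a positive reaction; otherwise return to
            #    the confirmation status and try again.
            if reaction > 0:
                interface.perform(action)
                return True
        return False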

Safety

Besides the technology being reliable enough to perform critical operations, there might be design decisions that can help create more secure mental actions.

One option could be to distribute decision trigger points across a combination of different dimensions in the mental space. For example: in order to fire, trigger A needs a combination of three dimensions' value ranges. Trigger B's area is set by two dimensions' values in a certain relationship to each other. Trigger C requires three dimensions' values in a certain relationship to each other, plus the transition path hitting the trigger area within a certain range of angles.
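
A sketch of trigger A from the example above: the action fires only when three dimensions sit inside their value ranges at the same time. The dimension names and ranges are invented for illustration:

    # Trigger A: a combination of three dimensions' value ranges.
    TRIGGER_A = {
        "focus":      (80, 100),
        "excitement": (40, 60),
        "push":       (70, 100),
    }

    def fires(state, trigger):
        # All dimensions must be in range simultaneously, which makes
        # accidental activation much less likely than a single threshold.
        return all(lo <= state.get(dim, 0) <= hi
                   for dim, (lo, hi) in trigger.items())

    print(fires({"focus": 90, "excitement": 50, "push": 85}, TRIGGER_A))  # True
    print(fires({"focus": 90, "excitement": 10, "push": 85}, TRIGGER_A))  # False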

In multi-choice decisions, the user may create mental interfaces that operate along paths that don't intersect one another.

Privacy

The use of personal networked devices and applications exposes users' information to others, who may take advantage of its value as individual or aggregated data. We search for things that interest us, we buy things we like, we communicate with people we love, etc. Although many of those actions are public today, there is a layer that has never been made public. What we think is still private.

By using brain-computer interfaces we unveil part of our most private human space: our mind. These interfaces may provide a more pleasant, efficient and natural way of interacting with computers, but they can also gather data from us that we are not aware of, or that we don't allow third parties to have.

Collecting brainwaves from users (or customers, or potential customers) without context might be useful, but it definitely gains value when paired with contextual information: their activity at the moment, the actual visual stimuli, the music they are listening to, who they are talking with, where they are, etc.

From a purely design perspective, there is the opportunity to use users' mind activity to improve a service or product by collecting their reactions - e.g. "most users feel confused between the 2nd and 3rd steps of this process". Other fields, like advertising, will probably find their best ally in mind-computer interfaces.

Privacy settings may look like this in the future:
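
Purely as an illustration - every permission name below is invented - they might resemble today's app permissions, extended to mind data:

    # Hypothetical per-application mind-data permissions.
    mind_permissions = {
        "mail_client": {
            "read_mental_actions":      True,   # needed to operate the app
            "read_emotional_state":     False,  # not strictly necessary
            "store_raw_brainwaves":     False,
            "share_with_third_parties": False,
        },
        "advertising_sdk": {
            "read_mental_actions":  False,
            "read_emotional_state": False,
        },
    }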

Apps, websites and software in general will need to be really transparent about what they use 'from you', and for which purposes.

There might be a fine line between what is strictly necessary for the computer to sense the user's intentions, and other information that might help algorithms perform better but also be very valuable for other purposes.

Undo an action / Quit a process / Viruses

There will always be a need for an undo. What does the mental interface for undoing an action with your mind look like? Maybe it makes sense for it to be related to our emotional reaction to something we did wrong, or something that was misinterpreted by the computer.

There might be occasions in which we need to interrupt the mind-computer communication, especially if it is bidirectional:

  • To reset our average mind state without any external stimuli.
  • If our mind enters a loop with the computer that is out of control.
  • Suspicion of our mind being tricked, e.g. by an advertisement that guides you (your mind) towards buying something, or by a (mind?) virus. After the interruption there may be a way to report the cause: "Do you want to report a suspected misuse of mind-computer communication?"

A physical interface may be appropriate to interrupt and restart the communication with the computer.

The environment

The environment is a continuous stimulus to our senses, affecting the way we feel and behave. Our brain is able to filter those inputs, as well as the outputs - e.g. we are able to remain calm in a stressful environment. However, during mind-computer communication the computer may receive signals that carry our unfiltered reaction to external stimuli, adding noise to our voluntary intention.

In order to minimise this effect, computers may be equipped with sensors that measure the environmental conditions, so that the noise caused by environmental stimuli can be subtracted from the brain signal.
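
A minimal sketch of that subtraction, assuming the environment sensor and the brain signal are sampled in sync and the coupling between them is linear - a strong simplification:

    def estimate_gain(brain, env):
        # Least-squares estimate of how much of the environment
        # signal leaks into the brain signal (linear assumption).
        num = sum(b * e for b, e in zip(brain, env))
        den = sum(e * e for e in env)
        return num / den if den else 0.0

    def denoise(brain, env):
        # Subtract the environment-correlated component, keeping the
        # part of the signal that carries the voluntary intention.
        g = estimate_gain(brain, env)
        return [b - g * e for b, e in zip(brain, env)]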

The design of the space can become a key aspect of a successful mind-computer interaction. There will be spaces more adequate than others for using this type of interfaces.

The use of mind-computer interfaces will have an impact on the environment too - changes in social dynamics and new behaviors around people using such interfaces, as happened with the use of headphones, game consoles and mobile phones.

Some scenarios

  • Consuming media:

    Consider watching a movie using a two-way mind-computer interface. Brain stimuli may complement or augment the visual and auditory experience, by simulating senses or inducing feelings (stress, excitement).
    It may be possible to experience a movie in a shorter time with brain stimuli - the viewer is provided with a skimmed version of the movie, focused on the key parts of the plot. These parts can be selected from data aggregated from a large audience, by capturing viewers' attention or excitement while watching the movie at its full length.
    Different time resolutions and stimuli options may define new standards of quality for media consumption.

  • Device location awareness

    Using two-way communication, people may be aware of the location of their devices, wirelessly, at any time or on demand. Similar to how a beeper locator works with sound, the brain may develop the ability to transform an external stimulus into spatial coordinates. Devices may be aware of their owners' position as well.

  • Use of drugs to enhance the performance of mind-computer communication.
  • Reducing the user's perception of senses like hearing or vision, by artificially disabling certain areas in the brain, in order to increase focus on the mind-computer channel.
  • Patenting mental interfaces. Systems to identify which mental interface people are using to perform a certain mental action.
  • Monitoring the intangibles of the self: emotions, creativity, etc.

Ishac Bertran | 2013 | ishback.com