Apple’s MacBook Pro series is back in the media thanks to the company’s announcement of the new “Touch Bar”.
Announced last Thursday, the Touch Bar uses a Retina display and multitouch technology to replace the MacBook Pro’s top row of static function keys.
It might seem like a simple idea, but it builds on a long history of research on what is referred to as “human–computer interaction”.
The feature deserves the attention it’s receiving as it provides a glimpse into how we will be interacting with computers in the not so distant future.
It’s not a new idea, but once again Apple has managed to bring an innovation to the mass consumer market.
The origins of touch
Using touch for interacting with computers became commonplace with the advent of the iPhone in 2007.
But touch-based consumer devices have a long history that includes the personal digital assistants (PDAs) popular through the late 1990s and early 2000s.
PDAs used a stylus pen along with a touch-sensitive screen.
This form of interaction was first demonstrated in 1963 by Ivan Sutherland at MIT’s Lincoln Laboratory as part of a new “man–machine” interface called “Sketchpad”.
Being able to directly tap on a button with a pen offered a much more intuitive way to interact with computers compared to moving a mouse and clicking on menus and icons.
But the approach felt constrained: it required a pen, and it could register only one input at a time.
Using a pen didn’t quite match the way we interact with objects in the physical world and made tasks such as typing slow and cumbersome.
The breakthrough of multitouch
This limitation was lifted when Jeff Han, who at the time was a research scientist at New York University, presented his vision of an “interface free” computer at the TED conference in 2006.
In his talk, Han demonstrated moving, zooming and manipulating virtual objects on a multitouch tabletop computer using both hands.
The original TED video of his talk went viral, receiving more than 4 million views and inspiring researchers around the globe.
But attempts to bring multitouch tabletops to the mass market, such as the Microsoft Surface, a 30-inch tabletop computer (now known as PixelSense to distinguish it from Microsoft’s tablet line), ultimately failed.
Yet, many of the interaction concepts developed for tabletop computers, such as the now ubiquitous pinch-to-scale gesture, set the foundation for how we interact with smartphones today.
Bringing touch to the keyboard
While multitouch provides an intuitive way for manipulating virtual objects, a recurring challenge is the lack of “tactile” feedback.
Typing on typewriters a generation ago and computer keyboards today works so well because of the physical shape and mechanics of their keys.
Indeed, Apple is putting a lot of effort into retaining the tactile feedback of its laptop keyboards while reducing their height.
Approaches to making keyboards smaller and portable, such as laser projection keyboards, remain neat gimmicks, as typing on flat surfaces that don’t provide any sense of feedback feels unnatural and slow.
Apple’s new Touch Bar is therefore taking a strategically sensible approach by only replacing one row of keys on its new MacBook Pro keyboard.
Typing on this new bar will feel unnatural, more like tapping a button on an iPhone than pressing a physical key.
But because the main keys retain their tactile feel, users will barely notice this shortcoming.
Indeed, those who were already able to get their hands on the new Touch Bar have described it as “very, very cool.”
The user-centred thinking behind the Touch Bar
Many of the function keys in the top row of current MacBook keyboards are rarely used.
So replacing them with a dynamic touch display makes it possible to show function keys that are relevant to the user’s current context.
For example, the Touch Bar displays a reply button when the user is in Apple’s Mail application.
This not only makes the function keys more meaningful, but it also makes interacting with applications easier.
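To illustrate the idea, a context-dependent key row can be thought of as a simple lookup from the application in focus to the controls it exposes. The sketch below is purely conceptual, not Apple’s actual API, and the application names and actions are hypothetical examples.

# Toy model of context-dependent function keys: the controls shown
# depend on which application currently has focus.
# Application names and actions are hypothetical examples.
CONTEXT_KEYS = {
    "Mail": ["Reply", "Forward", "Archive", "Flag"],
    "Safari": ["Back", "Forward", "New Tab", "Search"],
}
DEFAULT_KEYS = ["Brightness", "Volume", "Mute"]  # the familiar static controls

def keys_for(app_in_focus):
    """Return the controls to display for the app in focus,
    falling back to the static defaults."""
    return CONTEXT_KEYS.get(app_in_focus, DEFAULT_KEYS)

print(keys_for("Mail"))      # ['Reply', 'Forward', 'Archive', 'Flag']
print(keys_for("Terminal"))  # no context-specific keys, so the defaults appear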
There is a principle in human–computer interaction research known as Fitts’s law, which predicts that the smaller and further away a target is, the longer it takes to point at it.
Tapping a reply button in the Touch Bar will therefore feel much easier in many situations, compared to painfully moving a mouse cursor across the screen.
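To make that concrete, here is a minimal sketch in Python of the Shannon formulation of Fitts’s law, with illustrative (not measured) coefficients and target sizes, comparing a wide button just above the keyboard with a small button far across the screen.

import math

def movement_time(distance, width, a=0.2, b=0.1):
    """Predicted pointing time in seconds using the Shannon formulation
    of Fitts's law: MT = a + b * log2(distance / width + 1).
    The coefficients a and b are illustrative placeholders."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# A wide reply button sitting just above the keyboard (hypothetical sizes in mm) ...
touch_bar_button = movement_time(distance=30, width=20)
# ... versus a small toolbar button on the far side of the screen.
on_screen_button = movement_time(distance=300, width=5)

print(f"Touch Bar target: {touch_bar_button:.2f} s")
print(f"On-screen target: {on_screen_button:.2f} s")

Under these made-up numbers, the nearby, larger target is predicted to take less than half the time of the distant one, which is the intuition behind putting contextual controls directly above the keyboard.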
Where to from here
What makes the Touch Bar an exciting innovation is that it blurs the boundary between physical buttons and digital touch displays.
It has its shortcomings, such as the lack of tactile feedback, but it’s a first attempt at bringing fully customisable or adaptive keyboards to the mass market.
Apple has learnt its lesson from introducing innovations too early.
Products such as the Apple Lisa, its first personal computer with a graphical user interface, and the Apple Newton, its attempt at launching a PDA, were too expensive to be successful.
Producing fully customisable keyboards for a mass market is still a cost issue.
But it’s only a matter of time until they become the standard for interacting with personal computers.
As more and more digital devices become part of our everyday lives and activities, we will see a further diversification of means for interacting with computing devices.
We might type emails on an e-ink keyboard, touch, swipe, shake and squeeze to view digital content on mobile devices, use our minds to navigate virtual environments, and talk to our personal home assistant about the weather forecast.
This article was originally published on The Conversation.