The iPhone's touchscreen explained
Have you ever wondered how the iPhone's touch screen actually works? Multi-touch, gesture support, instant response: all of these seemingly magical things are driven by quite an interesting set of technologies, which we will take a closer look at in the article below.
Touch screen fundamentals (Touch screens 101)
When we take a closer look at most touch screen devices, several principles unite all of them. A touch screen receives input by monitoring changes in certain predefined parameters. These parameters change as the user interacts with the screen: touching it triggers a measurable change, and the screen's sensors register that change.
Touch screen sensors can react to a whole bunch of different things. Some monitor changes in sound waves or in near-infrared light that occur when a user touches the screen. In the infrared case, the finger interrupts light traveling across the screen's surface, and the sensors detect the missing light and calculate the finger's location. Other devices (including the iPhone) rely on changes in electric charge caused by the user touching the screen.
Although there are multiple systems of monitoring the placement of fingers on a touchscreen, currently the most popular are these two:
● Capacitive touch screens. When a user touches the screen, the finger changes the amount of electrical charge stored at a certain point on the screen. The charge itself is held in a separate layer of capacitive material.
● Resistive touch screens. Devices with resistive touch screens use two neighboring layers of material — one resistive and one conductive. When a user presses the screen, the two layers touch in that area. This changes the circuit's resistance, which the device's sensors register.
On top of these sensing principles, many devices and gadgets also rely on additional algorithms and approximations to interpret the raw input — defining changes along an axis, using system-wide averages, or using baselines.
Things get somewhat more complicated when one needs multi-touch, a feature that we love iPhones for. In fact, pinch-to-zoom and other intuitive several-finger gestures simply did not work reliably before the iPhone arrived.
So, what's so tricky about multi-touch? The hard part is that existing technologies — that is, pre-iPhone touch screens — were good at locating one finger. When it came to locating several fingers, the results were poor: some devices would report errors, and others simply ignored anything beyond the first touch.
The reason for this inability to handle multi-touch comes from the kinds of approximations that pre-iPhone devices used to work out where the user's finger was on the screen and where it moved. These included:
● Using a system-wide average to define the location of the touch spot
● Defining changes along an axis or direction
● Creating baselines to calculate the finger's movement
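To see why the first of these approximations breaks down with several fingers, consider this minimal sketch (a simplified model for illustration, not any device's actual firmware): averaging all active sensor readings into one touch location works for a single finger but produces a phantom touch when two fingers press at once.

```python
def average_touch(points):
    """Collapse all active touch points into a single (x, y) average,
    the way a system-wide-average controller would."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# One finger at (100, 200): the average IS the finger.
print(average_touch([(100, 200)]))               # (100.0, 200.0)

# Two fingers at (100, 200) and (300, 400): the average is a phantom
# touch at (200.0, 300.0), a point where no finger actually is.
print(average_touch([(100, 200), (300, 400)]))
```

The same kind of ambiguity affects the axis-based and baseline approaches: they compress several touches into one summary value and cannot tell the individual fingers apart.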
So, what is so special about the iPhone's input-registering circuitry that makes it capable of handling multi-touch?
iPhone multi-touch: a mix of existing tech done right
Indeed, the magic multi-touch gestures that we love to use on our iPhones are a good mix of existing technologies plus loads of creative thinking. Apple's engineers did what they always do when designing the iPhone — they thought differently than the rest of the industry.
As we have mentioned earlier, the iPhone's touch screen uses the capacitive method of locating the user's fingers. Touching the surface changes the electric charge (capacitance) at a certain point of the screen.
In order to handle multiple touches, the iPhone's engineers arranged the sensors that register capacitance changes into a grid. Each sensor is responsible for one tiny spot on the screen, with all of them forming something we can call a matrix or coordinate system. In practice, that means each sensor can monitor its own tiny area and send information about its state to the iPhone's processor.
As a user touches the screen of an iPhone with several fingers, each finger changes the electric charge at a group of points on the grid. Each affected point sends the following information to the processor:
● The coordinates of the point on the grid
● The amount of change in electric charge
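The key difference from the averaging approach is that the grid preserves every touch point separately. A rough sketch of the idea (the grid values, threshold, and function names are illustrative assumptions, not Apple's actual firmware):

```python
# Minimum charge change treated as a real touch (assumed units).
THRESHOLD = 10

def read_touch_points(grid):
    """Scan a matrix of capacitance changes and report (x, y, delta)
    for every cell above the threshold -- one report per sensor,
    exactly what the grid sends to the processor."""
    points = []
    for y, row in enumerate(grid):
        for x, delta in enumerate(row):
            if delta >= THRESHOLD:
                points.append((x, y, delta))
    return points

# A tiny 4x4 grid with two separate fingers pressing it.
grid = [
    [0,  0,  0,  0],
    [0, 42,  0,  0],   # finger 1 near (1, 1)
    [0,  0,  0, 37],   # finger 2 near (3, 2)
    [0,  0,  0,  0],
]
print(read_touch_points(grid))  # [(1, 1, 42), (3, 2, 37)]
```

Because each sensor reports independently, two fingers simply produce two separate clusters of readings instead of one ambiguous summary value.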
Once this information reaches the processor, iOS's gesture-recognition algorithms can work out the meaning of the user's touches.
His Almightiness, the iPhone Processor!
With the data on where the user touched the screen gathered, the gesture-processing algorithms of the iPhone's operating system and its processor come into action. These algorithms look for predefined patterns of human gestures in the data:
- The background noise in the data is removed.
- Pressure points are measured and grouped together.
- The exact coordinates of each finger are calculated.
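The grouping and locating steps can be sketched roughly like this (a deliberately simplified model: the adjacency rule and charge-weighted averaging are assumptions for illustration, not the actual iOS implementation):

```python
def group_points(points):
    """Group touch points whose grid cells are within one cell of
    each other -- each group corresponds to one finger."""
    groups = []
    for x, y, delta in points:
        for group in groups:
            if any(abs(x - gx) <= 1 and abs(y - gy) <= 1 for gx, gy, _ in group):
                group.append((x, y, delta))
                break
        else:
            groups.append([(x, y, delta)])
    return groups

def centroid(group):
    """Charge-weighted average position of one group: cells with a
    bigger charge change pull the estimated finger position toward them."""
    total = sum(d for _, _, d in group)
    x = sum(gx * d for gx, _, d in group) / total
    y = sum(gy * d for _, gy, d in group) / total
    return (x, y)

# Three above-threshold readings: two adjacent cells (one finger)
# plus one distant cell (a second finger).
points = [(1, 1, 40), (2, 1, 20), (6, 5, 30)]
fingers = [centroid(g) for g in group_points(points)]
print(fingers)  # first finger near (1.33, 1.0), second at (6.0, 5.0)
```

Weighting by the amount of charge change is what lets the estimated position land between sensor cells, giving finer resolution than the physical grid.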
By comparing the coordinates of the fingers over time, the iPhone's algorithms trace their movement and determine whether it corresponds to any gesture known to the device. If the user's input is recognized as a valid multi-touch gesture, the corresponding command is passed from the system to the applications currently running, the device's screen, or other hardware. Of course, the user's input can be meaningless to the system; in that case, such gestures are simply discarded.
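As a toy example of "comparing coordinates over time", here is a minimal pinch detector (the thresholds and function names are illustrative assumptions, not Apple's actual recognizer): it classifies a gesture by whether the distance between two fingers shrinks or grows across frames.

```python
import math

def distance(a, b):
    """Straight-line distance between two finger positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def classify(frames):
    """frames: list of (finger1, finger2) positions sampled over time.
    Returns 'pinch-in', 'pinch-out', or None for unrecognized input."""
    start = distance(*frames[0])
    end = distance(*frames[-1])
    if end < start * 0.8:
        return "pinch-in"      # fingers moved together
    if end > start * 1.2:
        return "pinch-out"     # fingers moved apart
    return None                # meaningless input is discarded

# Two fingers moving apart frame by frame: the zoom-in gesture.
frames = [((100, 100), (200, 100)),
          ((90, 100), (210, 100)),
          ((70, 100), (230, 100))]
print(classify(frames))  # pinch-out
```

A real recognizer tracks many more properties (velocity, number of fingers, timing), but the principle is the same: gestures are patterns in finger coordinates over time.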
Since all of the algorithms described run in a tiny fraction of a second, iPhone users perceive the device's response as instantaneous. This creates the comfortable user experience that we love the iPhone for.