Wayne Westerman

The man who taught glass to feel

By VastBlue Editorial · 2026-03-26 · 22 min read

Series: The Inventors · Episode 1


A Problem That Hurt

The pain starts in the tendons. Not the dramatic, sharp pain of a fracture or a cut — the quiet, cumulative kind that whispers before it screams. You notice it first as a tightness in the forearms after a long session at the keyboard. Then the tightness becomes a dull ache that does not fully recede overnight. Then the ache migrates: into the wrists, up through the extensor muscles, into the fingers themselves. One morning you wake up and your hands feel like they belong to someone much older. You try to type and the act of depressing a key — an action requiring roughly 60 grams of force, repeated thousands of times per day — sends a wire of pain from your fingertip through the carpal tunnel and up into your forearm. This is repetitive strain injury. It is not dramatic. It is not urgent. It is simply relentless.

In 1996, Wayne Westerman was a twenty-something doctoral student in electrical engineering at the University of Delaware, and this was his daily reality. RSI had progressed from inconvenience to medical concern. The tendons in his hands and wrists were inflamed from years of mechanical keyboard use — the cumulative toll of writing code, composing research papers, and doing the thousand small acts of computer-mediated work that constitute a graduate student's existence. The standard medical counsel was rest, ergonomic keyboards, wrist splints, perhaps cortisone injections if the inflammation became severe enough. Some doctors recommended voice recognition software, which in 1996 meant Dragon Dictate running on a 166 MHz Pentium, misrecognising every third word. For a PhD student in electrical engineering — someone whose work product was equations, circuit diagrams, and C code — voice recognition was not a solution. It was a bad joke.

The ergonomic keyboard industry offered its own responses: split keyboards that reduced ulnar deviation, tented keyboards that rotated the wrists to a more neutral position, keyboards with lighter key switches. All of them still required the fundamental mechanical action that was causing the problem: pressing a physical key against a spring, bottoming out against a hard surface, releasing, and repeating. The force was reduced but not eliminated. The repetitive impact was gentled but not removed. For someone whose tendons were already damaged, a slightly softer hammer was still a hammer.

There is a category of invention that emerges not from ambition but from desperation. The inventor does not sit down to change the world. The inventor sits down because the world, as currently configured, is causing them specific, measurable, daily pain. Westerman needed an input surface that did not require force. Not less force — zero force. A surface that could read intent from the geometry of touch rather than the mechanics of pressure. A surface where the act of communicating with a computer did not require doing controlled violence to your own body.

So he built one.

The 364-Page Document

Westerman's doctoral thesis, submitted in 1999 under advisor John Elias, bore the title "Hand Tracking, Finger Identification, and Chordic Manipulation on a Multi-Touch Surface." It ran to 364 pages. For context, the average electrical engineering dissertation at the time ran between 150 and 200 pages. Westerman's was nearly double because the problem he had chosen did not fit neatly into any single discipline. It required signal processing, materials science, human factors research, algorithm design, and systems integration. Each subproblem demanded its own literature review, its own mathematical framework, its own experimental validation. The result reads less like a dissertation and more like a technical founding document for an industry that did not yet exist.

The opening chapters survey existing touch technology with the meticulous patience of someone who has read every paper in the field and found them all insufficient. Westerman catalogues the prior art with a thoroughness that borders on the judicial. Resistive touchscreens — the kind you encountered at ATMs and on early Palm Pilots — worked by pressing two conductive layers together. They could detect a single point of contact with reasonable accuracy, but they required physical pressure and wore out over time as the conductive coatings degraded. Surface acoustic wave (SAW) touchscreens used ultrasonic waves propagating across a glass surface, detecting touch by measuring the attenuation of the wave at the contact point. Clean and durable, but limited to single-touch and expensive to manufacture. Infrared touch frames projected a grid of IR beams across the screen surface and detected interruptions. Accurate but bulky, and the beam spacing set a hard limit on resolution. Capacitive single-touch screens, pioneered for industrial and military applications, sensed the electrical disturbance created by a fingertip's natural capacitance. Elegant but fundamentally limited: they could tell you that a finger was touching the screen, and roughly where, but not how many fingers, and not which ones.

Westerman's thesis dismissed all of these not because they were bad technology, but because they were answering the wrong question. They asked: "Where is the user touching?" Westerman asked: "What is the user's hand doing?" The distinction is the difference between a light switch and a musical instrument. A light switch captures a single binary input. A musical instrument captures the continuous, simultaneous, expressive movement of multiple fingers, and translates that movement into meaning.

The sensing surface Westerman designed used a dense, orthogonal grid of capacitive electrodes — rows of transmitting electrodes on one layer, columns of receiving electrodes on another, separated by a thin dielectric. When a finger approached the surface, it coupled capacitively with the electrodes beneath it, altering the mutual capacitance at the intersection points. By scanning every row-column intersection in rapid succession — a technique now called mutual capacitance scanning — the system produced a two-dimensional capacitive image of the entire surface at each sampling interval. Think of it as a low-resolution thermal camera, except instead of measuring heat, it measured the electrical proximity of conductive objects. Each frame was a grayscale map where bright spots indicated fingers and dimmer regions indicated palms or hovering digits.
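The scanning scheme described above can be sketched in a few lines. This is a toy illustration, not Westerman's implementation: the `toy_sensor` callback, grid dimensions, and baseline values are all invented stand-ins for a real charge-transfer measurement.

```python
import numpy as np

def scan_frame(rows, cols, read_mutual_capacitance):
    """Drive each row electrode in turn and read every column receiver,
    producing one 2-D 'capacitive image' of the surface per frame."""
    frame = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            # In hardware this is a mutual-capacitance measurement at the
            # row/column intersection; here it is an injected callback.
            frame[r, c] = read_mutual_capacitance(r, c)
    return frame

def toy_sensor(r, c, finger=(6, 4), strength=1.0, sigma=1.5):
    """Hypothetical sensor model: a small baseline plus a Gaussian
    'finger' bump centred at the given grid coordinate."""
    dr, dc = r - finger[0], c - finger[1]
    return 0.05 + strength * np.exp(-(dr * dr + dc * dc) / (2 * sigma**2))

frame = scan_frame(16, 10, toy_sensor)
peak = tuple(int(i) for i in np.unravel_index(np.argmax(frame), frame.shape))
print(peak)  # → (6, 4): the brightest pixel sits under the simulated finger
```

The point of the sketch is the data shape: each scan yields a full grayscale image of the surface, and everything downstream — filtering, blob detection, finger identification — operates on that image rather than on a single (x, y) coordinate.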

The raw capacitive image was noisy. Ambient electromagnetic interference, thermal drift in the electrode impedances, variations in skin moisture and contact area — all of these introduced signal artifacts that had to be separated from genuine touch data. Westerman applied spatial filtering techniques drawn from image processing: Gaussian smoothing to suppress high-frequency noise, adaptive thresholding to compensate for baseline drift, and blob detection algorithms to identify contiguous regions of elevated capacitance. Each detected blob represented a candidate contact point, but candidate contact points were not yet identified fingers.
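The filtering and blob-detection stages can be approximated as follows — a minimal sketch, assuming a fixed 3×3 Gaussian kernel, a hand-picked threshold, and 4-connected flood fill in place of the adaptive thresholding Westerman actually describes.

```python
import numpy as np
from collections import deque

def smooth(frame):
    """3x3 Gaussian smoothing to suppress high-frequency sensor noise."""
    kernel = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0
    padded = np.pad(frame, 1, mode="edge")
    out = np.zeros_like(frame)
    for r in range(frame.shape[0]):
        for c in range(frame.shape[1]):
            out[r, c] = np.sum(padded[r:r + 3, c:c + 3] * kernel)
    return out

def detect_blobs(frame, threshold):
    """Threshold the image, then group touching above-threshold pixels
    into blobs via flood fill; return each blob's centroid."""
    mask = frame > threshold
    seen = np.zeros_like(mask)
    blobs = []
    for r, c in zip(*np.nonzero(mask)):
        if seen[r, c]:
            continue
        queue, pixels = deque([(r, c)]), []
        seen[r, c] = True
        while queue:
            y, x = queue.popleft()
            pixels.append((y, x))
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    queue.append((ny, nx))
        ys, xs = zip(*pixels)
        blobs.append((float(sum(ys) / len(ys)), float(sum(xs) / len(xs))))
    return blobs

# Two synthetic square contacts; smoothing then blob detection recovers both.
frame = np.zeros((12, 12))
frame[2:5, 2:5] = 1.0
frame[8:11, 7:10] = 1.0
blobs = detect_blobs(smooth(frame), 0.5)
print(blobs)  # → [(3.0, 3.0), (9.0, 8.0)]
```

Each returned centroid is a candidate contact point — which, as the article notes next, is still a long way from an identified finger.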

The finger identification problem was where Westerman's work crossed from competent engineering into genuine invention. Given a set of blobs on a surface, the system needed to determine not just that five things were touching, but which finger each blob belonged to. The left index finger behaves differently from the right thumb. A pinch gesture between the thumb and forefinger has different semantic meaning than a two-finger scroll using the index and middle fingers. Westerman's algorithm used a biomechanical hand model — a statistical representation of typical hand geometry, finger spacing, and joint constraints — to assign identity to each contact point. The model incorporated the physical reality that human fingers cannot move independently of the hand's skeletal structure: if the index finger is here, the middle finger can only be within a constrained arc of positions relative to it. By fitting the detected blobs to this biomechanical model at each frame, the system could maintain persistent finger identities even through rapid motion, partial occlusion, and overlapping contact regions.
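The persistence idea — identities surviving from frame to frame because each blob is matched against where the model expects each finger to be — can be shown with a deliberately simplified sketch. Westerman's actual model fit blobs against statistical hand geometry with joint constraints; this greedy nearest-prediction assignment, with a hypothetical three-finger model, captures only the matching step.

```python
import math

def assign_fingers(blobs, predicted):
    """Match each detected blob to the finger whose predicted position
    (from the hand model plus the previous frame) is nearest, greedily,
    so identities persist across frames. `predicted` maps name -> (x, y)."""
    assignments = {}
    remaining = dict(predicted)
    for blob in blobs:
        name = min(remaining, key=lambda f: math.dist(blob, remaining[f]))
        assignments[name] = blob
        del remaining[name]  # each finger claims at most one blob
    return assignments

# Hypothetical predicted positions for three fingers of one hand.
model = {"index": (0.0, 0.0), "middle": (2.0, 0.5), "ring": (4.0, 0.3)}
# Two contacts, near the index and ring predictions; the middle finger
# is lifted, so it simply claims nothing this frame.
contacts = [(0.3, -0.1), (4.2, 0.4)]
print(assign_fingers(contacts, model))
# → {'index': (0.3, -0.1), 'ring': (4.2, 0.4)}
```

In the real system the "predicted" positions come from the biomechanical constraints themselves, which is what lets identities survive occlusion and overlap rather than merely proximity.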

364 Pages in Westerman's doctoral thesis — Nearly double the average electrical engineering dissertation. The document contained the complete technical blueprint for an industry that would not emerge for another eight years.

Consider the full algorithmic pipeline, running at every sampling interval of roughly ten milliseconds. Scan all electrode intersections. Construct the capacitive image. Filter noise. Detect blobs. Apply the biomechanical hand model to assign finger identities. Track identities across successive frames using predictive motion models. Interpret the collective trajectory of identified fingers as a gesture with semantic meaning — a pinch, a rotation, a swipe, a chord. Each of these sub-problems had its own research literature, its own unsolved edge cases, its own computational cost. Westerman solved all of them in a single, integrated system running on late-1990s hardware. The prototype worked. The gesture recognition was reliable. And almost nobody noticed.
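The final stage of that pipeline — reading semantic meaning out of identified-finger trajectories — can be illustrated with one gesture. This is a sketch under invented assumptions (two fingers, two frames, an arbitrary tolerance), not FingerWorks' recogniser, which tracked full trajectories with predictive motion models.

```python
import math

def classify_two_finger_gesture(prev, curr, tol=0.1):
    """Given two identified finger positions in successive frames,
    read the change in their separation as a pinch or a spread."""
    d_prev = math.dist(*prev)
    d_curr = math.dist(*curr)
    if d_curr < d_prev - tol:
        return "pinch"
    if d_curr > d_prev + tol:
        return "spread"
    return "none"

# Two fingers 4 units apart converge to 2 units apart: a pinch.
print(classify_two_finger_gesture([(0, 0), (4, 0)], [(1, 0), (3, 0)]))  # → pinch
```

What makes the full system hard is not any one stage but running all of them — scan, filter, detect, identify, track, classify — inside a ten-millisecond budget on late-1990s hardware.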

FingerWorks

In 1998, before his thesis defence, Westerman and Elias incorporated FingerWorks in Newark, Delaware. The company's ambitions were modest in the way that only genuinely revolutionary things can be: they wanted to build a keyboard that did not hurt to use. Everything else — the gesture recognition, the multitouch paradigm, the reimagining of human-computer interaction — was, from FingerWorks' perspective, a means to that end.

The company produced two primary products. The TouchStream LP was a full keyboard replacement — a flat, featureless multitouch surface approximately the size and shape of a conventional keyboard, but with no moving parts whatsoever. Where keys would be on a normal keyboard, the TouchStream had printed labels on a smooth surface. You typed by touching the surface in the key regions, and the system registered your keystrokes through capacitive detection with zero actuation force. But the TouchStream was not just a flat keyboard. Because the entire surface was a continuous multitouch sensor, the same area that accepted typing could also accept gestures. The transition was seamless and contextual: type with your fingers in the home row position, and the surface acted as a keyboard. Lift your hands and return them in a different posture — fingers together, palm flat — and the surface became a gesture controller.

The gesture vocabulary was extensive and internally consistent. One-finger movements controlled the mouse cursor. Two-finger vertical motion scrolled. Three-finger horizontal swipes cut, copied, and pasted. A pinch gesture selected text. An expand gesture deselected. Thumb-and-finger combinations mapped to modifier keys: thumb plus finger tap equalled a click, thumb slide equalled drag. The iGesturePad offered the same gesture surface in a smaller form factor, designed to sit alongside a conventional keyboard for users who wanted multitouch gestures without abandoning their mechanical keys entirely.
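The internal consistency of that vocabulary amounts to a lookup from (finger count, motion type) to action. A toy dispatcher makes the structure visible; the table below paraphrases the mappings described in this article and is illustrative, not FingerWorks' actual configuration.

```python
def dispatch(finger_count, motion):
    """Map (number of fingers, dominant motion) to an action, echoing
    the TouchStream vocabulary described above. Unrecognised
    combinations are ignored rather than guessed at."""
    table = {
        (1, "move"): "point",              # one finger moves the cursor
        (2, "vertical"): "scroll",         # two-finger vertical motion
        (3, "horizontal"): "cut/copy/paste",
        (2, "pinch"): "select",
        (2, "spread"): "deselect",
    }
    return table.get((finger_count, motion), "ignore")

print(dispatch(2, "vertical"))  # → scroll
print(dispatch(5, "wave"))      # → ignore
```

The design point is that finger count acts as a mode switch: the same physical motion means different things depending on how many identified fingers perform it, which is exactly why the finger identification problem had to be solved first.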

The learning curve was steep. FingerWorks documentation acknowledged this frankly — the company estimated two to four weeks before a new user would reach their previous typing speed, and several months before the gesture vocabulary became second nature. This honesty was commercially suicidal and technically admirable. FingerWorks was not selling convenience. It was selling a fundamentally different relationship with your computer, and it was honest about the cost of that transition.

~$340 Price of the FingerWorks TouchStream LP — A standard keyboard cost $15. The price-to-value ratio only made sense if your hands hurt enough, or if you could see far enough into the future.

At $340, the TouchStream cost more than twenty times the price of a standard keyboard. In a market where Microsoft and Logitech competed on margins measured in single dollars, FingerWorks was asking customers to make an investment that required either medical justification or technological faith. The customers who bought were a self-selecting tribe: RSI sufferers for whom the device was literally therapeutic, software developers who spent ten or more hours a day at the keyboard and calculated the efficiency gains of gesture shortcuts over mouse-keyboard switching, and a small contingent of technologists who simply recognized that they were using a product from a decade in the future.

The user community that formed around FingerWorks products had the intensity and insularity of a religious order. Online forums — hosted on the company's own website and mirrored on enthusiast sites — accumulated thousands of posts from users sharing gesture configurations, typing techniques, and testimonials that read more like conversion narratives than product reviews. "Once you learn the gestures, you cannot go back." "I can type for eight hours without pain for the first time in three years." "My colleagues think I'm using a cutting board. They have no idea." When FingerWorks eventually disappeared, these forum archives became objects of genuine grief. Users who had bought TouchStreams guarded their devices like relics, knowing that no replacement existed and none was forthcoming.

FingerWorks never sold more than a few thousand units per year. By any conventional business metric, it was a marginal enterprise — a niche hardware company in Delaware, serving a customer base you could fit in a lecture hall. But the product worked. The technology was real. And the patent portfolio that Westerman and Elias had built, claim by methodical claim, described a complete multitouch system with remarkable breadth. US Patent 6,323,846 covered the method and apparatus for integrating manual input — the foundational multitouch architecture. Continuation patents extended the claims to specific gesture types, finger identification methods, and surface configurations. Each patent was a brick in a wall that, viewed from the outside, described the entire future of touch interaction. Every piece of the puzzle, documented, filed, and defended.

The Quiet Acquisition

In early 2005, Apple acquired FingerWorks. The terms were not disclosed. There was no press release. There was no TechCrunch article. There was no Macworld announcement. The FingerWorks website simply went dark. One day it was there — product pages, gesture tutorials, user forums buzzing with discussion. The next day: nothing. A blank page. Existing customers who tried to order replacement parts or accessories discovered that the company no longer existed as a commercial entity. The TouchStream, a product that its users considered irreplaceable, was no longer manufactured. The forums were gone. The documentation was gone. It was as if FingerWorks had been airbrushed out of the technology landscape.

Apple had been developing touch interface technology internally since at least 2003, when a small team within the company's hardware engineering group began exploring alternatives to the click wheel for a rumoured tablet device. But Apple's internal multitouch efforts had a problem that FingerWorks' patent portfolio solved: freedom to operate. You can develop multitouch technology in a lab, but if someone else owns the foundational patents on finger identification, gesture recognition, and simultaneous multi-contact tracking, you cannot ship a product without either licensing those patents or acquiring them outright. Westerman and Elias's patents were not just technically impressive — they were strategically essential. They covered the entire signal processing pipeline from raw capacitance measurement to gesture interpretation. Any company building a multitouch product would eventually collide with those claims.

Westerman and Elias joined Apple as part of the acquisition. For two years, nothing public happened. Two years of silence during which the most consequential interface technology since the mouse was being integrated into a product that would redefine personal computing. Two years during which Westerman — the man who had built multitouch to save his own hands — was presumably adapting his technology from a flat keyboard surface to a 3.5-inch glass screen. The engineering challenges of that adaptation were non-trivial: different substrate materials, different electrode geometries, different power constraints, different computational budgets, and a completely different interaction model. A keyboard replacement assumes ten fingers and a desk. A phone assumes one or two fingers and a moving hand. The signal processing had to be rethought. The gesture vocabulary had to be reinvented. The biomechanical hand model had to be replaced with a thumb model.

Then, on January 9, 2007, Steve Jobs walked on stage at the Moscone Center in San Francisco. He was wearing his uniform: black turtleneck, jeans, New Balance sneakers. The auditorium was packed. The rumour mill had been churning for months — an Apple phone, a widescreen iPod, a new internet communicator. Jobs opened with a piece of showmanship so effective it has become a case study in product launches: he announced three products, then revealed they were all the same device.

"We have invented a new technology called multitouch," Jobs told the audience. He slid his finger across the screen. The page scrolled with a physics-based momentum that made it look like the content had mass. He placed two fingers on a photograph and spread them apart. The image zoomed in, smoothly, as if the glass were a window and he was pulling the world closer. He pinched and the image retreated. He rotated with two fingers and the image rotated. The audience, many of whom had spent their careers in technology, reacted with audible astonishment. They were watching the future arrive, and it looked effortless.

The gap between a technology existing and a technology mattering is almost always a gap of context, not capability. Multitouch existed in 1999. It mattered in 2007. The technology did not change. The product around it did.

Editorial observation

"We have invented a new technology called multitouch." The sentence is worth parsing. Apple had not invented multitouch. Wayne Westerman had, eight years earlier, in a 364-page doctoral thesis that almost nobody outside the University of Delaware had read, to solve a problem that most people would have handled with a wrist brace and a bottle of ibuprofen. What Apple did — and this is not a small thing — was recognise that a technology built for one man's damaged hands could become the interface paradigm for an entire species of device. Apple provided the screen, the industrial design, the operating system, the app ecosystem, and the marketing machinery that made multitouch feel inevitable rather than novel. But the sensing, the signal processing, the finger identification, the gesture recognition — the invisible infrastructure that made the glass feel alive — that was Westerman's work, refined and adapted but fundamentally the same intellectual architecture described in his thesis and protected by his patents.

The iPhone sold over two billion units across its successive generations. The pinch-to-zoom gesture became as instinctive as pointing. Every smartphone that followed — every Android device, every tablet, every touchscreen laptop — implemented some version of what Westerman had described in 1999. And the engineer who made it possible remained silent.

The Silence

Wayne Westerman has never given an interview about his role in creating multitouch. Not one. Not to the New York Times, not to Wired, not to the Wall Street Journal, not to an obscure engineering podcast, not to a documentary filmmaker, not to a biographer. In an era where the creators of far less consequential technologies maintain active blogs, post regularly on social media, speak at conferences, and cultivate personal brands, Westerman's public silence is so complete that it functions as its own statement.

His name, however, is not silent. It appears on dozens of Apple patents filed between 2005 and the present day. The patent record tells its own story — not the story Westerman might tell in an interview, but the story of what he has been building in the years since FingerWorks disappeared. Continuation patents extending the original FingerWorks claims into new hardware configurations. New patents covering proximity sensing — detecting a finger's approach before it makes contact with the surface. Patents on force-sensitive touch (what Apple would eventually market as "3D Touch" and later "Haptic Touch"). Patents on touch discrimination — distinguishing a deliberate tap from an accidental brush, a fingertip from a knuckle, a stylus from a finger. As recently as 2023, new filings with Westerman's name have appeared in the USPTO database, each one a small window into the problems he is still solving at Apple, two decades after his thesis.

80+ Patents listing Wayne Westerman as inventor — Spanning from the original FingerWorks filings in the late 1990s through Apple continuation patents filed as recently as 2023. A two-decade paper trail of invisible authorship.

The patent continuation chain is itself a remarkable document of technological evolution. The original FingerWorks patents described multitouch on an opaque, flat surface — a keyboard replacement. Apple's continuation patents adapted the same core claims to transparent surfaces (screens), curved surfaces (watch faces), pressure-sensitive surfaces (Force Touch trackpads), and surfaces with haptic feedback (the Taptic Engine). Each continuation extended the intellectual territory that Westerman had staked out in his thesis, applying the same fundamental principles — capacitive sensing, finger identification, gesture recognition — to hardware that did not exist when the original claims were written. The patent portfolio is a map of one man's ideas propagating through an entire product ecosystem.

This silence is worth dwelling on because it represents a philosophy of invention that is genuinely rare. The default mode for technology creators in the 21st century is visibility — keynotes, Twitter threads, podcast appearances, TED talks, Substack newsletters, personal brands carefully tended like bonsai trees. The incentive structure rewards public presence: visibility attracts funding, attracts talent, attracts speaking fees, attracts acquisition interest. Westerman opted out of all of it. He built the interface layer that defines how four billion people interact with their most intimate device, and then he disappeared into the company that bought his life's work. He chose to remain an author whose name appears only in patent filings and academic citations — the two places where credit is given with precision but without fanfare.

You can read his thesis. It is publicly available through ProQuest for anyone willing to pay the access fee or visit a university library with a subscription. It is dry, technical, and 364 pages long. It contains no anecdotes, no personal narrative, no design philosophy, no vision for the future of computing, no acknowledgement that the author is describing something that will eventually be touched by billions of hands. It contains mathematics. It contains circuit diagrams. It contains performance benchmarks for a finger identification algorithm tested on prototype hardware in a university lab in Delaware. It is a document that says: here is a problem, here is a solution, here is proof that the solution works. Nothing more. Nothing less. It is, in its own austere way, a perfect document.

What the Silence Teaches

Every smartphone you have ever held, every tablet your children use to watch cartoons, every laptop trackpad you absent-mindedly scroll while half-watching television, every interactive museum display, every airport check-in kiosk, every point-of-sale terminal where you tap to pay, every in-car navigation screen you pinch to zoom on a map, every smart home panel, every ATM with a capacitive screen — all of them trace their interface lineage to a document most people will never read, written by a man most people have never heard of, to solve a problem most people would have accepted as simply part of life.

The surface area of Westerman's influence is difficult to overstate. Multitouch is not merely a feature of modern devices — it is the foundational interaction paradigm. It replaced the mouse-and-keyboard model that had dominated personal computing since 1984. It made computing accessible to toddlers who cannot read and elderly users who never learned to use a mouse. It enabled entirely new device categories: the modern tablet could not exist without multitouch, nor could the smartwatch in its current form. When Tesla put a seventeen-inch touchscreen in the Model S and eliminated most physical controls, it was building on Westerman's work. When surgeons manipulate medical imaging with gestures in a sterile operating theatre, they are using a descendant of his algorithms. When a four-year-old swipes through photos on a phone with the casual confidence of a native speaker, she is using an interface whose naturalness is the product of years of signal processing research conducted by a graduate student in pain.

The lesson from Westerman is not inspirational in the way Silicon Valley prefers its lessons. There is no pivot, no growth hack, no disruption narrative, no vision deck, no seed round mythologised in a magazine profile. The lesson is structural: the most consequential inventions often come from people trying to solve a specific, personal, painful problem. They are not trying to change the world. They are trying to get through the day. The world changes as a side effect.

There is a deeper lesson too, one that the technology industry is poorly equipped to hear. The most important person in the room is not always the one on the stage. Steve Jobs introduced multitouch to the world with the showmanship of a born performer, and he deserves immense credit for recognising what the technology could become and building the product that made it matter. But the technology itself — the sensing, the math, the algorithms, the years of patient engineering — came from a man who never stood on that stage, never sought to stand on that stage, and has shown no indication that the absence bothers him. In an industry obsessed with founders and visionaries, Westerman is a reminder that the deepest work is often done by people who have no interest in being seen doing it.

Westerman's hands hurt. So he invented a new way for humans to communicate with machines. Then he went back to work. He is, by all available evidence, still there — still filing patents, still refining the technology, still making glass feel things. The rest of us just use it.

Sources

  1. Westerman, W. "Hand Tracking, Finger Identification, and Chordic Manipulation on a Multi-Touch Surface." PhD Dissertation, University of Delaware, 1999. — https://www.proquest.com/docview/304517037
  2. US Patent 6,323,846 — "Method and apparatus for integrating manual input" (Westerman & Elias, filed 1998) — https://patents.google.com/patent/US6323846B1
  3. FingerWorks product archive and user community documentation — https://web.archive.org/web/2005*/fingerworks.com
  4. Apple Inc. Form 10-K filing, 2005 (FingerWorks acquisition referenced in IP portfolio)
  5. Jobs, S. Macworld 2007 keynote address, January 9, 2007 — https://www.youtube.com/watch?v=MnrJzXM7a6o
  6. US Patent 7,663,607 — "Multipoint touchscreen" (Apple continuation of FingerWorks patents) — https://patents.google.com/patent/US7663607B2