Sophie Wilson
The instruction set that powers six billion devices
By VastBlue Editorial · 2026-03-26 · 22 min read
Series: The Inventors · Episode 6
The Computer Literacy Project
In 1981, the British Broadcasting Corporation launched the most ambitious computer education initiative any government broadcaster had ever attempted. The BBC Computer Literacy Project was born from a documentary called "The Mighty Micro," presented by Christopher Evans, which had argued that microcomputers would transform society within a decade. The BBC's response was not merely to report on this transformation but to equip the British public to participate in it. The plan was sweeping: a television series, a network of learning centres, published course materials, and — most critically — a reference computer that schools and households could buy.
The specification for this reference computer was demanding. It needed to support a full programming language (BBC BASIC, a structured dialect that the BBC commissioned specifically for the project), handle colour graphics, produce sound, support networking (via the Econet local area network standard), and connect to external peripherals through standardised interfaces. It needed to be robust enough for daily classroom use by children who had never touched a computer. And it needed to cost less than four hundred pounds — a price point that placed it within reach of school budgets across the state education system.
Several British companies submitted proposals. Sinclair Research, then the most visible name in British microcomputing, offered the ZX Spectrum — affordable but too limited in its I/O capabilities and too fragile for classroom deployment. Acorn Computers, a small Cambridge company run by Austrian physicist Hermann Hauser and engineer Chris Curry, proposed a machine they had not yet finished building. They called it the Proton. The BBC evaluated the prototypes and chose Acorn. The Proton became the BBC Micro.
The BBC Micro became one of the most consequential computers in British history. Over 1.5 million units were sold. It was installed in over eighty per cent of British secondary schools. A generation of British engineers — many of whom would go on to found technology companies, lead research programmes at Cambridge and Imperial College, and define the architecture of the internet age — learned to program on its MOS Technology 6502 processor. The cultural impact was disproportionate to the machine's technical specifications. Britain produced more software engineers per capita in the late 1980s than any other European country, and the BBC Micro was a significant reason why.
Wilson at Acorn
Sophie Wilson, who was known as Roger Wilson before she transitioned, had joined Acorn as a Cambridge undergraduate and quickly became indispensable. Her first significant contribution was designing the controller for Acorn's Atom computer. By the time the BBC contract arrived, she was Acorn's most capable systems programmer — responsible for designing the BBC Micro's operating system, writing much of BBC BASIC itself, and solving the low-level hardware-software integration problems that determined whether the machine would actually work reliably.
Wilson's BBC BASIC was not a trivial achievement. Most BASIC dialects of the era were crude interpreters — slow, limited in structure, and hostile to good programming practice. Wilson's implementation was a sophisticated piece of systems software: it supported named procedures with local variables, offered inline assembler (allowing programmers to drop into 6502 machine code for performance-critical routines without leaving the BASIC environment), and ran fast enough to serve as a practical development tool. BBC BASIC became the standard teaching language for a generation of British computer science education, and its quality reflected Wilson's instinct for designing systems that were both powerful and accessible.
The BBC Micro made Acorn a commercial success. But Wilson was already thinking about what came after the 6502 — a processor that was reaching the limits of what it could do, hamstrung by an architecture designed in the mid-1970s for a world of 1 MHz clock speeds and 64 kilobytes of memory. Acorn needed a next-generation processor, and the existing options on the market were unsatisfactory.
Why Start From Nothing
Acorn's engineers evaluated every available 32-bit processor. Intel's x86 was powerful but complex — its instruction set had grown organically over multiple generations, accumulating legacy cruft that consumed transistors without adding proportional capability. The variable-length instruction encoding made decoding expensive in silicon, and Intel's pricing reflected a monopolist's confidence. The Motorola 68000 was architecturally more elegant — its flat 32-bit address space and orthogonal instruction set appealed to programmers — but its die size meant higher manufacturing costs, and Acorn needed volume economics. The National Semiconductor 32016 was promising on paper but catastrophically buggy in silicon. Acorn had actually started building the Acorn Business Computer around the 32016 before concluding that the chip's errata list was longer than its feature list and that the project could not ship.
Wilson proposed a radical alternative: design a processor from scratch. Not by assembling existing components differently, but by rethinking what a processor instruction set should look like, informed by recent academic research that the semiconductor industry had not yet commercialised.
The RISC Revolution That Industry Ignored
The research in question came primarily from two university programmes running simultaneously on opposite sides of San Francisco Bay. At Berkeley, David Patterson's group had built the RISC-I processor in 1982 and RISC-II in 1983 — experimental chips that demonstrated a principle Patterson called "Reduced Instruction Set Computing." At Stanford, John Hennessy's group was building MIPS (Microprocessor without Interlocked Pipeline Stages), pursuing the same core insight from a slightly different angle.
The central finding of both projects was counterintuitive: processors with fewer, simpler instructions could execute programs faster than processors with many complex instructions. The reason was pipelining. A simple instruction — load a value, add two numbers, store a result — could be broken into discrete stages (fetch, decode, execute, write-back) and executed in a pipeline, with each stage processing a different instruction simultaneously. If every instruction completed in a single clock cycle, the pipeline could sustain a throughput of one instruction per cycle once filled. Complex instructions disrupted this flow. A single instruction that required multiple cycles to execute — such as a string copy or a polynomial evaluation, common in CISC architectures — would stall the pipeline, forcing subsequent instructions to wait. The complex instruction might do more work per instruction, but it did less work per cycle. Simpler was faster.
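A toy model makes the arithmetic concrete. The Python sketch below is illustrative only: the four-stage pipeline is idealised and every cycle cost is hypothetical. A stream of single-cycle instructions approaches one instruction per cycle, while occasional multi-cycle instructions stall everything queued behind them, even though each one individually does more work.

```python
# Toy model of a four-stage pipeline (fetch, decode, execute, write-back).
# All cycle costs are hypothetical; this models no real chip.

def pipeline_cycles(costs, depth=4):
    """Total cycles to retire an instruction stream, where each entry
    in `costs` is the number of execute cycles that instruction needs."""
    # The first result appears once the pipeline fills (depth cycles).
    # Each later instruction retires one cycle after its predecessor,
    # plus any extra execute cycles, which stall everything behind it.
    n = len(costs)
    return depth + (n - 1) + sum(c - 1 for c in costs)

simple = [1] * 100                                          # 100 single-cycle instructions
complex_ = [12 if i % 10 == 0 else 1 for i in range(100)]   # every tenth takes 12 cycles

for name, program in (("simple", simple), ("complex", complex_)):
    cycles = pipeline_cycles(program)
    print(f"{name}: {len(program)} instructions in {cycles} cycles, "
          f"{len(program) / cycles:.2f} instructions per cycle")
```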
Patterson and Hennessy also demonstrated that compilers were becoming sophisticated enough to generate efficient code from simple instruction sets. The traditional argument for complex instructions — that they made assembly programming easier and generated denser code — was losing relevance as high-level languages displaced hand-written assembly. A compiler could schedule simple instructions to fill pipeline slots efficiently, something that was nearly impossible with variable-length, multi-cycle complex instructions.
Yet the commercial processor industry had not adopted RISC. Intel's x86 was entrenched in the IBM PC ecosystem, and software compatibility — the vast library of existing x86 programs — created a moat that no architectural elegance could breach. Motorola continued investing in the 68000 family. The only commercial RISC processors were workstation chips from Sun (SPARC, based on Berkeley's work) and MIPS Computer Systems (Hennessy's spinout), both targeting expensive professional markets, not the volume consumer market where Acorn operated. Wilson had read the Berkeley papers. She visited the National Semiconductor facility in Israel and the Western Design Center in Arizona. She became convinced that RISC was correct, that the industry was ignoring it for commercial rather than technical reasons, and that a small team could build a RISC processor that outperformed anything Acorn could buy.
Designed in BBC BASIC
Wilson wrote the initial ARM instruction set specification — the complete definition of every instruction the processor would support, every addressing mode, every register operation — as a simulation running in BBC BASIC on a BBC Micro. This was not a proof of concept or a sketch. This was the design methodology. The simulator modelled the processor cycle by cycle, executing real programs against the proposed instruction set and revealing architectural problems before they were frozen into silicon.
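To make the methodology concrete, here is a deliberately tiny sketch of an executable specification. Wilson's simulator was written in BBC BASIC and modelled vastly more; this Python stand-in, with an invented three-operation instruction set, shows only the shape of the idea: encode a program, step it, and treat the simulator's outputs as the golden values the silicon must later reproduce.

```python
# A miniature "executable specification": not Wilson's design, just the
# shape of the method, with an invented three-instruction set.

def step(state, program):
    """Advance the simulated machine by one instruction."""
    op, *args = program[state["pc"]]
    if op == "MOV":                       # MOV rd, #imm
        state["regs"][args[0]] = args[1]
    elif op == "ADD":                     # ADD rd, rn, rm
        state["regs"][args[0]] = state["regs"][args[1]] + state["regs"][args[2]]
    elif op == "HALT":                    # stop: pc no longer advances
        return state
    state["pc"] += 1
    return state

program = [("MOV", 0, 2), ("MOV", 1, 3), ("ADD", 2, 0, 1), ("HALT",)]
state = {"pc": 0, "regs": [0] * 16}       # sixteen registers, ARM-style
for _ in range(10):
    state = step(state, program)

# The simulator's outputs are the contract: first silicon is correct
# exactly when it reproduces values like these.
assert state["regs"][2] == 5
print(state["regs"][:3])                  # [2, 3, 5]
```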
The instruction set Wilson designed reflected several deliberate choices that distinguished ARM from both the Berkeley and Stanford designs. First, conditional execution: every ARM instruction included a four-bit condition code field, meaning any instruction could be made conditional without requiring a separate branch instruction. This eliminated many short branches in typical code — the kind of if-then sequences that litter compiled output — reducing pipeline flushes and improving code density. It was an idea that neither RISC-I nor MIPS had implemented, and it was Wilson's own innovation.
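A sketch of the mechanism, in Python for readability: the condition encodings below are ARM's real ones (EQ is 0000, NE is 0001, AL is 1110), but the surrounding machinery is simplified for illustration. The point is the last three lines, where an if-then compiles to straight-line code with no branch at all.

```python
# ARM-style conditional execution: every instruction carries a 4-bit
# condition field, checked against the N/Z/C/V flags before it takes effect.

CONDITIONS = {
    0b0000: lambda f: f["Z"],               # EQ: equal
    0b0001: lambda f: not f["Z"],           # NE: not equal
    0b1010: lambda f: f["N"] == f["V"],     # GE: signed greater-or-equal
    0b1011: lambda f: f["N"] != f["V"],     # LT: signed less-than
    0b1110: lambda f: True,                 # AL: always (the default)
}

def execute(cond, op, flags, regs):
    """Run op only if the condition passes; a failed condition costs one
    cycle and does nothing: no branch taken, no pipeline flush."""
    if CONDITIONS[cond](flags):
        op(regs)

# if (r0 == r1) r2 += 1  becomes CMP r0, r1 followed by ADDEQ r2, r2, #1:
regs = {"r0": 5, "r1": 5, "r2": 0}
flags = {"N": False, "Z": regs["r0"] == regs["r1"], "C": True, "V": False}  # what CMP sets (simplified)
execute(0b0000, lambda r: r.update(r2=r["r2"] + 1), flags, regs)            # ADDEQ
print(regs["r2"])   # 1, because r0 == r1
```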
Second, the barrel shifter: ARM included a hardware barrel shifter that could shift or rotate one operand as part of any data-processing instruction, at no additional cost in clock cycles. This meant that common operations like array indexing (which requires multiplying an index by the element size, a power-of-two shift) or bit manipulation could be folded into a single instruction. It gave ARM an effective code density that was closer to CISC than most RISC designs achieved, without sacrificing the single-cycle execution that made RISC fast.
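In a Python sketch (the helper name and register model are inventions for illustration; the shift semantics are ARM's), the array-indexing case looks like this:

```python
# The barrel shifter folded into a data-processing instruction: the second
# operand can be left-shifted by any amount before the ALU sees it, at no
# extra cost in cycles. (Real ARM also offers LSR, ASR and ROR shifts.)

def add_lsl(rd, rn, rm, shift, regs):
    """ADD rd, rn, rm, LSL #shift  --  one instruction, one cycle."""
    regs[rd] = (regs[rn] + (regs[rm] << shift)) & 0xFFFFFFFF

# The address of word_array[i] is base + i*4; multiplying by 4 is a shift
# by 2, so the whole address computation fits in a single instruction.
regs = {"r0": 0x8000, "r1": 7, "r2": 0}   # r0 = base, r1 = index
add_lsl("r2", "r0", "r1", 2, regs)
print(hex(regs["r2"]))                     # 0x801c = 0x8000 + 7*4
```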
Third, the load-store architecture: like all RISC designs, ARM separated memory access from computation. Only load and store instructions could touch memory; all arithmetic and logic operations worked on registers. ARM provided sixteen 32-bit general-purpose registers — generous by the standards of 1985, when the 6502 had three 8-bit registers and the x86 had eight registers of varying widths. The large register file reduced the frequency of memory accesses, which was critical for power efficiency because memory access consumed far more energy than register operations.
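A final sketch shows the load-store discipline itself. The helpers below are invented stand-ins for three real ARM instructions; the point is that incrementing a word in memory decomposes into load, register arithmetic, store, where a CISC machine might instead offer one long memory-to-memory instruction.

```python
# Load-store discipline: only loads and stores touch memory; all
# arithmetic happens in the sixteen general-purpose registers.

regs = [0] * 16                   # sixteen 32-bit registers, ARM-style
memory = {0x1000: 41}             # a toy word-addressed memory

def ldr(rd, addr): regs[rd] = memory[addr]                    # LDR rd, [addr]
def add(rd, rn, imm): regs[rd] = (regs[rn] + imm) & 0xFFFFFFFF  # ADD rd, rn, #imm
def strw(rd, addr): memory[addr] = regs[rd]                   # STR rd, [addr]

# Increment the word at 0x1000: three single-cycle instructions that
# pipeline cleanly, instead of one multi-cycle memory-to-memory operation.
ldr(0, 0x1000)
add(0, 0, 1)
strw(0, 0x1000)
print(memory[0x1000])             # 42
```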
Wilson and Steve Furber, who handled the hardware implementation, used the BBC BASIC simulator as the authoritative specification that the silicon had to match. The collaboration was unusually tight: Wilson defined what the processor should do; Furber determined how to build it in gates and transistors; the simulator served as the contract between them. When the first ARM1 chip arrived from VLSI Technology in April 1985, it was tested against the simulator's outputs. It matched. The chip worked correctly on its first silicon run — an almost unheard-of achievement in processor design, where first silicon typically requires months of debugging and at least one respin.
The transistor count told the story. The ARM1 used approximately 25,000 transistors. Intel's 386, released the same year, used 275,000 — eleven times more. The 386 consumed over two watts at comparable clock speeds; the ARM1 drew so little power that the prototype ran without a heat sink, and during the initial bring-up the team discovered that the main power line to the test chip was not even connected: the processor was running on current leaking in through its I/O pins. The 386 was more powerful in absolute terms. But Wilson had not been optimising for absolute performance. She had been optimising for the ratio of performance to power and of performance to cost. In 1985, this seemed like optimising for the wrong thing. Thirty years later, it turned out to be the only thing that mattered.
The Newton and the Birth of ARM Ltd
By the late 1980s, Acorn was struggling commercially. The BBC Micro's sales had peaked, and Acorn's follow-up products — the Archimedes range, powered by ARM2 and ARM3 processors — were technically impressive but commercially marginal, unable to compete with the IBM PC's growing dominance. The ARM processor was better engineering than anything Acorn could sell around it.
Apple saw the potential. In the late 1980s, Apple was developing the Newton — a handheld personal digital assistant that was intended to be as revolutionary as the Macintosh. The Newton needed a processor that was powerful enough for handwriting recognition and graphical display but consumed little enough power to run on batteries for hours. Apple evaluated every available chip. The ARM architecture, with its extraordinary performance-per-watt ratio, was the only viable option.
In November 1990, Acorn, Apple, and VLSI Technology formed a joint venture: Advanced RISC Machines Ltd. Apple invested $3 million and provided engineering resources. Acorn contributed the ARM intellectual property and its processor design team, including Wilson and Furber. VLSI Technology, which had fabricated the original ARM1, provided manufacturing expertise. The new company had twelve employees and occupied a barn above a turkey farm near Cambridge. It was, by any conventional measure, a marginal enterprise.
The Newton launched in 1993 and was a commercial disaster. Its handwriting recognition was unreliable, its price point was too high, and the market for handheld computers did not yet exist in the way Apple had imagined. Apple discontinued the Newton in 1998 after Steve Jobs returned to the company and eliminated products he considered distractions. The product that had justified ARM's existence as an independent company was dead.
But the architecture survived the product. While the Newton failed, ARM had quietly signed licensing deals with other companies — Texas Instruments, which used ARM cores in its digital signal processors; Nokia, which was beginning to design the next generation of mobile phones; and a growing list of embedded systems manufacturers who needed processing power without the power consumption of x86. The Newton was the spark, but the fire it started burned in a direction nobody had predicted.
The Licensing Bet
ARM made a decision that seemed commercially naive and turned out to be structurally brilliant. It would not manufacture chips. Instead, it would license its designs (ready-made processor cores for most licensees, the underlying instruction set architecture for those who wanted to design their own) to any company that wanted to build an ARM-compatible processor.
The economics were stark. Intel made money by designing processors, manufacturing them in its own fabrication plants, and selling the finished chips at margins that sometimes exceeded sixty per cent. A single Intel processor might sell for two hundred dollars. ARM's model was the inverse: ARM would collect a one-time licence fee (typically between one and ten million dollars, depending on the level of customisation the licensee wanted) and then a per-unit royalty on every chip manufactured using the architecture. The royalty was typically between one and two per cent of the chip's selling price — often just a few pennies per unit. If a chip sold for five dollars, ARM might receive five to ten cents.
Pennies per chip looked like a terrible business compared to dollars per chip. But pennies per chip multiplied by billions of chips produced a different equation entirely. By the time ARM went public on the London Stock Exchange in 1998, its partners were already shipping tens of millions of ARM-based chips a year. By 2025, the cumulative total had reached 280 billion. At an average royalty of a few cents per chip, the mathematics of scale had made ARM one of the most profitable business models in the history of the semiconductor industry — generating billions in revenue with no factories, no inventory, and no manufacturing risk.
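The arithmetic is easy to check. The sketch below uses the article's illustrative figures (a five-dollar chip, a royalty in the one-to-two per cent range, a licence fee in the low millions); every number is hypothetical, and folding all volume under a single licence is a deliberate simplification.

```python
# Back-of-envelope royalty model using the article's illustrative figures.
# All numbers are hypothetical.

LICENCE_FEE = 5_000_000      # one-time fee, mid-range of the $1M-$10M band
ROYALTY_RATE = 0.015         # 1.5% of the chip's selling price
CHIP_PRICE = 5.00            # dollars, a typical embedded/mobile chip

def arm_revenue(units):
    """Licence fee plus per-unit royalty across a production run."""
    return LICENCE_FEE + units * CHIP_PRICE * ROYALTY_RATE

for units in (1_000_000, 1_000_000_000, 280_000_000_000):
    print(f"{units:>16,} chips -> ${arm_revenue(units):>18,.0f}")

# A million chips barely covers the licence fee; at 280 billion chips,
# pennies per unit compound to roughly $21 billion.
```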
The licensing model also had a structural advantage that was not obvious in 1990 and became decisive by 2000. Because ARM did not compete with its licensees — ARM never manufactured a chip that competed with Qualcomm's Snapdragon, Samsung's Exynos, or MediaTek's Dimensity — every chipmaker in the mobile ecosystem could adopt ARM without fear of helping a competitor. ARM was Switzerland. It was the only major processor architecture whose owner had no interest in competing downstream.
Contrast this with Intel. When Intel sold processors to Dell, it was also competing with Dell's other suppliers — and in some cases competing with Dell itself, since Intel occasionally sold reference platforms. Intel's integrated model — designing, manufacturing, and selling finished products — created conflicts of interest throughout its supply chain. AMD, the only x86 licensee of any consequence, existed in a state of permanent legal warfare with Intel over licensing terms. The x86 ecosystem was a duopoly defined by litigation and distrust.
ARM's ecosystem was a commons. Hundreds of companies licensed the architecture. Each competed on implementation — on circuit design, on manufacturing process, on integration with custom peripherals and accelerators — but all shared the same instruction set. Software written for one ARM chip ran on another. The architecture was the lingua franca; the silicon was where differentiation happened. This structure was uniquely suited to the mobile market, where dozens of manufacturers needed a common platform but none could afford to develop a proprietary instruction set from scratch.
ARM does not make chips. It makes the blueprint that other companies follow. The most deployed computing architecture on earth is not a product — it is a licence. A Swiss passport in the semiconductor trade wars.
Editorial observation
Apple Silicon and the Desktop Inflection
For decades, ARM's dominance was confined to mobile and embedded devices — markets where power efficiency was the overriding constraint. The desktop and laptop markets remained Intel's territory, where raw single-threaded performance justified higher power consumption. ARM was what you used when you could not afford the watts. x86 was what you used when you could.
Apple's M1 chip, announced in November 2020, demolished this distinction. The M1 was an ARM-based system-on-chip designed by Apple's in-house silicon team, led by Johny Srouji, and built on the foundation of the A-series chips Apple had been developing for iPhones and iPads since 2010. But the M1 was not a phone chip repurposed for a laptop. It was a ground-up design for desktop-class workloads: professional video editing, software compilation, scientific computation, machine learning inference.
The M1's performance stunned the industry. In single-threaded benchmarks, it matched or exceeded Intel's best laptop processors. In multi-threaded workloads, it was competitive with chips that consumed three to four times more power. Its integrated GPU outperformed many discrete graphics cards in its class. And it did all of this while consuming so little power that the MacBook Air — the laptop Apple chose for the M1's debut — required no fan. The machine was silent. It ran cool enough to hold on your lap. And it lasted up to eighteen hours on a single battery charge.
The M1 proved that ARM's performance-per-watt advantage was not just a mobile story. Given enough engineering investment, an ARM-based processor could match x86 on absolute performance while maintaining its structural advantage in power efficiency. Apple subsequently transitioned its entire Mac lineup — including the Mac Pro workstation — to ARM-based Apple Silicon, abandoning a fifteen-year partnership with Intel. It was the most significant architectural transition in the history of personal computing, and it validated the design philosophy Sophie Wilson had established forty years earlier: that optimising for efficiency rather than raw power would eventually win, even in markets where raw power had always been the measure of success.
280 Billion and Counting
By 2025, over 280 billion ARM-based chips had been manufactured. ARM processors powered virtually every smartphone on earth (Apple's A-series and M-series, Qualcomm's Snapdragon, Samsung's Exynos, MediaTek's Dimensity), the vast majority of tablets, a growing number of laptops, most IoT devices, most automotive infotainment systems, and an increasing share of data centre servers (Amazon's Graviton, Ampere's Altra, Microsoft's Cobalt). The architecture that began as a side project in a Cambridge office had become the most widely deployed computing platform in human history.
Wilson was elected a Fellow of the Royal Society in 2013 and is also a Fellow of the Royal Academy of Engineering. In 2019 she was appointed Commander of the Order of the British Empire. She remains one of the most important — and least publicly known — figures in the history of computing. She designed the language that six billion devices speak, and she did it by choosing to optimise for constraints — power consumption, simplicity, cost — that nobody else thought were important in 1985.
The Constraint Lesson
Wilson's career illustrates a principle that recurs in every consequential technology decision: the most important architectural choices are constraints, not capabilities. She did not build a more powerful processor. She built a simpler one, and bet that simplicity — which translated directly into lower power, lower cost, and easier licensing — would matter more than raw performance when the market shifted from desktop to mobile. The bet took fifteen years to pay off. When it did, it paid off for the entire industry.
The phone in your pocket speaks ARM. The tablet on your desk speaks ARM. The processor in your car speaks ARM. The server that answered your last web search may speak ARM. The language was designed by a twenty-six-year-old in Cambridge, simulated in BBC BASIC, and fabricated on a chip smaller than a fingernail. It is one of the few truly universal things that human engineering has produced.
Sources
- Furber, S. "ARM System-on-Chip Architecture." Addison-Wesley, 2000.
- ARM Holdings. Initial public offering prospectus. London Stock Exchange, 1998.
- Computer History Museum. "Oral History of Sophie Wilson." 2012. https://www.computerhistory.org/collections/catalog/102746653
- Patterson, D. and Hennessy, J. "Computer Organization and Design: The RISC-V Edition." Morgan Kaufmann, 2017.
- SoftBank/ARM Holdings. Annual reports and total chip shipment figures, 2015-2025. https://www.arm.com
- Royal Society. Fellowship citation for Sophie Wilson, 2013. https://royalsociety.org/people/sophie-wilson-12747/
- Bray, J. "The Communications Miracle: The Telecommunication Pioneers from Morse to the Information Superhighway." Springer, 1995.
- Patterson, D. and Ditzel, D. "The Case for the Reduced Instruction Set Computer." ACM SIGARCH Computer Architecture News, 1980.
- Apple Inc. "Apple Unleashes M1." Press release, 10 November 2020. https://www.apple.com/newsroom/2020/11/apple-unleashes-m1/