Subscribe to the webAfriqa Podcast

Please subscribe to the Podcast today!

Help webAfriqa continue publishing. Subscribe to its Patreon podcast today!

Since 1997, webAfriqa has offered free access to accurate information and invaluable knowledge about Guinea and Africa on its dozen websites. However, the production is not free. It requires time, money and know-how on a daily basis. To keep providing its service, webAfriqa urgently needs your contribution. Support webAfriqa today by becoming a patron of its Patreon channel.

Ranging from $5 to $50 per month, the subscription tiers accommodate all income levels.

Please act today. Funding through subscriptions will sustain and improve the websites; otherwise, they remain at risk. In other words, the continued publication of webAfriqa depends on your prompt commitment and generous support.

When you subscribe, you get access to my webAfriqa Podcast, titled Why Is Africa Lagging?
Building on the rich content of the websites and on four decades of research, teaching, writing and pondering, it explores and seeks answers to why Africa is a perennial economic and technological laggard compared to the other continents. Based on accurate facts and authoritative sources, it strives to demonstrate that Africa is, and has been for six centuries, caught between the Hammer of foreign hegemonies and the Anvil of indigenous elites and rulers.

The first three sessions of the webAfriqa Podcast are online for patrons to access. Dozens more will be recorded and posted.

Thank you!

Tierno S. Bah


Subscribe to the webAfriqa Podcast!

Help webAfriqa by subscribing to the webAfriqa Podcast on Patreon.

Since 1997, through its eleven websites, webAfriqa has offered free access to quality information and invaluable knowledge about Guinea and Africa.

But producing these sites is not free. It is costly, requiring time, expenses and know-how on a daily basis. To continue and improve its public service, webAfriqa urgently needs your contribution.

Support webAfriqa today by becoming a patron of its Patreon channel. Ranging from $5 to $50 per month, the subscription tiers accommodate all income levels. Please subscribe without delay.

The proceeds will serve to maintain and improve the sites. Without them, the continuation of these services would be uncertain and unsustainable. In short, the continued publication of webAfriqa depends on your prompt commitment and generous support.

Subscribing gives you access to the webAfriqa Podcast, my new program entitled Why Is Africa Lagging? My treatment of this major question draws on the rich content of the websites, on the one hand, and on my four decades of research, teaching, publishing and reflection, on the other. I explore and seek answers to the question of why Africa perennially lags behind the other continents economically and technologically.
Drawing on incontestable facts, material evidence and authoritative sources, I strive to demonstrate that Africa has stood, for more than six centuries, between the Hammer of foreign hegemonies and the Anvil of indigenous elites and rulers.

The first three sessions are already available to patrons on Patreon. Dozens more will be recorded and published there.

Thank you in advance.

Tierno S. Bah

Why Is Africa Lagging?

The central question

There is widespread awareness of Africa’s ranking as the least developed continent. Behind that awareness persists a nagging, perplexing, often frustrating and vexing question. People ask, and would like to know: Why? How? When? Who? Where? It is highly relevant to seek earnest answers to Africa’s status as a perennial economic, technical and technological laggard.
These are not merely academic or rhetorical interrogations. They are real-life and, more often than not, life-threatening issues. Thus, every year thousands of young Africans undertake risky journeys in quest of better living conditions in Europe, Asia, and America.
The recent and steady exodus of inexperienced and unskilled youths compounds an older, long-standing brain drain. Both phenomena deprive Africa of its main resource: people. Trained technicians and experienced professionals, teenagers and young adults, the seeds of the future, flee abroad to “greener pastures.”

1999. Death of Yaguine Koita and Fodé Tounkara

One of the root causes of Africa’s stalling lies in what Léopold S. Senghor decried as the “deterioration of the terms of exchange.” That euphemism harkens back to the Colonial Pact of 1898. Still alive, and worsening, it dealt Africa a crippling hand, for it sealed the role of the continent as (a) a coerced supplier of raw materials and (b) an induced consumer of imported goods.

Approach

The central question will be broken down into dozens of sub-topics that range from the tool-making gap, to slavery, colonization, “independence”, globalization, the Cultural Heritage (language, religion, arts, crafts, literature, ethnicity, nationhood, civilization, tradition, modernity, politics…), racism, alienation, affirmation, collaboration and resistance to foreign hegemony, war, peace, the past and the present.

The webAfriqa channel creator

Elaborating on the Africa, Between the Anvil and the Hammer byline as a linguist, an anthropologist, a technologist, a semanticist, and a web publisher, Tierno S. Bah shares four decades of research, teaching, debating, writing and pondering on the main issue and its many corollaries.
Again, the question Why Is Africa Lagging? is neither fortuitous nor frivolous. On the contrary, it is a permanent, controversial, highly charged, all-around (history, economy, culture, politics, society), major, legitimate, and utterly challenging theme. A mega-quandary, it admits no binary choices, clear-cut answers, or simple solutions!
The webAfriqa channel is backed by the webAfriqa Portal, published since 1997. Espousing the Open Web philosophy, the Portal offers tens of thousands of text, image, audio and video documents, carefully selected from authoritative sources, reliable data, relevant information and genuine knowledge bases. The Portal includes webFuuta, webPulaaku, webMande, webGuinée, Camp Boiro Memorial, BlogGuinée, Semantic Africa, SemanticVocabAfrica, webAmeriqa, etc.
Last, steeped in history and blending social sciences with digital tools and technologies, the channel will focus on the prerequisites that Africa must meet in order to break the chains that keep it down and out.

Tierno S. Bah

A quarter-century of Linux

Linus Benedict Torvalds with the Penguin, mascot of Linux

Linux celebrates its 25th anniversary: a quarter-century in which it truly changed the world. Luckily for me, I was an early convert and adopter, if not in practice then at least in mind. It was 1991, and I was living in Southwest Washington, DC. Somehow my MCIMail account was among the recipients of a mailing-list message that is likely to remain a memorable and historic announcement. It read:

From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: What would you like to see most in minix?
Summary: small poll for my new operating system
Message-ID: <1991Aug25.205708.9541@klaava.Helsinki.FI>
Date: 25 Aug 91 20:57:08 GMT
Organization: University of Helsinki
Hello everybody out there using minix –
I’m doing a (free) operating system (just a hobby, won’t be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready. I’d like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things).
I’ve currently ported bash (1.08) and gcc (1.40), and things seem to work. This implies that I’ll get something practical within a few months, and I’d like to know what features most people would want. Any suggestions are welcome, but I won’t promise I’ll implement them :-)
Linus (torvalds@kruuna.helsinki.fi)
PS. Yes – it’s free of any minix code, and it has a multi-threaded fs. It is NOT portable (uses 386 task switching etc), and it probably never will support anything other than AT-harddisks, as that’s all I have :-(.

I don’t recall giving Linus Torvalds any technical feedback, or even a broad suggestion, for I was still a UNIX newbie challenged by an entrenched industrial operating system. For a while, I looked into A/UX, Apple’s defunct version of UNIX. Next, I made unsuccessful efforts to run an Apache web server on MachTen UNIX, from Tenon Intersystems. That company’s Berkeley Software Distribution (BSD)-based OS targeted Macintosh computers built on the PowerPC, M68K or G3 chips…

Dr. Bob Kahn (left) and Dr. Vinton Cerf (right): inventors of the TCP/IP Internet, which made the creation of Linux possible, and spurred its growth and popularity.

Months after receiving Torvalds’s email, I had the privilege of participating in the 1992 conference in Kobe, Japan. Dr. Vinton Cerf, co-inventor with Dr. Robert Kahn of the TCP/IP stack of standards and protocols that underlies the Internet, chaired the event. I was part of a group of technologists from eight African countries (Algeria, Tunisia, Egypt, Kenya, Zambia, Nigeria, Senegal, Guinea) invited to the meeting. There, with the other delegates, we witnessed and celebrated the founding of the Internet Society…
In hindsight, and for a social sciences and humanities researcher like me, the early 1990s proved serendipitous, challenging and groundbreaking. As Linux began to gain a foothold, I tested several of its distributions in turn: MkLinux, Red Hat, CentOS, Ubuntu, Debian… before settling on CentOS and Ubuntu. Ever since, I have kept busy managing my Linux Virtual Private Server (VPS), which hosts a fairly complex array of services, languages, utilities, applications, front-end frameworks (Bootstrap, Foundation), and the Drupal, WordPress and Joomla Content Management Systems. The VPS runs in full compliance with rules, regulations and best practices for efficiency, availability, productivity and security. It delivers rich content on each of my ten websites, which, together, make up my webAfriqa Portal. Still freely accessible since 1997, the sites offer quality online library collections and public services: history, anthropology, economy, literature, the arts, political science, health sciences, diplomacy, human rights, Information Technology, general topics, blogging, etc. They are searchable with the integrated Google Custom Search Engine.
Obviously, with the onslaught of mobile devices, websites can double as apps. However, beyond responsive web design lies the Web 3.0 era, also known as the Semantic Web. Hence the raison d’être of the Semantic Africa project. It is still a parked site. Hopefully, though, it will evolve into an infrastructure capable of mining and processing Big Data and very large African databases (MySQL, MongoDB), with advanced indexing and sophisticated search features (Solr, Elasticsearch). The ultimate goal is to build networks of knowledge distribution aimed at fostering a fuller understanding of the African Experience, at home and abroad, from the dawn of humankind to today.
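To make the indexing-and-search idea concrete, here is a minimal sketch in Python. It is not the Semantic Africa code itself: it assumes a local Elasticsearch node at http://localhost:9200 and the version 8 Python client, and the index name and document fields are hypothetical placeholders.

# Index one illustrative archival record, then query it with a full-text match.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.index(
    index="african-archives",          # hypothetical index name
    id="doc-1",
    document={
        "title": "Oral history interview",
        "region": "Fuuta-Jalon",
        "text": "Transcript of an interview on trade routes and lineages.",
    },
)
es.indices.refresh(index="african-archives")  # make the document searchable now

resp = es.search(index="african-archives", query={"match": {"text": "trade"}})
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])

The same pattern scales from one record to millions, and Solr offers an equivalent workflow through its own HTTP API and client libraries.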
Needless to say, such an endeavor remains a tall order; worse, an impossible dream! For the roadblocks stand tall; chief among them are the predicaments of under-development (illiteracy, schooling, training, health care, food production, water supply, manufacturing, etc.), compounded by the self-inflicted wounds and crippling “technological somnambulism” of African rulers and “elites.”

Looking back at the 2014 USA-Africa Summit in Washington, DC, I will publish additional articles about the continent’s economic and technical situation and prospects. One such paper is called “Obama and Takunda: a tale of digital Africa,” another is named “African telecommunications revolution: hype and reality.”

For decades now, proprietary and Open Source software have been competing head to head around the world for mind and market share. I wonder, though, to what extent African countries seek to leverage this rivalry. Are they implementing policies and spending resources toward balancing commercial applications with free software? Are they riding the Linux wave? Or are they, instead, bucking the trend? To be determined!
Anyway, I share here Paul Venezia’s piece “Linux at 25: How Linux changed the world,” published today in InfoWorld. The author is profiled as “A devoted practitioner (who) offers an eyewitness account of the rise of Linux and the Open Source movement, plus analysis of where Linux is taking us now.”
Read also “A Salute To Shannon.”
Tierno S. Bah

Linux at 25:
How Linux changed the world

I walked into an apartment in Boston on a sunny day in June 1995. It was small and bohemian, with the normal detritus a pair of young men would scatter here and there. On the kitchen table was a 15-inch CRT display married to a fat, coverless PC case sitting on its side, network cables streaking back to a hub in the living room. The screen displayed a mess of data, the contents of some logfile, and sitting at the bottom was a Bash root prompt decorated in red and blue, the cursor blinking lazily.

I was no stranger to Unix, having spent plenty of time on commercial Unix systems like OSF/1, HP-UX, SunOS, and the newly christened Sun Solaris. But this was different.

The system on the counter was actually a server, delivering file storage and DNS, as well as web serving to the internet through a dial-up PPP connection — and to the half-dozen other systems scattered around the apartment. In front of most of them were kids, late teens to early 20s, caught up in a maze of activity around the operating system running on the kitchen server.

Those enterprising youths were actively developing code for the Linux kernel and the GNU userspace utilities that surrounded it. At that time, this scene could be found in cities and towns all over the world, where computer science students and those with a deep interest in computing were playing with an incredible new toy: a free “Unix” operating system. It was only a few years old and growing every day. It may not have been clear at the time, but these groups were rebuilding the world.

A kernel’s fertile ground

This was a pregnant time in the history of computing. In 1993, the lawsuit by Bell Labs’ Unix System Laboratories against BSDi over copyright infringement was settled out of court, clearing the way for open source BSD variants such as FreeBSD to emerge and inspire the tech community.

The timing of that settlement turned out to be crucial. In 1991, a Finnish university student named Linus Torvalds had begun working on his personal kernel development project. Torvalds himself has said that, had BSD been freely available at the time, he would probably never have embarked on his project.

Yet when BSD found its legal footing, Linux was already on its way, embraced by the types of minds that would help turn it into the operating system that would eventually run most of the world.

The pace of development picked up quickly. Userspace utilities from the GNU operating system collected around the Linux kernel, forming what most would call “Linux,” much to the chagrin of GNU founder Richard Stallman. At first, Linux was the domain of hobbyists and idealists. Then the supercomputing community began taking it seriously, and contributions ramped up further.

By 1999, this “hobby” operating system was making inroads in major corporations, including large banking institutions, and began whittling away at the entrenched players that held overwhelming sway. Large companies that paid enormous sums to major enterprise hardware and operating system vendors such as Sun Microsystems, IBM, and DEC were now hiring gifted developers, system engineers, and system architects who had spent the last several years of their lives working with freely available Linux distributions.

After major performance victories and cost savings were demonstrated to management, that whittling became a chainsaw’s cut. In a few short years, Linux was driving out commercial Unix vendors from thousands of entrenched customers. The stampede had begun, and it’s still underway.

Adaptability at the core

A common misconception about Linux persists to this day: that Linux is a complete operating system. Linux, strictly defined, is the Linux kernel. The producer of a given Linux distribution — be it Red Hat, Ubuntu, or another Linux vendor — defines the remainder of the operating system around that kernel and makes it whole. Each distribution has its own idiosyncrasies, preferring certain methods over others for common tasks such as managing services, file paths, and configuration tools.
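That division of labor is easy to see on any running system. Here is a minimal Python sketch, assuming a Linux host: the kernel reports its own version through os.uname(), while the distribution identifies itself separately in /etc/os-release.

# A small illustration of the kernel-versus-distribution distinction:
# the kernel and the distribution identify themselves in different places.
import os

kernel_release = os.uname().release          # e.g. "6.1.0-18-amd64"

distro = "unknown"
try:
    with open("/etc/os-release") as f:       # maintained by the distribution
        fields = dict(
            line.rstrip("\n").split("=", 1) for line in f if "=" in line
        )
    distro = fields.get("PRETTY_NAME", "unknown").strip('"')
except FileNotFoundError:
    pass                                      # not a standard Linux layout

print("Kernel :", kernel_release)
print("Distro :", distro)

Two machines can print the same kernel release yet entirely different distribution names, which is exactly the elasticity described next.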

This elasticity explains why Linux has become so pervasive across so many different facets of computing: A Linux system can be as large or as small as needed. Adaptations of the Linux kernel can drive a supercomputer or a watch, a laptop or a network switch. As a result, Linux has become the de facto OS for mobile and embedded products while also underpinning the majority of internet services and platforms.

To grow in these ways, Linux needed not only to sustain the interest of the best software developers on the planet, but also to create an ecosystem that demanded reciprocal source code sharing. The Linux kernel was released under the GNU Public License, version 2 (GPLv2), which stated that the code could be used freely, but any modifications to the code (or use of the source code itself in other projects) required that the resulting source code be made publicly available. In other words, anyone was free to use the Linux kernel (and the GNU tools, also licensed under the GPL) as long as they contributed the resulting efforts back to those projects.

This created a vibrant development ecosystem that let Linux grow by leaps and bounds, as a loose network of developers began molding Linux to suit their needs and shared the fruit of their labor. If the kernel didn’t support a specific piece of hardware, a developer could write a device driver and share it with the community, allowing everyone to benefit. If another developer discovered a performance issue with a scheduler on a certain workload, they could fix it and contribute that fix back to the project. Linux was a project jointly developed by thousands of volunteers.

Changing the game

That method of development stood established practices on their ear. Commercial enterprise OS vendors dismissed Linux as a toy, a fad, a joke. After all, they had the best developers working on operating systems that were often tied to hardware, and they were raking in cash from companies that relied on the stability of their core servers. The name of the game at that time was highly reliable, stable, and expensive proprietary hardware and server software, coupled with expensive but very responsive support contracts.

To those running the commercial Unix cathedrals of Sun, DEC, IBM, and others, the notion of distributing source code to those operating systems, or that enterprise workloads could be handled on commodity hardware, was unfathomable. It simply wasn’t done — until companies like Red Hat and Suse began to flourish. Those upstarts offered the missing ingredient that many customers and vendors required: a commercially supported Linux distribution.

The decision to embrace Linux at the corporate level was made not because it was free, but because it now had a cost and could be purchased for significantly less — and the hardware was significantly cheaper, too. When you tell a large financial institution that it can reduce its server expenses by more than 50 percent while maintaining or exceeding current performance and reliability, you have their full attention.

Add the rampant success of Linux as a foundation for websites, and the Linux ecosystem grew even further. The past 10 years have seen heavy Linux adoption at every level of computing, and importantly, Linux has carried the open source story with it, serving as an icebreaker for thousands of other open source projects that would have failed to gain legitimacy on their own.

The tale of Linux is more than the success of an open kernel and an operating system. It’s equally important to understand that much of the software and services we rely on directly or indirectly every day exist only due to Linux’s clear demonstration of the reliability and sustainability of open development methods.

Anyone who fought through the days when Linux was unmentionable and open source was a threat to corporate management knows how difficult that journey has been. From web servers to databases to programming languages, the turnabout in this thinking has changed the world, stem to stern.

Open source code is long past the pariah phase. It has proven crucial to the advancement of technology in every way.

The next 25 years

While the first 15 years of Linux were busy, the last 10 have been busier still. The success of the Android mobile platform brought Linux to more than a billion devices. It seems every nook and cranny of digital life runs a Linux kernel these days, from refrigerators to televisions to thermostats to the International Space Station.

That’s not to say that Linux has conquered everything … yet.

Though you’ll find Linux in nearly every organization in one form or another, Windows servers persist in most companies, and Windows still has the lion’s share of the corporate and personal desktop market.

In the short term, that’s not changing. Some thought Linux would have won the desktop by now, but it’s still a niche player, and the desktop and laptop market will continue to be dominated by the goliath of Microsoft and the elegance of Apple, modest inroads by the Linux-based Chromebook notwithstanding.

The road to mainstream Linux desktop adoption presents serious obstacles, but given Linux’s remarkable resilience over the years, it would be foolish to bet against the OS over the long haul.

I say that even though various issues and schisms regularly arise in the Linux community — and not only on the desktop. The brouhaha surrounding systemd is one example, as are the battles over the Mir, Wayland, and ancient X11 display servers. The predilection of some distributions to abstract away too much of the underlying operating system in the name of user-friendliness has rankled more than a few Linux users. Fortunately, Linux is what you make of it, and the different approaches taken by various Linux distributions tend to appeal to different user types.

That freedom is a double-edged sword. Poor technological and functional decisions have doomed more than one company in the past, as they’ve taken a popular desktop or server product in a direction that ultimately alienated users and led to the rise of competitors.

If a Linux distribution makes a few poor choices and loses ground, other distributions will take a different approach and flourish. Linux distributions are not tied directly to Linux kernel development, so they come and go without affecting the core component of a Linux operating system. The kernel itself is mostly immune to bad decisions made at the distribution level.

That has been the trend over the past 25 years — from bare metal to virtual servers, from cloud instances to mobile devices, Linux adapts to fit the needs of them all. The success of the Linux kernel and the development model that sustains it is undeniable. It will endure through the rise and fall of empires.

Paul Venezia

The next 25 years should be every bit as interesting as the first.

Paul Venezia
InfoWorld

Salute to Shannon

I just read The New Yorker’s article titled “Claude Shannon, the Father of the Information Age, Turns 1100100.” The evocation of Shannon’s career took me back decades, to the last third of the 20th century. It was the academic year 1967-1968, and I was an eighteen-year-old freshman at the Institut Polytechnique de Conakry (Guinea). I was in Propedeutics, which, at the time, designated the first year of the four-year university system. Incoming students fell into three categories:

  • Propedeutics A (Maths, Physics)
  • Propedeutics B (experimental sciences: chemistry, biology)
  • Propedeutics C (literature, linguistics, humanities)

Based on my baccalaureate transcripts (Série A) from my high school in Labé (Fuuta-Jalon), I was automatically placed in Propedeutics C. There I took the class taught by the Belgian professor of linguistics Ms. Claire Van Arenberg. She brilliantly exposed our young minds to Claude Shannon’s concepts and some of their implications. It was all theoretical, of course. However, her explanations registered front, center and back in my mind. And they stuck there, never dimming or fading out.
Fast forward some fifteen years, to January 22, 1982. I arrived at JFK International Airport aboard the regular Pan Am flight from Dakar to New York. I was on my way to the University of Texas at Austin, as an assistant professor and a recipient of a Fulbright-Hays research fellowship in sociolinguistics. Upon settling down in the heart of the Lone Star State, my first shopping trophy was a tablet-sized Sinclair computer with 64 KB of RAM. It was a disappointment, so I quickly returned it. Passing over Radio Shack’s Tandy desktop computer, I purchased a 128K Apple IIc with an external monitor. I connected it to a dot-matrix printer and a 9.6 kbit/s modem. The two peripherals cost hundreds of dollars. But, to me, they were worth their high price. For in Conakry, I had toiled for years as a co-publisher of Guinea’s journal Miriya, Revue des sciences économiques et sociales. Preparation of each issue was a real pain. Armed with a typewriter, a pair of scissors and a pot of glue, we had to literally cut and paste words and letters during the pre- and post-print phases. Consequently, the minute I saw a full-screen word processor in action in Austin, I was sold. Today, while I no longer have the peripheral devices, I still own the Apple IIc with AppleWorks and its staple applications (word processing, database, spreadsheet). And I can still turn it on and run it…
Better yet, I now manage my own fiber-optics-based Linux CentOS Virtual Private Server (VPS) network, built on the TCP/IP stack with its standard array of servers (dns, web, ssh, sftp, mail, etc.). It is home to my webAfriqa Portal, which includes ten public-facing websites. webAfriqa is dedicated to researching and publishing information and knowledge about the Fulɓe, Africa and its Diaspora. The server also hosts a dozen internal sandboxes, where I experiment and tinker with a variety of Content Management Systems, languages, tools, and utilities. This Open Source software environment includes WordPress, Drupal, DSpace, MySQL, MongoDB, Solr, XHTML, XML, CSS/Sass, JavaScript/jQuery, PHP, Python, Java, etc. Looking back from my first encounter with a computer, it has been a long, instructive and enlightening journey.
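As a small illustration of that standard array of servers (a sketch, not my actual tooling), the following Python snippet probes a few well-known TCP ports on a host; the hostname is a placeholder to replace with your own server.

# Probe well-known service ports over TCP. Note that DNS normally answers
# over UDP as well; this check only tests TCP reachability.
import socket

HOST = "example.org"   # placeholder hostname
SERVICES = {"ssh": 22, "smtp": 25, "dns": 53, "http": 80, "https": 443}

for name, port in SERVICES.items():
    try:
        with socket.create_connection((HOST, port), timeout=3):
            status = "open"
    except OSError:
        status = "closed or filtered"
    print(f"{name:5} (port {port:3d}): {status}")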
I find it fascinating that Shannon’s fundamental concept, the bit, also belongs in everyday English. Two words embed it: the nibble and the byte. The word bit itself predates the digital revolution, since it once designated a small coin.
The New Yorker‘s article pays tribute to Shannon’s creative genius. It unwittingly speaks for me. And it inherently expresses my intellectual debt and deep gratitude to the Father of the Information Age. It is a fitting salute from one of America’s premier journalistic and literary publications. I enjoyed reading it and I wholeheartedly second it.
Tierno S. Bah


Claude Shannon
the Father of the Information Age, Turns 1100100

Twelve years ago, Robert McEliece, a mathematician and engineer at Caltech, won the Claude E. Shannon Award, the highest honor in the field of information theory. During his acceptance lecture, at an international symposium in Chicago, he discussed the prize’s namesake, who died in 2001. Someday, McEliece imagined, many millennia in the future, the hundred-and-sixty-sixth edition of the Encyclopedia Galactica—a fictional compendium first conceived by Isaac Asimov—would contain the following biographical note:

Claude Shannon: Born on the planet Earth (Sol III) in the year 1916 A.D. Generally regarded as the father of the information age, he formulated the notion of channel capacity in 1948 A.D. Within several decades, mathematicians and engineers had devised practical ways to communicate reliably at data rates within one per cent of the Shannon limit.

Claude Shannon (1916-2001). A hundred years after his birth, Claude Shannon’s fingerprints are on every electronic device we own. (Photo: Alfred Eisenstaedt / The Life Picture Collection / Getty)

As is sometimes the case with encyclopedias, the crisply worded entry didn’t quite do justice to its subject’s legacy. That humdrum phrase—“channel capacity”—refers to the maximum rate at which data can travel through a given medium without losing integrity. The Shannon limit, as it came to be known, is different for telephone wires than for fibre-optic cables, and, like absolute zero or the speed of light, it is devilishly hard to reach in the real world. But providing a means to compute this limit was perhaps the lesser of Shannon’s great breakthroughs. First and foremost, he introduced the notion that information could be quantified at all. In “A Mathematical Theory of Communication,” his legendary paper from 1948, Shannon proposed that data should be measured in bits—discrete values of zero or one. (He gave credit for the word’s invention to his colleague John Tukey, at what was then Bell Telephone Laboratories, who coined it as a contraction of the phrase “binary digit.”)
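For reference, and as standard textbook material rather than a quotation from Shannon’s paper, the Shannon limit for a noisy channel of bandwidth B and signal-to-noise ratio S/N is given by the Shannon-Hartley theorem:

C = B \log_2\!\left(1 + \frac{S}{N}\right)

where C, the channel capacity, is measured in bits per second. Doubling the bandwidth doubles the capacity, whereas at high signal-to-noise ratios doubling S/N adds only about one extra bit per second per hertz of bandwidth.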

“It would be cheesy to compare him to Einstein,” James Gleick, the author of “The Information,” told me, before submitting to temptation. “Einstein looms large, and rightly so. But we’re not living in the relativity age, we’re living in the information age. It’s Shannon whose fingerprints are on every electronic device we own, every computer screen we gaze into, every means of digital communication. He’s one of these people who so transform the world that, after the transformation, the old world is forgotten.” That old world, Gleick said, treated information as “vague and unimportant,” as something to be relegated to “an information desk at the library.” The new world, Shannon’s world, exalted information; information was everywhere. “He created a whole field from scratch, from the brow of Zeus,” David Forney, an electrical engineer and adjunct professor at M.I.T., said. Almost immediately, the bit became a sensation: scientists tried to measure birdsong with bits, and human speech, and nerve impulses. (In 1956, Shannon wrote a disapproving editorial about this phenomenon, called “The Bandwagon.”)

Although Shannon worked largely with analog technology, he also has some claim as the father of the digital age, whose ancestral ideas date back not only to his 1948 paper but also to his master’s thesis, published a decade earlier. The thesis melded George Boole’s nineteenth-century Boolean algebra (based on the variables true and false, denoted by the binary one and zero) with the relays and switches of electronic circuitry. The computer scientist and sometime historian Herman Goldstine hyperbolically deemed it “one of the most important master’s theses ever written,” arguing that “it changed circuit design from an art to a science.” Neil Sloane, a retired Bell Labs mathematician as well as the co-editor of Shannon’s collected papers and the founder of the On-Line Encyclopedia of Integer Sequences, agreed. “Of course, Shannon’s main work was in communication theory, without which we would still be waiting for telegrams,” Sloane said. But circuit design, he added, seemed to be Shannon’s great love. “He loved little machines. He loved the tinkering.”
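The core idea of that thesis can be sketched in a few lines of Python (an illustration of the mapping, not Shannon’s own notation): a closed switch is True, an open one is False, switches wired in series implement AND, and switches wired in parallel implement OR.

# Boolean algebra as switching circuits: series = AND, parallel = OR.
def series(*switches):
    return all(switches)       # current flows only if every switch is closed

def parallel(*switches):
    return any(switches)       # current flows if any switch is closed

# Truth table for a lamp controlled by the circuit (A AND B) OR C.
for A in (False, True):
    for B in (False, True):
        for C in (False, True):
            lamp = parallel(series(A, B), C)
            print(f"A={A!s:5} B={B!s:5} C={C!s:5} -> lamp {'on' if lamp else 'off'}")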

For instance, Shannon built a machine that did arithmetic with Roman numerals, naming it THROBAC I, for Thrifty Roman-Numeral Backward-Looking Computer. He built a flame-throwing trumpet and a rocket-powered Frisbee. He built a chess-playing automaton that, after its opponent moved, made witty remarks. Inspired by the late artificial-intelligence pioneer Marvin Minsky, he designed what was dubbed the Ultimate Machine: flick the switch to “On” and a box opens up; out comes a mechanical hand, which flicks the switch back to “Off” and retreats inside the box. Shannon’s home, in Winchester, Massachusetts (Entropy House, he called it), was full of his gizmos, and his garage contained at least thirty idiosyncratic unicycles—one without pedals, one with a square tire, and a particularly confounding unicycle built for two. Among the questions he sought to answer was, What’s the smallest unicycle anybody could ride? “He had a few that were a little too small,” Elwyn Berlekamp, a professor emeritus of mathematics at Berkeley and a co-author of Shannon’s last paper, told me. Shannon sat on Berlekamp’s thesis committee at M.I.T., and in return he asked Berlekamp to teach him how to juggle with four balls. “He claimed his hands were too small, which was true—they were smaller than most people’s—so he had trouble holding the four balls to start,” Berlekamp said. But Shannon succeeded in mastering the technique, and he pursued further investigations with his Jugglometer. “He was hacking reality,” the digital philosopher Amber Case said.

By 1960, however, like the hand of that sly machine, Shannon had retreated. He no longer participated much in the field that he had created, publishing only rarely. Yet he still tinkered, in the time he might have spent cultivating the big reputation that scientists of his stature tend to seek. In 1973, the Institute of Electrical and Electronics Engineers christened the Shannon Award by bestowing it on the man himself, at the International Symposium on Information Theory in Ashkelon, Israel. Shannon had a bad case of nerves, but he pulled himself together and delivered a fine lecture on feedback, then dropped off the scene again. In 1985, at the International Symposium in Brighton, England, the Shannon Award went to the University of Southern California’s Solomon Golomb. As the story goes, Golomb began his lecture by recounting a terrifying nightmare from the night before: he’d dreamed that he was about to deliver his presentation, and who should turn up in the front row but Claude Shannon. And then, there before Golomb in the flesh, and in the front row, was Shannon. His reappearance (including a bit of juggling at the banquet) was the talk of the symposium, but he never attended again.

Siobhan Roberts
The New Yorker