A quarter-century of Linux

Linus Benedict Torvalds with the Penguin, mascot of Linux

Linux celebrates its 25th anniversary: a quarter-century in which it truly changed the world. Luckily for me, I was an early convert and an adopter, if not in practice, at least in mind. It was 1991, and I was living in Southwest Washington, DC. Somehow my MCIMail account was among the recipients of a message that is likely to remain a memorable and historic announcement. It read:

From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: What would you like to see most in minix?
Summary: small poll for my new operating system
Message-ID: <1991Aug25.205708.9541@klaava.Helsinki.FI>
Date: 25 Aug 91 20:57:08 GMT
Organization: University of Helsinki
Hello everybody out there using minix –
I’m doing a (free) operating system (just a hobby, won’t be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready. I’d like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things).
I’ve currently ported bash (1.08) and gcc (1.40), and things seem to work. This implies that I’ll get something practical within a few months, and I’d like to know what features most people would want. Any suggestions are welcome, but I won’t promise I’ll implement them :-)
Linus (torvalds@kruuna.helsinki.fi)
PS. Yes – it’s free of any minix code, and it has a multi-threaded fs. It is NOT portable (uses 386 task switching etc), and it probably never will support anything other than AT-harddisks, as that’s all I have :-(.

I don’t recall giving Linus Torvalds any technical feedback or even a broad suggestion, for I was still a UNIX newbie grappling with an entrenched industrial operating system. For a while, I looked into A/UX — Apple’s now defunct version of UNIX. Next, I made unsuccessful efforts to run an Apache web server on MachTen UNIX, from Tenon Intersystems. That company’s Berkeley Software Distribution (BSD)-based OS targeted Macintosh computers built on the PowerPC, M68K or G3 chips.

Dr. Bob Kahn (left) and Dr. Vinton Cerf (right): inventors of the TCP/IP Internet, which made the creation of Linux possible, and spurred its growth and popularity.

Months after receiving Torvalds’s message, I had the privilege of participating in the 1992 Kobe, Japan, conference. Dr. Vinton Cerf, co-inventor with Dr. Robert Kahn of the TCP/IP stack of standards and protocols that underlies the Internet, chaired the event. I was part of a group of technologists from eight African countries (Algeria, Tunisia, Egypt, Kenya, Zambia, Nigeria, Senegal, Guinea) who were invited to the meeting. There, with the other delegates, we witnessed and celebrated the founding of the Internet Society.
In hindsight, and for a social sciences and humanities researcher like me, the early 1990s proved serendipitous, challenging and groundbreaking. As Linux began to gain a foothold, I tested several of its distributions in turn: MkLinux, Red Hat, CentOS, Ubuntu, Debian… before settling on CentOS and Ubuntu. Ever since, I have kept busy managing my Linux Virtual Private Server (VPS), which hosts a fairly complex array of services, languages, utilities, applications, front-end frameworks (Bootstrap, Foundation), and the Drupal, WordPress and Joomla Content Management Systems. The VPS runs in full compliance with rules, regulations and best practices for efficiency, availability, productivity and security. It delivers rich content on each of my ten websites, which together make up my webAfriqa Portal. Still freely accessible since 1997, the sites offer quality online library collections and public services: history, anthropology, economy, literature, the arts, political science, health sciences, diplomacy, human rights, Information Technology, general topics, blogging, etc. They are searchable with the integrated Google Custom Search Engine.
Obviously, with the onslaught of mobile devices, websites can double as apps. Beyond responsive web design, however, stands the Web 3.0 era, also known as the Semantic Web. Hence the raison d’être of the Semantic Africa project. It is still a parked site. Hopefully, though, it will evolve into an infrastructure capable of mining and processing Big Data and very large African databases (MySQL, MongoDB), with advanced indexing and sophisticated search features (Solr, Elasticsearch). The ultimate goal is to build networks of knowledge distribution aimed at fostering a fuller understanding of the African Experience, at home and abroad, from the dawn of humankind to today.
Needless to say, such an endeavor remains a tall order, perhaps even an impossible dream! For the roadblocks stand tall; chief among them are the predicaments of under-development (illiteracy, schooling, training, health care, food production, water supply, manufacturing, etc.), compounded by the self-inflicted wounds and crippling “technological somnambulism” of African rulers and “elites.”

Looking back at the 2014 USA-Africa Summit in Washington, DC, I will publish additional articles about the continent’s economic and technical situation and prospects. One such paper is called “Obama and Takunda: a tale of digital Africa”; another is named “African telecommunications revolution: hype and reality.”

For decades now, proprietary and Open Source software have been competing head to head around the world for mind and market share. I wonder, though, to what extent African countries seek to leverage this rivalry. Are they implementing policies and spending resources toward balancing commercial applications with free software? Are they riding the Linux wave? Or are they, instead, bucking the trend? To be determined!
Anyway, I share here Paul Venezia’s piece “Linux at 25: How Linux changed the world,” published today in InfoWorld. The author is profiled as “A devoted practitioner (who) offers an eyewitness account of the rise of Linux and the Open Source movement, plus analysis of where Linux is taking us now.”
Read also “Salute to Shannon” below.
Tierno S. Bah

Linux at 25:
How Linux changed the world

I walked into an apartment in Boston on a sunny day in June 1995. It was small and bohemian, with the normal detritus a pair of young men would scatter here and there. On the kitchen table was a 15-inch CRT display married to a fat, coverless PC case sitting on its side, network cables streaking back to a hub in the living room. The screen displayed a mess of data, the contents of some logfile, and sitting at the bottom was a Bash root prompt decorated in red and blue, the cursor blinking lazily.

I was no stranger to Unix, having spent plenty of time on commercial Unix systems like OSF/1, HP-UX, SunOS, and the newly christened Sun Solaris. But this was different.

The system on the counter was actually a server, delivering file storage and DNS, as well as web serving to the internet through a dial-up PPP connection — and to the half-dozen other systems scattered around the apartment. In front of most of them were kids, late teens to early 20s, caught up in a maze of activity around the operating system running on the kitchen server.

Those enterprising youths were actively developing code for the Linux kernel and the GNU userspace utilities that surrounded it. At that time, this scene could be found in cities and towns all over the world, where computer science students and those with a deep interest in computing were playing with an incredible new toy: a free “Unix” operating system. It was only a few years old and growing every day. It may not have been clear at the time, but these groups were rebuilding the world.

A kernel’s fertile ground

This was a pregnant time in the history of computing. In 1993, the lawsuit by Bell Labs’ Unix System Laboratories against BSDi over copyright infringement was settled out of court, clearing the way for open source BSD variants such as FreeBSD to emerge and inspire the tech community.

The timing of that settlement turned out to be crucial. In 1991, a Finnish university student named Linus Torvalds had begun working on his personal kernel development project. Torvalds himself has said that, had BSD been freely available at the time, he would probably never have embarked on his project.

Yet when BSD found its legal footing, Linux was already on its way, embraced by the types of minds that would help turn it into the operating system that would eventually run most of the world.

The pace of development picked up quickly. Userspace utilities from the GNU project collected around the Linux kernel, forming what most would call “Linux,” much to the chagrin of GNU founder Richard Stallman. At first, Linux was the domain of hobbyists and idealists. Then the supercomputing community began taking it seriously, and contributions ramped up further.

By 1999, this “hobby” operating system was making inroads in major corporations, including large banking institutions, and began whittling away at the entrenched players that held overwhelming sway. Large companies that paid enormous sums to major enterprise hardware and operating system vendors such as Sun Microsystems, IBM, and DEC were now hiring gifted developers, system engineers, and system architects who had spent the last several years of their lives working with freely available Linux distributions.

After major performance victories and cost savings were demonstrated to management, that whittling became a chainsaw’s cut. In a few short years, Linux was driving commercial Unix vendors out of thousands of entrenched customers. The stampede had begun — and it’s still underway.

Adaptability at the core

A common misconception about Linux persists to this day: that Linux is a complete operating system. Linux, strictly defined, is the Linux kernel. The producer of a given Linux distribution — be it Red Hat, Ubuntu, or another Linux vendor — defines the remainder of the operating system around that kernel and makes it whole. Each distribution has its own idiosyncrasies, preferring certain methods over others for common tasks such as managing services, file paths, and configuration tools.

This elasticity explains why Linux has become so pervasive across so many different facets of computing: A Linux system can be as large or as small as needed. Adaptations of the Linux kernel can drive a supercomputer or a watch, a laptop or a network switch. As a result, Linux has become the de facto OS for mobile and embedded products while also underpinning the majority of internet services and platforms.

To grow in these ways, Linux needed not only to sustain the interest of the best software developers on the planet, but also to create an ecosystem that demanded reciprocal source code sharing. The Linux kernel was released under the GNU General Public License, version 2 (GPLv2), which stated that the code could be used freely, but any distributed modifications to the code (or use of the source code itself in other projects) required that the resulting source code be made publicly available. In other words, anyone was free to use the Linux kernel (and the GNU tools, also licensed under the GPL) as long as they contributed the resulting efforts back to those projects.

This created a vibrant development ecosystem that let Linux grow by leaps and bounds, as a loose network of developers began molding Linux to suit their needs and shared the fruit of their labor. If the kernel didn’t support a specific piece of hardware, a developer could write a device driver and share it with the community, allowing everyone to benefit. If another developer discovered a performance issue with a scheduler on a certain workload, they could fix it and contribute that fix back to the project. Linux was a project jointly developed by thousands of volunteers.

Changing the game

That method of development stood established practices on their ear. Commercial enterprise OS vendors dismissed Linux as a toy, a fad, a joke. After all, they had the best developers working on operating systems that were often tied to hardware, and they were raking in cash from companies that relied on the stability of their core servers. The name of the game at that time was highly reliable, stable, and expensive proprietary hardware and server software, coupled with expensive but very responsive support contracts.

To those running the commercial Unix cathedrals of Sun, DEC, IBM, and others, the notion of distributing source code to those operating systems, or that enterprise workloads could be handled on commodity hardware, was unfathomable. It simply wasn’t done — until companies like Red Hat and SUSE began to flourish. Those upstarts offered the missing ingredient that many customers and vendors required: a commercially supported Linux distribution.

The decision to embrace Linux at the corporate level was made not because it was free, but because it now had a cost and could be purchased for significantly less — and the hardware was significantly cheaper, too. When you tell a large financial institution that it can reduce its server expenses by more than 50 percent while maintaining or exceeding current performance and reliability, you have its full attention.

Add the rampant success of Linux as a foundation for websites, and the Linux ecosystem grew even further. The past 10 years have seen heavy Linux adoption at every level of computing, and importantly, Linux has carried the open source story with it, serving as an icebreaker for thousands of other open source projects that would have failed to gain legitimacy on their own.

The tale of Linux is more than the success of an open kernel and an operating system. It’s equally as important to understand that much of the software and services we rely on directly or indirectly every day exist only due to Linux’s clear demonstration of the reliability and sustainability of open development methods.

Anyone who fought through the days when Linux was unmentionable and open source was a threat to corporate management knows how difficult that journey has been. From web servers to databases to programming languages, the turnabout in this thinking has changed the world, stem to stern.

Open source code is long past the pariah phase. It has proven crucial to the advancement of technology in every way.

The next 25 years

While the first 15 years of Linux were busy, the last 10 have been busier still. The success of the Android mobile platform brought Linux to more than a billion devices. It seems every nook and cranny of digital life runs a Linux kernel these days, from refrigerators to televisions to thermostats to the International Space Station.

That’s not to say that Linux has conquered everything … yet.

Though you’ll find Linux in nearly every organization in one form or another, Windows servers persist in most companies, and Windows still has the lion’s share of the corporate and personal desktop market.

In the short term, that’s not changing. Some thought Linux would have won the desktop by now, but it’s still a niche player, and the desktop and laptop market will continue to be dominated by the goliath of Microsoft and the elegance of Apple, modest inroads by the Linux-based Chromebook notwithstanding.

The road to mainstream Linux desktop adoption presents serious obstacles, but given Linux’s remarkable resilience over the years, it would be foolish to bet against the OS over the long haul.

I say that even though various issues and schisms regularly arise in the Linux community — and not only on the desktop. The brouhaha surrounding systemd is one example, as are the battles over the Mir, Wayland, and ancient X11 display servers. The predilection of some distributions to abstract away too much of the underlying operating system in the name of user-friendliness has rankled more than a few Linux users. Fortunately, Linux is what you make of it, and the different approaches taken by various Linux distributions tend to appeal to different user types.

That freedom is a double-edged sword. Poor technological and functional decisions have doomed more than one company in the past, as they’ve taken a popular desktop or server product in a direction that ultimately alienated users and led to the rise of competitors.

If a Linux distribution makes a few poor choices and loses ground, other distributions will take a different approach and flourish. Linux distributions are not tied directly to Linux kernel development, so they come and go without affecting the core component of a Linux operating system. The kernel itself is mostly immune to bad decisions made at the distribution level.

That has been the trend over the past 25 years — from bare metal to virtual servers, from cloud instances to mobile devices, Linux adapts to fit the needs of them all. The success of the Linux kernel and the development model that sustains it is undeniable. It will endure through the rise and fall of empires.


The next 25 years should be every bit as interesting as the first.

Paul Venezia
InfoWorld

Salute to Shannon

I just read the New Yorker’s article titled “Claude Shannon, the Father of the Information Age, Turns 1100100.” The evocation of Shannon’s career took me back decades, to the academic year 1967-1968, when I was an eighteen-year-old freshman at the Institut Polytechnique de Conakry (Guinea). I was in Propedeutics, which then designated the first year of the four-year university system. Newbies belonged in three categories:

  • Propedeutics A (Maths, Physics)
  • Propedeutics B (experimental sciences: chemistry, biology)
  • Propedeutics C (literature, linguistics, humanities)

Based on my baccalaureate transcripts (Série A) from my high school in Labe (Fuuta-Jalon), I was automatically placed in Propedeutics C. There I took the class taught by a Belgian professor of linguistics, Ms. Claire Van Arenberg. She brilliantly exposed our young minds to Claude Shannon’s concepts and some of their implications. It was all theoretical, of course. However, her explanations registered front, center and back in my mind. And they stuck there, never dimming or fading out.
Fast forward some 15 years, to January 22, 1982. I arrived at JFK International Airport aboard the regular Pan Am flight from Dakar to New York. I was on my way to the University of Texas at Austin, as an assistant professor and a recipient of a Fulbright-Hays Research Fellowship in sociolinguistics. Upon settling down in the heart of the Lone Star State, my first shopping trophy was a tablet-size Sinclair computer with 64 KB of RAM. It was a disappointment, so I quickly returned it. Passing over Radio Shack’s Tandy desktop computer, I purchased a 128 KB Apple IIc with an external monitor. I connected it to a dot-matrix printer and a 9.6 kbit/s modem. The two peripherals fetched hundreds of dollars. But, to me, they were worth their high price. For in Conakry, I had toiled for years as co-publisher of Guinea’s journal Miriya, Revue des sciences économiques et sociales. Preparation of each issue was a real pain: armed with a typewriter, a pair of scissors and a pot of glue, we had to literally cut and paste words and letters during the pre- and post-print phases. Consequently, the minute I saw a full-screen word processor in action in Austin, I was sold. Today, while I no longer have the peripheral devices, I still own the Apple IIc with AppleWorks and its staple applications (word processing, database, spreadsheet). And I can still turn it on and run it…
Better yet, I now manage my own fiber-optic-based Linux CentOS Virtual Private Server (VPS) network, built on the TCP/IP stack with its standard array of servers (DNS, web, SSH, FTP, mail, etc.). It is home to my webAfriqa Portal, which includes ten public-facing websites. webAfriqa is dedicated to researching and publishing information and knowledge about the Fulɓe, Africa and its Diaspora. The server also hosts a dozen internal sandboxes, where I experiment and tinker with a variety of Content Management Systems, languages, tools and utilities. This Open Source software environment includes WordPress, Drupal, DSpace, MySQL, MongoDB, Solr, XHTML, XML, CSS/Sass, JavaScript/jQuery, PHP, Python, Java, etc. It has been a long, enlightening journey since my first encounter with a computer.
I find it fascinating that Shannon’s fundamental concept, the bit, also belongs to everyday English, embedded in words such as the nimble “bit” itself and the “byte.” The “bit,” too, predates the digital revolution, since it once served as a currency term.
The New Yorker‘s article pays tribute to Shannon’s creative genius. It unwittingly speaks for me. And it inherently expresses my intellectual debt and deep gratitude to the Father of the Information Age. It is a fitting salute from one of America’s premier journalistic and literary publications. I enjoyed reading it and I wholeheartedly second it.
Tierno S. Bah


Claude Shannon, the Father of the Information Age, Turns 1100100

Twelve years ago, Robert McEliece, a mathematician and engineer at Caltech, won the Claude E. Shannon Award, the highest honor in the field of information theory. During his acceptance lecture, at an international symposium in Chicago, he discussed the prize’s namesake, who died in 2001. Someday, McEliece imagined, many millennia in the future, the hundred-and-sixty-sixth edition of the Encyclopedia Galactica—a fictional compendium first conceived by Isaac Asimov—would contain the following biographical note:

Claude Shannon: Born on the planet Earth (Sol III) in the year 1916 A.D. Generally regarded as the father of the information age, he formulated the notion of channel capacity in 1948 A.D. Within several decades, mathematicians and engineers had devised practical ways to communicate reliably at data rates within one per cent of the Shannon limit.

Claude Shannon (1916-2001). A hundred years after his birth, Claude Shannon’s fingerprints are on every electronic device we own. (Photo: Alfred Eisenstaedt / The Life Picture Collection / Getty)

As is sometimes the case with encyclopedias, the crisply worded entry didn’t quite do justice to its subject’s legacy. That humdrum phrase—“channel capacity”—refers to the maximum rate at which data can travel through a given medium without losing integrity. The Shannon limit, as it came to be known, is different for telephone wires than for fibre-optic cables, and, like absolute zero or the speed of light, it is devilishly hard to reach in the real world. But providing a means to compute this limit was perhaps the lesser of Shannon’s great breakthroughs. First and foremost, he introduced the notion that information could be quantified at all. In “A Mathematical Theory of Communication,” his legendary paper from 1948, Shannon proposed that data should be measured in bits—discrete values of zero or one. (He gave credit for the word’s invention to his colleague John Tukey, at what was then Bell Telephone Laboratories, who coined it as a contraction of the phrase “binary digit.”)
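As a point of reference, the formula itself is standard, though the article does not spell it out: for a channel of bandwidth B (in hertz) and signal-to-noise power ratio S/N, the Shannon limit is

C = B \log_2\!\left(1 + \frac{S}{N}\right) \ \text{bits per second}

For a telephone line with roughly 3,000 Hz of usable bandwidth and a 30 dB signal-to-noise ratio (S/N = 1,000), this works out to about 3000 × log2(1001) ≈ 30,000 bits per second, which is why dial-up modems plateaued near that figure.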

“It would be cheesy to compare him to Einstein,” James Gleick, the author of “The Information,” told me, before submitting to temptation. “Einstein looms large, and rightly so. But we’re not living in the relativity age, we’re living in the information age. It’s Shannon whose fingerprints are on every electronic device we own, every computer screen we gaze into, every means of digital communication. He’s one of these people who so transform the world that, after the transformation, the old world is forgotten.” That old world, Gleick said, treated information as “vague and unimportant,” as something to be relegated to “an information desk at the library.” The new world, Shannon’s world, exalted information; information was everywhere. “He created a whole field from scratch, from the brow of Zeus,” David Forney, an electrical engineer and adjunct professor at M.I.T., said. Almost immediately, the bit became a sensation: scientists tried to measure birdsong with bits, and human speech, and nerve impulses. (In 1956, Shannon wrote a disapproving editorial about this phenomenon, called “The Bandwagon.”)

Although Shannon worked largely with analog technology, he also has some claim as the father of the digital age, whose ancestral ideas date back not only to his 1948 paper but also to his master’s thesis, published a decade earlier. The thesis melded George Boole’s nineteenth-century Boolean algebra (based on the variables true and false, denoted by the binary one and zero) with the relays and switches of electronic circuitry. The computer scientist and sometime historian Herman Goldstine hyperbolically deemed it “one of the most important master’s theses ever written,” arguing that “it changed circuit design from an art to a science.” Neil Sloane, a retired Bell Labs mathematician as well as the co-editor of Shannon’s collected papers and the founder of the On-Line Encyclopedia of Integer Sequences, agreed. “Of course, Shannon’s main work was in communication theory, without which we would still be waiting for telegrams,” Sloane said. But circuit design, he added, seemed to be Shannon’s great love. “He loved little machines. He loved the tinkering.”
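To make the correspondence concrete, here is a minimal sketch in Python (my illustration, not anything drawn from the thesis itself): Boolean operations stand in for relay arrangements, and composing them yields arithmetic, in this case a one-bit half adder.

# Boolean algebra as switching circuits, the idea of Shannon's 1937 thesis.
# Each function stands in for a relay arrangement; values are 0 or 1.

def AND(a, b):   # two switches in series
    return a & b

def OR(a, b):    # two switches in parallel
    return a | b

def NOT(a):      # a normally closed relay
    return 1 - a

def half_adder(a, b):
    """Add two one-bit numbers; return (sum, carry)."""
    s = OR(AND(a, NOT(b)), AND(NOT(a), b))  # XOR built from AND/OR/NOT
    return s, AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        total, carry = half_adder(a, b)
        print(f"{a} + {b} -> sum={total}, carry={carry}")

Chaining half adders into full adders gives multi-bit arithmetic, which is the sense in which the thesis turned circuit design from an art into a science.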

For instance, Shannon built a machine that did arithmetic with Roman numerals, naming it THROBAC I, for Thrifty Roman-Numeral Backward-Looking Computer. He built a flame-throwing trumpet and a rocket-powered Frisbee. He built a chess-playing automaton that, after its opponent moved, made witty remarks. Inspired by the late artificial-intelligence pioneer Marvin Minsky, he designed what was dubbed the Ultimate Machine: flick the switch to “On” and a box opens up; out comes a mechanical hand, which flicks the switch back to “Off” and retreats inside the box. Shannon’s home, in Winchester, Massachusetts (Entropy House, he called it), was full of his gizmos, and his garage contained at least thirty idiosyncratic unicycles—one without pedals, one with a square tire, and a particularly confounding unicycle built for two. Among the questions he sought to answer was, What’s the smallest unicycle anybody could ride? “He had a few that were a little too small,” Elwyn Berlekamp, a professor emeritus of mathematics at Berkeley and a co-author of Shannon’s last paper, told me. Shannon sat on Berlekamp’s thesis committee at M.I.T., and in return he asked Berlekamp to teach him how to juggle with four balls. “He claimed his hands were too small, which was true—they were smaller than most people’s—so he had trouble holding the four balls to start,” Berlekamp said. But Shannon succeeded in mastering the technique, and he pursued further investigations with his Jugglometer. “He was hacking reality,” the digital philosopher Amber Case said.

By 1960, however, like the hand of that sly machine, Shannon had retreated. He no longer participated much in the field that he had created, publishing only rarely. Yet he still tinkered, in the time he might have spent cultivating the big reputation that scientists of his stature tend to seek. In 1973, the Institute of Electrical and Electronics Engineers christened the Shannon Award by bestowing it on the man himself, at the International Symposium on Information Theory in Ashkelon, Israel. Shannon had a bad case of nerves, but he pulled himself together and delivered a fine lecture on feedback, then dropped off the scene again. In 1985, at the International Symposium in Brighton, England, the Shannon Award went to the University of Southern California’s Solomon Golomb. As the story goes, Golomb began his lecture by recounting a terrifying nightmare from the night before: he’d dreamed that he was about to deliver his presentation, and who should turn up in the front row but Claude Shannon. And then, there before Golomb in the flesh, and in the front row, was Shannon. His reappearance (including a bit of juggling at the banquet) was the talk of the symposium, but he never attended again.

Siobhan Roberts
The New Yorker

Panama Papers and Open Source Software

According to reports, outdated and vulnerable versions of WordPress and Drupal — both broadly used Open Source Content Management Systems — may be behind the Panama Papers breach.
Tierno S. Bah
Sarah Gooding, WordPress Tavern

Authorities have not yet identified the hacker behind the Panama Papers breach, nor have they isolated the exact attack vector. It is clear that Mossack Fonseca, the Panamanian law firm that protected the assets of the rich and powerful by setting up shell companies, had employed a dangerously loose policy towards web security and communications.
The firm ran its unencrypted emails through an outdated (2009) version of Microsoft’s Outlook Web Access. Outdated open source software running the frontend of the firm’s websites is also now suspected to have provided a vector for the compromise.

In initial communications with German newspaper the Süddeutsche Zeitung (SZ), an anonymous source offered the data with a few conditions, saying that his/her life was in danger.

“How much data are we talking about?” the SZ asked.

“More than anything you have ever seen,” the source said.

The Panama Papers breach is the largest data leak in history by a wide margin, with 2.6 terabytes of data, 11.5 million documents, and more than 214,000 shell companies exposed.

Forbes has identified outdated WordPress and Drupal installations as security holes that may have led to the data leak.

Forbes discovered the firm ran a three-month-old version of WordPress for its main site, known to contain some vulnerabilities. But more worrisome was that, according to Internet records, its portal used by customers to access sensitive data was most likely run on a three-year-old version of Drupal, 7.23.

The current version of the Drupal 7.x branch is 7.43.
A release candidate (Drupal 8.1.0 RC1) of the 8.x branch is available for testing from Drupal.org, pending an April 20th final release. — Tierno S. Bah

This information is partially inaccurate, however. While looking at the site today, I found that the firm’s WordPress-powered site is currently running on version 4.1 (released in December 2014), based on its version of autosave.js, which is identical to the autosave.js file shipped in 4.1. Since that time WordPress has had numerous critical security updates.

The main site is also loading a number of outdated scripts and plugins. Its active theme is a three-year-old version of Twenty Eleven (1.5), which oddly resides in a directory labeled for /twentyten/.

The Mossack Fonseca client portal changelog.txt file is public, showing that its Drupal installation hasn’t been updated for three years. Since the release of version 7.23, the software has received 25 security updates, which means that the version it is running includes highly critical known vulnerabilities that could have given the hacker access to the server. This includes a 2014 SQL injection vulnerability known in the Drupal community as “Drupalgeddon,” which affected every site running Drupal 7.31 or below.
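Neither Forbes nor the investigators have published their tooling, but the kind of fingerprinting described above is easy to sketch: fetch the files a CMS ships publicly and read the version out of them. The following is a minimal, hypothetical illustration in Python; example.com is a placeholder, not Mossack Fonseca’s domain.

import re
import urllib.request

def fetch(url):
    """Return the body of a URL as text, or an empty string on failure."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode("utf-8", errors="replace")
    except OSError:
        return ""

# Drupal 7 ships a public CHANGELOG.txt whose first entry names the version.
changelog = fetch("https://example.com/CHANGELOG.txt")
match = re.search(r"Drupal (7\.\d+)", changelog)
if match:
    print("Drupal version:", match.group(1))

# WordPress discloses its version in readme.html (and in generator meta tags).
readme = fetch("https://example.com/readme.html")
match = re.search(r"[Vv]ersion (\d+\.\d+(?:\.\d+)?)", readme)
if match:
    print("WordPress version:", match.group(1))

Comparing the reported number against the current release, and against the security advisories issued in between, is all it takes to conclude, as above, that a site is running with known vulnerabilities.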

Investigators have not confirmed if the open source software vulnerabilities were used to access the data, but it is certainly plausible given the severity of the vulnerabilities in both older versions of WordPress and Drupal.

“They seem to have been caught in a time warp,” Professor Alan Woodward, a computer security expert from Surrey University, told Wired UK. “If I were a client of theirs I’d be very concerned that they were communicating using such outdated technology.”

If these Open Source software vulnerabilities provided the access point for this massive leak, then this company’s global fiasco was entirely preventable. Although many people welcome the uncovering of corruption and dirty money transactions of famous people and world leaders, the reality is that these kinds of exploits can also be carried out on well-meaning organizations that exist to protect people’s health records, financial data, and other sensitive information.

This leak is not a blow to Open Source software’s credibility but rather underscores how low a priority some companies place on their tech departments and web security. With the rampant software vulnerabilities in this age, not updating software for years constitutes abject neglect of customers.

The bottom line is that software needs to be updated. This kind of routine maintenance is as foundational to a company’s business as brushing teeth or showering is for one’s health. Law firms and companies with such a lax approach to security are either ignorant or unwilling to spend the money to maintain technology that they don’t fully understand. The Panama Papers serve as a reminder that having a competent, skilled tech department is critical for any company that deals in sensitive information.

Sarah Gooding
WordPress Tavern

FCC chairman for strongest net neutrality rules

Tom Wheeler, FCC Chairman, Washington, DC

The chairman of the Federal Communications Commission just said he’s proposing the “strongest open Internet protections” the Web has ever seen.

In a Wired op-ed, FCC Chairman Tom Wheeler announced he wants to regulate Internet providers with the most aggressive tool at his disposal: Title II of the Communications Act.
In addition to covering fixed broadband providers such as Comcast and Time Warner Cable, the draft rules would cover wireless providers such as T-Mobile and Sprint.

The rules would also make speeding up or slowing down Web traffic — a tactic known as prioritization — illegal. And it would ban the blocking of Web traffic outright.

It all adds up to the most significant intervention ever undertaken by federal regulators to make sure the Web remains a level playing field. It is, depending on your ideology, either an unprecedented example of government overreach that will ruin the republic or the most egalitarian, pro-competitive thing the FCC may do in the 21st century.

“My proposal assures the rights of Internet users to go where they want, when they want,” Wheeler wrote, “and the rights of innovators to introduce new products without asking anyone’s permission.”

The FCC is expected to vote on Wheeler’s proposed rules on Feb. 26.

The draft rules seek to impose a modified version of Title II, which was originally written to regulate telephone companies. It will waive a number of provisions, including parts of the law that empower the FCC to set retail prices — something Internet providers fear above all.

However, contrary to many people’s expectations, the draft rules will also keep other parts of Title II that allow the FCC to:

  • enforce privacy rules on carriers
  • extract funds from Internet providers to be used as subsidies
  • make sure services such as Google Fiber can build new broadband pipes more easily, according to people familiar with the plan.

Internet providers won’t be asked to contribute to the subsidy fund, known as Universal Service, right away. The draft rules merely open the door to that obligation down the road should the FCC determine that step is necessary.

[The Universal Service Fund helps schools and libraries buy Internet service and reduces the cost of telephone service for low-income Americans. It also subsidizes connectivity for rural areas. If the FCC later decides to ask Internet providers to pay into the fund, the money would go toward these programs.]

In addition, senior FCC officials confirmed, Wheeler’s draft proposal applies strong rules to the Internet backbone — the part of the Web responsible for carrying Internet traffic to the doorstep of Comcast, Verizon and others before those companies ferry that content to you. The proposal stops short of laying down specific regulations there; it merely lays down the expectation that companies should not favor some Web traffic over others in that part of the network. But under the draft rules, the FCC will reserve the right to investigate deals such as the kind Netflix has signed with Comcast, Verizon and others in the Internet backbone. That’s a huge deal for Netflix.

“This is a historic moment for applying the Communications Act to preserve freedom of expression,” said Gene Kimmelman, president of the consumer group Public Knowledge. “By using targeted non-discrimination policing powers, I think the FCC chairman is doing more today to protect and promote freedom of expression than we’ve seen in decades of debate about how broadband services should be treated.”

The announcement reflects a major turning point for Internet regulation, and a huge moment in the history of the Web. Wheeler’s proposed rules stand to determine what Internet providers are allowed to charge for services, and how.

Wheeler’s proposal has Republican critics seething

“It is a power grab for the federal government by the chairman of a supposedly independent agency who finally succumbed to the bully tactics of political activists and the president himself,” said Sen. John Thune (R-S.D.), the chair of the Senate Commerce Committee, in a statement.

To understand the magnitude of what’s happening, consider this: Since Columbia Law scholar Tim Wu coined the term “net neutrality” in a seminal paper in 2003, the FCC has tried to implement net neutrality rules twice — and failed. Both times, the rules were struck down in court. Now, the FCC is trying a third time. And its leader — a former lobbyist for the cable and wireless industries, no less — appears to be swinging for the fences.

For consumer groups that have been pressing for aggressive rules all along, this is a major victory. It’s a significant setback for Internet providers that wanted the flexibility to try new business models. And importantly, it’s the culmination of a year’s worth of reflection by Wheeler himself, who months ago was in a very different place on the issue.

Wheeler wasn’t always sold on what President Obama said should be the “strongest possible rules” for net neutrality.

Let’s rewind to last January, when a federal court tossed out the FCC’s existing rules on the grounds that the agency had exceeded its congressionally granted authority. In the wake of that ruling, Wheeler said he’d follow the court’s “roadmap” to a solution that would stay on the right side of the law.

In the spring, he rolled out a proposed rule that many ISPs liked but consumer groups hated. The problem? It tacitly allowed for Internet providers to speed up some forms of Web traffic in exchange for payment — a tactic known as paid prioritization. This is the one thing net neutrality rules were supposed to prevent.

The mere possibility of paid prioritization slipping through touched a nerve with grassroots activists, who argued that only Title II would be enough to keep the broadband industry from setting up a tiered Internet favoring wealthy, established businesses.
In a world with paid prioritization, they said, start-ups and small businesses would be shut out of the market because they couldn’t afford to pay ISPs for priority access to customers. They also wouldn’t be able to afford the legal fees associated with filing complaints to the FCC when abuses occurred.

Then came a late-night comedian named John Oliver. Oliver, who’d made a name for himself on “The Daily Show” with Jon Stewart, took on the FCC’s initial proposal with a blistering, 14-minute rant that accused the agency of undermining net neutrality and even lobbed a few bombs at Wheeler himself.

“That is like needing a babysitter and hiring a dingo,” Oliver said. “They shouldn’t call it ‘Protecting Net Neutrality,’ they should call it ‘Stopping Cable Company F***ery.'”

Oliver’s net neutrality segment kicked the grassroots organizing machine into overdrive. Proponents of stronger rules flooded the FCC with millions of comments calling for Title II. By the end of the process, it had become clear that the public had spoken, despite a significant counter-effort by those backing the industry position.

Industry officials admit that they were outmaneuvered by the Internet activists, who kept the pressure on with protests outside the FCC and even a sit-in outside Wheeler’s house that prevented the 6-foot-4 chairman from driving to work in his Mini Cooper.

Meanwhile, other advocates of strong net neutrality were coming forward with alternative proposals that began gaining traction at the FCC in August and September. Mozilla, the maker of the popular Firefox browser, suggested that the FCC split the Internet in two. Apply Title II to the Internet backbone, it said, while leaving the part of the Internet between consumers and their Internet providers untouched under Title I. Tim Wu, the Columbia Law scholar, put forward his own proposal.

Momentum began building for a “hybrid” approach that leaned substantially on these proposals. Quietly, the FCC began talking to Internet providers, consumer groups and Web content companies about a compromise plan. The Wall Street Journal reported in October that a hybrid plan was in the works. It’s still unclear just how close the parties were to an agreement, but people close to the negotiations say the news alarmed the White House, which sought to intervene before the hybrid proposal could really get off the ground.

On Nov. 10, President Obama dropped a major statement on net neutrality — an unusual attempt by a president to influence a legally independent agency. The move set up a partisan confrontation with Republicans in Congress. Many believe that’s what prompted Obama to weigh in in the first place: His party had just lost a midterm election and net neutrality was a strong populist issue Democrats could lead on.

Regardless of Obama’s motivations, his statement had the effect of pushing Wheeler to abandon the hybrid plan and adopt Title II, numerous officials inside and outside the agency said.

“Oliver and the president were probably the two most prominent [turning points],” said an industry official, “and then a series of ongoing drip, drip, drip every day for several months” by grassroots protesters.


Brian Fung
Washington Post (The Switch)