Monday, September 18, 2017

Looking Under the Rock: Equifax's Credit Breach

On Sept. 8, the credit-reporting agency Equifax announced that it had discovered a security breach that compromised the data of over 140 million U. S. consumers.  The company admitted it had found out about the hack on July 29, almost six weeks before the public announcement.  Hackers were able to obtain names, Social Security numbers, addresses, birthdates, and even some driver's license numbers.  The hackers gained access to Equifax's data through a flaw in a piece of open-source web software called Apache Struts.  The cybersecurity arm of the U. S. Department of Homeland Security had released a fix for the Apache Struts flaw back in March, but Equifax didn't apply it thoroughly enough to prevent the hack, which began about two months later, in May.  Equifax is currently being sued and is overwhelmed with consumers requesting freezes of their credit reports so as to prevent hackers from applying for credit in their names.
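The breach is a textbook failure of dependency patching: a fix existed for months, but a vulnerable version stayed in production.  The sketch below shows the basic idea of auditing installed component versions against the first patched release.  The component name and version numbers are illustrative only, not drawn from any Equifax or Apache advisory.

```python
# Toy dependency audit: flag any component whose installed version predates
# the first fixed release.  Names and versions here are made up for
# illustration; a real audit would pull from a vulnerability database.

def parse_version(v):
    """Turn a dotted version string like '2.3.31' into a tuple (2, 3, 31)
    so that versions compare numerically rather than as text."""
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(installed, first_fixed):
    """True if the installed version is older than the first patched release."""
    return parse_version(installed) < parse_version(first_fixed)

# Hypothetical inventory: component -> (installed version, first fixed version)
inventory = {"example-web-framework": ("2.3.31", "2.3.32")}

for name, (installed, fixed) in inventory.items():
    if is_vulnerable(installed, fixed):
        print(f"PATCH NEEDED: {name} {installed} (fixed in {fixed})")
```

Tuple comparison handles mixed-length versions correctly (for example, `2.5.10` sorts before `2.5.10.1`), which is why the string is parsed rather than compared directly.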

Most of the time, the three quasi-monopoly credit-reporting agencies Equifax, TransUnion, and Experian are largely invisible to the public eye.  They don't sell their products directly to consumers—their customers are banks, loan companies, and other extenders of consumer credit.  The only time you as a consumer have any dealings with one of the Big Three may be when you apply for a home loan or car loan.  The rating you receive from a credit agency can mean the difference between buying a home and renting for the rest of your life, or being able to borrow more money on a credit card without paying ruinous interest.  So although there's not much you can do to affect what the agencies say about you, they hold considerable financial power over you.  The least you can expect from them is to act as responsible guardians of the highly personal data they accumulate under your name.  And Equifax's data breach betrayed that trust.

This is an odd situation, but it has come about through the nature of our consumer-credit-intensive economy.  Back in the nineteenth century, when consumer credit was most often an informal arrangement between a general-store customer and the owner who knew the customer personally, there was no widespread need for consumer credit information.  However, commercial firms were interested enough in the creditworthiness of other firms that the "Mercantile Agency" of Dun, Barlow & Co. arose.  By 1876, this firm had a network of informants all across America, typically small-town lawyers, who periodically sent reports on local merchants to headquarters in New York City.  The reports were compiled and printed in a quarterly Reference Book to which interested credit-extenders subscribed.

Dun, Barlow & Co. eventually became Dun & Bradstreet, a firm which still provides financial data on commercial firms today.  But then as now, credit agencies sell information about others to paying customers, and it is in their self-interest to protect that information from compromise.  In this, Equifax has signally failed.

I have previously discussed in this space the qualities that any company caught in a crisis should have.  Among these are prompt action and transparency.  So far, Equifax has stumbled on both counts.  While it takes a certain amount of time to apply patches to large software systems such as Equifax runs, data security is the essence of their business, and the two-month delay between learning about the Apache Struts flaw in March and the time when the data breach began in May was too long.  It took Equifax another two months to discover the breach, and then six more weeks went by before they announced to the public that it had happened.  Such delays might be excusable in a mom-and-pop grocery store, but not for one of the three largest credit-reporting firms in the U. S.
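To see how the delays stack up, the timeline can be put into code.  Only the months of the patch (March) and of the breach's start (May) are given above, so the specific days used below for those two events are assumptions for illustration; July 29 and the early-September announcement come from the post.

```python
from datetime import date

# Milestones in the Equifax breach.  The exact days in March and May are
# assumed for illustration; the post gives only the months.
patch_released = date(2017, 3, 7)   # assumed day in March
breach_began   = date(2017, 5, 13)  # assumed day in May
breach_found   = date(2017, 7, 29)  # stated in the post
announced      = date(2017, 9, 8)   # stated in the post

print("Unpatched window: ", (breach_began - patch_released).days, "days")
print("Breach undetected:", (breach_found - breach_began).days, "days")
print("Disclosure delay: ", (announced - breach_found).days, "days")
```

With these assumed days, the disclosure delay works out to 41 days, which squares with the "almost six weeks" between discovery and announcement.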

What can you as a consumer do if you think your data may have been compromised?  Equifax has announced the waiver of the usual ten-dollar fee for a credit freeze, and if you can manage to push your way through their clogged website and phone tree to request one, that is one thing you can do.  And at least one law firm has announced its intention to launch a class-action lawsuit on behalf of all 140 million Americans affected by the breach.  But neither of these things will address the fundamental structural problem:  too much of our personal information is stored in places that are too vulnerable to unscrupulous hackers.

If (as is possible) it turns out that the hackers were not based in the U. S., there is an international twist to this tale.  In that regard, the Department of Homeland Security deserves kudos for doing what it ought to be doing:  finding ways that hackers can attack U. S. interests and helping private firms prevent such attacks.  But if the private firms drop the security ball, the government has wasted its time telling them about the problem.

In general, I regard government regulation as a last resort when other measures fail.  But as firms get larger and affect more and more people in a country, it's probably appropriate for them to come under the regulation of that country's government.  There is always going to be some kind of relationship between large firms and government, but that relationship can be either benign or malign for the consumer.  The pre-breakup Bell System was allowed to monopolize telecommunications in the U. S. until the 1980s, and in turn it accepted close government supervision and regulation of its tariffs and profits.  It may not have been the most innovative telecom service in the world, but it was stable, predictable, and reliable.

It may be time to require the Big Three credit agencies to submit to some kind of data-integrity requirement, or to face penalties for data breaches severe enough to make them clean up their act.  But our track record of penalizing these types of agencies for past mess-ups is poor.  One need only think back to the housing-bubble collapse of 2008, in which commercial rating agencies were gold-plating financial instruments that looked as solid as a rock until the bubble burst and knocked them over, revealing a nest of roaches and scorpions underneath.

Equifax is at best guilty of incompetence.  Perhaps the marketplace will punish it enough to make it mend its ways.  But it may be time to re-examine some of our basic assumptions about the responsibilities of private credit-rating firms in our consumer economy.  And in the meantime, keep an eye on your credit rating.

Sources:  I referred to an article on the CNN website, a New York Times column by Ron Lieber posted on Sept. 14, and the Wikipedia articles on Equifax, Dun & Bradstreet, and credit freezes.  My information on Dun, Barlow & Co. in 1876 comes from p. 41 of a reproduction issue of the Asher & Adams Pictorial Album of American Industry (1876) published in 1976 by Rutledge Books.

Monday, September 11, 2017

Mr. Damore, Welcome To the Prophet Club

In the Bible, being a prophet was not a sought-after job.  Prophets were chosen by God to deliver messages that more often than not turned out to be unwelcome.  And sooner or later, the same lack of welcome greeted the prophet himself as he stood in the city gate telling the people things they didn't want to hear.  Bad things tended to happen to prophets when they got on the wrong side of the establishment.  The prophet Jeremiah, after telling King Zedekiah to surrender to the attacking Babylonians, was accused of treachery and thrown into a muddy well, where he was left to die.  Only the intervention of a friendly official rescued him from a miserable death.

I don't think former Google engineer James Damore has any special line to the Almighty, but by now he has experienced the same thing that the biblical prophets discovered:  say things that the leadership doesn't want to hear, and sooner or later you're going to pay for it.  After he posted a ten-page memo entitled "Google's Ideological Echo Chamber," in which he criticized the atmosphere created by gender-diversity programs at his company, the Internet lit up with a storm of attacks on him, and Google ended up firing him.  But exactly what did he say?  First, some background.

Like many companies these days, Google has initiatives and programs in diversity, including ones that attempt to change the fact that the percentage of women in computing is about 24%, according to an organization called Girls Who Code.  The desired change, naturally, is an increase to something closer to the representation of women in the overall U. S. population, which is 50.8%. 
I say "naturally" because there is a widely held assumption that when the percentage of women in a desirable field of endeavor—CEO suites, being rich, holding political office, or working at any job that the culture perceives to be desirable—falls below 50.8%, this proves that there is injustice somewhere that needs to be rooted out so that the percentage will more closely approach the magic 50.8%. 

If you look at this assumption on its own in the cold light of logic, you can start to see some holes in it.  Some of the highest-paying jobs in the country are in professional sports.  Where are the protests that there aren't any women playing for the Green Bay Packers?  I don't want to start a trend, you understand.  And professional football itself is losing popularity in view of the revelations of long-term brain damage it can cause.  But the point is that many of the assumptions and assertions surrounding issues of gender diversity are based on something besides mathematically exact logic.  And that's a good thing, because logic and undisputed facts can take you only so far.  Something else is needed in order to discuss these matters intelligently:  an ability to articulate the foundations of one's moral judgments.  But these days, that ability is much rarer than the ability to code.

I have read Mr. Damore's memo, and at one point he refers to "moral biases."  Judging from his words, he is neither a political scientist nor a philosopher, but he recognizes that more than logic is required to deal with human-relations issues such as diversity and gender roles.  In his memo, he wrote some things that are undoubtedly unpopular in the Silicon Valley setting of Mountain View:  "On average, men and women biologically differ in many ways."  He cites personality differences that women show compared to men, many of which are positive:  agreeableness, ability to work in teams, and so on.  And he admits that males tend to rank higher on aggressiveness and the willingness to put in long unpleasant hours to get ahead in an organization.  He winds up his memo with a recommendation to "[h]ave an open and honest discussion about the costs and benefits of our diversity programs." 

It is a matter of public record that Mr. Damore was let go by Google shortly before Aug. 7.  Legally speaking, Google probably broke no law in firing him, as California has what is called "at-will employment," which means an employer can fire you at any time for any reason, or no reason at all.  Nevertheless, firing him doesn't contribute to an atmosphere at the company that would encourage an open and honest discussion about the costs and benefits of diversity programs.

Along with Mr. Damore's memo, the website Gizmodo posted a statement from Google's diversity officer, in which she said of the memo, "I found that it advanced incorrect assumptions about gender."  But she didn't say what those incorrect assumptions were.

Engineers are trained to be logical, using known facts about the world to create useful products.  But human life is about more than logic and reasoning.  What Mr. Damore calls "moral biases" are really each person's conclusions, drawn from his or her world view, about what constitutes right and wrong.  And while "Googlers" (as they call themselves) may be mental giants when it comes to logic, programming, and the skillful exploitation of the Internet to generate revenue, neither Mr. Damore nor his opponents in the company are able to articulate the bases of their moral principles any better than they could when they were in high school, or perhaps earlier. 

Instead of a reasoned debate based upon clearly expressed moral principles, what happened when Mr. Damore posted his memo was the Internet equivalent of a riot, at which point Google called in their human-resources cops to quell the riot by arresting (firing) the riot's instigator—the cyberspace equivalent of dumping Mr. Damore down a muddy well.  He won't die from it, but he's certainly been soiled in the sight of many.  And it's far from clear that the conservative media outlets which have started to lionize Mr. Damore as a martyr to their causes will encourage meaningful debates about gender diversity either.  Mr. Damore may have left one echo chamber only to walk into another one of a more conservative bent. 

It's possible to have a reasonable, logical debate about gender diversity, but only if everyone can lay their moral cards on the table first.  And these days, we lack the vocabulary and often the courage to do so.

Sources:  I referred to reports about James Damore's firing carried by the San Jose Mercury-News and Bloomberg News.  The percentage of women who code comes from Girls Who Code, and Gizmodo carried Mr. Damore's original memo and the response by Google's diversity officer.  The story of what happened to Jeremiah after he said unpopular things is in the 38th chapter of the Old Testament book of the same name.

Monday, September 04, 2017

Arkema's Crosby Nightmare: The Price of Ignorance

If you lived within three miles of a chemical plant where dangerous substances were being made or handled, maybe you wouldn't want to know all the details.  But I bet you'd like first responders in the area to know what was there so they could take appropriate actions if anything went wrong. 

Well, Texas is a good place to live in many ways, but about 3,800 people living within a 3-mile radius of the Arkema chemical plant in Crosby, Texas probably wish they lived somewhere else right now, that is, if they haven't already evacuated because of the record-breaking floods from Hurricane Harvey.  Because several refrigerated trailers parked at the plant are full of chemicals that must be refrigerated to keep them from exploding.  And as the plant flooded shortly after Harvey hit, the power went out, the trailers started warming up, and one of them has gone up in flames already, sending noxious smoke into the neighborhood and forcing 18 first responders to seek medical attention.  And beyond a few general statements from the plant owners about organic peroxides, the public still doesn't know what is in those trailers.

Here's what apparently happened, as I have gathered from news reports. 

Arkema is a multinational chemical-manufacturing conglomerate based in France.  Its Crosby plant is a few miles northeast of the center of town on U. S. 90, and Crosby is northeast of Houston.  The setting is suburban, not really rural, as the 3,800 people within a 3-mile radius can attest. 

Texans are used to chemical plants.  They provide jobs and tax revenues, and while the smells and other hazards associated with chemical plants are drawbacks, the extreme safety precautions taken by most plant operators mean that millions of dollars' worth of chemicals are produced every day in the state without incident, under normal circumstances.  But Hurricane Harvey was anything but normal.

Organic peroxides are extremely reactive chemicals that are used in the polymerization of plastics, among other processes.  While I am no chemist and don't know any more about them than any other random resident of Crosby, I can well believe that some of them are so reactive that you have to keep them cooler than room temperature or else they will decompose violently, leading to an explosion.  Handling such stuff is a challenge, naturally, sort of like shipping frozen fish around, except instead of spoiling if it warms up, it blows up in your face.  So the plant no doubt has a lot of refrigeration machinery to keep its processes cold enough to preserve the nasty stuff, and refrigerated semi-trailers to take it where it needs to go—namely, other chemical plants that are equipped to keep the chemicals cold until they are used. 
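The race between a failed refrigerator and a decomposition threshold can be framed with Newton's law of cooling, run in reverse as warming: the cargo temperature approaches ambient as T(t) = T_amb + (T_0 − T_amb)·e^(−t/τ).  The sketch below solves for the time until a danger threshold is crossed.  Every number in it is made up for illustration; real self-accelerating-decomposition temperatures and trailer insulation values would come from the chemical's data sheet and the equipment specs.

```python
import math

# Toy estimate of how long a dead refrigerated trailer takes to warm its
# cargo to a dangerous temperature, using Newton's law of cooling:
#   T(t) = t_ambient + (t_start - t_ambient) * exp(-t / tau)
# Solving T(t) = t_threshold for t gives the formula below.

def hours_until_threshold(t_start, t_ambient, t_threshold, tau_hours):
    """Hours until the cargo warms from t_start to t_threshold, given
    ambient temperature t_ambient and a thermal time constant tau_hours."""
    return tau_hours * math.log((t_ambient - t_start) /
                                (t_ambient - t_threshold))

# Assumed numbers, purely illustrative: cargo held at 0 degrees C, Texas summer
# ambient of 35 C, trouble above 25 C, and a 12-hour trailer time constant.
print(round(hours_until_threshold(0.0, 35.0, 25.0, 12.0), 1), "hours")
```

The point of the toy model is the shape of the problem, not the numbers: once power fails, the safety margin is a fixed and fairly short clock, which is why a skeleton crew babysitting generators mattered so much.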

All this has gone on up to now without any major incidents, although Arkema has reportedly been cited by regulators several times in the past for safety infractions.  Then last week came forecasts that Harvey, which was only a tropical depression as late as the Tuesday before it struck, was heading toward the Houston area.

The Arkema operators apparently decided that they ought to shut down the plant and move the existing stock of explosive chemicals off site in trailers.  So at some point before the hurricane hit, they loaded nine refrigerated semi-trailers with volatile chemicals that needed to be kept cool, and connected the refrigeration machinery to the local power utility, which was backed up by emergency power onsite. 

A skeleton crew of 11 stayed through the storm, making sure the power was still on to the trailers and switching to emergency power generators when the local utility power failed.  Then the water started to rise, and on Tuesday Aug. 29, the crew was ordered to evacuate, leaving the trailers behind.  As of today (Sunday, Sept. 3) one trailer has exploded, and the others are expected to go at any time.

At that point, one can question why they didn't take the trailers with them.  A number of reasons come to mind:  (1) they didn't have enough tractors (trucks) to haul them out, (2) the flood waters were so high that it might not have been possible to drive away from the plant in such heavy vehicles, (3) the prospect of dragging potentially explosive stuff all over flooded Houston was worse than leaving it there.  For whatever reason, the crew left the chemicals behind, and shortly thereafter Arkema officials announced that within a few days, the chemicals would of a certainty explode and make quite a mess.

An article in the Austin American-Statesman raises the question of why the contents of the plant have not been made public.  Every such plant has to file with the U. S. Environmental Protection Agency what's called a Tier II report, detailing the chemicals it makes or uses.  But under current law, it's not easy to gain access to that report.  You have to go to special reading rooms in Federal buildings, and you can't photocopy the reports.  After the 2013 ammonium-nitrate explosion in West, Texas that killed more than a dozen people, the Obama administration proposed requiring Tier II reports to be in a user-friendly format more widely available to the public.  But manufacturers and Texas regulators opposed this proposal, saying that such information could be used by potential terrorists.  And the new EPA administrator under the Trump administration, Scott Pruitt, has agreed to delay the change by at least two years.

I said it after West and I'll say it again.  It is stupid that a Tier II report, or something like it, is not made available to first responders near any plant which they might reasonably be expected to respond to.  If you can't trust your local firemen to keep a secret, who can you trust?  And to me, the terrorist excuse sounds phony.  The more likely reason companies don't want Tier II reports released to the public is either out of concerns that competitors will use it, or that environmental protest groups will use the presence of certain chemicals to bring pressure to bear to shut the plant down. 

These concerns are legitimate, but they do not outweigh the need of those sworn to protect the public from harm to know what they are up against.

So far, nobody has died as a result of the Arkema plant explosions.  But there are more trailers waiting to go off, and we still don't know what's in them.

Sources:  I referred to reports on the Arkema explosions and hazards that were carried by Houston TV stations.  The Sept. 2 print edition of the Austin American-Statesman carried an article by Jeffrey Schwartz entitled "Information scarce on chemical plant blasts," on p. A11.

Monday, August 28, 2017

Chicago Objects to Driverless Cars

You know a technology's beyond the infancy stage when politicians start paying attention to it.  Driverless cars such as those being test-fielded by Waymo and other firms have upset a couple of Chicago aldermen to the extent that they have tried to enact a ban on them, saying they're too dangerous.  In an editorial as remarkable for its brevity as for its directness, the Chicago Sun-Times claims to take the side of the future in a piece titled "Driverless cars on the road to the future."

According to the Sun-Times, the aldermen—Ed Burke and Anthony Beale—are worried about jobs.  More perhaps than many other large U. S. cities, Chicago is a working-class town, and drivers of many kinds—cabbies, truck drivers, delivery people, airport personnel—make up a significant number of voters.  As those people's representatives, the aldermen should take the concerns of their constituents seriously.  And if driverless cars threaten jobs, well, trying to do something about it is within the rights of a reasonable politician.  There's a whiff of hypocrisy in claiming safety concerns about a matter that's really more about jobs, but no more than usual in today's political environment.

The editorial writers take the side of Illinois's Governor Bruce Rauner, who may sign a bill that would prohibit Chicago and other cities from enacting a ban on driverless cars.  Conflicts between municipalities and state governments seem to be cropping up more frequently these days, typically with big cities taking more progressive positions and getting reined in by more conservative state legislatures and governors (Rauner is a Republican). 

If driverless cars become a significant percentage of cars on the road, that will mark one of the biggest technological changes in transportation since the introduction of the automobile.  We've been told that it will come with tremendous advantages compared to today's status quo:  lower accident rates, less traffic congestion, and the freedom to use your commute time for things other than steering and using the brake pedal a lot.  And probably the most visible downside is what it will do to the job market for paid drivers.  Every car on the road that used to need a driver but doesn't anymore represents a potential lower- to middle-class job that becomes history. 

The basis on which the editorial favors driverless cars is what historians call the Whig theory of history.  This is the idea that the farther back you go, the worse things were, and that human history is an unbroken series of triumphs over ignorance and primitive ways that will eventually issue in Paradise on earth.  After the horrors of the twentieth century (World War II, to name one), anyone who gives the matter a moment's thought will begin to see holes in the Whig theory.  But it's been around so long that it has become a kind of cliché idea that some writers spout automatically. 

The editorial writers claim that Chicago will become a "cow town" if it doesn't accept driverless cars, and they cite NASA's use of computers to get a man to the moon as an example of how computerized transportation is a good idea.  Well, these are not so much arguments as they are assertions and bad analogies.  The editorial winds up by saying no one banned Model Ts to protect jobs for blacksmiths, and "Old occupations may fade, but new ones come along."  The overall thrust of the article is basically to say, "Deal with it, and don't do something stupid that will make Chicago look like some kind of backward provincial hick town."

Beneath this rather trivial discussion are a number of serious questions.  What role should government play in the deployment of driverless cars?  Should their regulation be at a local, regional, state, or national level, or some combination of the preceding?  What, if anything, should be done to protect the jobs of people whose livelihood is threatened by the advent of driverless cars?  How can we get from the level of automotive safety we have now to a better one by implementing driverless cars, without running into some emergent large-number problem that could cause a significant increase in serious accidents, injuries, and deaths?  Almost none of these questions were addressed by the editorial writers, but if they were operating under a house rule that prohibits editorials of much more than 300 words, well, there's not a lot you can say in 300 words. 

By this point you may have the impression that I favor a ban on driverless cars.  I don't favor a ban, and I don't favor a law against a ban.  What I favor is a serious, in-depth discussion of the questions regarding driverless cars, and at least in the case of this editorial by one of the nation's major newspapers in a city of 2.7 million people, I don't see signs of that. 

There are many signs that the elusive thing called unity is on the decline in this country.  The degree to which citizens trust government to do the right thing has fallen precipitously compared to where it was several decades ago.  And people don't trust a system that they don't understand or feel that they cannot influence when it allows or encourages things that can cause them harm. 

Perhaps the two Chicago aldermen responded to their constituents in the wrong way, but at least they saw a genuine threat to jobs and went about trying to do something about it in response.  Back when the average person could read things that required more than one step of logic to understand, there were typically a few local newspapers to read in any major city, two or three TV and radio networks, and maybe the newsreel at the movie house.  That was it, as far as finding out what was going on in the world, and as a result, those who operated the media took care to use it with a reasonable degree of responsibility, and something like rational debate about great public matters of interest could be carried on.

But now, the media have fractured into a million tweets, Instagrams, and other detritus of electronic communications, most of which are too short to convey anything more than a burst of emotion.  The Chicago paper's editorial is only 300 words long because they probably know from experience that people won't read 1000-word editorials anymore, if they read anything at all.  And talking heads in video clips are pushing out text-based media of all kinds anyway.

I hope we as a nation, and the city of Chicago in particular, both reach a beneficial accommodation to the advent of driverless vehicles that will benefit most people while injuring as few as possible.  But to get there in a way that makes people feel included, the discussion will have to be at a higher level than the Sun-Times has set for us.

Sources:  The editorial titled "Driverless cars on the road to the future" appeared on the Chicago Sun-Times website on Aug. 25, 2017.

Monday, August 21, 2017

Cyber Command Gets a Promotion

On Friday, Aug. 18, President Trump announced that the Defense Department's U. S. Cyber Command would be elevated to the status of a "unified combatant command," joining the nine existing commands such as U. S. Central Command (CENTCOM), which oversees all military operations in the Middle East, and U. S. Strategic Command, which is in charge of nuclear weapons.  The heads of these commands are just below the Secretary of Defense in the chain of command, and each unified combatant command cuts across the traditional armed-services divisions of army, navy, and air force.

According to a report at the website Politico, the promotion of the Cyber Command has been in the works for years, but carrying it out is in line with the President's campaign promises to bolster the Cyber Command.  Currently the Command is headed by Admiral Mike Rogers, who also heads the National Security Agency (NSA).  The Senate must confirm a new Cyber Command leader before the reorganization is fully implemented, but no particular problems are expected on that score.

After taking an initial leadership position, the U. S. has appeared lately to be lagging in recognizing that cyberwarfare is no longer some science-fiction pipe dream.  The nature of cyberwarfare makes it difficult to state with certainty exactly who is responsible for what.  But most experts agree that, for example, Russia has been plaguing Ukraine with cyberattacks of many kinds for the last few years, ranging from invading servers used by news media to causing widespread power blackouts in large cities such as Kiev in the middle of winter.

Probably the first cyberattack that became widely known and has a reasonably firm attribution was Stuxnet.  Believed to have been developed by the U. S. NSA, possibly with cooperation from Israel, it was a clever attack on Iran's uranium centrifuges in 2010 that caused many of them to self-destruct.  Stuxnet is the last major focused cyberattack we know of that the U. S. has carried out, but by the nature of the business, there may be others we don't know about yet.

In conventional warfare, the enemy is in a clearly defined geographical area, and even wears uniforms and puts insignia on their equipment so you can tell who the good guys and the bad guys are.  Alas, such formality is long gone from many battlefields, and in the anonymous world of cyberspace it is next to impossible to trace an attack to a physical location and the people behind it.  In this regard cyberwarfare borrows from the world of espionage the mysteries and guesswork that make spy novels so interesting, and actual espionage work so frustrating.

But just because the enemy can't always be clearly identified, that doesn't mean we can ignore what they can do.  There is an old saying that generals always prepare to fight the last war, meaning that military thinkers are slow to deal with combat innovations.  The elevation of the Cyber Command to a level equal to the Strategic Command says that, organizationally at least, we are taking the threat of cyberattacks and the damage they could cause at least as seriously as we are taking the threat of nuclear attacks, which are far less likely but have a higher potential for damage.

Or maybe not.  At any given time, there is probably a maximum amount of damage that a determined cyberattacker could do with the capabilities they have and the nature of the target.  One advantage that the U. S. has compared to smaller and more tightly organized countries is that we have a lot of diversity in our technical infrastructure.  For example, in the recent flap about Russia's attempt to sway U. S. elections, no one has found any convincing evidence that Russian hackers were able to manipulate electronic vote counting.  Even if they had wanted to, the hackers face the difficulty that votes are counted in literally thousands of different jurisdictions using a wide variety of systems.  Anybody wanting to mess with a voting district that was big enough to make a difference would probably have to have a spy physically present for some time in order to gather enough information to give a cyberattack even a chance of success.  Something of the same principle applies to our electric grid, which is a congeries of old and new technology with a bewildering variety of SCADA (supervisory control and data acquisition) systems.  Again, a determined cyberattacker would have to focus on one system that is particularly vulnerable and large enough to make a terrorist attack worthwhile in terms of headlines.

Despite these built-in defenses, the U. S. should not be complacent with regard to the possibility of a crippling cyberattack, and the promotion of the U. S. Cyber Command to the board of Unified Combatant Commands is a step in the right direction.  As I mentioned not long ago in a blog on ransomware, one of the U. S. government's primary responsibilities is to defend the nation against attacks, and this includes cyberattacks.  The spectacle of private companies, even small ones, getting held up for ransom by hackers is morally equivalent to a cross-border raid by physical invaders.  What would normally be a domestic police matter then becomes an international incident, and the intervention of the U. S. military would be appropriate in both cases.

But a lot is yet to be defined about the responsibilities of the military on the defense side.  Historically, the computer industry has held consumers responsible for cybersecurity to the extent of installing patches and upgrades promptly and following good cybersecurity "hygiene."  But as attacks become more sophisticated, there may have to be closer cooperation among private technology developers, their customers, and the military, which up to now has not had much input into the business except as a good customer. 

If history is any precedent, not much will change in a major way until a foreign cyberattack succeeds with a truly crippling blow that costs many billions of dollars, affects millions of people, or results in multiple deaths and injuries.  Then we will get serious about how the military can fight the next war—a cyberwar—and not the last one.

Sources:  A news story entitled "Trump elevates U.S. Cyber Command, vows 'increased resolve' against threats" appeared on Aug. 18, 2017.  I also referred to an article in Wired Magazine published June 20, 2017, and to the Wikipedia article on Unified Combatant Command.  My blog on ransomware appeared on Mar. 27, 2017.

Monday, August 14, 2017

The Ethical Spin on Spinners

The first time I saw one in a store, I couldn't figure out what it was for and I had to ask my wife.  "Oh, that's a fidget spinner," she said.  "You don't need one."  She's right there.

As most people under 20 (and a few people over 60) know, fidget spinners are toys that you hold between your finger and thumb and spin.  That's it—that's the whole show.  When the fad showed signs of getting really big, somebody rushed battery-powered, Bluetooth-enabled spinners into production.  My imagination obviously doesn't run in mass-marketing directions, because I couldn't think of what adding Bluetooth to a spinner could do.  Well, a quick Amazon search turns up spinners with little speakers in each of the three spinning lobes (playing music from your Bluetooth-enabled device), spinners with LEDs embedded in them and synced to the rotation somehow so that when you spin it, it spells out "I LOVE YOU," spinners with color-organ kind of LEDs that light in time to music—you name it, somebody has crammed the electronics into a spinner to do it.

But all these electronics need super-compact batteries, and where there are batteries, there's the possibility of fire.  Already, there have been a couple of reports of Bluetooth-enabled spinners catching on fire while charging.  No deaths or serious injuries have resulted, but the U. S. Consumer Product Safety Commission (CPSC) has put out a nannygram, as you might call it:  don't overcharge the spinner, don't plug it in and leave it unattended, don't use a charger that wasn't designed for it, and so on.  I am not aware that teenagers are big fans of the CPSC website, but nobody can say the bureaucrats haven't done their job on this one.

The Wikipedia article on spinners discounts claims that they are good for people with attention-deficit disorder, hyperactivity, and similar things.  Seems to me that holding a spinning object in your hand would increase distraction rather than the opposite, and some high schools have agreed with me to the extent of banning the devices altogether. 

As a long-time manual tapper (no equipment required), I think I can speak to that aspect of the matter from personal experience.  Ever since I was a teenager or perhaps before, I have been in the habit of tapping more or less rhythmically on any available surface from time to time.  My wife is not exactly used to it—she will let me know now and then when it gets on her nerves—but it's no longer a huge issue between us.  Often when she asks me to stop, it's the first time I've fully realized I'm doing it, and that's part of the mystery of tapping or doing other habitual, useless things with your hands.

The most famous manual fidgeter in fiction was a character in Herman Wouk's World War II novel The Caine Mutiny, Captain Philip F. Queeg, who had the habit when under stress of taking two half-inch ball bearings out of his pocket and rolling them together.  (Queeg lived in an impoverished age when customized fidget toys were only a distant dream, so he had to use whatever fell to hand, so to speak.)  During the court-martial that forms the heart of the novel, a psychiatrist is called to the stand to speculate on the reasons for Queeg's habit of rolling balls.  The doctor's comments ranged from the sexual to the scatological, and will not be repeated here.  But it appears that psychology has not made much progress in the last seventy years to find out why some people simply like to do meaningless motions with their hands.  That hasn't kept a lot of marketing types from making money off of them.

Fidget spinners are yet another example of the power of marketing to get people to buy something they didn't know they wanted till they saw one.  I don't know what the advertising budget was for the companies that popularized the toy, but I suspect it was substantial.  For reasons unknown to everyone but God, the thing caught on, and what with Bluetooth-enabled ones and so on, the marketers are riding the cresting fad wave for all it's worth before it spills on the beach and disappears, as it will.  Somehow I don't think we're going to see eighty-year-olds in 2100 taking their cherished mahogany spinners out of felt-lined boxes for one last spin before the graveyard.

Like most toys, fidget spinners seem to be ethically benign, unless one of them happens to set your drapes on fire.  Lawsuits are a perpetual hazard of the consumer product business, but the kind of people who market fad products are risk-takers to begin with, so it's not surprising they cut a few corners in the product safety area before rushing to the stores with their hastily designed gizmos.  By the time the cumbersome government regulatory apparatus gets in gear, the company responsible for the problematic spinners may have vanished.  Here's where the Internet and its viewers' fondness for exciting bad news can help even more than government regulations.  When hoverboards started catching fire a year or two ago, what kept people from buying more of the bad ones wasn't the government so much as it was the bad publicity the defective board makers got on YouTube.  And that's a good thing, when consumers who get burned (sometimes literally) can warn others of the problem.

As for Bluetooth-enabled spinners, well, if you want one, go get one while you can.  They'll be collectors' items pretty soon.  And those of us who learned how to cope with tension the old-fashioned way by drumming on a tabletop can at least rest assured that they aren't going to take our fingers or tabletops away.  But they might tell us to stop tapping.

Sources:  Slate's website carried the article "New Fidget Spinner Safety Guidelines Prove We Can’t Have Nice Things" by Nick Thieme.  I also referred to the Wikipedia article on fidget spinners.  Herman Wouk's Pulitzer-Prize-winning novel The Caine Mutiny was published in 1952, and led to a film of the same name starring a considerably miscast Humphrey Bogart.

Monday, August 07, 2017

Giulio Tononi and His Consciousness Meter

If you're reading this, you're conscious of reading it.  Consciousness is something most of us experience every day, but for philosophers, it has proved to be a tough nut to crack.  What is it, exactly?  And more relevant for engineers, can machines—specifically, artificially intelligent computers—be conscious? 

Until recently, questions like this came up only in obscure academic journals and science fiction stories.  But now that personal digital assistants such as Siri are enjoying widespread use, the issue has fresh relevance both for consumers and for those developing new AI (artificial intelligence) systems.

Philosophers of mind such as David Chalmers point out that one of the more difficult problems relating to consciousness is explaining the nature of experiences.  Take the color red, for example.  Yes, you can point to a range of wavelengths in the visible-light spectrum that most people will call "red."  But the redness of red isn't just a certain wavelength range.  A five-year-old child who knows his colors can recognize red, but unless he's unusual he knows nothing about light physics and wavelengths.  Yet when he sees something red, he is conscious of seeing something red.

One popular school of thought about the nature of consciousness is the "functionalist" school.  These people treat a candidate for consciousness as a black box and imagine having a conversation with it.  If its answers convince you that you're talking with a conscious being, well, that's as much evidence as you're going to get.  By this measure, some people probably already think Siri is conscious.

Now along comes a neuroscientist named Giulio Tononi, who has been working on something he calls "integrated information theory" or IIT.  It has little to do with the kind of information theory familiar to electrical engineers.  Instead, it is a formal mathematical theory that starts from some axioms that most people would agree on concerning the nature of consciousness.  Unfortunately, it's pretty complicated and I can't go into the details here.  But starting from these axioms, he works out postulates and winds up with a list of characteristics that any physical system capable of supporting consciousness should have.  The results, to say the least, are surprising.
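For readers who want at least a taste of the formalism, here is a schematic of the flavor of the theory, based on the earlier (2004-era) formulation; the details and notation vary considerably across versions of IIT, so treat this as a sketch rather than Tononi's exact definitions.  The theory assigns a system S a quantity Φ, the "effective information" EI that the system as a whole generates over and above its parts, evaluated across the partition that divides the system least informatively (the "minimum information partition," or MIP):

$$
\Phi(S) \;=\; \mathrm{EI}\bigl(S \to P^{\mathrm{MIP}}\bigr),
\qquad
P^{\mathrm{MIP}} \;=\; \arg\min_{P} \frac{\mathrm{EI}(S \to P)}{N(P)}
$$

Here EI(S → P) measures how much the whole system's cause-effect structure exceeds what its parts, cut apart along partition P, could do independently, and N(P) is a normalization term so that partitions of different sizes can be compared fairly.  A lookup table scores near zero on such a measure no matter how clever its outputs, which is the mathematical teeth behind Tononi's claim about conventional computers.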

For one thing, he says that while current AI systems that are implemented using standard stored-program computers can give a good impression of conscious behavior, IIT shows that their structure is incapable of supporting consciousness.  That is, if it walks like it's conscious and quacks like it's conscious, it isn't necessarily conscious.  So even if Siri manages to convince all its users that it's conscious, Tononi would say it's just a clever trick.

How can this happen?  Well, philosopher John Searle's "Chinese room" argument may help in this regard.  Suppose a man who knows no Chinese is nevertheless in a room with a computer library of every conceivable question one can ask in Chinese, along with the appropriate answers that will convince a Chinese interrogator outside the room that the entity inside the room is conscious.  All the man in the room does is take the Chinese questions slipped under the door, use his computer to look up the answers, and send the answers (in Chinese) back to the Chinese questioner on the other side of the door.  To the questioner, it looks like there's somebody who is conscious inside the room.  But a reference library can't be conscious, even if it's computerized, and the only candidate for consciousness inside the room—the man using the computer—can't read Chinese, and so he isn't conscious of the interchange either.  According to Tononi, every AI program running on a conventionally designed computer is just like the man in the Chinese room—maybe it looks conscious from the outside, but its structure keeps it from ever being conscious.
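The purely mechanical character of the room can be sketched in a few lines of code.  The toy responder below (everything in it—the questions, the answers, the function name—is a hypothetical placeholder, rendered in English for readability) produces plausible conversation entirely by table lookup, with no representation of meaning anywhere in the system:

```python
# Toy illustration of Searle's Chinese room: a "conversationalist"
# that answers purely by table lookup.  Nothing here understands
# anything; symbols in, symbols out.

RULE_BOOK = {
    "How are you today?": "Very well, thank you.",
    "What is your favorite color?": "I have always been fond of red.",
    "Are you conscious?": "Of course I am -- why do you ask?",
}

def room_reply(question: str) -> str:
    """Return the scripted answer for a question, exactly as the man
    in the room would: match symbols, copy symbols, understand nothing."""
    return RULE_BOOK.get(question, "Could you rephrase that?")

if __name__ == "__main__":
    print(room_reply("Are you conscious?"))
```

However large you make the table, the structure stays the same, which is why, on Tononi's account, scaling up a lookup-driven program never gets you any closer to consciousness.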

On the other hand, Tononi says that the human brain—specifically the cerebral cortex—has just the kind of interconnections and ability to change its own form that is needed to realize consciousness.  That's good news, certainly, but along with that reassurance comes a more profound implication of IIT:  the possibility of making machines whose consciousness would not only be evident to those outside, but could be proven mathematically.

Here we get into some really deep waters.  IIT is by no means universally accepted in the neuroscience community.  As one might expect, it's rather unpopular among AI workers who think either that consciousness is an illusion, or that brains and computers are basically the same thing and consciousness is just a matter of degree rather than a difference in kind. 

But suppose that Tononi's theory is basically correct, and we get to the point where we can take a look at a given physical system, whether it's a brain, a computer, or some as-yet-uninvented future artifact, and measure its potential to be conscious rather like you can measure a computer's clock speed today.  In an article co-written with Christof Koch in the June 2017 IEEE Spectrum, Tononi concludes that "Such a neuromorphic machine, if highly conscious, would then have intrinsic rights, in particular the right to its own life and well-being.  In that case, society would have to learn to share the world with its own creations." 

In a sense, we've been doing exactly that all along—ask any new parent how it's going.  But Tononi's "creation" isn't another human—it would be some kind of machine, broadly speaking, whose consciousness would be verified by IIT.  There has been talk about robot rights for some years, fortunately so far entirely on the hypothetical level.  But if Tononi's theory comes to be more widely accepted and turns out to do what he claims it will do, we may some day face the question of how to treat entities (I can't think of another word) that seem to be as alive as you or me, but depend for their "lives" on Pacific Gas and Electric, not the grocery store.  

Well, I don't have a good answer to that one, except that we're a long way from that consummation.  People are trying to design intelligent computers that are actually built the way the brain is built, but they're way behind the usual AI approach of programming and simulating neural networks on regular computer hardware.  If Tononi is right, the conventional AI approach leads only to what I was pretty sure was the case all along—a fancy adding machine that can talk and act like a person, but is in fact just a bunch of hardware.  But if we ever build a machine that not only acts conscious, but is conscious according to IIT, well, let's worry about that when it happens.

Sources:  Christof Koch and Giulio Tononi's article "Can We Quantify Machine Consciousness?" appeared on pp. 65-69 of the June 2017 issue of IEEE Spectrum, and is also available online.  I also referred to the Wikipedia article on integrated information theory and the Scholarpedia article on the same topic.