Monday, August 14, 2017

The Ethical Spin on Spinners


The first time I saw one in a store, I couldn't figure out what it was for and I had to ask my wife.  "Oh, that's a fidget spinner," she said.  "You don't need one."  She was right about that.

As most people under 20 (and a few people over 60) know, fidget spinners are toys that you hold between your finger and thumb and spin.  That's it—that's the whole show.  When the fad showed signs of getting really big, somebody rushed battery-powered, Bluetooth-enabled spinners into production.  My imagination obviously doesn't run in mass-marketing directions, because I couldn't think of what adding Bluetooth to a spinner could do.  Well, a quick Amazon search turns up spinners with little speakers in each of the three spinning lobes (playing music from your Bluetooth-enabled device), spinners with LEDs embedded in them and synced to the rotation somehow so that when you spin it, it spells out "I LOVE YOU," spinners with color-organ-style LEDs that light in time to music—you name it, somebody has crammed the electronics into a spinner to do it.
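
That "synced to the rotation somehow" is presumably a persistence-of-vision trick:  a sensor marks each full revolution, the firmware figures the spinner's current angle from the time since the last mark, and the LEDs flash at just the angles where a letter's dots should appear.  Here is a toy sketch of the timing arithmetic—my own guess at the general method, with made-up function names, not anything from an actual spinner's firmware:

```python
# Toy persistence-of-vision timing for a spinning LED display.
# Guesswork at the general method; real spinner firmware is not public.

def current_angle(now, last_pulse_time, period):
    """Angle in degrees swept since the once-per-revolution reference pulse."""
    fraction = ((now - last_pulse_time) % period) / period
    return 360.0 * fraction

def should_flash(now, last_pulse_time, period, target_angle, tolerance=0.5):
    """Flash the LED whenever the arm passes within half a degree of a
    target angle, so one column of the message appears to hang in midair."""
    return abs(current_angle(now, last_pulse_time, period) - target_angle) < tolerance
```

Spin the thing at a few revolutions per second and the same angles come around every few hundred milliseconds, fast enough that the flashes blur into letters floating in the air.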

But all these electronics need super-compact batteries, and where there are batteries, there's the possibility of fire.  Already, there have been a couple of reports of Bluetooth-enabled spinners catching on fire while charging.  No deaths or serious injuries have resulted, but the U. S. Consumer Product Safety Commission (CPSC) has put out a nannygram, as you might call it:  don't overcharge the spinner, don't plug it in and leave it unattended, don't use a charger that wasn't designed for it, and so on.  I am not aware that teenagers are big fans of the CPSC website, but nobody can say the bureaucrats haven't done their job on this one.

The Wikipedia article on spinners discounts claims that they are good for people with attention-deficit disorder, hyperactivity, and similar things.  Seems to me that holding a spinning object in your hand would increase distraction rather than the opposite, and some high schools have agreed with me to the extent of banning the devices altogether. 

As a long-time manual tapper (no equipment required), I think I can speak to that aspect of the matter from personal experience.  Ever since I was a teenager or perhaps before, I have been in the habit of tapping more or less rhythmically on any available surface from time to time.  My wife is not exactly used to it—she will let me know now and then when it gets on her nerves—but it's no longer a huge issue between us.  Often when she asks me to stop, it's the first time I've fully realized I'm doing it, and that's part of the mystery of tapping or doing other habitual, useless things with your hands.

The most famous manual fidgeter in fiction was a character in Herman Wouk's World War II novel The Caine Mutiny, Captain Philip F. Queeg, who had the habit when under stress of taking two half-inch ball bearings out of his pocket and rolling them together.  (Queeg lived in an impoverished age when customized fidget toys were only a distant dream, so he had to use whatever fell to hand, so to speak.)  During the court-martial that forms the heart of the novel, a psychiatrist is called to the stand to speculate on the reasons for Queeg's habit of rolling balls.  The doctor's comments ranged from the sexual to the scatological, and will not be repeated here.  But it appears that psychology has not made much progress in the last seventy years toward finding out why some people simply like to make meaningless motions with their hands.  That hasn't kept a lot of marketing types from making money off of them.

Fidget spinners are yet another example of the power of marketing to get people to buy something they didn't know they wanted till they saw one.  I don't know what the advertising budget was for the companies that popularized the toy, but I suspect it was substantial.  For reasons unknown to everyone but God, the thing caught on, and what with Bluetooth-enabled ones and so on, the marketers are riding the cresting fad wave for all it's worth before it spills on the beach and disappears, as it will.  Somehow I don't think we're going to see eighty-year-olds in 2100 taking their cherished mahogany spinners out of felt-lined boxes for one last spin before the graveyard.

Like most toys, fidget spinners seem to be ethically benign, unless one of them happens to set your drapes on fire.  Lawsuits are a perpetual hazard of the consumer product business, but the kind of people who market fad products are risk-takers to begin with, so it's not surprising they cut a few corners in the product safety area before rushing to the stores with their hastily designed gizmos.  By the time the cumbersome government regulatory apparatus gets in gear, the company responsible for the problematic spinners may have vanished.  Here's where the Internet and its viewers' fondness for exciting bad news can help even more than government regulations.  When hoverboards started catching fire a year or two ago, what kept people from buying more of the bad ones wasn't the government so much as it was the bad publicity the defective board makers got on YouTube.  And that's a good thing, when consumers who get burned (sometimes literally) can warn others of the problem.

As for Bluetooth-enabled spinners, well, if you want one, go get one while you can.  They'll be collectors' items pretty soon.  And those of us who learned how to cope with tension the old-fashioned way by drumming on a tabletop can at least rest assured that they aren't going to take our fingers or tabletops away.  But they might tell us to stop tapping.

Sources:  Slate's website carried the article "New Fidget Spinner Safety Guidelines Prove We Can’t Have Nice Things" by Nick Thieme at http://www.slate.com/blogs/moneybox/2017/08/11/cpsc_just_released_fidget_spinner_safety_guidelines_proving_we_can_t_have.html.  I also referred to the Wikipedia article on fidget spinners.  Herman Wouk's Pulitzer-Prize-winning novel The Caine Mutiny was published in 1951, and led to a film of the same name starring a considerably miscast Humphrey Bogart.

Monday, August 07, 2017

Giulio Tononi and His Consciousness Meter


If you're reading this, you're conscious of reading it.  Consciousness is something most of us experience every day, but for philosophers, it has proved to be a tough nut to crack.  What is it, exactly?  And more relevant for engineers, can machines—specifically, artificially intelligent computers—be conscious? 

Until recently, questions like this came up only in obscure academic journals and science fiction stories.  But now that personal digital assistants like Siri are enjoying widespread use, the issue has fresh relevance both for consumers and for those developing new AI (artificial intelligence) systems.

Philosophers of mind such as David Chalmers point out that one of the more difficult problems relating to consciousness is explaining the nature of experiences.  Take the color red, for example.  Yes, you can point to a range of wavelengths in the visible-light spectrum that most people will call "red."  But the redness of red isn't just a certain wavelength range.  A five-year-old child who knows his colors can recognize red, but unless he's unusual he knows nothing about light physics and wavelengths.  Yet when he sees something red, he is conscious of seeing something red.

One popular school of thought about the nature of consciousness is the "functionalist" school.  These people treat a candidate for consciousness as a black box and imagine having a conversation with it.  If its answers convince you that you're talking with a conscious being, well, that's as much evidence as you're going to get.  By this measure, some people probably already think Siri is conscious.

Now along comes a neuroscientist named Giulio Tononi, who has been working on something he calls "integrated information theory" or IIT.  It has little to do with the kind of information theory familiar to electrical engineers.  Instead, it is a formal mathematical theory that starts from some axioms that most people would agree on concerning the nature of consciousness.  Unfortunately, it's pretty complicated and I can't go into the details here.  But starting from these axioms, he works out postulates and winds up with a list of characteristics that any physical system capable of supporting consciousness should have.  The results, to say the least, are surprising.

For one thing, he says that while current AI systems that are implemented using standard stored-program computers can give a good impression of conscious behavior, IIT shows that their structure is incapable of supporting consciousness.  That is, if it walks like it's conscious and quacks like it's conscious, it isn't necessarily conscious.  So even if Siri manages to convince all its users that it's conscious, Tononi would say it's just a clever trick.

How can this happen?  Well, philosopher John Searle's "Chinese room" argument may help in this regard.  Suppose a man who knows no Chinese is nevertheless in a room with a computer library of every conceivable question one can ask in Chinese, along with the appropriate answers that will convince a Chinese interrogator outside the room that the entity inside the room is conscious.  All the man in the room does is take the Chinese questions slipped under the door, use his computer to look up the answers, and send the answers (in Chinese) back to the Chinese questioner on the other side of the door.  To the questioner, it looks like there's somebody who is conscious inside the room.  But a reference library can't be conscious, even if it's computerized, and the only candidate for consciousness inside the room—the man using the computer—can't read Chinese, and so he isn't conscious of the interchange either.  According to Tononi, every AI program running on a conventionally designed computer is just like the man in the Chinese room—maybe it looks conscious from the outside, but its structure keeps it from ever being conscious.
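
Searle's room can be caricatured in a few lines of code.  In this sketch (mine, not Searle's or Tononi's, and with invented questions and answers), the "room" is nothing but a lookup table, and nothing in it understands a word:

```python
# A toy "Chinese room": the room answers questions by pure lookup.
# Nothing here understands the questions, or anything else.
# Illustrative sketch only; the question-answer pairs are made up.

ANSWER_BOOK = {
    "How are you today?": "I am well, thank you.",
    "Do you understand me?": "Of course I understand you.",
    "Are you conscious?": "Certainly I am conscious.",
}

def room(question):
    """The man in the room: find the question in the book, copy out the answer."""
    return ANSWER_BOOK.get(question, "Please rephrase the question.")

print(room("Are you conscious?"))
```

The answers may sound conscious to the questioner outside the door, but the mechanism is pure retrieval, which is essentially Tononi's point about AI running on conventional stored-program computers.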

On the other hand, Tononi says that the human brain—specifically the cerebral cortex—has just the kind of interconnections and ability to change its own form that is needed to realize consciousness.  That's good news, certainly, but along with that reassurance comes a more profound implication of IIT:  the possibility of making machines whose consciousness would not only be evident to those outside, but could be proven mathematically.

Here we get into some really deep waters.  IIT is by no means universally accepted in the neuroscience community.  As one might expect, it's rather unpopular among AI workers who think either that consciousness is an illusion, or that brains and computers are basically the same thing and consciousness is just a matter of degree rather than a difference in kind. 

But suppose that Tononi's theory is basically correct, and we get to the point where we can take a look at a given physical system, whether it's a brain, a computer, or some as-yet-uninvented future artifact, and measure its potential to be conscious rather like you can measure a computer's clock speed today.  In an article co-written with Christof Koch in the June 2017 IEEE Spectrum, Tononi concludes that "Such a neuromorphic machine, if highly conscious, would then have intrinsic rights, in particular the right to its own life and well-being.  In that case, society would have to learn to share the world with its own creations." 

In a sense, we've been doing exactly that all along—ask any new parent how it's going.  But Tononi's "creation" isn't another human—it would be some kind of machine, broadly speaking, whose consciousness would be verified by IIT.  There has been talk about robot rights for some years, fortunately so far entirely on the hypothetical level.  But if Tononi's theory comes to be more widely accepted and turns out to do what he claims it will do, we may some day face the question of how to treat entities (I can't think of another word) that seem to be as alive as you or me, but depend for their "lives" on Pacific Gas and Electric, not the grocery store.  

Well, I don't have a good answer to that one, except that we're a long way from that consummation.  People are trying to design intelligent computers that are actually built the way the brain is built, but they're way behind the usual AI approach of programming and simulating neural networks on regular computer hardware.  If Tononi is right, the conventional AI approach leads only to what I was pretty sure was the case all along—a fancy adding machine that can talk and act like a person, but is in fact just a bunch of hardware.  But if we ever build a machine that not only acts conscious, but is conscious according to IIT, well, let's worry about that when it happens.

Sources:  Christof Koch and Giulio Tononi's article "Can We Quantify Machine Consciousness?" appeared on pp. 65-69 of the June 2017 issue of IEEE Spectrum, and is also available online at http://spectrum.ieee.org/computing/hardware/can-we-quantify-machine-consciousness.  I also referred to the Wikipedia article on integrated information theory and the Scholarpedia article at http://www.scholarpedia.org/article/Integrated_information_theory.

Monday, July 31, 2017

Should Bad Engineers Go to Jail?

By and large, most engineers do their jobs well enough to stay employed, and their actions are a net benefit to the firms that pay them, and hopefully society at large.  But every so often, things go wrong, and someone is hurt or killed in connection with an engineered product or service.  What should be the consequences for engineers and managers if wrongdoing—intentional or otherwise—can be traced to their actions, or irresponsible inactions?  In other words, should bad engineers go to jail?

Jail is for criminals who have been duly prosecuted and convicted of a violation of criminal law.  Or at least I used to think so, until I read an article entitled "Limited Liability" in a recent issue of The New Yorker.  In it I saw an amazing statistic:  only about one out of twenty U. S. criminal cases in state and federal courts nowadays actually goes to trial.  The other nineteen or so are resolved in plea bargains. 

A plea bargain is a deal arranged between a defendant's lawyers and the prosecuting lawyers.  Except in unusual cases in which one defendant gets immunity from prosecution in exchange for ratting on other defendants, few defendants get out of a plea bargain scot-free.  And the same article informs me that the outcome of a plea bargain depends a lot on the social and legal status of the defendant.

If you're a white-collar criminal, and this would include engineers and the corporations they typically work for, it has become almost unheard of for any jail time to be served.  Instead, the typical outcome is what's called "deferred prosecution."  Rather than formally charging a corporation with a crime, prosecutors investigating corporate wrongdoing will make a deferred-prosecution agreement in which the company acknowledges it did something wrong, pays a fine, and promises not to do it again.  But some companies have gone through this wrist-slapping process two or three times for the same type of misbehavior, and increasingly look at the fines as simply a cost of doing business.

On the other hand, if you're a low-level drug dealer, or user, or even an impoverished innocent bystander to a crime who was arrested by mistake, your chances of going before a jury of your peers and a judge who can declare you innocent before the law are vanishingly small.  Instead, you will probably be offered some kind of plea bargain, and the less well off you are, the more likely the "bargain" will involve jail time.  The article quotes a federal judge who said in 2014 that for most Americans, the Sixth Amendment right to a trial by jury is now a "myth." 

Curiously, the exception to this rule of relative immunity from prosecution for engineers seems to be if you get involved in espionage or illegal transfer of classified information, especially to China.  A few years back, I described the sad but avoidable case of University of Tennessee professor J. Reece Roth, who was convicted in 2010 of giving restricted military information about aerospace research to China.  And googling "Engineer Goes to Jail" turned up the case of a former Northrop Grumman engineer named Noshir Gowadia, who in January of 2011 was sentenced to 32 years in prison for selling defense secrets to China. 

Engineers in the U. S. generally do not have to be licensed to practice engineering, and so the threat of losing one's license for malpractice does not exist, as it does in some other countries.  Nevertheless, there are federal and state laws that engineers in many professions must be aware of in order to practice their profession, well, professionally.  And most of the time, most engineers stay well clear of any criminal violations. 

But it's disheartening somehow to discover that the old retort engineers would give to a ridiculously irresponsible proposal, namely, "You could go to jail for that!" is not as true as it once was.  It seems that unless they commit a very specific type of crime—namely, selling or giving defense secrets to China—engineers (at least those working for corporations, which is most of us) don't have to worry much about going to jail.

One reason for this is the long history of legally limited liability that has been granted to corporations by the legal system.  One of the main reasons for forming a corporation is that your liability, in case the corporation gets sued, is limited to the money you have put into the corporation.  By contrast, if you are operating in the business world as a sole proprietor or as part of a partnership, a lawsuit can not only bankrupt your business, it can bankrupt you personally.  This hazard discouraged people from buying stock in publicly traded companies until the mid-1800s, when laws limiting the liability of stockholders were enacted.

Most engineering ethics violations that wind up in the legal system are civil suits rather than criminal prosecutions, and the rules are different for civil and criminal cases.  Being sued is no picnic either, and those who have had involuntary dealings with the legal system can testify that even if you win a lawsuit as a defendant, the experience can be harrowing, draining, and expensive.  Again, because most engineers work in a corporate environment, they are protected in most cases from immediate personal involvement when a company is sued, but their actions may be critical to the outcome of a civil trial or settlement. 

It's rare for an engineer to be found criminally negligent as a result of an accident such as the one that happened last week at the Ohio State Fair.  A young man named Tyler Jarrell, who had signed up for the U. S. Marine Corps only a few days earlier, took his date to the fair.  They got on a ride called the Fire Ball, in which riders are seated in open-air seats under a protective frame and then swung and spun high in the air.  Something went wrong and several people were flung out of the ride.  Jarrell flew fifty feet to his death, and several others were injured seriously. 

KMG, the Dutch company that made the ride, ordered all operators of that type of ride to shut them down until the cause of the accident is ascertained.  That may take a while.  But already the chain of possible responsibility is complicated:  the man who was operating the ride, the Ohio state inspectors who observed the ride being assembled and certified it as safe, Amusements of America (the New Jersey firm that owned and provided the ride), and KMG.  Forensic engineers will try to find out what other engineers did wrong, if anything, to contribute to this sad mishap.  But with responsibility potentially spread among so many actors and organizations, the likelihood that anyone will go to jail as a result is small. 

Maybe that's a bad thing, or maybe it's a good thing.  But anyway, it's a fact in today's United States.

Sources:  The New Yorker for July 31, 2017 carried "Limited Liability" by Patrick Radden Keefe on pp. 28-33.  I referred to online reports from Cleveland.com at http://www.cleveland.com/open/index.ssf/2017/07/post_65.html and http://www.cleveland.com/open/index.ssf/2017/07/the_ohio_state_fair_fire_ball.html.  The conviction of Gowadia was reported by CNN in 2011 at http://www.cnn.com/2011/CRIME/01/25/hawaii.spy.sentenced/index.html, and I described the Roth case in 2010 at http://engineeringethicsblog.blogspot.com/2010/09/mixing-academia-and-military-secrets.html.


Note added Aug. 7, 2017:  At least one person at VW has actually gone to jail for what the media is calling Dieselgate.  The site

https://jalopnik.com/vw-exec-pleads-guilty-faces-up-to-seven-years-in-jail-1797552831

on Aug. 5, 2017 reported that Oliver Schmidt, former head of U. S. regulatory compliance, pled guilty in a plea bargain to a charge of defrauding the U. S. and violating the Clean Air Act and will pay a fine of between $40,000 and $400,000, spend up to seven years in jail (where he is now), and be deported back to his native Germany after that.  Although technically he's a manager, not an engineer, Schmidt probably has an engineering background and was in a position of responsibility.  So sometimes engineering managers do go to jail.
 

Monday, July 24, 2017

Pokemon's Epic Fail in Chicago


Full disclosure:  I generally don't play games much anymore, whether video, online, offline, board, ball, table, or hunger.  So anything I write about games is going to be at one remove as an observer, not a participant.  That isn't necessarily bad, but in case you are an enthusiastic game player, you should know I am an outsider to all that.

Nevertheless, I can imagine what it would be like to get involved in Pokemon Go, the mobile phone game, to such an extent that I would pay many hundreds of dollars to fly from Singapore to Chicago for a chance to play in a Pokemon Go Fest scheduled for July 22 in Grant Park.  And I can imagine the eager anticipation I would feel as I waited in line several hours to scan a QR code, verifying I was in the park and ready to play, only to find that I couldn't even log onto the game. 

According to a report in the Chicago Tribune, that was the experience of thousands of Pokemon players, some of whom had flown in from as far as Australia, Denmark, and Singapore.  After four hours of problems, officials of Niantic, Pokemon's developer, canceled the event and awarded everyone a Lugia, a creature that was apparently one of the big attractions of the event.

I suppose those attending might be in the same mood as competitors in a deep-sea fishing contest who all arrived to find the boat was out of commission, and who, to make up for it, were each handed a package of frozen tuna.  It's not the same thing as catching it yourself.

Philosophically, sports and games occupy a peculiar place in the wide realm of human behavior.  Anyone who has watched a pair of dogs do what my wife calls "the puppy dance"—flopping their front legs flat on the ground while facing each other, butts in the air, then jumping up and chasing each other around the yard—realizes that the instinct to play is something we share with other animals.  And yet it's not just instinct—it's the source of much delight, mutual aid, and fraternal feelings, even joy.  Those who would dismiss sports, play, and the joy they bring as not being worthy of serious consideration are admonished by C. S. Lewis that "Joy is the serious business of Heaven."  And surely games are part of heavenly joy, I would hope.

As online games go, Pokemon has the comparative virtue of getting people out and about, and encouraging real-world interactions in the flesh, so to speak.  Sure, it's silly to run around a public space staring at your phone in hopes that some server somewhere will cause a mysterious fictional creature to show up on it.  But when you come right down to it, that silliness is shared by all games.  Why do we pay certain individuals many millions of dollars to throw a prolate pigskin farther and more accurately than most other people can?  Yet we do, and while football is enjoyed vicariously a lot more than it is enjoyed in person, it probably contributes its share to the sum total of human happiness.

So what was lost when Niantic dropped the ball, so to speak, and failed to prepare its servers adequately for the estimated 20,000 Pokemon players who showed up in Chicago?  A lot of disillusioned people were only slightly mollified to get a Lugia as a consolation prize.  And Niantic has metaphorical egg on its face after recovering from similar problems following the game's original introduction the year before. 

But other than that, as engineering crises go, this was a minor one.  Nobody got hurt or killed, the monetary losses were limited to plane and hotel tickets, and if the old PR saying that there's no such thing as bad publicity is true, Pokemon got some free advertising, though not exactly in a form they would prefer.

I am no network specialist, and so I'm not going to speculate about the technical reasons for the failure.  Like anybody else on the street would guess, I suppose somebody somewhere didn't correctly estimate how much server capacity would be needed, and they were caught by surprise when the demand peak clogged the available servers, and things froze up.  We are used to this kind of thing in ordinary, as contrasted to digital, life when a performance event turns out to be much more popular than its organizers expected, and after the auditorium fills up lots of people have to be turned away.  It's a good sort of problem to have, in a way, but when the limitation is electronic and not physical it can get frustrating.

Every game seems to attract a different kind of person, and it sounds like Pokemon players as a group, even the fanatical ones who fly halfway around the world to play, are a fairly well-behaved bunch.  The worst thing that happened at the Pokemon Go Fest was that people booed John Hanke, Niantic's CEO.  Contrast that to the bloody and even fatal riots that can happen at European soccer games, and the benign character of Pokemon looks even better.  And the very choice of Chicago for this internationally popular event says something about the folksy and Middle-Western-style character of the game.  It wouldn't have drawn the same type of crowd in New York's Central Park or Los Angeles's Griffith Park, and things might have gotten considerably uglier.

As a dedicated non-game-player, I'm still concerned that millions of young (and not so young) men and women spend thousands of hours of their lives playing video and online games instead of spending time with live friends, spouses, or even, for example, working.  You can always have too much of a good thing.  But even after Niantic's epic fail in Chicago, I have to say that Pokemon seems to be a pretty harmless way for people to spend their free time, even if it doesn't always work. 

Sources:  The Chicago Tribune website carried the story "Pokemon Go Fest refunds all tickets as players can't get game to work" by Robert Holly on July 22, 2017 at http://www.chicagotribune.com/bluesky/originals/ct-bsi-pokemon-go-fest-day-20170722-story.html.

Monday, July 17, 2017

Silicon Valley Wants Inside Your Head—Literally

A recent article in the engineering professional's magazine IEEE Spectrum reveals that several powerful Silicon Valley entrepreneurs are sponsoring initiatives to breach the barrier separating our brains from the rest of the world.  They all fall into the category of "brain-computer interfaces" or BCIs. 

For example, Facebook wants to develop a noninvasive (meaning you don't need surgery to wear it) system that would let you type five times faster on your smart phone than you do now.  A former Facebook executive named Mary Lou Jepsen is trying to develop an MRI-type device that will "interpret the patterns of neural activity associated with thoughts"—mind-reading, in other words.  Elon Musk, true to form, has thrown caution to the winds with his program to implant a sensor in your brain, bypassing the old-fashioned eyes, ears, and fingers and mainlining the Internet straight to your hippocampus, or wherever the thing will be attached. 

There are two things I'd like to say about these projects.  One is technical, and the other is moral.

The technical aspects of BCI projects are daunting, to say the least.  While some research has been done already into ways of communicating with the brains of people with "locked-in" syndrome (e. g. sufferers from Lou Gehrig's disease who can no longer move any voluntary muscles), progress has been slow and the systems have been customized to each individual.  The brain is the final frontier of biology, in that it is the most complex organ known and probably the one we know the least about in comparison to what there is to know—which, in a sense, is all human knowledge, since all human knowledge is, materially speaking, contained in brains.  The self-reflexive nature of brain research makes me wonder if there isn't something analogous to Gödel's incompleteness theorems at work in the brain's attempt to understand how the brain works. 

Mathematician Kurt Gödel showed in 1931 that every consistent mathematical system of a certain complexity is bound to contain statements that can be neither proved nor disproved without going outside the system.  The brain analogy of this is that the brain may not be able to understand exhaustively everything about itself. 

Whether or not that is the case is a purely speculative question at this point—just the kind of issue that the Silicon Valley types are not interested in.  They want to do something with the brain, not understand it, and their research is way toward the development end of R&D, with explicit timelines and the whole apparatus of high-tech development programs favored by those with essentially infinite amounts of cash.

What a contrast it is to the way some wealthy corporations used to behave.  Physicist Mark P. Mills points out in a recent article in the journal New Atlantis that U. S. corporations spend only about 7% of their total R&D money on basic research, which the government's Office of Management and Budget defines as "study directed toward fuller knowledge or understanding of the fundamental aspects of phenomena and of observable facts without specific applications toward processes or products in mind."  Mills makes the telling point that while the basic-research labs of the old pre-breakup Bell System and IBM can count thirteen Nobel Prizes to their credit, the free-spirited pursuit of knowledge wherever it leads is no longer in favor in the U. S. corporate world.  Though the lag between a discovery and the awarding of a Nobel Prize can often be decades, Mills looks in vain for any comparable scientific achievements from the tightly-application-focused "moon-shot" projects currently favored by Silicon Valley.

The technical point here is that those pursuing BCIs may have bitten off more than they can chew, and the nature of the problem might require a longer-term, less focused perspective.  Even if the goal of brain-computer interfaces is worthy of pursuit, we may be in for a long marathon instead of a sprint.

Now for the moral issue.  Is it right to read another person's mind?  Especially if they are not fully aware of what is involved in the process?  Ah, the corporations say, we would never do such a thing without your consent.  Yes, I reply, the same kind of consent I give whenever I load a new piece of software on my computer and lie that I have read and understood eight pages of legal gobbledegook when I click the button that will let me load the software. 

We have already been trained to allow snooping at a scale that twenty years ago would have been regarded as outrageous.  Everyone who gets online has probably had the experience of doing a web search for a consumer item in one place, only to find ads for it popping up later during a completely unrelated activity.  A combination of cookies and data-sharing among Internet companies on a grand scale means that privacy, at least when it comes to things you search for online, is mostly a thing of the past. 

Should we let the greedy hands of the Internet reach into the last remaining sanctuary of privacy, the human mind itself?  I am reminded in this connection of a passage in one of C. S. Lewis's Chronicles of Narnia series, The Voyage of the Dawn Treader.  In it one of the English children transported to Narnia is named Lucy, and at one point she is alone in a magician's house, perusing a great book of magic.  She comes upon a spell "which would let you know what your friends thought about you."  She says the magic words, and a kind of television process shows her two friends of hers in a train.  She hears them talking about her, and not in a nice way, either. 

A bit later, Aslan the Lion appears, and says to her, "Child. . . I think you have been eavesdropping."  When she replies that she didn't think it counted as eavesdropping if it was magic, he replies "Spying on people by magic is the same as spying on them in any other way."  I don't know how popular the Chronicles are in Silicon Valley, but it's just possible that a moral lesson a child could understand needs to be taught to some of our most powerful technical leaders.

Sources:  The IEEE Spectrum article "Silicon Valley's Latest Craze:  Brain Tech" by Eliza Strickland appeared on pp. 8-9 of the July 2017 print issue.  The Spring 2017 edition of The New Atlantis carried Mark P. Mills' article "Making Technological Miracles" on pp. 37-55.  I also referred to the Wikipedia articles on Gödel's incompleteness theorems and the IBM Zurich Research Laboratory.  Lucy's exploit with magic is found on pp. 131-135 of the Macmillan paperback edition of The Voyage of the Dawn Treader, originally copyrighted 1952.

Monday, July 10, 2017

Does the U. S. Need a New Star Wars Program?


On the Fourth of July last week, the world saw one rocket's red glare that wasn't fired in celebration:  North Korea launched the latest in a series of intercontinental ballistic missile (ICBM) tests.  The timing was intentional, and the North Korean news agency quoted its leader Kim Jong-un as saying, "The American bastards must be quite unhappy after watching our strategic decision."  Not exactly diplomatic language.  Although the test missile went mostly straight up and down and landed harmlessly in the Sea of Japan, experts say that, if directed toward the east, it could have reached as far as parts of Alaska.  According to the New York Times report, it is unlikely that Pyongyang has a small enough nuclear weapon to fit on their ICBMs, but they seem to be devoting a great part of their pitifully small GNP to reach their ultimate goal of being able to threaten the continental U. S. with a nuclear warhead.

The North Korean government is one of the few remaining bastions of old-fashioned, dictatorial despotism, and rational behavior is not to be expected from them.  But missiles are. 

There are some parallels between this situation and the way the final years of the old Soviet Union played out.  When President Reagan announced his Strategic Defense Initiative ("Star Wars") in 1983, it arguably contributed to the eventual downfall of the USSR as that nation's confidence waned that they could counter the U. S.'s initiative with anything as effective.  As it turned out, Star Wars as originally planned never reached the deployment stage, but by 1991 the USSR had cracked apart, and the system was no longer needed.

North Korea is different, in that they will probably never have more than a few viable nuclear ICBMs.  But even one nuclear bomb can spoil your whole day, so since about 2000 the U. S. has been developing a kind of mini-Star Wars system called the Ground-based Missile Defense (GMD). 

Shooting down even one ICBM on the fly is a very delicate undertaking that has been likened to shooting a bullet with another bullet.  Nevertheless, in 18 tests the system has successfully destroyed 10 targets.  A success rate of barely more than half is not encouraging, but it's not bad for a system whose funding and support have fluctuated wildly over the years with the political climate in Washington.
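The arithmetic behind those odds is worth making explicit.  Here is a minimal sketch in Python; the notion of firing a salvo of interceptors at one missile, and the assumption that each interceptor succeeds or fails independently at the observed test rate, are my illustration, not anything the GMD program claims:

```python
# GMD test record cited above: 10 successful intercepts in 18 attempts.
successes, tests = 10, 18
per_shot = successes / tests
print(f"Observed per-shot success rate: {per_shot:.0%}")  # about 56%

# Hypothetical salvo arithmetic (my independence assumption): if each
# interceptor in a salvo succeeds independently at the observed rate,
# the chance that at least one connects grows with salvo size.
for salvo in (1, 2, 3, 4):
    p_stop = 1 - (1 - per_shot) ** salvo
    print(f"Salvo of {salvo}: {p_stop:.0%} chance of at least one hit")
```

On these admittedly optimistic assumptions, it would take a salvo of three interceptors to push the odds of stopping a single missile past 90 percent.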

Austin Bay is a retired colonel in the U. S. Army Reserve whose service record goes back to the 1970s, and who for some years has written regular columns on national affairs from a military perspective.  I have always found his viewpoints to be solidly grounded in factual information, and before the latest North Korean missile launch, Bay noted in a May 31 column that the GMD program was doing as well as you could expect in view of the "sparse and fitful" testing it has had.

Compare the GMD's record of an average of about one launch per year to the series of manned and unmanned spaceflights that took place in the 1960s, leading up to the moon landing:  14 launches (three of which failed) for Project Mercury, 14 launches (two partial failures) for Project Gemini, and 10 successful flights culminating in the triumphal landing on the moon in 1969.  All this happened in only ten years, 1959 to 1969, which saw an average of nearly four launches a year.
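The cadence comparison is easy to check; a quick tally of the launch totals given in the paragraph above:

```python
# Launch totals as cited: Mercury, Gemini, and the Apollo flights
# through the 1969 moon landing.
mercury, gemini, apollo = 14, 14, 10
total = mercury + gemini + apollo
years = 1969 - 1959
print(f"{total} launches in {years} years = {total / years:.1f} per year")
# 38 launches in 10 years = 3.8 per year
```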

The world and the U. S. were very different then, and the 1960s space program ate up a much larger proportion of the federal budget than Washington is likely to tolerate today.  But in North Korea's missile launches, we face a threat that is much less predictable than the old Soviet Union was, and one that could quite possibly lead to hundreds or thousands of American deaths in a nuclear attack.  This is serious business.

In contrast to what worked with the USSR, merely announcing a greatly expanded GMD is not going to make much of an impression on Kim Jong-un.  As Bay points out, the alternative to missile defense is diplomacy, and when the Clinton Administration made an agreement with North Korea in 1994 to quit making plutonium, evidence shows that the regime ignored us and went right ahead with their nefarious plans. 

It looks like North Korea won't quit rattling their nuclear saber until we grab every one they flaunt and crack it over our GMD-equipped knees, to stretch a metaphor.  But we can't afford to attempt a shoot-down of one of their missiles and miss—that would be worse than sitting on our hands.  Bay thinks, and I agree, that the time has come to get serious about ICBM defense, and that means a focused, well-publicized, and well-funded effort, as independent as possible of politics, to come up with a system that can be relied on to shoot down North Korea-style missiles, say at least 90% of the time. 

In the current fractious political atmosphere in Washington, such a plan is way down toward the bottom of most politicians' priority lists.  It may take a genuinely frightening incident such as an apparent attack by North Korea to motivate enough voters to call for protection.  But nobody (on our side, anyway) wants to go that far.

There are other things we can do about North Korea, but unfortunately most of them are not unilateral:  asking China to squeeze them a little, solidifying alliances with Japan, South Korea, and other East Asian nations against the crackpot North Korean regime, and so on.  While China doesn't want its little neighbor incinerating the planet by mistake, it is much more tolerant of North Korea's human-rights abuses and other misbehavior than we can accept in the U. S., and there is little hope that the North Korean regime will change in response to anything that China does.

In the meantime, the U. S. needs to defend itself against attacks by foreign powers.  Everybody—Democrats, Republicans, Libertarians, you name it—agrees that defense is one of the bottom-line functions of the federal government.  However misdirected the defense budget has been in the past, the problem of North Korea won't go away.  We need to finish the job that the current GMD program has started, and develop and test it to the point that people in Alaska and the rest of the western United States can go to sleep without worrying that a rotund guy in Pyongyang is going to wake up one morning and decide to drop a nuclear bomb on their heads—and nobody can stop him.

Sources:  I referred to a report on the latest North Korean missile test in the New York Times carried on July 4 at https://www.nytimes.com/2017/07/04/world/asia/north-korea-missile-test-icbm.html.  Austin Bay's commentary on previous GMD tests appeared on May 31, 2017 at https://www.creators.com/read/austin-bay/05/17/successful-us-missile-defenders-need-to-keep-on-shooting.  I also referred to Wikipedia articles on the Strategic Defense Initiative and NASA's 1960s space program. 

Monday, July 03, 2017

The Legacy of Hanford


One era's triumph can turn into another era's disaster, and perhaps there is no better example of that in the field of nuclear energy and weapons than the Hanford Site in south-central Washington State, about 200 miles from Seattle.  During the height of World War II, physicist Enrico Fermi designed a nuclear reactor for the Dupont Corporation to produce plutonium that was needed for nuclear weapons, as part of the ultra-secret Manhattan Project.  The small farming community of Hanford, Washington was selected as the site of the reactor and associated chemical processing plants, and more than 40,000 construction workers swarmed to the bank of the Columbia River in 1943 to build what became known after the war as the Hanford Nuclear Reservation.

Because plutonium is one of the most deadly radioactive substances known, plant designers had to come up with novel ways of transporting large volumes of liquid and solid plutonium-containing material while keeping workers either far away from the load or behind several feet of radiation shielding.  Accordingly, one of the first industrial applications of closed-circuit TV was to view remote-controlled plutonium-handling equipment.  In view of the hazards of spills during transportation from the producing reactors to the processing plant, a railway tunnel was constructed of timbers and steel and buried under a foot or more of earth.  Plutonium that went into the "Fat Man" nuclear bomb used on Nagasaki, Japan probably passed through this tunnel, as did dozens of tons of plutonium used to make nuclear weapons during the Cold War.

Beginning in the 1960s, plutonium production ceased at Hanford, as it was realized that the site was heavily contaminated with long-lasting radioactive material and was no longer usable by then-current safety standards.  When the U. S. populace felt its back was to the wall during the war, not many people raised issues about long-term health hazards of working with nuclear weapons.  But as the threat of nuclear war declined after the Partial Test Ban Treaty between the USSR and the US in 1963, and especially after the collapse of the Soviet Union in 1991, most production activity ceased at Hanford and instead, a massive cleanup became the top priority.  The U. S. Department of Energy now spends billions of dollars a year on the Hanford cleanup, employing 8,000 people at the site and taking reasonable precautions about keeping workers safe.  But since President Trump's appointment of former Texas governor Rick Perry to head the Department of Energy, the media have paid more attention to the Department and any problems it may have.  The most recent of these is the collapse of part of the roof of the old railroad tunnel used to transport plutonium.

The hole in the tunnel, more than ten feet across, was discovered on May 9, and as a precaution, many employees at the site were told to shelter in place until measurements could be taken to tell if substantial amounts of radioactive material had been released.  Investigation showed that no such release occurred, and the hole has since been covered in plastic and plans made to fill the old tunnel with grout.  Several railroad cars used to transport plutonium remain in the tunnel, which is altogether too radioactive to be inspected by humans, although robotic inspections are possible.  A second, larger tunnel built in the 1950s has also shown signs of structural instability, and Hanford managers are planning measures to prevent its collapse by August.

It would be nice if engineering ethics consisted of a set of unchanging rules, and doing engineering ethically simply meant understanding and following the rules.  But a phrase I recently came across expresses nicely the difference between the discipline of ethics and the disciplines of the hard sciences. 

Ethics is a "humane science"—meaning not that it's kind to animals, but that its "laws" are really just generalizations that depend on the nature of humanity, and so cannot show the ironclad reliability and constancy of physical laws.  This is not to argue for relativism—the notion that all ethical principles are relative to particular times, places, and cultures.  Rather, it is to confess both ignorance—no finite human being can possibly know all the relevant considerations in a particular ethical situation—and the fact that as human cultures and societies change, what is regarded as ethical behavior in a given circumstance can also change. 

In the case of Hanford, what has changed the most is our sense of priorities.  In 1939, the U. S. suspected Hitler of building a nuclear weapon, and later in the war, Japanese troops showed every sign of being ready to fight to the last man in defense of their home islands.  For good or ill (plenty of both, actually), Roosevelt gave the green light to the Manhattan Project, which led to the first production and use of nuclear weapons six years later.  Both leaders and ordinary citizens felt seriously that the U. S. was fighting for its life, and in such a situation, concerns about exposures to levels of radiation that might possibly lead to cancer in twenty or thirty years, or might pollute the environment for hundreds of years, simply faded into the background.

Having enjoyed relative peace in the North American continent ever since the end of World War II, the U. S. can now afford to deal with the messes it created during the war, Hanford being the leading example.  Many opponents of nuclear power take the acres of lethal radioactivity at Hanford to be proof sufficient to lead us to swear off all use of nuclear power forever, amen.  And it must be admitted that disasters such as the 1986 Chernobyl nuclear-reactor fire in Ukraine are uniquely horrible.  Shutting down all nuclear plants would presumably avoid such incidents in the future. 

But nuclear energy is also uniquely suited to address the increasingly prominent issue of global warming.  While it is an open question whether renewable energy can compete economically with nuclear energy for the world's short-term energy needs, it would be shortsighted to rule nuclear out altogether because of an emotional reaction against it not based on an objective view of the facts.  Unfortunately, there are lots of facts to view, and so nuclear power remains controversial, as it probably always will simply because its first public use was to bring us the horrors of nuclear war. 

Sources:  I referred to news reports on the Hanford tunnel-roof collapse carried by the Washington Post on May 9 at https://www.washingtonpost.com/news/post-nation/wp/2017/05/09/tunnel-collapses-at-hanford-nuclear-waste-site-in-washington-state-reports-say/, and the Seattle Times on June 30 at http://www.seattletimes.com/seattle-news/environment/another-hanford-tunnel-storing-radioactive-waste-at-risk-study-finds/.  I also referred to the Wikipedia article on the Hanford Site.