Monday, July 30, 2007

Are Robots Human? or, Are Humans Robots?

Leo is a creature with long fuzzy ears, brown eyes that blink sleepily, and two Mickey-Mouse-like hands. On a good day, Leo will listen to his trainer, a young woman who tells Leo to press a green button on the table. After blinking and swaying around a little groggily, Leo will do just that. With some prompting, Leo will even figure out what the trainer means by pressing "all" the buttons, even if the concept of "all" is a new one just recently learned.

For a dog, this would be pretty good. But Leo is not a living creature. Leo is a robot, albeit a very fancy one. New York Times reporter Robin Marantz Henig spent some time with the researchers at MIT's Personal Robotics Group and Media Lab to find out what the state of the art in robotics is today. She went prepared to be amazed, but found that the videos posted online by the labs represent the best-case performances of robots that, like recalcitrant children, do the wrong thing or nothing at all at least as often as they do the right thing in response to instructions. But performance is constantly improving, and when the various human-like behaviors of following a person with its eyes, recognizing itself in a mirror, and responding to verbal and visual cues are finally integrated into one machine, we may have something that people will be tempted to respond to as they would to another human being. If that happens, would we be right to say that such a robot is human, or conscious, simply because it acts and talks as if it were? And if so, what are our obligations toward such entities: do they have rights? Should they be protected?

A friend of mine recently told me that a European group is considering how to put together what amounts to a robot bill of rights: rules for the ethical treatment of robots. He personally feels that this goes way too far in a field that is still largely experimental and research-oriented. In my view, there's nothing wrong with figuring out how to respond to ethical challenges before they spread to the consumer marketplace. But before we go that far with robot ethics, we should get some philosophical matters straight first.

Henig quotes robotics expert Rodney Brooks, who seems to believe that the difference between humans and machines like Leo is one of degree, not of kind: "It's all mechanistic. . . . Humans are made up of biomolecules that interact according to the laws of physics and chemistry. We like to think we're in control, but we're not." Henig herself, in a lapse of reportorial objectivity, follows this quote with her own statement that "We are all, human and humanoid alike, whether made of flesh or of metal, basically just sociable machines."

Now a machine is an assembly of parts that interact to perform a given function. Being subject to the laws of physics and chemistry, in principle the operation of a machine is completely predictable, at least in a probabilistic sense if any quantum-mechanical things are going on. If we are machines and not human minds operating with the aid of bodies, then as Brooks implies, our sense of being "in control," of having the freedom to choose this or that action, is an illusion. Notice that neither Brooks nor Henig argues for this position—they simply state it in the manner of one worldly-wise person reminding another of something that they both agree on, but tend to forget from time to time.

Neither do they follow through with the logical conclusions of their mechanistic view of human life. If our choices are illusory, really determined by our environment and genetics, then all moral principles are pointless. You can't blame people for beating their dog, or their computer, or their robot—it was bound to happen. Maybe this sounds silly, but if you really buy into mechanistic philosophy, it is totally destructive of morality, and indeed of any values at all.

Fortunately, most people are not that logically consistent. I suppose Ms. Henig, and Prof. Brooks for that matter, avoid parking in handicapped spaces, give some money to charity, and otherwise follow general moral codes for the most part. But whether you bring robots up to the level of human beings by attributing consciousness, life, and what would in former times have been called a soul to them, or whether you drag humanity down to the level of a robot by saying we are "just sociable machines," you have destroyed a distinction which must be maintained: the distinction between human beings and every other kind of being.

As robots get more realistic, it will be increasingly tempting to treat them as humans. In Japan, whose demographics have made the over-60 segment one of the fastest-growing population groups, researchers are trying to develop a robotic companion for the aged that will help with daily tasks such as getting things down from shelves. As long as we recognize that machines are machines and people are people, there is no harm in such things, and potentially great good. But a dry-sounding thing like a philosophical category mistake—the confusion of humans and machines—can lead to all sorts of evil consequences. At the least, we should question the commonly-made assumption that there is no difference, and ask people who make that claim to back it up with reasoned argument, or to leave it alone.

Sources: The New York Times Magazine article "The Real Transformers" appears at http://www.nytimes.com/2007/07/29/magazine/29robots-t.html. A fuller discussion of free will versus determinism can be found in Mortimer Adler's book Ten Philosophical Mistakes (Collier Books, 1985).

Tuesday, July 24, 2007

War, On the Other Hand

Just down the road from where I teach in San Marcos, Texas, the Arredondo family lives in a suburb of San Antonio. Every now and then Rose, age ten, will rush inside the house and tell her father Juan, "There's another snake in the back yard, Daddy!" Then she follows her father outside and watches as he calmly walks up to the snake and picks it up with his bare left hand. Even if it tries to bite him on the hand, Arredondo shows no concern. My source does not report what he does with the snake after that, but it is safe to say that this particular snake never disturbs the peace of the Arredondo back yard again.

What is remarkable about this little scene is that two years ago, Arredondo was on patrol in Iraq when a bomb severed his left hand. He survived to join the ranks of hundreds of soldiers who have lost all or part of a limb in the Iraq war. But the Army paid $65,000 for a new prosthetic hand developed by Touch Bionics of Edinburgh, Scotland, and after some months of training, Arredondo can use it nearly as well as his intact right hand. Unlike previous electromechanical hands, the i-Limb has five independent motors, one for each finger and the rotating thumb. Sophisticated software uses myoelectric signals from the muscles in Arredondo's forearm to control each finger independently. Although a lifelike skin-colored covering is available for those who wish to blend into the non-amputee world unobtrusively, Arredondo, like many of his fellow amputee veterans, chose a transparent silicone covering which shows off the camouflage green-and-brown paint job on his plastic fingers.
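For the technically curious, here is a minimal sketch of how a threshold-based myoelectric controller might work. Everything in it (the electrode sites, the thresholds, and the finger commands) is invented for illustration; Touch Bionics' actual control software is proprietary and doubtless far more sophisticated. The point is simply that muscle signals the amputee still controls can be translated into independent commands for each finger.

```python
# Illustrative sketch only: a toy threshold-based myoelectric controller.
# The electrode sites, thresholds, and finger commands are invented for
# illustration and do not describe Touch Bionics' actual software.

# Rectified-and-smoothed EMG amplitude (arbitrary units) from two
# forearm electrode sites, sampled over one control cycle.
emg_sample = {"flexor_site": 0.72, "extensor_site": 0.10}

FINGERS = ["thumb", "index", "middle", "ring", "little"]
CLOSE_THRESHOLD = 0.5   # flexor activity above this closes the grip
OPEN_THRESHOLD = 0.5    # extensor activity above this opens it

def grip_command(sample):
    """Map one smoothed EMG sample to per-finger motor commands."""
    if sample["flexor_site"] > CLOSE_THRESHOLD:
        # Drive each finger's motor until it stalls against the object,
        # which lets the five fingers conform to the object's shape.
        return {finger: "close_until_stall" for finger in FINGERS}
    if sample["extensor_site"] > OPEN_THRESHOLD:
        return {finger: "open" for finger in FINGERS}
    return {finger: "hold" for finger in FINGERS}

print(grip_command(emg_sample))
```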

Anyone familiar with the history of technology knows that war is one of the most effective cultural spurs for engineering advancements. All the great engines of destruction, from the crossbow to the hydrogen bomb, were developed for reasons of war. But while the ill wind of war spreads death and tragedy wherever it goes, those in the healing professions, including biomedical engineering, can beat the sword of war into the plowshare of better medications, treatments, and prosthetics. (I am now caught up on my mixed-metaphor quotient for the month.)

Devices like the i-Limb don't get invented overnight. The ideas that gave birth to the commercial product originated in research begun about twenty years ago under the sponsorship of Scotland's National Health Service, the government agency responsible for most health care in that country. When the technology was far enough along to be commercialized, the private firm Touch Bionics took over and now sells the device throughout the world.

So often, engineering ethics discussions concentrate on things that go wrong: disasters, accidents, fraud, coverups, and so on. But there is a strand in the discipline that says we should highlight good examples of engineering done well and ethically: projects that go right, people who benefit their fields and humanity in general. If all we talk about is how to do something wrong, how will anyone learn how to do it right?

Touch Bionics, and the government researchers before them, look like good examples to me. While there are unethical things you can do in any profession or field, a person who chooses biomedical engineering with the goal of developing better artificial limbs chooses an engineering career that will benefit humanity almost without question.

The choice of a career has profound consequences both for the person who chooses it and for the society he or she lives in. Sometimes it is made with maturity and judgment, but other times people decide what to do with their lives with less thought than they'd give to picking out a movie or a restaurant. At the same time, there are no guarantees that everything you do will end up being used in a way you would choose.

Suppose an engineer who was dead-set against war consciously chose to go into biomedical engineering and took a job with Scotland's National Health Service to develop the artificial hand that turned into the i-Limb. It is the nature of the case that one of the biggest customer segments for such products is amputees who lose limbs in combat. Can you say that the availability of good prosthetics encourages or supports war? I don't think so. Yet without that market and generous Department of Defense funding to support it, companies such as Touch Bionics might have more trouble staying in business.

Young people starting a career in engineering seldom consider such complexities, and on the whole that is probably a good thing. If you start to worry about every little bad thing that might possibly happen to you, you'll never get out of bed in the morning. But as bad as war is, I'm glad that engineers working for companies like Touch Bionics have the imagination and dedication to pursue a good idea like the i-Limb over the many years it takes to bring it into reality.

Sources: The USA Today article describing the i-Limb appeared in the July 23, 2007 online edition at http://www.usatoday.com/tech/news/techinnovations/2007-07-19-bionic-hand-amputee_N.htm. Touch Bionics has a website that gives details about the i-Limb at http://www.touchbionics.com.

Tuesday, July 17, 2007

Creeping Disaster: The Big Dig Tunnel Tragedy One Year Later

Just over a year ago, a woman died when part of the ceiling of a Boston highway tunnel belonging to the so-called Big Dig collapsed. Less than a week after the collapse, experts were talking about how the epoxy used to hold up the ceiling panels could fail. In the year that has passed since then, the National Transportation Safety Board investigated the accident and released its report on July 11, the one-year anniversary of the collapse.

At the time, I remarked on the apparent similarities between the Boston tunnel collapse of 2006 and the Kansas City hotel walkway disaster of 1981, in which 114 people died. As it turns out, the comparison was apt. In Kansas City, a contractor made an apparently innocuous change in the way some threaded support rods were arranged. But the change greatly weakened the structure and contributed directly to the collapse. The NTSB report says that while epoxy can be used safely to hold bolts in place to support suspended ceilings in tunnels, the wrong kind of epoxy was used in the ceiling that failed.

Epoxy adhesives have been available in some form since the 1940s, but to recommend their use in critical structural elements such as multi-ton ceiling slabs, the manufacturer needs to understand short-term and long-term chemical and physical processes in the material. It turns out that, in common with many other plastics, certain kinds of epoxy (including what the NTSB called "fast-set" adhesive) slowly stretch under sustained stress. This behavior is called "creep," and my blog of July 19, 2006 noted that engineering experts were already speculating that creep might have been responsible for the collapse.
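To make the idea of creep concrete, here is a toy calculation. The power-law form and every constant in it are invented for illustration; they are not the NTSB's figures for either adhesive. The point is only that a material with a high creep rate can reach a failure condition within a few years, while a creep-resistant one stays well below it indefinitely.

```python
# Toy illustration of creep: strain that grows slowly with time under a
# constant load. The power-law form and all constants are invented for
# illustration and are not the NTSB's figures for either epoxy.

def creep_strain(t_years, rate_constant, exponent=0.3):
    """Power-law creep under constant stress: strain = k * t^n."""
    return rate_constant * (t_years ** exponent)

FAILURE_STRAIN = 0.05  # hypothetical strain at which a bolt pulls free

for name, k in [("fast-set (creep-prone)", 0.04),
                ("standard-set (creep-resistant)", 0.0001)]:
    for t in [1, 3, 10, 30]:
        if creep_strain(t, k) >= FAILURE_STRAIN:
            print(f"{name}: reaches failure strain within ~{t} years")
            break
    else:
        print(f"{name}: still below failure strain after 30 years")
```

Run it, and the hypothetical creep-prone adhesive reaches the failure strain within about three years, roughly the time scale on which the real ceiling let go, while the creep-resistant one never comes close.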

It was. The epoxy vendor Powers Fasteners also sold another kind of epoxy, "standard-set," to the Big Dig contractor, Modern Continental Construction Company, intending it to be used for the critical ceiling bolts. Unlike the fast-set type, the standard-set epoxy does not creep when installed properly. Bechtel/Parsons Brinckerhoff, the consulting firm overseeing specifications for the project, allowed Gannett Fleming Inc., the ceiling designer, to specify the adhesive by performance rather than a particular make from a particular company. Such a practice is in keeping with the competitive-bid process, but often makes it harder to tell what is really needed for a specific job.

Of all the entities involved—the designer, the contractor, the vendor, and the people on the ground who actually put the adhesive in the holes—the NTSB found that only the vendor, Powers Fasteners, understood the danger of creep and the need to use the non-creeping standard-set epoxy, not the creep-prone fast-set type. But somewhere along the line, possibly under deadline pressure, that vital bit of information got buried in fine print, someone substituted the fast-set epoxy, and the deadly chain of events was set in motion.

If the Massachusetts Turnpike Authority, the organization responsible for operating the tunnel, had carried out prompt and thorough inspections of the tunnel after it opened, they would almost certainly have discovered signs that the bolts were creeping out, and could have taken corrective action. But the NTSB found that before such regular inspections could take place, the MTA felt obliged to compile a database of tunnel components and apply to the Federal Highway Administration for approval of its inspection plan before putting it into action. This bureaucratic musical-chairs performance took three and a half years—longer than the ceiling took to creep out and collapse.

There are many ironies in this episode, but I will content myself with pointing out two.

First, right in the heart of what in my less charitable moments I refer to as the "know-it-all capital of the world," the land of Harvard, MIT, and one of the greatest concentrations of engineering experts in the world, a critical life-saving bit of knowledge—the information about creep—didn't get to the people who were in a position to do something about it. I teach at an institution that is to Harvard or MIT as a culvert under a farm road is to the Big Dig. But we have a large construction program here, where hundreds of students learn the basics of materials and other dry matters on their way to becoming foremen and supervisors of the same kinds of workers who put the wrong epoxy in the ceiling in Boston. I can only hope that if our students were in the same position, they would have known better. I dare say MIT, or even Massachusetts as a whole, does not pay much attention to students who want to be contractors when they graduate. But if humble construction education programs such as ours teach people in that line of work about the dangers of ignorance when it comes to novel materials, we will have justified our existence in that regard, anyway.

Second, the kind of bureaucrat who values procedure, compliance, and following all the rules above simply doing the right thing is not serving anyone well in the long run. If there had been just one low-level inspector or employee of the MTA who had said to himself, "The hell with waiting forever for FHA approval—I'm going out there to take a look," he might have found the problem early enough to forestall it. But he would have had to raise a big stink, probably go over the heads of his supervisors, perhaps even go to the media, and in all likelihood he would have lost his job. Such people are called whistleblowers, and they are the engineering world's equivalent of the Old Testament prophet—one who speaks the truth regardless of how unpopular it might be, or how dangerous it is to one's own well-being. Like the office of prophet, it is a lonely calling, one that should not be entered into lightly. But the price of unpopularity, or even of a sacrificed career, is small compared to the saving of lives.

Sources: Articles describing the NTSB report were carried by the Boston Globe on July 11, 2007 (http://www.boston.com/news/local/massachusetts/articles/2007/07/11/wide_risk_wide_blame/) and the New York Times (http://www.nytimes.com/2007/07/11/us/11bigdig.html?_r=1&oref=slogin).

Tuesday, July 10, 2007

A Mouse That's Roaring: Antigua's Internet Gambling Battle with the U. S.

Back in May of 1993, my wife and I took a week's vacation in Antigua, a small Caribbean island with a present-day population of some 70,000 people. I brought back from that trip memories of great seafood, welcoming people, and a fondness for steel drum music (in limited quantities). At the time, the main industry on Antigua was tourism, and so it remained until the Internet came along.

A few years after our visit, a young former stock trader named Jay Cohen moved there from the U. S. with some friends, having discovered that gambling was legal in Antigua. They set up World Sports Exchange Ltd., one of many online gambling sites that catered to one of the largest markets in the world: the United States. Cohen's operation grew to employ hundreds of people on Antigua, and online gambling became the second-largest industry on the island.

Then (as I have noted in previous columns), the U. S. government decided to intervene against online gambling in a big way. The Justice Department began to use existing laws against domestic gambling to arrest operators of offshore gambling operations. In 1998, on a visit to the U. S., Cohen was arrested; he was convicted, sentenced to 21 months in jail, and went to prison in Nevada, not far from the lights of Las Vegas.

But before he went to jail, someone informed him that Antigua might have a case against the U. S. that could be tried before the World Trade Organization, an international body that adjudicates trade disputes between countries. To make a long story short, Cohen convinced Antiguan authorities and gambling interests to file suit with the WTO, and so far the WTO has agreed with them.

The principle that the WTO used makes sense. Countries have a right, it says, to prohibit certain kinds of activities in order to uphold "public morals and public order," even if people or entities outside that country are involved. For example, Muslim countries can prohibit the importation of alcoholic beverages, since Islam forbids their consumption. However, this kind of prohibition can't be used simply as an end run around fair trade practices, says the WTO. If you allow your own people to make homebrew hooch, you can't justify banning booze imports with the public morals and order rationale.

And here is where the great inroads that the U. S. gambling industry has made into domestic gambling laws have come home to roost, so to speak. If the government were as hard on all forms of domestic gambling—Indian tribes, horseraces, Las Vegas, you name it—as it is trying to be on offshore Internet gambling, then the WTO case wouldn't have a leg to stand on. But even in the latest federal laws that prohibit banks and other financial institutions from processing offshore gambling payments, legislators have inserted exceptions for things like domestic horserace betting, again at the behest of gambling interests. Therefore, says the WTO, you can't use your morals-and-order reasoning to prohibit offshore Internet gambling unless you also try to wipe out domestic gambling with the same vigor.

While I do not have too high an opinion of international bodies that presume to tell sovereign nations how to behave, I cannot fault the WTO on this one. The WTO is a toothless tiger in the sense that it cannot enforce its rulings directly; about the strongest thing it can do is authorize the winning country to suspend its own trade obligations toward the loser. What Antigua is asking for in this case is permission to flout U. S. copyright law, which might turn the island into a massive sweatshop churning out knockoffs of Nike shoes.

I'd hate to see relations sour between the U. S. and Antigua, and realistically, I don't think the Caribbean nation is going to do anything that would seriously threaten the tourist industry, which still employs more people there than any other. And while I wish we in the U. S. had never started down the road toward legalized gambling, I have to admit that the charge of hypocrisy is one that sticks in this case.

In 1959, Peter Sellers starred in "The Mouse That Roared," a film about the fictional Duchy of Grand Fenwick. Faced with a bad economy, incompetent leadership (Sellers played three roles, one of them female), and the Cold War, Grand Fenwick decides to declare war on the U. S., promptly lose, and then profit from whatever Marshall-Plan-like aid would be forthcoming thereafter. Needless to say, things go awry, and the resulting international chaos ends up with Grand Fenwick on top and the U. S. begging for mercy. Somehow I doubt that a similar comic-opera outcome will result from Antigua's lawsuit against the U. S. Like mineral wealth, gambling profits can addict and corrupt a healthy body politic in the long run, just as they can individuals, and I hope Antigua weans itself from excessive dependence on them in the future. But in the meantime, if they do get some huge settlement from the WTO, I have to admit it couldn't happen to a better island.

Sources: The Washington Post carried an article on Jay Cohen and his connection with the WTO lawsuit on Aug. 4, 2006 at http://www.washingtonpost.com/wp-dyn/content/article/2006/08/03/AR2006080301390_2.html. More recent developments are described briefly by a piece in the online technology newsletter TG Daily at http://www.tgdaily.com/content/view/32594/118/. My previous blogs on gambling were "Online Gambling in the U. S.: Don't Bet On It" (Aug. 1, 2006) and "Legislating Morality: The Unlawful Internet Gambling Enforcement Act" (Oct. 3, 2006).

Tuesday, July 03, 2007

Lie Detecting with fMRI: Using Physics to Do Metaphysics

This month's Scientific American carries an article by Joe Z. Tsien about reading the brain's "neural code": the patterns of nerve activity that go on when we remember or think about something. Although most of Dr. Tsien's work has been with mice, he has been able to transform the seemingly random pattern of nerve firings into binary code that tells him what the mouse has been doing and where. Admittedly, the range of mouse activities—nesting, falling in a specially designed mouse elevator, and experiencing a miniature mouse earthquake—falls a little short of human experience. But hey, you have to start somewhere.
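To give a flavor of what turning nerve firings into "binary code" might mean, consider this toy sketch. The neural cliques, firing rates, threshold, and code book are all invented for illustration; Dr. Tsien's actual decoding relies on far more sophisticated statistical methods than a simple lookup table.

```python
# Toy sketch of the "neural code" idea: turn the firing rates of groups
# of neurons into a binary word and look that word up in a code book.
# The cliques, rates, threshold, and code book are invented; Dr. Tsien's
# actual decoding methods are far more sophisticated.

firing_rates = [12.0, 3.1, 18.5, 2.2]  # spikes/sec from four neural cliques
THRESHOLD = 8.0                        # rate above which a clique is "on"

# Hypothetical code book mapping binary words to mouse experiences.
CODE_BOOK = {
    (1, 0, 1, 0): "miniature earthquake",
    (0, 1, 0, 1): "elevator drop",
    (1, 1, 0, 0): "nesting",
}

word = tuple(int(rate > THRESHOLD) for rate in firing_rates)
print(word, "->", CODE_BOOK.get(word, "unknown pattern"))
```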

Or do you? Many have seen farther down this road a threat to the final bastion of independence: the freedom of thought. At the end of his article Dr. Tsien speculates that "in 5,000 years" we might be able to download our minds onto computers, with all the potential for control and exploitation that this entails. He is more conservative than the inventor Ray Kurzweil, who in his book The Singularity Is Near estimates that "the end of the 2030s is a conservative projection for successful [brain] uploading." Interesting that the same process—transferring a human brain's contents to a machine—Dr. Tsien calls downloading, and Kurzweil calls uploading. Perhaps unconsciously, this may express their respective attitudes to the order which is appropriate to the two objects. Which is higher, computers or brains?

Computers and brains are also involved in a recent New Yorker magazine article by Margaret Talbot. The hero (or villain, depending on your point of view) in her piece is Joel Huizenga, founder of a company called No Lie MRI. Really. Huizenga claims that an advanced brain-imaging technique called functional MRI (fMRI for short) is the key to figuring out whether a person is lying. The technique works by tracing the oxygen consumption of various locations in the brain. Since more active parts presumably take up more oxygen, this allows fMRI users to discern different locations of brain activity with a resolution of a few millimeters or less (as long as the patient doesn't turn his head or move his tongue too much during the scan). Huizenga has run some tests in which subjects were asked to lie sometimes and tell the truth other times, and claims his technology is much better than the old polygraph machines that rely on such mundane things as heart rate, breathing rate, and the sweatiness of one's palms. Talbot reports that "neuroethicists" are already up in arms about the threat posed to privacy and freedom by the potential misuse of such technology.
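For readers curious about what such a test might look like computationally, here is a bare-bones sketch of the general approach: calibrate on trials where the subject is known to be lying or telling the truth, then classify new trials by their activation level. Every number here is invented, and No Lie MRI's actual algorithms are proprietary; this only illustrates why the method depends on a cooperative subject who shows typical responses during calibration.

```python
# Bare-bones sketch of the general fMRI lie-detection approach: compare
# average BOLD activation in a region of interest between calibration
# trials where the subject lied and trials where the subject told the
# truth. All numbers are invented; No Lie MRI's algorithms are proprietary.
from statistics import mean

# Mean % BOLD signal change in a prefrontal region of interest, per trial.
truth_trials = [0.11, 0.09, 0.13, 0.10, 0.12]
lie_trials   = [0.24, 0.21, 0.27, 0.22, 0.26]

# Decision boundary halfway between the two calibration means.
cutoff = (mean(truth_trials) + mean(lie_trials)) / 2

def classify(activation):
    """Label a new trial by which side of the cutoff it falls on."""
    return "lie" if activation > cutoff else "truth"

print(f"cutoff = {cutoff:.3f}")
print(classify(0.25), classify(0.10))
```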

The amusing thing is that nowhere in these articles does anyone point out that bringing the machinery of science and technology to bear on the human mind and the question of truth is like using an X-ray machine on your checkbook to figure out your bank balance after you've done the math wrong. A bank balance is a non-material entity. Yes, it's recorded in various places—the bank's computer memory chips and discs, your checkbook if you've kept it right, and so on. But without people around to agree on what a bank balance is in the first place and what numbers represent yours in particular, those black marks on paper or magnetized regions on a hard drive somewhere are just random features of the material universe.

Despite materialistic arguments to the contrary, the human mind is a fundamentally different thing from the human brain. In most people's experience, the physical brain is needed for the mind to manifest itself in the material world. But there are respectable philosophical arguments (too lengthy to repeat here) that say that certain features of the mind—namely, the validity of reason—show that matter can't be all there is. Truth, if it exists at all (and there are some dangerous types out there who claim it doesn't), must exist in what philosophers call the metaphysical realm, beyond the physical one that is directly sensible.

This is why attempts to develop a technological test for truth, as one would test for diabetes or AIDS, are doomed to fall short of the 100% reliability criterion that would make them justifiable for widespread use. Even if there is a part of the brain that telling a lie activates in many people, there are so-called pathological liars to whom what we would call a lie appears to be the truth. A delusional person will maintain with the greatest calmness and peace of mind that he is a fried egg, no matter how often you show him his appearance in the mirror and how badly he must have been fried to look like that. And any lie-detector test that relied on subconscious unease or cognitive dissonance to detect lies would fail to register the lie when such a person says he's a fried egg. For all the machine could tell, he IS a fried egg.

Most courts have wisely refrained from admitting lie-detector tests as direct evidence of guilt, although they can be used in a secondary way to assist in exoneration on a voluntary basis. While brain research is fascinating and may lead to cures for neurological conditions like Alzheimer's disease, the science-fiction prospect of a kind of "omniscience machine" that you could point at any passerby to read his innermost thoughts or secrets is likely to remain science fiction for centuries, if not forever. For one thing, all such systems initially have to have the cooperation of the subject, especially when the issues being explored are unique to that subject. Both conventional lie detectors and No Lie MRI's system work only to the extent that a subject manifests typical physiological responses to lying. If the information being sought becomes more specific, such as "Where were you on the night of the 19th?", a particular brain's neuronal patterns form what amounts to a code-book cipher, as far as I can tell. The only way to crack such a code would be to interview the subject beforehand on the matters at issue, with the subject's full cooperation, in order to establish what the code is. In the case of unwilling subjects, that cooperation is hardly likely to be forthcoming.

So although people interested in engineering ethics ought to keep a watchful eye on brain research, the antics of outfits such as No Lie MRI probably pose more danger to the pocketbooks of investors than to the freedom or privacy of the public at large. That is, unless we convince ourselves that they work even if they don't. And that is a metaphysical problem for another day.

Sources: The July 2007 issue of Scientific American carries Dr. Tsien's article on pp. 52-59. Margaret Talbot's article "Duped" begins on p. 52 of the July 2, 2007 issue of The New Yorker. Ray Kurzweil's prediction of brain uploading by 2040 can be found on p. 200 of The Singularity Is Near (Viking, 2005). For arguments that the mind's reasoning ability points to something beyond materialism, see Victor Reppert, C. S. Lewis's Dangerous Idea (IVP, 2003).