Thursday, February 17, 2011

Thinking Machines: For Better and For Worse

Our machines get ever more intelligent, or at least capable of doing more and more complex tasks. Whether this should be considered intelligence is a separate question.  Last night, I saw two programs that brought this clearly into focus.


The first was the IBM Watson Challenge on Jeopardy. If you haven't been tracking this story, IBM has created a new computing system for its latest corporate Grand Challenge: a machine capable of natural language processing (NLP), that is, of understanding questions posed in ordinary human language.  In past Grand Challenges, IBM created a chess-playing computer named Deep Blue, which beat chess grandmaster Garry Kasparov, and a supercomputer dubbed Blue Gene ("Big Blue" + genome), designed to model the folding of proteins. For the current challenge, the IBM team named their computer Watson after IBM founder Thomas J. Watson, whose standing directive to his employees was "Think".  IBM's R&D lab wanted to take on a tough challenge, and they felt that hardly anything could be tougher for a computer than competing successfully on the television quiz show, Jeopardy.  Who isn't familiar with the opening line, "This is Jeopardy!", and host Alex Trebek? (Disclosure Alert: we once sat in the studio audience at a filming of the show in LA. Our daughter chatted up Alex during a commercial break.)

Watson was positioned at the center podium and competed against the two all-time highest-scoring contestants, Ken Jennings and Brad Rutter. Poor Ken and Brad. Watson's winnings over the three days of the tournament were $77,147, compared to Jennings' $24,000 and Rutter's $21,600.  Watching the show, I was impressed with how adept Watson was at answering questions filled with the usual twists on topics and language that are part of the Jeopardy game. But it was also clear that Watson got some things wildly wrong. Its Final Jeopardy response in the category "U.S. Cities" was Toronto.  That seems like something that would have been easily ruled out in the programming of its logic.  Nonetheless, Watson is an impressive advance in NLP, and IBM now hopes to perfect the technology for many other fields, including medical diagnostics.

The second program that I saw last night was Nova's "The Crash of Flight 447", which described in harrowing detail the final minutes of the Air France flight that disappeared over the Atlantic on the night of May 31, 2009.



Nova assembled its own team of investigators to look into what might have caused one of the most modern aircraft flying, the Airbus A330, to tumble out of the sky. The official investigation is still ongoing, and the flight data and cockpit voice recorders (the "black boxes") have yet to be found in some 15,000 feet of water in the mid-Atlantic. My short summary of the longer account given during the program is that the computer flying the plane was instrumental in the crash.

How could that be? The longer sequence of events was that Flight 447 encountered a very turbulent thunderstorm in the mid-Atlantic that caused all three of its airspeed sensors (called pitot tubes) to ice up and fail.  The flight computer was in control of the aircraft, and airspeed is a key parameter for the computer to perform its tasks. The loss of airspeed data sent the flight computer into a sequence of failure modes, starting with disengaging the autopilot and throttling back the engines.  The pilots were instantly thrust into an extremely challenging situation, with multiple cascading computer warnings coming at them as they tried to fly the plane manually in heavy turbulence.  Without accurate airspeed information, they were in an even more dangerous position.  Modern jets have a very narrow window of acceptable airspeed at cruising altitude: changes of as little as 10 knots up or down can cause the plane's wings to stall (lose lift).  When an aircraft stalls, it not only begins to descend rapidly, it can also go into a roll that makes recovery even more difficult.
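To make that narrow-window idea concrete, here is a minimal sketch. Only the 10-knot margin comes from the program; the cruise speed and the classification labels are hypothetical placeholders, not real A330 flight-envelope numbers.

```python
# Illustrative sketch only: hypothetical values, not real A330 envelope data.

CRUISE_TARGET_KT = 470   # hypothetical cruise airspeed, in knots
SAFE_WINDOW_KT = 10      # the narrow margin described by the Nova program

def speed_status(airspeed_kt: float) -> str:
    """Classify an airspeed against a narrow high-altitude safe window."""
    if abs(airspeed_kt - CRUISE_TARGET_KT) <= SAFE_WINDOW_KT:
        return "SAFE"
    if airspeed_kt < CRUISE_TARGET_KT - SAFE_WINDOW_KT:
        return "RISK: LOW-SPEED STALL"
    return "RISK: HIGH-SPEED BUFFET"

print(speed_status(470))  # SAFE
print(speed_status(455))  # RISK: LOW-SPEED STALL
```

The point of the toy example is simply how small the margin is: a 15-knot change, trivial at low altitude, already puts the airplane outside the safe band at cruise.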

The Nova team did a good job of demonstrating how pilots are supposed to avoid this problem.  The Nova investigators recreated the conditions that the pilots of Flight 447 encountered that night in a flight simulator. The two pilots who "flew" the rerun did not know in advance what they would be facing.  The rerun began with the thunderstorm suddenly showing up on their radar and progressed to the loss of airspeed data and the flight computer issuing failure warnings. The standard procedure for pilots in such a situation is to set engine power to 85 percent and pitch the nose up 5 degrees.  This always puts the aircraft at a safe speed to avoid a stall.  The pilots in the simulator did just what they were supposed to do, and all went smoothly as they kept control of the plane. But apparently, the pilots on Flight 447 were too busy or too distracted to follow this standard procedure.  The result was the loss of all 228 people aboard in a terrible aircraft tragedy.

Some of the pilots interviewed on the Nova program commented on how today's generation of pilots has come to depend on the flight computer to fly the plane.  They have too little experience pulling an out-of-control plane back to safety when the flight computer fails.  Having seen the pilots in the flight simulator follow the standard procedure of thrust and pitch adjustment, I wonder why the flight computer wasn't programmed to do the same thing when the loss of airspeed data started a sequence of failures.  Why depend on the pilots for this first safety measure?  Why doesn't the computer just do it, and then alert the pilots to what it has done to maintain control?  The program made clear that the flight computers are currently programmed to prevent pilots from maneuvering the plane in ways that would cause loss of control.  Why not act in the loss-of-airspeed situation as well?
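As a thought experiment, the automatic fallback the paragraph above asks for might look something like the sketch below. The 85 percent thrust and 5-degree pitch values are the ones the Nova program cited; the probe-disagreement threshold, the function names, and the data structures are all invented placeholders, nothing like real avionics software.

```python
# Hypothetical sketch of an automatic unreliable-airspeed fallback.
# Only the memory-procedure values (85% thrust, 5 degrees pitch) come
# from the program; everything else is invented for illustration.

from dataclasses import dataclass

@dataclass
class PitotReading:
    airspeed_kt: float
    valid: bool  # False when the probe is iced over or has failed

def airspeed_unreliable(readings: list[PitotReading]) -> bool:
    """Declare airspeed unreliable unless at least two probes agree."""
    valid = [r.airspeed_kt for r in readings if r.valid]
    if len(valid) < 2:
        return True
    return max(valid) - min(valid) > 20  # hypothetical disagreement limit

def fallback_commands(readings: list[PitotReading]) -> dict:
    """If airspeed is unreliable, command the memory procedure and tell the crew."""
    if airspeed_unreliable(readings):
        return {"thrust_pct": 85, "pitch_deg": 5.0,
                "alert": "UNRELIABLE AIRSPEED: FALLBACK SET"}
    return {"alert": None}

# All three probes iced over, as on Flight 447:
iced = [PitotReading(0, False), PitotReading(0, False), PitotReading(0, False)]
print(fallback_commands(iced))
```

The design choice here mirrors the author's question: instead of handing the crew a cascade of warnings, the system first commands the known-safe thrust and pitch and only then alerts the pilots to what it has done.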

Clearly, we are still in the infancy of computer intelligence, whether in computers like Watson, which grapple with the ambiguities of how we communicate in natural language, or the more prescriptive computers that fly airplanes, manage nuclear power plants, or even control the inner workings of our automobiles.  Much more is needed, and certainly will be built into the computers of the future, to enhance what we like to call intelligence. Computers are being built to mimic what humans can do under the best of circumstances.  Surpassing what humans can do is what the inventor and futurist Ray Kurzweil calls The Singularity.  He thinks it is coming in the next decade or so.  I am not so sure.  But I am sure that it will happen in the first half of this century.  Then, like the computer HAL in 2001: A Space Odyssey, we will be in a strange new world where the computer might say, as HAL did, "I'm sorry, Dave. I'm afraid I can't do that."  In the meantime, computers can be fun to watch or, under rare circumstances, they can create monumental tragedies.

[You can watch the entire Nova episode on the PBS website here.]