Tuesday, March 13, 2012

"Are You Evil? Profiling That Which Is Truly Wicked" Scientific American

A cognitive scientist employs malevolent logic to define the dark side of the human psyche


By Larry Greenemeier // October 27, 2008


Image caption: INTRODUCING "E," a computer character first created in 2005 to embody Bringsjord's working definition of evil.

TROY, N.Y.—The hallowed halls of academia are not the place you would expect to find someone obsessed with evil (although some students might disagree). But it is indeed evil—or rather trying to get to the roots of evil—that fascinates Selmer Bringsjord, a logician, philosopher and chairman of Rensselaer Polytechnic Institute's Department of Cognitive Science here. He's so intrigued, in fact, that he has developed a sort of checklist for determining whether someone is demonic, and is working with a team of graduate students to create a computerized representation of a purely sinister person.
"I've been working on what is evil and how to formally define it," says Bringsjord, who is also director of the Rensselaer AI & Reasoning Lab (RAIR). "It's creepy, I know it is."
To be truly evil, someone must have sought to do harm by planning to commit some morally wrong action with no prompting from others (whether this person successfully executes his or her plan is beside the point). The evil person must have tried to carry out this plan with the hope of "causing considerable harm to others," Bringsjord says. Finally, "and most importantly," he adds, if this evil person were willing to analyze his or her reasons for wanting to commit this morally wrong action, these reasons would either prove to be incoherent, or they would reveal that the evil person knew he or she was doing something wrong and regarded the harm caused as a good thing.
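That three-part checklist (an unprompted plan to do wrong, the hope of considerable harm, and reasons that are either incoherent or openly endorse the harm) lends itself to a boolean encoding. Below is a minimal sketch in Python with illustrative field names; Bringsjord's own formal definition is not reproduced in the article, so this is an assumption about its shape, not his formalism:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    planned_wrong_act: bool    # planned a morally wrong action
    unprompted: bool           # with no prompting from others
    hoped_for_harm: bool       # hoped to cause considerable harm
    reasons_incoherent: bool   # stated reasons collapse under analysis
    saw_harm_as_good: bool     # knew it was wrong and welcomed the harm

def is_evil(a: Agent) -> bool:
    """Check the three conditions of the working definition.
    Whether the plan was actually executed is irrelevant."""
    planned = a.planned_wrong_act and a.unprompted
    motive = a.hoped_for_harm
    reasons = a.reasons_incoherent or a.saw_harm_as_good
    return planned and motive and reasons
```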
Bringsjord's research builds on earlier definitions put forth by San Diego State University philosophy professor J. Angelo Corlett as well as by the late sociopolitical philosophers and psychologists Joel Feinberg and Erich Fromm, but most significantly by psychiatrist and author M. Scott Peck in his 1983 book, People of the Lie: The Hope for Healing Human Evil. After reading Peck's tome about clinically evil people, "I thought it would be interesting to come up with formal structures that define evil," Bringsjord says, "and, ultimately, to create a purely evil character the way a creative writer would."
He and his research team began developing their computer representation of evil by posing a series of questions beginning with the basics—name, age, sex, etcetera—and progressing to inquiries about this fictional person's beliefs and motivations.
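The result of that questioning can be pictured as a structured profile that starts with demographics and deepens into beliefs and motivations. The record below is a hypothetical illustration only; the field names and values are assumptions, not the team's actual data format:

```python
# Hypothetical profile record; fields and values are illustrative.
profile = {
    "name": "E",
    "age": 25,        # "relatively young," per the article
    "sex": "male",
    "beliefs": [
        "I gave the boy the gun",       # from the Peck case study
    ],
    "motivations": [
        "cause considerable harm to others",
    ],
}
```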
This exercise resulted in "E," a computer character first created in 2005 to meet the criteria of Bringsjord's working definition of evil. Whereas the original E was simply a program designed to respond to questions in a manner consistent with Bringsjord's definition, the researchers have since given E a physical identity: a relatively young white man with short black hair and dark stubble on his face. Bringsjord calls E's appearance "a meaner version" of the character Mr. Perry in the 1989 movie Dead Poets Society. "He is a great example of evil," Bringsjord says, adding, however, that he is not entirely satisfied with this personification and may make changes.
The researchers have placed E in his own virtual world and written a program depicting a scripted interview between one of the researchers' avatars and E. In this example, E is programmed to respond to questions based on a case study in Peck's book that involves a boy whose parents gave him a gun that his older brother had used to commit suicide.
The researchers programmed E with a degree of artificial intelligence to make "him" believe that he (and not the parents) had given the pistol to the distraught boy, and then asked E a series of questions designed to glean his logic for doing so. The result is a surreal simulation during which Bringsjord's diabolical incarnation attempts to produce a logical argument for its actions: The boy wanted a gun, E had a gun, so E gave the boy the gun.
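The reported exchange reads like a single rule firing over two stored facts. Here is a toy sketch of that inference, purely illustrative of the published example rather than the RAIR lab's actual reasoning engine; the fact strings and the infer function are invented for illustration:

```python
# Toy illustration of E's reported reasoning; not the lab's engine.
facts = {"boy wants gun", "E has gun"}

def infer(facts):
    # Single rule: if someone wants a thing and E has it, E gives it.
    if "boy wants gun" in facts and "E has gun" in facts:
        return "E gives boy the gun"
    return None

print(infer(facts))  # -> E gives boy the gun
# The unsettling point of the simulation: the rule never asks whether
# the action is permissible, only whether it satisfies the request.
```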
Bringsjord and his team hope to complete the fourth generation of E by the end of the year; it will be able to use artificial intelligence and a limited set of straightforward English (no slang, for example) to "speak" with computer users.
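A "limited set of straightforward English" can be enforced, in the simplest case, by rejecting any input that strays outside a fixed vocabulary. This is a minimal sketch of that idea under a whitelist assumption; the word list and the accept function are invented for illustration, not the project's actual parser:

```python
# Hypothetical controlled-vocabulary gate; word list is illustrative.
ALLOWED = {"why", "did", "you", "give", "the", "boy", "a", "gun", "?"}

def accept(utterance: str) -> bool:
    """Accept only sentences built from the fixed vocabulary."""
    tokens = utterance.lower().replace("?", " ?").split()
    return all(t in ALLOWED for t in tokens)

print(accept("Why did you give the boy a gun?"))  # True
print(accept("Whatcha gonna do?"))                # False: slang rejected
```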
True to the path of a logician, Bringsjord moved from an interest in the portrayal of virtuousness and evil in literature to an interest in software that helps writers develop ideas and create stories; this, in turn, spurred him to develop his own software for simulating human behavior, both good and odious, says Barry Smith, a distinguished professor of bioinformatics and ontology at the State University of New York at Buffalo who is familiar with Bringsjord's work. "He's known as someone on the fringe of philosophy and computer science."
Bringsjord and Smith share an interest in finding ways to better understand human behavior, and their work has attracted the attention of the intelligence community, which is seeking ways to analyze the information it gathers on potential terrorists. "To solve problems in intelligence analysis, you need more accurate representations of people," Smith says. "Selmer is trying to build really good representations of human beings in all of their subtlety."
Bringsjord acknowledges that the endeavor to create pure evil, even in a software program, raises ethical questions, such as how researchers could control an artificially intelligent character like E if "he" were placed in a virtual world such as Second Life, a Web-based program that allows people to create digital representations of themselves and have those avatars interact in a number of different ways.
"I wouldn't release E or anything like it, even in purely virtual environments, without engineered safeguards," Bringsjord says. These safeguards would be a set of ethics written into the software, something akin to author Isaac Asimov's "Three Laws of Robotics" that prevent a robot from harming humans, requires a robot to obey humans, and instructs a robot to protect itself—as long as that does not violate either or both of the first two laws.
"Because I have a lot of faith in this approach," he says, "E will be controlled."
Source:
http://www.scientificamerican.com/article.cfm?id=defining-evil
