Truth telling in the digital age

Updated: 2012-01-15 08:08

By Anita Patil (The New York Times)



Brain scans, using technology similar to the MRI machine shown here, are the latest innovation in the field of lie detection. Jim Wilson / The New York Times

Authenticity abounds. Real people are in demand in an age of reality TV, social media and online profiles. Candidates for the Republican presidential nomination in the United States claim to be just like you and me. And viewers questioned the genuineness of wailing mourners in scenes of Kim Jong-il's funeral.

Authenticity is often contrived. (How else to really pass as real?) But what if it could be detected? What if fraud could be sniffed out with a tool? Increasingly, computer algorithms and software are doing just that.

"Machines will definitely be able to observe us and understand us better," Hartmut Neven, a computer scientist and vision expert at Google, told The Times. "Where that leads is uncertain."

It may lead to the truth.

Hany Farid, a computer science professor and a digital forensics expert at Dartmouth College in New Hampshire, was intrigued by the problem of fashion advertisements and magazines getting carried away with retouching photographs of celebrities and models. "Fix one thing, then another and pretty soon you end up with Barbie," he told The Times.

Feminist legislators in France, Britain and Norway want digitally altered photos to be labeled. So Dr. Farid helped develop a software tool that statistically measures how much an image of a person's face and body has been altered. He proposes that the tool be used to rate the alteration on a scale of 1 to 5, from minimally altered to starkly changed.
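To illustrate the general idea, here is a minimal sketch of such a rating in Python: it compares a retouched photograph with the original and maps a simple pixel-difference measure onto a 1-to-5 scale. The metric and the cut-off values are illustrative assumptions only; the tool described above presumably relies on far more sophisticated measurements of how a face and body have been reshaped and smoothed.

import numpy as np

def alteration_score(original, retouched):
    """Map the mean per-pixel change (0-255 scale) to a 1-to-5 rating."""
    if original.shape != retouched.shape:
        raise ValueError("images must have the same dimensions")
    diff = np.abs(original.astype(float) - retouched.astype(float)).mean()
    thresholds = [2, 5, 10, 20]  # hypothetical cut-offs between rating levels
    return 1 + sum(diff > t for t in thresholds)  # 1 = minimal, 5 = stark

# Example with synthetic data: light noise should rate near the bottom of the scale.
rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(256, 256, 3))
retouched = np.clip(original + rng.normal(0, 1, original.shape), 0, 255)
print(alteration_score(original, retouched))  # prints 1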


Other algorithms are listening for our lies. Linguists, engineers and computer scientists are training computers to recognize hallmarks of what they call emotional speech. Julia Hirschberg, a professor of computer science at Columbia University in New York, is programming computers to parse people's words for patterns - changes in pitch, pauses, nervous laughs - that help gauge whether speakers are being honest, reported The Times.
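As a rough illustration of the raw signal such systems start from, the sketch below computes one simple prosodic feature, the fraction of a recording taken up by pauses, from a mono audio array. The frame length and silence threshold are illustrative assumptions; real emotional-speech systems draw on far richer acoustic and lexical features than this.

import numpy as np

def pause_ratio(signal, sample_rate, frame_ms=30, silence_rms=0.01):
    """Fraction of fixed-length frames whose RMS energy falls below a threshold."""
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    return float((rms < silence_rms).mean())

# Example: one second of a 200 Hz tone followed by one second of silence.
sr = 16000
t = np.arange(sr) / sr
recording = np.concatenate([0.5 * np.sin(2 * np.pi * 200 * t), np.zeros(sr)])
print(round(pause_ratio(recording, sr), 2))  # roughly 0.5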

"The scientific goal is to understand how our emotions are reflected in our speech," Dan Jurafsky, a professor at Stanford University in California, told The Times.

The technology may be put to use at call centers to warn operators when an angry customer is on the phone. Or software may be able to tell when a chief executive is straying from the truth or, perhaps, when a politician is lying. Just who is the authentic conservative in the Republican primary contest?

Some computers can automatically spot clusters of words and phrases that may signal deception.
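As a toy illustration of that idea, the sketch below counts occurrences of a small, hypothetical list of cue phrases in a transcript and flags the text when they cluster. The phrase list and the threshold are assumptions made up for the example, not the lexicon or logic of any system mentioned in this article.

import re
from collections import Counter

# Hypothetical cue phrases of the kind deception research often discusses
# (hedges, distancing language, overemphatic claims of honesty).
DECEPTION_CUES = ["honestly", "to be honest", "believe me", "i swear",
                  "never", "as far as i know"]

def cue_counts(transcript):
    """Count how often each cue phrase appears in the transcript."""
    text = transcript.lower()
    return Counter({cue: len(re.findall(r"\b" + re.escape(cue) + r"\b", text))
                    for cue in DECEPTION_CUES})

def looks_suspicious(transcript, threshold=3):
    """Flag a transcript whose cue phrases reach a (hypothetical) threshold."""
    return sum(cue_counts(transcript).values()) >= threshold

print(looks_suspicious("Honestly, believe me, I never saw it. I swear."))  # True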

Will the new technologies soon be widely accepted as evidence in court? In 2008, India became the first country to convict someone of a crime based in part on evidence from a controversial lie-detecting brain scanner.

The Brain Electrical Oscillations Signature test, or BEOS, produces images of the human mind in action and supposedly reveals signs that a suspect remembers details of the crime in question. A woman in Pune accused of killing her former fiance agreed to a BEOS test and a judge said the scans were proof of "experiential knowledge" of having committed the murder, reported The Times.

Even as the accuracy of such scans is debated, researchers are developing new uses for the technology, and a lie detection center may be coming to a location near you. No Lie MRI, a company in California that aims to sell brain scanning to law firms and intelligence agencies, is creating a network of VeraCenters where people can be conveniently brain-scanned for truthfulness.

Some scientists are even predicting the end of lying as we know it, but who can really tell?

For comments, write to nytweekly@nytimes.com

(China Daily 01/15/2012 page 9)