Earlier this year, the psychologist Dirk Smeesters published a study showing that shifting the perspective of advertisements from the third person to the first person makes people weigh certain information more heavily in their consumer choices. Last year, Smeesters published a different study, in the Journal of Experimental Psychology, suggesting that even manipulating colors such as blue and red can sway us one way or the other.

Except that apparently none of it is true. Last month, Dr. Smeesters acknowledged manipulating his data, an admission that has been the subject of heated discussion in the scientific community. In his defense, he pointed out in Discover Magazine that the academic atmosphere in the social sciences, and particularly in psychology, effectively encourages such data manipulation to produce "statistically significant" outcomes.

Dr. Smeesters excluded some data in order to achieve the results he wanted. As insidious as this may sound, recent analyses of psychological science suggest that fudging the math to get a false positive is all too easy. It is also far too common.

The problem is not that social scientists are willfully engaging in misconduct. The problem is that the methods are so fluid that psychologists, acting in good faith but carrying natural human biases toward their own beliefs, can unknowingly nudge their data in the direction they think it should go. The field of psychology offers scholars a staggering array of competing statistical choices. I suspect, too, that many psychologists are sensitive to comparisons with the "hard" sciences, and that this may propel them to make more certain claims about their results even when it is irresponsible to do so.
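To see how easily honest flexibility can produce a spurious finding, consider a small simulation. This is a hypothetical sketch, not drawn from any of the studies discussed; it assumes Python with numpy and scipy, and all names and parameters are illustrative. Two groups are drawn from the same distribution, so there is no real effect to find, yet an analyst who tries several reasonable-looking analyses and reports whichever one "works" will cross the conventional significance threshold far more often than the nominal 5 percent of the time.

```python
# Hypothetical illustration of how flexible analytic choices inflate false
# positives. Both groups come from the SAME distribution, so any "significant"
# difference is spurious. The flexible analyst tries the test with and without
# trimming extreme values and on two correlated outcome measures, counting a
# finding if ANY attempt yields p < .05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_simulations, n_per_group = 10_000, 20
strict_hits = flexible_hits = 0


def p(x, y):
    """Two-sample t-test p-value."""
    return stats.ttest_ind(x, y).pvalue


def trimmed(x):
    """Drop observations more than 2 standard deviations from the mean."""
    return x[np.abs(x - x.mean()) < 2 * x.std()]


for _ in range(n_simulations):
    # Two correlated outcome measures per group, identical distributions
    # across groups (i.e., no true effect).
    a1 = rng.normal(size=n_per_group)
    a2 = 0.7 * a1 + 0.3 * rng.normal(size=n_per_group)
    b1 = rng.normal(size=n_per_group)
    b2 = 0.7 * b1 + 0.3 * rng.normal(size=n_per_group)

    # Strict analyst: one pre-specified test on the first measure.
    strict_hits += p(a1, b1) < 0.05

    # Flexible analyst: any of four analyses counts as a finding.
    attempts = [p(a1, b1), p(a2, b2),
                p(trimmed(a1), trimmed(b1)), p(trimmed(a2), trimmed(b2))]
    flexible_hits += min(attempts) < 0.05

print(f"Strict false-positive rate:   {strict_hits / n_simulations:.3f}")   # ~0.05
print(f"Flexible false-positive rate: {flexible_hits / n_simulations:.3f}")  # well above 0.05
```

No single step in the flexible analysis looks dishonest, which is exactly the point: the inflation comes from trying multiple defensible options and keeping the one that happens to succeed.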

Then there are the more obvious pressures, including the old "publish or perish" problem in academia. Results that fail to support a study's hypothesis are rarely published. If a scholar has just convinced the federal government that, say, cartoons are a looming danger to children everywhere, and persuaded it to provide a million-dollar grant to prove it, it is difficult to come back years later and say, "Nope, I got nothing." Some scholars function as activists for particular causes. And of course statistically significant results tend to grab headlines in ways that null results don't.

Concerns about this problem have been raised from within the scholarly community itself. This is how science works: by identifying problems and trying to correct them. Our field needs to change a culture in which null results are undervalued, and scholars should submit their data along with their manuscripts for statistical peer review when seeking publication. And we need to keep looking for ways of moving past "statistical significance" toward more sophisticated discussions of how our results may or may not have real-world impact. These are problems that can be fixed with greater rigor and open discussion. Without any attempt to fix them, however, our field risks becoming little more than opinions with numbers.