The rational referee

For anyone who has put an article through peer review – at a conference or at a journal – it should come as no surprise that there is some randomness in the process. We’ve all heard the stories: a paper that is rejected from one conference wins a top paper award at the next. A new study tests the influence that referee decisions have on the quality of accepted papers. Using a statistical model, the authors vary the number of “rational” referees at a journal – in other words, self-interested referees who reject good work that would compete with their own – to determine the impact on quality. And as we might expect, the outcome is dramatic:

“Our message is clear: if it can not be guaranteed that the fraction of ’rational’ and ’random’ referees is confined to a very small number, the peer review system will not perform much better than by accepting papers by throwing (an unbiased!) coin.”

Hardly surprising, right? One critique of this study is that unbiased editors are meant to correct for some of these errors. But in some ways, editors are the ones with the most opportunity to be biased. They can accept a paper, even without much reviewer support, or reject a paper that reviewers approve. And while the blinding process is never perfect – earlier conference presentations, in-text references and cues, and even the style of the writer can tip “blinded” reviewers off to whose paper they are examining – the editors are the only people who actually know the identity of the researchers, giving them more information with which to make a “rational” decision.

While I don’t believe – and certainly hope – that most reviewers act in their own self-interest when reviewing papers, it is telling that this is the behavior considered “rational.” So what can be done? The author of this study offers one suggestion: create a marketplace of papers, where journals compete for interesting papers while the less sought-after papers have fewer options. This is an intriguing idea, but it discards one of the benefits of peer review: the changes recommended by one’s peers often make a paper substantially better. That said – and in line with this study – reviewers sometimes make critical errors and reject papers on the basis of a false reading or misunderstanding of the study, and it remains unfair that paper authors have no recourse against such misunderstandings.

It is also interesting that this study only varies the number of “rational” reviewers, without also closely examining the number of “random” reviewers – those who cannot sufficiently judge a paper’s quality. Reviewing is hard work, and reviewers can often end up passing judgment on a piece outside their area of expertise – which again inserts randomness into the process. It would be fascinating to see a study survey researchers on how many feel they have had to review a paper outside their expertise (random) or have let their personal goals influence a review (rational) – of course, only a truly anonymous study could produce even somewhat reliable results (who wants to admit either to a lack of expertise or to selfishness?).
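The study’s basic setup is easy to illustrate. As a toy sketch – not the paper’s actual model; the referee behaviors, the quality thresholds, and the two-referee unanimity rule below are all my own assumptions for illustration – we can simulate how the average quality of accepted papers falls as self-interested referees become more common:

```python
import random

random.seed(0)

def review(quality, referee_type):
    """One referee's vote on a paper with latent quality in [0, 1].
    'correct' referees accept when quality clears a threshold,
    'random' referees flip a coin, and 'rational' (self-interested)
    referees reject the best work to suppress competition."""
    if referee_type == "random":
        return random.random() < 0.5
    if referee_type == "rational":
        return quality < 0.7          # veto anything good enough to compete
    return quality > 0.5              # 'correct' referee

def mean_accepted_quality(frac_rational, frac_random, papers=20_000):
    """Average quality of papers accepted under a given referee mix."""
    accepted = []
    for _ in range(papers):
        q = random.random()           # latent paper quality
        votes = 0
        for _ in range(2):            # two referees per paper
            r = random.random()
            if r < frac_rational:
                kind = "rational"
            elif r < frac_rational + frac_random:
                kind = "random"
            else:
                kind = "correct"
            votes += review(q, kind)
        if votes == 2:                # accept only on unanimous approval
            accepted.append(q)
    return sum(accepted) / len(accepted)

# Quality of the accepted pool drops as self-interested referees grow
for frac in (0.0, 0.2, 0.5):
    print(frac, round(mean_accepted_quality(frac, 0.1), 3))
```

Even in this crude version, a modest fraction of “rational” referees noticeably drags down the quality of the accepted pool – which is the paper’s central point.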

Although we probably all agree that there are some changes required to the process of peer-review, let’s make sure these changes benefit the scholarly community. Getting rid of the revision process would be a mistake, but more accountability for reviewers certainly seems necessary.

Presidential election outcomes directly influence suicide rates, study finds


I ran across this article and found it simply fascinating. And perhaps even more interesting – it is in the states that supported the national loser where suicide rates decline the least. The authors of the study attribute this to social cohesion – but is that still in place after the election? Other than 2000, when the “election” didn’t seem to end, are the losers getting together over beer to lament their loss or bash the new president?

Of course, while I find this study fascinating, I’m also somewhat skeptical of the conclusions. Apparently, suicide rates decline in all states after a presidential election. Are suicide rates really that tied to our elections? Is this a simple correlation vs. causation error? And if so, what else might be causing this correlation?

Finally, if these results are borne out – what dependent variables are we missing in our political communication research?

Panels + Posters: The high-density session

Last week, Brian Ekdale made a really important post on his blog about the value of poster sessions, prompting me to add my own insights to this very valuable topic.

I also went through the same changes in perspective that Brian describes about poster sessions – from feeling like it is a dismissal of my paper to being one of my favorite parts of attending a conference. The feedback is awesome, the ability to talk one-on-one to a variety of scholars is great – and something you often don’t get at a formal presentation.

However, moving to more poster sessions has its drawbacks. Conference presentations are meant not only to give scholars a format in which to communicate their research, but also to offer young scholars a place to develop their presentation skills. I have felt this benefit personally: presenting at conferences has made me more confident in talking about my research to others and has been invaluable in the classroom. When I graduated with my B.A., I was still uncomfortable presenting in front of large groups of people – an obvious drawback for any aspiring professor. But years of conference presentations have largely removed this fear and left me more confident in my ability to present effectively.

Thus, I think the most valuable format would be a hybrid of these two approaches. Last year, I had the opportunity to present in two “high-density” research sessions, which I thought were the perfect combination of presentation and poster. Each presenter (eight and 12 in the two sessions I was in) was given four minutes to give an overview of their paper. The rest of the time was essentially a poster session, where the audience was free to walk around and talk to the researchers about their papers. This format maintains the benefits of presentation: it gives scholars the opportunity to briefly present their research – and a four-minute overview of a paper is more likely to be useful in talking to others than a more thorough 12-15 minute presentation. It also eliminates one of the more awkward components of the poster session – walking up to a poster you know little about and finding the research less interesting than you expected. Audience members know exactly which papers they are most likely to find interesting and can develop some of their questions before heading over to talk to the presenter. Finally, it gives scholars at least a brief outline of what people outside their area of expertise or interest are investigating: too often I find that poster sessions let me target too narrowly the papers I am already interested in, making it probable that I miss other projects that could be fascinating.

Thus, in the debate over poster vs. panel session, I come down in the middle – why can’t we preserve the benefits of both?

At AEJMC 2010

Hey everyone! I have arrived in Denver, Colorado, for the 2010 conference of the Association for Education in Journalism and Mass Communication! It will be a busy couple of days – I will be presenting our paper entitled “The Correspondent, the Comic, and the Combatant: How moderator style and guest civility shape news credibility” at 5:15 on Friday evening. This paper won a top-three faculty paper award in the Communication Theory & Methodology Division – congratulations to all my co-authors on this achievement! I will also be attending numerous other presentations and enjoying many opportunities to mingle and meet my fellow scholars, because you can’t work all the time. 🙂 I might update the blog if I find some fascinating new ideas that I can’t wait to share, but otherwise I’ll be offline for the next couple of weeks. Enjoy the news in the meantime without my commentary (if that’s still possible)!

Journalists’ role: Taking the amusement out of news

For anyone who’s been a J201 TA, this comic (thanks to Hans for sending me this link) speaks to what we spend the first few weeks of the course discussing – and what our students write their first paper on (or at least they did when I taught the class). I really enjoyed the distilling of Postman’s work into a dozen comic slides. 🙂

Also, this comic reminds us that while I was as eager as anyone to poke holes in Postman’s argument (and particularly his use of evidence), he made a really interesting point that society should debate. Is the glut of information actually making people less willing or able to act? I think Postman’s concern is amplified by the growth of new sources of information, such as blogs and open-source news. For example, despite the comparison of WikiLeaks’ exposure of information on the war in Afghanistan to the Pentagon Papers, the differences trump the similarities: as Slate’s Anne Applebaum points out, what is any untrained eye to do with over 90,000 pages of information?

Postman is right on one point: too much information can be almost as damaging as not enough information, especially when society often has little incentive and no clear pathway to effect change. For all the concerns about journalists infusing their reporting with expertise – and don’t get me wrong, this is always a concern – journalists can use that expertise to help distill information and present a clear picture of what’s important (an argument made by Brent Cunningham). This is a role we still need journalists to perform, perhaps even more so when information is abundant.

Media and Search Credibility

This will probably come as no surprise to anyone who’s taught an introductory university course that requires students to do research, but for all the time they spend online, students remain uncertain about how to find credible information. A new study by Northwestern University researchers demonstrates that when performing a wide range of information-seeking activities online, students often relied on the first link that Google provided, suggesting that coming up first on Google confers credibility. The students were able to recognize .edu and .gov as rating higher in credibility than other sites, but falsely included .org in their catalogue of credible sites, with most not realizing it is available for purchase like .com or .net (as I have demonstrated by purchasing vraga.org).

But while we shake our heads at their naiveté, we might be missing the underlying cause. A recent poll shows that teens and adults alike trust technology firms like Google more than traditional media outlets – and even Facebook scored higher than “the media.” Although this study has flaws – it is unclear how exactly “trust” or “the media” are defined – it demonstrates that students’ belief in the search results provided by Google may not be unreasoned.

Of course, that is not to say that it is rational. Google is known for offering little clarity on how its search rankings are produced. And with Google branching out into new businesses, including its purchase of ITA Software, which is linked to airline flight information, many are now calling for some kind of regulation to ensure that Google does not unfairly favor its own interests. This call seems reasonable: although Google does not have a clear monopoly over search, especially with the growth of its competitor Bing, it still holds 65 percent of the search market.

Meanwhile, we are left pondering why our students trust technology firms like Google, Apple, and Microsoft more than the media. But understanding that their use of Google is driven by their trust and faith in the company may provide the key to deepening their understanding of the media environment.

MTV leads the way

It’s unusual for MTV to get praise for its television programming. But while other outlets and channels are getting criticized for their lack of positive portrayals of gay and lesbian characters, MTV has received the first-ever “Excellent” rating for their programming from GLAAD, or the Gay & Lesbian Alliance Against Defamation. And the primary source of these portrayals? MTV’s host of reality TV shows.

Meanwhile, other American broadcast networks scored much lower in this report. And this phenomenon isn’t limited to the US – the BBC has also come under fire recently for its lack of positive portrayals – and again, it is on reality TV shows that most of the gay characters surface. One point of comparison – while the British study appeared to focus on positive portrayals, there is little indication from the GLAAD study whether any portrayal of gay characters was counted, or whether only positive portrayals were valued.

This isn’t the first time television studios and networks have made a push to portray a discriminated-against group more fairly and equitably – the portrayal of black characters is perhaps the best example. It also raises the question of how much is “enough”: for example, should programming reflect the true proportions of each group in the population? Would that be fair to very small groups, who might then be seen very little? Cultivation theory and social comparison research suggest that seeing a wide variety of groups on television can shape our understanding of the world, so it seems reasonable to encourage these positive portrayals. But how much is enough?

Hot summer without a cause

2010 is looking to overtake 2005 as the hottest year ever recorded on the planet. But the heat isn’t the only thing unusual this summer – for example, the torrential downpour that hit the Midwest last week is also causing dams to collapse, airports to close, and residents to seek alternative housing, as their own homes remain filled with water and debris.

But with these environmental disasters, one thing has not changed: people remained equally unconcerned in May about “global warming” as they did at the beginning of 2010 – and even in July, global warming is not one of the public’s top priorities. The government has responded to this lack of emphasis by the public, with Senate Democrats abandoning – at least temporarily – their efforts to produce legislation to curb greenhouse gas emissions.

What is perhaps most interesting about this current “climate” is that few articles – even in many stories about the heat waves – mention global warming. Research has suggested that hotter local temperatures are linked to more discussion of global warming (Shanahan & Good make this point in their 2000 article, Heat and Hot Air: Influence of Local Temperature on Journalists’ Coverage of Global Warming). And while it is foolish to suggest that any one event is “caused” by global warming, the trend certainly matches scientists’ predictions this summer.

Meanwhile, because “global warming” does not accurately cover the range of outcomes expected from a rise in temperatures, many scientists prefer the term “climate change,” while Thomas Friedman of the New York Times has suggested the term “global weirding.”

While these redefinitions may be more accurate – and may eventually extend people’s concerns about greenhouse gases beyond heat waves – the shifting terminology makes it harder for both the public and news organizations to grasp and define the concept. And while it is not the change in terminology that caused the current lack of coverage – that is more a matter of bad timing, with so much attention focused on the poor economy and job creation – the lack of a clear name also makes confusion and dismissal more likely. The term we use to describe something is very important in determining attitudes, so scientists, politicians, and journalists alike need to choose and use a single term for the phenomenon and focus on helping the public understand the real effects – not just the heat – that can result from climate change.

More on motivated reasoning

Building somewhat on yesterday’s post, last night I was involved in a discussion of cell phones. But we were not debating the likelihood of cell phones causing cancer, nor the problems of texting. Instead, I was watching two friends try to convince each other that their own choice of cell phone – an Android vs. an iPhone 4 – was the better one.

After a relatively long debate, I stepped in to point out that each was unlikely to change the other’s mind – and in fact that research, such as the work I pointed to in yesterday’s post about the ability of corrections to backfire, would lead us to suspect that they would ultimately polarize, each becoming more firmly convinced of the rightness of their own choice. One of the debaters, however, countered this point, suggesting that the discussion could only expose them to new ideas and help each develop their own understanding of the benefits and drawbacks of each phone.

This is a good point. Deliberation theorists would like us to believe that exposure to others’ views can be a very good thing, letting us come to know our own arguments better while also exposing us to knowledge of others’ views. But in what circumstances is this reasonable exchange more likely? And how can we reconcile it with the reality that people become attached to their viewpoints, especially once they’ve committed themselves to a particular course? Indeed, this commitment and inability to change is at the heart of cognitive dissonance – we don’t want to admit that we are wrong, especially when a decision is hard to revoke.

In my opinion, cell phone ownership likely belongs in the latter camp. Cell phones tend to be very expensive and are often coupled with extended contracts with one service or another that can be difficult to break. Thus, if someone admits that the other phone (one they didn’t select or that isn’t available through their provider) is better, they are stuck with something they can’t get out of, at least not without significant cost. This is true for most of the major decisions in life – cars, politics, firmly-held issue positions: anything we are likely to debate is also something we are likely to have a committed opinion on.

But in this discussion, little was at stake. Both participants acknowledged their biases and the slim likelihood that they would change their mind. So is this the only way such debate can work? When both sides acknowledge that they are unlikely to be persuaded, but are open to new information? And of what value is that information, then, if people remain ultimately committed to the same course of action that they started with? Finally, can we use this idea – of a conversation to inform but not persuade, where each side can make small concessions without admitting an entire decision is at fault – to better develop open political debate? Because something is needed to prevent the instant polarization and the inevitable gridlock that characterizes so much of the political debate in our country today.

Cell phones, Type II error, and motivated reasoning

Reading the latest in the cell phone controversy reminds me of a discussion we had in J614 this past semester about Type I vs. Type II error. Professor Dhavan Shah likes to use the example of the American judicial system to distinguish between the two: Type I error is convicting an innocent person of a crime, whereas Type II error is letting a guilty defendant go free. Basically, as social scientists, we would rather miss a real correlation than claim a correlation exists when it isn’t there. To back up this argument, we talked a lot in class about the claims that autism is linked to vaccination. Although subsequent research has examined this claim and not found much of a link, many people are still worried about the risks of getting their children vaccinated against many childhood diseases.
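This convention shows up directly in significance testing: setting alpha at 0.05 caps the Type I (false positive) rate, while saying nothing about how often real effects are missed. A minimal sketch (my own toy example, not from the class) – repeatedly testing a coin we know is fair, so every rejection is by definition a false positive:

```python
import random

random.seed(42)

def experiment(n=100, z_crit=1.96):
    """Flip a fair coin n times; 'reject the null' (declare the coin
    biased) when the observed proportion of heads sits more than
    z_crit standard errors from 0.5. Since the coin really is fair,
    every rejection is a Type I error (a false positive)."""
    heads = sum(random.random() < 0.5 for _ in range(n))
    se = (0.25 / n) ** 0.5            # standard error of the proportion under the null
    z = (heads / n - 0.5) / se
    return abs(z) > z_crit

trials = 10_000
false_positives = sum(experiment() for _ in range(trials))
# roughly 5% of fair coins get flagged as biased, as alpha dictates
print(f"Type I error rate: {false_positives / trials:.3f}")
```

The flip side of guarding this strictly against false positives is that underpowered studies will routinely miss real effects – which is exactly the trade-off at stake in the vaccine and cell phone debates.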

The cell phone controversy reminds me of this debate. An article today on the BBC reports advice from the chief medical officer for Wales suggesting that children should text rather than talk on their cell phones. Although Dr. Jewell admits that so far there is little evidence linking cell phone use to medical problems, he is taking a “better safe than sorry” stance. This is an interesting approach – while it is hard to argue against being “safe,” especially where kids are concerned, it is worrisome that so strong a stance is being taken on so little information. And the controversy over cell phone use isn’t confined to Britain – just last month, San Francisco passed a law requiring retailers to display the amount of radiation emitted by their cell phones.

To compound the problem, other experts have warned about the problems excessive texting can produce. Last year, The New York Times reported that physicians and psychologists were concerned about the toll of texting – both psychological and physical. In late 2008, teens were texting an average of 80 times per day – a number that has probably gone up since. According to these doctors, texting may be causing anxiety among teens, distracting them in class, and damaging their thumbs – much as computer use does. In light of these concerns, Dr. Jewell’s pamphlets and advice about texting over talking seem premature, and possibly damaging.

As a final concern, new research in Political Behavior suggests that corrections to misleading claims by politicians often backfire. Brendan Nyhan & Jason Reifler argue that the effect of such corrections varies by political ideology. Their research suggests that people are loath to relinquish their belief in ideologically congruent facts, even in the face of direct contradiction (for Nyhan’s appearance on NPR, click here). This is in line with other research on motivated reasoning and processing, but provides fresh evidence that journalists need to be very careful in putting information out there, because even when it is “disproven,” it can be hard to change people’s minds.