24 April 2012

Academic Publishing

The Economist | Academic publishing: Open sesame | When research is funded by the taxpayer or by charities, the results should be available to all without charge

In 2011 Elsevier, the biggest academic-journal publisher, made a profit of £768m ($1.2 billion) on revenues of £2.1 billion. Such margins (37%, up from 36% in 2010) are possible because the journals’ content is largely provided free by researchers, and the academics who peer-review their papers are usually unpaid volunteers.
(0) Good lord. I knew they turned a very healthy profit, but 37%?! On revenues of $2B+? Wow.
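
Just to sanity-check that margin against the reported figures (a throwaway Python snippet; the numbers are straight from the Economist piece above):

```python
# Elsevier's 2011 figures as reported by the Economist
profit = 768     # £m
revenue = 2100   # £m
print(f"margin: {profit / revenue:.1%}")  # -> margin: 36.6%, i.e. ~37%
```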

(1) I'm nursing a grudge against Elsevier right now: the deadline they promised on a paper I rushed to finish months ago has come and gone, and I haven't heard a word from them. I really should know by now that these deadlines are never kept, but it still bothers me. I thought maybe this time, since it was a special issue, they would be close to the deadline. Plus I'll never get used to people making promises they can't or won't keep.

(2) A big part of the problem with missed deadlines is all those volunteer referees and editors. Everyone knows it's important to return reviews promptly, but refereeing is always low enough priority that "next week" perpetually seems like a viable option for getting it done.

(3) I'm coming around to the view that subjective experience as a customer is a reliable indication of how much competitive pressure a firm faces. It should be immediately obvious that the Postal Service is a safe monopolist just from walking into a branch office. Dealing with academic publishers is only slightly easier than dealing with telecom companies. For every paper I write I need to devote about a day just to making the publisher's special .cls format file play nicely, fiddling with the references format, changing the file types and resolutions of my figures, re-entering all the captions in different places, and interpreting the esoteric error messages from their online submission systems.

Daniel Lemire | Computer scientists need to learn about significant digits

Nevertheless, one thing that has become absolutely clear to me is that computer scientists do not know about significant digits.

When you write that the test took 304.03 s, you are telling me that the 0.03 s is somehow significant (otherwise, why tell me about it?). Yet it is almost certainly insignificant.

In computer science, you should almost never use more than two significant digits. So 304.03 s is indistinguishable from 300 s. And 33.14 MB is the same thing as 33 MB.
Hear, hear!
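
For the record, here's the little helper I'd reach for to round results the way Lemire suggests (a sketch in Python; round_sig is my name for it, not his):

```python
from math import floor, log10

def round_sig(x, sig=2):
    """Round x to `sig` significant digits."""
    if x == 0:
        return 0.0
    return round(x, sig - 1 - floor(log10(abs(x))))

print(round_sig(304.03))  # 300.0 -- the 0.03 s was noise anyway
print(round_sig(33.14))   # 33.0  -- the same thing as 33 MB
```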

The File Drawer | Chris Said | It’s the incentive structure, people! Why science reform must come from the granting agencies.

The growing problems with scientific research are by now well known: Many results in the top journals are cherry picked, methodological weaknesses and other important caveats are often swept under the rug, and a large fraction of findings cannot be replicated. In some rare cases, there is even outright fraud. This waste of resources is unfair to the general public that pays for most of the research.

The Times article places the blame for this trend on the sharp competition for grant money and on the increasing pressure to publish in high impact journals. While both of these factors certainly play contributing roles…the cause is not simply that the competition is too steep. The cause is that the competition points scientists in the wrong direction.

…scientific journals favor surprising, interesting, and statistically significant experimental results. When journal editors give preferences to these types of results, it is obvious that more false positives will be published by simple selection effects, and it is obvious that unscrupulous scientists will manipulate their data to show these types of results. These manipulations include selection from multiple analyses, selection from multiple experiments (the “file drawer” problem), and the formulation of ‘a priori’ hypotheses after the results are known.

…the agencies should favor journals that devote special sections to replications, including failures to replicate. More directly, the agencies should devote more grant money to submissions that specifically propose replications….I would [also] like to see some preference given to fully “outcome-unbiased” journals that make decisions based on the quality of the experimental design and the importance of the scientific question, not the outcome of the experiment. This type of policy naturally eliminates the temptation to manipulate data towards desired outcomes.
Yes, yes yes yes YES YES YESYESYESYES! I could not agree with this more.

As a minor side note, this would not only make the outcomes of science better, it would also make the lives of scientists better.
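
Said's point about selection effects deserves emphasis, because the arithmetic is brutal. A toy simulation (mine, not his): if you run twenty analyses on pure noise and report only the best one, you'll "find" something significant about two-thirds of the time.

```python
import random

def chance_of_false_positive(k, alpha=0.05, trials=100_000):
    """Probability the smallest of k null p-values falls below alpha.

    Under the null hypothesis, p-values are uniform on [0, 1].
    """
    hits = sum(
        min(random.random() for _ in range(k)) < alpha
        for _ in range(trials)
    )
    return hits / trials

print(chance_of_false_positive(1))   # ~0.05 -- the honest case
print(chance_of_false_positive(20))  # ~0.64 -- i.e. 1 - 0.95**20
```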


I have one suggestion that might help both with the problem Said identifies and with unseating Elsevier and the others. The problem, in a nutshell, is that everyone wants to publish in the "top" journals, and we're left with a collective action problem.

I suggest that grad students who know they don't want to go into academic careers should be encouraged to publish in other, better-behaved but less highly regarded outlets. University departments could establish awards for publishing in better-behaved journals, or be more generous with travel grants to conferences that adopt better policies, or directly reward students willing to take the time to report negative results.

Obviously this runs into trouble insofar as grad students are often working on teams with people who are still incentivized to pursue the "top" outlets, but I think it's still a marginal improvement.

It might also be possible to convince tenure and search committees to look kindly upon applicants who have published at least once in such an outlet.

1 comment:

  1. I think the corruption started with the publication of all of those social and political "science" papers. The soft "sciences" need to compete with the tech people, and they do it with words, not deeds. And they have the ability to churn out words much better than the techies. So, of course, the university leadership is faced with hordes of social studies professors who have published hundreds of papers that no one reads or cares about, compared to the scientist who only publishes one paper that explains how cancer cells work, and they don't know who to promote or pay more to. As a result, the science researcher has to write up a bunch of other crappy papers to make his numbers come out right, and the corruption of real science begins.

    And don't get me started on "climate science", which is lower, in my estimation, than womyn's studies...
