The dark side...
Nothing like a day in the uScope (dark) room, stuck between nuclei chasing and wound scans, to get one's reading up to date. Today I've come across two interesting reports on the other (also dark, and increasingly visible) side of science:
1 – How deep is the Impact?
A recent JCB editorial [1] was dedicated to the infamous black box of science publishing: the Thomson/ISI impact factor (IF). Opening with an excellent summary of the IF's curiosities and peculiarities, the editors report on how they (Rockefeller University Press) decided to buy from Thomson Scientific the citation data for their three journals (JEM, JCB, JGP) and do the simple math of the impact factors themselves… Guess what? The results did not match the published impact factors!
From wrong article-type designations to incorrect citation counts, it appears that anything goes… Confronted with these results, Thomson replied that the database they had provided was a different “version” from the one used for the JCR… A new database was provided and still… 2+2=5. The bottom line: if authors supply unreliable data to publishers, their articles are rejected or retracted. So why should they be judged by an “irreproducible factor” (if)? (Might just be because the funding guys never read these editorials…)
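For the record, the “simple math” is the standard two-year formula: citations received in year Y by everything a journal published in years Y-1 and Y-2, divided by the number of “citable items” it published in those two years. A quick sketch (with made-up numbers, not any journal's actual data) shows where the arguments start:

def impact_factor(citations, citable_items):
    # Year-Y IF = citations received in year Y by items published in Y-1 and Y-2,
    # divided by the number of "citable items" published in Y-1 and Y-2.
    return citations / citable_items

# Hypothetical journal: 5,000 citations in 2007 to its 2005-2006 papers,
# and 600 items from 2005-2006 counted as "citable".
print(round(impact_factor(5000, 600), 1))   # -> 8.3

The dispute in the editorial is precisely about those two counts: which article types get labelled “citable”, and which citations are actually matched to the journal.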
*Thomson Scientific has issued an official reply [2] to this editorial, and the JCB editors have further commented on it [3].
1. Rossner, M., E. Hill, and H. Van Epps. 2007. Show me the data. J. Cell Biol. 179:1091–1092 info:doi/10.1083/jcb.200711140
2. Pendlebury, D.A. 2007. Article Titled "Show me the Data", Journal of Cell Biology, Vol. 179, No. 6, 1091–1092, 17 December 2007 is Misleading and Inaccurate. http://scientific.thomson.com/citationimpactforum/8427045/ (accessed January 4, 2008).
3. Rossner, M., H. Van Epps, and E. Hill. 2008. Irreproducible results: a response to Thomson Scientific. J. Cell Biol. 180:254–255 info:doi/10.1083/jcb.200801036
2 – Seeing double…
Picking up on that last sentence, the rotten apples are not always left out. Claims of article duplication (either by the same authors or by others) and of plagiarism have increased in recent years. But are scientists really publishing more duplicate papers? In a commentary in Nature [1], Errami and Garner report on how they used eTBLAST, a freely available text-similarity tool they created, to probe a subset of more than 62,000 PubMed abstracts from the last 12 months for possible duplicates. The resulting 421 hits were deposited online in Déjà vu, with an estimated false-positive rate of 1%. Manual evaluation of the results was difficult in many cases because the full text was unavailable (for example, articles published by the same authors in non-English national journals). Extrapolating this number to the size of the whole PubMed database would give 117,500 duplicates with the same authors, against the 739 records currently marked as duplicates in PubMed! Expanding their approach, the authors have by now reached approximately 70,000 candidate duplicates in their database. Although these numbers must be viewed with caution, and duplications by the same authors (legitimate or illegitimate) must be distinguished from pure plagiarism (which was rarely detected), they do leave a clear message!
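A back-of-the-envelope check of that extrapolation (my own sketch; the total PubMed size used below is my rough assumption, not a figure from the commentary):

# Rough check of the extrapolation; the total PubMed size is my own assumption.
sample_size = 62_000          # "more than 62,000" abstracts screened with eTBLAST
duplicate_hits = 421          # candidate duplicates deposited in Déjà vu
pubmed_size = 17_300_000      # assumed total PubMed records at the time (rough)

duplicate_rate = duplicate_hits / sample_size      # ~0.68% of the sample
estimate = duplicate_rate * pubmed_size            # scaled to the whole database

print(f"{duplicate_rate:.2%} of the sample -> ~{estimate:,.0f} candidate duplicates")
# -> 0.68% of the sample -> ~117,473, in the same ballpark as the 117,500 quoted
#    above (versus only 739 records currently flagged as duplicates in PubMed).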
The steady increase in the number of publications over the past few years constitutes a growing opportunity for this practice, and no single country appears to lead the way, since the number of duplicates was roughly proportional to each country's PubMed contribution. It is up to publishers, and also to authors, to fight this apparently growing epidemic…
1. Errami, M., and H. Garner. 2008. A tale of two citations. Nature. 451:397–399 info:doi/10.1038/451397a