Arriving at the office this morning and opening my email, I saw today's Science editorial, "Impact Factor Distortions." In it, Editor-in-Chief Bruce Alberts reports that 75 organizations, including the American Association for the Advancement of Science (AAAS), and more than 150 prominent scientists have signed the San Francisco Declaration on Research Assessment (DORA), which grew out of the December 2012 meeting of the American Society for Cell Biology. The declaration holds that the scientific community should stop using the journal impact factor to evaluate the work of individual scientists: the impact factor should not serve as a surrogate measure of a scientist's contributions, nor be used in decisions on hiring, promotion, or grant funding.
The Science editorial gives the reasons for this stance (many of which have long been familiar):
1) The impact factor is a tool for evaluating journals, not individual scientists. Many major breakthroughs, by their nature, may require long periods of accumulation (a rough sketch of how the impact factor is computed follows this list).
2) Inappropriate use of the impact factor is highly damaging: it distorts journals' publication policies (for example, favoring articles likely to be highly cited) and buries prominent journals under a flood of indiscriminate submissions.
3) The most serious harm may be that it hinders innovation. The impact factor encourages imitation and bandwagon-jumping, so fields that are already hot (and whose journals have high impact factors) become even more crowded, and more people focus on publishing high-impact-factor papers rather than on innovative research.
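As background (my note here, not part of the editorial or the declaration), the two-year journal impact factor for year $y$ is computed roughly as

\[
\mathrm{IF}_y \;=\; \frac{C_y(y-1) + C_y(y-2)}{N_{y-1} + N_{y-2}},
\]

where $C_y(t)$ is the number of citations received in year $y$ by items the journal published in year $t$, and $N_t$ is the number of citable items the journal published in year $t$. It is an average over a journal's recent output, taken on a two-year window and over a heavily skewed citation distribution, which is why it says little about any individual article, let alone about an individual scientist.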
The Science editorial argues that an objective evaluation of a researcher's scientific contributions requires respected scientists, acting as evaluators, to read that researcher's representative work carefully. The San Francisco Declaration gives a number of concrete recommendations for evaluation:
General Recommendation
1. Do not use journal-based metrics, such as journal impact factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist's contributions, or in hiring, promotion or funding decisions.
For funding agencies
2. Be explicit about the criteria used in evaluating the scientific productivity of grant applicants and clearly highlight, especially for early-stage investigators, that the scientific content of a paper is much more important than publication metrics or the identity of the journal in which it was published.
3. For the purposes of research assessment, consider the value and impact of all research outputs (including datasets and software) in addition to research publications, and consider a broad range of impact measures including qualitative indicators of research impact, such as influence on policy and practice.
For institutions
4. Be explicit about the criteria used to reach hiring, tenure, and promotion decisions, clearly highlighting, especially for early-stage investigators, that the scientific content of a paper is much more important than publication metrics or the identity of the journal in which it was published.
5. For the purposes of research assessment, consider the value and impact of all research outputs (including datasets and software) in addition to research publications, and consider a broad range of impact measures including qualitative indicators of research impact, such as influence on policy and practice.
For publishers
6. Greatly reduce emphasis on the journal impact factor as a promotional tool, ideally by ceasing to promote the impact factor or by presenting the metric in the context of a variety of journal-based metrics (e.g., 5-year impact factor, EigenFactor [6], SCImago [7], editorial and publication times, etc.) that provide a richer view of journal performance.
7. Make available a range of article-level metrics to encourage a shift toward assessment based on the scientific content of an article rather than publication metrics of the journal in which it was published.
8. Encourage responsible authorship practices and the provision of information about the specific contributions of each author.
9. Whether a journal is open-access or subscription-based, remove all reuse limitations on reference lists in research articles and make them available under the Creative Commons Public Domain Dedication. (See reference 8.)
10. Remove or reduce the constraints on the number of references in research articles, and, where appropriate, mandate the citation of primary literature in favor of reviews in order to give credit to the group(s) who first reported a finding.
For organizations that supply metrics
11. Be open and transparent by providing data and methods used to calculate all metrics.
12. Provide the data under a licence that allows unrestricted reuse, and provide computational access to data.
13. Be clear that inappropriate manipulation of metrics will not be tolerated; be explicit about what constitutes inappropriate manipulation and what measures will be taken to combat this.
14. Account for the variation in article types (e.g., reviews versus research articles), and in different subject areas when metrics are used, aggregated, or compared.
For researchers
15. When involved in committees making decisions about funding, hiring, tenure, or promotion, make assessments based on scientific content rather than publication metrics.
16. Wherever appropriate, cite primary literature in which observations are first reported rather than reviews in order to give credit where credit is due.
17. Use a range of article metrics and indicators on personal/supporting statements, as evidence of the impact of individual published articles and other research outputs [9].
18. Challenge research assessment practices that rely inappropriately on Journal Impact Factors and promote best practice that focuses on the value and influence of specific research outputs.