Definition
New journals, which are indexed from their first published issue, receive an
impact factor after two years of indexing; in this case, the citations to the year
prior to Volume 1, and the number of articles published in the year prior to
Volume 1, are known to be zero.
Journals that are indexed starting with a volume other than the first will not
get an impact factor until they have been indexed for three years. Annuals and
other irregular publications will sometimes publish no items in a particular year,
affecting the count.
The impact factor relates to a specific time period; it is possible to calculate it for
any desired period, and the Journal Citation Reports (JCR) also includes a 5-year
impact factor. The JCR ranks journals by impact factor, optionally within a
discipline, such as organic chemistry or psychiatry.
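The two-year calculation described above can be sketched as follows; the journal data and the function name here are our own illustrative inventions, not JCR data:

```python
def impact_factor(citations_by_year, papers_by_year, year):
    """Two-year impact factor for `year`: citations received in `year`
    to items published in the two preceding years, divided by the
    number of citable items published in those two years."""
    cites = citations_by_year.get(year - 1, 0) + citations_by_year.get(year - 2, 0)
    items = papers_by_year.get(year - 1, 0) + papers_by_year.get(year - 2, 0)
    if items == 0:
        # New or irregular journals can have an empty two-year window.
        raise ValueError("no citable items in the two preceding years")
    return cites / items

# Hypothetical journal: in 2006 it drew 150 citations to its 2005 items
# and 90 to its 2004 items, having published 60 and 40 citable items.
print(impact_factor({2005: 150, 2004: 90}, {2005: 60, 2004: 40}, 2006))  # 2.4
```

Note that what counts as a "citable item" in the denominator is left to the indexer, a point taken up under Weaknesses below.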
History
The impact factor was first proposed by Eugene Garfield (a chemist, librarian,
and linguist by training) in a 1955 article in the journal Science.
Garfield saw impact factor as a way to "eliminate the uncritical citation of
fraudulent, incomplete, or obsolete data by making it possible for the
conscientious scholar to be aware of criticisms of earlier papers."
For Garfield's reflections on the impact factor over fifty years after its invention,
see his article in JAMA entitled, "History and Meaning of the Journal Impact
Factor".
Strengths
Weaknesses
The impact factor is misused alarmingly often. Though it was intended only to
measure a journal's influence, it is sometimes used to judge the influence of
individual researchers.
Even though the impact factor is supposed to measure a journal's prestige and value,
it can be affected by a number of things unrelated to a journal's quality.
Examples include journal self-citation, publication timing, and the types of articles published.
Since the impact factor cannot be calculated with fewer than two years of data, new
journals are left out of impact factor lists.
The impact factor only includes journals indexed by Thomson Reuters Scientific.
Thomson Reuters Scientific, which updates impact factors every year, does not
share its criteria for what constitutes a "citable paper," which forms the
denominator of the impact factor equation.
Example: The journal with the highest impact factor in the field of medicine in
2006 is the New England Journal of Medicine, at 51.296. In contrast, the highest
impact factor in the field of nursing in 2006 belongs to Birth: Issues in Perinatal
Care, at 2.058. This contrast illustrates why the impact factor should not be used
to compare journals in different fields of study.
Additional Resources: Journal Citation Reports, http://isiknowledge.com/
The h-index was first proposed by J.E. Hirsch, a physicist at the University of
California, San Diego. A scientist has index h if h of his or her papers have at
least h citations each. In other words, a scholar with an index of h has published h papers each of which
has been cited by others at least h times. Thus, the h-index reflects both the
number of publications and the number of citations per publication. The index is
designed to improve upon simpler measures such as the total number of citations
or publications. The index works properly only for comparing scientists working
in the same field; citation conventions differ widely among different fields.
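Given this definition, computing h from a list of per-paper citation counts is straightforward; a minimal sketch:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:  # this paper still clears the threshold
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers cited at least 4 times
print(h_index([1000, 2, 1]))      # 2: one blockbuster paper barely moves h
```

The second example shows why a single highly cited paper contributes little to h, a property discussed under Criticism below.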
Hirsch suggested that, for physicists, a value for h of about 10–12 might be a
useful guideline for tenure decisions at major research universities. A value of
about 18 could mean a full professorship, 15–20 could mean a fellowship in the
American Physical Society, and 45 or higher could mean membership in the
United States National Academy of Sciences.[3] Little systematic investigation has
been made on how academic recognition correlates with h index over different
institutions, nations and fields of study.
Advantages
The h-index was intended to address the main disadvantages of other bibliometric
indicators, such as total number of papers or total number of citations. Total number of
papers does not account for the quality of scientific publications, while total number of
citations can be disproportionately affected by participation in a single publication of
major influence. The h-index is intended to measure simultaneously the quality and
sustainability of scientific output, as well as, to some extent, the diversity of scientific
research.
Criticism
Michael Nielsen points out that "...the h-index contains little information beyond
the total number of citations, and is not properly regarded as a new measure of
impact at all".
The h-index is bounded by the total number of publications. This means that
scientists with a short career are at an inherent disadvantage, regardless of the
importance of their discoveries. For example, Évariste Galois' h-index is 2, and
will remain so forever. Had Albert Einstein died in early 1906, his h-index would
be stuck at 4 or 5, despite his being widely acknowledged as one of the most
important physicists, even considering only his publications to that date.
The h-index does not consider the context of citations. For example, citations in a
paper are often made simply to flesh out an introduction, having no other
significance to the work. The h-index also fails to distinguish other contextual
cases: citations made in a negative context and citations of fraudulent or
retracted work.
The h-index does not account for singular successful publications. Two scientists
may have the same h-index, say, h = 30, but one has 20 papers that have been
cited more than 1000 times and the other has none.
The h-index does not take into account self-citations. A researcher working in
the same field for a long time will likely refer to his or her own previous
publications, and the accumulating self-citations inflate the h-value. This issue
is even more severe in large networks of researchers, whose members can cite
papers by people in the same alliance even when they are not coauthors.
The h-index does not account for the number of authors of a paper. If the impact
of a paper is the number of citations it receives, it might be logical to divide that
impact by the number of authors involved. Not taking into account the number of
authors could allow gaming the h-index and other similar indices: for example,
two equally capable researchers could agree to share authorship on all their
papers, thus increasing each of their h-indices. Even in the absence of such
explicit gaming, the h-index and similar indices tend to favor fields with larger
groups, e.g. experimental over theoretical.
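One simple way to make that division concrete is to credit each paper with its citations divided by its author count before applying the usual h-style threshold. Several normalizations have been proposed in the literature; this sketch is our own illustration, not a standard index:

```python
def fractional_h_index(papers):
    """papers: list of (citations, n_authors) pairs.
    Credit each paper with citations / n_authors, then take the largest
    h such that at least h papers have fractional credit >= h."""
    ranked = sorted((c / n for c, n in papers), reverse=True)
    h = 0
    for rank, credit in enumerate(ranked, start=1):
        if credit >= rank:
            h = rank
        else:
            break
    return h

# Two researchers who co-sign every paper split the credit, so the
# shared-authorship strategy no longer inflates the index:
solo   = [(10, 1), (8, 1), (6, 1), (5, 1)]
shared = [(10, 2), (8, 2), (6, 2), (5, 2)]
print(fractional_h_index(solo))    # 4
print(fractional_h_index(shared))  # 3
```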
http://www.scimagojr.com/