
IMPACT FACTOR

Definition

Seeks to measure the influence a journal has in its field.


Uses "bibliometric analysis" of journals indexed in the ISI database. More
specifically, it measures how often scholars and researchers have cited articles in
a particular journal in the most recent two years.
Simply put, the higher a journal's impact factor, the more influence the
journal is supposed to have in its field.

Formula for calculation


In a given year, the impact factor of a journal is the average number of citations
received that year by papers the journal published during the two preceding years. For
example, the 2008 impact factor of a journal would be calculated as follows:

A = the number of times articles published in 2006 and 2007 were cited by
indexed journals during 2008
B = the total number of "citable items" published in 2006 and 2007
("Citable items" are usually articles, reviews, proceedings, or notes; not editorials
or letters to the editor.)

2008 impact factor = A/B

(Note that 2008 impact factors are actually published in 2009; they cannot be calculated
until all of the 2008 publications have been received by the indexing agency.)
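
As a minimal sketch of this calculation in Python (the function name and the
journal figures are hypothetical, chosen only to illustrate the arithmetic):

    def impact_factor(citations_this_year, citable_items_prior_two_years):
        # A = citations received this year to items the journal published
        # in the two preceding years; B = "citable items" published in
        # those two years. Impact factor = A / B.
        return citations_this_year / citable_items_prior_two_years

    # Hypothetical journal: A = 420 citations in 2008 to its 2006-2007
    # articles, B = 150 citable items published in 2006-2007.
    print(impact_factor(420, 150))  # 2.8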

New journals, which are indexed from their first published issue, will receive an
impact factor after two years of indexing; in this case, the citations to, and the
number of articles published in, the year prior to Volume 1 are known to be zero.
Journals that are indexed starting with a volume other than the first will not get
an impact factor until they have been indexed for three years. Annuals and other
irregular publications will sometimes publish no items in a particular year,
affecting the count.
The impact factor relates to a specific time period; it is possible to calculate it for
any desired period, and the Journal Citation Reports (JCR) also includes a 5-year
impact factor. The JCR ranks journals by impact factor, optionally within a
discipline, such as organic chemistry or psychiatry.

History

The impact factor was first proposed by Eugene Garfield (a chemist, librarian,
and linguist by training) in a 1955 article in the journal Science.
Garfield saw impact factor as a way to "eliminate the uncritical citation of
fraudulent, incomplete, or obsolete data by making it possible for the
conscientious scholar to be aware of criticisms of earlier papers."
For Garfield's reflections on the impact factor over fifty years after its invention,
see his article in JAMA entitled, "History and Meaning of the Journal Impact
Factor".

Strengths

Impact Factor gives researchers a quantitative measure of a journal's influence
and impact.
Impact Factor is a simple metric, and provides a consistent way of comparing
journals.
Ranking is consistent within fields of study.

Weaknesses

Impact factor is frequently misused. Though it was intended only to measure a
journal's influence, people sometimes use it to judge the influence of the
researchers themselves.
Even though impact factor is supposed to measure a journal's prestige and value,
it can be affected by a number of things unrelated to a journal's quality.
Examples: journal self-citation, publication timing, and the types of articles published.

Since the impact factor cannot be calculated with fewer than two years of data,
new journals are left out of impact factor lists.

The impact factor only includes journals indexed by Thomson Reuters Scientific.
Thomson Reuters Scientific, which updates impact factors every year, does not
disclose its criteria for what constitutes a "citable item," which forms the
denominator of the impact factor equation.

Impact factors vary widely by subject area.

Example: The journal with the highest Impact Factor in the field of medicine in
2006 was the New England Journal of Medicine, at 51.296. In contrast, the highest
Impact Factor in the field of nursing in 2006 belonged to Birth: Issues in Perinatal
Care, at 2.058. Impact Factor therefore should not be used to compare journals in
different fields of study.

Additional resources: Journal Citation Reports, http://isiknowledge.com/

H-Index, also known as the Hirsch Index


The index is based on the distribution of citations received by a given researcher's
publications. Hirsch writes:
A scientist has index h if h of [his/her] Np papers have at least h citations each,
and the other (Np − h) papers have at most h citations each.
The impact factor measures the influence of scholarly journals, while the h-index
measures the influence of the researchers themselves.

The H-Index was first proposed by J.E. Hirsch, a physicist at the University of
California at San Diego.

In other words, a scholar with an index of h has published h papers each of which
has been cited by others at least h times. Thus, the h-index reflects both the
number of publications and the number of citations per publication. The index is
designed to improve upon simpler measures such as the total number of citations
or publications. The index works properly only for comparing scientists working
in the same field; citation conventions differ widely among different fields.
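
Hirsch's definition translates directly into a short computation. A sketch in
Python (the function name is ours, not a standard API):

    def h_index(citation_counts):
        # Largest h such that at least h papers have h or more citations.
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # A scholar whose papers have been cited [10, 8, 5, 4, 3] times has
    # h = 4: four papers have at least 4 citations each.
    print(h_index([10, 8, 5, 4, 3]))  # 4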

The h-index serves as an alternative to more traditional journal impact factor
metrics in evaluating the impact of a particular researcher's work.
Because only the most highly cited articles contribute to the h-index, its
determination is a relatively simple process. Hirsch has demonstrated that h has
high predictive value for whether a scientist has won honors like National
Academy membership or the Nobel Prize. In physics, a moderately productive
scientist should have an h equal to the number of years of service, while
biomedical scientists tend to have higher values. The h-index grows as citations
accumulate, and thus it depends on the 'academic age' of a researcher.

Hirsch suggested that, for physicists, a value for h of about 10–12 might be a
useful guideline for tenure decisions at major research universities. A value of
about 18 could mean a full professorship, 15–20 could mean a fellowship in the
American Physical Society, and 45 or higher could mean membership in the
United States National Academy of Sciences.[3] Little systematic investigation has
been made into how academic recognition correlates with h-index across
institutions, nations, and fields of study.

Advantages
The h-index was intended to address the main disadvantages of other bibliometric
indicators, such as total number of papers or total number of citations. Total number of
papers does not account for the quality of scientific publications, while total number of
citations can be disproportionately affected by participation in a single publication of
major influence. The h-index is intended to measure simultaneously the quality and
sustainability of scientific output, as well as, to some extent, the diversity of scientific
research.

Criticism

Michael Nielsen points out that "...the h-index contains little information beyond
the total number of citations, and is not properly regarded as a new measure of
impact at all".
The h-index is bounded by the total number of publications. This means that
scientists with a short career are at an inherent disadvantage, regardless of the
importance of their discoveries. For example, Évariste Galois' h-index is 2, and
will remain so forever. Had Albert Einstein died in early 1906, his h-index would
be stuck at 4 or 5, despite his being widely acknowledged as one of the most
important physicists, even considering only his publications to that date.
The h-index does not consider the context of citations. For example, citations in a
paper are often made simply to flesh out an introduction, without otherwise
bearing on the work. The h-index also fails to distinguish other contexts:
citations made in a negative light, and citations of fraudulent or retracted
work.
The h-index does not account for singular successful publications. Two scientists
may have the same h-index, say, h = 30, but one has 20 papers that have been
cited more than 1000 times and the other has none.
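
Reusing the h_index sketch from above on two hypothetical citation records makes
this concrete:

    # Both records yield h = 30, despite very different citation profiles.
    steady = [30] * 30 + [5] * 10    # thirty papers cited 30 times, no blockbusters
    star = [1500] * 20 + [30] * 10   # twenty papers cited 1500 times each
    print(h_index(steady), h_index(star))  # 30 30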
The h-index does not take into account self-citations. A researcher working in
the same field for a long time will likely cite his or her own previous
publications, and these accumulating self-citations inflate the h value. The issue
is even more severe in large networks of researchers, whose members can cite one
another's papers even when they are not coauthors.
The h-index does not account for the number of authors of a paper. If the impact
of a paper is the number of citations it receives, it might be logical to divide
that impact by the number of authors involved (one such normalization is sketched
below). Not taking the number of authors into account allows gaming of the
h-index and similar indices: for example, two equally capable researchers could
agree to share authorship on all their papers, thus increasing each of their
h-indices. Even in the absence of such explicit gaming, the h-index and similar
indices tend to favor fields with larger groups, e.g., experimental over
theoretical work.
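
One illustrative normalization, sketched here under our own assumptions rather
than taken from the text, divides each paper's citations evenly among its authors
before applying the usual h computation:

    def fractional_h_index(papers):
        # papers: list of (citations, n_authors) pairs. Citations are
        # split evenly among authors before the usual h computation;
        # this is one illustrative scheme, not a canonical index.
        normalized = sorted((c / a for c, a in papers), reverse=True)
        h = 0
        for rank, cites in enumerate(normalized, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Ten papers, each cited 8 times: a solo author keeps h = 8, while two
    # coauthors who share every paper drop to h = 4 under this scheme.
    print(fractional_h_index([(8, 1)] * 10))  # 8
    print(fractional_h_index([(8, 2)] * 10))  # 4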

Additional resource: http://www.scimagojr.com/
