On publishing in ICT4D

During the recent ICTD2010 conference, Hari kindly brought together a group of us to discuss academic publishing in the field of ICT4D.  Each speaker was to talk for about ten minutes, directing our ‘advice’ primarily towards those who may be less experienced in academic publishing.  Whilst I absolutely love seeing, holding and smelling the first copy of one of my new books, or reading one of my new papers in an academic journal, or seeing authors that I respect referencing one of my publications in their own work, I now recognise that a system that I once admired has become fundamentally, perhaps fatally, flawed.  There is sadly much that is not really scholarly, and little at all that is value free, in the world of academic publishing today.  It does not foster the excellence or originality that it is intended to achieve.  All too often it leads instead to a morass of mediocrity and replication.

Two comments in the distant past still haunt me:

  • when my first academic paper was published, a friend and colleague said “congratulations, but you don’t expect anyone will read it do you”; and
  • a senior colleague in a government department once said to me: “I don’t ever read academic papers, I get consultants to provide a short synthesis of them for me”.

The reality of academic publishing is that very few papers are ever actually read, and few people are ever influenced by what is written in journals.

Some of the most challenging problems to do with academic publishing are:

  • Academic journals are fundamentally a way to ensure professional exclusivity.  They are a means through which one group of academics excludes others from participating in their ‘mysteries’.  Thus ‘apprentices’ have to learn the rituals and obey the rules if they wish to belong to this exclusive and privileged club.
  • Because of the need for authors to obey the rules, journals all too frequently fail to promote the very innovation that is meant to be their life blood.  There is a real danger that referees or editors will reject papers that are too innovative or fail to abide by the logics and requirements of a particular journal’s editorial board.
  • Many citation cartels exist, whereby in order to boost their rankings in citation indices, academics agree to cite each other’s papers in their own works.
  • There are also real issues surrounding the dominance of the English language, and far too few journal editors or reviewers are willing to pay heed to different cultural traditions of academic writing style.  We should do much more to enable people from different linguistic backgrounds to get their papers published in the ‘top’ journals.
  • Peer review is by no means the innocent, quality control exercise it is meant to be.  Far too often academics use it as a way of preventing ideas that are contrary to their own from being published.
  • Citation indices usually only incorporate the more prestigious journals, and thus often omit the more innovative and cutting edge papers.
  • The emphasis on quantity rather than quality of publication means that vast numbers of dreadful papers are submitted to journals – and it is very frustrating for editors and referees to have to sift through the dross!

The net outcome of these problems is that far too many published papers are mediocre and tend to replicate existing knowledge.  Moreover, many of these problems have become exacerbated over the last 20 years as academic publication in ‘top’ journals has become such an important part of research assessment exercises.

I offered five key tips for less experienced academics who wish to succeed in this environment:

  • The most important tip is that one must realise that academic publishing is a game.  New academics therefore have to learn the rules and play by them – if they want to achieve success in terms that the profession’s gatekeepers have defined.  Once your career is established, then you are in a position to try to change the rules!
  • Write something that is reasonably good and then submit it to a journal.  Referees are bound to suggest revisions, and so don’t be hurt by the comments.  Use them, alongside your own developing ideas, to improve the paper and resubmit it – in most cases it will eventually be published (as long as it is reasonably good in the first place!)
  • Publish less, but publish better; focus on quality rather than quantity.  When I was head of department, I remember encouraging colleagues to make sure that they published just two or three papers a year in major journals, and a book every three to four years.
  • Remember that few people actually read academic journals.  If you want your ideas to have an impact, it is therefore essential that you make them available in different formats and contexts – as, for example, through your own blog.
  • Only ever agree to have your supervisor’s name as an author on the paper if she or he has actually written a substantial amount of it!  Good academics don’t need to have their names on your research – although it is always nice to recognise their advice in an acknowledgement.

Two final points are worth mentioning.  The first is that publishing in a multidisciplinary field such as ICT4D is fraught with a particular set of additional difficulties.  Where academic success is defined in large part through publication in prestigious journals, most academics seek to publish their work in their own discipline’s top-ranked journals.  It is thus more prestigious for a computer scientist working in ICT4D to publish in a top computer science journal than in a new ICT4D journal.  Those who edit cross-disciplinary journals often therefore find that the papers submitted to them are those that have been rejected by other, more mainstream journals.  Consequently, papers published in multidisciplinary journals are often of lower quality than those in the major single-discipline journals.  This does, though, provide editors of multidisciplinary journals with an opportunity to be innovative and creative in what and how they publish.  Moreover, it is incumbent on those working in the field to support new journals that are indeed trying to break the mould of traditional academic arrogance and exclusivity.

Finally, we need to explore alternative modalities of publishing.  Those of us working in the field of ICT4D should seek to use ICTs creatively to enable multiple voices from many different backgrounds to share their research findings.  However, we still need to find appropriate business models to enable more open and free publication options to be created.  Traditionally, journal publishers have added considerable value to the publication process, not least through funding the editorial and publication process.  Such costs remain to be covered, and few ‘free’ journals have yet actually enabled high quality original academic papers to be widely disseminated. We also need to work creatively with existing publishers, since they have much to offer the publication process.

For some of my more detailed reflections on peer review see:

[For the presentations by Geoff Walsham, Cathy Urquhart and Shirin Madon, as well as the full discussion, see the video “Publishing ICT4D Research”, available from ICTD2010 videos and photos]


Peer review – implications of ‘corruption’

Two separate events that occurred at the start of this month have made me reflect once again on the many myths surrounding the ‘hallowed’ peer review process on which so much academic credibility is seen to lie.

First, I received an e-mail from a friend for whom I had written a reference in connection with a grant application that they had submitted to one of the UK’s Research Councils.  They had received the disappointing news that despite two strong references, a third referee had been highly critical of the proposal, casting aspersions on their professional expertise and on the quality of the proposed research.  I was appalled by this.  The research proposal was one of the best I have recently read, and from what the Research Council said of the comments of the ‘third’ referee, they seemed to me to be completely inappropriate.  Either the referee was ignorant of the research field, or they had vested interests in ensuring that this research was not funded.

By coincidence, at about the same time, the BBC picked up on an open letter sent by a group of scientists last July that also criticised the traditional peer review process, but this time with respect to journal articles.  As the BBC Science Correspondent Pallab Ghosh commented, “Stem cell experts say they believe a small group of scientists is effectively vetoing high quality science from publication in journals. In some cases they say it might be done to deliberately stifle research that is in competition with their own”. The 14 scientists had written that “Stem cell biology is highly topical and is attracting great interest not only within the research community but also from politicians, patient groups and the general public. However, the standard of publications in the field is very variable. Papers that are scientifically flawed or comprise only modest technical increments often attract undue profile. At the same time publication of truly original findings may be delayed or rejected”.  To try to overcome this, they proposed that “when a paper is published, the reviews, response to reviews and associated editorial correspondence could be provided as Supplementary Information, while preserving anonymity of the referees”.

Peer review is one of the fundamental principles upon which the edifice of academic reputation – and financial reward – is based.  However, the system is inherently flawed, and I find it somewhat surprising that it still retains such power.  Six issues warrant particular consideration:

  • First, peer review is based on a belief that ‘science’ is in some way value free; that individual prejudice, political beliefs, or social agendas have no effect on academics’ judgements as to the quality of research.  Whilst many academics do indeed try to reach impartial judgements about the quality of work that they review, they undoubtedly bring biases to such judgements as a result of their own lives and research practices.  Moreover, editors of journals and Research Council panels exercise immense power through their choices of whom to ask to act as referees for papers or grant applications.  Science is not, and never has been, value free.
  • Academic status is in part based upon the number of citations a paper receives.  Academics thus seek to publish in the most prestigious journals that have high citation indices.  For a very long time, cartels of academics have therefore operated, deliberately citing each other’s works so as mutually to raise their profiles and status.  Academics are only human, and it is scarcely surprising that they operate in this way.  There is nothing exceptional about this.  Some of us may not think it right, but it happens.
  • One way that new ideas can begin to find voice is through the creation of new journals.  However, these take time to become established, and when status relies so much on having papers published in the most prestigious journals, it remains very difficult for new approaches and ideas to find widespread expression in this way; rarely do the most eminent academics deliberately choose to publish in new and ‘unimportant’ journals!
  • Those who run the major journals and sit on grant-giving Research Council Boards have immense power, and most do their very best to be fair in the judgements that they reach.  However, by definition, the peer review system is designed more to endorse existing approaches to intellectual enquiry, rather than to encourage innovative research.
  • None of this would matter particularly, and could merely be dismissed as irrelevant academic posturing, if there was not so much money involved.  Academic prestige and income depend fundamentally on success in publications and grant applications.  The UK’s Research Councils thus invest some £2.8 billion annually in support of research, and it is crucial that this is disbursed wisely.  It is therefore extremely sad – albeit typical – that in the case of my friend who had their grant application rejected, there was no right of appeal against the decision.  Panel chairs and editors must have the guts to stand up and recognise when they see flawed decisions being made by referees.  It is thus extremely encouraging to see that some Research Councils, notably EPSRC, are trying to create exciting new ways to support research that do not place excessive emphasis on traditional peer review processes.
  • Finally, there is now a good case for exploring alternative ways of judging research ‘quality’.  ‘Publishing’ papers openly on freely available websites, and then assessing their quality by the number of ‘hits’ that they get would, for example, be a rather more democratic process than that through which a small number of ‘eminent’ academics judge their peers.  Of course this would be as open to abuse as existing systems, but at least it would present an alternative viewpoint.

We must debunk the myth that there is something ‘pure’ or ‘objective’ about academic peer review.  It is a social process, just like any other social process.  It has strengths and weaknesses.  For long, it has served the academic community well.  However, as the 14 stem cell biologists who raised the lid of Pandora’s Box implied, it is a system that fails to encourage the most original research, and instead supports the system that gave rise to it.  After all, that is not so surprising, is it?
