Reviewers: an editor’s nightmare (or “You can’t live with ’em…”)

Tuesday, 17 June, 2008

I’m just completing a moderately busy spring season at the Journal of Maps, which has left me feeling a little battle-worn. Having had two deadlines for special issues come (and go), I’ve had to deal with two “slugs” of papers coming in for review. This is enough work in itself, but was unfortunately compounded by a problem with outgoing email on the JoM server. We use an “in-house” peer-review system on our server which sends out emails at various (editor-controlled) stages of the review process. The system was successfully submitting emails for sending. Unfortunately the server wasn’t relaying them, and no error was reported, which meant it took a couple of weeks to spot the problem and revert to a semi-manual system whilst the bug was fixed.

Anyway, that’s not the point of this blog (other than to note that editing is, well, admin, with a little bit of scholarly activity thrown in for good measure). Whilst the email problem compounded my woes, I have again been faced with the usual frustrations of dealing with reviewers. That is:

  • finding an appropriate reviewer
  • getting a response from a reviewer as to whether they are happy to review (or not)
  • getting agreement from a reviewer who has problems interacting with the website
  • getting agreement from a reviewer who has problems in meeting a 1 month deadline (or 2 or 3)
  • getting agreement from a reviewer who has no intention of providing a review, but won’t bother to tell you
  • getting a reviewer who provides only a sentence or paragraph of review



All of the above happens, sadly, far too regularly, and these are the consequences:

  • some subject areas/specialisms are difficult to find reviewers for (because they are so specialist). Not so much of a moan; it just goes with the territory
  • I request a response within about a week, knowing that some reviewers will be away from email. Sadly, it seems too much trouble for some to respond. DELAY: 1-2 weeks
  • OK, not everyone is computer literate, and journals are increasingly trying to minimise admin. It just surprises me how difficult some people find interacting with websites. I suspect this will get easier with time
  • meeting a 1-month deadline is perhaps a little tight (although shorter timescales are common in medicine) and I don’t mind too much if it runs over a little. I also know that, outside teaching time, people’s commitments can be much more fluid. However, is it really that difficult to schedule time to review 1,500 words of manuscript?? DELAY: 2-4 weeks
  • This has to be my #1 pet hate. OK, so if a reviewer agrees to review a manuscript they probably have every good intention, at the beginning, of doing so. However, even after the gentle 1-month reminder, you get… no response. If you don’t want to review a manuscript, please decline and let the editor know. DELAY: 4-8 weeks
  • This is possibly #2 on my list of pet hates. The only thing worse than providing no review is a review that is, well, pointless. I don’t believe any paper is perfect in either writing style or content. So please read the manuscript carefully and provide some pertinent comments; otherwise say you don’t want to do it.



Perhaps I’m being a little hard on reviewers, given that they are giving up their time with no recompense. However, it cuts both ways. There is kudos in reviewing a manuscript and, of course, if you want to publish a manuscript in a journal then you will need it reviewed. Totting up some of the potential delays I’ve noted above, if you are unlucky they can run to a considerable number of weeks. The review process is by far the most time-consuming part of publication at JoM: we normally typeset within two weeks and publish in the next available issue. So reviewers are the real logjam (although authors can take a considerable amount of time to make corrections).

When I review a manuscript I will only accept if I believe I can justifiably comment upon the content and can fit it into my schedule. I normally like to review within about two weeks; being a train commuter I do get time to read, which helps. I would normally expect to read the paper twice and provide at least one page of comments, although it is often 2-3 and sometimes 5-6. I also like to start from the premise that a paper is publishable, highlighting the positive impact upon the discipline. I prefer authors to number the pages and provide correctly formatted (and cited) references, although I’m not particularly fussy typographically (as I think typesetting should get these into shape).

This might sound like a bit of a moan, but it is a plea across the board: reviewing forms part of our scholarly activity and we should take it seriously, as it impacts upon other people’s careers. I am genuinely very grateful to reviewers for the job they do and appreciate the assistance they give me in making an editorial decision.

Underlying geospatial algorithms

Saturday, 14 June, 2008

I was completing a project this week that used, in part, a tensioned spline to interpolate across an area with no data points. My colleague had actually written a custom interpolator based upon the well-used algorithm from Smith and Wessel (1). This is the algorithm used in the popular Generic Mapping Tools and seems commonly employed. I then trialled the same process in ArcGIS and noticed that the tension parameter differs between the two and therefore, not surprisingly, that the algorithms are different.

The SPLINE function used in Spatial Analyst and available within ArcToolBox actually calls the underlying ARC-INFO function. You won’t find details of it in the Help file; rather, you have to look in ArcDocs or online. And what you find is that they are using a method from two early papers (2,3). Now I don’t have a problem with this per se, as the method appears to work reasonably well, although you do need to know what the tension parameter is actually doing. However, it is interesting to note that the original Smith and Wessel algorithm has recently been updated (4), which raises the question of why ESRI are still using the 1982 algorithm. Is there a sound basis for this? Or is it simply a case of “code and forget”?
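If you want to play with this family of methods without either package, SciPy offers a thin-plate radial basis function in the same spirit as the Franke-style spline (though note it has no tension parameter, so it is a sketch of the general idea rather than a reproduction of the GMT or ESRI implementations):

```python
import numpy as np
from scipy.interpolate import Rbf

# Scattered sample points with a known underlying surface.
x = np.array([0.0, 1.0, 0.0, 1.0, 0.5, 0.25, 0.75])
y = np.array([0.0, 0.0, 1.0, 1.0, 0.5, 0.75, 0.25])
z = x**2 + y  # values at the sample points

# Thin-plate-spline interpolator: exact at the data points,
# smooth in between (no tension control here).
tps = Rbf(x, y, z, function="thin_plate")

# Evaluate on a regular grid covering the area with no data points.
xi, yi = np.meshgrid(np.linspace(0, 1, 11), np.linspace(0, 1, 11))
zi = tps(xi, yi)
```

Comparing `zi` against the output of GMT’s `surface` (or ArcGIS SPLINE) for the same points is a quick way to see just how much the choice of algorithm, and of tension, matters away from the data.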

1. Smith, W.H.F. & Wessel, P., 1990. Gridding with continuous curvature splines in tension. Geophysics, 55, 293-305.
2. Franke, R., 1982. Smooth interpolation of scattered data by local thin plate splines. Computers & Mathematics with Applications, 8(4), 237-281.
3. Mitas, L. & Mitasova, H., 1988. General variational approach to the interpolation problem. Computers & Mathematics with Applications, 16(12), 983-992.
4. Wessel, P. & Becker, J.M., 2008. Interpolation using a generalized Green’s function for a spherical surface spline in tension. Geophysical Journal International, 174, 21-28.

IR-pen again…

Wednesday, 11 June, 2008

After my last post on using the Wiimote as part of an interactive whiteboard, and the follow-up on building an IR pen, I have now found that an IR pen is not quite so easy to get hold of as I thought. Having been around quite a number of stationery shops, I have come to the conclusion that no one sells one. A few online places do, but their pens are not easily dismantled.

So having paused to think, I hit upon this as the simplest solution:

1. Get a normal LED keychain light. As cheap as 25p each!

2. Get an IR LED and replace the bulb (79p).

3. Attach to your set of keys and voila, an IR pointer!

P.S. You obviously can’t see whether the bulb is on (!), so to test it, point it at a digital camera and see if the bulb shows up on the LCD screen.

Bibliographies and referencing

Sunday, 8 June, 2008

Referencing is a bit of a black art in universities and something we try to drum in from the first year: if you present an idea or piece of information that is not yours, cite it. That then requires the use of a reference list, and whilst universities are generally happy to stick with a Harvard “style”, most journals are not. This of course means there must be thousands of reference styles actively in use. The de facto standard in referencing is Endnote, a very accomplished (and relatively expensive) package that combines a bibliographic database with a style manager, all of which can be copied into Microsoft Word ready-formatted. Of course you can go one step further and simply insert a “tag” within Word for your cited article and then have it dynamically build the reference list. Endnote also went down the portable route and wrote a Palm application as well.

Clearly this is a profitable market and there are quite a few vendors around offering such products, as well as some quite good online applications, Refworks being the one that we use at Kingston.

Not to be outdone, it is worth mentioning that LaTeX has long had a very effective system along similar lines, called BibTeX. As ever, the database is a marked-up text file that is then used to dynamically build the reference list. It is very sophisticated and can handle pretty much every citation requirement. Of course it’s none too user-friendly, and a few GUIs have been developed for it, the most popular of which is JabRef. It is a Java program, so it’s cross-platform, fast and well designed. It even comes with some custom importers/exporters (“Tools->Unpack Endnote filter set”) to make the transition from Endnote pretty smooth. However, note that it is a GUI for building a bibliographic database; it is not a style manager. So whilst you can export HTML and RTF versions of your database, you do not have the rich styles provided by Endnote. There is currently work underway on an Open Office plugin, but very little in the way of the styles themselves. Another (open source) alternative is Zotero, a Firefox plugin that offers a database and a rapidly expanding set of stylesheets; it seems to be fast becoming a popular choice. Of course, the reason I’ve ended up using JabRef is that… I use LaTeX!
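For anyone who hasn’t seen the marked-up text file in question, a BibTeX database entry looks like this (the key and article details here are made up purely for illustration):

```bibtex
% One entry in the .bib database. Citing \cite{smith2008example}
% in the LaTeX source pulls it into the reference list,
% formatted according to whichever bibliography style you chose.
@article{smith2008example,
  author  = {Smith, A. N.},
  title   = {An Example Article},
  journal = {Journal of Examples},
  year    = {2008},
  volume  = {12},
  number  = {3},
  pages   = {1--10}
}
```

JabRef reads and writes exactly this format; the appearance of the final reference list is then down to the bibliography style selected in the document, not the database.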

Extracting images from MS Office documents

Monday, 2 June, 2008

For quite a while image handling in MS Office has bugged me. It’s never been particularly great, for two reasons:

1. When images are inserted, they are stored in a native format at “full resolution”, regardless of the print options. There has also never been an option to optimise images in Office documents, leading to hugely bloated files (a blatant commercial plug for NXPowerlite, which does a sterling job of performing such a service for all Office documents and versions).

2. You then can’t “export” the images in their original format. Copy/Paste works, but only as a bitmap at screen resolution.

I have taken to using Open Office to open my MS Office documents and doing a copy/paste, which is at least at full resolution. However, the guys at NXPowerlite suggested an alternative that is obvious (well, it was after I read it):

1. After Open Office has opened the MS Office document, save it as a native (ODT) Open Office file.

2. Change the file extension from .ODT to .ZIP and open it. Inside the “Pictures” folder will be all the native images.
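The second step can even be scripted, since an ODT file is just a ZIP archive. A minimal Python sketch (the function name is my own) that pulls out everything under the internal “Pictures” folder:

```python
import zipfile
from pathlib import Path

def extract_odt_images(odt_path, out_dir):
    """Extract every file under the ODT's internal Pictures/ folder.

    An ODT document is a ZIP archive; embedded images are stored,
    in their native formats, under the Pictures/ entry.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    extracted = []
    with zipfile.ZipFile(odt_path) as z:
        for name in z.namelist():
            if name.startswith("Pictures/") and not name.endswith("/"):
                # Write the image bytes out, keeping the original filename.
                target = out / Path(name).name
                target.write_bytes(z.read(name))
                extracted.append(target)
    return extracted
```

No renaming of extensions required: `zipfile` is happy to open the .ODT directly.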

It’s a great tip, although note that the Open Office conversion may not always work perfectly.