Writing Reviews

My strategy for writing reviews is to:

  1. write a short summary listing out what the primary contribution is of the work (see first paragraph of my sample reviews below)
  2. write a short summary of my overall thoughts on whether the paper should be accepted or rejected (see second paragraph of my sample reviews below)
  3. write a paragraph for each of the main parts of a paper and offer my opinion on how well the authors did

Of course, there is no right or wrong strategy for writing a review. There is also lots of guidance online. See this page as an example.

Below are two example reviews that I have written.


This paper contributes the design of three different tangible systems aimed at <removed for anonymity>. <removed for anonymity> and <removed for anonymity> each provide a <removed for anonymity>. The paper also presents a study with 12 people who gave feedback on the designs.

Overall, the paper is an interesting and thought-provoking read. The designs and reactions from users are not earth-shattering and many of the user reactions and design considerations are fairly obvious (suggesting a study may not have been completely necessary). That being said, there are a few surprising things coming out of the work (I outline these below). Moreover, this is one of the few papers I have seen on <removed for anonymity> that actually provides some reaction from users about the design. This in itself is nice to see. I think the paper will spur further design thought in this area and push designers to publish and talk about the reactions they get from potential users. For me, this warrants a ‘4’ and an accept.

FRAMING: The framing of the paper is weak, and the authors could have spent more time describing a clear research problem that they are focused on. Clearly it is about <removed for anonymity> and there is a need for it, but the authors don’t do a good job of presenting a strong background of what that need really entails. This is evident in the somewhat short and less detailed abstract and introduction sections.

RELATED WORK: The related work feels well covered though I do not know what all publications have looked at designing <removed for anonymity>. The only notable examples that I see missing are <removed for anonymity>.

There is a large body of literature that looks at <removed for anonymity> (see <removed for anonymity> for example) that seems untapped; however, this is less relevant for a design-focused paper like this one. Even still, it would have been nice to see some of this literature used to frame the paper and research problem.

THE PROTOTYPES: The prototypes are all quite intriguing and border on the uncanny valley problem. This is quite nice because it gives the paper a good opportunity to explore the issue. I also like the variations that the authors have presented in their designs. They didn’t stop at the most obvious solution (<removed for anonymity>); they pushed their ideas further to think of alternative designs that would similarly probe how users want to <removed for anonymity>.

STUDY METHOD: The method seems sound though some of the questions outlined in Table 1 are weak and will get obvious answers (e.g., Do you <removed for anonymity>). My only major concern would be that the vignettes are presented in very little detail yet seem to be a cornerstone of the study method. We need to better understand them in order to properly assess the study results.

RESULTS: Several of the results are somewhat obvious and it isn’t clear to me that you need a study to find them out (e.g., people preferred <removed for anonymity>). I believe any good designer could have expected these findings.

Some results are also fairly speculative and do not really offer much contribution (e.g., people speculated that <removed for anonymity> – they are simply guessing).

That said, there are some interesting things that are less expected. These suggest interesting design directions and would be highly relevant for others who are designing <removed for anonymity>. For example:

  - mobility becomes an important design factor such that interactions can be more spontaneous
  - people favoured <removed for anonymity>
  - devices that tried to copy the behaviour of <removed for anonymity> were least preferred
  - the process of <removed for anonymity> created a significant memory
  - people wanted to couple the devices with other communication mediums

DISCUSSION: The discussion is fine, but the authors could have done a better job of pulling out the most interesting findings and showing how these could be more broadly applicable to <removed for anonymity> design. There are some key things that designers should think about based on the surprising findings that I listed above and the authors should explicitly state them.

I think most of my above concerns could be addressed pretty easily in a camera-ready version of the paper.


This paper reports on a data analysis that compares <removed for anonymity> to illustrate the efficiency of the various projects. Efficiency relates to <removed for anonymity>.

Overall, I do not think this paper is ready for publication. The data is based on large approximations (e.g., <removed for anonymity>) as opposed to actual data. Moreover, the results provide little new understanding about /why/ some <removed for anonymity> projects are more efficient than others.

FRAMING: The paper is somewhat oddly framed. The abstract reports the actual problem (sort of) that the paper is tackling. The introduction then continues without restating the problem, so the reader has to rely on the abstract to situate the work. At the end of the introduction, the authors state that the problem of understanding <removed for anonymity> efficiency (comparing inputs to outputs) has not been studied. I am not very familiar with the <removed for anonymity> literature so I can only assume this is true. But thinking more deeply about this problem suggests that it hasn’t been addressed because the data to do so would be very difficult to obtain. There are likely many factors that affect efficiency, such as <removed for anonymity>. The authors have simplified this through approximations instead.

RELATED WORK: The authors cite a large body of literature that feels quite comprehensive. The only downside is that they have not presented it together in a succinct manner that allows the reader to understand how their current work goes beyond it. Instead, the literature is sprinkled throughout their theoretical section and discussion amidst descriptions of their own work.

DATA ANALYSIS: The data analysis is based on two hypotheses stated at the end of the theoretical section. These are not really hypotheses in a formal sense, though, nor are they posed as research questions. Because of this, it is unclear at the end of the study whether or not anything has really been shown.

The data analysis compares the efficiency of <removed for anonymity>. The authors exclude <removed for anonymity> because it is an “outlier”. Yet the reason for this is unclear. They then collected data from <removed for anonymity> for each <removed for anonymity> project – this is fine to me and likely quite accurate in terms of how many people contributed, what the output was, etc. Thus, I think it provides sufficient data at a granular level. The issue I have though is with the data used to generate the total number of <removed for anonymity> to a project. The authors calculated this by starting with stats from <removed for anonymity> and then calculating <removed for anonymity>. In no way is it clear to me how this type of calculation could be used to estimate how many <removed for anonymity> there are to a <removed for anonymity> project. No rationale is provided and it appears to be highly prone to error. A similar calculation is done to estimate the number of <removed for anonymity>.

The results the authors find from their data analysis are weak, and most are presented in tables and figures that the reader must parse on their own.

DISCUSSION: At a high level, the analysis results show that there are indeed differences between efficiencies in the various <removed for anonymity> projects. The authors then pose several possible reasons for this, e.g., <removed for anonymity>. These reasons are certainly speculative as the authors have no data to back up their reasoning. Beyond this, there are really no implications coming from the work that one could use to better understand the <removed for anonymity> research space.