--======== Review Reports ========--

The review report from reviewer #1:

*1: Is the paper relevant to WI?
  [_] No
  [X] Yes
*2: How innovative is the paper?
  [_] 5 (Very innovative)
  [X] 4 (Innovative)
  [_] 3 (Marginally)
  [_] 2 (Not very much)
  [_] 1 (Not)
  [_] 0 (Not at all)
*3: How would you rate the technical quality of the paper?
  [X] 5 (Very high)
  [_] 4 (High)
  [_] 3 (Good)
  [_] 2 (Needs improvement)
  [_] 1 (Low)
  [_] 0 (Very low)
*4: How is the presentation?
  [X] 5 (Excellent)
  [_] 4 (Good)
  [_] 3 (Above average)
  [_] 2 (Below average)
  [_] 1 (Fair)
  [_] 0 (Poor)
*5: Is the paper of interest to WI users and practitioners?
  [X] 3 (Yes)
  [_] 2 (May be)
  [_] 1 (No)
  [_] 0 (Not applicable)
*6: What is your confidence in your review of this paper?
  [X] 2 (High)
  [_] 1 (Medium)
  [_] 0 (Low)
*7: Overall recommendation
  [X] 5 (Strong Accept: top quality)
  [_] 4 (Accept: a regular paper)
  [_] 3 (Weak Accept: could be a poster or a short paper)
  [_] 2 (Weak Reject: don't like it, but won't argue to reject it)
  [_] 1 (Reject: will argue to reject it)
  [_] 0 (Strong Reject: hopeless)
*8: Detailed comments for the authors
The paper presents a new technique to counter the automatic construction of users' profiles by trackers. The idea is simple and elegant, and experimental results are provided.

========================================================

The review report from reviewer #2:

*1: Is the paper relevant to WI?
  [_] No
  [X] Yes
*2: How innovative is the paper?
  [_] 5 (Very innovative)
  [_] 4 (Innovative)
  [X] 3 (Marginally)
  [_] 2 (Not very much)
  [_] 1 (Not)
  [_] 0 (Not at all)
*3: How would you rate the technical quality of the paper?
  [_] 5 (Very high)
  [_] 4 (High)
  [X] 3 (Good)
  [_] 2 (Needs improvement)
  [_] 1 (Low)
  [_] 0 (Very low)
*4: How is the presentation?
  [_] 5 (Excellent)
  [_] 4 (Good)
  [X] 3 (Above average)
  [_] 2 (Below average)
  [_] 1 (Fair)
  [_] 0 (Poor)
*5: Is the paper of interest to WI users and practitioners?
  [_] 3 (Yes)
  [X] 2 (May be)
  [_] 1 (No)
  [_] 0 (Not applicable)
*6: What is your confidence in your review of this paper?
  [_] 2 (High)
  [X] 1 (Medium)
  [_] 0 (Low)
*7: Overall recommendation
  [_] 5 (Strong Accept: top quality)
  [_] 4 (Accept: a regular paper)
  [X] 3 (Weak Accept: could be a poster or a short paper)
  [_] 2 (Weak Reject: don't like it, but won't argue to reject it)
  [_] 1 (Reject: will argue to reject it)
  [_] 0 (Strong Reject: hopeless)
*8: Detailed comments for the authors
The paper shows that the user profile built by web tracking networks can be influenced by generating synthetic traffic relevant to the user's subjects of interest. This approach is novel to my knowledge. The paper has other strengths: it is well written, it is pleasant to read, and it presents some evidence that the profile imprinting may be quick and effective.

On the other hand, it also has some weaknesses. Given that the generated URLs are directly relevant to some topics, it is somewhat expected that the user profile will be biased towards those topics. Also, the requirement that the user manually select the topics of interest may reduce the impact and the overall utility of this approach.

While the paper is, in general, technically sound, I am also concerned about some of the experimental results and the method used to evaluate performance. The results obtained using synthetic traffic show that the system was nearly insensitive to the amount of browsing, the number of sites per topic, and the amount of interference. This is counterintuitive, because these variables were the main factors affecting the system design. This clearly deserves more explanation. I suspect that the reported behavior may be linked to the limitations of the method used to assess performance, i.e., topic mapping + cosine similarity.
As the mapping is based on a syntactic (possibly inaccurate) criterion, and using a binary vector may be an over-simplification, I suspect that the resulting score may not be very significant under the given experimental conditions. On the whole, I suggest that the paper be accepted as a poster or a short paper.

========================================================

The review report from reviewer #3:

*1: Is the paper relevant to WI?
  [_] No
  [X] Yes
*2: How innovative is the paper?
  [_] 5 (Very innovative)
  [_] 4 (Innovative)
  [X] 3 (Marginally)
  [_] 2 (Not very much)
  [_] 1 (Not)
  [_] 0 (Not at all)
*3: How would you rate the technical quality of the paper?
  [_] 5 (Very high)
  [_] 4 (High)
  [X] 3 (Good)
  [_] 2 (Needs improvement)
  [_] 1 (Low)
  [_] 0 (Very low)
*4: How is the presentation?
  [_] 5 (Excellent)
  [_] 4 (Good)
  [X] 3 (Above average)
  [_] 2 (Below average)
  [_] 1 (Fair)
  [_] 0 (Poor)
*5: Is the paper of interest to WI users and practitioners?
  [_] 3 (Yes)
  [X] 2 (May be)
  [_] 1 (No)
  [_] 0 (Not applicable)
*6: What is your confidence in your review of this paper?
  [_] 2 (High)
  [_] 1 (Medium)
  [X] 0 (Low)
*7: Overall recommendation
  [_] 5 (Strong Accept: top quality)
  [_] 4 (Accept: a regular paper)
  [_] 3 (Weak Accept: could be a poster or a short paper)
  [X] 2 (Weak Reject: don't like it, but won't argue to reject it)
  [_] 1 (Reject: will argue to reject it)
  [_] 0 (Strong Reject: hopeless)
*8: Detailed comments for the authors
The paper is well written. However, it needs improvements to be fully understood by a novice in the field.
- In Section II, there is a subsection dedicated to "Current Approaches". Why did you separate it from the "Related work" subsection in the conclusion? What is your contribution to the state of the art?
- Figure 1: what does the coding [0 1 1 0].. mean?
- Could you motivate your study with an example? Fig. 2 needs more comments.