Computing Meets Image Processing in Brussels

By Shujun Li

Last week I attended this year’s International Conference on Image Processing (ICIP 2011) in Brussels, Belgium. ICIP is not a computing conference but an EE (electronic engineering) one. However, since many image processing problems can be solved effectively by computing methods, ICIP also attracts many computer scientists and mathematicians. As one of the flagship conferences of the IEEE Signal Processing Society, it is usually attended by more than a thousand people from different fields. This year more than 800 papers were accepted.

My main goal was of course to present a paper of ours accepted there, titled “Recovering Missing Coefficients in DCT-Transformed Images.” The paper was allocated to an afternoon poster session, and it was well attended. Two of the co-authors (the first two, namely myself and a German colleague from my former institute, the University of Konstanz) presented the poster. While officially we only needed to stand by our poster for 1.5 hours (the first half of the session), we ended up staying there for more than three hours because there were always interested people around. As the title suggests, our work proposes a general framework for recovering missing coefficients in DCT-transformed images, and its application to JPEG images and MPEG videos is straightforward. The basic idea is to model the recovery problem as a linear program and then solve it in polynomial time. The recovery results are surprisingly good even when many DCT coefficients are missing. For instance, if the 15 most significant DCT coefficients are missing from each 8×8 block, some images can still be recovered with an acceptable quality that lets people see all the semantic content of the image (see images below). If you want to know more, you can take a look at our poster, available online at http://www.hooklee.com/Papers/ICIP2011-Poster.pdf. Our work has the potential to find applications in several sub-fields of multimedia research, including image compression, multimedia security and forensics. We are currently investigating these possibilities.
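For readers who want a feel for why this is a linear program, here is a minimal sketch in Python (illustrative code written for this post, not our actual implementation): the unknown pixels are treated as variables, the known block-DCT coefficients are imposed as linear equality constraints, and an L1 total-variation objective makes the whole thing an LP, which cvxpy reformulates and solves automatically.

```python
# Illustrative sketch only; function names and structure are mine.
import numpy as np
import cvxpy as cp

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix D, so that D @ B @ D.T is the 2-D DCT of an n x n block B."""
    k = np.arange(n)
    D = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    D[0, :] /= np.sqrt(2.0)
    return D

def recover_image(observed_coeffs, known_mask):
    """observed_coeffs, known_mask: (H, W) arrays laid out in 8x8 blocks;
    known_mask is 1 where a DCT coefficient is known, 0 where it is missing.
    Returns the recovered pixel array (the JPEG level shift of 128 is ignored
    for simplicity)."""
    H, W = observed_coeffs.shape
    D = dct_matrix(8)
    X = cp.Variable((H, W))                       # unknown pixel values
    constraints = [X >= 0, X <= 255]              # valid intensity range
    for i in range(0, H, 8):
        for j in range(0, W, 8):
            block_dct = D @ X[i:i+8, j:j+8] @ D.T           # affine in X
            m = known_mask[i:i+8, j:j+8].astype(float)
            # Force the known coefficients to match the observed values.
            constraints.append(cp.multiply(m, block_dct) ==
                               m * observed_coeffs[i:i+8, j:j+8])
    # Anisotropic total variation: sum of absolute neighbour differences.
    tv = (cp.sum(cp.abs(X[1:, :] - X[:-1, :])) +
          cp.sum(cp.abs(X[:, 1:] - X[:, :-1])))
    cp.Problem(cp.Minimize(tv), constraints).solve()
    return X.value
```

For anything beyond small images the resulting LP gets large, so a practical implementation would need a more careful solver setup; the sketch is only meant to show where the linearity comes from.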


For such a big conference, with around 1,000 presentations spread over many parallel sessions, you always have trouble deciding which sessions to attend. I spent most of my time at poster sessions and a few oral sessions on multimedia security and forensics. A particularly interesting session I attended was the “Best Student Paper Award Session”, where eight finalist papers were presented in front of the audience and the award committee. One of the papers I was interested in is about a technique countering JPEG anti-forensics. Three things need a bit of explanation here. First, JPEG forensics is about detecting that a given image was JPEG compressed at some point in the past. This can be done by simply looking at the histogram of the DCT coefficients, which shows gaps between peaks reflecting the quantization steps used by the JPEG encoder. Second, JPEG anti-forensics manipulates a JPEG-compressed image in such a way that this footprint of JPEG compression is removed and the simple forensic tool fails. One simple approach is to add a noise-like signal so that the DCT coefficients of the manipulated JPEG image look like those of an uncompressed image. And last, the new technique proposed in the paper detects JPEG compression even when such anti-forensic manipulation is present: it tries to estimate the quality factor of the original JPEG compression by re-compressing the image with different quality factors and checking the results. This paper ended up winning one of the three best student paper awards.
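To make the first point more concrete, here is a small toy script (my own illustration, not the method from the award paper) that collects one block-DCT coefficient from a grayscale image and guesses the quantization step as the largest spacing that explains almost all of the values; a step well above 1 is a strong hint of prior JPEG compression, and anti-forensic dithering defeats exactly this kind of check by filling in the histogram gaps.

```python
# Toy JPEG-forensics illustration; all function names are mine.
import numpy as np
from scipy.fft import dctn

def block_dct_coefficient(image, u=0, v=1, block=8):
    """Collect the (u, v) block-DCT coefficient from every 8x8 block of a
    grayscale image (a 2-D numpy array of pixel values)."""
    h, w = image.shape
    h, w = h - h % block, w - w % block
    values = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            c = dctn(image[i:i+block, j:j+block].astype(float) - 128.0,
                     norm='ortho')            # same DCT convention as JPEG
            values.append(c[u, v])
    return np.array(values)

def estimate_quant_step(coeffs, max_step=64, tol=0.95):
    """Guess the quantization step as the largest spacing q such that almost
    all coefficients lie close to a multiple of q (divisors of the true step
    fit too, hence 'largest'). A crude heuristic for illustration only."""
    scores = {q: np.mean(np.abs(coeffs - q * np.round(coeffs / q)) < 0.5)
              for q in range(1, max_step + 1)}
    best = max(scores.values())
    return max(q for q, s in scores.items() if s >= tol * best)

# Hypothetical usage, assuming `img` holds a decoded grayscale image:
#   step = estimate_quant_step(block_dct_coefficient(img, u=0, v=1))
#   print(step)   # a step > 1 suggests the image was JPEG compressed
```

Real detectors are of course much more careful (a coefficient with many near-zero values fools the crude scoring above, and colour channels and chroma subsampling add further complications), but the comb-like histogram is the footprint that everything else builds on.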

It is interesting to see that we now have both forensics and anti-forensics. Most forensics research does not have anti-forensics in mind. But I do think that forensic tools should consider anti-forensics from the very beginning, because according to Kerckhoffs’ principle (or Shannon’s maxim if you like) manipulators of multimedia data should be assumed to have full knowledge of the forensic tools that will run on the manipulated media. Ideally, a forensic tool should try to handle all known anti-forensic algorithms. Of course, such fully functional forensics is a very challenging task, so we should expect the cat-and-mouse game between forensics and anti-forensics to continue over the next few years.