SCCS

Surrey Centre for Cyber Security blog

New EPSRC research project on addressing human-related risks in cybersecurity/cybercrime ecosystems

SCCS has just been awarded a new research project, “ACCEPT: Addressing Cybersecurity and Cybercrime via a co-Evolutionary aPproach to reducing human-relaTed risks”, funded by EPSRC. It will involve a group of researchers working in five academic disciplines (Computer Science, Crime Science, Business, Engineering, Behavioural Science) at four UK research institutes (University of Surrey, UCL, University of Warwick, and TRL). It has an overall budget of ~£1.1m, with 80% (~£881k) funded by EPSRC. It is expected to start in April 2017 and will last for 24 months.

This interdisciplinary and inter-institutional project’s overall lead will be Dr Shujun Li, a Deputy Director of Surrey Centre for Cyber Security (SCCS) and a Senior Lecturer in Surrey’s Department of Computer Science. At Surrey the project will involve, as co-investigators, Dr Michael McGuire of the Department of Sociology, Prof Roger Maull of Surrey Business School’s Centre of the Digital Economy (CODE), and Dr Helen Treharne of the Department of Computer Science and Surrey Centre for Cyber Security (SCCS). Co-investigators from other partner institutes include Dr Hervé Borrion, Dr Gianluca Stringhini and Prof Paul Ekblom of UCL, Prof Irene Ng, Dr Xiao Ma and Dr Ganna Pogrebna of the University of Warwick, and Prof Alan Stevens of TRL.

A public summary of the project is as follows:

Researchers and practitioners have acknowledged human-related risks as being among the most important factors in cybersecurity; for example, an IBM report (2014) shows that over 95% of security incidents involved “human errors”. Responses to human-related cyber risks remain undermined by a conceptual problem: the mindset associated with the term ‘cyber’-crime, which has persuaded us that crimes with a cyber-dimension occur purely within a (non-physical) ‘cyber’ space, and that these constitute wholly new forms of offending, divorced from the human/social components of traditional (physical) crime landscapes. In this context, the unprecedented linking of individuals and technologies into global social-physical networks – hyperconnection – has generated exponential complexity and unpredictability in vulnerabilities.

In addition to hyperconnectivity, the dynamically evolving nature of cyber systems is equally important. Cyber systems change far faster than biological/material cultures, and criminal behaviour and techniques evolve in relation to the changing nature of opportunities centring on target assets, tools and weapons, routine activities, business models, etc. Studying networks and relationships between individuals, businesses and organisations in a hyperconnected environment requires an understanding of communities and the broader ecosystems. This complex, non-linear process can lead to co-evolution in the medium to long term.

The focus on cybersecurity as a dynamic interaction between humans and socio-technical elements within a risk ecosystem raises implementation issues, e.g. how to mobilise diverse players to support security. Conventionally these are considered under ‘raising awareness’, and many initiatives have been rolled out. However, activities targeting society as a whole have limitations, e.g. a lack of personalisation, which makes them less effective in influencing human behaviours.

While there is isolated research across these areas, there is no holistic framework combining all these theoretical concepts (co-evolution, opportunity management, behavioural and business models, ad hoc technological research on cyber risks and cybercrime) to allow a more comprehensive understanding of human-related risks within cybersecurity ecosystems and to design more effective approaches for engaging individuals and organisations to reduce such risks.

The project’s overall aim is therefore to develop a framework through which we can analyse the behavioural co-evolution of cybersecurity/cybercrime ecosystems and effectively influence behaviours of a range of actors in the ecosystems in order to reduce human-related risks. To achieve the project’s overall aim, this research will:

  1. Be theory-informed: Incorporate theoretical concepts from social, evolutionary and behavioural sciences which provide insights into the co-evolutionary aspect of cybersecurity/cybercrime ecosystems.
  2. Be evidence-based: Draw on extensive real-world data from different sources on behaviours of individuals and organisations within cybersecurity/cybercrime ecosystems.
  3. Be user-centric: Develop a framework that can provide practical guidance to system designers on how to engage individual end users and organisations for reducing human-related cyber risks.
  4. Be real world-facing: Conduct user studies in real-world use cases to validate the framework’s effectiveness.

The new framework and solutions it identifies will contribute towards enhanced safety online for many different kinds of users, whether these are from government, industry, the research community or the general public.

This project will involve a group of researchers working in five academic disciplines (Computer Science, Crime Science, Business, Engineering, Behavioural Science) at four UK research institutes (University of Surrey, University College London, University of Warwick, Transport Research Lab), and be supported by an Advisory Board with 12 international/UK researchers and a Stakeholder Group formed by 13 non-academic partners in both the public and private sectors (including law enforcement agencies, industry and NGOs).

SCCS has 2 open PhD positions funded by HM Government (UK citizens only)

SCCS invites applications for 2 PhD positions in Cyber Security: one position is in the research project called Modelling and Verification of Distributed Ledger Technologies under the supervision of Prof Steve Schneider (principal supervisor) and Dr David Williams. The other position is in the research project called Attribute-based Signatures for Privacy in V2X Authentication under the supervision of Dr Mark Manulis (principal supervisor) and Dr Thanassis Giannetsos. Both positions run for 3.5 years, are fully funded by HM Government, and are available only to UK citizens. Please follow the above links for more information about the projects, application process and eligibility requirements.

SCCS is recruiting a Research Fellow for EPSRC-funded project TAPESTRY

SCCS is looking for a post-doctoral researcher to start in March/April 2017 and work on a new EPSRC-funded project called TAPESTRY that will investigate, develop and demonstrate new cryptographic protocols to enable people, businesses and services to connect safely online. The candidate will be mainly working with Dr Mark Manulis on the design, security analysis and implementation of privacy-oriented cryptographic protocols that use zero-knowledge proofs, distributed ledger technologies and authentication mechanisms. More information at https://jobs.surrey.ac.uk/vacancy.aspx?ref=079116.

When discrete optimisation meets multimedia security (and beyond)

I was invited by Prof Raouf Hamzaoui to give a talk on “When Discrete Optimisation Meets Multimedia Security” for the IEEE UK & Ireland Signal Processing Chapter on 25 May 2016. The talk was hosted by De Montfort University and was also organised as part of the Faculty of Technology Research Seminar Series (FoT-RSS). The slides of the talk can be found at SlideShare:

[Slides: Optimisation_meets_Multimedia_Security_public]

The talk started from the problem of ciphertext-only attacks on selective encryption of blockwise DCT-transformed images, where some DCT coefficients of each block are encrypted. (In case you don’t know what DCT is: it is a mathematical transform that converts spatially correlated pixel values into a linear combination of many frequency components, which are called DCT coefficients. DCT coefficients are largely uncorrelated and the energy is highly concentrated in the low-frequency corner, which supports more efficient lossy data compression. For image compression purposes, a large image is normally divided into 8×8 blocks and each block is transformed into the DCT domain separately.) The concept of selective encryption is very simple: rather than encrypting everything, you encrypt only part of the whole image in order to downgrade the visual quality to a level suitable for the required visual security. In this case, security is about recovering the original image with a better visual quality (for ciphertext-only attacks) or recovering the key (for some other attack scenarios). As the name implies, ciphertext-only attacks assume the attacker can see only the encrypted images, which corresponds to the hardest task from the attacker’s point of view. Traditionally, simple error concealment attacks can be used to recover a selectively encrypted image by setting all missing components to a specific value (e.g. zero). For instance, if we apply DCT to the famous test image Lenna (see below, top) and then selectively encrypt the most important DCT coefficient in each 8×8 block, we get the following result (see below, bottom):

[Images: Lenna (original, top); Lenna_DC_encryption (bottom)]

While the image looks well encrypted in terms of concealing most of the image’s visual information, a naive method can still recover a rough sketch of the original image by setting all encrypted DCT coefficients to zero:

[Image: Lenna_DC_encryption_naive_ECA]
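For readers who want to experiment, the blockwise DCT and the naive error-concealment attack are easy to reproduce. Below is a minimal Python sketch using NumPy and SciPy; the random image array is a stand-in for a real picture, and the helper names are my own, not from any of the papers discussed:

```python
import numpy as np
from scipy.fft import dctn, idctn

B = 8  # standard block size for image compression

def blockwise_dct(img):
    """Apply an orthonormal 2-D DCT independently to each BxB block."""
    out = np.zeros_like(img, dtype=float)
    for i in range(0, img.shape[0], B):
        for j in range(0, img.shape[1], B):
            out[i:i+B, j:j+B] = dctn(img[i:i+B, j:j+B], norm='ortho')
    return out

def blockwise_idct(coeffs):
    """Invert the blockwise DCT."""
    out = np.zeros_like(coeffs)
    for i in range(0, coeffs.shape[0], B):
        for j in range(0, coeffs.shape[1], B):
            out[i:i+B, j:j+B] = idctn(coeffs[i:i+B, j:j+B], norm='ortho')
    return out

# Stand-in for a real greyscale image (replace with e.g. Lenna loaded from disk).
img = np.random.rand(512, 512) * 255

coeffs = blockwise_dct(img)
coeffs[::B, ::B] = 0.0  # naive attack: set every (encrypted) DC coefficient to zero
recovered = np.clip(blockwise_idct(coeffs), 0, 255)
```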

A key research question here is if and how we can recover the original image with a visual quality better than the above. This was first answered by Takeyuki Uehara, Reihaneh Safavi-Naini and Philip Ogunbona in a paper published in IEEE Transactions on Image Processing in 2006. They found that two properties of (natural) images can be used to create a better image recovery method. For the above Lenna case, the recovered image is far better than the one above:

[Image: Lenna_DC_encryption_USO]
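I won’t reproduce Uehara et al.’s exact algorithm here, but the flavour of boundary-based DC recovery is easy to convey: since neighbouring pixels in natural images tend to have similar values, the difference between the DC coefficients of two adjacent blocks can be estimated from the pixels along their shared boundary, and these relative estimates can be propagated across the image. The sketch below is my own simplification of that idea, not the published method; it reuses `B` and `blockwise_idct` from the previous snippet:

```python
import numpy as np

def recover_dc_by_boundary_matching(ac_only):
    """Toy boundary-matching DC recovery -- a simplification, NOT the published algorithm."""
    img = blockwise_idct(ac_only)            # every block now has zero mean (DC = 0)
    nbi, nbj = img.shape[0] // B, img.shape[1] // B
    dc = np.zeros((nbi, nbj))                # per-block mean offset, relative to block (0, 0)
    for bi in range(nbi):
        for bj in range(nbj):
            if bi == 0 and bj == 0:
                continue
            ests = []
            if bj > 0:                       # match pixels across the shared vertical boundary
                left = img[bi*B:(bi+1)*B, bj*B-1] + dc[bi, bj-1]
                ests.append(np.mean(left - img[bi*B:(bi+1)*B, bj*B]))
            if bi > 0:                       # match pixels across the shared horizontal boundary
                top = img[bi*B-1, bj*B:(bj+1)*B] + dc[bi-1, bj]
                ests.append(np.mean(top - img[bi*B, bj*B:(bj+1)*B]))
            dc[bi, bj] = np.mean(ests)
    # Add each block's offset, then shift globally so pixel values fit in [0, 255].
    out = img + np.kron(dc, np.ones((B, B)))
    return np.clip(out - out.min(), 0, 255)
```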

Uehara et al.’s method does not always work well, however. For instance, for another image I took when I was working in Hong Kong (below, top), the result becomes worse in terms of the accuracy of the recovered pixel values (below, bottom):

[Images: HK_shop (original, top); HK_shop_DC_encryption_USO (bottom)]

The inaccuracy largely comes from the many underflowed and overflowed pixel values produced by Uehara et al.’s method. In 2010 we published a paper at ICIP (International Conference on Image Processing) 2010 based on an iterative method which selects a parameter of Uehara et al.’s method so as to minimise the overall under- and overflow rate. This method improves the visual quality for almost all of the 200 test images we tried, in a database containing both standard test images and pictures taken by me. For the above HK shop image, the recovered image below has a much better quality than the result from Uehara et al.’s method:

[Image: HK_shop_DC_encryption_FRM]
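The flavour of such an iterative parameter selection is easy to illustrate on top of the toy sketch above: treat the unknown global brightness level as the free parameter, and pick the value that sends the fewest pixels outside [0, 255]. Again, this is my simplified stand-in, not the algorithm in the ICIP 2010 paper:

```python
import numpy as np

def best_global_shift(img):
    """Choose the global offset that minimises the under-/overflow rate."""
    best, best_bad = 0.0, np.inf
    for shift in np.arange(-255.0, 256.0):       # scan candidate offsets
        shifted = img + shift
        bad = np.count_nonzero((shifted < 0) | (shifted > 255))
        if bad < best_bad:
            best, best_bad = shift, bad
    return np.clip(img + best, 0, 255)
```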

Our ICIP 2010 work is still not perfect, as it leaves noticeable artifacts for some other images. For instance, for another standard test image, cameraman (below, left), the recovered result contains visual artifacts at various places (below, right):

[Images: cameraman (original, left); cameraman_DC_encryption_FRM (right)]

Can we further improve the recovery result? The answer is yes, as shown by a follow-up paper we published at ICIP 2011. In that paper we used a completely different and far more general approach: a linear optimisation model which allows any set of missing DCT coefficients to be recovered, with the DC recovery case being merely the simplest instance of the general model. Since the model is linear and all DCT coefficients and pixel values can be treated as continuous values (which can later be rounded), it can be solved effectively using linear programming in polynomial time. To avoid any maths in this blog article, I refer interested readers to our paper for the mathematical formulation of the model. For the DC recovery case, the general model does a better job than the simple optimisation model we reported in our ICIP 2010 paper. For the above cameraman image, the image recovered by the general model is as follows (the improvement in visual quality is obvious):

[Image: cameraman_DC_encryption_LP]
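To give a feel for what such a model looks like in practice, here is a small sketch using CVXPY. The objective below (minimising the total absolute difference between neighbouring pixels, a total-variation-style criterion) and the helper names are my assumptions for illustration; see the ICIP 2011 paper for the actual formulation. The known (unencrypted) DCT coefficients enter as linear equality constraints, since the blockwise DCT is a linear map of the pixels:

```python
import numpy as np
import cvxpy as cp
from scipy.fft import dct

B = 8
D = dct(np.eye(B), axis=0, norm='ortho')    # 1-D orthonormal DCT matrix: y = D @ x

def recover(observed, known, shape):
    """Recover pixels given the DCT coefficients marked True in `known` (boolean array)."""
    h, w = shape
    X = cp.Variable((h, w))
    cons = [X >= 0, X <= 255]                # pixel values must stay in range
    for i in range(0, h, B):
        for j in range(0, w, B):
            Cb = D @ X[i:i+B, j:j+B] @ D.T   # block DCT, linear in the pixel variables
            m = known[i:i+B, j:j+B].astype(float)
            cons.append(cp.multiply(m, Cb) == m * observed[i:i+B, j:j+B])
    # Total-variation-style objective: neighbouring pixels should have similar values.
    tv = (cp.sum(cp.abs(X[:, 1:] - X[:, :-1])) +
          cp.sum(cp.abs(X[1:, :] - X[:-1, :])))
    cp.Problem(cp.Minimize(tv), cons).solve()
    return np.clip(np.round(X.value), 0, 255)
```

Because the absolute values in the objective can be rewritten with auxiliary variables, this is a pure linear program; for a full 512×512 image it is slow to solve, which is exactly what motivated the faster algorithm described below.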

As said above, the general model is capable of recovering more than missing DC coefficients, something none of the previous methods can do. For instance, when the 12 lowest-frequency (most significant) DCT coefficients (i.e. the DC coefficient plus the 11 most significant AC coefficients) are missing from each block of the cameraman image, the general model can recover the following:

[Image: cameraman_DCT12_encryption_LP]

While the general model can be solved in polynomial time, it is still quite slow and requires a large amount of memory, especially for large images, so there is a desire to find even faster algorithms. Working with two algorithm experts at the University of Konstanz, we managed to find a fast algorithm based on the min-cost network flow algorithm, which has a complexity of O(n^1.5), while the general model based on linear programming has a complexity of O(n^2), where n denotes the number of pixels in the image. The reduced complexity brings the solving time for recovering a 512×512 image from minutes to well below 1 second (a speed-up of hundreds of times). The fast algorithm is not approximate, so it produces exactly the same result as the linear programming based method. This paper was published at the 14th SIAM Workshop on Algorithm Engineering and Experiments (ALENEX 2012). The fast algorithm cannot be directly generalised to cases where AC coefficients are missing as well, so the general model remains the only method for more complicated cases. We are also currently looking at ways to speed up the general model using approximate approaches and have finished testing one approach, which has just been submitted for possible publication.

The general model for recovering DCT coefficients later also found applications in content authentication and self-restoration image watermarking, where a similar model can be constructed to help restore a manipulated image based on embedded watermarks. This work was published in the Signal Processing: Image Communication journal in 2014.

Building on the above work, since 2015 we have also been working on recovering other types of missing information from DCT-transformed images and have obtained some promising results, which are in the process of being submitted for publication. I will discuss these new results in a separate blog article once the work is accepted somewhere.

While our focus in the past has been on recovering missing information, all these methods have profound implications beyond multimedia security. For instance, being able to recover missing DCT coefficients with good quality means we don’t need to encode (all or part of) those coefficients, which can lead to a more efficient image compression algorithm. In principle, all the work on digital images can be applied to digital video as well, although in that case the model becomes more complicated, since video coding involves many steps and integer variables which have to be modelled in a very different way (linear programming becomes mixed integer programming; the latter is NP-hard in general, i.e. among the most difficult problems, which are not known to have polynomial-time algorithms).

The six years of work on this line of research have been a very exciting and enjoyable experience for me, allowing me to work with a number of collaborators from Germany (Junaid Jameel Ahmad, Dietmar Saupe, Andreas Karrenbauer and Sabine Cornelsen), the USA (C.-C. Jay Kuo), the UK (Hui Wang and Anthony TS Ho) and Malaysia (KokSheik Wong, Simying Ong and Kuan Yew Tan). Two visiting students from City University of Hong Kong also helped: Kadoo (Ching-Hung) Yuen (2012) and Ruiyuan Lin (2015). I look forward to more interesting work ahead!

Open research position on human-assisted data loss prevention (DLP)

We are recruiting a holder of an MSc or a good first (undergraduate) degree to work on a new Innovate UK funded KTP project on human-assisted DLP (data loss prevention), following a hybrid human-machine computing paradigm.

The successful candidate should have experience in practical applications of machine learning (ideally in cyber security applications, though not necessarily DLP).

He/She will be officially employed by the University of Surrey but work full-time for the DLP vendor Clearswift Ltd for 3 years on the joint project.

Click the following link for more details and the online application form: http://jobs.surrey.ac.uk/042416-R (deadline: 19 September 2016).

Verifiable Electronic Voting on the radar

This year’s European Conference of Electoral Management Bodies took place last month, under the topic “New technologies in elections: public trust and challenges for electoral management bodies” [1]. Conclusion 21 of the conference “pointed to the issue of verifiability of the vote if electronic voting is used and the importance of providing effective means of verification whilst conducting e-enabled elections”.

Verifiability within an electronic voting system is concerned with providing ways to check that the system is processing votes correctly [2]. The ability to check that votes have been handled properly provides a way of countering bugs or deliberate software errors, as well as cyber attacks on election systems, and thereby provides confidence in the integrity of the election result. However, designing verifiability into a system is challenging: on the one hand it needs to provide voters with a way of checking that their vote has been counted correctly within the overall tally, and a way of challenging the election if they find it has not; on the other hand it needs to maintain strict vote privacy so that voters cannot prove to anyone else how they voted.

Proposals in the academic literature tend to meet this challenge through the use of cryptography, allowing the voter to cast the vote in encrypted form, and then providing mechanisms that allow the encrypted vote to be decrypted in such a way that it cannot be linked back to the voter. The voter is able to check that the encrypted vote has been properly recorded by the system – this is known as individual verifiability. Anonymisation and decryption of all the votes together can be carried out in a publicly verifiable way, where all the cryptographic steps can be independently checked. End-to-end verifiability demands ways of verifying all the steps, from verifying the vote is cast-as-intended (the encryption of the vote is correct), to recorded-as-cast (the encrypted vote is in the system), to counted-as-recorded (the votes have been processed correctly). Eligibility verifiability is another aspect, concerned with being able to verify that all recorded votes are indeed from eligible voters.
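To make the idea concrete, here is a toy illustration of one common building block, additively homomorphic (“exponential”) ElGamal encryption: anyone can multiply the published encrypted votes together, and the product decrypts to the sum of the votes, so no individual ballot ever needs to be decrypted. This is a minimal sketch of mine with deliberately tiny, insecure parameters; real systems use large groups, threshold decryption and zero-knowledge proofs of correct decryption:

```python
import random

# Toy exponential-ElGamal tally -- tiny, INSECURE parameters, illustration only.
p, g = 23, 5                          # 5 is a primitive root mod 23

x = random.randrange(1, p - 1)        # election secret key (threshold-shared in practice)
h = pow(g, x, p)                      # election public key

def encrypt(vote):                    # vote is 0 or 1
    r = random.randrange(1, p - 1)
    return (pow(g, r, p), (pow(g, vote, p) * pow(h, r, p)) % p)

ballots = [encrypt(v) for v in [1, 0, 1, 1, 0]]   # published on a public bulletin board

# Anyone can multiply the ciphertexts; the product encrypts the SUM of the votes.
c1 = c2 = 1
for a, b in ballots:
    c1, c2 = (c1 * a) % p, (c2 * b) % p

s = (c2 * pow(c1, p - 1 - x, p)) % p  # decrypt: s = g^(sum of votes) mod p
tally = next(t for t in range(len(ballots) + 1) if pow(g, t, p) == s)
print("tally:", tally)                # -> 3
```

Since every observer can redo the multiplication over the published ballots and check the proof that the final decryption is correct, the counting step becomes publicly verifiable without revealing how anyone voted.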

Verifiability is now reaching the point where it is emerging from the electronic voting research community and impacting on the elections world. It is encouraging that electoral management bodies are recognising the benefits that it can bring. Several verifiable systems have already been used, in political elections [3,4] and in university elections [5,6], and the state of the art is still being developed. Our own experience of deploying a verifiable polling place voting system in the 2014 Victorian State Election, Australia [4] demonstrated that such systems are now feasible in practice. There are still many technical and sociological challenges ahead, but only verifiability can achieve the level of trust required for electronic elections.

 

References

[1] Synopsis of the 13th European Conference of Electoral Management Bodies, April 14-15, 2016, Bucharest, Romania. http://www.venice.coe.int/webforms/documents/default.aspx?pdffile=CDL-EL%282016%29001syn-e

[2] Steve Schneider and Alan Woodward: E-Voting: Trust but Verify, Scientific American guest blog, 2012. http://blogs.scientificamerican.com/guest-blog/e-voting-trust-but-verify/

[3] Richard Carback, David Chaum, Jeremy Clark, John Conway, Aleksander Essex, Paul S. Herrnson, Travis Mayberry, Stefan Popoveniuc, Ronald L. Rivest, Emily Shen, Alan T. Sherman, Poorvi L. Vora: Scantegrity II Municipal Election at Takoma Park: The First E2E Binding Governmental Election with Ballot Privacy. USENIX Security Symposium 2010

[4] Craig Burton, Chris Culnane, Steve Schneider: Secure and Verifiable Electronic Voting in Practice: the use of vVote in the Victorian State Election. CoRR abs/1504.07098 (2015)

[5] Olivier de Marneffe, Olivier Pereira, Jean-Jacques Quisquater: Electing a University President Using Open-Audit Voting: Analysis of Real-World Use of Helios. EVT/WOTE 2009

[6] Jonathan Ben-Nun, Niko Fahri, Morgan Llewellyn, Ben Riva, Alon Rosen, Amnon Ta-Shma, Douglas Wikström: A New Implementation of a Dual (Paper and Cryptographic) Voting System. Electronic Voting 2012
