Security and Privacy in Recommendation Systems

Visiting Lectureship at N.I.I. Tokyo; Principal Investigator (P.I.), Birmingham International Engagement Fund (BIEF)

Recommendation Systems (RSs) allow users to express, directly or indirectly, their preferences for the products and services they are interested in. The system then maps these preferences into areas of similarity (neighbourhoods), aiming to produce accurate predictions and recommendations. Malicious entities, such as an unsolicited user or an online vendor, may exploit an RS for their own benefit (e.g., to push a product or nuke a competing service) by employing automated attack algorithms, or by injecting perturbations into the underlying databases; such manipulation skews predictions and results in biased, inaccurate, and deceitful recommendations.
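The neighbourhood-based prediction and the "push" attack described above can be sketched as follows. This is a minimal illustration on a synthetic ratings matrix; the similarity measure (cosine), neighbourhood size, and attack profiles are assumptions for the sketch, not part of the project itself:

```python
import numpy as np

def predict(ratings, user, item, k=2):
    """User-based collaborative filtering: predict ratings[user, item]
    from the k most cosine-similar users who have rated the item."""
    mask = ratings[:, item] > 0            # users who rated the item
    mask[user] = False                     # exclude the target user
    neighbours = np.where(mask)[0]
    if neighbours.size == 0:
        return 0.0
    u = ratings[user]
    sims = np.array([
        ratings[n] @ u / (np.linalg.norm(ratings[n]) * np.linalg.norm(u) + 1e-9)
        for n in neighbours
    ])
    order = np.argsort(sims)[-k:]          # k nearest neighbours
    w = sims[order]
    return float(w @ ratings[neighbours[order], item] / (w.sum() + 1e-9))

# Toy ratings matrix (rows: users, columns: items; 0 = unrated).
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [1, 0, 4, 4],
], dtype=float)

honest = predict(R, user=0, item=2)

# A "push" attack: inject fake profiles that mimic user 0's tastes
# but give the target item (item 2) the maximum rating.
shill = np.array([[5, 3, 5, 1]], dtype=float)
R_attacked = np.vstack([R, shill, shill])
pushed = predict(R_attacked, user=0, item=2)

print(honest, pushed)  # the injected profiles inflate the prediction
```

Because the fake profiles are engineered to sit inside the target user's neighbourhood, they dominate the weighted average and push the predicted rating upward; a "nuke" attack works symmetrically with minimum ratings.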

Fundamental Research Questions

  • Fairness of RSs: How can we use Graph Neural Networks (GNNs) to accommodate fairness in recommender systems while at the same time protecting users’ privacy, i.e., obfuscating protected characteristics such as demographics? Adversarial Machine Learning (AML) can serve as the main method for enabling this obfuscation.
  • Explainable RSs: Moving away from the use of AML, how can we build robust explainable recommender systems (XRSs)? Drawing inspiration from explainable artificial intelligence (XAI), we will investigate the use of GNNs to provide transparent recommendations.
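The obfuscation idea behind the first research question can be illustrated in miniature. The sketch below uses a linear projection and a nearest-class-mean adversary as simple stand-ins for the GNN encoder and the AML adversary discussed above; the protected attribute, embedding dimensions, and data are entirely synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 8
s = rng.integers(0, 2, n)                 # hypothetical protected attribute
X = rng.normal(size=(n, d))               # synthetic user embeddings
X[:, 0] += 3.0 * s                        # dimension 0 leaks the attribute

def adversary_accuracy(X, s):
    """Nearest-class-mean adversary trying to recover s from embeddings."""
    mu0, mu1 = X[s == 0].mean(axis=0), X[s == 1].mean(axis=0)
    pred = np.linalg.norm(X - mu1, axis=1) < np.linalg.norm(X - mu0, axis=1)
    return (pred == s).mean()

before = adversary_accuracy(X, s)

# Obfuscation: project out the direction that best separates the groups
# (a linear stand-in for what an adversarial loss pushes an encoder to do).
w = X[s == 1].mean(axis=0) - X[s == 0].mean(axis=0)
w /= np.linalg.norm(w)
X_obf = X - np.outer(X @ w, w)

after = adversary_accuracy(X_obf, s)
print(before, after)  # adversary accuracy drops toward chance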

The trade-off between preserving privacy and providing adequate, transparent recommendations has received only limited attention in the recent literature. We envision that our work in this area will create opportunities for further research collaboration in the emerging field of Explainable Secure RSs (XSec).