Human Machine Lab
The Human Machine Lab is an interdisciplinary research laboratory with an overarching yet practical ambition: designing computer systems around human needs and capabilities while maintaining human-level intelligence. Its projects span fields including human-computer interaction, usable security, privacy, and artificial intelligence. Three recent research programs are highlighted below:
Cybersecurity
(Led by Miguel Vargas Martin, PhD)
This research program analyzes the memorability of system-assigned passwords at the time of creation by studying the brain waves generated when a person sees a password for the first time. It also investigates authentication techniques that leverage implicit learning phenomena from psychology.
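The sketch below is a minimal, hypothetical illustration of this kind of analysis, not the lab's actual pipeline: it assumes EEG epochs time-locked to the first presentation of each system-assigned password, together with later recall outcomes, and contrasts the averaged responses for remembered versus forgotten passwords. The data are synthetic, and the array shapes and labels are assumptions.

```python
# Minimal sketch (synthetic data, not the lab's pipeline): contrast EEG epochs
# recorded at the first presentation of system-assigned passwords, split by
# whether the password was later recalled ("subsequent memory" style analysis).
import numpy as np

rng = np.random.default_rng(0)

# Assumed shapes: 40 presentations x 64 channels x 500 time samples per epoch.
epochs = rng.normal(size=(40, 64, 500))
recalled = rng.integers(0, 2, size=40).astype(bool)  # later recall outcome per trial

# Average event-related potential for remembered vs. forgotten presentations.
erp_remembered = epochs[recalled].mean(axis=0)    # (channels, time)
erp_forgotten = epochs[~recalled].mean(axis=0)    # (channels, time)
difference_wave = erp_remembered - erp_forgotten

# Crude per-channel summary: peak absolute remembered-vs-forgotten difference.
peak_effect = np.abs(difference_wave).max(axis=1)
print("Channel with the largest memorability-related difference:", int(peak_effect.argmax()))
```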
Data Privacy in Companion Robots and Smart Toys
(Led by Patrick Hung, PhD)
This program studies a privacy protection framework for companion robots and smart toys. A companion (social) robot is a device consisting of a physical humanoid robot component that connects through a network infrastructure to web services that enhance traditional robot functionality. In this context, a smart toy is a device consisting of a physical toy component that connects to one or more toy computing services to facilitate gameplay in the cloud. The objective of this research is to build a theoretical and technical data privacy protection engine for culture-aware robots and smart toys that lets users control their privacy by specifying their privacy preferences in human-robot interaction (HRI).
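As a hypothetical illustration of the preference-driven idea, the sketch below shows a user's privacy preferences deciding which data categories a companion robot or smart toy may forward to its cloud services. The category names, data structures, and filtering logic are assumptions, not the lab's engine.

```python
# Minimal sketch (illustrative only): a privacy-preference filter that decides
# which data categories a smart toy / companion robot may send to its cloud
# services. Category names and structure are assumptions.
from dataclasses import dataclass, field

@dataclass
class PrivacyPreferences:
    allowed: set = field(default_factory=set)  # data categories the user permits to leave the device

def filter_outgoing(payload: dict, prefs: PrivacyPreferences) -> dict:
    """Drop every data category the user has not explicitly permitted."""
    return {category: value for category, value in payload.items() if category in prefs.allowed}

# Example: the user permits voice commands but not location or camera frames.
prefs = PrivacyPreferences(allowed={"voice_command"})
payload = {
    "voice_command": "play a song",
    "location": (43.9, -78.9),
    "camera_frame": b"...",
}
print(filter_outgoing(payload, prefs))  # only 'voice_command' is forwarded
```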
Intelligent Decision Systems
(Led by Amirali Abari, PhD)
This program focuses on designing, exploring, and extending the capabilities of intelligent decision systems that ease, assist, or automate human decision making. A central task in any intelligent decision system (e.g., a recommender system) is learning the user preferences upon which decisions are ultimately made. Our focus in preference learning is to require as little explicit information as possible from users, which makes intelligent decision systems more practical and helps preserve individuals' privacy. This approach is possible because of the increasing availability of user behaviour data generated through online social networks, e-marketplaces, and other web or mobile applications. We are currently developing probabilistic models and machine-learning algorithms for preference learning.
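As one hypothetical illustration of learning preferences from implicit behaviour alone, the sketch below trains a small Bayesian Personalized Ranking (BPR)-style matrix factorization on synthetic interaction data. The sizes, hyperparameters, and update rule are illustrative assumptions rather than the lab's actual models.

```python
# Minimal sketch (synthetic data, not the lab's models): a BPR-style matrix
# factorization that learns latent user preferences from implicit feedback
# alone, i.e. which items a user interacted with, without explicit ratings.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 50, 100, 8

# Implicit feedback: the set of items each user interacted with.
interactions = {u: set(rng.choice(n_items, size=5, replace=False)) for u in range(n_users)}

U = 0.1 * rng.normal(size=(n_users, dim))   # user latent factors
V = 0.1 * rng.normal(size=(n_items, dim))   # item latent factors
lr, reg = 0.05, 0.01

for _ in range(2000):
    u = rng.integers(n_users)
    pos = rng.choice(list(interactions[u]))       # an item the user chose
    neg = rng.integers(n_items)                   # a sampled non-chosen item
    while neg in interactions[u]:
        neg = rng.integers(n_items)
    u_f, p_f, n_f = U[u].copy(), V[pos].copy(), V[neg].copy()
    # BPR objective: the chosen item should outrank the sampled non-chosen one.
    g = 1.0 / (1.0 + np.exp(u_f @ (p_f - n_f)))   # gradient scale of -log sigmoid
    U[u] += lr * (g * (p_f - n_f) - reg * u_f)
    V[pos] += lr * (g * u_f - reg * p_f)
    V[neg] += lr * (-g * u_f - reg * n_f)

# Recommendation: rank items for a user by the learned preference score.
scores = V @ U[0]
print("Top items for user 0:", np.argsort(-scores)[:5])
```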