Research

Here are some themes and techniques that we currently work on:

Edge AI Systems. We conduct research on on-device and edge-assisted AI systems to build resource-efficient and secure mobile applications. Edge AI aims to strengthen AI capabilities at the edge, supporting a decentralized, user-centric approach that aligns with human-centered principles. Our lab focuses on optimizing and compressing AI models and algorithms for resource-constrained edge devices, ensuring efficient power usage and prolonged battery life. We also work to improve the robustness of AI models so that they operate effectively in diverse edge environments, accommodating variations in data quality, network conditions, and device capabilities. Finally, we explore richer interaction between humans and edge AI devices, aiming to make applications more intuitive, responsive, and tailored to individual user needs.
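To make the compression theme concrete, here is a minimal sketch of post-training dynamic quantization, one common way to shrink a model for resource-constrained devices. It uses PyTorch; the small classifier is a hypothetical stand-in for an on-device model, not one of our actual systems.

```python
# Minimal sketch: post-training dynamic quantization for edge deployment.
import torch
import torch.nn as nn

# Hypothetical small classifier standing in for an on-device model.
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()

# Quantize Linear layers to 8-bit integers to reduce model size and
# compute cost on resource-constrained devices.
quantized_model = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Inference runs as before; weights are dequantized on the fly.
example_input = torch.randn(1, 128)
with torch.no_grad():
    logits = quantized_model(example_input)
print(logits.shape)  # torch.Size([1, 10])
```

In practice, the choice between dynamic quantization, static quantization, and pruning depends on the target hardware and latency budget; the sketch above only illustrates the simplest of these options.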

Feedback Learning (Reinforcement Learning). We research reinforcement-learning-based feedback learning for developing human-friendly AI systems. In feedback learning, an agent refines its decision-making and performance by receiving feedback from its environment, primarily from humans, so that its behavior aligns more closely with human values. Using a post-training methodology, the agent executes actions in its environment, receives feedback in the form of rewards or penalties, and refines its decision-making strategy accordingly. Our work integrates machine learning techniques with reinforcement learning algorithms to handle complex, high-dimensional input spaces and improve overall efficacy.
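The act-feedback-update loop described above can be illustrated with a minimal sketch: an epsilon-greedy agent chooses actions, receives a scalar reward standing in for human approval, and incrementally updates its value estimates. The preference rates below are hypothetical placeholders, not data from our systems.

```python
# Minimal sketch of a feedback loop: act, receive reward, update estimates.
import numpy as np

rng = np.random.default_rng(0)
n_actions = 4
q_values = np.zeros(n_actions)               # estimated value of each action
counts = np.zeros(n_actions)                 # how often each action was taken
true_pref = np.array([0.2, 0.5, 0.8, 0.3])   # hypothetical human approval rates

epsilon = 0.1
for step in range(1000):
    # Explore occasionally, otherwise exploit the current best estimate.
    if rng.random() < epsilon:
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(q_values))

    # Feedback: 1 if the (simulated) human approves the action, 0 otherwise.
    reward = float(rng.random() < true_pref[action])

    # Incremental update of the value estimate toward the observed feedback.
    counts[action] += 1
    q_values[action] += (reward - q_values[action]) / counts[action]

print(q_values)  # should roughly track the underlying preference rates
```

Real feedback-learning systems replace the tabular estimates with learned models (for example, a reward model over high-dimensional inputs), but the underlying loop is the same.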

Responsible AI. Responsible AI refers to developing, deploying, and using AI technologies in a manner that prioritizes ethical considerations, fairness, transparency, and accountability. It recognizes the potential impact of AI on individuals and society and emphasizes responsible, inclusive practices throughout the AI lifecycle. Our research focuses on developing techniques to identify and address biases in AI models, promoting fairness and preventing discrimination across diverse demographic groups. We also conduct assessments to understand and mitigate the broader societal impact of AI applications, including job displacement, economic implications, and social equality.
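One simple check in the bias-identification work mentioned above is comparing positive prediction rates across demographic groups (a demographic parity gap). The sketch below uses hypothetical toy data purely to show the computation.

```python
# Minimal sketch: demographic parity gap across two groups.
import numpy as np

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # model outputs (toy data)
groups      = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # demographic group ids

# Positive-prediction rate per group, and the gap between the extremes.
rates = {int(g): predictions[groups == g].mean() for g in np.unique(groups)}
parity_gap = max(rates.values()) - min(rates.values())

print(rates)       # positive-prediction rate per group
print(parity_gap)  # large gaps suggest the model may treat groups unevenly
```

Demographic parity is only one of several fairness criteria; which metric is appropriate depends on the application and the harms being guarded against.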

Medical AI. Medical AI involves developing and applying AI technologies with a primary focus on improving healthcare outcomes and prioritizing the well-being of individuals. It seeks to augment the capabilities of healthcare professionals, improve diagnostic accuracy, streamline treatment planning, and personalize patient care. We currently apply AI-driven natural language processing to extract insights from unstructured clinical notes, medical literature, and patient communications, and we use AI to predict disease progression, identify at-risk populations, and optimize preventive interventions for more proactive and personalized healthcare.
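As a simplified illustration of extracting structure from unstructured clinical notes, the sketch below uses a pattern-based pass; production systems rely on trained clinical NLP models, and the note text and term lists here are hypothetical examples.

```python
# Minimal sketch: pulling structured signals out of an unstructured note.
import re

note = (
    "Patient reports chest pain and shortness of breath. "
    "History of type 2 diabetes; currently on metformin 500 mg."
)

# Hypothetical vocabularies a real system would learn or take from an ontology.
symptom_terms = ["chest pain", "shortness of breath", "fatigue"]
medication_pattern = re.compile(
    r"\b(metformin|insulin|lisinopril)\b\s*(\d+\s*mg)?", re.IGNORECASE
)

symptoms = [t for t in symptom_terms if t in note.lower()]
medications = [(m.group(1), m.group(2)) for m in medication_pattern.finditer(note)]

print(symptoms)     # ['chest pain', 'shortness of breath']
print(medications)  # [('metformin', '500 mg')]
```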

Social Sector AI. Social sector AI involves developing and applying AI technologies to address societal challenges and improve the well-being of communities. This field focuses on leveraging AI to create positive social impact, promote inclusivity, and address issues related to education, poverty, healthcare, environmental sustainability, and more. We currently work on using AI to optimize the delivery of social services and legislative practices, such as welfare programs and legal counseling, by identifying and assisting those in need more efficiently and equitably.