Trustworthy Federated Ubiquitous Learning (TrustFUL) Research Lab


About

As AI becomes ubiquitous, the key barrier to its adoption is no longer technical in nature. Instead, it is often about gaining the trust of stakeholders. Developing AI techniques that are fair, transparent and robust has been identified as a viable way to enhance trust in AI. However, this effort faces an added challenge: societies are increasingly concerned about data privacy and user confidentiality. With stricter laws, such as the General Data Protection Regulation (GDPR), the existing centralized AI training paradigm must be revised to meet regulatory compliance.

Federated Learning (FL) is a learning paradigm that enables collaborative training of machine learning models while the data remain in their owners' silos and are never shared during training. It can therefore help AI thrive in a privacy-focused regulatory environment. Because FL allows self-interested data owners to train machine learning models collaboratively, end-users can become co-creators of AI solutions. Currently, however, FL requires a trusted central entity to coordinate the co-creators. In practice, such a trusted entity is hard to find and can become a single point of failure. In addition, the assumption that all co-creators receive the same final FL model regardless of their contributions introduces unfairness and limits the adoption of FL.
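To make the paradigm concrete, the following is a minimal sketch of the Federated Averaging (FedAvg) idea that underlies most FL systems: each client trains on its private data and only model parameters, never raw data, are sent to the coordinator, which averages them. The toy one-weight regression, the function names and the hyperparameters are illustrative assumptions, not the lab's actual implementation.

```python
# Toy FedAvg sketch: two data silos jointly fit y = w * x without
# ever exchanging their (x, y) samples -- only the weight w travels.

def local_train(w, data, lr=0.01, epochs=50):
    """Client-side step: refine the global weight on private (x, y) pairs."""
    for _ in range(epochs):
        # Gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_average(weights, sizes):
    """Server-side step: combine client weights, weighted by dataset size."""
    total = sum(sizes)
    return sum(w * n for w, n in zip(weights, sizes)) / total

# Two data silos, both consistent with y = 3x; data never leaves its owner.
silo_a = [(1.0, 3.0), (2.0, 6.0)]
silo_b = [(3.0, 9.0), (4.0, 12.0), (5.0, 15.0)]

w_global = 0.0
for _ in range(10):  # communication rounds
    w_a = local_train(w_global, silo_a)
    w_b = local_train(w_global, silo_b)
    w_global = federated_average([w_a, w_b], [len(silo_a), len(silo_b)])

print(round(w_global, 2))  # converges close to the true weight, 3.0
```

Note that the averaging server in this sketch is exactly the trusted central coordinator discussed above; removing that single point of failure, and rewarding clients in proportion to their contributions, are among the problems TrustFUL targets.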

To enable open collaboration among FL co-creators and promote adoption of the federated learning paradigm, we established the Trustworthy Federated Ubiquitous Learning (TrustFUL) Research Lab in 2021, funded by AISG and hosted by Nanyang Technological University (NTU), Singapore. The lab enables communities of data owners to self-organize during FL model training, without exposing sensitive local data, based on three notions of trust: 1) trust through transparency, 2) trust through fairness and 3) trust through robustness. We will translate TrustFUL into an FL-powered AI model crowdsourcing platform to support the co-creation of AI solutions. In addition, we will train local expertise in the emerging and rapidly advancing field of federated learning, helping Singapore build up much-needed specialist manpower to benefit from this promising trend in trustworthy AI. The research programme has the potential to serve as an enabler for an AI startup ecosystem and to provide enabling technologies for privacy-respecting data trading exchanges.


Principal Investigator



Yang Liu

Co-Principal Investigators



Han Yu

Chunyan Miao

Dusit Niyato

Hong Xu

Ah-Hwee Tan

Yiyang Pei

Collaborators



Boi Faltings

Cyril Leung

Lixin Fan

Yiqiang Chen

Jianshu Weng

Funded by

Founding Organizations

International Partners

Join Us

For opportunities to study, work or collaborate with us under the TrustFUL Research Programme, please visit the Join Us page.

For more details, please contact Han Yu.