Reasoning with Large Foundation Models
Project Description
Recent advances in large foundation models have revolutionized the way humans interact with AI. This project investigates and enhances the reasoning capabilities of these models across a range of downstream scenarios, such as e-commerce (Amazon), social media (Twitter), and generalizable commonsense problems. Our investigation will cover both large language models (LLMs) and large vision-language models (LVLMs). This project is highly research-oriented and requires applicants with strong self-motivation and determination.
Supervisor
SONG Yangqiu
Quota
10
Course type
UROP1000
UROP1100
UROP2100
UROP3100
UROP3200
UROP4100
Applicant's Roles
Work with a PhD student on task formulation, experiment design, result analysis, and research paper writing.
Applicant's Learning Objectives
Gain hands-on experience working with LLMs and LVLMs, and learn how to conduct research with them across diverse reasoning scenarios.
Complexity of the project
Challenging