About

I am who I am

Hi, I am Howard (Jingbiao) Mei, a PhD student at the Machine Intelligence Laboratory (MIL), University of Cambridge, supervised by Prof. Bill Byrne. I am a member of Peterhouse.

My research focuses on multimodal retrieval, hateful meme detection, and vision-language models. My work spans retrieval-augmented generation for visual question answering, content moderation with multimodal reasoning, and long-term personalized memory systems. I have published at top venues including NeurIPS, ACL, EMNLP, ICLR, and NAACL.

For a full list of publications, talks, and portfolio work, see my academic page: meijingbiao.github.io.

Research Interests

  • Multimodal Retrieval and Retrieval-Augmented Generation
  • Hateful Meme Detection and Content Moderation
  • Vision-Language Models
  • Reinforcement Learning and Preference Optimization
  • Personalized AI and Long-Term Memory

Education

  • PhD in Engineering, University of Cambridge, Oct 2022 – Jun 2026
    • Machine Intelligence Laboratory, Peterhouse
    • Supervisor: Prof. Bill Byrne
    • Research interests: vision-language models, information retrieval, reinforcement learning and reasoning models
  • MEng & BA in Information and Computer Engineering, University of Cambridge, Oct 2018 – Jun 2022

Work Experience

  • Multimodal LLM Research Intern, RedNote (Xiaohongshu), Mar 2025 – Present
    • Harmful content detection, RLHF/GRPO for MLLMs, curriculum learning for content moderation
  • AI Strategy Research Intern, Huawei Cambridge Research Centre (ISR), May 2024 – Present
    • Strategic AI research, co-organising workshops at ECCV/WWW/BMVC/ECAI/Eurographics
  • AI Research Intern, Huawei Cambridge Research Centre (Kirin AI Solution), Jul 2022 – Jan 2023
    • On-device streaming ASR, model compression, patent EP4404187A1
  • Deep Learning Research Intern, University of Cambridge, Jun 2021 – Sep 2021
    • Multimodal hateful speech detection with pretrained vision-language models
  • Deep Learning Research Intern, Shanghai Jiao Tong University, Sep 2020 – Dec 2020
    • Fault-tolerant neural network architectures, published at DAC 2021, patent CN113570056A
  • Web Programmer, Jieqi Edge Computing, Jul 2019 – Sep 2019

News

  • [Mar 2026] Paper accepted at ACL 2026: Retrieval-Augmented Defense: Adaptive and Controllable Jailbreak Prevention for Large Language Models.
  • [Mar 2026] New preprints: According to Me: Long-Term Personalized Referential Memory QA and Controllable Multi-label Video Safety Detection via Adaptive Tversky Policy Optimization.
  • [Jan 2026] Paper accepted at ICLR 2026: ExPO-HM: Learning to Explain-then-Detect for Hateful Meme Detection.
  • [Sep 2025] Paper accepted at EMNLP 2025 Main (Oral): Robust Adaptation of Large Multimodal Models for Retrieval Augmented Hateful Meme Detection.
  • [Sep 2025] Paper accepted at NeurIPS 2025: On Extending Direct Preference Optimization to Accommodate Ties.
  • [May 2024] Two papers accepted at ACL 2024 Main: PreFLMR and RGCL.
  • [Mar 2024] Paper accepted at NAACL 2024 Main: Control-DAG.

Academic Service

Supervision: Co-supervision of Cambridge Engineering MPhil (MLMI), UROP, and MEng student projects (2022–2026).

Teaching: Demonstrator and supervisor for MLMI 8 on Machine Translation and Visual Question Answering (2022–2025) and Large Language Model Applications (2025–2026).

Workshop Organising: Multimodal Information Retrieval Challenge at WWW 2025; UK and Ireland Speech Workshop 2024.

Reviewing: ACL ARR (Feb/May/Jul 2025, Jan 2026), NeurIPS 2025, ICLR 2025/2026, ICML 2026.