Presentations

Lab Meeting

2024

  • [Sep 24, 2024] HIRE: Leveraging LLMs to Highlight and Reference Information (Research Project)
  • [Sep 10, 2024] Faith and Fate: Transformers as fuzzy pattern matchers
  • [Aug 13, 2024] Faithful CoT
  • [Jul 22, 2024] The Probabilities Also Matter: A More Faithful Metric for Faithfulness of Free-Text Explanations in LLMs
  • [Jul 10, 2024] Why are Visually-Grounded Language Models Bad at Image Classification?
  • [May 16, 2024] Hierarchical Open-vocabulary Universal Image Segmentation
  • [Apr 16, 2024] GLaMM: Pixel Grounding Large Multimodal Model
  • [Mar 12, 2024] Improved Zero-shot Classification by Adapting VLMs with Text Descriptions
  • [Jan 30, 2024] Classification based on Boxes and Phrases (Research Project)
2023

  • [Nov 21, 2023] What does CLIP know about a red circle? Visual prompt engineering for VLMs
  • [Oct 3, 2023] CLIP-Event: Connecting Text and Images with Event Structures
  • [Aug 25, 2023] How does “habitat” help fine-grained bird identification? (Research Project)
  • [Apr 25, 2023] Zero-Shot Classification by Logical Reasoning on Natural Language Explanations
  • [Mar 21, 2023] Open-Vocabulary Semantic Segmentation With Mask-Adapted CLIP
  • [Feb 14, 2023] STAIR: Learning Sparse Text and Image Representation in Grounded Tokens
2022

  • [Nov 22, 2022] Re-labeling ImageNet - from Single to Multi-Labels, from Global to Localized Labels
  • [Oct 18, 2022] Class Activation Latent Mapping - Keep CALM and Improve Visual Feature Attribution
  • [Sep 20, 2022] Mask2Former: Masked-attention Mask Transformer for Universal Image Segmentation