1. Introduction
Large language models (LLMs) [4]–[13] have catalyzed a transformative era in problem solving, revolutionizing various domains within artificial intelligence. The advent of the Chain-of-Thought (CoT) paradigm [3] and its further enhancements [2, 14–16] has equipped these LLMs with a human-like step-by-step reasoning capacity. This augmentation has enabled LLMs to excel in intricate reasoning tasks ranging from language comprehension to visual understanding. As shown in Fig. 2 (Left), CoT instills LLMs with a sequential thinking process in which each subsequent thought builds upon the previous one. This paradigm enhances the precision and rigor of logical processing, making it highly effective for problems that demand closely linked logical reasoning.
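To make the sequential-reasoning idea concrete, the sketch below contrasts a direct prompt with a zero-shot CoT-style prompt that elicits step-by-step reasoning. This is a minimal illustration, not the paper's method; the helper functions and the example question are hypothetical, and any chat-completion API could consume the resulting strings.

```python
# Minimal sketch (illustrative only): a direct prompt vs. a zero-shot
# Chain-of-Thought prompt. The prompt builders and the example question
# are hypothetical; plug the strings into any LLM chat/completion API.

def build_direct_prompt(question: str) -> str:
    """Ask for the final answer with no intermediate reasoning."""
    return f"Question: {question}\nAnswer:"

def build_cot_prompt(question: str) -> str:
    """Elicit sequential reasoning: each generated step conditions on the previous ones."""
    return f"Question: {question}\nLet's think step by step."

if __name__ == "__main__":
    q = "If a train travels 60 km in 45 minutes, what is its speed in km/h?"
    print(build_direct_prompt(q))
    print()
    print(build_cot_prompt(q))
```

With the CoT-style prompt, the model is nudged to emit intermediate steps (e.g., converting 45 minutes to 0.75 h before dividing), each building on the last, which is the tightly linked reasoning chain the paragraph above describes.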
Fig. 1. Comparison between a (multimodal) large language model (LLM, red) and its CLoT-integrated version (blue) for Oogiri-style multimodal humor generation. Depending on whether the model input is an image, text, or both, there are three Oogiri tasks: "Image&Text to Text (IT2T)", "Image to Text (I2T)", and "Text to Text (T2T)", where the text can be English (EN), Chinese (CN), or Japanese (JP). "@" denotes translations. The baseline LLM is Qwen-VL [1]. While humor is subjective, these examples demonstrate CLoT's leap-of-thought capacity, using creative thinking to produce high-quality humor responses. See more examples in the Appendix.