
Let's Think Outside the Box: Exploring Leap-of-Thought in Large Language Models with Creative Humor Generation



Abstract:

Chain-of-Thought (CoT) [2], [3] guides large language models (LLMs) to reason step-by-step and can motivate their logical reasoning ability. While effective for logical tasks, CoT is not conducive to creative problem-solving, which often requires out-of-the-box thinking and is crucial for innovation. In this paper, we explore the Leap-of-Thought (LoT) abilities within LLMs - a non-sequential, creative paradigm involving strong associations and knowledge leaps. To this end, we study LLMs on the popular Oogiri game, which requires participants to have good creativity and strong associative thinking to respond unexpectedly and humorously to a given image, text, or both, and is thus well suited to LoT study. To investigate LLMs' LoT ability in the Oogiri game, we first build a multimodal and multilingual Oogiri-GO dataset containing over 130,000 samples from the Oogiri game, and observe the insufficient LoT ability or outright failures of most existing LLMs on this game. Accordingly, we introduce a Creative Leap-of-Thought (CLoT) paradigm to improve LLMs' LoT ability. CLoT first formulates the Oogiri-GO dataset into LoT-oriented instruction tuning data to train a pre-trained LLM toward LoT humor generation and discrimination abilities. CLoT then designs an explorative self-refinement that encourages the LLM to generate more creative LoT data by exploring parallels between seemingly unrelated concepts, and selects high-quality data to train itself for self-refinement. CLoT not only excels at humor generation in the Oogiri game, as shown in Fig. 1, but also boosts creative abilities in various tasks such as the “cloud guessing game” and the “divergent association task”. These findings advance our understanding and offer a pathway to improve LLMs' creative capacities for innovative applications across domains. The dataset, code, and models have been released online: https://zhongshsh.github.io/CLoT.
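To make the two CLoT stages outlined above concrete (LoT-oriented instruction tuning followed by explorative self-refinement), the following Python sketch mirrors their structure. It is illustrative only: the sample fields, the instruction_tune and score_creativity placeholders, and the concept pool are assumptions made for this sketch, not the released implementation (see the project page above for the actual code).

import random

CONCEPT_POOL = ["umbrella", "penguin", "tax return", "black hole"]  # hypothetical remote concepts

def format_lot_instruction(sample):
    """Stage 1 data: turn an Oogiri-GO sample into an instruction-tuning pair."""
    prompt = ("Respond to the following " + sample["type"]
              + " with an unexpected, humorous answer:\n" + sample["input"])
    return {"instruction": prompt, "output": sample["response"]}

def instruction_tune(llm, data):
    """Placeholder for supervised fine-tuning on instruction/output pairs."""
    llm["tuned_on"] += len(data)
    return llm

def score_creativity(llm, prompt, response):
    """Placeholder for ranking candidate responses (the paper lets the LLM itself select)."""
    return random.random()

def clot_pipeline(llm, oogiri_go, rounds=3, keep_ratio=0.2):
    # Stage 1: LoT-oriented instruction tuning on the reformulated dataset.
    llm = instruction_tune(llm, [format_lot_instruction(s) for s in oogiri_go])
    # Stage 2: explorative self-refinement; generate candidates with remote-association
    # hints, keep only the highest-scoring ones, and re-tune the model on them.
    for _ in range(rounds):
        candidates = []
        for s in oogiri_go:
            hint = random.choice(CONCEPT_POOL)
            resp = "[candidate response to '" + s["input"] + "' via '" + hint + "']"
            candidates.append({"instruction": s["input"], "output": resp,
                               "score": score_creativity(llm, s["input"], resp)})
        candidates.sort(key=lambda c: c["score"], reverse=True)
        keep = max(1, int(len(candidates) * keep_ratio))
        llm = instruction_tune(llm, candidates[:keep])
    return llm

if __name__ == "__main__":
    toy_llm = {"tuned_on": 0}
    toy_data = [{"type": "image caption", "input": "A cat staring at a laptop",
                 "response": "He is reviewing my pull request."}]
    print(clot_pipeline(toy_llm, toy_data)["tuned_on"])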
Date of Conference: 16-22 June 2024
Date Added to IEEE Xplore: 16 September 2024
Conference Location: Seattle, WA, USA

1. Introduction

Large language models (LLMs) [4]–[13] have catalyzed a transformative era in problem-solving, revolutionizing various domains within artificial intelligence. The advent of the Chain-of-Thought (CoT) paradigm [3] and its further enhancements [2, 14–16] have equipped these LLMs with a human-like step-by-step reasoning capacity. This augmentation has enabled LLMs to excel in intricate reasoning tasks ranging from language comprehension to visual understanding. As shown in Fig. 2 (Left), CoT instills in LLMs a sequential thinking process wherein each subsequent thought builds upon the previous one. This paradigm enhances precision and rigor in logical processing, making it exceedingly effective for problems that demand closely linked logical reasoning.
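As a minimal illustration of that sequential prompting style (not code from this paper), a chain-of-thought prompt can be built by appending an explicit step-by-step cue so that each generated step conditions on the ones before it; the downstream model call is omitted here.

def build_cot_prompt(question: str) -> str:
    """Append a sequential-reasoning cue before asking for the final answer."""
    return (
        "Q: " + question + "\n"
        "A: Let's think step by step.\n"
        "Step 1:"  # the model continues with Step 2, Step 3, ..., then the answer
    )

print(build_cot_prompt("A train leaves at 3 pm and travels for 2 hours. When does it arrive?"))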

Figure 1. Comparison between a (multimodal) large language model (LLM, red) and its CLoT-integrated version (blue) for Oogiri-style multimodal humor generation. Depending on the model input, which can be an image, text, or both, there are three Oogiri tasks: “Image&Text to Text (IT2T)”, “Image to Text (I2T)”, and “Text to Text (T2T)”, where the text can be English (EN), Chinese (CN), or Japanese (JP). “@” denotes translations. The baseline LLM is Qwen-VL [1]. While humor is subjective, these examples demonstrate CLoT's leap-of-thought capacity to use creative thinking to produce high-quality humorous responses. See more examples in the Appendix.

