CS Seminar: "CS 590/690 Seminars", 15:30, March 4, 2024 (EN)

1. Analyzing Deceptive Intent: A Multimodal Framework

Berat Biçer
Ph.D. Student

(Supervisor: Asst. Prof. Dr. Hamdi Dibeklioğlu)
Computer Engineering Department
Bilkent University

Abstract: In this presentation, we introduce a new approach that uses convolutional self-attention for attention-based representation learning, alongside a transformer backbone for transfer learning, in our automatic deceit detection framework. Our approach analyzes multimodal data points, combining visual, vocal, and speech (textual) channels, to predict deceptive intent. Experimental results indicate that our architecture surpasses the state of the art on the popular Real-Life Trial (RLT) dataset in terms of correct classification rate. We also assess the generalizability of our approach on the low-stakes Box of Lies (BoL) dataset, achieving state-of-the-art performance and offering cross-corpus insights. Our analysis suggests that convolutional self-attention effectively learns meaningful representations and performs joint attention computation for deception. Additionally, apparent deceptive intent appears to be a continuous function of time, with subjects showing varying levels of it throughout the recordings. Lastly, our findings align with insights from criminal psychology, indicating that studying abnormal behavior out of context may not reliably predict deceptive intent.

DATE&TIME: March 04, Monday @ 13:50
PLACE: EA 502
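The convolutional self-attention mentioned in the abstract can be sketched roughly as follows: queries, keys, and values are produced by 1D convolutions over the temporal feature sequence rather than pointwise linear maps, so each attention score also reflects local temporal context. All dimensions, kernel sizes, and weights below are illustrative assumptions, not the speaker's actual architecture.

```python
# Minimal sketch of convolutional self-attention over a temporal
# feature sequence (T frames x D features). Q, K, V come from 1D
# convolutions with 'same' padding instead of pointwise projections.
import numpy as np

def conv1d(x, w):
    # x: (T, D_in), w: (k, D_in, D_out); 'same' padding along time
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.zeros((x.shape[0], w.shape[2]))
    for t in range(out.shape[0]):
        window = xp[t:t + k]                      # (k, D_in)
        out[t] = np.einsum('kd,kde->e', window, w)
    return out

def conv_self_attention(x, wq, wk, wv):
    q, k_, v = conv1d(x, wq), conv1d(x, wk), conv1d(x, wv)
    scores = q @ k_.T / np.sqrt(q.shape[1])       # scaled dot-product
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over time
    return weights @ v                             # (T, D_out)

rng = np.random.default_rng(0)
T, D = 8, 16
x = rng.standard_normal((T, D))
wq, wk, wv = (rng.standard_normal((3, D, D)) * 0.1 for _ in range(3))
y = conv_self_attention(x, wq, wk, wv)
print(y.shape)  # (8, 16)
```

In a multimodal setting, one such block per channel (visual, vocal, textual) could feed a shared transformer backbone, but the fusion strategy above is not specified in the abstract.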


3. Reference-Based 3D-Aware Image Editing with Triplanes

Bahri Batuhan Bilecen
Master's Student

(Supervisor: Asst. Prof. Ayşegül Dündar)
Computer Engineering Department
Bilkent University

Abstract: Image editing via GANs, especially StyleGAN, is a well-studied problem. Various solutions have been proposed, such as training image encoders to project real-life images onto StyleGAN's latent spaces (W+) and finding suitable editing directions. Another recent development in GANs is 3D-aware GANs with efficient triplane-based architectures, such as EG3D, where the same W+ methods of StyleGAN are reused to perform edits. However, in this study, we discover that EG3D enables a new editing space, namely triplanes, which achieves reference-based edits unattainable via conventional W+ editing methods. Our 3D-aware editing pipeline exploits the differentiability of the volumetric neural renderer and generates localized high-dimensional region-of-interest masks in triplane space from 2D pixel space via backpropagation. In addition, it includes novel post-processing and fine-tuning stages for the triplane masks and image encoders, respectively, for seamless and realistic image editing. Our method works on cartoon, animal, and real human portraits without additional supervision, shows major improvements over 2D reference-based and W+ editing methods, and achieves state-of-the-art results.

DATE&TIME: March 04, Monday @ 14:10
PLACE: EA 502
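The backpropagated triplane masks described in the abstract can be illustrated with a deliberately simplified, linear stand-in for the volumetric renderer: a 2D pixel-space region-of-interest mask is pushed backward through the renderer, and the resulting gradient magnitude in triplane space marks which triplane features influence that region. EG3D's real renderer is nonlinear; the weights, shapes, and threshold below are toy assumptions.

```python
# Toy sketch: derive a triplane-space ROI mask from a 2D pixel mask
# via backpropagation, with a linear map standing in for the renderer.
import numpy as np

rng = np.random.default_rng(1)
F, P = 64, 256                         # triplane features, image pixels
triplane = rng.standard_normal(F)
render_w = rng.standard_normal((P, F)) * 0.1  # stand-in "renderer"

image = render_w @ triplane            # forward pass: (P,)
pixel_mask = np.zeros(P)
pixel_mask[100:140] = 1.0              # flattened 2D region of interest

# Backpropagate the masked output through the linear renderer:
# d(sum(mask * image)) / d(triplane) = render_w.T @ mask
grad = render_w.T @ pixel_mask
tri_mask = np.abs(grad) > np.abs(grad).mean()  # threshold -> binary ROI
print(tri_mask.shape)  # (64,)
```

With an autodiff framework and the actual neural renderer, `grad` would come from a backward pass rather than a closed-form transpose; the thresholding step corresponds to the post-processing of triplane masks the abstract mentions.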


4. Contextual Object Recognition in Remote Sensing Images

Sinan Çavdar
Master's Student

(Supervisor: Prof. Dr. Selim Aksoy)
Computer Engineering Department
Bilkent University

Abstract: Current object detection models predominantly concentrate on identifying objects in isolation. In remote sensing imagery, however, the discernibility of objects is frequently compromised by factors such as sensor inaccuracies, which can render objects grainy, occlusion by neighboring objects, the intrinsic reflectivity of the objects themselves, or atmospheric conditions such as cloud cover. Given these challenges, relying solely on the visual characteristics of detected objects may prove insufficient for accurate identification. While certain studies in the existing literature have attempted to integrate rich semantic context with detailed visual features, they typically assume that semantics are captured implicitly. The motivation behind this research is to incorporate contextual semantics explicitly, using transformer-based inpainting detectors in combination with the visual attributes of the detected objects. This methodology extends beyond a narrow focus on objects in isolation, improving identification accuracy by also considering the surrounding context.

DATE&TIME: March 04, Monday @ 14:30
PLACE: EA 502
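The core idea of combining an object's own appearance with features from its masked surroundings can be sketched as below. The pooled descriptors and the zero-fill used as an inpainting stand-in are placeholder assumptions, not the speaker's actual model, which uses transformer-based inpainting detectors.

```python
# Illustrative sketch: fuse object features with context features
# extracted from the image with the object region masked out.
import numpy as np

def object_and_context_features(feat_map, box):
    # feat_map: (H, W, C) feature map; box: (x0, y0, x1, y1)
    x0, y0, x1, y1 = box
    obj = feat_map[y0:y1, x0:x1]
    ctx = feat_map.copy()
    ctx[y0:y1, x0:x1] = 0.0            # mask object out (inpainting stand-in)
    obj_feat = obj.mean(axis=(0, 1))   # crude pooled descriptor of the object
    ctx_feat = ctx.mean(axis=(0, 1))   # crude pooled descriptor of the context
    return np.concatenate([obj_feat, ctx_feat])  # (2C,) joint representation

rng = np.random.default_rng(2)
fmap = rng.standard_normal((32, 32, 8))  # H x W x C feature map
feat = object_and_context_features(fmap, (8, 8, 16, 16))
print(feat.shape)  # (16,)
```

A classifier over the concatenated vector would then see both what the object looks like and what surrounds it, which is the explicit-context principle the abstract argues for.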