Dataset columns (name: dtype and observed range):
bibtex_url: null
proceedings: stringlengths (42 to 42)
bibtext: stringlengths (215 to 445)
abstract: stringlengths (820 to 2.37k)
title: stringlengths (24 to 147)
authors: sequencelengths (1 to 13)
id: stringclasses (1 value)
type: stringclasses (2 values)
arxiv_id: stringlengths (0 to 10)
GitHub: sequencelengths (1 to 1)
paper_page: stringclasses (33 values)
n_linked_authors: int64 (-1 to 4)
upvotes: int64 (-1 to 21)
num_comments: int64 (-1 to 4)
n_authors: int64 (-1 to 11)
Models: sequencelengths (0 to 1)
Datasets: sequencelengths (0 to 1)
Spaces: sequencelengths (0 to 4)
old_Models: sequencelengths (0 to 1)
old_Datasets: sequencelengths (0 to 1)
old_Spaces: sequencelengths (0 to 4)
paper_page_exists_pre_conf: int64 (0 to 1)
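As a rough illustration of how a table with the columns listed above could be loaded and filtered with the Hugging Face datasets library, here is a minimal Python sketch. The repository id "acmmm2024-papers" and the "train" split are assumptions for illustration only; they are not given in this dump.

# Minimal sketch: loading and slicing a table with the schema listed above.
# The repository id "acmmm2024-papers" and the "train" split are assumptions.
from datasets import load_dataset

ds = load_dataset("acmmm2024-papers", split="train")  # hypothetical repo id

# "type" has two classes ("poster" / "oral"); "GitHub" is a list of URLs that
# may contain only an empty string when no repository is linked.
orals = ds.filter(lambda row: row["type"] == "oral")
with_code = ds.filter(lambda row: any(url.strip() for url in row["GitHub"]))
with_arxiv = ds.filter(lambda row: row["arxiv_id"] != "")

print(len(ds), len(orals), len(with_code), len(with_arxiv))
for row in orals.select(range(min(3, len(orals)))):
    print(row["title"], "->", row["proceedings"])

The data rows below follow the column order of the schema; empty string fields (e.g., a missing arxiv_id or paper_page) are simply omitted from the flattened listing.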
null
https://openreview.net/forum?id=7f1i6Po0Sn
@inproceedings{ liao2024uniq, title={UniQ: Unified Decoder with Task-specific Queries for Efficient Scene Graph Generation}, author={Xinyao Liao and Wei Wei and Dangyang Chen and Yuanyuanfu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=7f1i6Po0Sn} }
Scene Graph Generation (SGG) is a scene understanding task that aims at identifying object entities and reasoning about their relationships within a given image. In contrast to prevailing two-stage methods based on a large object detector (e.g., Faster R-CNN), one-stage methods integrate a fixed-size set of learnable queries to jointly reason about relational triplets <subject, predicate, object>. This paradigm demonstrates robust performance with significantly reduced parameters and computational overhead. However, the challenge in one-stage methods stems from the issue of weak entanglement, wherein entities involved in relationships require both coupled features shared within triplets and decoupled visual features. Previous methods either adopt a single decoder for coupled triplet feature modeling or multiple decoders for separate visual feature extraction, but fail to consider both. In this paper, we introduce UniQ, a Unified decoder with task-specific Queries architecture, where task-specific queries generate decoupled visual features for subjects, objects, and predicates respectively, and a unified decoder enables coupled feature modeling within relational triplets. Experimental results on the Visual Genome dataset demonstrate that UniQ achieves superior performance compared to both one-stage and two-stage methods.
UniQ: Unified Decoder with Task-specific Queries for Efficient Scene Graph Generation
[ "Xinyao Liao", "Wei Wei", "Dangyang Chen", "Yuanyuanfu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=7dGcZ5xTVP
@inproceedings{ wang2024imagefree, title={Image-free Pre-training for Low-Level Vision}, author={Siyang Wang and JingHao Zhang and Jie Huang and Feng Zhao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=7dGcZ5xTVP} }
The constrained data scale in low-level vision often exposes restoration networks to a severe risk of overfitting, necessitating the adoption of the pre-training paradigm. Mirroring the success of high-level pre-training approaches, recent methods in the low-level community aim to derive general visual representations from extensive data with synthesized degradations. In this paper, we propose a new perspective beyond the data-driven image pre-training paradigm for low-level vision, building upon the following examination. First, unlike the semantic extraction prevalent in high-level vision tasks, low-level vision primarily focuses on continuous and content-agnostic pixel-level regression, indicating that the diversified image contents inherent in large-scale data are potentially unnecessary for low-level vision pre-training. Second, considering that low-level degradations are highly relevant to the frequency spectrum, we discern that the low-level pre-training paradigm can be implemented in the Fourier space with improved degradation sensitivity. Therefore, we develop an Image-Free Pre-training (IFP) paradigm, a novel low-level pre-training approach that requires only a single randomly sampled Gaussian noise image, streamlining the complicated data collection and synthesis procedure. The principle of IFP involves reconstructing the original Gaussian noise from a randomly perturbed counterpart with a partially masked spectrum band, facilitating robust spectrum representation extraction in response to capricious downstream degradations. Extensive experiments demonstrate the significant improvements brought by the IFP paradigm to various downstream tasks, such as a 1.31dB performance boost in low-light enhancement for Restormer, and improvements of 1.2dB in deblurring and 2.42dB in deraining for Uformer.
Image-free Pre-training for Low-Level Vision
[ "Siyang Wang", "JingHao Zhang", "Jie Huang", "Feng Zhao" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=7cxKXqMdOb
@inproceedings{ ruan2024gist, title={{GIST}: Improving Parameter Efficient Fine-Tuning via Knowledge Interaction}, author={Jiacheng Ruan and Jingsheng Gao and Mingye Xie and Suncheng Xiang and Zefang Yu and Ting Liu and yuzhuo fu and Xiaoye Qu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=7cxKXqMdOb} }
Recently, the Parameter Efficient Fine-Tuning (PEFT) method, which adjusts or introduces fewer trainable parameters to calibrate pre-trained models on downstream tasks, has been a hot research topic. However, existing PEFT methods within the traditional fine-tuning framework have two main shortcomings: 1) They overlook the explicit association between trainable parameters and downstream knowledge. 2) They neglect the interaction between the intrinsic task-agnostic knowledge of pre-trained models and the task-specific knowledge of downstream tasks. These oversights lead to insufficient utilization of knowledge and suboptimal performance. To address these issues, we propose a novel fine-tuning framework, named GIST, that can be seamlessly integrated into the current PEFT methods in a plug-and-play manner. Specifically, our framework first introduces a trainable token, called the Gist token, when applying PEFT methods on downstream tasks. This token serves as an aggregator of the task-specific knowledge learned by the PEFT methods and builds an explicit association with downstream tasks. Furthermore, to facilitate explicit interaction between task-agnostic and task-specific knowledge, we introduce the concept of knowledge interaction via a Bidirectional Kullback-Leibler Divergence objective. As a result, PEFT methods within our framework can enable the pre-trained model to understand downstream tasks more comprehensively by fully leveraging both types of knowledge. Extensive experiments on the 35 datasets demonstrate the universality and scalability of our framework. Notably, the PEFT method within our GIST framework achieves up to a 2.25% increase on the VTAB-1K benchmark with an addition of just 0.8K parameters (0.009‰ of ViT-B/16). Code is in the supplementary materials.
GIST: Improving Parameter Efficient Fine-Tuning via Knowledge Interaction
[ "Jiacheng Ruan", "Jingsheng Gao", "Mingye Xie", "Suncheng Xiang", "Zefang Yu", "Ting Liu", "yuzhuo fu", "Xiaoye Qu" ]
Conference
poster
[ "https://github.com/JCruan519/GIST" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=7ZYEoB71Vd
@inproceedings{ guo2024llavaultra, title={{LL}a{VA}-Ultra: Large Chinese Language and Vision Assistant for Ultrasound}, author={Xuechen Guo and Wenhao Chai and Shi-Yan Li and Gaoang Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=7ZYEoB71Vd} }
Multimodal Large Language Models (MLLMs) have recently garnered significant attention as a prominent research focus. By harnessing the capability of powerful Large Language Models (LLMs), they facilitate the transition of conversational generative AI from unimodal text to multimodal tasks. This blooming development has begun to significantly impact the medical field. However, visual language models in the general domain lack the sophisticated comprehension required for medical visual conversations. Even models specifically tailored for the medical domain often produce answers that tend to be vague and weakly related to the visual content. In this paper, we propose a fine-grained and adaptive visual language model architecture for Chinese medical visual conversations through parameter-efficient tuning. Specifically, we devise a fusion module with fine-grained vision encoders to enhance subtle medical visual semantics. We then address the data redundancy that is common in medical scenes but ignored in most prior works. In cases where a single text is paired with multiple figures, we utilize weighted scoring with knowledge distillation to adaptively screen valid images mirroring the text descriptions. For execution, we leverage a large-scale Chinese ultrasound multimodal dataset obtained first-hand from a hospital database. We create instruction-following data based on text derived from doctors, which ensures professionalism and thus contributes to effective tuning. With the enhanced architecture and quality data, our Large Chinese Language and Vision Assistant for Ultrasound (LLaVA-Ultra) shows strong capability and robustness in medical scenarios. On three medical visual question answering datasets, LLaVA-Ultra surpasses previous state-of-the-art models on various metrics.
LLaVA-Ultra: Large Chinese Language and Vision Assistant for Ultrasound
[ "Xuechen Guo", "Wenhao Chai", "Shi-Yan Li", "Gaoang Wang" ]
Conference
poster
2410.15074
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=7XyXOOGYfV
@inproceedings{ han2024event, title={Event Traffic Forecasting with Sparse Multimodal Data}, author={Xiao Han and Zhenduo zhang and Yiling Wu and Xinfeng Zhang and Zhe Wu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=7XyXOOGYfV} }
With the development of deep learning, traffic forecasting technology has made significant progress and is being applied in many practical scenarios. However, various events held in cities, such as sporting events, exhibitions, concerts, etc., have a significant impact on traffic patterns of surrounding areas, causing current advanced prediction models to fail in this case. In this paper, to broaden the applicable scenarios of traffic forecasting, we focus on modeling the impact of events on traffic patterns and propose an event traffic forecasting problem with multimodal inputs. We outline the main challenges of this problem: diversity and sparsity of events, as well as insufficient data. To address these issues, we first use textual modal data containing rich semantics to describe the diverse characteristics of events. Then, we propose a simple yet effective multi-modal event traffic forecasting model that uses pre-trained text and traffic encoders to extract the embeddings and fuses the two embeddings for prediction. Encoders pre-trained on large-scale data have powerful generalization abilities to cope with the challenge of sparse data. Next, we design an efficient large language model-based event description text generation pipeline to build multi-modal event traffic forecasting datasets, ShenzhenCEC and SuzhouIEC. Experiments on two real-world datasets show that our method achieves state-of-the-art performance compared with eight baselines, reducing mean absolute error during the event peak period by 4.26\%. Code is available at: https://github.com/2448845600/EventTrafficForecasting.
Event Traffic Forecasting with Sparse Multimodal Data
[ "Xiao Han", "Zhenduo zhang", "Yiling Wu", "Xinfeng Zhang", "Zhe Wu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=7Xoj8nNwot
@inproceedings{ wang2024make, title={Make Privacy Renewable! Generating Privacy-Preserving Faces Supporting Cancelable Biometric Recognition}, author={Tao Wang and Yushu Zhang and Xiangli Xiao and Lin Yuan and Zhihua Xia and Jian Weng}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=7Xoj8nNwot} }
The significant advancement in face recognition has made face privacy protection a prominent research direction. Unlike de-identification, a recent class of face privacy protection schemes preserves identifiable information for face recognition. However, these schemes fail to support the revocation of a leaked identity, allowing attackers to potentially identify individuals and then pose security threats. In this paper, we explore the possibility of generating privacy-preserving faces (not features) that support cancelable biometric recognition. Specifically, we propose a cancelable face generator (CanFG), which removes the physical identity for privacy protection and embeds a virtual identity for face recognition. In particular, when leaked, the virtual identity can be revoked and renewed as another one, preventing re-identification by attackers. Benefiting from the designed distance-preserving identity transformation, CanFG can guarantee separability and preserve recognizability of virtual identities. Moreover, to make CanFG lightweight, we design a simple but effective training strategy, which allows CanFG to require only one (rather than two) network for achieving stable multi-objective learning. Extensive experimental results and sufficient security analyses demonstrate the ability of CanFG to effectively protect physical identity and support cancelable biometric recognition. Our code is available at https://github.com/daizigege/CanFG.
Make Privacy Renewable! Generating Privacy-Preserving Faces Supporting Cancelable Biometric Recognition
[ "Tao Wang", "Yushu Zhang", "Xiangli Xiao", "Lin Yuan", "Zhihua Xia", "Jian Weng" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=7TYTiE7CAZ
@inproceedings{ xu2024probabilistic, title={Probabilistic Distillation Transformer: Modelling Uncertainties for Visual Abductive Reasoning}, author={Wanru Xu and Zhenjiang Miao and Yi Tian and Yigang Cen and Lili Wan and Ma Xiaole}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=7TYTiE7CAZ} }
Visual abductive reasoning aims to find the most plausible explanation for incomplete observations, and it suffers from inherent uncertainties and ambiguities, which mainly stem from the latent causal relations, incomplete observations, and the reasoning itself. To address this, we propose a probabilistic model named the Uncertainty-Guided Probabilistic Distillation Transformer (UPD-Trans) to model uncertainties for visual abductive reasoning. To better discover the correct cause-effect chain, we model all the potential causal relations in a unified reasoning framework, so that both direct relations and latent relations are considered. To reduce the effect of stochasticity and uncertainty on reasoning: 1) we extend the deterministic Transformer to a probabilistic Transformer by considering the uncertain factors as Gaussian random variables and explicitly modeling their distributions; 2) we introduce a distillation mechanism between the posterior branch with complete observations and the prior branch with incomplete observations to transfer posterior knowledge. Evaluation results on the benchmark datasets consistently demonstrate the commendable performance of our UPD-Trans, with significant improvements after latent relation modeling and uncertainty modeling.
Probabilistic Distillation Transformer: Modelling Uncertainties for Visual Abductive Reasoning
[ "Wanru Xu", "Zhenjiang Miao", "Yi Tian", "Yigang Cen", "Lili Wan", "Ma Xiaole" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=7RiZKcUtGe
@inproceedings{ rosh2024rsfd, title={R2{SFD}: Improving Single Image Reflection Removal using Semantic Feature Dictionary}, author={Green Rosh and Pawan Prasad B H and LOKESH R BOREGOWDA and Kaushik Mitra}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=7RiZKcUtGe} }
Single image reflection removal is a severely ill-posed problem, and it is very hard to separate the desirable transmission and undesirable reflection layers. Most of the existing single image reflection removal methods try to recover the transmission layer by exploiting cues extracted only from the given input image. However, there is abundant unutilized information in the form of millions of publicly available reflection-free images. Even though this information is easily available, utilizing it to effectively remove reflections is non-trivial. In this paper, we propose a novel method, termed R2SFD, for improving single image reflection removal using a Semantic Feature Dictionary (SFD) constructed from a database of reflection-free images. The SFD is constructed using a novel Reflection Aware Feature Extractor (RAFENet) that extracts features invariant to the presence of reflections. The SFD and the input image are then passed to another novel network termed SFDNet. This network first extracts RAFENet features from the reflection-corrupted input image, searches for similar features in the SFD, and transfers the semantic content to generate the final output. To further improve reflection removal, we also introduce a Large Scale Reflection Removal (LSRR) dataset consisting of 2650 image pairs covering a variety of real-world reflection scenarios. The proposed method achieves superior results both qualitatively and quantitatively compared to state-of-the-art single image reflection removal methods on real public datasets as well as our LSRR dataset. We will release the dataset at https://github.com/ee19d005/r2sfd.
R2SFD: Improving Single Image Reflection Removal using Semantic Feature Dictionary
[ "Green Rosh", "Pawan Prasad B H", "LOKESH R BOREGOWDA", "Kaushik Mitra" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=7N8AImEVfV
@inproceedings{ wang2024primecomposer, title={PrimeComposer: Faster Progressively Combined Diffusion for Image Composition with Attention Steering}, author={Yibin Wang and WEIZHONG ZHANG and Jianwei Zheng and Cheng Jin}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=7N8AImEVfV} }
Image composition involves seamlessly integrating given objects into a specific visual context. Current training-free methods rely on composing attention weights from several samplers to guide the generator. However, since these weights are derived from disparate contexts, their combination leads to coherence confusion and loss of appearance information. These issues worsen with their excessive focus on background generation, even when unnecessary in this task. This not only impedes their swift implementation but also compromises foreground generation quality. Moreover, these methods introduce unwanted artifacts in the transition area. In this paper, we formulate image composition as a subject-based local editing task, solely focusing on foreground generation. At each step, the edited foreground is combined with the noisy background to maintain scene consistency. To address the remaining issues, we propose PrimeComposer, a faster training-free diffuser that composites the images by well-designed attention steering across different noise levels. This steering is predominantly achieved by our Correlation Diffuser, utilizing its self-attention layers at each step. Within these layers, the synthesized subject interacts with both the referenced object and background, capturing intricate details and coherent relationships. This prior information is encoded into the attention weights, which are then integrated into the self-attention layers of the generator to guide the synthesis process. Besides, we introduce a Region-constrained Cross-Attention to confine the impact of specific subject-related words to desired regions, addressing the unwanted artifacts shown in the prior method thereby further improving the coherence in the transition area. Our method exhibits the fastest inference efficiency and extensive experiments demonstrate our superiority both qualitatively and quantitatively. The code is available at https://github.com/CodeGoat24/PrimeComposer.
PrimeComposer: Faster Progressively Combined Diffusion for Image Composition with Attention Steering
[ "Yibin Wang", "WEIZHONG ZHANG", "Jianwei Zheng", "Cheng Jin" ]
Conference
poster
2403.05053
[ "https://github.com/codegoat24/primecomposer" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=7JhV3Pbfgk
@inproceedings{ huang2024magicfight, title={MagicFight: Personalized Martial Arts Combat Video Generation}, author={Jiancheng Huang and Mingfu Yan and Songyan Chen and Yi Huang and Shifeng Chen}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=7JhV3Pbfgk} }
Amid the surge in generic text-to-video generation, the field of personalized human video generation has witnessed notable advancements, primarily concentrated on single-person scenarios. However, to our knowledge, the domain of two-person interactions, particularly in the context of martial arts combat, remains uncharted. We identify a significant gap: existing models for single-person dancing generation prove insufficient for capturing the subtleties and complexities of two engaged fighters, resulting in challenges such as identity confusion, anomalous limbs, and action mismatches. To address this, we introduce a pioneering new task, Personalized Martial Arts Combat Video Generation. Our approach, MagicFight, is specifically crafted to overcome these hurdles. Given this pioneering task, we face a lack of appropriate datasets. Thus, we generate a bespoke dataset using the game physics engine Unity, meticulously crafting a multitude of 3D characters, martial arts moves, and scenes designed to represent the diversity of combat. MagicFight refines and adapts existing models and strategies to generate high-fidelity two-person combat videos that maintain individual identities and ensure seamless, coherent action sequences, thereby laying the groundwork for future innovations in the realm of interactive video content creation.
MagicFight: Personalized Martial Arts Combat Video Generation
[ "Jiancheng Huang", "Mingfu Yan", "Songyan Chen", "Yi Huang", "Shifeng Chen" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=7GPsuT0vyh
@inproceedings{ shen2024bridging, title={Bridging the Gap: Sketch-Aware Interpolation Network for High-Quality Animation Sketch Inbetweening}, author={Jiaming Shen and Kun Hu and Wei Bao and Chang Wen Chen and Zhiyong Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=7GPsuT0vyh} }
The hand-drawn 2D animation workflow typically begins with the creation of sketch keyframes. Subsequent inbetweens are then crafted manually for smoothness, a labor-intensive process that makes the prospect of automatic animation sketch interpolation highly appealing. Yet, common frame interpolation methods are generally hindered by two key issues: 1) limited texture and colour details in sketches, and 2) exaggerated alterations between two sketch keyframes. To overcome these issues, we propose a novel deep learning method, the Sketch-Aware Interpolation Network (SAIN). This approach incorporates multi-level guidance that formulates region-level correspondence, stroke-level correspondence and pixel-level dynamics. A multi-stream U-Transformer is then devised to characterize sketch inbetweening patterns using these multi-level guides through the integration of self- and cross-attention mechanisms. Additionally, to facilitate future research on animation sketch inbetweening, we constructed a large-scale dataset, STD-12K, comprising 30 sketch animation series in diverse artistic styles. Comprehensive experiments on this dataset convincingly show that our proposed SAIN surpasses the state-of-the-art interpolation methods. Our code and dataset will be publicly available.
Bridging the Gap: Sketch-Aware Interpolation Network for High-Quality Animation Sketch Inbetweening
[ "Jiaming Shen", "Kun Hu", "Wei Bao", "Chang Wen Chen", "Zhiyong Wang" ]
Conference
poster
2308.13273
[ "https://github.com/none-master/sain" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=7DKE4N9wbu
@inproceedings{ ma2024cirp, title={{CIRP}: Cross-Item Relational Pre-training for Multimodal Product Bundling}, author={Yunshan Ma and Yingzhi He and WENJUN ZHONG and Xiang Wang and Roger Zimmermann and Tat-Seng Chua}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=7DKE4N9wbu} }
Product bundling has been a prevailing marketing strategy that is beneficial in the online shopping scenario. Effective product bundling methods depend on high-quality item representations capturing both the individual items' semantics and cross-item relations. However, previous item representation learning methods, either feature fusion or graph learning, suffer from inadequate cross-modal alignment and struggle to capture the cross-item relations for cold-start items. Multimodal pre-train models could be the potential solutions given their promising performance on various multimodal downstream tasks. However, the cross-item relations have been under-explored in the current multimodal pre-train models. To bridge this gap, we propose a novel and simple framework Cross-Item Relational Pre-training (CIRP) for item representation learning in product bundling. Specifically, we employ a multimodal encoder to generate image and text representations. Then we leverage both the cross-item contrastive loss (CIC) and individual item's image-text contrastive loss (ITC) as the pre-train objectives. Our method seeks to integrate cross-item relation modeling capability into the multimodal encoder. Therefore, even for cold-start items that have no relations, their representations are still relation-aware. Furthermore, to eliminate the potential noise and reduce the computational cost, we harness a relation pruning module to remove the noisy and redundant relations. We apply the item representations extracted by CIRP to the product bundling model ItemKNN, and experiments on three e-commerce datasets demonstrate that CIRP outperforms various leading representation learning methods.
CIRP: Cross-Item Relational Pre-training for Multimodal Product Bundling
[ "Yunshan Ma", "Yingzhi He", "WENJUN ZHONG", "Xiang Wang", "Roger Zimmermann", "Tat-Seng Chua" ]
Conference
poster
2404.01735
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=7BZ4biy975
@inproceedings{ zhu2024unistyle, title={UniStyle: Unified Style Modeling for Speaking Style Captioning and Stylistic Speech Synthesis}, author={Xinfa Zhu and Wenjie Tian and Xinsheng Wang and Lei He and Yujia Xiao and Xi Wang and Xu Tan and sheng zhao and Lei Xie}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=7BZ4biy975} }
Understanding the speaking style, such as the emotion of the interlocutor's speech, and responding with speech in an appropriate style is a natural occurrence in human conversations. However, technically, existing research on speech synthesis and speaking style captioning typically proceeds independently. In this work, an innovative framework, referred to as UniStyle, is proposed to incorporate both the capabilities of speaking style captioning and style-controllable speech synthesizing. Specifically, UniStyle consists of a UniConnector and a style prompt-based speech generator. The role of the UniConnector is to bridge the gap between different modalities, namely speech audio and text descriptions. It enables the generation of text descriptions with speech as input and the creation of style representations from text descriptions for speech synthesis with the speech generator. Besides, to overcome the issue of data scarcity, we propose a two-stage and semi-supervised training strategy, which reduces data requirements while boosting performance. Extensive experiments conducted on open-source corpora demonstrate that UniStyle achieves state-of-the-art performance in speaking style captioning and synthesizes expressive speech with various speaker timbres and speaking styles in a zero-shot manner.
UniStyle: Unified Style Modeling for Speaking Style Captioning and Stylistic Speech Synthesis
[ "Xinfa Zhu", "Wenjie Tian", "Xinsheng Wang", "Lei He", "Yujia Xiao", "Xi Wang", "Xu Tan", "sheng zhao", "Lei Xie" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=7B93ulPZ9K
@inproceedings{ liling2024translating, title={Translating Motion to Notation: Hand Labanotation for Intuitive and Comprehensive Hand Movement Documentation}, author={LiLing and Wenrui Yang and Xinchun Yu and Junliang Xing and Xiao-Ping Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=7B93ulPZ9K} }
Symbols play a pivotal role in the documentation and dissemination of art. For instance, we use musical scores and dance notation to document musical compositions and choreographic movements. Existing hand representations do not fit well with hand movement documentation since (1) data-oriented representations, e.g., coordinates of hand keypoints, are not intuitive and vulnerable to noise, and (2) the sign language, another widely adopted representation for hand movements, focuses solely on semantic interaction rather than action encoding. To balance intuitiveness and precision, we propose a novel notation system, named Hand Labanotation (HL), for hand movement documentation. We first introduce a new HL dataset comprising $4$M annotated images. Thereon, we propose a novel multi-view transformer architecture for automatically translating hand movements to HL. Extensive experiments demonstrate the promising capacity of our method for representing hand movements. This makes our method a general tool for hand movement documentation, driving various downstream applications like using HL to control robotic hands.
Translating Motion to Notation: Hand Labanotation for Intuitive and Comprehensive Hand Movement Documentation
[ "LiLing", "Wenrui Yang", "Xinchun Yu", "Junliang Xing", "Xiao-Ping Zhang" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=7B8TsGZHGz
@inproceedings{ lu2024large, title={Large Point-to-Gaussian Model for Image-to-3D Generation}, author={Longfei Lu and Huachen Gao and Tao Dai and Yaohua Zha and Zhi Hou and Junta Wu and Shu-Tao Xia}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=7B8TsGZHGz} }
Recently, image-to-3D approaches have significantly advanced the generation quality and speed of 3D assets based on large reconstruction models, particularly 3D Gaussian reconstruction models. Existing large 3D Gaussian models directly map a 2D image to 3D Gaussian parameters, yet regressing from a 2D image to 3D Gaussian representations is challenging without 3D priors. In this paper, we propose a large Point-to-Gaussian model for image-to-3D generation, which takes as input an initial point cloud produced by a large 3D diffusion model conditioned on the 2D image and generates the Gaussian parameters. The point cloud provides an initial 3D geometry prior for Gaussian generation, thus significantly facilitating image-to-3D generation. Moreover, we present an Attention mechanism, Projection mechanism, and Point feature extractor, dubbed the APP block, for fusing image features with point cloud features. Qualitative and quantitative experiments on the GSO and Objaverse datasets extensively demonstrate the effectiveness of the proposed approach and show that it achieves state-of-the-art performance.
Large Point-to-Gaussian Model for Image-to-3D Generation
[ "Longfei Lu", "Huachen Gao", "Tao Dai", "Yaohua Zha", "Zhi Hou", "Junta Wu", "Shu-Tao Xia" ]
Conference
poster
2408.10935
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=78TMql1c04
@inproceedings{ gao2024enhanced, title={Enhanced Experts with Uncertainty-Aware Routing for Multimodal Sentiment Analysis}, author={Zixian Gao and Disen Hu and Xun Jiang and Huimin Lu and Heng Tao Shen and Xing Xu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=78TMql1c04} }
Multimodal sentiment analysis, which has garnered widespread attention in recent years, aims to predict human emotional states using multimodal data. Previous studies have primarily focused on enhancing multimodal fusion and integrating information across different modalities while overlooking the impact of noisy data on the internal features of each single modality. In this paper, we propose the Enhanced experts with Uncertainty-Aware Routing (EUAR) method to address the influence of noisy data on multimodal sentiment analysis by capturing uncertainty and dynamically altering the network. Specifically, we introduce the Mixture of Experts approach into multimodal sentiment analysis for the first time, leveraging its properties under conditional computation to dynamically alter the network in response to different types of noisy data. Particularly, we refine the experts within the MoE framework to capture uncertainty in the data and extract clearer features. Additionally, a novel routing mechanism is introduced. Through our proposed U-loss, which utilizes the quantified uncertainty by experts, the network learns to route different samples to experts with lower uncertainty for processing, thus obtaining clearer, noise-free features. Experimental results demonstrate that our method achieves state-of-the-art performance on three widely used multimodal sentiment analysis datasets. Moreover, experiments on noisy datasets show that our approach outperforms existing methods in handling noisy data. Our anonymous implementation code can be available at https://anonymous.4open.science/r/EUAR-7BF6.
Enhanced Experts with Uncertainty-Aware Routing for Multimodal Sentiment Analysis
[ "Zixian Gao", "Disen Hu", "Xun Jiang", "Huimin Lu", "Heng Tao Shen", "Xing Xu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=77IS5a80GK
@inproceedings{ su2024model, title={Model X-ray : Detecting Backdoored Models via Decision Boundary}, author={Yanghao Su and Jie Zhang and Ting Xu and Tianwei Zhang and Weiming Zhang and Nenghai Yu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=77IS5a80GK} }
Backdoor attacks pose a significant security vulnerability for deep neural networks (DNNs), enabling them to operate normally on clean inputs but manipulate predictions when specific trigger patterns occur. In this paper, we consider a practical post-training backdoor defense scenario, where the defender aims to evaluate whether a trained model has been compromised by backdoor attacks. Current post-training backdoor detection approaches often operate under the assumption that the defender has knowledge of the attack information, the logit output of the model, and the model parameters, limiting their applicability in practical scenarios. In contrast, our approach functions as a lightweight diagnostic scanning tool that operates in conjunction with other defense methods, assisting in defense pipelines. We begin by presenting an intriguing observation: the decision boundary of a backdoored model exhibits a greater degree of closeness than that of a clean model. Simultaneously, if only a single label is infected, a larger portion of the regions will be dominated by the attacked label. Leveraging this observation and drawing an analogy to X-rays in disease diagnosis, we propose Model X-ray, a novel backdoor detection approach based on the analysis of illustrated two-dimensional (2D) decision boundaries, offering interpretability and visualization. Model X-ray can not only identify whether the target model is infected but also determine the attacked target label under the all-to-one attack strategy. Importantly, it accomplishes this solely from the predicted hard labels of clean inputs, without any assumptions about the attacks or prior knowledge of the model's training details. Extensive experiments demonstrate that Model X-ray is effective and efficient across diverse backdoor attacks, datasets, and architectures.
Model X-ray: Detecting Backdoored Models via Decision Boundary
[ "Yanghao Su", "Jie Zhang", "Ting Xu", "Tianwei Zhang", "Weiming Zhang", "Nenghai Yu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=6yiHb5VEL9
@inproceedings{ li2024attributedriven, title={Attribute-driven Disentangled Representation Learning for Multimodal Recommendation}, author={Zhenyang Li and Fan Liu and Yinwei Wei and Zhiyong Cheng and Liqiang Nie and Mohan Kankanhalli}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=6yiHb5VEL9} }
Recommendation algorithms forecast user preferences by correlating user and item representations derived from historical interaction patterns. In pursuit of enhanced performance, many methods focus on learning robust and independent representations by disentangling the intricate factors within interaction data across various modalities in an unsupervised manner. However, such an approach obfuscates the discernment of how specific factors (e.g., category or brand) influence the outcomes, making it challenging to regulate their effects. In response to this challenge, we introduce a novel method called Attribute-Driven Disentangled Representation Learning (short for AD-DRL), which explicitly incorporates attributes from different modalities into the disentangled representation learning process. By assigning a specific attribute to each factor in multimodal features, AD-DRL can disentangle the factors at both attribute and attribute-value levels. To obtain robust and independent representations for each factor associated with a specific attribute, we first disentangle the representations of features both within and across different modalities. Moreover, we further enhance the robustness of the representations by fusing the multimodal features of the same factor. Empirical evaluations conducted on three public real-world datasets substantiate the effectiveness of AD-DRL, as well as its interpretability and controllability.
Attribute-driven Disentangled Representation Learning for Multimodal Recommendation
[ "Zhenyang Li", "Fan Liu", "Yinwei Wei", "Zhiyong Cheng", "Liqiang Nie", "Mohan Kankanhalli" ]
Conference
poster
2312.14433
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=6yOayWBufN
@inproceedings{ wang2024importanceaware, title={Importance-aware Shared Parameter Subspace Learning for Domain Incremental Learning}, author={Shiye Wang and Changsheng Li and Jialin Tang and Xing Gong and Ye Yuan and Guoren Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=6yOayWBufN} }
Parameter-Efficient Tuning (PET) for pre-trained deep models (e.g., transformers) holds significant potential for domain incremental learning (DIL). Recent prevailing approaches resort to prompt learning, which typically involves learning a small number of prompts for each domain to avoid the issue of catastrophic forgetting. However, previous studies have pointed out that prompt-based methods are often challenging to optimize, and their performance may vary non-monotonically with the number of trainable parameters. In contrast to previous prompt-based DIL methods, we put forward an importance-aware shared parameter subspace learning approach for domain incremental learning, on the basis of low-rank adaptation (LoRA). Specifically, we propose to incrementally learn a domain-specific and a domain-shared low-rank parameter subspace for each domain, in order to effectively decouple the parameter space and capture shared information across different domains. Meanwhile, we present a momentum update strategy for learning the domain-shared subspace, allowing for the smooth accumulation of knowledge in the current domain while mitigating the risk of forgetting the knowledge acquired from previous domains. Moreover, given that domain-shared information might hold varying degrees of importance across different domains, we design an importance-aware mechanism that adaptively assigns an importance weight to the domain-shared subspace for the corresponding domain. Finally, we devise a cross-domain contrastive constraint to encourage domain-specific subspaces to capture distinctive information within each domain effectively, and enforce orthogonality between domain-shared and domain-specific subspaces to minimize interference between them. Extensive experiments on image domain incremental datasets demonstrate the effectiveness of the proposed method in comparison to related state-of-the-art methods.
Importance-aware Shared Parameter Subspace Learning for Domain Incremental Learning
[ "Shiye Wang", "Changsheng Li", "Jialin Tang", "Xing Gong", "Ye Yuan", "Guoren Wang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=6vcZHJTlPj
@inproceedings{ liu2024not, title={Not All Pairs are Equal: Hierarchical Learning for Average-Precision-Oriented Video Retrieval}, author={Yang Liu and Qianqian Xu and Peisong Wen and Siran Dai and Qingming Huang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=6vcZHJTlPj} }
The rapid growth of online video resources has significantly promoted the development of video retrieval methods. As a standard evaluation metric for video retrieval, Average Precision (AP) assesses the overall ranking of relevant videos at the top of the list, making the predicted scores a reliable reference for users. However, recent video retrieval methods utilize pair-wise losses that treat all sample pairs equally, leading to an evident gap between the training objective and the evaluation metric. To effectively bridge this gap, in this work, we aim to address two primary challenges: a) the current similarity measure and AP-based loss are suboptimal for video retrieval; b) the noticeable noise from frame-to-frame matching introduces ambiguity in estimating the AP loss. In response to these challenges, we propose the Hierarchical learning framework for Average-Precision-oriented Video Retrieval (HAP-VR). For the former challenge, we develop the TopK-Chamfer Similarity and QuadLinear-AP loss to measure and optimize video-level similarities in terms of AP. For the latter challenge, we suggest constraining the frame-level similarities to achieve an accurate AP loss estimation. Experimental results show that HAP-VR outperforms existing methods on several benchmark datasets, providing a feasible solution for video retrieval tasks and thus offering potential benefits for multimedia applications.
Not All Pairs are Equal: Hierarchical Learning for Average-Precision-Oriented Video Retrieval
[ "Yang Liu", "Qianqian Xu", "Peisong Wen", "Siran Dai", "Qingming Huang" ]
Conference
oral
2407.15566
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=6r4xHv3bbe
@inproceedings{ zhu2024selfsupervised, title={Self-Supervised Visual Preference Alignment}, author={Ke Zhu and Liang Zhao and Zheng Ge and Xiangyu Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=6r4xHv3bbe} }
This paper makes the first attempt towards unsupervised preference alignment in Vision-Language Models (VLMs). We generate chosen and rejected responses with regard to the original and augmented image pairs, and conduct preference alignment with direct preference optimization. It is based on a core idea: properly designed augmentation of the image input will induce the VLM to generate false but hard negative responses, which helps the model learn from them and produce more robust and powerful answers. The whole pipeline no longer hinges on supervision from GPT-4 or human involvement during alignment, and is highly efficient with only a few lines of code. With only 8k randomly sampled unsupervised data, it achieves a 90\% relative score to GPT-4 on complex reasoning in LLaVA-Bench, and improves LLaVA-7B/13B by 6.7\%/5.6\% on the complex multi-modal benchmark MM-Vet. Visualizations show its improved ability to align with user intentions. A series of ablations is conducted to reveal the latent mechanism of the approach, which also indicates its potential for further scaling.
Self-Supervised Visual Preference Alignment
[ "Ke Zhu", "Liang Zhao", "Zheng Ge", "Xiangyu Zhang" ]
Conference
oral
2404.10501
[ "https://github.com/Kevinz-code/SeVa" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=6ko0tOQllI
@inproceedings{ zhou2024towards, title={Towards Distortion-Debiased Blind Image Quality Assessment}, author={Lize Zhou and Xiaoqi Wang and Jian Xiong and Xianzhong Long and Hao Gao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=6ko0tOQllI} }
Existing blind image quality assessment (BIQA) models are susceptible to biases related to distortion intensity and domain. Intensity bias manifests as an over-sensitivity to severe distortions and under-estimation of minor ones, while domain bias stems from the discrepancies between synthetic and authentic distortion properties. This work introduces a unified learning framework to address these distortion biases. We integrate distortion perception and restoration modules to address intensity bias. The restoration module uses a combined image-level and feature-level denoising method to restore distorted images, where easily restorable minor distortions serve as references for mildly distorted images, and severe distortions benefit directly from distortion perception. Finally, a distortion intensity matrix is calculated via intensity-aware cross-attention to adaptively handle intensity bias. To tackle domain bias, we introduce a distortion domain recognition task, leveraging the inherent differences between synthetic and authentic distortions for adaptive quality score weighting. Experimental results show that our proposed method achieves state-of-the-art performance on a multitude of synthetic and authentic IQA benchmark datasets. The code and models will be available.
Towards Distortion-Debiased Blind Image Quality Assessment
[ "Lize Zhou", "Xiaoqi Wang", "Jian Xiong", "Xianzhong Long", "Hao Gao" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=6iqKhyq1N4
@inproceedings{ xie2024brainram, title={Brain{RAM}: Cross-Modality Retrieval-Augmented Image Reconstruction from Human Brain Activity}, author={Dian Xie and Peiang Zhao and Jiarui Zhang and Kangqi Wei and Xiaobao Ni and Jiong Xia}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=6iqKhyq1N4} }
Reconstructing visual stimuli from brain activities is crucial for deciphering the underlying mechanism of the human visual system. While recent studies have achieved notable results by leveraging deep generative models, challenges persist due to the lack of large-scale datasets and the inherent noise from non-invasive measurement methods. In this study, we draw inspiration from the mechanism of human memory and propose BrainRAM, a novel two-stage dual-guided framework for visual stimuli reconstruction. BrainRAM incorporates a Retrieval-Augmented Module (RAM) and diffusion prior to enhance the quality of reconstructed images from the brain. Specifically, in stage I, we transform fMRI voxels into the latent space of image and text embeddings via diffusion priors, obtaining preliminary estimates of the visual stimuli's semantics and structure. In stage II, based on previous estimates, we retrieve data from the LAION-2B-en dataset and employ the proposed RAM to refine them, yielding high-quality reconstruction results. Extensive experiments demonstrate that our BrainRAM outperforms current state-of-the-art methods both qualitatively and quantitatively, providing a new perspective for visual stimuli reconstruction.
BrainRAM: Cross-Modality Retrieval-Augmented Image Reconstruction from Human Brain Activity
[ "Dian Xie", "Peiang Zhao", "Jiarui Zhang", "Kangqi Wei", "Xiaobao Ni", "Jiong Xia" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=6bW35k0JhX
@inproceedings{ wang2024gsgnesf, title={{GS}\${\textasciicircum}2\$-{GN}e{SF}: Geometry-Semantics Synergy for Generalizable Neural Semantic Fields}, author={Chengshun Wang and Na Zhao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=6bW35k0JhX} }
The remarkable success of neural radiance fields in low-level vision tasks such as novel view synthesis has motivated its extension to high-level semantic understanding, giving rise to the concept of the neural semantic field (NeSF). NeSF aims to simultaneously synthesize novel view images and associated semantic segmentation maps. Generalizable NeSF, in particular, is an appealing direction as it can generalize to unseen scenes for synthesizing images and semantic maps for novel views, thereby avoiding the need for tedious per-scene optimization. However, existing approaches to generalizable NeSF fall short in fully exploiting the geometric and semantic features as well as their mutual interactions, resulting in suboptimal performance in both novel-view image synthesis and semantic segmentation. To address this limitation, we propose Geometry-Semantics Synergy for Generalized Neural Semantic Fields (GS$^2$-GNeSF), a novel approach aimed at improving the performance of generalizable NeSF through the comprehensive construction and synergistic interaction of geometric and semantic features. In GS$^2$-GNeSF, we introduce a robust geometric prior generator to generate the cost volumes and depth prior, which aid in constructing geometric features and facilitating geometry-aware sampling. Leveraging the depth prior, we additionally construct a global semantic context for the target view. This context provides two types of compensation information to enhance geometry and semantic features, achieved through boundary detection and semantic segmentation, respectively. Lastly, we present an efficient dual-directional interactive attention mechanism to foster deep interactions between the enhanced geometric and semantic features. Experiments conducted on both synthetic and real datasets demonstrate that our GS$^2$-GNeSF outperforms existing methods in both novel view and semantic map synthesis, highlighting its effectiveness in generalizing neural semantic fields for unseen scenes.
GS^2-GNeSF: Geometry-Semantics Synergy for Generalizable Neural Semantic Fields
[ "Chengshun Wang", "Na Zhao" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=6Tyzw3IaKw
@inproceedings{ du2024fast, title={Fast and Scalable Incomplete Multi-View Clustering with Duality Optimal Graph Filtering}, author={Liang Du and Yukai Shi and Yan Chen and Peng Zhou and Yuhua Qian}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=6Tyzw3IaKw} }
Incomplete Multi-View Clustering (IMVC) is crucial for multimedia data analysis. While graph learning-based IMVC methods have shown promise, they still have limitations. The prevalent first-order affinity graph often misclassifies out-of-neighborhood intra-cluster and in-neighborhood inter-cluster samples, a problem worsened by data incompleteness. These inaccuracies, combined with high computational demands, restrict their suitability for large-scale IMVC tasks. To address these issues, we propose a novel Fast and Scalable IMVC method with duality Optimal graph Filtering (FSIMVC-OF). Specifically, we refine the clustering-friendly structure of the bipartite graph by learning an optimal filter within a consensus clustering framework. Instead of learning a sample-side filter, we optimize an anchor-side graph filter and apply it to the anchor side, ensuring computational efficiency with linear complexity, supported by the provable equivalence between these two types of graph filters. We present an alternating optimization algorithm with linear complexity. Extensive experimental analysis demonstrates the superior performance of FSIMVC-OF over current IMVC methods. The code is released at https://github.com/sroytik/FSIMVC-OF.
Fast and Scalable Incomplete Multi-View Clustering with Duality Optimal Graph Filtering
[ "Liang Du", "Yukai Shi", "Yan Chen", "Peng Zhou", "Yuhua Qian" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=6PSVsL2kYi
@inproceedings{ sun2024mmldm, title={{MM}-{LDM}: Multi-Modal Latent Diffusion Model for Sounding Video Generation}, author={Mingzhen Sun and Weining Wang and Yanyuan Qiao and Jiahui Sun and Zihan Qin and Longteng Guo and Xinxin Zhu and Jing Liu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=6PSVsL2kYi} }
Sounding Video Generation (SVG) is an audio-video joint generation task challenged by high-dimensional signal spaces, distinct data formats, and different patterns of content information. To address these issues, we introduce a novel multi-modal latent diffusion model (MM-LDM) for the SVG task. We first unify the representation of audio and video data by converting them into a single or a couple of images. Then, we introduce a hierarchical multi-modal autoencoder that constructs a low-level perceptual latent space for each modality and a shared high-level semantic feature space. The former space is perceptually equivalent to the raw signal space of each modality but drastically reduces signal dimensions. The latter space serves to bridge the information gap between modalities and provides more insightful cross-modal guidance. Our proposed method achieves new state-of-the-art results with significant quality and efficiency gains. Specifically, our method achieves a comprehensive improvement on all evaluation metrics and a faster training and sampling speed on Landscape and AIST++ datasets. Moreover, we explore its performance on open-domain sounding video generation, long sounding video generation, audio continuation, video continuation, and conditional single-modal generation tasks for a comprehensive evaluation, where our MM-LDM demonstrates exciting adaptability and generalization ability.
MM-LDM: Multi-Modal Latent Diffusion Model for Sounding Video Generation
[ "Mingzhen Sun", "Weining Wang", "Yanyuan Qiao", "Jiahui Sun", "Zihan Qin", "Longteng Guo", "Xinxin Zhu", "Jing Liu" ]
Conference
poster
2410.01594
[ "https://github.com/iva-mzsun/mm-ldm" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=6MMrYKMrGL
@inproceedings{ he2024litegfm, title={LiteGfm: A Lightweight Self-supervised Monocular Depth Estimation Framework for Artifacts Reduction via Guided Image Filtering}, author={Zhilin He and Yawei Zhang and Jingchang Mu and Xiaoyue Gu and Tianhao Gu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=6MMrYKMrGL} }
To address two significant challenges in monocular depth estimation with a lightweight network, namely preserving detail information and reducing artifacts in the predicted depth maps, this paper proposes a self-supervised monocular depth estimation framework called LiteGfm. It contains a DepthNet with an Anti-Artifact Guided (AAG) module and a PoseNet. In the AAG module, a Guided Image Filtering with cross-detail masking is first designed to filter the input features of the decoder to preserve comprehensive detail information. Second, a filter kernel generator is proposed to decompose the Sobel operator along the vertical and horizontal axes to achieve cross-detail masking, which better captures structure and edge features to minimize artifacts. Furthermore, a boundary-aware loss between the reconstructed and input images is presented to preserve high-frequency details and decrease artifacts. Extensive experimental results demonstrate that LiteGfm, with fewer than 1.9M parameters, achieves better performance than state-of-the-art methods.
LiteGfm: A Lightweight Self-supervised Monocular Depth Estimation Framework for Artifacts Reduction via Guided Image Filtering
[ "Zhilin He", "Yawei Zhang", "Jingchang Mu", "Xiaoyue Gu", "Tianhao Gu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=6LXNDcWMlL
@inproceedings{ wang2024gptvideo, title={{GPT}4Video: A Unified Multimodal Large Language Model for Instruction-Followed Understanding and Safety-Aware Generation}, author={Zhanyu Wang and Longyue Wang and Zhen Zhao and Minghao Wu and Chenyang Lyu and Huayang Li and Deng Cai and Luping Zhou and Shuming Shi and Zhaopeng Tu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=6LXNDcWMlL} }
Recent advances in Multimodal Large Language Models (MLLMs) have constituted a significant leap forward in the field, particularly in the processing of videos, which encompasses inherent challenges such as spatiotemporal relationships. However, existing MLLMs are predominantly focused on the comprehension of video inputs, with limited capabilities in generating video content. In this paper, we present GPT4Video, a unified framework that seamlessly and lightly integrates with LLMs, visual feature extractors, and stable diffusion generative models for cohesive video understanding and generation. Moreover, we propose a text-only finetuning approach to equip models for instruction-following and safeguarding in multimodal conversations without requiring costly annotated video-based instructions. Additionally, we construct multi-turn and caption-interleaved datasets for finetuning and benchmarking MLLMs, which serve as solid resources for advancing this field. Through quantitative and qualitative assessments, GPT4Video demonstrates the following advantages: 1) The framework incorporates video generation ability without adding extra training parameters, ensuring seamless compatibility with various video generators. 2) The model achieves superior performances across a variety of benchmarks. For instance, it outperforms Valley by 11.8% on video question answering, and surpasses NExt-GPT by 2.3% on text-to-video generation. 3) As safety pioneers in open-source MLLMs, we developed finetuning and evaluation datasets, securing an F1 score exceeding 80% in blocking harmful content during understanding and generating videos. In general, GPT4Video shows potential to function as a real-life assistant, marked by its effectiveness, adaptability, and safety. We will open-source our code, data, and models.
GPT4Video: A Unified Multimodal Large Language Model for Instruction-Followed Understanding and Safety-Aware Generation
[ "Zhanyu Wang", "Longyue Wang", "Zhen Zhao", "Minghao Wu", "Chenyang Lyu", "Huayang Li", "Deng Cai", "Luping Zhou", "Shuming Shi", "Zhaopeng Tu" ]
Conference
oral
2311.16511
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=6HT4jUkSRg
@inproceedings{ yang2024generating, title={Generating Prompts in Latent Space for Rehearsal-free Continual Learning}, author={Chengyi Yang and Wentao Liu and Shisong Chen and Jiayin Qi and Aimin Zhou}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=6HT4jUkSRg} }
Continual learning has emerged as a framework that trains a model on a sequence of tasks without forgetting previously learned knowledge, and it has been applied in multiple multimodal scenarios. Recently, prompt-based continual learning has achieved excellent domain adaptability and knowledge transfer through prompt generation. However, existing methods mainly focus on designing the architecture of a generator, neglecting the importance of providing effective guidance for training the generator. To address this issue, we propose Generating Prompts in Latent Space (GPLS), which considers prompts as latent variables to account for the uncertainty of prompt generation and aligns with the fact that prompts are inserted into the hidden layer outputs and exert an implicit influence on classification. GPLS adopts a trainable encoder to encode task and feature information into prompts with the reparameterization technique, and provides refined and targeted guidance for the training process through an evidence lower bound (ELBO) related to the Mahalanobis distance. Extensive experiments demonstrate that GPLS achieves state-of-the-art performance on various benchmarks. Our code is available at https://github.com/Hifipsysta/GPLS.
Generating Prompts in Latent Space for Rehearsal-free Continual Learning
[ "Chengyi Yang", "Wentao Liu", "Shisong Chen", "Jiayin Qi", "Aimin Zhou" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=6GLwA6jR3N
@inproceedings{ ding2024improving, title={Improving Open-World Classification with Disentangled Foreground and Background Features}, author={Choubo Ding and Guansong Pang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=6GLwA6jR3N} }
Detecting out-of-distribution (OOD) inputs is a principal task for ensuring the safety of deploying deep-neural-network classifiers in open-world scenarios. OOD samples can be drawn from arbitrary distributions and exhibit deviations from in-distribution (ID) data in various dimensions, such as foreground features (e.g., objects in CIFAR100 images vs. those in CIFAR10 images) and background features (e.g., textural images vs. objects in CIFAR10). Existing methods can confound foreground and background features in training, failing to utilize the background features for OOD detection. This paper considers the importance of feature disentanglement in open-world classification and proposes the simultaneous exploitation of both foreground and background features to support the detection of OOD inputs in open-world classification. To this end, we propose a novel framework that first disentangles foreground and background features from ID training samples via a dense prediction approach, and then learns a new classifier that can evaluate the OOD scores of test images from both foreground and background features. It is a generic framework that allows for a seamless combination with various existing OOD detection methods. Extensive experiments show that our approach 1) can substantially enhance the performance of four different state-of-the-art (SotA) OOD detection methods on multiple widely-used OOD datasets with diverse background features, and 2) achieves new SotA performance on these benchmarks.
Improving Open-World Classification with Disentangled Foreground and Background Features
[ "Choubo Ding", "Guansong Pang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=6F5SQCoNG6
@inproceedings{ zhang2024a, title={A Descriptive Basketball Highlight Dataset for Automatic Commentary Generation}, author={Benhui Zhang and Junyu Gao and Yuan Yuan}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=6F5SQCoNG6} }
The emergence of video captioning makes it possible to automatically generate natural language descriptions for a given video. However, generating detailed video descriptions that incorporate domain-specific information remains an unsolved challenge, holding significant research and application value, particularly in domains such as sports commentary generation. Moreover, sports event commentary goes beyond being a mere game report; it involves entertaining, metaphorical, and emotional descriptions. To promote the field of automatic sports commentary generation, in this paper, we introduce a novel dataset, the Basketball Highlight Commentary (BH-Commentary), comprising approximately 4K basketball highlight videos with ground-truth commentaries from professional commentators. In addition, we propose an end-to-end framework as a benchmark for the basketball highlight commentary generation task, in which a lightweight and effective prompt strategy is designed to enhance alignment fusion among visual and textual features. Extensive experiments on the BH-Commentary dataset demonstrate the validity of the dataset and the effectiveness of the proposed benchmark for sports highlight commentary generation. (The dataset is available at https://anonymous.4open.science/r/dataset-DC8E)
A Descriptive Basketball Highlight Dataset for Automatic Commentary Generation
[ "Benhui Zhang", "Junyu Gao", "Yuan Yuan" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=6Ekv9Egl3j
@inproceedings{ pan2024reconstructing, title={Reconstructing, Understanding, and Analyzing Relief Type Cultural Heritage from a Single Old Photo}, author={Jiao PAN and Liang Li and Hiroshi Yamaguchi and Kyoko Hasegawa and Fadjar Ibnu Thufail and Brahmantara and Xiaojuan Ban and Satoshi Tanaka}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=6Ekv9Egl3j} }
Relief-type cultural heritage objects are commonly found at historical sites but often manifest varying degrees of damage and deterioration. The traditional process of reconstructing these reliefs is laborious and requires extensive manual intervention and specialized archaeological knowledge. By utilizing a single old photo containing pre-damage information of a given relief, monocular depth estimation can be used to reconstruct 3D digital models. However, extracting depth variations along the edges is challenging in relief scenarios due to the heavy compression of the depth values, resulting in low-curvature edges. This paper proposes an innovative solution that leverages a multi-task neural network to enhance the depth estimation task by integrating the edge detection and semantic segmentation tasks. We redefine edge detection of relief data as a multi-class classification task rather than a typical binary classification task. In this paper, an edge matching module that performs this novel task is proposed to refine depth estimations specifically for edge regions. The proposed approach achieves better depth estimation results with finer details along the edge region. Additionally, the semantic and edge outputs provide a comprehensive reference for multi-modal understanding and analysis. This paper not only advances computer vision tasks but also provides effective technical support for the protection of relief-type cultural heritage objects.
Reconstructing, Understanding, and Analyzing Relief Type Cultural Heritage from a Single Old Photo
[ "Jiao PAN", "Liang Li", "Hiroshi Yamaguchi", "Kyoko Hasegawa", "Fadjar Ibnu Thufail", "Brahmantara", "Xiaojuan Ban", "Satoshi Tanaka" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=6A1wNvHhdg
@inproceedings{ lu2024leveraging, title={Leveraging {RGB}-Pressure for Whole-body Human-to-Humanoid Motion Imitation}, author={Yi Lu and Shenghao Ren and Qiu Shen and Xun Cao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=6A1wNvHhdg} }
Whole-body motion imitation has gained wide attention in recent years as it can enhance the locomotive capabilities of humanoid robots. In this task, non-intrusive human motion capture with RGB cameras is commonly used for its low cost, efficiency, portability and user-friendliness. However, RGB-based methods always face the problem of depth ambiguity, leading to inaccurate and unstable imitation. Accordingly, we propose to introduce pressure sensors into the non-intrusive humanoid motion imitation system for two considerations: first, pressure can be used to estimate the contact relationship and interaction force between the human and the ground, which play a key role in balancing and stabilizing motion; second, pressure can be measured in an almost non-intrusive manner, which preserves the experience of the human demonstrator. In this paper, we establish an RGB-Pressure (RGB-P) based humanoid imitation system, achieving accurate and stable end-to-end mapping from human body models to robot control parameters. Specifically, we use an RGB camera to capture human posture and pressure insoles to measure the underfoot pressure during the movements of the human demonstrator. Then, a constraint relationship between pressure and pose is studied to refine the estimated pose according to the support modes and balance mechanism, thereby enhancing consistency between human and robot motions. Experimental results demonstrate that fusing RGB and pressure can enhance overall robot motion execution performance by improving stability while maintaining imitation similarity.
Leveraging RGB-Pressure for Whole-body Human-to-Humanoid Motion Imitation
[ "Yi Lu", "Shenghao Ren", "Qiu Shen", "Xun Cao" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=68HSwzPBHx
@inproceedings{ zhang2024vocapter, title={Vo{CAPTER}: Voting-based Pose Tracking for Category-level Articulated Object via Inter-frame Priors}, author={Li Zhang and Zean Han and Yan Zhong and Qiaojun Yu and Xingyu Wu and xue Wang and RujingWang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=68HSwzPBHx} }
Articulated objects are common in our daily life. However, current category-level articulation pose works mostly focus on predicting 9D poses on statistical point cloud observations. In this paper, we deal with the problem of category-level online robust 9D pose tracking of articulated objects, where we propose VoCAPTER, a novel 3D Voting-based Category-level Articulated object Pose TrackER. Our VoCAPTER efficiently updates poses between adjacent frames by utilizing partial observations from the current frame and the estimated per-part 9D poses from the previous frame. Specifically, by incorporating prior knowledge of continuous motion relationships between frames, we begin by canonicalizing the input point cloud, casting the pose tracking task as an inter-frame pose increment estimation challenge. Subsequently, to obtain a robust pose-tracking algorithm, our main idea is to leverage SE(3)-invariant features during motion. This is achieved through a voting-based articulation tracking algorithm, which identifies keyframes as reference states for accurate pose updating throughout the entire video sequence. We evaluate the performance of VoCAPTER in the synthetic dataset and real-world scenarios, which demonstrates VoCAPTER's generalization ability to diverse and complicated scenes. Through these experiments, we provide evidence of VoCAPTER's superiority and robustness in multi-frame pose tracking of articulated objects. We believe that this work can facilitate the progress of various fields, including robotics, embodied intelligence, and augmented reality. All the codes will be made publicly available.
VoCAPTER: Voting-based Pose Tracking for Category-level Articulated Object via Inter-frame Priors
[ "Li Zhang", "Zean Han", "Yan Zhong", "Qiaojun Yu", "Xingyu Wu", "xue Wang", "RujingWang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=65p0B1QQi9
@inproceedings{ yu2024geoformer, title={GeoFormer: Learning Point Cloud Completion with Tri-Plane Integrated Transformer}, author={Jinpeng Yu and Binbin Huang and Yuxuan Zhang and Huaxia Li and Xu Tang and Shenghua Gao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=65p0B1QQi9} }
Point cloud completion aims to recover accurate global geometry and preserve fine-grained local details from partial point clouds. Conventional methods typically predict unseen points directly from 3D point cloud coordinates or use self-projected multi-view depth maps to ease this task. However, these gray-scale depth maps cannot reach multi-view consistency, consequently restricting the performance. In this paper, we introduce a GeoFormer that simultaneously enhances the global geometric structure of the points and improves the local details. Specifically, we design a CCM Feature Enhanced Point Generator to integrate image features from multi-view consistent canonical coordinate maps (CCMs) and align them with pure point features, thereby enhancing the global geometry feature. Additionally, we employ the Multi-scale Geometry-aware Upsampler module to progressively enhance local details. This is achieved through cross attention between the multi-scale features extracted from the partial input and the features derived from previously estimated points. Extensive experiments on the PCN, ShapeNet-55/34, and KITTI benchmarks demonstrate that our GeoFormer outperforms recent methods, achieving the state-of-the-art performance. The code is ready and will be released soon.
GeoFormer: Learning Point Cloud Completion with Tri-Plane Integrated Transformer
[ "Jinpeng Yu", "Binbin Huang", "Yuxuan Zhang", "Huaxia Li", "Xu Tang", "Shenghua Gao" ]
Conference
poster
2408.06596
[ "https://github.com/jinpeng-yu/geoformer" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=64qWHegH73
@inproceedings{ wu2024jointmotion, title={Joint-Motion Mutual Learning for Pose Estimation in Video}, author={Sifan Wu and Haipeng Chen and Yifang Yin and Sihao Hu and Runyang Feng and Yingying Jiao and Ziqi Yang and Zhenguang Liu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=64qWHegH73} }
Human pose estimation in videos has long been a compelling yet challenging task within the realm of computer vision. Nevertheless, this task remains difficult because of the complex video scenes, such as video defocus and self-occlusion. Recent methods strive to integrate multi-frame visual features generated by a backbone network for pose estimation. However, they often ignore the useful joint information encoded in the initial heatmap, which is a by-product of the backbone generation. Comparatively, methods that attempt to refine the initial heatmap fail to consider any spatio-temporal motion features. As a result, the performance of existing methods for pose estimation falls short due to the lack of ability to leverage both local joint (heatmap) information and global motion (feature) dynamics. To address this problem, we propose a novel joint-motion mutual learning framework for pose estimation, which effectively concentrates on both local joint dependency and global pixel-level motion dynamics. Specifically, we introduce a context-aware joint learner that adaptively leverages initial heatmaps and motion flows to retrieve robust local joint features. Given that local joint features and global motion flows are complementary, we further propose a progressive joint-motion mutual learning that synergistically exchanges information and interactively learns between joint features and motion flows to improve the capability of the model. More importantly, to capture more diverse joint and motion cues, we theoretically analyze and propose an information orthogonality objective to avoid learning redundant information from multi-cues. Empirical experiments show our method outperforms prior arts on three challenging benchmarks.
Joint-Motion Mutual Learning for Pose Estimation in Video
[ "Sifan Wu", "Haipeng Chen", "Yifang Yin", "Sihao Hu", "Runyang Feng", "Yingying Jiao", "Ziqi Yang", "Zhenguang Liu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=629Zc6wBoM
@inproceedings{ yin2024fsvfg, title={{FSVFG}: Towards Immersive Full-Scene Volumetric Video Streaming with Adaptive Feature Grid}, author={Daheng Yin and Jianxin Shi and Miao Zhang and Zhaowu Huang and Jiangchuan Liu and Fang Dong}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=629Zc6wBoM} }
Full-scene volumetric video streaming, an emerging technology providing immersive viewing experiences via the Internet, is receiving increasing attention from both the academic and industrial communities. Considering the vast amount of full-scene volumetric data to be streamed and the limited bandwidth on the internet, achieving adaptive full-scene volumetric video streaming over the internet presents a significant challenge. Inspired by the advantages offered by neural fields, especially the feature grid method, we propose FSVFG, a novel full-scene volumetric video streaming system that integrates feature grids as the representation of volumetric content. FSVFG employs an incremental training approach for feature grids and stores the features and residuals between adjacent grids as frames. To support adaptive streaming, we delve into the data structure and rendering processes of feature grids and propose bandwidth adaptation mechanisms. The mechanisms involve coarse ray-marching for the selection of features and residuals to be sent, and achieve variable bitrate streaming by Level-of-Detail (LoD) and residual filtering. Based on these mechanisms, FSVFG achieves adaptive streaming by adaptively balancing the transmission of features and residuals according to the available bandwidth. Our preliminary results demonstrate the effectiveness of FSVFG, showing its ability to improve visual quality and reduce the bandwidth requirements of full-scene volumetric video streaming.
FSVFG: Towards Immersive Full-Scene Volumetric Video Streaming with Adaptive Feature Grid
[ "Daheng Yin", "Jianxin Shi", "Miao Zhang", "Zhaowu Huang", "Jiangchuan Liu", "Fang Dong" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=60Uyf4UumM
@inproceedings{ wang2024alignconcept, title={Align2Concept: Language Guided Interpretable Image Recognition by Visual Prototype and Textual Concept Alignment}, author={Jiaqi Wang and Pichao WANG and Yi Feng and Huafeng Liu and Chang Gao and Liping Jing}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=60Uyf4UumM} }
Most work on interpretable neural networks strives to learn semantic concepts merely from single-modal information such as images. However, humans usually learn semantic concepts from multiple modalities, and semantics is encoded by the brain from fused multi-modal information. Inspired by cognitive science and vision-language learning, we propose a Prototype-Concept Alignment Network (ProCoNet) for learning visual prototypes under the guidance of textual concepts. In the ProCoNet, we design a visual encoder to decompose the input image into regional features of prototypes, while also developing a prompt generation strategy that incorporates in-context learning to prompt large language models to generate textual concepts. To align visual prototypes with textual concepts, we leverage the multimodal space provided by the pre-trained CLIP as a bridge. Specifically, the regional features from the vision space and the cropped regions of prototypes encoded by CLIP reside on different but semantically highly correlated manifolds, i.e., they follow a multi-manifold distribution. We transform the multi-manifold distribution alignment problem into optimizing the projection matrix via the Cayley transform on the Stiefel manifold. Through the learned projection matrix, visual prototypes can be projected into the multimodal space to align with semantically similar textual concept features encoded by CLIP. We conducted two case studies on the CUB-200-2011 and Oxford Flower datasets. Our experiments show that ProCoNet provides higher accuracy and better interpretability compared to single-modality interpretable models. Furthermore, ProCoNet offers a level of interpretability not previously available in other interpretable methods.
Align2Concept: Language Guided Interpretable Image Recognition by Visual Prototype and Textual Concept Alignment
[ "Jiaqi Wang", "Pichao WANG", "Yi Feng", "Huafeng Liu", "Chang Gao", "Liping Jing" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=60QvmIlnhT
@inproceedings{ xiao2024adversarial, title={Adversarial Experts Model for Black-box Domain Adaptation}, author={Siying Xiao and Mao Ye and Qichen He and Shuaifeng Li and Song Tang and Xiatian Zhu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=60QvmIlnhT} }
Black-box domain adaptation treats the source domain model as a black box. During the transfer process, the only available information about the target domain is the noisy labels output by the black-box model. This poses significant challenges for domain adaptation. Conventional approaches typically tackle the black-box noisy label problem from two aspects: self-knowledge distillation and pseudo-label denoising, both achieving limited performance due to limited knowledge information. To mitigate this issue, we explore the potential of off-the-shelf vision-language (ViL) multimodal models with rich semantic information for black-box domain adaptation by introducing an Adversarial Experts Model (AEM). Specifically, our target domain model is designed as one feature extractor and two classifiers, trained over two stages: In the knowledge transferring stage, with a shared feature extractor, the black-box source model and the ViL model act as two distinct experts for joint knowledge contribution, guiding the learning of one classifier each. While contributing their respective knowledge, the experts are also updated due to their own limitations and biases. In the adversarial alignment stage, to further distill expert knowledge to the target domain model, adversarial learning is conducted between the feature extractor and the two classifiers. A new consistency-max loss function is proposed to measure the consistency between the two classifiers and further improve classifier prediction certainty. Extensive experiments on multiple datasets demonstrate the effectiveness of our approach. Our source code will be released.
Adversarial Experts Model for Black-box Domain Adaptation
[ "Siying Xiao", "Mao Ye", "Qichen He", "Shuaifeng Li", "Song Tang", "Xiatian Zhu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=603JjFZ3u8
@inproceedings{ wang2024progressive, title={Progressive Local and Non-Local Interactive Networks with Deeply Discriminative Training for Image Deraining}, author={Cong Wang and Liyan Wang and Jie Mu and Chengjin Yu and Wei Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=603JjFZ3u8} }
In this paper, we develop a progressive local and non-local interactive network with multi-scale cross-content deeply discriminative learning to solve image deraining. The proposed model contains two key techniques: 1) Progressive Local and Non-Local Interactive Network (PLNLIN) and 2) Multi-Scale Cross-Content Deeply Discriminative Learning (MCDDL). The PLNLIN is a U-shaped encoder-decoder network, where the proposed new Progressive Local and Non-Local Interactive Module (PLNLIM) is the basic unit in the encoder-decoder framework. The PLNLIM fully explores local and non-local learning through convolution and Transformer operations respectively, and the local and non-local content is further interactively learned in a progressive manner. The proposed MCDDL not only discriminates the output of the generator but also receives the deep content from the generator to distinguish real and fake features at each side layer of the discriminator in a multi-scale manner. We show that the proposed MCDDL has fast and stable convergence properties that are lacking in existing discriminative learning schemes. Extensive experiments demonstrate that the proposed method outperforms state-of-the-art methods on five public synthetic datasets and one real-world dataset. The source codes will be made available at \url{https://github.com/supersupercong/PLNLIN-MCDDL}.
Progressive Local and Non-Local Interactive Networks with Deeply Discriminative Training for Image Deraining
[ "Cong Wang", "Liyan Wang", "Jie Mu", "Chengjin Yu", "Wei Wang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=5zAXR6HG4d
@inproceedings{ wei2024mbc, title={{MB}2C: Multimodal Bidirectional Cycle Consistency for Learning Robust Visual Neural Representations}, author={Yayun Wei and Lei Cao and Hao Li and Yilin Dong}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=5zAXR6HG4d} }
Decoding human visual representations from brain activity data is a challenging but arguably essential task for understanding the real world and the human visual system. However, decoding semantically similar visual representations from brain recordings is difficult, especially for electroencephalography (EEG), which has excellent temporal resolution but limited spatial precision. Prevailing methods mainly focus on matching brain activity data with corresponding stimuli-responses using contrastive learning. They rely on massive and high-quality paired data and omit semantically aligned modalities distributed in distinct regions of the latent space. This paper proposes a novel Multimodal Bidirectional Cycle Consistency (MB2C) framework for learning robust visual neural representations. Specifically, we utilize dual-GAN to generate modality-related features and inversely translate them back to the corresponding semantic latent space to close the modality gap and guarantee that embeddings from different modalities with similar semantics are in the same region of the representation space. We perform zero-shot tasks on the ThingsEEG dataset. Additionally, we conduct EEG classification and image reconstruction on both the ThingsEEG and EEGCVPR40 datasets, achieving state-of-the-art performance compared to other baselines.
MB2C: Multimodal Bidirectional Cycle Consistency for Learning Robust Visual Neural Representations
[ "Yayun Wei", "Lei Cao", "Hao Li", "Yilin Dong" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=5yGVBCG6nY
@inproceedings{ wang2024bilateral, title={Bilateral Adaptive Cross-Modal Fusion Prompt Learning for {CLIP}}, author={Qiang Wang and Ke Yan and Shouhong Ding}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=5yGVBCG6nY} }
In the realm of CLIP adaptation through prompt learning, it is important to emphasize the pivotal role that the proper alignment of visual and textual representations plays when adapting the CLIP to downstream tasks. We propose that the proper alignment for downstream tasks is determined by the $\textbf{flexibility}$ of the interaction between cross-modal information, which compensates for the absence of contrastive loss during the adaptation process. However, the current prompt learning methods, such as isolated modifications to the visual or language branches of CLIP or the employment of uni-directional cross-modal fusion, are not sufficient to explore the full potential of the mutual interaction between visual and textual modalities. To overcome this limitation, we propose a new paradigm for the CLIP prompt learning community, named $\textbf{B}$i$\textbf{l}$ateral Adaptive Cr$\textbf{o}$ss-Modal Fusi$\textbf{o}$n Pro$\textbf{m}$pt Learning~($\textit{Bloom}$), which includes two enhancements. First, we propose using projection functions for bi-directional modality transformation and fusion functions to encourage the mutual interaction between corresponding layers within both the image and text encoders. Second, we propose an adaptive manner that automatically searches the optimal combination of cross-modal information at each layer. These two improvements ensure a more efficient and flexible integration of the two modalities, thereby achieving proper alignment for specific downstream tasks. We put our method to the test in terms of base-to-novel, cross-dataset, and cross-domain evaluations on 15 image classification datasets. The results demonstrate a significant performance enhancement achieved by $\textit{Bloom}$.
Bilateral Adaptive Cross-Modal Fusion Prompt Learning for CLIP
[ "Qiang Wang", "Ke Yan", "Shouhong Ding" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=5wLf2MWReq
@inproceedings{ wang2024genudc, title={Gen{UDC}: High Quality 3D Mesh Generation With Unsigned Dual Contouring Representation}, author={Ruowei Wang and Jiaqi Li and Dan Zeng and Xueqi Ma and Xu Zixiang and Jianwei Zhang and Qijun Zhao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=5wLf2MWReq} }
Generating high-quality meshes with complex structures and realistic surfaces is the primary goal of 3D generative models. Existing methods typically employ sequence data or deformable tetrahedral grids for mesh generation. However, sequence-based methods have difficulty producing complex structures with many faces due to memory limits. The deformable tetrahedral grid-based method MeshDiffusion fails to recover realistic surfaces due to the inherent ambiguity in deformable grids. We propose the novel GenUDC framework to address these challenges, leveraging the Unsigned Dual Contouring (UDC) as a better mesh representation. UDC discretizes a mesh in a regular grid and divides it into the face and vertex parts, recovering both complex structures and fine details. As a result, the one-to-one mapping between UDC and mesh resolves the ambiguity problem. In addition, GenUDC adopts a two-stage, coarse-to-fine generative process for 3D mesh generation. It first generates the face part as a rough shape and then the vertex part to craft a detailed shape. Extensive evaluations demonstrate the superiority of UDC as a mesh representation and the favorable performance of GenUDC in mesh generation. The code and trained models will be released upon publication.
GenUDC: High Quality 3D Mesh Generation With Unsigned Dual Contouring Representation
[ "Ruowei Wang", "Jiaqi Li", "Dan Zeng", "Xueqi Ma", "Xu Zixiang", "Jianwei Zhang", "Qijun Zhao" ]
Conference
poster
2410.17802
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=5snfb4ip0a
@inproceedings{ zhu2024natural, title={Natural Language Induced Adversarial Images}, author={Xiaopei Zhu and Peiyang Xu and Guanning Zeng and Yinpeng Dong and Xiaolin Hu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=5snfb4ip0a} }
Research of adversarial attacks is important for AI security because it shows the vulnerability of deep learning models and helps to build more robust models. Adversarial attacks on images are most widely studied, which include noise-based attacks, image editing-based attacks, and latent space-based attacks. However, the adversarial examples crafted by these methods often lack sufficient semantic information, making it challenging for humans to understand the failure modes of deep learning models under natural conditions. To address this limitation, we propose a natural language induced adversarial image attack method. The core idea is to leverage a text-to-image model to generate adversarial images given input prompts, which are maliciously constructed to lead to misclassification for a target model. To adopt commercial text-to-image models for synthesizing more natural adversarial images, we propose an adaptive genetic algorithm (GA) for optimizing discrete adversarial prompts without requiring gradients and an adaptive word space reduction method for improving the query efficiency. We further used CLIP to maintain the semantic consistency of the generated images. In our experiments, we found that some high-frequency semantic information such as "foggy'', "humid'', "stretching'', etc. can easily cause classifier errors. Such adversarial semantic information exists not only in generated images, but also in photos captured in the real world. We also found that some adversarial semantic information can be transferred to unknown classification tasks. Furthermore, our attack method can transfer to different text-to-image models (e.g., Midjourney, DALL·E 3, etc.) and image classifiers.
Natural Language Induced Adversarial Images
[ "Xiaopei Zhu", "Peiyang Xu", "Guanning Zeng", "Yinpeng Dong", "Xiaolin Hu" ]
Conference
poster
2410.08620
[ "https://github.com/zxp555/natural-language-induced-adversarial-images" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=5qOWYC2IQj
@inproceedings{ gao2024aigcs, title={{AIGC}s Confuse {AI} Too: Investigating and Explaining Synthetic Image-induced Hallucinations in Large Vision-Language Models}, author={Yifei Gao and Jiaqi Wang and Zhiyu Lin and Jitao Sang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=5qOWYC2IQj} }
The evolution of Artificial Intelligence Generated Contents (AIGCs) is advancing towards higher quality. The growing interactions with AIGCs present a new challenge to the data-driven AI community: While AI-generated contents have played a crucial role in a wide range of AI models, the potential hidden risks they introduce have not been thoroughly examined. Beyond human-oriented forgery detection, AI-generated content poses potential issues for AI models originally designed to process natural data. In this study, we underscore the exacerbated hallucination phenomena in Large Vision-Language Models (LVLMs) caused by AI-synthetic images. Remarkably, our findings shed light on a consistent AIGC hallucination bias: the object hallucinations induced by synthetic images are characterized by a greater quantity and a more uniform position distribution, even though these synthetic images do not manifest unrealistic or additional relevant visual features compared to natural images. Moreover, our investigations on the Q-former and Linear projector reveal that synthetic images may present token deviations after visual projection, thereby amplifying the hallucination bias.
AIGCs Confuse AI Too: Investigating and Explaining Synthetic Image-induced Hallucinations in Large Vision-Language Models
[ "Yifei Gao", "Jiaqi Wang", "Zhiyu Lin", "Jitao Sang" ]
Conference
poster
2403.08542
[ "https://github.com/LucusFigoGao/AIGCs_Confuse_AI_Too" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=5kcnAHZXIC
@inproceedings{ yang2024semantic, title={Semantic Aware Just Noticeable Differences for {VVC} compressed Text Screen Content Images}, author={Kaifang Yang and Xinrong Zhao and Yanchao Gong}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=5kcnAHZXIC} }
With the rapid development of multimedia applications such as online education, remote conferences, and telemedicine, an emerging type of image known as text screen content images (TSCI) has gained widespread utilization. Distinguishing from natural images captured by cameras, TSCI is generally generated or rendered by computers and exhibits significant differences in content characteristics. Notably, TSCI primarily comprises text, a symbol system uniquely defined by humans with specific semantics. As an important carrier for transmitting semantic information, the quality of text in TSCI significantly affects the subjective perception experience of multimedia system users. Just noticeable difference (JND) is a widely studied image quality measure that is theoretically closest to human perception. However, traditional JND (T-JND) tests fail to distinguish text from other image contents, ignoring the significant impact of the semantic readability of text on image quality. This paper focuses for the first time on the impact of text semantics on the quality of TSCI, and JND experiments for TSCI compressed by the state-of-the-art versatile video coding (VVC) standard are explored and discussed. Specifically, a matching TSCI database is first established. Using the database, subjective image observation comparison experiments are further designed and carried out to construct the T-JND as well as the semantic aware JND (S-JND). By comparing the experimental results, crucial conclusions are reached, including the fact that the S-JND provides a more precise description of the quality of TSCI compared to the T-JND. These conclusions have important guiding significance for the subsequent development of efficient JND models suitable for TSCI compressed by VVC.
Semantic Aware Just Noticeable Differences for VVC compressed Text Screen Content Images
[ "Kaifang Yang", "Xinrong Zhao", "Yanchao Gong" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=5ielGTd21u
@inproceedings{ liu2024affinityd, title={Affinity3D: Propagating Instance-Level Semantic Affinity for Zero-Shot Point Cloud Semantic Segmentation}, author={Haizhuang Liu and Junbao Zhuo and Chen Liang and Jiansheng Chen and Huimin Ma}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=5ielGTd21u} }
Zero-shot point cloud semantic segmentation aims to recognize novel classes at the point level. Previous methods mainly transfer excellent zero-shot generalization capabilities from images to point clouds. However, directly transferring knowledge from image to point clouds faces two ambiguous problems. On the one hand, 2D models will generate wrong predictions when the image changes. On the other hand, directly mapping 3D points to 2D pixels by perspective projection fails to consider the visibility of 3D points in camera view. The wrong geometric alignment of 3D points and 2D pixels causes semantic ambiguity. To tackle these two problems, we propose a framework named Affinity3D that intends to empower 3D semantic segmentation models to perceive novel samples. Our framework aggregates instances in 3D and recognizes them in 2D, leveraging the excellent geometric separation in 3D and the zero-shot capabilities of 2D models. Affinity3D involves an affinity module that rectifies the wrong predictions by comparing them with similar instances and a visibility module preventing knowledge transfer from visible 2D pixels to invisible 3D points. Extensive experiments have been conducted on SemanticKITTI datasets. Our framework achieves state-of-the-art performance in two settings.
Affinity3D: Propagating Instance-Level Semantic Affinity for Zero-Shot Point Cloud Semantic Segmentation
[ "Haizhuang Liu", "Junbao Zhuo", "Chen Liang", "Jiansheng Chen", "Huimin Ma" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=5hVk9yGxax
@inproceedings{ wu2024generative, title={Generative Text Steganography with Large Language Model}, author={Jiaxuan Wu and Wu Zhengxian and Xue yiming and Juan Wen and Wanli Peng}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=5hVk9yGxax} }
Recent advances in large language models (LLMs) have blurred the boundary of high-quality text generation between humans and machines, which is favorable for generative text steganography. However, current advanced steganographic mappings are not suitable for LLMs since most users are restricted to accessing only the black-box API or user interface of the LLMs, thereby lacking access to the training vocabulary and its sampling probabilities. In this paper, we explore a black-box generative text steganographic method based on the user interfaces of large language models, which is called LLM-Stega. The main goal of LLM-Stega is to conduct secure covert communication between Alice (sender) and Bob (receiver) using the user interfaces of LLMs. Specifically, we first construct a keyword set and design a new encrypted steganographic mapping to embed secret messages. Furthermore, to guarantee accurate extraction of secret messages and rich semantics of generated stego texts, an optimization mechanism based on rejection sampling is proposed. Comprehensive experiments demonstrate that the proposed LLM-Stega outperforms current state-of-the-art methods.
Generative Text Steganography with Large Language Model
[ "Jiaxuan Wu", "Wu Zhengxian", "Xue yiming", "Juan Wen", "Wanli Peng" ]
Conference
poster
2404.10229
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=5ab60yS8e0
@inproceedings{ li2024tas, title={{TAS}: Personalized Text-guided Audio Spatialization}, author={Zhaojian Li and Bin Zhao and Yuan Yuan}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=5ab60yS8e0} }
Synthesizing binaural audio according to personalized requirements is crucial for building immersive artificial spaces. Previous methods employ visual modalities to guide the spatialization of audio because they can provide spatial information about objects. However, this paradigm is dependent on object visibility and strict audiovisual correspondence, which makes it tough to satisfy personalized requirements. In addition, the visual counterpart to the audio may be crippled or even non-existent, which greatly limits the development of the field. To this end, we advocate exploring a novel task known as Text-guided Audio Spatialization (TAS), in which the goal is to convert mono audio into spatial audio based on text prompts. This approach circumvents harsh audiovisual conditions and allows for more flexible individualization. To facilitate this research, we construct the first TASBench dataset. The dataset provides a dense frame-level description of the spatial location of sounding objects in audio, enabling fine-grained spatial control. Since text prompts contain multiple sounding objects and spatial locations, the core issue of TAS is to establish the mapping relationship between text semantic information and audio objects. To tackle this issue, we design a Semantic-Aware Fusion (SAF) module to capture text-aware audio features and propose a text-guided diffusion model to learn the spatialization of audio, which can generate spatial audio consistent with text prompts. Extensive experiments on TASBench compare the proposed method with several methods from related tasks, demonstrating that our method is a promising way to achieve personalized generation of a spatial sense of audio under text prompts.
TAS: Personalized Text-guided Audio Spatialization
[ "Zhaojian Li", "Bin Zhao", "Yuan Yuan" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=5aXZqp3KII
@inproceedings{ cao2024taskadapter, title={Task-Adapter: Task-specific Adaptation of Image Models for Few-shot Action Recognition}, author={Congqi Cao and Yueran Zhang and Yating Yu and Qinyi Lv and Lingtong Min and Yanning Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=5aXZqp3KII} }
Existing works in few-shot action recognition mostly fine-tune a pre-trained image model and design sophisticated temporal alignment modules at feature level. However, simply fully fine-tuning the pre-trained model could cause overfitting due to the scarcity of video samples. Additionally, we argue that the exploration of task-specific information is insufficient when relying solely on well extracted abstract features. In this work, we propose a simple but effective task-specific adaptation method (Task-Adapter) for few-shot action recognition. By introducing the proposed Task-Adapter into the last several layers of the backbone and keeping the parameters of the original pre-trained model frozen, we mitigate the overfitting problem caused by full fine-tuning and advance the task-specific mechanism into the process of feature extraction. In each Task-Adapter, we reuse the frozen self-attention layer to perform task-specific self-attention across different videos within the given task to capture both distinctive information among classes and shared information within classes, which facilitates task-specific adaptation and enhances subsequent metric measurement between the query feature and support prototypes. Experimental results consistently demonstrate the effectiveness of our proposed Task-Adapter on four standard few-shot action recognition datasets. Especially on temporal challenging SSv2 dataset, our method outperforms the state-of-the-art methods by a large margin.
Task-Adapter: Task-specific Adaptation of Image Models for Few-shot Action Recognition
[ "Congqi Cao", "Yueran Zhang", "Yating Yu", "Qinyi Lv", "Lingtong Min", "Yanning Zhang" ]
Conference
poster
2408.00249
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=5XwylUAmnY
@inproceedings{ li2024deep, title={Deep Incomplete Multi-View Network Semi-Supervised Multi-Label Learning with Unbiased Loss}, author={Quanjiang Li and Tingjin Luo and Mingdie Jiang and Jiahui Liao and Zhangqi Jiang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=5XwylUAmnY} }
Due to the explosive growth in data sources and label categories, multi-view multi-label learning has garnered widespread attention. However, multi-view multi-label data often exhibit incomplete features and few labeled instances alongside a huge number of unlabeled instances, due to the technical limitations of data collection and the high annotation cost of manual labeling in practice. Learning under such simultaneous missingness of view features and labels is crucial but rarely studied, particularly when the labeled samples with full observations are limited. In this paper, we tackle this problem by proposing a novel Deep Incomplete Multi-View Semi-Supervised Multi-Label Learning method (DIMvSML). Specifically, to improve high-level representations of missing features, DIMvSML firstly employs deep graph networks to recover the feature information with structural similarity relations. Meanwhile, we design structure-specific deep feature extractors to obtain discriminative information and preserve the cross-view consistency for the recovered data with an instance-level contrastive loss. Furthermore, to eliminate the bias of the risk estimate that semi-supervised multi-label methods minimise, we design a safe risk estimate framework with an unbiased loss and improve its empirical performance by using pseudo-labels of unlabeled data. Besides, we provide both a theoretical proof of better estimate variance and an intuitive explanation of our debiased framework. Finally, extensive experimental results on public datasets validate the superiority of DIMvSML compared with state-of-the-art methods.
Deep Incomplete Multi-View Network Semi-Supervised Multi-Label Learning with Unbiased Loss
[ "Quanjiang Li", "Tingjin Luo", "Mingdie Jiang", "Jiahui Liao", "Zhangqi Jiang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=5XbgmzaI2a
@inproceedings{ wang2024achieving, title={Achieving Resolution-Agnostic {DNN}-based Image Watermarking: A Novel Perspective of Implicit Neural Representation}, author={Yuchen Wang and Xingyu Zhu and Guanhui Ye and Shiyao Zhang and Xuetao Wei}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=5XbgmzaI2a} }
DNN-based watermarking methods are rapidly developing and delivering impressive performances. Recent advances achieve resolution-agnostic image watermarking by reducing the variant resolution watermarking problem to a fixed resolution watermarking problem. However, such a reduction process can potentially introduce artifacts and low robustness. To address this issue, we propose the first, to the best of our knowledge, Resolution-Agnostic Image WaterMarking (RAIMark) framework by watermarking the implicit neural representation (INR) of image. Unlike previous methods, our method does not rely on the previous reduction process by directly watermarking the continuous signal instead of image pixels, thus achieving resolution-agnostic watermarking. Precisely, given an arbitrary-resolution image, we fit an INR for the target image. As a continuous signal, such an INR can be sampled to obtain images with variant resolutions. Then, we quickly fine-tune the fitted INR to get a watermarked INR conditioned on a binary secret message. A pre-trained watermark decoder extracts the hidden message from any sampled images with arbitrary resolutions. By directly watermarking INR, we achieve resolution-agnostic watermarking with increased robustness. Extensive experiments show that our method outperforms previous methods with significant improvements: averagely improved bit accuracy by 7\%$\sim$29\%. Notably, we observe that previous methods are vulnerable to at least one watermarking attack (e.g. JPEG, crop, resize), while ours are robust against all watermarking attacks.
Achieving Resolution-Agnostic DNN-based Image Watermarking: A Novel Perspective of Implicit Neural Representation
[ "Yuchen Wang", "Xingyu Zhu", "Guanhui Ye", "Shiyao Zhang", "Xuetao Wei" ]
Conference
poster
2405.08340
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=5WdYACFOhI
@inproceedings{ liu2024conditional, title={Conditional Diffusion Model for Open-ended Video Question Answering}, author={Xinyue Liu and Jiahui Wan and Linlin Zong and Bo Xu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=5WdYACFOhI} }
Open-ended VideoQA presents a significant challenge due to the absence of fixed options, requiring the identification of the correct answer from a vast pool of candidate answers. Previous approaches typically utilize a classifier or similarity comparison on fused features to yield predictions directly, lacking coarse-to-fine filtering over numerous candidates. Gradually refining the probability distribution of candidates can achieve more precise predictions. Thus, we propose the DiffAns model, which integrates the diffusion model to handle the open-ended VideoQA task, simulating the gradual process by which humans answer open-ended questions. Specifically, we first diffuse the true answer label into a random distribution (forward process). Then, under the guidance of an answer-aware condition generated from the video and question, the model iteratively denoises to obtain the correct probability distribution (backward process). This equips the model with the capability to progressively refine the random probability distribution of candidates, ultimately predicting the correct answer. We conduct experiments on three challenging open-ended VideoQA datasets, surpassing existing SoTA methods. Extensive experiments further explore and analyse the impact of each module, as well as the design of the diffusion model, demonstrating the effectiveness of DiffAns. Our code will be available.
Conditional Diffusion Model for Open-ended Video Question Answering
[ "Xinyue Liu", "Jiahui Wan", "Linlin Zong", "Bo Xu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=5SelUL07QL
@inproceedings{ gu2024d, title={3D Human Pose Estimation from Multiple Dynamic Views via Single-view Pretraining with Procrustes Alignment}, author={Renshu Gu and Jiajun Zhu and Yixuan Si and Fei Gao and Jiamin Xu and Gang Xu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=5SelUL07QL} }
3D human pose estimation from multiple cameras with unknown calibration has received less attention than it deserves. The few existing data-driven solutions do not fully exploit 3D training data that are available on the market, and typically train from scratch for every novel multi-view scene, which impedes both accuracy and efficiency. We show how to exploit 3D training data to the fullest and associate multiple dynamic views efficiently to achieve high precision on novel scenes using a simple yet effective framework, dubbed \textit{Multiple Dynamic View Pose estimation} (MDVPose). MDVPose utilizes novel-scenario data to finetune a single-view pretrained motion encoder in the multi-view setting, aligns an arbitrary number of views in a unified coordinate system via Procrustes alignment, and imposes multi-view consistency. The proposed method achieves 22.1 mm P-MPJPE or 34.2 mm MPJPE on the challenging in-the-wild Ski-Pose PTZ dataset, which outperforms the state-of-the-art method by 24.8% P-MPJPE (-7.3 mm) and 19.0% MPJPE (-8.0 mm). It also outperforms the state-of-the-art methods by a large margin (-18.2mm P-MPJPE and -28.3mm MPJPE) on the EgoBody dataset. In addition, MDVPose achieves robust performance on the Human3.6M dataset featuring multiple static cameras. Code will be released upon acceptance.
3D Human Pose Estimation from Multiple Dynamic Views via Single-view Pretraining with Procrustes Alignment
[ "Renshu Gu", "Jiajun Zhu", "Yixuan Si", "Fei Gao", "Jiamin Xu", "Gang Xu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=5RPP4QoFKw
@inproceedings{ he2024sniffing, title={Sniffing Threatening Open-World Objects in Autonomous Driving by Open-Vocabulary Models}, author={Yulin He and Siqi Wang and Wei Chen and Tianci Xun and Yusong Tan}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=5RPP4QoFKw} }
Autonomous driving (AD) is a typical application that requires effectively exploiting multimedia information. For AD, it is critical to ensure safety by detecting unknown objects in an open world, driving the demand for open world object detection (OWOD). However, existing OWOD methods treat generic objects beyond known classes in the training set as unknown objects and prioritize recall in evaluation. This encourages excessive false positives and endangers safety of AD. To address this issue, we restrict the definition of unknown objects to threatening objects in AD, and introduce a new evaluation protocol, which is built upon a new metric named U-ARecall, to alleviate biased evaluation caused by neglecting false positives. Under the new evaluation protocol, we re-evaluate existing OWOD methods and discover that they typically perform poorly in AD. Then, we propose a novel OWOD paradigm for AD based on fine-tuning foundational open-vocabulary models (OVMs), as they can exploit rich linguistic and visual prior knowledge for OWOD. Following this new paradigm, we propose a brand-new OWOD solution, which effectively addresses two core challenges of fine-tuning OVMs via two novel techniques: 1) the maintenance of open-world generic knowledge by a dual-branch architecture; 2) the acquisition of scenario-specific knowledge by the visual-oriented contrastive learning scheme. Besides, a dual-branch prediction fusion module is proposed to avoid post-processing and hand-crafted heuristics. Extensive experiments show that our proposed method not only surpasses classic OWOD methods in unknown object detection by a large margin ($\sim$3$\times$ U-ARecall), but also notably outperforms OVMs without fine-tuning in known object detection ($\sim$ 20\% K-mAP). Our codes are available at https://github.com/harrylin-hyl/AD-OWOD.
Sniffing Threatening Open-World Objects in Autonomous Driving by Open-Vocabulary Models
[ "Yulin He", "Siqi Wang", "Wei Chen", "Tianci Xun", "Yusong Tan" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=5QE0Hf37Le
@inproceedings{ jiang2024multimodal, title={Multi-Modal Diffusion Model for Recommendation}, author={Yangqin Jiang and Lianghao Xia and Wei Wei and Da Luo and Kangyi Lin and Chao Huang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=5QE0Hf37Le} }
The rise of online multi-modal sharing platforms like TikTok and YouTube has enabled personalized recommender systems to incorporate multiple modalities (such as visual, textual, and acoustic) into user representations. However, addressing the challenge of data sparsity in these systems remains a key issue. To address this limitation, recent research has introduced self-supervised learning techniques to enhance recommender systems. However, these methods often rely on simplistic random augmentation or intuitive cross-view information, which can introduce irrelevant noise and fail to accurately align the multi-modal context with user-item interaction modeling. To fill this research gap, we propose a novel multi-modal graph diffusion model for recommendation called DiffMM. Our framework integrates a modality-aware graph diffusion model with a cross-modal contrastive learning paradigm to improve modality-aware user representation learning. This integration facilitates better alignment between multi-modal feature information and collaborative relation modeling. Our approach leverages diffusion models’ generative capabilities to automatically generate a user-item graph that is aware of different modalities, facilitating the incorporation of useful multi-modal knowledge in modeling user-item interactions. We conduct extensive experiments on three public datasets, consistently demonstrating the superiority of our DiffMM over various competitive baselines.
Multi-Modal Diffusion Model for Recommendation
[ "Yangqin Jiang", "Lianghao Xia", "Wei Wei", "Da Luo", "Kangyi Lin", "Chao Huang" ]
Conference
oral
[ "https://github.com/hkuds/diffmm" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=5IF7qfqyJT
@inproceedings{ ding2024integrating, title={Integrating Content-Semantics-World Knowledge to Detect Stress from Videos}, author={Yang Ding and Yi Dai and Xin Wang and Ling Feng and Lei Cao and Huijun Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=5IF7qfqyJT} }
Stress has rapidly emerged as a significant public health concern in contemporary society, necessitating prompt identification and effective intervention strategies. Video-based stress detection offers a non-invasive, low-cost, and mass-reaching approach for identifying stress. In this paper, we propose a three-level content-semantics-world knowledge framework, addressing three particular issues for video-based stress detection. (1) How to abstract and encode video semantics with frame contents into visual representation? (2) How to leverage general-purpose LMMs to augment task-specific visual representation? (3) To what extent could general-purpose LMMs contribute to video-based stress detection? We design a Slow-Emotion-Fast-Action scheme to encode fast temporal changes of body actions revealed from video frames, as well as subtle details of emotions per video segment, into visual representation. We augment task-specific visual representation with linguistic facial expression descriptions by prompting general-purpose Large Multimodal Models (LMMs). A knowledge retriever is built to evaluate and select the most proper deliverable of LMMs. Experimental results on two datasets show that 1) our proposed three-level framework can achieve 90.89% F1-score in the UVSD dataset and 80.79% F1-score in the RSL dataset, outperforming the state-of-the-art; 2) leveraging LMMs helps to improve the F1-score by 2.25% in UVSD and 3.55% in RSL, compared to using the traditional Facial Action Coding System; 3) purely relying on general-purpose LMMs is insufficient, with 88.73% F1-score in the UVSD dataset and 77.48% F1-score in the RSL dataset, demonstrating the necessity to combine task-specific dedicated solutions with world knowledge given by LMMs.
Integrating Content-Semantics-World Knowledge to Detect Stress from Videos
[ "Yang Ding", "Yi Dai", "Xin Wang", "Ling Feng", "Lei Cao", "Huijun Zhang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=5FYaBV1nAE
@inproceedings{ sun2024learning, title={Learning from Distinction: Mitigating backdoors using a low-capacity model}, author={Haosen Sun and Yiming Li and Xixiang Lyu and Jing Ma}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=5FYaBV1nAE} }
Deep neural networks (DNNs) are susceptible to backdoor attacks due to their black-box nature and lack of interpretability. Backdoor attacks intend to manipulate the model's prediction when hidden backdoors are activated by predefined triggers. Although considerable progress has been made in backdoor detection and removal at the model deployment stage, an effective defense against backdoor attacks during the training time is still under-explored. In this paper, we propose a novel training-time backdoor defense method called Learning from Distinction (LfD), allowing training a backdoor-free model on the backdoor-poisoned data. LfD uses a low-capacity model as a teacher to guide the learning of a backdoor-free student model via a dynamic weighting strategy. Extensive experiments on CIFAR-10, GTSRB and ImageNet-subset datasets show that LfD significantly reduces attack success rates to 0.67\%, 6.14\% and 1.42\%, respectively, with minimal impact on clean accuracy (less than 1\%, 3\% and 1\%).
Learning from Distinction: Mitigating backdoors using a low-capacity model
[ "Haosen Sun", "Yiming Li", "Xixiang Lyu", "Jing Ma" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=57cL5TwLAH
@inproceedings{ mao2024loformer, title={LoFormer: Local Frequency Transformer for Image Deblurring}, author={Xintian Mao and Jiansheng Wang and Xingran Xie and Qingli Li and Yan Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=57cL5TwLAH} }
Due to the computational complexity of self-attention (SA), prevalent techniques for image deblurring often resort to either adopting localized SA or employing coarse-grained global SA methods, both of which exhibit drawbacks such as compromising global modeling or lacking fine-grained correlation. In order to address this issue by effectively modeling long-range dependencies without sacrificing fine-grained details, we introduce a novel approach termed Local Frequency Transformer (LoFormer). Within each unit of LoFormer, we incorporate a Local Channel-wise SA in the frequency domain (Freq-LC) to simultaneously capture cross-covariance within low- and high-frequency local windows. These operations offer the advantage of (1) ensuring equitable learning opportunities for both coarse-grained structures and fine-grained details, and (2) exploring a broader range of representational properties compared to coarse-grained global SA methods. Additionally, we introduce an MLP Gating mechanism complementary to Freq-LC, which serves to filter out irrelevant features while enhancing global learning capabilities. Our experiments demonstrate that LoFormer significantly improves performance in the image deblurring task, achieving a PSNR of 34.09 dB on the GoPro dataset with 126G FLOPs. Code will be released.
LoFormer: Local Frequency Transformer for Image Deblurring
[ "Xintian Mao", "Jiansheng Wang", "Xingran Xie", "Qingli Li", "Yan Wang" ]
Conference
poster
2407.16993
[ "https://github.com/deepmed-lab-ecnu/single-image-deblur" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=54MrV3qXmL
@inproceedings{ fu2024fedcafe, title={Fed{CAFE}: Federated Cross-Modal Hashing with Adaptive Feature Enhancement}, author={Ting Fu and Yu-Wei Zhan and Chong-Yu Zhang and Xin Luo and Zhen-Duo Chen and Yongxin Wang and Xun Yang and Xin-Shun Xu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=54MrV3qXmL} }
Deep Cross-Modal Hashing (CMH) has become one of the most popular solutions for cross-modal retrieval. Existing methods need to first collect data and then be trained with these accumulated data. However, in the real world, data may be generated and possessed by different owners. Considering concerns about privacy, data may not be shared or transmitted, leading to the failure of sufficient training of CMH. To solve this problem, we propose a new framework called Federated Cross-modal Hashing with Adaptive Feature Enhancement (FedCAFE). FedCAFE is a federated method that can use distributed data to train existing CMH methods under privacy protection. To overcome the data heterogeneity challenge of distributed data and improve the generalization ability of the global model, FedCAFE is endowed with a novel adaptive feature enhancement module and a new weighted aggregation strategy. Besides, it can fully utilize the rich global information carried in the global model to constrain the model during the local training process. We have conducted extensive experiments on four widely-used datasets in the CMH domain with both IID and non-IID settings. The reported results demonstrate that the proposed FedCAFE achieves better performance than several state-of-the-art baselines. As training deep CMH in a federated scenario is a topic still in its infancy, we plan to release the code and data to boost the development of the field. However, considering the restrictions of anonymous submission and size limitations, we could only upload the source code of FedCAFE as supplementary materials for peer review at the present stage.
FedCAFE: Federated Cross-Modal Hashing with Adaptive Feature Enhancement
[ "Ting Fu", "Yu-Wei Zhan", "Chong-Yu Zhang", "Xin Luo", "Zhen-Duo Chen", "Yongxin Wang", "Xun Yang", "Xin-Shun Xu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
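The FedCAFE abstract above mentions a weighted aggregation strategy on top of a federated setup. For orientation only, here is the standard sample-count-weighted aggregation (FedAvg-style) that such strategies typically build on; it is not the paper's own weighting scheme, and the function name is an assumption.

```python
import torch

def weighted_aggregate(state_dicts, num_samples):
    """Aggregate client model state_dicts into a global model, weighting each
    client by its number of local samples (the plain FedAvg baseline)."""
    total = float(sum(num_samples))
    return {
        key: sum(sd[key] * (n / total) for sd, n in zip(state_dicts, num_samples))
        for key in state_dicts[0].keys()
    }
```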
null
https://openreview.net/forum?id=52NGERGnlz
@inproceedings{ lin2024gdrgma, title={{GDR}-{GMA}: Machine Unlearning via Direction-Rectified and Magnitude-Adjusted Gradients}, author={Shen Lin and Xiaoyu Zhang and Willy Susilo and Xiaofeng Chen and Jun Liu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=52NGERGnlz} }
As concerns over privacy protection grow and relevant laws come into effect, machine unlearning (MU) has emerged as a pivotal research area. Due to the complexity of the forgetting data distribution, sample-wise MU still poses open challenges. Gradient ascent, as the inverse of gradient descent, is naturally applied to machine unlearning, which is also the inverse process of machine learning. However, the straightforward gradient ascent MU method suffers from the trade-off between effectiveness, fidelity, and efficiency. In this work, we analyze the gradient ascent MU process from a multi-task learning (MTL) view. This perspective reveals two problems that cause the trade-off, i.e., the gradient direction problem and the gradient dominant problem. To address these problems, we propose a novel MU method, namely GDR-GMA, consisting of Gradient Direction Rectification (GDR) and Gradient Magnitude Adjustment (GMA). For the gradient direction problem, GDR rectifies the direction between the conflicting gradients by projecting a gradient onto the orthonormal plane of the conflicting gradient. For the gradient dominant problem, GMA dynamically adjusts the magnitude of the update gradients by assigning a dynamic magnitude weight parameter to the update gradients. Furthermore, we evaluate GDR-GMA against several baseline methods in three sample-wise MU scenarios: random data forgetting, sub-class forgetting, and class forgetting. Extensive experimental results demonstrate the superior performance of GDR-GMA in effectiveness, fidelity, and efficiency. Code is available at https://github.com/RUIYUN-ML/GDR-GMA.
GDR-GMA: Machine Unlearning via Direction-Rectified and Magnitude-Adjusted Gradients
[ "Shen Lin", "Xiaoyu Zhang", "Willy Susilo", "Xiaofeng Chen", "Jun Liu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
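The GDR-GMA abstract above describes rectifying a conflicting gradient by projecting it onto the plane orthogonal to the other gradient. A minimal sketch of that projection step is below; the function name, the flattened-gradient interface, and applying it to a single gradient are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def rectify_gradient(g, g_conflict, eps=1e-12):
    """If g and g_conflict point in conflicting directions (negative inner
    product), project g onto the plane orthogonal to g_conflict so the update
    no longer pushes directly against the other objective."""
    dot = torch.dot(g.flatten(), g_conflict.flatten())
    if dot < 0:  # gradients conflict
        g = g - (dot / (g_conflict.norm() ** 2 + eps)) * g_conflict
    return g
```

In practice one would apply this either per parameter tensor or on the concatenation of all parameter gradients, e.g. between the forgetting (ascent) gradient and the retaining gradient.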
null
https://openreview.net/forum?id=51eXLV9kI2
@inproceedings{ pei2024improving, title={Improving Interaction Comfort in Authoring Task in {AR}-{HRI} through Dynamic Dual-Layer Interaction Adjustment}, author={Yunqiang Pei and Kaiyue Zhang and Hongrong yang and Yong Tao and Qihang Tang and Jialei Tang and Guoqing Wang and Zhitao Liu and Ning Xie and Peng Wang and Yang Yang and Heng Tao Shen}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=51eXLV9kI2} }
Previous research has demonstrated the potential of Augmented Reality in enhancing psychological comfort in Human-Robot Interaction (AR-HRI) through shared robot intent, enhanced visual feedback, and increased expressiveness and creativity in interaction methods. However, the challenge of selecting interaction methods that enhance physical comfort in varying scenarios remains. This study proposes a dynamic dual-layer interaction adjustment mechanism to improve user comfort and interaction efficiency. The mechanism comprises two models: a general layer model, grounded in ergonomics principles, which identifies appropriate areas for various interaction methods; and an individual layer model, which predicts user discomfort levels using physiological signals. Interaction methods are dynamically adjusted based on continuous discomfort level changes, enabling the system to adapt to individual differences and dynamic changes, thereby reducing misjudgments and enhancing comfort management. The mechanism's success in authoring tasks validates its effectiveness, significantly advancing AR-HRI and fostering more comfortable and efficient human-centered interactions.
Improving Interaction Comfort in Authoring Task in AR-HRI through Dynamic Dual-Layer Interaction Adjustment
[ "Yunqiang Pei", "Kaiyue Zhang", "Hongrong yang", "Yong Tao", "Qihang Tang", "Jialei Tang", "Guoqing Wang", "Zhitao Liu", "Ning Xie", "Peng Wang", "Yang Yang", "Heng Tao Shen" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=4yMa8BbH6J
@inproceedings{ gao2024cantor, title={Cantor: Inspiring Multimodal Chain-of-Thought of {MLLM}}, author={Timin Gao and Peixian Chen and Mengdan Zhang and Chaoyou Fu and Yunhang Shen and Yan Zhang and Shengchuan Zhang and Xiawu Zheng and Xing Sun and Liujuan Cao and Rongrong Ji}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=4yMa8BbH6J} }
With the advent of large language models(LLMs) enhanced by the chain-of-thought(CoT) methodology, visual reasoning problem is usually decomposed into manageable sub-tasks and tackled sequentially with various external tools. However, such a paradigm faces the challenge of the potential ``determining hallucinations'' in decision-making due to insufficient visual information and the limitation of low-level perception tools that fail to provide abstract summaries necessary for comprehensive reasoning. We argue that converging visual context acquisition and logical reasoning is pivotal for tackling visual reasoning tasks. This paper delves into the realm of multimodal CoT to solve intricate visual reasoning tasks with multimodal large language models (MLLMs) and their cognitive capability. To this end, we propose an innovative multimodal CoT framework, termed Cantor, characterized by a perception-decision architecture. Cantor first acts as a decision generator and integrates visual inputs to analyze the image and problem, ensuring a closer alignment with the actual context. Furthermore, Cantor leverages the advanced cognitive functions of MLLMs to perform as multifaceted experts for deriving higher-level information, enhancing the CoT generation process. Our extensive experiments demonstrate the efficacy of the proposed framework, showing significant improvements in multimodal CoT performance across two complex visual reasoning datasets, without necessitating fine-tuning or ground-truth rationales.
Cantor: Inspiring Multimodal Chain-of-Thought of MLLM
[ "Timin Gao", "Peixian Chen", "Mengdan Zhang", "Chaoyou Fu", "Yunhang Shen", "Yan Zhang", "Shengchuan Zhang", "Xiawu Zheng", "Xing Sun", "Liujuan Cao", "Rongrong Ji" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=4uHa1ANmV5
@inproceedings{ lu2024fcdfs, title={{FC}-4{DFS}: Frequency-controlled Flexible 4D Facial Expression Synthesizing}, author={Xin Lu and Chuanqing Zhuang and Zhengda Lu and Yiqun Wang and Jun Xiao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=4uHa1ANmV5} }
4D facial expression synthesizing is a critical problem in the fields of computer vision and graphics. Current methods lack flexibility and smoothness when simulating the inter-frame motion of expression sequences. In this paper, we propose a frequency-controlled 4D facial expression synthesizing method, FC-4DFS. Specifically, we introduce a frequency-controlled LSTM network to generate 4D facial expression sequences frame by frame from a given neutral landmark with a given length. Meanwhile, we propose a temporal coherence loss to enhance the perception of temporal sequence motion and improve the accuracy of relative displacements. Furthermore, we designed a Multi-level Identity-Aware Displacement Network based on a cross-attention mechanism to reconstruct the 4D facial expression sequences from landmark sequences. Finally, our FC-4DFS achieves flexible and SOTA generation results of 4D facial expression sequences with different lengths on CoMA and Florence4D datasets. The code will be available on GitHub.
FC-4DFS: Frequency-controlled Flexible 4D Facial Expression Synthesizing
[ "Xin Lu", "Chuanqing Zhuang", "Zhengda Lu", "Yiqun Wang", "Jun Xiao" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=4uFTPLdfUp
@inproceedings{ li2024magic, title={{MAGIC}: Rethinking Dynamic Convolution Design for Medical Image Segmentation}, author={Shijie Li and Yunbin Tu and Qingyuan Xiang and Zheng Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=4uFTPLdfUp} }
Recently, dynamic convolution has shown a performance boost for CNN-related networks in medical image segmentation. The core idea is to replace the static convolutional kernel with a linear combination of multiple convolutional kernels, conditioned on an input-dependent attention function. However, the existing dynamic convolution design suffers from two limitations: i) The convolutional kernels are weighted by enforcing a single-dimensional attention function upon the input maps, overlooking the synergy in multi-dimensional information. This results in sub-optimal computations of convolution kernels. ii) The linear kernel aggregation is inefficient, restricting the model’s capacity to learn more intricate patterns. In this paper, we rethink the dynamic convolution design to address these limitations and propose multi-dimensional aggregation dynamic convolution (MAGIC). Specifically, our MAGIC introduces a dimensional-reciprocal fusion module to capture correlations among input maps across the spatial, channel, and global dimensions simultaneously for computing convolutional kernels. Furthermore, we design a kernel recalculation module, which enhances the efficiency of aggregation through learning the interaction between kernels. As a drop-in replacement for regular convolution, our MAGIC can be flexibly integrated into prevalent pure CNN or hybrid CNN-Transformer backbones. Extensive experiments on four benchmarks demonstrate that our MAGIC outperforms regular convolution and existing dynamic convolution. Code is available at: https://github.com/Segment82/MAGIC
MAGIC: Rethinking Dynamic Convolution Design for Medical Image Segmentation
[ "Shijie Li", "Yunbin Tu", "Qingyuan Xiang", "Zheng Li" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
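For context on the MAGIC abstract above, the sketch below shows the *standard* dynamic convolution it critiques: K parallel kernels mixed by a single-dimensional (channel) attention and a linear aggregation. This is the baseline design only, not MAGIC's multi-dimensional fusion; layer sizes and initialization are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv2d(nn.Module):
    """Baseline dynamic convolution: an input-conditioned softmax over K
    candidate kernels, followed by a linear (weighted-sum) aggregation."""

    def __init__(self, in_ch, out_ch, k=3, num_kernels=4):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(num_kernels, out_ch, in_ch, k, k) * 0.02)
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_ch, num_kernels))

    def forward(self, x):
        b, c, h, w = x.shape
        alpha = torch.softmax(self.attn(x), dim=1)              # (B, K)
        # Per-sample kernel aggregation, then a grouped convolution trick
        # so each sample uses its own aggregated kernel.
        w_mix = torch.einsum('bk,koihw->boihw', alpha, self.weight)
        out_ch, in_ch, kh, kw = w_mix.shape[1:]
        x = x.reshape(1, b * c, h, w)
        w_mix = w_mix.reshape(b * out_ch, in_ch, kh, kw)
        y = F.conv2d(x, w_mix, padding=kh // 2, groups=b)
        return y.reshape(b, out_ch, *y.shape[-2:])
```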
null
https://openreview.net/forum?id=4sqXrISHZT
@inproceedings{ zhao2024ctcqa, title={{CT}2C-{QA}: Multimodal Question Answering over Chinese Text, Table and Chart}, author={Bowen Zhao and Tianhao Cheng and Yuejie Zhang and Ying Cheng and Rui Feng and Xiaobo Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=4sqXrISHZT} }
Multimodal Question Answering (MMQA) is crucial as it enables comprehensive understanding and accurate responses by integrating insights from diverse data representations such as tables, charts, and text. Most existing research in MMQA focuses only on two modalities, such as image-text QA, table-text QA and chart-text QA, and there remains a notable scarcity in studies that investigate the joint analysis of text, tables, and charts. In this paper, we present C$\text{T}^2$C-QA, a pioneering Chinese reasoning-based QA dataset that includes an extensive collection of text, tables, and charts, meticulously compiled from 200 selectively sourced webpages. Our dataset simulates real webpages and serves as a great test for the capability of the model to analyze and reason with multimodal data, because the answer to a question could appear in various modalities, or even potentially not exist at all. Additionally, we present AED (Allocating, Expert and Decision), a multi-agent system implemented through collaborative deployment, information interaction, and collective decision-making among different agents. Specifically, the Assignment Agent is in charge of selecting and activating expert agents, including those proficient in text, tables, and charts. The Decision Agent bears the responsibility of delivering the final verdict, drawing upon the analytical insights provided by these expert agents. We execute a comprehensive analysis, comparing AED with various state-of-the-art models in MMQA, including GPT-4. The experimental outcomes demonstrate that current methodologies, including GPT-4, are yet to meet the benchmarks set by our dataset.
CT2C-QA: Multimodal Question Answering over Chinese Text, Table and Chart
[ "Bowen Zhao", "Tianhao Cheng", "Yuejie Zhang", "Ying Cheng", "Rui Feng", "Xiaobo Zhang" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=4k79es7Guv
@inproceedings{ li2024towards, title={Towards Photorealistic Video Colorization via Gated Color-Guided Image Diffusion Models}, author={Jiaxing Li and Hongbo Zhao and Yijun Wang and Jianxin Lin}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=4k79es7Guv} }
Video colorization poses challenging tasks, necessitating structural stability, continuity, and details control in the colors produced. In this paper, based on a pretrained text-to-image model, we introduce the $\textbf{Gated Color Guidance}$ module ($\textbf{GCG}$), enabling the model to adaptively perform color propagation or generation according to the structural differences between reference and grayscale frames. Based on this multifunctionality, we propose a novel two-stage coloring strategy. In the first stage, under reference-mask condition, the model autonomously and jointly colors input keyframes in a one-to-many color domain mapping, while temporal coherence constraints are emphasized by modifying the attention mechanism. In the second stage, under reference-guided condition, the model effectively captures the colors of matching structures in the reference, and we further introduce $\textbf{Sliding Reference Grid}$ strategy ($\textbf{SRG}$) to merge and extract the color features from multiple frames, providing more stable coloring for the grayscale frames. Through this pipeline, we can achieve high-quality and stable video coloring while maintaining the accuracy of detailed colors. Additionally, the two-stage strategy is flexible and detachable, allowing users to adjust the number of selected reference frames to balance coloring quality and efficiency. Extensive experiments demonstrate that our method significantly outperforms previous state-of-the-art models in both qualitative comparison and quantitative measurement.
Towards Photorealistic Video Colorization via Gated Color-Guided Image Diffusion Models
[ "Jiaxing Li", "Hongbo Zhao", "Yijun Wang", "Jianxin Lin" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=4fZSVT4hSK
@inproceedings{ zhang2024unleashing, title={Unleashing the Power of Generic Segmentation Model: A Simple Baseline for Infrared Small Target Detection}, author={Mingjin Zhang and Chi Zhang and Qiming Zhang and Yunsong Li and Xinbo Gao and Jing Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=4fZSVT4hSK} }
Recent advancements in deep learning have greatly advanced the field of infrared small object detection (IRSTD). Despite their remarkable success, a notable gap persists between these IRSTD methods and generic segmentation approaches in natural image domains. This gap primarily arises from the significant modality differences and the limited availability of infrared data. In this study, we aim to bridge this divergence by investigating the adaptation of generic segmentation models, such as the Segment Anything Model (SAM), to IRSTD tasks. Our investigation reveals that many generic segmentation models can achieve comparable performance to state-of-the-art IRSTD methods. However, their full potential in IRSTD remains untapped. To address this, we propose a simple, lightweight, yet effective baseline model for segmenting small infrared objects. Through appropriate distillation strategies, we empower smaller student models to outperform state-of-the-art methods, even surpassing fine-tuned teacher results. Furthermore, we enhance the model's performance by introducing a novel query design comprising dense and sparse queries to effectively encode multi-scale features. Through extensive experimentation across four popular IRSTD datasets, our model demonstrates significantly improved performance in both accuracy and throughput compared to existing approaches, surpassing SAM and Semantic-SAM by over 14 IoU on NUDT and 4 IoU on IRSTD1k. The source code and models will be released.
Unleashing the Power of Generic Segmentation Model: A Simple Baseline for Infrared Small Target Detection
[ "Mingjin Zhang", "Chi Zhang", "Qiming Zhang", "Yunsong Li", "Xinbo Gao", "Jing Zhang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=4dwZARybRf
@inproceedings{ zhang2024aralive, title={AraLive: Automatic Reward Adaption for Learning-based Live Video Streaming}, author={Huanhuan Zhang and Zhuo Liu and Haotian Li and Anfu Zhou and Chuanming Wang and Huadong Ma}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=4dwZARybRf} }
Optimizing user Quality of Experience (QoE) for live video streaming remains a long-standing challenge. The Bitrate Control Algorithm (BCA) plays a crucial role in shaping user QoE. Recent advancements have seen RL-based algorithms overtake traditional rule-based methods, promising enhanced QoE optimization. Nevertheless, our comprehensive study reveals a pressing issue: current RL-based BCAs are limited to fixed and formulaic reward functions, rendering them ill-equipped to adapt to dynamic network environments and varied viewer preferences. In this work, we present AraLive, an automatically adaptive reward learning method designed for seamless integration with any existing learning-based approach in live streaming contexts. To accomplish this goal, we construct a dedicated user QoE assessment dataset for live streaming and custom-design an adversarial model that skillfully aligns human feedback with actual network scenarios. We have deployed AraLive not only in live streaming but also in classic VoD systems, comparing it against a series of state-of-the-art BCAs. The experimental results demonstrate that AraLive not only elevates overall QoE but also exhibits remarkable adaptability to varied user preferences.
AraLive: Automatic Reward Adaption for Learning-based Live Video Streaming
[ "Huanhuan Zhang", "Zhuo Liu", "Haotian Li", "Anfu Zhou", "Chuanming Wang", "Huadong Ma" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=4UDvROcNJu
@inproceedings{ wang2024illumination, title={Illumination Distribution Prior for Low-light Image Enhancement}, author={Chao Wang and Yang Zhou and Liangtian He and Lin Fenglai and Hongming Chen and Liang-Jian Deng}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=4UDvROcNJu} }
In this paper, we propose a simple but effective illumination distribution prior (IDP) for images to illuminate the darkness. The illumination distribution prior is the product of a statistical approach to low-light images. It is based on a key factor - the mean value and standard deviation of images are positively correlated with the illumination. Using IDP in combination with the dual-domain feature fusion network (DFFN), we can obtain images that are more consistent with the ground truth distribution. DFFN inserts the discrete wavelet transform (DWT) into the transformer architecture, aiming to recover the detailed texture of the image through local high-frequency information and global spatial information. We have conducted extensive experiments on five widely used low-light image enhancement datasets and the experimental results show the superior performance of our proposed network (IDP-Net) compared to other state-of-the-art methods.
Illumination Distribution Prior for Low-light Image Enhancement
[ "Chao Wang", "Yang Zhou", "Liangtian He", "Lin Fenglai", "Hongming Chen", "Liang-Jian Deng" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
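The illumination distribution prior (IDP) abstract above rests on the observation that an image's mean and standard deviation are positively correlated with its illumination. A tiny sketch of those per-image statistics is given below; the luminance conversion and the [0, 1] RGB input are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def illumination_statistics(img):
    """Return the (mean, std) of an image's luminance: two cheap scalars that
    act as a prior for how dark the input is. ``img`` is assumed to be an
    (H, W, 3) RGB array scaled to [0, 1]."""
    luminance = img @ np.array([0.299, 0.587, 0.114])  # (H, W)
    return float(luminance.mean()), float(luminance.std())
```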
null
https://openreview.net/forum?id=4QZsYR3Wmj
@inproceedings{ zhu2024an, title={An Active Masked Attention Framework for Many-to-Many Cross-Domain Recommendations}, author={Feng Zhu and Xinxing Yang and Longfei Li and JUN ZHOU}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=4QZsYR3Wmj} }
Cross-Domain Recommendation (CDR) has been proposed to improve the recommendation accuracy in the target domain (the sparser dataset) by benefiting from the auxiliary information transferred or the knowledge learned from one or many source domains (the denser datasets). However, most of the existing CDR approaches still suffer from the problem of negative transfer caused by undifferentiated knowledge transfer, and thus the recommendation accuracy in some domains, especially in the sparser domains, is still too low, which is not practical in real application scenarios. To address this problem, we propose a novel Active Masked Attention framework, i.e., AMA-CDR, for many-to-many CDR scenarios. Our AMA-CDR pursues a higher goal for CDR approaches, i.e., \textit{improving the recommendation performance in the target domain to achieve a practically usable level}, which is meaningful and challenging in real CDR systems. Specifically, AMA-CDR adopts an end-to-end graph embedding to reduce the objective distortion between graph embedding and embedding combination. More importantly, we propose an active mask for the embedding combination to ease negative transfer, which leverages both the prior knowledge, i.e., data density, and the posterior knowledge, i.e., sample uncertainty. Extensive experiments conducted on two public datasets demonstrate that our proposed AMA-CDR models significantly outperform the state-of-the-art approaches and achieve the new goal.
An Active Masked Attention Framework for Many-to-Many Cross-Domain Recommendations
[ "Feng Zhu", "Xinxing Yang", "Longfei Li", "JUN ZHOU" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=4NyjtVHiv8
@inproceedings{ fu2024comonas, title={Co{MO}-{NAS}: Core-Structures-Guided Multi-Objective Neural Architecture Search for Multi-Modal Classification}, author={Pinhan Fu and Xinyan Liang and Yuhua Qian and Qian Guo and Zhifang Wei and Wen Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=4NyjtVHiv8} }
Most existing NAS-based multi-modal classification (MMC-NAS) methods are optimized using classification accuracy alone. They cannot simultaneously provide multiple models with diverse preferences, such as model complexity and classification performance, to meet different users' demands. Combining MMC-NAS with multi-objective optimization is a natural way to address this issue. However, the key challenge of this solution is its high computation cost. For multi-objective optimization, the computing bottleneck is the Pareto front search. Some higher-quality MMC models (namely core structures, CSs), consisting of high-quality features and fusion operators, are easier to identify. We find that CSs have a close relation with the Pareto front (PF), i.e., the individuals lying on the PF contain the CSs. Based on this finding, we propose an efficient multi-objective neural architecture search for multi-modal classification that applies CSs to guide the PF search (CoMO-NAS). Experimental results thoroughly demonstrate the effectiveness of our CoMO-NAS. Compared to state-of-the-art competitors on benchmark multi-modal tasks, we achieve comparable performance with lower model complexity in a shorter search time.
CoMO-NAS: Core-Structures-Guided Multi-Objective Neural Architecture Search for Multi-Modal Classification
[ "Pinhan Fu", "Xinyan Liang", "Yuhua Qian", "Qian Guo", "Zhifang Wei", "Wen Li" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
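The CoMO-NAS abstract above revolves around the Pareto front of a bi-objective search (classification performance vs. model complexity). As background, here is a generic sketch of extracting the non-dominated set from a pool of candidates; it is not the paper's CS-guided search, and the (accuracy, complexity) tuple layout is an assumption.

```python
def pareto_front(candidates):
    """Keep the non-dominated candidates, where each candidate is a tuple
    (accuracy, complexity): accuracy should be high, complexity low."""
    def dominates(a, b):
        return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates)]


# Example: the second architecture is dominated by the first and is dropped.
print(pareto_front([(0.92, 3.1), (0.90, 3.5), (0.88, 1.2)]))
# -> [(0.92, 3.1), (0.88, 1.2)]
```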
null
https://openreview.net/forum?id=4DhSFeAOpX
@inproceedings{ liu2024hcanet, title={HcaNet: Haze-concentration-aware Network for Real-scene Dehazing with Codebook Priors}, author={Yi Liu and Jiachen Li and Yanchun Ma and Qing Xie and Yongjian Liu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=4DhSFeAOpX} }
In the task of image dehazing, it has been proven that high-quality codebook priors can be used to compensate for the distribution differences between real-world hazy images and synthetic hazy images, thereby helping the model improve its performance. However, because the concentration and distribution of haze in an image are irregular, approaches that simply replace or blend the codebook prior information with the original image features are inconsistent with this irregularity, which leads to non-ideal dehazing performance. To this end, we propose a haze-concentration-aware network (HcaNet), whose haze-concentration-aware module (HcaM) reduces the information loss in the vector quantization stage and achieves adaptive domain transfer for regions with different degrees of degradation. To further capture detailed texture information, we develop a frequency selective fusion module (FSFM) to facilitate the transmission of shallow information retained in haze areas to deeper layers, thereby enhancing the fusion with high-quality feature priors. Extensive evaluations demonstrate that the proposed model can be trained merely on synthetic hazy-clean pairs and still generalize effectively to real-world data. Several experimental results confirm that the proposed dehazing model outperforms state-of-the-art methods significantly on real-world images.
HcaNet: Haze-concentration-aware Network for Real-scene Dehazing with Codebook Priors
[ "Yi Liu", "Jiachen Li", "Yanchun Ma", "Qing Xie", "Yongjian Liu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=4BrIZo3Ave
@inproceedings{ yuan2024robust, title={Robust Prototype Completion for Incomplete Multi-view Clustering}, author={Honglin Yuan and Shiyun Lai and Xingfeng Li and Jian Dai and Yuan Sun and Zhenwen Ren}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=4BrIZo3Ave} }
In practical data collection processes, certain views may become partially unavailable due to sensor failures or equipment issues, leading to the problem of incomplete multi-view clustering (IMVC). While some IMVC methods employing prototype completion achieve satisfactory performance, almost all of them implicitly assume correct alignment of prototypes across all views. However, during prototype generation, different networks may produce different cluster centers, so the prototypes produced from different views can be misaligned, i.e., prototype noisy correspondence. To address this issue, we propose Robust Prototype Completion for Incomplete Multi-view Clustering (RPCIC), which mitigates the impact of noisy correspondence in prototypes. Specifically, RPCIC initially utilizes a cross-view contrastive learning module to obtain consistent feature representations across different views. Subsequently, we devise a robust contrastive loss for the produced prototypes, aiming to alleviate the influence of noisy correspondence within them. Finally, we employ a prototype fusion-based strategy to complete the missing data. Comprehensive experiments demonstrate that RPCIC outperforms 11 state-of-the-art methods in terms of both performance and robustness.
Robust Prototype Completion for Incomplete Multi-view Clustering
[ "Honglin Yuan", "Shiyun Lai", "Xingfeng Li", "Jian Dai", "Yuan Sun", "Zhenwen Ren" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=4AtVBoEUgi
@inproceedings{ liao2024calibrbev, title={Calib{RBEV}: Multi-Camera Calibration via Reversed Bird's-eye-view Representations for Autonomous Driving}, author={Wenlong Liao and Sunyuan Qiang and Xianfei Li and Xiaolei Chen and Haoyu Wang and Yanyan Liang and Junchi Yan and Tao He and Pai Peng}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=4AtVBoEUgi} }
Camera calibration consists of determining the intrinsic and extrinsic parameters of an imaging system, which forms the fundamental basis for various computer vision tasks and applications, e.g., robotics and autonomous driving (AD). However, prevailing camera calibration models involve a time-consuming and labor-intensive off-board process, particularly in mass-production settings, while also lacking exploration of real-world autonomous driving scenarios. To this end, in this paper, inspired by recent advancements in bird's-eye-view (BEV) perception models, we propose a novel automatic multi-camera Calibration method via Reversed BEV representations for autonomous driving, termed CalibRBEV. Specifically, the proposed CalibRBEV model primarily comprises two stages. Initially, we innovatively reverse the BEV perception pipeline, reconstructing bounding boxes through an attention auto-encoder module to fully extract the latent reversed BEV representations. Subsequently, the representations obtained from the encoder interact with the surrounding multi-view image features for further refinement and calibration parameter prediction. Extensive experimental results on the nuScenes and Waymo datasets validate the effectiveness of our proposed model.
CalibRBEV: Multi-Camera Calibration via Reversed Bird's-eye-view Representations for Autonomous Driving
[ "Wenlong Liao", "Sunyuan Qiang", "Xianfei Li", "Xiaolei Chen", "Haoyu Wang", "Yanyan Liang", "Junchi Yan", "Tao He", "Pai Peng" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=44rVbGyKDX
@inproceedings{ islam2024hazespacem, title={HazeSpace2M: A Dataset for Haze Aware Single Image Dehazing}, author={Md Tanvir Islam and Nasir Rahim and Saeed Anwar and Muhammad Saqib and Sambit Bakshi and Khan Muhammad}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=44rVbGyKDX} }
Reducing atmospheric haze and enhancing image clarity is crucial for a range of applications related to computer vision. The lack of real-life hazy ground truth images necessitates synthetic datasets, which often lack sufficiently diverse haze types, impeding effective haze-type classification and dehazing algorithm selection. This research introduces the HazeSpace2M dataset, a comprehensive collection of over 2 million images designed to enhance the performance of dehazing through haze-type classification. HazeSpace2M includes diverse scenes with 10 haze intensity levels, featuring Fog, Cloud, and a novel category, Environmental Haze (EH). Leveraging the dataset, we introduce a novel technique of haze-type classification followed by specialized dehazers to dehaze hazy images. Unlike conventional methods, our approach classifies haze types before applying type-specific dehazing, improving clarity and functionality across applications lacking real-life hazy images. We benchmark the state-of-the-art classification models against different combinations of the hazy benchmarking datasets (HBDs) and the Real Hazy Testset (RHT) from the HazeSpace2M dataset. For instance, ResNet50 and AlexNet, on average, achieve 92.75% and 92.50% accuracy, respectively, against the existing synthetic HBDs. However, the same models furnish 80% and 70% accuracy, respectively, against our RHT, proving the challenging nature of our dataset. Additional experiments utilizing our proposed framework verify that haze-type classification followed by specialized dehazing enhances dehazing results by 2.41% in PSNR, 17.14% in SSIM, and 10.2% in MSE over general dehazers. These results highlight the significance of HazeSpace2M and the proposed framework in addressing the pervasive challenge of atmospheric haze in multimedia processing. The codes and dataset will be available on GitHub soon.
HazeSpace2M: A Dataset for Haze Aware Single Image Dehazing
[ "Md Tanvir Islam", "Nasir Rahim", "Saeed Anwar", "Muhammad Saqib", "Sambit Bakshi", "Khan Muhammad" ]
Conference
poster
2409.17432
[ "https://github.com/tanvirnwu/hazespace2m" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
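The HazeSpace2M abstract above proposes classifying the haze type first and then dispatching to a type-specific dehazer. The routing itself is simple; a hedged sketch is below, where the classifier, the per-type dehazers, and the type names are assumed user-supplied components, not the paper's released models.

```python
import torch

def classify_then_dehaze(hazy, classifier, dehazers):
    """Classify the haze type of a (1, C, H, W) tensor, then apply the
    matching type-specific dehazing model. ``dehazers`` maps each type name
    to a callable restoration model."""
    haze_types = ["fog", "cloud", "environmental_haze"]
    with torch.no_grad():
        logits = classifier(hazy)                    # (1, num_types)
        haze_type = haze_types[int(logits.argmax(dim=1))]
    return dehazers[haze_type](hazy), haze_type
```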
null
https://openreview.net/forum?id=42flABfqDT
@inproceedings{ yujian2024sparse, title={Sparse Query Dense: Enhancing 3D Object Detection with Pseudo points}, author={Mo Yujian and Yan Wu and Junqiao Zhao and Hou zhenjie and weiquan Huang and Hu Yinghao and Jijun Wang and Jun Yan}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=42flABfqDT} }
Current LiDAR-only 3D detection methods are limited by the sparsity of point clouds. Previous methods used pseudo points generated by depth completion to supplement the LiDAR point cloud, but the pseudo-point sampling process was complex and the distribution of pseudo points was uneven. Meanwhile, due to the imprecision of depth completion, the pseudo points suffer from noise and local structural ambiguity, which limits further improvement of detection accuracy. This paper presents SQDNet, a novel framework designed to address these challenges. SQDNet incorporates two key components: the SQD module, which achieves sparse-to-dense matching via grid position indices, allowing for rapid sampling of large-scale pseudo points directly on the dense depth map and thus streamlining the data preprocessing pipeline, while using the density of LiDAR points within these grids to alleviate the uneven distribution and noise problems of pseudo points; and a sparse 3D backbone designed to capture long-distance dependencies, thereby improving voxel feature extraction and mitigating local structural blur in pseudo points. The experimental results validate the effectiveness of SQD and show considerable detection performance for difficult-to-detect instances on the KITTI test set.
Sparse Query Dense: Enhancing 3D Object Detection with Pseudo points
[ "Mo Yujian", "Yan Wu", "Junqiao Zhao", "Hou zhenjie", "weiquan Huang", "Hu Yinghao", "Jijun Wang", "Jun Yan" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
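The SQDNet abstract above matches sparse LiDAR points to dense pseudo points through shared grid position indices and uses per-cell LiDAR density to temper the pseudo points. A minimal sketch of that indexing idea follows; the voxel size and the use of raw counts as weights are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def grid_density_weights(lidar_xyz, pseudo_xyz, voxel=0.4):
    """Map both point sets to integer grid indices and, for every pseudo
    point, look up how many real LiDAR points fall in its cell. The returned
    densities can then be used to down-weight unevenly distributed or noisy
    pseudo points."""
    def to_keys(xyz):
        return [tuple(k) for k in np.floor(xyz / voxel).astype(np.int64)]

    density = {}
    for key in to_keys(lidar_xyz):
        density[key] = density.get(key, 0) + 1
    return np.array([density.get(key, 0) for key in to_keys(pseudo_xyz)],
                    dtype=np.float32)
```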
null
https://openreview.net/forum?id=41qxYMnEJw
@inproceedings{ peng2024laplacian, title={Laplacian Matrix Learning for Point Cloud Attribute Compression with Ternary Search-Based Adaptive Block Partition}, author={Changhao Peng and Wei Gao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=41qxYMnEJw} }
Graph Fourier Transform (GFT) has demonstrated significant effectiveness in the point cloud attribute compression task. However, existing graph modeling methods are based on the geometric relationships of the points, which leads to reduced efficiency of graph transforms in cases where the correlation between attributes and geometry is weak. In this paper, we propose a novel graph modeling method based on attribute prediction values. Specifically, we utilize Gaussian priors to model prediction values, then use maximum a posteriori estimation to learn the Laplacian matrix that best fits the prediction values in order to conduct separate graph transforms on prediction values and ground truth values to derive residuals, and subsequently perform quantization and entropy coding on these residuals. Additionally, since the partitioning of point clouds directly affects the coding performance, we design an adaptive block partitioning method based on ternary search, which selects reference points using a distance threshold r and performs block partitioning and non-reference-point attribute prediction based on these reference points. By conducting ternary search on the distance threshold r, we rapidly identify the optimal block partitioning strategy. Moreover, we introduce an efficient residual encoding method based on Morton codes for the attributes of reference points, while the prediction attributes of non-reference points are modeled using the proposed graph-based modeling approach. Experimental results demonstrate that our method significantly outperforms two attribute compression methods employed by the Moving Picture Experts Group (MPEG) in lossless-geometry-based attribute compression tasks, with an average of 30.57% BD-rate gain compared to the Predictive Lifting Transform (PLT) and an average of 33.54% BD-rate gain compared to the Region-Adaptive Hierarchical Transform (RAHT), and exhibits significantly improved rate-distortion performance over the current state-of-the-art GFT-based method.
Laplacian Matrix Learning for Point Cloud Attribute Compression with Ternary Search-Based Adaptive Block Partition
[ "Changhao Peng", "Wei Gao" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
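The abstract above uses a ternary search over the distance threshold r to pick the block partitioning. Ternary search itself is a standard routine; the sketch below shows it for minimizing a cost over an interval, with ``cost(r)`` assumed to be a user-supplied, unimodal rate-distortion cost (the paper's actual cost function is not specified here).

```python
def ternary_search_threshold(cost, lo, hi, iters=40):
    """Return an approximately optimal threshold r in [lo, hi], assuming
    cost(r) is unimodal (single minimum) on that interval."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if cost(m1) < cost(m2):
            hi = m2          # the minimum lies in [lo, m2]
        else:
            lo = m1          # the minimum lies in [m1, hi]
    return 0.5 * (lo + hi)
```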
null
https://openreview.net/forum?id=3w7o8TJjdj
@inproceedings{ xuan2024superpixelbased, title={Superpixel-based Efficient Sampling for Learning Neural Fields from Large Input}, author={Zhongwei Xuan and Zunjie Zhu and Shuai Wang and Haibing YIN and Hongkui Wang and Ming Lu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=3w7o8TJjdj} }
In recent years, novel view synthesis methods using neural implicit fields have gained popularity due to their exceptional rendering quality and rapid training speed. However, the computational cost of volumetric rendering has increased significantly with the advancement of camera technology and the consequent rise in average camera resolution. Despite extensive efforts to accelerate the training process, the training duration remains unacceptable for high-resolution inputs. Therefore, the development of efficient sampling methods is crucial for optimizing the learning process of neural fields from a large volume of inputs. In this paper, we introduce a novel method named Superpixel Efficient Sampling (SES), aimed at enhancing the learning efficiency of neural implicit fields. Our approach optimizes pixel-level ray sampling by segmenting the error map into multiple superpixels using the slic algorithm and dynamically updating their errors during training to increase ray sampling in areas with higher rendering errors. Compared to other methods, our approach leverages the flexibility of superpixels, effectively reducing redundant sampling while considering local information. Our method not only accelerates the learning process but also improves the rendering quality obtained from a vast array of inputs. We conduct extensive experiments to evaluate the effectiveness of our method across several baselines and datasets. The code will be released.
Superpixel-based Efficient Sampling for Learning Neural Fields from Large Input
[ "Zhongwei Xuan", "Zunjie Zhu", "Shuai Wang", "Haibing YIN", "Hongkui Wang", "Ming Lu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
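The Superpixel Efficient Sampling abstract above allocates more rays to superpixels with higher rendering error. Below is a hedged sketch of that sampling loop using scikit-image's SLIC; the segment count, the proportional weighting, and the superpixel-then-pixel draw are illustrative choices rather than the paper's exact procedure.

```python
import numpy as np
from skimage.segmentation import slic

def sample_rays_by_superpixel_error(image, error_map, n_rays,
                                    n_segments=512, rng=None):
    """Segment the image into SLIC superpixels, weight each superpixel by its
    mean rendering error, then draw pixel coordinates superpixel-first,
    pixel-second. Returns (ys, xs) arrays of sampled ray locations."""
    rng = rng or np.random.default_rng()
    labels = slic(image, n_segments=n_segments, start_label=0)

    seg_ids = np.unique(labels)
    mean_err = np.array([error_map[labels == s].mean() for s in seg_ids])
    prob = mean_err / mean_err.sum()

    chosen = rng.choice(seg_ids, size=n_rays, p=prob)   # pick superpixels
    flat = labels.reshape(-1)
    picks = []
    for s in chosen:                                    # pick a pixel inside each
        idx = np.flatnonzero(flat == s)
        picks.append(idx[rng.integers(len(idx))])
    ys, xs = np.unravel_index(np.array(picks), labels.shape)
    return ys, xs
```

During training the error map would be refreshed periodically from the current rendering residuals so the sampling keeps tracking the hardest regions.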
null
https://openreview.net/forum?id=3tztjCtqwd
@inproceedings{ chen2024recos, title={ReCoS: A Novel Benchmark for Cross-Modal Image-Text Retrieval in Complex Real-Life Scenarios}, author={Xiaojun Chen and Jimeng Lou and Wenxi Huang and Ting Wan and Qin Zhang and Min Yang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=3tztjCtqwd} }
Image-text retrieval stands as a pivotal task within information retrieval, gaining increasing importance with the rapid advancements in Visual-Language Pretraining models. However, current benchmarks for evaluating these models face limitations, exemplified by instances such as BLIP2 achieving near-perfect performance on existing benchmarks. In response, this paper advocates for a more robust evaluation benchmark for image-text retrieval, one that embraces several essential characteristics. Firstly, a comprehensive benchmark should cover a diverse range of tasks in both perception and cognition-based retrieval. Recognizing this need, we introduce ReCoS, a novel benchmark specifically designed for cross-modal image-text retrieval in complex real-life scenarios. Unlike existing benchmarks, ReCoS encompasses 12 retrieval tasks, with a particular focus on three cognition-based tasks, providing a more holistic assessment of model capabilities. To ensure the novelty of the benchmark, we emphasize the use of original data sources, steering clear of reliance on existing publicly available datasets to minimize the risk of data leakage. Additionally, to strike a balance between the complexity of the real world and benchmark usability, ReCoS includes text descriptions that are neither overly detailed, making retrieval overly simplistic, nor under-detailed to the point where retrieval becomes impossible. Our evaluation results shed light on the challenges faced by existing methods, especially in cognition-based retrieval tasks within ReCoS. This underscores the necessity for innovative approaches in addressing the complexities of image-text retrieval in real-world scenarios.
ReCoS: A Novel Benchmark for Cross-Modal Image-Text Retrieval in Complex Real-Life Scenarios
[ "Xiaojun Chen", "Jimeng Lou", "Wenxi Huang", "Ting Wan", "Qin Zhang", "Min Yang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=3n9khXAV6C
@inproceedings{ yang2024channelspatial, title={Channel-Spatial Support-Query Cross-Attention for Fine-Grained Few-Shot Image Classification}, author={Shicheng Yang and Xiaoxu Li and Dongliang Chang and Zhanyu Ma and Jing-Hao Xue}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=3n9khXAV6C} }
Few-shot fine-grained image classification aims to use only few labelled samples to successfully recognize subtle sub-classes within the same parent class. This task is extremely challenging, due to the co-occurrence of large inter-class similarity, low intra-class similarity, and only few labelled samples. In this paper, to address these challenges, we propose a new Channel-Spatial Cross-Attention Module (CSCAM), which can effectively drive a model to extract discriminative fine-grained feature representations with only few shots. CSCAM collaboratively integrates a channel cross-attention module and a spatial cross-attention module, for the attentions across support and query samples. In addition, to fit for the characteristics of fine-grained images, a support averaging method is proposed in CSCAM to reduce the intra-class distance and increase the inter-class distance. Extensive experiments on four few-shot fine-grained classification datasets validate the effectiveness of CSCAM. Furthermore, CSCAM is a plug-and-play module, conveniently enabling effective improvement of state-of-the-art methods for few-shot fine-grained image classification.
Channel-Spatial Support-Query Cross-Attention for Fine-Grained Few-Shot Image Classification
[ "Shicheng Yang", "Xiaoxu Li", "Dongliang Chang", "Zhanyu Ma", "Jing-Hao Xue" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
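The CSCAM abstract above mentions a support averaging step that tightens intra-class distances before the cross-attention. For orientation, here is the generic class-prototype averaging such a step builds on; it is not the CSCAM module itself, and the tensor layout is an assumption.

```python
import torch

def support_prototypes(support_feat, support_labels, n_way):
    """Average the (N_support, D) embeddings of each class into a single
    prototype vector, giving an (n_way, D) tensor of class representatives."""
    return torch.stack([support_feat[support_labels == c].mean(dim=0)
                        for c in range(n_way)])
```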
null
https://openreview.net/forum?id=3fqqjMM1BI
@inproceedings{ shigyo2024vrmediated, title={{VR}-Mediated Cognitive Defusion: A Comparative Study for Managing Negative Thoughts}, author={Kento Shigyo and Yi-Fan Cao and Kentaro Takahira and Mingming Fan and Huamin Qu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=3fqqjMM1BI} }
The growing prevalence of psychological disorders underscores the critical importance of mental health research in today's society. In psychotherapy, particularly Acceptance and Commitment Therapy (ACT), cognitive exercises employing mental imagery are used to manage negative thoughts. However, the challenge of maintaining vivid imagery diminishes their therapeutic effectiveness. Virtual reality (VR) offers untapped potential for increasing engagement and therapeutic efficacy. However, there is still a gap in exploration regarding how to effectively leverage the potential of VR to enhance traditional cognitive exercises with mental imagery. This study investigates the effective HCI design and the comparative efficacy of a VR-mediated exercise for promoting cognitive defusion to address negative thoughts grounded in ACT. Using a co-design approach with clinicians and potential users of postgraduate students, we developed a VR system that materializes negative thoughts into tangible objects. This allows users to visually modify and transpose these objects onto a surface, facilitating mental detachment from negative thoughts. In an evaluation study with 20 non-clinical participants, divided into VR and mental imagery groups, we assessed the impact of the cognitive defusion exercise on their perception of negative thoughts and psychological measures using standardized questionnaires. Results show improvement in both groups, with significant enhancements in negative thought perception and mental detachment from negative thoughts exclusively in the VR group, whereas the mental imagery group did not demonstrate significant changes. Interviews emphasize the VR's capability to present vivid visualizations of negative thoughts effortlessly, highlighting its effectiveness and engagement in psychotherapy to facilitate cognitive exercises.
VR-Mediated Cognitive Defusion: A Comparative Study for Managing Negative Thoughts
[ "Kento Shigyo", "Yi-Fan Cao", "Kentaro Takahira", "Mingming Fan", "Huamin Qu" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=3fgY4qOhoO
@inproceedings{ jiang2024heterogeneityaware, title={Heterogeneity-Aware Federated Deep Multi-View Clustering towards Diverse Feature Representations}, author={Xiaorui Jiang and Zhongyi Ma and Yulin Fu and Yong Liao and Pengyuan Zhou}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=3fgY4qOhoO} }
Multi-view clustering has garnered increasing attention in recent years because of its ability to extract consistent and complementary information from multi-view data. In this context, contrastive learning has often been employed to explore the common semantics across different views. However, we notice that existing multi-view contrastive clustering methods often overlook cases where samples belonging to the same cluster but different views are incorrectly classified as negative feature pairs, leading to larger separations between features belonging to the same cluster in the feature space. To address this issue, we propose to shift the perspective from the view-level to the cluster-level and introduce $\mathbf{C}$luster-level $\mathbf{C}$ontrastive $\mathbf{D}$eep $\mathbf{M}$ulti-$\mathbf{V}$iew $\mathbf{C}$lustering ($\mathbf{CCDMVC}$) method based on an intra-cluster negative pair exemption strategy. Specifically, by constructing global features to utilize complete view information, we infer the clustering probability of each sample, thus reducing the construction of negative feature pairs belonging to the same cluster. As a result, the contrastive loss is corrected, allowing the model to treat different levels of feature pairs differently, minimizing the introduction of noise and making the sample points within the same cluster more compact. Additionally, we propose a cluster-level imputation module to make CCDMVC compatible with scenarios involving incomplete data. This module infers missing features with high confidence clustering probabilities and classifies them in cluster-level form. We conduct extensive experiments on eight datasets with fourteen baseline algorithms. The results demonstrate that CCDMVC exhibits superior clustering performance.
Heterogeneity-Aware Federated Deep Multi-View Clustering towards Diverse Feature Representations
[ "Xiaorui Jiang", "Zhongyi Ma", "Yulin Fu", "Yong Liao", "Pengyuan Zhou" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=3fJRZAu7JY
@inproceedings{ zhang2024spikegs, title={Spike{GS}: 3D Gaussian Splatting from Spike Streams with High-Speed Camera Motion}, author={Jiyuan Zhang and Kang Chen and Shiyan Chen and Yajing Zheng and Tiejun Huang and Zhaofei Yu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=3fJRZAu7JY} }
Novel View Synthesis plays a crucial role by generating new 2D renderings from multi-view images of 3D scenes. However, capturing high-speed scenes with conventional cameras often leads to motion blur, hindering the effectiveness of 3D reconstruction. To address this challenge, high-frame-rate dense 3D reconstruction emerges as a vital technique, enabling detailed and accurate modeling of real-world objects or scenes in various fields, including Virtual Reality or embodied AI. Spike cameras, a novel type of neuromorphic sensor, continuously record scenes with an ultra-high temporal resolution, showing potential for accurate 3D reconstruction. Despite their promise, existing approaches, such as applying Neural Radiance Fields (NeRF) to spike cameras, encounter challenges due to the time-consuming rendering process. To address this issue, we make the first attempt to introduce 3D Gaussian Splatting (3DGS) to spike cameras for high-speed capture, using the spike stream to provide dense and continuous clues of views, and construct SpikeGS. Specifically, to train SpikeGS, we establish computational equations between the rendering process of 3DGS and the processes of instantaneous imaging and exposing-like imaging of the continuous spike stream. Besides, we build a very lightweight but effective mapping process from spikes to instant images to support training. Furthermore, we introduce a new spike-based 3D rendering dataset for validation. Extensive experiments demonstrate that our method achieves high-quality novel view rendering, proving the tremendous potential of spike cameras for modeling 3D scenes.
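A hedged sketch of one common way to map a binary spike stream to an instantaneous image: average the spike count in a short temporal window as a firing-rate proxy for intensity. This is a generic approximation, not the paper's learned lightweight mapping; the window size and scaling are assumptions.

```python
import numpy as np

def spikes_to_image(spike_stream, t, window=32):
    """Estimate an instantaneous image at time index t from a binary spike
    stream of shape (T, H, W) by averaging spikes in a short window.
    Simple firing-rate approximation, not the paper's mapping network."""
    T = spike_stream.shape[0]
    lo, hi = max(0, t - window // 2), min(T, t + window // 2)
    rate = spike_stream[lo:hi].mean(axis=0)       # spikes per step in [0, 1]
    return (rate * 255.0).astype(np.uint8)        # rough intensity proxy

# toy spike stream: brighter pixels fire more often
rng = np.random.default_rng(0)
intensity = np.linspace(0.1, 0.9, 64)[None, :] * np.ones((64, 64))
spikes = (rng.random((256, 64, 64)) < intensity).astype(np.uint8)
print(spikes_to_image(spikes, t=128).mean())
```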
SpikeGS: 3D Gaussian Splatting from Spike Streams with High-Speed Camera Motion
[ "Jiyuan Zhang", "Kang Chen", "Shiyan Chen", "Yajing Zheng", "Tiejun Huang", "Zhaofei Yu" ]
Conference
poster
2407.10062
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=3fIAeai2YE
@inproceedings{ jiangyi2024onthefly, title={On-the-fly Point Feature Representation for Point Clouds Analysis}, author={Wang Jiangyi and Zhongyao Cheng and Na Zhao and Jun Cheng and Xulei Yang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=3fIAeai2YE} }
Point cloud analysis is challenging due to its unique characteristics of unorderedness, sparsity and irregularity. Prior works attempt to capture local relationships by convolution operations or attention mechanisms, exploiting geometric information from coordinates implicitly. These methods, however, are insufficient to describe the explicit local geometry, e.g., curvature and orientation. In this paper, we propose On-the-fly Point Feature Representation (OPFR), which captures abundant geometric information explicitly through a Curve Feature Generator module. This is inspired by Point Feature Histogram (PFH) from the computer vision community. However, the utilization of vanilla PFH encounters great difficulties when applied to large datasets and dense point clouds, as it demands considerable time for feature generation. In contrast, we introduce the Local Reference Constructor module, which approximates the local coordinate systems based on triangle sets. Owing to this, our OPFR only requires an extra 1.56 ms for inference (65$\times$ faster than vanilla PFH) and 0.012M more parameters, and it can serve as a versatile plug-and-play module for various backbones, particularly MLP-based and Transformer-based backbones examined in this study. Additionally, we introduce the novel Hierarchical Sampling module aimed at enhancing the quality of triangle sets, thereby ensuring the robustness of the obtained geometric features. Our proposed method improves overall accuracy (OA) on ModelNet40 from 90.7\% to 94.5\% (+3.8\%) for classification, and OA on S3DIS Area-5 from 86.4\% to 90.0\% (+3.6\%) for semantic segmentation, respectively, building upon the PointNet++ backbone. When integrated with the Point Transformer backbone, we achieve state-of-the-art results on both tasks: 94.8\% OA on ModelNet40 and 91.7\% OA on S3DIS Area-5.
On-the-fly Point Feature Representation for Point Clouds Analysis
[ "Wang Jiangyi", "Zhongyao Cheng", "Na Zhao", "Jun Cheng", "Xulei Yang" ]
Conference
poster
2407.21335
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=3di7IStl3a
@inproceedings{ huang2024stablemofusion, title={StableMoFusion: Towards Robust and Efficient Diffusion-based Motion Generation Framework}, author={Yiheng Huang and Yang Hui and Chuanchen Luo and Yuxi Wang and Shibiao Xu and Zhaoxiang Zhang and Man Zhang and Junran Peng}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=3di7IStl3a} }
Thanks to the powerful generative capacity of diffusion models, recent years have witnessed rapid progress in human motion generation. Existing diffusion-based methods employ disparate network architectures and training strategies. The effect of the design of each component is still unclear. In addition, the iterative denoising process consumes considerable computational overhead, which is prohibitive for real-time scenarios such as virtual characters and humanoid robots. For this reason, we first conduct a comprehensive investigation into network architectures, training strategies, and inference processes. Based on this in-depth analysis, we tailor each component for efficient high-quality human motion generation. Despite the promising performance, the tailored model still suffers from foot skating, which is a ubiquitous issue in diffusion-based solutions. To eliminate foot skating, we identify foot-ground contact and correct foot motions during the denoising process. By organically combining these well-designed components, we present StableMoFusion, a robust and efficient framework for human motion generation. Extensive experimental results show that our StableMoFusion performs favorably against current state-of-the-art methods.
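A minimal sketch of the general foot-skating cleanup idea: detect frames where a foot joint appears to be in ground contact (low height and small vertical motion) and pin its horizontal position so it cannot slide. The thresholds, axis convention (y as height), and post-hoc correction are assumptions for illustration, not the paper's in-denoising procedure.

```python
import numpy as np

def fix_foot_skating(foot_pos, ground_h=0.02, contact_vel=0.01):
    """foot_pos: (T, 3) trajectory of one foot joint (x, y=height, z).
    When the foot is judged to be in contact (near the ground and nearly
    static vertically), freeze its horizontal position to suppress sliding.
    Thresholds are illustrative, not taken from the paper."""
    out = foot_pos.copy()
    for t in range(1, len(out)):
        near_ground = out[t, 1] < ground_h
        small_vert_vel = abs(out[t, 1] - out[t - 1, 1]) < contact_vel
        if near_ground and small_vert_vel:
            out[t, [0, 2]] = out[t - 1, [0, 2]]   # pin x/z during contact
    return out

# toy trajectory: the foot "slides" forward while staying on the ground
traj = np.zeros((10, 3))
traj[:, 0] = np.linspace(0.0, 0.5, 10)   # x drifts forward
print(fix_foot_skating(traj)[:, 0])       # x stops drifting after frame 0
```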
StableMoFusion: Towards Robust and Efficient Diffusion-based Motion Generation Framework
[ "Yiheng Huang", "Yang Hui", "Chuanchen Luo", "Yuxi Wang", "Shibiao Xu", "Zhaoxiang Zhang", "Man Zhang", "Junran Peng" ]
Conference
oral
2405.05691
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=3crgOHtJKi
@inproceedings{ wan2024dualstream, title={Dual-stream Perception-driven Blind Quality Assessment for Stereoscopic Omnidirectional Images}, author={Zhaolin Wan and Qiushuang Yang and Zhiyang Li and Xiaopeng Fan and Wangmeng Zuo and Debin Zhao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=3crgOHtJKi} }
The emergence of virtual reality technology has made stereoscopic omnidirectional images (SOI) easily accessible, prompting the need to evaluate their perceptual quality. At present, most stereoscopic omnidirectional image quality assessment (SOIQA) methods rely on one of the projection formats, i.e., Equirectangular Projection (ERP) or CubeMap Projection (CMP). However, while ERP provides global information and the less distorted CMP complements it by providing local structural guidance, research on leveraging both ERP and CMP in SOIQA remains limited, hindering a comprehensive understanding of both global and local visual cues. Motivated by this gap, our study introduces a novel dual-stream perception-driven network for blind quality assessment of stereoscopic omnidirectional images. By integrating both ERP and CMP, our method effectively captures both global and local information, marking the first attempt to bridge this gap in SOIQA, particularly through deep learning methodologies. We employ an inter-intra feature fusion module, which considers both the inter-complementarity between ERP and CMP and the intra-relationships within CMP images. This module dynamically and complementarily adjusts the contributions of features from both projections and effectively integrates them to achieve a more comprehensive perception. Besides, deformable convolution is employed to extract the local region of interest, simulating the orientation selectivity of the primary visual cortex. Finally, with the features of left and right views of SOI, a stereo cross attention module that simulates the binocular fusion mechanism is proposed to predict the quality score. Extensive experiments are conducted to evaluate our model and the state-of-the-art competitors, demonstrating that our model achieves the best performance on the LIVE 3D VR, SOLID, and NBU databases.
Dual-stream Perception-driven Blind Quality Assessment for Stereoscopic Omnidirectional Images
[ "Zhaolin Wan", "Qiushuang Yang", "Zhiyang Li", "Xiaopeng Fan", "Wangmeng Zuo", "Debin Zhao" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=3a0we8hrHk
@inproceedings{ wang2024explicit, title={Explicit Granularity and Implicit Scale Correspondence Learning for Point-Supervised Video Moment Localization}, author={Kun Wang and Hao Liu and Lirong Jie and Zixu Li and Yupeng Hu and Liqiang Nie}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=3a0we8hrHk} }
Video moment localization (VML) aims to identify the temporal boundary of the target moment semantically matching the given query. Existing approaches fall into three paradigms: fully-supervised, weakly-supervised, and point-supervised. Compared to the other two paradigms, point-supervised VML strikes a balance between localization accuracy and annotation cost. However, it is still in its infancy due to the following two challenges: explicit granularity alignment and implicit scale perception, especially when facing complex cross-modal correspondences. To this end, we propose a Semantic Granularity and Scale Correspondence Integration (SG-SCI) framework aimed at modeling the semantic alignment between video and text, leveraging limited single-frame annotation information for correspondence learning. It explicitly models semantic relations of different feature granularities and adaptively mines the implicit semantic scale, thereby enhancing and utilizing modal feature representations of varying granularities and scales. SG-SCI employs a granularity correspondence alignment module to align semantic information by leveraging latent prior knowledge. Then we develop a scale correspondence learning strategy to identify and address semantic scale differences. Extensive comparison experiments, ablation studies, and necessary hyperparameter analyses on benchmark datasets have demonstrated the promising performance of our model over several state-of-the-art competitors.
Explicit Granularity and Implicit Scale Correspondence Learning for Point-Supervised Video Moment Localization
[ "Kun Wang", "Hao Liu", "Lirong Jie", "Zixu Li", "Yupeng Hu", "Liqiang Nie" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=3Xx2MgYX67
@inproceedings{ ge2024customizing, title={Customizing Text-to-Image Generation with Inverted Interaction}, author={mengmeng Ge and Xu Jia and Takashi Isobe and Xiaomin Li and Qinghe Wang and Jing Mu and Dong Zhou and liwang Amd and Huchuan Lu and Lu Tian and Ashish Sirasao and Emad Barsoum}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=3Xx2MgYX67} }
Subject-driven image generation, aimed at customizing user-specified subjects, has experienced rapid progress. However, most of them focus on transferring the customized appearance of subjects. In this work, we consider a novel concept customization task, that is, capturing the interaction between subjects in exemplar images and transferring the learned concept of interaction to achieve customized text-to-image generation. Intrinsically, the interaction between subjects is diverse and is difficult to describe in only a few words. In addition, typical exemplar images are about the interaction between humans, which further intensifies the challenge of interaction-driven image generation with various categories of subjects. To address this task, we adopt a divide-and-conquer strategy and propose a two-stage interaction inversion framework. The framework begins by learning a pseudo-word for a single pose of each subject in the interaction. This is then employed to promote the learning of the concept for the interaction. In addition, language prior and cross-attention loss are incorporated into the optimization process to encourage the modeling of interaction. Extensive experiments demonstrate that the proposed methods are able to effectively invert the interactive pose from exemplar images and apply it to the customized generation with user-specified interaction.
Customizing Text-to-Image Generation with Inverted Interaction
[ "mengmeng Ge", "Xu Jia", "Takashi Isobe", "Xiaomin Li", "Qinghe Wang", "Jing Mu", "Dong Zhou", "liwang Amd", "Huchuan Lu", "Lu Tian", "Ashish Sirasao", "Emad Barsoum" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=3XaxpSG9F8
@inproceedings{ jiang2024sparseinteraction, title={SparseInteraction: Sparse Semantic Guidance for Radar and Camera 3D Object Detection}, author={Shengyin Jiang and Shaoqing Xu and lifang and Li Liu and Ziying Song and Yang Bo and Zhi-Xin Yang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=3XaxpSG9F8} }
Multi-modal fusion techniques, such as radar and images, enable a complementary and cost-effective perception of the surrounding environment regardless of lighting and weather conditions. However, existing fusion methods for surround-view images and radar are challenged by the inherent noise and positional ambiguity of radar, which leads to significant performance losses. To address this limitation effectively, our paper presents a robust, end-to-end fusion framework dubbed SparseInteraction. First, we introduce the Noisy Radar Filter (NRF) module to extract foreground features by creatively using queried semantic features from the image to filter out noisy radar features. Furthermore, we implement the Sparse Cross-Attention Encoder (SCAE) to effectively blend foreground radar features and image features to address positional ambiguity issues at a sparse level. Ultimately, to facilitate model convergence and improve performance, the foreground prior queries containing position information of the foreground radar are concatenated with predefined queries and fed into the subsequent transformer-based decoder. The experimental results demonstrate that the proposed fusion strategies markedly enhance detection performance and achieve new state-of-the-art results on the nuScenes benchmark. Source code is available at https://github.com/GG-Bonds/SparseInteraction.
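An illustrative sketch of semantic-guided radar filtering in the spirit of the module described above (not its actual design): each radar feature is scored by its similarity to image-derived semantic queries, and only the top-scoring (foreground) features are kept. The keep ratio, toy shapes, and function name are assumptions.

```python
import torch
import torch.nn.functional as F

def filter_radar_by_semantics(radar_feats, image_queries, keep_ratio=0.5):
    """radar_feats: (M, D) per-point radar features;
    image_queries: (Q, D) semantic queries pooled from the image branch.
    Keep the radar features most similar to any image query -- a stand-in
    for the paper's Noisy Radar Filter, not its actual implementation."""
    sim = F.normalize(radar_feats, dim=1) @ F.normalize(image_queries, dim=1).t()
    score = sim.max(dim=1).values                 # best match per radar point
    k = max(1, int(keep_ratio * radar_feats.size(0)))
    keep = score.topk(k).indices                  # indices of "foreground" points
    return radar_feats[keep], keep

radar = torch.randn(100, 64)
queries = torch.randn(10, 64)
kept, idx = filter_radar_by_semantics(radar, queries)
print(kept.shape, idx.shape)
```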
SparseInteraction: Sparse Semantic Guidance for Radar and Camera 3D Object Detection
[ "Shengyin Jiang", "Shaoqing Xu", "lifang", "Li Liu", "Ziying Song", "Yang Bo", "Zhi-Xin Yang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=3TzQJ12s7i
@inproceedings{ ukai2024adacoder, title={AdaCoder: Adaptive Prompt Compression for Programmatic Visual Question Answering}, author={Mahiro Ukai and Shuhei Kurita and Atsushi Hashimoto and Yoshitaka Ushiku and Nakamasa Inoue}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=3TzQJ12s7i} }
Visual question answering aims to provide responses to natural language questions given visual input. Recently, visual programmatic models (VPMs), which generate executable programs to answer questions through large language models (LLMs), have attracted research interest. However, they often require long input prompts to provide the LLM with sufficient API usage details to generate relevant code. To address this limitation, we propose AdaCoder, an adaptive prompt compression framework for VPMs. AdaCoder operates in two phases: a compression phase and an inference phase. In the compression phase, given a preprompt that describes all API definitions in the Python language with example snippets of code, a set of compressed preprompts is generated, each depending on a specific question type. In the inference phase, given an input question, AdaCoder predicts the question type and chooses the appropriate corresponding compressed preprompt to generate code to answer the question. Notably, AdaCoder employs a single frozen LLM and pre-defined prompts, negating the necessity of additional training and maintaining adaptability across different powerful black-box LLMs such as GPT and Claude. In experiments, we apply AdaCoder to ViperGPT and demonstrate that it reduces token length by 71.1%, while maintaining or even improving the performance of visual question answering.
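A hedged sketch of the two-phase idea described above: at inference time, classify the question type and prepend the corresponding compressed preprompt before asking the LLM to generate code. The question types, compressed preprompts, and keyword classifier below are hypothetical placeholders, not AdaCoder's actual components.

```python
# Hypothetical type-conditioned prompt selection; everything named here is
# an illustrative stand-in, not AdaCoder's real prompts or classifier.
COMPRESSED_PREPROMPTS = {
    "counting":  "API: find(name)->patches; len(patches) counts objects.",
    "spatial":   "API: find(name)->patches; patch.left_of/right_of/above(other).",
    "attribute": "API: find(name)->patches; patch.simple_query(question).",
}

def predict_question_type(question: str) -> str:
    q = question.lower()
    if "how many" in q or "count" in q:
        return "counting"
    if any(w in q for w in ("left", "right", "above", "below", "next to")):
        return "spatial"
    return "attribute"

def build_prompt(question: str) -> str:
    qtype = predict_question_type(question)
    return f"{COMPRESSED_PREPROMPTS[qtype]}\n# Question: {question}\n# Write code:"

print(build_prompt("How many dogs are in the image?"))
```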
AdaCoder: Adaptive Prompt Compression for Programmatic Visual Question Answering
[ "Mahiro Ukai", "Shuhei Kurita", "Atsushi Hashimoto", "Yoshitaka Ushiku", "Nakamasa Inoue" ]
Conference
poster
2407.19410
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=3R7pLqdi8C
@inproceedings{ tang2024dig, title={Dig a Hole and Fill in Sand: Adversary and Hiding Decoupled Steganography}, author={Weixuan Tang and Haoyu Yang and Yuan Rao and Zhili Zhou and Fei Peng}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=3R7pLqdi8C} }
Deep steganography is a technique that imperceptibly hides secret information in images via neural networks. Existing networks consist of two components, including a hiding component for information hiding and an adversary component for countering steganalyzers. However, these two components are two ends of a seesaw, and it is difficult to balance the tradeoff between message extraction accuracy and security performance by joint optimization. To address the issues, this paper proposes a steganographic method called AHDeS (Adversary-Hiding-Decoupled Steganography) under the Dig-and-Fill paradigm, wherein the adversary and hiding components can be decoupled into an optimization-based adversary module in the digging process and an INN-based hiding network in the filling process. Specifically, in the training stage, the INN is first trained to acquire the ability of message embedding. In the deployment stage, given the well-trained and fixed INN, the cover image is first iteratively optimized to enhance the security performance against steganalyzers, followed by the actual message embedding by the INN. Owing to the reversibility of the INN, security performance can be enhanced without sacrificing message extraction accuracy. Experimental results show that AHDeS can achieve state-of-the-art security performance and visual quality while maintaining satisfactory message extraction accuracy.
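A minimal sketch of the deployment-stage "dig then fill" loop described above: with the hiding network frozen, the cover image is iteratively perturbed to lower a differentiable steganalyzer's detection score, then the message is embedded. Both networks below are untrained placeholders, and the objective and hyperparameters are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Placeholder networks standing in for a pretrained steganalyzer and the
# INN-based hiding network; in practice both would be trained and frozen here.
steganalyzer = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                             nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))
hiding_net = nn.Conv2d(4, 3, 3, padding=1)   # stand-in for the INN

def dig_and_fill(cover, message, steps=20, lr=0.01):
    """'Dig': optimize the cover so the steganalyzer's stego score drops.
       'Fill': embed the message with the frozen hiding network."""
    cover_adv = cover.clone().requires_grad_(True)
    opt = torch.optim.Adam([cover_adv], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = steganalyzer(cover_adv).mean()    # lower score = "looks clean"
        loss.backward()
        opt.step()
        with torch.no_grad():
            cover_adv.clamp_(0, 1)               # keep a valid image
    stego = hiding_net(torch.cat([cover_adv.detach(), message], dim=1))
    return stego

cover = torch.rand(1, 3, 32, 32)
message = torch.rand(1, 1, 32, 32)
print(dig_and_fill(cover, message).shape)
```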
Dig a Hole and Fill in Sand: Adversary and Hiding Decoupled Steganography
[ "Weixuan Tang", "Haoyu Yang", "Yuan Rao", "Zhili Zhou", "Fei Peng" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=3KAqujSwS8
@inproceedings{ qi2024predicting, title={Predicting the Unseen: A Novel Dataset for Hidden Intention Localization in Pre-abnormal Analysis}, author={ZeHao Qi and Ruixu Zhang and Xinyi Hu and Wenxuan Liu and Zheng Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=3KAqujSwS8} }
Our paper introduces a novel video dataset specifically designed for Temporal Intention Localization (TIL), aimed at identifying hidden abnormal intention in densely populated and dynamically complex environments. Traditional Temporal Action Localization (TAL) frameworks, focusing on overt actions within constrained temporal intervals, often miss the subtleties of pre-abnormal actions that unfold over extended periods. Our dataset comprises 228 videos with 5790 clips, each annotated to capture fine-grained actions within ambiguous temporal boundaries using a Joint-Linear-Assignment methodology. This comprehensive approach enables detailed analysis of the evolution of abnormal intention over time. To address the detection of subtle, hidden intention, we developed the Intention-Action Fusion module, a creative approach that integrates dynamic feature fusion across 11 behavioral subcategories, significantly enhancing the model's ability to discern nuanced intention. This enhancement has led to performance improvements of up to 139\% in specific scenarios, dramatically boosting the model's sensitivity and interpretability, which is crucial for advancing the capabilities of proactive surveillance systems. By pushing the boundaries of current technology, our dataset and methodologies foster the development of proactive surveillance systems capable of preemptively identifying potential threats from nuanced behavioral patterns, encouraging further exploration into the complexities of intention beyond observable actions.
Predicting the Unseen: A Novel Dataset for Hidden Intention Localization in Pre-abnormal Analysis
[ "ZeHao Qi", "Ruixu Zhang", "Xinyi Hu", "Wenxuan Liu", "Zheng Wang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=3CkUXpwb2g
@inproceedings{ wang2024information, title={Information Diffusion Prediction with Graph Neural Ordinary Differential Equation Network}, author={Ding Wang and Wei Zhou and Songlin Hu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=3CkUXpwb2g} }
Information diffusion prediction aims to forecast the path of information spreading in social networks. Prior works generally consider the diffusion process to be driven by user correlations or preferences. Recent works focus on characterizing the dynamicity of user preferences and propose to capture users' dynamic preferences by discretizing the diffusion process into structure snapshots. Despite their effectiveness, these works summarize user preferences from partially observed structure snapshots, ignoring that users' preferences are evolving constantly. Moreover, discretizing the diffusion process makes these models overlook abundant structure information across different periods, reducing their ability to discover potential participants. To address the above issues, we propose a novel \textbf{G}raph Neural \textbf{O}rdinary \textbf{D}ifferential \textbf{E}quation \textbf{N}etwork (GODEN) for information diffusion prediction, which incorporates neural ordinary differential equations (ODE) to model the continuous dynamics of the diffusion process. Specifically, we design two coupled ODE functions on nodes and edges to describe their co-evolution dynamic and infer user dynamic preferences based on the solution of ODEs. Besides, we extract user correlations from a heterogeneous graph to complement user encoding for prediction. Finally, to predict the future user infections of the observed cascade, we represent its diffusion pattern in terms of user and temporal contexts and apply a multi-head attention module to attend to different contexts. Experimental results confirm our approach’s effectiveness on four real-world datasets, with our model outperforming the state-of-the-art diffusion prediction models.
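A hedged sketch of co-evolving node and edge states with coupled derivative functions, integrated here with a simple explicit Euler loop to keep the example dependency-free (an off-the-shelf ODE solver would normally be used). The derivative networks, step size, and toy graph are illustrative assumptions, not GODEN's actual design.

```python
import torch
import torch.nn as nn

class CoupledGraphODE(nn.Module):
    """Toy co-evolution of node states h (N, D) and edge states e (N, N, D):
    dh/dt depends on edge-aggregated neighbour messages, de/dt on the two
    endpoint nodes. Integrated with explicit Euler steps for simplicity."""
    def __init__(self, dim):
        super().__init__()
        self.f_node = nn.Linear(2 * dim, dim)
        self.f_edge = nn.Linear(3 * dim, dim)

    def forward(self, h, e, adj, t_steps=10, dt=0.1):
        for _ in range(t_steps):
            msg = torch.einsum("ij,ijd->id", adj, e)           # aggregate edge states
            dh = torch.tanh(self.f_node(torch.cat([h, msg], -1)))
            hi = h.unsqueeze(1).expand_as(e)                   # sender states
            hj = h.unsqueeze(0).expand_as(e)                   # receiver states
            de = torch.tanh(self.f_edge(torch.cat([hi, hj, e], -1)))
            h, e = h + dt * dh, e + dt * de                    # Euler update
        return h, e

N, D = 5, 8
model = CoupledGraphODE(D)
h, e = model(torch.randn(N, D), torch.randn(N, N, D), torch.ones(N, N))
print(h.shape, e.shape)
```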
Information Diffusion Prediction with Graph Neural Ordinary Differential Equation Network
[ "Ding Wang", "Wei Zhou", "Songlin Hu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=3BFo7AYjoa
@inproceedings{ zhao2024multigrained, title={Multi-grained Correspondence Learning of Audio-language Models for Few-shot Audio Recognition}, author={Shengwei Zhao and Xu Linhai and Yuying Liu and Shaoyi Du}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=3BFo7AYjoa} }
Large-scale pre-trained audio-language models excel in general multi-modal representation, facilitating their adaptation to downstream audio recognition tasks in a data-efficient manner. However, existing few-shot audio recognition methods based on audio-language models primarily focus on learning coarse-grained correlations, which are not sufficient to capture the intricate matching patterns between the multi-level information of audio and the diverse characteristics of category concepts. To address this gap, we propose multi-grained correspondence learning for bootstrapping audio-language models to improve audio recognition with few training samples. This approach leverages generative models to enrich multi-modal representation learning, mining the multi-level information of audio alongside the diverse characteristics of category concepts. Multi-grained matching patterns are then established through multi-grained key-value cache and multi-grained cross-modal contrast, enhancing the alignment between audio and category concepts. Additionally, we incorporate optimal transport to tackle temporal misalignment and semantic intersection issues in fine-grained correspondence learning, enabling flexible fine-grained matching. Our method achieves state-of-the-art results on multiple benchmark datasets for few-shot audio recognition, with comprehensive ablation experiments validating its effectiveness.
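An illustrative sketch of using entropic optimal transport (Sinkhorn iterations) to softly align audio-frame features with the tokens of a category description, one ingredient mentioned above; the regularization strength, iteration count, and toy features are assumptions and this is not the paper's full pipeline.

```python
import torch

def sinkhorn(cost, eps=0.1, iters=50):
    """Entropic OT plan for a cost matrix (n, m) with uniform marginals."""
    n, m = cost.shape
    K = torch.exp(-cost / eps)
    a, b = torch.full((n,), 1.0 / n), torch.full((m,), 1.0 / m)
    u, v = torch.ones(n), torch.ones(m)
    for _ in range(iters):
        u = a / (K @ v)
        v = b / (K.t() @ u)
    return u.unsqueeze(1) * K * v.unsqueeze(0)      # transport plan (n, m)

# toy alignment: audio frames vs. text tokens of a category description
audio = torch.nn.functional.normalize(torch.randn(20, 32), dim=1)
text = torch.nn.functional.normalize(torch.randn(7, 32), dim=1)
cost = 1.0 - audio @ text.t()                        # cosine distance
plan = sinkhorn(cost)
fine_grained_sim = (plan * (audio @ text.t())).sum() # OT-weighted similarity
print(plan.shape, float(fine_grained_sim))
```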
Multi-grained Correspondence Learning of Audio-language Models for Few-shot Audio Recognition
[ "Shengwei Zhao", "Xu Linhai", "Yuying Liu", "Shaoyi Du" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=3A2FE4Jtam
@inproceedings{ wu2024crossview, title={Cross-View Mutual Learning for Semi-Supervised Medical Image Segmentation}, author={Song Wu and Xiaoyu Wei and Xinyue Chen and Yazhou Ren and Jing He and Xiaorong Pu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=3A2FE4Jtam} }
Semi-supervised medical image segmentation has gained increasing attention due to its potential to alleviate the manual annotation burden. Mainstream methods typically involve two subnets and enforce a consistency objective to ensure that they produce consistent predictions for unlabeled data. However, they often ignore that the complementarity of model predictions is equally crucial. To realize the potential of the multi-subnet architecture, we propose a novel cross-view mutual learning method with a two-branch co-training framework. Specifically, we first introduce a novel conflict-based feature learning (CFL) that encourages the two subnets to learn distinct features from the same input. These distinct features are then decoded into complementary model predictions, allowing both subnets to understand the input from different views. More importantly, we propose cross-view mutual learning (CML) to maximize the effectiveness of CFL. This approach requires only modifications to the model inputs and supervisory signals, and implements a heterogeneous consistency objective to fully explore the complementarity of model predictions. Consequently, the aggregated predictions can effectively capture both consistency and complementarity across two subnets. Experimental results on three public datasets demonstrate the superiority of CML over previous SoTA methods. Code is available at https://github.com/SongwuJob/CML.
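A minimal sketch of the two-branch co-training idea described above: one term pushes the two subnets toward dissimilar features (a simple stand-in for conflict-based learning), while the fused prediction of both branches supervises each branch on unlabeled data. The placeholder networks, losses, and weighting are illustrative assumptions, not the paper's exact CFL/CML formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Two placeholder segmentation subnets (stand-ins for the paper's branches).
netA = nn.Conv2d(1, 2, 3, padding=1)
netB = nn.Conv2d(1, 2, 3, padding=1)

def co_training_step(x_unlabeled):
    fA, fB = netA(x_unlabeled), netB(x_unlabeled)

    # "Conflict"-style term: penalize similarity between branch features
    # (an illustrative negative-cosine stand-in for the CFL objective).
    conflict = F.cosine_similarity(fA.flatten(1), fB.flatten(1), dim=1).mean()

    # Aggregate both predictions, then use the fused pseudo-label to
    # supervise each branch (a simple heterogeneous-consistency stand-in).
    pA, pB = fA.softmax(1), fB.softmax(1)
    fused = ((pA + pB) / 2).argmax(1).detach()
    consistency = F.cross_entropy(fA, fused) + F.cross_entropy(fB, fused)

    return consistency + 0.1 * conflict

x = torch.randn(2, 1, 64, 64)
print(co_training_step(x))
```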
Cross-View Mutual Learning for Semi-Supervised Medical Image Segmentation
[ "Song Wu", "Xiaoyu Wei", "Xinyue Chen", "Yazhou Ren", "Jing He", "Xiaorong Pu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=38ylN0SDMk
@inproceedings{ chen2024tgcapvt, title={{TGCA}-{PVT}: Topic-Guided Context-Aware Pyramid Vision Transformer for Sticker Emotion Recognition}, author={Jian Chen and Wei Wang and Yuzhu Hu and Junxin Chen and Han Liu and Xiping Hu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=38ylN0SDMk} }
Online chatting has become an essential aspect of our daily interactions, with stickers emerging as a prevalent tool for conveying emotions more vividly than plain text. While conventional image emotion recognition focuses on global features, sticker emotion recognition necessitates incorporating both global and local features, along with additional modalities like text. To address this, we introduce a topic ID-guided transformer method to facilitate a more nuanced analysis of the stickers. Considering that each sticker will have a topic, and stickers with the same topic will have the same object, we introduce a topic ID and regard the stickers with the same topic ID as topic context. Our approach encompasses a novel topic-guided context-aware module and a topic-guided attention mechanism, enabling the extraction of comprehensive topic context features from stickers sharing the same topic ID, significantly enhancing emotion recognition accuracy. Moreover, we integrate a frequency linear attention module to leverage frequency domain information to better capture the object information of the stickers and a locally enhanced re-attention mechanism for improved local feature extraction. Extensive experiments and ablation studies on the large-scale sticker emotion dataset SER30k validate the efficacy of our method. Experimental results show that our proposed method obtains the best accuracy on both single-modal and multi-modal sticker emotion recognition.
TGCA-PVT: Topic-Guided Context-Aware Pyramid Vision Transformer for Sticker Emotion Recognition
[ "Jian Chen", "Wei Wang", "Yuzhu Hu", "Junxin Chen", "Han Liu", "Xiping Hu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=35rgn7rIn3
@inproceedings{ wang2024speechee, title={Speech{EE}: A Novel Benchmark for Speech Event Extraction}, author={Bin Wang and Meishan Zhang and Hao Fei and Yu Zhao and Bobo Li and Shengqiong Wu and Wei Ji and Min Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=35rgn7rIn3} }
Event extraction (EE) is a critical direction in the field of information extraction, laying an important foundation for the construction of structured knowledge bases. EE from text has received ample research and attention for years, yet there are numerous real-world applications that require direct information acquisition from speech signals, online meeting minutes, interview summaries, press releases, etc. While EE from speech has remained under-explored, this paper fills the gap by pioneering SpeechEE, defined as detecting event predicates and arguments from a given speech audio. To benchmark the SpeechEE task, we first construct a large-scale high-quality dataset. Based on textual EE datasets under the sentence, document, and dialogue scenarios, we convert texts into speeches through both manual real-person narration and automatic synthesis, empowering the data with diverse scenarios, languages, domains, ambiences, and speaker styles. Further, to effectively address the key challenges in the task, we tailor an E2E SpeechEE system based on the encoder-decoder architecture, where a novel Shrinking Unit module and a retrieval-aided decoding mechanism are devised. Extensive experimental results on all SpeechEE subsets demonstrate the efficacy of the proposed model, offering a strong baseline for the task. Finally, as the first work on this topic, we shed light on key directions for future research.
SpeechEE: A Novel Benchmark for Speech Event Extraction
[ "Bin Wang", "Meishan Zhang", "Hao Fei", "Yu Zhao", "Bobo Li", "Shengqiong Wu", "Wei Ji", "Min Zhang" ]
Conference
poster
2408.09462
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=35eYE4CR3C
@inproceedings{ qi2024deblurring, title={Deblurring Neural Radiance Fields with Event-driven Bundle Adjustment}, author={Yunshan Qi and Lin Zhu and Yifan Zhao and Nan Bao and Jia Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=35eYE4CR3C} }
Neural Radiance Fields (NeRF) achieves impressive 3D representation learning and novel view synthesis results with high-quality multi-view images as input. However, motion blur in images often occurs in low-light and high-speed motion scenes, which significantly degrades the reconstruction quality of NeRF. Previous deblurring NeRF methods struggle to estimate pose and lighting changes during the exposure time, making them unable to accurately model the motion blur. The bio-inspired event camera, measuring intensity changes with high temporal resolution, makes up for this information deficiency. In this paper, we propose Event-driven Bundle Adjustment for Deblurring Neural Radiance Fields (EBAD-NeRF) to jointly optimize the learnable poses and NeRF parameters by leveraging the hybrid event-RGB data. An intensity-change-metric event loss and a photo-metric blur loss are introduced to strengthen the explicit modeling of camera motion blur. Experiments on both synthetic and real-captured data demonstrate that EBAD-NeRF can obtain an accurate camera trajectory during the exposure time and learn sharper 3D representations compared to prior works.
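A hedged sketch of an intensity-change event loss in the spirit of the one mentioned above: the log-intensity difference between renders at two timestamps inside the exposure should match the accumulated signed event count scaled by the contrast threshold. This mirrors the standard event generation model; the tensors here are plain stand-ins for NeRF renders and the threshold is an assumed value.

```python
import torch

def event_loss(render_t0, render_t1, event_count, contrast_threshold=0.2, eps=1e-6):
    """render_t0/render_t1: (H, W) grayscale renders at two timestamps inside
    the exposure; event_count: (H, W) signed sum of event polarities between
    them. Enforces log-intensity change ~= threshold * event count."""
    pred_change = torch.log(render_t1 + eps) - torch.log(render_t0 + eps)
    target_change = contrast_threshold * event_count
    return torch.mean((pred_change - target_change) ** 2)

H, W = 32, 32
I0 = torch.rand(H, W) + 0.1
events = torch.randint(-3, 4, (H, W)).float()
I1 = I0 * torch.exp(0.2 * events)            # synthetic consistent pair
print(event_loss(I0, I1, events))            # ~0 for a consistent pair
```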
Deblurring Neural Radiance Fields with Event-driven Bundle Adjustment
[ "Yunshan Qi", "Lin Zhu", "Yifan Zhao", "Nan Bao", "Jia Li" ]
Conference
poster
2406.14360
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=31rrsYnriG
@inproceedings{ xu2024ggeditor, title={{GG}-Editor: Locally Editing 3D Avatars with Multimodal Large Language Model Guidance}, author={Yunqiu Xu and Linchao Zhu and Yi Yang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=31rrsYnriG} }
Text-driven 3D avatar customization has attracted increasing attention in recent years, where precisely editing specific local parts of avatars with only text prompts is particularly challenging. Previous editing methods usually use segmentation or cross-attention masks as constraints for local editing. Although these masks tightly cover existing objects/parts, they may limit editing methods to create drastic geometry deformations beyond the covered contents. From a different perspective, this paper presents a GPT-guided local avatar editing framework, namely GG-Editor. Specifically, GG-Editor progressively mines more reasonable candidate editing regions by harnessing multimodal large language models, which already organically assimilate common-sense human knowledge. In order to improve the editing quality of the local areas, GG-Editor explicitly decouples the geometry/appearance optimization, and adopts a global-local synergy editing strategy with GPT-generated local prompts. Moreover, to preserve concepts residing in source avatars, GG-Editor proposes an orthogonal denoising score that orthogonally decomposes editing directions and introduces an explicit term for preservation. Comprehensive experiments demonstrate that GG-Editor with only textual prompts achieves realistic and high-fidelity local editing results, significantly surpassing prior works. Project page: https://xuyunqiu.github.io/GG-Editor/.
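A minimal sketch of the orthogonal decomposition idea referenced above: remove from an editing gradient its projection onto a preservation direction, so the applied update is orthogonal to what should be kept. This is a generic vector-projection illustration, not the paper's exact orthogonal denoising score; the tensor shapes are arbitrary stand-ins for score terms.

```python
import torch

def orthogonalize_edit_direction(g_edit, g_preserve, eps=1e-8):
    """Subtract from g_edit its projection onto g_preserve so the returned
    direction is orthogonal to the preservation direction."""
    g_e = g_edit.flatten()
    g_p = g_preserve.flatten()
    proj = (g_e @ g_p) / (g_p @ g_p + eps) * g_p
    return (g_e - proj).view_as(g_edit)

g_edit = torch.randn(3, 64, 64)
g_keep = torch.randn(3, 64, 64)
g_orth = orthogonalize_edit_direction(g_edit, g_keep)
print(float(g_orth.flatten() @ g_keep.flatten()))   # ~0: orthogonal to g_keep
```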
GG-Editor: Locally Editing 3D Avatars with Multimodal Large Language Model Guidance
[ "Yunqiu Xu", "Linchao Zhu", "Yi Yang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0