arxiv_id (stringlengths 10-10) | published (stringlengths 20-20) | titles (stringlengths 9-243) | authors (sequencelengths 1-389) | abstract (stringlengths 96-3.09k) | categories (sequencelengths 1-10) | selected (bool, 2 classes)
---|---|---|---|---|---|---
2305.09107 | 2023-05-16T02:12:57Z | Is a Video worth $n\times n$ Images? A Highly Efficient Approach to
Transformer-based Video Question Answering | [
"Chenyang Lyu",
"Tianbo Ji",
"Yvette Graham",
"Jennifer Foster"
] | Conventional Transformer-based Video Question Answering (VideoQA) approaches
generally encode frames independently through one or more image encoders
followed by interaction between the frames and the question. However, such a
schema incurs significant memory use and inevitably slows down training and
inference. In this work, we present a highly efficient approach for
VideoQA based on existing vision-language pre-trained models: we
concatenate the video frames into an $n\times n$ matrix and then convert it to one
image. By doing so, we reduce the number of image-encoder passes from $n^{2}$ to $1$
while maintaining the temporal structure of the original video. Experimental
results on MSRVTT and TrafficQA show that our proposed approach achieves
state-of-the-art performance with nearly $4\times$ faster speed and only 30%
memory use. We show that by integrating our approach into VideoQA systems we
can achieve comparable, even superior, performance with a significant speed up
for training and inference. We believe the proposed approach can facilitate
VideoQA-related research by reducing the computational requirements for those
who have limited access to budgets and resources. Our code will be made
publicly available for research use. | [
"cs.CV",
"cs.AI",
"cs.CL",
"cs.MM"
] | false |
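The frame-tiling trick in the abstract above is easy to sketch. The following is a minimal illustration with NumPy, not the paper's actual preprocessing code; the function name `frames_to_grid` and the toy frame shapes are assumptions for the example:

```python
import numpy as np

def frames_to_grid(frames, n):
    """Tile n*n video frames of shape (H, W, C) into one (n*H, n*W, C) image,
    preserving temporal order row by row (left-to-right, top-to-bottom)."""
    assert len(frames) == n * n, "expected exactly n*n frames"
    rows = [np.concatenate(frames[i * n:(i + 1) * n], axis=1)  # one row of n frames
            for i in range(n)]
    return np.concatenate(rows, axis=0)

# Example: 4 frames (n=2) of shape (2, 3, 3) -> one (4, 6, 3) image.
frames = [np.full((2, 3, 3), t, dtype=np.uint8) for t in range(4)]
grid = frames_to_grid(frames, n=2)
print(grid.shape)  # (4, 6, 3)
```

The resulting single image can then be fed through one image-encoder pass instead of $n^{2}$ separate ones.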
2305.09145 | 2023-05-16T03:51:34Z | Deep ReLU Networks Have Surprisingly Simple Polytopes | [
"Feng-Lei Fan",
"Wei Huang",
"Xiangru Zhong",
"Lecheng Ruan",
"Tieyong Zeng",
"Huan Xiong",
"Fei Wang"
] | A ReLU network is a piecewise linear function over polytopes. Figuring out
the properties of such polytopes is of fundamental importance for the research
and development of neural networks. So far, both theoretical and empirical
studies on polytopes have stayed at the level of counting their number, which is
far from a complete characterization of polytopes. To upgrade the
characterization to a new level, here we propose to study the shapes of
polytopes via the number of simplices obtained by triangulating the polytope.
Then, by computing and analyzing the histogram of simplices across polytopes,
we find that a ReLU network has relatively simple polytopes under both
initialization and gradient descent, although these polytopes theoretically can
be rather diverse and complicated. This finding can be appreciated as a novel
implicit bias. Next, we use nontrivial combinatorial derivation to
theoretically explain why adding depth does not create a more complicated
polytope by bounding the average number of faces of polytopes with a function
of the dimensionality. Our results concretely reveal what kind of simple
functions a network learns and its space partition property. Also, by
characterizing the shape of polytopes, the number of simplices can be leveraged
for other problems, \textit{e.g.}, serving as a generic functional complexity
measure to explain the power of popular shortcut networks such as ResNet and
analyzing the impact of different regularization strategies on a network's
space partition. | [
"cs.LG",
"cs.AI",
"cs.CV",
"cs.MM"
] | false |
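As a rough illustration of the polytopes the abstract above refers to, one can count the linear regions of a small ReLU network by enumerating the distinct activation patterns over a grid of inputs. This sketches only the region-counting baseline the authors go beyond (not their simplex-histogram method), and the network size here is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
# A tiny one-hidden-layer ReLU net from R^2, with 6 hidden units.
W1, b1 = rng.normal(size=(6, 2)), rng.normal(size=6)

# Inputs sharing an activation pattern (which ReLUs fire) lie in the same
# polytope of the network's piecewise-linear input-space partition.
grid = np.linspace(-3.0, 3.0, 200)
xs = np.stack(np.meshgrid(grid, grid), axis=-1).reshape(-1, 2)
patterns = xs @ W1.T + b1 > 0
n_regions = len(np.unique(patterns, axis=0))
# 6 hyperplanes in the plane cut out at most 1 + 6 + C(6,2) = 22 regions.
print(n_regions)
```

Counting the simplices in a triangulation of each such region, as the paper proposes, then refines this count into a shape characterization.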
2305.09212 | 2023-05-16T06:41:25Z | Cross-Modal Global Interaction and Local Alignment for Audio-Visual
Speech Recognition | [
"Yuchen Hu",
"Ruizhe Li",
"Chen Chen",
"Heqing Zou",
"Qiushi Zhu",
"Eng Siong Chng"
] | Audio-visual speech recognition (AVSR) research has achieved great success
recently by improving the noise robustness of audio-only automatic speech
recognition (ASR) with noise-invariant visual information. However, most
existing AVSR approaches simply fuse the audio and visual features by
concatenation, without explicit interactions to capture the deep correlations
between them, which results in sub-optimal multimodal representations for the
downstream speech recognition task. In this paper, we propose a cross-modal
global interaction and local alignment (GILA) approach for AVSR, which captures
the deep audio-visual (A-V) correlations from both global and local
perspectives. Specifically, we design a global interaction model to capture the
A-V complementary relationship at the modality level, as well as a local alignment
approach to model the A-V temporal consistency at the frame level. Such a holistic
view of cross-modal correlations enables better multimodal representations for
AVSR. Experiments on public benchmarks LRS3 and LRS2 show that our GILA
outperforms the supervised learning state-of-the-art. | [
"eess.AS",
"cs.CV",
"cs.MM",
"cs.SD"
] | false |
2305.09275 | 2023-05-16T08:29:33Z | Rapid Adaptation in Online Continual Learning: Are We Evaluating It
Right? | [
"Hasan Abed Al Kader Hammoud",
"Ameya Prabhu",
"Ser-Nam Lim",
"Philip H. S. Torr",
"Adel Bibi",
"Bernard Ghanem"
] | We revisit the common practice of evaluating adaptation of Online Continual
Learning (OCL) algorithms through the metric of online accuracy, which measures
the accuracy of the model on the immediate next few samples. However, we show
that this metric is unreliable, as even vacuous blind classifiers, which do not
use input images for prediction, can achieve unrealistically high online
accuracy by exploiting spurious label correlations in the data stream. Our
study reveals that existing OCL algorithms can also achieve high online
accuracy, but perform poorly in retaining useful information, suggesting that
they unintentionally learn spurious label correlations. To address this issue,
we propose a novel metric for measuring adaptation based on the accuracy on the
near-future samples, where spurious correlations are removed. We benchmark
existing OCL approaches using our proposed metric on large-scale datasets under
various computational budgets and find that better generalization can be
achieved by retaining and reusing past seen information. We believe that our
proposed metric can aid in the development of truly adaptive OCL methods. We
provide code to reproduce our results at
https://github.com/drimpossible/EvalOCL. | [
"cs.LG",
"cs.AI",
"cs.CV"
] | false |
2305.09276 | 2023-05-16T08:30:45Z | Noise robust neural network architecture | [
"Xiong Yunuo",
"Xiong Hongwei"
] | We propose a neural network architecture (the dune neural network) for
recognizing general noisy images without adding any artificial noise to the
training data. By representing each free parameter of the network as an
uncertainty interval, and applying a linear transformation to each input
element, we show that the resulting architecture achieves decent noise
robustness when faced with input data corrupted by white noise. We apply simple dune
neural networks to the MNIST dataset and demonstrate that even for very noisy
input images which are hard for humans to recognize, our approach achieves
better test-set accuracy than humans without dataset augmentation. We also find
that our method is robust to many other examples with various background
patterns added. | [
"cs.CV",
"cs.AI",
"cs.LG"
] | false |
2305.09302 | 2023-05-16T09:21:56Z | Pink-Eggs Dataset V1: A Step Toward Invasive Species Management Using
Deep Learning Embedded Solutions | [
"Di Xu",
"Yang Zhao",
"Xiang Hao",
"Xin Meng"
] | We introduce a novel dataset consisting of images depicting pink eggs that
have been identified as Pomacea canaliculata eggs, accompanied by corresponding
bounding box annotations. The purpose of this dataset is to aid researchers in
the analysis of the spread of Pomacea canaliculata species by utilizing deep
learning techniques, as well as supporting other investigative pursuits that
require visual data pertaining to the eggs of Pomacea canaliculata. It is worth
noting, however, that the identity of the eggs in question is not definitively
established, as other species within the same taxonomic family have been
observed to lay similar-looking eggs in regions of the Americas. Therefore, a
crucial prerequisite to any decision regarding the elimination of these eggs
would be to establish with certainty whether they are exclusively attributable
to invasive Pomacea canaliculata or if other species are also involved. The
dataset is available at https://www.kaggle.com/datasets/deeshenzhen/pinkeggs | [
"cs.CV",
"cs.AI",
"eess.AS"
] | false |
2305.09327 | 2023-05-16T10:04:30Z | Improved Type III solar radio burst detection using congruent deep
learning models | [
"Jeremiah Scully",
"Ronan Flynn",
"Peter Gallagher",
"Eoin Carley",
"Mark Daly"
] | Solar flares are energetic events in the solar atmosphere that are often
linked with solar radio bursts (SRBs). SRBs are observed at metric to
decametric wavelengths and are classified into five spectral classes (Type
I--V) based on their signature in dynamic spectra. The automatic detection and
classification of SRBs is a challenge due to their heterogeneous form.
Near-realtime detection and classification of SRBs has become a necessity in
recent years due to large data rates generated by advanced radio telescopes
such as the LOw Frequency ARray (LOFAR). In this study, we implement congruent
deep learning models to automatically detect and classify Type III SRBs. We
generated simulated Type III SRBs, which were comparable to Type IIIs seen in
real observations, using a deep learning method known as a Generative Adversarial
Network (GAN). This simulated data was combined with observations from LOFAR to
produce a training set that was used to train an object detection model known
as YOLOv2 (You Only Look Once). Using this congruent deep learning model
system, we can accurately detect Type III SRBs at a mean Average Precision
(mAP) value of 77.71%. | [
"astro-ph.SR",
"astro-ph.IM",
"cs.CV"
] | false |
2305.09333 | 2023-05-16T10:15:44Z | Multi-modal Visual Understanding with Prompts for Semantic Information
Disentanglement of Image | [
"Yuzhou Peng"
] | Multi-modal visual understanding of images with prompts involves using
various visual and textual cues to enhance the semantic understanding of
images. This approach combines both vision and language processing to generate
more accurate predictions and recognition of images. By utilizing prompt-based
techniques, models can learn to focus on certain features of an image to
extract useful information for downstream tasks. Additionally, multi-modal
understanding can improve upon single modality models by providing more robust
representations of images. Overall, the combination of visual and textual
information is a promising area of research for advancing image recognition and
understanding. In this paper, we explore a number of prompt design methods and
propose a new method for better extraction of semantic information. | [
"cs.CV",
"cs.AI",
"cs.CL"
] | false |
2305.09510 | 2023-05-16T15:03:20Z | Real-time Simultaneous Multi-Object 3D Shape Reconstruction, 6DoF Pose
Estimation and Dense Grasp Prediction | [
"Shubham Agrawal",
"Nikhil Chavan-Dafle",
"Isaac Kasahara",
"Selim Engin",
"Jinwook Huh",
"Volkan Isler"
] | Robotic manipulation systems operating in complex environments rely on
perception systems that provide information about the geometry (pose and 3D
shape) of the objects in the scene along with other semantic information such
as object labels. This information is then used for choosing the feasible
grasps on relevant objects. In this paper, we present a novel method to provide
this geometric and semantic information of all objects in the scene as well as
feasible grasps on those objects simultaneously. The main advantage of our
method is its speed as it avoids sequential perception and grasp planning
steps. With detailed quantitative analysis, we show that our method delivers
competitive performance compared to the state-of-the-art dedicated methods for
object shape, pose, and grasp predictions while providing fast inference at 30
frames per second. | [
"cs.RO",
"cs.AI",
"cs.CV",
"cs.LG",
"I.4.5; I.4.8; I.4.10; I.2.9; I.2.10; I.6.3"
] | false |
2305.09564 | 2023-05-16T16:00:48Z | Image Reconstruction using Superpixel Clustering and Tensor Completion | [
"Maame G. Asante-Mensah",
"Anh Huy Phan",
"Salman Ahmadi-Asl",
"Zaher Al Aghbari",
"Andrzej Cichocki"
] | This paper presents a pixel selection method for compact image representation
based on superpixel segmentation and tensor completion. Our method divides the
image into several regions that capture important textures or semantics and
selects a representative pixel from each region to store. We experiment with
different criteria for choosing the representative pixel and find that the
centroid pixel performs the best. We also propose two smooth tensor completion
algorithms that can effectively reconstruct different types of images from the
selected pixels. Our experiments show that our superpixel-based method achieves
better results than uniform sampling for various missing ratios. | [
"cs.CV",
"cs.NA",
"math.NA"
] | false |
2305.09610 | 2023-05-16T17:02:57Z | Concurrent Misclassification and Out-of-Distribution Detection for
Semantic Segmentation via Energy-Based Normalizing Flow | [
"Denis Gudovskiy",
"Tomoyuki Okuno",
"Yohei Nakata"
] | Recent semantic segmentation models accurately classify test-time examples
that are similar to a training dataset distribution. However, their
discriminative closed-set approach is not robust in practical data setups with
distributional shifts and out-of-distribution (OOD) classes. As a result, the
predicted probabilities can be very imprecise when used as confidence scores at
test time. To address this, we propose a generative model for concurrent
in-distribution misclassification (IDM) and OOD detection that relies on a
normalizing flow framework. The proposed flow-based detector with
energy-based inputs (FlowEneDet) can extend previously deployed segmentation
models without their time-consuming retraining. Our FlowEneDet results in a
low-complexity architecture with marginal increase in the memory footprint.
FlowEneDet achieves promising results on Cityscapes, Cityscapes-C, FishyScapes
and SegmentMeIfYouCan benchmarks in IDM/OOD detection when applied to
pretrained DeepLabV3+ and SegFormer semantic segmentation models. | [
"cs.CV",
"cs.AI",
"cs.LG"
] | false |
2305.09641 | 2023-05-16T17:42:45Z | FitMe: Deep Photorealistic 3D Morphable Model Avatars | [
"Alexandros Lattas",
"Stylianos Moschoglou",
"Stylianos Ploumpis",
"Baris Gecer",
"Jiankang Deng",
"Stefanos Zafeiriou"
] | In this paper, we introduce FitMe, a facial reflectance model and a
differentiable rendering optimization pipeline, that can be used to acquire
high-fidelity renderable human avatars from single or multiple images. The
model consists of a multi-modal style-based generator, that captures facial
appearance in terms of diffuse and specular reflectance, and a PCA-based shape
model. We employ a fast differentiable rendering process that can be used in an
optimization pipeline, while also achieving photorealistic facial shading. Our
optimization process accurately captures both the facial reflectance and shape
in high detail, by exploiting the expressivity of the style-based latent
representation and of our shape model. FitMe achieves state-of-the-art
reflectance acquisition and identity preservation on single "in-the-wild"
facial images, while producing impressive scan-like results when given
multiple unconstrained facial images pertaining to the same identity. In
contrast with recent implicit avatar reconstructions, FitMe requires only one
minute and produces relightable mesh and texture-based avatars that can be
used by end-user applications. | [
"cs.CV",
"cs.GR",
"cs.LG",
"I.2.10; I.3.7; I.4.1"
] | true |
2305.09736 | 2023-05-16T18:08:24Z | ADDSL: Hand Gesture Detection and Sign Language Recognition on Annotated
Danish Sign Language | [
"Sanyam Jain"
] | For a long time, detecting hand gestures and recognizing them as letters or
numbers has been a challenging task. This creates communication barriers for
individuals with disabilities. This paper introduces a new dataset, the
Annotated Dataset for Danish Sign Language (ADDSL). Annotations for the
dataset were made using the open-source tool LabelImg in the YOLO format. Using
this dataset, a one-stage object detector model (YOLOv5) was trained with the
CSP-DarkNet53 backbone and YOLOv3 head to recognize letters (A-Z) and numbers
(0-9) using only seven unique images per class (without augmentation). Five
models were trained with 350 epochs, resulting in an average inference time of
9.02 ms per image and a best accuracy of 92% when compared to previous
research. Our results show that the modified model is efficient and more accurate
than existing work in the same field. The code repository for our model is
available at the GitHub repository https://github.com/s4nyam/pvt-addsl. | [
"cs.CV",
"cs.AI",
"cs.LG"
] | false |
2305.09817 | 2023-05-16T21:46:28Z | A Method for Training-free Person Image Picture Generation | [
"Tianyu Chen"
] | The current state-of-the-art Diffusion model has demonstrated excellent
results in generating images. However, the images are monotonous and are mostly
the result of the distribution of images of people in the training set, making
it challenging to generate multiple images for a fixed number of individuals.
This problem can often only be solved by fine-tuning the model.
This means that a separate model must be trained for each individual or animated
character to be drawn, and the hardware requirements and cost of this training are
often beyond the reach of average users, who make up the largest group. To
solve this problem, the Character Image Feature Encoder model proposed in this
paper enables users to make the character in a generated image match their
expectations simply by providing a picture of that character. In addition,
various details can be adjusted during the process
using prompts. Unlike traditional Image-to-Image models, the Character Image
Feature Encoder extracts only the relevant image features, rather than
information about the model's composition or movements. In addition, the
Character Image Feature Encoder can be adapted to different models after
training. The proposed model can be conveniently incorporated into the Stable
Diffusion generation process without modifying the model's ontology or used in
combination with Stable Diffusion as a joint model. | [
"cs.CV",
"cs.AI",
"cs.GR",
"cs.LG"
] | false |
2305.09828 | 2023-05-16T22:12:25Z | Mimetic Initialization of Self-Attention Layers | [
"Asher Trockman",
"J. Zico Kolter"
] | It is notoriously difficult to train Transformers on small datasets;
typically, large pre-trained models are instead used as the starting point. We
explore the weights of such pre-trained Transformers (particularly for vision)
to attempt to find reasons for this discrepancy. Surprisingly, we find that
simply initializing the weights of self-attention layers so that they "look"
more like their pre-trained counterparts allows us to train vanilla
Transformers faster and to higher final accuracies, particularly on vision
tasks such as CIFAR-10 and ImageNet classification, where we see gains in
accuracy of over 5% and 4%, respectively. Our initialization scheme is closed
form, learning-free, and very simple: we set the product of the query and key
weights to be approximately the identity, and the product of the value and
projection weights to approximately the negative identity. As this mimics the
patterns we saw in pre-trained Transformers, we call the technique "mimetic
initialization". | [
"cs.CV",
"cs.AI",
"cs.LG",
"stat.ML"
] | false |
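The closed-form recipe in the abstract above can be sketched directly: make the query-key product roughly $+\alpha I$ and the value-projection product roughly $-\beta I$. The scales and noise level below are illustrative guesses, not the values used in the paper:

```python
import numpy as np

def mimetic_init(d, alpha=0.7, beta=0.7, noise_std=0.02, seed=0):
    """Initialize one self-attention layer so that W_q @ W_k.T is roughly
    +alpha*I and W_v @ W_proj is roughly -beta*I, plus small noise."""
    rng = np.random.default_rng(seed)
    eye = np.eye(d)
    w_q = np.sqrt(alpha) * eye + rng.normal(0, noise_std, (d, d))
    w_k = np.sqrt(alpha) * eye + rng.normal(0, noise_std, (d, d))
    w_v = np.sqrt(beta) * eye + rng.normal(0, noise_std, (d, d))
    w_proj = -np.sqrt(beta) * eye + rng.normal(0, noise_std, (d, d))
    return w_q, w_k, w_v, w_proj

w_q, w_k, w_v, w_proj = mimetic_init(64)
qk, vp = w_q @ w_k.T, w_v @ w_proj  # near +0.7*I and -0.7*I respectively
```

No learning is involved; the weights simply start out "looking" like those of a pre-trained Transformer.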
2305.15421 | 2023-05-16T17:28:06Z | Generative Adversarial Networks for Brain Images Synthesis: A Review | [
"Firoozeh Shomal Zadeh",
"Sevda Molani",
"Maysam Orouskhani",
"Marziyeh Rezaei",
"Mehrzad Shafiei",
"Hossein Abbasi"
] | In medical imaging, image synthesis is the estimation process of one image
(sequence, modality) from another image (sequence, modality). Since images with
different modalities provide diverse biomarkers and capture various features,
multi-modality imaging is crucial in medicine. While multi-modality screening is
expensive and time-consuming for radiologists to report, image
synthesis methods are capable of artificially generating missing modalities.
Deep learning models can automatically capture and extract high-dimensional
features. In particular, the generative adversarial network (GAN), one of the most
popular generative deep learning methods, uses convolutional networks as
generators, and estimated images are discriminated as true or false by a
discriminator network. This review surveys brain image synthesis via GANs. We
summarize the recent developments of GANs for cross-modality brain image
synthesis including CT to PET, CT to MRI, MRI to PET, and vice versa. | [
"eess.IV",
"cs.CV",
"cs.LG",
"68T07",
"I.2.m"
] | false |
2305.10450 | 2023-05-16T19:52:40Z | Understanding of Normal and Abnormal Hearts by Phase Space Analysis and
Convolutional Neural Networks | [
"Bekir Yavuz Koc",
"Taner Arsan",
"Onder Pekcan"
] | Cardiac diseases are one of the leading mortality factors in modern,
industrialized societies, which cause high expenses in public health systems.
Due to high costs, developing analytical methods to improve cardiac diagnostics
is essential. The heart's electric activity was first modeled using a set of
nonlinear differential equations. Following this, variations of cardiac spectra
originating from deterministic dynamics are investigated. Analyzing a normal
human heart's power spectra reveals the His-Purkinje network, which possesses a
fractal-like structure. Phase space trajectories are extracted from the time
series electrocardiogram (ECG) graph with a third-order derivative Taylor series.
In this study, phase space analysis and Convolutional Neural Network
(CNN) methods are applied to 44 records from the MIT-BIH database recorded with
MLII. In order to increase accuracy, a straight line is drawn between the
highest Q-R distance in the phase space images of the records. Binary CNN
classification is used to determine healthy or unhealthy hearts. With a 90.90%
accuracy rate, this model could classify records according to their heart
status. | [
"eess.IV",
"cs.CL",
"cs.CV",
"cs.LG",
"cs.NE",
"eess.SP",
"68",
"I.5.1"
] | false |
2305.09137 | 2023-05-16T03:38:06Z | Pre-Training to Learn in Context | [
"Yuxian Gu",
"Li Dong",
"Furu Wei",
"Minlie Huang"
] | In-context learning, where pre-trained language models learn to perform tasks
from task examples and instructions in their contexts, has attracted much
attention in the NLP community. However, the ability of in-context learning is
not fully exploited because language models are not explicitly trained to learn
in context. To this end, we propose PICL (Pre-training for In-Context
Learning), a framework to enhance the language models' in-context learning
ability by pre-training the model on a large collection of "intrinsic tasks" in
the general plain-text corpus using the simple language modeling objective.
PICL encourages the model to infer and perform tasks by conditioning on the
contexts while maintaining task generalization of pre-trained models. We
evaluate the in-context learning performance of the model trained with PICL on
seven widely-used text classification datasets and the Super-NaturalInstructions
benchmark, which contains 100+ NLP tasks formulated as text generation. Our
experiments show that PICL is more effective and task-generalizable than a
range of baselines, outperforming larger language models with nearly 4x
parameters. The code is publicly available at https://github.com/thu-coai/PICL. | [
"cs.CL"
] | true |
2305.09154 | 2023-05-16T04:15:25Z | Progressive Translation: Improving Domain Robustness of Neural Machine
Translation with Intermediate Sequences | [
"Chaojun Wang",
"Yang Liu",
"Wai Lam"
] | Previous studies show that intermediate supervision signals benefit various
Natural Language Processing tasks. However, it is not clear whether there exist
intermediate signals that benefit Neural Machine Translation (NMT). Borrowing
techniques from Statistical Machine Translation, we propose intermediate
signals which are intermediate sequences from the "source-like" structure to
the "target-like" structure. Such intermediate sequences introduce an inductive
bias that reflects a domain-agnostic principle of translation, which reduces
spurious correlations that are harmful to out-of-domain generalisation.
Furthermore, we introduce a full-permutation multi-task learning to alleviate
the spurious causal relations from intermediate sequences to the target, which
results from exposure bias. The Minimum Bayes Risk decoding algorithm is used
to pick the best candidate translation from all permutations to further improve
the performance. Experiments show that the introduced intermediate signals can
effectively improve the domain robustness of NMT and reduce the amount of
hallucination in out-of-domain translation. Further analysis shows that our
methods are especially promising in low-resource scenarios. | [
"cs.CL"
] | false |
2305.09249 | 2023-05-16T07:56:19Z | xPQA: Cross-Lingual Product Question Answering across 12 Languages | [
"Xiaoyu Shen",
"Akari Asai",
"Bill Byrne",
"Adrià de Gispert"
] | Product Question Answering (PQA) systems are key in e-commerce applications
to provide responses to customers' questions as they shop for products. While
existing work on PQA focuses mainly on English, in practice there is a need to
support multiple customer languages while leveraging product information
available in English. To study this practical industrial task, we present xPQA,
a large-scale annotated cross-lingual PQA dataset in 12 languages across 9
branches, and report results in (1) candidate ranking, to select the best
English candidate containing the information to answer a non-English question;
and (2) answer generation, to generate a natural-sounding non-English answer
based on the selected English candidate. We evaluate various approaches
involving machine translation at runtime or offline, leveraging multilingual
pre-trained LMs, and including or excluding xPQA training data. We find that
(1) In-domain data is essential as cross-lingual rankers trained on other
domains perform poorly on the PQA task; (2) Candidate ranking often prefers
runtime-translation approaches while answer generation prefers multilingual
approaches; (3) Translating offline to augment multilingual models helps
candidate ranking mainly on languages with non-Latin scripts; and helps answer
generation mainly on languages with Latin scripts. Still, there remains a
significant performance gap between the English and the cross-lingual test
sets. | [
"cs.CL"
] | false |
2305.09269 | 2023-05-16T08:22:17Z | ContrastNet: A Contrastive Learning Framework for Few-Shot Text
Classification | [
"Junfan Chen",
"Richong Zhang",
"Yongyi Mao",
"Jie Xu"
] | Few-shot text classification has recently been promoted by the meta-learning
paradigm which aims to identify target classes with knowledge transferred from
source classes with sets of small tasks named episodes. Despite their success,
existing works building their meta-learner based on Prototypical Networks are
unsatisfactory in learning discriminative text representations between similar
classes, which may lead to contradictions during label prediction. In addition,
the task-level and instance-level overfitting problems in few-shot text
classification caused by a few training examples are not sufficiently tackled.
In this work, we propose a contrastive learning framework named ContrastNet to
tackle both discriminative representation and overfitting problems in few-shot
text classification. ContrastNet learns to pull closer text representations
belonging to the same class and push away text representations belonging to
different classes, while simultaneously introducing unsupervised contrastive
regularization at both task-level and instance-level to prevent overfitting.
Experiments on 8 few-shot text classification datasets show that ContrastNet
outperforms the current state-of-the-art models. | [
"cs.CL"
] | false |
2305.09281 | 2023-05-16T08:37:13Z | On the Origins of Bias in NLP through the Lens of the Jim Code | [
"Fatma Elsafoury",
"Gavin Abercrombie"
] | In this paper, we trace the biases in current natural language processing
(NLP) models back to their origins in racism, sexism, and homophobia over the
last 500 years. We review literature from critical race theory, gender studies,
data ethics, and digital humanities studies, and summarize the origins of bias
in NLP models from these social science perspectives. We show how the causes of
the biases in the NLP pipeline are rooted in social issues. Finally, we argue
that the only way to fix the bias and unfairness in NLP is by addressing the
social problems that caused them in the first place and by incorporating social
sciences and social scientists in efforts to mitigate bias in NLP models. We
provide actionable recommendations for the NLP research community to do so. | [
"cs.CL"
] | false |
2305.09312 | 2023-05-16T09:37:08Z | Exploring the Impact of Layer Normalization for Zero-shot Neural Machine
Translation | [
"Zhuoyuan Mao",
"Raj Dabre",
"Qianying Liu",
"Haiyue Song",
"Chenhui Chu",
"Sadao Kurohashi"
] | This paper studies the impact of layer normalization (LayerNorm) on zero-shot
translation (ZST). Recent efforts for ZST often utilize the Transformer
architecture as the backbone, with LayerNorm at the input of layers (PreNorm)
set as the default. However, Xu et al. (2019) revealed that PreNorm carries
the risk of overfitting the training data. Based on this, we hypothesize that
PreNorm may overfit supervised directions and thus have low generalizability
for ZST. Through experiments on OPUS, IWSLT, and Europarl datasets for 54 ZST
directions, we demonstrate that the original Transformer setting of LayerNorm
after residual connections (PostNorm) consistently outperforms PreNorm by up to
12.3 BLEU points. We then study the performance disparities by analyzing the
differences in off-target rates and structural variations between PreNorm and
PostNorm. This study highlights the need for careful consideration of the
LayerNorm setting for ZST. | [
"cs.CL"
] | false |
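The PreNorm/PostNorm distinction studied in the abstract above is just a reordering of layer normalization and the residual connection. A minimal sketch, with a toy stand-in sublayer assumed for illustration:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize over the last (feature) dimension.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def pre_norm_block(x, sublayer):
    # PreNorm: normalize first, then apply the sublayer and add the residual.
    return x + sublayer(layer_norm(x))

def post_norm_block(x, sublayer):
    # PostNorm (original Transformer): apply the sublayer, add the residual,
    # then normalize the sum.
    return layer_norm(x + sublayer(x))

x = np.array([[1.0, 2.0, 3.0, 4.0]])
f = lambda h: 0.5 * h  # stand-in for an attention or feed-forward sublayer
out_pre, out_post = pre_norm_block(x, f), post_norm_block(x, f)
```

Note that PostNorm re-normalizes the residual stream after every block, which is one reason the two settings generalize differently.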
2305.09335 | 2023-05-16T10:19:12Z | MsPrompt: Multi-step Prompt Learning for Debiasing Few-shot Event
Detection | [
"Siyuan Wang",
"Jianming Zheng",
"Xuejun Hu",
"Fei Cai",
"Chengyu Song",
"Xueshan Luo"
] | Event detection (ED) aims to identify the key trigger words in
unstructured text and predict the event types accordingly. Traditional ED
models are too data-hungry to accommodate real applications with scarce labeled
data. Besides, typical ED models are facing the context-bypassing and disabled
generalization issues caused by the trigger bias stemming from ED datasets.
Therefore, we focus on the true few-shot paradigm to satisfy the low-resource
scenarios. In particular, we propose a multi-step prompt learning model
(MsPrompt) for debiasing few-shot event detection, which consists of the
following three components: an under-sampling module that constructs a
novel training set accommodating the true few-shot setting, a multi-step
prompt module equipped with a knowledge-enhanced ontology to sufficiently
leverage the event semantics and latent prior knowledge in the PLMs for tackling
the context-bypassing problem, and a prototypical module compensating for the
weakness of classifying events with sparse data and boosting the generalization
performance. Experiments on two public datasets ACE-2005 and FewEvent show that
MsPrompt can outperform the state-of-the-art models, especially in the strict
low-resource scenarios, reporting an 11.43% improvement in terms of weighted
F1-score against the best-performing baseline and achieving an outstanding
debiasing performance. | [
"cs.CL"
] | false |
2305.09400 | 2023-05-16T12:31:53Z | Consistent Multi-Granular Rationale Extraction for Explainable Multi-hop
Fact Verification | [
"Jiasheng Si",
"Yingjie Zhu",
"Deyu Zhou"
] | The success of deep learning models on multi-hop fact verification has
prompted researchers to understand the behavior behind their veracity predictions. One
possible way is erasure search: obtaining the rationale by entirely removing a
subset of input without compromising the veracity prediction. Although
extensively explored, existing approaches fall within the scope of the
single-granular (tokens or sentences) explanation, which inevitably leads to
explanation redundancy and inconsistency. To address such issues, this paper
explores the viability of multi-granular rationale extraction with consistency
and faithfulness for explainable multi-hop fact verification. In particular,
given a pretrained veracity prediction model, both the token-level explainer
and sentence-level explainer are trained simultaneously to obtain
multi-granular rationales via differentiable masking. Meanwhile, three
diagnostic properties (fidelity, consistency, salience) are introduced and
applied to the training process, to ensure that the extracted rationales
satisfy faithfulness and consistency. Experimental results on three multi-hop
fact verification datasets show that the proposed approach outperforms some
state-of-the-art baselines. | [
"cs.CL"
] | false |
2305.09506 | 2023-05-16T14:59:38Z | Fuzzy Temporal Protoforms for the Quantitative Description of Processes
in Natural Language | [
"Yago Fontenla-Seco",
"Alberto Bugarín-Diz",
"Manuel Lama"
] | In this paper, we propose a series of fuzzy temporal protoforms in the
framework of the automatic generation of quantitative and qualitative natural
language descriptions of processes. The model includes temporal and causal
information from processes and attributes, quantifies attributes in time during
the process life-span and recalls causal relations and temporal distances
between events, among other features. Through integrating process mining
techniques and fuzzy sets within the usual Data-to-Text architecture, our
framework is able to extract relevant quantitative temporal as well as
structural information from a process and describe it in natural language
involving uncertain terms. A real use-case in the cardiology domain is
presented, showing the potential of our model for providing natural language
explanations addressed to domain experts. | [
"cs.CL"
] | false |
2305.09509 | 2023-05-16T15:02:23Z | Bidirectional Generative Framework for Cross-domain Aspect-based
Sentiment Analysis | [
"Yue Deng",
"Wenxuan Zhang",
"Sinno Jialin Pan",
"Lidong Bing"
] | Cross-domain aspect-based sentiment analysis (ABSA) aims to perform various
fine-grained sentiment analysis tasks on a target domain by transferring
knowledge from a source domain. Since labeled data only exists in the source
domain, a model is expected to bridge the domain gap for tackling cross-domain
ABSA. Though domain adaptation methods have proven to be effective, most of
them are based on a discriminative model, which needs to be specifically
designed for different ABSA tasks. To offer a more general solution, we propose
a unified bidirectional generative framework to tackle various cross-domain
ABSA tasks. Specifically, our framework trains a generative model in both
text-to-label and label-to-text directions. The former transforms each task
into a unified format to learn domain-agnostic features, and the latter
generates natural sentences from noisy labels for data augmentation, with which
a more accurate model can be trained. To investigate the effectiveness and
generality of our framework, we conduct extensive experiments on four
cross-domain ABSA tasks and present new state-of-the-art results on all tasks.
Our data and code are publicly available at
\url{https://github.com/DAMO-NLP-SG/BGCA}. | [
"cs.CL"
] | false |
2305.09520 | 2023-05-16T15:16:24Z | DLUE: Benchmarking Document Language Understanding | [
"Ruoxi Xu",
"Hongyu Lin",
"Xinyan Guan",
"Xianpei Han",
"Yingfei Sun",
"Le Sun"
] | Understanding documents is central to many real-world tasks but remains a
challenging topic. Unfortunately, there is no well-established consensus on how
to comprehensively evaluate document understanding abilities, which
significantly hinders fair comparison and measurement of progress in the
field. To benchmark document understanding research, this paper summarizes
four representative abilities, i.e., document classification, document
structural analysis, document information extraction, and document
transcription. Under the new evaluation framework, we propose \textbf{Document
Language Understanding Evaluation} -- \textbf{DLUE}, a new task suite which
covers a wide range of tasks in various forms, domains and document genres. We
also systematically evaluate six well-established transformer models on DLUE,
and find that due to the lengthy content, complicated underlying structure and
dispersed knowledge, document understanding is still far from being solved, and
currently there is no neural architecture that dominates all tasks, raising
requirements for a universal document understanding architecture. | [
"cs.CL"
] | false |
2305.09534 | 2023-05-16T15:26:52Z | MetaSRL++: A Uniform Scheme for Modelling Deeper Semantics | [
"Fritz Hohl",
"Nianheng Wu",
"Martina Galetti",
"Remi van Trijp"
] | Despite enormous progress in Natural Language Processing (NLP), our field is
still lacking a common deep semantic representation scheme. As a result, the
problem of meaning and understanding is typically sidestepped through more
simple, approximative methods. This paper argues that in order to arrive at
such a scheme, we also need a common modelling scheme. It therefore introduces
MetaSRL++, a uniform, language- and modality-independent modelling scheme based
on Semantic Graphs, as a step towards a common representation scheme; as well
as a method for defining the concepts and entities that are used in these
graphs. Our contribution is twofold. First, we illustrate MetaSRL++ through
concrete examples. Second, we discuss how it relates to existing work in the field. | [
"cs.CL"
] | false |
2305.09598 | 2023-05-16T16:52:07Z | Boosting Event Extraction with Denoised Structure-to-Text Augmentation | [
"Bo Wang",
"Heyan Huang",
"Xiaochi Wei",
"Ge Shi",
"Xiao Liu",
"Chong Feng",
"Tong Zhou",
"Shuaiqiang Wang",
"Dawei Yin"
] | Event extraction aims to recognize pre-defined event triggers and arguments
from texts, a task that suffers from a lack of high-quality annotations. In
most NLP applications, incorporating large-scale synthetic training data is a
practical and effective approach to alleviate the problem of data scarcity.
However, when applied to event extraction, recent data augmentation methods
often neglect the problems of grammatical incorrectness, structure
misalignment, and semantic drifting, leading to unsatisfactory performance. To
solve these problems, we propose a denoised structure-to-text augmentation
framework for event extraction (DAEE), which generates additional training data
through the knowledge-based structure-to-text generation model and selects the
effective subset from the generated data iteratively with a deep reinforcement
learning agent. Experimental results on several datasets demonstrate that the
proposed method generates more diverse text representations for event
extraction and achieves comparable results with the state-of-the-art. | [
"cs.CL"
] | false |
2305.09756 | 2023-05-16T19:08:18Z | Clinical Note Owns its Hierarchy: Multi-Level Hypergraph Neural Networks
for Patient-Level Representation Learning | [
"Nayeon Kim",
"Yinhua Piao",
"Sun Kim"
] | Leveraging knowledge from electronic health records (EHRs) to predict a
patient's condition is essential to the effective delivery of appropriate care.
Clinical notes of patient EHRs contain valuable information from healthcare
professionals, but have been underused due to their difficult contents and
complex hierarchies. Recently, hypergraph-based methods have been proposed for
document classifications. Directly adopting existing hypergraph methods on
clinical notes cannot sufficiently utilize the hierarchy information of the
patient, which can degrade clinical semantic information by (1) frequent
neutral words and (2) hierarchies with imbalanced distribution. Thus, we
propose a taxonomy-aware multi-level hypergraph neural network (TM-HGNN), where
multi-level hypergraphs assemble useful neutral words with rare keywords via
note and taxonomy level hyperedges to retain the clinical semantic information.
The constructed patient hypergraphs are fed into hierarchical message passing
layers for learning more balanced multi-level knowledge at the note and
taxonomy levels. We validate the effectiveness of TM-HGNN by conducting
extensive experiments with the MIMIC-III dataset on the benchmark
in-hospital mortality prediction task. | [
"cs.CL"
] | false |
2305.09148 | 2023-05-16T03:53:30Z | Dual-Alignment Pre-training for Cross-lingual Sentence Embedding | [
"Ziheng Li",
"Shaohan Huang",
"Zihan Zhang",
"Zhi-Hong Deng",
"Qiang Lou",
"Haizhen Huang",
"Jian Jiao",
"Furu Wei",
"Weiwei Deng",
"Qi Zhang"
] | Recent studies have shown that dual encoder models trained with the
sentence-level translation ranking task are effective methods for cross-lingual
sentence embedding. However, our research indicates that token-level alignment
is also crucial in multilingual scenarios, which has not been fully explored
previously. Based on our findings, we propose a dual-alignment pre-training
(DAP) framework for cross-lingual sentence embedding that incorporates both
sentence-level and token-level alignment. To achieve this, we introduce a novel
representation translation learning (RTL) task, where the model learns to use
one-side contextualized token representation to reconstruct its translation
counterpart. This reconstruction objective encourages the model to embed
translation information into the token representation. Compared to other
token-level alignment methods such as translation language modeling, RTL is
more suitable for dual encoder architectures and is computationally efficient.
Extensive experiments on three sentence-level cross-lingual benchmarks
demonstrate that our approach can significantly improve sentence embedding. Our
code is available at https://github.com/ChillingDream/DAP. | [
"cs.CL",
"cs.AI"
] | true |
2305.09220 | 2023-05-16T06:53:21Z | Towards Unifying Multi-Lingual and Cross-Lingual Summarization | [
"Jiaan Wang",
"Fandong Meng",
"Duo Zheng",
"Yunlong Liang",
"Zhixu Li",
"Jianfeng Qu",
"Jie Zhou"
] | To adapt text summarization to the multilingual world, previous work proposes
multi-lingual summarization (MLS) and cross-lingual summarization (CLS).
However, these two tasks have been studied separately due to the different
definitions, which limits the compatible and systematic research on both of
them. In this paper, we aim to unify MLS and CLS into a more general setting,
i.e., many-to-many summarization (M2MS), where a single model could process
documents in any language and generate their summaries also in any language. As
the first step towards M2MS, we conduct preliminary studies to show that M2MS
can better transfer task knowledge across different languages than MLS and CLS.
Furthermore, we propose Pisces, a pre-trained M2MS model that learns language
modeling, cross-lingual ability and summarization ability via three-stage
pre-training. Experimental results indicate that our Pisces significantly
outperforms the state-of-the-art baselines, especially in the zero-shot
directions, where there is no training data from the source-language documents
to the target-language summaries. | [
"cs.CL",
"cs.AI"
] | false |
2305.09246 | 2023-05-16T07:52:57Z | Maybe Only 0.5% Data is Needed: A Preliminary Exploration of Low
Training Data Instruction Tuning | [
"Hao Chen",
"Yiming Zhang",
"Qi Zhang",
"Hantao Yang",
"Xiaomeng Hu",
"Xuetao Ma",
"Yifan Yanggong",
"Junbo Zhao"
] | Instruction tuning for large language models (LLMs) has gained attention from
researchers due to its ability to unlock the potential of LLMs in following
instructions. While instruction tuning offers advantages for facilitating the
adaptation of LLMs to downstream tasks as a fine-tuning
approach, training models with tens of millions or even billions of parameters
on large amounts of data results in unaffordable computational costs. To
address this, we focus on reducing the data used in LLM instruction tuning to
decrease training costs and improve data efficiency, dubbed Low Training
Data Instruction Tuning (LTD Instruction Tuning). Specifically, this paper
conducts a preliminary exploration into reducing the data used in LLM training
and identifies several observations regarding task specialization for LLM
training, such as the optimization of performance for a specific task, the
number of instruction types required for instruction tuning, and the amount of
data required for task-specific models. The results suggest that task-specific
models can be trained using less than 0.5% of the original dataset, with a 2%
improvement in performance over those trained on full task-related data. | [
"cs.AI",
"cs.CL"
] | false |
2305.09258 | 2023-05-16T08:06:11Z | HyHTM: Hyperbolic Geometry based Hierarchical Topic Models | [
"Simra Shahid",
"Tanay Anand",
"Nikitha Srikanth",
"Sumit Bhatia",
"Balaji Krishnamurthy",
"Nikaash Puri"
] | Hierarchical Topic Models (HTMs) are useful for discovering topic hierarchies
in a collection of documents. However, traditional HTMs often produce
hierarchies where lower-level topics are unrelated and not specific enough to
their higher-level topics. Additionally, these methods can be computationally
expensive. We present HyHTM, a Hyperbolic geometry based Hierarchical Topic
Model, which addresses these limitations by incorporating hierarchical
information from hyperbolic geometry to explicitly model hierarchies in topic
models. Experimental results with four baselines show that HyHTM can better
attend to parent-child relationships among topics. HyHTM produces coherent
topic hierarchies that specialise in granularity from generic higher-level
topics to specific lower-level topics. Further, our model is significantly
faster and leaves a much smaller memory footprint than our best-performing
baseline. We have made the source code for our algorithm publicly accessible. | [
"cs.IR",
"cs.CL"
] | false |
2305.09316 | 2023-05-16T09:44:38Z | Enhancing Keyphrase Extraction from Long Scientific Documents using
Graph Embeddings | [
"Roberto Martínez-Cruz",
"Debanjan Mahata",
"Alvaro J. López-López",
"José Portela"
] | In this study, we investigate using graph neural network (GNN)
representations to enhance contextualized representations of pre-trained
language models (PLMs) for keyphrase extraction from lengthy documents. We show
that augmenting a PLM with graph embeddings provides a more comprehensive
semantic understanding of words in a document, particularly for long documents.
We construct a co-occurrence graph of the text and embed it using a graph
convolutional network (GCN) trained on the task of edge prediction. We propose
a graph-enhanced sequence tagging architecture that augments contextualized PLM
embeddings with graph representations. Evaluating on benchmark datasets, we
demonstrate that enhancing PLMs with graph embeddings outperforms
state-of-the-art models on long documents, showing significant improvements in
F1 scores across all the datasets. Our study highlights the potential of GNN
representations as a complementary approach to improve PLM performance for
keyphrase extraction from long documents. | [
"cs.CL",
"cs.AI"
] | false |
2305.09402 | 2023-05-16T12:44:39Z | A Preliminary Analysis on the Code Generation Capabilities of GPT-3.5
and Bard AI Models for Java Functions | [
"Giuseppe Destefanis",
"Silvia Bartolucci",
"Marco Ortu"
] | This paper evaluates the capability of two state-of-the-art artificial
intelligence (AI) models, GPT-3.5 and Bard, in generating Java code given a
function description. We sourced the descriptions from CodingBat.com, a popular
online platform that provides practice problems to learn programming. We
compared the Java code generated by both models based on correctness, verified
through the platform's own test cases. The results indicate clear differences
in the capabilities of the two models. GPT-3.5 demonstrated superior
performance, generating correct code for approximately 90.6% of the function
descriptions, whereas Bard produced correct code for 53.1% of the functions.
While both models exhibited strengths and weaknesses, these findings suggest
potential avenues for the development and refinement of more advanced
AI-assisted code generation tools. The study underlines the potential of AI in
automating and supporting aspects of software development, although further
research is required to fully realize this potential. | [
"cs.SE",
"cs.CL"
] | false |
2305.09410 | 2023-05-16T12:55:43Z | About Evaluation of F1 Score for RECENT Relation Extraction System | [
"Michał Olek"
] | This document contains a discussion of the F1 score evaluation used in the
article 'Relation Classification with Entity Type Restriction' by Shengfei Lyu,
Huanhuan Chen published on Findings of the Association for Computational
Linguistics: ACL-IJCNLP 2021. The authors created a system named RECENT and
claim it achieved, at the time, a new state-of-the-art result of 75.2 (previous
74.8) on the TACRED dataset, whereas after correcting errors and re-evaluating,
the final result is 65.16. | [
"cs.CL",
"cs.AI"
] | false |
2305.09612 | 2023-05-16T17:04:48Z | Large Language Models are Built-in Autoregressive Search Engines | [
"Noah Ziems",
"Wenhao Yu",
"Zhihan Zhang",
"Meng Jiang"
] | Document retrieval is a key stage of standard Web search engines. Existing
dual-encoder dense retrievers obtain representations for questions and
documents independently, allowing for only shallow interactions between them.
To overcome this limitation, recent autoregressive search engines replace the
dual-encoder architecture by directly generating identifiers for relevant
documents in the candidate pool. However, the training cost of such
autoregressive search engines rises sharply as the number of candidate
documents increases. In this paper, we find that large language models (LLMs)
can follow human instructions to directly generate URLs for document retrieval.
Surprisingly, when providing a few {Query-URL} pairs as in-context
demonstrations, LLMs can generate Web URLs where nearly 90\% of the
corresponding documents contain correct answers to open-domain questions. In
this way, LLMs can be thought of as built-in search engines, since they have
not been explicitly trained to map questions to document identifiers.
Experiments demonstrate that our method can consistently achieve better
retrieval performance than existing retrieval approaches by a significant
margin on three open-domain question answering benchmarks, under both zero and
few-shot settings. The code for this work can be found at
\url{https://github.com/Ziems/llm-url}. | [
"cs.CL",
"cs.IR"
] | false |
2305.09731 | 2023-05-16T18:05:19Z | What In-Context Learning "Learns" In-Context: Disentangling Task
Recognition and Task Learning | [
"Jane Pan",
"Tianyu Gao",
"Howard Chen",
"Danqi Chen"
] | Large language models (LLMs) exploit in-context learning (ICL) to solve tasks
with only a few demonstrations, but its mechanisms are not yet well-understood.
Some works suggest that LLMs only recall already learned concepts from
pre-training, while others hint that ICL performs implicit learning over
demonstrations. We characterize two ways through which ICL leverages
demonstrations. Task recognition (TR) captures the extent to which LLMs can
recognize a task through demonstrations -- even without ground-truth labels --
and apply their pre-trained priors, whereas task learning (TL) is the ability
to capture new input-label mappings unseen in pre-training. Using a wide range
of classification datasets and three LLM families (GPT-3, LLaMA and OPT), we
design controlled experiments to disentangle the roles of TR and TL in ICL. We
show that (1) models can achieve non-trivial performance with only TR, and TR
does not further improve with larger models or more demonstrations; (2) LLMs
acquire TL as the model scales, and TL's performance consistently improves with
more demonstrations in context. Our findings unravel two different forces
behind ICL and we advocate for discriminating them in future ICL research due
to their distinct nature. | [
"cs.CL",
"cs.LG"
] | false |
2305.09785 | 2023-05-16T20:17:02Z | Distilling Semantic Concept Embeddings from Contrastively Fine-Tuned
Language Models | [
"Na Li",
"Hanane Kteich",
"Zied Bouraoui",
"Steven Schockaert"
] | Learning vectors that capture the meaning of concepts remains a fundamental
challenge. Somewhat surprisingly, perhaps, pre-trained language models have
thus far only enabled modest improvements to the quality of such concept
embeddings. Current strategies for using language models typically represent a
concept by averaging the contextualised representations of its mentions in some
corpus. This is potentially sub-optimal for at least two reasons. First,
contextualised word vectors have an unusual geometry, which hampers downstream
tasks. Second, concept embeddings should capture the semantic properties of
concepts, whereas contextualised word vectors are also affected by other
factors. To address these issues, we propose two contrastive learning
strategies, based on the view that whenever two sentences reveal similar
properties, the corresponding contextualised vectors should also be similar.
One strategy is fully unsupervised, estimating the properties which are
expressed in a sentence from the neighbourhood structure of the contextualised
word embeddings. The second strategy instead relies on a distant supervision
signal from ConceptNet. Our experimental results show that the resulting
vectors substantially outperform existing concept embeddings in predicting the
semantic properties of concepts, with the ConceptNet-based strategy achieving
the best results. These findings are furthermore confirmed in a clustering task
and in the downstream task of ontology completion. | [
"cs.CL",
"cs.AI"
] | false |
2305.10448 | 2023-05-16T15:25:19Z | Sequence-to-Sequence Pre-training with Unified Modality Masking for
Visual Document Understanding | [
"Shuwei Feng",
"Tianyang Zhan",
"Zhanming Jie",
"Trung Quoc Luong",
"Xiaoran Jin"
] | This paper presents GenDoc, a general sequence-to-sequence document
understanding model pre-trained with unified masking across three modalities:
text, image, and layout. The proposed model utilizes an encoder-decoder
architecture, which allows for increased adaptability to a wide range of
downstream tasks with diverse output formats, in contrast to the encoder-only
models commonly employed in document understanding. In addition to the
traditional text infilling task used in previous encoder-decoder models, our
pre-training extends to include tasks of masked image token prediction and
masked layout prediction. We also design modality-specific instruction and
adopt both disentangled attention and the mixture-of-modality-experts strategy
to effectively capture the information leveraged by each modality. Evaluation
of the proposed model through extensive experiments on several downstream tasks
in document understanding demonstrates its ability to achieve superior or
competitive performance compared to state-of-the-art approaches. Our analysis
further suggests that GenDoc is more robust than the encoder-only models in
scenarios where the OCR quality is imperfect. | [
"cs.CL",
"cs.AI"
] | false |
2305.09167 | 2023-05-16T04:52:29Z | Adversarial Speaker Disentanglement Using Unannotated External Data for
Self-supervised Representation Based Voice Conversion | [
"Xintao Zhao",
"Shuai Wang",
"Yang Chao",
"Zhiyong Wu",
"Helen Meng"
] | Recognition-synthesis-based methods have become quite popular in voice
conversion (VC). By introducing linguistic features with good disentangling
characteristics extracted from an automatic speech recognition (ASR)
model, VC performance has achieved considerable breakthroughs. Recently,
self-supervised learning (SSL) methods trained with a large-scale unannotated
speech corpus have been applied to downstream tasks focusing on the content
information, which is suitable for VC tasks. However, a huge amount of speaker
information in SSL representations degrades timbre similarity and the quality
of converted speech significantly. To address this problem, we propose a
high-similarity any-to-one voice conversion method with the input of SSL
representations. We incorporated adversarial training mechanisms in the
synthesis module using external unannotated corpora. Two auxiliary
discriminators were trained to distinguish whether a sequence of
mel-spectrograms has been converted by the acoustic model and whether a
sequence of content embeddings contains speaker information from external
corpora. Experimental results show that our proposed method achieves comparable
similarity and higher naturalness than the supervised method, which requires a
huge amount of annotated corpora for training, and can also improve
similarity for VC methods that take other SSL representations as input. | [
"cs.SD",
"cs.CL",
"eess.AS"
] | false |
2305.09204 | 2023-05-16T06:27:27Z | The Weighted Möbius Score: A Unified Framework for Feature Attribution | [
"Yifan Jiang",
"Shane Steinert-Threlkeld"
] | Feature attribution aims to explain the reasoning behind a black-box model's
prediction by identifying the impact of each feature on the prediction. Recent
work has extended feature attribution to interactions between multiple
features. However, the lack of a unified framework has led to a proliferation
of methods that are often not directly comparable. This paper introduces a
parameterized attribution framework -- the Weighted M\"obius Score -- and (i)
shows that many different attribution methods for both individual features and
feature interactions are special cases and (ii) identifies some new methods. By
studying the vector space of attribution methods, our framework utilizes
standard linear algebra tools and provides interpretations in various fields,
including cooperative game theory and causal mediation analysis. We empirically
demonstrate the framework's versatility and effectiveness by applying these
attribution methods to feature interactions in sentiment analysis and
chain-of-thought prompting. | [
"cs.LG",
"cs.AI",
"cs.CL"
] | false |
2305.09313 | 2023-05-16T09:38:52Z | Hybrid and Collaborative Passage Reranking | [
"Zongmeng Zhang",
"Wengang Zhou",
"Jiaxin Shi",
"Houqiang Li"
] | In a passage retrieval system, the initial passage retrieval results may be
unsatisfactory and can be refined by a reranking scheme. Existing solutions
to passage reranking focus on enriching the interaction between query and each
passage separately, neglecting the context among the top-ranked passages in the
initial retrieval list. To tackle this problem, we propose a Hybrid and
Collaborative Passage Reranking (HybRank) method, which leverages the
substantial similarity measurements of upstream retrievers for passage
collaboration and incorporates the lexical and semantic properties of sparse
and dense retrievers for reranking. Besides, built on off-the-shelf retriever
features, HybRank is a plug-in reranker capable of enhancing arbitrary passage
lists including previously reranked ones. Extensive experiments demonstrate the
stable improvements of performance over prevalent retrieval and reranking
methods, and verify the effectiveness of the core components of HybRank. | [
"cs.IR",
"cs.AI",
"cs.CL"
] | false |
2305.09574 | 2023-05-16T16:11:48Z | UOR: Universal Backdoor Attacks on Pre-trained Language Models | [
"Wei Du",
"Peixuan Li",
"Boqun Li",
"Haodong Zhao",
"Gongshen Liu"
] | Backdoors implanted in pre-trained language models (PLMs) can be transferred
to various downstream tasks, which exposes a severe security threat. However,
most existing backdoor attacks against PLMs are un-targeted and task-specific.
Few targeted and task-agnostic methods use manually pre-defined triggers and
output representations, which prevent the attacks from being more effective and
general. In this paper, we first summarize the requirements that a more
threatening backdoor attack against PLMs should satisfy, and then propose a new
backdoor attack method called UOR, which breaks the bottleneck of the previous
approach by turning manual selection into automatic optimization. Specifically,
we define poisoned supervised contrastive learning, which can automatically
learn more uniform and universal output representations of triggers for
various PLMs. Moreover, we use gradient search to select appropriate trigger
words which can be adaptive to different PLMs and vocabularies. Experiments
show that our method can achieve better attack performance on various text
classification tasks compared to manual methods. Further, we tested our method
on PLMs with different architectures, different usage paradigms, and more
difficult tasks, which demonstrated the universality of our method. | [
"cs.CL",
"cs.AI",
"cs.CR"
] | false |
2305.09617 | 2023-05-16T17:11:29Z | Towards Expert-Level Medical Question Answering with Large Language
Models | [
"Karan Singhal",
"Tao Tu",
"Juraj Gottweis",
"Rory Sayres",
"Ellery Wulczyn",
"Le Hou",
"Kevin Clark",
"Stephen Pfohl",
"Heather Cole-Lewis",
"Darlene Neal",
"Mike Schaekermann",
"Amy Wang",
"Mohamed Amin",
"Sami Lachgar",
"Philip Mansfield",
"Sushant Prakash",
"Bradley Green",
"Ewa Dominowska",
"Blaise Aguera y Arcas",
"Nenad Tomasev",
"Yun Liu",
"Renee Wong",
"Christopher Semturs",
"S. Sara Mahdavi",
"Joelle Barral",
"Dale Webster",
"Greg S. Corrado",
"Yossi Matias",
"Shekoofeh Azizi",
"Alan Karthikesalingam",
"Vivek Natarajan"
] | Recent artificial intelligence (AI) systems have reached milestones in "grand
challenges" ranging from Go to protein-folding. The capability to retrieve
medical knowledge, reason over it, and answer medical questions comparably to
physicians has long been viewed as one such grand challenge.
Large language models (LLMs) have catalyzed significant progress in medical
question answering; Med-PaLM was the first model to exceed a "passing" score in
US Medical Licensing Examination (USMLE) style questions with a score of 67.2%
on the MedQA dataset. However, this and other prior work suggested significant
room for improvement, especially when models' answers were compared to
clinicians' answers. Here we present Med-PaLM 2, which bridges these gaps by
leveraging a combination of base LLM improvements (PaLM 2), medical domain
finetuning, and prompting strategies including a novel ensemble refinement
approach.
Med-PaLM 2 scored up to 86.5% on the MedQA dataset, improving upon Med-PaLM
by over 19% and setting a new state-of-the-art. We also observed performance
approaching or exceeding state-of-the-art across MedMCQA, PubMedQA, and MMLU
clinical topics datasets.
We performed detailed human evaluations on long-form questions along multiple
axes relevant to clinical applications. In pairwise comparative ranking of 1066
consumer medical questions, physicians preferred Med-PaLM 2 answers to those
produced by physicians on eight of nine axes pertaining to clinical utility (p
< 0.001). We also observed significant improvements compared to Med-PaLM on
every evaluation axis (p < 0.001) on newly introduced datasets of 240 long-form
"adversarial" questions to probe LLM limitations.
While further studies are necessary to validate the efficacy of these models
in real-world settings, these results highlight rapid progress towards
physician-level performance in medical question answering. | [
"cs.CL",
"cs.AI",
"cs.LG"
] | true |
2305.09696 | 2023-05-16T06:37:38Z | Generative Table Pre-training Empowers Models for Tabular Prediction | [
"Tianping Zhang",
"Shaowen Wang",
"Shuicheng Yan",
"Jian Li",
"Qian Liu"
] | Recently, the topic of table pre-training has attracted considerable research
interest. However, how to employ table pre-training to boost the performance of
tabular prediction remains an open challenge. In this paper, we propose TapTap,
the first attempt that leverages table pre-training to empower models for
tabular prediction. After pre-training on a large corpus of real-world tabular
data, TapTap can generate high-quality synthetic tables to support various
applications on tabular data, including privacy protection, low resource
regime, missing value imputation, and imbalanced classification. Extensive
experiments on 12 datasets demonstrate that TapTap outperforms a total of 16
baselines in different scenarios. Meanwhile, it can be easily combined with
various backbone models, including LightGBM, Multilayer Perceptron (MLP) and
Transformer. Moreover, with the aid of table pre-training, models trained using
synthetic data generated by TapTap can even compete with models using the
original dataset on half of the experimental datasets, marking a milestone in
the development of synthetic tabular data generation. The codes are available
at https://github.com/ZhangTP1996/TapTap. | [
"cs.LG",
"cs.AI",
"cs.CL"
] | false |
2305.09764 | 2023-05-16T19:31:18Z | Application-Agnostic Language Modeling for On-Device ASR | [
"Markus Nußbaum-Thom",
"Lyan Verwimp",
"Youssef Oualil"
] | On-device automatic speech recognition systems face several challenges
compared to server-based systems. They have to meet stricter constraints in
terms of speed, disk size and memory while maintaining the same accuracy. Often
they have to serve several applications with different distributions at once,
such as communicating with a virtual assistant and speech-to-text. The simplest
solution to serve multiple applications is to build application-specific
(language) models, but this leads to an increase in memory. Therefore, we
explore different data- and architecture-driven language modeling approaches to
build a single application-agnostic model. We propose two novel feed-forward
architectures that find an optimal trade off between different on-device
constraints. In comparison to the application-specific solution, one of our
novel approaches reduces the disk size by half, while maintaining speed and
accuracy of the original model. | [
"cs.CL",
"cs.SD",
"eess.AS"
] | true |
2305.09798 | 2023-05-16T20:46:36Z | The Ways of Words: The Impact of Word Choice on Information Engagement
and Decision Making | [
"Nimrod Dvir",
"Elaine Friedman",
"Suraj Commuri",
"Fan Yang",
"Jennifer Romano"
] | Little research has explored information engagement (IE), the degree to
which individuals interact with and use information in a manner that manifests
cognitively, behaviorally, and affectively. This study explored the impact of
phrasing, specifically word choice, on IE and decision making. Synthesizing two
theoretical models, User Engagement Theory (UET) and Information Behavior
Theory (IBT), a theoretical framework illustrating the impact of and relationships among
the three IE dimensions of perception, participation, and perseverance was
developed and hypotheses generated. The framework was empirically validated in
a large-scale user study measuring how word choice impacts the dimensions of
IE. The findings provide evidence that IE differs from other forms of
engagement in that it is driven and fostered by the expression of the
information itself, regardless of the information system used to view, interact
with, and use the information. The findings suggest that phrasing can have a
significant effect on the interpretation of and interaction with digital
information, indicating the importance of expression of information, in
particular word choice, on decision making and IE. The research contributes to
the literature by identifying methods for assessment and improvement of IE and
decision making with digital text. | [
"cs.CL",
"cs.HC",
"cs.SY",
"eess.SY",
"stat.AP",
"28-08",
"H.5.2; H.1.2"
] | false |
2305.09101 | 2023-05-16T01:57:01Z | Automatic learning algorithm selection for classification via
convolutional neural networks | [
"Sebastian Maldonado",
"Carla Vairetti",
"Ignacio Figueroa"
] | As in any other task, the process of building machine learning models can
benefit from prior experience. Meta-learning for classifier selection gains
knowledge from characteristics of different datasets and/or previous
performance of machine learning techniques to make better decisions for the
current modeling process. Meta-learning approaches first collect meta-data that
describe this prior experience and then use it as input for an algorithm
selection model. In this paper, however, we propose an automatic learning
scheme in which we train convolutional networks directly with the information
of tabular datasets for binary classification. The goal of this study is to
learn the inherent structure of the data without identifying meta-features.
Experiments with simulated datasets show that the proposed approach achieves
nearly perfect performance in identifying linear and nonlinear patterns,
outperforming the traditional two-step method based on meta-features. The
proposed method is then applied to real-world datasets, making suggestions
about the best classifiers that can be considered based on the structure of the
data. | [
"cs.LG"
] | false |
2305.09178 | 2023-05-16T05:30:13Z | Empirical Analysis of the Inductive Bias of Recurrent Neural Networks by
Discrete Fourier Transform of Output Sequences | [
"Taiga Ishii",
"Ryo Ueda",
"Yusuke Miyao"
] | A unique feature of Recurrent Neural Networks (RNNs) is that they incrementally
process input sequences. In this research, we aim to uncover the inherent
generalization properties, i.e., inductive bias, of RNNs with respect to how
frequently RNNs switch the outputs through time steps in the sequence
classification task, which we call output sequence frequency. Previous work
analyzed inductive bias by training models with a few synthetic data and
comparing the model's generalization with candidate generalization patterns.
However, when examining the output sequence frequency, previous methods cannot
be directly applied since enumerating candidate patterns is computationally
difficult for longer sequences. To this end, we propose to directly calculate
the output sequence frequency for each model by regarding the outputs of the
model as discrete-time signals and applying frequency domain analysis.
Experimental results showed that Long Short-Term Memory (LSTM) and Gated
Recurrent Unit (GRU) have an inductive bias towards lower-frequency patterns,
while Elman RNN tends to learn patterns in which the output changes at high
frequencies. We also found that the inductive bias of LSTM and GRU varies with
the number of layers and the size of hidden layers. | [
"cs.LG"
] | false |
2305.09288 | 2023-05-16T08:48:36Z | A Dictionary-based approach to Time Series Ordinal Classification | [
"Rafael Ayllón-Gavilán",
"David Guijo-Rubio",
"Pedro Antonio Gutiérrez",
"César Hervás-Martinez"
] | Time Series Classification (TSC) is an extensively researched field from
which a broad range of real-world problems can be addressed obtaining excellent
results. One sort of the approaches performing well are the so-called
dictionary-based techniques. The Temporal Dictionary Ensemble (TDE) is the
current state-of-the-art dictionary-based TSC approach. In many TSC problems we
find a natural ordering in the labels associated with the time series. This
characteristic is referred to as ordinality, and can be exploited to improve
the methods' performance. The area dealing with ordinal time series is the Time
Series Ordinal Classification (TSOC) field, which is yet unexplored. In this
work, we present an ordinal adaptation of the TDE algorithm, known as ordinal
TDE (O-TDE). For this, a comprehensive comparison using a set of 18 TSOC
problems is performed. Experiments conducted show the improvement achieved by
the ordinal dictionary-based approach in comparison to four other existing
nominal dictionary-based techniques. | [
"cs.LG"
] | false |
2305.09424 | 2023-05-16T13:30:15Z | Unwrapping All ReLU Networks | [
"Mattia Jacopo Villani",
"Peter McBurney"
] | Deep ReLU Networks can be decomposed into a collection of linear models, each
defined in a region of a partition of the input space. This paper provides
three results extending this theory. First, we extend these linear
decompositions to Graph Neural Networks and tensor convolutional networks, as
well as networks with multiplicative interactions. Second, we provide proofs
that neural networks can be understood as interpretable models such as
Multivariate Decision trees and logical theories. Finally, we show how this
model leads to computing cheap and exact SHAP values. We validate the theory
through experiments on Graph Neural Networks. | [
"cs.LG"
] | false |
2305.09425 | 2023-05-16T13:31:11Z | When is an SHM problem a Multi-Task-Learning problem? | [
"Sarah Bee",
"Lawrence Bull",
"Nikolas Dervilis",
"Keith Worden"
] | Multi-task neural networks learn tasks simultaneously to improve individual
task performance. There are three mechanisms of multi-task learning (MTL) which
are explored here for the context of structural health monitoring (SHM): (i)
the natural occurrence of multiple tasks; (ii) using outputs as inputs (both
linked to the recent research in population-based SHM (PBSHM)); and, (iii)
additional loss functions to provide different insights. Each of these problem
settings for MTL is detailed and an example is given. | [
"cs.LG"
] | false |
2305.09500 | 2023-05-16T14:53:07Z | Contrastive Label Enhancement | [
"Yifei Wang",
"Yiyang Zhou",
"Jihua Zhu",
"Xinyuan Liu",
"Wenbiao Yan",
"Zhiqiang Tian"
] | Label distribution learning (LDL) is a new machine learning paradigm for
solving label ambiguity. Since it is difficult to directly obtain label
distributions, many studies are focusing on how to recover label distributions
from logical labels, dubbed label enhancement (LE). Existing LE methods
estimate label distributions by simply building a mapping relationship between
features and label distributions under the supervision of logical labels. They
typically overlook the fact that both features and logical labels are
descriptions of the instance from different views. Therefore, we propose a
novel method called Contrastive Label Enhancement (ConLE) which integrates
features and logical labels into the unified projection space to generate
high-level features by contrastive learning strategy. In this approach,
features and logical labels belonging to the same sample are pulled closer,
while those of different samples are projected farther away from each other in
the projection space. Subsequently, we leverage the obtained high-level
features to gain label distributions through a well-designed training strategy
that considers the consistency of label attributes. Extensive experiments on
LDL benchmark datasets demonstrate the effectiveness and superiority of our
method. | [
"cs.LG"
] | false |
2305.09628 | 2023-05-16T17:36:34Z | Faster Federated Learning with Decaying Number of Local SGD Steps | [
"Jed Mills",
"Jia Hu",
"Geyong Min"
] | In Federated Learning (FL) client devices connected over the internet
collaboratively train a machine learning model without sharing their private
data with a central server or with other clients. The seminal Federated
Averaging (FedAvg) algorithm trains a single global model by performing rounds
of local training on clients followed by model averaging. FedAvg can improve
the communication-efficiency of training by performing more steps of Stochastic
Gradient Descent (SGD) on clients in each round. However, client data in
real-world FL is highly heterogeneous, which has been extensively shown to slow
model convergence and harm final performance when $K > 1$ steps of SGD are
performed on clients per round. In this work we propose decaying $K$ as
training progresses, which can jointly improve the final performance of the FL
model whilst reducing the wall-clock time and the total computational cost of
training compared to using a fixed $K$. We analyse the convergence of FedAvg
with decaying $K$ for strongly-convex objectives, providing novel insights into
the convergence properties, and derive three theoretically-motivated decay
schedules for $K$. We then perform thorough experiments on four benchmark FL
datasets (FEMNIST, CIFAR100, Sentiment140, Shakespeare) to show the real-world
benefit of our approaches in terms of real-world convergence time,
computational cost, and generalisation performance. | [
"cs.LG"
] | false |
2305.09777 | 2023-05-16T20:02:39Z | BSGAN: A Novel Oversampling Technique for Imbalanced Pattern
Recognitions | [
"Md Manjurul Ahsan",
"Shivakumar Raman",
"Zahed Siddique"
] | Class imbalanced problems (CIP) are one of the potential challenges in
developing unbiased Machine Learning (ML) models for predictions. CIP occurs
when data samples are not equally distributed between two or more
classes. The Borderline-Synthetic Minority Oversampling Technique
(Borderline-SMOTE) is one of the approaches that has been used to balance
imbalanced data by oversampling the minority (limited) samples. One of the
potential drawbacks of
existing Borderline-SMOTE is that it focuses on the data samples that lay at
the border point and gives more attention to the extreme observations,
ultimately limiting the creation of more diverse data after oversampling; this
is the case for most borderline-SMOTE-based oversampling strategies. As a
result, marginalization occurs after
oversampling. To address these issues, in this work, we propose a hybrid
oversampling technique by combining the power of borderline SMOTE and
Generative Adversarial Network to generate more diverse data that follow
Gaussian distributions. We named it BSGAN and tested it on four highly
imbalanced datasets: Ecoli, Wine quality, Yeast, and Abalone. Our preliminary
computational results reveal that BSGAN outperformed existing borderline SMOTE
and GAN-based oversampling techniques and created a more diverse dataset that
follows a normal distribution after oversampling. | [
"cs.LG"
] | false |
2305.10451 | 2023-05-16T21:40:51Z | How does agency impact human-AI collaborative design space exploration?
A case study on ship design with deep generative models | [
"Shahroz Khan",
"Panagiotis Kaklis",
"Kosa Goucher-Lambert"
] | Typical parametric approaches restrict the exploration of diverse designs by
generating variations based on a baseline design. In contrast, generative
models provide a solution by leveraging existing designs to create compact yet
diverse generative design spaces (GDSs). However, the effectiveness of current
exploration methods in complex GDSs, especially in ship hull design, remains
unclear. To that end, we first construct a GDS using a generative adversarial
network, trained on 52,591 designs of various ship types. Next, we construct
three modes of exploration: random (REM), semi-automated (SAEM), and automated
(AEM), with varying levels of user involvement to explore the GDS for novel and
optimised designs. In REM, users manually explore the GDS based on intuition.
In SAEM, both the users and optimiser drive the exploration. The optimiser
focuses on exploring a diverse set of optimised designs, while the user directs
the exploration towards their design preference. AEM uses an optimiser to
search for the global optimum based on design performance. Our results revealed
that REM generates the most diverse designs, followed by SAEM and AEM. However,
the SAEM and AEM produce better-performing designs. Specifically, SAEM is the
most effective in exploring designs with a high trade-off between novelty and
performance. In conclusion, our study highlights the need for innovative
exploration approaches to fully harness the potential of GDS in design
optimisation. | [
"cs.LG"
] | false |
2305.09084 | 2023-05-16T00:57:22Z | A Review of Data-driven Approaches for Malicious Website Detection | [
"Zeyuan Hu",
"Ziang Yuan"
] | The detection of malicious websites has become a critical issue in
cybersecurity. Therefore, this paper offers a comprehensive review of
data-driven methods for detecting malicious websites. Traditional approaches
and their limitations are discussed, followed by an overview of data-driven
approaches. The paper establishes the data-feature-model-extension pipeline and
the latest research developments of data-driven approaches, including data
preprocessing, feature extraction, model construction and technology extension.
Specifically, this paper compares methods using deep learning models proposed
in recent years. Furthermore, the paper follows the
data-feature-model-extension pipeline to discuss the challenges together with
some future directions of data-driven methods in malicious website detection. | [
"cs.CR",
"cs.LG"
] | false |
2305.09088 | 2023-05-16T01:15:00Z | The Hessian perspective into the Nature of Convolutional Neural Networks | [
"Sidak Pal Singh",
"Thomas Hofmann",
"Bernhard Schölkopf"
] | While Convolutional Neural Networks (CNNs) have long been investigated and
applied, as well as theorized, we aim to provide a slightly different
perspective into their nature -- through the perspective of their Hessian maps.
The reason is that the loss Hessian captures the pairwise interaction of
parameters and therefore forms a natural ground to probe how the architectural
aspects of CNN get manifested in its structure and properties. We develop a
framework relying on Toeplitz representation of CNNs, and then utilize it to
reveal the Hessian structure and, in particular, its rank. We prove tight upper
bounds (with linear activations), which closely follow the empirical trend of
the Hessian rank and hold in practice in more general settings. Overall, our
work generalizes and establishes the key insight that, even in CNNs, the
Hessian rank grows as the square root of the number of parameters. | [
"cs.LG",
"stat.ML"
] | false |
2305.09245 | 2023-05-16T07:52:08Z | Sorting and Hypergraph Orientation under Uncertainty with Predictions | [
"Thomas Erlebach",
"Murilo Santos de Lima",
"Nicole Megow",
"Jens Schlöter"
] | Learning-augmented algorithms have been attracting increasing interest, but
have only recently been considered in the setting of explorable uncertainty
where precise values of uncertain input elements can be obtained by a query and
the goal is to minimize the number of queries needed to solve a problem. We
study learning-augmented algorithms for sorting and hypergraph orientation
under uncertainty, assuming access to untrusted predictions for the uncertain
values. Our algorithms provide improved performance guarantees for accurate
predictions while maintaining worst-case guarantees that are best possible
without predictions. For hypergraph orientation, for any $\gamma \geq 2$, we
give an algorithm that achieves a competitive ratio of $1+1/\gamma$ for correct
predictions and $\gamma$ for arbitrarily wrong predictions. For sorting, we
achieve an optimal solution for accurate predictions while still being
$2$-competitive for arbitrarily wrong predictions. These tradeoffs are the best
possible. We also consider different error metrics and show that the
performance of our algorithms degrades smoothly with the prediction error in
all the cases where this is possible. | [
"cs.DS",
"cs.LG"
] | false |
2305.09304 | 2023-05-16T09:22:14Z | OmniSafe: An Infrastructure for Accelerating Safe Reinforcement Learning
Research | [
"Jiaming Ji",
"Jiayi Zhou",
"Borong Zhang",
"Juntao Dai",
"Xuehai Pan",
"Ruiyang Sun",
"Weidong Huang",
"Yiran Geng",
"Mickel Liu",
"Yaodong Yang"
] | AI systems empowered by reinforcement learning (RL) algorithms harbor the
immense potential to catalyze societal advancement, yet their deployment is
often impeded by significant safety concerns. Particularly in safety-critical
applications, researchers have raised concerns about unintended harms or unsafe
behaviors of unaligned RL agents. The philosophy of safe reinforcement learning
(SafeRL) is to align RL agents with harmless intentions and safe behavioral
patterns. In SafeRL, agents learn to develop optimal policies by receiving
feedback from the environment, while also fulfilling the requirement of
minimizing the risk of unintended harm or unsafe behavior. However, due to the
intricate nature of SafeRL algorithm implementation, combining methodologies
across various domains presents a formidable challenge. This has led to an
absence of a cohesive and efficacious learning framework within the
contemporary SafeRL research milieu. In this work, we introduce a foundational
framework designed to expedite SafeRL research endeavors. Our comprehensive
framework encompasses an array of algorithms spanning different RL domains and
places heavy emphasis on safety elements. Our efforts are to make the
SafeRL-related research process more streamlined and efficient, therefore
facilitating further research in AI safety. Our project is released at:
https://github.com/PKU-Alignment/omnisafe. | [
"cs.LG",
"cs.AI"
] | false |
2305.09366 | 2023-05-16T11:46:16Z | Evaluation of self-supervised pre-training for automatic infant movement
classification using wearable movement sensors | [
"Einari Vaaras",
"Manu Airaksinen",
"Sampsa Vanhatalo",
"Okko Räsänen"
] | The recently-developed infant wearable MAIJU provides a means to
automatically evaluate infants' motor performance in an objective and scalable
manner in out-of-hospital settings. This information could be used for
developmental research and to support clinical decision-making, such as
detection of developmental problems and guiding of their therapeutic
interventions. MAIJU-based analyses rely fully on the classification of
infant's posture and movement; it is hence essential to study ways to increase
the accuracy of such classifications, aiming to increase the reliability and
robustness of the automated analysis. Here, we investigated how self-supervised
pre-training improves performance of the classifiers used for analyzing MAIJU
recordings, and we studied whether performance of the classifier models is
affected by context-selective quality-screening of pre-training data to exclude
periods of little infant movement or with missing sensors. Our experiments show
that i) pre-training the classifier with unlabeled data leads to a robust
accuracy increase of subsequent classification models, and ii) selecting
context-relevant pre-training data leads to substantial further improvements in
the classifier performance. | [
"cs.LG",
"eess.SP"
] | false |
2305.09385 | 2023-05-16T12:11:08Z | Lp- and Risk Consistency of Localized SVMs | [
"Hannes Köhler"
] | Kernel-based regularized risk minimizers, also called support vector machines
(SVMs), are known to possess many desirable properties but suffer from their
super-linear computational requirements when dealing with large data sets. This
problem can be tackled by using localized SVMs instead, which also offer the
additional advantage of being able to apply different hyperparameters to
different regions of the input space. In this paper, localized SVMs are
analyzed with regards to their consistency. It is proven that they inherit
$L_p$- as well as risk consistency from global SVMs under very weak conditions
and even if the regions underlying the localized SVMs are allowed to change as
the size of the training data set increases. | [
"stat.ML",
"cs.LG"
] | false |
2305.09458 | 2023-05-16T14:18:53Z | An Empirical Study on Google Research Football Multi-agent Scenarios | [
"Yan Song",
"He Jiang",
"Zheng Tian",
"Haifeng Zhang",
"Yingping Zhang",
"Jiangcheng Zhu",
"Zonghong Dai",
"Weinan Zhang",
"Jun Wang"
] | Little multi-agent reinforcement learning (MARL) research on Google Research
Football (GRF) focuses on the 11v11 multi-agent full-game scenario, and to the
best of our knowledge, no open benchmark on this scenario has been released to
the public. In this work, we fill the gap by providing a population-based MARL
training pipeline and hyperparameter settings on multi-agent football scenario
that outperforms the bot with difficulty 1.0 from scratch within 2 million
steps. Our experiments serve as a reference for the expected performance of
Independent Proximal Policy Optimization (IPPO), a state-of-the-art multi-agent
reinforcement learning algorithm where each agent tries to maximize its own
policy independently across various training configurations. Meanwhile, we
open-source our training framework Light-MALib which extends the MALib codebase
by distributed and asynchronized implementation with additional analytical
tools for football games. Finally, we provide guidance for building strong
football AI with population-based training and release diverse pretrained
policies for benchmarking. The goal is to provide the community with a head
start for whoever experiments with their work on GRF and a simple-to-use
population-based training framework for further improving their agents through
self-play. The implementation is available at
https://github.com/Shanghai-Digital-Brain-Laboratory/DB-Football. | [
"cs.LG",
"cs.MA"
] | false |
2305.09495 | 2023-05-16T14:47:55Z | Hardware Realization of Nonlinear Activation Functions for NN-based
Optical Equalizers | [
"Sasipim Srivallapanondh",
"Pedro J. Freire",
"Antonio Napoli",
"Sergei K. Turitsyn",
"Jaroslaw E. Prilepsky"
] | To reduce the complexity of the hardware implementation of neural
network-based optical channel equalizers, we demonstrate that the performance
of the biLSTM equalizer with approximated activation functions is close to that
of the original model. | [
"cs.LG",
"physics.optics"
] | false |
2305.09565 | 2023-05-16T16:02:18Z | Toward Falsifying Causal Graphs Using a Permutation-Based Test | [
"Elias Eulig",
"Atalanti A. Mastakouri",
"Patrick Blöbaum",
"Michaela Hardt",
"Dominik Janzing"
] | Understanding the causal relationships among the variables of a system is
paramount to explain and control its behaviour. Inferring the causal graph from
observational data without interventions, however, requires a lot of strong
assumptions that are not always realistic. Even for domain experts it can be
challenging to express the causal graph. Therefore, metrics that quantitatively
assess the goodness of a causal graph provide helpful checks before using it in
downstream tasks. Existing metrics provide an absolute number of
inconsistencies between the graph and the observed data, and without a
baseline, practitioners are left to answer the hard question of how many such
inconsistencies are acceptable or expected. Here, we propose a novel
consistency metric by constructing a surrogate baseline through node
permutations. By comparing the number of inconsistencies with those on the
surrogate baseline, we derive an interpretable metric that captures whether the
DAG fits significantly better than random. Evaluating on both simulated and
real data sets from various domains, including biology and cloud monitoring, we
demonstrate that the true DAG is not falsified by our metric, whereas the wrong
graphs given by a hypothetical user are likely to be falsified. | [
"stat.ML",
"cs.LG"
] | false |
2305.09600 | 2023-05-16T16:53:27Z | Deep Reinforcement Learning to Maximize Arterial Usage during Extreme
Congestion | [
"Ashutosh Dutta",
"Milan Jain",
"Arif Khan",
"Arun Sathanur"
] | Collisions, crashes, and other incidents on road networks, if left
unmitigated, can potentially cause cascading failures that can affect large
parts of the system. Timely handling such extreme congestion scenarios is
imperative to reduce emissions, enhance productivity, and improve the quality
of urban living. In this work, we propose a Deep Reinforcement Learning (DRL)
approach to reduce traffic congestion on multi-lane freeways during extreme
congestion. The agent is trained to learn adaptive detouring strategies for
congested freeway traffic such that the freeway lanes along with the local
arterial network in proximity are utilized optimally, with rewards being
congestion reduction and traffic speed improvement. The experimental setup is a
2.6-mile-long 4-lane freeway stretch in Shoreline, Washington, USA with two
exits and associated arterial roads simulated on a microscopic and continuous
multi-modal traffic simulator SUMO (Simulation of Urban MObility) while using
parameterized traffic profiles generated using real-world traffic data. Our
analysis indicates that DRL-based controllers can improve average traffic speed
by 21% when compared to no-action during steep congestion. The study further
discusses the trade-offs involved in the choice of reward functions, the impact
of human compliance on agent performance, and the feasibility of knowledge
transfer from one agent to another to address data sparsity and scaling issues. | [
"cs.AI",
"cs.LG"
] | false |
2305.09605 | 2023-05-16T16:56:19Z | Expressiveness Remarks for Denoising Diffusion Models and Samplers | [
"Francisco Vargas",
"Teodora Reu",
"Anna Kerekes"
] | Denoising diffusion models are a class of generative models which have
recently achieved state-of-the-art results across many domains. Gradual noise
is added to the data using a diffusion process, which transforms the data
distribution into a Gaussian. Samples from the generative model are then
obtained by simulating an approximation of the time reversal of this diffusion
initialized by Gaussian samples. Recent research has explored adapting
diffusion models for sampling and inference tasks. In this paper, we leverage
known connections to stochastic control akin to the F\"ollmer drift to extend
established neural network approximation results for the F\"ollmer drift to
denoising diffusion models and samplers. | [
"stat.ML",
"cs.LG"
] | false |
2305.09608 | 2023-05-16T17:00:36Z | Data Augmentation for Conflict and Duplicate Detection in Software
Engineering Sentence Pairs | [
"Garima Malik",
"Mucahit Cevik",
"Ayşe Başar"
] | This paper explores the use of text data augmentation techniques to enhance
conflict and duplicate detection in software engineering tasks through sentence
pair classification. The study adapts generic augmentation techniques such as
shuffling, back translation, and paraphrasing and proposes new data
augmentation techniques such as Noun-Verb Substitution, target-lemma
replacement and Actor-Action Substitution for software requirement texts. A
comprehensive empirical analysis is conducted on six software text datasets to
identify conflicts and duplicates among sentence pairs. The results demonstrate
that data augmentation techniques have a significant impact on the performance
of all software pair text datasets. On the other hand, in cases where the
datasets are relatively balanced, the use of augmentation techniques may result
in a negative effect on the classification performance. | [
"cs.SE",
"cs.LG"
] | false |
2305.09625 | 2023-05-16T17:24:28Z | Conditional variational autoencoder with Gaussian process regression
recognition for parametric models | [
"Xuehan Zhang",
"Lijian Jiang"
] | In this article, we present a data-driven method for parametric models with
noisy observation data. Gaussian process regression based reduced order
modeling (GPR-based ROM) can realize fast online predictions without using
equations in the offline stage. However, GPR-based ROM does not perform well
for complex systems since POD projection is naturally linear. Conditional
variational autoencoder (CVAE) can address this issue via nonlinear neural
networks but it has more model complexity, which poses challenges for training
and tuning hyperparameters. To this end, we propose a framework of CVAE with
Gaussian process regression recognition (CVAE-GPRR). The proposed method
consists of a recognition model and a likelihood model. In the recognition
model, we first extract low-dimensional features from the data by POD to filter
out redundant high-frequency information. Then, a non-parametric GPR model is
used to learn the map from parameters to POD latent variables, which can also
alleviate the impact of noise. CVAE-GPRR can achieve similar accuracy to CVAE
but with fewer parameters. In the likelihood model, neural networks are
used to reconstruct data. Besides the samples of POD latent variables and input
parameters, physical variables are also added as the inputs to make predictions
in the whole physical space. This cannot be achieved by either GPR-based ROM
or CVAE. Moreover, the numerical results show that CVAE-GPRR may alleviate the
overfitting issue in CVAE. | [
"cs.CE",
"cs.LG"
] | false |
2305.09627 | 2023-05-16T17:31:50Z | Addressing computational challenges in physical system simulations with
machine learning | [
"Sabber Ahamed",
"Md Mesbah Uddin"
] | In this paper, we present a machine learning-based data generator framework
tailored to aid researchers who utilize simulations to examine various physical
systems or processes. High computational costs and the resulting limited data
often pose significant challenges to gaining insights into these systems or
processes. Our approach involves a two-step process: initially, we train a
supervised predictive model using a limited simulated dataset to predict
simulation outcomes. Subsequently, a reinforcement learning agent is trained to
generate accurate, simulation-like data by leveraging the supervised model.
With this framework, researchers can generate more accurate data and know the
outcomes without running high computational simulations, which enables them to
explore the parameter space more efficiently and gain deeper insights into
physical systems or processes. We demonstrate the effectiveness of the proposed
framework by applying it to two case studies, one focusing on earthquake
rupture physics and the other on new material development. | [
"cs.LG",
"physics.comp-ph"
] | false |
2305.09648 | 2023-05-16T17:49:04Z | Prompt-Tuning Decision Transformer with Preference Ranking | [
"Shengchao Hu",
"Li Shen",
"Ya Zhang",
"Dacheng Tao"
] | Prompt-tuning has emerged as a promising method for adapting pre-trained
models to downstream tasks or aligning with human preferences. Prompt learning
is widely used in NLP but has limited applicability to RL due to the complex
physical meaning and environment-specific information contained within RL
prompts. These factors require supervised learning to imitate the
demonstrations and may result in a loss of meaning after learning.
Additionally, directly extending prompt-tuning approaches to RL is challenging
because RL prompts guide agent behavior based on environmental modeling and
analysis, rather than filling in missing information, making it unlikely that
adjustments to the prompt format for downstream tasks, as in NLP, can yield
significant improvements. In this work, we propose the Prompt-Tuning DT
algorithm to address these challenges by using trajectory segments as prompts
to guide RL agents in acquiring environmental information and optimizing
prompts via black-box tuning to enhance their ability to contain more relevant
information, thereby enabling agents to make better decisions. Our approach
involves randomly sampling a Gaussian distribution to fine-tune the elements of
the prompt trajectory and using a preference ranking function to find the
optimization direction, thereby providing more informative prompts and guiding
the agent towards specific preferences in the target environment. Extensive
experiments show that with only 0.03% of the parameters learned, Prompt-Tuning
DT achieves comparable or even better performance than full-model fine-tuning
in low-data scenarios. Our work contributes to the advancement of prompt-tuning
approaches in RL, providing a promising direction for optimizing large RL
agents for specific preference tasks. | [
"cs.LG",
"cs.AI"
] | false |
2305.09655 | 2023-05-16T17:54:14Z | RAMario: Experimental Approach to Reptile Algorithm -- Reinforcement
Learning for Mario | [
"Sanyam Jain"
] | This research paper presents an experimental approach to using the Reptile
algorithm for reinforcement learning to train a neural network to play Super
Mario Bros. We implement the Reptile algorithm using the Super Mario Bros Gym
library and TensorFlow in Python, creating a neural network model with a single
convolutional layer, a flatten layer, and a dense layer. We define the
optimizer and use the Reptile class to create an instance of the Reptile
meta-learning algorithm. We train the model using multiple tasks and episodes,
choosing actions using the current weights of the neural network model, taking
those actions in the environment, and updating the model weights using the
Reptile algorithm. We evaluate the performance of the algorithm by printing the
total reward for each episode. In addition, we compare the performance of the
Reptile algorithm approach to two other popular reinforcement learning
algorithms, Proximal Policy Optimization (PPO) and Deep Q-Network (DQN),
applied to the same Super Mario Bros task. Our results demonstrate that the
Reptile algorithm provides a promising approach to few-shot learning in video
game AI, with comparable or even better performance than the other two
algorithms, particularly in terms of moves vs. distance that the agent performs
over 1M episodes of training. The results show that the best total distances for
world 1-2 in the game environment were ~1732 (PPO), ~1840 (DQN), and ~2300 (RAMario).
Full code is available at https://github.com/s4nyam/RAMario. | [
"cs.LG",
"cs.MA"
] | false |
2305.09695 | 2023-05-16T06:10:54Z | Applying Machine Learning Analysis for Software Quality Test | [
"Al Khan",
"Remudin Reshid Mekuria",
"Ruslan Isaev"
] | One of the biggest expenses in software development is maintenance.
Therefore, it is critical to understand what triggers maintenance and whether it
can be predicted. Numerous studies have demonstrated that specific methods of
assessing the complexity of created programs may produce useful prediction
models to ascertain the likelihood of maintenance due to software failures. Such
prediction is routinely performed prior to release, and setting up the models
frequently calls for specific object-oriented software measurements. It is not
always the case that software developers have access to these measurements. In
this paper, the machine learning is applied on the available data to calculate
the cumulative software failure levels. A machine learning technique to forecast
a software's residual defectiveness can be explored as a solution
to the challenge of predicting residual flaws. Software metrics and defect data
were separated out of the static source code repository. Static code is used to
create software metrics, and reported bugs in the repository are used to gather
defect information. By using a correlation method, metrics that had no
connection to the defect data were removed. This makes it possible to analyze
all the data without pausing the programming process. The primary issue with
large, sophisticated software is that it is impossible to control everything
manually, and the cost of an error can be high. As a consequence, developers may
miss errors during testing, which raises maintenance costs.
Finding a method to accurately forecast software defects is the overall
objective. | [
"cs.SE",
"cs.LG"
] | false |
2305.09738 | 2023-05-16T18:19:12Z | CQural: A Novel CNN based Hybrid Architecture for Quantum Continual
Machine Learning | [
"Sanyam Jain"
] | Training machine learning models in an incremental fashion is not only
important but also an efficient way to achieve artificial general intelligence.
The ability that humans possess of continuous or lifelong learning helps them
to not forget previously learned tasks. However, current neural network models
are prone to catastrophic forgetting when it comes to continual learning. Many
researchers have come up with several techniques in order to reduce the effect
of forgetting from neural networks, however, all techniques are studied
classically with a very less focus on changing the machine learning model
architecture. In this research paper, we show that it is not only possible to
circumvent catastrophic forgetting in continual learning with novel hybrid
classical-quantum neural networks, but also explains what features are most
important to learn for classification. In addition, we also claim that if the
model is trained with these explanations, it tends to give better performance
and learn specific features that are far from the decision boundary. Finally,
we present the experimental results to show comparisons between classical and
classical-quantum hybrid architectures on benchmark MNIST and CIFAR-10
datasets. After successful runs of the learning procedure, we found that the
hybrid neural network outperforms the classical one in terms of remembering the
right evidence of the class-specific features. | [
"cs.LG",
"cs.AI"
] | false |
2305.09838 | 2023-05-16T22:41:56Z | Coagent Networks: Generalized and Scaled | [
"James E. Kostas",
"Scott M. Jordan",
"Yash Chandak",
"Georgios Theocharous",
"Dhawal Gupta",
"Martha White",
"Bruno Castro da Silva",
"Philip S. Thomas"
] | Coagent networks for reinforcement learning (RL) [Thomas and Barto, 2011]
provide a powerful and flexible framework for deriving principled learning
rules for arbitrary stochastic neural networks. The coagent framework offers an
alternative to backpropagation-based deep learning (BDL) that overcomes some of
backpropagation's main limitations. For example, coagent networks can compute
different parts of the network \emph{asynchronously} (at different rates or at
different times), can incorporate non-differentiable components that cannot be
used with backpropagation, and can explore at levels higher than their action
spaces (that is, they can be designed as hierarchical networks for exploration
and/or temporal abstraction). However, the coagent framework is not just an
alternative to BDL; the two approaches can be blended: BDL can be combined with
coagent learning rules to create architectures with the advantages of both
approaches. This work generalizes the coagent theory and learning rules
provided by previous works; this generalization provides more flexibility for
network architecture design within the coagent framework. This work also
studies one of the chief disadvantages of coagent networks: high variance
updates for networks that have many coagents and do not use backpropagation. We
show that a coagent algorithm with a policy network that does not use
backpropagation can scale to a challenging RL domain with a high-dimensional
state and action space (the MuJoCo Ant environment), learning reasonable
(although not state-of-the-art) policies. These contributions motivate and
provide a more general theoretical foundation for future work that studies
coagent networks. | [
"cs.LG",
"cs.AI"
] | false |
2305.10452 | 2023-05-16T23:38:34Z | Comparison of classifiers in challenge scheme | [
"Sergio Nava-Muñoz",
"Mario Graff Guerrero",
"Hugo Jair Escalante"
] | In recent decades, challenges have become very popular in scientific research
as a form of crowdsourcing. In particular, challenges are essential for
developing machine learning algorithms. For the challenges settings, it is
vital to establish the scientific question, the dataset (with adequate quality,
quantity, diversity, and complexity), performance metrics, as well as a way to
authenticate the participants' results (Gold Standard). This paper addresses
the problem of evaluating the performance of different competitors (algorithms)
under the restrictions imposed by the challenge scheme, such as the comparison
of multiple competitors with a unique dataset (with fixed size), a minimal
number of submissions and, a set of metrics chosen to assess performance. The
algorithms are sorted according to the performance metric. Still, it is common
to observe performance differences among competitors as small as hundredths or
even thousandths, so the question is whether the differences are significant.
This paper analyzes the results of the MeOffendEs@IberLEF 2021 competition and
proposes to make inference through resampling techniques (bootstrap) to support
Challenge organizers' decision-making. | [
"cs.LG",
"cs.PF"
] | false |
2305.09129 | 2023-05-16T03:20:22Z | Graph Reinforcement Learning for Network Control via Bi-Level
Optimization | [
"Daniele Gammelli",
"James Harrison",
"Kaidi Yang",
"Marco Pavone",
"Filipe Rodrigues",
"Francisco C. Pereira"
] | Optimization problems over dynamic networks have been extensively studied and
widely used in the past decades to formulate numerous real-world problems.
However, (1) traditional optimization-based approaches do not scale to large
networks, and (2) the design of good heuristics or approximation algorithms
often requires significant manual trial-and-error. In this work, we argue that
data-driven strategies can automate this process and learn efficient algorithms
without compromising optimality. To do so, we present network control problems
through the lens of reinforcement learning and propose a graph network-based
framework to handle a broad class of problems. Instead of naively computing
actions over high-dimensional graph elements, e.g., edges, we propose a
bi-level formulation where we (1) specify a desired next state via RL, and (2)
solve a convex program to best achieve it, leading to drastically improved
scalability and performance. We further highlight a collection of desirable
features to system designers, investigate design decisions, and present
experiments on real-world control problems showing the utility, scalability,
and flexibility of our framework. | [
"cs.LG",
"cs.SY",
"eess.SY",
"math.OC"
] | false |
2305.09179 | 2023-05-16T05:37:06Z | Ortho-ODE: Enhancing Robustness and of Neural ODEs against Adversarial
Attacks | [
"Vishal Purohit"
] | Neural Ordinary Differential Equations (NODEs) probed the usage of numerical
solvers to solve the differential equation characterized by a Neural Network
(NN), therefore initiating a new paradigm of deep learning models with infinite
depth. NODEs were designed to tackle the irregular time series problem.
However, NODEs have demonstrated robustness against various noises and
adversarial attacks. This paper is about the natural robustness of NODEs and
examines the cause behind such surprising behaviour. We show that by
controlling the Lipschitz constant of the ODE dynamics the robustness can be
significantly improved. We derive our approach from Gronwall's inequality.
Further, we draw parallels between contractivity theory and Gronwall's
inequality. Experimentally, we corroborate the enhanced robustness on several
datasets - MNIST, CIFAR-10, and CIFAR-100. We also present the impact of
adaptive and non-adaptive solvers on the robustness of NODEs. | [
"cs.LG",
"cs.AI",
"cs.CR"
] | false |
2305.09199 | 2023-05-16T06:15:13Z | Machine learning enhanced real-time aerodynamic forces prediction based
on sparse pressure sensor inputs | [
"Junming Duan",
"Qian Wang",
"Jan S. Hesthaven"
] | Accurate prediction of aerodynamic forces in real-time is crucial for
autonomous navigation of unmanned aerial vehicles (UAVs). This paper presents a
data-driven aerodynamic force prediction model based on a small number of
pressure sensors located on the surface of UAV. The model is built on a linear
term that can make a reasonably accurate prediction and a nonlinear correction
for accuracy improvement. The linear term is based on a reduced basis
reconstruction of the surface pressure distribution, where the basis is
extracted from numerical simulation data and the basis coefficients are
determined by solving linear pressure reconstruction equations at a set of
sensor locations. Sensor placement is optimized using the discrete empirical
interpolation method (DEIM). Aerodynamic forces are computed by integrating the
reconstructed surface pressure distribution. The nonlinear term is an
artificial neural network (NN) that is trained to bridge the gap between the
ground truth and the DEIM prediction, especially in the scenario where the DEIM
model is constructed from simulation data with limited fidelity. A large
network is not necessary for accurate correction as the linear model already
captures the main dynamics of the surface pressure field, thus yielding an
efficient DEIM+NN aerodynamic force prediction model. The model is tested on
numerical and experimental dynamic stall data of a 2D NACA0015 airfoil, and
numerical simulation data of dynamic stall of a 3D drone. Numerical results
demonstrate that the machine learning enhanced model can make fast and accurate
predictions of aerodynamic forces using only a few pressure sensors, even for
the NACA0015 case in which the simulations do not agree well with the wind
tunnel experiments. Furthermore, the model is robust to noise. | [
"cs.LG",
"cs.NA",
"math.NA",
"physics.flu-dyn"
] | false |
2305.09207 | 2023-05-16T06:32:43Z | Counterfactual Outcome Prediction using Structured State Space Model | [
"Vishal Purohit"
] | Counterfactual outcome prediction in longitudinal data has recently gained
attention due to its potential applications in healthcare and social sciences.
In this paper, we explore the use of the state space model, a popular sequence
model, for this task. Specifically, we compare the performance of two models:
Treatment Effect Neural Controlled Differential Equation (TE-CDE) and
structured state space model (S4Model). While TE-CDE uses controlled
differential equations to address time-dependent confounding, it suffers from
optimization issues and slow training. In contrast, S4Model is more efficient
at modeling long-range dependencies and easier to train. We evaluate the models
on a simulated lung tumor growth dataset and find that S4Model outperforms
TE-CDE with 1.63x reduction in per epoch training time and 10x better
normalized mean squared error. Additionally, S4Model is more stable during
training and less sensitive to weight initialization than TE-CDE. Our results
suggest that the state space model may be a promising approach for
counterfactual outcome prediction in longitudinal data, with S4Model offering a
more efficient and effective alternative to TE-CDE. | [
"cs.LG",
"cs.AI",
"stat.ME"
] | false |
2305.09216 | 2023-05-16T06:48:40Z | Component Training of Turbo Autoencoders | [
"Jannis Clausius",
"Marvin Geiselhart",
"Stephan ten Brink"
] | Isolated training with Gaussian priors (TGP) of the component autoencoders of
turbo-autoencoder architectures enables faster, more consistent training and
better generalization to arbitrary decoding iterations than training based on
deep unfolding. We propose fitting the components via extrinsic information
transfer (EXIT) charts to a desired behavior which enables scaling to larger
message lengths ($k \approx 1000$) while retaining competitive performance. To
the best of our knowledge, this is the first autoencoder that performs close to
classical codes in this regime. Although the binary cross-entropy (BCE) loss
function optimizes the bit error rate (BER) of the components, the design via
EXIT charts enables to focus on the block error rate (BLER). In serially
concatenated systems the component-wise TGP approach is well known for inner
components with a fixed outer binary interface, e.g., a learned inner code or
equalizer, with an outer binary error correcting code. In this paper we extend
the component training to structures with an inner and outer autoencoder, where
we propose a new 1-bit quantization strategy for the encoder outputs based on
the underlying communication problem. Finally, we discuss the model complexity
of the learned components during design time (training) and inference and show
that the number of weights in the encoder can be reduced by 99.96 %. | [
"cs.IT",
"cs.LG",
"math.IT"
] | false |
2305.09330 | 2023-05-16T10:07:41Z | Consumer-side Fairness in Recommender Systems: A Systematic Survey of
Methods and Evaluation | [
"Bjørnar Vassøy",
"Helge Langseth"
] | In the current landscape of ever-increasing levels of digitalization, we are
facing major challenges pertaining to scalability. Recommender systems have
become irreplaceable both for helping users navigate the increasing amounts of
data and, conversely, aiding providers in marketing products to interested
users. The growing awareness of discrimination in machine learning methods has
recently motivated both academia and industry to research how fairness can be
ensured in recommender systems. For recommender systems, such issues are well
exemplified by occupation recommendation, where biases in historical data may
lead to recommender systems relating one gender to lower wages or to the
propagation of stereotypes. In particular, consumer-side fairness, which
focuses on mitigating discrimination experienced by users of recommender
systems, has seen a vast number of diverse approaches for addressing different
types of discrimination. The nature of said discrimination depends on the
setting and the applied fairness interpretation, of which there are many
variations. This survey serves as a systematic overview and discussion of the
current research on consumer-side fairness in recommender systems. To that end,
a novel taxonomy based on high-level fairness interpretation is proposed and
used to categorize the research and their proposed fairness evaluation metrics.
Finally, we highlight some suggestions for the future direction of the field. | [
"cs.IR",
"cs.AI",
"cs.CY",
"cs.LG"
] | false |
2305.09348 | 2023-05-16T11:06:09Z | One-Shot Online Testing of Deep Neural Networks Based on Distribution
Shift Detection | [
"Soyed Tuhin Ahmed",
"Mehdi B. Tahoori"
] | Neural networks (NNs) are capable of learning complex patterns and
relationships in data to make predictions with high accuracy, making them
useful for various tasks. However, NNs are both computation-intensive and
memory-intensive methods, making them challenging for edge applications. To
accelerate the most common operations (matrix-vector multiplication) in NNs,
hardware accelerator architectures such as computation-in-memory (CiM) with
non-volatile memristive crossbars are utilized. Although they offer benefits
such as power efficiency, parallelism, and nonvolatility, they suffer from
various faults and variations, both during manufacturing and lifetime
operations. This can lead to faulty computations and, in turn, degradation of
post-mapping inference accuracy, which is unacceptable for many applications,
including safety-critical applications. Therefore, proper testing of NN
hardware accelerators is required. In this paper, we propose a \emph{one-shot}
testing approach that can test NNs accelerated on memristive crossbars with
only one test vector, making it very suitable for online testing applications.
Our approach can consistently achieve $100\%$ fault coverage across several
large topologies with up to $201$ layers and challenging tasks like semantic
segmentation. Nevertheless, compared to existing methods, the fault coverage is
improved by up to $24\%$, the memory overhead is only $0.0123$ MB, a reduction
of up to $19980\times$ and the number of test vectors is reduced by
$10000\times$. | [
"cs.LG",
"cs.AI",
"cs.ET"
] | false |
2305.09536 | 2023-05-16T15:27:17Z | A Comparative Study of Methods for Estimating Conditional Shapley Values
and When to Use Them | [
"Lars Henry Berge Olsen",
"Ingrid Kristine Glad",
"Martin Jullum",
"Kjersti Aas"
] | Shapley values originated in cooperative game theory but are extensively used
today as a model-agnostic explanation framework to explain predictions made by
complex machine learning models in the industry and academia. There are several
algorithmic approaches for computing different versions of Shapley value
explanations. Here, we focus on conditional Shapley values for predictive
models fitted to tabular data. Estimating precise conditional Shapley values is
difficult as they require the estimation of non-trivial conditional
expectations. In this article, we develop new methods, extend earlier proposed
approaches, and systematize the new refined and existing methods into different
method classes for comparison and evaluation. The method classes use either
Monte Carlo integration or regression to model the conditional expectations. We
conduct extensive simulation studies to evaluate how precisely the different
method classes estimate the conditional expectations, and thereby the
conditional Shapley values, for different setups. We also apply the methods to
several real-world data experiments and provide recommendations for when to use
the different method classes and approaches. Roughly speaking, we recommend
using parametric methods when we can specify the data distribution almost
correctly, as they generally produce the most accurate Shapley value
explanations. When the distribution is unknown, both generative methods and
regression models with a similar form as the underlying predictive model are
good and stable options. Regression-based methods are often slow to train but
produce the Shapley value explanations quickly once trained. The reverse is
true for Monte Carlo-based methods, making the different methods appropriate in
different practical situations. | [
"stat.ML",
"cs.LG",
"stat.CO"
] | false |
2305.09543 | 2023-05-16T15:37:32Z | EEG-based Sleep Staging with Hybrid Attention | [
"Xinliang Zhou",
"Chenyu Liu",
"Jiaping Xiao",
"Yang Liu"
] | Sleep staging is critical for assessing sleep quality and diagnosing sleep
disorders. However, capturing both the spatial and temporal relationships
within electroencephalogram (EEG) signals during different sleep stages remains
challenging. In this paper, we propose a novel framework called the Hybrid
Attention EEG Sleep Staging (HASS) Framework. Specifically, we propose a
well-designed spatio-temporal attention mechanism to adaptively assign weights
to inter-channels and intra-channel EEG segments based on the spatio-temporal
relationship of the brain during different sleep stages. Experiment results on
the MASS and ISRUC datasets demonstrate that HASS can significantly improve
typical sleep staging networks. Our proposed framework alleviates the
difficulties of capturing the spatial-temporal relationship of EEG signals
during sleep staging and holds promise for improving the accuracy and
reliability of sleep assessment in both clinical and research settings. | [
"eess.SP",
"cs.AI",
"cs.LG"
] | false |
2305.09579 | 2023-05-16T16:26:49Z | Private Everlasting Prediction | [
"Moni Naor",
"Kobbi Nissim",
"Uri Stemmer",
"Chao Yan"
] | A private learner is trained on a sample of labeled points and generates a
hypothesis that can be used for predicting the labels of newly sampled points
while protecting the privacy of the training set [Kasiviswannathan et al., FOCS
2008]. Research uncovered that private learners may need to exhibit
significantly higher sample complexity than non-private learners as is the case
with, e.g., learning of one-dimensional threshold functions [Bun et al., FOCS
2015, Alon et al., STOC 2019].
We explore prediction as an alternative to learning. Instead of putting
forward a hypothesis, a predictor answers a stream of classification queries.
Earlier work has considered a private prediction model with just a single
classification query [Dwork and Feldman, COLT 2018]. We observe that when
answering a stream of queries, a predictor must modify the hypothesis it uses
over time, and, furthermore, that it must use the queries for this
modification, hence introducing potential privacy risks with respect to the
queries themselves.
We introduce private everlasting prediction taking into account the privacy
of both the training set and the (adaptively chosen) queries made to the
predictor. We then present a generic construction of private everlasting
predictors in the PAC model. The sample complexity of the initial training
sample in our construction is quadratic (up to polylog factors) in the VC
dimension of the concept class. Our construction allows prediction for all
concept classes with finite VC dimension, and in particular threshold functions
with constant size initial training sample, even when considered over infinite
domains, whereas it is known that the sample complexity of privately learning
threshold functions must grow as a function of the domain size and hence is
impossible for infinite domains. | [
"cs.LG",
"cs.CR",
"cs.DS"
] | false |
2305.09594 | 2023-05-16T16:47:02Z | HiNoVa: A Novel Open-Set Detection Method for Automating RF Device
Authentication | [
"Luke Puppo",
"Weng-Keen Wong",
"Bechir Hamdaoui",
"Abdurrahman Elmaghbub"
] | New capabilities in wireless network security have been enabled by deep
learning, which leverages patterns in radio frequency (RF) data to identify and
authenticate devices. Open-set detection is an area of deep learning that
identifies samples captured from new devices during deployment that were not
part of the training set. Past work in open-set detection has mostly been
applied to independent and identically distributed data such as images. In
contrast, RF signal data present a unique set of challenges as the data forms a
time series with non-linear time dependencies among the samples. We introduce a
novel open-set detection approach based on the patterns of the hidden state
values within a Convolutional Neural Network (CNN) Long Short-Term Memory
(LSTM) model. Our approach greatly improves the Area Under the Precision-Recall
Curve on LoRa, Wireless-WiFi, and Wired-WiFi datasets, and hence, can be used
successfully to monitor and control unauthorized network access of wireless
devices. | [
"cs.CR",
"cs.LG",
"eess.SP"
] | false |
2305.09596 | 2023-05-16T16:51:07Z | Identification and Classification of Exoplanets Using Machine Learning
Techniques | [
"Prithivraj G",
"Alka Kumari"
] | NASA's Kepler Space Telescope has been instrumental in the task of finding
the presence of exoplanets in our galaxy. This search has been supported by
computational data analysis to identify exoplanets from the signals received by
the Kepler telescope. In this paper, we consider building upon some existing
work on exoplanet identification using residual networks for the data of the
Kepler space telescope and its extended mission K2. This paper aims to explore
how deep learning algorithms can help in classifying the presence of exoplanets
with a smaller amount of data in one case and a more extensive variety of data in
another. In addition to the standard CNN-based method, we propose a Siamese
architecture that is particularly useful in addressing classification in a
low-data scenario. The CNN and ResNet algorithms achieved an average accuracy
of 68% for three classes and 86% for two-class classification. However, for
both the three and two classes, the Siamese algorithm achieved 99% accuracy. | [
"astro-ph.EP",
"astro-ph.IM",
"cs.LG",
"physics.comp-ph"
] | false |
2305.09619 | 2023-05-16T17:13:00Z | The Power of Learned Locally Linear Models for Nonlinear Policy
Optimization | [
"Daniel Pfrommer",
"Max Simchowitz",
"Tyler Westenbroek",
"Nikolai Matni",
"Stephen Tu"
] | A common pipeline in learning-based control is to iteratively estimate a
model of system dynamics, and apply a trajectory optimization algorithm -
e.g.~$\mathtt{iLQR}$ - on the learned model to minimize a target cost. This
paper conducts a rigorous analysis of a simplified variant of this strategy for
general nonlinear systems. We analyze an algorithm which iterates between
estimating local linear models of nonlinear system dynamics and performing
$\mathtt{iLQR}$-like policy updates. We demonstrate that this algorithm attains
sample complexity polynomial in relevant problem parameters, and, by
synthesizing locally stabilizing gains, overcomes exponential dependence in
problem horizon. Experimental results validate the performance of our
algorithm, and compare to natural deep-learning baselines. | [
"cs.LG",
"math.OC",
"stat.ML"
] | false |
2305.09626 | 2023-05-16T17:27:34Z | Balancing Risk and Reward: An Automated Phased Release Strategy | [
"Yufan Li",
"Jialiang Mao",
"Iavor Bojinov"
] | Phased releases are a common strategy in the technology industry for
gradually releasing new products or updates through a sequence of A/B tests in
which the number of treated units gradually grows until full deployment or
deprecation. Performing phased releases in a principled way requires selecting
the proportion of units assigned to the new release in a way that balances the
risk of an adverse effect with the need to iterate and learn from the
experiment rapidly. In this paper, we formalize this problem and propose an
algorithm that automatically determines the release percentage at each stage in
the schedule, balancing the need to control risk while maximizing ramp-up
speed. Our framework models the challenge as a constrained batched bandit
problem that ensures that our pre-specified experimental budget is not depleted
with high probability. Our proposed algorithm leverages an adaptive Bayesian
approach in which the maximal number of units assigned to the treatment is
determined by the posterior distribution, ensuring that the probability of
depleting the remaining budget is low. Notably, our approach analytically
solves the ramp sizes by inverting probability bounds, eliminating the need for
challenging rare-event Monte Carlo simulation. It only requires computing means
and variances of outcome subsets, making it highly efficient and
parallelizable. | [
"stat.ML",
"cs.LG",
"stat.ME"
] | false |
2305.09636 | 2023-05-16T17:41:25Z | SoundStorm: Efficient Parallel Audio Generation | [
"Zalán Borsos",
"Matt Sharifi",
"Damien Vincent",
"Eugene Kharitonov",
"Neil Zeghidour",
"Marco Tagliasacchi"
] | We present SoundStorm, a model for efficient, non-autoregressive audio
generation. SoundStorm receives as input the semantic tokens of AudioLM, and
relies on bidirectional attention and confidence-based parallel decoding to
generate the tokens of a neural audio codec. Compared to the autoregressive
generation approach of AudioLM, our model produces audio of the same quality
and with higher consistency in voice and acoustic conditions, while being two
orders of magnitude faster. SoundStorm generates 30 seconds of audio in 0.5
seconds on a TPU-v4. We demonstrate the ability of our model to scale audio
generation to longer sequences by synthesizing high-quality, natural dialogue
segments, given a transcript annotated with speaker turns and a short prompt
with the speakers' voices. | [
"cs.SD",
"cs.LG",
"eess.AS"
] | true |
2305.09703 | 2023-05-16T11:38:19Z | Dynamic Causal Explanation Based Diffusion-Variational Graph Neural
Network for Spatio-temporal Forecasting | [
"Guojun Liang",
"Prayag Tiwari",
"Sławomir Nowaczyk",
"Stefan Byttner",
"Fernando Alonso-Fernandez"
] | Graph neural networks (GNNs), especially dynamic GNNs, have become a research
hotspot in spatio-temporal forecasting problems. While many dynamic graph
construction methods have been developed, relatively few of them explore the
causal relationship between neighbour nodes. Thus, the resulting models lack
strong explainability for the causal relationship between the neighbour nodes
of the dynamically generated graphs, which can easily lead to a risk in
subsequent decisions. Moreover, few of them consider the uncertainty and noise
of dynamic graphs based on the time series datasets, which are ubiquitous in
real-world graph structure networks. In this paper, we propose a novel Dynamic
Diffusion-Variational Graph Neural Network (DVGNN) for spatio-temporal
forecasting. For dynamic graph construction, an unsupervised generative model
is devised. Two layers of graph convolutional network (GCN) are applied to
calculate the posterior distribution of the latent node embeddings in the
encoder stage. Then, a diffusion model is used to infer the dynamic link
probability and reconstruct causal graphs in the decoder stage adaptively. The
new loss function is derived theoretically, and the reparameterization trick is
adopted in estimating the probability distribution of the dynamic graphs by
Evidence Lower Bound during the backpropagation period. After obtaining the
generated graphs, dynamic GCN and temporal attention are applied to predict
future states. Experiments are conducted on four real-world datasets of
different graph structures in different domains. The results demonstrate that
the proposed DVGNN model outperforms state-of-the-art approaches and achieves
an outstanding Root Mean Squared Error result while exhibiting higher robustness.
Also, by F1-score and probability distribution analysis, we demonstrate that
DVGNN better reflects the causal relationship and uncertainty of dynamic
graphs. | [
"cs.LG",
"cs.AI",
"cs.SI"
] | false |
2305.09705 | 2023-05-16T12:23:18Z | Random Edge Coding: One-Shot Bits-Back Coding of Large Labeled Graphs | [
"Daniel Severo",
"James Townsend",
"Ashish Khisti",
"Alireza Makhzani"
] | We present a one-shot method for compressing large labeled graphs called
Random Edge Coding. When paired with a parameter-free model based on P\'olya's
Urn, the worst-case computational and memory complexities scale quasi-linearly
and linearly with the number of observed edges, making it efficient on sparse
graphs, and requires only integer arithmetic. Key to our method is bits-back
coding, which is used to sample edges and vertices without replacement from the
edge-list in a way that preserves the structure of the graph. Optimality is
proven under a class of random graph models that are invariant to permutations
of the edges and of vertices within an edge. Experiments indicate Random Edge
Coding can achieve competitive compression performance on real-world network
datasets and scales to graphs with millions of nodes and edges. | [
"cs.LG",
"cs.IT",
"math.IT"
] | false |
2305.09729 | 2023-05-16T18:01:49Z | FedHGN: A Federated Framework for Heterogeneous Graph Neural Networks | [
"Xinyu Fu",
"Irwin King"
] | Heterogeneous graph neural networks (HGNNs) can learn from typed and
relational graph data more effectively than conventional GNNs. With larger
parameter spaces, HGNNs may require more training data, which is often scarce
in real-world applications due to privacy regulations (e.g., GDPR). Federated
graph learning (FGL) enables multiple clients to train a GNN collaboratively
without sharing their local data. However, existing FGL methods mainly focus on
homogeneous GNNs or knowledge graph embeddings; few have considered
heterogeneous graphs and HGNNs. In federated heterogeneous graph learning,
clients may have private graph schemas. Conventional FL/FGL methods attempting
to define a global HGNN model would violate schema privacy. To address these
challenges, we propose FedHGN, a novel and general FGL framework for HGNNs.
FedHGN adopts schema-weight decoupling to enable schema-agnostic knowledge
sharing and employs coefficients alignment to stabilize the training process
and improve HGNN performance. With better privacy preservation, FedHGN
consistently outperforms local training and conventional FL methods on three
widely adopted heterogeneous graph datasets with varying client numbers. The
code is available at https://github.com/cynricfu/FedHGN . | [
"cs.LG",
"cs.AI",
"cs.DC",
"cs.SI"
] | false |
2305.09765 | 2023-05-16T19:34:05Z | OpenVR: Teleoperation for Manipulation | [
"Abraham George",
"Alison Bartsch",
"Amir Barati Farimani"
] | Across the robotics field, quality demonstrations are an integral part of
many control pipelines. However, collecting high-quality demonstration
trajectories remains time-consuming and difficult, often resulting in the
number of demonstrations being the performance bottleneck. To address this
issue, we present a method of Virtual Reality (VR) Teleoperation that uses an
Oculus VR headset to teleoperate a Franka Emika Panda robot. Although other VR
teleoperation methods exist, our code is open source, designed for readily
available consumer hardware, easy to modify, agnostic to experimental setup,
and simple to use. | [
"cs.RO",
"cs.HC",
"cs.LG"
] | false |
2305.09842 | 2023-05-16T22:49:15Z | A Note on Dimensionality Reduction in Deep Neural Networks using
Empirical Interpolation Method | [
"Harbir Antil",
"Madhu Gupta",
"Randy Price"
] | Empirical interpolation method (EIM) is a well-known technique to efficiently
approximate parameterized functions. This paper proposes to use EIM algorithm
to efficiently reduce the dimension of the training data within supervised
machine learning. This is termed as DNN-EIM. Applications in data science
(e.g., MNIST) and parameterized (and time-dependent) partial differential
equations (PDEs) are considered. The proposed DNNs in case of classification
are trained in parallel for each class. This approach is sequential, i.e., new
classes can be added without having to retrain the network. In case of PDEs, a
DNN is designed corresponding to each EIM point. Again, these networks can be
trained in parallel, for each EIM point. In all cases, the parallel networks
require fewer than ten times the number of training weights. Significant gains
are observed in terms of training times, without sacrificing accuracy. | [
"cs.LG",
"cs.NA",
"math.NA",
"68T07, 76B75, 93C20, 93C15"
] | false |
2305.09856 | 2023-05-16T23:55:47Z | Keep It Simple: Fault Tolerance Evaluation of Federated Learning with
Unreliable Clients | [
"Victoria Huang",
"Shaleeza Sohail",
"Michael Mayo",
"Tania Lorido Botran",
"Mark Rodrigues",
"Chris Anderson",
"Melanie Ooi"
] | Federated learning (FL), as an emerging artificial intelligence (AI)
approach, enables decentralized model training across multiple devices without
exposing their local training data. FL has been increasingly gaining popularity
in both academia and industry. While research works have been proposed to
improve the fault tolerance of FL, the real impact of unreliable devices (e.g.,
dropping out, misconfiguration, poor data quality) in real-world applications
is not fully investigated. We carefully chose two representative, real-world
classification problems with a limited numbers of clients to better analyze FL
fault tolerance. Contrary to the intuition, simple FL algorithms can perform
surprisingly well in the presence of unreliable clients. | [
"cs.LG",
"cs.AI",
"cs.DC"
] | false |