Advances in synthetic biology and nanotechnology have contributed to the
design of tools that can be used to control, reuse, modify, and re-engineer
cells' structure, as well as enabling engineers to effectively use biological
cells as programmable substrates to realize Bio-Nano Things (biological
embedded computing devices). Bio-Nano Things are generally tiny, non-intrusive,
and concealable devices that can be used for in-vivo applications such as
intra-body sensing and actuation networks, where the use of artificial devices
can be detrimental. Such (nano-scale) devices can be used in various healthcare
settings such as continuous health monitoring, targeted drug delivery, and
nano-surgeries. These services can also be grouped to form a collaborative
network (i.e., nanonetwork), whose performance can potentially be improved when
connected to higher bandwidth external networks such as the Internet, say via
5G. However, to realize the Internet of Bio-Nano Things (IoBNT) paradigm, it is
also important to seamlessly connect the biological environment with the
technological landscape through a dynamic interface design that converts
biochemical signals from the human body into equivalent electromagnetic
signals (and vice versa). This,
unfortunately, risks the exposure of internal biological mechanisms to
cyber-based sensing and medical actuation, with potential security and privacy
implications. This paper comprehensively reviews bio-cyber interfaces for the
IoBNT architecture, focusing on interfacing options such as biologically
inspired bio-electronic devices, RFID-enabled implantable chips,
and electronic tattoos. This study also identifies known and potential security
and privacy vulnerabilities and mitigation strategies for consideration in
future IoBNT designs and implementations.
|
In this paper, we introduce a novel deep neural network suitable for
multi-scale analysis and propose efficient model-agnostic methods that help the
network extract information from high-frequency domains to reconstruct clearer
images. Our model can be applied to multi-scale image enhancement problems
including denoising, deblurring, and single-image super-resolution. Experiments
on SIDD, Flickr2K, DIV2K, and REDS datasets show that our method achieves
state-of-the-art performance on each task. Furthermore, we show that our model
can overcome the over-smoothing problem commonly observed in existing
PSNR-oriented methods and generate more natural high-resolution images by
applying adversarial training.
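As a purely illustrative sketch of one model-agnostic way to emphasize
high-frequency information (the abstract does not specify the actual methods;
the function names, loss weight, and cutoff below are hypothetical
assumptions), an FFT-based high-frequency penalty can be added to a standard
pixel loss:

```python
import torch
import torch.nn.functional as F

def high_frequency_loss(pred, target, cutoff=0.25):
    """Penalize reconstruction errors in the high-frequency band of the
    2D FFT spectrum. pred, target: (B, C, H, W) batches; cutoff is the
    fraction of the normalized frequency range treated as low frequency."""
    Pf, Tf = torch.fft.fft2(pred), torch.fft.fft2(target)
    H, W = pred.shape[-2:]
    fy = torch.fft.fftfreq(H, device=pred.device).abs()
    fx = torch.fft.fftfreq(W, device=pred.device).abs()
    mask = (fy[:, None] > cutoff) | (fx[None, :] > cutoff)  # high-pass mask
    return (Pf - Tf).abs()[..., mask].mean()

def total_loss(pred, target, weight=0.1):
    # Conventional pixel loss plus the auxiliary high-frequency term.
    return F.l1_loss(pred, target) + weight * high_frequency_loss(pred, target)
```

Such a term is model-agnostic in the sense that it constrains only the network
output, not its architecture.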
|
Topological phases exhibit unconventional order that cannot be detected by
any local order parameter. In the framework of Projected Entangled Pair
States (PEPS), topological order is characterized by an entanglement symmetry of
the local tensor which describes the model. This symmetry can take the form of
a tensor product of group representations, or in the more general case a
correlated symmetry action in the form of a Matrix Product Operator (MPO), which
encompasses all string-net models. Among other things, these entanglement
symmetries allow for the description of ground states and anyon excitations.
Recently, the idea has been put forward to use those symmetries and the anyonic
objects they describe as order parameters for probing topological phase
transitions, and the applicability of this idea has been demonstrated for
Abelian groups. In this paper, we extend this construction to the domain of
non-Abelian models with MPO symmetries, and use it to study the breakdown of
topological order in the double Fibonacci (DFib) string-net and its Galois
conjugate, the non-Hermitian double Yang-Lee (DYL) string-net. We start by
showing how to construct topological order parameters for condensation and
deconfinement of anyons using the MPO symmetries. Subsequently, we set up
interpolations from the DFib and the DYL model to the trivial phase, and show
that these can be mapped to certain restricted solid-on-solid (RSOS) models,
which are equivalent to the $(5\pm\sqrt{5})/2$-state Potts models,
respectively. The known exact solutions of these statistical models allow us to
locate the critical points, and to predict the critical exponents for the order
parameters. We complement this by a numerical study of the phase transitions,
which fully confirms our theoretical predictions; remarkably, we find that both
models exhibit a duality between the order parameters for condensation and
deconfinement.
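Spelling out the "respectively": the DFib interpolation maps to the
$q = (5+\sqrt{5})/2 \approx 3.618$-state Potts model and the DYL interpolation
to the $q = (5-\sqrt{5})/2 \approx 1.382$-state one. Both values of $q$ are
non-integer, which is meaningful in the Fortuin-Kasteleyn random-cluster
formulation of the $q$-state Potts model.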
|
In this paper we propose a novel class of methods for high order accurate
integration of multirate systems of ordinary differential equation
initial-value problems. The proposed methods construct multirate schemes by
approximating the action of matrix $\varphi$-functions within explicit
exponential Rosenbrock (ExpRB) methods, thereby called Multirate Exponential
Rosenbrock (MERB) methods. They consist of the solution to a sequence of
modified "fast" initial-value problems, that may themselves be approximated
through subcycling any desired IVP solver. In addition to proving how to
construct MERB methods from certain classes of ExpRB methods, we provide
rigorous convergence analysis of these methods and derive efficient MERB
schemes of orders two through six (the highest order ever constructed
infinitesimal multirate methods). We then present numerical simulations to
confirm these theoretical convergence rates, and to compare the efficiency of
MERB methods against other recently-introduced high order multirate methods.
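The core device of replacing a matrix $\varphi$-function product by the
solution of a modified "fast" IVP can be illustrated with the first-order
identity $h\,\varphi_1(hA)v = u(h)$, where $u' = Au + v$, $u(0) = 0$. Below is
a minimal numpy/scipy sketch of this principle (a toy illustration, not the
authors' MERB schemes; the matrix and tolerances are arbitrary):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
v = rng.standard_normal(4)
h = 0.1

# Direct evaluation: phi_1(hA) v = (hA)^{-1} (e^{hA} - I) v.
phi1_v = np.linalg.solve(h * A, (expm(h * A) - np.eye(4)) @ v)

# "Fast IVP" evaluation: solve u' = A u + v, u(0) = 0; then
# u(h) = h * phi_1(hA) v, so any subcycled IVP solver recovers it.
sol = solve_ivp(lambda t, u: A @ u + v, (0.0, h), np.zeros(4),
                rtol=1e-10, atol=1e-12)
print(np.max(np.abs(phi1_v - sol.y[:, -1] / h)))  # the two agree closely
```

MERB methods exploit exactly this trade: the $\varphi$-function actions inside
an ExpRB step become fast subproblems that a cheap inner integrator can
subcycle.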
|
When the inflaton couples to photons and amplifies electric fields, charged
particles produced via the Schwinger effect can dominate the universe after
inflation, a phenomenon dubbed Schwinger preheating. Using the hydrodynamic
approach for the Boltzmann equation, we numerically study two cases, the
Starobinsky inflation model with the kinetic coupling and the anisotropic
inflation model. Schwinger preheating is not observed in the latter model but
occurs in the former for a sufficiently large inflaton-photon coupling. We
analytically address the condition for its occurrence and derive a general attractor
solution of the electric fields. The occurrence of Schwinger preheating in
the former model is determined by whether the electric fields enter the
attractor solution during inflation or not.
|
In this paper we present the mechanism of the barrier crossing dynamics of a
Brownian particle which is coupled to a thermal bath in the presence of both a
time-independent and a fluctuating magnetic field. In addition to the role of
the thermal bath, three aspects are important here. First, the
magnetic-field-induced coupling may introduce a resonance-like effect. Second,
increasing the strength of the field reduces the frequency factor of the
barrier crossing rate constant. Finally, the fluctuating magnetic field
introduces an induced electric field which activates the Brownian particle to
cross the energy barrier. As a result of the interplay among these aspects,
versatile non-monotonic behavior may appear in the variation of the rate
constant as a function of the strength of the time-independent magnetic field.
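As a schematic numerical illustration of such a setup (not the authors' exact
model; the potential, parameters, and integrator below are all illustrative),
a charged Brownian particle in a plane under a static field $B\hat{z}$ can be
simulated with an Euler-Maruyama scheme:

```python
import numpy as np

def crossing_rate(steps=200_000, dt=1e-3, m=1.0, gamma=1.0,
                  qB=1.0, kT=0.5, seed=0):
    """Underdamped 2D Langevin dynamics, m dv = (-gamma v + q v x B
    - grad U) dt + sqrt(2 gamma kT) dW, in the double well
    U(x, y) = (x^2 - 1)^2 + y^2 / 2. Illustrative parameters only."""
    rng = np.random.default_rng(seed)
    x, v = np.array([-1.0, 0.0]), np.zeros(2)    # start in the left well
    crossings = 0
    for _ in range(steps):
        force = np.array([-4.0 * x[0] * (x[0]**2 - 1.0), -x[1]])
        lorentz = qB * np.array([v[1], -v[0]])   # in-plane part of q v x B
        v += (dt / m) * (-gamma * v + lorentz + force) \
             + np.sqrt(2.0 * gamma * kT * dt) / m * rng.standard_normal(2)
        x_new = x + dt * v
        crossings += int(x[0] * x_new[0] < 0.0)  # sign change of x = passage
        x = x_new
    return crossings / (steps * dt)

print(crossing_rate(qB=0.0), crossing_rate(qB=2.0))  # rate vs. field strength
```

Scanning qB in such a simulation is the natural way to expose the
non-monotonic dependence of the rate constant described above; a fluctuating
field would add an induced electric force term to the drift.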
|
The space radiation environment is a complex combination of fast-moving ions
derived from all atomic species found in the periodic table. The energy
spectrum of each ion species varies widely but peaks prominently in the range of
400-600 MeV/n. The large dynamic range in ion energy is difficult to simulate
in ground-based radiobiology experiments. Most ground-based irradiations with
mono-energetic beams of a single ion species are delivered at comparatively
high dose rates. In some cases, sequences of such beams are delivered with
various ion species and energies to crudely approximate the complex space
radiation environment. This approximation may cause profound experimental bias
in processes such as biologic repair of radiation damage, which are known to
have strong temporal dependencies. It is possible that this experimental bias
leads to an overprediction of risks of radiation effects that have not been
observed in the astronaut cohort. None of the primary health risks presumably
attributed to space radiation exposure, such as radiation carcinogenesis,
cardiovascular disease, cognitive deficits, etc., have been observed in
astronaut or cosmonaut crews. This fundamentally and profoundly limits our
understanding of the effects of galactic cosmic rays (GCR) on humans and limits the development of
effective radiation countermeasures.
|
Time series forecasting methods play a critical role in estimating the spread
of an epidemic. The coronavirus outbreak of December 2019 has already infected
millions all over the world and continues to spread. Just when the curve of
the outbreak had started to flatten, many countries again began to witness a
rise in cases, now referred to as the second wave of the pandemic. A thorough
analysis of time-series forecasting models is therefore
required to equip state authorities and health officials with immediate
strategies for future times. The aims of this study are three-fold: (a) To
model the overall trend of the spread; (b) To generate a short-term forecast of
10 days in countries with the highest incidence of confirmed cases (USA, India
and Brazil); (c) To quantitatively determine the algorithm that is best suited
for precise modelling of the linear and non-linear features of the time series.
The comparison of forecasting models for the total cumulative cases of each
country is carried out by comparing the reported data and the predicted value,
and then ranking the algorithms (Prophet, Holt-Winters, LSTM, ARIMA, and
ARIMA-NARNN) based on their RMSE, MAE and MAPE values. The hybrid combination
of ARIMA and NARNN (Nonlinear Auto-Regression Neural Network) gave the best
result among the selected models, with an RMSE almost 35.3% lower than that of
ARIMA, one of the most prevalent time-series prediction methods. The results
demonstrated the efficacy of the hybrid implementation of
the ARIMA-NARNN model over other forecasting methods such as Prophet,
Holt-Winters, LSTM, and the ARIMA model in encapsulating the linear as well as
non-linear patterns of epidemic datasets.
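The exact ARIMA-NARNN configuration is not given here; the hybrid idea -- a
linear ARIMA fit plus a nonlinear autoregressive network on its residuals --
can be sketched as follows (an MLP stands in for the NARNN, and the order,
lags, and network size are illustrative assumptions):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

def hybrid_forecast(series, horizon=10, order=(2, 1, 2), lags=5):
    """ARIMA captures the linear trend; a small autoregressive MLP
    models the nonlinear structure left in the ARIMA residuals."""
    arima = ARIMA(series, order=order).fit()
    resid = np.asarray(arima.resid)
    # Lagged residual design matrix for one-step-ahead prediction.
    X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
    y = resid[lags:]
    nn = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                      random_state=0).fit(X, y)
    # Final forecast: ARIMA forecast plus recursive residual correction.
    window = list(resid[-lags:])
    corrections = []
    for _ in range(horizon):
        r_hat = nn.predict(np.asarray(window[-lags:]).reshape(1, -1))[0]
        corrections.append(r_hat)
        window.append(r_hat)
    return np.asarray(arima.forecast(horizon)) + np.asarray(corrections)
```

RMSE, MAE, and MAPE on held-out days can then be compared against the plain
ARIMA forecast to reproduce the kind of ranking reported above.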
|
We present mid-infrared imaging of two young clusters, the Coronet in the CrA
cloud core and B59 in the Pipe Nebula, using the FORCAST camera on the
Stratospheric Observatory for Infrared Astronomy. We also analyze Herschel
Space Observatory PACS and SPIRE images of the associated clouds. The two
clusters are at similar, and very close, distances. Star formation is ongoing
in the Coronet, which hosts at least one Class 0 source and several pre-stellar
cores, which may collapse and form stars. The B59 cluster is older, although it
still has a few Class I sources, and is less compact. The CrA cloud has a
diameter of about 0.16 pc, and we determine a dust temperature of 15.7 K and a
star formation efficiency of about 27 %, while the B59 core is approximately
twice as large, has a dust temperature of about 11.4 K and a star formation
efficiency of about 14 %. We infer that the gas densities are much higher in
the Coronet, which has also formed intermediate mass stars, while B59 has only
formed low-mass stars.
|
This paper presents approximation methods for time-dependent thermal
radiative transfer (TRT) problems in high energy density physics. The approach
is based on the multilevel quasidiffusion method defined by the high-order
radiative transfer equation (RTE) and the low-order quasidiffusion (aka VEF)
equations for the moments of the specific intensity. A large part of data storage in TRT problems
between time steps is determined by the dimensionality of grid functions of the
radiation intensity. The approximate implicit methods with reduced memory for
the time-dependent Boltzmann equation are applied to the high-order RTE,
discretized in time with the backward Euler (BE) scheme. The high-dimensional
intensity from the previous time level in the BE scheme is approximated by
means of the low-rank proper orthogonal decomposition (POD). Another version of
the presented method applies the POD to the remainder term of the P2 expansion of
the intensity. The accuracy of the solution of the approximate implicit methods
depends on the rank of the POD. The proposed methods enable one to reduce
storage requirements in time-dependent problems. Numerical results of a
Fleck-Cummings TRT test problem are presented.
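The memory-reduction device itself -- storing a low-rank POD factorization of
the previous-time-level intensity rather than the full grid function --
reduces storage from $m \times n$ to $r(m+n)$ numbers. A minimal numpy sketch
(the dimensions and the synthetic test matrix are illustrative):

```python
import numpy as np

def pod_compress(intensity, rank):
    """Truncated-SVD (POD) factorization of a grid function, e.g. the
    specific intensity unknowns arranged as (space) x (angle, frequency)."""
    U, s, Vt = np.linalg.svd(intensity, full_matrices=False)
    return U[:, :rank] * s[:rank], Vt[:rank]      # (m, r) and (r, n) factors

m, n, r = 2000, 500, 10
rng = np.random.default_rng(1)
Q = np.linalg.qr(rng.standard_normal((m, n)))[0]
I_prev = Q @ np.diag(np.exp(-np.arange(n)))       # rapidly decaying spectrum
L, R = pod_compress(I_prev, r)
print(np.linalg.norm(I_prev - L @ R) / np.linalg.norm(I_prev))  # tiny error
print((L.size + R.size) / I_prev.size)            # storage ratio ~ r(m+n)/(mn)
```

As stated above, the accuracy of the resulting implicit step is controlled by
the chosen rank.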
|
In this paper we study nonlinear interpolation problems for interpolation and
peak-interpolation sets of function algebras. The subject goes back to the
classical Rudin-Carleson interpolation theorem. In particular, we prove the
following nonlinear version of this theorem:
Let $\bar{\mathbb D}\subset \mathbb C$ be the closed unit disk, $\mathbb
T\subset\bar{\mathbb D}$ the unit circle, $S\subset\mathbb T$ a closed subset
of Lebesgue measure zero and $M$ a connected complex manifold.
Then for every continuous $M$-valued map $f$ on $S$ there exists a continuous
$M$-valued map $g$ on $\bar{\mathbb D}$ holomorphic on its interior such that
$g|_S=f$. We also consider similar interpolation problems for continuous maps
$f: S\rightarrow\bar M$, where $\bar M$ is a complex manifold with boundary
$\partial M$ and interior $M$. Assuming that $f(S)\cap\partial M\ne\emptyset$
we are looking for holomorphic extensions $g$ of $f$ such that $g(\bar{\mathbb
D}\setminus S)\subset M$.
|
In this paper, we investigate Riesz energy problems on unbounded conductors
in $\mathbb{R}^d$ in the presence of general external fields $Q$, not necessarily
satisfying the growth condition $Q(x)\to\infty$ as $x\to\infty$ assumed in
several previous studies. We provide sufficient conditions on $Q$ for the
existence of an equilibrium measure and the compactness of its support.
Particular attention is paid to the case of the hyperplanar conductor $\mathbb{R}^{d}$,
embedded in $\mathbb{R}^{d+1}$, when the external field is created by the potential of
a signed measure $\nu$ outside of $\mathbb{R}^{d}$. Simple cases where $\nu$ is a
discrete measure are analyzed in detail. New theoretical results for Riesz
potentials, in particular an extension of a classical theorem by de la
Vall\'ee Poussin, are established. These results are of independent interest.
|
DNA sequencing is becoming increasingly commonplace, both in medical and
direct-to-consumer settings. To promote discovery, collected genomic data is
often de-identified and shared, either in public repositories, such as OpenSNP,
or with researchers through access-controlled repositories. However, recent
studies have suggested that genomic data can be effectively matched to
high-resolution three-dimensional face images, which raises a concern that the
increasingly ubiquitous public face images can be linked to shared genomic
data, thereby re-identifying individuals in the genomic data. While these
investigations illustrate the possibility of such an attack, they assume that
those performing the linkage have access to extremely well-curated data. Given
that this is unlikely to be the case in practice, the practicality of the
attack is called into question. As such, we systematically study this
re-identification risk from two perspectives: first, we investigate how
successful such linkage attacks can be when real face images are used, and
second, we consider how we can empower individuals to have better control over
the associated re-identification risk. We observe that the true risk of
re-identification is likely substantially smaller for most individuals than
prior literature suggests. In addition, we demonstrate that the addition of a
small amount of carefully crafted noise to images can enable a controlled
trade-off between re-identification success and the quality of shared images,
with risk typically significantly lowered even with noise that is imperceptible
to humans.
|
Recent works on Binary Neural Networks (BNNs) have made promising progress in
narrowing the accuracy gap of BNNs to their 32-bit counterparts. However, the
accuracy gains are often based on specialized model designs using additional
32-bit components. Furthermore, almost all previous BNNs use 32-bit for feature
maps and the shortcuts enclosing the corresponding binary convolution blocks,
which helps to effectively maintain the accuracy, but is not friendly to
hardware accelerators with limited memory, energy, and computing resources.
Thus, we raise the following question: How can accuracy and energy consumption
be balanced in a BNN design? We extensively study this fundamental
problem in this work and propose a novel BNN architecture without most commonly
used 32-bit components: \textit{BoolNet}. Experimental results on ImageNet
demonstrate that BoolNet can achieve 4.6x energy reduction coupled with 1.2\%
higher accuracy than the commonly used BNN architecture Bi-RealNet. Code and
trained models are available at: https://github.com/hpi-xnor/BoolNet.
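BoolNet's exact blocks are not reproduced here; the elementary BNN ingredient
it builds on -- sign binarization trained with a straight-through estimator
(STE) -- looks like this in PyTorch (a generic sketch, not BoolNet's
architecture):

```python
import torch
import torch.nn.functional as F

class BinarizeSTE(torch.autograd.Function):
    """Forward: sign(x) in {-1, +1} (torch.sign maps 0 to 0; real BNNs
    usually send 0 to +1, omitted for brevity). Backward: clipped
    straight-through estimator, passing gradients where |x| <= 1."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).to(grad_out.dtype)

def binary_conv2d(x, weight, **kwargs):
    # 1-bit activations and 1-bit weights; hardware accumulates in
    # integer arithmetic, emulated here in floating point.
    return F.conv2d(BinarizeSTE.apply(x), BinarizeSTE.apply(weight), **kwargs)
```

BoolNet's contribution, per the abstract, is keeping the surrounding feature
maps and shortcuts out of 32-bit as well, which this generic block does not by
itself achieve.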
|
Many task-oriented dialogue systems use deep reinforcement learning (DRL) to
learn policies that respond to the user appropriately and complete the tasks
successfully. Training DRL agents with diverse dialogue trajectories prepares
them well for rare user requests and unseen situations. One effective
diversification method is to let the agent interact with a diverse set of
learned user models. However, trajectories created by these artificial user
models may contain generation errors, which can quickly propagate into the
agent's policy. It is thus important to control the quality of the
diversification and resist the noise. In this paper, we propose a novel
dialogue diversification method for task-oriented dialogue systems trained in
simulators. Our method, Intermittent Short Extension Ensemble (I-SEE),
constrains the intensity of interaction with an ensemble of diverse user models
and effectively controls the quality of the diversification. Evaluations on the
MultiWOZ dataset show that I-SEE successfully boosts the performance of several
state-of-the-art DRL dialogue agents.
|
When manipulating three-dimensional data, it is possible to ensure that
rotational and translational symmetries are respected by applying so-called
SE(3)-equivariant models. Protein structure prediction is a prominent example
of a task which displays these symmetries. Recent work in this area has
successfully made use of an SE(3)-equivariant model, applying an iterative
SE(3)-equivariant attention mechanism. Motivated by this application, we
implement an iterative version of the SE(3)-Transformer, an SE(3)-equivariant
attention-based model for graph data. We address the additional complications
which arise when applying the SE(3)-Transformer in an iterative fashion,
compare the iterative and single-pass versions on a toy problem, and consider
why an iterative model may be beneficial in some problem settings. We make the
code for our implementation available to the community.
|
In the framework of the Standard Model (SM) a theoretical description of the
neutron beta decay is given at the level of 10^{-5}. The neutron lifetime and
correlation coefficients of the neutron beta decay for a polarized neutron, a
polarized electron and an unpolarized proton are calculated taking into account
i) the radiative corrections of order O(\alpha E_e/m_N) ~ 10^{-5} to Sirlin's
outer and inner radiative corrections of order O(\alpha/\pi), ii) the
corrections of order O(E^2_e/m^2_N) ~ 10^{-5}, caused by weak magnetism and
proton recoil, and iii) Wilkinson's corrections of order 10^{-5} (Wilkinson,
Nucl. Phys. A377, 474 (1982)). These corrections define the SM background of
the theoretical description of the neutron beta decay at the level of 10^{-5},
which is required by experimental searches for interactions beyond the SM with
experimental uncertainties of a few parts in 10^{-5}.
|
Face swapping has both positive applications such as entertainment,
human-computer interaction, etc., and negative applications such as DeepFake
threats to politics, economics, etc. Nevertheless, it is necessary to
understand the workings of advanced methods for high-quality face swapping and
to generate sufficient, representative face-swapping images to train DeepFake
detection algorithms. This paper proposes the first Megapixel-level method for
one-shot Face Swapping (or MegaFS for short). Firstly, MegaFS organizes face
representation hierarchically by the proposed Hierarchical Representation Face
Encoder (HieRFE) in an extended latent space to maintain more facial details,
rather than compressed representation in previous face swapping methods.
Secondly, a carefully designed Face Transfer Module (FTM) is proposed to
transfer the identity from a source image to the target by a non-linear
trajectory without explicit feature disentanglement. Finally, the swapped faces
can be synthesized by StyleGAN2 with the benefits of its training stability and
powerful generative capability. Each part of MegaFS can be trained separately,
so the GPU memory requirement of our model can be satisfied for megapixel
face swapping. In summary, complete face representation, stable training, and
limited memory usage are the three novel contributions to the success of our
method. Extensive experiments demonstrate the superiority of MegaFS and the
first megapixel level face swapping database is released for research on
DeepFake detection and face image editing in the public domain. The dataset is
at this link.
|
In this paper, we study the problem of mobile user profiling, which is a
critical component for quantifying users' characteristics in the human mobility
modeling pipeline. Human mobility is a sequential decision-making process
dependent on the users' dynamic interests. With accurate user profiles, the
predictive model can perfectly reproduce users' mobility trajectories. In the
reverse direction, once the predictive model can imitate users' mobility
patterns, the learned user profiles are also optimal. Such intuition motivates
us to propose an imitation-based mobile user profiling framework by exploiting
reinforcement learning, in which the agent is trained to precisely imitate
users' mobility patterns for optimal user profiles. Specifically, the proposed
framework includes two modules: (1) representation module, which produces state
combining user profiles and spatio-temporal context in real-time; (2) imitation
module, where Deep Q-network (DQN) imitates the user behavior (action) based on
the state that is produced by the representation module. However, there are two
challenges in running the framework effectively. First, the epsilon-greedy
strategy in DQN handles the exploration-exploitation trade-off by picking
random actions with probability epsilon. Such randomness feeds back to the
representation module, making the learned user profiles unstable. To solve the
problem, we propose an adversarial training strategy to guarantee the
robustness of the representation module. Second, the representation module
updates users' profiles in an incremental manner, which requires integrating the
temporal effects of user profiles. Inspired by Long Short-Term Memory (LSTM),
we introduce a gated mechanism to incorporate new and old user characteristics
into the user profile.
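The exact gate parameterization is not specified here; an LSTM-inspired gated
update that merges old and newly inferred profile information could be
sketched as follows (the linear gate and dimensions are hypothetical):

```python
import torch
import torch.nn as nn

class GatedProfileUpdate(nn.Module):
    """A learned gate decides, per dimension, how much of the old
    profile to keep versus the newly inferred characteristics."""

    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, old_profile, new_info):
        g = torch.sigmoid(self.gate(torch.cat([old_profile, new_info], dim=-1)))
        return g * old_profile + (1.0 - g) * new_info

upd = GatedProfileUpdate(dim=64)
profile = torch.zeros(1, 64)                 # initial user profile
new_obs = torch.randn(1, 64)                 # embedding of the latest behavior
profile = upd(profile, new_obs)              # incremental, gated update
```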
|
We study the polarization dynamics of ultrafast solitons in mode-locked fiber
lasers. We find that when a stable soliton is generated, its state of
polarization shifts toward a stable state, and when the soliton is generated
with excess power levels it experiences relaxation oscillations in its
intensity and timing. On the other hand, when a soliton is generated in an
unstable state of polarization, it either decays in intensity until it
disappears, or its temporal width decreases until it explodes into several
solitons and then disappears. We also find that when two solitons are
simultaneously generated close to each other, they attract each other until
they collide and merge into a single soliton. Although these two solitons are
generated with different states of polarization, they shift their states of
polarization closer to each other until the polarization coincides
when they collide. We support our findings by numerical calculations of a
non-Lagrangian approach by simulating the Ginzburg-Landau equation governing
the dynamics of solitons in a laser cavity. Our model also predicts the
relaxation oscillations of stable solitons and the two types of unstable
solitons observed in the experimental measurements.
|
This paper presents Favalon, a functional programming language built on the
premise of a lambda calculus for use as an interactive shell replacement.
Favalon seamlessly integrates with typed versions of existing libraries and
commands using type inference, flexible runtime type metadata, and the same
techniques employed by shells to link commands together. Much of Favalon's
syntax is customizable via user-defined functions, allowing it to be extended
by anyone who is familiar with a command-line shell. Furthermore, Favalon's
type inference engine can be separated from its runtime library and easily
repurposed for other applications.
|
Recently, asymmetric plasmonic nanojunctions [Karnetzky et al., Nature Comm.
9, 2471 (2018)] have shown promise as on-chip electronic devices to convert
femtosecond optical pulses to current bursts, with a bandwidth of
multi-terahertz scale, although yet at low temperatures and pressures. Such
nanoscale devices are of great interest for novel ultrafast electronics and
opto-electronic applications. Here, we operate the device in air and at room
temperature, revealing the mechanisms of photoemission from plasmonic
nanojunctions, and the fundamental limitations on the speed of
optical-to-electronic conversion. Inter-cycle interference of coherent
electronic wavepackets results in a complex electron energy distribution and
the emergence of multiphoton effects. This energy structure, as well as reshaping of
the wavepackets during their propagation from one tip to the other, determine
the ultrafast dynamics of the current. We show that, up to some level of
approximation, the electron flight time is well-determined by the mean
ponderomotive velocity in the driving field.
|
Necessary and sufficient conditions are given for a sequence of complex
numbers to be the periodic (or antiperiodic) spectrum of a non-self-adjoint
Dirac operator.
|
In this paper, we consider a type of image quality assessment as a
task-specific measurement, which can be used to select images that are more
amenable to a given target task, such as image classification or segmentation.
We propose to train simultaneously two neural networks for image selection and
a target task using reinforcement learning. A controller network learns an
image selection policy by maximising an accumulated reward based on the target
task performance on the controller-selected validation set, whilst the target
task predictor is optimised using the training set. The trained controller is
therefore able to reject those images that lead to poor accuracy in the target
task. In this work, we show that the controller-predicted image quality can be
significantly different from the task-specific image quality labels that are
manually defined by humans. Furthermore, we demonstrate that it is possible to
learn effective image quality assessment without using a ``clean'' validation
set, thereby avoiding the requirement for human labelling of images with
respect to their amenability for the task. Using $6712$ labelled and
segmented clinical ultrasound images from $259$ patients, experimental results
on holdout data show that the proposed image quality assessment achieved a mean
classification accuracy of $0.94\pm0.01$ and a mean segmentation Dice of
$0.89\pm0.02$, by discarding $5\%$ and $15\%$ of the acquired images,
respectively. Significantly improved performance was observed for both
tested tasks, compared with the respective $0.90\pm0.01$ and $0.82\pm0.02$ from
networks without considering task amenability. This enables image quality
feedback during real-time ultrasound acquisition among many other medical
imaging applications.
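At a schematic level, the controller update can be realized as a
REINFORCE-style policy gradient over per-image keep probabilities; the sketch
below is a hedged illustration (the controller architecture, reward baseline,
and task metric are assumptions, not the paper's exact algorithm):

```python
import torch

def controller_step(controller, task_metric, val_feats, optimizer, baseline=0.0):
    """One policy-gradient update of the image-selection controller.
    controller: maps image features (N, d) to one logit per image.
    task_metric: returns task performance (e.g. Dice) on a boolean subset."""
    probs = torch.sigmoid(controller(val_feats)).squeeze(-1)  # keep probs (N,)
    keep = torch.bernoulli(probs)                             # sampled selection
    with torch.no_grad():
        reward = task_metric(keep.bool())                     # scalar performance
    log_prob = (keep * probs.clamp_min(1e-8).log()
                + (1 - keep) * (1 - probs).clamp_min(1e-8).log()).sum()
    loss = -(reward - baseline) * log_prob                    # REINFORCE
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return reward

# e.g. controller = torch.nn.Sequential(torch.nn.Linear(128, 64),
#                                       torch.nn.ReLU(), torch.nn.Linear(64, 1)),
# alternated with ordinary supervised updates of the task predictor.
```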
|
We show that a novel, general phase space mapping Hamiltonian for
nonadiabatic systems, which is reminiscent of the renowned Meyer-Miller mapping
Hamiltonian, involves a commutator variable matrix rather than the conventional
zero-point-energy parameter. In the exact mapping formulation on constraint
space for phase space approaches for nonadiabatic dynamics, the general mapping
Hamiltonian with commutator variables can be employed to generate approximate
trajectory-based dynamics. Various benchmark model tests, which range from gas
phase to condensed phase systems, suggest that the overall performance of the
general mapping Hamiltonian is better than that of the conventional
Meyer-Miller Hamiltonian.
|
Many man-made objects are characterised by a shape that is symmetric along
one or more planar directions. Estimating the location and orientation of such
symmetry planes can aid many tasks such as estimating the overall orientation
of an object of interest or performing shape completion, where a partial scan
of an object is reflected across the estimated symmetry plane in order to
obtain a more detailed shape. Many methods processing 3D data rely on expensive
3D convolutions. In this paper we present an alternative novel encoding that
instead slices the data along the height dimension and passes it sequentially
to a 2D convolutional recurrent regression scheme. The method also comprises a
differentiable least squares step, allowing for end-to-end accurate and fast
processing of both full and partial scans of symmetric objects. We use this
approach, which handles 3D inputs efficiently, to design a method for
estimating planar reflective symmetries. We show that our approach has an accuracy comparable to
state-of-the-art techniques on the task of planar reflective symmetry
estimation on full synthetic objects. Additionally, we show that it can be
deployed on partial scans of objects in a real-world pipeline to improve the
outputs of a 3D object detector.
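The differentiable least-squares step is not detailed in the abstract; the
classical operation it presumably builds on -- a least-squares plane fit via
the SVD, which is differentiable and can terminate a network -- has this
numpy form:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points: passes through the
    centroid, with normal given by the right-singular vector of the
    smallest singular value of the centered point cloud."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid, full_matrices=False)
    return centroid, Vt[-1]                       # point on plane, unit normal

def reflect(points, centroid, normal):
    # Mirror a partial scan across the estimated symmetry plane
    # (the shape-completion use case described above).
    d = (points - centroid) @ normal
    return points - 2.0 * np.outer(d, normal)
```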
|
In this paper we present a novel mechanism for producing the observed Dark
Matter (DM) relic abundance during the First Order Phase Transition (FOPT) in
the early universe. We show that the bubble expansion with ultra-relativistic
velocities can lead to the abundance of DM particles with masses much larger
than the scale of the transition. We study this non-thermal production
mechanism in the context of a generic phase transition and the electroweak
phase transition. The application of the mechanism to the Higgs portal DM as
well as the signal in the stochastic gravitational wave background are discussed.
|
Introduction. Can infection by the human immunodeficiency virus type 1 induce
a change in the differentiation status or process of T cells?
Methods. We consider two stochastic Markov chain models, one describing the
T-helper cell differentiation process, and the other describing the infection
of the T-helper cell by the virus. In these Markov chains, we consider a set
of states $\{X_t \}$ comprised of the proteins involved in each of the
processes and their interactions (either differentiation or infection of the
cell), yielding two stochastic transition matrices ($A,B$), one for each
process. We then compute their eigenvalues; if an eigenvalue $\lambda_i=1$
exists, the equilibrium distribution $\pi^n$ is obtained for each of the
matrices, which informs us about the long-term trends of interactions amongst
the proteins.
Results. The stochastic processes considered possess equilibrium
distributions; on reaching these equilibrium distributions, there is an
increase in their informational entropy, and their log-rank distributions can
be modeled as discrete generalized beta distributions (DGBD). Discussion. The
equilibrium distributions of both processes can be regarded as states in which
the cell is well-differentiated; ergo, there exists an induction of a novel
HIV-dependent differentiated state in the T-cell. Owing to their DGBD log-rank
distributions, these processes can be considered complex; owing to the
increasing entropy, the equilibrium states are stable. Conclusion. The HIV
virus can promote a novel differentiated state in the T-cell, which may
account for clinical features seen in patients; this model, notwithstanding,
does not account for YES/NO logical switches involved in the regulatory networks.
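The eigenvalue computation described in the Methods has a standard numerical
form; here is a minimal numpy sketch (with a toy 3-state row-stochastic matrix
standing in for the protein-interaction chains $A$ and $B$):

```python
import numpy as np

def equilibrium_distribution(P, tol=1e-10):
    """Equilibrium distribution pi of a row-stochastic matrix P,
    i.e. the left eigenvector with eigenvalue 1: pi P = pi."""
    w, V = np.linalg.eig(P.T)                 # left eigenvectors of P
    i = np.argmin(np.abs(w - 1.0))
    assert abs(w[i] - 1.0) < tol, "no unit eigenvalue found"
    pi = np.real(V[:, i])
    return pi / pi.sum()

P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])                # toy transition matrix
pi = equilibrium_distribution(P)
print(pi, pi @ P)                              # pi and pi P coincide
print(-(pi * np.log(pi)).sum())                # Shannon entropy at equilibrium
```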
|
We propose an optimal MMSE precoding technique using quantized signals with
constant envelope. Unlike the existing MMSE design that relies on 1-bit
resolution, the proposed approach employs uniform phase quantization and the
bounding step in the branch-and-bound method is different in terms of
considering the most restrictive relaxation of the nonconvex problem, which is
then utilized for a suboptimal design also. Moreover, unlike prior studies, we
propose three different soft detection methods and an iterative detection and
decoding scheme that allow the utilization of channel coding in conjunction
with low-resolution precoding. Besides an exact approach for computing the
extrinsic information, we propose two approximations with reduced computational
complexity. Numerical simulations show that utilizing the MMSE criterion
instead of the established maximum-minimum distance to the decision threshold
yields a lower bit-error-rate in many scenarios. Furthermore, when using the
MMSE criterion, a smaller number of bound evaluations in the branch-and-bound
method is required for low and medium SNR. Finally, results based on an LDPC
block code indicate that the receive processing schemes yield a lower
bit-error-rate compared to the conventional design.
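The constant-envelope transmit alphabet with uniformly quantized phase that
underlies the design is easy to make concrete (a generic illustration; offset
conventions for the constellation vary across designs):

```python
import numpy as np

def quantize_phase(x, bits=3, amplitude=1.0):
    """Map complex samples to the nearest of 2**bits constant-envelope
    symbols with uniformly spaced phases; rounding to the nearest
    multiple of the phase step centers each decision sector on a symbol."""
    step = 2.0 * np.pi / 2 ** bits
    k = np.round(np.angle(x) / step)          # nearest phase index
    return amplitude * np.exp(1j * step * k)

x = np.exp(1j * 2 * np.pi * np.random.default_rng(0).random(5))
print(quantize_phase(x, bits=2))              # QPSK-like alphabet {1, j, -1, -j}
```

The precoder then optimizes over this discrete alphabet, which is what makes
the problem nonconvex and motivates the branch-and-bound search described above.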
|
We consider the geodesic of the directed last passage percolation with iid
exponential weights. We find the explicit one-point distribution of the
geodesic location jointly with the last passage times, and its limit as the
size goes to infinity.
|
We consider the problem of minimizing age of information (AoI) in general
single-hop and multihop wireless networks. First, we formulate a way to convert
AoI optimization problems into equivalent network stability problems. Then, we
propose a heuristic low-complexity approach for achieving stability that can
handle general network topologies; unicast, multicast and broadcast flows;
interference constraints; link reliabilities; and AoI cost functions. We
provide numerical results to show that our proposed algorithms behave as well
as the best known scheduling and routing schemes available in the literature
for a wide variety of network settings.
|
We develop a theory for the non-equilibrium screening of a charged impurity
in a two-dimensional electron system under a strong time-periodic drive. Our
analysis of the time-averaged polarization function and dielectric function
reveals that Floquet driving modifies the screened impurity potential in two
main regimes. In the weak drive regime, the time-averaged screened potential
exhibits unconventional Friedel oscillations with multiple spatial periods
contributed by a principal period modulated by higher-order periods, which are
due to the emergence of additional Kohn anomalies in the polarization function.
In the strong drive regime, the time-averaged impurity potential becomes almost
unscreened and does not exhibit Friedel oscillations. This tunability of the
Friedel oscillations results from the dynamic gating effect of the
time-dependent driving field on the two-dimensional electron system.
|
In this paper, based on the idea of self-adjusting steepness based
schemes [5], a two-dimensional calculation method for the steepness parameter
is proposed, and thus a two-dimensional self-adjusting steepness based limiter
is constructed. Applying this limiter to the over-intersection based remapping
framework yields a low-dissipation remapping method that can be applied to the
existing ALE method.
|
We derive the Laws of Cosines and Sines in the super hyperbolic plane using
Minkowski supergeometry and find formulae identical to those of the classical
case, but remarkably involving different expressions for the cosines and sines of angles
which include substantial fermionic corrections. In further analogy to the
classical case, we apply these results to show that two parallel supergeodesics
which are not ultraparallel admit a unique common orthogonal supergeodesic, and
we briefly describe aspects of elementary supernumber theory, leading to a
prospective analogue of the Gauss product of quadratic forms.
|
We present an analysis of the galaxy environment and physical properties of a
partial Lyman limit system at z = 0.83718 with HI and metal line components
closely separated in redshift space ($|\Delta v| \approx 400$ km/s) towards the
background quasar HE1003+0149. The HST/COS far-ultraviolet spectrum provides
coverage of lines of oxygen ions from OI to OV. Comparison of observed spectral
lines with synthetic profiles generated from Bayesian ionization modeling
reveals the presence of two distinct gas phases in the absorbing medium. The
low-ionization phase of the absorber has sub-solar metallicities (1/10-th
solar) with indications of [C/O] < 0 in each of the components. The OIV and OV
trace a more diffuse higher-ionization medium with predicted HI column
densities that are $\approx 2$ dex lower. The quasar field observed with
VLT/MUSE reveals three dwarf galaxies with stellar masses of $M^* \sim 10^{8} -
10^{9}$ M$_\odot$, and with star formation rates of $\approx 0.5 - 1$ M$_\odot$
yr$^{-1}$, at projected separations of $\rho/R_{\mathrm{vir}} \approx 1.8 -
3.0$ from the absorber. Over a wider field with projected proper separation of
$\leq 5$ Mpc and radial velocity offset of $|\Delta v| \leq 1000$ km/s from the
absorber, 21 more galaxies are identified in the VLT/VIMOS and Magellan deep
galaxy redshift surveys, with 8 of them within $1$ Mpc and $500$ km/s,
consistent with the line of sight penetrating a group of galaxies. The absorber
presumably traces multiple phases of cool ($T \sim 10^4$ K) photoionized
intragroup medium. The inferred [C/O] < 0 hints at preferential enrichment from
core-collapse supernovae, with such gas displaced from one or more of the
nearby galaxies, and confined to the group medium.
|
Transition metal dichalcogenides (TMDs) combine interesting optical and
spintronic properties in an atomically-thin material, where the light
polarization can be used to control the spin and valley degrees-of-freedom for
the development of novel opto-spintronic devices. These promising properties
emerge due to their large spin-orbit coupling in combination with their crystal
symmetries. Here, we provide simple symmetry arguments in a group-theory
approach to unveil the symmetry-allowed spin scattering mechanisms, and
indicate how one can use these concepts towards an external control of the spin
lifetime. We perform this analysis for both monolayer (inversion asymmetric)
and bilayer (inversion symmetric) crystals, indicating the different mechanisms
that play a role in these systems. We show that, in monolayer TMDs, electrons
and holes transform fundamentally differently -- leading to distinct
spin-scattering processes. We find that one of the electronic states in the
conduction band is partially protected by time-reversal symmetry, indicating a
longer spin lifetime for that state. In bilayer and bulk TMDs, a hidden
spin-polarization can exist within each layer despite the presence of global
inversion symmetry. We show that this feature enables control of the interlayer
spin-flipping scattering processes via an out-of-plane electric field,
providing a mechanism for electrical control of the spin lifetime.
|
We study the dynamics of a one-dimensional Rydberg lattice gas under
facilitation (anti-blockade) conditions which implements a so-called
kinetically constrained spin system. Here an atom can only be excited to a
Rydberg state when one of its neighbors is already excited. Once two or more
atoms are simultaneously excited mechanical forces emerge, which couple the
internal electronic dynamics of this many-body system to external vibrational
degrees of freedom in the lattice. This electron-phonon coupling results in a
so-called phonon dressing of many-body states which in turn impacts the
facilitation dynamics. In our theoretical study we focus on a scenario in which
all energy scales are sufficiently separated such that a perturbative treatment
of the coupling between electronic and vibrational states is possible. This
allows us to analytically derive an effective Hamiltonian for the evolution of
consecutive clusters of Rydberg excitations in the presence of phonon dressing.
We analyze the spectrum of this Hamiltonian and show -- by employing Fano
resonance theory -- that the interaction between Rydberg excitations and
lattice vibrations leads to the emergence of slowly decaying bound states that
inhibit fast relaxation of certain initial states.
|
Drafting as a process to reduce drag and to benefit from the presence of
other competitors is applied in various sports with several recent examples of
competitive running in formations. In this study, the aerodynamics of a
realistic model of a female runner is calculated by computational fluid
dynamics (CFD) simulations at four running speeds of 15 km/h, 18 km/h, 21 km/h,
and 36 km/h. Aerodynamic power fractions of the total energy expenditure are
found to be in the range of 2.6-8.5%. Additionally, four exemplary formations
are analysed with respect to their drafting potential, and the resulting drag values
are compared for the main runner and her pacers. The best of the formations
achieves a total drag reduction on the main runner of 75.6%. Moreover, there
are large variations in the drag reduction between the considered formations of
up to 42% with respect to the baseline single-runner case. We conclude that
major drag reduction of more than 70% can already be achieved with fairly
simple formations, while certain factors, such as runners on the sides, can
have a detrimental effect on drag reduction due to local acceleration of the
passing flow. Using an empirical model for mechanical power output during
running, gains of metabolic power and performance predictions are evaluated for
all considered formations. Improvements in running economy are up to 3.5% for
the best formation, leading to velocity gains of 2.3%. This translates to 154 s
(~2.6 min) saved over a marathon distance. Consequently, direct conclusions are
drawn from the obtained data for ideal drafting of long-distance running in
highly packed formations.
|
Turbulence in the upper ocean in the submesoscale range (scales smaller than
the deformation radius) plays an important role for the heat exchange with the
atmosphere and for oceanic biogeochemistry. Its dynamics should strongly depend
on the seasonal cycle and the associated mixed-layer instabilities. The latter
are particularly relevant in winter and are responsible for the formation of
energetic small scales that extend over the whole depth of the mixed layer. The
knowledge of the transport properties of oceanic flows at depth, which is
essential to understand the coupling between surface and interior dynamics,
however, is still limited. By means of numerical simulations, we explore the
Lagrangian dispersion properties of turbulent flows in a quasi-geostrophic
model system allowing for both thermocline and mixed-layer instabilities. The
results indicate that, when mixed-layer instabilities are present, the
dispersion regime is local from the surface down to depths comparable with that
of the interface with the thermocline, while in their absence dispersion
quickly becomes nonlocal versus depth. We then identify the origin of such
behavior in the existence of fine-scale energetic structures due to mixed-layer
instabilities. We further discuss the effect of vertical shear on the
Lagrangian particle spreading and address the correlation between the
dispersion properties at the surface and at depth, which is relevant to assess
the possibility of inferring the dynamical features of deeper flows from the
more accessible surface ones.
|
Electrochemically mediated selective adsorption is an emerging
electrosorption technique that utilizes Faradaically enhanced redox active
electrodes, which can adsorb ions not only electrostatically, but also
electrochemically. The superb selectivity (>100) of this technique enables
selective removal of toxic or high-value target ions under low energy
consumption. Here, we develop a general theoretical framework to describe the
competitive electrosorption phenomena involving multiple ions and surface-bound
redox species. The model couples diffusion, convection and electromigration
with competitive surface adsorption reaction kinetics, consistently derived
from non-equilibrium thermodynamics. To optimize the selective removal of the
target ions, design criteria were derived analytically from physically relevant
dimensionless groups and time scales, where the propagation of the target
anions concentration front is the limiting step. Detailed computational studies
are reported for three case studies that cover a wide range of inlet
concentration ratios between the competing ions. In all three cases, the target
anions in the electrosorption cell form a self-sharpening reaction-diffusion
wave front. Based on the model, a three-step stop-flow operation scheme with a
pure stripping solution of target anions is proposed that optimizes the ion
adsorption performance and increases the purity of the regeneration stream to
almost 100%, which is beneficial for downstream processing.
|
Quality of website design is one of the influential factors in website
success: good design helps users use a website effectively and efficiently,
and leaves them satisfied at the end of the use. However, there is a common
tendency for websites to be designed from the developer's perspective, with
little consideration of user needs. Thus, the degree of website usability
tends to be low according to user perceptions. This study aimed to understand
the experiences of users of an institutional repository (IR) website at a
public university in Indonesia. The research was performed based on a
usability testing framework. Twelve participants were purposively involved on
account of their key-informant characteristics. Following three
empirical data collection techniques (i.e., query technique, formal experiment,
and thinking aloud), both descriptive analysis using usability scale matric and
content analysis using qualitative data analysis (QDA) Miner Lite software were
used in the data analysis stage. Lastly, several visual design recommendations
were then proposed at the end of the study. As a case study, besides the
practical recommendations, which may be contextually useful for future
development of the website, the clarity of the research design may also show
scholars how to combine more than one usability testing technique within a
multi-technique study design.
|
The SIMT execution model is commonly used for general GPU development. CUDA
and OpenCL developers write scalar code that is implicitly parallelized by the
compiler and hardware. On Intel GPUs, however, this abstraction has profound
performance implications as the underlying ISA is SIMD and important hardware
capabilities cannot be fully utilized. To close this performance gap we
introduce C-For-Metal (CM), an explicit SIMD programming framework designed to
deliver close-to-the-metal performance on Intel GPUs. The CM programming
language and its vector/matrix types provide an intuitive interface to exploit
the underlying hardware features, allowing fine-grained register management,
SIMD size control and cross-lane data sharing. Experimental results show that
CM applications from different domains outperform the best-known SIMT-based
OpenCL implementations, achieving up to 2.7x speedup on the latest Intel GPU.
|
We systematically investigate axisymmetric extremal isolated horizons (EIHs)
defined by vanishing surface gravity, corresponding to zero temperature. In the
first part, using the Newman-Penrose and GHP formalism we derive the most
general metric function for such EIHs in the Einstein-Maxwell theory, which
complements the previous result of Lewandowski and Pawlowski. We prove that it
depends on 5 independent parameters, namely deficit angles on the north and
south poles of a spherical-like section of the horizon, its radius (area), and
total electric and magnetic charges of the black hole. The deficit angles and
both charges can be separately set to zero. In the second part of our paper, we
identify this general axially symmetric solution for EIH with extremal horizons
in exact electrovacuum Plebanski-Demianski spacetimes, using the convenient
parametrization of this family by Griffiths and Podolsky. They represent all
(double aligned) black holes of algebraic type D without a cosmological
constant. Apart from a conicity, they depend on 6 physical parameters (mass,
Kerr-like rotation, NUT parameter, acceleration, electric and magnetic charges)
constrained by the extremality condition. We were able to determine their
relation to the EIH geometrical parameters. This explicit identification of
type D extremal black holes with a unique form of EIH includes several
interesting subclasses, such as the accelerating extremally charged
Reissner-Nordstrom black hole (C-metric), the extremal accelerating Kerr-Newman,
accelerating Kerr-NUT, and non-accelerating Kerr-Newman-NUT black holes.
|
Multimode nonlinear optics offers a way to overcome a long-standing limitation
of fiber optics by tightly phase-locking several spatial modes, enabling the
coherent transport of a wavepacket through a multimode fiber. A similar problem
is encountered in the temporal compression of multi-mJ pulses to few-cycle
duration in hollow gas-filled fibers. Scaling the fiber length to up to six
meters, hollow fibers have recently reached 1 TW of peak power. Despite the
remarkable utility of the hollow fiber compressor and its widespread
application, however, no analytical model exists to enable insight into the
scaling behavior of maximum compressibility and peak power. Here we extend a
recently introduced formalism for describing mode-locking to the spatially
analogous scenario of locking spatial fiber modes together. Our formalism
unveils the coexistence of two soliton branches for anomalous modal dispersion
and indicates the formation of stable spatio-temporal light bullets that would
be unstable in free space, similar to the temporal cage solitons in
mode-locking theory. Our model enables deeper understanding of the physical
processes behind the formation of such light bullets and predicts the existence
of multimode solitons in a much wider range of fiber types than previously
considered possible.
|
Let $f : X \to S$ be a family of smooth projective algebraic varieties over a
smooth connected base $S$, with everything defined over
$\overline{\mathbb{Q}}$. Denote by $\mathbb{V} = R^{2i} f_{*} \mathbb{Z}(i)$
the associated integral variation of Hodge structure on the degree $2i$
cohomology. We consider the following question: when can a fibre
$\mathbb{V}_{s}$ above an algebraic point $s \in S(\overline{\mathbb{Q}})$ be
isomorphic to a transcendental fibre $\mathbb{V}_{s'}$ with $s' \in
S(\mathbb{C}) \setminus S(\overline{\mathbb{Q}})$? When $\mathbb{V}$ induces a
quasi-finite period map $\varphi : S \to \Gamma \backslash D$, conjectures in
Hodge theory predict that such isomorphisms cannot exist. We introduce new
differential-algebraic techniques to show this is true for all points $s \in
S(\overline{\mathbb{Q}})$ outside of an explicit proper closed algebraic subset
of $S$. As a corollary we establish the existence of a canonical
$\overline{\mathbb{Q}}$-algebraic model for normalizations of period images.
|
We study co-dimension two monodromy defects in theories of conformally
coupled scalars and free Dirac fermions in arbitrary $d$ dimensions. We
characterise this family of conformal defects by computing the one-point
functions of the stress-tensor and conserved current for Abelian flavour
symmetries as well as two-point functions of the displacement operator. In the
case of $d=4$, the normalisations of these correlation functions are related to
defect Weyl anomaly coefficients, and thus provide crucial information about
the defect conformal field theory. We provide explicit checks on the values of
the defect central charges by calculating the universal part of the defect
contribution to entanglement entropy, and further, we use our results to
extract the universal part of the vacuum R\'enyi entropy. Moreover, we leverage
the non-supersymmetric free field results to compute a novel defect Weyl
anomaly coefficient in a $d=4$ theory of free $\mathcal{N}=2$ hypermultiplets.
Including singular modes in the defect operator product expansion of
fundamental fields, we identify notable relevant deformations in the singular
defect theories and show that they trigger a renormalisation group flow towards
an IR fixed point with the most regular defect OPE. We also study Gukov-Witten
defects in free $d=4$ Maxwell theory and show that their central charges
vanish.
|
Loneliness (i.e., the distressing feeling that often accompanies the
subjective sense of social disconnection) is detrimental to mental and physical
health, and deficits in self-reported feelings of being understood by others is
a risk factor for loneliness. What contributes to these deficits in lonely
people? We used functional magnetic resonance imaging (fMRI) to unobtrusively
measure the relative alignment of various aspects of people's mental processing
of naturalistic stimuli (specifically, videos) as they unfold over time. We
thereby tested whether lonely people actually process the world in
idiosyncratic ways, rather than only exaggerating or misperceiving how
dissimilar others' views are to their own (which could lead them to feel
misunderstood, even if they actually see the world similarly to those around
them). We found evidence for such idiosyncrasy: lonely individuals' neural
responses during free viewing of the videos were dissimilar to peers in their
communities, particularly in brain regions (e.g., regions of the default-mode
network) in which similar responses have been associated with shared
psychological perspectives and subjective understanding. Our findings were
robust even after controlling for demographic similarities, participants'
overall levels of objective social isolation, and their friendships with each
other. These results suggest that being surrounded predominantly by people who
see the world differently from oneself may be a risk factor for loneliness,
even if one is friends with them.
|
The Traditional Approximation of Rotation (TAR) is a treatment of the
hydrodynamic equations of rotating and stably stratified fluids in which the
action of the Coriolis acceleration along the direction of the entropy and
chemical stratifications is neglected because it is weak in comparison with the
buoyancy force. The dependent variables in the equations for the dynamics of
gravito-inertial waves (GIWs) then become separable into radial and horizontal
parts as in the non-rotating case. The TAR is built on the assumptions that the
star is spherical (i.e. its centrifugal deformation is neglected) and uniformly
rotating. We study the feasibility of carrying out a generalisation of the TAR
to account for the centrifugal acceleration in the case of strongly deformed
uniformly and rapidly rotating stars (and planets), and to identify the
validity domain of this approximation. We analytically build a complete
formalism that allows the study of the dynamics of GIWs in spheroidal
coordinates, which take into account the flattening of rapidly rotating stars.
Assuming the hierarchy of frequencies adopted within the TAR in the spherical
case, we derive a generalised Laplace tidal equation for the horizontal
eigenfunctions of the GIWs and their asymptotic wave periods, which can be used
to probe the structure and dynamics of rotating deformed stars with
asteroseismology. Using 2D ESTER stellar models, we determine the validity
domain of the generalised TAR as a function of the rotation rate of the star
normalised by its critical angular velocity and its pseudo-radius. This
generalisation allows us to study the signature of the centrifugal effects on
GIWs in rapidly rotating deformed stars. We found that the effects of the
centrifugal acceleration in rapidly rotating early-type stars on GIWs are
theoretically detectable in modern space photometry using observations from
Kepler.
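For orientation, the spherical-case eigenvalue problem being generalised here is the classical Laplace tidal equation; the following is its standard form in the TAR literature (quoted up to sign conventions, and not the generalised spheroidal operator derived in this work):

$$\frac{\mathrm{d}}{\mathrm{d}\mu}\left[\frac{1-\mu^{2}}{1-\nu^{2}\mu^{2}}\,\frac{\mathrm{d}w}{\mathrm{d}\mu}\right]-\frac{1}{1-\nu^{2}\mu^{2}}\left[\frac{m^{2}}{1-\mu^{2}}+\frac{m\nu\left(1+\nu^{2}\mu^{2}\right)}{1-\nu^{2}\mu^{2}}\right]w+\Lambda w=0,$$

where $\mu=\cos\theta$, $m$ is the azimuthal order, $\nu=2\Omega/\omega$ is the spin parameter, and the eigenvalues $\Lambda$ set the asymptotic period spacings of the GIWs; the generalisation replaces this operator with its spheroidal-coordinate counterpart.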
|
Artificial intelligence is applied in a range of sectors, and is relied upon
for decisions requiring a high level of trust. For regression methods, trust is
increased if they approximate the true input-output relationships and perform
accurately outside the bounds of the training data. However, performance
beyond the test set is often poor, especially when data are sparse. This is
because the conditional average, which in many scenarios is a good
approximation of the `ground truth', is only modelled by conventional
Minkowski-r error measures when the data set adheres to restrictive
assumptions that many real data sets violate. To combat this, several methods
use prior knowledge to approximate the `ground truth'. However, prior
knowledge is not always available, and this paper investigates how error
measures affect the ability of a regression method to model the `ground
truth' in such scenarios.
Current error measures are shown to create an unhelpful bias and a new error
measure is derived which does not exhibit this behaviour. This is tested on 36
representative data sets with different characteristics, showing that it is
more consistent in determining the `ground truth' and in giving improved
predictions in regions beyond the range of the training data.
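The abstract does not state the derived error measure, so as a hedged illustration here is only the conventional Minkowski-r family it criticises; the example shows that the constant minimising the r=2 measure is the sample mean (the conditional average in the regression setting):

```python
import numpy as np

def minkowski_r_error(y_true, y_pred, r=2.0):
    """Conventional Minkowski-r error: r=2 is squared error (targets the
    conditional mean), r=1 is absolute error (targets the conditional median)."""
    return np.sum(np.abs(y_true - y_pred) ** r)

# Skewed toy data: with sparse samples the r=2 minimiser is biased away
# from the true conditional average, the behaviour the paper addresses.
rng = np.random.default_rng(0)
y = rng.lognormal(mean=0.0, sigma=1.0, size=50)
grid = np.linspace(y.min(), y.max(), 1000)
best = grid[np.argmin([minkowski_r_error(y, c, r=2.0) for c in grid])]
print(best, y.mean())  # the SSE-minimising constant is the sample mean
```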
|
This paper addresses the task of (complex) conversational question answering
over a knowledge graph. For this task, we propose LASAGNE (muLti-task semAntic
parSing with trAnsformer and Graph atteNtion nEtworks). It is the first
approach that employs a transformer architecture extended with Graph
Attention Networks for multi-task neural semantic parsing. LASAGNE uses a
transformer model for generating the base logical forms, while the Graph
Attention model is used to exploit correlations between (entity) types and
predicates to produce node representations. LASAGNE also includes a novel
entity recognition module which detects, links, and ranks all relevant entities
in the question context. We evaluate LASAGNE on a standard dataset for complex
sequential question answering, on which it outperforms existing baselines on
average across all question types. Specifically, we show that LASAGNE improves the
F1-score on eight out of ten question types; in some cases, the increase in
F1-score is more than 20% compared to the state of the art.
|
Calculation of conductivity in the Hubbard model is a challenging task.
Recent years have seen much progress in this respect and numerically exact
solutions are now possible in certain regimes. In this paper we discuss the
calculation of conductivity for the square lattice Hubbard model in the
presence of a perpendicular magnetic field, focusing on orbital effects. We
present the relevant formalism in full detail and generality, and then
discuss the simplifications that arise at the level of the dynamical mean field
theory (DMFT). We prove that the Kubo bubble preserves gauge and translational
invariance, and that in the DMFT the vertex corrections cancel regardless of
the magnetic field. We present the DMFT results for the spectral function and
both the longitudinal and Hall conductivity in several regimes of parameters.
We analyze thoroughly the quantum oscillations of the longitudinal conductivity
and identify a high-frequency oscillation component, arising as a combined
effect of scattering and temperature, in line with recent experimental
observations in moir\'e systems.
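Schematically, the cancellation of vertex corrections in DMFT means the conductivity reduces to the Kubo bubble built from the fully interacting spectral function; at zero field the familiar DC form (up to prefactors, for a local self-energy) is

$$\sigma_{xx}^{\mathrm{dc}} \;\propto\; \sum_{\mathbf{k}} \int \mathrm{d}\varepsilon \left(-\frac{\partial f(\varepsilon)}{\partial \varepsilon}\right) v_{x}^{2}(\mathbf{k})\, A^{2}(\mathbf{k},\varepsilon),$$

with $f$ the Fermi function, $v_x$ the band velocity and $A$ the spectral function; the formalism presented in the paper generalises this bubble to a finite perpendicular magnetic field.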
|
Millions of people use platforms such as YouTube, Facebook, Twitter, and
other mass media. Due to the accessibility of these platforms, they are often
used to establish a narrative, conduct propaganda, and disseminate
misinformation. This work proposes an approach that uses state-of-the-art NLP
techniques to extract features from video captions (subtitles). To evaluate our
approach, we utilize a publicly accessible and labeled dataset for classifying
videos as misinformation or not. The motivation behind exploring video captions
stems from our analysis of video metadata: attributes such as the number of
views, likes, dislikes, and comments are ineffective because videos are hard to
differentiate using this information alone. Using the caption dataset, the
proposed models can classify videos into three classes (Misinformation, Debunking
Misinformation, and Neutral) with 0.85 to 0.90 F1-score. To emphasize the
relevance of the misinformation class, we re-formulate our classification
problem as a two-class classification - Misinformation vs. others (Debunking
Misinformation and Neutral). In our experiments, the proposed models can
classify videos with 0.92 to 0.95 F1-score and 0.78 to 0.90 AUC ROC.
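The abstract does not fix a specific model, so the following is only a hedged, minimal caption-classification baseline; the captions, labels, and pipeline choices are invented for illustration, and the actual work uses stronger NLP feature extractors on a public labeled dataset:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical caption/label pairs standing in for the real dataset.
captions = ["vaccines cause ...", "fact check: the claim that ...", "daily vlog ..."]
labels   = ["Misinformation", "Debunking Misinformation", "Neutral"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=1),
                    LogisticRegression(max_iter=1000))
clf.fit(captions, labels)
print(clf.predict(["new video claims that ..."]))
```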
|
The edge states of a 2D topological insulator (TI) are considered within the
Volkov-Pankratov (VP) Hamiltonian, assuming a smooth transition between the TI
and an ordinary insulator (OI). The edge states form in the total gap of the
homogeneous 2D material. One pair of these states has linear dispersion, while
the others have gapped Dirac spectra. The optical selection rules are found:
optical transitions between neighboring edge states appear in the global 2D
gap for an in-plane light electric field directed across the edge.
Electrons in the linear edge states have no backscattering, which is
indicative of topological protection. However, when the linear edge states
enter the energy domain of the Dirac edge states, backscattering becomes
permitted. The elastic backscattering rate is found, and the Drude-like
conductivity is obtained when the Fermi level lies in the energy domain where
the linear and Dirac edge states coexist. The localization edge conductance of
a finite sample at zero temperature is also determined.
|
This paper compares the advantages, limitations, and computational
considerations of using Finite-Time Lyapunov Exponents (FTLEs) and Lagrangian
Descriptors (LDs) as tools for identifying barriers and mechanisms of fluid
transport in two-dimensional time-periodic flows. These barriers and mechanisms
of transport are often referred to as "Lagrangian Coherent Structures," though
this term often changes meaning depending on the author or context. This paper
will specifically focus on using FTLEs and LDs to identify stable and unstable
manifolds of hyperbolic stagnation points, and the Kolmogorov-Arnold-Moser
(KAM) tori associated with elliptic stagnation points. The background and
theory behind both methods and their associated phase space structures will be
presented, and then examples of FTLEs and LDs will be shown based on a simple,
periodic, time-dependent double-gyre toy model with varying parameters.
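As a concrete illustration, a minimal FTLE computation on the standard double-gyre stream function (with the commonly used parameters A = 0.1, epsilon = 0.25, omega = 2*pi/10) might look as follows; this is a sketch of the textbook method, not the paper's code:

```python
import numpy as np
from scipy.integrate import solve_ivp

A, eps, om = 0.1, 0.25, 2 * np.pi / 10  # standard double-gyre parameters

def velocity(t, p):
    """Time-periodic double-gyre velocity field on [0,2] x [0,1]."""
    x, y = p
    f  = eps * np.sin(om * t) * x**2 + (1 - 2 * eps * np.sin(om * t)) * x
    fx = 2 * eps * np.sin(om * t) * x + (1 - 2 * eps * np.sin(om * t))
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v =  np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * fx
    return [u, v]

def ftle(x0, y0, T=15.0, h=1e-4):
    """FTLE at (x0, y0): largest stretching rate of the flow map over [0, T],
    estimated with a central-difference flow-map Jacobian."""
    pts = [(x0 + h, y0), (x0 - h, y0), (x0, y0 + h), (x0, y0 - h)]
    ends = [solve_ivp(velocity, (0, T), p, rtol=1e-8).y[:, -1] for p in pts]
    F = np.column_stack([(ends[0] - ends[1]) / (2 * h),
                         (ends[2] - ends[3]) / (2 * h)])  # flow-map Jacobian
    C = F.T @ F                                           # Cauchy-Green tensor
    return np.log(np.linalg.eigvalsh(C).max()) / (2 * abs(T))

print(ftle(1.0, 0.5))  # ridges of this scalar field mark transport barriers
```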
|
Railway systems provide pivotal support to modern societies, making their
efficiency and robustness important to ensure. However, these systems are
susceptible to disruptions and delays, leading to accumulating economic damage.
The large spatial scale of delay spreading typically makes it difficult to
anticipate which regions will ultimately be affected by an initial disruption,
creating uncertainty for risk assessment. In this paper, we identify
geographical structures that reflect how delay spreads through railway
networks. We do so by proposing a graph-based model for delay propagation,
hybridizing schedule and empirical data, and applying spectral clustering. We
apply the model to four European railway systems: the Netherlands, Germany,
Switzerland and Italy. We characterize geographical structures in the railway
systems of these countries and interpret these regions in terms of delay
severity and how dynamically disconnected they are from the rest. The method
also allows us to point out important differences between these countries'
railway systems. For practitioners, this geographical characterization of
railways provides natural boundaries for local decision-making structures and a
first-order prioritization on which regions are at risk, given an initial
disruption.
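As a hedged sketch of the clustering step (the delay-propagation weights here are random stand-ins, not the paper's schedule/empirical model), spectral clustering over a station-to-station propagation matrix could look like:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Hypothetical delay-propagation affinities: W[i, j] is the modelled rate at
# which delay at station i spreads to station j.
rng = np.random.default_rng(1)
n_stations = 40
W = rng.random((n_stations, n_stations))
W = (W + W.T) / 2          # symmetrise for spectral clustering
np.fill_diagonal(W, 0.0)

sc = SpectralClustering(n_clusters=4, affinity="precomputed",
                        assign_labels="kmeans", random_state=0)
regions = sc.fit_predict(W)   # dynamically coherent delay regions
print(np.bincount(regions))   # stations per region
```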
|
Brain parcellations play a ubiquitous role in the analysis of magnetic
resonance imaging (MRI) datasets. Over 100 years of research has been conducted
in pursuit of an ideal brain parcellation. Different methods have been
developed and studied for constructing brain parcellations using different
imaging modalities. More recently, several data-driven parcellation methods
have been adopted from data mining, machine learning, and statistics
communities. With contributions from different scientific fields, there is a
rich body of literature that needs to be examined to appreciate the breadth of
existing research and the gaps that need to be investigated. In this work, we
review the large body of in vivo brain parcellation research spanning different
neuroimaging modalities and methods. A key contribution of this work is a
semantic organization of this large body of work into different taxonomies,
making it easy to understand the breadth and depth of the brain parcellation
literature. Specifically, we categorized the existing parcellations into three
groups: Anatomical parcellations, functional parcellations, and structural
parcellations which are constructed using T1-weighted MRI, functional MRI
(fMRI), and diffusion-weighted imaging (DWI) datasets, respectively. We provide
a multi-level taxonomy of different methods studied in each of these
categories, compare their relative strengths and weaknesses, and highlight the
challenges currently faced for the development of brain parcellations.
|
Effective molecular representation learning is of great importance to
facilitate molecular property prediction, which is a fundamental task for the
drug and material industry. Recent advances in graph neural networks (GNNs)
have shown great promise in applying GNNs for molecular representation
learning. Moreover, a few recent studies have also demonstrated successful
applications of self-supervised learning methods to pre-train the GNNs to
overcome the problem of insufficient labeled molecules. However, existing GNNs
and pre-training strategies usually treat molecules as topological graph data
without fully utilizing the molecular geometry information. Yet the
three-dimensional (3D) spatial structure of a molecule, a.k.a. its molecular
geometry, is one of the most critical factors determining molecular
physical, chemical, and biological properties. To this end, we propose a novel
Geometry Enhanced Molecular representation learning method (GEM) for Chemical
Representation Learning (ChemRL). First, we design a geometry-based GNN
architecture that simultaneously models atoms, bonds, and bond angles in a
molecule. Specifically, we devise double graphs for a molecule: the first
encodes the atom-bond relations; the second encodes the bond-angle
relations. Moreover, on top of the devised GNN architecture, we propose several
novel geometry-level self-supervised learning strategies to learn spatial
knowledge by utilizing the local and global molecular 3D structures. We compare
ChemRL-GEM with various state-of-the-art (SOTA) baselines on different
molecular benchmarks and show that ChemRL-GEM significantly outperforms
all baselines in both regression and classification tasks. For example, the
experimental results show an overall improvement of 8.8% on average compared to
SOTA baselines on the regression tasks, demonstrating the superiority of the
proposed method.
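To make the double-graph idea concrete, here is a hedged sketch (plain numpy, invented helper names, not the GEM implementation): an atom-bond graph whose edges are bonds with length attributes, and a bond-angle graph whose nodes are those bonds and whose edges connect bonds sharing an atom, weighted by the 3D angle:

```python
import numpy as np

def build_double_graphs(bonds, coords):
    """bonds: list of (i, j) atom-index pairs; coords: (n_atoms, 3) array.
    Returns G (atom-bond graph edges with bond lengths) and H (bond-angle
    graph edges with angles between bonds sharing an atom)."""
    G = [(i, j, float(np.linalg.norm(coords[i] - coords[j]))) for i, j in bonds]
    H = []
    for a, (i, j, _) in enumerate(G):
        for b, (k, l, _) in enumerate(G):
            if b <= a:
                continue
            shared = {i, j} & {k, l}
            if shared:                           # two bonds sharing one atom
                c = shared.pop()
                u = coords[({i, j} - {c}).pop()] - coords[c]
                v = coords[({k, l} - {c}).pop()] - coords[c]
                cosang = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
                H.append((a, b, float(np.arccos(np.clip(cosang, -1, 1)))))
    return G, H

# Water-like toy molecule: O at the origin, two H atoms.
coords = np.array([[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]])
G, H = build_double_graphs([(0, 1), (0, 2)], coords)
print(H)  # one bond-angle edge, roughly the H-O-H angle in radians
```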
|
We investigate program equivalence for linear higher-order (sequential)
languages endowed with primitives for computational effects. More specifically,
we study operationally-based notions of program equivalence for a linear
$\lambda$-calculus with explicit copying and algebraic effects \emph{\`a la}
Plotkin and Power. Such a calculus makes explicit the interaction between
copying and linearity, which are intensional aspects of computation, with
effects, which are, instead, \emph{extensional}. We review some of the notions
of equivalences for linear calculi proposed in the literature and show their
limitations when applied to effectful calculi where copying is a first-class
citizen. We then introduce resource transition systems, namely transition
systems whose states are built over tuples of programs representing the
available resources, as an operational semantics accounting for both
intensional and extensional interactive behaviors of programs. Our main result
is a sound and complete characterization of contextual equivalence as trace
equivalence defined on top of resource transition systems.
|
One of the exciting recent developments in decentralized finance (DeFi) has
been the development of decentralized cryptocurrency exchanges that can
autonomously handle conversion between different cryptocurrencies.
Decentralized exchange protocols such as Uniswap, Curve and other types of
Automated Market Makers (AMMs) maintain a liquidity pool (LP) of two or more
assets that is constrained to preserve at all times a mathematical relation
between them, defined by a given function or curve. Examples of such functions
are the constant-sum and constant-product AMMs. Existing systems, however,
suffer from several challenges. They require external arbitrageurs to restore
the price of tokens in the pool to match the market price. Such activities can
potentially drain resources from the liquidity pool. In particular, dramatic
market price changes can result in low liquidity with respect to one or more
of the assets and reduce the total value of the LP. In this work we propose a
new approach to constructing the AMM, based on the idea of dynamic curves. It
utilizes input
from a market price oracle to modify the mathematical relationship between the
assets so that the pool price continuously and automatically adjusts to be
identical to the market price. This approach eliminates arbitrage opportunities
and, as we show through simulations, maintains liquidity in the LP for all
assets and the total value of the LP over a wide range of market prices.
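As a strongly hedged sketch of an oracle-synchronised curve (one simple realization, not necessarily the curve family proposed in the paper), a weighted constant-product pool can re-solve its weight from the oracle price so the pool's spot price always equals the market price:

```python
class DynamicWeightPool:
    """Hedged sketch: pool invariant x**w * y**(1-w) = k, with spot price of
    X in units of Y equal to w/(1-w) * y/x. Re-solving w from an external
    oracle keeps the pool price identical to the market price, removing the
    arbitrage opportunity. Names and mechanics are illustrative."""

    def __init__(self, x, y):
        self.x, self.y = x, y   # reserves of asset X and asset Y
        self.w = 0.5            # weight of X

    def sync_to_oracle(self, p_market):
        # Choose w so that w/(1-w) * y/x == p_market.
        r = p_market * self.x / self.y
        self.w = r / (1.0 + r)

    def swap_x_for_y(self, dx):
        # Trade along the current curve: x**w * y**(1-w) stays constant.
        k = self.x ** self.w * self.y ** (1 - self.w)
        new_y = (k / (self.x + dx) ** self.w) ** (1.0 / (1 - self.w))
        dy, self.x, self.y = self.y - new_y, self.x + dx, new_y
        return dy

pool = DynamicWeightPool(x=1000.0, y=1000.0)
pool.sync_to_oracle(p_market=2.0)               # market values X at 2 Y
print(pool.w / (1 - pool.w) * pool.y / pool.x)  # pool spot price -> 2.0
```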
|
We develop a model of interacting zwitterionic membranes with rotating
surface dipoles immersed in a monovalent salt, and implement it in a field
theoretic formalism. In the mean-field regime of monovalent salt, the
electrostatic forces between the membranes are characterized by a non-uniform
trend: at large membrane separations, the interfacial dipoles on the opposing
sides behave as like-charge cations and give rise to repulsive membrane
interactions; at short membrane separations, the anionic field induced by the
dipolar phosphate groups sets the behavior in the intermembrane region. The
attraction of the cationic nitrogens in the dipolar lipid headgroups leads to
the adhesion of the membrane surfaces via dipolar bridging. The underlying
competition between the opposing field components of the individual dipolar
charges leads to the non-uniform salt ion affinity of the zwitterionic membrane
with respect to the separation distance; large inter-membrane separations imply
anionic excess, while small, nanometer-size separations favor cationic excess.
This complex ionic selectivity of zwitterionic membranes may have relevant
repercussions on nanofiltration and nanofluidic transport techniques.
|
Recently the leading order of the correlation energy of a Fermi gas in a
coupled mean-field and semiclassical scaling regime has been derived, under the
assumption of an interaction potential with a small norm and with compact
support in Fourier space. We generalize this result to large interaction
potentials, requiring only $|\cdot| \hat{V} \in \ell^1 (\mathbb{Z}^3)$. Our
proof is based on approximate, collective bosonization in three dimensions.
Significant improvements compared to recent work include stronger bounds on
non-bosonizable terms and more efficient control on the bosonization of the
kinetic energy.
|
The localization spread gives a criterion to decide between metallic versus
insulating behaviour of a material. It is defined as the second moment cumulant
of the many-body position operator, divided by the number of electrons.
Different operators are used for systems treated with Open or Periodic Boundary
Conditions. In particular, in the case of periodic systems, we use the
complex-position definition, which was already used in similar contexts for the
treatment of both classical and quantum situations. In this study, we show that
the localization spread evaluated on a finite ring system of radius $R$ with
Open Boundary Conditions leads, in the large $R$ limit, to the same formula
derived by Resta et al. for 1D systems with periodic Born-von K\'arm\'an
boundary conditions. A second formula, alternative to Resta's, is also
given, based on the sum-over-state formalism, allowing for an interesting
generalization to polarizability and other similar quantities.
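For reference, the complex-position definition referred to above is, in the Resta-Sorella form for $N$ electrons on a periodic ring of circumference $L$ (quoted as a standard formula, up to conventions):

$$z_{N} = \big\langle \Psi \big|\, e^{\,i\frac{2\pi}{L}\hat{X}} \,\big|\Psi\big\rangle, \qquad \hat{X}=\sum_{j=1}^{N}\hat{x}_{j}, \qquad \lambda^{2} = -\frac{1}{N}\left(\frac{L}{2\pi}\right)^{2}\ln\left|z_{N}\right|^{2},$$

which stays finite in insulators and diverges in metals in the large-$L$ limit.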
|
In recent years, there has been a shift in facial behavior analysis from
laboratory-controlled conditions to challenging in-the-wild conditions, owing
to the superior performance of deep learning based approaches in many real
world applications. However, the performance of deep learning approaches
relies on the amount of training data. One of the major problems with data
acquisition is the requirement of annotations for large amounts of training
data. Labeling huge training sets demands extensive human support with strong
domain expertise in facial expressions or action units, which is difficult to
obtain in real-time environments. Moreover, the labeling process is highly
vulnerable to the ambiguity of expressions or action units, especially for
intensities, due to the bias induced by the domain experts. Therefore, there
is an imperative need to address the problem of facial behavior analysis with
weak annotations. In this
paper, we provide a comprehensive review of weakly supervised learning (WSL)
approaches for facial behavior analysis with both categorical as well as
dimensional labels along with the challenges and potential research directions
associated with it. First, we introduce various types of weak annotations in
the context of facial behavior analysis and the challenges each of them
poses. We then systematically review the existing state-of-the-art
approaches and provide a taxonomy of these approaches along with their insights
and limitations. In addition, widely used data-sets in the reviewed literature
and the performance of these approaches along with evaluation principles are
summarized. Finally, we discuss the remaining challenges and opportunities
along with the potential research directions in order to apply facial behavior
analysis with weak labels in real life situations.
|
We show that the action of the mapping class group on the space of closed
curves of a closed surface effectively tracks the corresponding action on
Teichm\"uller space in the following sense: for all but quantitatively few
mapping classes, the information of how a mapping class moves a given point of
Teichm\"uller space determines, up to a power saving error term, how it changes
the geometric intersection numbers of a given closed curve with respect to
arbitrary geodesic currents. Applications include an effective estimate
describing the speed of convergence of Teichm\"uller geodesic rays to the
boundary at infinity of Teichm\"uller space, an effective estimate comparing
the Teichm\"uller and Thurston metrics along mapping class group orbits of
Teichm\"uller space, and, in the sequel, effective estimates for countings of
filling closed geodesics on closed, negatively curved surfaces.
|
We discuss a model based on a dark sector described by a non-Abelian $SU(2)_D$
gauge symmetry, in which we introduce $SU(2)_L \times SU(2)_D$ bi-doublet
vector-like leptons to generate active neutrino masses and kinetic mixing
between the $SU(2)_D$ and $U(1)_Y$ gauge fields at the one-loop level. After
spontaneous symmetry breaking of $SU(2)_D$, a remnant $Z_4$ symmetry
guarantees the stability of the dark matter candidates. We formulate the
neutrino mass matrix and the related lepton flavor violating processes, and
discuss dark matter physics by estimating the relic density. It is found that
our model realizes a multicomponent dark matter scenario due to the $Z_4$
symmetry, and the relic density can be explained by gauge interactions with
the kinetic mixing effect.
|
In this article we continue with the research initiated in our previous work
on singular Liouville equations with quantized singularity. The main goal of
this article is to prove that as long as the bubbling solutions violate the
spherical Harnack inequality near a singular source, the first derivatives of
coefficient functions must tend to zero.
|
We compute explicit solutions $\Lambda^\pm_m$ of the Painleve VI (PVI)
differential equation from equivariant instanton bundles $E_m$ corresponding to
Yang-Mills instantons with "quadrupole symmetry." This is based on a
generalization of Hitchin's logarithmic connection to vector bundles with an
$SL_2({\mathbb C})$ action. We then identify explicit Okamoto transformations
which play the role of "creation operators" for constructing $\Lambda^\pm_m$
from the "ground state" $\Lambda^\pm_0$, suggesting that the equivariant
instanton bundles $E_m$ might similarly be related to the trivial "ground
state" $E_0$.
|
Machine unlearning has great significance in guaranteeing model security and
protecting user privacy. Additionally, many legal provisions clearly stipulate
that users have the right to demand model providers to delete their own data
from the training set, that is, the right to be forgotten. The naive way of
unlearning data is to retrain the model from scratch without it, which becomes
extremely time- and resource-consuming at the modern scale of deep neural
networks. Other unlearning approaches that refactor the model or training data
struggle to strike a balance between overhead and model usability.
In this paper, we propose an approach, dubbed DeepObliviate, to implement
machine unlearning efficiently, without modifying the normal training mode. Our
approach improves the original training process by storing intermediate models
on the hard disk. Given a data point to unlearn, we first quantify its temporal
residual memory left in stored models. The influenced models will be retrained
and we decide when to terminate the retraining based on the trend of residual
memory on-the-fly. Finally, we stitch together an unlearned model by combining the
retrained models and uninfluenced models. We extensively evaluate our approach
on five datasets and deep learning models. Compared to the method of retraining
from scratch, our approach can achieve 99.0%, 95.0%, 91.9%, 96.7%, 74.1%
accuracy rates and 66.7$\times$, 75.0$\times$, 33.3$\times$, 29.4$\times$,
13.7$\times$ speedups on the MNIST, SVHN, CIFAR-10, Purchase, and ImageNet
datasets, respectively. Compared to the state-of-the-art unlearning approach,
we improve 5.8% accuracy, 32.5$\times$ prediction speedup, and reach a
comparable retrain speedup under identical settings on average on these
datasets. Additionally, DeepObliviate can also pass the backdoor-based
unlearning verification.
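A hedged toy sketch of checkpoint-based unlearning in this spirit (a least-squares model stands in for a DNN, and the distance to stored checkpoints stands in for the paper's residual-memory measure; these proxies are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 5, 20
batches = [(rng.standard_normal((8, d)), rng.standard_normal(8)) for _ in range(T)]

def train_step(w, X, y, lr=0.05):
    """One SGD step on a least-squares toy model standing in for a DNN."""
    return w - lr * X.T @ (X @ w - y) / len(y)

# Original training: store an intermediate model after every batch.
w = np.zeros(d)
checkpoints = []
for X, y in batches:
    w = train_step(w, X, y)
    checkpoints.append(w.copy())

# Unlearn batch t0: resume from the last unaffected checkpoint, retrain on
# later batches, and stop once the residual memory (here, the distance to
# the originally stored model) has decayed below eps; then "stitch" on the
# remaining, effectively uninfluenced training increments.
t0, eps = 7, 1e-2
w = checkpoints[t0 - 1].copy()
for t in range(t0 + 1, T):
    w = train_step(w, *batches[t])
    if np.linalg.norm(w - checkpoints[t]) < eps:     # influence washed out
        w = w + (checkpoints[-1] - checkpoints[t])   # reuse the original tail
        break
print("unlearned model ready after retraining", t - t0, "batches")
```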
|
Theoretical studies of superradiant lasing on optical clock transitions
predict a superb frequency accuracy and precision closely tied to the bare
atomic linewidth. Such a superradiant laser is also robust against cavity
fluctuations when the spectral width of the lasing mode is much larger than
that of the atomic medium. Recent predictions suggest that this unique feature
persists even for a hot and thus strongly broadened ensemble, provided the
effective atom number is large enough. Here we use a second-order cumulant
expansion approach to study the power, linewidth and lineshifts of such a
superradiant laser as a function of the inhomogeneous width of the ensemble
including variations of the spatial atom-field coupling within the resonator.
We present conditions on the atom numbers, the pump and coupling strengths
required to reach the buildup of collective atomic coherence as well as scaling
and limitations for the achievable laser linewidth.
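The second-order cumulant expansion used here closes the hierarchy of operator moments by setting third-order cumulants to zero, i.e. factorizing third moments as

$$\langle \hat{A}\hat{B}\hat{C} \rangle \simeq \langle \hat{A}\hat{B} \rangle \langle \hat{C} \rangle + \langle \hat{A}\hat{C} \rangle \langle \hat{B} \rangle + \langle \hat{B}\hat{C} \rangle \langle \hat{A} \rangle - 2\,\langle \hat{A} \rangle \langle \hat{B} \rangle \langle \hat{C} \rangle,$$

which retains atom-atom and atom-field correlations while remaining tractable for large, inhomogeneously broadened ensembles.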
|
Many sequence-to-sequence tasks in natural language processing are roughly
monotonic in the alignment between source and target sequence, and previous
work has facilitated or enforced learning of monotonic attention behavior via
specialized attention functions or pretraining. In this work, we introduce a
monotonicity loss function that is compatible with standard attention
mechanisms and test it on several sequence-to-sequence tasks:
grapheme-to-phoneme conversion, morphological inflection, transliteration, and
dialect normalization. Experiments show that we can achieve largely monotonic
behavior. Performance is mixed, with larger gains on top of RNN baselines.
General monotonicity does not benefit transformer multihead attention;
however, we see isolated improvements when only a subset of heads is biased
towards monotonic behavior.
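One way such a loss can be expressed (a hedged sketch; the paper's exact form may differ) is to penalize decreases in the expected source position attended to at successive target steps:

```python
import torch

def monotonicity_loss(attn):
    """attn: (batch, tgt_len, src_len) attention weights (rows sum to 1).
    Penalise steps where the expected attended source position moves
    backwards; zero when attention is perfectly monotonic."""
    src_pos = torch.arange(attn.size(-1), dtype=attn.dtype, device=attn.device)
    expected = (attn * src_pos).sum(-1)              # (batch, tgt_len)
    backward = expected[:, :-1] - expected[:, 1:]    # >0 means moving back
    return torch.relu(backward).mean()

attn = torch.softmax(torch.randn(2, 6, 10), dim=-1)
print(monotonicity_loss(attn))  # add to the task loss with a small weight
```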
|
For hidden Markov models one of the most popular estimates of the hidden
chain is the Viterbi path -- the path maximising the posterior probability. We
consider a more general setting, called the pairwise Markov model (PMM), where
the joint process consisting of finite-state hidden process and observation
process is assumed to be a Markov chain. It has been recently proven that under
some conditions the Viterbi path of the PMM can almost surely be extended to
infinity, thereby defining the infinite Viterbi decoding of the observation
sequence, called the Viterbi process. This was done by constructing a block of
observations, called a barrier, which ensures that the Viterbi path goes through
a given state whenever this block occurs in the observation sequence. In this
paper we prove that the joint process consisting of Viterbi process and PMM is
regenerative. The proof involves a delicate construction of regeneration times
which coincide with the occurrences of barriers. As one possible application of
our theory, some results on the asymptotics of the Viterbi training algorithm
are derived.
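For readers unfamiliar with the object being extended, here is the standard Viterbi algorithm for an ordinary HMM (the PMM generalization additionally conditions transitions and emissions on the previous observation; this sketch shows only the classical recursion):

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Viterbi path for an HMM in the log domain: the state sequence
    maximising the posterior probability given observations `obs`.
    pi: initial probs (K,), A: transitions (K, K), B: emissions (K, M)."""
    K, T = len(pi), len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)   # scores[i, j]: from i to j
        back[t] = scores.argmax(0)
        logd = scores.max(0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):            # backtrack
        path.append(int(back[t, path[-1]]))
    return path[::-1]

pi = np.array([0.6, 0.4]); A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(viterbi([0, 0, 1, 1, 1], pi, A, B))
```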
|
A generalized Kummer surface $X=Km_{3}(A,G_{A})$ is the minimal resolution of
the quotient of a $2$-dimensional complex torus by an order 3 symplectic
automorphism group $G_{A}$. A Kummer structure on $X$ is an isomorphism class
of pairs $(B,G_{B})$ such that $X\simeq Km_{3}(B,G_{B})$. When the surface is
algebraic, we obtain that the number of Kummer structures is linked with the
number of order $3$ elliptic points on some Shimura curve naturally related to
$A$. For each $n\in\mathbb{N}$, we obtain generalized Kummer surfaces $X_{n}$
for which the number of Kummer structures is $2^{n}$. We then give a
classification of the moduli spaces of generalized Kummer surfaces. When the
surface is non algebraic, there is only one Kummer structure, but the number of
irreducible components of the moduli spaces of such surfaces is large compared
to the algebraic case. The endomorphism rings of the complex $2$-tori we study
are mainly quaternion orders; these orders contain the ring of Eisenstein
integers. One can also see this paper as a study of quaternion orders
$\mathcal{O}$ over $\mathbb{Q}$ that contain the ring of Eisenstein integers.
We obtain that such an order is determined up to isomorphism by its
discriminant, and when the quaternion algebra is indefinite, the order
$\mathcal{O}$ is principal.
|
Neural networks and quantum computing are both significant and appealing
fields, and their intersection promises large-scale computing tasks that are
untackled by conventional computers. However, both developments are restricted
by the pace of hardware development. Nevertheless, many neural network
algorithms were proposed before GPUs became powerful enough to run very deep
models. Similarly, quantum algorithms can be proposed as knowledge reserves
before real quantum computers become easily accessible. Specifically, taking
advantage of both neural networks and quantum computation by designing quantum
deep neural networks (QDNNs) for acceleration on Noisy Intermediate-Scale
Quantum (NISQ) processors is an important research problem. As one of the most
widely used neural network architectures, the convolutional neural network
(CNN) has yet to be accelerated by quantum mechanisms, with only a few
attempts demonstrated so far. In this
paper, we propose a new hybrid quantum-classical circuit, namely Quantum
Fourier Convolutional Network (QFCN). Our model theoretically achieves an
exponential speed-up over the classical CNN and improves on the existing best
result for quantum CNNs. We demonstrate the potential of this architecture by
applying it to different deep learning tasks, including traffic prediction and
image classification.
|
Our goal, in the context of open-domain textual question-answering (QA), is
to explain answers by showing the line of reasoning from what is known to the
answer, rather than simply showing a fragment of textual evidence (a
"rationale'"). If this could be done, new opportunities for understanding and
debugging the system's reasoning become possible. Our approach is to generate
explanations in the form of entailment trees, namely a tree of multipremise
entailment steps from facts that are known, through intermediate conclusions,
to the hypothesis of interest (namely the question + answer). To train a model
with this skill, we created ENTAILMENTBANK, the first dataset to contain
multistep entailment trees. Given a hypothesis (question + answer), we define
three increasingly difficult explanation tasks: generate a valid entailment
tree given (a) all relevant sentences (b) all relevant and some irrelevant
sentences, or (c) a corpus. We show that a strong language model can partially
solve these tasks, in particular when the relevant sentences are included in
the input (e.g., 35% of trees for (a) are perfect), and with indications of
generalization to other domains. This work is significant as it provides a new
type of dataset (multistep entailments) and baselines, offering a new avenue
for the community to generate richer, more systematic explanations.
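As a hedged sketch of the data structure (sentence texts invented for illustration; not taken from ENTAILMENTBANK), an entailment tree is a nesting of multipremise steps from leaf sentences through intermediate conclusions to the hypothesis:

```python
from dataclasses import dataclass

@dataclass
class EntailmentStep:
    premises: list   # leaf sentence strings or nested EntailmentStep nodes
    conclusion: str

# Hypothetical two-step tree for a question + answer about plant growth.
step1 = EntailmentStep(
    premises=["sent1: leaves absorb sunlight",
              "sent2: absorbed sunlight drives photosynthesis"],
    conclusion="int1: leaves use sunlight for photosynthesis")
tree = EntailmentStep(
    premises=[step1, "sent3: photosynthesis produces the food a plant needs"],
    conclusion="hypothesis: plants need sunlight to grow")
```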
|
We study the problem of list-decodable linear regression, where an adversary
can corrupt a majority of the examples. Specifically, we are given a set $T$ of
labeled examples $(x, y) \in \mathbb{R}^d \times \mathbb{R}$ and a parameter
$0< \alpha <1/2$ such that an $\alpha$-fraction of the points in $T$ are i.i.d.
samples from a linear regression model with Gaussian covariates, and the
remaining $(1-\alpha)$-fraction of the points are drawn from an arbitrary noise
distribution. The goal is to output a small list of hypothesis vectors such
that at least one of them is close to the target regression vector. Our main
result is a Statistical Query (SQ) lower bound of $d^{\mathrm{poly}(1/\alpha)}$
for this problem. Our SQ lower bound qualitatively matches the performance of
previously developed algorithms, providing evidence that current upper bounds
for this task are nearly best possible.
|
In this work, we aim to address the 3D scene stylization problem - generating
stylized images of the scene at arbitrary novel view angles. A straightforward
solution is to combine existing novel view synthesis and image/video style
transfer approaches, which often leads to blurry results or inconsistent
appearance. Inspired by the high quality results of the neural radiance fields
(NeRF) method, we propose a joint framework to directly render novel views with
the desired style. Our framework consists of two components: an implicit
representation of the 3D scene with the neural radiance field model, and a
hypernetwork to transfer the style information into the scene representation.
In particular, our implicit representation model disentangles the scene into
the geometry and appearance branches, and the hypernetwork learns to predict
the parameters of the appearance branch from the reference style image. To
alleviate the training difficulties and memory burden, we propose a two-stage
training procedure and a patch sub-sampling approach to optimize the style and
content losses with the neural radiance field model. After optimization, our
model is able to render consistent novel views at arbitrary view angles with
arbitrary style. Both quantitative evaluation and human subject study have
demonstrated that the proposed method generates faithful stylization results
with consistent appearance across different views.
|
In this paper, we propose a simple yet effective method to deal with the
violation of the Closed-World Assumption for a classifier. Previous works tend
to apply a threshold either on the classification scores or the loss function
to reject the inputs that violate the assumption. However, these methods cannot
achieve the low False Positive Ratio (FPR) required in safety applications. The
proposed method is a rejection option based on hypothesis testing with
probabilistic networks. With probabilistic networks, it is possible to estimate
the distribution of outcomes instead of a single output. By utilizing Z-test
over the mean and standard deviation for each class, the proposed method can
estimate the statistical significance of the network certainty and reject
uncertain outputs. The proposed method was evaluated on different
configurations of the COCO and CIFAR datasets. The performance of the proposed
method is compared with the Softmax Response, which is a known top-performing
method. It is shown that the proposed method can achieve a broader range of
operation and cover a lower FPR than the alternative.
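A hedged sketch of the idea (the exact test statistic in the paper may differ; here the per-class mean and standard deviation come from repeated stochastic forward passes, and the helper name is invented):

```python
import numpy as np
from scipy.stats import norm

def reject_option(prob_samples, alpha=0.01):
    """prob_samples: (n_samples, n_classes) class probabilities from repeated
    passes of a probabilistic network. Z-test whether the top class's mean
    score significantly exceeds the runner-up's; reject if not significant."""
    mu, sd = prob_samples.mean(0), prob_samples.std(0, ddof=1)
    n = prob_samples.shape[0]
    top, second = np.argsort(mu)[-1], np.argsort(mu)[-2]
    z = (mu[top] - mu[second]) / np.sqrt((sd[top]**2 + sd[second]**2) / n)
    p_value = norm.sf(z)   # one-sided
    return (int(top), p_value) if p_value < alpha else ("reject", p_value)

samples = np.random.default_rng(0).dirichlet([5, 2, 1], size=30)
print(reject_option(samples))
```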
|
I describe the rationale for, and design of, an agent-based simulation model
of a contemporary online sports-betting exchange. Such exchanges, closely
related to the exchange mechanisms at the heart of major financial markets,
have revolutionized the gambling industry in the past 20 years. However,
gathering sufficiently large quantities of rich, temporally high-resolution
data from real exchanges - i.e., the sort of data needed in large quantities
for Deep Learning - is often very expensive, and sometimes simply impossible.
This creates a need for a plausibly realistic synthetic data generator, which is
Exchange" (BBE), is intended as a common platform, a data-source and
experimental test-bed, for researchers studying the application of AI and
machine learning (ML) techniques to issues arising in betting exchanges; and,
as far as I have been able to determine, BBE is the first of its kind: a free
open-source agent-based simulation model consisting not only of a
sports-betting exchange, but also a minimal simulation model of racetrack
sporting events (e.g., horse-races or car-races) about which bets may be made,
and a population of simulated bettors who each form their own private
evaluation of odds and place bets on the exchange before and - crucially -
during the race itself (i.e., so-called "in-play" betting) and whose betting
opinions change second-by-second as each race event unfolds. BBE is offered as
a proof-of-concept system that enables the generation of large high-resolution
data-sets for automated discovery or improvement of profitable strategies for
betting on sporting events via the application of AI/ML and advanced data
analytics techniques. This paper offers an extensive survey of relevant
literature and explains the motivation and design of BBE, and presents brief
illustrative results.
|
Pinch-off and satellite-droplet formation during the breakup of a
near-inviscid liquid bridge sandwiched between two equal and coaxial circular
plates have been investigated. The breakup always results in the formation of
a spindle shape, the precursor of the satellite droplet at the moment of
pinch-off. Interestingly, the slenderness of this spindle is always greater
than 2{\pi}, and the breakup always produces exactly one satellite droplet
regardless of the surface tension and the slenderness of the liquid bridge. We
predict that the cone angle of the spindle formed during the pinch-off of
inviscid fluids should be 18.086122158...{\deg}. After pinch-off, the
satellite droplet drifts out of the pinch-off region in the case of a
symmetric short bridge, and merges again with the sessile drop in the case of
an asymmetric long bridge. We demonstrate that the velocity of the satellite
droplet is consistent with a scaling model based on a balance between
capillary forces and inertia at the pinch-off region.
|
In this paper, we generalize fractional $q$-integrals by the method of
$q$-difference equation. In addition, we deduce fractional Askey--Wilson
integral, reversal type fractional Askey--Wilson integral and Ramanujan type
fractional Askey--Wilson integral.
|
Person Re-Identification (Re-ID) is of great importance to many video
surveillance systems. Learning discriminative features for Re-ID remains a
challenge due to the large variations in the image space, e.g., continuously
changing human poses, illumination and viewpoints. In this paper, we
propose HAVANA, a novel extensible, light-weight HierArchical and
VAriation-Normalized Autoencoder that learns features robust to intra-class
variations. In contrast to existing generative approaches that prune the
variations with heavy extra supervised signals, HAVANA suppresses the
intra-class variations with a Variation-Normalized Autoencoder trained with no
additional supervision. We also introduce a novel Jensen-Shannon triplet loss
for contrastive distribution learning in Re-ID. In addition, we present
Hierarchical Variation Distiller, a hierarchical VAE to factorize the latent
representation and explicitly model the variations. To the best of our
knowledge, HAVANA is the first VAE-based framework for person Re-ID.
|
We classify Frobenius forms, a special class of homogeneous polynomials in
characteristic $p>0$, in up to five variables over an algebraically closed
field. We also point out some of the similarities with quadratic forms.
|
We present optical spectroscopy for 18 halo white dwarfs identified using
photometry from the Canada-France Imaging Survey and Pan-STARRS1 DR1 3$\pi$
survey combined with astrometry from Gaia DR2. The sample contains 13 DA, 1 DZ,
2 DC, and two potentially exotic types of white dwarf. We fit both the spectrum
and the spectral energy distribution in order to obtain the temperature and
surface gravity, which we then convert into a mass, and then an age, using
stellar isochrones and the initial-to-final mass relation. We find a large
spread in ages that is not consistent with expected formation scenarios for the
Galactic halo. We find a mean age of 9.03$^{+2.13}_{-2.03}$ Gyr and a
dispersion of 4.21$^{+2.33}_{-1.58}$ Gyr for the inner halo using a maximum
likelihood method. This result suggests an extended star formation history
within the local halo population.
|
According to their strength, the tracing properties of a code can be
categorized as frameproof, separating, IPP and TA. It is known that if the
minimum distance of the code is larger than a certain threshold then the TA
property implies the rest. Silverberg et al. ask if there is some kind of
tracing capability left when the minimum distance falls below the threshold.
Under different assumptions, several papers have given a negative answer to the
question. In this paper further progress is made. We establish values of the
minimum distance for which Reed-Solomon codes do not possess the separating
property.
|
By using ab-initio-accurate force fields and molecular dynamics simulations
we demonstrate that the layer stiffness has profound effects on the
superlubricant state of two-dimensional van der Waals heterostructures. These
are engineered to have identical inter-layer sliding energy surfaces, but
layers of different rigidity, so that the effects of the stiffness on the
microscopic friction in the superlubricant state can be isolated. A twofold
increase in the intra-layer stiffness reduces the friction by approximately a
factor of six. Most importantly, we find two sliding regimes as a function of the
sliding velocity. At low velocity the heat generated by the motion is
efficiently exchanged between the layers and the friction is independent of
whether the sliding layer is softer or harder than the substrate. In contrast,
at high velocity the frictional heat flux cannot be exchanged fast enough, and
the build-up of significant temperature gradients between the layers is
observed. In this situation the temperature profile depends on whether the
slider is softer than the substrate.
|
Previous studies have predicted the failure of Fourier's law of thermal
conduction due to the existence of wave-like propagation of heat with finite
propagation speed. This non-Fourier thermal transport phenomenon can appear in
both the hydrodynamic and (quasi-)ballistic regimes. Hence, it is not easy to
clearly distinguish these two non-Fourier regimes by this phenomenon alone. In
this work, the transient heat propagation in a homogeneous thermal system is
studied based on the phonon Boltzmann transport equation (BTE) under the
Callaway model. In a quasi-one or quasi-two (three) dimensional simulation
with a homogeneous environment temperature, a high-temperature heat source is
suddenly added at the center at the initial moment, and heat then propagates
from the center outward. Numerical results show that in quasi-two (three)
dimensional simulations, the transient temperature will be lower than the
lowest value of initial temperature in the hydrodynamic regime within a certain
range of time and space. This phenomenon appears only when the normal
scattering dominates heat conduction. Moreover, it disappears in quasi-one
dimensional simulations. A similar phenomenon is also observed in thermal
systems with a time-varying heat source. This novel transient heat propagation phenomenon
of hydrodynamic phonon transport distinguishes it well from (quasi) ballistic
phonon transport.
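For reference, the Callaway dual-relaxation model referred to above is the standard two-time BTE (quoted as a textbook form):

$$\frac{\partial f}{\partial t} + \mathbf{v}_{g}\cdot\nabla_{\mathbf{r}} f = \frac{f^{\mathrm{eq}}_{R} - f}{\tau_{R}} + \frac{f^{\mathrm{eq}}_{N} - f}{\tau_{N}},$$

where $f$ is the phonon distribution, $\mathbf{v}_g$ the group velocity, $\tau_R$ and $\tau_N$ the resistive and normal relaxation times, $f^{\mathrm{eq}}_R$ a stationary Bose-Einstein distribution and $f^{\mathrm{eq}}_N$ a displaced (drifting) one; the hydrodynamic regime corresponds to $\tau_N \ll \tau_R$.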
|
Despite the advances in the autonomous driving domain, autonomous vehicles
(AVs) are still inefficient and limited in terms of cooperating with each other
or coordinating with vehicles operated by humans. A group of autonomous and
human-driven vehicles (HVs) which work together to optimize an altruistic
social utility -- as opposed to the egoistic individual utility -- can co-exist
seamlessly and assure safety and efficiency on the road. Achieving this mission
without explicit coordination among agents is challenging, mainly due to the
difficulty of predicting the behavior of humans with heterogeneous preferences
in mixed-autonomy environments. Formally, we model an AV's maneuver planning in
mixed-autonomy traffic as a partially-observable stochastic game and attempt to
derive optimal policies that lead to socially-desirable outcomes using a
multi-agent reinforcement learning framework. We introduce a quantitative
representation of the AVs' social preferences and design a distributed reward
structure that induces altruism into their decision-making process. Our
altruistic AVs are able to form alliances, guide the traffic, and affect the
behavior of the HVs to handle competitive driving scenarios. As a case study,
we compare egoistic AVs to our altruistic autonomous agents in a highway
merging setting and demonstrate the emerging behaviors that lead to a
noticeable improvement in the number of successful merges as well as the
overall traffic flow and safety.
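One common way to write such a quantitative social preference (a hedged illustration in the Social Value Orientation style; the paper's exact representation may differ) is an angle-weighted mixture of egoistic and neighbor rewards:

$$r_{i} = \cos(\phi_{i})\, r^{\text{ego}}_{i} + \sin(\phi_{i})\, \frac{1}{|\mathcal{N}_{i}|}\sum_{j \in \mathcal{N}_{i}} r^{\text{ego}}_{j},$$

where $\phi_i$ is agent $i$'s social value orientation angle ($\phi_i=0$ purely egoistic, $\phi_i=\pi/2$ purely altruistic) and $\mathcal{N}_i$ the set of nearby vehicles affected by $i$'s actions.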
|
Improving existing widely-adopted prediction models is often a more efficient
and robust way towards progress than training new models from scratch. Existing
models may (a) incorporate complex mechanistic knowledge, (b) leverage
proprietary information and, (c) have surmounted barriers to adoption. Compared
to model training, model improvement and modification receive little attention.
In this paper we propose a general approach to model improvement: we combine
gradient boosting with any previously developed model to improve model
performance while retaining important existing characteristics. To exemplify,
we consider the context of Mendelian models, which estimate the probability of
carrying genetic mutations that confer susceptibility to disease by using
family pedigrees and health histories of family members. Via simulations we
show that integration of gradient boosting with an existing Mendelian model can
produce an improved model that outperforms both that model and the model built
using gradient boosting alone. We illustrate the approach on genetic testing
data from the USC-Stanford Cancer Genetics Hereditary Cancer Panel (HCP) study.
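A hedged sketch of the general recipe (not the paper's exact pipeline; the prior model and data here are invented): wrap an existing fitted model so that gradient boosting starts from its predictions via scikit-learn's `init` parameter and learns only the residual structure:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

class PriorModelInit:
    """Wrap an already-fitted model (e.g. a Mendelian carrier-probability
    model) so gradient boosting can start from its predictions."""
    def __init__(self, predict_proba_fn):
        self.predict_proba_fn = predict_proba_fn
    def fit(self, X, y):            # already fitted; nothing to do
        return self
    def predict_proba(self, X):
        p = self.predict_proba_fn(X)
        return np.column_stack([1 - p, p])

# Hypothetical prior: carrier probability rises with the first feature.
prior = PriorModelInit(lambda X: 1 / (1 + np.exp(-X[:, 0])))

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(500) > 0).astype(int)

gbm = GradientBoostingClassifier(init=prior, n_estimators=100, max_depth=2)
gbm.fit(X, y)   # boosting now corrects what the prior model missed
print(gbm.score(X, y))
```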
|
The traditional on-die, three-level cache hierarchy design is very commonly
used but is also prone to high latency, especially at the Level 2 (L2) cache.
We discuss three distinct ways of improving this design in order to achieve
better performance, which is especially important for systems with high
workloads. The first method eliminates L2 altogether in favor of a new
prefetching technique, the second increases the size of L2, and the last
advocates the implementation of optical caches. After carefully weighing the
performance gains and the advantages and disadvantages of each method, we
found the last method to be the best of the three.
|
It is known that general relativity (GR) theory is not consistent with the
latest observations. The modified gravity known as $\mathrm{f(R)}$, where
$\mathrm{R}$ is the Ricci scalar, is considered to be a good candidate for
dealing with the anomalies present in classical GR. In this context, we study
static rotating uncharged anti-de Sitter and de Sitter (AdS and dS) black holes
(BHs) using $\mathrm{f(R)}$ theory without assuming any constraints on the
Ricci scalar or on $\mathrm{f(R)}$. We derive BH solutions that depend on the
convolution function and deviate from the AdS/dS Schwarzschild BH solution of
GR. Although the field equations have no dependence on the cosmological
constant, the BHs are characterized by an effective cosmological constant that
depends on the convolution function. The asymptotic form of these BH solutions
depends on the gravitational mass of the system and on extra terms that make
the BHs differ from GR BHs, although they reduce to GR BHs under certain
conditions. We also investigate how these extra terms are responsible for
making the singularities of the invariants milder than those of the GR BHs. We
study some physical properties of the BHs from the point of view of
thermodynamics and show that there is an outer event horizon in addition to the
inner Cauchy horizons. Among other things, we show that our BH solutions
satisfy the first law of thermodynamics. To check the stability of these BHs we
use the geodesic deviation and derive the stability conditions. Finally, using
the odd-type mode it is shown that all the derived BHs are stable and have a
radial speed equal to one.
|
In physical experiments, reference frames are standardly modelled through a
specific choice of coordinates used to describe the physical systems, but they
themselves are not considered as such. However, any reference frame is a
physical system that ultimately behaves according to quantum mechanics. We
develop a framework for rotational (i.e. spin) quantum reference frames, with
respect to which quantum systems with spin degrees of freedom are described. We
give an explicit model for such frames as systems composed of three spin
coherent states of angular momentum $j$ and introduce the transformations
between them by upgrading the Euler angles occurring in classical
$\textrm{SO}(3)$ spin transformations to quantum mechanical operators acting on
the states of the reference frames. To ensure that an arbitrary rotation can be
applied on the spin we take the limit of infinitely large $j$, in which case
the angle operator possesses a continuous spectrum. We prove that rotationally
invariant Hamiltonians (such as that of the Heisenberg model) are invariant
under a larger group of quantum reference frame transformations. Our result is
the first development of the quantum reference frame formalism for a
non-Abelian group.
|
We consider a simple scalar dark matter model within the frame of gauged
$L_{\mu}-L_{\tau}$ symmetry. A gauge boson $Z'$ as well as two scalar fields
$S$ and $\Phi$ are introduced to the Standard Model (SM). $S$ and $\Phi$ are SM
singlet but both with $U(1)_{L_{\mu}-L_{\tau}}$ charge. The real component and
imaginary component of $S$ can acquire different masses after spontaneously
symmetry breaking, and the lighter one can play the role of dark matter which
is stabilized by the residual $Z_2$ symmetry. A viable parameter space is
considered to discuss the possibility of light dark matter as well as
co-annihilation case, and we present current $(g-2)_{\mu}$ anomaly, Higgs
invisible decay, dark matter relic density as well as direct detection
constriants on the parameter space.
|
Decentralized financial (DeFi) applications on the Ethereum blockchain are
highly interoperable because they share a single state in a deterministic
computational environment. Stakeholders can deposit claims on assets, referred
to as 'liquidity shares', across applications producing effects equivalent to
rehypothecation in traditional financial systems. We seek to understand the
degree to which this practice may contribute to financial integration on
Ethereum by examining transactions in 'composed' derivatives for the assets
DAI, USDC, USDT, ETH and tokenized BTC for the full set of 344.8 million
Ethereum transactions computed in 2020. We identify a salient trend for
'composing' assets in multiple sequential generations of derivatives and
comment on potential systemic implications for the Ethereum network.
|
The paper presents the submission of the team indicnlp@kgp to the EACL 2021
shared task "Offensive Language Identification in Dravidian Languages." The
task aimed to classify different offensive content types in 3 code-mixed
Dravidian language datasets. The work leverages existing state-of-the-art
approaches in text classification by incorporating additional data and transfer
learning on pre-trained models. Our final submission is an ensemble of an
AWD-LSTM based model along with 2 different transformer model architectures
based on BERT and RoBERTa. We achieved weighted-average F1 scores of 0.97,
0.77, and 0.72 in the Malayalam-English, Tamil-English, and Kannada-English
datasets ranking 1st, 2nd, and 3rd on the respective tasks.
|
The goal of this study was to improve the post-processing of precipitation
forecasts using convolutional neural networks (CNNs). Instead of
post-processing forecasts on a per-pixel basis, as is usually done when
employing machine learning in meteorological post-processing, input forecast
images were combined and transformed into probabilistic output forecast images
using fully convolutional neural networks. CNNs did not outperform regularized
logistic regression. Additionally, an ablation analysis was performed.
Combining input forecasts from a global low-resolution weather model and a
regional high-resolution weather model improved performance over either one.
|
Deep learning can promote the mammography-based computer-aided diagnosis
(CAD) for breast cancers, but it generally suffers from the small sample size
problem. Self-supervised learning (SSL) has shown its effectiveness in medical
image analysis with limited training samples. However, the network model
sometimes cannot be well pre-trained in the conventional SSL framework due to
the limitation of the pretext task and fine-tuning mechanism. In this work, a
Task-driven Self-supervised Bi-channel Networks (TSBN) framework is proposed to
improve the performance of the classification model in mammography-based CAD. In
particular, a new gray-scale image mapping (GSIM) is designed as the pretext
task, which embeds the class label information of mammograms into the image
restoration task to improve discriminative feature representation. The proposed
TSBN then innovatively integrates different network architectures, including the
image restoration network and the classification network, into a unified SSL
framework. It jointly trains the bi-channel network models and collaboratively
transfers the knowledge from the pretext task network to the downstream task
network with improved diagnostic accuracy. The proposed TSBN is evaluated on a
public INbreast mammogram dataset. The experimental results indicate that it
outperforms the conventional SSL and multi-task learning algorithms for
diagnosis of breast cancers with limited samples.
|
The true topological nature of the Kondo insulator SmB$_6$ remains to be
unveiled. Our previous tunneling study not only found evidence for the
existence of surface Dirac fermions, but it also uncovered that they inherently
interact with the spin excitons, collective excitations in the bulk. We have
extended such a spectroscopic investigation into crystals containing a Sm
deficiency. The bulk hybridization gap is found to be insensitive to the
deficiency up to 1% studied here, but the surface states in Sm-deficient
crystals exhibit quite different temperature evolutions from those in
stoichiometric ones. We attribute this to the topological surface states
remaining incoherent down to the lowest measurement temperature due to their
continued interaction with the spin excitons that remain uncondensed. This
result shows that the detailed topological nature of SmB$_6$ could vary
drastically in the presence of disorder in the lattice. This sensitiveness to
disorder is seemingly contradictory to the celebrated topological protection,
but it can be understood as being due to the intimate interplay between strong
correlations and topological effects.
|
Effective environmental planning and management to address climate change
could be achieved through extensive environmental modeling with machine
learning and conventional physical models. In order to develop and improve
these models, practitioners and researchers need comprehensive benchmark
datasets that are prepared and processed with environmental expertise that they
can rely on. This study presents an extensive dataset of rainfall events for
the state of Iowa (2016-2019) acquired from the National Weather Service Next
Generation Weather Radar (NEXRAD) system and processed by a quantitative
precipitation estimation system. The dataset presented in this study could be
used for better disaster monitoring, response and recovery by paving the way
for both predictive and prescriptive modeling.
|
This paper is concerned with polynomially generated multiplier invariant
subspaces of the weighted Bergman space $A_{\boldsymbol{\beta}}^2$ in
infinitely many variables. We completely classify these invariant subspaces
under the unitary equivalence. Our results not only cover cases of both the
Hardy space $H^{2}(\mathbb{D}_{2}^{\infty})$ and the Bergman space
$A^{2}(\mathbb{D}_{2}^{\infty})$ in infinitely many variables, but also apply
in finite-variable setting.
|
The capability of generalization to unseen domains is crucial for deep
learning models when considering real-world scenarios. However, current
available medical image datasets, such as those for COVID-19 CT images, have
large variations of infections and domain shift problems. To address this
issue, we propose a prior knowledge driven domain adaptation and a dual-domain
enhanced self-correction learning scheme. Based on the novel learning schemes,
a domain adaptation based self-correction model (DASC-Net) is proposed for
COVID-19 infection segmentation on CT images. DASC-Net consists of a novel
attention and feature domain enhanced domain adaptation model (AFD-DA) to solve
the domain shifts and a self-correction learning process to refine segmentation
results. The innovations in AFD-DA include an image-level activation feature
extractor with attention to lung abnormalities and a multi-level discrimination
module for hierarchical feature domain alignment. The proposed self-correction
learning process adaptively aggregates the learned model and corresponding
pseudo labels for the propagation of aligned source and target domain
information to alleviate the overfitting to noises caused by pseudo labels.
Extensive experiments over three publicly available COVID-19 CT datasets
demonstrate that DASC-Net consistently outperforms state-of-the-art
segmentation, domain shift, and coronavirus infection segmentation methods.
Ablation analysis further shows the effectiveness of the major components in
our model. The DASC-Net enriches the theory of domain adaptation and
self-correction learning in medical imaging and can be generalized to
multi-site COVID-19 infection segmentation on CT images for clinical
deployment.
|