Dataset schema: added (string, 24 chars) · created (string, 23 chars) · id (string, 3-9 chars) · metadata (dict) · source (string, 1 class) · text (string, 1.56k-316k chars) · version (string, 1 class)
2014-10-01T00:00:00.000Z
2002-02-01T00:00:00.000
1625596
{ "extfieldsofstudy": [ "Chemistry", "Medicine" ], "oa_license": "pd", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1289/ehp.02110s1103", "pdf_hash": "6af82a41cc2e25a42f6fa8302b133e0df9b84271", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:351", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "sha1": "6af82a41cc2e25a42f6fa8302b133e0df9b84271", "year": 2002 }
pes2o/s2orc
Diesel exhaust and asthma: hypotheses and molecular mechanisms of action. Several components of air pollution have been linked to asthma. In addition to the well-studied criteria air pollutants, such as nitrogen dioxide, sulfur dioxide, and ozone, diesel exhaust and diesel exhaust particles (DEPs) also appear to play a role in respiratory and allergic diseases. Diesel exhaust is composed of vapors, gases, and fine particles emitted by diesel-fueled compression-ignition engines. DEPs can act as nonspecific airway irritants at relatively high levels. At lower levels, DEPs promote release of specific cytokines, chemokines, immunoglobulins, and oxidants in the upper and lower airway. Release of these mediators of the allergic and inflammatory response initiates a cascade that can culminate in airway inflammation, mucus secretion, serum leakage into the airways, and bronchial smooth muscle contraction. DEPs also may promote expression of the T H 2 immunologic response phenotype that has been associated with asthma and allergic disease. DEPs appear to have greater immunologic effects in the presence of environmental allergens than they do alone. This immunologic evidence may help explain the epidemiologic studies indicating that children living along major trucking thoroughfares are at increased risk for asthmatic and allergic symptoms and are more likely to have objective evidence of respiratory dysfunction. Medical treatment of asthma and knowledge about asthma's biologic mechanisms have improved in recent years. Yet asthma prevalence, hospitalization rates, and mortality rates continue to rise internationally in both adults and children (1)(2)(3)(4)(5). According to the Centers for Disease Control and Prevention, the number of individuals with self-reported asthma increased by 75% in the United States from 1980 to 1994 (6). The increase was seen in all races, both sexes, and all age groups, but nonwhite children have been particularly affected. The prevalence of asthma increased by 160% during the same time period in children under 4 years of age and by 74% in children over age 4 (7). Not only is the prevalence of asthma rising in industrialized countries, but the severity among those afflicted has also increased. A recent cross-sectional study found that the odds of an adverse outcome (i.e., intubation, cardiopulmonary arrest, or death) among children hospitalized for asthma in California doubled between 1986 and 1993 (8). Asthma is more prevalent in the urbanized areas of industrialized countries (9). Numerous studies have demonstrated that specific components of air pollution may be associated with exacerbations of asthma (10)(11)(12)(13)(14). Although the levels of coarse particulate matter in the atmosphere have decreased over recent decades, the levels of fine particulate matter smaller than 2.5 µm in size (PM 2.5 ), such as diesel exhaust particles (DEPs), remain an ongoing problem (15). Ambient air pollution has been associated with hospitalizations and deaths due to exacerbations of cardiovascular and respiratory diseases (15). Particulate air pollution has also been linked more specifically to asthma (16,17). Some of the evidence linking particulate air pollution and asthma is indirect. For instance, several studies found that children raised in more polluted regions of a country are more likely to develop respiratory diseases and allergies compared with children raised in "cleaner" regions (18,19).
Within communities, children living on busy streets have a higher likelihood of developing chronic respiratory symptoms than those living on streets with lower traffic volume (10,20). When exposed to similar levels of Japanese cedar pollen (a standard allergen), people who live in highly trafficked areas have enhanced allergic reactions compared with people who live in rural areas. This suggests the possibility of a synergistic effect between air pollution and aeroallergens (21). Diesel exhaust and DEPs have previously been associated with asthma (22)(23)(24). Current evidence supports the hypothesis that components of diesel exhaust worsen respiratory symptoms in individuals with preexisting asthma or allergies, and offers some support for the hypothesis that diesel exhaust and DEPs may play a role in causing asthma (25)(26)(27)(28). This paper critically analyzes the research relevant to the question of whether diesel exhaust exposure is associated with asthma. We also review molecular mechanisms by which particulate matter in diesel exhaust may facilitate and promote asthmatic symptoms. Molecular Basis for the Inflammatory Events in Asthma Asthma is a chronic respiratory disease manifested by bronchial hyperresponsiveness, reversible bronchial constriction, airway inflammation, and respiratory symptoms such as wheezing, dyspnea, coughing, and chest tightness (29,30). A complex immunologic cascade, including recruitment of inflammatory cells from the bloodstream to the bronchial mucosa, is characteristic of asthma (31). During asthma attacks, both inflammatory and structural cells of the respiratory tract are activated. Activated cells include T cells, mast cells, eosinophils, macrophages, epithelial cells, fibroblasts, and bronchial smooth muscle cells. By releasing proinflammatory and cytotoxic mediators and cytokines, these cells are all involved in a cascade that leads to the acute and chronic symptoms of asthma (30). Figure 1 summarizes the immunologic events involved in asthma. T lymphocytes appear to play a particularly important role in airway inflammation. T cells have been demonstrated in the airways of patients with fatal asthma (32) and appear to be vital for regulating the immune pathways that control allergic immune responses (31). In general, T cells can be classified into two major subsets consisting of CD4 + or CD8 + cells. CD4 + T cells differentiate into several phenotypes of T cells, including T helper 1 (T H 1) and T helper 2 (T H 2) (33). A shift in the predominant T-cell population from the T H 1 type to the T H 2 type has been associated with asthma (34). Immunoglobulins, cytokines, and chemokines appear to play important roles in the inflammatory foundation of asthma. For example, IL-5 promotes the development and survival of eosinophils, the cells that help drive the chronic asthmatic response. IL-8 is a potent chemoattractant for neutrophils and primes eosinophil responses. IL-10 builds and prolongs the immune response by stimulating production of more T H 2 cells. IL-4 and IL-13 act on B cells to stimulate production of antigen-specific immunoglobulin E (IgE), and GM-CSF is an important growth and survival factor for neutrophils, eosinophils, and macrophages. The relationship between these molecules and the eventual clinical symptomatology of asthma is illustrated in Figure 1. Theories on the Etiology of Asthma Genetic and environmental factors interact to cause asthma (1). 
There is substantial epidemiologic evidence, supported by clinical and toxicologic data, regarding a variety of asthma risk factors. Atopy is a major heritable risk factor for asthma and involves the familial tendency to develop immediate-type hypersensitivity (i.e., IgE-mediated) immune responses to specific allergens (34). Although genetic predisposition may be important in the development of asthma, recent increases in the prevalence and severity of asthma seem to have occurred too rapidly to be mediated solely by genetic shifts (35). Environmental factors that have been associated with adult and childhood asthma include allergen exposure, environmental tobacco smoke, socioeconomic status, nutrition, family size, history of infections, and ambient levels of air pollution (2,7). Although no consensus exists on the relative importance of each of these factors, the development of asthma is clearly multifactorial. Some scientists have hypothesized that fetuses and infants may take the first steps toward sensitization to environmental allergens during critical windows of susceptibility during early life, perhaps because of an environment that encourages dominance of the T H 2 phenotype beyond fetal life (9,34). Because components of diesel exhaust have been shown to affect numerous inflammatory and immunologic pathways in the respiratory tract, including promoting induction of a T H 2 phenotypic response, some researchers hypothesize that exposures to diesel exhaust may play a role in the development or exacerbation of asthma and allergic disorders (36). Composition of Diesel Exhaust Arising from the combustion of diesel fuel in compression-ignition engines, diesel exhaust consists of a complex mixture of particulate matter, including elemental carbon and polycyclic aromatic hydrocarbons (PAHs; i.e., phenanthrene, fluorenes, naphthalenes, pyrenes, fluoranthrenes), as well as acid aerosols, volatile organic compounds, various hydrocarbons (including highly reactive quinones), and gases, including carbon dioxide (CO 2) , carbon monoxide (CO), nitric oxide (NO), nitrogen dioxide (NO 2 ), and sulfur dioxide (SO 2 ) (37). After combustion of diesel fuel, the exhaust components tend to aggregate into discrete, spherical, respirable particles approximately 0.1-0.5 µm in diameter (38). These particles consist of an inert carbonaceous core with a large surface area, ideal for adsorbing heavy metals and organic compounds such as PAHs. The PAHs are small compounds of three to five benzene rings that can easily diffuse through cell membranes and bind to receptors within the cytoplasm. One such receptor is the aromatic hydrocarbon receptor complex (36). In addition, diesel exhaust contains many substances that are listed as toxic air pollutants by the State of California and as hazardous air pollutants by the U.S. Environmental Protection Agency (37,39). Buses, trucks, and other heavy industrial transport vehicles are major sources of ambient diesel exhaust pollution. Utilization of diesel fuel has steadily increased in the United States over the past several decades: the number of miles traveled by commercial trucks in the United States has increased by 235% between 1950 and 1985, and cargo tonnage carried by trucks has increased by 169% (40). DEPs are major sources of ambient PM 2.5 (41). In California, an estimated 26% of all particulate matter from fuel combustion sources arises from the combustion of diesel engines (41). 
In 1996, diesel exhaust also comprised a quarter of the NO smog precursors released nationally in the United States (39). Epidemiologic Studies Linking Diesel Exhaust and Asthma There is some epidemiologic evidence associating exposure to high levels of diesel exhaust with asthma. Wade and Newman (42) describe three railroad workers who traveled in locomotive units directly behind the lead diesel-powered locomotive engine and eventually developed acute or subacute onset of respiratory symptoms. They demonstrated symptoms consistent with asthma, including hyperreactive airways, airflow limitation, and reversibility with bronchodilators. None of these workers had any known preexisting respiratory conditions. Numerous components within diesel exhaust are respiratory irritants (38), including some of the acid aerosols, volatile organic compounds, and gases in the mixture. The irritant effect alone could potentially trigger asthmatic symptoms at sufficiently high exposure levels. Although exposure to acutely high levels of diesel exhaust can produce respiratory symptoms, there is also epidemiologic evidence that chronic exposure to diesel exhaust at lower environmental levels may be associated with increased levels of respiratory symptoms. For instance, children living near busy diesel trucking routes have decreased lung function in comparison with children living near roads with mostly automobile traffic (10). A population-based survey of more than 39,000 children living in Italy found that children living on streets with heavy truck traffic were 60-90% more likely to report acute and chronic symptoms such as wheeze, phlegm, and diagnoses such as bronchitis, bronchiolitis, and pneumonia (43). A German study of over 3,700 adolescent students found that those living on streets with "constant" truck traffic were 71% more likely to report symptoms of allergic rhinitis and more than twice as likely to report wheezing (44). Diesel Exhaust Gases and Potential Adverse Respiratory Effects Diesel exhaust contains many well-known air pollutants that have been associated with asthma exacerbations (45), including SO 2 , NO 2 , and fine particulate matter smaller than 10 µm in size (PM 10 ), which are all criteria air pollutants (39). Several studies have found temporal associations between ambient particulate levels (PM 10 ) and emergency department admissions for exacerbations of asthma (16,17,46). Some recent studies have also shown relationships between both daily and long-term levels of SO 2 and children's hospital visits for respiratory diseases (11,47). SO 2 causes bronchoconstriction in asthmatics during exercise. These effects are above and beyond the effects of exercise alone. Adult asthmatic subjects exposed to ambient concentrations (0.5 ppm SO 2 ) during just a few minutes of moderate exercise experienced significant drops in forced expiratory volume in 1 sec (FEV 1 ) (48,49). There is also evidence that short-term exposure of asthmatics to NO 2 at ambient atmospheric levels may increase airway responsiveness to SO 2 (50). Therefore, it is possible that some of the gases related to diesel exhaust may trigger exacerbations of asthmatic and allergic symptoms in already asthmatic subjects (51)(52)(53). Several epidemiologic studies have reported associations between daily and chronic levels of NO 2 and exacerbations of asthma (12,24,26,54). Toxicologic evidence indicates that NO 2 is directly harmful to the respiratory system.
Normal healthy subjects exposed for 2 hr to 2 ppm NO 2 demonstrated increases in IL-8 and neutrophils (55). An in vitro study exposed human nasal mucosal tissues to NO 2 and ozone and reported elevated histamine levels (56). Another study that exposed mildly asthmatic human subjects to 260 ppb (500 µg/m 3 ) NO 2 for 30 min found that the response to an inhaled allergen was enhanced after the NO 2 exposure (57). Acute exposures to diesel exhaust, even at low concentrations, have been shown to elicit inflammatory responses. There is some evidence to suggest that the inflammatory response from diesel exhaust may not simply be due to SO 2 and NO 2 exposures. Fifteen nonasthmatic volunteers exposed for 1-hr periods to diesel exhaust (at PM 10 concentrations of 300 µg/m 3 and NO 2 concentrations of 1.6 ppm) developed elevated levels of neutrophils, macrophages, B cells, mast cells, T lymphocytes, histamine, endothelial adhesion molecules, and lactate dehydrogenase in their airways at 6 hr postexposure (58). Such effects do not occur in nonasthmatics exposed to NO 2 alone at comparable concentrations, making the particles the more likely culprit. An in vitro study found that exposure of human bronchial epithelial cells to unfiltered diesel exhaust released inflammatory cytokines, whereas diesel exhaust that was filtered (and therefore contained gases but no particulate matter) did not have this effect (59). These studies suggest that the particulate components of diesel exhaust may play a more significant role in triggering airway inflammation than the gaseous components. Molecular Mechanisms of Action of DEPs in the Respiratory Tract It is not entirely clear which DEP components produce toxicity. Some studies suggest that the majority of the toxicity is attributable to the adsorbed organic compounds (38,60,61), whereas others conclude that the most toxic portion of a DEP is the carbonaceous core (15). Regardless of which specific components of DEPs are most toxic, it appears that DEPs may be associated with both early and late phases of the inflammatory response in asthma. Typically, the early asthmatic phase is predominantly IgE mediated, whereas the late phase involves complex networks of inflammatory mediators, including eosinophils, T cells, cytokines, chemokines, and immunoglobulins (30). There are numerous hypothesized interactions of DEPs with the immune and respiratory systems. DEPs may act directly to alter specific immunologic pathways that may precipitate acute exacerbations of asthma. Direct effects of DEPs include stimulation of IgE production, eosinophilic degranulation, augmentation of cytokine and chemokine production and release, free radical formation, and effects on production of NO in the airways (62). As an adjuvant with environmental allergens, DEPs appear to enhance the differentiation of CD4 + T lymphocytes into the T H 2 phenotype and enhance allergen-specific IgE and IgG production. The potential pathways by which DEPs may promote asthma are summarized in Table 1. Direct Immunologic Effects of DEPs Enhanced IgE Production by Effects on B Lymphocytes DEPs consistently enhance the production of IgE in the airways (63)(64)(65).
IgE is produced by activated B cells in response to a specific allergen. Once produced, IgE attaches to mast cells and, when cross-linked by allergen, induces mast cells to release histamine and leukotrienes. The chemicals released from mast cells cause constriction of bronchial smooth muscle, mucus secretion, and serum leakage into the airways and result in acute asthma symptoms (30). The mast cell is often considered the central cell type in the acute asthmatic response, and IgE is the critical immunoglobulin driving the mast cell response. In a study of eleven nonsmoking, nonallergic volunteers, Diaz-Sanchez et al. (65) showed that exposure to DEPs significantly increases IgE levels in nasal fluids by greatly increasing the numbers of IgE-secreting cells and by altering the expression of IgE mRNA isoforms. In comparison, there was no effect on IgG, IgA, or IgM antibody production. This suggests that DEP exposure in vivo induces both a quantitative increase in IgE production and a shift in the type of IgE that is produced. Although most studies support the finding that DEPs increase IgE synthesis (63)(64)(65), one study in mice failed to find an increase in IgE synthesis from DEPs alone (66). In vitro evidence suggests that IgE-secreting B cells may be directly stimulated by DEPs. For instance, PAHs from DEPs were able to induce production of IgE in purified human B cells treated with IL-4 and CD-40 (67). Another study demonstrated that phenanthrene, a major PAH in DEPs, increased IgE in human B cells transformed by Epstein-Barr virus (64). The IgE stimulation by phenanthrene was accompanied by an increased expression of total IgE mRNA. In addition, several studies have found that the DEP-mediated increase in IgE synthesis may be amplified when DEPs act as an adjuvant to environmental allergens (68)(69)(70)(71)(72)(73). Stimulation of Eosinophils Diesel exhaust may also stimulate the proliferation of eosinophils. Eosinophil production is regulated by IL-3, IL-5, and GM-CSF. The granules of mature eosinophils contain chemokines, leukotrienes, and toxic proteins. Degranulation of eosinophils in mucosal tissues results in bronchial inflammation and contributes to asthmatic symptoms (74). Just as mast cells are regarded as the central cell for the acute asthmatic response, eosinophils are often regarded as the critical cell type in chronic asthma. DEPs may enhance eosinophilic infiltration into the respiratory tract and subsequent degranulation. Healthy human volunteers exposed to diesel exhaust had increased eosinophils and other inflammatory molecules on bronchial biopsies 6 hr after exposure (58). However, a similar study did not detect increased eosinophils in induced sputa 4 hr after exposure to DEPs (75). Induced sputa are less sensitive than bronchial biopsies at detecting subtle inflammatory changes in the lower airway. Eosinophils incubated with DEPs had enhanced adherence to human nasal epithelial cells and enhanced levels of degranulation (76). In animal assays, the DEP-induced eosinophilia is enhanced in the presence of allergens such as ovalbumin (OVA) and is accompanied by enhanced airway hyperresponsiveness to acetylcholine challenge (68,77). Influence on Cytokine Expression Exposure to DEPs may augment levels of many different cytokines (soluble protein immune mediators such as interleukins) and chemokines (attractant proteins that induce migration of different cell types). These molecules are key chemical messengers in the inflammatory processes of asthma.
Various interleukins stimulate T-cell switching between T H 1 and T H 2 subtypes, stimulate B cells, attract and prolong the survival of eosinophils, and play other roles orchestrating the immunologic cascade that results in an allergic or asthmatic response. Augmentation of interleukin levels. DEPs and associated polyaromatic hydrocarbons may increase levels of some interleukins. For example, healthy humans exposed nasally to 0.15 mg of DEPs suspended in 200 µL of saline expressed T H 2-type cytokines (i.e., IL-4, IL-5, IL-6, IL-10) in their nasal mucosal cells 18-24 hr after exposure (65). IL-4 production may be enhanced by pyrene, a PAH found in DEPs (78). The molecular mechanism of this effect may be upregulation of IL-4 mRNA transcription. IL-4 is a T H 2-type cytokine that induces isotype switching in B cells to alter antibody production from the IgM to IgE isotype and is also central to the production of IgE (79). DEPs may enhance IL-4 production more effectively with allergen than they do alone. Mice injected intratracheally with DEPs plus Japanese cedar pollen manifested an IL-4 production about twice as high as that seen in mice injected with Japanese cedar pollen alone (80). This enhancement in IL-4 production increased to an 8-fold level in mice injected with OVA and DEPs compared with mice receiving only OVA. A later study examining cytokine production in DEP-exposed and control mice sensitized with OVA found that IL-4 and IL-10 production in spleen cells was significantly increased in the group of DEP-exposed mice (69). In addition, humans challenged with DEPs plus ragweed antigen had enhanced local IgE, IL-4, and IL-13 production accompanied by isotype switching from IgM or IgD to IgE antibody in nasal lavage cells (71). In comparison, isotype switching did not occur after challenge with the allergen alone. Levels of IL-5 are also increased after DEP exposure. IL-5 is an important factor for the proliferation and activation of eosinophils after exposure to certain allergens such as OVA and pollen (81,82). A recent study found that mRNA expression for IL-5 was significantly lower in patients who had no nasal symptoms when compared with those who required medicines to control allergic symptoms during pollen season (83). Two human studies found that exposure to DEPs resulted in increased levels of IL-5 (65,84). However, other human, animal, and in vitro studies found that diesel exhaust alone did not result in any IL-5 response (38,66,81,82,85). Despite the conflicting results about the effect of DEPs alone on IL-5, DEPs consistently increase IL-5 levels in the presence of environmental allergens. For instance, healthy human subjects exposed to DEPs with ragweed antigen had significantly increased levels of IL-5 and other T H 2 cytokines in nasal lavage fluid (86). Mice exposed to diesel exhaust combined with OVA sensitization had increased expression of IL-5 in lung tissue and developed airway inflammation and hyperresponsiveness (77,81,82,87). Instillation of OVA and DEPs together produced a 3- to 4-fold increase in IL-5 in mouse lung tissue compared with the levels in mice exposed to OVA or DEPs alone (77). DEPs may enhance the symptoms of allergic rhinitis by a synergistic effect with pollen to increase IL-5 secretion (86). DEPs also increase the presence of IL-8, a member of the CXC chemokine family.
Produced primarily by macrophages, IL-8 is one of the most important mediators in the recruitment of neutrophils to the respiratory tract (88). Neutrophils appear to be important inflammatory leukocytes in airway secretions of patients with acute severe asthma (89). IL-8 appears to play an important role in augmenting the numbers of activated eosinophils in asthmatic patients (90). Increased IL-8 levels are found in bronchial washings and bronchial tissues of healthy humans exposed to diesel exhaust levels similar to those in the ambient air of many cities (84). In vitro exposure to DEPs has also been found to enhance the release of IL-8 from various types of airway cells, including human bronchial epithelial cells (38,60,(91)(92)(93), human mucosal microvascular endothelial cells (94), and human nasal epithelial cells (38,94). Effect on other inflammatory mediators. DEPs may enable the release of several additional molecules involved in airway inflammation. In animal and in vitro models, DEPs increase GM-CSF. In both animals and humans, GM-CSF is thought to sustain the asthmatic response by prolonging the survival of eosinophils and neutrophils (95). Mice intranasally exposed to DEPs developed bronchial constriction associated with increased levels of GM-CSF in bronchial epithelial cells; blocking the GM-CSF response abolished the DEP-evoked airway hyperresponsiveness (66). DEP-induced increases in GM-CSF were also shown in vitro in exposed human bronchial epithelial cells (60,93,96), human mucous membrane epithelial cells, and human nasal epithelial cells (38,94). However, no effect on GM-CSF levels in bronchial cells was found in one study of human volunteers exposed to diesel exhaust (84). Proposed mechanisms by which DEPs may increase GM-CSF include increased expression of the histamine H 1 receptor (94) and free radical production, which may independently elevate GM-CSF levels (97). A recent study demonstrated that free radical scavengers inhibit the DEP-mediated GM-CSF release in airway epithelial cells (97), providing some support for the latter hypothesis. Free radical production is part of the inflammatory pathway discussed in more detail below. Expression of Chemokines DEPs have been shown to increase the expression of RANTES (regulated upon activation, normal T-cell expressed and secreted), a chemokine that is central to the delivery of eosinophils to the airway (30). RANTES also plays a role in attracting leukocytes during the inflammatory response (98). Upon exposure to DEPs, expression of the gene for RANTES was increased in the bronchial epithelial cells of asthmatic (96) and nonasthmatic individuals (92). Although DEPs enhance both IL-8 and RANTES, an inhibitor of p38 mitogen-activated protein (MAP) kinase apparently prevents these effects. p38 MAP kinase is thought to be important in the signal transduction pathway leading to upregulation of nuclear factors (e.g., activator protein 1 [AP-1] and nuclear factor kappa B [NFκB]) that activate the transcription of genes for IL-8 and RANTES. Thus, DEPs may enhance IL-8 and RANTES through activation of the p38 MAP kinase pathway in human bronchial epithelial cells, which leads to upregulation of nuclear transcription factors AP-1 and NFκB (92). Inflammatory Effects of DEPs Although DEPs may have numerous effects on the immunologic cascade involved in allergy and asthma, there is also some evidence that these particles may have a more direct irritant or cytotoxic effect in the respiratory tract.
Although there is overlap between the two pathways, this inflammatory mode of action is somewhat distinct from the more immunologic effects described above. The inflammatory pathway in asthma is shown in Figure 2. Enhanced Superoxide Production DEPs may induce the production of oxidants such as superoxide (O 2 -) and hydroxyl radical (OH -), reactive compounds that can cause direct damage to the pulmonary epithelium (99). Superoxides appear to be part of a cellular response against the adsorbed organic molecules on DEPs and may promote apoptosis in macrophages (100), thereby causing release of more inflammatory and cytotoxic molecules. Intratracheal DEP exposure in mice enhances the activity of P450 reductase, an enzyme that increases production of superoxide. This provides a possible mechanism by which DEPs may stimulate superoxide production (101). While increasing superoxide production, DEPs may also reduce the superoxide scavenging activities of superoxide dismutase (SOD) and glutathione in vitro. For example, when the antioxidant catalase was exposed to the oxidant stress of hydrogen peroxide (H 2 O 2 ) in the presence of DEPs and chlorine, the activity of the catalase was inhibited dose dependently (102). This type of inhibitory activity by DEPs can reduce the capacity of the body to counteract oxidants (e.g., H 2 O 2 ), thereby providing another mechanism for cellular injury. Lim et al. (101) provide evidence for this by demonstrating that the activity of CuZn-superoxide dismutase (SOD) and Mn-SOD was decreased after intratracheal exposure to DEPs in mice. DEPs may also inhibit the activity of antioxidants through a deactivating reaction between SOD and quinones, which are present on the surface of DEPs (103). Therefore, DEPs appear to increase the superoxide load yet decrease the body's innate superoxide scavenging activity, which leads to potentially higher levels of cytotoxicity. Increases in superoxides may be a key factor in asthmatic and allergic responses. For instance, pretreatment with polyethyleneglycol-conjugated SOD suppressed DEP-related airway alterations in mice, including infiltration of inflammatory cells, mucus hypersecretion, and airway constriction (99,104). This illustrates that direct cellular toxicity by superoxides may play a role in asthma. Superoxides may also activate intracellular signaling pathways, including those involving NFκB and AP-1, that upregulate chemokine and cytokine expression. This may help mediate and sustain inflammatory responses in asthma. Effect on the Nitric Oxide Pathway DEPs may be capable of influencing NO production. NO is elevated in asthmatic patients and has been proposed as a biologic marker for airway inflammation (105,106). NO is synthesized from the amino acid arginine by the enzyme nitric oxide synthase (NOS). NO is normally released constitutively by one isoform of NOS, but NO may also be produced from augmented expression of inducible forms of NOS by various stimuli. However, the precise role of NO in asthma is not clear. Interestingly, it appears that NO produced by constitutive NOS may have anti-inflammatory effects, whereas NO produced from inducible forms of NOS may have proinflammatory effects (105). DEPs may affect both the constitutive and inducible NOS pathways. Intratracheal exposure of mice to DEPs increased production of both the constitutive and inducible NOS isoforms (101). However, another study found that DEP-induced airway inflammation was aggravated by NO generated from the inducible form of NOS (105). 
This study suggested that DEPs may aggravate airway inflammation by inhibition of NO production by the constitutive form of NOS. Although DEPs may alter the NO pathway, the implications for asthma are not clear. One theory is that NO may react with superoxide to form a compound called peroxynitrite that may play a key role in the development of airway inflammation and hyperresponsiveness (101). Adjuvant Immunologic Effects of DEPs Although DEP exposure alone can elicit adverse biologic effects in the airway, the effect of DEPs has been repeatedly shown to be even greater in conjunction with allergens (82,87). For example, mice exposed intranasally to DEPs and OVA have far greater levels of anti-OVA IgE than mice exposed solely to DEPs or OVA alone (73). Guinea pigs exposed for 4 weeks to diesel exhaust and challenged with histamine experienced nasal mucosal hyperresponsiveness, sneezing, and nasal secretion, while those exposed to either diesel exhaust or histamine alone had far weaker responses (107). An innovative study of 10 nonsmoking atopic human subjects tested the potential for DEPs to create a new immune response to an allergen. The investigators exposed the atopic subjects on three occasions to the neoantigen keyhole limpet hemocyanin (KLH), a compound to which humans are not normally sensitized. Twenty-four hours prior to each exposure to the new antigen, the subjects were exposed nasally to a concentration of DEPs roughly equivalent to 1-3 days of breathing Los Angeles air. Subjects exposed to KLH alone did not develop IgE antibodies to this compound, whereas subjects exposed to DEPs followed by KLH developed KLH-specific IgE and mounted a T H 2-type cytokine response with increased levels of IL-4. This important study indicates that DEPs may promote new allergic sensitization to antigens in addition to aggravating existing allergic diseases (108). Theories as to how DEPs may exert adjuvant effects include stimulating a T H 2-type immune response, acting as delivery agents for coallergens, and increasing allergen-specific IgE and IgG production. DEPs and Induction of a T H 2 Phenotypic Response Exposure to diesel exhaust may induce T cells to differentiate into a T H 2 phenotype (34). Rather than a direct effect of DEPs alone, this shift toward a T H 2 phenotype seems to occur as an adjuvant effect of DEPs with allergens. In the presence of allergen, DEPs stimulate the release of T H 2-specific cytokines (i.e., IL-4, IL-5, IL-6, IL-10, and IL-13). These cytokines appear to play a major role in the molecular pathophysiology underlying the clinical manifestations of asthma and allergies. Increased levels of T H 2-type cytokines have stimulatory effects on B cells, enhancing IgE production, as discussed above. In a study of 13 nonsmoking volunteers, Diaz-Sanchez et al. (86) found that exposure to DEPs plus ragweed results in increased expression of all of the T H 2-type cytokines in nasal lavage fluid and decreased expression of T H 1-type cytokines. A study of 27 nonsmoking volunteers with known allergies found that intranasal coadministration of DEPs and an allergen to which the subjects are sensitized stimulates a dramatic increase in T H 2-type cytokines such as IL-4 and IL-6 over 18 hr. The initial production of these cytokines appears to derive from mast cells in the mucosa (79). The precise mechanism of how DEPs stimulate the T H 2 pathway has not been determined.
However, the time during development when an organism is exposed to DEPs may be vital in priming the immune system for development and maintenance of the T H 2 pattern. Exposure to DEPs and environmental allergens during early life may predispose individuals to asthma and allergic disorders later in life by promoting the expression of T H 2 phenotypic responses (34,109). Physical Interactions between DEPs and Allergens DEPs may enhance the immune response to allergens by physically binding with them. By this mechanism, DEPs may be transported with allergens such as pollen grain fragments into human airways, where both agents may be deposited on the mucosa at the same location. This proximity may facilitate synergistic immunologic responses and respiratory symptoms. DEPs bind strongly with certain allergens. For instance, a study that incubated DEPs with purified natural grass pollen allergen, Lol p 1, for 30 min found that this compound was bound to DEPs with sufficient strength that it could not be removed by washing methods (110). Another study used immunogold labeling to demonstrate the presence of the allergens Can f 1 (dog) and Bet v 1 (birch pollen) on the surface of suspended particulate matter, similar to DEP, which was collected from the indoor environment. In addition, the allergens Fel d 1 (cat) and Der p 1 (house dust mite) both attached to DEP when incubated with DEP in vitro (111). However, actual binding of DEP to allergen does not appear to be necessary for the immune response. For instance, pollen grains from timothy grass do not adhere significantly to DEP in vitro, but the combination does induce synergistic inflammatory changes (i.e., influx of macrophages, eosinophilic granulocytes, and granuloma formation) in the lungs of rats (112). Another study demonstrated that the capacity of a particle to adsorb antigens was not related to its ability to enhance allergic responses (113). Thus, the binding or adsorption of DEP to antigen may be less important than the physical proximity of the two agents on the mucosal surface. Enhancement of IgE and IgG Production Exposure to DEP and many environmental allergens has been shown to augment both IgE and IgG production. Both IgE and IgG 1 antibodies are the result of T H 2 cytokine environments. Research in mice has demonstrated that DEP exposure produces allergen-specific IgG 1 antibody responses (85). Production of IgG 1 antibodies is dependent on T H 2 lymphocytes in mice, and has been linked in humans to delayed asthmatic reactions. Human nasal instillation studies involving exposures to 0.30 mg DEP (equivalent to total exposure on 1-3 average days in Los Angeles) along with a ragweed antigen challenge showed that ragweed-specific IgE levels peaked far higher in the presence of DEP, with a maximum level 4 days postexposure. The levels of ragweed-specific IgG 4 (an isoform of IgG that is linked to IgE expression) also increased in these studies, although other forms of IgG were not affected (45,86). Adjuvant IgE antibody responses were observed in mice exposed by intraperitoneal injection to OVA and DEPs (114). However, another study measured IgE and IgG responses to intratracheal instillation of diesel exhaust and OVA sensitization in strains of mice that were either high IgG responders or high IgE responders. In contrast to the previous study, IgE production did not change in either strain, but the combined exposure dramatically increased IgG 1 production and IL-2 and IL-5 levels in the high IgG responders (85).
Similar studies (81,82,87) found that inhaled exposure to diesel exhaust with OVA sensitization for 5-6 weeks increased both IgG 1 and IgE levels. Studies in mice using other allergens such as house dust mite antigen and Japanese cedar pollen were consistent with the literature using OVA. Mice immunized with either of these antigens mounted a much greater IgG 1 response with exposure to DEPs than mice exposed to the same level of allergen without DEPs. A similar response was found for IgE synthesis, indicating that both antibodies play a role in the adjuvant effects of DEPs on the immune response (72,113). Guinea pigs exposed to DEPs for 5 weeks with OVA sensitization once per week developed 7-fold greater anti-OVA IgG antibody than guinea pigs exposed only to filtered air, indicating that the response is not specific to mice. The exposed guinea pigs also experienced slight concentration-dependent increases in IgE antibody (115). Similar results have been seen in rats, where intranasal or intratracheal co-exposure to DEPs and pollen grains resulted in a much greater serum level of specific IgE and IgG 1 antibodies than exposure to either alone. Electron microscopy revealed pollen grains in the alveoli surrounded by DEP-loaded macrophages (116). One interesting study examined the effects of oral ingestion of DEPs in mice because it is known that airborne particulate matter reaches not only the lung but also the mucosa of the gastrointestinal tract. DEPs in the gut mucosa also appear to act as an adjuvant, enhancing both T H 1- and T H 2-type responses to allergen and enhancing production of allergen-specific IgG 1 (117). Conclusions and Considerations for Further Research Rising rates of asthma and allergies create a public health imperative to identify any modifiable environmental factors that may cause or contribute to these diseases. Abundant evidence suggests that components of diesel exhaust can cause biologic responses that are related to asthma. Although evidence from research cited in this article indicates that exposures to diesel exhaust and DEPs are associated with the inflammatory and immune responses involved in asthma, some questions remain regarding the underlying molecular mechanisms. DEPs alone may augment levels of IgE, trigger eosinophil degranulation, and stimulate release of numerous cytokines and chemokines. DEPs also may play a role in unleashing the cytotoxic effects of free radicals in the airways. All of these cellular mechanisms would be expected to produce airway inflammation, bronchial smooth muscle contraction, serum leakage, and mucus production, thereby resulting in the clinical symptoms of asthma. Interestingly, DEPs appear to have a far greater impact as an adjuvant with allergens than they have alone. The immune events leading to the asthmatic response are intertwined, and DEPs likely act at numerous points on the pathway. Stimulation of the T H 2-type pathway and increase in IgE production are two of the most important and likely mechanisms by which DEPs may generate and sustain an asthmatic response. The timing of exposure to air pollutants such as DEPs during early life may also be critical in fostering the persistence of the T H 2 phenotype. DEPs also have other biologic effects, such as increasing superoxide and NO levels. However, the evidence for these effects is currently found only in a few animal or in vitro studies, and key questions remain.
Although exposure to diesel exhaust appears capable of inducing inflammatory changes in the respiratory tract, this area is poorly understood. Most important, the epidemiologic evidence linking diesel exhaust and asthma is distressingly sparse because of a paucity of studies that have collected relevant exposure data. More research is needed to investigate the mechanism and the clinical relevance of the observed adjuvant effect of co-exposure to DEPs and allergens. One study demonstrated that this adjuvant effect results in increased respiratory resistance in mouse airways after acetylcholine challenge (118). This line of research will help to link the observed immunologic alterations with clinical relevance. The question of windows of vulnerability in early life and the induction of an allergic phenotype also requires further investigation. Research is needed to demonstrate more clearly the effect of DEPs on reactive oxygen species, superoxide, and NO production. Epidemiologic research on allergic and/or asthmatic human populations would be particularly valuable. Observational studies of children, including quantitative assessment of DEP exposure and airway function, would remove some of the uncertainties associated with the epidemiologic research to date. Despite the need for further research, it is biologically plausible that diesel exhaust and associated particles are associated with asthma and other allergies in humans. In light of these findings, public health efforts to reduce exposures to diesel exhaust are warranted. In particular, reducing the exposure of infants and children should be a priority as part of a coordinated effort to improve the prevention and management of childhood asthma.
v3-fos-license
2019-04-09T13:05:00.241Z
2017-08-28T00:00:00.000
56319964
{ "extfieldsofstudy": [ "Chemistry" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.ajhc.20170304.11.pdf", "pdf_hash": "8d9198ba7a30eb75c0e19c2bcc8bde90cb257f63", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:352", "s2fieldsofstudy": [ "Chemistry" ], "sha1": "a748bcf583d0cfc06531e9da26c2509ae71af72d", "year": 2017 }
pes2o/s2orc
Physiochemical and Antibacterial Activity Investigation on Noble Schiff Base Cu(II) Complex Schiff base ligands and their Cu(II) complexes were synthesized by the condensation reaction of isatin with amino acids (cysteine / glycine / leucine / alanine). The structures and spectral properties of the ligands and complexes were confirmed by UV, FT-IR, and other physiochemical measurements. The spectral data indicate a distorted tetrahedral geometry with a tridentate ligand and a chloride ion. IR spectral studies identify the binding sites of the Schiff base ligand to the metal ion. Molar conductance data and magnetic susceptibility measurements give evidence for the monomeric and non-electrolytic nature of the complexes. The Schiff base Cu(II) complexes were screened for antimicrobial activity by the disc diffusion method. All the synthesized complexes showed strong antibacterial activity. Introduction Multidentate ligands are extensively used for the preparation of metal complexes with interesting properties [1][2][3]. Among these ligands, Schiff bases containing nitrogen and phenolic oxygen donor atoms are of considerable interest due to their potential application in catalysis, medicine, and materials science [4][5][6][7]. Transition metal complexes of these ligands exhibit varying configurations, structural lability, and sensitivity to molecular environments. The central metal ions in these complexes act as active sites for pharmacological agents. This feature is employed for modeling active sites in biological systems. Amino acids and isatin are important to the pharmaceutical industry, since they have antibacterial and antitubercular action. Schiff bases obtained by the condensation of isatin and amino acids in the presence of potassium hydroxide find application as antituberculosis compounds. They also find application in biophysical and clinical studies as metal-ligand luminescence probes [8]. Recently, a few mixed-ligand complexes containing heterocyclic amines as secondary ligands and a few Schiff base complexes have been studied in our laboratory [9][10][11]. Therefore, in view of our interest in the synthesis of new Schiff base complexes, which might find application as pharmacological agents and as luminescence probes, we have synthesized and characterized Cu(II) complexes of the Schiff bases formed by the condensation of isatin and amino acids in the presence of potassium hydroxide. Experimental Infrared spectra were recorded as KBr discs on a NICOLET 310 FTIR spectrophotometer (Belgium) from 4000-225 cm -1 . Magnetic measurements were carried out on a Sherwood Scientific magnetic susceptibility balance at room temperature; the magnetic susceptibility measurements were made at approximately constant temperature. The electronic spectra of the ligands and complexes in the UV-Vis region were obtained in DMSO solutions using a Shimadzu UV-1200 spectrophotometer in the range of 200-800 nm. Synthesis of Schiff Bases To a stirred solution of isatin (0.735 g, 0.005 mol) dissolved in 25 mL of ethanol, a solution of the amino acid (cysteine 0.6058 g / glycine 0.3754 g / leucine 0.6559 g / alanine 0.4455 g; 0.005 mol) dissolved in 10 mL water was added dropwise, and to this mixture a solution of potassium hydroxide (0.2805 g, 0.005 mol) dissolved in 10 mL water was added slowly. This resulted in a dark red solution, which was refluxed for 6 h. The reaction mixture was cooled and left to evaporate at room temperature, leading to isolation of a solid product.
The product thus formed was filtered, washed several times with ethanol and finally with diethyl ether, and dried in vacuum over anhydrous CaCl 2 . The product was found to be soluble in DMF and DMSO. The target Schiff bases were synthesized according to the procedure described above. Synthesis of Metal Complexes To a stirred solution of isatin (0.735 g, 0.005 mol) dissolved in 25 mL of ethanol, a solution of the amino acid (cysteine 0.6058 g / glycine 0.3754 g / leucine 0.6559 g / alanine 0.4455 g; 0.005 mol) dissolved in 10 mL water was added dropwise, and to this mixture a solution of potassium hydroxide (0.2805 g, 0.005 mol) dissolved in 10 mL water was added slowly. This resulted in a dark red solution, and a solution of cupric chloride (0.8525 g, 0.005 mol) dissolved in 10 mL water was then added slowly. The dark red color then turned gray, and the mixture was refluxed for 6 hours, leading to the isolation of a solid product. (A quick numerical check of the quoted reagent quantities is sketched at the end of this section.) The complexes thus formed were filtered and washed several times with ethanol to remove any traces of unreacted starting materials, and were further washed with diethyl ether and dried in vacuum over anhydrous CaCl 2 . The complexes were soluble in DMF and DMSO. Physical Measurement The melting points of the complexes prepared for this study are given in Table 1. All the complexes are non-electrolytes. The observed values of the effective magnetic moment (µ eff ) of the complexes at room temperature are given in Table 1. These data show that all the complexes are paramagnetic in nature [11,12]. Characterizations by UV-Visible Spectra Because of the relatively low symmetry of the environments in which the Cu 2+ ion is characteristically found, detailed interpretations of the spectra and magnetic properties are somewhat complicated, even though one is dealing with the equivalent of a one-electron case [16]. Virtually all complexes and compounds are blue or green. Exceptions are generally caused by strong UV bands (charge-transfer bands) tailing off into the blue end of the visible spectrum, thus causing the substances to appear red or brown [17]. The observed λ max values are used to predict the geometry around the central metal ion in the complex. The electronic spectra of the ligands show similar absorption bands at 290 nm. These bands show the presence of n→π* and π→π* transitions of the azomethine chromophore group and the aromatic ring. In the spectra of the complexes, slight shifts are observed in the position and intensity of these bands as compared to those of the ligand, which might be due to the coordination of the metal with the ligand. All the synthesized complexes showed d-d transitions at 410 nm due to the 2 B 1g → 2 A 1g transition, indicating a distorted tetrahedral structure [18]. Any chemical or biological agent that either destroys or inhibits the growth of microorganisms is called an antimicrobial agent. The susceptibility of microorganisms to antimicrobial agents can be determined in vitro by a number of methods. The disc diffusion technique [19] is widely accepted for preliminary investigations of materials which are suspected to possess antimicrobial properties. The diffusion procedure, as normally used, is essentially a qualitative test that allocates organisms to the susceptible, intermediate (moderately susceptible), or resistant categories. The antibacterial activity of the test complexes was determined using a dose of 10 µg/disc.
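Before turning to the antibacterial results, here is the quick numerical check of the reagent quantities referred to above. This is a minimal sketch added for illustration, not part of the original work: the molar masses are standard literature values, and the identification of the cupric chloride as the dihydrate (CuCl2·2H2O) is inferred from the quoted mass rather than stated in the paper.

```python
# Sketch: verify that each quoted mass corresponds to ~0.005 mol, i.e. equimolar
# isatin : amino acid : KOH : Cu ratios. Molar masses (g/mol) are standard values;
# CuCl2 is assumed to be the dihydrate (the anhydrous salt, 134.45 g/mol, would give ~0.0063 mol).
molar_mass = {
    "isatin": 147.13,
    "cysteine": 121.16,
    "glycine": 75.07,
    "leucine": 131.17,
    "alanine": 89.09,
    "KOH": 56.11,
    "CuCl2.2H2O": 170.48,
}

quoted_mass_g = {
    "isatin": 0.735,
    "cysteine": 0.6058,
    "glycine": 0.3754,
    "leucine": 0.6559,
    "alanine": 0.4455,
    "KOH": 0.2805,
    "CuCl2.2H2O": 0.8525,
}

for reagent, mass in quoted_mass_g.items():
    moles = mass / molar_mass[reagent]
    print(f"{reagent:12s} {mass:6.4f} g  ->  {moles:.4f} mol")
```

Every reagent works out to about 0.005 mol, consistent with the equimolar condensation (and 1:1 metalation) implied by the procedure.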
The results of antibacterial activity, measured in terms of the zone of inhibition, are shown in the Table. Conclusion Magnetic susceptibility data indicated that all the complexes are paramagnetic in nature. Conductivity measurements indicated that all the complexes are non-electrolytes in nature. IR spectral data showed that the ligands coordinate to the metal through O and N atoms. UV-Vis data showed the presence of d-d transitions and the paramagnetic nature of the complexes. There are two possible oxidation states, Cu(I) and Cu(II). The Cu(II) oxidation state is more stable than Cu(I) for complexes with nitrogen or oxygen electron-donating ligands because of the CFSE: the d 9 configuration of Cu(II) has a nonzero CFSE, whereas that of d 10 Cu(I) is zero. Keeping this in mind and judging from all the experimental data, it was concluded that the geometry around the Cu(II) ions in the respective complexes might be distorted tetrahedral, and the structures of the complexes have been proposed as shown in figures 5-8. All the complexes showed strong antibacterial activity.
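For reference, the textbook relations behind this CFSE argument can be written out explicitly. This is a worked aside using standard spin-only and crystal-field formulas (it assumes the amsmath package if compiled); none of the numerical values below are taken from the paper's measurements.

```latex
% Spin-only magnetic moment for d^9 Cu(II): one unpaired electron, n = 1
% (a nonzero moment, consistent with the paramagnetism reported for the complexes)
\begin{align*}
  \mu_{\mathrm{s.o.}} &= \sqrt{n(n+2)}\,\mu_{\mathrm{B}} = \sqrt{1(1+2)}\,\mu_{\mathrm{B}} \approx 1.73\,\mu_{\mathrm{B}} \\
% Tetrahedral crystal field: e orbitals at -0.6\Delta_t, t_2 orbitals at +0.4\Delta_t
  \mathrm{CFSE}(d^{9},\ e^{4}t_{2}^{5}) &= 4(-0.6\Delta_t) + 5(+0.4\Delta_t) = -0.4\,\Delta_t \\
  \mathrm{CFSE}(d^{10},\ e^{4}t_{2}^{6}) &= 4(-0.6\Delta_t) + 6(+0.4\Delta_t) = 0
\end{align*}
```

The 0.4 Δt of stabilization available to the d9 ion but not to d10 is the CFSE difference invoked above to favor the Cu(II) assignment, and the nonzero spin-only moment matches the observed paramagnetism.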
v3-fos-license
2020-09-03T09:03:38.253Z
2020-06-25T00:00:00.000
225742145
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://aijournals.com/index.php/ajmrr/article/download/1547/1133", "pdf_hash": "f0b021c7b7242b568580a9c6de9fb8bc19257678", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:357", "s2fieldsofstudy": [ "Medicine" ], "sha1": "679ab744b9e13b4bc3d5995839690fe850f00860", "year": 2020 }
pes2o/s2orc
Role of Cross-Sectional Imaging in Tongue Lesions Background: The purpose of this study was to determine the role of CT and MR imaging in demonstrating lesions of the tongue. Imaging can help decide the further management of the patient: when resection is considered, the precise extent of the lesion can be delineated, and imaging can also indicate whether organ-conservation therapy can be suggested. Hence, knowing the differentiating characteristics of these lesions is essential for a radiologist to narrow the differential diagnosis. The aim of the study is to describe the imaging findings of various tongue lesions, give radio-pathological correlation, and discuss the role of CT and MRI in planning further appropriate treatment, the extent of involvement of adjacent structures, resectability, postoperative reconstruction & prognosis. Subjects and Methods: Twenty patients with tongue masses were prospectively evaluated with CT & MRI over eighteen months from June 2018 to Nov 2019. Contrast-enhanced axial CT images with reconstruction were acquired. Plain and contrast MRI studies were done. Imaging findings & diagnoses were later correlated with surgical and histopathological results in all possible cases. Results: Among twenty patients, three patients revealed no abnormality; the seventeen patients with findings on imaging included twelve squamous cell carcinomas, two venous malformations, two thyroglossal cysts, one hemangioma & one fatty lipoma. Conclusion: A few specific lesion characteristics can aid in narrowing the differential diagnosis. Solid high-density lesions in the midline mostly represent lingual thyroids. Calcifications likely indicate goitrous transformation. Phleboliths are highly suggestive of venous malformations. Multinodular, thin-rim enhancing cystic lesions are indicative of lymphatic malformations, primarily when fluid-fluid levels are found. Fat/calcium content within a complex cystic lesion is specific for a dermoid cyst, whereas diffusion restriction within a pure cystic lesion is suggestive of an epidermoid cyst. Finally, when a lesion is trans-spatial, the three differentials to be considered are highly aggressive malignancies, congenital masses & aggressive infections. Introduction Although most of the tongue masses are squamous cell carcinomas, various unusual lesions may also affect the tongue. Most of the congenital lesions found in children are seen at the root of the tongue. The root of the tongue is relatively resistant to primary neoplastic and infectious processes due to its high percentage of skeletal muscle and lack of significant lymphatic tissue. Lesions involving the root of the tongue can be classified into congenital vascular and nonvascular lesions, infections, and neoplasms. [1] With the age-adjusted incidence in India being 20 per 100,000 population, squamous cell cancers (SCC) form the bulk of the lesions involving the tongue. [2] SCC invariably invades the deep tongue from adjacent mucosal surfaces of the oral cavity and anterior oropharynx and does not originate from the deep tongue structures. [3] Imaging provides crucial details for the appropriate management of these lesions. In the present study, we briefly discuss the anatomy of the tongue and the imaging findings in various lesions, and suggest the optimal modality and type of cross-sectional investigation for different pathological lesions. Subjects and Methods Twenty patients with clinically suspected tongue masses were prospectively evaluated with CT & MRI over eighteen months from June 2018 to Nov 2019.
Contrast-enhanced axial images with multiplanar reconstruction were acquired using a 64-slice Siemens CT machine. Plain and contrast-enhanced MRI studies were obtained with T1, T2, T1 fat-saturated, DWI & GRE sequences and multiplanar image acquisition using a Siemens Avanto 1.5 T machine. Imaging findings & diagnoses were later correlated with surgical and histopathological results wherever possible. In our study, we observed that congenital lesions were more common in patients below 40 years, predominantly in females, while acquired lesions were common in patients above 40 years of age, mostly in males. Among the patients diagnosed with SCC, eight patients were above 60 years of age, three patients were 40-60 years of age, and one patient was above 20 years of age. Eight patients showed an association with risk factors like tobacco chewing, smoking, and alcohol. SCC showed a predilection for males, with an incidence of 72% among males and 28% among females. The most common site involved in SCC of the tongue in this study was the lateral aspect and anterior two-thirds of the tongue. In two patients, the lesions showed erosion of the hyoid bone. Two patients had lesions that extended across the midline. Four patients with SCC had lesions extending into the retromolar trigone. Postoperative HPE findings correlated well with the imaging diagnosis in cases of malignant lesions. Only one case was proven to be a thyroglossal duct cyst. Vascular lesions were managed with a combination of clinical and imaging findings, and embolization was done wherever necessary. Other benign lesions were kept under observation, with follow-up at six months and no intervention.

Normal tongue Anatomy
The oral cavity includes the lips anteriorly; the circumvallate papillae, tonsillar pillars, and soft palate posteriorly; the mandibular alveolar ridge, mylohyoid muscles, and the teeth inferiorly; the gingivobuccal region laterally; and the hard palate, maxillary alveolar ridge and the teeth superiorly. [5] The terms 'floor of the mouth' and 'root of the tongue' refer to the oral cavity, whereas the extrinsic muscles, which include genioglossus, hyoglossus, palatoglossus, and styloglossus, arise from the hyoid bone, mandible, and styloid process of the skull base. The hypoglossal nerve, traversing between the mylohyoid and hyoglossus muscles, innervates all the muscles of the tongue except palatoglossus, which is supplied by the pharyngeal plexus. The lingual nerve, which courses adjacent to the hypoglossal nerve, is the sensory supply to the anterior two-thirds of the tongue. The glossopharyngeal nerve supplies the posterior one-third of the tongue. Individual sensory fibers for taste course along the lingual nerve and join to form the chorda tympani nerve, which traverses the middle ear and joins the facial nerve. [6]

Imaging Features of various lesions involving the tongue
Lesions arising from the root of the tongue can be classified into congenital (vascular and nonvascular) lesions, infections, and neoplasms. The largest group of lesions comprises the congenital vascular and nonvascular lesions. In contrast, lesions occurring much more frequently in the adjacent sublingual and submandibular spaces and the base of the tongue are of the acquired type. The higher prevalence of acquired lesions there is presumably due to the greater exposure of the mucosal surfaces and the prominent lymphoid tissue. Also, the skeletal muscle that composes most of the root of the tongue is less prone to neoplasms and relatively resistant to infection as compared with other tissues.
[7]

Lingual Thyroid
Ectopic thyroid tissue is found along the thyroglossal duct, which runs between the foramen cecum and the thyroid gland during development and involutes in adults. Before it reaches the hyoid bone, the duct passes through the posterior aspect of the root of the tongue. The thyroglossal duct lies along the midline in the suprahyoid neck; however, in the infrahyoid neck, it diverges laterally. [8] On CT, lingual thyroids characteristically appear hyperattenuating relative to muscle due to (a) the iodine content of thyroid tissue and (b) moderate contrast enhancement on post-contrast images. On MRI, ectopic thyroid usually appears solid, mildly T1 hyperintense or isointense relative to muscle, and avidly enhancing. High uptake of iodine-123 (123I) and technetium-99m on radiotracer studies is very specific. Similar to other thyroid tissue, these lesions can undergo goitrous and malignant transformation (3% of cases).

Thyroglossal Duct Cyst
Similar to ectopic thyroids, thyroglossal duct cysts are seen along the thyroglossal duct. They are the most common thyroglossal duct lesions, and approximately 20%-25% are suprahyoid in location. These lesions are characteristically cystic in appearance, sometimes showing thin septa or lobulations. Like most cystic lesions, thyroglossal duct cysts show an attenuation value that is usually between 0 and 20 HU at CT, with hyperintensity on T2 and intermediate T1 signal at MR imaging. The contents of the lesion can be proteinaceous or hemorrhagic, making the lesion hyperdense on CT and hyperintense on T1-weighted sequences. Most of the lesions are well-circumscribed, showing a very thin rim of enhancement. Extensive surrounding soft-tissue edema with a heterogeneous and complex appearance may be seen with lesions that are infected or hemorrhagic. When a component closely associated with the hyoid bone is found, thyroglossal duct cysts may be distinguished from other cystic lesions. [9]

Dermoid and Epidermoid Cysts
Dermoid and epidermoid cysts in the oral cavity are most commonly found at the floor of the mouth and the root of the tongue. The distinction between dermoid and epidermoid cysts at imaging may be challenging, as both lesions are well-circumscribed and exhibit high T2 signal with no enhancement or only rim enhancement. Epidermoid cysts have only epithelial elements, whereas dermoid cysts contain both epithelial components and a dermal substructure. At imaging, significant signal heterogeneity (i.e., from the combination of solid and cystic components) reflects the additional complexity of dermoid cysts. Epidermoid cysts usually do not exhibit substantial solid components. Intralesional fat is a distinguishing feature of dermoid cysts. A nearly pathognomonic "sack of marbles" appearance may be created when this fat coalesces into globules. [10] At diffusion imaging, epidermoid cysts may show restriction (high signal on diffusion-weighted images and a low apparent diffusion coefficient). However, the apparent diffusion coefficient values are typically only moderately low, and this restriction is a characteristic feature.

Lipoma
Lipomas are usually easily identified, well-encapsulated lesions with the attenuation or signal intensity of fat. Lipomas account for only 0.1%-5% of benign lesions in the oral cavity, but 50% of these localize to the buccal soft tissues.

Foregut Duplication Cyst
Foregut duplication cysts are seldom seen in the root of the tongue.
These cysts usually exhibit a CT attenuation similar to that of fluid but may vary depending on proteinaceous content. At MR imaging, these cysts show a high T2 signal and a T1 signal that varies depending on proteinaceous content. Like thyroglossal duct cysts, they have a uniform enhancing rim and can have thin septa or lobulations. Foregut duplication cysts may also become infected or hemorrhage, leading to a heterogeneous appearance.

Vascular malformations
Based on differences in growth and histology, vascular anomalies are divided into hemangiomas and vascular malformations. Vascular malformations are further subcategorized into high-flow lesions, such as arteriovenous malformations, and low-flow lesions, such as lymphatic and venous malformations. Of these, lymphatic and venous malformations account for a relatively larger share of the soft-tissue masses presenting at birth. [11] Low-flow venous malformations are characterized by T2-hyperintense venous lakes with signal voids within, which represent phleboliths. Lymphatic malformations may be of two types, namely microcystic or macrocystic malformations. The macrocystic type is T2 hyperintense and may present as a uni- or multilocular cystic mass with fluid-fluid levels within, whereas the microcystic type appears as an area of high signal intensity on T2-weighted imaging. Angiography is usually used to confirm high-flow malformations, which show abnormal arterial supply to the tongue with an abnormally prominent vascular blush. Hemangiomas exhibit strong signal on T1-weighted imaging, heterogeneous high signal on T2-weighted imaging, and prominent enhancement with a lack of signal voids. An involuting hemangioma shows high T1 signal representing fatty replacement.

Squamous cell carcinoma
Squamous cell carcinoma (SCC) of the oral cavity has a predilection for the gingivo-buccal region, tongue, and retromolar trigone. The second most common site for SCC of the oral cavity is the tongue. The prevalence of SCC has been rising in India and the West due to excessive abuse of tobacco and alcohol. SCC usually presents clinically as an ulcer, which can be diagnosed with ease by biopsy. Staging the lesion is the primary concern of imaging. Staging for SCC of the oral cavity is as follows: T0, no evidence of a primary tumor; T1, greatest diameter of the primary tumor less than 2 cm; T2, greater than 2 cm but less than 4 cm in diameter; T3, primary tumor greater than 4 cm; T4, a massive tumor more than 4 cm in diameter with deep invasion involving the antrum, pterygoid muscles, base of the tongue or skin of the neck (a short illustrative sketch of these size thresholds is given below). This staging system is used by the American Joint Committee on Cancer and the International Union against Cancer (UICC). As a thickness greater than 4 mm has been associated with cervical nodal metastasis, assessing tumor thickness is essential in the early stages. [12] The precise measurement for assessing tumors of the lateral border of the tongue is the lateromedial thickness taken in the axial MR plane. Another study, by Okura et al., established tumor depth greater than 9.7 mm as a significant predictor of nodal metastasis. [13] Other predictors for staging the tumor are the involvement of muscles, crossing of the midline, and extension into the floor of the mouth, valleculae, pre-epiglottic space, and hyoid bone. Involvement of the valleculae, pre-epiglottic space or hyoid bone indicates a relative contraindication to surgical resection. SCC commonly involves the level I and level II neck nodes.
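As referenced above, the size-based T categories reduce to simple diameter thresholds. The helper below is a hypothetical illustration only; it is not part of this study's workflow or of any AJCC/UICC software, and it collapses the deep-invasion criteria of T4 into a single boolean flag.

```python
def t_stage(diameter_cm: float, deep_invasion: bool = False) -> str:
    """Rough T category from the greatest tumor diameter (cm), following the
    thresholds quoted in the text; `deep_invasion` stands in for involvement
    of the antrum, pterygoid muscles, base of the tongue, or skin of the neck."""
    if diameter_cm <= 0:
        return "T0"            # no measurable primary tumor
    if diameter_cm > 4 and deep_invasion:
        return "T4"
    if diameter_cm > 4:
        return "T3"
    if diameter_cm > 2:
        return "T2"
    return "T1"

# Example: a 3.2 cm lateral tongue lesion without deep invasion maps to T2.
print(t_stage(3.2))
```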
Skip metastases to level III, level IV, and contralateral level I and II lymph nodes are also noted. Metastatic lymph nodes appear enlarged and round and show necrosis. Circumferential contact of a lymph node with the carotid artery over more than 270 degrees precludes resectability of the node. An increase in tumor size is accompanied by an increase in tumor heterogeneity, which also reflects the degree of necrosis. Cortical bone invasion is indicated by erosions of the bone. A hyperdense area replacing the healthy fat on CT denotes medullary bone involvement. [14] Non-contrast T1-weighted images yield satisfactory detail on cortical erosion and bone marrow invasion. However, contrast-enhanced T1-weighted imaging aids in the assessment of marrow invasion, perineural spread, soft-tissue extent, tumor thickness, and necrotic lymph nodes. [15] T2-weighted imaging provides useful information regarding the involvement of the extrinsic muscles and the floor of the mouth. STIR and DWI sequences are of great importance in visualizing lymph nodes, with the latter offering an added advantage in assessing subcentimetric lymph nodes.

Conclusion
A few specific lesion characteristics can aid in narrowing the differential diagnosis. Solid high-density lesions in the midline mostly represent lingual thyroids. These lesions may be verified with nuclear imaging. Calcifications likely indicate goitrous transformation. Phleboliths are highly suggestive of venous malformations. Multilocular, thin-rim enhancing cystic lesions are indicative of lymphatic malformations, particularly when fluid-fluid levels are found. Fat/calcium content within a complex cystic lesion is specific for a dermoid cyst, whereas diffusion restriction within a purely cystic lesion is suggestive of an epidermoid cyst. Finally, when a lesion is trans-spatial, three differentials to be considered are highly aggressive malignancies, congenital masses & aggressive infections. Although MRI is a more sensitive modality, CT is the most commonly used investigation for pre-operative planning & postoperative follow-up. Plain CT is useful in assessing the involvement of adjacent bones, while MRI is helpful in identifying flow voids, delineating the extent of soft-tissue involvement & assessing the neurovascular bundle.
v3-fos-license
2023-06-29T13:02:00.368Z
2023-06-16T00:00:00.000
259276070
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://doi.org/10.1029/2023gl103990", "pdf_hash": "2a6776303771c3b5da2c5179dadfa9dd91179a90", "pdf_src": "Wiley", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:358", "s2fieldsofstudy": [ "Physics", "Geology" ], "sha1": "2a6776303771c3b5da2c5179dadfa9dd91179a90", "year": 2023 }
pes2o/s2orc
Unrestricted Solar Energetic Particle Access to the Moon While Within the Terrestrial Magnetotail This study presents observations of Solar Energetic Particle (SEP) protons that have penetrated Earth's magnetotail to reach the lunar environment. We apply data from Wind as an upstream monitor and compare to observations from THEMIS‐ARTEMIS within the tail to show clear signatures of SEPs at the Moon during two events. Combining modeling and data analysis, we show that SEPs above energies of ∼25 keV gain access to the Moon's position deep within the magnetotail through field lines that are open on one end to the solar wind. These results contradict previous studies that have suggested that the magnetotail is effective in shielding the Moon from SEPs with energies up to 1 GeV. Instead, we highlight that Earth's magnetosphere provides poor protection to the Moon from SEPs, which irradiate the lunar surface even within the tail. Our results have important implications regarding the safety of astronauts during upcoming lunar missions. • The two Acceleration, Reconnection, Turbulence, and Electrodynamics of the Moon's Interaction with the Sun probes observe Solar Energetic Particle (SEP) events at the Moon when located deep within the terrestrial magnetotail • SEPs enter the magnetotail far downstream beyond the lunar orbit, along open field lines that are connected on one end to Earth's polar caps • The terrestrial magnetosphere is ineffective in shielding SEPs from accessing the lunar orbit while within the magnetotail Supporting Information: Supporting Information may be found in the online version of this article. due to their much larger gyroradii (≫R L ; see X. Xu et al., 2017;Z. Xu et al., 2020). During the remaining one third of its orbit, the Moon is located within the terrestrial magnetosphere. Here, the Moon is exposed to multiple plasma environments, including the shocked plasma within the magnetosheath, the terrestrial plasma sheet, and the tenuous plasma of the lobes (e.g., Liuzzo, Poppe, & Halekas, 2022). As the Moon travels through Earth's magnetotail, the local field may likewise limit energetic particle access to the lunar surface (e.g., Størmer, 1955). Such shielding could curb the role that these particles have in processing the surface, limiting the hazards associated with their precipitation whenever the Moon is embedded within the tail. To estimate the energies below which particles could be prevented from reaching the Moon while within the tail, Winglee and Harnett (2007) applied a multi-fluid model of Earth's magnetosphere to calculate the magnetic field strength along the lunar orbit. They suggested that particles below energies of E ≈ 1 GeV could be shielded from the lunar surface, depending on the orientation of the IMF. Separately, Harnett (2010) applied this model to constrain energetic particle access to the Moon within the tail, for times when Earth's magnetosphere is perturbed during, for example, an interplanetary coronal mass ejection (ICME). Under these storm-time conditions, Harnett (2010) stated that particles below 35 MeV are prevented from reaching the Moon. More recently, Jordan et al. (2022) suggested that the gyroradius of a charged particle must exceed the radius of the terrestrial magnetotail at the Moon's orbital position in order to precipitate onto the lunar surface when in the tail, arguing that particles with smaller gyroradii would be significantly deflected and unable to reach the Moon (see also Winglee & Harnett, 2007). 
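To put that gyroradius criterion in numbers, the short sketch below estimates the relativistic proton gyroradius r_g = p/(qB) in Earth radii. It is an illustrative calculation only: the ~10 nT lobe-like field strength and the 90° pitch angle are assumptions made here, not values taken from Jordan et al. (2022). For keV-to-MeV protons the gyroradius comes out far smaller than the roughly 25-30 R E width of the tail, which is exactly why such particles were expected to be strongly deflected.

```python
import math

M_P_MEV = 938.272      # proton rest energy [MeV]
Q = 1.602176634e-19    # elementary charge [C]
C = 2.99792458e8       # speed of light [m/s]
R_E = 6.371e6          # Earth radius [m]

def proton_gyroradius_re(kinetic_mev: float, b_nt: float) -> float:
    """Relativistic proton gyroradius (90-degree pitch angle), in Earth radii."""
    e_tot = kinetic_mev + M_P_MEV
    p_si = math.sqrt(e_tot**2 - M_P_MEV**2) * 1e6 * Q / C   # momentum [kg m/s]
    return p_si / (Q * b_nt * 1e-9) / R_E

# Assumed ~10 nT field near the lunar distance (illustrative value only)
for ke_mev in (0.025, 1.0, 100.0):      # 25 keV, 1 MeV, 100 MeV
    print(f"{ke_mev:7.3f} MeV -> r_g ~ {proton_gyroradius_re(ke_mev, 10.0):5.1f} R_E")
```

With these assumptions, the three energies give gyroradii of roughly 0.4, 2.3, and 23 R E, respectively.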
On this basis, Jordan et al. (2022) argued that protons below 100 MeV, and electrons below 1 GeV, are prevented from reaching the lunar environment when embedded within the magnetotail. However, multiple observational and modeling studies contradict these results, instead suggesting that particles with much lower energies reach the Moon when in the tail. Using data from the Cosmic Ray Telescope for the Effects of Radiation (CRaTER) instrument onboard the Lunar Reconnaissance Orbiter, Case et al. (2010) reported that the magnetotail is ineffective at shielding particles above 14 MeV (the lowest energy measured by CRaTER). These authors found that above this energy, the instrument observed a reduction in the particle flux of less than 2% when transiting through the magnetotail. Similarly, Chandrayaan-1 and Chang'E-1 measurements suggest no significant decrease of particles at energies above ∼10 MeV within the tail (Jie & Gang, 2013; Koleva et al., 2010). Finally, Huang et al. (2009) reached a similar conclusion on the basis of modeling. During one previously studied SEP event, Wind (Harten & Clark, 1995) observed an enhanced SEP ion population upstream of Earth at energies 100 keV ≤ E ≤ 7 MeV. Similarly, ARTEMIS observed enhanced fluxes during this time, but also detected signatures of intermittent hot magnetotail plasma below E ≲ 100 keV. X. Xu et al. (2017) argued that these measurements in the low-energy channels contaminated the SST observations with background signals of secondary particles (even those channels extending beyond energies of 100 keV) and stated that the fluxes observed by ARTEMIS during this event were not associated with SEPs. Based on their findings, these authors concluded that the magnetosphere shields all SEPs at energies below E ≲ 4 MeV from reaching the Moon's environment when located within the tail. However, this contradicts the modeling results of Huang et al. (2009), which suggest that 1 MeV protons can reach the lunar orbit, even within the tail. To understand effects including space weathering of the lunar surface (e.g., Crites et al., 2013; Jordan et al., 2023; Poppe, Farrell, & Halekas, 2018) and to appreciate possible hazards to astronaut safety associated with radiation from these high-energy particles, it is critical to determine whether SEPs can penetrate the magnetotail fields to reach the lunar environment and to constrain the energies at which SEPs access the lunar surface. Here, we present case studies of two events observed by ARTEMIS that clearly illustrate that the Moon is fully exposed to upstream SEP protons, even while located deep within the terrestrial magnetotail. We combine magnetohydrodynamic and test-particle simulations to show that SEPs access the terrestrial magnetotail at distances of several hundred R E downtail along open field lines that are connected on one end to Earth's polar caps.

The Wind and ARTEMIS Missions
Onboard Wind is a suite of instruments to study the plasma environment near 1 AU, including the SST (Lin et al., 1995).

Observations of SEPs Within the Tail
Figures 1a-1i display Wind and ARTEMIS observations from 23 to 24 June 2013, the same SEP event presented by X. Xu et al. (2017). This event was likely associated with a fast CME that erupted on 21 June from solar active region (AR) 11777, which produced an observed coronal shock wave and associated widespread SEPs that were observed by multiple spacecraft throughout the inner heliosphere (see Frassati et al., 2022; Winslow et al., 2015).
Panels 1a and 1b show the location of the Moon and the Wind spacecraft, respectively, during this event, illustrating the position of ARTEMIS well within the terrestrial magnetotail (see also the orientation of the magnetic field with its strong Earthward/anti-Earthward component, provided in panels 1h-1i). Energetic ion measurements from the Wind, P1, and P2 SSTs are included in panels 1c, 1d, and 1f, respectively. In addition, ARTEMIS ESA ion measurements are shown in panels 1e and 1g. The ICME associated with this June event did not directly impact Earth; see, for example, the Richardson and Cane ICME list (Richardson & Cane, 2010) or the Kasper and Stevens shock list. Nevertheless, Figure 1c shows that Wind clearly observed a broad signature of "energetic storm particles" (SEPs accelerated locally by a passing ICME; see Cohen, 2006) beginning near 12:00 on 23 June and lasting ∼24 hr, with all but the highest SST energy channel displaying an enhanced differential energy flux. Panels 1d and 1f illustrate that a nearly identical enhancement was observed by the ARTEMIS SSTs at energies 100 keV ≲ E ≲ 1 MeV during this time. This enhancement even extends to the lowest SST energy channel of ∼25 keV. However, below E ≲ 100 keV, this signature is overlain by intermittent injections of hot plasma associated with the magnetotail plasma sheet (e.g., Artemyev et al., 2017), as also detected by the ARTEMIS ESAs at energies as low as ∼500 eV (see panels 1e and 1g). Even during these times with the probes located within the sheet, the signature of E > 100 keV particles in the ARTEMIS observations is evident. Interestingly, panels 1d and 1f illustrate that the probes detected SEPs while in both magnetotail lobes (see panels 1h and 1i). Indeed, Wind observed a nearly isotropic SEP pitch angle distribution, suggesting that both lobes, open to oppositely oriented IMF field lines, were suffused with SEPs during this event. The similarity between the Wind and ARTEMIS SST observations at energies above 100 keV suggests that Earth's magnetotail plays only a limited role in preventing SEP access to the Moon. During the second event, on 05 September 2017, the low-energy ion differential energy fluxes observed by ARTEMIS (panels 1n and 1p) and the nearly constant, tailward magnetic field (B x < 0; panels 1q-1r) indicate that the Moon was located deep within the southern lobe of the terrestrial magnetotail. And yet, like the observations during the June 2013 SEP event, the Wind E > 100 keV ion differential energy flux (panel 1l) is nearly identical to the ARTEMIS signatures (panels 1m and 1o). Notably, the SST energy fluxes are dispersive in that they demonstrate a clear dependence on energy: the highest-energy protons (with the highest velocities) arrive at the detectors first, followed by protons at successively lower energies. This dispersion is consistent with the signature of SEPs when magnetically connected to a solar flare (e.g., Reames, 2013), and the dispersive structure is preserved even within the magnetotail. The Wind and ARTEMIS observations during these two separate SEP events are nearly identical, demonstrating that the magnetotail is unable to effectively shield SEP ions above E ≈ 100 keV from accessing the lunar environment, contradicting findings from previous studies (e.g., Harnett, 2010; Jordan et al., 2022; Winglee & Harnett, 2007; X. Xu et al., 2017).
To further elucidate the similarity between these signatures, Figure 2a compares the E > 100 keV ion spectrum observed by Wind (blue) and ARTEMIS P1 (green) during the June 2013 SEP event (see Figures S1 and S2 in Supporting Information S1 for additional comparisons). These spectra in panel 2a are averaged over a two-hour period from 20:50-22:50 on the 23rd (see pink bars at the bottom of panels 1c, 1d, and 1f, and 2b), while P1 was in the southern lobe of the terrestrial magnetotail to ensure that the SST measurements were not contaminated by any hot magnetotail plasma. The two spectra in Figure 2a are nearly identical (but differ by up to a factor of ∼1.1 due to, e.g., different instrument calibrations), despite the spacecraft being located within two vastly different regions (Wind upstream of Earth in the solar wind, and ARTEMIS deep within Earth's magnetotail). Panel 2a therefore confirms that the enhanced differential energy fluxes detected by ARTEMIS are SEPs that penetrated the magnetosphere and were detected, nearly unaltered, deep within the magnetotail. Furthermore, Figure 2b shows the Wind (blue) and ARTEMIS P1 (green) differential ion energy fluxes of 1 MeV protons for this SEP event. Near 18:00 on the 23rd, the 1 MeV Wind proton differential energy flux sharply increased by a factor of 2 in ∼10 min (labeled "onset"), reaching a value of 600 eV cm −2 s −1 sr −1 eV −1 . This flux was sustained for ∼1 hr, before further increasing to a peak value of ∼800 eV cm −2 s −1 sr −1 eV −1 near 19:30. Over the next 12 hr, the differential energy flux gradually decreased, until a rapid subsidence in the 1 MeV differential energy flux at 08:00 on 24 June. Panel 2b illustrates that P1 observed nearly identical features that were time-shifted compared to the Wind observations. To further highlight this delay, the yellow curve in Figure 2b again shows the Wind observations, but now shifted by 2 hours to coincide with the observed ARTEMIS 1 MeV SEP enhancement. With this shift, multiple features of the Wind and ARTEMIS data occur at nearly identical times, including the 1 MeV differential energy flux onset, the ∼12-hr decrease, and the rapid subsidence where fluxes return to background (see also Figure 1). However, certain features of the ARTEMIS differential energy flux are not present in the time-shifted Wind observations. The most obvious differences are three narrow spikes, visible near 21:00 on the 23rd and 00:00 on the 24th. Here, the P1 1 MeV differential flux reached nearly 900 eV cm −2 s −1 sr −1 eV −1 (light blue arrows in Figure 2b): ∼1.2 times above the maximum observed by Wind, which did not observe qualitatively similar features during these periods. These spikes are associated with the magnetotail plasma sheet (see light blue arrows above panels 1d-1e), where P1 detected an enhanced ion flux extending down to energies of ∼1 keV. A second discrepancy regards the timing of the peak SEP differential energy flux. Although the ARTEMIS SEP 1 MeV onset occurred almost exactly 2 hr after Wind, the peak P1 flux occurred ∼2.5 hr after the Wind peak. However, the maximum amplitude and structures of the peaks observed by both spacecraft closely match. The agreement between the ARTEMIS and time-shifted Wind observations (Figure 2b) suggests that these SEPs required ∼2 additional hours to reach the Moon, consistent with the local generation of energetic storm particles as an ICME passed nearby. 
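A lag like this can also be extracted objectively by cross-correlating the two 1 MeV flux time series on a common time grid. The sketch below is a generic illustration of that procedure using synthetic placeholder arrays; it is not the pipeline used to produce Figure 2, and the variable names are invented for the example.

```python
import numpy as np

def best_lag_hours(t_hours, flux_wind, flux_artemis, max_lag_h=6.0, step_h=0.1):
    """Return the delay (hours) of the Wind series that maximizes its
    correlation with the ARTEMIS series; both series are sampled on t_hours."""
    lags = np.arange(-max_lag_h, max_lag_h + step_h, step_h)
    best, best_r = 0.0, -np.inf
    for lag in lags:
        shifted = np.interp(t_hours, t_hours + lag, flux_wind)  # Wind delayed by `lag`
        r = np.corrcoef(shifted, flux_artemis)[0, 1]
        if r > best_r:
            best, best_r = lag, r
    return best, best_r

# Synthetic demo: an ARTEMIS-like series lagging a Wind-like series by 2 hr
t = np.arange(0, 24, 0.1)
wind = np.exp(-((t - 6.0) / 3.0) ** 2)
artemis = np.exp(-((t - 8.0) / 3.0) ** 2)
print(best_lag_hours(t, wind, artemis))   # recovers a lag of ~2.0 hr
```

For the Gaussian toy series above, the recovered lag is the imposed 2 hr.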
Using the WSA-ENLIL + Cone model for this June 2013 event available at NASA's Community Coordinated Modeling Center (CCMC; see kauai.ccmc.gsfc.nasa.gov/DONKI/view/ WSA-ENLIL/2586/1), we estimate a velocity of 800 km/s as this ICME passed near Earth. Hence, this ICME traveled ∼900R E in the 2-hr offset between the Wind and ARTEMIS observations. Since Wind was located 200R E upstream, this suggests these SEP entered the magnetosphere ∼640R E beyond the Moon's orbit. Note that since these SEPs were generated locally (i.e., not at the Sun) and travel more than an order of magnitude faster than the ICME, any path difference between the protons detected by Wind and ARTEMIS plays only a minor role in any temporal offset between the observations. Notably, despite the suggestions of X. Xu et al. (2017), the SST signatures during this event were not caused by high-energy particles contaminating the sensors. While penetration of highly energetic ions (E ≳ 10 MeV) may contaminate the SSTs, additional spacecraft upstream of Earth (e.g., the Advanced Composition Explorer and the Solar and Heliospheric Observatory) with separate instrumentation show nearly identical enhancements. Likewise, while high energy electrons may also contaminate the SST detectors, the Wind and ARTEMIS E ≳ 400 keV electron fluxes were not enhanced, so penetrating electrons can be ruled out. Instead, the SST observations indicate SEP entry into Earth's magnetotail. Figure S2 in Supporting Information S1 presents a comparison between the Wind and ARTEMIS spectra for the September SEP event, in the same style as Figure 2. Again, the spectra of the Wind and ARTEMIS differential energy fluxes, respectively, are in close agreement, despite the position of P1 and P2 deep within the magnetotail. Note that unlike for the June event, there is no appreciable time-shift between the Wind and ARTEMIS observations, consistent with the signature of SEPs generated by a solar flare. Modeling SEP Access to the Tail To better understand SEP dynamics and to shed light on their access to the Moon, we traced test particle trajectories as they travel through the terrestrial magnetosphere. We apply the Open Geospace General Circulation Model (OpenGGCM; e.g., Fuller-Rowell et al., 1996;Raeder et al., 2017), available at the CCMC, to calculate global plasma and electromagnetic field quantities. For this analysis, we focus on the June 2013 SEP event; results from the September 2017 event are shown in Figure S3 in Supporting Information S1. OpenGGCM is driven with the conditions observed upstream by Wind for the 24-hr period from 12:00 on 23 June 2013 through 12:00 on 24 June; that is, the center interval shown in Figure 1. In GSE coordinates, the OpenGGCM spatial domain extends from −350R E ≤ x ≤ +33R E (along the Sun-Earth line), and from −96R E ≤ y, z ≤ +96R E . Thus, this domain includes a large portion of the Moon's orbit, including its magnetotail passage. Three-dimensional data cubes of the plasma density, plasma velocity, and electromagnetic fields are output every 240 s. Comparison of OpenGGCM with the ARTEMIS P1 and P2 observations show moderate agreement, sufficient to proceed with test-particle tracing. Following the approach of Poppe et al. (2016), protons with mass 1 amu are initialized at the Moon within the OpenGGCM electromagnetic field output at a specific time, position, energy, and pitch angle corresponding to the observed SEPs (see Figure 1). 
Trajectories are then integrated backwards in time using a Runge-Kutta-4 algorithm to solve the Lorentz force law, thereby allowing for an assessment of the particles' behavior leading up to their detection by ARTEMIS. Each particle is followed until it strikes one of the outer simulation boundaries (no particles reached OpenGGCM's inner boundary located at ∼3R E ). This backtracking method is much more computationally efficient than an approach where particles would be initialized upstream of Earth and traced forward in time (see also, e.g., Poppe, Fatemi, & Khurana, 2018;Liuzzo, Poppe, Addison, et al., 2022;Liuzzo et al., 2019aLiuzzo et al., , 2019bLiuzzo et al., , 2020. Figure 3 shows a representative set of test-particle trajectories in the GSE (3a-3c) x-y and (3d-3f) x-z planes for select energies (25,50,150,350,750, and 1,000 keV) over-plotted on the OpenGGCM results for the magnetic field magnitude (3a, 3d), plasma bulk velocity (3b, 3e), and number density (3c, 3f) during the June 2013 SEP event. These particles were initialized at midnight on 24 June with initial pitch angles of 180°, such that they traveled toward Earth when integrating backwards in time. When considered going forwards in time, these results illustrate that the SEP protons travel large distances within the terrestrial magnetotail before their detection by ARTEMIS. The two lowest-energy test particles at 25 and 50 keV (blue and green in Figure 3) entered the distant magnetotail near x ≈ −275R E and x ≈ −300R E , respectively. The "elbow" visible in each of their trajectories denotes the point at which they crossed the magnetopause (as also confirmed by inspection of the local electromagnetic fields along each particle's trajectory, which is omitted from the figure for clarity). As SEPs approach the Moon from downtail, some impact the lunar nightside, as is consistent with observations from Explorer 35 of "lunar shadowing" of ions and electrons, whereby the surface of the Moon prevented detection of energetic particles during cis-lunar transits of the spacecraft (e.g., Lin, 1968;Van Allen, 1970;Van Allen & Ness, 1969). However, the 25 and 50 keV particles illustrated in Figure 3 continue traveling toward upstream, initially passing by the Moon before eventually encountering the enhanced magnetic fields close to Earth (near x ≈ −5R E ). Although beyond the focus of this study, penetration of SEPs to these magnetospheric locations is consistent with preliminary analysis of the SSTs on the three inner THEMIS probes during this event, which also show signatures of enhanced energetic proton energy fluxes. The enhanced fields near the Earth cause the 25 and 50 keV test particles displayed in Figure 3 to mirror, where they begin traveling tailwards before impacting the lunar dayside or (in the case of Figure 1) being detected by ARTEMIS. Higher-energy backtracked particles (150 keV ≤ E ≤ 1 MeV; see Figure 3) encounter the OpenGGCM boundary before exiting the tail. While the behavior of the 25 and 50 keV ions implies that, in forward time, these higher-energy particles also enter the terrestrial magnetotail along open field lines at even greater distances (beyond x = −350R E ), they may instead penetrate across the magnetopause boundary since the magnetic field magnitude is reduced far downtail. 
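The backtracking step itself can be summarized compactly. The sketch below shows a fixed-step RK4 integration of the Lorentz force with a negative time step; it is only a schematic stand-in for the actual OpenGGCM-driven tracer (the uniform field, the step size, and the non-relativistic equation of motion are simplifying assumptions made here).

```python
import numpy as np

Q_M = 1.602176634e-19 / 1.67262192e-27   # proton charge-to-mass ratio [C/kg]

def lorentz_accel(v, e_field, b_field):
    """a = (q/m) (E + v x B); non-relativistic form, adequate for keV protons."""
    return Q_M * (e_field + np.cross(v, b_field))

def rk4_step(x, v, dt, fields):
    """One RK4 step for (x, v); `fields(x)` returns (E, B) at position x.
    A negative dt integrates the trajectory backwards in time."""
    def deriv(x_, v_):
        e, b = fields(x_)
        return v_, lorentz_accel(v_, e, b)
    k1x, k1v = deriv(x, v)
    k2x, k2v = deriv(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
    k3x, k3v = deriv(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
    k4x, k4v = deriv(x + dt * k3x, v + dt * k3v)
    x_new = x + dt / 6.0 * (k1x + 2 * k2x + 2 * k3x + k4x)
    v_new = v + dt / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return x_new, v_new

# Toy example: uniform 10 nT field, no E field, 1000 backward steps
uniform = lambda x: (np.zeros(3), np.array([0.0, 0.0, 10e-9]))
x, v = np.zeros(3), np.array([2.0e6, 0.0, 0.0])   # ~21 keV proton speed [m/s]
for _ in range(1000):
    x, v = rk4_step(x, v, -1.0e-3, uniform)        # dt < 0 -> backwards in time
print(x, np.linalg.norm(v))
```

In the actual procedure, fields(x) would interpolate the time-dependent OpenGGCM electromagnetic field cubes rather than return a constant value.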
Regardless of the process, the fact remains that SEPs enter the magnetotail far downstream at distances that are consistent with estimations based on the velocity of the ICME during this event (see Section 3.1 above) and consistent with previous studies suggesting SEP entry at locations up to 1,000 R E downstream (e.g., Dessler, 1964;Dungey, 1965;Evans, 1972;Milan, 2004;Ness et al., 1967;Paulikas, 1974). We simulated 25 keV ≤ E ≤ 1 MeV particles at a range of initial pitch angles and at several starting times between 12:00 on 23 June 2013 and 12:00 on the 24th, finding that a majority (∼90%) demonstrated behaviors like those in Figure 3. Additional tracing confirms a similar behavior of protons entering the distant magnetotail during the 05 September 2017 event (see Figure S3 in Supporting Information S1). These tests confirm the robustness of our conclusions that SEPs access the terrestrial magnetotail far downstream along open magnetospheric field lines that have one end rooted in the terrestrial polar cap and the other end connected to the IMF. Conclusions This study has presented THEMIS-ARTEMIS observations obtained while the probes were embedded deep within Earth's magnetotail. During two SEP events, the probes observed clear signatures of energetic protons, nearly unchanged when compared to their detection upstream by Wind, indicating direct access of SEPs to the lunar orbit, even when the Moon is within the magnetotail. The open nature of the magnetic field lines suggests that the magnetosphere is ineffective in shielding particles above keV energies from reaching the Moon. This finding contradicts previous studies that have suggested that the magnetotail is effective in shielding protons from 1 to 100 MeV (e.g., Harnett, 2010;Jordan et al., 2022;Winglee & Harnett, 2007;X. Xu et al., 2017), but is consistent with observations within the tail of particles at energies E ≳ 10 MeV (Case et al., 2010). Notably, shielding future astronauts and equipment on the lunar surface from energetic particle radiation is a key consideration of the upcoming missions to the Moon, including the crewed Artemis missions to explore the south pole. Although penetrating particles at energies greater than ∼1 MeV are responsible for the most severe damage to astronauts and electronics (e.g., Cucinotta et al., 2010;Xapsos et al., 2007), our findings that SEPs at energies above 25 keV enter the magnetotail along open field lines indicate that the more damaging, higher-energy particles likely also have nearly unrestricted access to the lunar surface. Hence, in addition to those particles at higher energies, keV-to-MeV SEPs present a clear potential hazard to future exploration of the lunar surface, even for times when the Moon is within the terrestrial magnetotail.
v3-fos-license
2019-05-07T14:06:04.386Z
2019-04-17T00:00:00.000
145842706
{ "extfieldsofstudy": [ "Physics" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://downloads.hindawi.com/journals/amse/2019/6707143.pdf", "pdf_hash": "58477327d52ee4ca2e946fb50bbc4ede531ab3ce", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:359", "s2fieldsofstudy": [ "Materials Science", "Chemistry" ], "sha1": "58477327d52ee4ca2e946fb50bbc4ede531ab3ce", "year": 2019 }
pes2o/s2orc
Evaluation of Structural Properties and Catalytic Activities in Knoevenagel Condensation Reaction of Zeolitic Imidazolate Framework-8 Synthesized under Different Conditions

In the present study, the zeolitic imidazolate framework-8 (ZIF-8) was synthesized at both room temperature and high temperatures. The effects of solvents, molar ratios of precursors, reaction time, and temperature on the structural properties of the as-prepared materials were investigated. Moreover, the surface morphologies of the obtained specimens were characterized using X-ray diffraction, scanning electron microscopy, Fourier-transform infrared spectroscopy, and nitrogen adsorption methods. The results show that ZIF-8 was formed in methanol and water at room temperature and in dimethylformamide (DMF) at high temperatures. Further, in methanol, the molar ratios of precursors and reaction time have negligible effects on the morphologies and structures of ZIF-8; however, in DMF, the reaction temperature has a significant influence on the microstructures of ZIF-8. The catalytic activities of the obtained materials were evaluated using the Knoevenagel condensation reaction, and ZIF-8 proves to be an excellent solid base catalyst.

Introduction
Zeolitic imidazolate frameworks (ZIFs) belong to the family of metal-organic frameworks (MOFs) and possess unique properties (uniform small pores and high surface area) of both zeolites and MOFs [1,2]. In ZIFs, divalent metal cations are tetrahedrally coordinated with the imidazolate anions [2,3]. In recent years, ZIFs have attracted significant attention in gas storage and separation applications [4,5], catalytic reactions [6], chemical processes [7,8], and drug delivery systems [9]. ZIF-8 is one of the most studied zeolitic imidazolate frameworks due to its high chemical and thermal stabilities. In addition, ZIF-8 has a large surface area (S_BET = 1630 m²·g⁻¹) and high porosity (0.636 cm³·g⁻¹) [2]. In previous studies, ZIF-8 was synthesized in DMF at high temperature and pressure [2,3]. ZIF-8 can also be synthesized in methanol at room temperature under normal pressure [2,10-14]. In these two approaches, the solvents and temperature/pressure have a critical role in the formation of ZIF-8. However, there have not been any studies dealing with this issue. The Knoevenagel condensation is an important reaction for the formation of carbon-carbon double bonds. The reaction is traditionally catalyzed by conventional bases such as KOH, NaOH, or amine compounds [15]. Today, many researchers have conducted this reaction with different precursors by using solid base catalysts to expand their applicability in the organic synthesis industry [16-18]. Recently, Tran et al. [19] studied the catalytic activity of ZIF-8 in the Knoevenagel condensation reaction and advocated its feasibility as a catalyst. In the present work, ZIF-8 was synthesized via two different routes.
The effects of solvents, molar ratios of precursors, reaction time, and temperature on the structural properties of the as-prepared materials were investigated. In addition, the catalytic activities of the fabricated materials were evaluated using the Knoevenagel condensation reaction.

The as-produced white solid material was collected by filtration, washed in a Soxhlet apparatus for two days with MeOH, and dried at 100 °C to obtain ZIF-8 (denoted Z-B(DMF)). The effects of solvents, molar ratios of Zn(NO3)2·6H2O and meIm, reaction time, and temperature on the structural properties of the as-prepared ZIF-8 samples were further studied. The N2 adsorption/desorption isotherm measurement was performed at 77 K in a Tristar 3000 analyzer; before determination of the dry mass, the samples were degassed at 250 °C under N2 for 5 h. Scanning electron microscopy (SEM) images were obtained using an SEM JMS-5300LV (Japan), and infrared spectra (IR) were recorded on a Jasco FT/IR-4600 spectrometer (Japan) in the range of 4000-400 cm⁻¹. The compositional analysis of the reactants and products in the liquid sample was carried out using a GC-MS chromatograph (Agilent GC-MS 7890). The conversion and the selectivity were calculated according to the following equations:

conversion (%) = (moles of reacted benzaldehyde / moles of initial benzaldehyde) × 100,
selectivity (%) = (moles of ethyl-2-cyano-3-phenylacrylate / total moles of products) × 100. (1)

Structural Properties of Z-A(MeOH) and Z-B(DMF).
Figure 1(a) exhibits the XRD patterns of ZIF-8 synthesized with process A and process B. The diffraction peak (011) at 2θ = 7.2° is observed in both samples, indicating their high crystallinity [2,10,12,20,22]. However, the intensity of the diffraction peaks of Z-B(DMF) is higher than that of Z-A(MeOH), which means that Z-B(DMF) has better-developed symmetric planes. The FT-IR spectra of both Z-A(MeOH) and Z-B(DMF) are displayed in Figure 1(b), and the findings are consistent with earlier reported results [19,23,24]. The bands at 3122 cm⁻¹ and 2920 cm⁻¹ are associated with the aromatic and the aliphatic C-H asymmetric stretching vibrations, respectively. The band at 1668 cm⁻¹ is attributed to the C=C stretching mode, and the band at 1574 cm⁻¹ is assigned to the C=N stretching mode. The bands at 1300-1460 cm⁻¹ are associated with stretching of the entire ring, whereas the band at 1140 cm⁻¹ arises from the aromatic C-N stretching mode. Similarly, the bands at 991 cm⁻¹ and 748 cm⁻¹ can be assigned to the C-N bending vibration mode and to the C-H bending mode, respectively. Moreover, the band at 690 cm⁻¹ is due to the ring out-of-plane bending vibration of imidazolate. The sharp band at 416 cm⁻¹, formed due to Zn-N stretching, indicates that zinc atoms are connected to the nitrogen atoms of the 2-methylimidazolate linkers. In process A, the formation path of ZIF-8 depends on the reaction time [10,11]. However, in process B, ZIF-8 was prepared in DMF at 100 °C over three days; hence, a fully crystalline ZIF-8 phase was obtained. The specific surface areas of Z-A(MeOH) and Z-B(DMF) are 1279 m²·g⁻¹ and 1415 m²·g⁻¹, respectively (Table 1). These values are higher than those of ZIF-8 synthesized by other routes [23,26]. Table 2 describes the characteristics of ZIF-8 obtained from process A and process B with different solvents. It is clear that toluene is an unsuitable solvent for ZIF-8 synthesis. This can be attributed to the very small dipole moment of toluene (0.36 D) compared with that of methanol (1.69 D), water (1.85 D), and DMF (3.86 D), so that deprotonation of the meIm compound cannot occur to form a meIm⁻ ion.
Figure 4 shows that the samples synthesized in water have characteristic diffraction peaks (at 2θ < 30°) and characteristic vibration bands. However, in process B, the diffraction peaks at 2θ = 31.73°, 34.4°, 36.23°, and 47.48° and the band at 492 cm⁻¹ indicate the presence of ZnO and of Zn-O bonds in ZIF-8 [26]. This can be ascribed to the hydrolysis of Zn²⁺ ions in water at high temperatures. The morphologies of the samples synthesized in water are displayed in Figure 5. In process A, the surface morphologies of ZIF-8 are indeterminate (Figure 5(a)). The results in Table 2 indicate that exchanging the solvents between the two processes did not lead to the formation of ZIF-8. This is due to the large differences in the boiling temperature and dipole moment of the solvents, indicating that solvents play an important role in the synthesis of ZIF-8.

Effects of Synthesis Time and meIm/Zn Molar Ratios.
Figure 6 displays the SEM images of the ZIF-8 samples synthesized at two different stirring intervals: two days and five days. Both samples consist mainly of hexagonal and rhombic dodecahedron crystals with a diameter of ∼100 nm. Further, their very sharp diffraction peaks at 2θ below 10° indicate the formation of highly crystalline materials (Figure 7(a)). The XRD patterns of the samples synthesized at different meIm/Zn molar ratios are presented in Figure 7(b). In all cases, the amounts of Zn(NO3)2·6H2O and methanol were kept constant. No conspicuous difference was noticed in the XRD peaks of the samples; however, their relative crystallinities slightly decrease as the meIm/Zn molar ratio increases from 64.4/8 to 100/8. These findings are consistent with those reported by Zhang et al. [24].

Effects of Synthesis Temperature.
The effects of reaction temperature on ZIF-8 synthesized with process B are depicted in Figure 8. Noticeably, better cohesion of the particles in the sample synthesized at 200 °C reduces the intensities of the diffraction peaks at larger angles. However, as the reaction temperature increases, the intensities of the diffraction peaks at angles less than 10° become higher, indicating the formation of ZIF-8 with higher crystallinity. In conclusion, the meIm/Zn molar ratios, reaction time, and temperature have an impact on the crystallinity of ZIF-8 but do not affect its crystalline structure.

Catalytic Test.
The Knoevenagel condensation reaction between benzaldehyde and ethyl cyanoacetate to form ethyl-2-cyano-3-phenylacrylate (Scheme 1) was used to test the catalytic activities of the synthesized ZIF-8 samples. The effect of the different ZIF-8 samples on the Knoevenagel condensation reaction is illustrated in Figure 9.
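For reference, the conversion and selectivity defined in equation (1) are simple mole ratios obtained from the GC-MS data. The sketch below just spells out that arithmetic; the mole values are made-up placeholders, not measurements from this study.

```python
def conversion_pct(n_benzaldehyde_initial: float, n_benzaldehyde_final: float) -> float:
    """Benzaldehyde conversion, % (equation 1)."""
    return (n_benzaldehyde_initial - n_benzaldehyde_final) / n_benzaldehyde_initial * 100.0

def selectivity_pct(n_target_product: float, n_total_products: float) -> float:
    """Selectivity toward ethyl-2-cyano-3-phenylacrylate, % (equation 1)."""
    return n_target_product / n_total_products * 100.0

# Hypothetical example: 10.0 mmol benzaldehyde charged, 2.35 mmol left unreacted,
# 7.5 mmol of products formed, of which 7.4 mmol is the target acrylate.
print(conversion_pct(10.0, 2.35))   # 76.5 %
print(selectivity_pct(7.4, 7.5))    # ~98.7 %
```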
Evidently, all synthesized samples exhibit excellent catalytic activity in the Knoevenagel condensation reaction (the benzaldehyde conversion in the catalytic reactions is considerably higher than that in the reactions without catalysts). Figure 9(a) shows that the conversion of benzaldehyde depends on the crystallinity of ZIF-8. The conversion is greater when the intensity of the diffraction peaks at 2θ < 10° is higher (Figure 1(a)). The Knoevenagel condensation reaction is commonly catalyzed by liquid or solid bases. ZIF-8 is a bifunctional catalyst composed of both acidic (Lewis acid Zn²⁺ ions) and basic sites (imidazole groups) [20]. It was noticed that … into the reaction solution is found. Thus, in the ZIF-8 sample synthesized with process B at 200 °C, the Lewis acid Zn²⁺ sites may become saturated; hence, the decrease in the density of the Lewis acid sites results in more base sites from the imidazole linkers. A comparison of the benzaldehyde conversion of the Knoevenagel condensation with different catalysts is shown in Table 3. Although the reaction conditions are different, the benzaldehyde conversion obtained in this study (76.5%) is higher than in some previous studies (51-70%) and consistent with others (78-92%).

Conclusions
ZIF-8 was formed in methanol and water at room temperature and in DMF at high temperatures (100-200 °C). In methanol, the reaction time and meIm/Zn molar ratio have small effects on the microstructures (uniform particles of ∼100 nm diameter) of ZIF-8; however, the diffraction peaks at angles smaller than 10° showed slight variations. In contrast, ZIF-8 synthesized in DMF shows full crystallinity with varying particle sizes (3-20 μm), with better cohesion of the particles observed at 200 °C. All ZIF-8 samples exhibit excellent catalytic activity in the Knoevenagel condensation reaction because of the base sites of the imidazoles. When ZIF-8 is highly crystalline (samples synthesized in MeOH for five days and in DMF at 200 °C), the activity of the base sites of imidazole prevails over that of the Lewis acid sites of Zn²⁺, resulting in a higher conversion of benzaldehyde.

Figure 3 displays the nitrogen adsorption/desorption isotherms of the Z-A(MeOH) and Z-B(DMF) samples at 77 K. According to the IUPAC classification, the isotherm curves belong to type I, indicating that Z-A(MeOH) and Z-B(DMF) are microporous materials; the corresponding specific surface areas are those given above.
Table 2: Characteristics of ZIF-8 obtained in different solvents.
Table 3: Effect of the different catalysts on the Knoevenagel condensation. *ZIF-8 sample synthesized with process B at 200 °C.
Scheme 1: Knoevenagel condensation reaction between benzaldehyde and ethyl cyanoacetate.
v3-fos-license
2020-04-30T09:11:02.462Z
2020-04-08T00:00:00.000
219071788
{ "extfieldsofstudy": [ "Environmental Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://geology-dnu.dp.ua/index.php/GG/article/download/664/576", "pdf_hash": "2ec2eaa26962b2b94d3d4431c8c5657bed22cd75", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:360", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "sha1": "77f9575e5b3a6bb8b50408c77738b3c81084a444", "year": 2020 }
pes2o/s2orc
Human-induced load on the environment when using geothermal heat pump wells

The research aims to study the dynamics of change in the temperature regime of the Earth's subsurface layer when heat is extracted with geothermal heat pump systems, to reveal and describe the specific environmental effects of technologies that use geothermal resources, and to give practical recommendations for further development of methods for designing heat pumps using the low-potential heat energy of soil, based on long-term forecasting and efficiency assessment. Mathematical statistics and mathematical modelling methods were applied for assessment of economic and environmental effects. Methods based on the principles of the theory of thermal conductivity, hydromechanics, the theory of differential equations and mathematical analysis were applied for calculation of the proposed systems and review of the field observation findings. For research purposes the authors developed an experimental geothermal heat pump system consisting of four structurally connected geothermal wells, each with installed U-shaped twin collectors of 200 m overall length, and a heat pump of 14 kW capacity with a 300 L heat energy battery connected to the building heat-supply system. They also created a computer data archiving and visualisation system and devised a research procedure. The paper provides an assessment of the effect of changes in the process operation mode of the heat pump system on the soil temperature near the geothermal well. As a result, the authors found that the higher the intensity of heat energy extraction, the lower the soil temperature near the geothermal heat exchanger, in proportion to the load on the system. Moreover, it was determined experimentally that at critical loads on the geothermal heat exchanger the soil temperature cannot regenerate quickly enough and may reach negative values. The research also determined the relation between the in-service time and season of the system operation and the temperature fluctuations of the geothermal field. For example, it was found experimentally that the heat flow from the well spreads radially, from the well axis to its borders. Additionally, it was shown that, depending on the heat load value, the bed temperature changes after the first launch; the geothermal field temperature changed from the time of the first launch during one year of operation by 0.5 °C on average. The research proved that, depending on the heat load value, under seasonal operation (heating only or cooling only) the soil temperature decreased by 2.5 °C over five years and reached a quasi-steady state, whereas under year-round operation (heating and cooling) the geothermal field stabilised as early as the second year of operation. In conclusion, the paper reasonably states that geothermal heat pump systems using vertical heat exchangers installed in wells put no significant human-induced load on the environment. At the same time, issues of a scientific approach to development of the required configuration of the geothermal collector, a methodology for its optimal placement, and determination of efficiency depending on operating conditions remain relevant.

Keywords: human-induced load, geothermal well, heat pump.

Introduction.
One of today's most pressing problems for society worldwide is the need to balance satisfaction of current human demands with protection of the interests of future generations. In Ukraine, as in many other countries, the environmental impact of the power generation industry is associated with considerable pollutant emissions from the companies of the fuel and energy complex. One way to ensure environmental security for the state is the transition to environmentally friendly technologies, which is impossible without broad use of renewable energy sources. In the climatic and geographical conditions of Ukraine, one of the promising areas of renewable energy use, both for ecology and for the economy, is the application of geothermal heat pump systems (GTS) that use the low-potential energy of the Earth's subsurface layers as a renewable energy source (Goshovskyi, Zurian, 2013). To obtain primary energy, GTSs are equipped with heat exchangers installed in geothermal wells, which together form an integrated system for extraction of low-potential renewable energy from soil. Operational experience with geothermal wells shows that continuous extraction or discharge of heat energy causes considerable change in the soil heat balance. The change in the temperature background that develops during long-term GTS operation may significantly alter the temperature of the soil mass (the geothermal field); this change is not immediately compensated by background heat flows and adds to the human-induced load on the environment (Saprykina, Yakovlev, 2016).

Review of recent research and publications.
The practical issue of using heat pump systems in Ukraine and worldwide, where the low-potential energy of the Earth's subsurface layers is used as a primary energy source, is examined in numerous research works (Boyle, 2014; Tidwell, Weir, 2016; Morozov, 2017; Limarenko, Taranenko, 2015). The works of Shubenko, Kuharec, 2014; Morrison and others, 2004; Hepbasli, Kalinci, 2009 have proved scientifically that a heat pump itself, as a component of the heat pump system, is an environmentally clean device whose principal function is to transfer low-temperature energy from a renewable source to the building heat-supply system at the temperature and capacity required by the consumer. The studies by Gao and others, 2008; Li et al., 2009; Nikitin et al., 2015 suggested that the human-induced load on the environment and potential environmental hazards may arise from the thermophysical processes that occur in the geothermal heat exchanger-soil system. The work by Chao et al., 2016 presents a mathematical analysis of the soil heat balance. Based on a digital-analytical simulation of the system, the study by Kordas, Nikoforovich, 2014 revealed the interconnection of energy exchange processes between the soil and the heat-carrying agent of the geothermal heat exchanger under stable conditions. In addition, analytical calculations based on mathematical models, aimed at devising methods for forecasting the temperature field during operation of a geothermal well under various process conditions, were made in the studies by Saprykina, Yakovlev, 2017 and Nakorchevsky, Basok, 2005. Filatov, Volodin, 2012 designed a laboratory bench and investigated temperature changes near the well using an experimental model.
A review of the references showed a lack of attention to experimental investigation of the environmental effect of the geothermal well on the geothermal field during field operation of the GTS under particular lithologic and geographical conditions. Research on the dynamics of thermal field change is largely focused on mathematical calculations and experimental investigations with laboratory models, which may not always give unbiased scientific information and requires experimental confirmation under field conditions. In view of the above, it is necessary to determine the dynamics of changes in the temperature regime of the Earth's subsurface layer when heat is extracted with geothermal heat pump systems during field operation of the heat pump unit, namely: to describe the specifics of energy inflow from the rocks to the inner space of the well without circulation of the heat-carrying agent over a long time interval; to determine the impact of changes in the process operation mode of the heat pump system on the soil temperature near the geothermal well and to describe the deviations in temperature fluctuations of the geothermal field in relation to the duration and seasonality of system operation; to determine the impact of intensive extraction of heat energy with geothermal heat pump systems on the regeneration ability of the geothermal field; and to give practical recommendations regarding further development of methods for designing heat pumps using the low-potential heat energy of soil, based on long-term forecasting and assessment of environmental impact and efficiency.

Research data and methods.
Mathematical statistics and mathematical modelling methods were applied for assessment of economic and environmental effects. Measurements of temperature, pressure, and heat-carrying agent flow rate were made with direct-reading sensors and DC sensors with electrical data transfer. Digital data were processed with a MAXYCON FLEXY controller and software based on the FDB open configurator by RAUT AUTOMATIK. Methods based on the principles of the theory of thermal conductivity, hydromechanics, the theory of differential equations and mathematical analysis were applied for calculation of the proposed systems and review of the field observation findings.

Findings and review.
Geothermal resources are considered to include, first of all, thermal fluids and the warmth of heated dry rocks. The geological regions of Ukraine differ in geothermal conditions. For example, the Ukrainian Shield, the southern slope of the Voronezh massif (the northern side of the Dnipro-Donetsk depression), and the Volyn-Podillia basin have very low geothermal gradients. The Black Sea depression, the Plain Crimea, and the Transcarpathian inner trough have higher gradients and are promising for using the Earth's heat. The Ukrainian Shield as a whole features the lowest geothermal gradients compared with the rest of the territory of Ukraine. From the geothermal point of view, it has been studied almost exclusively in the areas of the iron-ore deposits of Kryvyi Rig and Bilozerka. The average value of the geothermal step for Kryvyi Rig, which represents the area of the lowest geothermal gradients, is 116.3 m per degree. Generally, geothermal step values within the Ukrainian Shield vary from 90 to 185 m per degree, increasing mainly in the areas of large tectonic faults (Fig. 1). The Earth's heat is an energy resource. The geothermal resources of Ukraine at developed depths are described by the thermophysical properties of the Earth, namely temperature and heat flow density.
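To illustrate what geothermal steps of 90-185 m per degree imply for the undisturbed rock temperature at heat-exchanger depths, the sketch below performs the simple linear extrapolation T(z) = T0 + (z - z0)/step. The 10 °C neutral-layer temperature and the 20 m neutral-layer depth are illustrative assumptions, not measurements reported in this study.

```python
def rock_temperature_c(depth_m: float, geothermal_step_m_per_deg: float,
                       neutral_layer_temp_c: float = 10.0,
                       neutral_layer_depth_m: float = 20.0) -> float:
    """Linear estimate of undisturbed rock temperature below the neutral layer."""
    return neutral_layer_temp_c + (depth_m - neutral_layer_depth_m) / geothermal_step_m_per_deg

# Kryvyi Rig-like conditions (step ~116.3 m per degree) at a 200 m well depth:
print(round(rock_temperature_c(200.0, 116.3), 1))   # ~11.5 degC
```

With the Kryvyi Rig-like step of 116.3 m per degree, a 200 m well bottom would sit only about 1.5 °C above the assumed neutral-layer temperature, which is why such resources are classed as low-potential.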
The geothermal energy source has a diversified impact on the environment. Since geothermal resources are a renewable energy source, environmental sustainability should be their principal advantage. Firstly, geothermal power stations do not require large land areas; secondly, discharge waters are pumped back into the well, which helps maintain the environmental security of the region and a stable production process; thirdly, geothermal power stations release a much smaller amount of toxic substances into the atmosphere: a geothermal station releases 0.45 kg of CO2 per 1 MW·h of produced power, whereas a thermoelectric power station releases 464 kg when running on natural gas, 720 kg on fuel oil, and 819 kg on coal (Limarenko, Taranenko, 2015). At the same time, the geothermal power industry has disadvantages that can be summarised as follows: firstly, the action of mineralised geothermal waters and vapours; secondly, subsidence of the land surface located over the mined geothermal layer; thirdly, changes in the groundwater level, the formation of sinkholes in the soil, and swamping; fourthly, gas emissions (methane, hydrogen, nitrogen, ammonia, hydrogen sulphide) and heat emissions into the atmosphere or surface waters; fifthly, contamination of groundwater and water-bearing layers and soil salinisation; sixthly, changes in the temperature fields of underground levels (Degtyarev, 2013). Thus, in spite of the seemingly simple and accessible use of geothermal energy, the technical and environmental implementation of this method of power generation is a complex scientific and technical issue. There is also particularly strong interest today in the possible use of the energy of the subsurface layers of the Earth (at depths of up to 400 m) for the heating of both residential buildings and industrial facilities using heat pumps (Limarenko, Taranenko, 2015). Over 90% of the territory of Ukraine, at industrially accessible depths of 50 to 100 m below the Earth's surface, permanently maintains temperatures of 14 to 18 °C, which can be classified as a low-potential heat source. This temperature range cannot be used directly in most production processes, including heating systems. In this connection, the extraction of low-potential energy with heat pumps, which make it possible to obtain the required heat-carrying agent temperatures at relatively low cost, appears to be the most promising approach (Saprykina, Yakovlev, 2017). Heat pumps with vertical soil heat exchangers (VSHE), which are PE pipes placed in wells at depths of up to 400 m, are widely used. The space around the pipes is filled with a special heat-conducting solution. The heat-carrying agent is heated in the VSHE and transfers its heat energy to the evaporator of the heat pump (HP); the working-fluid vapour is then compressed in the compressor and condensed in the condenser. This process is accompanied by the supply of the extracted heat energy to consumers (Filatov, Volodin, 2012). The principal advantage of a geothermal heat pump is its high performance, which results from the high energy conversion factor (ECF) of the heat pump (400% to 500%); this means that 4-5 kW of heat energy is obtained for each 1 kW of consumed electrical power, which allows for lower operational costs (Filatov, Volodin, 2012). The introduction of heat pump technologies for heat production in Ukraine is one of the effective energy-saving measures that make it possible to save fossil fuel and reduce environmental pollution.
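To make the figures above concrete, the following minimal Python sketch compares annual CO2 emissions for a hypothetical heat demand using the per-MW·h values quoted in the text, and illustrates the ECF relation "4-5 kW of heat per 1 kW of electrical power". The annual demand of 20 MW·h and the ECF value of 4.5 are illustrative assumptions, not data from the study.

# Illustrative sketch (hypothetical demand): compares annual CO2 emissions
# using the per-MWh figures quoted above, and shows the ECF (COP) relation.
EMISSION_KG_PER_MWH = {        # figures quoted in the text
    "geothermal station": 0.45,
    "natural gas": 464.0,
    "fuel oil": 720.0,
    "coal": 819.0,
}

def annual_emissions(heat_demand_mwh: float) -> dict:
    """CO2 emissions (kg/year) if the whole demand came from each source."""
    return {src: f * heat_demand_mwh for src, f in EMISSION_KG_PER_MWH.items()}

def heat_output_kw(electric_input_kw: float, ecf: float = 4.5) -> float:
    """Heat delivered by a heat pump with the given energy conversion factor."""
    return electric_input_kw * ecf

if __name__ == "__main__":
    demand = 20.0  # MW*h per year, hypothetical single-family demand
    for source, kg in annual_emissions(demand).items():
        print(f"{source:.<20} {kg:8.1f} kg CO2/year")
    print("1 kW electrical input ->", heat_output_kw(1.0), "kW of heat (ECF = 450%)")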
The harmful emissions associated with heat pump operation are those generated where the electrical power is produced; no harmful emissions are produced at the place where the heat pump is installed. Compared with traditional boiler houses, heat pumps with an ECF of 3.0 produce almost half the emissions of nitrogen, sulphur and carbon oxides relative to operation on coal, more than 1.5 times less than operation on fuel oil, and 30% less than operation on natural gas (Goshovsky, Zuryan, 2017). Operational experience with existing GTSs shows that there is not enough information regarding: a) the impact of heat energy extraction from the subsurface layers of the Earth by the geothermal well on regeneration processes in the near-well area over a long time interval (5-7 years); b) the relation between the in-service time and season of system operation and the temperature fluctuations of the geothermal field; c) the connection between unstable GTS operation and the termination of heat exchange between the well and the geothermal field; d) the impact of intensive extraction of heat energy with geothermal heat pump systems on the regeneration ability of the geothermal field. The temperature field is a complicated object for both natural (experimental) and mathematical study and is governed by variable boundary conditions that depend on the climate of the region, the operation mode of the object, the season, changes in the thermophysical properties of the soil, etc. For the purpose of investigating the temperature field around a vertical well, the Ukrainian State Geological Research Institute developed and installed an experimental geothermal heat pump system to extract heat energy. The principal diagram of the experimental geothermal heat pump system is shown in Fig. 2. The surface part of the experimental power system consists of a heat power battery and heat pump elements with an automation system. The geothermal collector for the collection of low-temperature heat energy is made of plastic pipe 32 mm in diameter and consists of four heat exchangers coupled in parallel. The pipe length in each heat exchanger is 200 m, and the total length of the collector is 800 m. A 25% aqueous propylene glycol solution (C3H8O2) was used as the heat-carrying agent. For the purposes of the study, the complex included measurement equipment and a management information system. Measuring devices, including temperature probes and heat-carrying agent flow-rate sensors, are installed both in the surface and in the underground parts of the complex. Temperature sensors (resistance thermal converters) TSP-204 were used for temperature measurements at the check points. The TSP-204 resistance thermal converters are included in the State Register of Measuring Devices of Ukraine under number U246-07. The working range of measured temperatures is -40 °C to +270 °C, and the thermal response time does not exceed 6-8 s. The temperature sensors in the surface part of the power system are installed in the supply and return pipelines of all loops, on the heat battery, and at the input and output of the heat-carrying agent flow-rate sensors. The sensor readings were taken automatically at five-second intervals. A SENSUS water meter was used for measuring the flow rate of the heat-carrying agent; its rated flow is 10 m³ per hour and it withstands a working pressure of 16 bar.
Six heat-carrying agent flow-rate sensors are connected to the system: four at each line of heat-carrying agent supply to the probes (geothermal) and two at common lines for heat-carrying agent flow over lowtemperature and high-temperature loops of the landbased part of the system. The temperature sensors installed in the underground part of the geothermal power system allow for discrete measurements of soil temperature at depths of 0.02 to 50.0 m and heat-carrying agent temperature both in vertical and horizontal parts of the soil collector at the site between the geothermal well and entrance to the building (Goshovskyi, Zurian, 2015). The MAXYCON FLEXY controller and special software using FDB open configurator by RAUT AUTOMATIK in the geothermal system allowed for data collection from the measuring devices to be further processed and recorded into the archive, interpreted and shown on the computer monitor by means of visualisation software in real time (Fig. 3). MAXYCON FLEXY controller allows to take readings from more than 36 data channels and operate the system remotely both offline and manually. The research was carried out in three stages: Description of thermalphysic specifics of geothermal energy in-flow generated by rocks to the inner space of the well without circulation of heat-carrying agent therein over the long time interval. Investigation of the impact made by intensive extraction of heat energy with geothermal heat pump systems on regeneration abilities of the geothermal field. Investigation of the impact of change in process operation mode of the heat pump system on the soil temperature near the geothermal well and determination of the extent of deviation of temperature fluctuations of the geothermal field in connection to duration of the system operation. 1. With the purpose of determination of regularities in seasonal temperature changes in the upper layers of the Earth and depth of annual temperature changes in soil, investigators applied the experimental research method which allowed for temperature measurements of intact soil during twelve months, from October 2018 to September 2019. The temperature sensors installed in the well allowed to measure the soil temperature at standard depths during the experiment: 0.02; 0.30; 0.70; 1.2; 2.0; 5.0; 15.0; 35.0; 50.0 m. The sensor readings were taken automatically with a time interval of five seconds. Measurements of soil temperature were made at the geothermal landfill of the Ukrainian State Geological Research Institute. In order to maintain the experimental integrity, no heat extraction was effected from the geothermal field where the research was carried out both before and during the experiment. Findings that allowed to make an analysis of relation between the soil temperature change and depth at various time intervals, from a day to a year, and to determine the relation of average monthly temperatures T and depth h for soil mass in the place where geothermal probes were installed (geothermal field), were obtained in the course of research. It was found out by experiment that daily fluctuations of ambient air temperature caused by change of sunlight intensity had significant impact on the soil temperature at depth of up to 0.30 m. Starting from depth of 0.70 m and more, daily fluctuation of air temperature has no impact on change of the soil temperature. 
It was scientifically substantiated that the soil temperature at depths of up to 2 m tends to decrease continuously over the month, following the general decrease in air temperature. Moreover, it is essential that the change in soil temperature at depths of up to 0.70 m depends on the air temperature, whereas over the course of a month the air temperature has no impact on the soil temperature at depths greater than 5 m. The soil temperature at a depth of 2 m, decreasing from 18 °C at the beginning of the month to 15 °C at its end, can be seen to cross the soil temperature isotherm at a depth of 5 m (Fig. 4). The experimental findings obtained during the year lead to the conclusion that the difference between extreme temperature values ∆Т tends to decrease as the depth h increases (Table 1). Besides this stable tendency towards 'compression' of the bundle of temperature curves, the review of the data given in Table 1 allows the conclusion that the average annual temperature (T̄) is practically independent of the depth h for each measurement series. For the temperature data in Table 1 we obtained, with increasing depth, the following values of (T̄) (in °C): 12.37; 12.63; 12.92; 13.04; 12.85; 12.31; 13.26; 12.98; 12.11. Consequently, if h_r is taken to be the depth at which no seasonal temperature fluctuations are observed, then the temperature T(h_r) can be determined as the arithmetic mean of the average annual temperatures (T̄). In addition, considering that the value h_r satisfies the condition ∆Т(h_r) = 0, the experimental findings given in Table 1 show that the depth satisfying this condition lies within about 15 m. 2. The impact of intensive extraction of heat energy with geothermal heat pump systems on the temperature fluctuations of the geothermal field was investigated. According to the investigation procedure, the intensity of heat energy extraction from the geothermal field was varied by changing both the number of wells with geothermal heat exchangers involved in extracting the Earth's warmth and the number of geothermal heat exchangers installed in the wells (Zurian, 2019). The technical capabilities of the experimental system allowed the experiment to be carried out in the following configurations of the geothermal heat exchanger: 1) 4×2: four wells, each with two U-shaped geothermal heat exchangers installed; 2) 4×1: four wells, each with one U-shaped geothermal heat exchanger installed; 3) 2×1: two wells, each with one U-shaped geothermal heat exchanger installed. Meanwhile, the heat load on the system over the condenser loop was kept unchanged, and the flow rate of the heat-carrying agent through the building heat supply system remained the same.
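As a simple illustration of the averaging step described in point 1 above, the short Python sketch below computes the arithmetic mean of the nine average annual temperatures listed in Table 1, which gives the estimate of T(h_r) at the depth where seasonal fluctuations die out; the small spread of the values supports the conclusion that the average annual temperature is practically independent of depth.

# Illustrative sketch: arithmetic mean of the nine average annual temperatures
# quoted above (degrees C, values taken from the text / Table 1).
annual_means = [12.37, 12.63, 12.92, 13.04, 12.85, 12.31, 13.26, 12.98, 12.11]

t_hr = sum(annual_means) / len(annual_means)
spread = max(annual_means) - min(annual_means)

print(f"Estimated T(h_r) = {t_hr:.2f} C")        # about 12.7 C
print(f"Spread of annual means = {spread:.2f} C")  # small spread -> depth-independence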
The investigation procedure and the capabilities of the software developed by the Ukrainian State Geological Research Institute allowed the discrete time intervals required for the research to be set, which made it possible to obtain the necessary findings and to draw conclusions about the following dependencies: 1) the heat-carrying agent temperature at the output of the geothermal system condenser decreases with a reduction in the number of geothermal heat exchangers, but does not depend significantly on their configuration; 2) the temperature hysteresis of the heat-carrying agent in the condenser loop decreases both when the number of geothermal heat exchangers is reduced and when the heat exchanger configuration is changed from U×2 to U×1; 3) the heating capacity of the geothermal system decreases when the number of geothermal heat exchangers is reduced and decreases slightly when the heat exchanger configuration is changed from U×2 to U×1; 4) the heat-carrying agent temperature at the input and output of the geothermal system evaporator decreases with a reduction in the number of geothermal heat exchangers and depends on their configuration; 5) the heat-carrying agent temperature at the input and output of the geothermal system evaporator decreases when the heat exchanger configuration is changed from U×2 to U×1; 6) the difference between the ambient medium and working body temperatures at the evaporator output increases uniformly both with a reduction in the number of geothermal heat exchangers and when the heat exchanger configuration is changed (Table 2). Accordingly, the experimental measurements of temperature conditions in the near-well area during heat energy extraction for the operation of the collector heat pump show that, with increased intensity of heat energy extraction, the soil temperature near the geothermal heat exchanger decreases in proportion to the increase in the system load. In addition, in accordance with the tasks set, GTS operation under extreme loads on the geothermal heat exchanger was studied. As in the previous experiments, the initial temperature of the heat-carrying agent in the heat supply system was 28 °C, and that of the propylene glycol in the soil heat exchanger loop under continuous circulation was 15 °C. At the beginning of the experiment, from 6:12 pm till 6:25 pm under the 1×1 mode, the temperature dynamics of both the high-temperature and low-temperature loops matched the processes that took place under the 4×2, 4×1, 2×2, 2×1 and 1×2 experimental conditions; however, the experiment showed the operation of the system under this mode to be unstable. The propylene glycol temperature started to drop at the very beginning of the experiment, the rise of the heat-carrying agent temperature in the heat supply system slowed down, and the system could be operated in this mode only for a short period (Fig. 5). At 6:33 pm, the output temperature of the low-temperature loop evaporator reached the extreme value of 2 °C, which triggered the safety automation signal. The further recurrent unstable operation of the geothermal system after 6:33 pm was associated with the failure of the 1×1 configuration soil heat exchanger to restore the required evaporator input temperature at the set heating capacity of 12 kW at the geothermal system output.
Up to the moment the system switched to emergency operation, the readings of temperature and heat-carrying agent flow rates in the low-temperature and high-temperature loops and the calculated system capacities under operation in the 1×1 mode were as follows: the temperature hysteresis between the input and output of the heating system at 6:30 pm was 5.8 °C, and the difference between the ambient medium and working body temperatures at the evaporator output at 6:30 pm was 4.4 °C. (*Note: t1 is the temperature at the condenser output; t2 is the temperature at the condenser input; Δt_k is the temperature hysteresis at the condenser; V_k is the heat-carrying agent flow rate over the condenser loop; W_k is the heating capacity of the geothermal system; t3 is the temperature at the output of the geothermal collector (ambient); t4 is the temperature of the working body at the evaporator output; Δt_e is the difference between the ambient medium and working body temperatures at the evaporator output; V_e is the heat-carrying agent flow rate over the evaporator loop; W_e is the cooling capacity of the geothermal system.) Taking into account the meter readings for the heat-carrying agent flow rate of 1.419 m³ over the evaporator loop and 1.618 m³ over the condenser loop, we can conclude that the system tries to maintain the required cooling and heating capacities but, because of the drop in the evaporator input temperature, which results from the failure of the renewable energy source to maintain regeneration processes under routine operation, it switches to emergency operation. In other words, the soil heat exchanger of the 1×1 configuration has insufficient capacity to ensure stable operation of the geothermal system even for a short time. We consider such operation of the geothermal system to be an emergency mode, because further operation under such conditions without safety automation devices may result in ice formation on the heat exchanger and freezing of the well, which makes regeneration of the soil at the place of freezing impossible and may lead to freeze-over of the well. 3. The relation between the in-service time of the system and the temperature fluctuations of the geothermal field was investigated experimentally. It was found that, with a temperature setting of 40 °C at the condenser input of the geothermal heat pump system, which, with a hysteresis of 8 °C, allows the heat-carrying agent to be supplied to consumers at 48 °C, the soil temperature in the near-well area during short-term operation (at a single point) may decrease by 3 °C to 12 °C (Fig. 6). At the same time, regeneration of the soil heat balance at the place of heat energy extraction may take from 20 minutes to 1 hour. Moreover, a review of the findings obtained over the short run (one day) showed that at a depth of 50 m the temperature deviations at the place of heat energy extraction exceed 3 °C and tend to decrease in absolute value. The ratio of the charge duration to the discharge duration of the soil battery was also determined. The factors responsible for fast discharge and slow charge of the thermal field were identified, and it was determined that, for the given interval of operation of the particular soil battery, the charging of the thermal field lasts about five times longer than its discharge. Fig. 5. Curve of temperature versus system operation time when one geothermal heat exchanger installed in the well is connected to the geothermal system (1×1 mode): t1 is the temperature at the condenser output; t2 is the temperature at the condenser input; t3 is the temperature at the output of the geothermal collector; t4 is the temperature of the working body at the evaporator output.
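The note above defines the heating capacity W_k and cooling capacity W_e; as a rough illustration, the sketch below estimates them from the 1×1-mode readings quoted earlier (a flow of 1.618 m³ over the condenser loop and 1.419 m³ over the evaporator loop, with temperature differences of 5.8 °C and 4.4 °C). The hourly time base and the fluid properties (water in the condenser loop, 25% propylene glycol in the evaporator loop) are assumptions made for the example, not values reported by the study.

# Illustrative sketch (assumptions flagged below): estimates W_k and W_e from
# the loop flow rates and temperature differences reported for the 1x1 mode.
# Assumptions: the metered volumes (1.618 and 1.419 m^3) are hourly flows;
# the condenser loop carries water (rho ~ 1000 kg/m^3, c ~ 4.19 kJ/kg*K) and
# the evaporator loop carries 25% propylene glycol (rho ~ 1020 kg/m^3,
# c ~ 3.9 kJ/kg*K). These property values are typical, not from the study.

def loop_capacity_kw(flow_m3_per_h: float, delta_t: float,
                     rho: float, c_kj_per_kg_k: float) -> float:
    """Thermal capacity W = rho * c * V * dT, converted from kJ/h to kW."""
    return rho * c_kj_per_kg_k * flow_m3_per_h * delta_t / 3600.0

w_k = loop_capacity_kw(1.618, 5.8, rho=1000.0, c_kj_per_kg_k=4.19)  # condenser loop
w_e = loop_capacity_kw(1.419, 4.4, rho=1020.0, c_kj_per_kg_k=3.9)   # evaporator loop

print(f"Estimated heating capacity W_k ~ {w_k:.1f} kW")  # roughly 11 kW
print(f"Estimated cooling capacity W_e ~ {w_e:.1f} kW")  # roughly 7 kW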
At the same time, it is relevant to investigate how quickly the temperature regime of the rock mass can be restored owing to its thermophysical properties, and what changes in the temperature regime of the near-well area, considering the lithological specifics of the working section, may take place as the well depth increases. We also determined that the soil temperature at the place of heat energy extraction, regardless of the depth, is to some extent influenced by the intensity of solar insolation at the surface of the geothermal field. This influence has a certain time delay, which is connected with the circulation of the heat-carrying agent in the geothermal heat exchanger over the entire depth of the geothermal well from top to bottom, generating heat exchange between the various soil layers adjacent to the well. It was proved experimentally that in the long run, namely over five years of system operation, the soil temperature in the near-well area decreased by 2.5 °C (on average, by 0.5 °C per year). The measurements were taken in September, at the beginning of the heating season and prior to well operation, at depths of 15, 35 and 50 m, with year-round thermal loading on the geothermal field. It was also proved experimentally that in the sixth year of system operation the soil temperature stabilised at 10.5 °C, and at the beginning of the seventh year of operation it increased by 1.2 °C, i.e. an effect of heat in-flow to the near-well area of the geothermal field was observed (Fig. 7). The findings obtained experimentally correlate fully with the mathematical calculations made when modelling the temperature field under conditions of multiple cyclic turn-ons and turn-offs of the heating system [2]. The numerical model is based on a discrete representation of the energy equation with the boundary and initial conditions, for various densities of heat flow, and was implemented by means of the MathLab application software package. The main results (Fig. 8) present the changes of the temperature field under cyclic heat supply to the well of 100 W per m². With the pre-set values of the thermal load, well and bed, the well cut-off temperature after seasonal operation under heat supply increased by more than 20 °C. The bed temperature evened out during a downtime of half a year, and the temperature deviation at the moment of well cut-off from the background value remained within 2 °C. Cyclic alternation of heat supply modes and downtimes (i.e. when the heat pump is idle) causes a heat accumulation effect that is compensated by background heat flows. A quasi-steady state, that is, a cyclic mode without further temperature rise, is assumed to occur after 2.5 years of operation and after 3 years of downtime. Conclusions. Continuous extraction of heat energy from, or discharge of heat energy into, the soil causes a change in the heat balance of the geothermal field at the locations of heat pump geothermal wells.
The above changes depend on the geological and hydrogeological specifics of the mined bed, the background heat flows, the climate conditions and the operation parameters of the geothermal systems. Long-term operation of heat pump geothermal wells has its own specifics: firstly, it has been found experimentally that under seasonal operation of the geothermal heat pump system during the first 5 years of heat energy extraction, the soil mass temperature decreases on average by 0.5 °C for each year of system operation, and starting from the fifth year the operation of the geothermal heat pump system stabilises and switches to a quasi-steady mode; secondly, stabilisation of the geothermal field under year-round operation is achieved in the second year of operation; thirdly, freezing of the geothermal well is possible, but only when the heat pump system operates in contingency or emergency modes. The study has shown the need to investigate the dynamics of changes in the temperature regime of the Earth's subsurface layer during heat extraction with geothermal heat pump systems under field operation of the heat pump unit, taking into account the stratigraphic specifics of the working section as the depth of the geothermal well increases. The human-induced load of the geothermal power industry on the environment and humans is insignificant, and the use of geothermal heat pump systems for the heat supply of residential and industrial facilities appears to be a promising and environmentally friendly course of development for renewable power.
Influence of spatial structure migration of overlying strata on water storage of underground reservoir in coal mine Underground reservoir technology for coal mines can realize the coordinated development of coal exploitation and water protection in water-shortage-prone areas. The seepage effect of the floor seriously affects the safety of underground reservoirs under the action of mining damage and seepage pressure. Focusing on the problem of floor seepage in underground reservoirs, a spatial mechanical model of underground reservoirs was established. The main factors affecting the seepage of the surrounding rock were studied. The seepage pressure law in different stages of spatial structure evolution of overlying strata was explored. The results showed that pressure change was the main factor affecting the stability of a reservoir’s surrounding rock. The pore space between the broken and fractured rock in the water-flowing fractured zone was the main water storage space, which was directly related to the development of a breaking arch. According to the spatial structure evolution process of the overlying strata, the water storage state of an underground reservoir was divided into two stages and three situations. The seepage pressure was mainly affected by the water pressure and the overlying strata weight. The water pressure was affected by the reservoir head height, and the overlying strata weight was mainly affected by the overlying strata thickness. Introduction The northwest region is one of the main production areas of coal resources in China.The production accounts for about 70% of the national output.However, water resources account for only 3.9%, resulting in very serious water shortages for mines [1].Underground reservoir technology uses coal mine goaf to realize water source storage and circulation reuse [2,3].There are still many scientific problems to be overcome in underground reservoir technology [4], such as the occurrence of seepage in the floor that seriously affects the safety of underground reservoirs. The seepage characteristics of fractured rock mass directly affect the stability of the floor.After the underground reservoir stores water, the pore water pressure in the floor increases, damaging the original joints of the surrounding rock.Many scholars have carried out relevant research on the stability and failure of underground reservoirs in coal mines.Gu studied the dynamic response and stability of coal pillars under earthquake action and believed that the floor of the mine was also damaged by the earthquake [5].Li Jianhua believed that the soaking effect had little effect on the mechanical properties of coal samples, which were mainly affected by the bedding structure.Therefore, the influencing factors affecting floor seepage are mainly its structure, external stresses and osmotic pressure [6].Bai et al. studied the limit water head of a coal pillar in an underground reservoir and obtained the relationship between the stability of a reservoir and the critical value of the bearing water pressure of a coal pillar [7].Huo et al. pointed out that the stress distribution and transmission of the floor are superimposed by a coal pillar, and the damage to the floor is very serious when the disturbance occurs again [8].Zhu et al. 
found that the coal pillar would be in a long-term stable state only when the elastic part of the coal pillar was greater than 31% of the overall pillar. The process of seepage formation is the result of the combined action of stress and seepage [9]. Terzaghi proposed the basic theory and initial model of seepage-stress coupling in geotechnical media [10]. Witherspoon first defined the stress change caused by seepage as a "coupling effect" and proposed a related theory [11]. Noorishad improved the coupling theory [12]. At the same time, international scholars have also used different test methods to verify the inverse correlation between the permeability coefficient and the effective confining pressure [13-15]. Herda simulated the seepage of fractured rock mass and described the seepage path [16]. Yao et al. studied the mining-induced seepage-strain mechanism of the floor and found that mining disturbance significantly changes the permeability coefficient of the surrounding rock [17]. Ma et al. [18] and Li et al. [19] established a seepage model of a mined-out area floor and calculated the permeability characteristics of the floor strata through the relationship between stress and permeability. Wang et al. revealed the coupling mechanism of rock stress and seepage and simulated the stress state and seepage characteristics of floor mining above confined water [20]. The research shows that the permeability of the floor is closely related to horizontal stress. Wang et al. pointed out that the coal seam floor was alternately damaged as mining progressed, and a plastic zone and large channels were formed under the combined action of mining and confined water pressure, resulting in a sudden change in permeability [21]. The above studies address the stress-seepage coupling law of the floor from the perspective of confined water inrush through the floor, but they cannot be directly applied to seepage control for the floor of an underground reservoir. Some researchers have studied the impermeability of floor rock. Guo et al. carried out in situ compression-seepage tests of different lithological structures and obtained the impermeability of the rock strata in their original state; they concluded that the impermeability of the floor rock strata depends on the original structural conditions and fissure properties of the strata rather than on the lithology of the rock itself [22]. Huang et al. tested the floor aquifuge through a high-pressure water-injection drilling test; the research shows that there is a positive correlation between the permeability coefficient and the pressurised water flow [23]. Shao et al. [24], Jiang et al. [25] and Zhang et al. [26] conducted similar studies. The above research explores the main factors affecting the permeability coefficient of the floor and shows that the permeability coefficient is affected by water pressure. However, these studies examine the change in the permeability coefficient under changing water pressure and do not fully consider the mechanical characteristics of the rock strata under the combined action of the seepage field and the stress field.
Floor water inrush is a problem of confined water breaking through the floor, and the water storage state of a coal mine underground reservoir differs from it. The difference between underground reservoir floor seepage and confined-water floor inrush is mainly reflected in two aspects. First, the confined water beneath the floor mainly comes from Ordovician limestone karst water, while the main coal seams in northwest China were formed in the late Jurassic period, so the two water sources differ in stratigraphic age. Secondly, the main force source of reservoir floor seepage is the water pressure generated by the height of the water level, or the resultant of the water pressure and the gravity of the overlying strata, while the force source of water inrush from the floor above confined water is the pressure applied by the strata to the confined aquifer. At the same time, both are affected by primary fractures and mining-induced fractures. The water in a coal mine underground reservoir exists freely in the goaf, and the water body may also be affected by the movement of the overlying strata. Because research on underground reservoirs is largely a black-box problem, the most directly observable quantity is the change in the external surrounding rock. The relatively mature transfer rock beam theory and the theory of the spatial structure of the overlying strata make it possible to measure some of the parameter values, but the problem still cannot be fully quantified. Therefore, based on this theory, a qualitative and semi-quantitative analysis of the stability of underground reservoirs was conducted for different time periods under different working conditions, in order to obtain results through subjective evaluation and the measurement of observable phenomena. Based on the mature description of the spatial structure evolution of the overburden, this paper deduces the change of seepage pressure inside an underground reservoir. In view of this, this paper investigates the occurrence characteristics of seepage in the surrounding rock of underground reservoirs and explores the relationship between seepage pressure and the movement and structure of the overlying strata, which has important engineering value for studying floor seepage and determining a reasonable reservoir water storage level. Model construction. Under long-term external disturbance, an underground reservoir is prone to floor damage, coal pillar collapse and other failures, resulting in the overall destruction of the reservoir [27]. Rock failure under stress-seepage coupling is a typical problem that differs from rock failure under a single action [28,29]. Underground reservoirs are of particular importance because of their special geographical environment. Firstly, the floor is affected by mining, forming fracture damage zones of a certain depth [30]. Secondly, the floor experiences secondary damage under the coupling of stress and seepage, whereby its strength is further reduced and the overall stability of the reservoir decreases [31].
A coal mine underground reservoir in Shendong mining area is selected as the research object.The geometric shape, size and material properties of the underground reservoir are collected.It is determined that the reservoir is mainly composed of multiple goafs as the main water storage space, the boundary coal pillar of the mining area is the water retaining dam, and the roof and floor in the mining area are the boundaries.According to the theory of transfer rock beam, the stress environment of the model and the spatial structure of the rock mass around the underground reservoir are determined.The analysis shows that the underground reservoir will be in different stable states at different stages of overlying strata movement.The initial conditions and excavation methods of the underground reservoir model are the basic conditions for the formation of goaf after coal seam mining.The model is a combination of multiple goafs formed by the natural caving method, forming a relatively closed goaf composed of coal pillars and roof and floor surrounding rock.Considering the influence of overlying strata movement on the stability of underground reservoir, based on the influence of stress and water pressure on stability of underground reservoir, the spatial mechanical model of surrounding rock of underground reservoir is established by studying the water seepage in damaged rock mass and the failure phenomenon of rock mass under the water action, as shown in Fig 1. Considering the change in pressure as the main factor of surrounding rock stability design, the influence of different seepage pressures and stresses on the seepage of an underground reservoir floor is studied.The sudden change in the permeability coefficient is key to the seepage instability of the surrounding rock in an underground reservoir.The starting pressure gradient is negatively correlated with the permeability coefficient [32].The occurrence of floor seepage in an underground reservoir is affected by the rock mass permeability coefficient and external seepage pressure.The permeability coefficient of the floor is related to its porosity and the stress distribution and transmission of the floor.At the same time, the seepage of the floor is affected by water pressure. The seepage characteristics of the surrounding rock of the underground reservoir are mainly affected by factors such as seepage pressure, coal pillar stress and mining-induced fracture, and the possibility of earthquakes caused by mining disturbance [33].According to statistics, the degree of influence of each factor is seepage pressure>pillar stress>mining-induced fracture>mine earthquake.During the operation of an underground reservoir, seepage pressure and coal pillar stress are the main factors affecting the stability of the surrounding rock.Under the action of seepage pressure, water flows into the floor where the fracture zone has been formed, causing secondary damage to the floor.The coal pillar stress changes the flow characteristics of water in the rock.In the case of the reservoir group, the instability of the reservoir roof and floor will directly lead to the water level overrunning the lower reservoir, which will affect the stability of the surrounding rock of the lower reservoir. On this basis, the relationship between seepage pressure and overlying strata failure and water head height is studied, and the distribution of floor seepage pressure in different stages of overlying strata spatial structure evolution is explored. 
Factors of floor seepage based on Darcy's law. Research shows that the mutability of the permeability coefficient and the joint structure of the surrounding rock medium itself are the two main factors affecting groundwater flow [34]. According to the analysis of the spatial model of the underground reservoir, the sudden change of the permeability coefficient is mainly due to the influence of the non-uniform stress acting on the floor and the cracks related to mining damage. At the same time, a change in the seepage pressure will also cause secondary damage to the already damaged floor, thus affecting the permeability coefficient of the floor. Darcy's law describes the linear relationship between the seepage velocity of the water and the hydraulic gradient in a saturated medium [35]. The seepage flow is positively correlated with the cross-sectional area of the specimen and the difference between the upper and lower heads, and negatively correlated with the length of the seepage path: Q = kA(h₂ − h₁)/ΔL, where Q is the seepage flow, k is the permeability coefficient, A is the cross-sectional area of the specimen, ΔL is the length of the seepage path, and h₂ and h₁ are the upper and lower water heads, respectively. At the same time, the product of the flow velocity and the cross-sectional area is equal to the seepage flow, Q = vA, so that v = kJ, where J is the hydraulic gradient, J = Δh/ΔL. Extending Darcy's law to the three-dimensional case, the differential form of the three-dimensional Darcy's law is obtained: v_x = −k ∂h/∂x, v_y = −k ∂h/∂y, v_z = −k ∂h/∂z. Whether the medium is saturated or unsaturated, seepage must obey the law of conservation of mass. Numerous experimental studies have shown that Darcy's law is still applicable to the seepage of fluid in the unsaturated zone of the medium. Based on this, the saturated and unsaturated zones of the seepage medium are regarded as a unified continuous medium, and a unified equation is used to describe the seepage field [36,37]. In this paper, the two variables of water pressure and overlying strata gravity (the external stress on the medium) are introduced into Darcy's law and the corresponding equations are derived. The continuity equation is the basic equation for studying groundwater movement. Based on the continuity equation and the mass conservation equation, the differential equation of groundwater movement suitable for an underground reservoir is established. A micro-element control unit (Fig 2) is considered, assuming that the water is compressible [38], the solid particles cannot be compressed, the porous medium skeleton is compressible in the vertical direction Z, and Δx and Δy are constants. Therefore, only the fluid density ρ_w, the porosity ϕ and the unit height Δz change with pressure. According to the law of conservation of mass, the difference between the inflow mass and the outflow mass per unit time at a given point is equal to the change of mass in the micro-element. The continuity equation can be written as ∂(ρ_w ϕ C)/∂t + ∂(ρ_w v_x)/∂x + ∂(ρ_w v_y)/∂y + ∂(ρ_w v_z)/∂z = 0, where ρ_w is the fluid density, ϕ is the porosity of the medium, C is the saturation, and v_i is the velocity component in a given direction. When the seepage pressure is taken into account, the states of the water and of the porous medium under seepage pressure must both be considered. The equation of state of water under seepage pressure can be written as ρ_w = ρ_w0·exp[β_w(p − p_0)], where p is the seepage pressure (which includes the water pressure and the gravity of the overlying strata), p_0 is a reference pressure, ρ_w0 is the water density at that reference pressure, and β_w is the compression coefficient of the water.
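As a numerical illustration of the one-dimensional Darcy relation given above, the following short Python sketch evaluates the seepage flow Q and the seepage velocity v for a hypothetical specimen; the permeability coefficient, cross-section and heads are example values only, not parameters of the studied floor.

# Illustrative sketch of the one-dimensional Darcy relation quoted above.
# All numerical values here are hypothetical examples, not data from the study.
def darcy_flow(k: float, area: float, h_upper: float, h_lower: float,
               path_length: float) -> float:
    """Seepage flow Q = k * A * (h2 - h1) / dL  (m/s, m^2, m, m -> m^3/s)."""
    return k * area * (h_upper - h_lower) / path_length

def seepage_velocity(k: float, h_upper: float, h_lower: float,
                     path_length: float) -> float:
    """Seepage velocity v = k * J, with hydraulic gradient J = dh / dL."""
    gradient = (h_upper - h_lower) / path_length
    return k * gradient

k = 1e-7            # permeability coefficient, m/s (hypothetical fractured floor rock)
area = 2.0          # cross-sectional area, m^2
h2, h1 = 12.0, 2.0  # upper and lower heads, m
dL = 5.0            # seepage path length, m

print("Q =", darcy_flow(k, area, h2, h1, dL), "m^3/s")
print("v =", seepage_velocity(k, h2, h1, dL), "m/s")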
In general, water can be regarded as incompressible, and the variation in water density in this paper is small enough to be treated as constant. When pressure acts on the medium from outside, the compressibility of the medium is reflected in the change of the pore volume. The influence of rock compressibility on the seepage process is mainly reflected in two aspects: on the one hand, a change in seepage pressure causes a change in the pore size of the seepage medium, i.e. in the porosity, which is therefore a function of pressure; on the other hand, the change in porosity causes a change in permeability. Therefore, a compression coefficient is used to represent the relationship between porosity and seepage pressure. The equation of state of a porous medium under seepage pressure is expressed in terms of the void ratio e and the volume elastic compression coefficient α_b of the porous medium; the change in the porosity of water-bearing strata under an external force is proportional to the increment of pressure. It is known that the seepage pressure is the main factor affecting seepage through the reservoir floor, and it includes two parts: the water pressure and the gravity of the overlying strata. In differential form, dp = ρ_w·g·dh + γ·dH, i.e. the seepage pressure p = p_w + σ_z is composed of the water pressure p_w = ρ_w·g·h and the overburden load σ_z = γ·H, where p_w is the water pressure, σ_z is the gravity load of the overlying strata, g is the gravitational acceleration, h is the water head, γ is the volumetric weight of the overlying strata, and H is the burial depth of the coal seam. According to the above formula, the seepage continuity equation is transformed into a seepage control equation for the floor of an underground reservoir with the water pressure and the overlying strata gravity as the control variables. When the seepage conforms to Darcy's law, the three-dimensional Darcy's law of Eq 4 can be substituted into the continuity equation of Eq 5; combining Eq 8 with Eq 6 and with Eq 7, the right-hand side of Formula (9) can be rewritten, and the seepage continuity equation of an underground reservoir floor is simplified accordingly. For a given permeability coefficient, changes in the water pressure and in the gravity of the overlying strata directly affect the seepage velocity and thus the water seepage behaviour. The water pressure is mainly affected by the water head height, and the overlying strata gravity is mainly affected by the thickness of the overburden. Results and discussion Based on the spatial structure model of the overlying strata in the stope, the relationship between the seepage pressure and the failure mode of the overlying strata is studied, and the variation law of the seepage pressure at different stages of overlying strata failure is explored. Spatial structure evolution of overlying strata With the advance of the working face, the overlying strata break in turn from bottom to top, and a breaking arch and a stress arch begin to form [39]. The moving strata, which have a direct influence on the stress, are the main component of the spatial structure of the overlying strata. The spatial structure model of the overlying strata is established (Fig 3) [40].
According to existing theory, the overlying strata can be divided into four zones in the vertical direction [41]. The failure form of the rocks in the caving zone is mainly breakup, and that in the fracture zone is mainly fracturing. The rock strata in the broken zone are composed of a series of 'transfer rock beams' that move simultaneously (or almost simultaneously) and can always maintain the transfer of force in the advancing direction. The breaking arch consists of the caving zone and the water-flowing fractured zone, and the height of the latter is basically consistent with the range of the breaking arch [42]. The main water storage space is the water-flowing fractured zone [43]. When the mining conditions (working face width, mining thickness, burial depth, overlying strata properties) are fixed, the height of the breaking arch increases as the advancing distance of the working face increases. The structural development process is divided into two stages: insufficient mining and sufficient mining [44]. The breaking arch forms gradually with mining, and the overlying strata experience a process of suspension, collapse and compaction. In the early stage of breaking arch formation, the overburden structure is mainly manifested as the fracture and hinging of beams. After the working face has been mined, the rock strata under the breaking arch undergo a compaction stage. A prediction and control model of mining subsidence based on the correlation between the length of the mining face and the fracture step distance of the overlying strata is presented in Fig 4 [42]. The surface subsidence can be calculated from the working conditions, and the overburden structure can be inferred from the surface subsidence in order to judge the stage of overburden migration. Seepage pressure based on the spatial structure of the overlying strata. Pressure is the main factor affecting seepage through the floor of an underground reservoir [45]. The seepage pressure of an underground reservoir has two components: on the one hand, the water pressure, governed by the change in the water head; on the other hand, the weight of the overlying strata, governed by the thickness of the strata participating in the movement. According to the process of spatial structure evolution of the overlying strata, the water storage state of an underground reservoir is divided into two stages, as shown in Fig 5. The first stage: the breaking arch has begun to form, but the overlying strata in the gob have not yet been compacted and stabilised. The interior of the underground reservoir is not completely sealed at this stage. Part of the fracture space in the water-flowing fractured zone is filled with water. The water body has not yet reached the bottom of the intact aquiclude above the water-flowing fractured zone and can flow freely. As shown in Fig 5(A), the height of the water head is less than that of the water-flowing fractured zone, and the seepage pressure is determined only by the water head height: p = ρ_w·g·h_w (14), where h_w is the height of the water head, m. The seepage pressure is related only to the height of the water head until the water body fills the water-flowing fractured zone. The second stage: the breaking arch has fully formed, and the overlying strata have collapsed and become compacted and stable. According to whether the water body fully fills the water-flowing fractured zone, two cases are distinguished.
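Before turning to the two cases, the following small sketch illustrates numerically how the seepage pressure differs between the free-surface situation of Eq (14) and the fully filled situation, in which the weight of the overlying strata (γH, as introduced in the derivation above) is added to the water pressure; the head height, volumetric weight and overburden thickness used here are hypothetical.

# Illustrative sketch (hypothetical numbers): seepage pressure at the reservoir
# floor in the two water-storage stages described in the text.
# Free surface:      p = rho_w * g * h_w            (Eq 14)
# Fully filled zone: p = rho_w * g * h_w + gamma * H (water pressure plus the
#                    weight of the overlying strata, as in the derivation above)
RHO_W = 1000.0   # water density, kg/m^3
G = 9.81         # gravitational acceleration, m/s^2

def pressure_free_surface(head_m: float) -> float:
    """Seepage pressure (Pa) while the water body still has a free surface."""
    return RHO_W * G * head_m

def pressure_fully_filled(head_m: float, gamma_n_per_m3: float, depth_m: float) -> float:
    """Seepage pressure (Pa) when the fractured zone is completely filled."""
    return RHO_W * G * head_m + gamma_n_per_m3 * depth_m

head = 20.0           # reservoir head height, m (hypothetical)
gamma = 25000.0       # volumetric weight of overlying strata, N/m^3 (typical value, assumed)
burial_depth = 150.0  # thickness of overburden contributing to the load, m (assumed)

print("Free surface      :", pressure_free_surface(head) / 1e6, "MPa")
print("Fully filled zone :", pressure_fully_filled(head, gamma, burial_depth) / 1e6, "MPa")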
① Water is filled completely (Fig 5 (B)).Water pressure is related to water head height and overlying strata gravity.When the water is filled completely with the range of the water flowing fractured zone, there will be some pressure inside the reservoir.At this time, the seepage pressure is not only from the water pressure produced by the water head height, but also from the stress of overlying strata on the water body.With the condition of stable compaction of overlying strata, the reservoir forms a closed space.When the water-flowing fractured zone is not fully filled, the condition of seepage pressure is similar to the first stage.What is more, due to the overall sealing of the reservoir, there is a certain gas pressure inside the reservoir.The factors affecting the internal gas pressure of the reservoir are mainly the porosity of the surrounding rock of the reservoir and are also affected by the size of the reservoir space.At this time, the seepage pressure is composed of water pressure and gas pressure generated by its own water body. Above all, the influence of seepage pressure on reservoir floor seepage is different under different conditions.The maximum water level of an underground reservoir can be obtained by clarifying the seepage pressure and composition of a reservoir under different overlying strata structures, which is conducive to controlling the safe and stable operation of an underground reservoir. Influence of overlying strata failure mode on seepage The practical mine pressure theory points out that the form of movement and failure of overlying strata on the goaf determines the law of mine pressure, which directly affects the pressure ② Movement form of shear failure. After the rock stratum is exposed, a small bending deformation occurs, and the end of the exposed rock stratum is cracked.In the case of no cracking in the middle of the rock stratum, the sudden overall cutting and caving is shown in the Fig 7. The different movement forms of overburden failure have a significant impact on the water storage of underground reservoirs.Firstly, the available space of goaf under bending failure is larger than that under shear failure.Secondly, under the shear failure situation, the stress and action range of the gangue in the goaf on the floor are greater than the bending failure. It is assumed that the compaction degree of gangue in goaf is k 1 and the contact ratio is k 2 , which affect the influence of seepage pressure and stress on the floor respectively.The k 1 directly affects the permeability coefficient, and the two are negatively correlated.In the case of a certain gravity of the overlying strata, the greater the k 2 , the more uniform the stress transfer.Due to the opaque characteristics inside the underground reservoir, the quantitative relationship between them needs further study. Influence of other factors on seepage In addition to the above factors, the factors that may affect the stability of underground reservoirs may include. ① abutment pressure The abutment pressure is affected by the stress of the original rock, the shape and size of the goaf, the properties and dynamics of the overlying strata in the goaf, the strength of the coal pillar and its surrounding mining conditions, and the mining thickness of the coal seam.The distribution parameters of the abutment pressure are mainly obtained by field measurement.The distribution of abutment pressure in stope is shown in Fig 8. 
② Softening effect of water on rock. Rock mechanics studies show that the strengths of water-saturated rock, rock at its original humidity and dry rock differ. The strength of water-saturated rock is the lowest, and weak rocks in sedimentary strata are particularly affected by water content; some weak rocks even collapse and lose their strength after soaking in water. The floor strata of an underground reservoir are often in a water-saturated state. ③ Erosion by water. Rock mass contains micro-scale and macro-scale discontinuities such as pores, joints and cracks, as well as larger fault planes and sedimentary planes. The erosive action of water changes the original small discontinuity surfaces of the rock mass. Conclusions. Based on the spatial model of an underground reservoir and seepage theory, the seepage law of the floor at different stages of overlying strata structure evolution was explored. A spatial mechanical model of an underground reservoir was established. It is concluded that seepage pressure, coal pillar stress and mining-induced fractures are the factors affecting seepage through the floor. The seepage control equation for an underground reservoir floor was derived with water pressure and overlying strata weight as variables; the seepage of a reservoir floor is mainly affected by the change in seepage pressure. The correlation between the underground seepage pressure and the structural evolution of the overlying strata was studied, and the influence of the movement and failure stage of the overlying strata on the seepage pressure and reservoir capacity was explored. The variation law of the seepage pressure under three kinds of overlying strata structure evolution in the two stages of overlying strata failure was obtained. The seepage pressure is mainly affected by the height of the water head and the weight of the overlying strata. Figure captions. Fig 1. The spatial mechanics model of surrounding rock in an underground reservoir: (a) structure diagram of the underground reservoir; (b) A-A section bottom plate structure diagram (in dip); (c) force indication of area B. https://doi.org/10.1371/journal.pone.0292357.g001 Fig 3. Spatial structure of overlying strata in the stope. https://doi.org/10.1371/journal.pone.0292357.g003 Fig 5. The relationship between reservoir water head and overlying strata structure: (a) the overlying rock is not stable, h_W < h_G; (b) the overlying rock is stable, h_W = h_G; (c) the overlying rock is stable, the water body has not yet filled the water-flowing fractured zone and has a free surface (water pressure is related to the head height), h_W < h_G. https://doi.org/10.1371/journal.pone.0292357.g005
Extracorporeal membrane oxygenation technology for adults: an evidence mapping based on systematic reviews Background Extracorporeal membrane oxygenation (ECMO) is a cutting-edge life-support measure for patients with severe cardiac and pulmonary illness. Although several systematic reviews (SRs) about ECMO have been published, it remains unclear how rigorous their methodology is and how reliable the information they provide on the efficacy and safety of ECMO is. Therefore, performing an overview of the available SRs concerning ECMO is crucial. Methods We searched four electronic databases from inception to January 2023 to identify SRs with or without meta-analyses. The Assessment of Multiple Systematic Reviews 2 (AMSTAR-2) tool and the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system were used to assess the methodological quality and the quality of evidence of the SRs, respectively. A bubble plot was used to visually display clinical topics, literature size, number of SRs, evidence quality, and an overall estimate of efficacy. Results A total of 17 SRs met the eligibility criteria and were combined into 9 different clinical topics. The methodological quality of the included SRs ranged from "critically low" to "moderate". One SR provided high-quality evidence, three moderate-, three low-, and two very low-quality evidence. The most prevalent study designs used to evaluate ECMO were observational or cohort studies, frequently with small sample sizes. ECMO has been shown to be beneficial for severe ARDS and for ALI due to H1N1 influenza infection. For ARDS, ALF or ACLF, and cardiac arrest, ECMO was concluded to be probably beneficial. For dependent ARDS, ARF, ARF due to the H1N1 influenza pandemic, and cardiac arrest of cardiac origin, the conclusions were inconclusive. There was no evidence of a harmful association between ECMO and any of the clinical topics. Conclusions The available evidence for ECMO is limited, and large-sample, multi-center, multinational RCTs are needed. For most clinical topics, the SRs report ECMO as beneficial or probably beneficial. Evidence mapping is a valuable and reliable methodology to identify and present the existing evidence about therapeutic interventions. Introduction Extracorporeal membrane oxygenation (ECMO) is an advanced life-support technique used to rescue critically ill patients with severe cardiac and pulmonary disease [1]. In 1970, Robert et al.
provided a patient with adult respiratory distress syndrome (ARDS) with three days of cardiopulmonary support, establishing a record for long-term life support at the time [2]. Nonetheless, two randomized controlled trials (RCTs) of its clinical application, concluded in 1979 [3] and 1994 [4], found that ECMO did not increase the likelihood of survival in patients with severe acute respiratory failure (ARF) or ARDS. As a result, ECMO was subsequently used primarily to treat cardiopulmonary failure in infants. Not until 2009 did the Lancet publish the third RCT of ECMO for ARDS [5], which revealed that 63% of the ECMO group survived to 6 months without disability, 16% more than in the conventional-management group. In the same year, a clinical study [6] of influenza A (H1N1)-associated ARDS published in JAMA reported a 79% survival rate among the approximately one-third of mechanically ventilated patients who were treated with ECMO. However, the results of EOLIA (ECMO to Rescue Lung Injury in Severe ARDS) in 2018 showed that, among patients with very severe ARDS, 60-day mortality was not significantly lower with ECMO than with a strategy of conventional mechanical ventilation (MV) that included ECMO as rescue therapy. The effectiveness of ECMO therefore remains controversial. Owing to technological advancements, improved safety, and reduced complications [7], and especially in the context of the coronavirus disease 2019 (COVID-19) pandemic, demand for ECMO has grown. According to the Extracorporeal Life Support Organization (ELSO) [8], an international voluntary registry founded in 1989, the number of ECMO runs has increased by 1137% over nearly two decades, from 1,643 to more than 21,896 to date. Similarly, the number of ECMO centers, which increased by 125% (from 83 to 187) between 1990 and 2010, rose by a further 216% (from 187 to 591) between 2010 and 2021 and exceeded 1,000 in 2023.

In the era of evidence-based medicine, it is generally accepted that all healthcare decisions should be based on the strongest scientific evidence available [9]. Research on the effects of ECMO continues to expand and has been the subject of many primary studies and systematic reviews (SRs) of the literature. The SR, a critical evidence-synthesis method with or without meta-analysis, is widely used to resolve diverse healthcare questions and forms the foundation of evidence-based healthcare by providing evidence to support decision-making [10]. However, SRs frequently address narrow questions, which prevents them from providing a comprehensive overview of a given topic [11]. Moreover, the methodological and reporting quality of SRs tends to affect the correct evaluation of intervention results [12], and fragmentary reporting may influence the selection of appropriate interventions [13]. Overviews of SRs (or umbrella reviews) attempt to systematically retrieve and summarize the results of multiple SRs in a single document [14]. The number of published overviews of SRs has increased steadily in recent years, partly owing to the proliferation of SRs, but methods for conducting, interpreting and reporting overviews are still in their infancy [15].
In general, the steps for undertaking an overview mirror those of a systematic review, with many of the methods used in systematic reviews being directly transferrable to overviews (e.g., independent study selection and data extraction) [16].However, there are unique features of overviews that require the use of different or additional methods, for example, methods for assessing the quality or the risk of bias in systematic reviews, dealing with the inclusion of the same trial in multiple systematic reviews, dealing with out-of-date systematic reviews, and dealing with discordant results across systematic reviews [15].Evidence maps provide a systematic method for mapping the evidence on a particular topic, which clarifies the characteristics of the studies in this field from multiple dimensions (such as intervention type, research population, research conclusions, etc.), with the resulting map facilitating identification of gaps in the literature, thereby providing decision-makers with systematic evidence support [17,18].A key strength of the evidence mapping method is the use of visuals or interactive, online databases. Increasing numbers of SR for the application of ECMO technology in diseases have been currently conducted, but a comprehensive systematic summary or visual representation of the overall impact of ECMO is lacking, which is where the strength of the evidence map lies.Consequently, we use evidence mapping to identify, characterize, and organize the currently available evidence on ECMO based on the included SRs to provide reliable evidence for ECMO efficacy and safety assessment, as well as guidance for clinical application and future research. Data sources and search strategy We searched four electronic databases including Pubmed, EMBASE, Cochrane Library, Web of Science from their inception through January 2023.Search strategies were constructed using combinations of words describing the intervention of interest ("Extracorporeal Membrane Oxygenation" or "Venovenous ECMOs" or "Venovenous Extracorporeal Membrane Oxygenation" or "Venovenous ECMO" or "Venoarterial ECMO" or "Venoarterial ECMOs" or "Venoarterial Extracorporeal Membrane Oxygenation" or "Extracorporeal Membrane Oxygenations" or "ECMO" or "ECMO Treatment" or "ECMO Treatments" or "Extracorporeal Life Support" or "ECMO Extracorporeal Membrane Oxygenation" or "ECLS" or "ECLS Treatment" or "ECLS Treatments" or "Extracorporeal Life Supports" or "Extracorporeal Gas Exchange"), and the studied type ("meta-analysis" or "systematic review").Depending on characteristics of the database, medical subject headings (MeSH) and free vocabulary words were combined.No language restrictions were imposed. Inclusion and exclusion criteria Design Only SRs with or without meta-analyses on ECMO that compiled primary studies for any clinical indication were eligible for inclusion.We defined SRs as reviews that selfidentified as a "systematic review", "systematic review and meta-analysis", or "review" and reported the search sources and identified studies.Animal experiments, descriptive studies, conference abstracts, case reports, reviews, clinical experiences, trial protocols, letters, editorials, and unavailable or duplicate publication papers were excluded. Participants Adult (age ≥ 16 years old) participants with any disease status were included, regardless of gender. 
Intervention and comparison
SRs describing the effects of ECMO for any clinical indication were eligible for inclusion. SRs were still acceptable if they included other interventions, provided that ECMO results were reported separately. Comparisons were made with conventional treatments, such as conventional MV alone and conventional cardiopulmonary resuscitation (CCPR).

Outcomes
SRs presenting clinical topics were included, whereas those focusing on study designs, intervention characteristics, pharmacokinetics, prevalence, prognostic predictors, and cost-effectiveness unrelated to patient clinical topics were excluded.

Timing
SRs that provided a summary of intervention assessments, regardless of their duration and follow-up point, were considered eligible for inclusion.

Study selection and data extraction
EndNote X9 (Clarivate Analytics, Spring Garden, Pennsylvania, USA) software was used to manage search results and deduplication. Two reviewers independently screened all potentially relevant studies based on recorded titles and abstracts and then cross-checked their results. In the event of disagreement, the study was provisionally included so that additional information could be obtained. After an initial selection decision was made, the full texts of the chosen studies were downloaded for further review. Two independent reviewers then conducted a new selection process based on a full-text analysis of eligible SRs. An all-researcher meeting was convened to reach a definitive decision on studies about which the reviewers disagreed. We extracted information on the population, intervention, comparison, and outcomes (PICO) elements, certainty-of-evidence statements, and the number of studies included in each SR.

Methodological quality assessment
Two independent reviewers used the Assessment of Multiple Systematic Reviews 2 (AMSTAR-2) tool to evaluate the methodological quality of the included SRs. Disagreements were resolved by discussion with a third reviewer until consensus was reached. AMSTAR-2 consists of 16 items [19], each of which was rated as "Yes", "Partial yes", or "No", and overall methodological quality was categorized as high, moderate, low, or critically low according to the weaknesses in critical domains (items 2, 4, 7, 9, 11, 13 and 15). In other words, there were four categories in the overall assessment of SRs: "High" was defined as no or one non-critical weakness; "Moderate" as more than one non-critical weakness; "Low" as one critical flaw with or without non-critical weaknesses; and "Critically low" as more than one critical flaw with or without non-critical weaknesses.

Evidence quality assessment of outcomes
Evidence quality was assessed with the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system by two reviewers working independently. In case of disagreement, the two reviewers settled it through discussion. If RCTs were included in an SR, initial confidence in the result was rated high, whereas evidence from observational studies (OS) started at low confidence. Three upgrading factors (large effects, dose-response gradients, and plausible confounding) and five downgrading factors (inconsistency, risk of bias, indirectness, imprecision, and publication bias) were then considered. Overall evidence quality was categorized as "High", "Moderate", "Low", or "Very low" [20].
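To make the AMSTAR-2 decision rule described above concrete, the sketch below shows one way it could be encoded. It is a minimal illustration only: the item numbers for the critical domains (2, 4, 7, 9, 11, 13 and 15) come from the text, whereas the function name, the variable names, and the treatment of "Partial yes" answers as weaknesses are our assumptions and not part of the AMSTAR-2 tool or the authors' workflow.

```python
# Minimal sketch of the AMSTAR-2 overall confidence rating described above.
# Critical domains follow the text: items 2, 4, 7, 9, 11, 13 and 15.
CRITICAL_ITEMS = {2, 4, 7, 9, 11, 13, 15}

def amstar2_overall(ratings: dict[int, str]) -> str:
    """ratings maps item number (1-16) to 'Yes', 'Partial yes' or 'No'.

    Any item not answered 'Yes' is treated here as a weakness (an assumption);
    whether it counts as a critical flaw depends on the item number.
    """
    weaknesses = {item for item, answer in ratings.items() if answer != "Yes"}
    critical_flaws = len(weaknesses & CRITICAL_ITEMS)
    non_critical = len(weaknesses - CRITICAL_ITEMS)

    if critical_flaws > 1:
        return "Critically low"   # more than one critical flaw
    if critical_flaws == 1:
        return "Low"              # one critical flaw, with or without non-critical weaknesses
    if non_critical > 1:
        return "Moderate"         # more than one non-critical weakness
    return "High"                 # no or one non-critical weakness

# Hypothetical example: an SR with no protocol (item 2) and no funding report (item 10)
example = {i: "Yes" for i in range(1, 17)}
example[2], example[10] = "No", "No"
print(amstar2_overall(example))   # -> "Low" (item 2 is a critical domain)
```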
Data synthesis and analysis The included SRs were classified according to the topic of the investigation.If multiple SRs on similar clinical topics were identified, we chose the most relevant and best-performed SR for each topic based on the results of the GRADE assessment.Besides, SRs for each topic were depicted only once on the bubble plot.The results of the evidence mapping were presented using characteristic tables of the included SRs and a bubble plot display.Each bubble in the graph represents the evidence evaluated by the SRs investigating the efficacy of ECMO for clinical topics.The visual representation or evidence mapping displays information on four dimensions using a bubble plot: X-axis, Y-axis, bubble size, and color.This enabled us to provide the following forms of information regarding each included SR. X-axis: effect estimate The mapping presented depended on the certainty of the evidence statement, as reported in each SRs [21].The "beneficial" denoted that conclusions and results reported apparent beneficial effects without any major concerns regarding supporting evidence.The "probably beneficial" effect indicated that the conclusions did not assert an actual benefit despite a positive treatment effect being reported, or the conclusions reported a potential benefit despite the result showing no significant difference.The "no effect" showed that the conclusions and results provided evidence of no differences between intervention and comparison.The "inconclusive" indicated that the study results were insufficient for the authors to conclude whether the intervention had a definitive or potential effect.The "harmful" implied that the conclusions and results were reported to be a harmful effect.In addition, the primary evaluation criteria encompassed long-term prognosis, clinical symptom outcomes, laboratory inflammation indicators, description of adverse events, and quality of life. Y-axis: literature size estimate The literature size was defined as the number of primary research studies in the selected SR. Bubble size: numbers of included SRs The bubble size was used to represent the number of SRs on this topic. Color: evidence quality of the findings The results of the GRADE system assessment were used to determine confidence, which was divided into four categories: green circles represent "high" evidence quality, blue circles symbolize "moderate", yellow circles convey "low", along with red circles reflect "very low". Literature selection Four electronic databases searches yielded a total of 1933 records from inception to January 2023.After removing duplicates, 975 records were screened based on their titles and abstracts.The initial screening identified 338 potentially relevant studies evaluated against eligibility criteria.A total of 321 articles were deemed ineligible after thoroughly examining of their full text due to noncompliance with the established eligibility criteria.Ultimately, a total of 17 SRs with or without a meta-analysis on ECMO were included for systematic scoping review and evidence synthesis [22][23][24][25][26][27][28][29][30][31][32][33][34][35][36][37][38]. Figure 1 displays all comprehensive review processes, exclusion numbers, and reasons for full-text exclusions. 
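As an illustration of how the four plot dimensions described above (effect estimate on the X-axis, literature size on the Y-axis, number of SRs as bubble size, and GRADE rating as color) can be combined in a single figure, the following matplotlib sketch draws a simplified bubble plot. The example rows are invented placeholders rather than values from the included SRs, and the color scheme simply follows the green/blue/yellow/red convention stated in the text.

```python
import matplotlib.pyplot as plt

# Hypothetical example rows: (clinical topic, effect category, n primary studies,
# n SRs on the topic, GRADE rating). Values are placeholders, not study results.
rows = [
    ("Topic A", "Beneficial",          12, 2, "High"),
    ("Topic B", "Probably beneficial",  8, 3, "Moderate"),
    ("Topic C", "Inconclusive",         5, 1, "Very low"),
]

effect_order = ["Harmful", "No effect", "Inconclusive",
                "Probably beneficial", "Beneficial"]           # X-axis categories
grade_color = {"High": "green", "Moderate": "blue",
               "Low": "gold", "Very low": "red"}               # bubble color

fig, ax = plt.subplots()
for topic, effect, n_studies, n_srs, grade in rows:
    ax.scatter(effect_order.index(effect),      # X: effect estimate
               n_studies,                       # Y: literature size
               s=300 * n_srs,                   # bubble size: number of SRs
               c=grade_color[grade],            # color: GRADE evidence quality
               alpha=0.6, edgecolors="black")
    ax.annotate(topic, (effect_order.index(effect), n_studies),
                ha="center", va="center", fontsize=8)

ax.set_xticks(range(len(effect_order)))
ax.set_xticklabels(effect_order, rotation=20)
ax.set_xlabel("Effect estimate")
ax.set_ylabel("Number of primary studies in the SR")
plt.tight_layout()
plt.show()
```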
Characteristics of included SRs
Annual trends in publications
The number of studies was plotted against the year of publication to visualize the trend in ECMO studies over time. SRs on ECMO first appeared in 2010, although our search covered each database from its inception. Since then there has been a worldwide increase in the ECMO literature, with a peak in 2020 that is likely due to the COVID-19 pandemic. Figure 2 depicts the trend in publication of research studies.

Primary studies and participants
The number of primary studies included per SR ranged from 2 to 75, with an average of 14. Ten SRs contained fewer than 10 ECMO-related primary studies, five SRs included 10 to 25 primary studies, and two SRs contained more than 25 primary studies. Three SRs included only randomized controlled trials (RCTs), seven SRs included RCTs together with other study types, and seven SRs were limited to study designs other than RCTs. The number of participants per SR ranged from 429 to 38,160, averaging 5,168. The majority of SRs (n = 12) included more than 1,000 participants. The included SRs covered a wide variety of clinical topics, including acute respiratory conditions.

Methodological quality of included SRs
In terms of methodological quality, the overall quality was rated "Moderate" for six SRs [23,27,28,35-37], "Low" for seven SRs [25,26,29,31-34], and "Critically Low" for four SRs [22,24,30,38]; no SR was rated "High" according to the AMSTAR-2 criteria. The most frequent flaws were as follows: lack of a reasonable explanation for the selection of study designs for inclusion, absence of a report on the sources of funding for included studies, lack of a statement regarding potential sources of conflict of interest, and absence of a protocol. Figure 3 depicts the methodological quality of the 17 included SRs.

Evidence quality of included SRs
One included SR was considered to provide high-quality evidence, five SRs moderate quality, five SRs low quality, and six SRs very low quality. The results of this evaluation can be found in Table 3.

Evidence mapping
For overlapping topics, the overall evidence quality was considered; the conclusions reflected the individual SRs and were confirmed by an internal review. The evidence mapping on ECMO for adults is presented in Fig. 4.

Evidence of "beneficial" effect
The beneficial effects of ECMO, as indicated by statistically significant pooled treatment effects in SRs, were established on the basis of a substantial number of research studies, with findings on severe ARDS and on ALI due to H1N1 influenza infection.
Four SRs [27,30,32,36] evaluated the effects of ECMO on severe ARDS relative to conventional therapy. Among them, one SR [30] with moderate evidence quality was classified as "probably beneficial" on this mapping, suggesting probable efficacy of ECMO in severe ARDS; in this study, ECMO was associated with reduced mortality, treatment failure, and need for renal replacement therapy, but with longer ICU and hospital lengths of stay. Two SRs [32,36], including 2 RCTs and rated as moderate- or high-quality evidence, were classified as "beneficial" on this mapping, supporting ECMO in severe ARDS on the basis of 90-day and 30-day mortality outcomes; additionally, there was no difference in device-related adverse events compared with conventional therapy. The remaining SR [27], with low evidence quality, reached an "inconclusive" conclusion for survival to hospital discharge, indicating weak confidence in the effectiveness of ECMO. Overall, 75% of these SRs (comprising 6 RCTs) were classified as "beneficial" or "probably beneficial". Considering the overall evidence quality, the overlapping topic of severe ARDS covered by these four SRs was ultimately classified as "beneficial".

A single SR [23], consisting of eight observational studies of low evidence quality, evaluated the impact of ECMO on ALI due to H1N1 influenza infection compared with conventional therapy. The results indicated that ECMO was feasible and effective in patients with ALI due to H1N1 infection; however, subjects with severe comorbidities or multiorgan failure remained at high risk of in-hospital death when prolonged support (more than one week) was required, as it was in the majority of cases. This topic was rated as a "beneficial" conclusion.

Evidence of "probably beneficial" effect
A considerable number of research studies on clinical topics such as ARDS, ALF or ACLF, and cardiac arrest were used to establish the promising effects of ECMO, as evidenced by statistically significant pooled effects in SRs.

One SR [34] addressed ALF or ACLF and was rated as "probably beneficial". Six SRs on cardiac arrest produced controversial results [26,28,29,33,35,37]. Among them, two SRs [26,37], both of low evidence quality, were classified as "beneficial". Three SRs [29,33,35], with low, moderate, or very low evidence quality, reached a "probably beneficial" conclusion on this mapping, suggesting probable efficacy of ECMO in cardiac arrest; in these studies, ECMO was associated with improved survival, favorable 30-day and long-term neurological outcome, and long-term neurologically intact survival. The remaining SR [28], which covered 63 case series and 12 cohort studies of out-of-hospital cardiac arrest (OHCA) and was of very low evidence quality, showed that although a trend toward improved survival with good neurologic outcome was reported in controlled cohort studies at low risk of bias, the preponderance of low-quality evidence may yield an overly optimistic effect size of extracorporeal cardiopulmonary resuscitation (ECPR) on survival among OHCA patients; it was rated "inconclusive". On the whole, 83.3% of these SRs were classified as "beneficial" or "probably beneficial". Considering the overall quality of the evidence, the overlapping topic of cardiac arrest covered by these six SRs was ultimately rated "probably beneficial".
Evidence of "inconclusive" effect This mapping contained several SRs that provided evidence of the potential inconclusive effect of ECMO in treating clinical topics, including dependent ARDS, ARF, ARF due to the H1N1 influenza pandemic, and cardiac arrest of cardiac origin. One SR [22] with very low evidence quality demonstrated that there was insufficient evidence to recommend for the use of ECMO among patients with ARF due to the H1N1 influenza pandemic. One SR [25] with very low evidence quality indicated that ECPR yielded comparable survival (OR = 2.26, [95% Items ( critical domains): 1 Did the research questions and inclusion criteria for the review include the components of PICO? 2 Did the report of the review contain an explicit statement that the review methods were established prior to the conduct of the review and did the report justify any significant deviations from the protocol?3 Did the review authors explain their selection of the study designs for inclusion in the review?4 Did the review authors use a comprehensive literature search strategy?5 Did the review authors perform study selection in duplicate?6 Did the review authors perform data extraction in duplicate?7 Did the review authors provide a list of excluded studies and justify the exclusions?8 Did the review authors describe the included studies in adequate detail?9 Did the review authors use a satisfactory technique for assessing the risk of bias (ROB) in individual studies that were included in the review?10 Did the review authors report on the sources of funding for the studies included in the review?11 If meta-analysis was performed did the review authors use appropriate methods for statistical combination of results? 12 If meta-analysis was performed, did the review authors assess the potential impact of ROB in individual studies on the results of the metaanalysis or other evidence synthesis?13 Did the review authors account for ROB in individual studies when interpreting/discussing the results of the review?14 Did the review authors provide a satisfactory explanation for, and discussion of, any heterogeneity observed in the results of the review?15 If they performed quantitative synthesis did the review authors carry out an adequate investigation of publication bias (small study bias) and discuss its likely impact on the results of the review?16 Did the review authors report any potential sources of conflict of interest, including any funding they received for conducting the review? Evidence of "no effect" No SR clearly declared that ECMO was harmful to clinical topics. Evidence of "harmful" effect No SR clearly declared that ECMO was harmful to clinical topics. Discussion Evidence mapping is a relatively new method for summarizing scientific evidence on a specific topic.Despite the absence of a standard definition or agreement on its components or application methods, these types of reviews share certain characteristics [18].Generally, it involves executing a systematic search across various topics to identify knowledge gaps and/or future research needs.It also presents the findings in an approachable format, such as a visual figure, graph, or searchable database [18].Even without research retrieval and data extraction, evidence mapping can generate a comprehensive list of priority research issues in a topic area, which has the potential to serve as a foundation for study, policy development, and research funding [39]. 
Principal findings There are several SRs for ECMO based on available evidence, with only 17 SRs centered on various diseases meeting the criteria.Most SRs contain a small number of primary studies, indicating limited evidence for this issue.RCT is the most reliable evidence to evaluate the efficacy and safety of interventions [40].However, the majority of the main studies used to support the efficacy and safety of ECMO were not RCT, according to the SRs included in this evidence mapping, which might be a phenomenon with significant ethical implications.SR, an essential component of evidence-based medicine, has become the highest level of evidence as it synthesizes all available evidence on a given topic [41].However, if the quality and criteria of SR differ widely, the findings of reviews may be exaggerated.Although methodological quality assessment is not a core task of evidence mapping, it has been recommended that any review should include this process to assess the consistency of their conclusions [42].The AMSTAR-2 tool has been used extensively as an effective method to evaluate the methodological quality of SR [43].Increasing numbers of SR have been conducted on ECMO, but we used the AMSTAR-2 tool to assess methodological quality to ascertain the validity of their conclusions.Unfortunately, we discovered no "High" methodological quality of SRs, but rather six "Moderate" SRs, seven "Low" SRs, and four "Critically Low" SRs.The most frequent shortcomings were as follows: lack of a reasonable explanation for the selection of study design type for inclusion, the absence of a report on sources of funding for included studies, a lack of a statement regarding potential sources of conflict Fig. 4 The evidence mapping on ECMO.X-axis, effect estimate on the certainty of the evidence statement; Y-axis, the number of primary research studies in the selected SR; bubble size, the number of SRs on this topic; bubble color, evidence quality of the findings by GRADE system assessment.ARDS adult respiratory distress syndrome, ALI acute lung injury, H1N1 influenza A, ALF acute liver failure, ACLF acute on chronic liver failure, ARF acute respiratory failure of interest, and the absence of a protocol, all of which would require the attention of future researchers. 
Our evidence mapping emphasizes the areas where SRs have reported "beneficial", "probably beneficial", "inconclusive", "no effect", or "harmful" effects while simultaneously displaying the research concentration and volume.ECMO was beneficial for some clinical topics, such as severe ARDS and ALI due to H1N1 influenza infection.It is probably beneficial for certain clinical topics, such as ARDS, ALF or ACLF, and cardiac arrest.Conclusions regarding dependent ARDS, ARF, ARF due to the H1N1 influenza pandemic, and cardiac arrest of cardiac origin were inconclusive.Significantly, we found no evidence of a harmful association between ECMO and various clinical topics, which may be due to the fact that few RCTs with negative conclusions have been published [44].The fact that the efficacy and safety outcome of ECMO in treating severe ARDS is not only concluded as having a "beneficial" effect but also supported by "high" quality evidence, indicating that ECMO is a potentially promising support technique for severe ARDS and is also consistent with the ARDS management guidelines, is particularly noteworthy.According to the formal management guidelines of ARDS [45], severe cases of ARDS with PaO 2 /FiO 2 < 80 mmHg and/or dangerous MV, despite optimization of ARDS management including high PEEP, neuromuscular blocking agents and prone positioning, should probably be considered for venovenous ECMO.The decision to use ECMO should be evaluated early by contacting an expert center with the strong agreement.Being distinct from severe ARDS, ECMO demonstrated a probably beneficial effect in another SR that did not differentiate the severity of ARDS, suggesting that ECMO may have a more effective therapeutic effect in severe ARDS, which requires further research to confirm.The current estimated mortality rate for ARDS is approximately 30-40%, with severe forms of ARDS having higher mortality rates than mild or moderate forms of ARDS [46].It is highly promising to assert that using ECMO technology will significantly contribute to reducing mortality rates, especially associated with severe ARDS.The continuous advancements in ECMO technology are expected to achieve such a favorable effect. An additionally interesting finding was that ECMO outcome indicators tend to focus on survival, mortality, and a favorable neurological outcome.However, none of the studies evaluated the quality of life as an outcome or conducted an economic evaluation.In the past few decades, quality of life has emerged as a significant concept and objective for research and practice in the fields of health and medicine, which can inform clinicians and policymakers about how to prioritize and allocate healthcare resources when assessing the benefits of different treatment options [47].Similarly, economic evaluation contributes to the most efficient allocation of societal resources [48].ECMO is an essential technology for critically ill patients with cardiopulmonary failure; consequently, evaluations of the quality of life and economic impact are particularly crucial. 
Cardiopulmonary failure is a condition that many critically ill patients may experience, suggesting that ECMO may be required increasingly often, and for prolonged periods, in the future. Among the studies retrieved, high-quality research on the adverse effects of long-lasting ECMO use was absent. Four of the 17 included SRs reported potential adverse effects of prolonged ECMO use, including bleeding, barotrauma, sepsis, and circuit-related complications. Specifically, two SRs [24,31] suggested that ECMO may increase the risk of bleeding in ARDS or ARF patients, and patients with ALI due to H1N1 infection who had severe comorbidities or multiorgan failure remained at high risk of in-hospital death when prolonged support (over one week) was required, as it was in most cases [23]. That said, only a few studies on the adverse effects of ECMO were included in this mapping, and most primary studies were not RCTs, so the findings are not entirely reliable. Future RCTs should therefore concentrate on high-quality evaluation of the adverse effects of ECMO.

Evidence gaps and future directions
This evidence mapping has described the research focus reported in the existing SRs and identified gaps in the evidence, thereby highlighting clinical topics that should be prioritized for future research [49]. However, it cannot answer more specific questions, such as the optimal parameter selection, timing, and duration of ECMO for a particular health topic. To advance our evidence-based understanding of ECMO, additional data on its efficacy and safety across and within each clinical condition and patient population should be acquired through meta-analyses of primary studies. In addition, the large number of clinical topics classified as lacking conclusive evidence calls for additional primary research. For some clinical topics in the inconclusive category, additional studies have since been published, necessitating an update of the existing SRs.
Strengths and limitations
Unlike previous reviews, this evidence mapping provides a comprehensive summary of the current evidence for all categories of clinical topics associated with ECMO, without restrictions. Moreover, our study conducted a systematic and exhaustive search of four databases and relied on a relatively dependable study design, the SR. We used the AMSTAR-2 tool to assess the methodological quality of the included SRs and the GRADE system to assess the quality of evidence for their outcomes, presenting the existing evidence visually in the form of a bubble plot built on several significant dimensions. In addition, we rated the conclusions on the basis of both the reported results and the stated conclusions, which may avoid the uncertainty caused by recommendations determined solely from one or the other [50]; this approach is not only instructive for future research and important for preventing the waste of academic resources, but also useful for policymakers. Nonetheless, this research has several limitations. First, we excluded other study designs (such as RCTs, case reports, cohort studies, and cross-sectional studies), even though SRs generally provide the highest quality of evidence for decision-making. Second, only four common literature databases were searched; literature from other sources, such as clinical trial registration websites, was not considered, so omission of relevant literature was inevitable. In particular, we were unable to find any information related to ECMO and pregnancy. Third, most of the included SRs were based on observational or cohort studies of poor methodological quality, which might have introduced bias and affected the internal validity of the SRs to some extent.

Conclusions
In conclusion, observational or cohort studies, frequently with small sample sizes, have been the most common types of study used to evaluate ECMO. The AMSTAR-2 tool rated the methodological quality of the SRs included in this mapping as "Critically Low" to "Moderate". The clinical topics for which authors reported ECMO to be most beneficial are severe ARDS and ALI due to H1N1 influenza infection. However, ECMO for dependent ARDS, ARF, ARF due to the H1N1 influenza pandemic, and cardiac arrest of cardiac origin shows an inconclusive effect. These outcomes emphasize the need for future research on new clinical topics and knowledge gaps in this field. Increased efforts are required to improve the methodological quality and reporting of SRs on ECMO.

Fig. 1 Flow diagram of the literature reviewing process and results
Table 1 Summary of the included SRs of ECMO treatment
Table 2 Eligibility criteria and method of quality appraisal or risk of bias of the included SRs
v3-fos-license
2021-12-25T16:06:26.083Z
2021-12-22T00:00:00.000
245473063
{ "extfieldsofstudy": [ "Medicine", "Computer Science" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://www.mdpi.com/1424-8220/22/1/35/pdf", "pdf_hash": "0a81b87433157e6bfebff6d443fe0a23e706df42", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:363", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "129577f2b753e428f0f496554e4f442ca4e85223", "year": 2021 }
pes2o/s2orc
A Comparison of Three Airborne Laser Scanner Types for Species Identification of Individual Trees Species identification is a critical factor for obtaining accurate forest inventories. This paper compares the same method of tree species identification (at the individual crown level) across three different types of airborne laser scanning systems (ALS): two linear lidar systems (monospectral and multispectral) and one single-photon lidar (SPL) system to ascertain whether current individual tree crown (ITC) species classification methods are applicable across all sensors. SPL is a new type of sensor that promises comparable point densities from higher flight altitudes, thereby increasing lidar coverage. Initial results indicate that the methods are indeed applicable across all of the three sensor types with broadly similar overall accuracies (Hardwood/Softwood, 83–90%; 12 species, 46–54%; 4 species, 68–79%), with SPL being slightly lower in all cases. The additional intensity features that are provided by multispectral ALS appear to be more beneficial to overall accuracy than the higher point density of SPL. We also demonstrate the potential contribution of lidar time-series data in improving classification accuracy (Hardwood/Softwood, 91%; 12 species, 58%; 4 species, 84%). Possible causes for lower SPL accuracy are (a) differences in the nature of the intensity features and (b) differences in first and second return distributions between the two linear systems and SPL. We also show that segmentation (and field-identified training crowns deriving from segmentation) that is performed on an initial dataset can be used on subsequent datasets with similar overall accuracy. To our knowledge, this is the first study to compare these three types of ALS systems for species identification at the individual tree level. Introduction Forests are important global resources that affect numerous natural cycles as well as contributing to natural biodiversity, i.e., flora and fauna [1]. Forested lands also constitute the largest terrestrial carbon sink on the planet, with approximate relative contributions of 80% being made by above-ground biomass and 40% being made by below-ground biomass [2]. Forest structural information cannot be fully exploited if species information is missing. Indeed, precise species identification is a crucial variable for forest inventories [3], for the quantification and monitoring of biodiversity [4], and for the study of forest ecosystems and habitats [5]. Accurate tree species identification is the information that is most frequently requested by the forestry industry and by government organisations in the elaboration of forest inventories [6]. However, it is economically unfeasible to sample large numbers of trees in the field. As a consequence, remote sensing is essential not only to supply forest inventories with primary data [7][8][9], but also to address environmental information needs. High-resolution optical imagery is the most common source of remotely sensed data for species identification. For such images, pixel sizes that are larger than a tree crown may contain foliage from more than one species, leading to ambiguity and frequent identification errors. A small pixel size (e.g., 20-40 cm) implies that a tree crown would necessarily be composed of multiple pixels, leading to a situation where individual pixels will be spectrally representative of neither the tree nor the species. 
The pixels composing each crown thus include intra-specific spectral variability, which reduces classification accuracy [10]. For this reason, most studies pertaining to tree species identification use object-based classification, which is frequently denoted Individual Tree Crown (ITC) segmentation or delineation [11]. Once the tree crown is delineated, the individual pixels are extracted and summary statistics (e.g., mean spectral signature) and a gamut of features (spectral, spatial, and contextual features, among others) are calculated per crown, which reduces intra-specific spectral variation [12,13]. Optical imaging sensor methods also suffer from major shortcomings when used for species identification at the individual tree level. First, passive optical methods provide information regarding the top of the canopy, especially in dense broadleaf cover, but yield little to no information regarding vertical canopy structure [2]. The second shortcoming of optical methods is related to the anisotropy of reflectance (dependent upon sun-sensor viewing geometry relative to the object) causing different spatial radiometric patterns of the spatial objects (e.g., sun-light vs. shaded crowns) [14,15]. The fact that the bidirectional reflectance distribution function (BRDF) effect is dependent upon the species further complicates the retrieval of information from optical imagery. Within the broad range of remote sensing technologies that are available to practitioners, airborne laser scanning (ALS) is particularly well adapted to precision forestry, as it provides detailed structural information (given the laser pulse capacity to penetrate closed canopy) [16]. Linear ALS systems are composed of a laser emitter (or multiple emitters in the case of multispectral systems) that produces pulses, which are emitted at a repetition rate of hundreds of kHz. The detector requires a flux of at least 500-1000 photons to register the backscattered laser energy from the target [17]. The detector generates an electronic signal directly and linearly proportional to the backscattered laser energy from the target, hence the name "linear ALS." The width and amplitude of the returned energy pulse depends upon the target characteristics. Proprietary algorithms transform the multiple peaks of a given waveform into discrete multiple returns. Semi-porous targets such as forest canopies can backscatter multiple peaks corresponding to different components of the canopy (top of crown, leaves, branches, trunk, ground). The use of ALS data addresses some passive optical sensor limitations that are related to tree species identification. Give that it is an active sensor, lidar signal acquisition is permanently in the hot-spot configuration (the emission angle of the laser pulses is always the same as the viewing angle), which resolves many anisotropy issues [18]. The ALS returns penetrate the canopy to various depths, sometimes reaching the ground. Therefore, the spatial ALS information (i.e., X, Y and Z coordinates) provides species-related structural information concerning the crown, branches and leaves [19]. This canopy penetration and ground resolution capability is the major advantage of linear ALS over other remote sensing methods in the production of enhanced forest inventories. ALS return intensity values, which are measures of the backscattered laser energy, bring supplemental information about tree species. 
Intensity values are not only strongly related to the type of foliage [20] and its spectral signature, but the size, orientation, density, and clumping of leaves as well [21,22]. One of the main disadvantages of airborne lidar systems is that there are still many unanswered questions regarding the algorithms that are used to calculate ALS intensity values (they are proprietary to the various instrument manufacturers) and they preclude the comparison of lidar acquisitions that are provided by different sensors and over-flights. Additionally, the linear ALS system used in this study is monospectral, which precludes the use of vegetation indices to improve classification accuracy. In order to address the latter point, multispectral ALS systems is one of the latest major innovations to have developed over the past few years [23]. The three channels with different wavelengths provide additional intensity features and permit the calculation of ratios analogous to NDVI. The intensity comparison issues between different surveys remain with MSL however, as with all lidar systems. ALS technology is undergoing rapid evolution. One of the most important variables in ALS acquisition specifications is the point density or the average number of returns per m 2 . This density depends upon flight altitude and flight speed for a given pulse repetition frequency. Therefore, there is a direct relationship between the cost ($/km −2 ) of acquisition and the point density. There is also a relationship between ALS point density and classification accuracy for ITC methods. Conversely, methods using the Area-Based Approach (ABA) provide good results at lower point densities and results accuracy that do not necessarily improve proportionally with point density [24]. Even if the importance of these relationships is well known, it remains unclear what effect ALS point density exerts on ITC identification. As soon as the first commercial linear ALS systems appeared in the mid-1990s, researchers also started to explore the use of photon-counting instruments, i.e., Single-Photon Lidar (SPL), to address some shortcomings of conventional or linear ALS systems such as the high cost to obtain coverage of an area, even when compared to optical imaging sensors. SPL covers larger areas at comparable densities at much higher flight altitude, potentially reducing costs [25]. SPL also provides opportunities for more frequent over-flights. SPL instruments utilise beam energy in a more efficient manner than linear ALS; therefore, the former obtains a higher point density for a given flight altitude than the latter [26]. Alternatively, SPL achieves acceptable point densities while flying at a higher altitude, thereby permitting greater coverage [27]. SPL systems use a laser that is split into a 10 × 10 array of "beamlets" with the return energy being acquired by a 10 × 10 array of single-photon sensitive detectors [28,29]. The intensity value for each SPL return pulse is not well defined and is derived differently from that of linear systems. For example, the data provider for the SPL over-flight that was used in this study uses the pulse width of the returned energy as an analogue of linear lidar intensity. In addition, the return distribution, such as the ratio of first to second returns, is different in the SPL case when compared to linear ALS systems. 
The short recovery time of the detector is a crucial element of SPL technology, as it enables multiple close-by photon measurements along the beam's path for each laser pulse that is emitted. The high sensitivity that is required of the pulse detector to detect single photon returns from the surface also makes it susceptible to background noise; the most important noise source is solar illumination reflecting off said surface [30]. This background noise is proportional to the instrument Field-Of-View (FOV) and to the receiver telescope aperture, both of which are reduced in the type of sensor that is used for this paper. Noise filtering algorithms, such as the Differential Cell Count method, are used to further reduce interference from background solar illumination [31]. Several ABAs that have been developed under linear ALS systems were adapted for use with SPL data to map forest attributes. ABA metrics that were derived under multispectral ALS and SPL systems were comparable [32,33]. The SPL data resulted in slightly better estimates for all canopy structural variables compared to multispectral linear ALS, except for basal area. Since SPL covers 590 km 2 h −1 compared to 50 km 2 h −1 for multispectral linear ALS at equivalent point density, SPL sensors clearly provided a productivity advantage over linear ALS systems for methods using ABA [34]. However, the classification performance of SPL for tree species identification has yet to be ascertained since the SPL point cloud exhibits both a different vertical distribution as well as differences in the ratio between first and second returns compared to linear mode systems. The main objective of this study is to compare the tree species identification capabilities from three datasets that were acquired respectively with linear monospectral ALS, linear multispectral ALS, and an SPL system. To our knowledge, this is the first study to compare these three types of ALS systems when used for species classification at the individual crown level. In particular, we wish to verify whether the methods that were developed for linear ALS data perform as well with SPL data. Species identification methods were tested at three classification levels: broad species types (hardwood, HW vs. softwood, SW), narrow species groups (e.g., pines, spruces), and specific tree species. A secondary objective was to determine whether an increased number of species identification features that were derived from multispectral lidar or the higher point density of SPL provides greater classification accuracy compared to the standard mono-spectral linear ALS baseline. Finally, additional specific questions were addressed: Are the most relevant features the same for the three sensor types, or do they differ significantly? Does feature selection affect classification accuracy in the same manner for these three datasets? Study Area The Petawawa Research Forest (PRF) is a 10,000 ha forest that is situated about 200 km NW of the City of Ottawa, ON, Canada. PRF is composed of mixed-mature natural stands as well as plantations and is representative of the Great Lakes-St. Lawrence Forest type [35]. Common species include eastern white pine (Pinus strobus), red pine (Pinus resinosa), trembling aspen (Populus tremuloides), paper or white birch (Betula papyrifera), yellow birch (Betula alleghaniensis), red maple (Acer rubrum), and sugar maple (Acer saccharum). Both boreal species and shade-tolerant hardwoods exist throughout the area. 
The climate of PRF is characterised by a mean annual temperature of 5.6 • C (−11.8 • C in January, 20.3 • C in July), and average annual precipitation of 859 mm, with 682 mm falling as rain and 182 cm as snow [36]. The research forest lies on the southern edge of the Precambrian Shield, with elevations ranging from 140 to greater than 280 m above sea level [37]. Its gentle topography is strongly influenced by glaciation and post-glacial outwashing. Three types of terrain characterise the PRF: extensive sand plains of mostly deltaic origin; imposing hills with shallow sandy soils, as well as bedrock outcrops; and gently rolling hills that are composed of moderately deep, loamy sand that contains numerous boulders. Figure 1 shows the extent of the common study area (line in red) for the three datasets used in this study. Airborne Laser Scanning Data Three different datasets were used for this study. First, a monospectral linear ALS (Riegl 680i; 1550 nm) was flown in 2012 (hereafter, designated as ALS12), a multispectral linear ALS (Optech Titan; 532, 1064 and 1550 nm) was flown in 2016 (MSL16), and a photoncounting lidar (Leica SPL100; 532 nm) was acquired in 2018 (SPL18). Information on the respective acquisition parameters and sensors is provided in Table 1. In an ideal situation, the three datasets would have been acquired simultaneously and then compared. Logistical and financial considerations rendered this unpractical. The main difference between the three datasets is the altitude flown during acquisition; 3760 m for SPL compared to 600-750 m for the linear systems. Despite the much higher flying altitude, the point density of SPL remains much higher than that of the other sensors owing to the principle of single photon measurements. The triple-beam configuration of the MSL system provides increased point density (similar to SPL18) when compared to the monospectral ALS system. The use of the 532 nm green wavelength in the SPL system, much like the green channel of the MSL system, hampers pulse penetration in thicker canopies, as witnessed by the much lower point density of the MSL16 green channel compared to the IR channels. Prior to our use of the information, the ALS12 and SPL18 datasets were processed by their respective vendors to classify the ground/non-ground points using proprietary software. For the multispectral dataset (MSL16), the three channels (C1, C2 and C3) were combined into a single point cloud (C321) for the calculation of the geometric feature set. In contrast, intensity features were calculated per channel, as pooled intensity features would be meaningless. Normalised differences between channels were computed to produce NDVI-like features, as found in [38]. Transects that were taken of the same area, but from the three different point clouds, provide an example of monospectral ALS (ALS12- Figure 2 (top)), multispectral ALS (MSL16- Figure 2 (middle)), and photon-counting lidar (SPL18- Figure 2 (bottom)) datasets. Photon-counting lidar featured a high point density when compared with the other two datasets despite being flown at a higher altitude. Differences in the middle-story and ground hits can also be seen between the three datasets. Methods For the purposes of this study, all processing was performed using in-house software developed in the Python and R languages. 
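Returning to the channel-difference features mentioned above for the MSL16 data, the snippet below computes a normalised difference between per-crown mean intensities of two Titan channels. This is a sketch under our own naming assumptions (the arrays standing for return intensities of two channels within one crown); it is not the exact formulation used in [38].

```python
import numpy as np

def normalized_difference(i_a: np.ndarray, i_b: np.ndarray) -> float:
    """NDVI-like index from per-crown mean intensities of two lidar channels.

    i_a, i_b: 1-D arrays of return intensities from the same crown, one per
    channel (e.g., a NIR channel and the 532 nm green channel).
    """
    mean_a, mean_b = float(np.mean(i_a)), float(np.mean(i_b))
    denom = mean_a + mean_b
    return (mean_a - mean_b) / denom if denom != 0 else 0.0

# Hypothetical usage with intensities extracted for one delineated crown:
# nd_c2_c1 = normalized_difference(intensities_channel2, intensities_channel1)
```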
This processing ranged from the initial data layers, i.e., the point cloud, digital terrain model (DTM), digital surface model (DSM), and canopy height model (CHM), to feature extraction and balanced Random Forest classification [39]. The species identification methods that are proposed in this article were initially developed for operational deployment with an industrial partner over large (e.g., 200,000 ha) commercial forests. Given this criterion, processing speed was one of the primary drivers guiding method development. This explains, for example, the use of raster-based methods rather than more sophisticated point cloud methods for individual tree crown segmentation, together with the need for feature selection in our Random Forest models. We (and others) [40] have found that parsimonious classification models perform better when they are applied to a large study area, while also making the analysis of selected features easier to implement. Individual Tree Crown (ITC) Segmentation As the MSL16 point cloud was not processed to classify ground points, the 2012 point cloud was used to produce the reference DTM. This was generated with Whitebox Tools [41] at a 25 cm-resolution using a Delaunay triangular irregular network fitted to the lidar ground points. The DSM for both the ALS12 and MSL16 dataset was processed using the same algorithm. DTM, DSM, and CHM rasters were provided with the SPL18 dataset. SEGMA (https://en.geophoton.ca/t%C3%A9l%C3%A9chargements (accessed on 17 October 2021)) software v 0.3 [42] was used to delineate the ITCs from the ALS12 CHM. Within SEGMA, the CHM with XY resolution of 0.25 m is first filtered using a Gaussian filter, in which the σ (sigma) value varies proportionally to the local CHM height. Local maxima are then detected on the filtered CHM using an exclusion radius that was proportional to local CHM height; for a local maximum to be detected, a pixel must be higher than all of the pixels that are found within a radius determined by the local height. Regions are grown around these maxima until certain criteria are met, such as reaching a crown height much smaller than the local maximum [42]. At this stage, a certain number of attributes are computed, such as the height (maximum unfiltered CHM height within the crown), crown area, diameter, height-to-area ratio, vertical extent (difference between the highest and lowest unfiltered CHM height in a crown), crown ratio (vertical extent over height), circularity and eccentricity, among others. A delineation score is computed automatically as a weighted mean of these attributes. Crowns having a low delineation score or improbable proportions (e.g., an outlying value of height to area ratio) are resegmented by erosion. The final crowns are polygons that are recorded in a vector layer (shapefile) with their attributes. After automated delineation, the quality of the ITC was appraised visually by overlaying the delineated crowns onto the CHM or onto ortho-photos to ensure that delineation problems would not compromise subsequent methodological steps. Using visual analysis (which we recognise as being subjective), ITC delineation performance was generally very good, but lower in dense, hardwood-dominated forests. This may have introduced omission and commission errors when identifying tree crowns. Crown matching is required to be coherent between datasets. Therefore, crown delineation was performed using SEGMA on the ALS12 dataset. 
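To illustrate the variable-window local-maximum idea underlying the delineation step just described (smoothing of the CHM followed by maxima detection with a height-dependent exclusion radius), the sketch below shows a simplified raster implementation. It is not SEGMA itself: the fixed smoothing sigma, the height-to-radius relation, and the function names are illustrative assumptions, and the region-growing and re-segmentation stages are omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def find_treetops(chm: np.ndarray, px: float = 0.25,
                  min_height: float = 2.0) -> list[tuple[int, int]]:
    """Simplified variable-window treetop detection on a CHM raster.

    chm: canopy height model in metres; px: pixel size in metres.
    The exclusion radius grows with local height, loosely mimicking the
    height-proportional parameters described above.
    """
    smoothed = gaussian_filter(chm, sigma=1.0)           # fixed sigma for simplicity
    tops = []
    rows, cols = chm.shape
    for r in range(rows):
        for c in range(cols):
            h = smoothed[r, c]
            if h < min_height:
                continue
            radius_px = max(1, int((0.1 * h) / px))      # assumed height-to-radius rule
            r0, r1 = max(0, r - radius_px), min(rows, r + radius_px + 1)
            c0, c1 = max(0, c - radius_px), min(cols, c + radius_px + 1)
            if h >= smoothed[r0:r1, c0:c1].max():        # local maximum within the window
                tops.append((r, c))
    return tops
```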
These crown polygons were subsequently used to extract features from all three datasets. Visual inspection of the ALS12 crown outlines (Figure 3a,b) overlaid on the MSL16 (Figure 3c,d) and SPL18 (Figure 3e,f) data showed that most properly delineated crowns still agreed well with the crowns visible in the CHM of the two more recent datasets.

Feature Calculation
Geometric and intensity features derived from the ALS points of each delineated crown were used to identify species. The geometric features (based on X, Y and Z lidar data) included tree proportions, vertical crown profile, and porosity to laser pulses, among others. The intensity features were based on measures of central tendency (mean, median) and dispersion (standard deviation) of the laser return intensities. We used a subset of the features described by [38]; these are enumerated in Tables 2 and 3. Abbreviations (Table 2): all = all returns, 1st = first returns, cv = coefficient of variation, mn = mean, sd = standard deviation, p = percentile. Abbreviations (Table 3): all = all returns, 1st = first returns, 2nd = second returns, cv = coefficient of variation, mn = mean, sd = standard deviation, p = percentile.

In the case of geometric features, it is possible to normalise each laser return elevation to height above ground by subtracting the underlying raster DTM elevation beneath each ALS return. We avoided this normalisation because it warps the 3D shape of tree crowns in the presence of terrain slope [43]. Instead, we extracted a single DTM value at each crown's centroid and used this single value to normalise all of the ALS points falling within the corresponding crown. The following steps involved extracting the laser returns for each crown and normalising them to this single DTM height. Points below 2 m above ground were discarded. In addition, all geometric features that relate to tree size, e.g., the height at the ith percentile, were normalised relative to the tree height as F_n = F / H, where F_n is the normalised feature value based upon the absolute value of F, and H is the height of the ith tree. Calculating F_n ensures that species identification remains independent of the height distribution of trees in the training samples [44]. Dimensionless geometric features, such as the ratio of crown area to height, or the slopes of the lines connecting the highest return to each of the other returns, were not transformed. No intensity normalisation was applied, since range information was unavailable for all three datasets. Our preliminary tests showed that intensity normalisation using alternative methods of estimating range (such as using the above-ground altitude of the aircraft and the scan angle as a proxy for range over the study area) had a negligible effect on the classification accuracy of our Random Forest models. Overall, a total of 34 3D features (all three MSL channels were combined into a single point cloud for the calculation of these 3D features) and 16 intensity features (65 in the case of MSL, where each individual channel was used) were computed for each tree.
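The following sketch illustrates the per-crown normalisation steps described in this section: heights are referenced to a single DTM value at the crown centroid, returns below 2 m are discarded, and size-related features are divided by tree height (F_n = F / H as reconstructed above). The array names and the choice of example percentiles are ours; this is a minimal sketch, not the authors' in-house implementation.

```python
import numpy as np

def crown_height_features(z: np.ndarray, dtm_at_centroid: float,
                          percentiles=(25, 50, 75, 90)) -> dict[str, float]:
    """Height-percentile features for one crown, normalised by tree height.

    z: raw return elevations (m) inside the crown polygon;
    dtm_at_centroid: single DTM elevation extracted at the crown centroid.
    """
    h = z - dtm_at_centroid          # normalise to height above ground
    h = h[h >= 2.0]                  # discard returns below 2 m
    if h.size == 0:
        return {}
    tree_height = float(h.max())     # H: crown height
    feats = {"height": tree_height}
    for p in percentiles:
        # F_n = F / H: size-related features made independent of tree height
        feats[f"p{p}_norm"] = float(np.percentile(h, p)) / tree_height
    return feats
```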
A second set of training crowns (n = 1109) was identified by field crews that were cruising targeted areas to achieve the proper spatial distribution of training crowns in the summer of 2015. For this campaign, field crews cruised the forest with an SX-Blue GNSS receiver that was obtained from Geneq Inc. (Montreal, QC, Canada). The GNSS receiver contained GPS, GLONASS (a Russian satellitebased navigation system), and a Wide Area Augmentation System (WAAS). The WAAScorrected geo-location was shown on a field tablet displaying the CHM raster and the delineated crown polygons. Matching was sometimes complex because an actual crown may bear little resemblance to the associated polygon; additionally, the field position may drift due to GPS positioning error. Based upon geo-location-assisted visual association between a tree in the field and its representation on the tablet, the matching crown polygon species label was added to the training crown shapefile on the tablet. The training crowns were curated using recent high-resolution aerial imagery to remove felled or dead crowns. Throughout the sampling campaign, care was applied to gathering trees of different heights, from 5 m to height at maturity for each species. The overall goal was to collect an equal number of sample crowns per species; this proved to be difficult, as abundance varied between species and per stand. Three crowns were removed during a visual quality control step and species with fewer than 40 exemplars were removed. The resulting number of sample crowns per species is presented in Table 4. Due to the complexity and expense related with field training crown selection, it was unfeasible to conduct campaigns for the MSL16 and SPL18 acquisitions; hence, the ALS12 training crowns were used as a reference in this study. Classification Groupings We performed four different groupings to compare classification across the three datasets: two tree types (HW/SW); four genera with four species; five functional groups; and a species grouping with twelve species, as seen in Table 5. Differences in species counts reflect the fact that some features in the MSL and SPL datasets cannot be calculated for those crowns; we cannot use crowns with missing values to train our Random Forest models; therefore, they are discarded. This is likely due to the tree having been felled during the time interval between initial acquisition and delineation (2012), or to differences between the features that were calculated depending upon the lidar system being used. For example, the green channel of the MSL system has been shown to attain a lesser degree of penetration than do the other two IR channels [45], resulting in some crowns having no second returns in the MSL acquisition. One type of 3D feature (RM from Table 2) uses second returns, so these features cannot be computed for crowns without second returns. A similar problem exists for SPL systems, since far fewer second returns are recorded by these systems than by linear ALS systems (see Figure 1) [46], resulting in crowns being discarded as well. Random Forest Training and Feature Selection The species were identified using a Random Forest (RF) classifier. This classification method offers several advantages compared to other methods. It leads to the best or at least equivalent accuracy when compared to other methods [47]. RF has been found to be well suited for several tree species classification studies [6,22,38,48]. RF has been shown to not rely upon assumptions of normality and homoscedasticity. 
We applied the Shapiro-Wilk test to our datasets and found that none of the features followed a normal distribution. This lack of normality eliminates widely used parametric statistical tests, such as linear or quadratic discriminant analysis. Finally, RF is able to handle a very large set of predictors and exhibits a low sensitivity to collinearity between features [49] as well as a low propensity to over-fit the model [39]. However, it is sensitive to unbalanced data (such as ours), that include large discrepancies in the number of samples per class. Various sub-sampling strategies can be applied to the training set to balance the classes for model training [50]. The number of geometrical and intensity features that were calculated (as per Section 3.2) resulted in a large feature set. Using the complete feature set (high dimensionality), especially given the paucity of training crowns per species (N = 35 in the worst case, after crowns with missing features are removed), can result in a reduction in prediction power, over-fitting, and a reduction in the generalisability of the models. These problems, particularly the loss of predictive power, exemplify the Hughes effect, or what [51] referred to as "the curse of dimensionality." Due to the number of features that were calculated, we proceeded with two widely used feature reduction methods: an initial ranking and filtering of all features [52], followed by stepwise selection of the final features. The first criterion that was selected in the initial feature filtering step was the mean decrease in accuracy (MDA) function, found in the Random Forest package [53] of R [54]. Only features with an MDA > 0.1 were retained. Next, features with a correlation > |0.9| with another were removed, retaining the one having greater usefulness (largest MDA value) in the inter-correlated pair. The Variable Selection Using RF (VSURF) algorithm [55] was then used to perform the final feature selection. VSURF is a wrapper-based algorithm that uses the MDA information contained in the RF model to select features. The desired number of features is ranked based upon MDA scores over 50 permutations; features that include negligible or zero contributions to the classification are removed. The remaining variables are then tested in a variety of RF models with the most accurate model being retained; MDA is only used initially to rank the features. An ascending stepwise function is then used, which removes redundant features based upon their contributions to the out-of-bag (OOB) error. The threshold for rejecting a feature is based upon a function that minimises OOB error. These remaining features were subsequently used to train the RF models for each dataset. As a result, the retained features differed for each dataset, depending upon the usefulness of the features in their respective datasets and their degree of inter-correlation. Retained features were used to construct the final RF model for each species grouping and for each dataset. Due to the heuristic nature of the VSURF algorithm, the resulting feature set is not necessarily the best set of features, but rather a good one to train our models [56]. The resulting models were run 20 times on the training data to calculate the average overall accuracy. A classification was performed and its accuracy was assessed using three feature sets for each dataset: (1) all selected features; (2) the 25 best features; and (3) the 15 best features. 
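The selection pipeline described above was run in R (randomForest MDA scores followed by VSURF). A rough Python analogue of the first two filtering stages is sketched below under stated assumptions: permutation importance stands in for the mean decrease in accuracy (its scale differs, so the 0.1 threshold is carried over for illustration only), class_weight="balanced_subsample" stands in for the balanced Random Forest sub-sampling, and VSURF's stepwise stage is not reproduced.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

def prefilter_features(X: pd.DataFrame, y, mda_threshold=0.1, corr_threshold=0.9):
    """Approximate the initial importance ranking + inter-correlation filter."""
    rf = RandomForestClassifier(
        n_estimators=500,
        class_weight="balanced_subsample",  # crude stand-in for balanced RF sub-sampling
        oob_score=True,
        random_state=0,
    ).fit(X, y)

    # Permutation importance plays the role of the R mean-decrease-in-accuracy score.
    imp = permutation_importance(rf, X, y, n_repeats=20, random_state=0)
    scores = pd.Series(imp.importances_mean, index=X.columns)
    kept = scores[scores > mda_threshold].sort_values(ascending=False).index.tolist()

    # Drop one member of every highly correlated pair, keeping the higher-ranked one.
    corr = X[kept].corr().abs()
    selected = []
    for feat in kept:                      # kept is ordered best-first
        if all(corr.loc[feat, s] <= corr_threshold for s in selected):
            selected.append(feat)
    return selected, rf.oob_score_
```

The final models were then refit and scored repeatedly (the paper reports averages over 20 runs); in this sketch that would amount to looping over random_state values and averaging rf.oob_score_, on the assumption that out-of-bag accuracy is an acceptable per-run score.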
Finally, to understand the respective roles of the 3D and intensity features, we report classification accuracies resulting from using only 3D features, only intensity features, or all features. Furthermore, to explore the advantages of using a combination of systems, and acquisition over multiple years, we combined the features of all systems into a single classification. Results The RF classification accuracies were compared for four different species groupings, three ALS systems (ASL12, MSL16, SPL18), and four broad feature groupings: 3D only; intensity (I) only; all the features of a given ALS system; and all the features of all the ALS systems pooled (Table 6). This comparison was performed following an initial variable selection (based upon MDA, inter-correlation, and VSURF). The best accuracies were achieved for the first level of classification, i.e., the type distinction between hardwood and softwood species, while the lowest accuracies occur at the 12 species level. At the finest classification level, there was a noticeable difference in accuracy between most hardwood (in the grey background of Table 7) and softwood species (in the white background of Table 7) for the best model (all sensors, all features) with eastern larch (Larix laricina) being a notable exception to this pattern. This result was not necessarily surprisingly, given that larch is a deciduous softwood. Multispectral ALS (3D + intensity features) produced the best results in all species groups, and all feature subsets, while SPL ALS displayed a systematically lower accuracy compared to the two other types of sensors. The decrease in performance was almost always greater going from standard ALS to SPL, highlighting the different nature of SPL compared to the two linear ALS systems. However, both linear ALS systems (standard and MSL) generally produced comparable results, with a small advantage being shown by the MSL sensor in most cases. Table 6. Random Forest classification accuracy in % (20 runs) broken down by 3D and intensity (I) features, pooled by system (All), and pooled across all systems and features (ALL ALS, All). The relative information contents of the 3D and intensity features varied across systems. Unsurprisingly, the three-wavelength intensity features of MSL provided greater species identification performance than did its 3D features. The reverse was true in the case of the two other systems. In most cases, the contrast between the discrimination power of the 3D and the intensity features was greater in SPL, with the 3D features performing much better than the intensity features. The SPL models displayed two fewer intensity (I_) features than the standard ALS, given that they were more strongly correlated and were removed in the feature selection process. It must be reiterated that SPL intensity is an ill-defined quantity and care must be taken in the interpretation of results that are derived from it. In all cases, the single intensity channel of standard ALS provided greater accuracy than that of the SPL system, while the accuracy that was provided by the 3D features of SPL was similar to that of the other two sensors, or slightly lower. For each ALS system and each species grouping, the greatest accuracy was attained when the 3D and intensity features were combined. For the simplest classification level (hardwood vs. 
softwood), the pooled 3D and intensity variables did not feature substantially greater accuracy compared to that of the best subset (intensity-only or 3D-only, depending upon the case). For the most complex level (12 species), the contrast was greater, particularly in the case of standard ALS, where the numbers rose from 38.9% (3D-only) to 50.7% (all). ALS12 Combining all the features from all the systems improved the accuracy in all cases but one (type discrimination using all available features). This improvement, in general, was about 5% compared to using MSL only, except for the classification of tree type. Figure 4 shows the feature rankings for the 12 species-all sensors-all features model that were ordered by Mean Decrease Gini and which were produced with the varImpPlot function of the Random Forest package in R. The Mean Decrease Gini (unitless) is the mean of a feature's total decrease in node impurity, weighted by the proportion of samples reaching that node in each individual decision tree in the Random Forest. It is a measure of how important a feature is for classification accuracy across all the trees in the Random Forest. The relative ranking of the features is of interest in these Figures. The suffix following the variable name of each feature refers to the dataset from which the feature was calculated. Slope features figured amongst the most important, as did both green channel-based multispectral vegetation indices. The first return intensity dispersion coefficient from standard ALS is the most significant feature for the 12 species classification. Figures 5 and 6 break down in order of importance the features by 3D and intensity, respectively. In most cases, more parsimonious classification models, i.e., using only the best 25 or 15 features, displayed only a slight decrease of accuracy compared to using all of the pre-selected features. This decrease was very small (≤0.5%, except for SPL) for the type level, and more apparent, while rarely exceeding 2% for the other classification systems. Our results indicate that the more complex sensors (MSL and SPL) did not substantially improve the performance of our models, with the SPL models being the least accurate in all cases. Factors Influencing Tree Identification Accuracy The results that are presented here represent the first time that the classification accuracy of automatically delineated ITC was directly compared amongst single ALS, multispectral ALS and SPL systems. The main factors influencing the classification accuracy include system type (ALS, MSL, SPL), the type of feature that is being used (3D, I) and the number of species classes that need to be identified. The richer I feature set that was provided by the three-channel MSL (26, compared to 8 for ALS, and 6 for SPL) resulted in higher classification accuracies across all cases than using 3D features only. This is consistent with results that were found by [6,36] using the same MSL sensor. The best results across all the system types are obtained when combining 3D and I features with the MSL system, which once again featured the highest accuracies of all three system types. The MSL overall feature set (45) was also richer compared to ALS (31) and SPL (23). The NDVI-like features that were provided by MSL consistently emerged as the top 10 most important features that are selected by the Random Forest models (e.g., Figures 4 and 6). 
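Figure 4's ranking was produced with varImpPlot in R; an approximate Python counterpart uses the impurity-based importances of a fitted forest. Note that scikit-learn normalises these values to sum to one, so they are not the same unitless Mean Decrease Gini totals reported by R, only the same kind of ranking. A minimal sketch:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def rank_by_gini_importance(X: pd.DataFrame, y, top_n=15):
    """Rank features by impurity-based (Gini) importance, analogous to varImpPlot."""
    rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
    return (pd.Series(rf.feature_importances_, index=X.columns)
              .sort_values(ascending=False)
              .head(top_n))
```

Applied to the pooled feature table, a ranking of this kind would place features such as I_DI_1st_sd_12 and the MSL green-channel indices near the top, mirroring Figure 4 (these feature names are the ones quoted in the text, not outputs of the sketch).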
Furthermore, larger numbers of features increase the number of features that remain after the correlation filter is applied, which provided more information when training our model. SPL's higher point density does not seem to mitigate its limitations when classifying species. As shown in Figure 2 (bottom), the point cloud that was provided by SPL over dense canopy is more akin to the photogrammetric point clouds that are obtained through stereo image matching, with most of the returns being provided by the uppermost part of the canopy and composed of singleton returns. The distribution of returns (first vs. second) is very different from that for linear ALS, with MSL having almost four times the number of pulses with multiple returns than does SPL [46]. A possible explanation for this observation is that data acquired with SPL systems require extensive noise removal for daytime acquisitions [31]. Most methods for noise suppression in SPL are based upon the elimination of isolated points, which potentially removes signal photons. The remaining points are clustered and, therefore, are likely to be redundant. Spurious return filtering is not required for linear ALS systems (except for the occasional very high or low points). Lastly, the positional precision of the SPL sensor (Leica SPL100, Leica Geosystems Ltd. (North America), Lachine, QC, Canada) has been shown to be weaker than that of the MLS sensor (Optech Titan, Teledyne Optech, Toronto, ON, Canada) that was used in this study [46], which may lead to the "blurring" of 3D features. As was the case for the intensity features that were derived from the linear ALS, the precise interpretation of the intensity values that were provided by SPL was also problematic for reasons similar to those evoked for 3D features. Linear ALS systems detectors produce voltage that is linearly proportional to the number of photons being recorded [30]. There are still many unanswered questions regarding the algorithms that are used to calculate ALS intensity values (they are proprietary to the various instrument manufacturers) and they preclude the comparison of lidar acquisitions that are provided by different sensors and over-flights. SPL detectors yield a binary response to incoming photons, theoretically precluding the calculation of an intensity value for each (single photon) return. It is thus approximated by computing a measure of local point density for each cloud [57]: the detector in the SPL system can register multiple single-photon hits (from the same pulse) in each channel and sum the output to form an analogue value of intensity for each return [46]. This ambiguity exacerbates the existing limitations of using linear ALS intensity values in classification models, as described above. Comparing our results to those of other studies is difficult, given that most (80%) of the 97 studies that were analysed in a review by [56] classified four or fewer species classes. Furthermore, several species identification papers use a manual or semi-manual process for delineating crowns and combine other datasets, such as optical and hyperspectral imagery, with the lidar data (e.g., [58,59]. Additionally, there are very few species identification studies using MLS or SPL systems at the individual tree crown level. Comparing study accuracies relative to each other is difficult since the number of species, species included, the type of forest biome, and different acquisition parameters can vary so much between studies. 
The Number of Categories Adapted Index (NOCAI) has been proposed as a means of enhancing comparability between tree species classification studies [60]. It is calculated by dividing the accuracy that is obtained for a given model by the expected accuracy of randomly assigned tree species. The expected accuracy is modelled by 1/k, with k representing the number of species classes for a given study. Higher values of NOCAI indicate a better performance by the classifier. The authors of [36] achieved an accuracy of 76% (NOCAI = 7.6) for 10 similar species and 95% (1.9) for type (HW/SW), while [54] obtained a similar accuracy of 77% (7.7) for 10 species in Sweden, and [42] obtained an accuracy of 88% (5.3) on a subset of 6 needleleaved species, with all of these studies using the same MSL system. The results for the best 12 species classification that were obtained in this study was 58% (7.0). While the accuracies in the aforementioned papers are much higher than the ones that were obtained in this paper for the highest number of species classes, one difference that distinguishes them from our method is the manual delineation of tree crowns rather than automatic delineation that was used in this paper. Furthermore, the aforementioned studies used nearly double the number of training crowns that were used in our study. However, the results for the coarser level groupings, the functional group 75% (3.8), and HW/SW 91% (1.8) are consistent with the studies mentioned above. The authors of [6] achieved 86% (2.6) accuracy and [60] achieved 89% (2.7) accuracy for three species in Finland, once again using the same MSL sensor. Notwithstanding the difference in forest structure between Finland and Canada, which makes automatic delineation less challenging to perform accurately, our results at the four-genera level with MSL at 79% (3.2) accuracy compare with the results obtained in these two papers, especially as most of our genera classes contained more than one species. Our results differ from those reported in the former papers, in which the authors found that MSL performs better than ALS when more species were classified. More generally, our results are consistent with the survey by [56], who found that across numerous studies, classification accuracy decreases with the number of species classes being considered. In addition, the average NOCAI that was calculated for the best-performing studies compare favourably to those that we obtained in the two-species (1.9 in review average vs. 1.8 in this paper) and four-species The inferior performance of SPL for species identification that was found in this paper is contrary to studies that are based upon ABA, which found that SPL is comparable to MSL [34] and ALS [61] when it comes to calculating forest inventory parameters (e.g., Lorey's height, basal area, stem volume, aboveground biomass). This difference can be explained by the fact that ABA uses statistical methods that are based upon the height distribution of the lidar returns, rather than the type of return (first vs. second under our method used in our example). The discrepancies in penetration depth between ALS and SPL are not exploited under ABA. Furthermore, a classification problem (species identification) is fundamentally different from modelling structural attributes with regression models, or others. When combining the features from all three datasets (last column of Table 6), we see that it improved results by about 5% in all cases, except for the type classification (HW/SW). 
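Since NOCAI is simply the observed accuracy divided by the chance accuracy 1/k, the values quoted in this section can be checked in a couple of lines:

```python
def nocai(accuracy, n_classes):
    """Number of Categories Adapted Index: accuracy relative to random assignment."""
    return accuracy / (1.0 / n_classes)

# Values quoted above for this study:
assert round(nocai(0.58, 12), 1) == 7.0    # 12 species, best combined model
assert round(nocai(0.91, 2), 1) == 1.8     # hardwood/softwood type level
assert round(nocai(0.75, 5), 2) == 3.75    # five functional groups (reported as 3.8)
```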
Each system likely provides a specific type of information content that is not redundant or repeated between systems, thereby increasing classification accuracy. An additional, or alternative, explanation is that having thus created a 6 year time series of data, perhaps inter-species differences in feature evolution (e.g., specific growth patterns) are captured as well. Even though multiple acquisitions on the same area by these three different systems may not be economically recommended, the temporal aspect may have made multiple acquisitions by standard ALS systems more useful as additional data for our models. When examining the relative importance of features for the 12 species classification using the combined dataset (Figure 4), slope-based features were the major single contributor. However, intensity features composed most of the top 10. It should be noted that the two MSL green channel vegetation indices (I_G_IR1 and _IR2) appeared in the top 10 features. SPL contributed the least number of features, i.e., two. When looking at 3D features only ( Figure 5), slope-based features contributed significantly, as did convex hull features from the ALS and MSL datasets. Given that the SPL data are mainly composed of first returns near the top of the canopy, a convex hull value was not computed for many crowns, reducing the value of the THREED_CH_18 feature. SPL once again contributed the least number of features to the model: two. Figure 6 shows the intensity features in isolation. Three MSL-based spectral indices were amongst the top 10 features of the model; features that were based upon the ratio of median intensity values between first and second returns contributed to the model as well. Finally, the most significant feature in the combined model and the intensity-only model was the coefficient of dispersion of first return intensity values (I_DI_1st_sd_12) from the ALS dataset. Implications for Forest Inventory Generally, the most requested output from the remote sensing acquisitions of forests consists in the species-specific size distributions of their individual trees [6]. The results that are presented in Table 7 for the 12 species classification (58% using 97 features from the combined datasets) fall short (with 70% being a reasonable threshold, in our opinion) of being sufficiently accurate for operational use. There was also a clear difference amongst most hardwood and softwood accuracies, with the accuracy of most softwood species being much higher (≥60%, except for LA) compared with hardwood species (≤40%). This illustrates the continuing challenge of accurate tree crown delineation and identification in dense, mixed hardwood forests. Higher accuracies have been achieved through the fusion of hyperspectral imagery and ALS. For example, an accuracy of 88% was obtained for eight savannah tree species [62]. The delineation of savannah trees is greatly facilitated when compared with dense natural forests. However, there are geometric and radiometric registration challenges when two different sensors are used, given that data are usually acquired at different times and from differing viewing geometries between lidar and optical systems [63,64]. Evidently, automated delineation is required for operational use. Errors of commission and omission arising from delineation, and the difficulties of identifying training crowns in the field, or label noise (see next section), are all factors that affect classification accuracy when using machine-learning classifiers such as Random Forest. 
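The hardwood versus softwood contrast discussed above is a comparison of per-class (producer's) accuracies. Assuming predictions from any of the fitted models, the per-species values behind a table such as Table 7 follow directly from the confusion matrix; a minimal sketch:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def per_class_accuracy(y_true, y_pred, labels):
    """Producer's accuracy (recall) per species from a confusion matrix."""
    cm = confusion_matrix(y_true, y_pred, labels=labels)
    with np.errstate(invalid="ignore"):
        recall = np.diag(cm) / cm.sum(axis=1)
    return dict(zip(labels, recall))

# Hardwood and softwood groups can then be compared by averaging the per-species
# values within each group (the species lists would come from Table 5).
```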
Linear ALS systems are now widely used to provide the structural information that is used to construct enhanced forest inventories, specifically with the ABA [8]. Cost is an important factor to consider, due to the large areas that need to be covered operationally. The ground that is sampled by an ALS sensor at any given time is a function of flight altitude, speed, and maximum scan angle. For ALS systems, there is a direct relationship between point density and cost. If a sensor, such as SPL, can cover more km 2 h −1 at the same theoretical point density, then there is a clear cost advantage in using SPL, at least in the case of the ABA [34,61]. As demonstrated by our results, there are apparent differences in the point cloud that was produced by linear ALS and SPL systems, resulting in lower accuracies across the board for SPL acquisitions. The structure of the returns (far fewer second returns) arising from the lower penetration of the canopy achieved by the SPL system is different when compared to linear systems [65]. When combined with the fact that features using the ratio of first vs. second returns are frequently retained in our models, this results in the lower accuracies that were recorded for the SPL system. Limitations and Research Avenues Our study revealed some limitations when we tried to apply machine-learning methodologies to a natural environment and on a large scale. The first limitation concerns the sparseness of our training data. Machine-learning classifiers, such as Random Forest, show a corresponding increase in accuracy when the sample size is increased [66][67][68]. At the 12 species classification level, some species consist of 35 exemplars, which is a very small number when compared to typical machine-learning image classification problems, where each class features tens of thousands (millions in the case of deep learning) of exemplars for each class [69]. Without mitigation measures such as feature selection (this is especially true when using balanced Random Forest models, as in this paper), the paucity of our training data would also lead to issues that are related to the aforementioned curse of dimensionality, since the ratio of training crowns to calculated features would be far too low. The noisiness, or occasional mislabelling, of our training data is another limitation of this study. The software that was used to delineate the crowns attempts to assign a precise delineation to detected tree-tops to produce the crown polygon layer depicting theoretical crowns. This layer is then used (as discussed in Section 3.3.1) to identify training crowns and to assign a species to them. Several sources of systematic error are then introduced into the model: GPS drift and difficulties in spatial orientation, which originates from relating the crowns that are generated by SEGMA to the canopy that is observed by looking upward, mean that some training species are obviously mislabelled, or suffer from label noise [70], such as two entwined crowns growing together. An additional source of training crown impurity is delineation error, especially at the edges of the crowns. If delineation is not exact, then there can be different species that are included in the training crown around its edges. Although Random Forest is shown to exhibit robustness to label noise [71], higher levels of label noise exert a subsequent negative effect upon classification accuracy [72]. 
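On the cost point raised at the start of this subsection, the area covered per hour and the resulting pulse density follow from simple acquisition geometry. The relation below is a standard back-of-envelope approximation (flat terrain, constant speed, one return per pulse) with assumed example values; it is not taken from the paper.

```python
import math

def coverage_and_density(altitude_m, speed_ms, max_scan_angle_deg, pulse_rate_hz):
    """Back-of-envelope ALS coverage figures (standard geometry, illustrative only)."""
    swath_m = 2.0 * altitude_m * math.tan(math.radians(max_scan_angle_deg))
    area_rate_km2_per_h = swath_m * speed_ms * 3600.0 / 1e6
    point_density_per_m2 = pulse_rate_hz / (swath_m * speed_ms)
    return swath_m, area_rate_km2_per_h, point_density_per_m2

# Assumed example: 1000 m AGL, 60 m/s, +/-20 degree scan, 300 kHz pulse rate
swath, rate, density = coverage_and_density(1000, 60, 20, 300_000)
# swath ~ 728 m, ~157 km2/h, ~6.9 pulses/m2
```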
As mentioned in the previous section, there are possible temporal decorrelation issues that are related to the training data acquired in 2012, while SPL was acquired in 2018. The strong results that were obtained from the MSL 2016 acquisition mitigate this possibility. The differences between SPL and linear ALS data for species classification at the individual crown level need to be investigated further, together with accuracies that must be improved across the board, to become operational at large scales. An encouraging observation from this study, however, shows that training crowns that are acquired in one year can be used in subsequent acquisitions, even when accounting for the usual intensity standardisation problem between different lidar sensors, and even with the same instrument. This shows the potential for building a library of training crowns that would be usable across different datasets when accuracy levels become high enough to be operational. To bring forward actionable species information for enhanced forest inventories at the individual crown level, future research should concentrate on improving the delineation process. Improvements in the accuracy of the delineation process should translate directly to enhanced accuracy of tree species identifications at all levels of fineness. We can also ask ourselves whether we need to delineate the entire crown exactly; perhaps a circular (or other shape) buffer around radius of a given distance from the local maximum could provide features that suffer less from the label noise effects that are caused by uncertain crown edges than are experienced currently. The temporal effect species signal that may exist for features calculated across multiple data acquisitions should also be further investigated. This does not require three types of sensors to capture this signal per se, but it would be interesting to observe whether just two ALS over-flights that are separated by a few years exhibit the same behaviour as found in this study. To reach its maximum potential usefulness, more must be known about lidar intensity (across all systems) to be able to standardise the values across acquisitions. This would surely increase the already significant classification power of intensity features for species identification. Conclusions This paper compared the performance in tree species identification achieved by three different lidar systems, including multispectral and single-photon instruments, at the individual tree crown level, using the same training crowns and methodology across the three datasets. MSL provided the greatest species identification accuracy across all the groupings, while SPL displayed the lowest. In the case of the combined dataset, MSL provided more intensity features, while ALS and SPL provided mostly 3D features. When the results were broken down by feature type (3D vs. I), we found that geometric features performed better than intensity features for the monospectral linear and single-photon instruments. As expected, the enhanced intensity features of linear multispectral lidar performed better than the geometric features, even with the enhanced point density that had been acquired by the three laser beams in that particular instrument. In all cases, the combined geometric and intensity features performed the best. Single-photon lidar intensity features performed the poorest across all datasets. 
Interpreting this result is made difficult by the fact that the derivation and meaning of the SPL intensity measurements is still not well described in published research. In dense mixed forests such as PRF, hardwoods remain a classification challenge at the 12 species classification level, while softwoods are classified more accurately. Hardwoods are more challenging to delineate accurately and are more prone to identification error when selecting training crowns in the field. The low number of exemplars in certain species classes lowered the effectiveness of the Random Forest classifier, since all classes would have their training data limited by the class with the lowest count. The fact that training crown polygons were segmented and field-sampled in one year (2012) and used in subsequent lidar over-flights (2016 and 2018) is encouraging, as it means that fieldwork does not have to be duplicated to use a more recent acquisition. A novel combination of all three dataset features in a single classification model, which improved accuracy by an additional 5% in most cases, was performed as well. The success of this combination suggests that multi-temporal species differences between features arising from multiple lidar acquisitions would not necessarily have to originate from three different types of sensors, as were used in this study, but these differences in features could contribute to accuracy improvement, which merits further investigation.
BEFORE AND AFTER COMPARISON STUDY ON SINGLE-USE PLASTIC BAN IMPLEMENTATION AMONG URBAN COMMUNITY RESIDENTS IN PUDUCHERRY, SOUTH INDIA Objective: The plastic industry is one of the most rapidly growing industries in India. The objective was to assess the knowledge, attitude, and practice of plastic usage before and after the plastic ban in urban Puducherry. Methods: A community-based, non-randomized controlled trial (before and after comparison study) was conducted from May to October 2019 (6 months) at an urban field practice area of a Government Medical College in Puducherry. The study population comprised community residents aged above 18 years from the selected wards of the urban field practice areas. Systematic random sampling was employed to cover 450 community residents. Data were collected by face-to-face interview using a pre-tested, semi-structured questionnaire. The pre-ban data collection was completed in July 2019. The ban on single-use plastics was implemented in Puducherry on August 1, 2019. After 1 month of the ban (wash-out period), post-ban data collection was done among the same residents during September 2019. Data were captured using Epicollect and analyzed using SPSS 16. Ethical clearance was obtained from the Institutional Ethics Committee. Results: The mean age of the study participants was 39.64 (13.23) years, and 255 (56.7%) of them were females. The median income of the respondents was 16000 (25000). Before the ban, 403 (89.6%) carried their shopping items or products in plastic bags provided by the shopkeeper, whereas post-ban this fell to 102 (22.7%). The mean KAP (knowledge, attitude, and practice) score before the ban was 9±3.8 (95% CI 6.6–9.2), and after the ban the mean score increased to 17.2±1.5 (95% CI 16.2–18.4). A paired t-test between the pre- and post-ban KAP scores was statistically significant (p<0.001). Conclusion: Most participants were aware of both environmental and health hazards from plastics and supported the single-use plastics ban.
INTRODUCTION In the South Asian region, plastic represents the third highest proportion of municipal solid waste. This is significant because poor management of such solid waste can contribute to a decline in the quality of air, water, and soil [1]. The most appropriate intervention to reduce such waste is to target a change in consumption behaviour [2]. Plastics are highly preferred because they are light, durable, resistant, cheap, and affordable, benefitting individuals but placing a burden on an entire society when it comes to their disposal [3,4]. Further, single-use plastics are in vogue but pose an environmental threat owing to the nature and duration of their decomposition. They may take 100-1000 years to decompose, and most are non-biodegradable, breaking into smaller particles called microplastics, which may contaminate water or soil and cause many environmental and health hazards. Burning of plastics may contaminate the air by releasing harmful gases. The cost of plastic, from production to decomposition, is huge, and only 9% of plastic ever produced has been recycled; the rest cumulatively continues to pollute the environment. Plastics cause physical nuisances such as choking drains, contribute to the mosquito menace by acting as breeding grounds for mosquitoes [5,6], and can harm food by releasing certain chemicals when used to package hot edibles, for example, styrene, a known carcinogen, and phthalates and bisphenol, which are associated with diabetes, heart disease, and liver disease [7]. Therefore, it is imperative to turn to healthier alternatives. To curb plastic waste, the Government of India has formulated the Plastic Waste Management Amendment Rules (2021). Accordingly, the permitted thickness of plastic bags was to be increased to 75 µm from 50 µm with effect from September 30, 2021, and further to 120 µm from December 31, 2022. The rules also encourage the 4Rs: Reduce, Reuse, Recycle, and Recover [8,9]. Since the ban in Puducherry is recent compared with its neighboring states, it is of crucial importance to understand consumers' knowledge, attitude, and practice regarding plastics usage and their opinion on the government's banning policy. Most research evidence on plastics usage is from developed countries [5,10], and there is a dearth of information from developing countries like India. The current study focuses mainly on consumers, so that the results will be helpful in planning future needs and awareness-generation strategies for effective implementation of the law, which would help reduce the consumption of plastic in the future. With this background, this community-based, non-randomized controlled trial was conducted to assess the knowledge, attitude, and practice of plastic usage before and after the plastic ban in Puducherry.
METHODS A community-based, non-randomized controlled trial (before and after comparison study) was conducted from May to October 2019 (6 months) at an urban field practice area of a Government Medical College in Puducherry. The study population comprised community residents aged above 18 years from the selected wards of the urban field practice areas under a Government Medical College and hospital in Puducherry. Community members who were not permanent residents of the study area, individuals who were not available even after three household visits, unstable or terminally ill patients, and known cases of major psychiatric disorders were excluded from the study. Considering a prevalence (p) of 40% from the previous literature [11], an absolute precision of 5%, and a non-response rate of 10%, the sample size was calculated using the formula recommended in the "WHO Manual for sample size determination in health studies, 1999" [12]. The estimated minimum required sample size was 423, which was rounded up to 430. The list of households in the field practice area was taken as the sampling frame, and using systematic random sampling, every 189th household was included in the study. An adult participant (preferably the head of the family) in each selected household was interviewed. A pre-tested, face-validated, semi-structured questionnaire was used for the face-to-face interviews. The study tool comprised two parts: questions related to socio-demographic details in Part I, and knowledge, attitude, and practice toward the usage of single-use plastics in Part II. The pre-ban data collection was completed in July 2019, and the single-use plastic ban was implemented in Puducherry on August 1, 2019 [13]. After 1 month of the ban (wash-out period), post-ban data collection was done among the same residents during September 2019. Data were captured using Epicollect and analyzed using SPSS 16. Quantitative variables were summarized as mean (standard deviation) or median (IQR), and qualitative variables as percentages and proportions. Tests of significance, namely the chi-square test and the paired t-test, were applied; p<0.05 was considered statistically significant. Ethical clearance was obtained from the Institutional Ethics Committee before the commencement of the study. RESULTS The mean age of the study participants was 39.64 (13.23) years, of whom 255 (56.7%) were females. The median income of the respondents was found to be 16000 (25000). The number of family members was four in 129 (28.7%) of the households in the study area and three in 113 (25.1%). Most of the houses in the study area, 402 (89.3%), were pucca houses. The sociodemographic description of the study participants is given in Table 1.
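A hedged sketch of the two quantitative steps just described: the single-proportion sample-size formula n = Z^2 p(1-p)/d^2 inflated for 10% non-response, and the paired t-test on the pre- and post-ban KAP scores of the same respondents. The calculation below gives roughly 410 under these assumptions; the paper's figure of 423 presumably reflects slightly different precision or rounding conventions, so the sketch is illustrative rather than a reproduction of the authors' computation.

```python
import math
from scipy import stats

def sample_size(p=0.40, d=0.05, z=1.96, non_response=0.10):
    """Single-proportion sample size with a non-response inflation."""
    n = (z ** 2) * p * (1 - p) / (d ** 2)          # ~369 for p=0.40, d=0.05
    return math.ceil(n / (1 - non_response))       # ~410 after allowing 10% non-response

# Paired t-test on the pre- and post-ban KAP scores of the same respondents
# (kap_pre and kap_post would be arrays of per-person scores):
# t_stat, p_value = stats.ttest_rel(kap_pre, kap_post)
```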
Before the ban, 403 (89.6%) of the participants carried their shopping items or products in plastic bags provided by the shopkeeper, whereas after the ban was implemented this was reduced to 102 (22.7%). Before the single-use plastic ban, only 341 (75.8%) individuals were aware of the negative consequences of plastic usage, whereas after the implementation of the ban this increased to 341 (79.3%). Easy availability (255, 56.8%), followed by low cost (118, 26.2%), was found to be the most common reason for single-use plastic usage. Out of 450 respondents, nearly 416 (92.4%) were aware of the single-use plastic ban. However, only 165 (36.7%) were in favor of the ban. The most challenging factors for the single-use plastic ban stated by the residents were the lack of cost-effective and easily available alternatives (274, 60.9%) and the lack of proper enforcement of the ban (176, 39.1%). The best alternatives to single-use plastic bags, as per the respondents, were jute bags (217, 48%), cloth bags (201, 45%), and paper bags (32, 7.1%), as depicted in Fig. 1. Age, gender, educational status, and occupation significantly (p=0.01) influenced their perception of the legislation prohibiting the consumption of single-use plastics among the study participants (Table 2). DISCUSSION Nearly 56% of the participants reported that the easy availability of plastics was the reason for their usage. Hence, ensuring the availability of suitable and affordable alternatives, such as cloth bags, jute bags, or even paper bags at a subsidized rate in markets, would improve the practice of switching from plastic in case the consumer forgets to bring one from home. This strategy was effective in reducing plastic utilization by 90% and 49% in Ireland and China, respectively [10,17]. Fig. 1: Best alternatives to single-use plastic bags. The present study shows that a high proportion of the study subjects (95.1%) were aware of at least one health hazard arising from plastic use, an observation better than that reported in other studies both from India and abroad, where only 50-81.1% of the participants were aware of associated health hazards [7,14-16]. The most common reason reported for favoring the use of plastic bags in our study was insufficient alternatives to plastics, followed by the easy availability of plastics. This was similar to the observation in a study done in Delhi, where the most common reason reported was convenience for shopping [15], while a study from Ethiopia reported cheap price, ready availability, and light weight as the main reasons [16].
In the current study, about 63% of respondents were not in favor of the plastic ban; these were unskilled and semi-skilled workers. Similarly, in Delhi, 76% of housewives and 53% of the low-income group were against the plastic ban [9]. Most of these homemakers were aware of the health hazards posed by plastic bags but preferred them due to convenience. Before the ban, 77.8% of the participants were disposing of their plastic waste in open and barren areas, and this proportion declined after the implementation of the ban. This is higher than that reported by studies in Ethiopia (59.6%), Rajasthan, India (40%), and Tamil Nadu, India (43.1%), where participants littered plastic bags in open areas [11,16,19]. The higher litter rate in this study before the plastic ban could be because of a lack of awareness of plastic-related health and environmental dangers and issues such as non-biodegradability. In this study, pre-ban reuse of plastic was 71%, which diminished to 64.8% post-ban. Other studies from California, USA revealed that only 18.9% of participants were reusing bags, and in studies done in Delhi [15] and Mangalore city, only 4.6% and 20% of participants, respectively, carried their own plastic bags for shopping [14,15,20]. Although most consumers were aware of plastic-borne hazards, only 40% in the present study were cognizant of eco-friendly bags, and a negligible percentage were using them. The best alternatives to single-use plastic bags, as per the respondents, were jute bags (217, 48%), cloth bags (201, 45%), and paper bags (32, 7.1%). It should be noted, however, that it is still controversial whether paper bags can be considered an affordable, eco-friendly alternative to plastic. Although paper bags degrade much more quickly in the environment, they require more energy to produce, are more expensive, and, once discarded, take up more space in collection trucks and landfills. IEC (Information, Education, and Communication) materials can be distributed to inform citizens about available alternatives. On the island of Guanaja (Honduras), each household was provided with information through a door-to-door campaign and, in addition, was given two canvas reusable bags [6]. Strengths and limitations This study stands as the first before-and-after comparison study conducted in Puducherry immediately after the enforcement of the single-use plastic ban, assessing knowledge, attitude, and practice, as well as the effectiveness of the ban's implementation. CONCLUSION Most of the participants in the study area were aware of the environmental and health hazards from single-use plastic products and supported banning them. However, the practice of reusing plastic bags or using better alternatives was poor among the majority of the participants. Creating awareness of these strategies and effective implementation of the legislation will contribute to a reduction in the usage of single-use plastics in the city. The respondents viewed the enforcement of the ban as being for their own betterment and therefore believed it was their responsibility to cooperate with the government to reduce the use of single-use plastics. AUTHOR'S CONTRIBUTION Dr. Devi K. conceived the idea and concepts, planned this study, reviewed the draft, and finalized and approved the final version of the manuscript. Dr. Lalithambigai and Dr. Sivapushani A. prepared the literature search, collected data, analyzed data, and prepared the initial draft.
Glycera sheikhmujibi n. sp. (Annelida: Polychaeta: Glyceridae): A New Species of Glyceridae from the Saltmarsh of Bangladesh : A new species of glycerid polychaete, Glycera sheikhmujibi , is described from the saltmarsh on the central coast of Bangladesh. The species is identified based on morphological characteristics using both a light microscope and scanning electron microscope (SEM). The species is characterized by the presence of three distinct types of proboscideal papillae: type 1 papillae (conical with three transverse ridges), type 2 (conical with a straight, median, longitudinal ridge), and type 3 (round, shorter, and broader, with a straight, median, longitudinal ridge). It has a Y-shaped aileron with gently incised triangular base, almost equal-size digitiform noto- and neuropodial lobes in the mid-body, and long ventral cirri at the posterior end. The new species is compared with its related species, previously described from the Bay of Bengal region. A key to all these species is provided. Introduction The family Glyceridae currently has 87 accepted species (80 species of Glycera, one species of Glycerella, and five species of Hemipodus) (World Register of Marine Species, WoRMS; www.marinespecies.org). Among the three genera, Glycera and Glycerella possess biramous parapodia, whereas Hemipodus includes species with only uniramous parapodia throughout the body [1]. The ailerons, accessory supports on the proboscis, are rod-like in Glycerella, and mostly triangularshaped or a more complicated structure having outer and inner rami in Glycera. Both Glycera and Hemipodus have spinigerous compound chaetae, but in Glycerella, additional compound falcigers are present [1]. Böggemann [2] revised all species of Glycera previously described worldwide and synonymised many of the previously described species (166), accepting only 36 as valid species. This was based on morphological data only, but subsequently he has used molecular studies to support some of these widely distributed species [3,4]. They are easily distinguishable from other polychaetes, as they have a pointed prostomium and eversible axial proboscis with numerous papillae. Glycera are widely distributed from tropical to temperate regions, and from intertidal to abyssal depths, inhabiting mainly soft bottom (sand/mud) sediments [1,3,5,6]. Glycerids are generally considered to be carnivorous burrowers, capturing and killing prey with their strong, well-developed jaws connected to venom glands that supply venom [7,8]. Polychaetes have been poorly studied in Bangladesh. While some benthic ecological studies have been carried out in the area [9][10][11], these studies have provided no taxonomic details, apart from the listing of polychaete species and their abundance data. None of these studies have deposited any material, so the validity of these species cannot be confirmed. Pramanik et al. [12] reported a new record of Glyceridae, Glycera lancadivae Schmarda, 1861 [13], which seems to be similar to Glycera brevicirris Grube, 1870 [14] (known from Sri Lanka and the Andaman Sea). Of the 80 accepted species of Glycera Lamarck, 1818 [15], only 10 species have been recorded from the Bay of Bengal region (WoRMS; www.marinespecies.org). This region includes Bangladesh, India, Myanmar, Sri Lanka, and the tip of Andaman Nicobar Islands. Only one of these 10 species has been recorded from Bangladesh [12], in the northeastern corner of the Bay of Bengal: G. lancadivae Schmarda, 1861 [13]. 
Muir and Hossain [16] reported an unidentifiable glycerid fragment from the Halishahar Coast of Bangladesh, and they also provided taxonomic keys for identifying 14 species from the Bay of Bengal and Indo-Pacific regions. Subsequently, Hossain and Hutchings [17] emphasized the possibility of undescribed polychaete taxa from Bangladeshi coastal waters. Hence, the aim of this report is to describe a new species of Glycera from the Bangladesh Coast, and to provide an updated key to all species of Glycera recorded from the Bay of Bengal region. Study Site Description The study area is located in an upper tidal channel of the lower Meghna river estuary, the largest estuarine ecosystem of Bangladesh, characterized by sunny tropical weather with monsoonal influence [18,19]. Mean annual temperature and rainfall in the study area are 25.5 °C and 2980 mm, respectively. According to the Köppen-Geiger climate classification, this climate is considered to be Am (tropical monsoon climate). The monsoon is characterized by strong southeastern winds with high rainfall, humidity, cloud cover, thunderstorms, cyclones, and occasional storm surges [18,19]. Almost all year round, the area is influenced by the incoming tide from Bay of Bengal. Tides are of a semi-diurnal type, with two high and two low waters during a lunar day. The tide varies with respect to magnitude, ranging from 0.07 m during neap tide to 4.42 m during spring tide [20]. The wave height of the estuary varies from 0 to 4 m [21]. However, the tidal wave is considerably slanted as it moves inside the channel, so that with increasing distance from the channel opening, the duration of flood becomes shorter than during the ebb tide. The lower, deeper areas near the opening of the channel are characterized by strong estuarine influence, with higher current velocity and stronger tides, while the estuarine water inflow is substantially reduced in the upper shallow areas. In addition, the upper part of the channel receives a continuous freshwater supply, especially during monsoons, through a small system of tidal creeks and streams. Sample Collection and Analysis Sediment samples were collected during April 2015 from Chairman Ghat (22°30'48.3876'' N, 91°5'6.6078'' E, Noakhali, Chittagong division), using a hand-held corer with a depth penetration of 10 cm (Figure 1). The collected samples were washed through a 0.5 mm mesh hand sieve, and polychaetes retained on the sieve were placed into plastic vials and fixed with 5% formalin in the field. After 2 days, the samples were washed with freshwater and transferred to 70% ethyl alcohol for further examination [17]. Material was examined using stereo (Motic 6.5x-50X Zoom Stereo) and compound (Carl Zeiss, Oberkochen, Germany) microscopes. Scanning electron microscope (SEM) observations were made with a Zeiss EVO LS15 SEM with a Robinson Backscatter Detector after critical-point drying and coating with 20 nm gold at the Australian Museum [17]. Specimens were photographed using a light microscope (Leica MZ16, Leica Microsystems, Wetzlar, Germany ) and Spot flex 15.2 (Leica Microsystems, Wetzlar, Germany ) with a camera attached. In some cases, the material was stained with methylene blue to increase the resolution of diagnostic characters. All material examined was deposited at the Australian Museum, Sydney (AM). 
Material Examined The Generic Identification This species has been placed into the genus Glycera based on its overall morphological similarities with other species of Glycera, including the presence of different types of dense papillae on their proboscis. The genus Glycera Lamarck, 1818 [15] can easily be identified from other genera by the following unique characters [2,15]: acutely pointed, usually ringed prostomium with four terminal tentacles; and a long, eversible, club-like proboscis, provided with four hooked horny jaws and accessory lateral ailerons. The ailerons possess a more complicated structure with outer and inner rami, and sometimes an interramal plate. Parapodia have two anterior lobes with cirri and one or two posterior lobes, as well as the ventral chaetae compound and dorsal capillary chaetae. Diagnosis The salient features of the new species are (i) the presence of three types of proboscideal papillae-type 1 papillae (main type), which are conical with three transverse ridges; type 2 papillae, which are conical with straight, median, longitudinal ridges; and type 3 papillae, which are round, shorter, and broader, with straight, median, longitudinal ridges; (ii) Y-shaped ailerons with gently incised triangular bases; and (iii) digitiform noto-and neuropodial lobes of almost equal size in the mid-body and long ventral cirri at the posterior end. Holotype The holotype has an incomplete, cylindrical body, which is elongated and tapered at both ends ( Figure 2A,B). The body reaches up to 42 mm long, with up to 158 segments, and has a width of 2.2 mm in the middle part of the body; preserved specimens in alcohol are whitish with numerous scattered small black pigmented spots ( Figure 2C,D). The body segment is biannulate ( Figure 2E), and the anterior annulus slightly shorter than posterior annulus. The prostomium is conical, pointed, and distinctly separated into about ten rings; the terminal ring has four antennae, and no nuchal organs or eyes (Figures 2A,B and 3A,C). The parapodia of the first two segments are uniramous, with a prechaetal and a postchaetal lobe, while the subsequent parapodia are biramous ( Figures 2B and 3D). There are two unequal, triangular to digitiform, prechaetal and postchaetal anterior lobes, and lobes of similar length in mid-body (Figures 2C, 3D, and 4A,D). Knob-like dorsal cirri from the second parapodium are inserted most clearly at the base of the anterior parapodia, as well as on the body wall far above parapodial base in the mid-body, and again near the posterior base ( Figure 2D-F). Ventral cirri are not distinct in the anterior part, but are well-developed in the posterior part ( Figure 4F). Retractile branchiae are present at chaetigers 27-31. Notochaetae slender capillaries with one margin covered with spines or hairs ( Figure 4A,C). There are neurochaetae homogomph spinigers (based on SEM) ( Figure 4B), as well as a pygidium with a terminal pair of slender, elongated cirri ( Figure 3G). The proboscis is very long, equal to 28 segments, bell-shaped, and densely covered with papillae, which are arranged in distinct longitudinal rows (Figures 2A and 3A,B). The papillae consist of three types: (1) numerous, conical papillae with three "V"-shaped ridges on posterior surface from top to bottom ( Figure 5C,D); (2) a few long, conical papillae, with one straight, median longitudinal ridge ( Figure 5C,D); and (3) very few, slightly shorter and broader, rounded papillae with a single very distinct median longitudinal ridge ( Figure 5C,D). 
Among the three types, type 2 is the longest. All papillae have small ciliated pores (Figure 5D) and are smooth anteriorly. The terminal part of the proboscis has four black, hook-shaped jaws and accessory "Y"-shaped ailerons with gently incised triangular bases (Figure 3B).

Distribution, Ecology, and Habitat
Glycera sheikhmujibi n. sp. is one of 11 species in the genus Glycera distributed in the Bay of Bengal region, and is the second species from Bangladeshi coastal waters. Currently, it is only known from the type locality on the central coast of Bangladesh; however, increased sample collection from other parts of the coastline might extend its distribution range. Sympatric species include Nephtys bangladeshi, Lumbrineris spp., Capitella spp., Goniada spp., Nereis spp., Magelona spp., and Naididae, as well as the crustacean groups Ocypodidae, Palaemonidae, Gammaridae, and Harpacticoida. The species was collected from the muddy saltmarsh zone (intertidal zone), with a water depth range of 0.5 to 1.0 m during high tide. The zone is densely covered with the grass Spartina spp. and connected to the Meghna River Estuary, which falls into the Bay of Bengal near Hatiya and Sandwip islands. The average salinity, dissolved oxygen, pH, alkalinity, and temperature were 6 ppt, 9.15 ppm, 7.72, 180 ppm, and 29 °C, respectively.

Discussion
The main diagnostic characteristics for the identification of glycerid species include the shape and number of pre- and post-chaetal lobes, the presence or absence of branchiae, the shape of the aileron, and the structure of the proboscidial papillae [2]. However, Fiege and Böggemann [23] and Rizzo et al. [1] found that parapodial lobes and branchiae are not reliable characters, because branchiae are retractable in some species, and the shape (size) and number of pre- and post-chaetal lobes are difficult to evaluate for some species. Therefore, they suggested the proboscidial papillae and ailerons to be the most reliable characters for the identification of species of Glycera. Glycera sheikhmujibi n. sp. can easily be distinguished from all other species of Glycera by the presence of three distinct types and shapes of proboscidial papillae (Table 1). In Table 1, the diagnostic characters (jaws and ailerons; parapodia; branchiae; chaetae) of the compared species include the following entries:
- G. embranchiata Krishnamoorthi, 1962 [26]: nomen dubium.
- The species of [29]: nomen nudum, reported as a nomen nudum by Böggemann [2].
- G. sagittariae Fauvel, 1932 [27]: aileron with two long, dagger-like processes; parapodia with two equal, elongated, tapering anterior lobes and two equal, blunt, triangular posterior lobes, dorsal cirrus more or less remote; branchiae present, simple and short, beginning at the 40th segment; chaetae not recorded.
- G. subaenea Grube, 1878 [30]: ailerons not recorded; posterior parapodial lobes longer than the anterior ones, lower lobes triangular and wider than the upper ones, anterior lobes equally long and rounded; branchiae present, positioned at the anterior wall of the parapodium, separated into 2-3 finger-like filaments, longer than the ventral cirrus; chaetae not recorded.
- G. tesselata Grube, 1863 [31]: characters not recorded.
- Glycera sheikhmujibi n. sp.: dark, hook-shaped jaws and ailerons with gently incised bases; parapodia with digitiform prechaetal and postchaetal lobes, knob-like dorsal cirri along the body and long ventral cirri present posteriorly; branchiae present, retractile, commencing from the 27th to 31st segments; 5-6 slender capillary notochaetae.

Glycera sheikhmujibi n. sp.
shares no diagnostic characteristics with either Glycera lancadivae, the only known species from Bangladesh [12], or fragments of another glycerid described by Muir and Hossain [16]; however, it clearly differs in many other aspects, especially in the form of three distinct types of proboscidial papillae. Again, Böggemann [2] mentioned that G. lancadivae is a nomen dubium, and it is almost similar to G. brevicirris. Glycera sheikhmujibi n. sp. seems to resemble Glycera nicobarica Grube, 1867 [32] and Glycera macintoshi Grube, 1877 [33] in the shape of ailerons, parapodial lobes, and types and shapes of the proboscidial papillae (Table 1). However, G. nicobarica and G. macintoshi have only two different types of papillae: G. nicobarica possesses few ovate papillae without ridges, and numerous leaf-like ones with five to six "U"-shaped ridges, and G. macintoshi has conical proboscidial papillae with three transverse ridges. Recently, G. nicobarica has been synonymized with Glycera unicornis Lamarck, 1818 by Read [24]. It has been argued that the proboscidial papillae may vary due to preservation, sample preparation, and development of papillae; however, for the sample collection and preservation of G. sheikhmujibi n. sp., standard procedures were followed. In addition, G. sheikhmujibi n. sp. differs from G. macintoshi by the presence of two equal triangular pre-and postchaetal lobes, whereas shorter, rounded neuropodial postchaetal lobes are present in G. macintoshi. Glycera. embranchiata Krishnamoorthi, 1962 [26] (known from India) is a nomen dubium, as there is no description of the species in the original report, and G. rutilans Grube, 1877 [33] (known from Sri Lanka) is nomen nudum, as reported in Böggemann [2]. Glycera convoluta, G. longipinnis, G. rouxii, and Glycinde oligodon, all reported from the Bay of Bengal region, have not been accepted by Böggemann [2], as G. convoluta is a junior synonym of G. tridactyla, G. longipinnis is a junior synonym of G. sphyrabrancha, G. rouxii is a junior synonym of G. unicornis, and Glycinde oligodon belongs to the family Goniadidae. Although Glycera tesselata Grube, 1863 [31] is a good species and is accepted by Böggemann [2] and Read [24], it is poorly described in the original description. The first procedure in any ecological work or applied research with organisms is an exercise in taxonomy. Taxonomy provides the fundamental understanding about the components of biodiversity, which is badly needed for effective decision-making regarding conservation, management, and sustainable use of the studied organisms. In addition, the loss of biodiversity due to human activities and climate change should be of major concern to everyone, because it threatens the functioning of an ecosystem. Despite this, there is very little information on the taxonomy of polychaetes from Bangladeshi coastal waters compared with those of neighbouring countries. To date, only thirty species have been identified from Bangladesh, which is a very low number compared to known polychaete species (~10,000) in the world. With a diverse coastline of about 720 km, it is hoped that the number of polychaete species will be increased with further studies. 
Key to the species of the genus Glycera from the Bay of Bengal region (modified from Muir and Hossain [16]):
1 Proboscideal papillae do not have a terminal fingernail structure ... 2
- Proboscideal papillae have a terminal fingernail structure ... 6
2 There is one postchaetal lobe in all parapodia ... 3
- There are two postchaetal lobes (at least) on the mid-body parapodia ... 5
3 At mid-body, the notopodial prechaetal lobes are shorter than the neuropodial lobes, and branchiae are absent ... Glycera lapidum Quatrefages, 1866 [24]
- At mid-body, the prechaetal lobes are about the same length as or longer than the notopodial lobes; branchiae are present or absent ... 4
4 The proboscideal papillae are digitiform and without ridges, the ailerons have deeply incised bases, and simple digitiform branchiae are situated termino-dorsally on the parapodia ... Glycera sphyrabrancha Schmarda, 1861 [12]
- Conical proboscideal papillae with 5-20 transverse ridges, ailerons with slightly arched bases, and branchiae absent ... Glycera oxycephala Ehlers, 1887 [25]
5 Ailerons have gently incised bases; long, mid-body postchaetal lobes are digitiform and of about equal length; three types of proboscideal papillae, with the main type having fewer than three ridges; and branchiae are absent ... Glycera sheikhmujibi n. sp.
Natural Oscillatory Frequency Slowing in the Premotor Cortex of Early-Course Schizophrenia Patients: A TMS-EEG Study Despite the heavy burden of schizophrenia, research on biomarkers associated with its early course is still ongoing. Single-pulse Transcranial Magnetic Stimulation coupled with electroencephalography (TMS-EEG) has revealed that the main oscillatory frequency (or “natural frequency”) is reduced in several frontal brain areas, including the premotor cortex, of chronic patients with schizophrenia. However, no study has explored the natural frequency at the beginning of illness. Here, we used TMS-EEG to probe the intrinsic oscillatory properties of the left premotor cortex in early-course schizophrenia patients (<2 years from onset) and age/gender-matched healthy comparison subjects (HCs). State-of-the-art real-time monitoring of EEG responses to TMS and noise-masking procedures were employed to ensure data quality. We found that the natural frequency of the premotor cortex was significantly reduced in early-course schizophrenia compared to HCs. No correlation was found between the natural frequency and age, clinical symptom severity, or dose of antipsychotic medications at the time of TMS-EEG. This finding extends to early-course schizophrenia previous evidence in chronic patients and supports the hypothesis of a deficit in frontal cortical synchronization as a core mechanism underlying this disorder. Future work should further explore the putative role of frontal natural frequencies as early pathophysiological biomarkers for schizophrenia. Introduction Schizophrenia is a major psychiatric condition that ranks among the leading causes of disability worldwide [1]. Illness onset typically occurs between 15 to 30 years of age with the emergence of positive (hallucinations, delusions, disorganized thought/behavior) and negative (anhedonia, avolition, flattened affect) symptoms that are accompanied by a progressive decline in cognitive and social abilities. This functional decline is particularly marked in the first years after the first psychotic break; accordingly, the early course of schizophrenia has been regarded as an important window for intervention [2] and tertiary prevention (the so-called "critical period" hypothesis, [3]). In this light, the search for objective biomarkers that can be reliably detected since the earliest stages of schizophrenia has been a major focus in psychiatric research due to their potential to inform novel treatment strategies [4]. Early biomarkers of schizophrenia are also more likely to be directly linked to its pathophysiology rather than to superimposed confounding factors such as duration of psychosis, length of exposure to antipsychotic medications, neurodegenerative processes, and comorbidities, and may thus enable a better understanding of the biological underpinning of chronic psychoses [5]. Transcranial magnetic stimulation (TMS) has increasingly gained popularity in psychiatric research and therapeutics, given its ability to precisely and non-invasively target In this study, we used TMS-EEG to probe the intrinsic oscillatory properties, including the natural frequency, of the left premotor cortex in N = 16 patients with early-course schizophrenia and N = 16 age and gender-matched healthy control subjects. We hypothesized that the premotor natural frequency would be reduced in the patients' group, reflecting a local deficit in synchronization that is present since the early stages of schizophrenia. 
Participants We recruited sixteen patients with early-course schizophrenia (ECSCZ, defined as <2 years from a first psychotic episode, as in previous studies [33]) and sixteen healthy control subjects (HCs). Common exclusion criteria included major medical or neurological conditions affecting the central nervous system, intellectual disability according to DSM-5 criteria, pregnancy or postpartum, and inability to complete magnetic resonance imaging (MRI) scans or TMS. Exclusion criteria for HCs included a history of treatment with antipsychotic medications; personal or family history of schizophrenia-spectrum disorder or psychosis; and current use of psychotropic medications. Table 1 summarizes the demographics of the study populations. No significant differences in age and gender were found between groups. Subjects were evaluated by one expert rater, and the diagnosis of schizophrenia was confirmed with the Structured Clinical Interview for DSM Disorders (SCID) [34]. The severity of psychotic symptoms in the patients' group was quantified using the Scale for the Assessment of Positive Symptoms (SAPS) and the Scale for the Assessment of Negative Symptoms (SANS) ( Table 1). The study protocol was approved by the University of Pittsburgh Institutional Review Board, and all participants provided written informed consent prior to completing study procedures. Procedure Study participants sat comfortably on a reclining chair. For the assessment of each subject's resting motor threshold (RMT), the left motor cortex was identified on T1-weighted individual MRIs using a neuronavigation system (Localite Classic Edition, Bonn, Germany). Following international guidelines, the RMT was determined as the lowest intensity capable of eliciting an electromyographic response of the abductor pollicis brevis muscle > 50 µV in 5 out of 10 TMS trials. [35]. TMS was delivered in biphasic single-pulses using a TMS stimulator (MagPro X100, MagVenture, Farum, Denmark) and a figure of 8 coil (MagVenture MCF-B65). The RMT, expressed as % of Maximum Stimulator Output (%MSO), did not significantly differ between the two groups (HC: 51.0 ± 5.38; ECSCZ: 55.75 ± 10.44; p = 0.1305 Wilcoxon rank-sum test). Then, the neuronavigation target was moved to the left middle frontal gyrus and adjusted to match the coordinates [x: −26; y: −4; z: 69] of the Montreal Neurological Institute (MNI) space [36]. This targeting spot lies in the left Brodmann Area 6 (BA6), i.e., the left premotor cortex. The intensity of stimulation for TMS-EEG at this target was set to 120% of the RMT, with a stimulation angle of 45 degrees relative to the midline. We used a 64-electrode cap based on the 10-20 system (Easycap), passive ring-shaped EEG electrodes, and a TMS-compatible amplifier (BrainAmp DC, Brain Products, Gilching, Germany). The quality of the TMS-evoked EEG response was ensured by employing an online, real-time graphical interface displaying the signal from averaged trials referenced to the average of all channels [37]. Briefly, before the EEG recording was started, the TEP resulting from 20 TMS pulses was visualized. The EEG recording was started only if (1) no large-amplitude (>100 µV) decay or muscle-related artifacts were visualized and (2) the first component of the average TEP had an amplitude >5 µv. Otherwise, the coil orientation and/or the TMS target were slightly adjusted, and the real-time inspection was repeated. 
A freely available brain atlas based on MNI coordinates was used to confirm that the final targeting spot lied on BA6. EEG data were acquired at a sampling rate of 5000 Hz. Each recorded TMS-EEG session consisted of 150 single pulses with a jittered interstimulus interval (0.4 to 0.6 Hz), a frequency similar to those employed by previous studies probing the premotor cortex [16,[38][39][40][41][42] and which does not induce significant neuronal plasticity in BA6 [43]. To reduce the amplitude of "off-target" (i.e., stereotypically induced regardless of the stimulation site) auditory artifacts due to the TMS "click" sound and contaminating the TMS-evoked EEG responses [44], participants wore earbuds playing a state-of-the-art, TMS-specific noise masking track [38]. Data Analysis Data analysis was performed with Matlab R2017a (The Mathworks, Natick, MA, USA) employing customized algorithms based on the EEGlab toolbox [45] and the SiSyPhus Project interface (SSP 2.8U, University of Milan, Italy), as in previous studies [46,47]. After the removal of channels contaminated by noise ("bad" channels, HC: 4.31 ± 3.11; ECSCZ: 6.43 ± 6.34; p = 0.7332; Wilcoxon rank-sum test), EEG signals were split into trials of 1600 ms around the TMS pulse (−800 +800 ms, time 0 corresponding to the TMS pulse). Trials contaminated by noise, artifacts from eye movements, or muscle activity were rejected by visual inspection, which resulted in a comparable number of remaining EEG trials between the two groups (HC: 135.68 ± 11.53; ECSCZ: 136.68 ± 14.91; p = 0.6371; Wilcoxon rank-sum test). Artifacts from the high-energy TMS pulse were removed from each trial by replacing the interval around the pulse (from −2 ms to 6 ms) with the data immediately before (from −10 ms to −2 ms). A fifth-order moving-average filter was applied between 4 and 8 ms to reduce high-frequency edges. EEG signals were downsampled to 1000 Hz, bandpass filtered (1-80 Hz), re-referenced to the average of all channels, and baseline corrected. Residual artifacts from eye movements, muscle activity, cardiac, and TMS pulse were excluded using Independent Component Analysis (ICA). Only components clearly imputable to artifacts were removed. The spherical function of the EEGLAB toolbox [45] was then used to interpolate bad channels. In order to quantify the TMS-evoked EEG response in the time domain, the Mean Field Power (MFP, [48]) was calculated across all channels (Global MFP, GMFP) as well as by averaging the voltages squared across the channels surrounding the stimulation site (FC5, FC3, FC1, FCz, C5, C3, C1, Cz; Local MFP, LMFP). A Morlet time-frequency decomposition (3.5 cycles) was employed to analyze TMSevoked oscillatory activity. Event-related spectral perturbation (ERSP) matrices between 8 and 45 Hz were computed as the ratio of the spectral power (µV 2 ) of individual EEG trials and their respective mean baseline spectra. These parameters (wavelet cycles, frequency interval) were chosen to maximize the comparability of findings with previous studies investigating the natural frequency of the premotor cortex [16,17,42]. Statistical Analysis A two-tailed t-test was used to compare age between groups; χ-squared tests were used to assess differences between groups for dichotomous variables (i.e., gender). Wilcoxonrank sum tests were used to establish statistical differences in TMS-EEG measures between early-course schizophrenia patients and HC participants. 
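To make the mean field power computation described above concrete, the following is a minimal NumPy sketch; it is not the analysis code used in the study (which was implemented in Matlab with the EEGlab toolbox and the SiSyPhus Project interface), and the epoch array shape, channel names, and ROI handling are illustrative assumptions.

```python
import numpy as np

def gmfp(tep):
    """Global Mean Field Power of a trial-averaged TEP.

    tep : array of shape (n_channels, n_times), average-referenced and
    baseline-corrected. Uses the standard definition: the spatial standard
    deviation across channels at each time point.
    """
    dev = tep - tep.mean(axis=0, keepdims=True)   # remove the spatial mean
    return np.sqrt((dev ** 2).mean(axis=0))       # shape (n_times,)

def lmfp(tep, ch_names, roi):
    """Local Mean Field Power: mean of the squared voltages over the ROI
    channels surrounding the stimulation site, at each time point."""
    idx = [ch_names.index(c) for c in roi if c in ch_names]
    return (tep[idx, :] ** 2).mean(axis=0)        # shape (n_times,)

# Illustrative usage with simulated epoched data (trials x channels x samples).
rng = np.random.default_rng(0)
epochs = rng.standard_normal((140, 60, 1600))     # e.g., 140 trials, 60 channels
ch_names = [f"ch{i}" for i in range(60)]          # placeholder montage labels
tep = epochs.mean(axis=0)                         # trial-averaged TEP
roi = ["ch10", "ch11", "ch12"]                    # stand-in for the FC5...Cz ROI
print(gmfp(tep).shape, lmfp(tep, ch_names, roi).shape)
```

In the study itself, the ROI channels were FC5, FC3, FC1, FCz, C5, C3, C1 and Cz, and the same trial-averaged responses also fed the Morlet time-frequency decomposition used to estimate the natural frequency.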
In patients, Spearman's correlation coefficients were calculated between TMS-EEG measures and SAPS/SANS scores and the dose of antipsychotic medications, quantified as chlorpromazine equivalents (see Table 1). The level of significance was set at 0.05 for all tests. In the time-frequency domain, no differences in ERSP were found between groups in any frequency band, both in a region of interest (ROI) overlying the left premotor cortex and across all EEG channels (see Supplementary Materials, Table S1). However, the natural frequency evoked by the stimulation of the left premotor cortex was significantly reduced in ECSCZ patients (HC: 29.15 ± 6.59 Hz; ECSCZ: 23.27 ± 4.27 Hz; p = 0.0186; Wilcoxon rank-sum test), corresponding to a large effect size (Cohen's d = 1.06). Individual values and boxplots with median, quartiles, and extreme values for both groups are shown in Figure 2. No significant differences were found when comparing the time course of the EEG responses to TMS, as quantified by GMFP and LMFP, across groups (GMFP, HC: 0.77 ± 0.69 µV; ECSCZ: 0.61 ± 0.43 µV; p = 0.6375. LMFP, HC: 0.92 ± 0.91 µV; ECSCZ: 0.56 ± 0.42 µV; p = 0.2662. Wilcoxon rank-sum tests). Supplementary Figures S1 and S2 show grand averages of GMFP and LMFP and time-bin-by-time-bin group comparisons (no significant differences after Wilcoxon rank-sum tests). No correlation was found in either group between the natural frequency and age. In ECSCZ patients, no significant correlations were found between the natural frequency and the patients' positive or negative symptoms quantified by SAPS (ρ = −0.06; p = 0.8245) and SANS (ρ = −0.28; p = 0.2882) scores, respectively, nor with the dose of antipsychotic medications that the patients were taking at the time of the TMS-EEG recordings (ρ = −0.27; p = 0.3679). Discussion In this study, we performed single-pulse TMS-EEG of the left premotor cortex in patients with early-course schizophrenia and in HC subjects (Figure 1). 
Our main finding was a significant reduction in the natural frequency of patients with schizophrenia compared to HCs (Figure 2), corroborating the hypothesis that a slowing of the natural oscillatory frequency of the premotor cortex is present not only in chronic schizophrenia, as previously reported by our group [17], but it is also a feature associated with the disorder since its earliest stages. This finding is consistent with recent TMS-EEG evidence in first-episode psychosis, where the TMS-evoked EEG activity in the beta range was found to be reduced in another frontal area, the motor cortex [50]. Altogether, these results suggest that an impairment of the intrinsic oscillatory properties of frontal cortical circuits is likely a neurophysiological characteristic associated with the pathophysiology of schizophrenia rather than a superimposed alteration due to neurodegenerative processes, duration of illness, and/or length of exposure to antipsychotic medications. Deficits in fast frontal neural oscillations in patients with schizophrenia, including activity in the beta and gamma frequency bands, have been consistently replicated across different neurophysiological paradigms [51,52]. In generating these fast oscillations, an essential role is played by the inhibitory control exerted by parvalbumin+ gamma-aminobutyric acid (GABAergic) interneurons [53,54], a population of cells notably affected in schizophrenia [55,56]. In addition, GABAergic transmission is critically implicated in EEG responses to TMS [57]. Our findings are, therefore, consistent with a deficit in GABAergic control which is present since the early phases of the disorder. This expands previous findings from TMS studies using paired pulses in the motor cortex that demonstrated that deficits in inhibitory control are present in early-course schizophrenia [58,59], further supporting the hypothesis of an imbalance between cortical excitation and inhibition as a core mechanism underlying this disorder. While several studies employing TMS-EEG in schizophrenia have successfully targeted the DLPFC [17,[22][23][24][25], the technical challenges associated with investigating this cortical area (largely related to the interposition of lateral cranial muscles between the coil and the brain surface) may limit the potential for translation of the identified biomarkers [15,[27][28][29]31,60]. This is particularly the case in the absence of a real-time TMS-EEG monitoring interface, which can be used to minimize muscle activation caused by TMS [27,30,37]. On the other hand, given the more centro-medial projection on the scalp of the premotor cortex, TMS-EEG recordings targeting this area are less affected by artifacts caused by muscle activation [31,32]. Here, we showed that the natural frequency of the premotor cortex is significantly reduced in early-course schizophrenia patients compared to HC subjects, with a large effect size (Cohen's d = 1.06). However, it has to be noted that a certain overlap exists between the two distributions ( Figure 2), which may limit the validity of this biomarker. Conversely, previous TMS-EEG studies targeting the DLPFC [17,25] have yielded to larger separation between chronic schizophrenia patients and HCs (up to complete non-overlap between groups [17]). Furthermore, the clinical significance of these DLPFC TMS-EEG biomarkers is supported by correlations with behavioral variables [17,25], which were not found in this study. 
Future research in early schizophrenia should investigate the natural frequency and other TMS-EEG measures at prefrontal sites and assess its clinical correlates. However, additional work should also elucidate the relationship between prefrontal and premotor TMS-EEG responses at the beginning of schizophrenia, aiming at the development of biomarkers for this disorder that are both accurate and feasible. Future work is needed to address the limitations of the present study. First, we tested a relatively small cohort of patients. As increasing evidence has highlighted the large biological heterogeneity underlying the clinical phenotype of schizophrenia [61,62], our findings will need to be confirmed by future studies recruiting larger groups of patients. Second, we did not examine patients with other psychiatric conditions, including subjects with mood disorders. Importantly, one previous study has shown that the natural frequency of the premotor cortex is not only reduced in chronic schizophrenia but also during a depressive episode in subjects diagnosed with either major depressive disorder or bipolar disorder [42]. This suggests a shared deficit in cortical synchronization in the premotor cortex across these conditions, which warrants caution in the interpretation of our findings. More studies are therefore needed to elucidate whether an early deficit of premotor corticothalamic circuits is specific to schizophrenia or rather represents a window into cortical synchronization abnormalities that are shared between schizophrenia and mood disorders [63]. Third, our investigation was limited to the left hemisphere. Interestingly, one recent study in healthy subjects did not find significant differences in the natural frequencies between homologous areas of the left and right hemispheres, including between the left and right premotor cortices [18]. However, given the broad literature on altered lateralization of brain functions in schizophrenia [64][65][66], future studies should explore the symmetry of EEG responses to TMS in this disorder while investigating potential (lateralized) functional correlates (e.g., language, handedness). Finally, in the present study, we did not find any correlation between premotor TMS-EEG measures, including the natural frequency, and clinical symptoms. While this may suggest the absence of a relationship between a reduced intrinsic oscillatory activity of the premotor cortex and the clinical phenotype of schizophrenia (positive symptoms, negative symptoms), it is also possible that we were not powered enough to establish such relationships. Alternatively, the reduced intrinsic oscillatory activity in the premotor cortex may be related to behavioral alterations that were not investigated in our study, such as cognitive impairment or mirror neuron dysfunction. Thus, future studies administering cognitive tasks in larger groups of schizophrenia patients are needed to answer these questions. Conclusions The natural frequency of the premotor cortex is reduced since the early stages of schizophrenia, with a large effect size. As the interest for clinical applications of TMS-EEG arises, the premotor natural frequency may represent a feasible and inexpensive early pathophysiological biomarker for schizophrenia. However, more research is needed to elucidate its specificity, clinical significance, and relationship to the course of illness. 
As TMS is gaining momentum in psychiatric research and treatment, future studies should explore a role for TMS-EEG measures, including the natural frequency, in informing the diagnosis, prognosis, and personalization of treatment in schizophrenia and other psychiatric conditions [67,68].

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/brainsci13040534/s1, Table S1: Summary of TMS-EEG measures; Figure S1: Global Mean Field Power-group comparison; Figure S2: Local Mean Field Power-group comparison.

Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the University of Pittsburgh Institutional Review Board (protocol code STUDY18120018, approved on 3/27/2019).

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: The data presented in this study are available on request from the corresponding author.
Control-Volume-Based Exergy Method of Truncated Busemann Inlets in Off-Design Conditions : A scramjet engine consisting of several components is a highly coupled system that urgently needs a universal performance metric. Exergy is considered as a potential universal currency to assess the performance of scramjet engines. In this paper, a control-volume-based exergy method for the Reynolds-averaged Navier – Stokes solution of truncated and corrected Busemann inlets was proposed. An exergy postprocessing code was developed to achieve this method. Qualitative and quantitative analyses of exergies in the Busemann inlets were performed. A complete understanding of the evolution process of anergy and the location where anergy occurs in the inlet at various operation conditions was also obtained. The results show that the exergy destroyed in the Busemann inlet can be decomposed into shock wave anergy, viscous anergy and thermal anergy. Shock wave anergy accounts for less than 4% of the total exergy destroyed while thermal anergy and viscous anergy, in a roughly equivalent magnitude, contribute to almost all the remaining. The vast majority of inflow exergy is converted into boundary pressure work and thermal exergy. Some of the thermal exergy excluded by the computation of the total pressure recovery coefficient belongs to the available energy, as this partial energy will be further converted into useful work in combustion chambers. Introduction Hypersonic airbreathing vehicles are the most promising equipment to achieve reusable launch vehicles, hypersonic aircrafts and hypersonic cruise missiles, which can reduce transport costs, increase the dependability of transporting payloads to Earth orbits and improve the striking capacity.The scramjet engine is one of the key technologies for hypersonic airbreathing propulsion, as the forebody serves as an inlet to compress the coming air and the afterbody acts as a nozzle expansion surface [1].To obtain a high propulsion performance, an excellent inlet should be characterized typically by being smaller in size and having an aerodynamic drag, providing efficient uniform compressed air flow, and maintaining a high performance over a wide Mach number range [2].The three-dimensional inward-turning intakes based on the isentropic compression method are particularly notable for their good overall performance.The hypersonic truncated Busemann inlets introduced by Mölder and Szpiro [3] not only shorten the inlet length, but also ensure the acceptable starting performance.In recent years, Johnson performed an experimental investigation of the stream-traced truncated Busemann inlet at a subdesign Mach number [4].The startability [5] and flow quality [6] of the modified wavecatcher Busemann-based intakes were studied by Zuo and Mölder.A multi-point optimum design of an axisymmetric intake for ascent flight were conducted by Fujio [7]. 
The performance parameters for evaluating the aerodynamic/propulsive performance of hypersonic inlets include the cycle static temperature, self-starting Mach number, inlet drag, total pressure recovery efficiency, kinetic energy efficiency, dimensionless entropy increase and adiabatic compression efficiency, etc.However, these parameters are confined by merely providing the performance values at a cross-section, such as the exit, and failing to provide information on which physical process causes the energy loss, where the loss is located and what amount the loss is.Van Wie pointed out that these parameters have their own advantages and disadvantages when used for evaluating inlet characteristics, and a single parameter is insufficient to completely specify the performance [2].In addition, these parameters cannot be universally suitable in each component of scramjets, such as the total pressure recovery coefficient, which is not convenient for the analysis of the combustion chambers and overall performance of scramjets. Exergy has been employed in the system-level analysis of hypersonic vehicles as a common metric by Moorhouse [8] and Riggins [9] and to analyze scramjet engines [10] and commercial aircrafts [11].As opposed to the previous research on an engineering analytical approach to the exergy method, a high-fidelity computational-fluid-dynamics (CFD)-based approach to exergy analysis has been reported by some researchers.Arntz proposed an exergy-based formulation which brought a balance between the exergy supplied by the propulsion system and its (partial) destruction within a control volume in integral ways [12].This theoretical formulation was employed to study the NASA Common Research Model (CRM) [13] and Boundary Layer Ingestion (BLI) [14].Based on Arntz's work, Aguirre proposed an exergy-based drag breakdown formulation [15] and applied it in wind-tunnel testing [16].Gao adopted the concept of exergy and decomposed the aerodynamic drag on the wake plane of an RAE 2822 airfoil and an ONERA M6 wing [17].Recently, Novotny implemented exergy-based drag and exergy sensitivity analyses in FUN3D and verified them in a Generic Hypersonic Vehicle [18] and several semi-analytical and drag-based test cases [19].However, these numerical studies mainly focus on the aerodynamic drag or wake flow of aircrafts.Fewer reports have been found on the exergy-based numerical analysis of scramjet engines, especially for the highly coupled internal flow of scramjets, as well as on issues of the exergy loss decomposition in inlets. This paper proposed a control-volume-based exergy method to evaluate Busemann inlets.The main idea of the exergy method is to qualitatively and quantitatively analyze the exergy destroyed in the inlet for a better understanding of the mechanism of the exergy destroyed and the location where the anergy occurs.Firstly, a truncated and corrected Busemann inlet was designed and numerical simulations of the inlet at four Mach numbers were conducted.Afterwards, the control-volume-based exergy method was presented and validated, and the flow field was analyzed by the exergy method with the numerical results.Thereafter, the comparison of the exergy performance indicator with commonly used total performance parameters was also performed to specify the characteristics of the exergy-based evaluation method. 
Methods
The whole theoretical basis and numerical computation process are presented in this section. To begin with, the theoretical design method of Busemann inlets is introduced in Section 2.1. Then a control-volume-based exergy method is detailed in Section 2.2. Afterwards, the numerical scheme of the Busemann inlets and the exergy post-processing technique are expounded in the subsequent section. Finally, the equations of two total performance indicators used to evaluate Busemann inlets are listed.

Truncated and Corrected Busemann Inlet Design Methods
Busemann flow and conical flow, under the assumptions of inviscid, axisymmetric and irrotational flow, are governed by the Taylor-Maccoll equation [20]. The Taylor-Maccoll equation is a non-linear, second-order ordinary differential equation, which in spherical polar coordinates can be written as

$$\frac{\gamma-1}{2}\left[1-U^{2}-\left(\frac{dU}{d\theta}\right)^{2}\right]\left[2U+\frac{dU}{d\theta}\cot\theta+\frac{d^{2}U}{d\theta^{2}}\right]-\frac{dU}{d\theta}\left[U\frac{dU}{d\theta}+\frac{dU}{d\theta}\frac{d^{2}U}{d\theta^{2}}\right]=0 \quad (1)$$

where θ is the angle measured counterclockwise from the downstream direction and U is the radial flow velocity. The equation can be decomposed into two first-order ordinary differential equations and solved using the fourth-order Runge-Kutta algorithm [20,21]. When a throat Mach number (Ma_throat), a freestream Mach number (Ma_∞) and the freestream parameters (T_∞, p_∞) are given, as listed in Table 1, the velocity field between the throat and the freestream can be solved by iterating the shock-wave angle in Equation (1). The convergence condition of the integration is that the wave angle at the exit becomes flat. After the velocity field is solved, the streamline-tracing technique [22] is applied to generate the geometry of the Busemann inlet. The initial discrete points to trace are taken from a circle with a radius of 10 cm. A schematic diagram of an axisymmetric Busemann inlet generated by the theoretical design method is shown in Figure 1. To reduce the inlet length, the streamlines of the Busemann inlet were truncated at the leading edge with a surface angle of 2.4°. An in-house code was developed to carry out the theoretical design process. The comparison of the theoretical design data and numerical simulations of the designed inlet is presented in Section 3.2.

As the Busemann inlet designed from the Taylor-Maccoll equation is valid only for inviscid flow, a boundary layer correction must be made for realistic viscous flow. As is known, the Reynolds number and the characteristic length affect the boundary layer growth. A viscous correction method [23], which is considered more accurate than the flat-plate boundary layer correction, was adopted to correct the boundary layer. In this correction, A and B are constant factors, which are first approximated from the displacement thickness given by the flat-plate boundary layer correction method. Then, an iterative correction using the numerical results is made to adjust the predicted value under the constraint that the core flow field is maintained. In this paper, after several iterations, A is set to 0.015 and B is set to 0.0015.

Exergy-Based Approach
Following the method proposed by Arntz [12], within a control volume surrounding the aero-propulsive system the exergy balance equation can be written as

$$\dot{\mathcal{E}}_{prop}+\dot{\mathcal{E}}_{q}=\dot{W}_{\Gamma}+\dot{\mathcal{E}}_{m}+\dot{\mathcal{E}}_{th}+\dot{\mathcal{A}}_{tot}$$

where Ė_prop is the rate of exergy supplied by propulsion systems and Ė_q is the rate of heat exergy supplied by conduction. Ẇ_Γ is the rate of exergy associated with the energy height, which can accumulate or restitute exergy. Ė_m is the mechanical exergy and Ė_th represents the thermal exergy. Ȧ_tot is the total exergy destroyed, which is also defined as the total anergy.

In this work, the control volume of the Busemann inlet is defined as the space enclosed by three surfaces, shown in Figure 2 with a yellow dashed line: the wall of the inlet, the inlet surface and the outlet surface. The Busemann inlet is set to fly in the cruise state, so the flow is steady without any energy addition, neither thermal nor mechanical. Thus, Ė_prop = 0, Ė_q = 0 and Ẇ_Γ = 0.
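As an illustration of the design procedure of Section 2.1, the sketch below marches the Taylor-Maccoll equation, rewritten as two first-order ODEs, with a classical fourth-order Runge-Kutta scheme. It is a minimal sketch rather than the in-house design code used for the inlet: the oblique-shock/throat initialization and the streamline tracing are omitted, velocities are nondimensionalized by the maximum adiabatic speed, and the starting angle, initial velocity components and step size in the usage line are placeholders, not values from Table 1.

```python
import numpy as np

GAMMA = 1.4  # ratio of specific heats for air

def taylor_maccoll_rhs(theta, y, gamma=GAMMA):
    """Taylor-Maccoll equation as two first-order ODEs in the polar angle theta.

    y = [v_r, v_theta]: radial and angular velocity components, both
    nondimensionalized by the maximum adiabatic speed (so v_r**2 + v_theta**2 < 1).
    """
    v_r, v_t = y
    a2 = 0.5 * (gamma - 1.0) * (1.0 - v_r ** 2 - v_t ** 2)  # (a / V_max)**2
    dv_r = v_t
    dv_t = (v_r * v_t ** 2 - a2 * (2.0 * v_r + v_t / np.tan(theta))) / (a2 - v_t ** 2)
    return np.array([dv_r, dv_t])

def rk4_step(f, theta, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(theta, y)
    k2 = f(theta + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(theta + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(theta + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def march_conical_field(theta0, y0, h=-1e-4, theta_min=1e-3):
    """March the conical velocity field from an initial ray toward smaller
    theta and stop when v_theta changes sign, i.e., when the flow on the
    current ray has turned parallel to a conical surface."""
    theta, y = float(theta0), np.asarray(y0, dtype=float)
    rays = [(theta, y[0], y[1])]
    while theta + h > theta_min:
        y_new = rk4_step(taylor_maccoll_rhs, theta, y, h)
        theta += h
        rays.append((theta, y_new[0], y_new[1]))
        if y[1] * y_new[1] < 0.0:   # sign change of v_theta: terminating ray found
            break
        y = y_new
    return np.array(rays)

# Placeholder initial ray (angle in radians, nondimensional v_r and v_theta).
field = march_conical_field(theta0=np.deg2rad(40.0), y0=(0.55, -0.20))
print(field[-1])  # (theta, v_r, v_theta) on the last computed ray
```

Once a converged conical field is available, the inlet contour follows from tracing streamlines through it, as described above.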
In this work, the control volume of Busemann inlet is defined as a space enclosed by three surfaces, as shown in Figure 2 with yellow dash line: the wall of inlet, inlet surface and outlet surface.The Busemann inlet is set to fly under cruise state, thus the flow is steady without any energy addition, neither thermal nor mechanical.Thus, 0, 0, 0 In the control volume analysis method, the Busemann inlet is usually considered stationary while the airflow is in motion.Thus, the initial mechanical exergy ( m  ) mainly comes from the kinetic exergy of the freestream airflow.However, the mechanical exergy of airflow in other position consists of three terms [24]: streamwise kinetic exergy deposition rate ( (5) where io S represents inlet and outlet surface, , , u v w are components of vector V , n is normal vector of the surface S ,  and p are density and pressure of air and  rep- resents quantity at freestream condition. Thermal exergy ( th  ) of airflow consists of three terms.The first and second term are the rate of thermal energy and the rate of anergy contained in exergy [25].The third term is the rate of isobaric surrounds work and it is an unavailable work due to the system interacting with the reference atmospheric pressure field at p  : Assuming a perfect gas in the Busemann inlet, e  is the internal energy which is proportional to temperature ( The viscous anergy is mainly produced by viscous dissipation and turbulence mixing in the control volume V , especially in the boundary layer zone and shock wave interaction zone.The expression of   is: Dissipative function ( eff  ) is defined as where eff  is the effective (viscous and turbulence) stress tenor, which can be expressed by Boussinesq's hypothesis [26]. and t  are the molecular viscosity and the eddy viscos- ity. S is the mean stain rate tensor.Thermal anergy ( T   ) is related to thermal mixing in the control volume, especially in the shock wave zone with high temperature. where eff k is the effective thermal conductivity.Shock wave anergy ( w  ) is related to shock waves and is expressed as: To be noted, the calculation of shock wave anergy relies on the definition of the shock surfaces ( w S ), which enclose the entropy production in the control volume.The detection method for shock wave regions relies on the following dimensionless function proposed by Lovely and Haimes [27]: where a is speed of sound and p  is pressure gradient.The identification of shock wave region depends on the threshold value w  .The region where w   in the control volume of flow fields is selected to integrate the shock wave anergy.A datum value of 0.95 is chosen from existing experience [12]. Numerical Methods and Boundary Conditions The Reynolds-average Navier Stokes (RANS) simulations were conducted to analyze half of Busemann inlets using commercial software ANSYS FLUENT 2020 R2.The fluid computing domain is shown in Figure 2, which is divided into two regions.One is inlet control volumes (yellow dash line) and the other is surroundings.The Busemann inlet cruising at 30 km height was designed with an incoming Mach number of 5 and exit Mach number of 3. 
The inflow condition of the Busemann inlet was set as a pressure far field with an incoming flow pressure of 1170 Pa, while the outlet was set as a pressure outlet. The wall is no-slip and adiabatic, and the symmetry plane is symmetric. The RNG k-ε turbulence model was applied to close the governing equations. A Menter-Lechner near-wall treatment was used to provide high-resolution numerical predictions in the near-wall region. The viscosity is calculated according to the Sutherland law. The advection upstream splitting method (AUSM) was used to reconstruct the flux, and a second-order upwind scheme was adopted to discretize the spatial terms. The selection of the numerical method and turbulence model is validated in McCready [28] and Liu [29].

When the simulation is done, the basic physical quantities such as temperature, pressure, the velocity vector and its components, density and entropy are all obtained. Moreover, the gradients of these physical quantities are also available. Geometric information, such as the volume of each cell and the area and orientation of each face of the cells, can be obtained from the results of the RANS solver as well. These physical and geometrical quantities are extracted and represent the input data for the exergy postprocessing code. FLUENT user-defined functions (UDFs) are applied to transform all the exergy-related terms in Section 2.2 into code. UDFs are a user programming environment provided by FLUENT to enhance its capabilities. Figure 3 shows the procedure of the exergy postprocessing code. The terms on the left side of Equations (5), (6) and (8)-(10) can be derived from the physical and geometrical quantities according to the expressions on the right side of the equations. The integration operations listed in these equations are calculated by summing up the variables of the discrete cells in the fluid computing domain. When the exergy postprocessing code is finished, an output file containing all terms related to exergy is produced for users to further analyze.
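As a companion to the postprocessing procedure just described, the following is a minimal Python sketch of how the anergy terms of Section 2.2 can be accumulated from exported cell data. It is not the FLUENT UDF code used here: the array layout, the field names, the freestream temperature value and the entropy-jump input are illustrative assumptions, and the integrands follow the Arntz-type expressions given in Section 2.2.

```python
import numpy as np

T_INF = 226.5   # assumed freestream static temperature [K]; placeholder value
EPS_W = 0.95    # shock-detection threshold used in this work

def shock_sensor(vel, grad_p, a):
    """Lovely-Haimes shock function per cell: Mach number component along the
    local pressure-gradient direction."""
    num = np.einsum("ij,ij->i", vel, grad_p)          # V . grad(p)
    den = a * np.linalg.norm(grad_p, axis=1) + 1e-30  # a * |grad(p)|
    return num / den

def viscous_anergy(T, phi_eff, vol):
    """Sum over cells of (T_inf / T) * Phi_eff * dV."""
    return np.sum(T_INF / T * phi_eff * vol)

def thermal_anergy(T, grad_T, k_eff, vol):
    """Sum over cells of (T_inf / T**2) * k_eff * |grad T|**2 * dV."""
    grad_T_sq = np.einsum("ij,ij->i", grad_T, grad_T)
    return np.sum(T_INF / T ** 2 * k_eff * grad_T_sq * vol)

def shock_anergy(sensor, rho, delta_s, vdotn_dS):
    """Entropy outflow through cells flagged as shock regions (sensor >= EPS_W),
    weighted by the reference temperature."""
    mask = sensor >= EPS_W
    return T_INF * np.sum(rho[mask] * delta_s[mask] * vdotn_dS[mask])

# Tiny synthetic check with random cell data (illustrative only).
n = 1000
rng = np.random.default_rng(1)
T = 250.0 + 100.0 * rng.random(n)
vol = 1e-6 * rng.random(n)
print(viscous_anergy(T, 1e3 * rng.random(n), vol),
      thermal_anergy(T, 50.0 * rng.standard_normal((n, 3)), 0.03 * np.ones(n), vol))
```

Summing these cell-wise contributions gives a discrete form of the volume and surface integrals in Equations (8)-(10), which mirrors what the UDF-based postprocessing code is described as doing before writing its output file.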
Performance Indicators
The total pressure recovery coefficient and the exergy destruction efficiency were adopted to evaluate the total performance of the Busemann inlets, and comparisons of these two indicators were also made to point out the merits of the exergy method. The total pressure recovery coefficient (σ) is commonly defined as the ratio of the total pressure at the exit (P_te) to the total pressure of the freestream (P_t∞):

$$\sigma=\frac{P_{te}}{P_{t\infty}}$$

The exergy destruction efficiency represents the percentage of the exergy destroyed in the Busemann inlet (Ȧ_tot) relative to the total incoming exergy (Ė_tot). The total exergy destroyed in the Busemann inlet was decomposed into three parts, as shown in Equation (7). The total incoming exergy in the Busemann inlet is equal to the sum of the mechanical exergy and the thermal exergy of the airflow, as shown in Equations (5) and (6). Thus, the exergy destruction efficiency can be written as follows:

$$\frac{\dot{\mathcal{A}}_{tot}}{\dot{\mathcal{E}}_{tot}}=\frac{\dot{\mathcal{A}}_{\Phi}+\dot{\mathcal{A}}_{\nabla T}+\dot{\mathcal{A}}_{w}}{\dot{\mathcal{E}}_{m}+\dot{\mathcal{E}}_{th}}$$

Validations
When the geometry was designed by the theoretical design method, numerical simulations were conducted to validate the design method. During the numerical computation, the number and distribution of grids in the inlet should be chosen to ensure the accuracy of the numerical simulations. Thus, the grid dependency validation and the comparisons of design data and numerical results are presented in Section 3.1 and Section 3.2, respectively. Moreover, before the exergy method was applied to evaluate the corrected Busemann inlet, the inviscid flow in a Busemann inlet was analyzed by the exergy method and the results were validated with the Gouy-Stodola theorem in Section 3.3.

Grid Dependency Validation
As shown in Figure 2, all parts of the computing domain were partitioned into structured hexahedral grids, which were generated by the commercial software Pointwise V18.0R1. The typical normal-wall cell spacing is set to 5 microns to keep the y+ values below 1. Refinement at the edges was performed to smooth transitions around corners. The variation of the exit Mach number and exit static pressure with the number of grids was analyzed to determine the suitable number of grids, as presented in Figure 4. It can be observed that the differences in the exit Mach number and static pressure between 3 million grids and 12 million grids are 0.016% and 0.034%, respectively. As this difference is much smaller than the errors of the numerical process, the number of grids chosen for the numerical simulations is in the middle range of about seven million.

Comparison of Design Data and Numerical Models
The comparison of the Mach number along the wall between the theoretical design method and the numerical results is illustrated in Figure 5. As can be seen, the numerical curve coincides well with the theoretical curve. After the last oblique shockwave, the Mach number drops rapidly below 3 and then remains at 2.74 in the numerical results, which cannot be shown in the theoretical results. This is because the Busemann inlet in the numerical analysis has an isolation section. Figure 6 shows the contour lines of the Mach number of the numerical results. The straight and clear contour lines gradually decrease from five to three, which completely conforms to the conical flow design method.

Exergy Analysis of Inviscid Flow in Busemann Inlets
When the airflow in the Busemann inlet is inviscid, the only way for exergy to be destroyed in the inlet is the irreversible process of discontinuous shock waves. The contours of the entropy production of shock waves are shown in Figure 7.
Entropy production mainly occurs in the areas after the last oblique shock wave of the conical flow. The larger values of entropy production are mostly distributed around the axis of symmetry and near the wall. After the airflow is compressed by the last shockwave, the direction and magnitude of the airflow velocity near the symmetric axis and the wall are not completely equal, resulting in a large number of weak compression shock waves that cause exergy loss. This phenomenon in the contours of entropy production agrees well with the contours of the density gradient from the Euler flow solution at the design condition [30]. According to the Gouy-Stodola theorem, the rate of anergy equals the reference (freestream) temperature multiplied by the rate of entropy production in the control volume. Thus, the rate of anergy (42.84 J·s−1) in the designed Busemann inlet was obtained from this formula, as presented in Table 2.

On the other hand, the exergy destruction in the control volume can also be calculated as the difference between the total exergy of the inflow and the outflow, as illustrated in Table 3. The exergy of the incoming flow is composed mainly of streamwise kinetic exergy and thermal exergy. When the airflow was compressed in the Busemann inlet, part of the incoming exergy was transformed into boundary pressure work (4449.83 J·s−1, 12.12%) and some was turned into thermal exergy (4972.74 J·s−1, 13.54%), while the remaining energy was still retained in the high Mach number airflow in the form of kinetic exergy (25,274.78 J·s−1, 68.81%). A very small amount of exergy turned into transverse kinetic exergy, as listed in Table 3. Thus, the difference between the exergy inflow and exergy outflow is 42.93 J·s−1. This value is consistent with the anergy calculated from the Gouy-Stodola theorem, with an error of 0.21%. That is, the data listed in Tables 2 and 3 prove that the exergy method adopted and the calculation process are completely effective and correct. To be noted, the anergy produced by shock waves accounts for only 0.12% of the total incoming exergy.

Results and Discussion
After the numerical results of the inviscid Busemann inlet were mutually validated with the design values and the control-volume-based exergy method was confirmed by the Gouy-Stodola theorem, the corrected Busemann inlets in the off-design conditions of Mach 4.5, 5.5 and 6 were analyzed further. Qualitative and quantitative analyses of the flow fields in the Busemann inlet using the exergy method were conducted. Moreover, several total performance indicators were compared to identify the characteristics of the control-volume-based exergy method.
Flow Field Exergy Loss Analysis The contours of four different Mach numbers in Busemann inlets are shown in Figure 8.The thickness of the boundary layer gradually increases from the entrance of the inlet and the Mach number contours become curved relative to that of the inviscid flow.As the incoming Mach number gradually increases, the apex of the conical shock moves backwards towards the throat.It is worth noting that the base of the conical shock aligned with the shoulder when the inlet was an on-design case, while in other speed cases, the phenomena of the boundary layer interacting with the shoulder-generated shocks, the conical shock and its reflected shocks are more obvious, leading to a more complicated flow field.These trends are consistent with the results obtained in [30].As would be expected, the intensity of impinging the shock waves and reflected shock waves increases with the Mach numbers.Two obvious zones can be seen from the distribution of total entropy production in inlets, as shown in Figure 9: the main flow area and near-the-wall area.Observing from the inlet to the outlet direction, there is initially nearly no anergy generated in the central region of the main flow until it encounters the first oblique shock wave, resulting in a strong exergy loss increasing with the shock wave strength.When the airflow enters the isentropic compression region, the exergy loss is significantly reduced.Afterwards, the airflow passes through the conical shock and turns, approaching parallel to the axis.However, when the base of the conical shock is not aligned with the shoulder, the shock waves are reflected between the walls and symmetric axis, causing lots of shock wave entropy production.Meanwhile, the viscous anergy caused by the airflow shear effect and the thermal anergy caused by the temperature gradient within the airflow are also produced in the mainstream.Moreover, in the area near the wall, a substantial amount of viscous anergy and thermal anergy is generated within the velocity boundary layer and the temperature boundary layer.The irreversible energy loss gradually increases with the boundary layer along the inlet to the outlet direction.In addition, the amount of entropy production generated in the main flow after conical shock gradually increases with the Mach number, without any deviation due to the influence of the design Mach number.Contours of entropy production caused by shock waves, viscous interactions and thermal mixing, respectively, are displayed in Figures 10-12.The entropy production of shock waves is mainly caused by the interaction between shock waves and the boundary layer, incident shock waves, conical shock waves and its reflected shock waves, as shown in Figure 10.A certain degree of compression is also produced near the intersection of conical shock waves.Among them, the interaction between the shock wave and boundary layer takes the largest proportion.As the thickness of the boundary layer gradually increases, the airflow is further compressed by the wall and causes more shock wave entropy production.The inverse pressure gradient of the shock waves in turn induces the deformation, separation and turbulent pulsation of the boundary layer. 
When the free stream Mach number is 4.5, the conical shock wave hits the inlet side and reflects, as shown in Figure 10a.The airflow passing through the reflected shock expands and accelerates at the shoulder, resulting in the disappearance of shock wave entropy production near the shoulder.Meanwhile, the reflected shock wave intersects on the axis of symmetry and is further reflected within the isolation section, thus generating more entropy production.When the Mach numbers are 5.5 and 6, as in Figure 10c,d, the shock wave gradually moves towards the exit and the entropy production disappearing area where the expansion wave occurs near the shoulder becomes more obvious.Moreover, as the conical shock wave is reflected by the wall of isolators, the boundary layer of the isolators is disturbed, which causes the discontinuation of entropy production near the wall.By contrast, the shock wave hits the shoulder at the design Mach number in Figure 10b and the expansion wave area at the corner is the smallest.In addition, the entropy generation area in the isolator is continuous and keeps to a minimum number.Thermal entropy production in the inlet mainly occurs at the intersection of conical shock waves and the boundary layer of isolators, as observed in Figure 11.When the Mach number is lower than or equal to the design Mach number (Figure 11a,b), it is obvious that the thermal entropy production increases with the boundary layer thickness and the intensity of the shock waves and their reflected shock wave.This phenomenon is consistent with the formula that a key factor influencing thermal entropy production is the value of the temperature gradient in the airflow (Equation ( 9)).When the Mach number is greater than the design value (Ma = 5.5 or 6), the thermal entropy production contours become complex, as shown in Figure 11c,d.In addition to a substantial amount of entropy production generated at the intersection of the conical shock wave, the reflection of the conical shock wave at the isolator jointed with the expansion wave occurring at the shoulder caused severe turbulence pulsation, deformation and flow separation in the boundary layer, resulting in an intense heat exchange.Moreover, the main flow area could not remain spatially uniform due to sudden changes in the thickness of the boundary layer, leading to the emission of shock waves from the boundary layer towards the main flow.This shock wave follows the expansion wave and the reflected conical shock waves aforementioned that simultaneously affect the temperature gradient in the main flow and generate a considerable amount of entropy generation.The distribution of the viscous entropy production contours is generally similar to that of the thermal entropy production contours, as a large velocity gradient in hypersonic airflows is always accompanied by large temperature gradient.However, two differences can be found between Figures 11 and 12. 
First, viscous entropy generation occurs not only at the intersection of the conical shock waves; a ring of viscous dissipation also surrounds the intersection, and the viscous dissipation decreases gradually from the centre of the intersection towards the wall. This indicates that the temperature gradient is concentrated within the shock at the intersection of the conical shocks, whereas viscous shear takes large values both in the shock and in its surroundings. Second, the region of high viscous entropy production in the isolator is shorter in the axial direction than the region of high thermal entropy production, implying that the velocity gradient decays faster than the temperature gradient. In addition, both thermal and viscous entropy production are much larger in magnitude than the shock entropy production.

Exergy Distribution Quantitative Analysis

The amounts of exergy lost and stored in the inlet were quantified in this section. The total exergy at the entrance and exit of the control volume was calculated by adding the mechanical and thermal exergy according to Equations (5) and (6), and the exergy loss in the control volume was then obtained by subtracting the total exergy at the exit from that at the entrance. Alternatively, the total exergy loss can be obtained by summing all of the anergy production terms (shock-wave, thermal and viscous anergy) in the control volume according to Equation (7). In theory, the exergy losses calculated by these two methods should be identical. Figure 13 shows that the total exergy loss obtained by the first method is almost equal to the sum of the three losses obtained by the second, apart from a small amount of numerical dissipation whose maximum value accounts for 1.85% of the total anergy. This confirms the consistency of the computational method used in this work.

Each kind of anergy in the inlet increases with the Mach number, as shown in Figure 13, and the design point at Mach 5 has little effect on the growth rate of anergy with Mach number. Losses caused by heat exchange and viscous shear account for the majority of the total anergy and are of roughly equal magnitude. In other words, large velocity and temperature gradients produce most of the anergy, according to Equations (8) and (9), and these severe gradients occur mainly in the boundary layers and around intense shock waves (Figures 11 and 12). A large amount of kinetic exergy is converted into the energy of random molecular motion when the supersonic airflow decelerates through shock compression or viscous blockage; high velocity gradients are therefore always accompanied by high temperature gradients. The loss caused by shock waves is relatively small, especially below the design Mach number, meaning that the entropy rise across the shocks themselves is small and has only a minor effect on the exergy loss. The magnitude of the numerical dissipation error is relatively stable, and its share of the total anergy decreases as the Mach number increases. Regarding numerical dissipation, Arntz employed an empirical correction that allocates 95% of the numerical dissipation to viscous anergy and the remaining 5% to thermal anergy in the aerodynamic analysis of the NASA common research model [13].
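A minimal sketch of the two bookkeeping routes compared in Figure 13 is given below, with the residual between them attributed to numerical dissipation. The specific flow exergy used here is the standard ideal-gas form and only approximates the mechanical/thermal split of the paper's Equations (5)-(6).

```python
import numpy as np

CP, R_GAS, T0, P0 = 1004.5, 287.0, 288.15, 101325.0   # ideal-gas air and dead-state conditions

def specific_flow_exergy(T, p, V):
    """Specific flow exergy of an ideal gas (J/kg): thermal + pressure + kinetic terms."""
    return (CP * (T - T0) - T0 * (CP * np.log(T / T0) - R_GAS * np.log(p / P0))
            + 0.5 * V**2)

def plane_exergy_flux(mdot, T, p, V):
    """Exergy flux (W) through a plane, summed over its cells (mdot = rho*u_n*A per cell)."""
    return float(np.sum(mdot * specific_flow_exergy(T, p, V)))

def exergy_balance_check(exergy_in, exergy_out, anergy_shock, anergy_viscous, anergy_thermal):
    """Route 1 (boundary): loss = inflow - outflow. Route 2 (volumetric): sum of anergies.
    The difference between the two routes is the numerical-dissipation residual."""
    loss_boundary = exergy_in - exergy_out
    loss_volumetric = anergy_shock + anergy_viscous + anergy_thermal
    residual = loss_boundary - loss_volumetric
    return loss_boundary, loss_volumetric, residual
```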
Large eddy simulation (LES) or direct numerical simulation (DNS) solvers, combined with a higher-precision exergy postprocessing code, could be employed to improve the numerical accuracy.

The decomposition of the exergy at the entrance and exit of the control volume according to Equations (5) and (6) is shown in Figure 14. The inflow exergy increases with the Mach number, and the incoming streamwise kinetic exergy (shown below the coordinate axis) accounts for all of it. Downstream, the vast majority of the inflow exergy is converted into boundary pressure work (> 40%) and thermal exergy stored in the high-temperature airflow (approximately 46%). The proportion of boundary pressure work decreases from 43.6% to 41.2% as the Mach number increases from 4.5 to 6. A very small portion of the inflow exergy is converted into transverse kinetic energy, probably as a result of turbulent shear flow. All remaining exergy is destroyed through the mechanisms shown in Figure 13, giving an exergy efficiency of about 90%.

Total Performance Parameter Analysis

Commonly used overall performance parameters and exergy-related indicators are analysed in this section. The total pressure recovery coefficient is usually used to evaluate the loss of work potential as high-speed airflow passes through an inlet. As shown in Figure 15, the total pressure recovery coefficient drops from 67.3% to 60.6% as the Mach number increases, corresponding to a rise in exergy destruction efficiency from 9.6% to 11.6%. These values imply that some of the energy counted as lost by the total pressure recovery coefficient is in fact still available. This is mainly because the thermal exergy of the high-temperature airflow at the outlet is treated as a loss when calculating the total pressure recovery coefficient, whereas in reality this thermal exergy enters the combustion chamber and is further converted into useful work.
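The contrast just drawn between the two metrics can be illustrated with single representative inlet and exit states, as sketched below. The flow-exergy split into thermal, pressure-work and kinetic terms is a standard textbook decomposition used here in place of the paper's Equations (5)-(6), and the representative-state treatment simplifies the mass-flux-weighted integration actually performed on the CFD solution.

```python
import numpy as np

CP, R_GAS, GAMMA = 1004.5, 287.0, 1.4        # ideal-gas air
T0, P0 = 288.15, 101325.0                    # ambient (dead-state) conditions

def flow_exergy_terms(T, p, mach):
    """Thermal, pressure-work and kinetic parts of the specific flow exergy (J/kg)."""
    V = mach * np.sqrt(GAMMA * R_GAS * T)
    thermal = CP * (T - T0) - T0 * CP * np.log(T / T0)
    pressure_work = R_GAS * T0 * np.log(p / P0)
    kinetic = 0.5 * V**2
    return thermal, pressure_work, kinetic

def total_pressure(T, p, mach):
    """Isentropic total pressure from a static state and Mach number."""
    return p * (1.0 + 0.5 * (GAMMA - 1.0) * mach**2) ** (GAMMA / (GAMMA - 1.0))

def compare_metrics(inlet, outlet):
    """inlet/outlet: (T_static [K], p_static [Pa], Mach) representative states."""
    sigma = total_pressure(*outlet) / total_pressure(*inlet)            # total pressure recovery
    eta_exergy = sum(flow_exergy_terms(*outlet)) / sum(flow_exergy_terms(*inlet))
    # For the inlets studied, eta_exergy exceeds sigma because the outlet thermal
    # exergy is credited as useful rather than written off as a loss.
    return sigma, eta_exergy
```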
The total pressure recovery coefficient (left panel of Figure 16) near the wall is very low, approaching zero; correspondingly, a large amount of entropy production (right panel) is generated in the same location. Moving radially towards the centre, the total pressure recovery coefficient first rises and then falls, and the entropy production drops and then rises in step. This indicates that the airflow with the maximum total pressure recovery and maximum exergy efficiency is not located at the centre but approximately in the middle of the radius. The flow near the axis loses exergy because the converging streams differ in the direction and magnitude of their radial velocity, as discussed in Section 3.2, whereas the flow near the wall loses exergy through the shock wave-boundary layer interaction discussed in Section 4.1; the flow in the mid-radius region therefore performs best. In addition, the entropy production over the entire outlet cross-section increases with the Mach number, which raises the total exergy destruction efficiency, as shown in Figure 15. Overall, the Busemann inlet, with its circular cross-section and no offset, delivers a very uniform outlet flow along the circumference.

Thermal and viscous anergy are the main exergy losses in the inlets. Figure 17 shows that the thermal anergy rises from 411.8 J·s−1 to 877.28 J·s−1 and the viscous anergy from 479.7 J·s−1 to 744.17 J·s−1 as the Mach number increases from 4.5 to 6. The two curves intersect at a Mach number between 5 and 5.5: below the intersection the thermal anergy is lower than the viscous anergy, whereas above it the thermal anergy increases rapidly and exceeds the viscous anergy. A likely explanation is that the convective heat transfer coefficient increases significantly and heat mixing in the inlet intensifies once the Mach number exceeds the design point, whereas the viscosity coefficient varies less with flow speed and temperature. Because the shock anergy is almost negligible, a minimum-total-exergy-destroyed optimisation could be formulated to find the Mach number at which the sum of thermal and viscous anergy is smallest.

The curves of the static pressure compression ratio and the uniformity index are displayed in Figure 18. A higher static pressure compression ratio indicates a higher exit pressure, which helps resist the back pressure from the combustion chamber that may otherwise unstart the inlet. The flow at the design point has the best uniformity index (98.8%) but the lowest static pressure compression ratio (14.6); the increased outlet static pressure at other Mach numbers comes at the price of a less uniform flow field. The exit Mach number rises from 2.23 to 3.22 and the static temperature ratio from 2.56 to 2.80 as the flight Mach number increases, as shown in Figure 19. A higher static temperature ratio usually implies a higher thermodynamic efficiency, provided it does not exceed the maximum allowable compression temperature (1440-1670 K), beyond which non-equilibrium dissociation and additional exergy loss occur. However, a higher exit Mach number implies a shorter residence time in the combustion chamber, which may reduce combustion efficiency. It is therefore necessary to select a flight Mach number that gives an optimal trade-off among the various performance parameters.
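The outlet metrics discussed above are obtained by averaging exit-plane CFD data; a rough sketch of that postprocessing is shown below. The mass-flux weighting and, in particular, the pressure-based uniformity index are assumptions, since the paper does not spell out its exact averaging or uniformity formulas.

```python
import numpy as np

def exit_plane_metrics(mdot, p, T, mach, pt, free):
    """Mass-flux-weighted performance metrics over the exit-plane cells.

    mdot, p, T, mach, pt : per-cell mass flux, static pressure, static temperature,
                           Mach number and total pressure on the exit plane
    free                 : dict with freestream 'p', 'T' and 'pt'
    """
    w = mdot / np.sum(mdot)                       # mass-flux weights
    p_bar = np.sum(w * p)
    return {
        "total_pressure_recovery": np.sum(w * pt) / free["pt"],
        "static_pressure_ratio": p_bar / free["p"],
        "static_temperature_ratio": np.sum(w * T) / free["T"],
        "exit_mach": np.sum(w * mach),
        # One common uniformity definition: 1 minus the normalised mean deviation.
        "uniformity_index": 1.0 - np.sum(w * np.abs(p - p_bar)) / (2.0 * p_bar),
    }
```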
Conclusions

Exergy is considered a potentially universal indicator for designing and assessing the highly coupled internal flow of a scramjet within a system-level framework. In this paper, a control-volume-based exergy performance evaluation method was proposed for Reynolds-averaged Navier-Stokes (RANS) solutions of truncated and corrected Busemann inlets. To the authors' knowledge, this is the first time exergy has been used to evaluate scramjet inlets in a CFD setting. The striking characteristic of the control-volume-based exergy method is that it makes clear which physical process causes an energy loss, where the loss is located and how large it is. An exergy postprocessing code was also developed to carry out the evaluation.

A Busemann inlet was first designed from the Taylor-Maccoll equation, and its geometry was extracted using the streamline-tracing technique. Truncation and boundary layer correction were applied to generate an acceptable Busemann inlet. The theoretical design method and the numerical scheme were then validated, and the exergy method within a control volume of the Busemann inlet was verified against the Gouy-Stodola theorem. Exergy analyses of the Busemann inlets at four Mach numbers were then carried out, both qualitatively and quantitatively, on the CFD-RANS solutions. The main findings are as follows:

(1) Compared with traditional performance indicators such as the total pressure recovery coefficient, which provide only an overall value, the exergy method reveals the amount and the evolution of each component of the exergy destroyed in the inlet. In the Busemann inlet, the destroyed exergy can be decomposed into shock-wave, viscous and thermal anergy. Shock-wave anergy accounts for less than 4% of the total exergy destroyed, whereas thermal and viscous anergy are of roughly equal magnitude and contribute almost all of the remainder. The vast majority of the inflow exergy is converted into boundary pressure work and thermal exergy.

(2) The total pressure recovery coefficient, total anergy and static temperature ratio of the Busemann inlets vary nonlinearly with the Mach number, with no deviation attributable to the design point. However, the on-design Busemann inlet has the maximum pressure uniformity at the exit.

(3) The exergy efficiency of the Busemann inlets is higher than the total pressure recovery coefficient because some of the thermal exergy treated as a loss in the total-pressure calculation in fact enters the combustion chamber and is converted into useful work. The viscous and thermal anergy curves intersect at a Mach number between 5.0 and 5.5, implying that a minimum of the total exergy destroyed could be found at a particular Mach number if the total anergy in the inlet were optimised.

The control-volume-based exergy approach can also be extended to other scramjet components, such as nozzles and isolators. If the combustion process is simulated, however, chemical exergy must be included in the exergy balance equation; this should be explored in future research.

Figure and table captions:
Figure 2. Computing domain and structured mesh of a Busemann inlet. The total anergy in the Busemann inlet is decomposed into three terms: viscous anergy, thermal anergy and shock-wave anergy (s denotes the mass-specific entropy).
Figure 5. Curves of the Mach number distribution along the wall.
Figure 7. Contours of entropy production of shock waves. According to the Gouy-Stodola theorem, exergy destruction equals the total entropy generation multiplied by the ambient temperature, E*_destruction = T_0 S_generation.
Figure 12. Contours of viscous entropy production at four Mach numbers.
Figure 13. Decomposition of the exergy destroyed in the control volume of the inlets.
Figure 14. Exergy stored at the entrance and exit of the control volume.
Figure 15. Total pressure recovery coefficient and exergy destruction efficiency versus flight Mach number.
Figure 16. Contours of total pressure recovery coefficient and entropy production at the outlet cross-section.
Figure 18. Static pressure compression ratio and uniformity index versus flight Mach number.
Figure 19. Exit Mach number and static temperature ratio versus flight Mach number.
Table 1. Physical and design parameters of the Busemann inlets.
Table 2. Comparison of anergy calculated from entropy production and from the balance equations.
Table 3. Difference between the forms of exergy of the inflow and outflow.
Analysis of 5-Year-Old Children's Oral Health Service Utilization and Influencing Factors in Guizhou Province, China (2019-2020)

Background: This study aimed to investigate the utilization patterns and factors related to oral health care for 5-year-old preschoolers, based on Andersen's Behavioural Model, in Guizhou Province, Western China.

Method: A cross-sectional study of 4,862 5-year-old preschoolers in 66 kindergartens was conducted in 2019 and 2020. A basic oral examination and a survey of parents and grandparents were conducted to gather data on oral health services. The results were analysed using chi-square tests and logistic regression analysis.

Result: The utilization rate of oral health services for children in Guizhou Province was 20.5%. The dmft was 4.43, and the caries rate was 72.2%. The average cost of a dental visit was higher in rural areas and higher for girls. Logistic regression analysis revealed that dmft ≥ 6 teeth, a history of toothache, starting toothbrushing at age ≤ 3 years and limited parental knowledge were the primary factors impacting dental visits.

Conclusion: Need factors such as severe oral conditions and pain in children are the main reasons for the utilization of these services. This study underscores the urgency of actively promoting the importance of oral health and expanding insurance coverage for oral health services.

Introduction

Oral diseases are among the most prevalent noncommunicable diseases, affecting approximately half of the world's population (3.5 billion individuals) while being largely preventable. An estimated 2.5 billion people have untreated tooth decay [1]. The health of the deciduous teeth is intimately tied to the healthy eruption of the permanent teeth, the child's nutritional intake and the child's growth [2]. During the primary dentition stage, dental caries is the most prevalent oral disease [3]. Dental problems affect children's chewing and eating, enunciation and facial development, and even their mental health as adults [4]. The dental status of a five-year-old child is critical, as this age coincides with the beginning of the mixed dentition [5]. According to the fourth national oral health epidemiological survey conducted in China in 2017, the prevalence of early childhood caries (ECC) in 5-year-olds reached 71.9% [6].

Utilization of oral health services refers to residents' actual use of oral health services, as well as the quantity and efficacy of the oral health services provided to residents by dental institutions. It indirectly reflects both the importance residents place on oral diseases and the level of development of oral health care services [7]. It is generally accepted that children have access to oral health services in most countries. Since its inception in 1921, the School Dental Service (SDS) of New Zealand has explored novel strategies for improvement, and children's dental health there has steadily improved since the 1980s [8]. In the United States, 94% of children who have been to the dentist for the first time continue to receive regular checkups [9]. The Fourth National Oral Health Epidemiology Survey in China identified potential predictors of children's use of dental services, such as feeding method, age at initiation of toothbrushing and use of fluoride toothpaste [10]. In Beijing, China, 45.5% of preschool children aged 2 to 6 years used oral health services within 12 months [11].
Guizhou Province is located in the western region of China and is an economically underdeveloped, mountainous province [12,13]. In addition, Guizhou is home to people of different ethnicities, including Miao, Gelao and Buyi individuals [14]. However, Guizhou does not have as many medical resources as other southern Chinese provinces [15].

China is investigating policies to lower medical costs and lighten the financial burden on families [16]. Reducing the financial cost of health care requires better support from basic health insurance, and it is important that oral health services for children be included. It is worth noting that the utilization of oral health services for children remains low in some parts of China [10]; however, this has not previously been reported for Guizhou. Hence, to examine this issue and to explore the case for including children's oral health services in basic public health coverage, this study conducted an epidemiological survey of children's oral health service utilization and related factors. For this purpose, the survey targeted five-year-old children in Guizhou Province, China.

Several points underline the significance of our findings. First, the present study fills a gap by investigating oral health care utilization among 5-year-olds in Guizhou Province. Second, since trends are not balanced across regions of China, the results can be compared with those from other regions (e.g., Beijing), allowing researchers to identify regional differences, determine their causes and propose solutions. In addition, the local basic insurance for children does not cover dental services, so those who need dental care must bear all costs themselves, which reduces the utilization of dental services to some extent and places a greater burden on preventive dental work and dentists.

In the current analysis, relevant variables were selected using the Andersen Behavioural Model as a guiding framework [17,18]. Before conducting the study, we hypothesized that several factors might contribute to underutilization of oral health services among children in Guizhou, including feeding method, use of fluoride toothpaste, history of toothache and parents' knowledge of oral health. Our study assessed these factors as independent variables.

Study design

This survey was conducted in accordance with the STROBE guidelines [19] and used a cross-sectional design. According to local policy, a person is considered a current resident after six months of residence [20]. Participants had to meet two inclusion criteria: residing in Guizhou Province for more than six months and being 5 years old [21]. Study participants were examined at their kindergartens. Recruitment commenced between June 2019 and November 2020. Data sources for the sample of children included oral examinations and questionnaires (covering demographics, oral health-related behaviour, attitudes towards oral health, and oral health knowledge).
Methods of sampling and participants

A multistage stratified, cluster random sampling procedure was used. In the first stage, 11 districts (counties) across the 9 cities of Guizhou Province were selected using the probability-proportional-to-size (PPS) method; in the second stage, 6 kindergartens were randomly selected in each district (county); and in the third stage, whole-group (cluster) sampling was applied within each kindergarten. Eligibility criteria were being 5 years old, attending a sampled kindergarten and living with parents/grandparents. The planned sample size was 4,884 children: 11 districts × 6 kindergartens × 74 children = 4,884. Twenty-two children were excluded for the following reasons: inability to cooperate with the examination, absence due to illness, or parents' refusal to sign the informed consent form. The final sample size was 4,862.

During the first two weeks of the agreed-upon survey period, personnel from the local health board and education administration initiated the mobilization process and distributed information sheets and informed consent forms to the participants' parents.

Oral examination

Children's crown caries were assessed following the WHO Oral Health Surveys Basic Methods (5th edition) [18]. The decayed-missing-filled teeth (dmft) index was used to measure the prevalence of primary dental caries, with data collected in accordance with World Health Organization standards [18]. Following the WHO guidance, the examinations were completed in an open, quiet classroom after the examiner had cleaned the area, put on a sterilized white coat, headlamp and gloves, and set up the portable dental chairs. The instruments for the oral examination were portable dental chairs (Sinol, 3,052,021,070,371), sterilized plane mouth mirrors, community periodontal index (CPI) probes, rubber gloves, a wash basin and gauze.
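As a rough illustration of the three-stage draw described at the start of this subsection (PPS selection of districts, random selection of kindergartens, whole-group enrolment of eligible children), the sketch below uses simplified sampling with replacement; the survey itself used systematic PPS within a stratified design, so this is only indicative.

```python
import random

def pps_select(districts, size_measure, n_draws=11, seed=2019):
    """Stage 1: probability-proportional-to-size draw of districts (with replacement
    for simplicity). `size_measure` is each district's measure of size, for example
    its count of eligible 5-year-olds."""
    rng = random.Random(seed)
    return rng.choices(districts, weights=size_measure, k=n_draws)

def stage_two(kindergartens_in_district, seed=2019):
    """Stage 2: simple random sample of 6 kindergartens within a selected district;
    stage 3 then enrols every eligible child in each selected kindergarten."""
    rng = random.Random(seed)
    return rng.sample(kindergartens_in_district, k=6)

# Design arithmetic reported in the text:
planned = 11 * 6 * 74        # 4,884 planned examinations
analysed = planned - 22      # 4,862 children after exclusions
```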
Questionnaire

The questionnaire variables were derived from the guidance of China's Fourth National Oral Health Epidemiological Survey and the WHO Oral Health Surveys Basic Methods (5th edition, 2013) [10,22]. The study outcome is whether a 5-year-old child had ever utilized oral health services, encompassing community oral health services, seeking medical attention and participation in oral health promotion activities [10]. Items and measures were selected through extensive discussion among Chinese public health, medical statistics and dentistry experts, ensuring the relevance and appropriateness of the constructs assessed; the relevant variables have also been validated in other regions, such as Beijing, China [11]. The questionnaire was administered through face-to-face interviews with the children's parents/grandparents and covered the children's demographic variables (sex, birth year, ethnicity, type of household, respondent, birth weight, feeding method) and oral health-related behaviours (eating habits and frequency of sweets consumption, brushing frequency, toothpaste use, dental visit experience, etc.). It also assessed attitudes towards oral health (whether oral health is vital to one's life, whether healthy or unhealthy parental teeth affect those of their children, whether routine oral examinations are needed, and whether dental disorders must first be prevented by oneself), oral health knowledge (whether it is normal for gums to bleed when brushing, whether bacteria can cause gum inflammation, the role of brushing in preventing gum bleeding, whether bacteria can cause tooth decay, whether eating sugar can cause tooth decay, whether decayed deciduous teeth need treatment, whether pit-and-fissure sealing can prevent tooth decay in children, and whether fluoride toothpaste can prevent tooth decay), parents' educational background, annual family income, and related items. The oral health knowledge section consisted of eight questions, with 1 point awarded for each correct response and 0 points for an incorrect response; a total of 5 or more points (≥ 60% correct) was classified as adequate knowledge. The oral health attitude section consisted of four questions, and endorsing all four was taken as a positive oral health attitude.

Conceptual model

Following the Andersen Behavioural Model [17], predisposing, enabling and need factors for seeking health care form the basis of this study. Predisposing factors comprised clinical measures, eating/feeding factors, cleaning-related behaviours, and parental oral health knowledge and attitudes. Parents' education and annual family income were categorized as enabling factors. The need factors included dmft and dental pain experiences.
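The scoring rules for the knowledge and attitude items described under Questionnaire above can be summarised as in the short sketch below; the thresholds are those stated in the text, and the function names are illustrative.

```python
def knowledge_adequate(answers_correct):
    """Oral-health knowledge: 8 items, 1 point per correct answer, 0 otherwise.
    A total of 5 or more points (>= 60 % correct) is classified as adequate."""
    score = sum(1 for ok in answers_correct if ok)
    return score, score >= 5

def attitude_positive(attitude_items):
    """Oral-health attitude: 4 items; a positive attitude requires endorsing all four."""
    return all(attitude_items)
```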
Quality control

The 6 examiners were licensed dentists who had received standardized training and calibration from dentists with 20 years of experience prior to the study, and 5% of respondents were selected for calibration during the on-site survey. Kappa coefficients were used to assess the consistency of dmft scoring between different examiners (inter-examiner reliability) and within the same examiner at different time points (intra-examiner reliability), following the Oral Health Surveys Basic Methods guidelines [22]. Kappa values were interpreted using the categorization of Landis and Koch [23]: values of 0.61-0.80 indicate substantial agreement and values of 0.81-1.00 almost perfect agreement. Both inter- and intra-examiner reliability achieved at least substantial agreement: the kappa coefficient for inter-examiner reliability was 0.80, and the kappa coefficients for intra-examiner reliability ranged from 0.80 to 1.00.

Statistical analysis

Two trained professionals collected the data, including the oral examination and questionnaire components, and the data were double-entered into a purpose-built oral prevention data platform. Results were analysed using SPSS 24.0. To account for the complex sampling design, weighted statistical methods were employed to adjust for potential bias introduced by the multistage stratified, cluster random sampling. Comparisons between urban and rural groups, and between boys and girls, were conducted using chi-square tests and nonparametric tests, and the relevant influencing factors were analysed using logistic regression with a significance level of α = 0.05.

Caries prevalence among 5-year-old children in Guizhou Province

The caries rate of primary teeth in the 5-year-old group was 72.2%, with a statistically significant difference between participants living in rural (79.8%) and urban (67.6%) areas (χ² = 53.162, p < 0.001) and a statistically significant difference in dmft between rural (5.19) and urban (3.98) residents (t = 54.441, p < 0.001). In the 5-year-old group, the teeth with the highest caries prevalence were the primary maxillary incisors and the mandibular second primary molars (Fig. 1). Tooth positions were recorded using the Fédération Dentaire Internationale two-digit system [24].

Oral health service utilization among 5-year-old children in Guizhou Province

At age 5, the rate of dental visits for children in Guizhou Province was 20.5%. Rates were significantly higher in rural areas (21.4%) than in urban areas (20.0%) and among girls (21.5%) than among boys (19.6%) (Table 1).
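For reference, the examiner-agreement statistic used in the quality-control step above can be computed as in the sketch below; the Landis and Koch thresholds quoted there apply to the resulting value. This is an illustrative implementation, not the survey team's own code.

```python
import numpy as np

def cohens_kappa(ratings_a, ratings_b):
    """Unweighted Cohen's kappa between two examiners' categorical scores (for example,
    per-tooth caries status). kappa = (p_observed - p_expected) / (1 - p_expected)."""
    a, b = np.asarray(ratings_a), np.asarray(ratings_b)
    categories = np.union1d(a, b)
    p_observed = np.mean(a == b)
    p_expected = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (p_observed - p_expected) / (1.0 - p_expected)
```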
Oral health service utilization among children aged 5 years within 12 months in Guizhou Province

In Guizhou Province, 14.0% of 5-year-olds received oral medical care within the previous 12 months (Table 2), with no statistically significant differences between rural and urban areas or between boys and girls (χ² = 0.542, p = 0.462; χ² = 2.863, p = 0.094). The average cost of dental visits over 12 months was 633.19 Chinese Yuan and was higher in rural than in urban areas and higher for girls than for boys (Z = -3.822, p < 0.001; Z = -2.653, p = 0.008).

Factors influencing the usage of paediatric oral health services

The univariate analysis comparing children who did and did not attend paediatric oral health services (Table 3) included factors such as the children's oral health status and the sociodemographic characteristics of survey respondents. The data show that the decision to take a 5-year-old child in Guizhou Province to the dentist is related to several factors: the number of carious teeth, eating before sleep, whether the child brushes their teeth, age at which toothbrushing started, whether parents had recently helped their children brush, use of fluoride toothpaste, history of toothache within the last 12 months, parents' oral health knowledge, parents' education and annual family income. These differences were statistically significant (p < 0.05).

Logistic regression investigation of characteristics associated with children's use of oral health services

The factors identified in the univariate analysis were entered into a multifactorial logistic regression of dental visits. The results revealed that dmft ≥ 6, a history of toothache within the last 12 months, beginning toothbrushing at less than 3 years of age, and limited parental knowledge were the primary factors impacting the use of oral health services (Table 4).

Discussion

Oral health, especially in children, is currently a medical challenge, owing to the consumption of a variety of sugary foods and poor oral habits, which can have a negative impact on general and mental health [1]. A population's oral health may be worsened by low visit rates and financial burden. Our 2019-2020 study examined oral health service utilization and associated factors among 5-year-olds in Guizhou Province.

The results showed that the utilization of oral health services for children in Guizhou Province was 20.5% (with a dmft of 4.43, a caries rate of 72.2% and 98.3% of caries untreated). This may be attributable in part to the fact that the average cost of a dental visit was higher in rural areas and for girls. Moreover, logistic regression analysis showed that dental caries in ≥ 6 teeth, a history of toothache, starting toothbrushing at less than 3 years of age, and limited parental knowledge were the most important factors impacting dental visits.

A ten-year examination of Brazilian preschoolers revealed that the average dmft had decreased from 1.88 in 2006 to 0.99 in 2016 and that up to 78% of children were caries-free [25]. In Germany, the prevalence and severity of caries among 5-year-olds in 2015 were 26.2% and 0.9 ± 2.0 dmft, respectively [26]. In the present study, conducted in 2019-2020, 5-year-olds in Guizhou Province had a dmft of 4.43 and a caries rate of 72.2%. Our group's earlier study from 2015-2016 found that the prevalence of dental caries in children aged 3-5 in Guizhou was 63.1% and that the mean dmft of ECC in that age group was 3.32 [27]. These figures make clear that dental caries among 5-year-olds in Guizhou is much more prevalent than in other nations, and even greater than in the survey conducted four years earlier, and that action is needed from the local medical administration and from dentists.
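For readers wishing to reproduce a model of the kind summarised in Table 4, a sketch is given below. The variable names are hypothetical stand-ins for the analysis data set, and the original analysis was run in SPSS 24.0, so this Python version is purely illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical 0/1 and categorical columns, one row per child.
FORMULA = ("visited_dentist ~ C(dmft_ge_6) + C(toothache_12m) + C(brush_start_le_3y) "
           "+ C(parent_knowledge_adequate) + C(residence) + C(sex)")

def fit_dental_visit_model(df: pd.DataFrame) -> pd.DataFrame:
    """Multivariable logistic regression of ever-use of oral health services,
    reported as odds ratios with 95% confidence intervals."""
    fit = smf.logit(FORMULA, data=df).fit(disp=False)
    ci = fit.conf_int()
    return pd.DataFrame({
        "OR": np.exp(fit.params),
        "CI_low": np.exp(ci[0]),
        "CI_high": np.exp(ci[1]),
        "p_value": fit.pvalues,
    })
```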
According to the Chinese Fourth National Oral Epidemiological Survey, the overall dental visit rate for 5-year-olds in China is 25.4%, and the attendance rate in the previous 12 months is 19.2% [10]. In the present study, the rate of dental visits for 5-year-olds in Guizhou Province was 20.5%, and the attendance rate in the previous 12 months was 14.0%. Unlike more developed provinces (e.g., 21.5% of children in Guangdong Province and 15.63% in Zhejiang Province), Guizhou Province has a relatively low proportion of 5-year-olds who saw a dentist in the previous 12 months [28,29].

Caries is the most common oral illness in children and the leading reason for dental visits [30]. This survey showed that 98.3% of open cavities were unfilled. In addition, the cost of a dental appointment for a 5-year-old child in Guizhou Province was 633.19 Chinese Yuan, almost entirely paid out of pocket, which exceeds the national figure of 413.65 Chinese Yuan [10]. Together, these findings demonstrate that 5-year-olds in Guizhou Province make little use of oral health services and that oral diseases impose a significant economic burden.

At age 5, dental caries rates are higher among rural than urban children, so rural children ought to have a higher attendance rate. However, the difference in attendance between rural and urban children is not statistically significant, likely because of a lack of oral health care services in rural areas; consequently, some rural children may forgo consultation or treatment. Some parents choose to travel to urban areas for oral care. Considering travel time and road and hotel expenses, most rural children will have several teeth treated in a single visit or complete the entire treatment at once, resulting in a higher cost per visit. We assume this is why there is no difference in attendance between rural and urban areas but a difference in cost, although further investigation is needed.

In this study, the multifactor logistic regression revealed that dental caries in ≥ 6 teeth, a history of toothache in the past 12 months, starting toothbrushing before 3 years of age, and inadequate parental oral health knowledge were the most influential factors in the utilization of oral health services. The high dental visit rate of children with caries in ≥ 6 teeth and a history of toothache within the last 12 months suggests that the utilization of dental services by 5-year-olds in Guizhou Province continues to be demand-driven. In addition, children at this age are in the early phase of the mixed dentition, and the eruption of permanent teeth such as the first molars can be associated with dental pain [31], further increasing the need for dental consultations.
A 5-year-old child with more than six carious teeth is deemed to have severe early childhood caries [32]. It takes a long time for caries to reach a painful stage, and this study indicated that many parents take their children for treatment only once caries has become severe, implying that the actual need for dental care is considerably higher than the demand expressed as dental visits [11]. One possible explanation for the higher dental attendance of children who began brushing before the age of three parallels our group's previous finding that children who brush more frequently have higher caries rates [27]: the caries rate was higher among children who began brushing before age three (73.2%) than among those who began after age three (71.7%). It can be inferred that some children who started brushing before age three may already have developed caries before their parents introduced brushing or sought medical attention. Parents are primarily responsible for the oral health of infants and young children, and a lack of parental health knowledge is associated with poorer oral health in children and a greater likelihood of their needing medical attention [33].

Ultimately, comparing the present findings on the oral status of 5-year-olds with policies in other countries and regions [34-36], passive attendance is a key characteristic of 5-year-old children's utilization of oral health services in Guizhou Province, and a shift from a passive to an active attitude is the direction in which to work. According to the draft Global Oral Health Action Plan (2023-2030), by 2030, 75% of the global population should be covered by essential oral health care services to ensure progress towards Universal Health Coverage (UHC) for oral health [37]. UHC means that all individuals and communities have access to the essential, high-quality health services they need without experiencing financial hardship [37]. In recent years, the Lancet has published proposals for incorporating basic oral health services and essential oral care packages into universal health programmes [38]. Although children in Guizhou, China, are largely covered by basic health insurance, children's oral health care is not included. The fact that children's oral health treatment is funded entirely by parents may contribute to the low dental visit rate. Children of this age group in the region require a high level of dental care to reduce dental caries. Three suggestions can be made to improve the utilization of oral health services by preschool children in Guizhou Province. First, oral health education for pregnant women and young parents should be strengthened so that they attend to dental health and recognize the need for regular checkups. Second, the system of kindergarten-based oral health consultation services should be expanded so that oral health problems can be diagnosed and treated on campus. Third, the government should cover the cost of children's dental services under basic health insurance to reduce the financial burden of oral problems on families.
Limitations

This study has several limitations. First and foremost, the cross-sectional design precludes establishing causality between the associated factors and the utilization of oral health services. Moreover, the study was confined to 5-year-old preschoolers in Guizhou Province; this geographical and age restriction may limit the generalizability of the findings to other regions or age groups. In addition, reliance on self-reported data from parents and grandparents could introduce recall bias, affecting the accuracy of the data. Finally, the reasons why 5-year-old girls incurred higher dental visit expenditure in the past 12 months were not fully explored and warrant further research.

Conclusion

The utilization of oral health services among 5-year-old children in Guizhou Province is significantly associated with the level of importance attributed to oral health by parents. Need factors such as severe oral conditions and pain in children are the main reasons for the utilization of these services. This study underscores the urgency of actively promoting the importance of oral health and expanding insurance coverage for oral health services.

Table captions:
Table 1. Distribution of oral health service utilization and last dental visit among 5-year-olds in Guizhou Province.
Table 2. Principal reasons for 12-month oral health service utilization, average cost, and last dental visit among 5-year-olds in Guizhou Province.
Table 3. Univariate analysis of the current status of oral health service utilization among 5-year-olds in Guizhou Province.
Table 4. Logistic regression analysis of factors associated with the use of oral health services for children.
Elevated Allochthony in Stream Food Webs as a Result of Longitudinal Cumulative Effects of Forest Management

The river continuum concept (RCC) predicts a downstream shift in the reliance of aquatic consumers from terrestrial to aquatic carbon sources, but this concept has rarely been assessed with longitudinal studies. Similarly, there are no studies addressing how forestry-related disturbances to the structure of headwater food webs manifest (accumulate or dissipate) downstream and/or whether forest management alters the natural longitudinal trends predicted by the RCC. Using stable isotopes of carbon, nitrogen and hydrogen, we investigated how: 1) autochthony in macroinvertebrates and fish changes from small streams to larger downstream sites within a basin with minimal forest management (New Brunswick, Canada); 2) longitudinal trends in autochthony and food web length compare among three basins with different forest management intensity [intensive (harvest and replanting), extensive (harvest only), minimal], to detect potential cumulative or dissipative effects; and 3) forest management intensity and other catchment variables influence food web dynamics. We showed that, as predicted, the reliance of some macroinvertebrate taxa (especially collector feeders) on algae increased from small streams to downstream waters in the minimally managed basin, but that autochthony in the smallest shaded stream was higher than expected under the RCC (as high as 90% for some taxa). However, this longitudinal increase in autochthony was not observed within the extensively managed basin and was weaker within the intensively managed one, suggesting that forest management can alter food web dynamics along the river continuum. The dampening of downstream autochthony indicates that the increased allochthony observed in small streams in response to forest harvesting accumulates downstream through the river continuum.

Supplementary Information: The online version contains supplementary material available at 10.1007/s10021-021-00717-6.

INTRODUCTION

The river continuum concept (RCC) predicts that stream food webs in forested catchments follow a longitudinal (upstream-downstream) gradient from reliance on terrestrially derived food sources (for example, leaf litter) in shaded headwaters, to aquatic sources (for example, algae) in mid-reaches, to particulate sources (seston) at larger, downstream locations (Vannote and others 1980).
Considering how deeply established this conceptual framework is in aquatic ecology, the scarcity of empirical evidence testing this prediction is surprising (Rosi-Marshall and others 2016). Multiple stream metabolism studies have documented longitudinal increases in gross primary production (GPP) and autotrophy (that is, GPP > respiration) in support of the RCC (for example, Bott and others 1985; McTammany and others 2003; Finlay 2011; Kaylor and others 2019). However, food webs can be decoupled from stream metabolism, as exemplified by the heterotrophy paradox, in which decomposers may contribute substantially to the heterotrophic state of a system via respiration of detritus but only minimally to animal production, which is mostly supported by autotrophy through the algae-grazer pathway (Thorp and Delong 2002). Therefore, food web studies that specifically assess the longitudinal changes in food use predicted by the RCC are warranted.

Although longitudinal increases in consumer autochthony have been reported in some systems (Finlay 2001; Rosi-Marshall and Wallace 2002), a growing body of food web research challenges some of the RCC predictions. For example, several studies have reported a considerable contribution of autochthonous food sources to food webs in small streams (for example, Lau and others 2009; Hayden and others 2016; Rosi-Marshall and others 2016; Erdozain and others 2019; Reis and others 2020) as well as in very large rivers (Delong and Thorp 2006; Thorp and Bowes 2017). These examples indicate that high-quality food sources such as algae (Guo and others 2016) contribute disproportionately more to animal production than would be predicted from the limited algal production in small shaded streams or large turbid rivers (Marcarelli and others 2011). But with evidence also supporting the importance of terrestrial production as a key basal resource for headwater food webs (for example, Wallace and others 1997; Reid and others 2008), debate over the relative importance and longitudinal patterns of the two sources continues (Brett and others 2017). It is also unclear to what degree land use changes contribute to differences in the use of autochthonous food sources along a longitudinal gradient.

Anthropogenic catchment disturbances can alter stream food web dynamics by influencing resource availability and/or community structure, and this could have disproportionate effects downstream. Forest harvesting has been linked to increased algal production and autochthony in small streams owing to elevated delivery of nutrients and/or increased light availability (Rounick and others 1982; England and Rosemond 2004; Göthe and others 2009; but see Ishikawa and others 2016). But when riparian buffers are retained, as stipulated by management practices in most North American jurisdictions (Schilling 2009; McDermott and others 2010), a decreased reliance on algae has been documented in small streams, likely due to elevated delivery of terrestrial materials such as sediments or dissolved organic carbon (DOC; Jonsson and others 2018; Erdozain and others 2019). These changes in the headwaters may have subsequent impacts downstream given the hydrological connectivity of fluvial systems. More specifically, the increased algal production in small streams resulting from canopy removal could either dissipate (Finlay 2011) or disproportionately affect productivity downstream (Koenig and others 2019), potentially resulting in little or positive longitudinal change in autochthony.
In contrast, the accumulation of sediments and the decrease in nutrients and in the autotrophic index of biofilms observed downstream in harvested catchments (Erdozain and others 2021a, 2021b) could lead to a longitudinal decrease in autochthony. Yet, to our knowledge, there are no studies addressing how forestry-related disturbances to headwaters manifest (accumulate or dissipate) downstream and/or whether forest management alters natural longitudinal trends such as the increase in autochthony predicted by the RCC. Considering the superior nutritional quality of algae, a decrease in its assimilation may result in less efficient energy transfer to upper trophic levels (Brett and others 2017; Guo and others 2017), with potential implications for food web length (FWL) and macroinvertebrate and fish production (Finlay 2011; Kaylor and Warren 2017; Saunders and others 2018).

In this study, we investigated how food web structure (using C, H and N isotopes) changed along the river network within three basins differing in forest management intensity in New Brunswick (Canada) at a time of year when maximum autochthony would be likely. The objectives were to assess how: 1) autochthony and FWL change from small streams to downstream waters within a basin with low forest management (minimal basin), to test the predictions of the RCC; 2) longitudinal trends in these measures compare among basins with different forest management intensity (intensive, which includes replanting after harvesting; extensive, harvesting only; minimal; more details below), to detect potential cumulative or dissipative effects; and 3) forest management intensity and other catchment variables influence food web dynamics across this spatial scale. We predicted that 1) autochthony and FWL would increase downstream as predicted by the RCC, but that 2) the increase would be less pronounced in the intensively and extensively managed basins because 3) autochthony and FWL would be negatively affected by the elevated delivery of terrestrial materials resulting from increased forest management intensity.

Study Area

The study was conducted in three basins, each established in an area of differing forest management, in northern New Brunswick (NB, Canada) (Figure S1). The basin representing minimal management (NBR hereafter) is a designated Watershed Protected Area of the Government of New Brunswick because it supplies municipal drinking water and is therefore subject to stricter forest management guidelines (for example, wider riparian buffers and smaller cut blocks) (Government of New Brunswick 2020). The basin representing intensive forest management (NBI hereafter) is located in the Black Brook forestry district (privately owned and operated by J.D. Irving, Inc.). It is considered one of the most intensively managed forests in the country (Etheridge and others 2005) and implements artificial regeneration and various stand-improvement interventions to maximize yield. The third basin represents a more extensive type of forest management (NBE), as forests are left to regenerate naturally after harvesting, resulting in less intervention and longer rotation cycles. It was not possible to find a reference basin of similar size to the other two without any forest management. However, total disturbance (% of area with clearcut, partial harvest and replanting; more details in the SI) from forestry was lowest in NBR (7.3% of the basin harvested in the 10 years prior to sampling), followed by NBE (12.7%) and NBI (23.0%).
Site characteristics are shown in Table S1, and a more detailed characterization of the study areas is given in Erdozain and others (2021a). Within each of the three basins, six stream sites were selected to represent an upstream-downstream gradient (stream orders 1-5). Because the contributing catchment area increases along this gradient, drainage area was used to quantify the upstream-downstream position. Not all six sites were located along the same flowpath because of access constraints (Figure S1); however, we assumed that the same longitudinal processes operated along different flowpaths within the same basin. The watershed of each site was delineated and characterized, yielding 18 sub-catchments that ranged in drainage area (0.7-233.5 km²), harvest intensity (0-23% of the catchment harvested in the 10 years prior to sampling), road density (1.30-3.58 km/km²), and forest structure (6-16 m average height) and composition (38-89% deciduous cover) (Table S1). The stream sites consequently ranged in water chemistry (for example, 0.6-8.0 ppm DOC), dissolved organic matter quality (for example, humification index 1.9-14.3), sediment deposition (for example, 0.1-1.9 g fine inorganic sediments) and water temperature (for example, 8.8-13.3 °C in September), as explored in Erdozain and others (2021a) and available at https://doi.org/10.5683/SP2/B2URHU.

Sample Collection

Food resources and macroinvertebrates were collected along a 100-m stream reach in September 2017 to coincide with the timing of natural leaf fall. Given the turnover time of consumer tissues, this timing likely reflected the summertime incorporation of food resources into the food webs, that is, the period when maximum autochthony would be expected. Coarse particulate organic matter (CPOM) was sampled by collecting conditioned leaves from in-stream leaf accumulations, benthic fine particulate organic matter (FPOM) by suctioning the top centimetre of substrate from depositional areas along the reach, and biofilm by scraping the surface of rocks and washing the slurry with stream water into bags (n = 3 per site). Macroinvertebrates were collected by electroshocking and catching the drifting invertebrates with 363-µm mesh drift nets. Additionally, rocks and leaves were inspected to collect invertebrates that are less likely to enter the drift (for example, Glossosoma). All invertebrates were live-sorted to the lowest possible taxonomic level in the field, stored in bags partially filled with stream water and kept in the dark on ice. All samples were frozen the same day until further analysis in the laboratory. Macroinvertebrates were not left to clear their guts overnight because our previous study found no effect of gut contents on the isotopic composition of similar taxa from the NBI watershed (Erdozain and others 2019). Slimy sculpin were collected about two weeks later from the same reaches and transported to the laboratory in aerated stream water. After length and body weight were measured, fish were euthanized by cervical dislocation and frozen following the protocol approved by the UNB Animal Care Committee. No sculpin were caught at the smallest (most upstream) site in NBI (NBI6). Water samples for H isotope analysis were collected along the reach, filtered through a 0.2-µm PES filter and kept cold and in the dark (3 subsamples per stream).
Carbon, nitrogen and hydrogen stable isotope ratios in food sources and consumers were measured at the Stable Isotopes in Nature Laboratory (SINLab; Fredericton, New Brunswick, Canada). The analytical precision of internal standards was ± 0.06‰, 0.15‰ and 2.50‰, and duplicates within runs yielded average differences of 0.14‰, 0.12‰ and 4.3‰ (n = 28) for carbon, nitrogen and hydrogen, respectively. Water samples were analysed for H isotope ratios at the Colorado Plateau Stable Isotope Laboratory (Flagstaff, Arizona, USA); the analytical precision of internal water standards was ± 0.17‰ on average. Stable isotope measurements are expressed in delta (δ) notation in parts per thousand (‰) relative to the international standards Vienna PeeDee Belemnite for C, air for N, and Vienna Standard Mean Ocean Water for H, according to the equation:

δX = (R_sample / R_standard − 1) × 1000

where X is 13C, 15N or 2H, and R is the corresponding 13C/12C, 15N/14N or 2H/1H ratio.

Mixing Models

The relative contribution of food sources to the diets of macroinvertebrates and sculpin was estimated using a Bayesian two-isotope (δ13C and δ2H), two-source (algae, the aquatic source/autochthony; CPOM, the terrestrial source/allochthony) mixing model with MixSIAR (Stock and Semmens 2016) in R 3.6.1 (R Core Team 2019). Separate mixing models were run for primary consumers (genus included as a fixed factor), predatory macroinvertebrates (genus included as a fixed factor) and sculpin within each site. Convergence of the models on the posterior distributions was confirmed with the Gelman-Rubin and Geweke diagnostics in MixSIAR before the results were accepted. After visualizing the data and prior to running the mixing models, several best practices were followed to ensure reliable and informative mixing model solutions (Phillips and others 2014). Details of the specific adjustments, along with biplots, are given in Appendix S1 and include: 1) why biofilm samples were not a good representative of the aquatic food source and how algal isotope values were estimated to overcome this limitation; 2) the selection of only one (CPOM) of two terrestrial food sources for the mixing models; and 3) the selection of fractionation factors and the adjustment for environmental water contributions. To complement and confirm the mixing model results, we also conducted simple regression analyses that did not rely on the assumptions made for the mixing models (see below).

Food Web Length

The predictable increase in δ15N with each step in a food web has been used to estimate food web length in aquatic ecosystems; here, FWL was estimated from the range in δ15N values among food web members (see Statistical Analysis).

Catchment Explanatory Variables

Explanatory catchment variables describing the intensity of forest management (harvesting and roads), landscape characteristics (for example, drainage density, slope, wetness) and catchment forest condition (structure and composition) were calculated using provincial and J.D. Irving GIS data. Details on how these variables were calculated can be found in Appendix S1 and in Erdozain and others (2021a).

Statistical Analysis

Differences in autochthony (that is, the % algal contribution calculated with the mixing models) among basins and taxa were assessed with two-way analysis of variance (ANOVA) followed by Tukey's post hoc tests.
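Before turning to the regression analyses, the two-source, two-isotope mixing estimate described under Mixing Models can be sketched in simplified form. The study itself used the Bayesian MixSIAR package in R, which also propagates source and fractionation uncertainty; the least-squares point estimate below is only illustrative and assumes the inputs have already been corrected for trophic fractionation and environmental water contributions.

```python
import numpy as np

def delta(r_sample, r_standard):
    """Delta notation in per mil: d = (R_sample / R_standard - 1) * 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

def two_source_autochthony(consumer, algae, cpom):
    """Point estimate of the algal (autochthonous) diet fraction from d13C and d2H.

    consumer, algae, cpom: (d13C, d2H) pairs. Solves the two-isotope, two-source
    mixing system with a sum-to-one constraint by least squares.
    """
    sources = np.column_stack([algae, cpom])             # 2 isotopes x 2 sources
    a = np.vstack([sources, np.ones(2)])                  # add f_algae + f_cpom = 1
    b = np.append(np.asarray(consumer, dtype=float), 1.0)
    f, *_ = np.linalg.lstsq(a, b, rcond=None)
    return float(np.clip(f[0], 0.0, 1.0))
```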
Relationships between autochthony and catchment explanatory variables were quantified by means of regression analyses that included basin type (intensive, extensive, minimal) and taxon as covariates to detect potential basin- and/or taxon-dependent relationships [Autochthony = Catchment variable × Basin type × Taxon]. Type II ANOVAs (car package) were used to test the significance of each variable and interaction term in the model. Regression models and potential interactions were visualized by plotting the relationship between autochthony and each explanatory variable for each basin and taxon separately. When significant interaction terms were detected, linear regressions were run separately for each basin and/or taxon to quantify and compare the response-explanatory variable associations among forest management types and/or taxa. A similar analysis to the one described for autochthony was performed for FWL estimates (δ¹⁵N range) but without taxon as a covariate in the models. The plots and regression model results with the natural logarithm of drainage area as the explanatory catchment variable were used to examine whether: 1) autochthony or FWL showed longitudinal trends from small streams to downstream waters in the minimally managed basin, to test the RCC, and 2) longitudinal trends varied among basins (that is, a significant drainage area × basin interaction). For significant interactions, cumulative or dissipatory effects were inferred when forest management-related differences among basins increased or decreased longitudinally, respectively, in NBI or NBE relative to NBR. Additionally, a complementary approach was used to detect cumulative/dissipative effects that was free from the assumptions made for the mixing models (see Mixing Models section). Regression analyses were performed between raw isotope data and drainage area, and the slopes for consumers and food sources were compared (that is, the consumer/food source × drainage area interaction was tested). A significant interaction term between a particular consumer and a food source was interpreted as a longitudinal shift in diet, and the direction of the shift was determined by the sign of the slope (+ or − indicating an increasingly terrestrial or aquatic contribution, respectively) (Figure 1). The relatively constant δ¹³C and δ²H values for food sources among sites facilitated this approach. Regression analyses were done using CPOM as the reference food source to avoid the assumptions that had to be made when calculating algal values (see Mixing Models section), but calculated algal isotope values were also plotted to determine whether consumer slopes represented longitudinal changes in diet or longitudinal changes in algal δ¹³C. Alpha was set to 0.10 to compensate for the low sample size. Statistical analyses were performed in R 3.6.1 (R Core Team 2019).

Autochthony significantly increased with drainage area for taxa from NBR (Table 1). Within taxa, the longitudinal increase in autochthony was clearest for Baetis, Ephemerella and Hydropsychidae, with the first two showing a significant increase (Table 1) and the latter two showing the steepest slopes (see first column in Figure 3).
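A minimal sketch of the interaction models and Type II tests described in the Statistical Analysis section above is given below. The data frame is a simulated stand-in (column names and values are not the study data), and the same structure extends to the consumer-versus-CPOM slope comparisons on raw δ¹³C.

```r
# Sketch of Autochthony ~ Catchment variable x Basin x Taxon with Type II ANOVA
# (car package), using simulated placeholder data rather than the study dataset.
library(car)

set.seed(1)
dat <- expand.grid(site  = 1:6,
                   basin = c("NBR", "NBE", "NBI"),
                   taxon = c("Baetis", "Leuctra"))
dat$drainage_km2 <- rep(exp(seq(log(0.7), log(233.5), length.out = 6)), times = 6)
dat$autochthony  <- 0.4 + 0.05 * log(dat$drainage_km2) + rnorm(nrow(dat), 0, 0.05)

mod <- lm(autochthony ~ log(drainage_km2) * basin * taxon, data = dat)
Anova(mod, type = "II")   # significance of each term and interaction

# If a drainage x basin interaction is detected, basin-specific slopes quantify
# how the longitudinal trend differs among forest-management types:
by(dat, dat$basin, function(d) coef(lm(autochthony ~ log(drainage_km2), data = d)))

# The consumer/food-source comparison applies the same idea to raw d13C values,
# testing a group x drainage interaction with CPOM as the reference source.
```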
Comparing Autochthony and FWL Among River Networks with Varying Forest Management Intensity

Mixing models indicated that overall mean autochthony significantly differed among basins (F2,148 = 10, p < 0.001) and was 14.2% greater at NBE than at NBI (p < 0.001) and 8% greater at NBE than at NBR (p = 0.03) when all taxa were pooled (Figure S5). Regarding longitudinal trends, the relationship between autochthony and drainage area was basin (management type) and taxon dependent (Table 2). Within basins and for all taxa combined, the longitudinal increase in autochthony observed within NBR was also observed within NBI but not within NBE. However, the downstream increase in autochthony was greater within NBR (autochthony was 27% greater downstream than upstream) than within NBI (18% greater) and NBE (15% greater). Within taxa, the interaction between drainage area and basin was significant for Ephemerella, with autochthony being 50% greater downstream than upstream within NBR, 24% greater within NBI and 8% greater within NBE (Figure 3). Autochthony in sculpin was 17% greater downstream than upstream within NBR, but 22% lower within NBI (that is, autochthony decreased longitudinally). When using raw δ¹³C values to assess longitudinal changes in food use, different spatial trends were also observed across basins (Figure 4, Table 3). The slopes of the relationships between δ¹³C values and drainage area differed between consumers and CPOM (that is, significant interaction) for several taxa, suggesting longitudinal changes in diet. At NBR, the interaction was significant for Ephemerella and Hydropsychidae: unlike δ¹³C in CPOM (and in calculated algae), consumer δ¹³C decreased longitudinally, suggesting a longitudinal decrease in terrestrial C reliance. At NBE, the opposite trend was observed for Ephemerella and Pteronarcys, as their δ¹³C values increased longitudinally, becoming more similar to those of CPOM and suggesting a longitudinal increase in terrestrial C reliance. At NBI, the δ¹³C values of Heptageniidae, Baetis and sculpin (significant interaction) became more positive along the gradient, indicating a greater reliance on terrestrial carbon downstream. The slope of the relationship between δ²H values and drainage area also differed between some consumers and food sources (Figure 5, Table S3).

Figure 3. Linear relationship between autochthony (y-axis) in 8 invertebrate taxa and sculpin classified according to their functional feeding group (rows) and the logarithm of drainage area (x-axis) in three basins differing in forest management intensity (columns). Algal contribution was calculated using a Bayesian 2-isotope (δ¹³C and δ²H), 2-source (algae-autochthony and CPOM-allochthony) mixing model with six sites per basin.

FWL was unrelated to drainage area (Figure 6a); however, there was a nonsignificant negative trend at NBE and NBI, where FWL was shorter at the most downstream than at the most upstream site, which was not observed in NBR (Figure 6b).

The Effect of Forest Management and Other Catchment Variables

Of the other catchment variables examined herein, autochthony of the consumers was significantly related to DTW < 0.1 m (that is, the proportion of the catchment with depth-to-water values lower than 0.1 m, wet areas) and to % clearcut, but the latter relationship was basin and taxon dependent (Table 4).
When all taxa were pooled, a significant decrease in autochthony with increasing % clearcut in NBR, a significant taxon-dependent increase in NBI and no relationship in NBE were found (Figure 7, Table S4). Heptageniidae was the only taxon that showed a consistent decrease in autochthony with clearcut across basins, but for several other taxa the relationship was basin dependent (Table S4). As examples, autochthony in Dolophilodes decreased with clearcut within NBR and NBE, but not within NBI, and autochthony in Perlodidae decreased with clearcut within NBR, but increased within NBI. Regarding DTW below 0.1 m, autochthony was significantly and positively related to this variable only in NBR (Figure S6). FWL was also related to catchment variables, but these relationships also varied among basins. Within NBE, FWL significantly decreased with crossing density and total disturbance (Table S5). Within NBI, FWL significantly decreased with road density and increased with deciduous cover.

Testing the RCC

The RCC predicts a downstream increase in autochthony in forested catchments (Vannote and others 1980), a prediction that we tested using the longitudinal trends within a basin with minimal forest management (NBR). In this basin, we found a significant increase in consumer autochthony with drainage area, consistent with the RCC, but this trend was taxon (and FFG) specific, as shown by others (Finlay 2001; Rosi-Marshall and Wallace 2002). Herein, the clearest and most consistent increases in autochthony were found in the collector-gatherers Baetis and Ephemerella and the collector-filterer Hydropsychidae. The wide range in their autochthony from upstream to downstream was notable (that is, 33% to 95% for Hydropsychidae, 45% to 95% for Ephemerella) and makes sense considering they are collectors and, thus, may better represent overall food availability compared to other, more selective FFGs (for example, grazers). Our results match those of Finlay (2001), in which collectors and filterers showed the clearest longitudinal shifts from terrestrial to aquatic sources, but they contrast with studies that did not find longitudinal shifts in autochthony for these FFGs (Hayden and others 2016; Jonsson and others 2018). In the current study, the shredder Leuctra showed consistently low autochthony (~29%) along the gradient in our sites, as described by others (Finlay 2001; Hayden and others 2016), but autochthony in the facultative shredder Pteronarcys increased from 14% in the smallest stream to 95% in the largest, supporting that this genus is able to adapt to changes in resource availability (Plague and others 1998; Rosi-Marshall and others 2016). In addition, in our study there was also little evidence for longitudinal trends in the autochthony of predators along NBR: we detected an increase in autochthony for Perlodidae (only with δ²H) but no changes for Sweltsa or sculpin, which could be mostly feeding on a taxon that we did not collect (for example, Chironomidae; Arciszewski and others 2015).

Figure 4. Linear relationship between δ¹³C and drainage area in stream consumers (black line; 8 invertebrate taxa and sculpin classified according to their functional feeding group (rows)) and food sources (terrestrial-green and aquatic-blue) in three basins differing in forest management intensity (columns). Six sites per basin were sampled.
Overall, results show that predators were not selecting prey based on their degree of autochthony, as seen elsewhere (Lau and others 2014). Autochthony levels of taxa in the small shaded streams within the basin with minimal harvesting were higher than would be expected based on the RCC. All taxa in the smallest NBR stream had autochthony values greater than 25% and as high as 92% for Heptageniidae, 88% for Glossosoma scrapers, and 69% for Perlodidae predators. Although these values should be considered with caution due to the assumptions that had to be made prior to running mixing models, they are consistent with other studies reporting high levels of autochthony in biota in small streams (for example, Lau and others 2009; Rosi-Marshall and others 2016; Hayden and others 2016; Erdozain and others 2019; Reis and others 2020) and suggest that high-quality food sources such as algae (Guo and others 2016) contribute more to animals than would be predicted based on the limited algae available. For this reason, both resource quantity and quality need to be considered to understand food web dynamics along fluvial systems (Marcarelli and others 2011). However, it is important to consider tissue turnover times and note that the timing of our sampling likely represented the maximum autochthonous resource incorporation; therefore, autochthony estimates would probably be lower later in the fall or winter months (Junker and Cross 2014). In addition, the distribution of FFGs herein did not match the RCC prediction of shredders dominating small, shaded streams: only 17.5% of the macroinvertebrates in the smallest stream were shredders, and some downstream sites had higher percentages (Erdozain and others 2021b), supporting claims that the distribution of FFGs along the river continuum is not a reliable indicator of resources consumed (Rosi-Marshall and others 2016). This discrepancy in longitudinal trends between autochthony and community composition may be linked to the dietary plasticity of some taxa along this continuum.

The effect of forest management on food web structure

The longitudinal increase in autochthony for all consumers combined within the minimally managed basin was not found within the extensively managed basin and was weaker within the intensively managed one, suggesting that forest management affects some of the predictions made by the RCC (Vannote and others 1980). In fact, when comparing consumer δ¹³C to that of food sources along the gradient, some taxa showed a longitudinal decrease in aquatic C reliance within NBE (Ephemerella and Pteronarcys) and NBI (Heptageniidae and sculpin). These differences in the basins with greater forest management could result from a lower downstream availability of autochthonous food sources. Several abiotic and biotic indicators measured at these sites (for example, temperature, sediments, biofilm composition, DOC, nutrients; Erdozain and others 2021a, 2021b) support this hypothesis: 1) GPP (Saunders and others 2018; Kaylor and others 2019) and autochthony (Junker and Cross 2014) are controlled by water temperature, and this measure increased downstream in NBR, not at all in NBE and only weakly in NBI, mirroring the trends in autochthony reported herein; 2) the downstream increase in inorganic sediments was greatest at NBE, and this could have reduced the availability of autochthonous food sources downstream. Collectively, the results suggest that the downstream increase in autochthony is dampened within the basins with more harvesting.
A loss of longitudinal trends related to catchment disturbance was also reported for primary production (Finlay 2011). Such a decrease in trophic diversity at the basin scale could have cascading ecological effects, and thus additional examination of whether catchment disturbance diminishes trophic diversity along the river continuum, as well as its ecological implications, is recommended. Additionally, because the sampling was done at a time likely representing the maximum reliance on autochthonous food sources, studies investigating these questions during different seasons are recommended. The negative effect of forest management on consumer autochthony was further supported by the negative relationship between autochthony and % clearcut detected herein. Increased clearcut intensity in these basins led to higher DOC concentrations of a more terrestrial origin as well as lower algal biomass on rocks (Erdozain and others 2021a, 2021b), explaining the negative effect of % clearcut on autochthony in these taxa. Our results contrast with studies reporting positive effects of harvesting on consumer autochthony at sites with no riparian buffers (Rounick and others 1982; England and Rosemond 2004; Göthe and others 2009) but concur with others that also detected negative effects of forestry on organism autochthony in the presence of buffers and consequent shading (Jonsson and others 2018; Erdozain and others 2019). Similarly, the negative relationship between consumer autochthony and % clearcut but not % partial harvest (strongly related to total disturbance herein) in the current study suggests that the complete removal of trees had a greater effect than partial harvest on food web dynamics, as was shown for DOC concentrations (Kreutzweiser and others 2008; Erdozain and others 2018) or sediment transport (Croke and Hairsine 2006). This could explain why the attenuation of longitudinal trends in autochthony was greater in NBE than NBI, as: 1) most of the harvest is partial rather than clearcut in NBI (Erdozain and others 2021a); and 2) the enhanced post-harvest regeneration practices (for example, planting, herbicides) applied in NBI speed up the recovery of the forest. Similarly, the very low % clearcut values and regeneration practices in NBI could explain why the negative effects of clearcut on consumer autochthony were not detected in this basin.

Figure 7. Linear relationship between autochthony (y-axis) in 8 invertebrate taxa and sculpin classified according to their functional feeding group (rows) and clearcut intensity between 2008 and 2017 (x-axis) in three basins differing in forest management intensity (columns). Six sites per basin were sampled.

C versus H Isotopes

Stable isotope ratios of carbon (δ¹³C) have been widely used as a tracer of the energy base supporting stream food webs. However, under certain conditions, aquatic and terrestrial food sources can overlap in their δ¹³C values (Finlay 2001), limiting the effectiveness of this tool in food web studies. Recently, stable isotope ratios of hydrogen (δ²H) have gained attention as a complement to δ¹³C due to the large difference in δ²H between aquatic and terrestrial food sources (Doucett and others 2007; Solomon and others 2009; Cole and others 2011). However, although both diet tracers would be expected to yield similar results, that was not the case in the present study nor in a review that found a surprising lack of correlation between allochthony estimates based on δ¹³C and δ²H (Brett and others 2018).
Herein, autochthony estimates for NBE consumers were considerably lower based on δ¹³C than on δ²H (see Figures 5 and 6), and we detected a longitudinal decrease in autochthony for some taxa in NBE and NBI using δ¹³C but an increase using δ²H. This lack of congruence has direct implications for the conclusions drawn and suggests that one tracer may be more reliable than the other in such studies. Since uncertainty remains around some key and influential assumptions for δ²H, such as environmental water contributions, fractionation, routing or lipid extraction (Vander Zanden and others 2016; Newsome and others 2017; Brett and others 2018), we have put more weight on the δ¹³C results herein.

CONCLUSIONS

We showed that the reliance of some macroinvertebrate taxa (especially collector feeders) on algae increased from small streams to downstream waters in the basin with minimal forest management, as predicted by the river continuum concept (RCC). However, the basin with extensive forest management did not show the same longitudinal increase in consumer autochthony and the basin with intensive forest management showed a weaker increase, suggesting that forest management alters food web dynamics along the river continuum. This deviation from the RCC was most likely due to a greater delivery of terrestrial materials (DOC, sediments) as well as differences in the longitudinal trends in water temperature observed at these more impacted sites. Finally, our results indicate that the increased allochthony observed in aquatic biota from small streams with forest harvesting also manifests downstream in a cumulative manner.
Effectiveness of protected areas in conserving tropical forest birds

Protected areas (PAs) are the cornerstones of global biodiversity conservation efforts, but to fulfil this role they must be effective at conserving the ecosystems and species that occur within their boundaries. Adequate monitoring datasets that allow comparing biodiversity between protected and unprotected sites are lacking in tropical regions. Here we use the largest citizen science biodiversity dataset – eBird – to quantify the extent to which protected areas in eight tropical forest biodiversity hotspots are effective at retaining bird diversity. We find generally positive effects of protection on the diversity of bird species that are forest-dependent, endemic to the hotspots, or threatened or Near Threatened, but not on overall bird species richness. Furthermore, we show that in most of the hotspots examined this benefit is driven by protected areas preventing both forest loss and degradation. Our results provide evidence that, on average, protected areas contribute measurably to conserving bird species in some of the world's most diverse and threatened terrestrial ecosystems.

Reviewer #1 (Remarks to the Author): Thank you very much for the opportunity to review manuscript 252244_0, "Effectiveness of protected areas in conserving tropical forest birds". This is a great and timely paper, providing much needed insights into the effectiveness of protected areas. I was impressed by the effort included in this study and would also like to commend the authors' effort on both the paper and the supplementary information describing their methodology. It clearly shows that a lot of effort went into this paper and making sure it's accessible to readers. Very well done!
I think this paper represents a valuable contribution to the literature and would fit very well with the journal. All data sets used and analytical descriptions make sense to me and the conclusions are well justified given the approach taken and data used. The paper flows logically and the text is well complemented by the figures included in the main text. Supplementary text, figures and tables are well done as well and are of great use to readers that are interested in additional details to the main text.

-> We genuinely thank Reviewer 1 for his positive comments.

When reading the manuscript, I thought about possible explanations of varying patterns across regions and a potential link to statistical power/sample size. It is my view that the authors have accounted for this as much as possible and in my opinion the mention of this as a caveat in the main text is sufficient to point this out (line 244 on). Reviewing the paper, this was the only potential issue that I could see, but the authors have proactively addressed this, which helps the transparency of this study. Readers can draw their own conclusions, but for me, having worked with eBird data extensively, the authors have made every effort to account for bird data issues and I think this is as far as one can realistically go in trying to explain differences between and within regions.

-> Indeed, we too spent some time thinking about explanations for this variation across regions. We did not have much space to discuss it in the main text, so we explored it more deeply in the Supplementary Discussion (especially whether heterogeneity in the results comes from sampling effort differences, ecological differences, or conservation differences). To make sure the reader does not miss this additional discussion, we now point to it explicitly in the main text ("see Supplementary Discussion for further discussion on heterogeneity in the results").

Two minor comments:

1. It would have been really helpful in the review process to have a look at the R code as well. I know that the authors have included the following code availability statement, but I think it would make sense for reviewers to be able to reproduce the analysis during their review: All R scripts will be deposited on an open repository after revision.

-> Sincere apologies for not having provided the scripts earlier. They are now attached to this submission.

2. The acknowledgements and author contributions sections should be removed for blinding the manuscript.

-> We have removed this part from the main text and moved it to the first page of the submission.

Best regards,
Richard Schuster, Research Associate, Carleton University, Email: [email protected]

Reviewer #2 (Remarks to the Author): I found the MS entitled "Effectiveness of protected areas in conserving tropical forest birds" to be highly interesting, important, innovative, and well written. I have but few suggestions that could potentially help cement the authors' claims regarding the effectiveness of PAs in protecting vulnerable bird species.

-> We thank Reviewer 2 for their positive comments and constructive suggestions.

Major comments

The authors claim that PAs were established in more remote and less desirable locations to begin with. However, there is also a possibility that these regions were specifically chosen to protect at-risk (bird) species. I think that this option should at least be raised. Furthermore, there could potentially be some more tests to try to disentangle this chicken and egg conundrum.
-> Chicken and egg conundrum is a very good description. It is well established that PA location is biased towards more remote and less desirable locations (e.g., high altitude, low productivity areas; e.g., Joppa and Pfaff 2009 in Plos One; Venter et al. 2018 in Conservation Biology), which may create a bias against high biodiversity areas. In parallel, PAs are often established in (some of) the best remaining habitat patches, which may create a bias towards high biodiversity areas. The two are not necessarily contradictory: the best remaining habitat patches at the time of PA creation may be in remote locations that were not particularly biodiversity rich to start with. We have spent a good deal of time thinking about how to separate the two effects, which we discuss in detail in Supplementary Methods 4D. As explained, we believe we have adequately controlled for biases in PA location. We now refer explicitly to this in the main text to ensure readers do not miss it.

First, I think that the year PAs were established should be used as another predictor in some of the tests, to see if PAs established earlier are doing better than those more recently established. These data should be available from the UNEP-WCMC database.

-> This is a very good suggestion, even though we note two caveats. First, while the WDPA indeed includes a date for each PA (field status_yr), it does not necessarily correspond to the year that a given territory was first protected, but to the year of establishment of the current PA (e.g., if a Game Reserve designated in 1990 changed status to National Park in 2005, the status year for the National Park designation will be 2005 and the earlier Game Reserve will no longer be in the WDPA), which may mask a potential increase in PA effectiveness with PA age. Second, whereas earlier PAs were more frequently established to protect scenic landscapes or particular resources (e.g. game), recent decisions on PA location are more likely to have incorporated better data on the distribution of, and threats to, biodiversity, including a stronger focus on threatened species (not least because much of those data are themselves quite recent), and so PAs are not necessarily expected to have had less impact over time. With this in mind, we agree that it is interesting to test whether there is a relationship between PA age and effectiveness, given that a negative trend (i.e., if older PAs have higher effectiveness) would reinforce our assumption that the differences in terms of bird biodiversity measured in our study can be interpreted as measures of PA effectiveness. We have thus modelled, for each model used in Analysis I (8 hotspots × 4 bird indices), the link between residuals (i.e., the remaining difference in bird diversity indices that is not explained by protection, duration, expertise, latitude, longitude, remoteness, altitude, agricultural suitability) and PA status year. Most (19 out of 32) of these relationships are not significant, but those that are mainly go in the expected direction. Given space constraints and the nuances needed to interpret this result (the two caveats above), we have opted to present it in supplementary materials rather than in the main text. We have added a phrase in the main text to point to this additional analysis: "We have also found that older protected areas tend to be more effective (in terms of conserving bird diversity; analysis I), consistent with a cumulative implementation effect of protected areas (Supplementary Methods 4E, Extended Fig. 18)."
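A rough sketch of the residuals-versus-establishment-year check described in this response is shown below. The data are simulated placeholders (not eBird or WDPA records), and the variable names are illustrative only.

```r
# Simulated stand-in for the post-hoc check: regress model residuals for
# protected checklists against the WDPA establishment year (status_yr).
set.seed(42)
pa_dat <- data.frame(
  status_yr = sample(1960:2015, 200, replace = TRUE),
  resid_div = rnorm(200)          # placeholder for residual bird-diversity values
)
age_check <- lm(resid_div ~ status_yr, data = pa_dat)
summary(age_check)$coefficients   # a negative slope would suggest older PAs retain more diversity
```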
Furthermore, beyond the metrics of forest quality explored here, I would suggest looking at the Hansen forest cover-change layer (https://earthenginepartners.appspot.com/science-2013-globalforest), which gives nearly 20 years of spatial data regarding forest loss. There are similar time-series derived spatial datasets spanning even longer periods. Here too the dynamics of the forest cover/loss are important for linking longer-term biodiversity phenomena.

-> This is a very useful suggestion and we thank the reviewer for it. We have followed this proposal by adding a new analysis that investigates how deforestation rates differ inside protected areas versus counterfactual unprotected sites. Our finding that on average PAs experienced 46.7% lower deforestation rates (Extended Table 1) is consistent with the interpretation that our measure of effectiveness reflects recent trends in habitat rather than only biases in the original protected-area locations (a schematic example of such a comparison is sketched below).

Put together, placing your results within such an (also) temporal perspective should give them even stronger foundations.

-> These two analyses have indeed given stronger foundations to our results!

Minor comments

Could you provide in the supp. the power of your different tests for different groups of species, or regions. This could help compare significant / non-significant results across your many tests with different sample sizes.

-> We have added a table (Extended Table 3) with the number of checklists (so the number of samples used in each test), P-values, and R-squared to give more insights on statistical power.

Could you please also provide a supp. with the lists of species belonging to different categories in different regions, this should help reproducibility of your results, and further explorations of them.

-> This table has been added as a spreadsheet in this current submission. Thank you for this great suggestion.

Please note that the Simard et al. 2011 data is limited to a max canopy height of 40m. I suggest at least mentioning this.

-> Indeed, we forgot to specify this. We have now added this information in the Supplementary Methods: "limited to maximum canopy heights of 40m".

Reviewers' Comments: Reviewer #2 (Remarks to the Author): Overall I think this MS is improved and should be ready for publication following minor changes/additions.

-> We would like to thank Reviewer 2 for their careful look at our manuscript revision.
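The deforestation comparison mentioned in the authors' response above could, in outline, look like the sketch below. The numbers are simulated, and the site structure, variable names and test choice are assumptions rather than the authors' actual workflow with the Global Forest Change data.

```r
# Schematic protected-vs-counterfactual comparison of annual forest-loss rates
# (all values simulated; not derived from the Hansen Global Forest Change data).
set.seed(7)
sites <- data.frame(
  protected = rep(c(TRUE, FALSE), each = 150),
  loss_rate = c(rbeta(150, 1, 40),    # protected sites: lower simulated loss
                rbeta(150, 2, 40))    # counterfactual unprotected sites
)
aggregate(loss_rate ~ protected, data = sites, FUN = mean)
wilcox.test(loss_rate ~ protected, data = sites)   # unpaired nonparametric comparison
```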
I think it is worth mentioning somewhere in your main text clearly that your measures of bird diversity which are not species richness are corrected for species richness, i.e. values for forest specialists, endemics, or threatened (and NT) species above and beyond what is expected by richness. In most cases richness is a linear positive predictor of the other measures.

-> We agree with the reviewer and have highlighted this in the main text: "Indeed, controlling for overall richness, we find for each of these three groups significant positive effects of protected areas across hotspots".

Font in lines 407-408 should be regular text font and not notation font.

-> We have applied this change.

I'd suggest including all the references mentioned in the supp, in the supp reference list (even if they appear in the main text) to aid the readers.

-> We now provide a list of references at the end of the single PDF supplementary file, with all references used across the SI.

Did you try other values for your smoothing parameter (k) other than 4 in your GAM models, to see if the patterns remain?

-> Patterns are robust to changes in the smoothing parameter but curves gain in complexity. We considered that k=4 was the most consistent value in ecological terms. We have specified in the Supplementary Methods: "Results were robust to changes in the degree of smoothing function".

Also, how important was your control parameter that included the lat & long? I suspect that this parameter would contain much of the variation across sites.

-> Our control for latitude and longitude carries some information but is not the major factor explaining bird diversity (often duration, expertise and altitude are responsible for a larger amplitude of diversity, as can be seen in Supplementary Figures 10-17). Regardless of the amplitude of the impact, it is unlikely to bias our results as our control applies at a scale much larger than protected areas (several hundreds of kilometres), as can be seen in Supplementary Figures 10-17. Moreover, our results are conservative with respect to such bias (i.e., our control could shadow some covariate effects but not create an artefact effect of protected areas).
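To illustrate the kind of model being discussed in this exchange (a GAM with a smooth effort term and a spatial smooth over latitude and longitude), a minimal mgcv sketch is given below. The formula, k value, family and variable names are placeholders, not the authors' actual specification, and the data are simulated.

```r
# Minimal mgcv sketch: protection effect with a smooth for effort (k = 4) and a
# two-dimensional smooth over longitude/latitude; data are simulated placeholders.
library(mgcv)

set.seed(3)
d <- data.frame(
  forest_rich = rpois(300, 12),            # e.g. forest-dependent species richness
  protected   = rbinom(300, 1, 0.5),
  duration    = runif(300, 0.5, 5),        # checklist duration (effort), hours
  lon         = runif(300, 100, 110),
  lat         = runif(300, -5, 5)
)

m <- gam(forest_rich ~ protected + s(duration, k = 4) + s(lon, lat),
         family = poisson, data = d)
summary(m)
# Refitting with other k values (e.g. k = 5 or 6) is a direct way to check that
# the fitted patterns are robust to the degree of smoothing.
```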
Experimental Study of the Combined Effects of Al2O3, CaO and MgO on Gas/Slag/Matte/Spinel Equilibria in the Cu-Fe-O-S-Si-Al-Ca-Mg System at 1473 K (1200 ºC) and p(SO2) = 0.25 atm

The combined effects of Al2O3, CaO and MgO slagging components on phase equilibria and thermodynamics in the basic Cu-Fe-O-S-Si system have been evaluated at 1473 K (1200 ºC) and p(SO2) = 0.25 atm for a range of oxygen partial pressures and matte compositions. The experimental technique included high-temperature equilibration of the samples on a spinel substrate under a controlled gas atmosphere (CO/CO2/SO2/Ar), followed by rapid quenching and subsequent measurement of the equilibrium phase compositions using Electron Probe X-ray Microanalysis (EPMA). The experimental data have been compared with the results of thermodynamic calculations undertaken using FactSage software and an internal thermodynamic database. Both the experimental and calculation results revealed that the presence of Al2O3, CaO and MgO reduced both the sulphur and copper concentrations in the slag phase for a given set of process conditions. The data have been used for further optimisation of the parameters of the thermodynamic database describing multicomponent metallurgical systems. The resulting thermodynamic database is capable of predicting, with high accuracy, the phase equilibria and the distribution of all elements between the phases in the Cu-Fe-O-S-Si-(Al, Ca, Mg) system.

Introduction

The Cu-Fe-O-S-Si-(Al, Ca, Mg) system describes the principal chemical components present in copper smelting, converting and refining systems. In industrial practice, fayalite-based copper smelting slags typically contain 2-5 wt% Al2O3, 1-4 wt% CaO and 1-2 wt% MgO [1]. These elements are introduced into the processes from concentrates or other feedstocks, fluxing agents and refractory materials, and are distributed between slag, spinel and other solid phases. Accurate quantification of the system's sensitivity to changes in bulk composition and process conditions is highly important for optimization of industrial processes. The study of phase equilibria in these complex, high-temperature, multi-component, multi-phase systems is a challenging task. The chemical characteristics of slag/matte equilibria and the factors influencing the distribution of major elements between phases were established in several foundational papers published in the 1950s-70s [2][3][4][5][6]. Those studies were focused on the experimental measurement of copper, sulphur, oxygen and minor element distributions between silica-saturated slags and mattes under controlled gas atmospheres. The effects of Al2O3 and CaO on the matte/slag phase equilibria were evaluated later [7,8]. The equilibrium copper and sulphur concentrations in MgO-containing FexO-SiO2 slags, as well as the oxygen concentration in matte, at 1300 ºC and p(SO2) = 0.1 atm were measured by Takeda et al. [9,10]. MgO crucibles were used in both studies, but the MgO concentration and Fe/SiO2 ratio of the slags were not fixed at constant values, making it difficult to establish the trends associated solely with MgO concentration. In all of the above studies, the bulk matte and slag compositions were measured using conventional wet chemical analysis methods, with samples weighing approximately 10 g.
Uncertainties associated with this type of technique included potential entrainment of matte in the slag phase, difficulties in reaching equilibrium phase compositions in bulk samples within the targeted equilibration time, and possible compositional changes during cooling. Using ceramic (i.e. CaO, MgO, Al2O3) crucibles resulted in uncontrolled slag compositions and prevented systematic investigation of the effects of the fluxing elements. In a recent series of studies, an integrated approach combining experimental study and thermodynamic modelling has been implemented to investigate phase equilibria and the distribution of elements in the Cu-Fe-O-S-Si-(Al, Ca, Mg) system [11][12][13][14][15][16][17][18]. Studies of the individual and combined effects of Al2O3, CaO and MgO on the gas/slag/matte/tridymite phase equilibria at 1200 and 1300 ºC and p(SO2) = 0.25 atm have been reported by Fallah-Mehrjardi et al. [19] and Sineva et al. [20]. The same experimental technique with spinel substrates has been used to measure the individual effects of Al2O3, CaO and MgO on gas/slag/matte/spinel phase equilibria at 1200 ºC and p(SO2) = 0.25 atm [21]. Similar research, focused on the effects of MgO and CaO on gas/slag/matte/spinel equilibria at a slightly different p(SO2) of 0.3 atm and using an experimental technique adapted from [15], has been published in [22][23][24]. The latest research on the individual effects of Al2O3 and CaO on the distribution of major elements between slag and matte phases in equilibrium with spinel at p(SO2) = 0.25 atm and 1250 ºC has been reported in the doctoral thesis of Min Chen [25]. Thermodynamic modelling of the effects of Al2O3, CaO and MgO on slag/matte equilibria in the Cu-Fe-O-S-Si-(Al, Ca, Mg) system was carried out by Shishin et al. [26]. The model reproduced the individual effects of Al2O3, CaO and MgO on slag/matte phase equilibria on spinel and tridymite substrates. The combined effects of Al2O3 + CaO + MgO on slag/matte/spinel equilibria for high Fe/SiO2 have not been accurately measured to date. The aim of the present paper is to fill that gap, producing accurate experimental measurements of the combined Al2O3, CaO and MgO effects on the gas/slag/matte/spinel phase equilibria at 1200 ºC and p(SO2) = 0.25 atm for a range of oxygen partial pressures and matte compositions.

Overall Description of the Experimental Technique

The experimental technique used in the present study involved high-temperature equilibration of slag and matte phases on spinel substrates in controlled gas atmospheres (CO/CO2/SO2/Ar), quenching of the equilibrated sample, and accurate measurement of the compositions of the coexisting phases using Electron Probe X-ray Microanalysis (EPMA). The technique has been developed for accurate measurement of phase equilibria in multicomponent systems at high temperatures. Application of EPMA for determination of the compositions of the phases present in equilibrated slag/matte samples eliminated the possibility of measuring entrained droplets or solids, since the phases were readily distinguished. Combining EPMA with SEM enabled the selection of areas in which all equilibrium phases were in close association and accurate measurement of the compositions of coexisting phases.
The low weight of the samples, 0.3-0.5 g, ensured the achievement of equilibrium within a reasonable experimental time and enabled rapid quenching after the equilibration process. Applying the primary phase as a substrate for the matte/slag/gas equilibria prevented contamination of the sample by extraneous elements that could be associated with crucible materials. The sample mixture compositions and process conditions used in each experiment were calculated using the thermodynamic database, thus decreasing the overall number of experiments required to accurately characterise the system and to determine the optimum values of the database model parameters. Possible uncertainties arising from the technique could be related to experimental errors during the equilibration experiments and to uncertainties of the EPMA measurements. These uncertainties, and the ways in which they were addressed, are listed below.

Experimental Uncertainties
• The presence of impurities in the initial reagents and possible contamination during mixture and sample preparation. Potential impurities were detected and controlled by measuring energy-dispersive spectra (EDS) of samples following equilibration.
• Uncertainties of high-temperature equilibration:
• Temperature uncertainties (location of the sample relative to the hot zone, thermocouple errors, gradual contamination and disintegration of thermocouples during their use) were minimised by thermocouple calibration against a standard thermocouple supplied by the Australian Standards Laboratory and by periodic measurements of the temperature profile of the furnace.
• Gas composition uncertainties include initial gas purity, the mixing ratio of gases, possible minor cracks and leakages in the connections, and blockage of the furnace with sulphur precipitates. The flowrates of the high-purity gases were controlled by calibration of the flowmeters. The generated oxygen partial pressures were cross-checked using an oxygen probe. Several experiments were repeated to ensure reproducibility of the p(O2) vs matte grade curve. To ensure complete achievement of equilibrium, a "4-point test" approach was applied, involving examination of the effects of the equilibration time, homogeneity within the individual phases, approach to equilibrium from different starting conditions, and analysis of the reaction sequence taking place during equilibration. Details of the "4-point test" approach can be found in [18].
• Compositional changes in the phases during quenching were minimised by selection of well-quenched areas and an appropriate probe diameter for measurement.

Uncertainties of EPMA Measurement
• The thickness and uniformity of the carbon coating of the samples and standards were controlled by introducing standard carbon coating parameters.
• The surface condition (smoothness, pores/cracks, oxidation/hydration) of the samples was controlled by maintaining a clean work environment and special conditions of sample storage, and by reporting totals before normalisation.
• Uncertainties in the EPMA standards for a given measurement session were avoided by repeated measurement of standards as unknowns during the session, using the pure components or stoichiometric phases observed in the samples to recheck the standards, and, where possible, by mounting small standard particles in the sample block.
• Selection of the appropriate measurement locations and number of measured points was carried out by examination of different areas of the sample by SEM.
• Uncertainties in the ZAF correction were controlled by selecting standards close to the compositions of the samples and by using secondary stoichiometric standards for testing.
• The effect of secondary fluorescence (manifested as an apparent solubility of certain elements in a particle that has no or lower real solubility of that element, but is surrounded by a matrix containing that element) was corrected by a blank unreacted couple test and, where appropriate, by changing to another characteristic line for the same element (for instance, from the K to the L line for some elements) [27,28].

Experimental Technique

The first step of experiment preparation was to predict appropriate initial mixture compositions at fixed conditions using FactSage software [29] and the current version of the confidential thermodynamic database for FactSage developed by the Pyrosearch innovation centre (The University of Queensland) [12,30]. The principal objectives of the calculations were to estimate the phase ratios at the selected p(SO2), p(O2), temperature and matte grade. The slag/matte/spinel mass ratios used in the experiments were approximately 0.6/0.3/0.1, respectively. The initial mixtures were prepared from metal, sulphide and oxide powders: precalcined SiO2 (99.9 wt% purity), Cu2S (99.5 wt% purity), FeS (99.9 wt% purity), FeO1.05 (99.9 wt% purity), Fe (99.9 wt% purity), Cu (99.9 wt% purity), MgO (99.99 wt% purity) and Al2O3 (99.99 wt% purity), supplied by Alfa Aesar (MO, USA). CaO was added to the system in the form of a preliminarily synthesised "master slag" with a composition of 60 mol% SiO2-40 mol% CaO. The master slag was prepared from calcined SiO2 and CaCO3 at the required weight ratio in an open silica ampoule at 1400 °C in a muffle furnace overnight, cooled slowly, and contained a mixture of wollastonite CaSiO3 and quartz or tridymite SiO2, as confirmed by SEM. Typical examples of the initial mixtures with different ratios of initial reagents targeting a fixed oxygen partial pressure are given in Table 1. The oxide, sulphide and metal powders in preset ratios were thoroughly mixed, pelletised and placed on a spinel (magnetite) substrate. The average weight of the samples was approximately 0.3-0.5 g. The spinel (Fe3O4) substrate was prepared through oxidation of pure iron foil (99.99 wt% purity) at 1200 ºC in a CO2 atmosphere for 2 h. Various shapes of spinel substrates were tested by Hidayat et al. [15]. The final type of substrate was adopted in the form of a rectangular basket with an open bottom. A schematic illustration of the spinel basket, a photo of the oxidised spinel with its dimensions, and a photo of the mounted spinel with the sample inside are shown in Fig. 1. The spinel substrate containing the experimental sample was suspended in a 32 mm ID recrystallised alumina reaction tube and positioned in the calibrated, uniform hot zone of a vertical, electrically heated furnace at the given temperature, 1200 ºC. The sample temperature was measured by an alumina-shielded and calibrated Pt/Pt-13 wt% Rh thermocouple placed immediately adjacent to the sample. A digital 4-channel multimeter (TM-947SD, Lutron Electronic, Taiwan) with an SD-card data logger was connected to the thermocouple for temperature measurement. The sample temperature was estimated to be within ± 5 ºC of the target value.
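As a simple illustration of the batch arithmetic behind the master slag described above, the sketch below converts the 60 mol% SiO2 - 40 mol% CaO target into an SiO2:CaCO3 weight ratio for the starting mixture. The molar masses are standard values, and the calculation is only a schematic aid, not the authors' actual batch sheet.

```r
# Batch calculation for a 60 mol% SiO2 - 40 mol% CaO master slag made from
# SiO2 and CaCO3 (CaCO3 loses CO2 on firing and supplies the CaO).
M <- c(SiO2 = 60.08, CaCO3 = 100.09, CaO = 56.08)   # g/mol

n_SiO2 <- 0.60   # moles per mole of oxide mixture
n_CaO  <- 0.40

m_SiO2  <- n_SiO2 * M["SiO2"]    # mass of silica in the batch
m_CaCO3 <- n_CaO  * M["CaCO3"]   # mass of carbonate needed to give n_CaO

batch_wt_pct <- 100 * c(SiO2 = m_SiO2, CaCO3 = m_CaCO3) / (m_SiO2 + m_CaCO3)
fired_wt_pct <- 100 * c(SiO2 = m_SiO2, CaO = n_CaO * M["CaO"]) /
                      (m_SiO2 + n_CaO * M["CaO"])

round(batch_wt_pct, 1)   # ~47.4 wt% SiO2, ~52.6 wt% CaCO3 in the unfired batch
round(fired_wt_pct, 1)   # ~61.6 wt% SiO2, ~38.4 wt% CaO in the fired master slag
```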
The constant p(SO2) = 0.25 atm and oxygen partial pressures, p(O2), in the range of 10^-8.3 to 10^-8.6 atm were maintained by accurate control of the CO, CO2, SO2 and Ar ratios in the gas phase using a system of calibrated U-tube capillary flow-meters. In total, five different oxygen partial pressures were targeted for the experimental mixtures: 10^-8.3, 10^-8.4, 10^-8.5, 10^-8.55 and 10^-8.6 atm (see details in Table 1). The aim of the proposed experimental plan was to reach the equilibrium state from different directions (from higher or lower copper concentrations in the matte phase). It was assumed that, for the same mixture, the directions of the main reactions can be shifted depending on the oxidation potential of the gas phase. A detailed analysis of the potential reactions during the gas/slag/matte/spinel equilibration process was published by Hidayat et al. [15]. The desired gas flowrates to achieve the selected conditions were calculated using the FactSage FactPC database for the ideal gas phase [29]. The accuracies of the oxygen potentials produced by the sulphur-free gas mixtures were confirmed by flowing the mixtures through a separate vertical tube furnace equipped with a DS-type oxygen probe (supplied and calibrated by Australian Oxygen Fabricators, AOF, Melbourne, Australia) operated at the same temperature as used in the experiments. Several preliminary tests were carried out for 0.5, 1, 6, 18, 24 and 48 h to determine the required experiment duration. A detailed explanation of the technique for determining the proper equilibration time for experiments in equilibrium with a gas phase was published earlier [15]. Based on that technique and analysis of the results obtained, the final experimental time was selected to be between 20 and 24 h, which allows the phases of the system to reach thermodynamic equilibrium. All samples were then directly quenched in brine solution (20 wt% CaCl2 in water kept at −20 ºC) to capture the equilibrium phase compositions at 1200 ºC. The quenched samples were washed, dried, mounted in epoxy resin and polished using a Tegramin polishing machine (Struers, Denmark) for further examination. Direct measurement of the equilibrium phase compositions was undertaken by electron probe X-ray microanalysis (EPMA) using a JEOL JXA 8200L probe (trademark of Japan Electron Optics Ltd., Tokyo) at an acceleration voltage of 15 kV and a probe current of 20 nA. Kα1 characteristic lines were selected for measurement of the Cu (LIF crystal), Fe (LIF crystal), S (PET crystal), Si (TAP crystal), Ca (PET crystal), Mg (TAP crystal) and Al (TAP crystal) concentrations. (LDE1, TAP, PET and LIF are different types of spectrometer crystals used to cover the X-ray spectrum in EPMA: lithium fluoride (LIF), pentaerythritol (PET), thallium acid phthalate (TAP) and artificial layered dispersive element (LDE) crystals are the most commonly used; LDE1 and TAP crystals are used for light-element (low-energy) analysis, while PET and LIF crystals, in conjunction with the TAP crystals, cover the heavier elements.) Appropriate reference materials from Charles M. Taylor, Stanford, CA were used as standards. Cu and Fe in the matte phase were calibrated against pure Cu and Fe metals, and S was calibrated against a chalcopyrite standard (CuFeS2). For the slag and spinel phases, the standard materials Cu metal, Fe2O3, SiO2, CaSiO3, MgO and Al2O3 were used to calibrate Cu, Fe, Si, Ca, Mg and Al, respectively. Fe2O3 was used as the oxygen standard for direct measurements of oxygen in the matte phase. The detailed technique for oxygen measurement in the matte phase is described in [25]. Briefly, the oxygen concentration in matte was measured using the Kα line with an LDE1 spectrometer crystal, specially designed for measuring light elements. The signal collection times for peak and background were 40-50 s and 6-8 s, respectively, depending on the estimated oxygen concentration in the phase.
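The gas-mixture control described at the start of this section can be illustrated with a rough calculation of the CO2/CO ratio required for a target p(O2) at 1473 K. The Gibbs energy expression used below is a textbook approximation for the reaction 2CO(g) + O2(g) = 2CO2(g) and only stands in for the FactSage ideal-gas calculation actually used in the study; it also ignores the SO2 and Ar contributions to the mixture.

```r
# Back-of-envelope estimate of the CO2/CO ratio for a target p(O2) at 1473 K,
# using an approximate standard Gibbs energy for 2CO(g) + O2(g) = 2CO2(g).
R_gas <- 8.314                    # J/(mol K)
T     <- 1473                     # K
dG0   <- -564800 + 173.62 * T     # J/mol, approximate textbook expression
K     <- exp(-dG0 / (R_gas * T))  # K = pCO2^2 / (pCO^2 * pO2)

pO2_target   <- 10^-8.4           # atm, one of the targeted values
ratio_CO2_CO <- sqrt(K * pO2_target)
round(ratio_CO2_CO, 1)            # roughly 19, i.e. a strongly CO2-rich CO/CO2 mixture
```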
The Duncumb-Philibert ZAF correction procedure supplied with the JEOL JXA 8200L probe software was applied to the data obtained. (ZAF correction is a mathematical correction of raw X-ray data that takes into account three effects on the characteristic X-ray intensity in quantitative analysis: the atomic number (Z) effect, the absorption (A) effect and the fluorescence excitation (F) effect; ZAF is the abbreviation of these effects. The ZAF correction procedure is an important step in quantitative X-ray microanalysis, allowing accurate quantification of X-ray spectra and reliable elemental concentrations to be obtained at high spatial resolution.) The standard ZAF correction was further improved for the fayalite slag compositions, following an approach described in [28,31]. For the slag and spinel phases, the measured metal cations were recalculated to selected oxidation states (Cu2O, FeO, SiO2, Al2O3, CaO and MgO) for presentation purposes. The obtained results are listed in tabular form, with the initial sum of elements before normalisation indicated. Figure 2 illustrates typical sample microstructures, containing matte, slag and spinel phases adjacent to and in equilibrium with the gas phase (a and b show samples from Table 2; b is Sample #14). It can be seen that all phases are homogeneous and well quenched. The cross-sections of the matte phase are typically in the range of 10-300 µm diameter, and the compositions of matte droplets larger than 50 µm were measured. The slag phase compositions were measured in close proximity to the matte and spinel phases, but not closer than 20 µm from phase boundaries, so as to avoid any effect of secondary fluorescence. Spinel grains, typically 10-50 µm in diameter, were randomly distributed in the slag phase.

Results

The EPMA-measured compositions of the phases present in the equilibrated samples are listed in Table 2; the column "EPMA Total" refers to the unnormalised sum of the elemental concentrations measured by EPMA. The concentrations of Al2O3, MgO and CaO in the matte phase are below the detection limits and therefore are not reported. The concentrations of Cu and S dissolved in the spinel phase are close to the EPMA detection limit and do not exceed 0.1 wt% for the majority of the samples. The experimental data were plotted on a set of graphs with the copper concentration in the matte phase (wt% Cu) on the X-axis. The graphs presenting p(O2), p(S2) and p(SO2); the S and O concentrations in the matte phase; the "FeO", SiO2, Cu and S concentrations in the slag phase; the Fe/SiO2 ratio; and the Fe3+/(Fe2+ + Fe3+) ratio in the slag phase are shown in Fig. 3.
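The cation-to-oxide recalculation and normalisation mentioned above is simple gravimetric arithmetic. The sketch below shows it for a hypothetical slag analysis; the element concentrations are invented, and only the molar-mass conversion factors are standard values.

```r
# Illustration of the cation-to-oxide recalculation and normalisation applied to
# slag and spinel EPMA data (element wt% values below are hypothetical).
elem_wt <- c(Cu = 1.2, Fe = 42.0, Si = 14.5, Al = 2.1, Ca = 1.6, Mg = 0.8)  # wt% metals

# Gravimetric conversion factors: oxide molar mass / metal molar mass (per cation)
f <- c(Cu = 143.09 / (2 * 63.546),  # Cu2O per Cu
       Fe = 71.84  / 55.845,        # FeO
       Si = 60.08  / 28.086,        # SiO2
       Al = 101.96 / (2 * 26.982),  # Al2O3 per Al
       Ca = 56.08  / 40.078,        # CaO
       Mg = 40.30  / 24.305)        # MgO

oxide_wt   <- elem_wt * f
total      <- sum(oxide_wt)                 # analogous to an unnormalised "EPMA Total"
normalised <- 100 * oxide_wt / total        # oxide wt%, normalised to 100
round(rbind(oxide_wt, normalised), 2)
```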
The experimental and literature points are plotted along with the results of thermodynamic calculations carried out with the FactSage software [29] and the current version of the confidential thermodynamic database developed by the Pyrosearch innovation centre (The University of Queensland) [12,30]. The concentrations of Cu and S dissolved in the slag phase depend on the p(O2) in the system and on the concentrations of the slagging elements (Ca, Al, Mg) present. Analysis of the data also indicates that Al2O3 and MgO are distributed between the slag and spinel phases, whereas CaO is only present in the slag phase. To illustrate the distribution of Al2O3 and MgO between the slag and spinel phases, the logarithm of the ratio of the measured Al2O3 (or MgO) concentration (wt%) in the slag phase to that in the spinel phase was calculated and is presented in Fig. 4a, b, with error bars illustrating the standard deviation of the experimental data.

O2 and S2 Partial Pressures

The relationships between matte grade and oxygen partial pressure at a fixed sulphur dioxide partial pressure of 0.25 atm are shown in Fig. 3a. The experimental data show the same tendency of increasing copper concentration in matte with increasing oxygen partial pressure that was observed earlier for the Cu-Fe-S-O-Si system without additions of Al2O3, CaO and MgO. This trend is consistent with the thermodynamic predictions carried out using the current version of the database; however, the oxygen partial pressure predicted to obtain a given matte grade is approximately 0.2 log units greater than observed for the range of matte compositions measured. Possible reasons for these differences have been previously discussed by Shishin et al. [11], but a clear explanation of this difference has yet to be established. The model predictions indicate that the relationship between the oxygen partial pressure and the matte grade is almost independent of Al2O3, CaO and MgO at low concentrations of these components in the slag phase. This appears to be the case both for equilibrium with spinel and for equilibrium with tridymite [20], and is consistent with studies of the individual effects of Al2O3, CaO and MgO on slag/matte/spinel phase equilibria [21,32]. The predicted sulphur partial pressures decrease from 10^-2.0 to 10^-4.0 atm with increasing matte grade between 40 and 80 wt% Cu at p(SO2) = 0.25 atm, as shown in Fig. 3b. The sulphur partial pressure depends mainly on p(SO2), which was fixed in the system, and is inversely proportional to the oxygen partial pressure. The effect of Al2O3, CaO and MgO on p(S2) is smaller than the above-mentioned effects.

Composition of the Matte Phase

The fluxing agents (Al2O3, CaO and MgO) are not present in the matte phase at measurable concentrations. The sulphur concentration in the matte phase decreases with increasing copper concentration in matte, as illustrated in Fig. 3c. The experimental points follow the Cu2S-FeS stoichiometric line at high matte grades (copper concentrations above 50 wt%), then change slope owing to the increasing solubility of oxygen and excess Fe metal in mattes at lower copper concentrations. The thermodynamic predictions significantly underestimate the sulphur concentrations for low matte grades. This difference was also observed in other experimental studies of slag/matte/tridymite phase equilibria [21]. The discrepancy can be explained by overestimation of the iron concentration in the matte phase at low matte grades.
All these results indicate that further optimisation of the thermodynamic database parameters for the matte solution is necessary. According to the calculated trends, matte in equilibrium with tridymite has a higher sulphur concentration than matte equilibrated with spinel; this can be related to the higher activity of "FeO" in the system. A qualitative comparison of the present results with the data of [25] showed similar trends, but their experiments covered only mattes of 60 wt% Cu and above; the discrepancy between the experimental and calculated data at low matte grades therefore cannot be confirmed by their results.

The oxygen concentrations in the matte phase were measured directly by EPMA and the results are presented in Fig. 3d. The oxygen concentration in matte clearly increases with decreasing matte grade. This can be explained by the higher concentration of iron in matte, since iron has a higher chemical affinity for oxygen than copper; ultimately, at very low matte grades (for instance, in the Fe-S-O system) matte and slag form a single oxysulphide solution [33,34]. Good agreement between the experimental points and the calculated trends is observed for copper concentrations in matte of 40-55 wt%; however, the experimental points are 0.2-0.4 wt% O higher than the calculated values at higher matte grades. The calculated oxygen concentrations in mattes in equilibrium with tridymite are lower than in mattes equilibrated with spinel, which can be explained by the lower oxygen partial pressure at a fixed matte grade. The presence of Al2O3, CaO and MgO in the system has little effect on the oxygen concentration in matte.

Composition of the Liquid Slag Phase

The "FeO" concentrations (Fig. 3e) and the Fe/SiO2 ratios (Fig. 3f) in the slag phase for Al2O3-, CaO- and MgO-containing slags in equilibrium with spinel decrease with increasing matte grade. The experimental and predicted values agree well, and the "FeO" concentrations and Fe/SiO2 ratios at a given matte grade are significantly lower than in ACM-free slags (ACM denotes the Al2O3, CaO and MgO additions). These trends are also consistent with the previously reported data for the systems in equilibrium with tridymite [20] and with the data of [25]. The observed trends can be explained by the dissolution of the fluxing agents in the slag and the associated decrease in the activity coefficient of iron oxide in the slag phase. For slag equilibrated with tridymite, the iron concentration in slag is almost constant over a wide range of matte grades. In equilibrium with spinel, however, forming the spinel phase while keeping the slag liquid requires an increasing SiO2 concentration in the slag phase, which in turn results in a decreasing "FeO" concentration. This is confirmed by Fig. 3f, which shows a constant Fe/SiO2 ratio for tridymite-equilibrated slags and a decreasing ratio for spinel-equilibrated slags. Further evidence is given in Table 2 and Fig. 3f, which show the SiO2 concentration in spinel-equilibrated slags increasing from 24.5 to 35.5 wt% as the matte grade increases from 40 to 76 wt% Cu; in contrast, the SiO2 concentration in tridymite-equilibrated slag is almost constant over the whole matte-grade range. The experimental data for the SiO2 concentration in slag correlate well with the thermodynamic predictions. With increasing matte grade, the compositions of slags in equilibrium with tridymite and with spinel approach each other.
Fig. 3 Gas/slag/matte/spinel equilibria in the Cu-Fe-O-S-Si-Al-Ca-Mg system at 1473 K (1200 °C) and p(SO2) = 0.25 atm: a oxygen (O2) partial pressure; b sulphur (S2) partial pressure; c concentration of sulphur in matte; d dissolved oxygen in matte (measured directly by EPMA); e concentration of "FeO" in slag; f Fe/SiO2 ratio; g concentration of Cu in slag; h concentration of sulphur in slag; i Fe3+/Fetotal ratio in slag. Solid and dashed lines are calculated using the FactSage 7.3 software and the confidential thermodynamic database developed by the Pyrosearch innovation centre; symbols are experimental and literature data [20-23]. The abbreviation ACM in the legend denotes the bulk concentrations (wt%) of Al2O3, CaO and MgO, respectively.

For ACM-free slags, the slag compositions become identical at 76 wt% copper in matte. The dissolved copper concentration in slag in equilibrium with spinel decreases with increasing matte grade from approximately 40 to 65 wt% Cu and then increases for higher matte grades (Fig. 3g), resulting in a minimum copper concentration in slag for matte grades between approximately 65 and 70 wt% Cu. As mentioned above, with decreasing matte grade the chemical properties of slag and matte become closer, with a tendency to form a single solution; the mutual dissolution of copper and sulphur in the slag, and of oxygen in the matte, therefore increases. For copper-enriched mattes with copper concentrations higher than 70 wt%, the increase of copper in slag can be explained by the increase of the copper activity in the system. The presence of Al2O3, CaO and MgO decreases the concentration of copper in slag relative to ACM-free slags. For all matte grades, the copper concentrations in slag in equilibrium with spinel are higher than for tridymite-equilibrated slags. According to the calculated trends, for matte grades of 70 wt% Cu and higher the copper concentration in slag is the same for equilibria with tridymite and with spinel; however, the experimental results indicate that under these conditions the copper concentrations in slag in equilibrium with tridymite are lower than for slag in equilibrium with spinel. Scatter of the experimental data of around 0.2 wt% is observed both for the current results and for the literature data [21]; it can be explained by slightly different concentrations of ACM in the slag phase, which are difficult to adjust accurately to the targeted values. The copper concentrations in slag measured by Sun et al. [22,23] are 0.2-0.3 wt% lower than the results of the current study.

The concentrations of sulphur dissolved in slag in equilibrium with spinel and with tridymite decrease with increasing matte grade, as shown in Fig. 3h. The sulphur concentration in slag in equilibrium with spinel at a fixed matte grade is always higher than that in equilibrium with tridymite. The presence of Al2O3, CaO and MgO decreases the sulphur concentrations in slags in equilibrium with both the tridymite and spinel phases. There is good agreement between the experimental points for slag/matte/spinel equilibria, the calculated values and the literature data [20-23,25]. The higher sulphur concentrations measured in [21] at a fixed matte grade can be explained by the lower ACM addition in that study. Only the calculated Fe3+/Fetotal ratios in slag are illustrated in Fig. 3i, as the experimental values were not determined in this study.
For slags in equilibrium with spinel, this ratio is predicted to decrease with increasing matte grade up to approximately 75 wt% Cu and then to increase slightly at higher matte grades. This tendency can be explained by the increasing concentration of SiO2 in the slag, as SiO2 has a higher affinity for FeO than for Fe2O3. In contrast, the Fe3+/Fetotal ratios in slags in equilibrium with tridymite increase monotonically with increasing oxygen partial pressure in the system over the whole range of matte grades, similar to the change of the copper concentration in slag (Fig. 3e). The presence of the fluxing agents in the system results in decreased Fe3+/Fetotal ratios relative to the ACM-free slags for equilibrium with both the tridymite and spinel phases.

The Al2O3, CaO and MgO concentrations in the slag phase were targeted at 2.5, 2.5 and 0.9 wt%, respectively. The measured concentrations in the slag phase, however, differ slightly from these targets; expressed as the target value with the standard deviation of the measurements, they were 2.5 ± 0.7 wt% for Al2O3, 2.5 ± 0.3 wt% for CaO and 0.9 ± 0.2 wt% for MgO.

Composition of the Spinel Phase

It was noted earlier that both Al2O3 and MgO dissolve in the spinel phase, while CaO is concentrated in the slag phase. The main component of the spinel is magnetite, with Fe3O4 concentrations ranging from 94 to 98 wt% (see Table 2). Depending on the p(O2) and on the total concentration of the corresponding compound in the system, 0.7-3.6 wt% Al2O3 and 0.1-0.35 wt% MgO are dissolved in the spinel phase. The graphs illustrating the distribution coefficients of Al2O3 and MgO between slag and spinel are shown in Fig. 4a, b. High standard deviations are observed for the experimental data because of the measurement uncertainties of the Al2O3 and MgO concentrations in the spinel phase. The distribution coefficients for both components increase by approximately a factor of two as the copper concentration in the matte phase increases from 40 to 75 wt% Cu. Within these experimental uncertainties, the calculated trends are in good agreement with the experimental data.
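As a small illustration of how the Fig. 4 quantities are formed, the sketch below (Python; the concentrations and standard deviations are invented, not values from Table 2) computes the logarithm of the slag-to-spinel distribution ratio for Al2O3 and propagates the EPMA standard deviations to an uncertainty on that logarithm.

```python
import math

# Hypothetical EPMA means and standard deviations (wt%) for one sample.
c_slag, s_slag = 2.6, 0.3      # Al2O3 in slag
c_spin, s_spin = 1.4, 0.4      # Al2O3 in spinel

log_ratio = math.log10(c_slag / c_spin)

# First-order propagation of the relative uncertainties to log10 of the ratio.
sigma_log = math.sqrt((s_slag / c_slag) ** 2 +
                      (s_spin / c_spin) ** 2) / math.log(10)

print(f"log10[Al2O3(slag)/Al2O3(spinel)] = {log_ratio:.2f} +/- {sigma_log:.2f}")
```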
Industrial Implications

The formation of solid phases (tridymite or spinel) in smelting furnaces increases the effective viscosity of the slag (slurry effect) and slows the settling of physically entrained matte droplets. Uncontrolled amounts of solids may cause accretion formation and, in extreme cases, freezing of the furnace. A potential benefit of solid phases in the slag is decreased refractory wear, owing to the lower solubility of refractory components and the formation of a protective layer on the furnace walls. A common practice is to operate the furnace at a fixed Fe/SiO2 ratio in slag, based on the extensive experience of operators, with the optimised value selected for a typical furnace feed. The trend towards increasing variability in feed composition is continuing, and may even be accelerating, owing to changes in the mining industry such as the depletion of high-grade ore reserves and the increasing use of alternative feed materials (recycled metals, electronic waste, etc.). These secondary materials are often high in Al2O3 and Al. The addition of Al2O3 has a strong effect on the spinel and tridymite liquidus, as shown in the present study and in previous studies [20,21,26]. To achieve optimum performance and output from the furnace, a change in process conditions will be required when a high-Al material enters the feed.

The calculated fluxing table shown in Fig. 5 demonstrates the effect of the alumina concentration in the slag on the percentage of solid phases for a set Fe/SiO2 ratio at 1200 °C, a total pressure of 1 atm, p(SO2) = 0.25 atm, 60 wt% Cu in matte, and fixed CaO = 2.5 wt% and MgO = 0.9 wt%. The value p(SO2) = 0.25 atm was selected to correspond to the experiments in the present study; in industrial practice, the effective p(SO2) for slag/matte equilibria depends on the type of furnace. The value of "% Solids" is the mass ratio [m(Spinel) + m(Tridymite)]/[m(Liquid slag) + m(Spinel) + m(Tridymite)]·100%. Within the selected range, no other solid phases (e.g., mullite, wollastonite, feldspar) are predicted to form. The range of compositions for which the slag is fully liquid is highlighted in green.

Having developed an improved thermodynamic database describing the system, it is possible to predict the effect of the Al2O3 concentration and the Fe/SiO2 ratio on both the concentration of dissolved copper in slag and the % solids for a given matte grade. It can be seen from the example given in Fig. 6 that the presence of Al2O3 in slag decreases the concentration of dissolved copper, whereas an increased Fe/SiO2 ratio results in increased copper in slag. In Fig. 6, wt% Cu is defined as the mass ratio [m(Cu in Spinel) + m(Cu in Liquid slag)]/[m(Liquid slag) + m(Spinel) + m(Tridymite)]·100%. The range of fully liquid slags is shown between the thick red lines. Considering only the concentration of copper in slag + solids (Fig. 6), without considering the mass of slag, can lead to the incorrect conclusion that as much Al2O3 and SiO2 as possible should be added during copper concentrate smelting to reduce copper losses. This is certainly not the case, because copper losses are a function of both the copper concentration in the slag and the mass of slag produced. Moreover, in industrial practice copper in slag is present both as dissolved copper and as mechanically entrained matte droplets. The thermodynamic model of the present study provides a tool for calculating the concentration of dissolved copper and the mass of slag. In the calculations above, a concentrate containing 28 wt% Cu, 38 wt% Fe and 33 wt% S was smelted in an O2-N2 atmosphere. The amount of oxygen was increased in the modelling to achieve the target matte compositions, while p(SO2) was fixed at 0.25 atm. The masses of SiO2, Al2O3 and MgO were adjusted to achieve the target slag compositions.

Conclusions

The integrated experimental and thermodynamic modelling approach has been applied to determine gas/slag/matte/spinel equilibria in the "Cu2O"-"FeO"-SiO2-S-Al2O3-CaO-MgO system under selected conditions at fixed p(SO2) = 0.25 atm and controlled oxygen partial pressures at 1200 °C. The combined effects of Al2O3-CaO-MgO on the system have been characterised by measuring the compositions of the slag, matte and spinel phases formed after equilibration. The results have been compared with those for the ACM-free systems in equilibrium with spinel and with tridymite. The thermodynamic database accurately describes the equilibria in these complex systems, demonstrating the value of a rigorous thermodynamic approach to predicting process outcomes. An example of the application of the thermodynamic database is given in the form of a fluxing table, a guide that could be used by furnace operators to identify safe or optimum operating ranges in situations of varying slag compositions.
The Association of Combined Per- and Polyfluoroalkyl Substances and Metals with Allostatic Load Using Bayesian Kernel Machine Regression Background/Objective: This study aimed to investigate the effect of exposure to per- and polyfluoroalkyl substances (PFAS), a class of organic compounds utilized in commercial and industrial applications, on allostatic load (AL), a measure of chronic stress. PFAS, such as perfluorodecanoic acid (PFDE), perfluorononanoic acid (PFNA), perfluorooctane sulfonic acid (PFOS), perfluoroundecanoic acid (PFUA), perfluorooctanoic acid (PFOA), and perfluorohexane sulfonic acid (PFHS), and metals, such as mercury (Hg), barium (Ba), cadmium (Cd), cobalt (Co), cesium (Cs), molybdenum (Mo), lead (Pb), antimony (Sb), thallium (TI), tungsten (W), and uranium (U) were investigated. This research was performed to explore the effects of combined exposure to PFAS and metals on AL, which may be a disease mediator. Methods: Data from the National Health and Nutrition Examination Survey (NHANES) from 2007 to 2014 were used to conduct this study on persons aged 20 years and older. A cumulative index of 10 biomarkers from the cardiovascular, inflammatory, and metabolic systems was used to calculate AL out of 10. If the overall index was ≥ 3, an individual was considered to be chronically stressed (in a state of AL). In order to assess the dose-response connections between mixtures and outcomes and to limit the effects of multicollinearity and other potential interaction effects between exposures, Bayesian kernel machine regression (BKMR) was used. Results: The most significant positive trend between mixed PFAS and metal exposure and AL was revealed by combined exposure to cesium, molybdenum, PFHS, PFNA, and mercury (posterior inclusion probabilities, PIP = 1, 1, 0.854, 0.824, and 0.807, respectively). Conclusions: Combined exposure to metals and PFAS increases the likelihood of being in a state of AL. Background The totality of exposures people endure throughout their lives and how those exposures affect health have been referred to as the exposome [1]. Although certain environmental exposures might lead to unfavorable health outcomes, little is understood about how these factors interact or synergize to affect the stress response system [2]. This is especially critical to understand when exposure to metals is mixed with exposure to perand polyfluoroalkyl substances (PFAS). The negative consequences of PFAS and mixed metals may be deleterious. They could have a long-term effect on the impacted populations' social, educational, and economic advancement [3,4]. In many environments with high levels of chronic stress, several metals and PFAS co-exist at moderate to high levels. Individuals maintain physiological balance through allostasis, which involves adjusting bodily characteristics to meet environmental requirements. Homeostasis describes health as a state in which all physiological parameters function within non-changing setpoints. Allostasis, on the other hand, states that there are no setpoints and that the demands of the moment will determine the normal values of markers. However, the body adjusts to the higher set point if the impediments persist [5]. When the setpoint is changed, people are said to be in a state of allostatic load. Allostatic load (AL), an index of persistent physiological stress, is the biological consequence of stress. 
AL depends on the assumption that repetitive activation of the hypothalamic-pituitary-adrenal (HPA) axis affects multiple organ systems [5][6][7][8]. The wear and tear on the body caused by ongoing exposure to stressors can be measured by AL, which combines markers from systems within the human body to form a comprehensive biological stress index. An adult's well-being is negatively impacted by psychosocial stresses, such as poverty, racial inequality, lack of access to resources, and water and food insecurity, which may be combined with environmental factors to increase AL within populations. At the individual and population levels, real-world human exposure to stressors is extraordinarily varied and temporally dynamic. Humans are constantly exposed to intricate chemical combinations of PFAS, metals, and other environmental pollutants [9,10]. Data analytics techniques provide a novel way to analyze the combined risk of various exposures in order to develop methodologies to properly identify and evaluate their impact on indices of stress, such as AL, because we do not fully understand the combinational nature of these exposures [11]. Human Exposure Pathways to PFAS and Metals According to the Agency for Toxic Substances and Disease Registry at the Centers for Disease Control and Prevention (CDC), metals such as cadmium (Cd), arsenic (As), lead (Pb), and mercury (Hg) are among the top 10 most toxic substances. Most people are exposed to metals through ingestion (through water and food), inhalation (through cigarette smoke or industrial products), or skin contact (through paint or soil) [12]. For example, As comes in two forms: the inorganic form is highly toxic, while the organic form is not. Most people are exposed to inorganic As, which is found in soil and groundwater, through drinking water, often from unregulated private wells. Most people are exposed to organic arsenic, which is found in fish and shellfish, through ingestion [13]. In the United States, people of different races, ethnicities, and socioeconomic backgrounds experience widely varying degrees of exposure. For example, non-Hispanic blacks have higher Pb exposure than non-Hispanic whites [14]. Humans most commonly absorb toxic PFAS through their diets [15]. Inhalation of air or dust containing PFAS particles is another route of exposure. Over the past decade, there has been extensive research on the dangers of PFAS exposure for people's health. The CDC, for example, has set limits on PFAS concentrations in drinking water (70 ppt for PFOA and PFOS). PFAS spreads through many sites, including landfills and sites where PFAS has been processed. E-waste sites, for example, leach PFAS into groundwater, soil, and air, while wastewater treatment plants (WWTPs) release PFAS-laden effluent into rivers, lakes, and farms [16]. PFAS from treated or untreated effluent enters sewers, rivers, lakes, and oceans through aquatic ecosystems, making water the ultimate repository of PFAS in the environment [2]. Pregnant and parturient women, elderly people, children, and neonates are the most vulnerable to PFAS exposure, which can cause thyroid, lung, kidney, reproductive organ, metabolic, brain, and behavior disorders, obesity, type 2 diabetes, proteinuria, hematuria, immunosuppression, and adverse pregnancy outcomes [17]. Bayesian Kernel Machine Regression (BKMR): A Mechanism for Monitoring Multiple Environmental Exposures Bobb et al. 
introduced Bayesian kernel machine regression (BKMR) for analyzing mixtures within the R statistical program [18]. By using the (bkmr) package for the R programming language, BKMR was created to estimate the health effects of pollutant mixtures and is used for toxicological, epidemiological, and other applications. It does this by using procedures from Gaussian predictive methods or hierarchical variable selection [18,19]. The estimation of health outcomes of the mixtures under kernel function is modeled on the exposure variables by adjusting for potential covariates or cofounder factors [20]. These procedures can address the possible collinearity of the mixtures' components and test the exposures' overall health effects [21]. Ultimately, BKMR modeling is a technique that (1) models the exposures and outcomes comprehensively, (2) evaluates the components of chemicals independently of the independent-dependent function, (3) evaluates the effects of mixtures of chemicals, and (4) distinguishes the necessary chemical mixtures for any dataset that is simulated [19,21]. BKMR is also used to solve the challenges encountered when evaluating the health impacts of chemical mixtures (i.e., PFAS and metals). In epidemiological and toxicological studies, BKMR helps solve problems such as collinearity and strong correlations between exposures [22]. BKMR uses variable selection that produces and estimates posterior inclusion probabilities (PIPs) values, which measure the values of variable importance for each exposure in a mixture [18,20]. This study using BKMR hypothesizes that exposure to metals and PFAS is associated with high levels of AL. PFAS and metals were chosen due to the unique opportunity to assess combined exposures to organic and inorganic contaminants, the extensive research on both groups of contaminants with National Health and Nutrition Examination Survey (NHANES) data, and the vast historical and emerging research related to these contaminants. To test this hypothesis, data from the NHANES were used to identify the factors most critical in combined exposures to PFAS and metals. Study Cohort and Design Data from the NHANES 2007-2014 of adults aged 20 years and over were utilized in this investigation. This dataset is a representative sample of non-institutionalized people residing in all 50 U.S. states and the District of Columbia. The U.S. Centers for Disease Control and Prevention (CDC) collected the data, which are available in two-year cycles and include multi-year, stratified, multi-stage, and clustered samples. The population of the United States is represented by the statistics for four cycles within 2007-2014. Selected individuals in the NHANES underwent a physical examination and an interview. The participants' blood was extracted, and samples were sent to a laboratory for evaluation. On the NHANES website of the CDC, additional descriptions and information about the study, as well as the steps and processes involved in data collection, are provided. The association between the various PFAS/metals concentrations and AL levels was examined using weighted data in order to produce sample estimates, which reflect how many people in the U.S. population one individual represents. PFAS and Metals Measurements There were two examination sessions each day. Exams in the morning, afternoon, or evening were randomly assigned to participants. After fasting for nine hours, participants were instructed to consume 75 g of dextrose (10 oz. 
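The non-detect substitution described above is a one-line rule; the following Python sketch (with made-up serum values, not NHANES data) shows the LOD/√2 fill applied to a list of measurements.

```python
import math

LOD = 0.10                    # stated detection limit (ng/mL) for these PFAS
fill = LOD / math.sqrt(2)     # imputed value (~0.07) for results below the LOD

# Hypothetical serum results; None marks a value below the detection limit.
raw = [0.35, None, 1.20, None, 0.48]
imputed = [x if x is not None else round(fill, 2) for x in raw]
print(imputed)                # [0.35, 0.07, 1.2, 0.07, 0.48]
```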
of glucose solution) within 10 min after the initial blood draw. After the first blood draw was taken, a second blood sample was taken [23]. PFAS Quantification At the mobile examination center (MEC), the CDC gathered blood samples for laboratory analysis to evaluate serum for PFAS. Polypropylene or polyethylene containers were used to store the serum samples. The vials were subsequently shipped to several laboratories across the country. Sample analysis was performed at every survey location under the same conditions, owing to the controlled environments at separate facilities. In order to concentrate the analytes (PFAS) in a solid-phase extraction column, one aliquot of 50 mL of serum was injected into a commercial column switching system after being diluted with formic acid. High-performance liquid chromatography was used to separate the analytes from one another and the other serum constituents. A negative-ion Turbo Ion Spray (TIS) ionization source was utilized for detection and quantification (DOQ). Tandem mass spectrometry was used to change liquid-phase ions into gas-phase ions, utilizing a variation of the electrospray ionization source. These PFAS can be quickly detected in human serum using this technique, with detection limits in the low parts per billion (ppb or ng/mL) range [24]. An imputed value was placed in the analyte results field for analytes with analytic results below the lower limit of detection; 0.10/square root of 2 = 0.07 was the lower limit of detection divided by the square root of 2. Thus, the LOD for each PFAS was 0.10 or 0.07. Metals Quantification Inductively coupled mass spectrometry (ICP-MS) measured metals in diluted whole blood. ICP-MS is a validated technique for analyzing metals in biological media. All data set metal analytes had the same detection limits. An imputed fill value was placed in the analyte results field for analytes below the lower limit of detection using the equation: lower limit of detection divided by the square root of 2 [23].The NHANES Laboratory Procedures Manual describes specimen collection and processing in detail [24]. The National Center for Environmental Health (NCEH) of the CDC's Division of Laboratory Sciences performed metal assays on whole blood samples for the NHANES 2007-2014. Blood metals were identified and quantified using the inductively coupled plasma mass spectrometry method No. ITB0001A. Determining Allostatic Load Levels This study's AL was determined using physiological evaluations of 10 health indicators or biomarkers. The biomarkers included systolic blood pressure (SBP), diastolic blood pressure (DBP), total cholesterol (TC), high-density lipoprotein (HDL) cholesterol, glycosylated hemoglobin (HbA1c), albumin (Alb), triglycerides (TG), body mass index (BMI), creatinine clearance (CLCR), and C-reactive protein (CRP). Measures of AL were determined by calculating the cutoffs for various biomarkers based on their distribution within the database. All biomarkers were transformed into quartiles based on the data distribution. The top 25% of the distribution for each marker was designated as high risk for (1) C-reactive protein (CRP), (2) triglycerides (TG), (3) total cholesterol (TC), (4) systolic blood pressure (SBP), (5) diastolic blood pressure (DBP), (6) body mass index, and (7) glycosylated hemoglobin. For the other markers where high risk is determined by lower values, the bottom 25% of the distribution was used. 
These markers included (1) urinary albumin (Alb), (2) creatinine clearance (CLCR), and (3) high-density lipoprotein (HDL) cholesterol. High risk for each marker was assigned a value of 1 and low risk a value of 0, to obtain a total AL index out of 10. An AL value greater than 3/10 was considered elevated, as indicated by the prior work of the team and others [2,25-27].

Data Analysis

We used BKMR with the hierarchical variable selection method because of the highly correlated variables and collinearity in the datasets. We fitted the BKMR model in R using the bkmr package. In this study, the model evaluated the impacts of mixtures or multipollutant exposures (e.g., PFAS and metals such as cadmium, cobalt, cesium, molybdenum, lead, etc.) using the kmbayes function.

BKMR Modeling for Binary Outcomes

Combining data sources from various samples, including probability and nonprobability samples, is appropriate when using Bayesian inference. The use of Bayesian inference has several benefits. First, it enables the estimation of complicated models and the quantification of uncertainty. Second, the likelihood function can be used to analyze sample units based on probability; as the probability sample size grows, these units are given priority in the posterior calculations. Third, it enables posterior estimates that are more effective and efficient, with less uncertainty, than estimates obtained from small probability-only samples [19,28]. We implemented kernel machine regression (KMR) for binary outcomes using the probit specification of the bkmr package, probit[P(Yi = 1)] = h(zi1, ..., ziM) + xi^T β, where Yi is the binary outcome, h(·) is a flexible kernel-estimated function of the M mixture exposures z, and xi is the vector of covariates with coefficients β. The outcome variable in this study was AL: AL index values ≥ 3 were considered high risk and were assigned a 1 in the dataset, while values < 3 were considered low risk and assigned a 0. Binary outcomes were modeled with the BKMR package using the probit model for computational convenience and to overcome some of the issues that may arise in the dataset, such as collinearity, under Bayesian inference [29]. Posterior inclusion probabilities (PIPs), which offer a gauge of the variable importance of each exposure, were extracted and plotted. All models were adjusted for sex, age, smoking, physical activity, ethnicity, occupation, income, alcohol consumption, education, birthplace, and time in the U.S. The analysis was conducted using R software, version 4.1.2 (R Foundation for Statistical Computing, Vienna, Austria). A flow chart containing all the steps performed in the analysis can be found in Figure 1 below.

Table 1 below provides the posterior inclusion probabilities (PIPs), which measure the degree of support in the data for including an exposure in the model; in other words, they quantify the importance of each variable for inclusion in the model. The exposures included in the model were PFNA, PFUA, PFOA, PFHS, mercury, cesium, and molybdenum. Figure 2 shows the association between the response variable and each individual exposure included in the model, known as the univariate relationship; the other exposures were fixed at their median values, and the covariates were held constant. This figure shows that the association of some variables is not significant or that they have no association with the outcome.
In other words, Figure 2 below shows the univariate independentresponse association (each individual independent and dependent-AL association) by fixing the remaining exposures to their median, with the covariates being constant. The associations in Figure 2 present the relationship of exposures with responses when the model is adjusted for covariates (sex, age, smoking, physical activity, ethnicity, occupation, income, alcohol consumption, education, birthplace, and time in the U.S.). For instance, exposure to PFNA, PFUA, PFOA, PFHS, mercury, cesium, thallium, tungsten, and uranium are associated with AL, with some of these contaminants having sharper inclines, indicating different levels of exposure. Uphill on the graphs represents a higher level of exposure, and downhill shows lower levels of exposure. In other words, concentration values increase and decrease depending on the amount of exposure. Results In Table 2, the PIPs with the highest values are explored using critical sociodemographic and behavioral variables. The six highest PIPs were molybdenum, cesium, mercury, PFNA, PFOA, and PFHS. Table 3 explores mean AL levels by ethnicity and age group. This was performed to give context to the results. The results indicated that both ethnicity and age are significantly related to AL. Table 4 explores the correlation between all the critical environmental exposures in this study. The results demonstrate that the strongest correlation exists between cesium and mercury. Discussion The main PFAS have extensive half-lives in humans and are physiologically and biologically persistent. The gap in the body of knowledge on the impact of environmental pollutants on stress and health is partly filled by attempting to understand the relationship between the cumulative physiological burden of stress (AL) and PFAS and metals [30]. This is especially true because stressors are constantly present in people's lives, and the cumulative effect on health is apparent when resilience is lacking [30,31]. BKMR provides a way to address the potential multicollinearity among numerous PFAS and metal exposures, which cannot be resolved using traditional regression modeling. Based on a comprehensive analysis of the NHANES 2007-2014 data, we assessed the relationships between metal and PFAS exposures and AL among a nationally representative sample of adults. The study's findings supported the main hypothesis, which stated that exposure to a combination of PFAS and metals is strongly linked to AL. This expands prior work by the team, which found that metals and PFAS are associated with AL using simpler modeling techniques [2,30]. In this study, combined exposure analyses of PFAS and metals showed a significant positive association between mixed PFAS and metal exposure and AL, to which cesium, molybdenum, PFHS, PFNA, and mercury contributed the most (PIP = 1, 1, 0.854, 0.824, and 0.807, respectively). In addition, the correlation between selected metals and PFAS (Table 4), with some negatively and others positively associated, suggests that the relationships between these factors are varied and require dynamic modeling techniques to capture the combined relationship appropriately. In the BKMR model, a substantial positive association between combined metal and PFAS exposure and AL existed for PFNA, PFUA, PFHS, thallium, and tungsten. The univariate relationship between AL and each exposure in the model is depicted in Figure 2. All other exposures and covariates were held constant at their respective median values. 
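Before turning to the discussion, a minimal sketch of the allostatic-load scoring described in the Methods may help readers reproduce the outcome definition. The column names and values below are invented, and the sketch is in Python with pandas, whereas the published analysis was performed in R: each biomarker is dichotomised at its 75th percentile (or 25th percentile for albumin, creatinine clearance and HDL, where low values carry the risk), the ten indicators are summed, and the sum is dichotomised at the ≥ 3 threshold used as the BKMR outcome.

```python
import pandas as pd

# Hypothetical biomarker table; a real analysis would use the NHANES variables.
df = pd.DataFrame({
    "sbp": [110, 145, 128], "dbp": [70, 95, 82], "tc": [180, 250, 210],
    "hdl": [60, 35, 50], "hba1c": [5.2, 6.8, 5.6], "alb": [4.5, 3.2, 4.0],
    "tg": [90, 220, 140], "bmi": [23, 34, 28], "clcr": [110, 60, 95],
    "crp": [0.5, 8.0, 2.1],
})

high_is_risk = ["crp", "tg", "tc", "sbp", "dbp", "bmi", "hba1c"]
low_is_risk = ["alb", "clcr", "hdl"]

score = pd.Series(0, index=df.index)
for col in high_is_risk:          # top 25% of the distribution = high risk
    score += (df[col] >= df[col].quantile(0.75)).astype(int)
for col in low_is_risk:           # bottom 25% of the distribution = high risk
    score += (df[col] <= df[col].quantile(0.25)).astype(int)

df["al_index"] = score                                # cumulative index out of 10
df["al_high"] = (df["al_index"] >= 3).astype(int)     # binary BKMR outcome
print(df[["al_index", "al_high"]])
```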
The results demonstrated which variables, in combination, were not significantly associated with AL. These models were adjusted for confounding factors, and the associations between exposures and responses became clear. Exposures to PFNA, PFUA, PFOA, PFHS, mercury, cesium, thallium, tungsten, and uranium, to name a few, are all associated with AL; some of the graphs had steeper slopes than others, reflecting the fact that there were varying degrees of association between variables. The molecular processes or toxicological pathways that underlie the relationships between human exposure to PFAS and metals and AL are not fully understood. The means by which exposure to PFAS and metals brings forth adverse health outcomes may be via AL. Simply put, AL may be the mediator between exposure to multiple contaminants and adverse health outcomes [2,32], such as heart disease, high blood pressure, metabolic syndrome, obesity, and arthritis [33]. Table 2 shows that the mean levels of the contaminants of interest varied by ethnicity; for example, Asians had high mean levels of molybdenum, cesium, and PFUA, with values of 65.9, 537, and 0.26, respectively. Blacks had higher mean mercury levels, and Whites had higher PFOA and PFHS levels than the other groups. These varied exposure levels by ethnicity speak to the variability of the contaminants of interest and the dynamism of exposure in various environments. Within our results, compared to those of the White, Asian, and Hispanic ethnicities, non-Hispanic Blacks had greater rates of high AL. Tables 2 and 3 demonstrate that across all age groups, high stressors in addition to lower levels of resilient behavior, such as physical activity, exist. This may play a role in adverse health outcomes driven by AL. Understanding the social implications of AL may help explain some of the results of this study. For instance, many ethnic groups in the U.S. experience prejudice, face poor wage employment disproportionately, and are susceptible to chronic stress [34]. In the context of multiple environmental exposures, these factors may play a role in promoting AL. When this is intertwined with inadequate healthcare, the health burden on communities exposed to combinations of exposures and health outcomes is vast [34]. Non-Hispanic Whites in the US often have lower levels of AL than minority ethnic groups, as demonstrated in Table 3, across all age groups [35]. This may partly explain the lower disease burden within this group compared to the other groups. Age is a critical variable in AL levels, with younger people typically having lower AL levels than older persons [36]. Our results, as shown in Table 3, confirm this. Continuous stressor exposure over the course of a lifetime can promote inflammation and oxidative stress, which can lead to physiological impairment and promote disease [37]. Among these is cardiovascular disease, the leading killer in the U.S. and in the world [38]. As people become older, their biological sensitivities to chronic stress vary, and the body's physiological response system also changes naturally. As a result, biological regulation may deteriorate over time, which may result in an unhealthy physical state. This scenario has the potential to cause mortality over time, especially in elderly people [35]. The literature on AL by sex varies. Some research has shown that AL levels are often lower in men who hold professional positions, such as managers and directors. On the contrary, Rogers et al. 
reported that men with higher levels of education are likely to have higher AL [39]. According to several studies, women who obtained higher levels of education and simultaneously held professional jobs as managers had a higher prevalence of AL [40]. People who experience continual stress due to issues such as unemployment and poverty are more likely to engage in excessive drinking, smoking, and eating, which leads to obesity, poor sleep, and, of course, increased AL [41]; our results in Tables 2 and 3 support this. According to a study by Petrovic et al., drinking, smoking, and eating too much sodium were all associated with a higher risk of developing AL. Meanwhile, physical activity and a vegetarian diet were linked to a reduction in AL [40]; our results in Table 2 support these findings. Very few laboratory studies have examined combined exposure to PFAS and metals. Therefore, future experimental and human investigations are required to further corroborate our findings and to investigate the probable mechanism for the health impacts of PFAS and metal exposure on AL, given the dearth of laboratory data and the cross-sectional design of our study. The limitation of this design means that temporality cannot be inferred. A longitudinal study would offer better insight into these exposures and health outcomes. Conclusions PFAS and other toxicants, such as metals, interact in the human body to produce AL. The mixture of PFAS and metals is critical to understand, as they may, in combination, bring forth adverse health outcomes via AL. When PFAS are found in the body alongside metals, our results indicate that their combined toxicity needs to be considered, with cesium, molybdenum, mercury, PFHS, and PFNA especially being of concern. More research is required into this matter. Research into the levels of exposure to multiple pollutants required to bring about AL must be explored if we are to gain an understanding of the realworld mixture concentrations that bring forth disease. This is of paramount significance for at-risk communities because their members lack the resources to effectively manage stress and/or avoid exposure to environmental contaminants. Funding: This research was funded by NHLBI grant R25 HL105400 and the BCSP Foundation. Institutional Review Board Statement: This study did not require IRB approval because de-identified secondary data were used. In the collection of the data by the Centers for Disease Control and Prevention, the study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Ethics Committee of the Centers for Disease Control and Prevention. Informed Consent Statement: Not applicable. Data Availability Statement: The NHANES dataset is publicly available online, accessible at cdc. gov/nchs/nhanes/index.htm (accessed on 12 January 2023). Conflicts of Interest: The authors declare no conflict of interest.
Out-of-plane Stokes imaging polarimeter for early skin cancer diagnosis Abstract. Optimal treatment of skin cancer before it metastasizes critically depends on early diagnosis and treatment. Imaging spectroscopy and polarized remittance have been utilized in the past for diagnostic purposes, but valuable information can be also obtained from the analysis of skin roughness. For this purpose, we have developed an out-of-plane hemispherical Stokes imaging polarimeter designed to monitor potential skin neoplasia based on a roughness assessment of the epidermis. The system was utilized to study the rough surface scattering for wax samples and human skin. The scattering by rough skin—simulating phantoms showed behavior that is reasonably described by a facet scattering model. Clinical tests were conducted on patients grouped as follows: benign nevi, melanocytic nevus, melanoma, and normal skin. Images were captured and analyzed, and polarization properties are presented in terms of the principal angle of the polarization ellipse and the degree of polarization. In the former case, there is separation between different groups of patients for some incidence azimuth angles. In the latter, separation between different skin samples for various incidence azimuth angles is observed. Introduction Melanoma is the most deadly form of skin cancer, with a 14% mortality rate and 60,000 new cases per year in the US. 1 Successful clinical detection of melanoma ranges between 40% and 80%, 2 and accuracy of diagnosis depends heavily on clinician expertise. Improved diagnostic techniques for screening of melanoma would have a great impact on patient care and survival. Several methodologies and instrumentations have been used in the past few years for this purpose. Swanson et al. 3 utilized an imaging system to diagnose skin lesions using several morphological and physiological parameters. These parameters were derived from predictive models of light absorption and scattering by chromophores such as hemoglobin, keratin, and melanin at different epidermal and dermal depths. Wan and Applegate 4 developed a high-resolution molecular imaging technique, based on a fusion of spectroscopy and optical coherence microscopy to provide a strong contrast between melanotic and amelanotic regions. Han et al. 5 presented a near-infrared (NIR) fluorescence imaging system with particular utility for direct in vivo characterization of cutaneous melanin. Yaroslavsky et al. 6 utilized a multispectral polarized light imaging technique to enhance the skin lesion margin. Tuchin and colleagues 7,8 developed several methods to reduce the confounding effect of light scattering inside the biological tissue and blood, which allowed them to increase the quality of optical imaging, especially for cancer diagnostics. Polarized light imaging has been used in the past as a noninvasive method for evaluating borders of nonpigmented lesions. 9, 10 Jacques and colleagues 11,12 used polarized light imaging to determine the margins of certain skin cancers by relying on the contrast provided by a cancer-induced disruption of the underlying collagen matrix. A similar effect is produced by scar tissue that exhibits a lower degree of polarization than normal tissue, possibly induced by the random restructuring of collagen. 13 Ghosh and colleagues [14][15][16] reported a sensitive polarimetric platform and presented a Mueller matrix decomposition methodology and its application to decouple the combined polarization information from tissue. 
Furthermore, Vitkin and colleagues 17,18 and Ghosh and colleagues 19,20 presented several studies on tissue polarimetry and its application in biomedical imaging and diagnosis. Recently, several authors have been evaluating superficial structural components, such as roughness, as a way of discriminating melanocytic from normal pigmented lesions. For example, Tchvialeva and colleagues 21,22 developed a methodology for quantifying skin surface roughness using laser speckle contrast. Pacheco et al. 23 used microtopographic inspection of the skin surface to determine a unique pattern of roughness for benign and malignant skin lesions. Gareau et al. 24 used reflectance confocal microscopy to introduce a roughness score for the dermal-epidermal junction. Polarized backscattering measurements offer high sensitivity to many types of defects, including surface roughness, subsurface features, and particulate contaminants. 25-28 However, it is often difficult to distinguish between these various scattering mechanisms. Germer and colleagues 25,29 used light-scattering ellipsometry to distinguish surface from subsurface scattering for a variety of inorganic materials, such as silicon wafers, glass, and metals. The authors found that different single-scattering mechanisms did not depolarize the light but yielded different polarization states. Our group 29 has applied similar techniques to the study of skin, demonstrating that rough-surface effects of skin could be minimized using out-of-plane polarized illumination and detection. Finally, spectropolarimetric techniques have been used successfully to assess skin roughness, including wrinkles. 10,30

In this paper, we introduce a novel polarimetric system that captures the illumination-direction dependence of the polarization state of scattering from skin. After calibrating the Stokes polarimetric imaging module, the polarization state of each illumination tube was aligned precisely with a set of gold roughness standards, and the overall system was tested with optical phantoms. A facet scattering model was used to validate the results of the calibration. After validation of the method against roughness standards, we tested the instrument in a clinical study on human skin. Polarization parameters such as the principal angle of the polarization ellipse and the degree of polarization show meaningful behavior in relation to the change of illumination azimuth angle.

Theory

The bidirectional reflectance distribution function (BRDF), fr, is commonly used to describe scattering by surfaces. 31 For isotropic materials, the BRDF is a function of the incident polar angle θi, the incident azimuthal angle ϕi, and the polar scattering angle θs, and is given by

fr(θi, ϕi, θs) = Φs / (Φi Ω cos θs),

where Φs is the scattered light power, Φi is the incident light power, and Ω is the collection solid angle (see Fig. 1). The BRDF is a function of the incident light polarization and, if expressed as a Mueller matrix, can include information about the outgoing light polarization.
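As a concrete illustration of the definition above, the sketch below (Python, with invented power levels and the 49-deg camera angle used later in the paper) evaluates the BRDF for one measurement geometry.

```python
import math

phi_i = 1.0e-3                  # incident power (W), hypothetical
phi_s = 2.5e-7                  # power collected in the detector solid angle (W)
omega = 3.0e-3                  # collection solid angle (sr), hypothetical
theta_s = math.radians(49.0)    # polar scattering angle of the camera

f_r = phi_s / (phi_i * omega * math.cos(theta_s))   # BRDF in sr^-1
print(f"BRDF = {f_r:.3f} sr^-1")
```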
Skin is a very complex layered medium exhibiting multiple scattering components. To simulate this complexity fully, one would need to include a combination of numerous scattering mechanisms, including rough-surface scattering from the stratum corneum and the dermoepidermal junction, single scattering from cell nuclei, and multiple scattering from cells and collagen bundles. Significant insight into light scattering, however, can be obtained from a few simple models. Subsurface scattering has been modeled in the past as a sum of a single-scattering component, based on a Henyey-Greenstein phase function, and a diffuse, highly scattering component. 32 In this paper, we chose to treat the scattering with a rough-surface model for the air/stratum corneum interface and a totally diffuse, depolarizing model for the volume scattering beneath the surface. The facet scattering model for rough surfaces assumes that the surface features and correlation lengths are large compared to the wavelength and that the surface can be represented by random flat facets, each of which specularly reflects light according to its orientation. The BRDF is then given by 33

fr = P(ςx, ςy) R / (4 cos θi cos θs cos^4 θn),

where θi is the incident angle, θs is the reflected (scattered) angle, θn is the angle of the facet normal to the mean surface normal, and P(ςx, ςy) is the slope distribution function describing the facet orientations, ςx and ςy being the facet slopes in the x and y directions, respectively. The reflectance of the facets, R, is presumed to be given by the Fresnel equations and is the only contribution to the polarimetric behavior of the BRDF; that is, the slope distribution function and the polarization are independent. Roughness may also be treated using first-order vector perturbation theory; 34 however, the roughness of skin is generally considered too large for perturbation theory to be applicable. That theory does share the property that the light-scattering polarization is independent of the roughness statistics.

Because we treat the volume scattering as completely depolarizing, the total scattering signal has a polarized part, which indicates the rough-surface scattering, and a depolarizing part, which carries little information. If the scattered Stokes vector is S, we can uniquely decompose it into its polarized component, Spol, and its unpolarized component, Sunpol,

S = Spol + Sunpol, with Spol = [(S2^2 + S3^2 + S4^2)^(1/2), S2, S3, S4]^T and Sunpol = [S1 − (S2^2 + S3^2 + S4^2)^(1/2), 0, 0, 0]^T.

We thus characterize the scattering polarization by the principal angle of the polarization ellipse, η, and the degree of polarization, DOP, or

η = (1/2) arctan(S3/S2) and DOP = (S2^2 + S3^2 + S4^2)^(1/2) / S1.

We use the Modeled Integrated Scatter Tool (MIST) to evaluate the facet scattering model. 35 The MIST program is designed to evaluate the reflectance integrated over a solid angle Ω,

ρ(Ω) = ∫Ω fr cos θs dΩ,

for a wide variety of scattering models. The program can evaluate the integrated reflectance as a function of model parameters (e.g., index of refraction and slope distribution function), geometric parameters (e.g., incident direction and collection geometry), wavelength, and polarization.

Fig. 1 The geometry for out-of-plane scattering; θi is the incidence polar angle, θs is the scattered polar angle, and ϕi is the incidence azimuth angle.

Out-of-plane scattering measurement has been shown to be helpful for distinguishing between different scattering mechanisms. 25,27 In the plane of incidence, an isotropic material will not mix polarization (defined by the electric field) parallel to the plane (p-polarization) with that perpendicular to the plane (s-polarization). Furthermore, the polarization of light scattered by many models, including the facet model, subsurface Rayleigh models, and models for particles above the surface, shows very little polarimetric differentiation for s-polarized incident light. As a result, the greatest polarimetric differentiation between scattering sources occurs when viewing the samples out of the plane of incidence with p-polarized incident light. However, when illuminating the samples from many directions, adequate differentiation can be obtained when the incident polarization is linearly polarized at 45 deg for all incident directions.
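The decomposition and the two summary parameters can be evaluated directly from a measured Stokes vector. The Python sketch below uses an arbitrary example vector and the 1-based component ordering of this section; it is only an illustration, not the instrument software.

```python
import numpy as np

S = np.array([1.00, 0.32, -0.18, 0.05])    # (S1, S2, S3, S4), example values

p = np.sqrt(S[1]**2 + S[2]**2 + S[3]**2)   # magnitude of the polarized part
S_pol = np.array([p, S[1], S[2], S[3]])
S_unpol = np.array([S[0] - p, 0.0, 0.0, 0.0])
assert np.allclose(S, S_pol + S_unpol)     # unique decomposition S = Spol + Sunpol

dop = p / S[0]                                   # degree of polarization
eta = 0.5 * np.degrees(np.arctan2(S[2], S[1]))   # principal angle of the ellipse
print(f"DOP = {dop:.3f}, eta = {eta:.1f} deg")
```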
Experimental Setup

Our hemispherical Stokes imaging system is designed to enable multi-angle, out-of-plane measurements without any moving parts. The system is composed of a Stokes imaging polarimeter with a 12-bit digital charge-coupled device (CCD) black-and-white camera (Dalsa Genie, Billerica, MA) positioned at a single scattering angle θs = 49 deg, as shown in Fig. 2.* Sixteen illumination tubes are distributed about a hemisphere: illuminators 1 to 9 are centered at θi = 49 deg, illuminators 10 to 15 are centered at θi = 24 deg, and illuminator 16 is centered on the surface normal, θi = 0 deg. The choice of polar incidence angles (0 deg, 24 deg, and 49 deg) was made simply by attempting to cover the hemisphere with ports. Each illumination tube contains a tricolor light-emitting diode (LED) followed by a polarizer P1 (Edmund Optics, Barrington, NJ) and a lens l1 (Edmund Optics, Barrington, NJ). The tricolor LED emits in three bands: red, centered on λ = 630 nm; green, centered on λ = 525 nm; and blue, centered on λ = 472 nm (widths of 30 nm, measured as the full width at half maximum). Each LED is controlled with a digital-to-analog module (Measurement Computing Corp., Norton, MA). Each tube illuminates the sample, located at the center of the hemisphere, with an approximately 2-cm-diameter beam. The polarization state analyzer (PSA) consists of two nematic liquid crystal variable retarders (LC1 and LC2; Meadowlark Optics Inc., Frederick, CO) followed by a vertical linear dichroic polarizer, P2. Figure 3 presents the system geometry, showing one illumination tube and the Stokes imaging polarimeter. (Fig. 3 caption: LED is a three-color light source, P1 is the illumination polarizer at 45 deg with respect to the plane of incidence, l1 is the collimating lens, LC1 and LC2 are liquid crystal retarders, P2 is the vertical reference polarizer, and l2 is a zoom lens; CCD is a black-and-white camera used for acquisition.) Images are captured by the fast-acquisition CCD camera attached to a zoom lens l2 (Computar, Commack, NY). The LC cells are mounted on manual rotation stages (precision of 1 deg) that allow adjustment of their fast-axis rotation angles, γ1 and γ2, with respect to the axis of the polarizer. It takes 2 min to obtain a set of Stokes vector images for each of the 16 illumination directions using this setup.

*Certain commercial equipment, instruments, or materials are identified in this paper in order to specify the experimental procedure adequately. Such identification is not intended to imply recommendation or endorsement by the National Institute of Standards and Technology, The Catholic University of America, or the Washington Hospital Center, nor is it intended to imply that the materials or equipment identified are necessarily the best available for the purpose.

Calibration of the Stokes Polarimeter

The calibration of the Stokes imaging polarimeter followed the method outlined by Boulbry et al., 36 which involves generating a set of known Stokes vectors and computing the data reduction matrix W from the measured intensities and the calibration Stokes vectors. A Stokes vector, which describes the polarization state of light propagating in a particular direction, takes the form

S = (S1, S2, S3, S4)^T = (Ix + Iy, Ix − Iy, I+45deg − I−45deg, Ircp − Ilcp)^T,
where the first term, S1, is the total intensity, and Ij is the intensity at various states of polarization, with j = (x, y, +45 deg, −45 deg, rcp, lcp). The x (y) direction is defined in our measurements as the direction parallel (perpendicular) to the plane defined by the viewing direction and the surface normal. The acronyms rcp and lcp stand for right-circular and left-circular polarization, respectively, whereas ±45 deg stands for linear polarization of the light at ±45 deg about the normal. One cannot measure the elements of S directly and thus must compute them through polarization analysis measurements. The relationship between the Stokes elements and the measured intensities can be expressed in matrix form as

S = W I,

where W is the data reduction matrix and I is a vector of the measured intensities for different combinations of retardances of LC1 and LC2. A minimum of four intensity measurements is required to compute S. 12 Calibrating the polarimeter means assessing W. The LC azimuth angles as well as their driving voltages were chosen to minimize the condition number of W, Cond(W), defined as the ratio of the largest to the smallest of the singular values of W. 37,38 The higher the condition number, the less linearly independent are the columns and rows of W. Minimizing the condition number maximizes the relative importance of each of the measurements, increasing system stability and decreasing noise propagation. 36,39 We computed a single set of driving voltages (hence, retardances) and angles for LC1 and LC2 that minimized the sum of squares of the condition numbers for the three available illumination wavelengths. The LC cells were modeled as linear retarders in the simulations. The retardation curves as functions of the driving voltage were provided by the supplier at λ = 630 nm and were estimated for 472 and 525 nm under the assumption of negligible dispersion. The chosen azimuth angles are γ1 = 22 deg and γ2 = 45 deg, and the retardances for the four measurements, (δ1, δ2) = (δ1a, δ2a), (δ1a, δ2b), (δ1b, δ2a), and (δ1b, δ2b), along with the corresponding condition numbers, are presented in Table 1. Since we chose to use the same set of driving voltages for each of the three illumination wavelengths, these parameters led to condition numbers for W that are not the ideal minimum of 1.73 reported by Tyo. 39

To calibrate the polarimeter, we set up a polarization state generator directly in front of the imaging arm to generate an input set of known Stokes vectors. These consisted of a set of 18 linearly polarized states generated by a rotating linear polarizer and 18 circularly polarized states generated by a rotating linear polarizer positioned before a quasi-achromatic quarter-wave plate. The vectors spanned the equator and a meridian of the Poincaré sphere. The original calibration procedure by Boulbry et al. 36 used a Fresnel rhomb instead of an achromatic quarter-wave plate, which required a beam displacement that made the calibration harder. We chose a quasi-achromatic quarter-wave plate to avoid the beam displacement, at the expense of the achromaticity and accuracy of the retardance. Nevertheless, the resulting calibration error of the polarimeter is less than 3%, which is a typical value for this type of measurement. 40
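To make the calibration algebra concrete, the following Python/NumPy sketch uses synthetic stand-ins for the generated calibration states and the four measured intensities per state (it is not the actual instrument data or software); W is estimated by least squares and its condition number is evaluated from the singular values.

```python
import numpy as np

rng = np.random.default_rng(0)

S_cal = rng.normal(size=(4, 36))      # known calibration Stokes vectors (4 x 36)
A_true = rng.normal(size=(4, 4))      # unknown instrument (analysis) matrix
I_meas = A_true @ S_cal + 1e-3 * rng.normal(size=(4, 36))   # noisy intensities

# Least-squares estimate of the data reduction matrix W such that S = W I.
X, *_ = np.linalg.lstsq(I_meas.T, S_cal.T, rcond=None)
W = X.T

sv = np.linalg.svd(W, compute_uv=False)
print(f"Cond(W) = {sv.max() / sv.min():.2f}")   # ratio of singular values

S_est = W @ I_meas[:, 0]              # applying the calibration to one measurement
```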
Because the setup is not built on a goniometric platform, it is somewhat difficult to adjust the orientation of the linear polarizer in each illumination tube. To aid in the adjustment of the incident polarizer orientations, we measured the Stokes vector for λ = 630 nm illumination from each of the tubes and that of four reference aluminum samples that were roughened by electrodischarge machining at different levels of roughness and coated with gold. 30 A detailed process of fine adjustment of each illumination polarizer is available in our previously reported study. 10 We calculated the ellipticity and the principal angle of polarization of the measured Stokes vectors. The data were then modeled using a facet scattering model from the SCATMECH/MIST library. 35 There is good agreement between the results of the model and experimental measurements of the gold samples obtained using the Stokes polarimeter with well-aligned illumination polarizers. Phantom Samples The system was tested using skin-simulating phantoms. We built surface-roughened optical phantoms that mimic skin's optical properties. The values provided for human dermis available in the literature 41 are, at λ = 630 nm, the absorption coefficient, μ a , which varies from 0.1 to 0.2 mm −1 , and the reduced scattering coefficient, μ s ′, which varies from 3.55 to 5 mm −1 . We used wax (Batik Wax, Jacquard, Healdsburg, CA) as the casting material, since it is easy to mold; however, wax has scattering properties that needed to be accounted for in the final result. The phantoms were made as follows. Wax was melted on a hotplate stirrer, and TiO 2 was added to adjust μ s ′. The mixture was stirred until it was visibly homogeneous, and black wax (Jacquard, Healdsburg, CA) was incorporated to adjust μ a . This preparation was poured into two molds. One had a rough imprint (based on sandpaper with ANSI grade 60) at the bottom that created a 5-mm-thick phantom for polarization measurements. The other provided a smooth 2-mm-thick phantom that was used to measure the bulk optical properties. The inverse adding doubling (IAD) program 42 was used with measurements of the total reflectance and the total transmittance to compute μ s ′ and μ a . These measurements were obtained with an integrating sphere (Labsphere, 10.2 cm diameter, calibrated wall reflectance of 97.1%) and a He-Ne laser source at λ = 632.8 nm (CVI Melles Griot, Albuquerque, NM). Table 2 presents the measured optical parameters of the four smooth wax phantoms. The anisotropy factor for TiO 2 has been reported as equal to 0.5 by several authors at the wavelength of interest. 43,44 All phantoms exhibited optical properties that fit the reference human dermis values, except phantom B. For each phantom, we computed the Stokes vector versus the illumination azimuth direction, ϕ i , following the procedure previously explained. We used only one illumination wavelength, λ = 630 nm. The principal angle of the polarization ellipse, η, and the degree of polarization, DOP, were estimated as functions of the illumination azimuth angle and were compared to the same parameters predicted by the facet scattering model (with optical constants n = 1.42 and k = 0 for wax at λ = 630 nm), shown for tubes 1 to 9 in Fig. 4 (η) and Fig. 5 (DOP). The principal angle of polarization shows a cyclical behavior as a function of the illumination azimuth angle. The incoming polarization is linearly polarized at 45 deg to the local incident plane for each illumination direction.
Therefore, the resulting principal angle of the polarization is not symmetrical. Furthermore, the discontinuities visible in the graph are due to the mathematical formulation of the principal angle of polarization, where a small change in S 2 or S 3 can cause a shift from −90 to 90 deg, as seen in Fig. 4. Figures 4 and 5 show that although the scattering and absorption coefficients do not affect the resulting principal angle of polarization, they do influence the degree of polarization. That is, the phantom with the highest scattering coefficient (phantom B) consistently has the lowest degree of polarization. Skin Samples To assess different rough-surface scattering effects due to multiple sources of skin scattering, polarization studies were conducted on Caucasian skin in vivo. A portion of the skin was smeared with index-matching gel and covered with a thin glass slide, as illustrated in Fig. 6. A glass slide and an index-matching fluid (mineral oil) were used to minimize the effect of skin roughness and to increase the sensitivity of the measurement to subsurface features. The covered portion allowed for a quick elimination of the rough-surface effect in one section of the image. The birefringence of the glass slide and the matching fluid were insignificant; therefore, they were not expected to alter the polarization state. The remaining part of the skin sample was left untouched; rough-surface scattering effects were most visible in this section. The MIST program was utilized to evaluate the facet scattering model for each experiment, with an average index of refraction n = 1.38 for skin tissue 45 at λ = 630 nm and 45 deg linearly polarized illumination based on the geometric parameters of the imaging system. Figure 7 shows results for the glass-slide-covered and uncovered portions of the skin sample for two different incidence angles θ i . The graphs show the ϕ i -dependent variation of the measured polarization parameters for both skin regions. Results An ongoing clinical trial is being conducted at the Washington Cancer Institute's Melanoma Center, and Institutional Review Board approval and informed consent have been obtained. The goal of the study is to assess the validity of rough-surface scattering as a diagnostic tool for melanoma. A total of 13 individuals have been imaged so far. All volunteers were Caucasian with fair skin (types I and II in the Fitzpatrick scale 46 ). Nine benign pigmented nevi, two melanocytic nevi, two melanomas, and 13 normal skin sites were imaged. All suspicious lesions were excised and sent to pathology for evaluation. Sequential azimuthal illumination images of a benign nevus from one patient are depicted in Fig. 8. The principal angle of polarization and the degree of polarization were calculated for each nevus or lesion and its surrounding area. The combined results are presented in Fig. 9 (for DOP) and Fig. 10 (for η), and in the former figure the data are compared to model results. The error bars in all plots are the standard deviation of the mean for each group of patients; these are very low for the DOP of normal skin and are therefore not depicted in Fig. 9. Maximum values of the standard deviation for the different groups of patients were: benign nevi, 0.02 for DOP and 4.30 deg for η; melanocytic nevi, 0.03 for DOP and 5.20 deg for η; and melanoma, 0.03 for DOP and 4.24 deg for η. For most of the azimuth incidence angles (except 216 and 282 deg), a separation exists between the three groups of pigmented lesions when observing the degree of polarization.
In contrast, separation of patient groups is visible for only some of the azimuth incidence angles using the principal angle of polarization. Pigmented lesions have a higher absorption coefficient compared to normal skin. Therefore, the degree of polarization of backscattered light for pigmented lesions is higher than that of normal skin tissue, as expected due to the Umov effect 47 and the loss of highly scattered light in the overall diffuse reflectance from pigmented lesions. 48 The relationship between diffuse reflectance from the skin tissue and the degree of polarization, DOP, is explored in Fig. 11. Furthermore, scattering from surface and subsurface structures of melanocytic nevi and melanoma is higher (possibly due to greater roughness) compared to benign nevi (less roughness). The degree of polarization of backscattered light from the melanocytic nevi and melanoma should thus be lower than that of the benign lesion. (Fig. 9 caption: degree of polarization for skin illuminated sequentially by tubes 1 to 9 (θ i = 49 deg); circles are averaged normal skin values (Caucasian), crosses are benign nevi, upright triangles are melanocytic nevi, and upside-down triangles are melanoma. Fig. 11 caption: degree of polarization versus diffuse reflectance measured at various incidence azimuth angles of a benign nevus (crosses) located at the right hip/lower abdomen and its normal surrounding tissue (circles) from a 29-year-old Caucasian female.) Discussion We have shown that out-of-plane Stokes imaging polarimetry can provide information regarding rough-surface scattering, including that from highly scattering and absorbing tissue. The system was calibrated by generating a set of known Stokes vectors and computing a data reduction matrix using a previously published calibration methodology. The measurements rely on out-of-plane polarized illumination with polarization-sensitive viewing. The system was utilized to study rough-surface scattering from wax samples and human skin. The metrics utilized for analysis are the principal angle of the polarization ellipse and the degree of polarization. Based on these, the scattering by rough skin-simulating phantoms exhibited behavior that is reasonably described by a facet scattering model. The ultimate goal of this study was to demonstrate that a melanoma lesion has a different superficial structure (roughness) than a normal pigmented lesion. A few studies, mostly based on speckle sensing, have pointed to this phenomenon, although the biological mechanisms are not clear at present. One hypothesis is that early-stage melanocytic cells form nodes at the dermal-epidermal junction. At later stages, melanomas progress to the radial growth phase and vertical growth phase and consequently invade the dermal component, changing its architecture. 49 This change is reflected at the surface and could be responsible for the different roughness. Using the out-of-plane imaging system, we believe information about the roughness structure of lesions from different groups of patients can be gathered. We applied our methodology to measure roughness properties across four groups of patients: normal skin, benign nevi, melanocytic nevi, and melanoma. Although our results are very preliminary due to the small population, we note a separation between the degrees of polarization of these groups. Particularly in Fig. 9, representing the degree of polarization for the four populations, the melanoma data seem to be separated from the other data at all angles except two (216 deg and 282 deg).
The degree of polarization, though, is influenced not only by roughness but also by the medium's scattering and absorption properties, as we have shown in our wax phantom study. The more interesting metric to us is the principal angle of polarization, which is influenced primarily by different rough-surface scattering mechanisms. Again, looking at the wax study, we note that all the rough phantoms have a similar rough structure; therefore, they all present the same η behavior regardless of their scattering properties. Only when the rough surface structure is changed, as in our experiment with human skin covered with a glass slide, do we note a different η behavior, since the mechanisms of rough-surface scatter are completely different, although that is true only for a few azimuth angles (0 deg and 252 deg at θ i = 49 deg; 0 deg and 240 deg at θ i = 24 deg). Similarly, in the human data, the principal angle of polarization for melanoma deviates from the model behavior, the melanocytic nevi, and the benign nevi at a few angles (36 deg, 108 deg, 144 deg, 252 deg, 282 deg, 324 deg). We believe this could be due to a difference in the roughness of this particular lesion compared to normal skin or benign lesions. To truly generalize these preliminary findings, further studies are necessary to confirm our hypothesis. A clinical trial at the Washington Cancer Institute's Melanoma Center is ongoing and will be the focus of future analysis.
Gender and Socioeconomic Inequality in the Prescription of Direct Oral Anticoagulants in Patients with Non-Valvular Atrial Fibrillation in Primary Care in Catalonia (Fantas-TIC Study) Background: Evidence points to unequal access to direct oral anticoagulant (DOAC) therapy, to the detriment of the most socioeconomically disadvantaged patients in different geographic areas; however, few studies have focused on people with atrial fibrillation. This study aimed to assess gender-based and socioeconomic differences in the prescriptions of anticoagulants in people with non-valvular atrial fibrillation who attended Primary Care. Method: A cross-sectional study with real-world data from patients treated in Primary Care in Catalonia (Spain). Data were obtained from the SIDIAP database, covering 287 Primary Care centers in 2018. Results were presented as descriptive statistics and odds ratios estimated by multivariable logistic regression. Results: A total of 60,978 patients on anticoagulants for non-valvular atrial fibrillation were identified: 41,430 (68%) were taking vitamin K antagonists and 19,548 (32%), DOACs. Women had higher odds of treatment with DOAC (adjusted odds ratio [ORadj] 1.12), while lower DOAC prescription rates affected patients from Primary Care centers located in high-deprivation urban centers (ORadj 0.58) and rural areas (ORadj 0.34). Conclusions: DOAC prescription patterns differ by population. Women are more likely to receive it than men, while people living in rural areas and deprived urban areas are less likely to receive this therapy. Following clinical management guidelines could help to minimize the inequality. Introduction Gender-Based and Socioeconomic Differences in Health Care and Drug Prescription Oral anticoagulants are the drugs of choice to prevent stroke in people with atrial fibrillation. In non-valvular atrial fibrillation, two classes of oral anticoagulants are available for preventing a thromboembolic event: vitamin K antagonists (VKA) and direct oral anticoagulants (DOAC). VKAs are characterized by a narrow therapeutic window, require frequent follow-ups, are effective for preventing stroke, increase the risk of bleeding, can be used in people with any degree of renal insufficiency, and are less expensive than DOACs. For their part, DOACs do not require monitoring, effectively prevent stroke, and also increase the risk of bleeding, although to a lesser extent than VKAs for intracranial bleeding [1,2]. The cost and follow-up profile of a given treatment can influence its prescription differently depending on certain factors and contexts. For example, prescription patterns can be conditioned by sociodemographic and economic factors that are unrelated to medication appropriateness criteria [3]. Different authors have analyzed the influence of deprivation in health inequalities, at both a national and international level [4][5][6][7]. Deprivation has been conceptualized since the 1970s [4], differing from the classical concept of poverty in that it is linked to difficulties (capability) in access to employment, education, culture and social development at levels considered acceptable for society. The concept of deprivation thus encompasses more than food insecurity, lack of basic goods such as clothing, sub-standard housing and other purely economic or monetary indicators of well-being, in consonance with a holistic model of health [8]. 
In Spain, several studies have applied deprivation indexes to different settings based on the MEDEA project ("Mortality and socioeconomic and environmental inequalities in small Spanish areas") [8]. In England, socioeconomic deprivation was associated with opioid and non-opioid analgesics, antipsychotics and reflux medication prescriptions, while affluence was associated with epinephrine, combined oral contraceptives and hormone replacement therapy [9]. A recent meta-analysis showed that the rate of prescription of guideline-recommended medications in managing acute coronary syndrome was significantly different between patients with the lowest and the highest socioeconomic status [10]. Regarding oral anticoagulants, in Sweden, differences by age, income, education and country of birth were found in their prescribing after stroke. Those differences were not explained by common risk factors, indicating socioeconomic inequalities in the prescribing of preventive treatment after stroke [11]. In Denmark, patients with atrial fibrillation who had a low income, low education and were living alone were associated with a lower chance of being initiated with oral anticoagulation therapy, and new high-cost drugs were increasing inequality [12]. To date, few studies have assessed inequalities in the prescription of DOACs, which are more expensive than VKA, in patients with atrial fibrillation [6,13]. There are those that have highlighted the substantial disparities that exist around access to new anticoagulant therapies in the USA among socioeconomically disadvantaged patients and the need to study inequalities related to the prescription of oral anticoagulants [6,13]. Differences in DOAC prescription patterns have already been observed in relation to socioeconomic indicators [6]. Yet, any analysis of socioeconomic determinants must also take into account the gender dimension, as this is the relational aspect that governs how sex interacts with the world around it [14,15]. There is evidence that oral anticoagulants are prescribed less frequently to women compared to men with atrial fibrillation [16]; although, few studies have evaluated the variety of anticoagulant prescribed by gender [13]. Not enough information is currently available regarding gender-based and socioeconomic differences in DOAC prescription in our geographical area. Studying gender-based and socioeconomic differences in health care is essential for identifying modifiable causes of inequality and developing solutions to guarantee equity and quality in health care. The aim of this study is to assess gender-based and socioeconomic differences in the prescription of DOACs in people with non-valvular atrial fibrillation seen in Primary Care in Catalonia (Spain). Study Design and Population As part of the Fantas-tic study in Catalonia, we used a cross-sectional design and real-world data from patients seen in Primary Care centers (PCCs) managed by the Catalan Health Institute (ICS). The 287 PCCs included employ 3384 physicians and are responsible for the care of an estimated 5,564,292 people, about 80% of the Catalan population. All registered patients diagnosed with non-valvular atrial fibrillation and treated with oral anticoagulants in 2018 were included. 
Data were drawn from the SIDIAP database (Information System for Research in Primary Care), a representative population-based database in Catalonia that collects anonymized clinical information from different data sources: (a) electronic health records from ICS Primary Care, including sociodemographic characteristics, registered diagnoses coded according to the International Classification of Diseases, 10th revision (ICD-10) [17], general practitioner prescriptions and clinical parameters; (b) laboratory data; and (c) prescription data from the Catalan Health System community pharmacies, based on the Anatomical Therapeutic Chemical (ATC) Classification System codes [18]. A total of 97,350 registered patients with a diagnosis of atrial fibrillation from 12 months prior to the study were identified from the SIDIAP database, and all those who had an active prescription for oral anticoagulants on 1 January 2018 were included. All authorized anticoagulant treatments with VKAs (acenocoumarol and warfarin) and DOACs (dabigatran, rivaroxaban, apixaban and edoxaban) in Spain in 2016 were included in the study. Drug data based on ATC codes were collected [18]. Inclusion and Exclusion Criteria We included patients under treatment with oral anticoagulants and followed in PCCs who had been diagnosed with non-valvular atrial fibrillation at least one year before the study date (1 January 2018) and had at least six controls of the international normalized ratio (INR) over the previous 12 months. This restriction was aimed at minimizing INR variability at the start of the treatment and avoiding the effect of temporary withdrawal of VKAs in patients with good INR control. Patients were considered to have been exposed to anticoagulation if they had been prescribed anticoagulants (acenocoumarol, warfarin, dabigatran, rivaroxaban, apixaban or edoxaban) for at least two months before the start of the study. The anticoagulant medication included in the study was the one which had been started the closest to the study date. We excluded patients with no oral anticoagulant therapy, patients whose treatment was monitored in hospital, those with valvular atrial fibrillation (mitral stenosis or with a mechanical prosthetic valve), pregnant women, and patients whose anticoagulant treatment at the beginning of the study could not be ascertained. Secondary variables. Sociodemographic variables related to patients (gender, age) and the socioeconomic deprivation degree of the PCC geographical area. To measure deprivation, we followed the classification used by the Catalan Health Institute, which uses the MEDEA index [8] to rate urban PCCs according to the deprivation level of each PCC area (the census tract corresponding to the PCC area), which is updated when the census is updated, every 10 years (we used the results calculated for 2018, the period of study). The MEDEA instrument classifies urban areas on a scale from MEDEA 1 (low deprivation) to MEDEA 5 (high deprivation). As a composite deprivation index, it assesses barriers to accessing employment, education, culture and social development at a level that is considered acceptable to the society or surrounding region, and it is composed of subindicators for employment and education [5]. As the MEDEA was initially designed for urban areas, based on an analysis of five large Spanish cities [8], rural PCCs were not included in the classification. 
In our study, rural PCCs were grouped into a separate category and defined as centers serving a population of less than 10,000 inhabitants and with a population density of less than 150 inhabitants/km 2 [19]. Other secondary variables included clinical variables: time since diagnosis of atrial fibrillation; health care setting where oral anticoagulants were prescribed (Primary Care or hospital); history of cardiovascular disease; intracranial bleeding; comorbidities; risk factors for bleeding; risk scores based on participants' real-world data (CHA 2 DS 2 -VASC for stroke risk and HAS-BLED for bleeding risk); patients attending outside the PCC (home care or institutionalized care); and teaching PCC. Comorbidities were classified according to the ICD-10 [17]. Statistical Analysis Data cleaning was performed by verifying minimum and maximum values and by analyzing missing data. The treatment variable was classified as VKA or DOAC. Once the database was cleaned, a descriptive analysis was undertaken. Categorical variables were expressed as absolute and relative frequencies and continuous variables as median (interquartile range, IQR). Included patients were described according to their treatment and other characteristics, and they were compared by using the two proportion Z-test for categorical variables and the non-parametric Mann-Whitney U test for continuous variables. To test the association between the type of treatment and the rest of the variables, and to study the factors related to the prescription of DOACs, we calculated the adjusted odds ratio (ORadj) using a multivariable logistic regression model. The statistical analysis was performed using Microsoft Office Excel 2013 (Redmond, Washington, USA) and SPSS version 20.0 software (Armonk, New York, NY, USA). Regarding gender differences in the type of oral anticoagulant prescribed, a higher proportion of women were prescribed DOACs than men (Table 1). Of the patients receiving DOACs, 50.1% were women. There were also differences according to the level of socioeconomic deprivation; patients whose PCC area was classified as the least deprived (MEDEA 1) were more likely to be prescribed DOACs (12.8% VKA versus 18.6% DOAC; p < 0.001). The smallest proportion of patients receiving DOACs were those who attended rural PCCs (22.6% VKA versus 13.0% DOAC; p < 0.001). Figure 1 presents the proportional distribution of VKA and DOAC prescriptions by PCC category (MEDEA index 1 to 5, rural PCCs). In the centers classified as MEDEA 1, the difference in the prescription between VKAs and DOACs is smaller (59.3% VKA versus 40.7% DOAC) than in rural PCCs (78.7% VKA versus 21.3% DOAC).
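As a concrete illustration of the modelling step described in the Statistical Analysis section, a minimal sketch of an adjusted-odds-ratio analysis is shown below. It is hypothetical: the file name, column names, and covariate list are placeholders, and it uses Python/statsmodels rather than the Excel/SPSS workflow actually employed in the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis table: one row per anticoagulated patient.
# 'doac' is 1 for DOAC and 0 for VKA; 'medea' is the PCC deprivation category
# ("1" least deprived ... "5" most deprived, plus "rural").
df = pd.read_csv("af_anticoagulants_2018.csv")          # placeholder file name
df["medea"] = pd.Categorical(df["medea"],
                             categories=["1", "2", "3", "4", "5", "rural"])

# Multivariable logistic regression: DOAC vs. VKA, adjusted for patient and
# centre characteristics (MEDEA 1 and male sex serve as reference levels).
model = smf.logit(
    "doac ~ C(medea) + C(sex, Treatment(reference='male')) + age "
    "+ heart_failure + renal_insufficiency + years_since_af_diagnosis",
    data=df,
).fit()

# Adjusted odds ratios with 95% confidence intervals.
or_table = np.exp(model.conf_int())
or_table["ORadj"] = np.exp(model.params)
or_table.columns = ["2.5%", "97.5%", "ORadj"]
print(or_table[["ORadj", "2.5%", "97.5%"]])
```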
Patients with a history of cardiovascular diseases, cerebrovascular diseases and gastrointestinal bleeding were prescribed DOACs more frequently than VKAs. On the other hand, DOAC prescriptions were more frequent in people with a score of less than 2 on the CHA 2 DS 2 -VASC tool for assessing risk of stroke and in patients who attended outside the PCC premises (home care and institutional care) ( Table 1). According to the results of the logistic regression, the variables associated with differences in prescription of DOACs versus VKAs are: gender, socioeconomic deprivation and rurality. The logistic regression showed that being a woman was associated with DOAC prescription ( Table 2). As the level of socioeconomic deprivation rose, the odds of being prescribed DOACs decreased (taking MEDEA 1 levels as the reference). Thus, the highest level of deprivation, MEDEA 5, showed the lowest odds of DOAC prescription among urban areas (ORadj 0.58; p < 0.001). However, a rural PCC location was the most important factor associated with lower DOAC prescription rates (ORadj 0.34, p < 0.001; Table 2). Advanced age, arterial hypertension, renal insufficiency and longer time since diagnosis of atrial fibrillation were associated with a lower frequency of DOAC prescription ( Table 2). In contrast, a history of ischemic cardiopathy, peripheral artery disease, heart failure, gastrointestinal bleeding and cerebrovascular events were associated with higher odds of being prescribed a DOAC. Patients who attended outside the PCC, whether at home or in an institution, were more likely to be prescribed a DOAC. On the other hand, receiving care in a teaching PCC or receiving the anticoagulant prescription in a PCC was associated with lower DOAC prescription rates. Figure 2 shows the proportional distribution of prescriptions (DOAC versus VKA) according to whether the patient received the prescription in or outside a PCC. Within PCCs, 21.3% of the prescriptions for oral anticoagulants are DOACs, while outside the centers, this figure stands at 56.3%.
Discussion This study, analyzing real-world data, identified differences in the prescription of different types of oral anticoagulants (VKA versus DOAC) based on the characteristics of patients and the PCC area. The main factors associated with the type of drug prescribed were gender, socioeconomic deprivation of the urban area and rurality. Being a woman was associated with more frequent prescriptions for DOACs, after adjusting for the rest of the variables. Lower DOAC prescription was associated with both socioeconomic deprivation and rurality. The statistical differences detected with regard to gender, deprivation and rurality could reflect inequality in the prescription patterns of oral anticoagulants if these differences are avoidable. If the inequality is unfair, it would represent an example of inequity, an ethical concept that considers inequality on the basis of a values system. The central criterion to define inequality as inequitable is fairness, where inequity is defined as an unfair inequality [20]. Socioeconomic determinants such as education, employment status, income, gender and ethnicity have a clear influence on an individual's health. The lower socioeconomic status a person has, the higher their risk for poor health [20]. Health inequities are systematic differences in the health status of different population groups. These inequities have a high social and economic cost for both individuals and the society as a whole [20]. In Sweden, socioeconomic inequalities in the prescription of oral anticoagulation for preventive treatment after stroke were related to age, income, education and country of birth [11]. In Denmark, patients with atrial fibrillation had a lower chance of being initiated on oral anticoagulation therapy when they had low income, low education and were living alone. Inequality was reduced when more detailed guidelines were published in 2011 [12]. Following clinical guideline recommendations improves adequacy and reduces inequality. Oral anticoagulants (VKAs and DOACs) are medicines of proven efficacy and effectiveness in preventing thromboembolic events in non-valvular atrial fibrillation. DOACs are considerably more expensive than VKAs, but they are also more practical to use because patients on DOACs do not require close follow-up, as they do on VKAs. In some circumstances, for instance in patients with a history of intracranial bleeding, DOACs have also demonstrated greater safety [1]. The differences in cost, convenience and even therapeutic advantages could be the cause of inequities in prescription patterns that are related to the differences detected in socioeconomic deprivation. The association between being a woman and being prescribed DOACs shows that women are more likely to receive these drugs than men, even after adjusting for age, medical history, MEDEA index and prescription setting. Other factors not analyzed in this study could also have had an impact, such as polypharmacy (more prevalent in women) or the lower time in therapeutic range observed in women [21]. Gender inequality in the prescription of analgesics and antidepressants is well documented in our country, and the observed differences cannot be fully explained by clinical factors [22]. Specifically, the prescription of analgesics in Spain is more frequent in women than in men, especially in people with a low educational and socioeconomic level [22].
On the other hand, women, especially those with a low socioeconomic status, are more likely to be diagnosed with depression and prescribed antidepressants and other psychotropics. These differences cannot be attributed to a higher frequency of symptoms of depression or visits to Primary Care [7]. Our results also indicate that a younger age is associated with higher prescription of DOACs, even after adjusting for other included variables. Some factors that we did not study may have influenced this result, for example the loss of labor productivity due to follow-up appointments, patient preferences, purchasing power, and co-payments for the medication. However, younger age also tends to indicate less severe pathology, and in turn a lower probability of meeting the current medication appropriateness criteria for DOACs [23,24]. Lower age is also associated with a lower CHA 2 DS 2 -VASC score (<2), in which case treatment with oral anticoagulants would not be appropriate [23]. The influence of socioeconomic factors on inequalities in DOAC prescriptions and the relation to its high cost has been studied in different countries. In line with our results, the literature shows that low socioeconomic status is associated with lower use of DOACs in different geographic areas and contexts [3,25]. In Denmark, increasing inequality was observed regarding high-cost drugs, such as DOAC, for the treatment of atrial fibrillation [12]. The way that medications are financed and the type of health system seems to play a relevant role in prescription patterns [3]. The substantial difference in the cost of treatment can hinder the prescription of DOACs in areas affected by greater socioeconomic deprivation, as observed in our study. Rural residence is one of the factors that is most closely associated with lower prescription rates of DOACs. However, the greater convenience associated with following patients treated with DOACs versus VKAs makes the former a more functional treatment for patients who have difficulties in accessing health centers. There is some controversy around whether the higher cost of DOACs compared to VKAs is offset by the lower need for follow-up, making the total healthcare expenditure comparable between the two treatments [26]. Studies being carried out in our context will provide evidence of the cost-effectiveness of both classes of medication [27]. The treatment strategy must consider a holistic, integrated assessment of the patient within the framework of current guidelines. The MEDEA index used in our study has a limitation in terms of its interpretation, as this index of socioeconomic deprivations is linked to the health center and therefore reflects the index of the census tract corresponding to the basic health area. This means that the patients seen in each PCC may have a different level of deprivation than the area in which they live; although, in general, evidence shows that in population terms, the socioeconomic situation of the census tract is related to mortality in the resident population [19]. Furthermore, potential physician conflicts of interest with pharmaceutical companies cannot always be discarded, which could influence prescription decisions. In other cases, physicians could consider the patient affluent and could prescribe DOAC, which although they are more expensive are more comfortable. 
This study opens the door to new studies that can help establish the socioeconomic determinants of inequalities in DOAC prescriptions and assess whether these are avoidable, for the ultimate purpose of achieving fairer and more equitable prescription patterns. The inequality observed in prescriptions of oral anticoagulants should be minimized by following current atrial fibrillation management guidelines and addressing modifiable determinants in the pursuit of healthcare equity, better health and social justice. Conclusions This study reveals differences in prescription patterns for oral anticoagulants, specifically in relation to DOACs. Being a woman was associated with higher prescription rates for DOACs, while lower prescription rates were seen in socioeconomically deprived and rural areas. This means that DOACs, co-financed drugs, are prescribed more to women than to men, more to younger patients, and more to patients with high socioeconomic status than to those with low socioeconomic status. These differences could not be explained by the adequacy factors included in the recommendations of the current guidelines for the treatment of atrial fibrillation, and thus they reflect inadequacy in the treatment. Future studies should identify modifiable factors associated with the inequalities detected. Current atrial fibrillation clinical management guidelines need to be followed in order to minimize the inequality.
Spiritual over physical formidability determines willingness to fight and sacrifice through loyalty in cross-cultural populations Significance Despite intermittent interest in and evidence of the importance of nonmaterial factors in war and other extreme forms of intergroup conflict, material factors such as optimal use of physical strength, manpower, and firepower remain the dominant concerns of US and allied military training, decision-making, and related academic literature. In this work, we demonstrate the cross-cultural primacy of personal spiritual over physical formidability on the will to fight in populations from the Middle East, Europe, and North America, including US cadets in whom stronger group loyalty mediates the effect. This empirical examination of spiritual formidability and its link between self and group in willingness to self-sacrifice aims to extend understanding of interpersonal and intergroup conflict and inform considerations of policy. Sensitivity Analyses In all studies, we performed a sensitivity analysis, using G*Power (1), to determine which would be the minimum size effect to reject the null hypothesis considering our sample size and assuming an alpha level of .05 and 80% power. Study 1. The results indicate that an p ≥ .194 for a correlation (point biserial model, two tails) would be enough to reject the null hypothesis. Study 2. The results indicate that an p ≥ .208 for a correlation (point biserial model, two tails) would be enough to reject the null hypothesis. Study 3. The results indicate that an χ 2 ≥ .3.841 for a generic χ 2 test would be enough to reject the null hypothesis. Study 4. The results indicate that an f 2 ≥ .140 for a linear multiple regression (fixed model R 2 deviation from zero) would be enough to reject the null hypothesis. Study 5. The results indicate that an f 2 ≥ .023 for a linear multiple regression (fixed model R 2 deviation from zero) would be enough to reject the null hypothesis. Study 6. The results indicate that an f 2 ≥ .013 for a linear multiple regression (fixed model R 2 deviation from zero) would be enough to reject the null hypothesis. Study 7. The results indicate that an f 2 ≥ .021 for a linear multiple regression (fixed model R 2 deviation from zero) would be enough to reject the null hypothesis. Study 8. The results indicate that an f 2 ≥ .057 for a linear multiple regression (fixed model R 2 deviation from zero) would be enough to reject the null hypothesis. Study 9. The results indicate that an f 2 ≥ .030 for a linear multiple regression (fixed model R 2 deviation from zero) would be enough to reject the null hypothesis. Study 10. The results indicate that an f 2 ≥ .082 for a linear multiple regression (fixed model R 2 deviation from zero) would be enough to reject the null hypothesis. Study 11. The results indicate that an f 2 ≥ .046 for a linear multiple regression (fixed model R 2 deviation from zero) would be enough to reject the null hypothesis. In Studies 1-3, all participants who completed all items assessing religiosity, spiritual formidability, and physical formidability were included in the analysis. Similarly, all participants who completed physical and spiritual formidability items and dependent measures in Studies 4-9 were included in the analysis. In Studies 10-11, all participants who completed these items were included in the replication analyses. Participants not completing the loyalty items were excluded from the mediation analysis. 
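The sensitivity analyses above can be reproduced in outline with a short script. The sketch below computes the minimum detectable Cohen's f 2 for a fixed-model multiple regression (R 2 deviation from zero) at α = 0.05 and 80% power; it assumes the common G*Power convention for the noncentrality parameter (λ = f 2 · N), uses SciPy rather than G*Power, and takes placeholder values for N and the number of predictors rather than the actual study samples.

```python
import numpy as np
from scipy import stats, optimize

def min_f2(n_obs, n_predictors, alpha=0.05, power=0.80):
    """Smallest Cohen's f2 detectable with the given N, alpha, and power for a
    fixed-model multiple regression (R2 deviation from zero).  Assumes the
    common G*Power convention for the noncentrality parameter, lambda = f2 * N."""
    df_num = n_predictors
    df_den = n_obs - n_predictors - 1
    f_crit = stats.f.ppf(1 - alpha, df_num, df_den)

    def power_gap(f2):
        lam = f2 * n_obs
        return stats.ncf.sf(f_crit, df_num, df_den, lam) - power

    return optimize.brentq(power_gap, 1e-6, 5.0)

# Placeholder inputs: an assumed N = 60 participants and 3 predictors
# (not the actual Ns of Studies 4-11).
print(round(min_f2(n_obs=60, n_predictors=3), 3))
```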
Table 5 in the main text contains a detailed description of each sample including Institutional Review Board (IRB) approvals. Supplementary Methods and Measures All of the survey items used in Studies 1-11 were validated in previous research (e.g., 1). Across all studies, physical and spiritual Formidability were always presented side by side on the same display regardless of the data collection method. The measure of physical and spiritual formidability in English is as follows: In the next set of questions you will see images that represent the physical and spiritual formidability of a person, group, country, or institution. The physical formidability of a person or a group represents the ability and material resources (e.g., access to weapons, size, strength) of a person or group to fight and achieve their objectives. Physical formidability endows the person or group with the material potential to defend themselves or inflict physical damage to the opponent. Data for all studies are available on OSF at the following link: https://osf.io/mvhgj/?view_only=10b8928478964e5684f8fa8ea7d3dbee Studies 1-3 Religiosity was measured by a single-item scale asking participants to what extent they consider that most Spanish people are religious (from 0 = Not religious at all to 6 = Extremely religious) in Studies 1 and 2. In Study 3, we asked participants if spiritual formidability is more important to predict the behavior of a group than religiosity. The religious and practicing group was not related to physical or spiritual formidability, r(7) =.10. p=.79 with physical and 0.01, p = .98 for spiritual. For those reporting they are religious but not practicing, religiosity was not related to spiritual formidability, r (70) = -.06, p = .64 and -.12, p = .32 and, for those reporting as non-religious, spiritual formidability and religiosity were not related, r(93) = 0.05. p=0.66 and 0.07, p = .53. Studies 4-5 Physical and Spiritual Formidability (avg = .90) -As described and displayed in the main text, physical and spiritual formidability were measured on a slider scale that enabled participants to increase or decrease the size and muscularity of an image of a male body. The smallest, thinnest figure corresponded to a value of zero and the largest, most muscular figure corresponded to a value of one. Willingness to Fight and Commit Costly Sacrifice (avg = .89) -Costly sacrifices for the country (i.e., ingroup) were measured by a five-item scale adapted from our previous study (1), on scales from 0 (totally disagree) to 6 (totally agree), where participants were asked to what extent, if necessary, they would be willing to display different kinds of self-sacrifice to defend their country as follow: "lose my job or source of income", "go to jail", "use violence", "let my children suffer physical punishment", and "die". In Study 5, this scale includes a sixth item: "If necessary, I would be willing to be exiled from Morocco and be stripped from my Moroccans citizenship to defend Moroccans". Studies 6-9 Physical and Spiritual Formidability (avg = .89) -Formidability measures mirrored Studies 1-2. Because Study 7 used a paper and pencil version of the survey, we used six bodies to measure each type of formidability (see Figure 1). Willingness to Fight and Commit Costly Sacrifice (avg = .92) -This was measured similar to Studies 4-5 except Studies 6-9 did not include the item "let my children suffer physical punishment". 
Studies 6 and 9 included two additional items: "physically suffer", and "risk harm to friends close to me". Willingness to Fight and Commit Costly Sacrifice (avg = .92) -In Study 10, costly sacrifices for the country were measured by a five-item scale ranging from 0 (totally disagree) to 6 (totally agree), where participants were asked to what extent, if necessary, they would be willing to display different kinds of self-sacrifice to defend their country as follows: "lose my job or source of income", "go to jail", "use violence", "let my children suffer physical punishment", and "die". Because cadets generally do not have children, the item "let my children suffer physical punishment" was replaced with "be a prisoner of war" in Study 10. In Study 11 the item "be a prisoner of war" was not included. Loyalty -Group Loyalty was assessed by a single item asking participants how important it is for them to be loyal, or to show loyalty, to their group. Mediation Analyses The analyses used in Studies 10 and 11 to assess the extent to which loyalty to the group mediated the positive correlation between spiritual formidability and costly sacrifices controlled for physical formidability perceptions, age, and gender. For this mediation analysis, we utilized the bias-corrected bootstrapping procedure (5,000 samples) in the indirect macro for SPSS, model 4 (2).
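A rough re-implementation of this bootstrap mediation test is sketched below. It is not the analysis code: the study used the INDIRECT macro for SPSS (model 4) with bias-corrected percentiles, whereas this sketch uses plain percentile intervals, assumes hypothetical column names (spiritual, physical, loyalty, sacrifice, age, gender coded 0/1), and reads a placeholder data file.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def indirect_effect(d):
    """a*b indirect effect: spiritual formidability -> group loyalty -> costly
    sacrifice, controlling for physical formidability, age, and gender."""
    X_m = sm.add_constant(d[["spiritual", "physical", "age", "gender"]])
    a = sm.OLS(d["loyalty"], X_m).fit().params["spiritual"]
    X_y = sm.add_constant(d[["loyalty", "spiritual", "physical", "age", "gender"]])
    b = sm.OLS(d["sacrifice"], X_y).fit().params["loyalty"]
    return a * b

df = pd.read_csv("study10_cadets.csv")     # placeholder file and column names
rng = np.random.default_rng(1)
point = indirect_effect(df)

# Percentile bootstrap of the indirect effect (5,000 resamples); the SPSS
# INDIRECT macro additionally bias-corrects these percentiles.
boots = np.empty(5000)
for i in range(5000):
    idx = rng.integers(0, len(df), size=len(df))
    boots[i] = indirect_effect(df.iloc[idx])
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"indirect effect = {point:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```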
Phylogenetic Tree and Antigenic Shift Analysis of Hemagglutinin Gene of Influenza A Virus in H5N1 Strains Found in 2005-2007 This study brings the analysis of phylogenetic tree and amino acid sequences of Hemagglutinin (HA) from the influenza A virus that can infect a wide variety of birds and mammals. We have analyzed strains of three different years (2005, 2006 and 2007) of H5N1 from different country to see the antigenic shift patterns with respect to reported mutant positions of amino acids. We did not find the exact location where reported mutations are occurred. But we found similar amino acids near the reported mutated positions but we found similar mutations around the mutated position that may cause antigenic shifts. Background Avian influenza A virus is playing a key role to the emergence of human influenza. Recently transmission of Avian Influenza virus from bird to human has increased in several Asian countries. Influenza A virus is a member of Orthomyoxoviridae family is avirulent but it can be virulent by the acquisition of some genetic features which includes multibasic cleavage sites or glycosylation sites in the hemagglutinin (HA) gene can infect a wide range of species includes poultry, humans, horses, swine, quail etc. [1]. The highly pathogenic avian influenza (HPAI, strain type H5N1) virus has emerged in southern china more than a decade ago [2]. Among all A/Goose/Guangdong/1/1996 is the precursor of H5N1 viruses which is established initially in southern China from 1996 to 1999 in domestic geese [3,4]. From the emergence of this virus it has caused endemic infections in poultry industry in many Southeast Asian countries [5,6]. In HPAI virus there is a high rate of nucleotide substitution which is done by RNA virus [7]. RNA viruses have the higher rate of capability for mutation so that they can cross the species boundaries and jump to the new host to emerge new species [8]. It is believed that crossing of species boundaries require both environmental and appropriate virus genetics factors to transmission of the virus between species [9]. Segment 4 hemagglutinin (HA) genes is recognized to be the most mutable portion responsible for the attachment to the cell surface which acts as primary target of the host immune response resulting frequent genetic drift [10]. HPAI virus has the ability to transmit through both bird and human host contact system [11]. Variants from unique HPAI viruses could cause infection and has the ability to replicate in humans. Human strains may arise from some Hong Kong avian H5N1 strains without prior adaptation in a mammalian intermediate host [12] Avian virus strains circulate locally within poultry and wild birds. This virus may be migrated through the migratory birds to the new geographic regions. It can be spread by the movement of poultry and poultry products [13]. Virology Avian influenza A consists of two major glycoproteins which are Hemaglutinin (HA) and Neuraminidase (NA) [14]. HA glycoproteins are more prone to attach to the cell surface sialic acid receptors. There is a difference between host surface receptors on the target cell which is believed to be the possible restrictive factor of avian influenza. HA gene of avian cell binds to Sia2-3Galactosecontaining receptor which is different from human Sia2-6Galactose containing receptor [15]. Before functioning as a virus it needs post translational cleavage by host proteases [16]. HA followed by NA are important antigenic determinant from which neutralizing antibodies are directed. 
There are several subtypes of HA and NA. 18 different HA subtypes (H1 to H18) and 11 different NA subtypes (N1 to N11) are found [17]. There is a membrane protein named M2 protein which regulates the internal P H level of the virus. This membrane protein is responsible for uncoating the virus during early stages of viral replication [18]. Amantadine and rimantadine block this function. NA catalyze the cleavage of glycosidic linkages to sialic acid on the surface of the viral particle and host cell thus preventing the aggregation and facilitating the release of progeny viruses from the infected cell. Antiviral drugs like Oseltamivir and zanamivir (NA inhibitors) inhibits this important function are the key to the antiviral treatment. Transmission Transmission pattern of avian influenza A from one bird to another is poorly understood because of its complexity, huge number of species among birds and environmental factors. Some experiments have been done to identify the transmission pattern and it shows poorer transmission from infected to susceptible animals [19][20][21]. Migration process can influence transmission of viruses. Migratory birds can carry pathogens from country to country thereby playing a role distributing influenza viruses. Materials and Methods: Sequence and Data Source Data used in this study are obtained using nucleotide BLAST search from publicly available database of National Centre for Biotechnology Information (NCBI).Multiple sequence alignments, editing, assembly of strains were performed in windows platform with the Geneious program version 7.1.3 (trial). Numbers at nodes in the tree indicate Neighbor-Joining bootstraps value generated from 1,000 replicates. Results and Discussion In 2005 we have selected total 184 strains of Hemagglutinin (HA) strain of H5N1 (Figure 1 and Table 1). In 2006 we have selected total 164 strains of Hemagglutinin (HA) strain of H5N1 (Figure 2 and Table 2). In 2007 we have selected total 205 strains of Hemagglutinin (HA) strain of H5N1 (Figure 3 and Table 3). From three years we have got some strains which seems to diverse from our analysis. We did our literature search but we did not get any information about these diverse strains. So it seems to us that these strains are not responsible for antigenic shift. Neighbor joining method and bootstrap value shows that these diverse strain is showing antigenic drift which may transfer to the other avian in the same country or other different country as well through migratory process. Our target is to identify the antigenic shift pattern from avian to human species. Analysis with amino acid (AA) shows the most specific way to identify the antigenic shift. For this we will combine the amino acid sequence of all these three years (2005, 2006, and 2007). After combining we had run alignment using ClustalW. It took almost 12 hours to complete. The molecular mechanisms that enable avian influenza viruses to cross the species barrier and transmit efficiently in humans are incompletely understood. Some experiments have been done to identify the transmission pattern and it shows poorer transmission from infected to susceptible animals [22][23][24]. Migration process can influence transmission of viruses. Migratory birds can carry pathogens from country to country thereby playing a role distributing influenza viruses. Avian influenza A consists of two major glycoproteins which are Hemagglutinin (HA) and Neuraminidase (NA) [25]. 
HA glycoproteins are more prone to attach to the cell surface sialic acid receptors. There is a difference between host surface receptors on the target cell which is believed to be the possible restrictive factor of avian influenza. Human infections are periodic. In some cases these viruses are accompanied by high mortality. As a result they are the major concern about the potential H5N1 as an endemic virus. Although human infections are sporadic, they are accompanied by high mortality, raising major concerns about the potential of H5N1 as a pandemic virus [26]. Fortunately, H5N1 viruses have not yet naturally acquired the ability to stably transmit between humans [27,28]. One factor that limits transmission of avian viruses in humans is the receptor specificity of the hemagglutinin (HA) [29]. Avian viruses, like H5N1, preferentially bind to α2, 3 sialosides (avian-type receptors), whereas human viruses prefer α2, 6 sialosides (human-type receptors that are found in the human respiratory tract). Before functioning as a virus it needs post translational cleavage by host proteases [30]. HA followed by NA are important Gs [31]. In humans, the SAα2, 6 Gal receptor is expressed mainly in the upper airway, while the SAα2, 3 Gal receptor is expressed in alveoli and the terminal bronchiole [32]. A virus with good affinity to both SAα 2, 3 Gal and SAα2, 6 Gal receptors may be a very dangerous one, which could both infect efficiently via its binding to Saα2, 6Gal in the upper airway and cause severe infection in the lung via its binding to Saα2, 3Gal. Data used in this study are obtained inside using nucleotide BLAST search from publicly available database of National Centre for Biotechnology Information (NCBI). Multiple sequence alignments, editing, assembly of strains were performed in windows platform with the Geneious program version 7. 1.3 (trial). In this study we will analyze some avian hemagglutinin (H5N1) of different years. Analysis includes building nucleotide sequence and translating them into amino acid sequence. Then we will study amino acid positions with respect to some reported mutation to see the genetic pattern. After analyzing we will try to find out whether there are any similarities between avian and human or not. There are some reported avian H5N1 strains that affect human which are A/Goose/Hong Kong/739. 2/2002 [33], A/ duck/Egypt/D1Br12/2007 [34], A/Duck/Singapore/3/97 [35], A/ egret/Egypt/1162/2006 [36]. All of these strain show preferential binding to Siaα (2, 6) Gal receptor that can infect a human. Few specific positions of amino acids are responsible for this binding. We found some avian amino acid position Q222L [35], G224S (35), S227N [33], Q192H [34] are specific to SAα 2, 3 Gal receptor which has a previous reported history to affect human. On the other hand there are some avian amino acid position S227N [33,37], Q192H [34], N186K [37], Q196R [36], N182K [38], Q192R [38], S223N [39], G228S [36,40] are specific to SAα2, 6 Gal receptor which has a previous reported history to affect human. In our avian H5N1 analysis we did not find the exact location where reported mutations are occurred. But we found similar amino acid near the reported mutated position. We have analyzed around (before and after the mutation point) twenty positions with respect to the reported mutation point. Here are the summary of some reported position which can be responsible for antigenic shift from avian to human (Tables 4 and 5). 
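A simplified sketch of the position-scanning procedure reported in the following paragraphs is given below: for each receptor-binding position reported in the literature, it inspects the aligned HA amino-acid columns within twenty positions on either side and tabulates residues that deviate from the column consensus. Biopython is assumed for reading the ClustalW alignment; the file name is a placeholder, and no attempt is made to reconcile alignment columns with H5 numbering.

```python
from collections import Counter
from Bio import AlignIO

REPORTED = [182, 186, 192, 196, 222, 223, 224, 226, 227, 228]   # HA positions from the literature
WINDOW = 20

aln = AlignIO.read("HA_2005_2007_aa_clustalw.aln", "clustal")   # placeholder file name

def column(alignment, pos):
    """Residues of all strains at a 1-based alignment column."""
    return [str(rec.seq[pos - 1]) for rec in alignment]

for site in REPORTED:
    for pos in range(site - WINDOW, site + WINDOW + 1):
        if pos < 1 or pos > aln.get_alignment_length():
            continue
        counts = Counter(column(aln, pos))
        consensus, _ = counts.most_common(1)[0]
        variants = {aa: n for aa, n in counts.items() if aa not in (consensus, "-")}
        if variants:
            print(f"reported site {site}: column {pos} consensus {consensus}, "
                  f"variant residues {variants}")
```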
Analysis of our study data with respect to the reported mutation points, examining the antigenic shift pattern of avian H5N1, is summarized in Table 6. For the reported position S227N (Ser-227-Asn), we found the amino acid Proline (P) at that position in Geneious. Two mutations occurred here: Proline (P) to Arginine (R) in two strains and Proline (P) to Alanine (A) in one strain. Serine (S) is present at two positions (215 and 219) within twenty positions before 227, with no mutations. Serine (S) is also present at two positions (233 and 239) within twenty positions after 227; we found an S233P mutation in three strains, indicating a polarity change from polar to non-polar, since Serine (S) is polar and Proline (P) is non-polar.

For the reported position Q192H (Gln-192-His), we found Tryptophan (W) at that position in Geneious. Glutamine (Q) is present at one position (185) within twenty positions before 192; we found a Q185R mutation in three strains, Q185H in two strains, and Q185K in two strains. Here polarity changes from polar to positively charged, since Lysine (K), Histidine (H), and Arginine (R) are positively charged. Glutamine (Q) is also present at two positions (203 and 208) within twenty positions after 192, with no mutations.

For the reported position N186K, we found N170D mutations in numerous strains, N171S/D in numerous strains, and N181T in two strains at positions within twenty positions before 186. We do not give weight to positions that are mutated in numerous strains, as such mutations are common and are not responsible for virulence. Polarity is not changed in the case of N181T. Glutamic acid (E) is present at two positions (184 and 198) within twenty positions after 186; we found an N198K mutation in two strains and N198S in one strain, indicating a polarity change from polar to the positively charged Lysine (K) in the case of N198K.

For the reported position N182K (Asn-182-Lys), we found Asparagine (N) at that position in Geneious. Asparagine (N) is present at four positions (162, 170, 171, and 181) within twenty positions before 182; we found N170G in five strains, N171G in one strain, and N181T in two strains. Here polarity changes from polar to non-polar, since Glycine (G) is non-polar. Asparagine (N) is also present at two positions (184 and 198) within twenty positions after 182; we found an N184D mutation in one strain, N184S in one strain, N198S in one strain, and N198K in two strains, indicating polarity changes from polar to the negatively charged Aspartic acid (D) and to the positively charged Lysine (K) in the cases of N184D and N198K, respectively.

For the reported position Q192R (Gln-192-Arg), we found Tryptophan (W) at that position in Geneious. Tryptophan (W) is present at one position (185) within twenty positions before 192; we found Q185K in two strains, Q185R in three strains, and Q185H in two strains. Here polarity changes from polar to positively charged, since Lysine (K), Histidine (H), and Arginine (R) are positively charged. Tryptophan (W) is also present at two positions (203 and 208) within twenty positions after 192, with no mutations.
For the reported position S223N (Ser-223-Asn), we found Glutamine (Q) at that position in Geneious. Glutamine (Q) is present at two positions (215 and 219) within twenty positions before 223, with no mutations. Glutamine (Q) is also present at two positions (233 and 239) within twenty positions after 223; we found an S233P mutation in three strains, indicating a polarity change from polar to non-polar, since Proline (P) is non-polar.

For the reported position G228S (Gly-228-Ser), we found Lysine (K) at that position in Geneious. Two mutations occurred here: Lysine (K) to Glutamic acid (E) in one strain and Lysine (K) to Asparagine (N) in one strain. Glycine (G) is present at one position (217) within twenty positions before 228, with no mutations, and at two positions (237 and 240) within twenty positions after 228, also with no mutations.

For the reported position Q226L (Gln-226-Leu), we found Valine (V) at that position in Geneious. Two mutations occurred here: Valine (V) to Glutamic acid (E) in one strain and Valine (V) to Alanine (A) in one strain. Valine (V) is present at one position (208) within twenty positions before 226, with no mutations, and at one position (238) within twenty positions after 226, also with no mutations.

For the reported position Q196R (Gln-196-Arg), we found Histidine (H) at that position in Geneious. Glutamine (Q) is present at one position (185) within twenty positions before 196; we found a Q185R mutation in three strains, Q185H in two strains, and Q185K in two strains. Here polarity changes from polar to positively charged, since Lysine (K), Histidine (H), and Arginine (R) are positively charged. Glutamine (Q) is also present at two positions (203 and 208) within twenty positions after 196, with no mutations.

For the reported position S227N (Ser-227-Asn), we found Proline (P) at that position in Geneious (Table 7). Two mutations occurred here: Proline (P) to Arginine (R) in two strains and Proline (P) to Alanine (A) in one strain. Asparagine (N) is present at two positions (209 and 222) within twenty positions before 227; we found an N209R mutation in one strain and N222D in one strain. Here polarity changes from polar to the positively charged Arginine (R) and to the negatively charged Aspartic acid (D). Asparagine (N) is also present at one position (236) within twenty positions after 227, with no mutation.

For the reported position Q192H (Gln-192-His), we found Tryptophan (W) at that position in Geneious. We found no Histidine (H) within twenty positions before 192, but Histidine (H) is present at two positions (195 and 196) within twenty positions after 192, with no mutations.

For the reported position N186K (Asn-186-Lys), we found Glutamic acid (E) at that position in Geneious. Lysine (K) is present at two positions (168 and 169) within twenty positions before 186; we found a K168N mutation in two strains and K169R in one strain. Here polarity changes from positively charged to the polar Asparagine (N) in the case of K168N.
We did not find any Lysine (K) within twenty positions after 186.

For the reported position N182K (Asn-182-Lys), we found Asparagine (N) at that position in Geneious. Asparagine (N) is present at two positions (168 and 169) within twenty positions before 182; we found K168N in two strains and K169R in one strain. Here polarity changes from positively charged to the polar Asparagine (N) in the case of K168N. We did not find any Asparagine (N) within twenty positions after 182.

For the reported position Q192R (Gln-192-Arg), we found Tryptophan (W) at that position in Geneious. Arginine (R) is present at one position (178) within twenty positions before 192; we found R178V in three strains, a polarity change from positively charged to the non-polar Valine (V). Arginine (R) is also present at one position (205) within twenty positions after 192; we found R205G in one strain, a polarity change from positively charged to the non-polar Glycine (G).

For the reported position S223N (Ser-223-Asn), we found Glutamine (Q) at that position in Geneious. Asparagine (N) is present at two positions (209 and 222) within twenty positions before 223; we found N209R in one strain and N222D in one strain. Here polarity changes from polar to the positively charged Arginine (R) and to the negatively charged Aspartic acid (D). Asparagine (N) is also present at one position (236) within twenty positions after 223, with no mutation.

For the reported position G228S (Gly-228-Ser), we found Lysine (K) at that position in Geneious. Two mutations occurred here: Lysine (K) to Glutamic acid (E) in one strain and Lysine (K) to Asparagine (N) in one strain. Serine (S) is present at two positions (215 and 219) within twenty positions before 228, with no mutations, and at two positions (233 and 239) within twenty positions after 228; we found S233P in three strains, a polarity change from polar to the non-polar Proline (P).

For the reported position Q222L, Glutamine (Q) is also present at two positions (223 and 238) within twenty positions after 222, with no mutation.

For the reported position G224S (Gly-224-Ser), we found Arginine (R) at that position in Geneious. One mutation occurred here: Arginine (R) to Lysine (K) in two strains. Glycine (G) is present at one position (217) within twenty positions before 224, with no mutation, and at two positions (237 and 240) within twenty positions after 224, also with no mutation.

For the reported position S227N (Ser-227-Asn), we found Proline (P) at that position in Geneious. Two mutations occurred here: Proline (P) to Arginine (R) in two strains and Proline (P) to Alanine (A) in one strain. Serine (S) is present at two positions (215 and 219) within twenty positions before 227, with no mutations, and at two positions (233 and 239) within twenty positions after 227.
We found an S233P mutation in three strains, indicating a polarity change from polar to non-polar, since Serine (S) is polar and Proline (P) is non-polar.

For the reported position Q192H (Gln-192-His), we found Tryptophan (W) at that position in Geneious. Glutamine (Q) is present at one position (185) within twenty positions before 192; we found a Q185R mutation in three strains, Q185H in two strains, and Q185K in two strains. Here polarity changes from polar to positively charged, since Lysine (K), Histidine (H), and Arginine (R) are positively charged. Glutamine (Q) is also present at two positions (203 and 208) within twenty positions after 192, with no mutation.

For the reported position Q222L (Gln-222-Leu), we found Asparagine (N) at that position in Geneious (Table 9). One mutation occurred here: Asparagine (N) to Aspartic acid (D) in one strain. Leucine (L) is present at two positions (206 and 221) within twenty positions before 222; we found L206I in four strains, with no change in polarity. Leucine (L) is also present at one position (225) within twenty positions after 222; we found L225M in one strain, L225F in one strain, and L225S in two strains, a polarity change from non-polar to the polar Serine (S) in the case of L225S.

For the reported position G224S (Gly-224-Ser), we found Arginine (R) at that position in Geneious. One mutation occurred here: Arginine (R) to Lysine (K) in two strains. Serine (S) is present at two positions (215 and 219) within twenty positions before 224, with no mutation, and at two positions (233 and 239) within twenty positions after 224; we found S233P in three strains, a polarity change from polar to the non-polar Proline (P).

For the reported position S227N (Ser-227-Asn), we found Proline (P) at that position in Geneious. Two mutations occurred here: Proline (P) to Arginine (R) in two strains and Proline (P) to Alanine (A) in one strain (this analysis continues below).

[Table 3. List of diverse strains from our analysis of 2007.]

Amino acid mutation position with reference | Short form | Amino acid in Geneious at reported position
Gln-222-Leu [35] | Q222L | N
Gly-224-Ser [35] | G224S | R
Ser-227-Asn [33,37] | S227N | P
Gln-192-His [34] | Q192H | W

For the reported position Q226L (Gln-226-Leu), we found Valine (V) at that position in Geneious. Two mutations occurred here: Valine (V) to Glutamic acid (E) in one strain and Valine (V) to Alanine (A) in one strain. Leucine (L) is present at three positions (206, 221, and 225) within twenty positions before 226; we found L206I in four strains, L225S in two strains, L225F in one strain, and L225M in one strain, a polarity change from non-polar to the polar Serine (S) in the case of L225S. We found no Leucine (L) within twenty positions after 226.

For the reported position Q196R (Gln-196-Arg), we found Histidine (H) at that position in Geneious. Arginine (R) is present at one position (178) within twenty positions before 196; we found an R178V mutation in three strains, a polarity change from positively charged to the non-polar Valine (V).
Arginine (R) is also present at one position (205) within twenty positions after 196; we found R205G in one strain, a polarity change from positively charged to the non-polar Glycine (G).

For the reported position Q222L, one mutation occurred: Asparagine (N) to Aspartic acid (D) in one strain. Glutamine (Q) is present at two positions (203 and 208) within twenty positions before 222, with no mutation; Glutamine is also found after 222, as noted above.

[Table 6. Correlation of reported α2,6 receptor-specific avian amino acid positions with our experimental strains and their mutation patterns (exact and within twenty positions): original amino acid analysis. Partial rows recovered: S227N, P at the reported position in Geneious, P227R (2), P227A (1); before 20 positions: N209R (1), polar to positively charged (R), and N222D (1), polar to negatively charged (D); after 20 positions: S236. Q192H, W192; Q195, Q196.]

[Table 9. Correlation of reported α2,3 receptor-specific avian amino acid positions with our experimental strains and their mutation patterns (exact and within twenty positions): mutated amino acid analysis.]

Continuing the S227N analysis, Asparagine (N) is present at two positions (209 and 222) within twenty positions before 227; we found an N209R mutation in one strain and N222D in one strain. Here polarity changes from polar to the positively charged Arginine (R) and to the negatively charged Aspartic acid (D). Asparagine (N) is also present at one position (236) within twenty positions after 227, with no mutation.

For the reported position Q192H (Gln-192-His), we found Tryptophan (W) at that position in Geneious. We found no Histidine (H) within twenty positions before 192, but Histidine (H) is present at two positions (195 and 196) within twenty positions after 192, with no mutation.

Conflict of interest

None
Effects of Sleep Quality on the Association between Problematic Mobile Phone Use and Mental Health Symptoms in Chinese College Students

Problematic mobile phone use (PMPU) is a risk factor for both adolescents' sleep quality and mental health, so it is important to examine the potential negative health effects of PMPU exposure. This study aims to evaluate PMPU and its association with mental health in Chinese college students. Furthermore, we investigated how sleep quality influences this association. In 2013, we collected data on participants' PMPU, sleep quality, and mental health (psychopathological symptoms, anxiety, and depressive symptoms) using standardized questionnaires in 4747 college students. Multivariate logistic regression analysis was applied to assess independent effects and interactions of PMPU and sleep quality with mental health. PMPU and poor sleep quality were observed in 28.2% and 9.8% of participants, respectively. Adjusted logistic regression models suggested independent associations of PMPU and sleep quality with mental health (p < 0.001). Further regression analyses suggested a significant interaction between these measures (p < 0.001). The study highlights that poor sleep quality may play a more significant role in increasing the risk of mental health problems in students with PMPU than in those without PMPU.

Problematic Mobile Phone Use

There has been rapid development and increasingly widespread use of mobile phones. Despite the advantages of convenience and practicability, excessive use has been associated with potential risks in daily life. Problematic mobile phone use (PMPU) is not a new term; it is defined as an inability to regulate one's use of the mobile phone, with negative consequences in daily life [1]. The prevalence of PMPU is not negligible: for example, it affects 16% of middle school students in Korea [2] and 26% of a Tunisian population [3].

Problematic Mobile Phone Use and Mental Health

The incidence of mental health problems has increased worldwide [4]. These observations have raised concerns about the adverse effects of excessive mobile phone use on the physical and mental health of college students. Researchers have reported a prospective relationship between mobile phone use and psychological symptoms in college students, and a possible model for this association has been proposed [5], which suggested that depression and sleep disorders were consequences of high rates of information and communication technology use. PMPU has been shown to correlate with anxiety or insomnia [6], depression [7], and psychological distress [8] in adolescents and college students. However, several other factors that may influence associations between mobile phone exposure and mental health should be considered in epidemiological studies.

Problematic Mobile Phone Use and Sleep

Sleep disturbances (e.g., delayed sleep phase, sleep duration, sleep patterns, chronotype, and sleep quality) among adolescents are closely associated with mobile phone use. Short sleep duration during the week was associated with higher problematic usage [9]. Those who used their mobile phones more frequently after lights out reported significantly poorer sleep quality, more fatigue, and more insomnia symptoms [10]. Mobile phone use for calling and texting after lights out was associated with sleep disturbances (short sleep duration, subjective poor sleep quality, excessive daytime sleepiness, and insomnia symptoms) [11].
Results have shown that Composite Scale of Morningness scores (a measure of chronotype) were the best predictor of problematic mobile phone usage; consequently, evening-oriented university students scored higher on the Mobile Phone Problem Usage Scale [12]. Sleep quality worsened with increasing levels of excessive mobile phone use [13]. As an unstructured leisure activity with no fixed starting and stopping point, mobile phone use may expand to take up more time, displacing other possible activities and sleep [14].

The Possible Role of Sleep for Mental Health

Sleep is recognized as necessary for health and overall growth. Sleep deprivation has been associated with psychological symptoms, such as negative emotions and depressive symptoms [15]. A large population-based study suggested that higher levels of depression and anxiety were more common among Norwegian adolescents with a delayed sleep phase [16]. Furthermore, a longitudinal survey suggested there may be a causal relationship between sleep patterns and mental health in adolescents, indicating that a late bedtime and short sleep duration predicted subsequent anxiety and depression [17]. As such, adolescent health education programs or interventions focusing on sleep quality have been effective in improving mental health [17]. A survey based on a national sample showed that US adolescents with insomnia were at greater risk of mental disorders, including mood and anxiety disorders, and of poor perceived mental health [18]. An evening chronotype has been reported to be associated with depression [19]. The mechanism through which short sleep affects emotional and behavioral functioning in adolescents may involve an increase in negative mood and a decrease in the ability to regulate emotions [14].

Potential Role of Sleep for the Associations between Problematic Mobile Phone Use and Mental Health

Whether there are factors that moderate the associations between PMPU and adolescent mental health has not been well examined. Sleep disturbance is a major risk factor for adolescent mental health and also influences the association between addictive behaviors and psychological symptoms. At the same time, PMPU is related to sleep problems. One study reported that severe mobile phone use in Finnish girls was related to poor self-reported health, acting through poor sleep quality and daytime fatigue [20]. Results from Adams and Kisler supported the mediation hypothesis that sleep disturbance may mediate the relationship between electronic media use and depressive symptoms/anxiety in a sample of college students [21]. Therefore, the interactions of PMPU and sleep quality with mental health need further study. In summary, this study aimed to examine PMPU and its association with mental health in Chinese college students. Furthermore, we investigated how sleep quality influences this association.

Participants

A cross-sectional survey was conducted in October 2013 to examine the health and well-being of college students in Anhui, China. Self-completion questionnaires were administered in the classrooms of each participating college major. Cluster sampling was used, with the school department as the primary sampling unit. Between November and December 2013, 4915 questionnaires were distributed. We received 4858 completed questionnaires (response rate: 98.8%); some students were absent.
After excluding incomplete responses, a total of 4747 respondents (58.4% female) with a mean age of 19.24 years (standard deviation (SD) = 1.41) were retained in the final analysis. All subjects gave their informed consent for inclusion before they participated in the study. The study was conducted in accordance with the Declaration of Helsinki and was approved by the Ethics Committee of Anhui Medical University (No. 20131196).

Procedure

Data collection was completed between October and December 2013. Teachers and professional investigators distributed questionnaires to the students and instructed them to complete the questionnaires anonymously within 20-30 min in classroom settings. All participants took part voluntarily and agreed to do so before the survey.

Demographic Factors

Participants provided details about their sex, age, residential background, siblings, family income, tobacco and alcohol use, and internet addiction. We included two questions from the Youth Risk Behavior Surveillance System questionnaire [22]. To measure cigarette use, we asked "How many days of the past month did you smoke?" Cigarette users were defined as those smoking on at least one day during the past month. To measure alcohol use, we asked "How many days of the past month did you have at least one drink?" Alcohol users were defined as those drinking on at least one day during the past month. We used Young's 20-item Internet Addiction Test (YIAT) [23] to assess internet addiction. Each item is scored from 1 to 5 (1 = not at all, 2 = occasionally, 3 = frequently, 4 = often, and 5 = always), and total scores range from 20 to 100. A YIAT score greater than 50 was defined as internet addiction [24]. Cronbach's alpha and split-half reliability coefficients were 0.90 and 0.86, respectively.

Problematic Mobile Phone Use

To evaluate PMPU, we used the Self-rating Questionnaire for Adolescent Problematic Mobile Phone Use (SQAPMPU), a standardized instrument suitable for use with college students [25]. It comprises 13 items across three dimensions: withdrawal symptoms, craving, and physical and mental health status. Example items include "My leisure activities are reduced due to the time I spend on my mobile phone," "I become irritable if I have to switch off my mobile phone for meetings, dinner engagements, or at the movies," and "I need to spend more time on my mobile phone to be satisfied." Each item is scored from 1 (not true at all) to 5 (extremely true) on a five-point Likert scale [26]. The cumulative variance contribution rate was 59.13%, and Cronbach's alpha coefficient was 0.87. Total scores range from 13 to 65, with the 75th percentile used as the cutoff point. As such, PMPU was categorized as "No" (<P75) or "Yes" (≥P75).

Sleep Quality

The Pittsburgh Sleep Quality Index (PSQI) [27] is a self-rated scale that assesses sleep quality during the past month. The scale contains 19 items covering seven components: subjective sleep quality, sleep latency, sleep duration, habitual sleep efficiency, sleep disturbances, use of sleep medication, and daytime dysfunction. Subjective sleep quality refers to perceived overall sleep quality. Sleep latency measures how long it takes to fall asleep. Sleep duration refers to the actual length of sleep. Habitual sleep efficiency is calculated from the number of hours slept and the number of hours spent in bed.
Sleep disturbances refer to behaviors and events that negatively affect sleep, such as waking up late at night or early in the morning, getting up at night to use the bathroom, uncomfortable breathing, coughing or snoring loudly, feeling too hot or too cold, having nightmares, and having pain. Each component is scored from 0 (no difficulty) to 3 (severe difficulty). Cronbach's alpha coefficient was 0.729 in the present study. A total score was computed by summing the seven component scores, ranging from 0 to 21, with a score of 7 used as the cut-off point [28]. A total score less than or equal to 7 was classified as good sleep, while a score greater than 7 indicated poor sleep.

Psychopathological Symptoms

The Multidimensional Sub-health Questionnaire of Adolescents (MSQA) is a multidimensional self-report instrument measuring physical and psychological symptoms [29] and consists of 71 items. A subset of 39 items covers psychopathological symptoms, categorized into three dimensions: emotional symptoms (e.g., anxiety and depressive symptoms), behavioral symptoms (e.g., hostility, loss of control, and lack of concentration), and social adaptation problems (e.g., poor interpersonal relationships with schoolmates, family, or friends). Each item has six answer categories based on the duration of the symptom (<7 days; 7-14 days; 15-30 days; 31-60 days; 61-90 days; and >90 days). Cronbach's alpha coefficient was 0.939 in the present study. In our study, symptoms lasting less than 1 month were coded as 0, and symptoms lasting more than 30 days were coded as 1. Respondents with a total score of 8 or more were defined as having psychopathological symptoms.

Anxiety Symptoms

To estimate anxiety symptoms, we used the Self-rating Anxiety Scale (SAS) [30], which has been tested for reliability and validity worldwide [31][32][33]. The SAS comprises 20 items answered on a four-point Likert scale: 1 = a little of the time, 2 = some of the time, 3 = a good part of the time, 4 = most of the time. Cronbach's alpha coefficient was 0.818 in the present study. A higher score indicates more severe anxiety symptoms. SAS total scores were categorized into two groups using a cut-off score of 50; SAS scores ≥50 reflected the experience of anxiety symptoms [34].

Depressive Symptoms

The Center for Epidemiologic Studies Depression Scale (CES-D) [35] was used to assess depressive symptoms during the past week. It comprises 20 questions with four answer categories scored from 0 to 3: rarely or none of the time/<1 day (0), some or a little of the time/1-2 days (1), occasionally or a moderate amount of the time/3-4 days (2), and most or all of the time/5-7 days (3). Cronbach's alpha coefficient was 0.87 in the present study. CES-D scores ≥20, in accordance with the Chinese norm, were defined as the experience of depressive symptoms [36].

Statistical Analysis

All statistical analyses were conducted using SPSS version 10.0 (SPSS Inc., Chicago, IL, USA). Frequencies and percentages for categorical variables, and means and SDs for continuous variables, were used in descriptive analyses. Chi-square tests were conducted to examine the prevalence of mental health problems among students grouped according to PMPU and sleep quality. Multivariate logistic regression analyses were employed to examine the independent and interactive effects of PMPU and sleep quality on mental health, adjusting for confounding factors.
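The study's analyses were run in SPSS, but the same dichotomization and interaction model can be sketched in Python for illustration. In the hypothetical example below, the data file, column names, and covariates are placeholders (not the study's actual dataset); it applies the cut-offs reported in the Methods and fits a logistic regression with a PMPU-by-sleep-quality interaction term.

```python
# Hypothetical sketch of the analysis described above: dichotomize the scale scores
# and fit a logistic regression with a PMPU x poor-sleep interaction term.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # placeholder file: one row per student

# Dichotomize exposures and outcome using the cut-offs reported in the Methods.
df["pmpu"] = (df["sqapmpu_total"] >= df["sqapmpu_total"].quantile(0.75)).astype(int)
df["poor_sleep"] = (df["psqi_total"] > 7).astype(int)
df["depression"] = (df["cesd_total"] >= 20).astype(int)

# Main effects plus interaction, adjusting for illustrative covariates.
model = smf.logit(
    "depression ~ pmpu * poor_sleep + C(sex) + age + C(family_income)",
    data=df,
).fit()
print(model.summary())

# Odds ratios with 95% confidence intervals.
or_table = np.exp(model.conf_int())
or_table["OR"] = np.exp(model.params)
print(or_table)
```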
According to Table 1, not all confounding factors were significant for mental health, so we did not include all confounding factors in Table 2. We checked the variables for multicollinearity; the variance inflation factors (VIF) were less than 2, the tolerance values were less than 1, and the condition index was less than 4, indicating that multicollinearity was not present. p-values < 0.05 (two-tailed) were considered statistically significant.

Characteristics of the Sample

Sample characteristics are displayed in Table 1. Responses were obtained from 4747 students (41.6% male, n = 1973). We observed PMPU in 28.2% and poor sleep quality in 9.8% of participants. Psychopathological symptoms were more common in male individuals (16.2% of female individuals vs. 18.9% of male individuals, p < 0.05). However, the sex difference for anxiety symptoms only approached significance (p = 0.065), and depressive symptoms did not differ significantly by sex (p = 0.594). Students reporting low family income showed higher rates of poor mental health. Furthermore, students who were current smokers reported higher rates of psychopathological symptoms than non-smokers (23.4% vs. 17.0%, respectively, p < 0.01) and higher rates of anxiety symptoms (25.5% vs. 15.7%, respectively, p < 0.001). Anxiety symptoms also appeared to be higher among students who were current drinkers than among those who were not (18.9% vs. 14.8%, respectively, p < 0.001). Higher rates of psychopathological, depressive, and anxiety symptoms were also seen in those with internet addiction (IA), PMPU, and poor sleep quality (all p < 0.001).

Associations of PMPU and Sleep Quality with Mental Health

There was a positive association of PMPU and sleep quality with mental health symptoms (Table 2).

Interactions of PMPU and Sleep Quality with Mental Health

The results of a regression analysis examining the interactions of PMPU and sleep quality with mental health are shown in Table 3. There was a significant interaction of PMPU and sleep quality with mental health symptoms (p < 0.001). Table 3 presents crude and adjusted OR (95% CI) for psychopathological symptoms, anxiety symptoms, and depressive symptoms in those with PMPU or poor sleep quality compared with the reference group (no PMPU or poor sleep quality). OR (95% CI) for psychopathological symptoms, anxiety symptoms, and depressive symptoms were 7.60 (5.55-10.41), 6.68 (4.89-9.13), and 11.28 (8.21-15.50), respectively. Students with PMPU and poor sleep quality were more likely to have mental health symptoms.

Discussion

This study is one of few to investigate PMPU in Chinese college students. The results provide evidence of an association between PMPU and poor mental health among college students, suggesting that PMPU is positively related to psychopathological symptoms, anxiety, and depression. In addition, our results indicate that sleep quality is also associated with mental health. Furthermore, we confirmed interactions of PMPU and poor sleep quality with mental health among Chinese college students. We observed PMPU in 28.2% of participants, which is higher than in previous studies [37]. This may be because our sample consisted of college students, whereas the other study investigated high school students. Furthermore, the evaluation criteria for PMPU differed: Sánchez-Martínez and Otero [37] used two forced-choice (yes/no) questions, reporting an estimated prevalence of mobile phone dependence of 20%.
They used direct judgment methods [37], whereas we considered total scores and percentiles, so we might expect our results to differ. To our knowledge, several studies support a relationship between PMPU and psychological symptoms. An exploratory prospective study reported that women with high combined use of computers and mobile phones at baseline showed an increased risk of depression at one-year follow-up; furthermore, text message use was also related to depressive symptoms in men [5]. Augner and Hacker [38] reported that depression scores were positively correlated with PMPU scores. The association between depressive symptoms and intensive cell phone use has been confirmed elsewhere [37]. Mobile phone use has also been reported as a diversion used by anxious individuals to kill time or avoid other activities [39], and excessive mobile phone use has been shown to be connected with a high risk of anxiety and insomnia [6]. General psychological distress has been shown to be related to abnormal use of both the internet and mobile phones [8]. Another study indicated that those with excessive mobile phone use experienced higher levels of depression and interpersonal anxiety [40]. In addition, depressed adolescents are at higher risk of PMPU after controlling for the confounding effects of sex, age, and residential area [7]. Consistent with our hypothesis, we found that PMPU was related to sleep quality. Using the mobile phone after going to bed led to increasing sleep problems [41]. The total sleep quality score showed a direct, significant association with cell-phone overuse scores [42]. Daytime dysfunction scores were higher in the high smartphone-use group, and positive correlations were found between Smartphone Addiction Scale scores and PSQI scores in university students [43]. However, few studies have examined the association between PMPU and sleep. One hypothesis is that many technologies, such as computers, televisions, and phones, emit shortwave light. When adolescents use mobile phones at night, artificial shortwave light exposure could affect sleep and neurobehavioral functions, and chronic, ill-timed exposure to shortwave light causes a malfunction of the circadian timing system, resulting in sleep problems and depressive symptoms [44]. The mechanisms underlying this association remain to be explained, and future studies should reveal more about them. Poor sleep quality can cause mental health problems, and adolescents may suffer depressive symptoms exacerbated by poor sleep. A large population-based study also reported a significant interaction of insomnia and sleep duration with depression, indicating a greater than eightfold increase in the odds of depression among Norwegian adolescents aged 16-18 years with insomnia who slept <6 h [45]. Similarly, a Chinese cross-sectional survey showed that sleep disturbance was more prevalent among Chinese adolescents with depressive symptoms [46]. Additionally, reduced sleep has been found to be associated with increased anxiety scores [47]. Sleep patterns change during the transition to college, and cross-lagged associations between adolescents' sleep and anxiety and depressive symptoms have been considered. Improving adolescents' sleep quality may help prevent depression, as there is evidence that disturbed sleep contributes to the development of depressive symptoms during adolescence [48].
We found that poor sleep quality was an independent risk factor for poor mental health. Furthermore, the association between PMPU and mental health was weaker in adolescents with better sleep quality. These results raise the possibility that good sleep quality may reduce the risk of mental health problems among adolescents with PMPU. Studies that explicitly examined the mediating role of sleep disturbance in the relationship between PMPU and mental health are rare. Our findings support the idea that the relationship between PMPU and mental health is mediated by sleep disturbance, in line with Lemola et al., who reported that sleep disturbance partially mediated the relationship between electronic media use in bed before sleep and symptoms of depression [14]. There is evidence that maintaining healthy sleep patterns may reduce the incidence of mental disorders in college students with PMPU. Understanding these relationships may provide a basis for developing strategies to prevent mental health problems in adolescents with PMPU, and thus warrants further study.

Limitations

Several limitations should be considered when interpreting our results. First, as this was a cross-sectional study, causality cannot be confirmed. Second, students answered the questionnaires anonymously, which may have increased arbitrary disclosure of symptoms. Third, self-report questionnaires may lead to recall bias. Finally, the findings may not generalize to all Chinese students, since only medical college students were included. Future research, particularly cohort studies, is essential for determining the direction of causality between PMPU and mental health problems and for better understanding the influence of sleep quality on this relationship. Nonetheless, whether mental health is a cause or an effect of PMPU is worthy of further discussion.

Conclusions

We found cross-sectional associations between PMPU and mental health problems, with interactions of PMPU and sleep quality on mental health in adolescents. The study highlights that poor sleep quality may play a more significant role in increasing the risk of mental health problems in students with PMPU than in those without PMPU. Based on these results, we suggest that improving sleep quality may be an effective strategy to decrease the risk of mental health problems among adolescents with PMPU.
Genetic and Epigenetic Somatic Alterations in Head and Neck Squamous Cell Carcinomas Are Globally Coordinated but Not Locally Targeted

Background

Solid tumors, including head and neck squamous cell carcinomas (HNSCC), arise as a result of genetic and epigenetic alterations in a sustained stress environment. Little work has been done that simultaneously examines the spectrum of both types of changes in human tumors on a genome-wide scale, and results so far have been limited and mixed. Since it has been hypothesized that epigenetic alterations may act by providing the second carcinogenic hit in gene silencing, we sought to identify genome-wide DNA copy number alterations and CpG dinucleotide methylation events and examine the global/local relationships between these types of alterations in HNSCC.

Methodology/Principal Findings

We have extended a prior analysis of 1,413 cancer-associated loci for epigenetic changes in HNSCC by integrating DNA copy number alterations, measured at 500,000 polymorphic loci, in a case series of 19 primary HNSCC tumors. We have previously demonstrated that local copy number does not bias methylation measurements in this array platform. Importantly, we found that the global pattern of copy number alterations in these tumors was significantly associated with tumor methylation profiles (p<0.002). However, at the local level, gene promoter regions did not exhibit a correlation between copy number and methylation (lowest q = 0.3), and the spectrum of genes affected by each type of alteration was unique.

Conclusion/Significance

This work, using a novel and robust statistical approach, demonstrates that, although a "second hit" mechanism is not likely the predominant mode of action for epigenetic dysregulation in cancer, the patterns of methylation events are associated with the patterns of allele loss. Our work further highlights the utility of integrative genomics approaches in exploring the driving somatic alterations in solid tumors.

Introduction

Head and neck squamous cell carcinoma (HNSCC) is the eighth most commonly diagnosed malignancy in males, responsible for more than an estimated 11,000 deaths each year in the United States [1]. The genetic alterations common to HNSCCs have been characterized using both cytogenetic and molecular approaches. Importantly, the presence of genetic imbalances, specifically loss in chromosomal regions 3p, 8p, 9p, 15p, 18q, and 22q and gains in 1q, 3q, 8q, 11q, 14q, 16q, and 20q, has been shown to be significantly associated with poor patient survival [2,3,4,5,6]. Epigenetic alterations commonly observed in this disease include promoter hypermethylation, resulting in gene silencing, of CDKN2A, CDH1, DAPK1, RASSF1, and MGMT, which have been shown to be associated with patient outcome [7,8,9]. Evidence has emerged that CDKN2A [10,11], RASSF1 [12], and other genes are regulated both by hypermethylation and allele loss in many solid tumor types [13,14,15], leading to the hypothesis that, in addition to classical Knudson inactivation of tumor suppressors through mutation [16], first and second hits commonly occur in the form of promoter methylation and loss of heterozygosity (LOH) even in the absence of mutation. The combination of genetic and epigenetic alterations is fundamental in the genesis of neoplasia, resulting in inappropriate activity levels of cell signaling pathways that regulate key processes such as cellular growth and differentiation, DNA fidelity, apoptosis, and metabolic stability.
Thus, a more complete study of carcinogenesis would include simultaneous evaluation of multiple types of alterations in common tumors. Investigations of various cancers using genome-level technologies, such as high-resolution single nucleotide polymorphism (SNP) microarrays to measure somatically arising allelic imbalance, have shown that these genetic alteration profiles are remarkably diverse [3,17,18]. Further, recent large-scale array-based studies of epigenetic events have yielded similar insight into the pattern of gene silencing in cancers, in that alterations to the promoter methylation status of genes occur in a highly variable pattern even among tumors arising from the same tissue or cell type [19,20,21]. Results from studies employing these methods have been effective in gaining insight into the basis of hereditary disease [22] and in identifying novel candidate cancer genes [23]. Our technological capability to assess both genetic and epigenetic genome-wide alterations has improved. Thus, it is now critically important to begin integrated analyses that will allow us to define the relationship between epigenetic alterations (represented by changes in DNA methylation) and genetic changes (represented by alterations in copy number) that comprise the etiologic keystones of malignant disease. The need for combined high-resolution profiling of DNA copy number and methylation is becoming recognized, particularly as pharmacologic targeting of the epigenome has gained momentum, and methods to simultaneously investigate both types of alterations are emerging [24,25,26]. In addition, recent investigations in gliomas [21] and cancer cell lines [27,28] using combinatorial high-throughput methods have elucidated individual genes that are differentially regulated through these mechanisms; however, the global relationships between epigenetic and copy number profiles in human tumors remain poorly characterized. We hypothesized that epigenetic and genetic alterations in HNSCC are clonally selected in a fashion that is not independent. To investigate this, we have integrated these genomic-level data in an analysis of 19 primary HNSCCs.

DNA Copy Number and Methylation Measurement

Somatic DNA copy number analysis was performed on a representative case series of 19 malignant HNSCCs (Table 1) with high-density Affymetrix 500K SNP mapping arrays, using matched blood DNA as referents. For the purpose of exploration, copy number data were subjected to unsupervised hierarchical clustering (Ward's method with Hamming distance) and coordinately arranged by chromosome (Figure 1A). Consistent with the considerable literature addressing the cytogenetics of HNSCC [29,30], frequent gross structural abnormalities of chromosomes 8q or 3q are observed in 6 (32%) of the cases, appearing as amplifications (red) or allele losses (green), while smaller-scale aberrations are identifiable in most samples. Previously, we reported the methylation status of 1505 CpG loci using the Illumina GoldenGate platform in 68 HNSCC tumors (including the 19 samples with copy number data) [20]. Employing an unsupervised method for clustering methylation data using a mixture of beta distributions, termed recursively partitioned mixture modeling (RPMM) [31], we showed that normal epithelium is distinguished from tumor in the classifications (epigenetic signatures) that result.
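As an aside on the exploratory step described above, the unsupervised hierarchical clustering of the copy number matrix (Ward's method on a Hamming distance) can be sketched as follows. This is an illustrative reimplementation with a synthetic input matrix, not the authors' code, and note that Ward linkage formally assumes Euclidean distances, so treating a Hamming-based tree this way is only an approximation of the description given above.

```python
# Illustrative sketch: hierarchical clustering of a discretized copy number matrix
# using a Hamming distance and Ward linkage (synthetic placeholder input).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(1)
# Rows = tumors, columns = loci; values -1 (loss), 0 (neutral), 1 (gain).
cn_states = rng.integers(-1, 2, size=(19, 500))

distances = pdist(cn_states, metric="hamming")   # fraction of loci with differing states
# SciPy will compute a Ward tree from any condensed distance matrix,
# even though Ward's criterion is derived for Euclidean distances.
tree = linkage(distances, method="ward")

dn = dendrogram(tree, no_plot=True, labels=[f"T{i + 1}" for i in range(19)])
print("leaf order:", dn["ivl"])
```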
When restricting to tumor-only modeling, six methylation profile classes were defined, and class membership was significantly associated with tumor stage (p<0.01) and patient age (p<0.01), and marginally associated with tumor site (p<0.10) and Human Papillomavirus (HPV) status (p<0.10), by permutation tests [20]. Tumor membership in Class 5 carried an increased risk of high-stage disease, while Classes 6 and 2 were associated with a protective effect against advanced stage. Patients in Class 4 had a higher prevalence of HPV16 positivity, and Class 3 had the highest proportion of laryngeal tumors. These associations lend biological significance to the six epigenetic profiles that were identified. Subsequent analyses presented in this report utilize these previously published methylation classifications. Loci profiled for methylation (hypermethylated = blue, hypomethylated = yellow) for the 19 tumors with copy number measurements were visualized by clustering (Figure 1B) and are ordered by their corresponding placement within a dendrogram obtained by RPMM grouping, with the terminal nodes filled by Ward's method of hierarchical clustering. RPMM classes are indicated beneath the dendrogram.

Local Molecular Alterations Are Not Correlated

To compare methylation levels and copy number alteration (CNA) in greater detail, we generated integrated color image plots of methylation and copy number profiles for individual genomic regions around specific loci (Figure 2C) and entire chromosomes of interest (Figure 2A/B and Figure S1), where samples were grouped by RPMM methylation class membership. These plots illustrate the local relationships between DNA methylation and CNA. Specifically, certain loci (e.g. SOX17, chromosome 8) in tumors with allelic amplification exhibit hypermethylation in nearby CpGs compared to those tumors without allelic imbalance in that region. At the same time, methylation values were stable for most loci across all the samples despite local regions of gain or loss, demonstrating that, as we have previously reported [32], the relationship between methylation profile and CNA is neither an artifact of the analysis nor allelic bias in the samples. Initial analyses sought to take advantage of "two hit" gene inactivation, as proposed by Knudson [16], in order to identify potential novel loci as candidates that are causal in this disease. We scanned the genome for locations where there was a systematic relationship between copy number and methylation, such as previously reported sites of hypermethylation and LOH. Calculating the Pearson correlations for all overlapping loci, we encountered only eight loci (q<0.05) that demonstrated a significant correlation between these disparate mechanisms of gene silencing. This apparent independence was true both at individual loci and when averaged over multiple CpGs upstream of the transcriptional start site (TSS) (Table 2). For example, hypermethylation and LOH occurred infrequently within the CDKN2A gene and only at one CpG, while more often DNA methylation occurred in the absence of aberrant copy number states or vice versa (Figure 2C). Importantly, many other loci (such as those within MGMT) had little variation in either form of molecular alteration. We next investigated each form of alteration individually, and estimated the deviations from their expected normal values to determine the significance of CNA and DNA methylation alterations in HNSCC.
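The locus-level correlation test described above (per-CpG Pearson correlation between methylation and copy number, followed by false discovery rate control to obtain q-values) can be sketched as follows. This is a hypothetical illustration: the input arrays and file names are placeholders, and the exact FDR procedure used by the authors is not specified here, so Benjamini-Hochberg is used purely as an example.

```python
# Illustrative sketch: for each CpG locus, correlate methylation beta values with
# local copy number across tumors, then apply a Benjamini-Hochberg correction.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

beta = np.load("methylation_beta.npy")      # shape: (n_loci, n_tumors), placeholder file
copy_number = np.load("copy_number.npy")    # shape: (n_loci, n_tumors), matched loci

rvals, pvals = [], []
for meth_row, cn_row in zip(beta, copy_number):
    r, p = stats.pearsonr(meth_row, cn_row)
    rvals.append(r)
    pvals.append(p)

# FDR control: loci with q < 0.05 would be flagged as places where methylation
# tracks copy number (the kind of locus a "second hit" model would predict).
reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"{reject.sum()} loci with q < 0.05 out of {len(pvals)}")
```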
Volcano plots of gene-specific mean methylation alteration and mean copy number alteration versus p-value revealed distinctions in promoter-associated alterations between molecular processes, evident through an increased tendency for significant loss of methylation versus a tendency for significant increase in copy number (Figure 3A and Table S1). To further explore allele losses in the context of the overall process of methylation at the specific loci on the array, these data were stratified by their RPMM methylation classification. Specifically, the "left class" included tumors in RPMM Methylation Classes 1-4, while the "right class" included Methylation Classes 5-6. These groupings represent the two main epigenetically distinct methylation class subsets and are defined as the initial left and right splits of the RPMM clustering dendrogram, shown in Figure 1B, as adapted from [20]. Interestingly, the pattern of allelic copy number and methylation alterations differed considerably between right and left classes (Figure 3B), with significantly decreased levels of methylation alterations and significantly increased CNA occurring primarily in the left class. This provides evidence that different global processes are at work between the groups of methylation classes and that this distinction is replicated in the copy number data.

CNA and Methylation Profiles Are Not Independent

In order to more fully explore the evidence of a global epigenetic effect on copy number, we plotted genome-wide allele copy number changes in tumors stratified by RPMM methylation class membership (Figure 4). The extent of copy number alterations varied significantly by methylation class (permutation test p<0.002), with tumors in Methylation Classes 1 and 3 showing substantial, large-scale copy number alteration relative to other tumors. This clearly shows that copy number and methylation alterations do not occur independently. If there were no association between genetic and epigenetic alterations, one would observe an even distribution of aberrant copy number states across methylation classes. To investigate the notion that clinical variables previously shown to be associated with the methylation classes may also be the reason for similar copy number data clustering, tests for association between the degree of CNA and the clinical covariates age, site, stage, and HPV16 status were performed. Importantly, these tests were not statistically significant, further indicating that a relationship between these global processes of regulation exists, rather than being purely a manifestation of clinical parameters in both datasets.

Global Methylation in High/Low CNA Tumors

Since it has long been hypothesized that genomic instability is related to decreased levels of global DNA methylation, we measured LINE-1 methylation, as a surrogate marker of global methylation, in tumors with available DNA (n = 11). The tumors that hierarchically clustered in the high CNA group (see Figure 1A) had generally lower LINE-1 methylation than tumors with low levels of allelic imbalance (mean differential methylation: -13.2%, 95% CI: -33.6% to 7.2%).

Discussion

We recently constructed epigenetic profiles of HNSCC, reporting that DNA methylation events are common and associated with etiologically important exposures [20]. Aberrant DNA methylation events have been hypothesized to accumulate initially in a stochastic fashion and, through positive selection, result in clones that have a growth advantage that leads to the genesis of a rapidly dividing tumor.
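Before continuing the discussion, it may help to make the class-level permutation test reported in the Results above concrete. The sketch below is a hypothetical illustration with synthetic inputs: the paper does not specify its exact test statistic, so the variance of class-mean CNA burden is used here purely as an example of how class labels can be shuffled to obtain a permutation p-value.

```python
# Hypothetical sketch: permutation test for whether per-tumor CNA burden differs
# across RPMM methylation classes (synthetic placeholder data).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic inputs: fraction of altered loci per tumor, and an RPMM class label (1-6).
cna_burden = rng.uniform(0.0, 0.5, size=19)
classes = rng.integers(1, 7, size=19)

def between_class_stat(burden, labels):
    # Variance of the class means: large when burden differs strongly by class.
    return np.var([burden[labels == c].mean() for c in np.unique(labels)])

observed = between_class_stat(cna_burden, classes)
perm = np.array([
    between_class_stat(cna_burden, rng.permutation(classes))
    for _ in range(10000)
])
p_value = (np.sum(perm >= observed) + 1) / (len(perm) + 1)
print(f"permutation p = {p_value:.4f}")
```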
Here we expand upon these data and include an analysis of chromosomal integrity in these same tumors. Using genomic-level measurements, we observed a highly significant association between copy number and DNA methylation profiles, definitively showing that these modes of gene regulation are linked in HNSCC. These observations supplement recent evidence from Sadikovic et al. that copy number alterations are generally correlated with both methylation and gene expression levels in osteosarcomas [33]. At the same time, while specific targeting of genes through both mechanisms occurs in a deterministic manner within subgroups of patients, when we tested for regionally matching local (gene-level) epigenetic and copy number events we only observed that global, rather than local, alterations were correlated. This indicates that coordinated two-hit gene inactivation (LOH followed by epigenetic silencing) is not the dominant character of somatic alteration over the genome. As the GoldenGate methylation array investigates nearly 800 cancer-involved genes and is enriched for tumor suppressor-associated loci, we were uniquely positioned to investigate just this question. Recent evidence supports our conclusion, as gene regulation by CNA and DNA methylation measured at 691 loci in meningiomas appears to be somewhat mutually exclusive [34]. In addition, our combined analysis of the promoter regions of previously reported genes with allele loss or hypermethylation demonstrates that this situation is rare (see Table S2); however, a much larger investigation with higher resolution is needed to determine if these alterations occur systematically. One possible explanation for the association between global profiles of DNA methylation and copy number is that amplification or loss of genetic material may result in a bias of measured methylation for CpGs within that region, potentially contributing to the inferred methylation profile (e.g. in our RPMM approach). Indeed, previous microarray-based methods to determine methylation status have been hindered by copy number changes that bias the measured relative methylation values at CpG loci [35]. However, our recent work utilizing bead-arrays has shown that CNA produces little bias in absolute methylation data generated on the GoldenGate methylation panel, except in the case of homozygous deletion [32]. We and others have previously demonstrated the validity of Illumina GoldenGate methylation array results with other high- and low-throughput technologies [36,37]. Integrative analysis revealed that several tumors with similar methylation profiles had large regions of chromosomal abnormality, particularly in chromosomes 8 and 3, consistent with the possible formation of isochromosomes i(8q) and i(3q) in aneuploid cells. These cytogenetic abnormalities commonly appear in HNSCC, possibly a result of chromosomal missegregation events during mitosis [29]. We also observed that Methylation Class 3 tumor data reflect gross allelic amplification of 8q, which extends through the centromere and partially into 8p, possibly indicating a distinct mechanism of formation for this anomaly. Among tumors with an amplified 8q arm, several methylated CpG loci were observed in this region relative to tumors without this gross chromosomal alteration. Two mechanisms can be posited to explain this result.
Firstly, epigenetic dysregulation may occur early in the genesis of these head and neck tumors and aberrant methylation marks are faithfully replicated despite the amplification event, which is consistent with previous reports implicating epigenetic modification as an early event in the progression of this disease [reviewed in 38]. In fact, there is evidence that aberrant methylation in certain chromosomal regions, especially those located near centromeres, predisposes the surrounding area to genetic alteration, including fragile breakpoint sites [39,40]. On the other hand, it is possible that this differential methylation occurs following the chromosomal aberration, possibly in response to the genetic event and selective pressures. However, we are unable to distinguish between these possibilities in our data, highlighting the need for mechanistic studies. Gain of 8q has been reported as a relatively common event in HNSCC, particularly at 8q24 [41], which houses the MYC oncogene, and 8q22, thought to be targeting LRP12 [42]. Similarly, in one-third of our cases we observed amplification of this entire arm, while putative tumor suppressor genes, such as SOX17 and PENK, within this amplified chromosomal arm are methylated. These findings are suggestive of a context wherein genetic modification (possibly a result of genomic instability) is responsible for perpetuating inappropriate oncogene expression with concomitant epigenetic silencing of local tumor suppressors (Figure 2A). Molecular alterations in chromosome 3 have been previously reported as the most prevalent and potentially most important in HNSCC [43]. Consistent with these findings, we observed extensive copy number and methylation alterations in this chromosome. For example, loci in the gastric cancer-associated tumor suppressor HRASLS were more highly methylated in tumors with amplified 3q arms than in tumors that possessed normal 3q. In addition, the proto-oncogenic MST1R loci, associated with poor prognosis through potentiation of cell scattering and invasion in breast cancer [44], were unmethylated in most tumors irrespective of chromosome 3p loss. However, we observed a number of genes that did not follow the expected directions of methylation within copy number variable loci, indicating that they may be hitchhikers or simply regulated by other genetic or epigenetic means. Overall, these structural modifications in chromosomes 3 and 8 are consistent with the literature and are thought to develop early during the genesis of disease [30,45]. Although 13 of the tumors examined (Methylation Classes 1, 3, and 6, Figure 4) demonstrated a preponderance of CNA, we observed a notable lack of CNA among the remaining tumors. We hypothesize that this may be due to these samples having higher levels of aberrant epigenetic or non-copy-number-altering genetic events such as mutation or chromosomal rearrangements. It is also possible that clinical stage could account for the observed levels of abnormal copy number, as this has been reported in other cancers [46]. Other possible confounders include HPV16 status and tumor site, although our data do not indicate associations between any of these covariates and CNA. Larger future studies are required to investigate the nature of these notable differences with statistical rigor. There was also an apparent relationship between global hypomethylation, represented by the extent of LINE-1 DNA sequence methylation, and increased levels of allelic imbalance among HNSCC cases.
This finding is consistent with the literature [47,48] and with the hypothesis that global hypomethylation of transposable elements culminates in genomic instability. While it is apparent that the various modes of alteration are related, the timing of these events is less clear. Our data underscore the need for additional investigations into the chronology of multifaceted somatic alterations leading to the onset and progression of this deadly disease. In our analysis to define the local relationships between copy number and methylation, we observed only one gene (where promoter-associated CpG alterations were averaged), HOXA11, with a marginally significant correlation between methylation and copy number alteration, although a number of individual CpG loci reached significance, including sites within potentially oncogenic GRB10, IHH, and HOXA11. While the strong positive correlation at these sites could indicate selection pressure for dual-mechanism inactivation was occurring to promote neoplasm formation, there was little evidence of this pressure acting over the entire set of measured genes. Our finding that different genes are preferentially targeted through different mechanisms in HNSCC could reflect dramatic differences in the timing of these events (e.g. one type of somatic change predominating early in clonal evolution with the other becoming dominant later in clonal evolution). Alternatively, it is possible that other simultaneous genetic events obviated the need for epigenetic modifications (e.g. copy number-activating mutations common in other cancers [49]) or that sequence context (e.g. proximity to fragile sites or the CpG content of promoter regions) may interact with carcinogen exposure to select the order and the type (epigenetic or genetic) of alteration that inactivates genes. At the same time, our stratified analysis of the two main biological methylation subgroups revealed that the events leading to abnormal copy number and CpG methylation are fundamentally different in each group, suggestive of an overall collateral relationship. In sum, epigenetic profiles in HNSCC are significantly associated with the extent of CNA, but this global relationship is not widely reflected at the local level. Furthermore, the molecular targets of each are dissimilar. The precise mechanisms responsible for gene inactivation are obscure, but in the framework of carcinogenic progression within Knudson's two-hit model, our data indicate that local, coordinate DNA methylation and copy number alteration do not dominate the profile of changes in primary HNSCC.

Figure 3. Log significance plots for mean alteration difference in array genes. Gene regions were compared to their expected value (normal tissue betas for methylation and copy number = 2 for CNA) and t-tests were performed. Negative log-transformed p-values (generated by tumor/normal t-tests) are shown on the y-axes and the indicated mean alterations are displayed in the x-dimension. The space above the dotted line represents a significance level of p < 0.05. A) Promoter-associated methylation alterations (663 genes) and copy number-altered genes (n = 15,790) are shown and B) separated by overall methylation class structure (Left Class n = 10, Right Class n = 9) as defined by grouping the methylation classes based on the original RPMM dendrogram splitting in [20]. doi:10.1371/journal.pone.0009651.g003

Materials and Methods

Study Population/Ethics.
The study group comprised members of a case-control population presenting at Boston-area hospitals from 2000-2004, as previously described [50]. In short, samples from incident cases of HNSCC were microscopically examined and histologically confirmed to have >75% tumor content by the study pathologist. This study was conducted according to the principles expressed in the Declaration of Helsinki. Selected patients were enrolled upon providing written, informed consent. All protocols and documentation were approved by the Brown University institutional review board, administered through the Research Protections Office (Protocol #0707992334). Clinical information was collected and HPV16 status was assessed using short fragment PCR to amplify a region of the L1 gene of HPV16, according to previously published methods [51]. Tumor specimens from all head and neck sites (excluding glandular, nodal, and nasopharyngeal carcinomas) selected for CpG methylation analysis included 26 fresh-frozen samples and 42 formalin-fixed paraffin-embedded (FFPE) archived pathology samples. From those 68 samples, 19 fresh-frozen tumors were selected for copy number analysis by frequency matching to the larger methylation cohort on age, gender, and stage. Matched peripheral blood was used for SNP probe normalization. Eleven fresh-frozen non-malignant specimens from the oral cavity, pharynx, and larynx were procured through the National Disease Research Interchange (NDRI).

DNA Extraction and Array-based Methylation Analysis.

FFPE tumors were sectioned and DNA was isolated, as previously published [20]. DNA was extracted from fresh-frozen tissues and matched peripheral blood samples using the QIAamp DNA mini kit according to the manufacturer's protocol (Qiagen, Valencia, CA). For methylation assessment, sodium bisulfite modification of the DNA was performed using the EZ DNA Methylation Kit (Zymo Research, Orange, CA) with 1 µg of DNA, as described previously [20]. Illumina GoldenGate® methylation bead arrays were used to simultaneously interrogate 1505 CpG loci associated primarily with promoter regions of 803 genes. Arrays were run at the University of California-San Francisco Genomics Core Facility according to the manufacturer's protocol.

LINE-1 Methylation.

Global DNA methylation was quantified for 11 of the 19 tumor samples with available substrate by pyrosequencing following bisulfite-PCR, with primers and protocols as described in [52]. Four CpG dinucleotides within the human LINE-1 transposon consensus sequence 302-331 (Accession X58075) were analyzed using the PyroMark Q96 MD system. DNA methylation at each locus was calculated as the percent methylated signal divided by the sum of the methylated and unmethylated signals, and reported as the mean over all four CpGs. Pyrosequencing reactions were performed in triplicate and bisulfite conversion efficiency was monitored using internal non-CpG cytosine residues.

SNP Genotyping for Copy Number Status.

Tumors were examined for copy number alterations by hybridizing isolated tumor DNA to the GeneChip® Human Mapping 500K single-nucleotide polymorphism array (Affymetrix, Santa Clara, CA) following the manufacturer's established protocols at the Harvard Partners Microarray Core Facility. Probe intensities at each locus were determined in the Affymetrix GeneChip Operating Software and genotype calls were generated using the Genotyping Analysis Software (Affymetrix).
Probe signals were normalized to the matched samples using Copy Number Analysis Tool v4.0.1 [53] (Affymetrix) with the defaults for tuning parameters, Gaussian smoothing, transition decay, and median scaling. Copy number states were inferred by Hidden Markov Model analysis in the same application.

Statistical Analysis.

BeadStudio software from the array manufacturer Illumina (San Diego, CA) was used for methylation dataset assembly. All array data points are represented by fluorescent signals from both M (methylated) and U (unmethylated) alleles; methylation level is given by β = max(M, 0) / (|U| + |M| + 100), and the average methylation (β) value is derived from the approximately 30 replicate methylation measurements, i.e., from the Cy3/Cy5 methylated/unmethylated signal ratio. Subsequent analyses were carried out in the R statistical software package (http://www.r-project.org/). For visualization, hierarchical clustering was performed on sample copy number calls using a Hamming distance metric and Ward's minimum variance method. Copy number clusters were dichotomized into low and high allelic imbalance (Figure 1A); the difference in absolute mean LINE-1 methylation between the two groups was estimated and confidence intervals were computed. Fisher's exact test for small sample sizes was used to test the degree of abnormal copy number (high/low clustering) for association with the covariates stage (dichotomized as I/II and III/IV), site, and presence or absence of HPV16 viral integration, using a Monte Carlo simulation for site. A Wilcoxon rank-sum test was used to test age as a continuous variable versus degree of copy number variation. For visualization of methylation data, tumors were ordered first by the methylation classifications developed in [20] by a recursively partitioned mixture model (RPMM), as described in Houseman et al. [31]. Finally, the terminal nodes were obtained by hierarchically clustering methylation β values using Ward's method with a Euclidean distance metric. Although the 19 tumor samples were the primary focus of this investigation, we used the six classes obtained from the RPMM clustering on 68 tumors, described in [20], because we anticipated better precision in capturing the true biological inferences with the larger sample size. The relationship between DNA copy number and methylation class membership was tested using the mean value of |CNS − 2|, where CNS is the copy number state of each of 500,446 loci. A permutation test with 10,000 iterations using the Kruskal-Wallis test statistic was performed. Locus-specific relationships between copy number and methylation were examined graphically in a chromosome-specific manner. Analyses were restricted to autosomal chromosomes. We investigated the local relationship between methylation and CNA at 1413 loci by calculating the Pearson product-moment correlation coefficients. Note that the discrete nature of the Hidden Markov Model motivates the use of the Pearson, rather than Spearman, coefficient. GoldenGate CpG loci were matched to Affymetrix SNPs in the manner described in [32]. In short, each locus was matched to CNS data by selecting the locus having the closest HG18/NCBI36 coordinate (typically within 1 kb). P-values were calculated via permutation test (5,000 permutations). To correct for multiple comparisons, q-values were computed by the qvalue package in R. To assess correlation at the gene (rather than locus) level, CpGs were matched to genes by chromosomal position, and assigned promoter status if they were upstream of the TSS.
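The β computation and the class-association permutation test just described can be sketched as follows. The authors carried out these analyses in R; the Python snippet below is only an illustration, and the variable names and data layout are hypothetical.

```python
# Illustrative sketch (the authors worked in R) of the beta-value computation and of the
# permutation test for association between CNA burden and methylation class.
import numpy as np
from scipy import stats

def beta_value(M, U):
    """GoldenGate methylation beta from methylated (M) and unmethylated (U) signal intensities."""
    return np.maximum(M, 0) / (np.abs(U) + np.abs(M) + 100.0)

def cna_burden(cns):
    """Per-tumor copy number burden: mean |CNS - 2| over all loci (cns: samples x loci array)."""
    return np.mean(np.abs(cns - 2.0), axis=1)

def permutation_kruskal(burden, classes, n_iter=10_000, seed=0):
    """Permutation p-value (Kruskal-Wallis statistic) for burden differing by methylation class."""
    rng = np.random.default_rng(seed)
    labels = np.unique(classes)
    observed = stats.kruskal(*[burden[classes == c] for c in labels]).statistic
    hits = 0
    for _ in range(n_iter):
        perm = rng.permutation(classes)
        hits += stats.kruskal(*[burden[perm == c] for c in labels]).statistic >= observed
    return (hits + 1) / (n_iter + 1)
```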
Methylation at promoter CpGs was averaged together by gene. Similarly, copy number calls were averaged together for all SNPs associated with a gene. To investigate the molecular processes individually by locus, two-sided, two-sample t-tests assuming unequal variance were used to compare methylation between the 19 tumors and 11 non-diseased tissues. For copy number, a one-sample t-test was used, with the normal copy number call assumed to be 2. Mean alteration differences were considered statistically significant where q < 0.05.

Figure S1. Integrated color image plots for the remaining chromosomes.
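A minimal sketch of the gene-level aggregation and testing described above follows. It is again a Python illustration rather than the authors' R code; the qvalue package computes Storey q-values, for which Benjamini-Hochberg FDR is used below as a stand-in, and all input names are hypothetical.

```python
# Sketch of gene-level aggregation, the two t-tests, and the multiple-testing threshold.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def methylation_tests(tumor_beta, normal_beta):
    """Two-sided, two-sample t-tests (unequal variance) per gene: tumors vs. non-diseased tissue.
    Inputs: arrays of shape (n_samples, n_genes) of promoter-averaged beta values."""
    t, p = stats.ttest_ind(tumor_beta, normal_beta, axis=0, equal_var=False)
    return tumor_beta.mean(axis=0) - normal_beta.mean(axis=0), p

def copy_number_tests(cns):
    """One-sample t-tests per gene against the normal copy number call of 2."""
    t, p = stats.ttest_1samp(cns, popmean=2.0, axis=0)
    return cns.mean(axis=0) - 2.0, p

def significant(pvalues, alpha=0.05):
    """Flag alterations whose FDR-adjusted p-value is below alpha (analogous to q < 0.05)."""
    reject, adjusted, _, _ = multipletests(pvalues, alpha=alpha, method="fdr_bh")
    return reject, adjusted
```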
Hydrolysis Mechanism of Bismuth in Chlorine Salt System Calculated by Density Functional Method

Based on density functional theory, this paper presents the calculated unit-cell electronic properties of BiCl3, BiOCl and Bi3O4Cl, including unit cell energy, band structure, total density of states, partial density of states, Mulliken population, and overlapping population. Combined with a thermodynamic analysis of the Bi hydrolysis process in the chloride system, the mechanism by which chlorine-oxygen bonds in BiCl3 are converted to form BiOCl and Bi3O4Cl through hydrolysis, ethanolysis, and ethylene glycol alcoholysis was obtained by infrared spectroscopy. The results indicate that the energy of the Bi3O4Cl cell system was lower than that of the BiOCl cell, indicating that the Bi3O4Cl structure is more stable. From the analysis of band dispersion, the electron non-locality of the BiOCl bands was relatively large and the orbital extensibility strong; thus the BiOCl structure was relatively active. The density-of-states plot of Bi3O4Cl had the widest pseudo-energy gap, i.e., the covalent bonding in Bi3O4Cl was stronger than in BiOCl. Therefore, the hydrolysis of BiCl3 would preferentially generate Bi3O4Cl with its more stable structure. The charge populations, overlapping populations, and infrared spectra indicate that there were two basic pathways in the hydrolysis and alcoholysis of BiCl3. In the first, two chlorine atoms in BiCl3 were replaced by hydroxyl groups ionized from water or alcohol to form the [Bi(OH)2Cl] monomer, and BiOCl and Bi3O4Cl were formed by intra-molecular or inter-molecular dehydration. In the other, the Bi atom reacted directly with OH− ionized from water or alcohol to form a [Bi-OH] monomer, and a Cl atom replaced the H atom on the hydroxyl group of the [Bi-OH] monomer to further form BiOCl and Bi3O4Cl.

The conversion of the oxychloride to form bismuth oxide [21][22] has been extensively reported. However, the mechanism of hydrolysis preparation of bismuth oxychloride photocatalysts from bismuth chloride has not been reported. The present study was thus motivated. To explore the hydrolysis mechanisms for the preparation of bismuth oxychloride, in this paper the electronic structures of BiCl3, BiOCl, and Bi3O4Cl were calculated using the density functional method. The formation mechanism was explored from the perspectives of cell structure, band structure, density of states, charge population, and overlapping population in the basic reactions of BiCl3 during hydrolysis and alcoholysis, and further confirmed by infrared spectroscopy from the perspective of valence change.

Structure design and calculation of bismuth compounds

Based on first-principles density functional theory (DFT) and the plane-wave pseudopotential method as implemented in the CASTEP module [23][24], the cell structures of the bismuth compounds were optimized, and the band structure, total and partial density of states, charge populations, and overlapping populations were calculated. The BiCl3, BiOCl and Bi3O4Cl cell models were established according to the relevant parameters shown in Table 1. The exchange-correlation interaction between electrons was treated with the generalized gradient approximation (GGA) using the PBE functional. The grid points of k-space were selected by the Monkhorst-Pack scheme, and the total energy and charge density of the system were integrated over the Brillouin zone.
The Brillouin-zone k-point meshes were set to 1×2×2, 5×5×2, and 5×5×4, with plane-wave cutoff energies of 258.5 eV, 489.8 eV, and 489.8 eV, respectively. The self-consistent convergence criterion was set to 2.0 × 10^-6 eV/atom, and the force acting on each atom did not exceed 0.05 eV/nm.

Infrared spectrum analysis - Bi3+ hydrolysis experiment in chlorine salt system

BiCl3, in both solid and liquid (solution) form, was hydrolyzed and alcoholyzed in different systems. BiCl3 solid powder and its solution (0.17 mol/L Bi3+, 1 mol/L Cl−) were mixed with the same amount of deionized water, ethanol, or ethylene glycol at 25 °C (water bath) for 2 h. The solutions were then adjusted to pH = 4 and treated in an ultrasonic cleaner for 1 h. The precipitate was collected by centrifugation, filtered with repeated washing, and dried at 60 °C for 6 h to obtain the product. Blank experiments were performed in parallel, i.e., infrared spectra of the same volumes of deionized water, ethanol, ethylene glycol, and hydrochloric acid were recorded.

First-principles analysis of bismuth compounds

3.1.1 Energy analysis of bismuth compounds

The unit-cell structures of BiCl3, BiOCl and Bi3O4Cl obtained from structure optimization and energy calculation are shown in Table 2. It has been reported that the lower the total cell energy, the more stable the cell structure [25][26][27]. The cell energy of Bi3O4Cl was lower than that of BiOCl. Therefore, hydrolysis of BiCl3 preferentially forms the more stable Bi3O4Cl.

Band analysis of bismuth compounds

As shown in Figure 1, BiCl3 had a band width of 37.2 eV and a forbidden band width of 3.996 eV; BiOCl had a band width of 38.8 eV and a forbidden band width of 2.707 eV; and Bi3O4Cl had a band width of 40.5 eV and a forbidden band width of 2.704 eV. The wider a band is, the larger its dispersion, the smaller the effective mass of its electrons, the greater the degree of non-locality, the stronger the orbital extensibility, and the more chemically active the material. Conversely, a narrow band indicates that the corresponding eigenstates are composed mainly of atomic orbitals localized at particular lattice sites; the electron locality of such a band is strong, the orbital extensibility weak, and the structure stable [28][29]. Figure 1 shows that the band width of BiOCl was smaller than that of Bi3O4Cl, while the dispersion of the BiOCl bands was relatively larger than that of Bi3O4Cl. In terms of band width, therefore, BiOCl was narrower, with stronger electron locality, weaker orbital extensibility, and relatively stable character; in terms of band dispersion, Bi3O4Cl showed small undulation, a small degree of non-locality, weak orbital extensibility, and relatively stable character. The stabilities of BiOCl and Bi3O4Cl are analyzed further below. Figure 2 shows that the valence band (−21.9 to 0 eV) of BiCl3 was mainly formed by the Bi 6s6p and Cl 2s2p electron states; the conduction band (0 to 15.3 eV) arose from the Cl 2p and Bi 6p electron states, and the peak at the Fermi level was mainly associated with the Cl 2p and Bi 6p orbitals.
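For reference, the computational settings listed above (GGA-PBE, Monkhorst-Pack meshes of 1×2×2, 5×5×2 and 5×5×4, and cutoffs of 258.5 eV and 489.8 eV) could be encoded as minimal CASTEP input along the lines of the sketch below. The keyword spellings are quoted from memory and should be checked against the CASTEP documentation; the structure blocks are placeholders, and this is not the authors' actual input.

```python
# Hedged sketch: writing minimal CASTEP-style .param files for the three cells.
# Keyword names are from memory; unit handling (e.g. force tolerance, quoted as
# 0.05 eV/nm in the text) would need to be verified against the CASTEP manual.
SETTINGS = {
    "BiCl3":   {"cutoff_eV": 258.5, "kpoints": (1, 2, 2)},
    "BiOCl":   {"cutoff_eV": 489.8, "kpoints": (5, 5, 2)},
    "Bi3O4Cl": {"cutoff_eV": 489.8, "kpoints": (5, 5, 4)},
}

def write_param(name, cutoff_eV):
    """Write a minimal .param file for a PBE geometry optimization."""
    lines = [
        "task            : GeometryOptimization",
        "xc_functional   : PBE",            # GGA-PBE exchange-correlation
        f"cut_off_energy  : {cutoff_eV}",   # plane-wave cutoff in eV
        "elec_energy_tol : 2.0e-6",         # SCF tolerance, eV/atom
        "geom_force_tol  : 0.05",           # force tolerance (units per CASTEP docs)
    ]
    with open(f"{name}.param", "w") as f:
        f.write("\n".join(lines) + "\n")

for name, s in SETTINGS.items():
    write_param(name, s["cutoff_eV"])
    # The Monkhorst-Pack mesh (e.g. KPOINTS_MP_GRID 5 5 2) would go in the companion
    # .cell file together with the lattice vectors and fractional coordinates.
```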
Density-of-states analysis of bismuth compounds

For BiOCl, the valence band (−21.9 to 0 eV) was mainly formed by the Bi 6s6p5d, O 2s2p and Cl 2s2p electron states, and the conduction band (0 to 16.9 eV) was likewise assigned to the Bi 6s6p5d, O 2s2p and Cl 2s2p states. For Bi3O4Cl, the valence band (−22.4 to 0 eV) was mainly formed by the Bi 6s6p, O 2s2p and Cl 2s2p electron states, and the conduction band (0 to 18.1 eV) was assigned to the O 2p and Bi 6s6p states. The peak at the Fermi level was mainly contributed by the Cl 2p, O 2p and Bi 6s6p orbitals. The width of the pseudo-energy gap (the gap between the low-energy bonding-state peak and the high-energy antibonding-state peak in the density-of-states plot) reflects the strength of covalent bonding: the wider the pseudo-energy gap, the stronger the covalency. The larger the horizontal coordinate of the density-of-states peak, the more readily the electrons outside the nucleus occupy the high-energy region and the more easily they are lost; otherwise, electrons are more easily gained [30][31]. Therefore, comparing the pseudo-energy gap widths in the density-of-states diagrams, viz. BiCl3 < BiOCl < Bi3O4Cl, the covalent bonding between Bi3O4Cl atoms was stronger than in BiCl3 and BiOCl. This suggested that BiCl3 hydrolysis was more likely to produce the relatively stable Bi3O4Cl.

Analysis of the charge populations of bismuth compounds

The charge populations reflect the gain and loss of electrons by atoms: a positive value indicates electron loss and a negative value electron gain. The electron transfer in turn reflects the strength of the bonding interaction between atoms: the more electrons transferred, the stronger the bond interaction, and vice versa [32]. Table 3 shows that each BiCl3 unit cell contained 12 Cl atoms, of which 8 gained 0.42 electrons each and 4 gained 0.45 electrons each, and 4 Bi atoms, each of which lost 1.29 electrons. The electrons lost by each Bi atom were gained by two Cl atoms (0.42 each) and one Cl atom (0.45). Since more electron transfer implies a stronger bond interaction, in the BiCl3 structure the interaction in two of the [Bi-Cl] bond groups was weaker than in the third. Therefore, during hydrolysis BiCl3 would preferentially break the two groups of weakly interacting [Bi-Cl] bonds to form Bi(OH)2Cl. The BiOCl cell contained 2 [BiOCl] groups, in which each Bi atom lost 1.48 electrons, each Cl atom gained 0.56 electrons, and each O atom gained 0.92 electrons. The electrons lost by each Bi atom were gained by one Cl atom and one O atom. Since more electron transfer implies a stronger bond interaction, the [Bi-O] bond interaction was stronger than that of [Bi-Cl], so the Cl ions are easily removed at a later stage to refine Bi2O3 or Bi powder.

Analysis of the overlapping populations of bismuth compounds

The overlapping population expresses the interaction between atoms and can be used to analyze the bonding character and bonding strength between atoms. A positive overlapping population indicates covalent bonding between atoms.
The larger the value, the stronger the covalent bond and the more stable the structure. A negative overlapping population indicates antibonding between atoms: the smaller (more negative) the value, the stronger the repulsion between the atoms and the poorer the stability of the cell. An overlapping population of 0 indicates ionic bonding [33][34][35]. The overlapping populations of BiCl3, BiOCl and Bi3O4Cl are shown in Table 4. The positive overlapping populations of BiCl3 indicate that the bonds between Cl and Bi atoms were covalent. The population of four groups of Bi-Cl bonds was 0.30, and that of eight groups of Bi-Cl bonds was 0.28; the larger the population, the more stable the covalent bond. Each [BiCl3] unit therefore contained one Bi-Cl bond with a population of 0.30 and two Bi-Cl bonds with a population of 0.28. When BiCl3 dissolved in water, hydrolysis occurred, and the hydroxide generated by water ionization replaced the two Cl atoms in the Bi-Cl bonds with a population of 0.28, forming Bi(OH)2Cl, which further formed BiOCl; this is consistent with the charge population analysis above. The positive overlapping populations of BiOCl indicate that the bonds between O and Bi atoms were covalent, with a total overlapping population of 1.12. The overlapping population of O-O was −0.23 (< 0), meaning that the O-O contacts were antibonding, i.e., the two O atoms repel each other; the more negative the value, the stronger the interatomic repulsion and the lower the stability of the unit cell. From the overlapping populations of Bi3O4Cl, the O-Bi bonds can be either covalent or antibonding: 24 groups of O-Bi bonds were covalent, with a total overlapping population of 4.2, while 4 groups of O-Bi bonds were antibonding, with a total of −0.04. The negative overlapping populations of 10 groups of O-O contacts indicate that these were antibonding, with a total of −0.66. The more negative the antibonding overlapping population, the greater the repulsive force experienced by the antibonded atoms in the unit cell and the more unstable the cell. Comparing the sizes and numbers of overlapping populations in BiOCl and Bi3O4Cl: in BiOCl the total covalent overlapping population was 1.12 and the total antibonding overlapping population was −0.28, so their sum was 0.84 and the covalent contribution was about 4 times the antibonding contribution [36][37]. The greater the covalent contribution, the more stable the structure. In Bi3O4Cl the total covalent overlapping population was 4.20 and the total antibonding overlapping population was −0.70, so their sum was 3.50 and the covalent contribution was about 6 times the antibonding contribution. It can be seen that Bi3O4Cl had stronger covalent bonding between atoms than BiOCl and a larger interatomic interaction force, i.e., the stability of the Bi3O4Cl unit cell was higher than that of BiOCl, and a series of Bi(OH)2Cl species would be generated during the hydrolysis of BiCl3.
These Bi(OH)2Cl species could then readily continue to lose water to generate Bi3O4Cl. In summary, the energy of Bi3O4Cl was lower than that of BiOCl, and the pseudo-energy gap widths followed BiOCl < Bi3O4Cl, which suggested that the covalent bonding between Bi3O4Cl atoms was stronger than in BiOCl and that BiCl3 was more likely to form the more stable Bi3O4Cl. The charge populations show that, during hydrolysis of BiCl3, the two [Bi-Cl] bonds with the weaker interaction were preferentially broken to form Bi(OH)2Cl; the [Bi-O] bonds in BiOCl and Bi3O4Cl were stronger than the [Bi-Cl] bonds, so the Cl atoms were easier to remove at a later stage and Bi2O3 or Bi powder was easier to refine. Comparing the ratio of covalent to antibonding overlapping populations per unit cell further gave Bi3O4Cl > BiOCl, indicating that the series of Bi(OH)2Cl species produced during the hydrolysis of BiCl3 could easily continue to lose a molecule of water to generate Bi3O4Cl. In order to check whether the breaking and formation of valence bonds during BiCl3 hydrolysis was consistent with the calculated results, a thermodynamic analysis of the Bi3+ hydrolysis process in the chloride system was carried out for the designed experiments, and the hydrolysis path of BiCl3 and the formation mechanism of the chlorine-oxygen valence bonds were further analyzed by infrared spectroscopy to verify the calculations.

Thermodynamic analysis of Bi3+ hydrolysis process in chloride system

In the Bi3+-Cl-H2O system, Bi3+ is hydrolyzed at low pH, producing multiple solid-phase intermediates including BiOCl, Bi3O4Cl, Bi2O3, and Bi(OH)3. Controlling appropriate reaction conditions can drive Bi3+ to form a series of stable hydrolysates. The stability constants of the bismuth, chloride, and hydroxide ions and the standard molar free energies of formation of the related substances, obtained with the HSC and FactSage software, are listed in Table 5. The reaction of Bi3+ with Cl− to form complex ions in the Bi3+-Cl-H2O system is expressed by formula (1), and the reaction of Bi3+ with OH− to form complex ions by formula (2); the complexation constants of the Bi3+-Cl− and Bi3+-OH− complex ions are shown in Table 6. In the mass-balance expressions (10) and (11), [Cl−]T0 and [Bi3+]T0 denote the total chlorine and bismuth concentrations, respectively.

1) Equilibrium of the BiOCl solution: from reaction (3), 1 mol of bismuth ion and 1 mol of chloride ion are required to form 1 mol of BiOCl, which gives the corresponding mass-balance relation, equation (12). Substituting equations (10) and (11) into equation (12) shows that, when pH ≥ 2, [Cl−] is approximately equal to [Cl−]T0 and is therefore a constant; substituting reaction (3) into formula (9) then yields equation (14).

2) Equilibrium of the Bi2O3 solution: similarly, when pH ≥ 2, [Cl−] is approximately equal to [Cl−]T0 and is also a constant; substituting reaction (6) into formula (9) yields equation (15).

3) Equilibrium of the Bi3O4Cl solution: the formation of 1 mol of Bi3O4Cl requires 3 mol of bismuth ions and 1 mol of chloride ions, giving the corresponding mass balance in which [Cl−] is again a constant. Substituting reaction (7) into formula (9) yields equation (17), and substituting reaction (8) into formula (9) yields equation (18).

When the total concentrations of bismuth and chlorine are given, the relationship between pH and lg[Bi3+] follows from equations (14), (15), (17), and (18), as shown in Figure 3.
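A short sketch of how such equilibrium lines can be generated is given below. The equilibrium constants are placeholders rather than the values from Tables 5-7, and the dissolution stoichiometries are my own reading of the reactions; each solubility equilibrium makes lg[Bi3+] a linear function of pH at fixed total chloride.

```python
# Hedged sketch: lg[Bi3+]-pH equilibrium lines for the candidate solid phases.
# logK values are placeholders (NOT the paper's constants); stoichiometries assumed.
import numpy as np

LOG_CL_TOTAL = np.log10(1.0)   # assumed total chloride of 1 mol/L, as in the experiment above

# lg[Bi3+] = logK - n_H * pH - n_Cl * lg[Cl-]   (per mol of Bi3+ released)
PHASES = {
    "BiOCl":   {"logK": 0.0, "n_H": 2,     "n_Cl": 1},     # BiOCl + 2H+ = Bi3+ + Cl- + H2O
    "Bi2O3":   {"logK": 0.0, "n_H": 3,     "n_Cl": 0},     # 1/2 Bi2O3 + 3H+ = Bi3+ + 3/2 H2O
    "Bi(OH)3": {"logK": 0.0, "n_H": 3,     "n_Cl": 0},     # Bi(OH)3 + 3H+ = Bi3+ + 3H2O
    "Bi3O4Cl": {"logK": 0.0, "n_H": 8 / 3, "n_Cl": 1 / 3}, # Bi3O4Cl + 8H+ = 3Bi3+ + Cl- + 4H2O
}

def lg_bi(phase, pH):
    p = PHASES[phase]
    return p["logK"] - p["n_H"] * pH - p["n_Cl"] * LOG_CL_TOTAL

pH = np.linspace(0, 7, 71)
curves = {name: lg_bi(name, pH) for name in PHASES}   # one equilibrium line per solid phase
```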
The figure shows the equilibrium lines between the solid and liquid phases: the area above each curve is the solid-phase stability zone and the area below it is the liquid-phase stability zone. The diagram divides into four solid-phase stability zones, namely BiOCl, Bi2O3, Bi(OH)3 and Bi3O4Cl. In the Bi3+-Cl-H2O system, when the pH was controlled within 3 < pH < 5, stable BiOCl and Bi3O4Cl could form; the Bi3+ hydrolysis verification experiments in the chloride system were therefore adjusted to pH 4.

Analysis of the bismuth hydrolysis mechanism in the chloride system by infrared spectroscopy

The filtrates from the hydrolysis, ethanolysis, and ethylene glycol alcoholysis of BiCl3 were characterized by infrared spectroscopy, with blank experiments for comparison, to verify the reaction path of BiCl3 during hydrolysis and the chlorine-oxygen conversion mechanism [38][39]. The results are shown in Figures 4 to 7, and the characteristic infrared peaks are listed in Table 8.

Infrared spectroscopy analysis of BiCl3 solid hydrolysis in the Bi3+-Cl-H2O system

It can be seen from curves A and B in Figure 4 that a peak occurred at 3313.21 cm−1, assigned to the O-H stretching vibration, and a peak occurred at 1643.38 cm−1, assigned to the O-H bending vibration. On the one hand, the addition and hydrolysis of BiCl3 strengthened the degree of hydroxyl ionization in the water, increasing the number of free hydroxyl groups so that the hydroxyl concentration was relatively large. On the other hand, after BiCl3 was added and ionized, a Bi atom replaced an H atom of water ([H-OH]) to form a [Bi-OH] monomer. In the [Bi-OH] monomer, a hydrogen bond exists between the oxygen atom of the OH and another water molecule, which is not easily broken. The electronegativity of chlorine is 3.16 while that of hydrogen in water is 2.1, and free Cl ions were generated in the water; these readily replaced the hydrogen atom of the water hydroxyl group to form [Cl-O-Bi], i.e., BiOCl. In addition, the Bi nucleus is larger than the H nucleus, so Bi readily formed Bi(OH)3 with the hydroxyl groups generated by water ionization; since Bi(OH)3 is unstable, it continued to hydrolyze to form BiOCl or Bi3O4Cl. When the Cl atom replaced the H atom of the water hydroxyl group, the electron cloud density increased, the force constant k increased, and an inductive effect occurred; the group frequency shifted to a higher wavenumber and, correspondingly, the infrared spectrum of water red-shifted. The stronger the electronegativity of the substituting element, the stronger the inductive effect and the more obvious the shift of the absorption peak toward high wavenumbers [40][41]. Comparing curves A and B in Figure 4, the absorption peak at 1643.38 cm−1 shifted by only 0.25 cm−1, whereas the absorption peak at 3313.21 cm−1 shifted by 9.22 cm−1, demonstrating that the red shift of the 3313.21 cm−1 peak was due not only to the increased hydroxyl concentration in solution but also to the inductive effect of chlorine replacing the hydrogen of the water hydroxyl group.
Infrared spectroscopy analysis of BiCl3 solid hydrolysis in the Bi3+-Cl-C2H5OH system

Comparing curves C and D in Figure 5, the absorption peak at 3316.10 cm−1 shifted by 3.76 cm−1 toward the high-wavenumber region, and the absorption peak at 636.44 cm−1 shifted by 3.29 cm−1 toward the low-wavenumber region. This resulted from the H on the -OH of ethanol being replaced by Cl, which increased the electron cloud density and the force constant k and produced an inductive effect; the group frequency at 3316.10 cm−1 therefore shifted to a higher wavenumber and the infrared spectrum of ethanol red-shifted. Because chlorine (electronegativity 3.16) is more electronegative than hydrogen (2.1), with carbon at 2.55 and oxygen at 3.44, it replaced the H on the -OH, forming a monomer such as [C-O-Cl]. The greater the electronegativity difference between the two ends of a bond (i.e., the greater its polarity), the stronger the absorption peak and the more the peak red-shifts; the weaker the polarity, the more the peak blue-shifts. The O-Cl polarity was weaker than that of -OH, so the absorption peak at 636.44 cm−1 shifted to the low-wavenumber region (a blue shift), proving that the H atom on the -OH was replaced by a Cl atom. Since the alcoholysis of BiCl3 in ethanol was slow, ammonia water was added to the BiCl3 + C2H5OH system to promote alcoholysis [42][43][44]. Comparing curves E and F in Figure 5, the peak at 1642.82 cm−1 arose from the N-H stretching vibration of NH3; the absorption peak at 3358.72 cm−1 red-shifted by 5.98 cm−1 and the absorption peak at 562.79 cm−1 blue-shifted by 18.99 cm−1 toward the low-wavenumber region, further confirming the substitution reaction of BiCl3 in ethanol.

Infrared spectroscopy analysis of BiCl3 solid hydrolysis in the Bi3+-Cl-(CH2OH)2 system

In order to understand the mechanism of chlorine-oxygen regulation in the hydrolysis of BiCl3 more clearly, an additional experiment was performed to determine whether BiCl3 could undergo alcoholysis in ethylene glycol. Comparing curves G and H in Figure 6, the peak at 3297.97 cm−1 blue-shifted by 4.93 cm−1 toward lower wavenumbers. This is because chlorine (electronegativity 3.16) is more electronegative than hydrogen (2.1), with carbon at 2.55 and oxygen at 3.44; after the hydrogen on the -OH was replaced, a monomer such as [C-O-Cl] was formed. As above, the greater the bond polarity, the stronger the absorption peak and the more it red-shifts, while weaker polarity produces a blue shift. Since the O-Cl polarity was weaker than that of -OH, the absorption peak at 3297.97 cm−1 shifted toward the low-wavenumber region, causing a blue shift, which proves that the hydrogen on the -OH was replaced by chlorine. On the other hand, the absorption peak at 881.27 cm−1 red-shifted by 0.02 cm−1 and the absorption peak at 860.18 cm−1 blue-shifted by 0.4 cm−1.
This occurred because the hydrogen of one -OH in ethylene glycol was replaced by chlorine while the adjacent -OH remained unsubstituted: the substitution increased the electron cloud density and the force constant k and produced an inductive effect, causing the absorption peak at 881.27 cm−1 to shift to a higher wavenumber, so the infrared spectrum of the diol slightly red-shifted, while the unsubstituted -OH instead showed a blue shift. Since the alcoholysis of BiCl3 in ethylene glycol was extremely slow, with almost no alcoholysis occurring, ammonia water was added to the BiCl3 + (CH2OH)2 system to promote alcoholysis [45]. Comparing curves I and J in Figure 6, the peak at 1643.32 cm−1 was generated by the N-H stretching vibration of NH3; comparing curves E and F, the absorption peak at 3284.52 cm−1 blue-shifted by 3.47 cm−1, the absorption peak at 882.33 cm−1 blue-shifted by 0.4 cm−1, and the absorption peak at 860.99 cm−1 red-shifted by 0.49 cm−1, further confirming the substitution reaction of BiCl3 in ethylene glycol. Consistent with this, an additional absorption peak appeared at 1643.27 cm−1, generated by the -NH stretching vibration.

Hydrolysis mechanism of BiCl3 liquid in the Bi3+-HCl system

To verify whether liquid BiCl3 behaves like the solid under hydrolysis, ethanolysis, and glycolysis, a simulated liquid test was performed: a BiCl3 solution of defined concentration was prepared and the experiments above were repeated under the same conditions, followed by infrared analysis. The final results are shown in Figure 7 (1), Figure 7 (2), and Figure 7 (3), and are consistent with the results in Figures 4, 5, and 6, respectively; since the analysis is essentially the same, it is not repeated here.

4. Conclusions

The electronic properties of the BiCl3, BiOCl and Bi3O4Cl cells were calculated here by the density functional method. The valence-bond character of the BiCl3, BiOCl and Bi3O4Cl cell structures was analyzed in terms of unit cell structure, unit cell energy, band structure, total density of states, partial density of states, Mulliken population, and overlapping population. The atomic transfer pathway by which BiOCl and Bi3O4Cl form during the hydrolysis of BiCl3 was further analyzed by infrared spectroscopy. Two main pathways were shown for the hydrolysis of BiCl3 into the oxychlorides. In the first, the [Bi-Cl] ionic bonds of BiCl3 were broken and hydroxide replaced the chlorine atoms to form Bi(OH)2Cl. Bi(OH)2Cl is extremely unstable in aqueous solution and readily continues to hydrolyze: its two hydroxyl groups easily combine with each other and lose part of their water, or the hydroxyl groups react with H+ in solution and lose a molecule of water. The number of Bi(OH)2Cl units involved in these reactions determined the extent and complexity of the series of chlorinated compounds (BixOyClz) formed. In the second pathway, the bismuth atom readily formed a [Bi-OH] monomer with a hydroxyl group, and a chlorine atom then replaced the hydrogen atom on the hydroxyl group of the [Bi-OH] monomer to form BiOCl directly. Infrared spectroscopy showed that, in the water, ethanol, and ethylene glycol systems, the water and alcohol hydroxyl bands showed red shifts and blue shifts, respectively; the hydroxyl vibrations in the high-wavenumber region were affected by the substitution of hydrogen atoms by chlorine atoms, which increased the electron cloud density, increased the force constant k, and produced an inductive effect.
The group frequencies shifted to higher wavenumbers and the infrared spectra red-shifted. The -O-Cl polarity was relatively weaker than that of -OH, so the hydroxyl vibrations in the low-wavenumber region shifted further toward low wavenumbers, causing a blue shift, which proved that the hydrogen atom on the -OH was replaced by a chlorine atom. This was consistent with the calculated strengths of the chlorine-oxygen covalent bonds, and the experimental results thus further verified the density functional calculations.
Stranger Danger? Investor Behavior and Incentives on Cryptocurrency Copy-Trading Platforms

Several large financial trading platforms have recently begun implementing "copy trading," a process by which a leader allows copiers to automatically mirror their trades in exchange for a share of the profits realized. While it has been shown in many contexts that platform design considerably influences user choices—users tend to disproportionately trust rankings presented to them—we would expect that here, copiers exercise due diligence given the money at stake, typically USD 500–2 000 or more. We perform a quantitative analysis of two major cryptocurrency copy-trading platforms, with different default leader ranking algorithms. One of these platforms additionally changed the information displayed during our study. In all cases, we show that the platform UI significantly influences copiers' decisions. Besides being sub-optimal, this influence is problematic as rankings are often easily gameable by unscrupulous leaders who prey on novice copiers, and they create perverse incentives for all platform users.

INTRODUCTION

Making investment decisions on complex financial products is both difficult and time consuming. In particular, modern investment instruments with potentially high rewards (and equally high risks), such as cryptocurrencies and their derivative products, operate in 24/7 markets, and often present tremendous volatility. Fortunes can be made, or lost, in mere hours. Hence, prudent investing requires not only time spent learning the complexities of the market, but close attention to constantly monitor events, news, and market movements. To alleviate this burden, in the late 2000s, several financial trading platforms—e.g., eToro, 1 ZuluTrade, 2 and ayondo, 3 among others—started offering "copy trading," also known as "social trading." Copy trading allows (novice) investors, or copiers (also known as "followers"), to delegate investment decisions and passively benefit from the expertise of a leader (or signal provider) in exchange for a share of any realized profit. The concept of delegation itself is not new—portfolio managers have, for decades, performed similar services—but copy trading takes it to an extreme, by letting any copier follow any leader at the click of a button. Importantly, no endorsement or credentials are needed to become a leader: anybody can play that role, as long as somebody is willing to copy their portfolio and trades. Copy trading has gained increased attention and popularity in the recent past. For instance, in 2023, Twitter announced a forthcoming partnership with eToro to offer copy-trading services [12]. Cryptocurrency investment, in particular, is an area where copy trading has surged in popularity, due to the large number of traders and considerable investment risks leading to high potential upsides [39,63]. Several crypto exchanges launched copy trading services on their platforms, claiming that they offer environments where individuals can successfully invest without having to pay close attention to price movements and without any deep knowledge of finance. In particular, the top three cryptocurrency exchanges at the time of writing, 4 Binance, OKX, and Bybit, now offer copy-trading services.
However, revenue incentives in copy-trading platforms can cause conflicts of interest. Copiers want to find competent leaders, while leaders are incentivized to appear profitable, since their monetary rewards increase with the number of copiers. In addition, the platform itself profits from trading commissions, which may lead it to prioritize trading volume over user profits. This revenue structure can lead to the use of manipulative design patterns (also known as "dark patterns") [30,38,48,57,58]—design features adopted to covertly raise user engagement [20] (i.e., here, getting users to trade more than they originally planned). A particularly prominent manipulative design pattern in copy-trading platforms is gamification [21,30]. Specifically, copy-trading platforms advertise top portfolios on their main (landing) pages with leading phrases like "most liked" and "most profitable." One of the platforms we study adorns top portfolios with crown icons. Moreover, many platforms hold trading competitions between users, and offer bonuses to users who trade more. Among gamification features, leaderboards are common to all cryptocurrency copy-trading platforms. These leaderboards sort leader portfolios based on certain performance metrics. High-ranking portfolios are prominently featured, while low-ranking portfolios are virtually invisible to copy traders. This can create perverse incentives: instead of trying to maximize the performance of their portfolio, leaders may try to get as high a leaderboard rank as possible. This is particularly problematic if the correlation between a portfolio's actual financial returns and its leaderboard ranking is weak. Prior literature on search engine result pages (SERP) unfortunately suggests strategies optimizing for leaderboard placement are likely to yield success. Guan and Cutrell [32] show that SERP sorting order matters significantly. Pages that rank higher are more likely to be found than those with a lower rank, which are virtually invisible. More recently, Trielli and Diakopoulos [64] and Gleason et al. [29] confirm that top-ranked items dominate clicks, and also show that SERP design affects subsequent browsing behavior. These phenomena are not limited to information searches. For product searches (i.e., those that ultimately involve a payment), Edelman and Lai [23] demonstrate that the highlighted area for paid listings in Google's flight search influences user choices. In short, these prior studies hint that portfolios ranked high on a leaderboard should attract more copiers, regardless of actual financial performance. The fundamental question we attempt to answer in this work is whether current market designs adopted by copy trading platforms truly benefit users—both leaders and copiers. While design patterns, particularly leaderboards, may significantly impact user behavior on many online platforms, the monetary stake involved in portfolio choice is particularly significant here: the average copier entrusts USD 500–2 000 to their leaders, and some greatly exceed these amounts. Furthermore, financial literature suggests choosing portfolios based on past performance records, which leaderboards are based on, carries high risk [19,22,49].
To the best of our knowledge, however, the extent to which user interface (here, leaderboard) design features influence user choice when potentially large amounts of money are at stake remains an unexplored question. UI design effects are extensively studied in online shopping [38,46], social media [57], and privacy protection [2,3,67], but there is a notable lack of analysis for online financial services, where the monetary stake is much more consequential than in these other environments. The paucity of work in the area may be due to the relative recency of online financial services. However, they are currently experiencing a rapid rise: a study shows that 17% of US adults, particularly young adults (e.g., 41% of young men), have participated in cryptocurrency trading, which is primarily offered via online platforms and mobile apps [26]. This is an impressive percentage considering that modern cryptocurrencies appeared in 2008, and were virtually unknown to the mainstream until 2011. To fill this gap in the literature, we attempt to measure the impact of UI design patterns on online financial services, specifically by answering the following research questions:

RQ1 Does (cryptocurrency) copy-trading platform website design, specifically the ranking order of leader portfolios, influence copiers' trading choices?
RQ2 Does current platform design help copy-traders realize profits and "beat the market"?
RQ3 Are there systematic dangers inherent to the current designs of (cryptocurrency) copy-trading markets? If so, which measures can help users recognize risks and evaluate the merits of copy trading?

Our research can also contribute to addressing financial regulator concerns. Online marketplaces' use of "digital engagement practices" (DEPs)—the term regulators have recently started using to refer to design features chosen for attracting users, including leaderboards—is receiving increased attention as online trading grows and starts to foster new trading behaviors, such as trading activism as exemplified by the GameStop frenzy [27]. More precisely, the U.S. Securities and Exchange Commission (SEC) released a request for comments (RFC) on the use of DEPs in 2021 [60]. The UK Financial Conduct Authority (FCA) also recently published an analysis that suggests that online platforms employ design practices that may have an adverse effect on consumers, and called for further study [33].

Figure 1: TraderWagon landing page (Jan. 2023). This landing page prominently advertises one can "copy trades with one click," and lists a number of leader portfolios, ranked, here, by 30-day return on investment.

To answer these questions, we analyze two major cryptocurrency derivatives copy-trading platforms: TraderWagon 5 and Bybit. 6 Both platforms prominently feature a leaderboard that ranks published portfolios—Figure 1 shows TraderWagon's (old) landing page as an example. It prominently advertises that one can "copy trades with one click," and presents a list of "top" portfolios (ranked here by 30-day return-on-investment) that users may want to copy. These designs seemingly aim to reduce copiers' friction and help them get started with copy trading.
We find that portfolio popularity is highly correlated with how the platform ranks portfolios, rather than with their actual performance. Even though both platforms use different ranking algorithms, copiers predominantly follow whatever ranking scheme the platform suggests. Serendipitously, TraderWagon slightly modified its front page during our study and substituted return-on-investment (ROI)-based rankings for an assortment of short lists each ranking a different metric. We show, through a regression analysis, that this coincides with statistically significant decreases in the number of copiers explained by the ROI-based ranking (RQ1). Neither of the ranking choices is directly correlated with long-term performance. In Bybit's case, rankings reflect leader popularity more than actual performance, and are poorly correlated with future returns. In TraderWagon, rankings over-emphasize recent past performance of a specific portfolio, rather than the consistent excellence of a given trader (RQ2). Crucially, in both cases, leaders do not have to bear large financial risks to climb up the rankings and reap the benefits of having many copiers—even if for a short time. This means leaders can easily abuse leaderboards to increase profits made from copiers. Furthermore, copy-trading platforms and the exchanges where trading is taking place are incentivized to foster as much copy-trading activity as possible, since this drives trading volumes and ultimately platform profits, which leads to undesirable copy-trading platform designs (RQ3). We discuss potential solutions to fix these misaligned incentives.

RELATED WORK AND BACKGROUND

This section discusses the background on copy trading, and highlights some of the challenges users face when investing in cryptocurrencies. We also discuss more extensively prior work on the influence of UI design patterns, especially manipulative designs.

Copy-trading complexities. Even though there is no direct money transfer between copiers (or "followers") and leaders (also known as "signal providers") who publish their portfolios, most literature discusses copy trading as a form of delegated portfolio management [19,49,54]. Delegated portfolio management involves various activities that differ in accountability. At one end of the spectrum, investment funds and money managers are legally required to disclose their performance so that investors can make informed decisions. While the obligation is demanding, trustworthy information helps competitive firms attract customers. On the other end, traders with a large social media following, known as "finfluencers," have no accountability and may even make public statements that conflict with their actual trading positions [39]. As a result, followers may want more assurance to properly judge the value of the advice they are receiving. Copy-trading services (including those for traditional financial assets) were launched to let novice traders leverage the knowledge of more experienced traders, while providing reputational checks to ensure these novice traders follow sound advice. More precisely, copy-trading platforms attempt to provide a reliable delegated portfolio management system in two ways. First, they make signal providers accountable by disclosing their investment performance. Second, they also make it easy for signal providers to discuss their outlooks and investment strategies.
However, while such disclosures should help copiers exercise due diligence, Huddart [35] claims that it is difficult to distinguish whether signal providers' past performance is attributable to their trading skills or to exogenous factors like mere luck. On the contrary, Huddart argues that, because copy-trading platforms have very low barriers to entry (i.e., virtually anybody can become a signal provider), they motivate traders to take risky positions to differentiate themselves from the rest of the pack. Equally concerning, Dorfleitner et al. [22] show that simply following high-performance portfolios, which are often prominently featured on copy-trading platforms, can lead to losses. Doering et al. [19] and Neumann [49] explain this by showing that signal provider investment returns display non-normal distributions, which yield systematic risks of rare but large losses for followers. Instead, they claim that copiers should choose leaders using risk-adjusted performance metrics. As we will demonstrate, these findings may be misaligned with the incentives of the copy-trading platforms themselves. Indeed, copy-trading platforms make money from trading volumes—it does not matter who wins and who loses (and how much they win/lose), as long as they trade. Therefore, these platforms may prefer to adopt features that increase trading volume over any other metric, including customer success.

Leader compensation mechanisms. How leaders are compensated is a crucial factor in the success of copy-trading platforms. Indeed, the competitiveness of these platforms depends on the quality of leaders, and capable leaders will choose the platform that they feel fairly rewards their contributions to the platform's success. However, there is a risk of moral hazard if the compensation scheme incentivizes leaders to engage in deceptive activities or excess risk-taking. Therefore, it is essential that copy-trading platforms adopt compensation schemes that strike a balance between attracting leaders and maintaining market integrity.
Cryptocurrency copy trading.In the past few years, cryptocurrency trading markets started integrating copy trading services into their online platforms.Tellingly, the top three cryptocurrency exchanges at the time of writing, 4 Binance, OKX, and Bybit, ofer copy trading services.The main motivation is that cryptocurrencies are complex, "high-tech, " and thus require a high level of expertise to invest in them.At the same time, their rapid appreciation has led to fortunes being made almost overnight, sparking a "fear of missing out" (FOMO) in those who did not invest [1]. More precisely, cryptocurrencies are among the most volatile class of fnancial assets, and are thus predominantly recognized as highly speculative investments [8].There is no broad consensus about their fundamental underlying value, if any, despite myriads of theoretical [9,16,52,53,59,62] and empirical [43] analyses.As such, forecasting their rise or fall is a notoriously treacherous exercise; in addition, contrary to many traditional fnancial markets, cryptocurrency markets are open 24/7, 365 days a year, which makes them particularly difcult to constantly monitor.Despite these difculties, cryptocurrencies have become popular enough that derivative products, such as "perpetual futures" [4,5], are now being ofered.According to Kawai et al. [40], a leading crypto-derivatives exchange had more than eight million users in 2021.An important feature of these derivative exchanges is that investors can engage in leveraged trading (simply put, trading with a proft or loss multiplier).While leveraged trading ofers a chance for small investors to make signifcant profts, the risk they bear also increases proportionally.The levels of leverage are far higher than those allowed in traditional fnancial markets, and make it even more critical to pick the right bets to proft.More often than not, though, they spell disaster for retail-level investors [63].It is therefore unsurprising that investors-especially novices-look for expert insights, thereby leading copy-trading platforms to fourish. Research on cryptocurrency use.In recent years, numerous studies have explored the usability and user experience of cryptocurrencies, including in non-investment contexts [36,44].Sas et al. [56] show that cryptocurrencies' novel features, e.g., decentralization and independence from trusted third parties, contribute to broad adoption, and Elsden et al. [24] generate a typology of challenges cryptocurrencies pose, focusing on their design choices.The UI of cryptocurrency-related services has been extensively studied: Voskobojnikov et al. [68] show user risks inherent to the design choices in wallet software, and Kraft et al. [42] argue that the UI design of spot exchanges can magnify peer-traders' infuence on investment behaviors. Johani et al. [37] evidence that cryptocurrency price volatility positively correlates with hype-driven posts in online forums, while tech-focused discussions tend to lead to lower price volatility.This result echoes Gao et al. [28]'s fnding that cryptocurrency holders comprise both short-term investors and users believing in future success. These prior studies both show that UI design is critical, and that investors are susceptible to their peers.Our work advances these eforts further by examining the impact of manipulative design patterns embedded in emerging new services, and the potential new risks they breed. 
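As a rough illustration of the leverage mechanics mentioned above, the sketch below computes the PnL of a leveraged perpetual-futures position and applies a stylized liquidation rule. It ignores funding payments and fees, and the maintenance-margin ratio is an assumed placeholder, not any particular exchange's formula.

```python
def leveraged_pnl(margin, leverage, entry_price, exit_price, side="long"):
    """PnL of a leveraged position (simplified: no fees, no funding payments).

    The notional position size is margin * leverage, so a 1% price move
    changes the account by roughly `leverage` percent of the margin.
    """
    direction = 1.0 if side == "long" else -1.0
    price_return = (exit_price - entry_price) / entry_price
    return margin * leverage * direction * price_return

def is_liquidated(margin, leverage, entry_price, mark_price, side="long",
                  maintenance_ratio=0.005):
    """Stylized liquidation check: the exchange force-closes the position
    once losses eat through (almost all of) the posted margin."""
    loss = -leveraged_pnl(margin, leverage, entry_price, mark_price, side)
    return loss >= margin * (1.0 - maintenance_ratio)

if __name__ == "__main__":
    margin = 1_000.0  # USD posted by the trader
    for lev in (1, 10, 50):
        pnl = leveraged_pnl(margin, lev, entry_price=30_000, exit_price=29_400)  # -2% move
        print(f"{lev:>2}x leverage, -2% move: PnL = {pnl:8.1f} USD, "
              f"liquidated = {is_liquidated(margin, lev, 30_000, 29_400)}")
```

A 2% adverse move is a routine event in cryptocurrency markets, yet at 50x leverage it is already enough to wipe out the entire margin in this toy example, which is why leveraged derivatives so often end badly for retail investors.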
Infuence of design patterns.As competition between various online services-including fnancial services-became increasingly ferce, some of these online services started to adopt "manipulative design patterns."Bringnull frst compiled a taxonomy of such design patterns [11], and Grey et al. re-classifed manipulative design patterns into fve categories [30].The common feature of these patterns is that they exploit our cognitive biases to gain more user engagement [48]. Manipulative design patterns are surprisingly common.Mathur et al. show 1 818 out of 11K shopping sites use manipulative design patterns [46].Beyond shopping sites, Schafer et al. show social media use manipulative designs to discourage users from deleting their accounts [57].Likewise, Netfix reportedly designs its website to make users watch videos longer than they originally planned [58].Other studies report manipulative designs that nudge users to compromise their privacy [2,3,50,67]. A manipulative design pattern particularly relevant to our study is "gamifcation, " i.e., integrating game-like features [11,30].While gamifcation itself is not necessarily manipulative per se, it can be an extremely efective technique to increase user engagement.Service providers implement gamifcation by ofering users rewards (or fame) as they accomplish certain tasks or meet certain goals.Some literature suggests that gamifcation may facilitate education [21,30]; the fip side of the coin is that users might be spending unreasonable amounts of time and money.As we will see, copytrading platforms actively engage in gamifcation, by advertising various bonus programs and prominently featuring leaderboardsone of the hallmarks of gamifcation [30].These features potentially incentivize leaders to increase their trading volume to maximize rewards and publicity. Leaderboards can also negatively afect copiers.Recall the Trader-Wagon example in Figure 1.These pages are, to some extent, similar to top results presented by search engine results pages (SERPs), whose infuence has been extensively studied.Studies document that SERP design signifcantly afects both user browsing behavior and click-through rates [23,29,32,64].More precisely, the way information is laid out is critical, as Huang et al. [34]'s analysis of mouse movements shows.Novin and Meyers [51] and Azzopardi [6] discuss cognitive biases having a substantial infuence on searching behaviors.Epstein et al. [25] even shows, through an experiment with respondents in dozens of countries, that SERP ranking algorithms are capable of infuencing election polls.These prior works all evidence the critical impact of information ranking algorithms on users. 
While literature documents the use of manipulative designs in various online services and the substantial infuence of SERP design, studies about online fnancial services are notably scarce, possibly due to the relative novelty of these platforms.However, as online fnancial services are increasingly directly marketed toward individuals [7,39,63], studying the impact of design choices in online fnancial platforms is becoming more important.Our study attempts to provide a frst step toward understanding and quantifying the risks inherent to design choices in online fnancial services, using cryptocurrency copy-trading platforms as a case study.Another potential key contribution of our study is to examine the infuence of UI design on user behavior in a high-stake situation.Indeed, although prior research distinctly evidences UI design infuence, whether the situation changes (and if so, how, and to which extent), when users face higher stakes (e.g., monetary losses), is a far more challenging question to address.Our analysis of real market data can help us move toward an answer to this question. DATASET We collect investor data from TraderWagon 5 and Bybit, 6 two major copy-trading platforms for "perpetual futures, " from Oct. 2022 to Aug. 2023.This section briefy introduces each platform and describes our dataset of users' investment records. Ethics of data collection.Importantly, none of the data we collecton either platform-contain personal identifers: trader accounts, in particular, are completely pseudonymous.In addition, we are not correlating multiple sources of data (instead, we use and analyze TraderWagon and Bybit data independently).Therefore, our work, according to our institutional rules, does not qualify as humansubject research, and is thus not subject to IRB review.We are also purposely only using the publicly available API from the sites (as opposed to, e.g., scraping pages), and in doing so, do not violate TraderWagon and Bybit's terms of service. TraderWagon Data TraderWagon was launched in Dec. 2021. 7The primary function of the platform is to match investors willing to publish their portfolios (leaders or signal providers) with those who want to copy them (copiers or followers), in exchange for potential additional proft.Through a partnership with Binance Futures, the largest online cryptocurrency derivatives market at the time of writing, 4 orders from TraderWagon investors are executed on the Binance Futures market (see Appendix A for details).TraderWagon announced that its service would be migrated to Binance in late Dec. 2023, 8 and Binance now hosts a copy-trading platform similar to TraderWagon. 
9 To assist in matching leaders to copiers, TraderWagon provides a ranking, or "leaderboard, " of published portfolios sorted by several investment-performance metrics.As shown in Figure 2, leaders can have multiple portfolios, and copiers select portfolios, rather than individuals.This leaderboard was originally on the site's front page, as shown in Figure 1.(We will discuss later updates to the interface that took place during the course of our study, but they can be ignored for the moment.)These performance metrics include proft and loss (PnL, the total amount of money made or lost in a given interval; in the case of Figure 1, since the portfolio was published), return on investment (ROI, that is, the percentage of money made or lost compared to the initial investment, over a given interval of time; in the case of Figure 1, 30 days); among a host of other metrics we discuss in Appendix A. 7 https://www.facebook.com/photo/?fbid=202413802285898. 8https://traderwagon.zendesk.com/hc/en-us/articles/25580027833753. 9https://www.binance.com/en/copy-tradingTraderWagon also features a number of reward programs.In particular, a referral program allows users to earn a continuous stream of income from their referrals' trading fees. 10Users who have been referred by others can earn rewards simply by making copy trades. 11Along the same lines, TraderWagon also has an afliate program that compensates users for growing their social media following. 12Last, TraderWagon hosts trading competitions, where participants can earn rewards by achieving high scores according to certain performance metric(s). 13All of these programs appear to be designed to increase user engagement. TraderWagon uses proft-based and conditional volume-based compensations schemes for signal providers/leaders. 14If a copier closes positions with a positive realized PnL, the associated leader receives 10% of the proft the copier made as their share for portfolio publication.Moreover, if the weekly PnL of the copier is positive, the leader additionally receives 10% of the transaction fee paid for copying their portfolio. Copiers, broadly speaking, have two options for copying a leader portfolio: 15 fxed-ratio or fxed-amount.With fxed-ratio, copiers mirror the portfolio investment ratios across positions.For instance, if the leader puts USD 100 in their portfolio margin account -that is, they send USD 100 to the platform for trading, and of these, use USD 10 on asset A, and USD 20 on asset B (the rest is unused), the copier will use 10% of their own investment toward asset A and 20% on asset B, regardless of the amount of money the copier has in their own margin account. 16For instance, if the copier has USD 10 000 in their margin, fxed-ratio will lead them to acquire USD 1 000 worth of asset A, and USD 2 000 worth of asset B. With fxed-amount, in short, copiers set total and per-asset amounts when they start copying a portfolio (see Appendix A for the details). TraderWagon sets a maximum number of followers (i.e., a quota) to portfolios separately for each copying mode, ranging from 50 to 200.The maximum quota is determined by the portfolio margin size and the number of copiers. 17 Data collected.We collect data from TraderWagon's publicly available API from Oct. 26, 2022, to Aug. 
31, 2023. The API provides metadata and numeric values for the performance metrics (e.g., PnL and ROI) of leaders' portfolios, reportedly updated once every ten minutes. We also collect data about ongoing and closed positions for each portfolio: the underlying cryptocurrencies involved (e.g., BTC/USDT), position amount, and side (long or short). In addition, closed-position data includes the realized PnL for both the leader and their copiers. We collected portfolio data every twelve hours and position data every seven days until Feb. 4, 2023. We then gradually shifted to shorter data collection intervals to increase data resolution: we collected portfolio data every two hours until Feb. 27, 2023, and every ten minutes thereafter, and position data every day after Feb. 4, 2023.

Publication rules. On TraderWagon, a single leader can publish up to six portfolios whose performance metrics are independently calculated. 18 As such, holding a top-tier portfolio is no guarantee of a leader's overall performance: they may simultaneously hold negative-profit and top-tier portfolios. For instance, in Figure 2, Dave has a portfolio up 12% that is still open, but also recently closed a portfolio that was down 92%. In addition, leaders can close losing portfolios and open new ones at their discretion, which allows them to build a better performance history by rapidly clearing underperforming portfolios.

There is practical evidence that some leaders adopt this very strategy. Figure 3 shows one leader's history, namely their record of publishing and closing portfolios. 19 The fourth portfolio they opened (lowest green line in Figure 3) was successful, and ended up being listed on the top page of TraderWagon in Feb. 2023. However, many of their subsequent portfolios were closed with negative profits within a few days of their creation; this leader only maintained a small number of portfolios for months. In short, they try multiple investment strategies almost simultaneously, and only keep the successful ones. This strongly questions the legitimacy of the portfolio leaderboard as an indicator of overall leader performance.

Website update. On Mar. 16, 2023, TraderWagon redesigned its website. The 30-day ROI leaderboard, which originally occupied the front page (Figure 1) and listed 18 portfolios, was moved to a dedicated page. As shown in Figure 4a, the top page started displaying only four to eight portfolios instead, ranked using several metrics. The leaderboard also changed (Figure 4b), now listing 20 portfolios ordered by 7-day ROI. Switching from 30-day ROI to 7-day ROI does not have a major impact: 44.0% of the top-20 portfolios under the 7-day ROI ranking are also in the top-20 under the 30-day ROI ranking. In addition, TraderWagon switched from showing lifetime PnLs to showing 7-day, 30-day, and 90-day PnLs after the update. The latter is a close approximation of the lifetime PnL, as portfolios are typically short-lived (median: 2.9 days, average: 16.1 days). Except for the different time interval, we did not observe any changes in the calculation of performance metrics. Therefore, this update primarily concerned the website UI design.

Finally, TraderWagon started allowing Binance Futures investors to publish portfolios without registering with TraderWagon in late Feb. 2023. 20 However, we exclude these portfolios since they are listed separately from those opened by TraderWagon-registered leaders and usually have only a few copiers.

Descriptive statistics. Figure 5 shows the number of portfolios and the amount of money staked by followers. Specifically, Fig. 5a shows the number of portfolios, where active portfolios denote those with non-zero PnL values (i.e., taking positions) in the past seven days; Fig.
5b shows the total copiers' assets under management (AUM) in TraderWagon and the average AUM per portfolio, where average AUM is calculated from the total copy amount divided by the number of followed portfolios; and Fig. 5c shows the average amount of money a copier entrusts to a portfolio.The total number of available and active portfolios (Fig. 5a) increases over time.On the other hand, the number of portfolios with followers is roughly stable, indicating that the total number of available options does not appear to greatly infuence copiers' choices.The total amount of money entrusted to leaders (Fig. 5b) is 3-8 million USD in our observation period.This is orders of magnitude smaller than the amount of money allegedly deposited to Binance, which is in the billions of US dollars [66].However, more interestingly, this means that, on average, roughly USD 10 000 are entrusted to each active published leader portfolio, even though there is absolutely no performance or qualifcation requirements to become a leader.Each copier, on average, invests between USD 1 000-2 000 with leaders (Fig. 5c).In short, copiers invest non-negligible amounts of money into leaders, who have not been subject to any strict vetting process. Bybit Data Bybit is a major online cryptocurrency exchange with the secondlargest cryptocurrency trading platform at the time of writing. 4 The exchange was launched in 2018, and started to host a copytrading platform in April 2022, 21 which was awarded an iF design award in 2023 for user experience (UX). 22Similar to TraderWagon, Bybit sorts published portfolios on a leaderboard based on PnLs, ROIs, and other performance-related metrics; and holds campaigns and trading competitions to attract more investors to the platform.Figure 6 shows example screenshots of Bybit.The top-page design (Figure 6a) is very similar to TraderWagon after the update, but the leaderboard page (Figure 6b) uses the aggregated 7-day PnL over all copiers.The landing page notably uses bright colors and crowns on the top three portfolios' icons.Bybit also periodically 21 https://fnance.yahoo.com/news/bybit-launch-copy-trading-084400033.html. 22https://ifdesign.com/en/winner-ranking/project/bybit-copytrading/581618.advertises trading campaigns, in particular, the "World Series of Trading (WSOT, Figure 6c). 23Bybit provides bonuses to highly ranked leaders and copiers participating in the WSOT. Bybit employs a proft-based compensation scheme.Leaders gain between 10% and 15% of their copiers' profts depending on a trader level assigned by the platform.New leaders start at the lowest level, "Cadet, " and receive 10% of their copiers' profts.Leaders can climb up levels by depositing funds and consistently generating profts. 24eaders at the highest level, "Gold, " receive 15%. Likewise, each leader is limited to a maximum number of copiers, determined by the leader's level: Cadets can have at most 100 copiers, while Gold can have 2 000.As an interesting exception, WSOT participants are allowed to have 2 000 copiers regardless of their level.Copiers have two options for mirroring portfolios: (1) a mode similar to fxed-ratio mode in TraderWagon and (2) setting copy parameters by themselves. 25Bybit recommends the frst choice to beginners. Data collected.We collect data from Bybit's publicly available API from Feb. 18, 2023 to Aug. 
31, 2023.The API provides leader metadata and performance metrics, such as PnL and ROI, as well as rankings derived from these metrics for 7-, 30-, and 90-day intervals.The API also gives the number of followers and their associated proft from copying positions for each published portfolio.We collect these data every two hours throughout our observation period. Publication rules.Until Mar.2023, on Bybit, leaders were allowed to publish only a single portfolio.However, in Apr.2023, Bybit announced that it would, from then on, allow users to create "subaccounts," distinct from their main accounts for copy trading. 26eaders can use these subaccounts to publish more than one portfolio, and/or simultaneously be copiers.The Bybit website and API do not provide information to immediately link a specifc portfolio (and the subaccount involved) with a main account. Descriptive statistics.Figure 7 shows the number of published portfolios, the total and average AUMs, and the average amount of money entrusted by a copier, where defnitions for statistics are the same as TraderWagon. Fig. 7a shows that the number of published portfolios steadily increases over time, but number of followed portfolios remains roughly constant at 2 000.This mirrors what we saw in Trader-Wagon: an increased number of leader options does not mean that these options are particularly popular with copiers.In contrast, a clear diference between TraderWagon and Bybit is the ratio of active portfolios to the total number of portfolios: more than 50% for TraderWagon, but below 20% for Bybit, where a majority of portfolios are thus dormant.This can be explained by diferences in portfolio publication rules.As we have seen, leaders in Trader-Wagon have a strong incentive to immediately close unproftable portfolios and create new ones, while -at least until Apr.2023 -Bybit restricted each leader to a single portfolio.Fig. 7b shows that, similar to TraderWagon, the average AUM in Bybit is in the order of USD 10 000.However, Fig. 7c shows that copiers individually invest less money (around USD 500) than in TraderWagon; the average amount is steadily increasing over time. METHODS To analyze the infuence of the leaderboard on how copiers select portfolios, we employ a quantile regression (QR) of portfolio popularity for TraderWagon data.This section introduces this quantile regression. For a given portfolio, we formally defne portfolio popularity as the ratio between the number of copiers and the maximum number of copiers allowed for that portfolio.A popularity of 1 denotes an extremely popular portfolio (which cannot aford more subscribers), whereas a popularity of 0 denotes a portfolio with no copiers at all.Using this normalized metric eschews issues stemming from diferent tiers of leaders being allowed diferent maximum number of copiers (see Section 3 for details).The justifcation for using portfolio popularity is described in Appendix C. Quantile Regression Instead of following a normal distribution, the conditional distribution of portfolio popularity conditioned on explanatory variables (e.g., ROI and performance metrics-based rank) is highly skewed to lower values.As a result, a simple ordinary least square (OLS) regression could miss important efects of explanatory variables. 
In contrast, a quantile regression (QR, [41]) can estimate coefficients without any normality assumptions on the underlying distributions. Hence, QR is robust to the skewness and outliers that are evident in our dataset, which makes it a desirable technique for us. An added benefit of QR is that we can estimate the coefficients for arbitrary quantiles: we can separately consider the impact of explanatory variables on portfolios with small (i.e., low quantile) and large (high quantile) portfolio popularity. By performing multiple QR analyses with different quantile points τ, we can analyze which explanatory variable has the most influence on portfolio popularity at each quantile.

Model construction

TraderWagon features between 600 and 1,200 portfolios in our observation period (see Figure 5). However, more than half of these portfolios are dormant, with zero or negative ROI, so they will not be attractive to copiers; in fact, they will likely be buried in the interface, and potential copiers would have to make significant effort to find them. Hence, we hypothesize that only a few, if any, copiers will consider negative-PnL portfolios. Conservatively, we assume that most copiers will consider the top 100 ranked portfolios.

Model variables. We build a model to infer the influence of performance metrics and interface design on portfolio popularity. We denote the maximum number of copiers allowed for portfolio i at time t on TraderWagon by n^max_{i,t}; n^max ranges from 150 to 400. 27 Let n_{i,t} be the number of copiers of portfolio i at time t. We can then formally define portfolio popularity for portfolio i at time t as

    Popularity_{i,t} = n_{i,t} / n^max_{i,t}.

We next turn to explanatory variables. We first consider the deviation at time t of the 30-day ROI of portfolio i from the average ROI over all top-100 portfolios:

    dROI_{i,t} = ROI_{i,t} − (1/100) Σ_{j ∈ Top100(t)} ROI_{j,t}.

Using the deviation from the average, rather than an absolute value, allows us to offset differences in overall performance across timeslices.

We also consider the (logarithmic) time elapsed since portfolio i was opened, measured in days, age_{i,t} ≡ log(t − t^launch_i), to help us measure the influence of being exposed to copiers for a longer (resp. shorter) period.

Next, we elaborate on the variables representing the effect of platform design. We primarily want to measure the influence of being among the highest portfolios ranked by 30-day ROI, which is the metric used to populate the TraderWagon leaderboard. As discussed in Sec. 3, 18 portfolios were listed on the leaderboard page before the Mar. 16, 2023 update, and 20 thereafter. We thus define an indicator variable

    I^ROI_{i,t} = 1 if portfolio i is within the top 18 (before Mar. 16, 2023) or top 20 (thereafter) of the 30-day ROI ranking at time t, and 0 otherwise.

We define a similar indicator based on PnL rankings. This metric is available to users and can be used to rank portfolios, but it is not the default used for the leaderboard; by contrasting its influence with that of I^ROI, we can tease out the impact of interface defaults:

    I^PnL_{i,t} = 1 if the rank of portfolio i is within the top 18 in lifetime PnL for t before 3/16/2023, or within the top 20 in 90-day PnL for t on or after 3/16/2023, and 0 otherwise.

Finally, we consider another indicator variable capturing the fact that a portfolio was featured on the leaderboard in the past, even if it no longer is:

    I^past_{i,t} = 1 if portfolio i was on the first page of the 30-day ROI-based ranking at some time before t, but is not on that page at t, and 0 otherwise.

I^past allows us to differentiate between consistently low-ranking portfolios and those that dropped out of the rankings.
For observations after Feb. 4, 2023, we take twelve-hour averages of these variables, and compute the indicator variables on the averaged data, so as to equate the number of observations per time period. We confirmed that the correlations between explanatory variables are not strong enough to bias our regression analysis (see Appendix B for details).

QR Models. To summarize, our full-fledged regression model for portfolio popularity, estimated separately at each quantile point τ, is

    Q_τ(Popularity_{i,t}) = β_0(τ) + β_1(τ) dROI_{i,t} + β_2(τ) age_{i,t} + β_3(τ) I^ROI_{i,t} + β_4(τ) I^PnL_{i,t} + β_5(τ) I^past_{i,t}.

We estimate the same specification with the half-day change in popularity, ΔPopularity_{i,t}, as the dependent variable.

RESULTS

This section first considers the correlation between portfolio popularity and portfolio ranking, before delving into the quantile regression results. Finally, we look into the implications of our findings with respect to investment outcomes, i.e., whether the chosen ranking schemes help people maximize profit.

Correlation between publicized portfolios' popularity and rankings

Figure 8a shows that there appears to be a fairly strong correlation between portfolio popularity and rank according to the 30-day ROI ranking: higher-ranked portfolios are more popular. On the other hand, if we rank portfolios by life-long PnL, the correlation appears to be much more modest; in fact, some of the portfolios ranked in the middle of the pack appear to be more popular than some of the top-ranked ones. These results hint that TraderWagon's choice of the 30-day ROI ranking as the default presentation of leaderboard information has a strong impact on popularity. There are two ways we can interpret this result. Copiers may genuinely believe that portfolios with high 30-day ROI are competitive, out of financial rationality - that is, they understand how TraderWagon ranks portfolios and agree with that design choice. The other possibility is that copiers blindly choose portfolios shown on the first several pages: the leaderboard default nudges copiers to select portfolios with high 30-day ROI, even though they may not understand whether it is a good metric or not. To resolve this dilemma, we look at Bybit in Figure 9, where we plot the relationship between the 30-day ROI ranking and portfolio popularity (Fig. 9a), and the relationship between the 7-day aggregate follower PnL ranking (i.e., the sum of the PnL of all followers of a given portfolio) and portfolio popularity (Fig. 9b). 29 The latter reflects what the Bybit leaderboard uses as a default ranking; the former is for comparison with TraderWagon.

Tellingly, in contrast to TraderWagon, portfolio popularity appears to be weakly correlated, if at all, with the 30-day ROI ranking. On the other hand, we observe an apparently strong correlation between the 7-day aggregate copier PnL ranking and portfolio popularity. This appears to confirm that the interface default, rather than the goodness of a specific metric, is crucial to portfolio popularity. We confirm these insights by computing Pearson correlation coefficients between portfolio popularity and rank according to the different performance metrics used in TraderWagon and Bybit: 30-day ROI, life-long PnL, 30-day win rate, 30-day maximum drawdown (MDD), and 7-day aggregate followers' PnL. "Win rate" is the percentage of positions that had a positive PnL when they were closed; "maximum drawdown" (MDD) is the maximum percentage difference for a portfolio between its highest PnL and its lowest PnL. We exclude leaders ranked lower than 50 for TraderWagon, and 300 for Bybit, to prevent contamination from low-ranked leaders who are dormant. 30
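As a concrete sketch of this computation, the snippet below derives the Pearson coefficients from a table of collected snapshots. The DataFrame layout and column names (one row per portfolio and timestamp, with a popularity column and one rank column per metric) are assumptions about how the data could be stored, not the pipeline actually used.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical column names; one row per (portfolio, snapshot time).
RANK_COLUMNS = ["rank_roi_30d", "rank_pnl_life", "rank_winrate_30d",
                "rank_mdd_30d", "rank_followers_pnl_7d"]

def popularity_rank_correlations(df: pd.DataFrame, max_rank: int) -> pd.Series:
    """Pearson correlation between portfolio popularity and each ranking,
    restricted to portfolios ranked better than `max_rank` (e.g., 50 for
    TraderWagon, 300 for Bybit) to exclude dormant portfolios."""
    results = {}
    for col in RANK_COLUMNS:
        subset = df[df[col] <= max_rank].dropna(subset=[col, "popularity"])
        # Ranks are coded with 1 = best, so a strong leaderboard effect
        # appears here as a large-magnitude negative coefficient.
        r, _p = pearsonr(subset[col], subset["popularity"])
        results[col] = r
    return pd.Series(results)

# Example usage (df loaded from the collected API snapshots):
# print(popularity_rank_correlations(traderwagon_df, max_rank=50))
# print(popularity_rank_correlations(bybit_df, max_rank=300))
```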
Table 1 summarizes the results. They confirm what we suspected from graphical inspection: interface defaults - specifically, default leaderboard rankings - play an outsized role in portfolio popularity. This result also suggests that the leaderboard page influences copiers' choices far more than the front page. TraderWagon's and Bybit's landing pages currently show the top-8 portfolios based on the 30-day PnL (TraderWagon) and 7-day ROI (Bybit). There is some overlap with the top-8 portfolios in Table 1: 63.1% of the top-8 portfolios in the 90-day PnL ranking on TraderWagon are also in the top-8 of the 30-day PnL ranking; and 30.5% of the top-8 portfolios in the 7-day ROI ranking on Bybit are also in the top-8 of the 30-day ROI ranking. However, overall, their correlation with portfolio popularity is far smaller than for the portfolios in the respective leaderboards. This indicates that copiers primarily rely on (at most the first couple of pages of) the leaderboard, even to the detriment of any short list featured on the landing page. One possible explanation is that, to users, the short lists might look too much like "featured listings," i.e., advertisements, whereas the leaderboard has the appearance of a more objective ranking.

The moderate correlation between life-long PnL and portfolio popularity on Bybit is due to the fact that life-long PnL and 7-day aggregate follower PnL are highly correlated themselves.

Quantile regression: substantiation of results from correlation coefficients

Table 2 summarizes the results of the QR analysis for TraderWagon with the model described in Section 4. We first see that the excess ROI (dROI) generally has only little impact on portfolio popularity throughout our observation period. Namely, even if the excess ROI reaches 100% (i.e., dROI = 1.0), the number of copiers increases by at most 3-5% before and after the update, respectively. This indicates that ROI, on its own, is not a crucial factor for gaining popularity.

On the other hand, if a portfolio ends up on (the first page of) the leaderboard page (i.e., I^ROI = 1), then we see a median increase in popularity of 12.9-22.3%. This is even more true for the 90th percentile: 62.4%-76.7% of popularity is explained by the presence of the portfolio on the leaderboard page. In short, an increased ROI by itself is not enough to gain copiers, but if it leads to the portfolio breaking into the leaderboard, then it pays large dividends. Figure 10a generalizes the results from Table 2 by displaying the regression coefficients across all quantile points τ, and confirms the outsized influence of being on the leaderboard page on a portfolio's popularity.

Our regression also tells us that a portfolio's age does not significantly influence its popularity, meaning that simply holding a portfolio does not make it more (or less) popular. Conversely, being among the top-20 portfolios in terms of profits and losses (PnL) does have some impact. PnL is not the default leaderboard ranking in TraderWagon, and its impact on popularity is markedly less than that of I^ROI; but this makes sense because PnL is, to some extent, correlated with ROI (although the correlation can be modest for certain portfolios, as we later explain).
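For reference, a quantile regression with the specification of Section 4 can be estimated with standard tooling. The sketch below uses statsmodels' quantreg and assumes the panel has been flattened into a DataFrame whose column names mirror our notation; these names, and the exact quantile grid, are illustrative rather than the authors' actual code.

```python
import statsmodels.formula.api as smf

# One row per (portfolio, 12-hour bucket); columns follow the notation above:
# popularity, d_roi, log_age, on_roi_board, on_pnl_board, was_on_board.
FORMULA = ("popularity ~ d_roi + log_age + on_roi_board "
           "+ on_pnl_board + was_on_board")

def fit_quantile_regressions(df, quantiles=(0.10, 0.25, 0.50, 0.75, 0.90)):
    """Fit one quantile regression per quantile point tau and collect the
    estimated coefficients, so low- and high-popularity portfolios can be
    examined separately."""
    coefficients = {}
    for tau in quantiles:
        model = smf.quantreg(FORMULA, data=df)
        fit = model.fit(q=tau)
        coefficients[tau] = fit.params
    return coefficients

# Example usage, splitting the panel at the Mar. 16, 2023 interface update:
# before = panel[panel["timestamp"] < "2023-03-16"]
# after  = panel[panel["timestamp"] >= "2023-03-16"]
# coef_before = fit_quantile_regressions(before)
# coef_after  = fit_quantile_regressions(after)
```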
We also look at changes in popularity, ΔPopularity. The popularity of portfolios in the 90th percentile increases by 6.1%-6.7% within half a day of a portfolio reaching (the first page of) the leaderboard. Even more tellingly, the adverse effect of dropping off the first page of the rankings is also evident: the 10th-percentile coefficient for I^past shows that a portfolio will lose 1.3%-1.4% in popularity within half a day of dropping off the leaderboard.

Table 2 shows another interesting effect. Recall that the TraderWagon update essentially moved the leaderboard to its own page, and started listing slightly different portfolios on the main landing page. The impact of this does appear in our regression analysis: the positive influence of I^ROI decreased after the interface update (Figure 10). These results suggest that the influence of a portfolio featuring in the top 30-day ROI ranking (i.e., the default for the leaderboard in TraderWagon) on its popularity was reduced after the update. Even so, I^ROI is still highly influential on portfolio popularity (especially when considering the 75th percentile), which goes to show that while copiers follow the default rankings, they primarily trust the leaderboard page rather than the main landing page.

These results support our hypothesis that leaderboards substantially influence copiers' portfolio choices, suggesting that copiers rely on the perceived credibility of "top-rankers" rather than conducting thorough due diligence themselves. Unfortunately, this can result in unprofitable investments. Recall the portfolio publication rules on TraderWagon (Section 3): a leader can publish a number of portfolios, gaming the leaderboard metrics in the process. As a result, high performance metrics do not guarantee a leader's trading skills or any financial returns, and in fact, the competitiveness of such portfolios can quickly decline.

TraderWagon: Impact on leader profits

We next turn to leaders. Leader portfolios provide two sources of profit: direct profits, which come from the portfolio's own PnL, and indirect profits, which come from commissions owed by copiers to the leader - namely, 10% of each copier's profit. The sum of direct and indirect profits yields the total profit for a portfolio. We next examine to what extent having a high-ranking portfolio (i.e., one present on the leaderboard page) impacts indirect profits.

To do so, for each portfolio i and each day d relative to the time t_i^rank at which the portfolio reached the first page of the leaderboard (i.e., had a top-18 or top-20 30-day ROI ranking), we calculate the average ratio between total profit and direct profit:

    R_{i,d} = avg_p[P^tot_{i,p}] / avg_p[P^dir_{i,p}],

where each average is taken over all positions p of portfolio i that were closed on the d-th day from t_i^rank, and P^dir_{i,p}, P^fol_{i,p}, and P^tot_{i,p} denote, respectively, the direct profit from the position itself, the profit copiers/followers made from betting on that position, and the total profit (i.e., the sum of direct and indirect profits) for that position.

Figure 11: Average profit ratio (R_{i,d}) starting one week before t_i^rank, the time at which a portfolio appears on the first page of the leaderboard, and up to two weeks after. Light blue and light orange areas show the 95% confidence intervals (before and after the website update, respectively).
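The ratio R_{i,d} can be computed directly from the closed-position records. The sketch below assumes one row per closed position with hypothetical column names (close_time, leader_pnl, copier_pnl) and approximates the indirect profit as 10% of the copiers' positive PnL, ignoring the additional fee-sharing component.

```python
import pandas as pd

def profit_source_ratio(positions: pd.DataFrame, listing_time) -> pd.Series:
    """Average ratio of total profit (leader PnL plus 10% commission on copier
    profits) to the leader's direct PnL, per day relative to the time the
    portfolio first appeared on the leaderboard page.

    `positions` is assumed to hold one closed position per row, with columns
    close_time, leader_pnl, and copier_pnl (aggregated over all copiers).
    """
    df = positions.copy()
    df["day"] = (df["close_time"] - listing_time).dt.days
    df["commission"] = 0.10 * df["copier_pnl"].clip(lower=0.0)
    df["total"] = df["leader_pnl"] + df["commission"]
    daily = df.groupby("day")[["total", "leader_pnl"]].mean()
    return daily["total"] / daily["leader_pnl"]

# Example: ratios from one week before to two weeks after listing,
# mirroring the window shown in Figure 11.
# ratio = profit_source_ratio(closed_positions, listing_time)
# print(ratio.loc[-7:14])
```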
Figure 11 shows the evolution of this ratio , over time, starting one week before the portfolio made it to the (main) leaderboard page, all the way until two weeks after its inclusion in the leaderboard page.The blue curve shows what happened before TraderWagon changed its interface in March 2023; the orange curve shows what happened after the update.Clearly, the immediate jump after = 0 for both curves indicates that appearing in the leaderboard has an immediate impact on indirect profts.In fact, leaders holding a portfolio listed on the leaderboard page make over half to three quarters of their total proft from copiers' commissions.This further substantiates our claim that leaders have very strong economic incentives to attempt to game leaderboard rankings.Next, we plot, in Figure 12, the average ratio of total proft over direct proft for fourteen days from the time the corresponding portfolio is listed on the frst page of the 30-day ROI ranking ( ∈ (0,...,14) , ) against the portfolio's direct proft in the same time period ( ∈ (0,..., 14) [ ∈ {, } , ]). The fgure shows that in both cases (before, and after the update), high proft source ratios come from very low direct profts.In other words, there appears to be a strong incentive for leaders to maximize ROI at the expense of the PnL.For instance, somebody that turns a USD 1 investment into USD 2 would have a 100% ROI, but only a USD 1 PnL.While a USD 1 PnL is not impressive, a 100% ROI would probably guarantee a spot in the leaderboard, and with that, a large number of copiers since only top portfolios in ROI are visible to copiers.In other words, ROI-based ranking seems to incentivize leaders to take potentially risky bets, but without much at stake, thereby creating a dangerous moral hazard. Bybit: Impact on copier profts In Bybit, the leaderboard orders portfolios by 7-day PnL aggregated over all followers.We next delve into the impact of this ranking on overall proftability for copiers. Figure 13 shows the relationships between a portfolio (direct) ROI (Fig. 13a) and their leaderboard ranking; and between win-rate and leaderboard ranking (Fig. 13b).We frst notice that portfolios ranked around 300 and below appear to be mostly dormant-with zero ROI.For portfolios that have a positive PnL, we observe a slight decrease of both the (median) ROI and the (median) win-rate with the leaderboard ranking.This is not unexpected: 7-day aggregated PnL over all copiers is likely to be at least modestly correlated with the ROI; what is more surprising to us is that the correlation, if any, is quite weak.We next look at the infuence of the leaderboard ranking on copiers' PnL. Figure 14 shows that the aggregated PnL over all copiers of a given portfolio exponentially decreases with the leaderboard rank.This is expected as the leaderboard rank specifcally relies on that metric.More interestingly, the fgure also shows what happens when we normalize this aggregated PnL by the number of followers-the decrease is markedly smaller (and the numbers are small, in the order of USD 1-25 on average past the top 50 ranked portfolios), which means that the number of copiers a given portfolio has is the dominant factor for its ranking.Copiers' PnL (left) Copiers' PnL / Num. 
of copiers (right) Combining the two results, copying top-ranked leaders does not lead to substantial investment profts.We turn to the distribution of portfolio levels (Gold, Silver, Bronze, Cadet) over leaderboard rank in Figure 15.In that fgure, rank is normalized between 0 (top rank) and 1 (lowest rank). 31We show in the left panel the top 10% of portfolios, and in the right panel, the bottom 10%.As expected, there is a mix of higher-level portfolios among highly ranked portfolios.However, we also fnd a similar mix among the lowest ranked portfolios-while portfolios ranked in the middle are almost exclusively "Cadet."While initially surprising, this makes a lot of sense: higher-level portfolios can get more copiers-e.g., 2 000 for Gold compared to 100 for Cadet.Since leaderboard ranking in Bybit is based on the aggregate PnL over all followers, a losing position with a lot of copiers will perform disastrously according to that metric.Unfortunately, the fgure indicates this does happen quite frequently: a number of copiers follow supposedly more reputable, higher-level portfolios, and end up, cumulatively, losing signifcant amounts of money.(Note that these losses may not be realized, since the positions may not be closed yet; nevertheless, it is fair to say that these positions are performing very poorly.)In short, being in a higher level means that the portfolio performance will have multiplicative efect over potentially large segments of the population.We dig deeper on this aspect in Figure 16.First, we plot the evolution over time of the leaderboard rank of Gold portfolios (Fig 16a).We see that initially, Gold-level portfolios are also highly ranked in the leaderboard.Regrettably, this is short-lived: some portfolios start to dive in the rankings as early as two days after becoming Gold; and the majority collapses about six days later.This impacts the profts and losses of their (many) followers (Fig. 16b).For about one week, copiers make a proft, but their PnL plummets and even goes negative.In particular, the bottom 25% Gold portfolios lead to losses of more than USD 15 000 within ten days of their promotion to Gold.In other words, copiers have to be extremely attentive to the fate of the Gold portfolios they are copying, as losses can mount very abruptly.Unfortunately, having to pay such close attention to market movements is precisely what copy-trading platforms are-in theory at least-supposed to alleviate. DISCUSSION This section discusses the implications of our analyses. Implications about design patterns Our results show, through a quantitative analysis of real market data, that leaderboards, a prominent gamifcation feature [30], signifcantly afect copiers' portfolio choices.Specifcally, our correlation coefcients (Figure 8 and 9) and regression results (Figure 10) evidence that leaderboard ranking order substantially afects the popularity of top-listed portfolios, confrming earlier results on click-through rates in the context of SERPs [29,64]. These fndings have two important implications.First, sorting, or ranking algorithms critically infuence user behavior in highstake situations.Our results support prior experimental studies that suggest a signifcant impact on our major decisions, such as Epstein et al. 
[25]'s experiment about SERP's infuence on election polls, by providing an afrmative answer to the question of whether UI designs known to impact user behavior in low-stake situations similarly afect them in higher (e.g., monetary) stakes.Second, UI design is crucial for online fnancial services.Our study demonstrates that UI designs substantially afect copier portfolio choices, even though a myriad of economics literature warns about related risks [19,22,35,49].Considering that many investors on the platforms we study are presumably individual investors, and that they invest non-negligible amounts of money (e.g., >USD 1 000), we observe a priori simple UI design decisions can foster substantial monetary risks.More broadly, this work calls for studying more closely the impact of user interfaces in online fnancial services. Misaligned incentives for leaders We have shown that on both platforms, copiers are overwhelmingly infuenced by the leaderboard default rankings.We discuss how this situation fosters questionable incentives for leaders. TraderWagon portfolios and hedging.We have shown that Trader-Wagon leaders can get signifcant proft in commissions from their copiers; crucially, this income is risk-free, since leaders only share in the profts of their copiers, and not in the losses.In other words, the commission scheme makes it attractive for leaders to quickly appear proftable, rather than patiently building a competitive investment history.Unfortunately, TraderWagon makes this a lot easier than it should be, by allowing each leader to create as many as six portfolios.A rational strategy is to take a range of opposite positions (essentially betting for and against everything on the market), with signifcant risk and potentially high leverage. The idea is that the overall risk across all portfolios is close to zero, since the leader is completely hedging their positions, but that one of the portfolios is likely to get very high ROI and appear in the leaderboard, thereby capturing a bunch of copiers.If the portfolio can then hold its winning ways for a little bit longer, the leader can make signifcant proft from commissions, risk-free; if it does not, then the leader can simply close its portfolios, and try again.This situation creates a moral hazard, and should probably be prevented. Even worse, a ranking purely based on ROI implies that a leader does not even need to invest a signifcant amount of money, since ROI is percentage-based, rather than an absolute measure of proft.To prevent abuse, copy-trading platforms should monitor the trading behavior of leaders, and probably only present data from leaders with a non-negligible amount of money at stake.Bybit and collusion.Bybit's default ranking uses aggregated PnL across all followers.While diferent from TraderWagon's choice, this creates diferent but equally questionable incentives, especially since Bybit now ofers the option for leaders to hold multiple "subportfolios," and ofers no easy way for copiers to trace back these sub-portfolios to the same owner.Worse, a leader can also act as a copier, which means that leaders can simply copy their own portfolios to artifcially infate their aggregated follower PnL. 
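A toy simulation makes it clear why such hedged, multi-portfolio strategies are attractive. In the sketch below, a hypothetical leader opens pairs of long/short portfolios on the same asset: the aggregate PnL is zero by construction (fees and funding are ignored for simplicity), yet one portfolio in each pair always shows a large positive ROI that could qualify it for the leaderboard.

```python
import random

def simulate_hedged_leader(n_pairs=3, margin_per_portfolio=100.0,
                           leverage=20, move_sigma=0.05, n_runs=10_000):
    """Toy simulation of a leader who opens n_pairs of long/short portfolio
    pairs on the same asset. The aggregate PnL is ~0 by construction, yet in
    every run roughly half of the portfolios post a large positive ROI and
    become leaderboard candidates."""
    best_rois, total_pnls = [], []
    for _ in range(n_runs):
        price_return = random.gauss(0.0, move_sigma)
        pnls = []
        for _ in range(n_pairs):
            pnls.append(+margin_per_portfolio * leverage * price_return)  # long leg
            pnls.append(-margin_per_portfolio * leverage * price_return)  # short leg
        total_pnls.append(sum(pnls))
        best_rois.append(max(pnls) / margin_per_portfolio)
    avg = lambda xs: sum(xs) / len(xs)
    print(f"average total PnL across all portfolios: {avg(total_pnls):.2f} USD")
    print(f"average ROI of the leader's best portfolio: {100 * avg(best_rois):.0f}%")

if __name__ == "__main__":
    simulate_hedged_leader()
```

With 20x leverage and 5% price volatility, the leader's best-looking portfolio averages an ROI near 80% while the leader's net exposure is zero, which is exactly the low-risk leaderboard-farming pattern outlined above.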
Here again, a winning strategy is to hedge bets across multiple portfolios, and copy each bet as much as possible to give the illusion of a strong aggregated follower PnL for whichever portfolio is winning.Diferent from TraderWagon, this strategy does require a more considerable amount of money to be invested by a leader, since PnL numbers are absolute, rather than relative proft.Nonetheless, a proper hedging strategy can make this investment almost riskfree, and let the leaders simply make profts from their copiers' commissions.Here, it is critical that copy-trading platforms ensure that users can clearly identify that diferent portfolios belong to the same individual, so that hedging strategies are obvious to potential copiers.Interestingly, Bybit set a policy that prohibits leaders from holding multiple portfolios for hedging purposes. 32To which extent this policy is enforced in practice is unclear. Misaligned incentives for platforms Unfortunately, copy-trading platforms have incentives poorly aligned with mechanisms that could protect users.Trading platforms mainly proft from commissions on each trade, which incentivizes them to promote seemingly successful portfolios and downplay fnancial risks to copiers.These misaligned incentives appear to foster the use of manipulative designs, particularly gamifcation features.In a concerning trend, Binance and OKX, two of the top three cryptoexchanges (Bybit is the other one), 4 launched copy-trading services whose designs closely resemble TraderWagon and Bybit in late 2023.Furthermore, major copy-trading platforms for traditional fnancial assets also adopt similar design patterns. 33s we have shown, these interface designs, particularly leaderboards, can be gamed by unscrupulous leaders-who in fact have very strong incentives to do so.Worse, the current interface design is likely to indirectly cause fnancial hardship to novices.While copy-trading platforms arguably meet their self-professed goal of facilitating trading for novice investors, the cost seems high since those trades are likely disastrous. Safeguard mechanisms As a result, we argue that safeguard mechanisms to help copiers are crucial for copy-trading platforms to succeed on the long run.Currently, copy-trading platforms are designed to let copiers start trading right away.In Figure 1, TraderWagon advertises that users can "copy trade with one click," for instance.However, such frictionless design is at odds with copiers performing due diligence and more fully considering their options.Instead, as we have seen, users heavily rely on leaderboards. 
Copy-trading platforms should recognize the negative impacts of the current interface designs both for users and, even more importantly, for themselves.Indeed, while frictionless designs, such as leaderboards and one-click trading, may contribute to short-term revenue increase for a copy-trading platform, the moral hazards these market mechanisms foster, by being easily gameable by aspiring leaders, may, down the road, cause users to lose all faith in the platform, leading to its eventual collapse.To avoid such a bleak future, platforms should work together to create sound design guidelines, including ethical website design practices.For example, displaying appropriate and timely warnings for high-risk investments, or building didactic content with the goal of improving user fnancial literacy would be highly valuable.In fact, Epstein et al [25] shows timely alerts substantially reduced biases induced by ranking algorithms.Studies evidence high fnancial literacy leads to better fnancial decisions [10,31,47].Tutorial programs for new users may be particularly helpful [45].Alternatively, it may be helpful to give users the autonomy to set up a user-driven UI to meet their investment goals [65].Creating more specifc design guidelines is a fruitful avenue for future research in the feld. Which role for regulators? While a fair and sustainable market structure is crucial for the long-run success of copy-trading platforms, it may be onerous for these platforms to voluntarily introduce safeguard mechanisms as it does not align with their (short-term) incentives to increase trading volume.As a result, many platforms may be reluctant to design and implement the modifcations we argue are needed.This is where regulators can play a role, by incentivizing or even mandating the adoption of safeguard mechanisms such as outlined above.Indeed, individual investors already bear signifcant investment risks [63]; the presence of manipulative design patterns magnifes these risks even more.As such, regulators are likely to step up eforts to protect consumers. Limitations and future work This work presents limitations common to empirical studies.First, our measurement period and the number of platforms studied may not be sufcient to fully capture all investor behaviors.Second, we could not perfectly disentangle all potential confounding factors.While TraderWagon and Bybit share similarities, they are not identical.Furthermore, we have to assume away potential transient efects (e.g., exchange rate fuctuations and shifts in investment trends) in this analysis. A valuable direction for future studies would be to test the robustness of our fndings with experiments more rigorously separating confounding factors.Another interesting extension to this study would be to studying other (copy-trading) platforms to confrm generalizability. Last, our work only scratches the surface of how UI design infuences investor decision-making process.Interviews or surveys would likely provide additional insights about how users decide which portfolios to copy. 
CONCLUSION This paper explores the trading behaviors of investors in Trader-Wagon and Bybit, two major copy trading platforms for cryptocurrency derivative products.In these platforms, novice traders ("copiers," or "followers") delegate their investment decisions to leaders, paying a share of their realized proft.Although the share of copy-trading in the entire cryptocurrency derivative market is still small, the customer base of copy-trading platforms base is steadily increasing.We saw that each leader, on average, is entrusted with more than USD 10 000 of follower funds. These platforms extensively use gamifcation features, notably trading competitions, and "leaderboards," to promote supposedly high performing traders.We fnd, through correlation and quantile regression analyses, that copiers overwhelmingly follow leaderboard default rankings.Our fnding is robust to 1) interface changes (as we have observed on TraderWagon during our measurement interval) and 2) specifc choice of a ranking metric (Bybit and Trader-Wagon use drastically diferent metrics) -answering RQ1 afrmatively. Unfortunately, we also fnd that this strategy may not be particularly efective in terms of maximizing copier proft; often, positions collapse shortly after they appear on the leaderboard and start being extensively copied (in other words, we answered RQ2 negatively). More generally, we also showed that the current market designs create pernicious incentives (answering RQ3 in the afrmative in the process): we outlined strategies for leaders to invest limited amounts of money, nearly risk-free, and create a constellation of portfolios that can lead them to acquire a decent follower base-and with it, a potential source of (risk-free) proft.The copy-trading platforms themselves unfortunately have little incentive to improve the situation, as they are mostly beneftting from trading volume rather than from providing tools for trading proftably. While the picture this article paints is bleak, we believe it can foster considerably more work on how to better communicate to users the inherent risks of their activities.Many modern trading platforms, especially cryptocurrency trading platforms, embrace gamifcation features.However, as we have seen in this paperand as has been extensively discussed in related work [63] -real money is at stake, and this makes these platforms closer to gambling outfts than to video games.We believe that efective messaging about the real risks of these investments is absolutely necessary to protect users; and, a redesign of the interfaces used by copy-trading platforms is also a must.Moreover, because of the aforementioned misaligned incentives, regulators might have to step in to ensure that this messaging is obvious to users.While copy trading may have not been in the spotlight until recently, the negative impact its design choices can have on users is already substantial; it is not too late to try to turn the tide.Bybit.While leaders ranked in the Cadet class comprise the majority of observations (72.6%), those ranked in other classes -Bronze, Silver, and Gold -also have sizable weights in the observations within the top 300 in the 7-day aggregated copiers' PnL ranking (see Section 3 for the relationship between leaders' class and maximum quota).Therefore, we replicate the analysis in Section 5.1 for each class, by calculating the Pearson correlation coefcients for each class (i.e., maximum number) separately. 
Table 4 summarizes the correlation between portfolio popularity and rank in selected metrics for each class. It clearly shows the same tendency as in Section 5 for all classes; namely, the correlation between portfolio popularity and rank based on the 7-day aggregated copiers' PnL ranking is higher than for the 30-day ROI ranking. This result supports our conclusion that the default ranking on the leaderboard (i.e., the 7-day aggregated copiers' PnL ranking) has an outstanding influence on how copiers select portfolios to copy. The modest correlation between life-long PnL and portfolio popularity is due to the fact that life-long PnL and 7-day aggregated follower PnL are correlated themselves. The positive correlation with the 30-day ROI ranking for leaders in the Gold class may be due to their quick decline in rank after they gained substantial copiers (see Section 5.4 for details).

Figure 2: Portfolio publication and selection on TraderWagon. Leaders (Alice and Dave) publish, maintain, and close portfolio(s). Copiers (Bob and Charlie) select portfolio(s) to copy. The investment performance is calculated for each portfolio. Leaders may close existing portfolios and open new ones at any time.

Figure 3: Portfolio publication and closure records of a leader. The fourth portfolio from the bottom (in green) was listed on the front page in Feb. 2023. However, most of this leader's portfolios closed quickly with losses thereafter.

Figure 4: (a) Landing page and (b) leaderboard page after the website update.

Figure 5: Descriptive statistics for TraderWagon investors. Light gray-shaded areas denote partial temporary outages of our collection infrastructure.

Figure 7: Descriptive statistics for Bybit investors. Light gray-shaded areas denote partial temporary outages of our collection infrastructure.

Figure 8: Portfolio popularity in TraderWagon as a function of the 30-day ROI- and life-long PnL-based rankings.

Figure 9: Portfolio popularity on Bybit as a function of different types of portfolio rankings: 30-day ROI (upper panel), and 7-day aggregate follower PnL (lower panel); the latter is the default. The blue line is the median portfolio popularity (over time) for a given rank. The light blue shaded areas denote the 10th-90th percentiles.

Figure 12: Portfolios' average ratio of total profit over direct profit, for the fourteen days from the time they are listed on the first page of the 30-day ROI ranking, as a function of the portfolios' own (direct) profit in the same time period. The blue shaded area represents the distributions' kernel density estimates (KDEs). Portfolios with more than USD 200 in absolute value are omitted in the KDE analysis. The number (n) in each figure's title shows the number of portfolios plotted in the figure.

Figure 13: The relationship between the ROI (left) and win rate (right) of the portfolios and the ranking based on the 7-day followers' PnL. The win rates (close to) zero for portfolios below the 300th rank come from their dormancy.

Figure 15: Ratio of observations at each rank in the 7-day followers' PnL ranking for each portfolio level: Cadet (lowest), Bronze, Silver, and Gold (highest). The left and right panels show the top and bottom 10% of the rankings in descending order, respectively.
Figure 16: Gold-level portfolios' median leaderboard rank (left) and copiers' PnL (right) from the day they are promoted to the Gold level. Shaded areas denote the 25th-75th percentiles.

Table 1: Pearson correlation coefficients between portfolio popularity and portfolio rank based on selected performance metrics (30-day ROI, life-long PnL, 30-day win rate, 30-day MDD, and 7-day aggregated copiers' PnL). Boldfaced entries represent the interface default for each platform. Blank entries indicate that the platform does not show the metric.

Table 2: Quantile regression results for TraderWagon. We omit the standard errors for the estimated coefficients in the quantile regressions since they are very small (typically on the order of 10⁻⁴ or less, due to the large sample size).

Table 3: Correlations between explanatory variables.

Table 4: Pearson correlation coefficients between portfolio popularity and portfolio rank based on selected performance metrics (30-day ROI, life-long PnL, 30-day win rate, and 7-day aggregated copiers' PnL).
v3-fos-license
2016-03-14T22:51:50.573Z
2016-02-04T00:00:00.000
16340089
{ "extfieldsofstudy": [ "Computer Science", "Environmental Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2072-4292/8/2/114/pdf?version=1454573604", "pdf_hash": "0fbe84356e39430b05a6a7191ff00221eab128dd", "pdf_src": "Crawler", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:395", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "f17443cf817dbc9ebcb08035083de31078841f2d", "year": 2016 }
pes2o/s2orc
Evaluating Multi-Sensor Nighttime Earth Observation Data for Identification of Mixed vs. Residential Use in Urban Areas

This paper introduces a novel top-down approach to geospatially identify and distinguish areas of mixed use from predominantly residential areas within urban agglomerations. Under the framework of the World Bank's Central American Country Disaster Risk Profiles (CDRP) initiative, a disaggregated property stock exposure model has been developed as one of the key elements for disaster risk and loss estimation. Global spatial datasets are therefore used consistently to ensure wide-scale applicability and transferability. Residential and mixed use areas need to be identified in order to spatially link accordingly compiled property stock information. In the presented study, multi-sensor nighttime Earth Observation data and derivative products are evaluated as proxies to identify areas of peak human activity. Intense artificial night lighting in that context is associated with a high likelihood of commercial and/or industrial presence. Areas of low light intensity, in turn, can be considered more likely residential. Iterative intensity thresholding is tested for Cuenca City, Ecuador, in order to best match a given reference situation based on cadastral land use data. The results and findings are considered highly relevant for the CDRP initiative, but more generally underline the relevance of remote sensing data for top-down modeling approaches at a wide spatial scale.

Introduction

Issues of urban development are increasingly being addressed at the global scale, with international non-governmental organizations (NGOs) and development institutions often setting the path and moving the public agenda forward. Regularly published reports such as the United Nations' World Urbanization Prospects [1] or the World Bank's World Development [2] and Global Monitoring Reports [3] address fundamental issues and define key research questions to be tackled by the scientific and international development community. In that context it has become more and more evident that spatial data is playing a crucial role for consistent cross-regional analyses and unbiased evaluation of locally implemented actions. Remote sensing data in particular provide a rich and globally consistent source for analyses at multiple levels. At the global scale, different aspects have to be considered than for local-level spatial analyses, including consistency, scalability, retraceability, etc. Several global project initiatives address these issues in various thematic domains. The World Bank's Global Urban Growth Data Initiative, for example, addresses pending issues of regional definition and data incompatibilities and supports the international collaborative setup and development of a consistent data set of global urban extents and associated population distribution patterns. In the same context, the Global Human Settlement Working Group, established under the umbrella of the Group on Earth Observations (GEO) (www.earthobservations.org/ghs), aims at establishing a new generation of global settlement measurements and products based on consistent high-resolution satellite imagery analysis.
The presented study has been carried out within the framework of the World Bank's Country Disaster Risk Profiles (CDRP) project initiative, which has been successfully implemented at the continental scale for Central America [4] and is currently being expanded to the Caribbean Region. With the clear aim of extending to other regions, global applicability and easy transferability are considered crucial for the model setup. Global spatial datasets are therefore used throughout the CDRP project, with the presented approach specifically developed to support implementation of a disaggregated property stock exposure model, one of the key elements for subsequent disaster risk and loss estimation. While focusing primarily on natural hazards and risks, urban-rural identification and intra-urban classification aspects are highly relevant for setting the basic spatial framework for analysis [5].

This paper introduces a novel approach to geospatially identify and distinguish areas of mixed use from predominantly residential areas within urban agglomerations. After initial urban-rural classification at a 1 km grid level, that urban mask needs to be classified into residential and mixed use areas in order to spatially link accordingly compiled property stock information (e.g., from global tabular databases such as PAGER-STR [6]). The distinct identification of urban residential and mixed use areas serves as crucial input to define inventory regions for subsequent exposure assessment. Impervious Surface Area (ISA) data [7] based on remotely sensed nighttime lights from the Defense Meteorological Satellite Program's (DMSP) OLS sensor (Operational Linescan System) are used as a proxy to identify areas of peak human activity, often associated with a high likelihood of commercial and/or industrial presence. ISA is chosen due to its inherent correlation with built-up area (providing an indication of the percentage of built-up area per grid cell) and its resulting suitability for building stock related land use classification. Several ISA thresholds are tested for a case study in Cuenca City, Ecuador, in order to best match a given reference situation on the ground, where local-level cadastral land use data is used to identify the actual distribution ratio of residential vs. mixed use areas. Furthermore, unaltered nightlight intensity data as provided by the VIIRS sensor (Visible Infrared Imaging Radiometer Suite) [8] are evaluated as an alternative to the ISA data. With the DMSP program fading out, VIIRS provides the option for successive nighttime Earth Observation analyses due to its low light imaging capability. We apply the same methodological steps as for the ISA data in order to determine best-matching thresholds for binary land use classification and subsequently perform a comparative analysis of the results. Scale effects are also accounted for in that regard, with VIIRS featuring a higher spatial resolution than OLS-based data products.

Preliminary results of this study were presented at the ECRS-1 conference [9]. Extensive further research and integration of alternative data sources then led to the multi-sensor approach illustrated in this paper, highlighting the relevance of global remote sensing data for top-down modeling approaches at a wide spatial scale. The outcome is considered relevant for global urban spatial modeling in a variety of topical domains including urban monitoring, disaster risk management, and regional development.
Study Area and Data

Due to the availability of detailed in situ reference data for comparative analysis and evaluation of the proposed methodology, the city of Cuenca, Ecuador, was chosen as the study area. Cuenca City is located in the mountainous southern region of Ecuador at an elevation of around 2500 m above sea level and is the capital of the Azuay province (Figure 1). The city stretches across an area of roughly 70 km² and had an urban population of 329,928 inhabitants according to the 2010 census. Latest figures of the Ecuador National Statistical Office (INEC) estimate an urban population of approximately 400,000 in 2015.

The use of satellite-observed nighttime lights has a long tradition in research dealing with monitoring urban areas and patterns of human and economic activity [10][11][12][13][14][15][16] as well as its impact on the environment [17][18][19]. As opposed to attempts of using nighttime lights for basic delineation of urban areas or as weights for population disaggregation, in this paper we rather aim at exploring their use and value in determining intra-urban characteristics.

Public-domain applications of nighttime Earth Observation have long been restricted to one satellite sensor, namely the Operational Linescan System (OLS) onboard the Defense Meteorological Satellite Program (DMSP) platform [20]. More recently, data from the Visible Infrared Imaging Radiometer Suite (VIIRS) sensor onboard the Suomi NPP satellite platform have become available, providing both higher spatial and radiometric resolution and being considered the natural successor to the fading-out DMSP-OLS series [21]. The commercial satellite EROS-B also offers nighttime acquisition capability, even at very high spatial resolution [22]. However, global-scale and temporally continuous open data availability remains restricted to DMSP-OLS and NPP-VIIRS, making them the only reasonable choice given the scope of the above-outlined CDRP initiative. In the following, we briefly introduce the two data sources we use for the Cuenca City case study: (1) the Impervious Surface Area (ISA) product derived from DMSP-OLS; and (2) VIIRS Day/Night Band light intensity data.
Impervious Surface Area (ISA) Data, Derived from DMSP-OLS

The OLS sensor onboard the DMSP satellite series is able to detect faint light on the Earth's surface at night due to its high sensitivity in the visible spectrum. While initially designed to monitor cloud coverage, that low light imaging capacity allows identification of various light emitting sources including human settlements and associated human activity patterns [20]. The National Geophysical Data Center (NGDC) of the National Oceanic and Atmospheric Administration (NOAA) is processing and archiving OLS imagery, thereby also making certain derived products accessible to the public. DMSP-OLS data was first used to approximate impervious surface area (ISA) in the early 2000s in the development of a national-scale model for the conterminous United States [23]. The ISA approach was then consequently adjusted to the global scale, whereby a radiance-calibrated annual composite of nighttime lights is analyzed in conjunction with ancillary data such as population counts. Output is consistently provided at 30 arc-sec spatial resolution, giving an indication of the distribution of manmade surfaces including buildings, roads and related elements [7].

Due to its relevance for a broad set of applications, global impervious surface or general built-up area mapping has been in the focus of attention for a while, with data from different satellite sensors used and various approaches implemented (an overview is provided by [24]); recent efforts include high resolution products such as the Global Urban Footprint (GUF) [25] and the Global Human Settlement Layer (GHSL) [26]. The DMSP-derived ISA data set is unique in the sense that it does not directly extract built-up area from satellite imagery but uses artificial night lighting as a proxy measure. While a detected general correlation between night lights and impervious surfaces provides the basis for the global ISA product, inherent patterns point to different human activities (e.g., commercial, industrial) rather than mere built structures. Given the scope and purpose of the presented study, this two-sided relation to built-up area with a specific weight on non-residential human activity patterns is particularly relevant. Figure 2 shows the global ISA data set for the year 2010 extracted for the Cuenca City study area.
VIIRS Day/Night Band Data

Since 2011 the VIIRS sensor onboard the Suomi NPP satellite platform provides a natural successor to DMSP-OLS, with its panchromatic Day/Night Band (DNB) detecting dim nighttime scenes in a similar manner. Using advanced processing schemes (e.g., excluding/correcting data impacted by stray light), NOAA-NGDC is producing global monthly composite products featuring average radiance values at 15 arc-sec spatial resolution [27]. As a supplementary product, the number of cloud-free observations that was used to create the average composite is reported for each cell.

In addition to its superior spatial resolution, VIIRS offers a set of improvements over OLS-derived nighttime lights. These include lower light detection limits, improved dynamic range, as well as quantification and calibration options previously unavailable [21]. One of the few disadvantages of VIIRS, on the other hand, refers to the later overpass time (after midnight), when outdoor lighting is at a significantly lower level as compared to the early evening when OLS acquisitions are made. Figure 3 illustrates the VIIRS DNB data for the June 2015 monthly composite extracted for the Cuenca City study area (left). For comparative purposes, we also aggregate that data to a 30 arc-sec grid (right), thus matching the resolution of the ISA product.
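As an illustration of the aggregation step just mentioned, the following minimal sketch averages each 2x2 block of 15 arc-sec cells into one 30 arc-sec cell; it assumes the composite is already loaded as a NumPy array aligned with the coarser grid, and simple block averaging is an assumption of the example (the text does not prescribe a specific aggregation operator, and an operational workflow would more likely rely on a GIS resampling tool).

```python
import numpy as np

def aggregate_2x2_mean(grid_15as: np.ndarray) -> np.ndarray:
    """Aggregate a 15 arc-sec radiance grid to 30 arc-sec by 2x2 block averaging."""
    rows, cols = grid_15as.shape
    assert rows % 2 == 0 and cols % 2 == 0, "pad or clip the grid to even dimensions first"
    # Reshape so each 2x2 block gets its own pair of axes, then average over them.
    blocks = grid_15as.reshape(rows // 2, 2, cols // 2, 2)
    return blocks.mean(axis=(1, 3))

# Example: a 4x4 toy grid becomes a 2x2 grid of block means.
toy = np.arange(16, dtype=float).reshape(4, 4)
print(aggregate_2x2_mean(toy))
```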
Cadastral Data

Since 2010 the Municipality of Cuenca has intensified its efforts to collect specific information on all buildings located in the urban area of Cuenca City, enabling the construction of a complete cadastral database including geo-localization and information on building characteristics. More specifically, this cadastral database contains detailed information on the use of each building, allowing for example the distinction of residential and non-residential occupancy types. Data sources that served as input for the cadastral database originate from different national entities such as the Municipality of Cuenca, the National Institute of Statistics and Census (INEC), the Telecommunications, Water and Sewage Service Company of Cuenca (ETAPA), the National Secretariat for Risk Management (SNGR) and the University of Cuenca. After a validation and filtering process, the cadastral building database eventually comprises 65,436 records [28]. Each building footprint record is georeferenced and includes information on built-up area (in m²) and occupancy type (Figure 4). Residential buildings thereby cover an area of 12.9 km², complemented by a 4.3 km² non-residential built-up area. For comparative purposes, we aggregate the building footprint data to a 15 arc-sec and a 30 arc-sec grid respectively, thus matching the resolutions of the two analyzed nighttime lights data sets. Figure 5 illustrates the aggregated grids for the non-residential share of the built-up area. On top, the non-residential built-up percentage is shown. At the bottom, each cell's contribution to the total built-up area is visualized, whereby a main cluster in the center of the city is clearly depicted.
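The footprint-to-grid aggregation can be illustrated with a small sketch along the following lines; the column names and the centroid-based cell assignment are assumptions made for the example rather than the actual cadastral schema or processing chain.

```python
import numpy as np
import pandas as pd

def gridded_use_shares(df: pd.DataFrame, cell_deg: float = 30 / 3600) -> pd.DataFrame:
    """Aggregate footprint areas to a regular grid and derive non-residential shares.

    Expects columns lon, lat (footprint centroid), area_m2, and occupancy with
    values 'residential' / 'non_residential' (hypothetical field names).
    """
    df = df.copy()
    # Assign each footprint to the grid cell containing its centroid.
    df["row"] = np.floor(df["lat"] / cell_deg).astype(int)
    df["col"] = np.floor(df["lon"] / cell_deg).astype(int)
    per_cell = df.pivot_table(index=["row", "col"], columns="occupancy",
                              values="area_m2", aggfunc="sum", fill_value=0.0)
    per_cell["total_m2"] = per_cell.sum(axis=1)
    per_cell["non_res_share"] = per_cell.get("non_residential", 0.0) / per_cell["total_m2"]
    return per_cell.reset_index()
```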
Methods

As outlined in the introduction, the presented study was carried out under the framework of the World Bank's Central American Country Disaster Risk Profiles (CDRP) Initiative [4]. With this kind of continental and global model, the implemented scale level plays an important role in defining the basic spatial units of analysis. Working on a 30 arc-sec resolution grid level (i.e., approximately 1 km at the equator), frequently used for global models, the spatial identification and distinction of unique inventory regions is often not unambiguously possible at the grid cell level due to the well-studied mixed pixel issue [29,30]. While large urban residential areas as well as certain dedicated industrial zones are still often built in a rather compact manner and can thus indeed cover entire grid cells, commercial areas in particular are commonly intertwined with residences, forming wider areas of mixed use. In order to appropriately identify urban non-residential areas in a spatial top-down model, it is therefore considered reasonable to assume a certain share of residential occupancy throughout and to consider grid cells that also include a non-residential share as areas of mixed use.

For identification of those built-up urban areas that also feature a share of non-residential use, we refer to the above-outlined nighttime Earth Observation data and derivative products as a proxy measure. The assumption hereby is that intense lighting in that context is associated with a high likelihood of commercial and/or industrial presence, commonly clustered in certain parts of a city (such as central business districts and/or peripheral commercial zones). Areas of low light intensity, in turn, can be considered more likely residential.

The main objective of this study is thus to identify the light intensity thresholds that best match the separated distribution of residential vs. mixed use areas on the ground. DMSP-OLS derived ISA data and VIIRS-DNB data are both evaluated and comparatively analyzed for that purpose. It should be noted that the presented approach is proposed only for pre-identified urban areas [31], as for rural regions coarse-scale lighting intensity has reduced spatial correlation with built-up area and other additional aspects come into play.

Referring to the Cuenca City cadaster data, we distinguish purely residential areas from areas of mixed use. Using the building footprint area data, we obtain that 75% of the total built-up area of Cuenca City features residential occupancy, complemented by 25% non-residential occupancy. At the aggregated 15 arc-sec level, the top 25% mixed use cells (covering 25 km² out of the total 98.75 km²) account for 92% of the city's total non-residential built-up area. We then use this bottom-up-determined distribution ratio to identify the appropriate lighting intensity thresholds in the top-down model.
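To make the threshold identification concrete, a minimal sketch of the search could look as follows; it is not the original implementation, and the array names as well as the interpretation of the split as shares of total built-up area are assumptions made for illustration.

```python
import numpy as np

def best_threshold(light, res_area, nonres_area, target_mixed=0.25):
    """Find the light cut-off whose induced split best matches the cadastral target.

    light, res_area, nonres_area: 1-D arrays over the same urban grid cells
    (light values and residential / non-residential built-up area per cell).
    """
    total_builtup = (res_area + nonres_area).sum()
    best_t, best_err = None, np.inf
    for t in np.unique(light):
        mixed = light > t  # brighter cells become candidate mixed use cells
        mixed_share = (res_area[mixed] + nonres_area[mixed]).sum() / total_builtup
        err = abs(mixed_share - target_mixed)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

# Hypothetical usage:
# t_isa = best_threshold(isa_values, res_km2, nonres_km2)        # ISA grid
# t_dnb = best_threshold(viirs_radiance, res_km2, nonres_km2)    # VIIRS grid
```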
In order to define the relevant data value histograms for the threshold identification, we select all cells of the respective ISA and VIIRS data sets that fall within the pre-defined urban test case area of Cuenca City. In the case of the ISA data, the min-max value range is thereby identified as 5.7-77.8. For the VIIRS data, the min-max value range is identified as 3.3-73.8. In order to factor out potential effects generated by the mere difference in spatial resolution between ISA and VIIRS data, we aggregate the original 15 arc-sec VIIRS data to a 30 arc-sec grid, thus enabling direct spatial comparability to the ISA grid. We iteratively apply several threshold cut-off points in the identified value ranges and compare the resulting areas of relatively low and relatively high ISA and VIIRS values respectively to the aggregated cadastral data. The eventually selected final cut-off point is the threshold value that produces the best-matching output with regard to the 75:25 cadaster-based residential vs. mixed use area distribution ratio.

Identification of Residential vs. Mixed Use Areas Using ISA Data

Table 1 illustrates the various tested ISA threshold values and the corresponding building use distribution ratios as derived from comparative spatial overlay with the aggregated cadastral data at the 30 arc-sec grid level. The ISA min-max range and the respective threshold values are shown in the left part of the table, with the percentile column indicating the relative value distribution. In mathematical terms, the percentile value is derived as follows: (Threshold - Min)/(Max - Min). Specifically, that means that for the highlighted ID 4 the ISA value of 42 indicates the median value (50th percentile) in the distribution histogram. Half of the values in the study area under consideration thus feature an ISA value lower than 42 and the other half feature a higher value. Spatially overlaid on the aggregated cadastral building use density grid (at the 30 arc-sec level, comparable to the ISA grid), that results in a 74% residential ratio and a 26% mixed use share, thus best matching the bottom-up-derived 75% residential share (ISA data is provided in integer numbers, making it impossible to exactly match that 75% target value). Figure 6 maps the binary land use classification (residential use vs. mixed use) for the 5 tested ISA thresholds respectively. The share of mixed use area thereby decreases corresponding to the higher ISA cut-off points. Having the building-level cadastral data at hand enables not only determination of the binary land use distribution ratio, but furthermore allows us to consequently evaluate the degree of spatial overlap as a measure of model output accuracy. Using the above-identified ISA threshold, thus best matching the relative distribution of the two occupancy types (residential and mixed use), 82.8% of the total non-residential building stock of Cuenca City (3.6 of 4.3 km²) is indeed captured within the selected top-down-derived binary mixed use mask.

Identification of Residential vs. Mixed Use Areas Using VIIRS Data

Applying the same approach of iterative thresholding as illustrated for the ISA data, we use VIIRS data to perform a comparative analysis. As outlined above, VIIRS data is evaluated in that context first at its original resolution level and furthermore at an aggregated level matching the ISA resolution, in order to guarantee direct spatial comparability and factor out potentially biased scale effects.
Application of VIIRS Data in Original Spatial Resolution

As shown above for the ISA data, Table 2 relates the various tested VIIRS threshold values to the corresponding cadastral building use distribution ratios. Several light intensity thresholds are applied iteratively, approximating the occupancy-type-specific built-up area distribution shares on the ground. Spatial overlay of the VIIRS data and the corresponding aggregated cadastral building use density grid (at the 15 arc-sec level) indicates that the 53rd percentile threshold exactly matches the land use distribution as derived from the cadastral information, i.e., a 75% residential and 25% mixed use share. Figure 7 maps the binary land use classification (residential use vs. mixed use) for 5 selected thresholds respectively. The share of mixed use area thereby decreases corresponding to the higher VIIRS cut-off points. Results regarding the degree of spatial overlap between the binary classified VIIRS data and the correspondingly aggregated cadastral grid indicate that 76% of the total non-residential building stock of Cuenca City (3.27 of 4.3 km²) is captured within the selected top-down-derived mixed use mask (using the identified best-matching 53rd percentile threshold).
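For reference, once a best-matching percentile has been identified, converting it into a radiance cut-off and a binary mixed use mask is a one-line operation in most array libraries; the sketch below is illustrative only and assumes the urban-mask DNB values are held in a NumPy array.

```python
import numpy as np

def binary_mixed_use_mask(dnb_urban: np.ndarray, percentile: float = 53.0) -> np.ndarray:
    """True for cells above the given percentile of the urban radiance histogram."""
    cutoff = np.percentile(dnb_urban, percentile)  # e.g. the 53rd percentile found above
    return dnb_urban > cutoff
```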
Application of VIIRS Data Aggregated to a 30 arc-sec Grid

In order to perform the comparative analysis at identical scale levels, the VIIRS data is aggregated to a 30 arc-sec grid before iterative threshold determination. Table 3 illustrates the various tested threshold values from the aggregated VIIRS data and the corresponding cadastral building use distribution ratios. To guarantee direct comparability with the ISA-based analysis, the thresholds for the aggregated VIIRS data are applied in such a way that the building use distribution ratios (shown in the right part of Table 3) are identical to the tests carried out before using the ISA data. Figure 8 maps the binary land use classification (residential use vs. mixed use) for the 5 tested thresholds in the aggregated VIIRS data respectively. As with the previous tests, the share of areas with mixed occupancy thereby decreases corresponding to the higher cut-off points.

Spatial overlay of the aggregated VIIRS data and the corresponding cadastral building use density grid indicates that the 55th percentile threshold best matches the target value of 75% residential and 25% mixed use shares (as derived from the in situ cadaster data), thus slightly higher than for the original-resolution VIIRS data. Evaluating the degree of spatial overlap between the aggregated VIIRS data and the corresponding cadastral grid, we detect that 79% of the total non-residential building stock of Cuenca City (3.4 of 4.3 km²) is captured within the selected top-down-derived binary mixed use mask.
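The spatial-overlap figures quoted above effectively ask what fraction of the city's non-residential built-up area falls inside the derived mixed use mask; a minimal check of that kind, with assumed array names and cell-aligned grids, could look as follows.

```python
import numpy as np

def capture_rate(mixed_mask: np.ndarray, nonres_area: np.ndarray) -> float:
    """Share of total non-residential built-up area lying inside the mixed use mask."""
    return float(nonres_area[mixed_mask].sum() / nonres_area.sum())

# Hypothetical usage, with both grids aligned cell by cell:
# capture_rate(viirs_mask_30as, nonres_km2_30as)  # ~0.79 is reported for the aggregated VIIRS mask
```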
Discussion

The application of ISA and, alternatively, VIIRS data to identify intra-urban occupancy type distribution patterns that we outline in this paper, and the corresponding findings, include several interesting aspects for further discussion. In the following we highlight three relevant points at different stages in the model setup. First, we provide some background information on the selection criteria for the VIIRS data. Then, we discuss the actual differences in the outcome of the proposed binary land use classification approach when implemented using ISA vs. VIIRS data. Finally, we highlight the impact that proper urban spatial delineation has on the model outcome by applying a spatially shrunk urban mask for the Cuenca City test case.

VIIRS Data Selection

Initial nightlights data selection has a big influence on the model outcome and is particularly important in the sense that VIIRS data is provided by NOAA-NGDC as basic monthly average light intensity composites, whereas ISA data comes as a fully-processed product derived from annual DMSP-OLS composites and calibrated with ancillary built-up reference information. The number of cloud-free observations is a crucial factor for producing average composites, as excessive cloud cover can obscure light-emitting sources on the ground. In monthly products fewer observations are potentially available to compute composite grids as compared to yearly products, and average values can therefore more easily turn out to be skewed and non-representative in case of extended cloud cover in the respective month. There are obviously other influencing parameters that can impair light identification, such as obscuring factors like smoke or fog and misleading reflections from snow cover, lightning or the aurora. However, cloud cover is clearly considered the most relevant parameter in the context of the compositing process, particularly in equatorial regions such as the study area in Ecuador. For our study we evaluated the 6 most recent readily-processed monthly composites available at the time of writing, covering January-June 2015 (other monthly composites were only available as preliminary beta versions of lower quality). For the Cuenca City study area, the monthly VIIRS composites of May and June 2015 feature the highest average number of cloud-free observations (see Table 4), thus providing the best data reliability.

Figure 9 illustrates the light intensity values for the 6 analyzed monthly composites as well as the corresponding number of cloud-free observations on a pixel-by-pixel basis. For the light intensity grids, darker blue tones indicate higher intensity. For the cloud-free observation grids, dark blue would represent the best situation (no cloud cover on any day during the month), whereas green, yellow and red colors indicate decreasing data reliability (due to fewer cloud-free observations).
Theoretically, just a couple of high-quality observations can be sufficient to produce an appropriate composite product. While the monthly composites of May and June have the highest number of cloud-free observations, other months' composites can thus feature very similar light intensity value distributions (as is the case, for example, for the April composite). We therefore explicitly refer to data reliability as an indicator, as opposed to general data quality.

Although the May composite has the highest number of cloud-free observations on average, that value is almost identical to the June composite (see Table 4). In that case, an additional parameter should be identified to justify the selection. On the one hand, visual inspection of the cell-level distribution of cloud-free observations could give an extra indication of potential data quality. If, for example, more cloud-free observations are found in the city center (where non-residential activity is expected), that could be beneficial given the context of the presented study. Another parameter could be the detected light intensity range, with detection of higher intensities (i.e., likely non-obscured) being potentially favorable. Following the latter criterion, higher intensity levels are identified in the June composite as compared to the May data set (see Table 4). Other secondary selection criteria could take into account influencing parameters that impair light identification (as mentioned above); data on those parameters is usually not publicly available, though. As intra-urban cloud-free observations at the cell level are similarly distributed for the May and June composites, the higher detected light intensity range was eventually the determining factor in selecting the June data set for the test study.
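The selection logic described here (mean cloud-free observations first, detected intensity range as tie-breaker) can be summarized in a short, purely illustrative sketch; the tolerance used to declare a tie is an assumption made for the example.

```python
import numpy as np

def select_composite(composites, cloud_free, tie_tol=0.1):
    """Pick a monthly composite by cloud-free observations, tie-broken by intensity range.

    composites / cloud_free: dicts mapping month -> 2-D array over the urban study area.
    """
    mean_obs = {m: np.nanmean(cloud_free[m]) for m in composites}
    best_obs = max(mean_obs.values())
    # Candidates whose observation counts are practically identical to the best.
    tied = [m for m, v in mean_obs.items() if best_obs - v <= tie_tol]
    # Tie-break on the detected radiance range (max - min), as argued in the text.
    return max(tied, key=lambda m: np.nanmax(composites[m]) - np.nanmin(composites[m]))
```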
To further highlight the differences in spatial patterns between the 6 available monthly composites, cell-by-cell light intensity deviations of every grid from the eventually selected June composite are computed, as illustrated in Figure 10. In line with the observations described above, the May composite matches the June dataset most closely also in that regard. Besides, the overall patterns of those cell-by-cell deviations align adequately with the corresponding grids showing the number of cloud-free observations (see Figure 9). The February and March composites, for example, show the largest deviations from the June grid on a cell-by-cell basis, thus presumably confirming the poorer reliability of those grids when referring to the low number of cloud-free observations. Specifically, the March grid can be considered unusable, while there may be additional reasons for the extreme light intensities observed during the month of February (e.g., night parades and other associated carnival celebration activities in the middle of the month).

Table 5 illustrates a set of tested VIIRS threshold values using the May data as an alternative, in order to demonstrate the potential model output variation in case a different monthly composite was selected. When applying the 53rd percentile threshold (highlighted in green in Table 5) that delivered the best match to the cadastral data in the June composite, a 65:35 building use distribution split was obtained for the May composite, thus significantly overestimating the non-residential share. In the May data, the 64th percentile is identified as the fitting threshold (highlighted in grey in Table 5) best approximating the aggregated cadastral grid.

Comparative Analysis of ISA- and VIIRS-Based Results of the Binary Land Use Classification

The second aspect to be discussed is a comparison of the model output when using ISA and VIIRS-DNB data respectively. This is relevant in several aspects, most specifically (1) in terms of evaluating the feasibility of continued applicability of the presented approach with the DMSP program fading out, as well as (2) to assess the impact and examine the expected multisided improvements due to VIIRS' improved spatial and radiometric resolution as compared to OLS.
To factor out potential influences caused by the higher spatial resolution, we first compare the findings of the ISA-based analysis to those using a correspondingly aggregated 30 arc-sec VIIRS grid. Results prove to be similar in fact, with a 55th percentile threshold identified as the best fit to distinguish residential and mixed occupancy areas in the VIIRS data, as compared to the 50% threshold in the ISA data. In case of applying the same 50% threshold to the aggregated VIIRS composite, the obtained occupancy distribution in the correspondingly aggregated cadastral data would show a 70:30 residential-mixed split as compared to the targeted 75:25 ratio. When evaluating the degree of spatial overlap as a measure of model output accuracy, applying the respectively identified best-fitting thresholds to both data sets results in a slightly better capturing of non-residential built-up area in the binary mixed use mask derived from the ISA data (83%) as compared to the aggregated VIIRS-based mask (79%). If again the 50% threshold was applied to the VIIRS composite instead of the identified 55th percentile threshold, approximately 84% of the non-residential built-up area would be captured. While thereby a marginally better result is achieved in terms of capturing non-residential built-up area, the residential-mixed distribution ratio would be skewed and mixed use areas would actually be overrepresented spatially.

Applying the VIIRS data in its original spatial resolution (15 arc-sec), the best-fitting threshold value to approximate the targeted 75:25 residential-mixed distribution pattern is identified at the 53rd percentile. This is slightly below the threshold value identified for the aggregated VIIRS composite (55th percentile). 76% of the total non-residential building stock of Cuenca City (3.27 of 4.3 km²) is captured within the derived mixed use mask. That value is below both the 79% value when using the aggregated VIIRS data and the 83% value when using the ISA data. In case of applying the initial 50% threshold for the binary classification, 79% of the non-residential building stock would be captured, at a 71:29 occupancy type ratio distribution.

Checking those numbers, it therefore appears that using ISA data renders a better model performance than using VIIRS data in both original and aggregated form, inasmuch as more non-residential built-up area is detected in the binary masks that were derived using optimized thresholding to match residential-mixed occupancy distribution ratios. However, while a higher percentage of the non-residential built-up area is captured, ISA-derived mixed use areas are slightly more scattered. Taking VIIRS as the input data source clusters the detection more, in the sense that the average cell-level non-residential built-up density is higher in those binary occupancy type mask derivatives. Using the original-resolution composite, 76% of the total non-residential building stock (3.27 km²) is captured within 25.75 km², thus featuring an average non-residential built-up density of 12.7% per km². When using ISA data, 83% of the total non-residential building stock (3.57 km²) is captured within 32 km², thus an average density of 11.1% per km².
With the threshold values and associated parameters being obviously rather similar for the VIIRS- and ISA-based approaches, another interesting evaluative perspective is to derive a corresponding binary mask from the aggregated cadastral data and then check its spatial pattern concurrence with the nightlights products. Figure 11 shows the binary classification of the aggregated cadastral data (top), both for the 15 arc-sec (left) and the 30 arc-sec (right) aggregate. The binary mask separates cells that contribute strongly to the total non-residential area from cells that only have a marginal share. This approach is congruent with the nightlights thresholding approach in the sense that it aims at separating high-intensity from low-intensity cells (referring to "non-residential" as the observed parameter). The thresholds are determined in such a way that, as for the nightlights data thresholding, the 75:25 residential-mixed occupancy type reference ratio split is matched as closely as possible. For the 15 arc-sec grid the threshold is identified at 0.25% (i.e., cells that have a percent contribution to the total non-residential area of less than or equal to 0.25%), while for the 30 arc-sec grid the derived threshold value is 1%. Interestingly, the difference in fact exactly reflects the scale difference between the two datasets (i.e., a factor of 4). The binary 15 arc-sec classification results in a non-residential mask (in dark blue) that captures 88.3% of the total non-residential building area of Cuenca City on an area of 25.5 km², thus an average density of 14.9% per km² (compared to the 12.7%/km² average density in the VIIRS-derived 15 arc-sec binary mask). The 30 arc-sec mask, on the other hand, captures 86%.
For comparative purposes, the bottom two illustrations in Figure 11 show the above-presented best-matching binary masks derived from the 15 arc-sec VIIRS and the 30 arc-sec ISA data. Visually evaluating the spatial distribution and extent of the non-residential class in the two maps reveals interesting patterns. The VIIRS-derived mask covers the south-western corner of the corresponding cadastre-based non-residential mask well and misses the north-eastern corner, whereas it is the other way around with the ISA-derived mask. VIIRS, in that context, does not seem to detect above-average light intensities from the Cuenca City Airport (Aeropuerto Mariscal La Mar), whereas the airport is a major contributing factor in the ISA data. The latter could be explained by the inherent data configuration of ISA, which per se correlates more with built-up area than with pure light intensity.

Evaluating Model Sensitivity via Application of Different Spatial Urban Delineation

For further evaluation of the model sensitivity we re-run the implemented approach with a geospatially shrunk urban mask. While in the above-outlined implementations all ISA and VIIRS grid cells falling within a pre-defined urban area of Cuenca City were considered, now a more central part of the urban agglomeration is selected. Two tests are carried out in that context. For the first test, we keep the same built-up area occupancy type distributions (75% residential vs. 25% mixed use for the original-resolution VIIRS grid and 74% residential vs. 26% mixed use for the aggregated grid). In the second test, we keep the same threshold values identified above as the best match for each dataset (50th percentile for the ISA data and 53rd percentile for the VIIRS data).

For the first test, the derived best-matching thresholds are now higher for both datasets: the 55th percentile for the ISA data, and the 63rd and 65th percentiles for the original and aggregated VIIRS grids, respectively. This was expected, as predominantly residential areas in the periphery of the city are not included in the newly defined urban mask, and those cells (featuring lower ISA and light intensity values) are thus missing from the histograms. The threshold increment is larger for the VIIRS data (roughly a 10-12% increase) than for the ISA data (a 5% increase). This can be associated with the different sensitivity of the identified VIIRS and ISA thresholds due to their differing histogram distributions (see Figure 12). Given the purpose of the presented modeling, a more even histogram distribution could imply less sensitivity in the threshold determination.
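The two tests can be condensed into a short, self-contained sketch. It mirrors the threshold-search helper shown earlier and uses synthetic stand-in arrays; the boolean core mask, the candidate percentile range, and the 53rd-percentile value for the second test follow the description above, while everything else is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000
intensity = rng.gamma(2.0, 5.0, n)                      # stand-in for per-cell VIIRS/ISA values
built_res = rng.uniform(0.0, 0.02, n)                   # stand-in residential footprint (km^2)
built_nonres = rng.uniform(0.0, 0.01, n) * intensity / intensity.max()
core = intensity > np.percentile(intensity, 25)         # stand-in for the shrunken core mask

def res_share(I, R, N, pct):
    """Residential share of total built-up area for a given percentile threshold."""
    res = I <= np.nanpercentile(I, pct)
    return (R[res].sum() + N[res].sum()) / (R.sum() + N.sum())

I, R, N = intensity[core], built_res[core], built_nonres[core]

# Test 1: keep the 75:25 target and let the threshold move (expected to shift upwards).
pcts = np.arange(30, 91)
best_core = pcts[np.argmin([abs(res_share(I, R, N, p) - 0.75) for p in pcts])]

# Test 2: keep the city-wide threshold (e.g. the 53rd percentile for VIIRS) and let the
# occupancy split drift (expected to move towards ~50:50 once peripheral cells are gone).
drifted_share = res_share(I, R, N, 53)
print(int(best_core), round(drifted_share, 2))
```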
In fact, using the ISA composite, it only takes an increment of 24% (raising the threshold from the 50th to the 74th percentile) to change the building occupancy type distribution ratio from 74:26 to 96:4. For the aggregated DNB-VIIRS, a 35% increment (raising the threshold from the 55th to the 90th percentile) would be required to achieve the same theoretical change in the built-up area distribution. Small threshold shifts therefore have a bigger impact when using ISA than when using VIIRS. To illustrate this statistically, we use a sample of 10 value pairs each for ISA, original-resolution VIIRS, and aggregated VIIRS, compared against the cadastral building use distribution, and run a linear regression (see Figure 13). Considering all value pairs, the original-resolution VIIRS data show the steepest regression slope (1.2468), the aggregated VIIRS data the flattest (1.0341), with ISA in between (1.1613). The aggregated VIIRS would thus be the least sensitive to threshold shifting, in the sense that the building use distribution ratios would deviate least from the target value (see dashed line in Figure 13). While steepest when considering all value pairs, the slope of the original VIIRS graph matches ISA almost identically around the relevant target value (dashed line). Threshold shifts in the two nightlights products would therefore have a similar impact on the resulting building use distribution ratios.
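The sensitivity comparison in Figure 13 amounts to fitting a line through (percentile, residential share) samples for each data source. A minimal sketch follows; the sample values are placeholders rather than the published Table 1-3 numbers, so only the procedure, not the resulting slopes, should be read from it.

```python
import numpy as np

# Placeholder (percentile, residential share) samples for two data sources.
percentile = np.array([0.30, 0.35, 0.40, 0.45, 0.50, 0.55, 0.60, 0.65, 0.70, 0.75])
share_isa = np.array([0.52, 0.57, 0.63, 0.68, 0.74, 0.79, 0.84, 0.89, 0.93, 0.96])
share_viirs = np.array([0.47, 0.53, 0.60, 0.66, 0.72, 0.78, 0.84, 0.90, 0.94, 0.97])

for name, share in [("ISA", share_isa), ("VIIRS 15 arc-sec", share_viirs)]:
    slope, intercept = np.polyfit(percentile, share, 1)
    # A steeper slope means the occupancy split reacts more strongly to threshold shifts.
    print(f"{name}: slope = {slope:.4f}, intercept = {intercept:.4f}")
```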
In the second test, using the best-matching thresholds identified with the initial urban mask (50th percentile for ISA, and 53rd and 55th percentiles for the original and aggregated VIIRS grids, respectively), the newly obtained built-up area occupancy type distribution for the ISA data now corresponds to a 50% residential and 50% mixed use share, while the VIIRS data yield 56% residential and 44% mixed use at the original resolution and a 48:52 ratio for the aggregated 30 arc-sec grid. These newly derived occupancy type distribution patterns are similar for both data sources (ISA and VIIRS) and clearly overestimate the share of mixed use area. This, again, was expected in the same way as the first test result, inasmuch as the selected central part of the urban area contains fewer residential buildings than the sub-urban periphery.

Both tests are consistent in indicating that higher light intensity values are clustered in the central core urban areas of Cuenca City, whereas sub-urban areas feature dimmer lights (and consequently also lower ISA values) on average as a result of higher residential densities. The exercise highlights the importance of correct spatial pre-identification of the urban area for subsequent intra-urban analysis: if the urban mask is spatially over- or under-defined, the appropriate nightlights threshold values would decrease or increase, respectively.
Conclusions and Outlook

The presented result of the ISA data application is particularly interesting as it backs up the previously unevaluated assumption, implemented in the Central American CDRP model, of using the ISA median value as the threshold for the binary land use classification of residential and mixed use areas. At the continental scale, without ground reference data such as those available for the Cuenca City test case study, the use of the median value seemed most appropriate, as it introduces the least possible subjectivity and merely separates a data set into high and low according to its histogram, without additionally induced statistical skew.

With the initially assumed median (50%) threshold for the binary ISA classification confirmed through comparative in situ data analysis for an accurately defined urban agglomeration, the presented case study is considered very beneficial for the overall implementation process of the CDRP initiative. In addition, the second re-run of the model with a geospatially shrunken, more central urban mask, which showed the correspondingly expected upward threshold shifts, provides further support for the model's validity, while underlining the importance of accurate urban delineation in the first place.
It has to be noted that these findings have to date been evaluated only for the Cuenca City test case, and caution is advised when directly transferring the conclusions to other cities. With the CDRP exposure and subsequent risk and loss models already implemented for all of Central America, further test studies can be carried out to increase the sample size of the model evaluation and to test the approach in different regional settings. Cuenca City is considered a rather typical Latin American city, with regular patterns of clustered land use within the urban agglomeration. Although basically no major deviations are expected with regard to model applicability in Central America, it will be interesting to see testing results when extending to the Caribbean and beyond, as well as to cities of much larger spatial extent. Analysis of areas further from the equator may furthermore be influenced by varying seasonal day length as well as different cloud cover patterns, two parameters which directly affect the nighttime lights compositing.

Testing VIIRS-DNB data as an alternative to the DMSP-OLS-based ISA data is considered a crucial step towards continued applicability of the model. With the DMSP program fading out, VIIRS-DNB is considered the natural successor to the OLS-based nightlights products. With certain visual improvements expected due to VIIRS's higher spatial and radiometric resolution as compared to OLS, it is still highly valuable to get a clear idea of how these improvements eventually transfer to the binary land use classification output. One specific finding in that context is the stronger clustering, and thus higher non-residential built-up density, in the VIIRS-derived binary classification as compared to the ISA-based approach. A major and often-stated benefit of VIIRS with regard to intra-urban pattern analysis is the much-improved radiometric resolution, which eliminates the restricting light intensity saturation issues in urban centers in OLS data [21]. For the purpose of the presented study, however, this is not of major relevance, as the OLS-derived ISA data refer to a specific radiance-calibrated nightlights product where such saturation issues have already been addressed [32]. Only two ISA datasets are publicly available, however, for the years 2000 and 2010, which limits the direct applicability of the proposed approach in continuous time series analyses. Even so, using the annually produced and publicly available OLS stable lights product would likely not result in a major deterioration of the binary classification, as the high intensity values would still be correctly identified irrespective of their relatively lower placement in the histogram due to the saturation issue.

For future studies as well as potential longer-term time series analyses, the finding is very positive in indicating that the DMSP-OLS-based ISA data and the more recent VIIRS data appear to be applicable in a very similar fashion as input data sources for the residential-mixed identification model. Two main differences found when using VIIRS data concern the varying threshold sensitivity and the amount of built-up area detected per square kilometer of land use. The procedure of binary land use classification using VIIRS is considered more flexible than using ISA, and has the potential to give a finer-scale classification of residential and mixed use in urban areas.
Figure 4. Cadastral building footprints of Cuenca City, classified in residential and non-residential occupancy types.
Figure 5. Cadastral data for Cuenca City aggregated to grids at 15 arc-sec (left) and 30 arc-sec (right) resolution. Non-residential built-up percentage (top). Percent-contribution of each cell to the total non-residential built-up area (bottom).
Figure 6. Binary land use classification of Cuenca City based on ISA thresholds from Table 1. Blue indicates residential and orange mixed use. Table record IDs are indicated in the figure as 1-5.
Figure 7. Binary land use classification of Cuenca City based on thresholds from original VIIRS data (Table 2). Blue indicates residential and orange mixed use. Table record IDs are indicated in the figure as 1-5.
Figure 8. Binary land use classification of Cuenca City based on thresholds from aggregated VIIRS data (Table 3). Blue indicates residential and orange mixed use. Table record IDs are indicated in the figure as 1-5.
Figure 9. VIIRS data of the first six months of 2015 for the Cuenca City study area. Grids of average light intensity (top). Grids showing the number of cloud-free observations at the cell level used to produce the average light intensity composites (bottom).
Figure 10. Cell-by-cell deviations to the selected June grid.
Figure 11. Binary classification of the non-residential cadastral built-up area aggregated to 15 arc-sec (top-left) and 30 arc-sec grids (top-right) based on the percent-contribution to the total non-residential area of Cuenca City. Best-matching binary classifications as derived from 15 arc-sec VIIRS (bottom-left) and 30 arc-sec ISA (bottom-right) data.
Figure 13. Plot of residential building use ratio vs. data percentile from histogram distribution for ISA (extended data sample of Table 1), original-resolution VIIRS (extended data sample of Table 2), and aggregated VIIRS (extended data sample of Table 3). The dashed line shows the residential building use ratio for Cuenca City (75%) as derived from cadastral data. Regression equations are colored according to the respective graphs.
Table 1. ISA distribution thresholds and corresponding building use distribution ratios (grey indicating selected best-matching threshold).
Table 2. VIIRS distribution thresholds (original 15 arc-sec grid) and corresponding building use distribution ratios (grey indicating selected best-matching threshold, orange indicating 50% threshold for comparison).
Table 3. VIIRS distribution thresholds (aggregated 30 arc-sec grid) and corresponding building use distribution ratios (grey indicating selected best-matching threshold, orange indicating 50% threshold for comparison).
Table 4. Average number of cloud-free observations in VIIRS 2015 monthly composites for the Cuenca City study area (grey indicating eventually selected monthly composite).
Table 5. VIIRS distribution thresholds (original 15 arc-sec grid) and corresponding building use distribution ratios using the monthly composite for May (grey indicating selected best-matching threshold, green indicating previously identified June threshold for comparison).
Climate change and the deteriorating archaeological and environmental archives of the Arctic

Abstract

The cold, wet climate of the Arctic has led to the extraordinary preservation of archaeological sites and materials that offer important contributions to the understanding of our common cultural and ecological history. This potential, however, is quickly disappearing due to climate-related variables, including the intensification of permafrost thaw and coastal erosion, which are damaging and destroying a wide range of cultural and environmental archives around the Arctic. In providing an overview of the most important effects of climate change in this region and on archaeological sites, the authors propose the next generation of research and response strategies, and suggest how to capitalise on existing successful connections among research communities and between researchers and the public.

Introduction

The past decade has witnessed growing global concern about the accelerating impact of climate change on archaeological sites (Colette 2007; Brown et al. 1997; see online supplementary material (OSM) 1, references 1-4). An increasing number of ancient sites and structures around the world are now at risk of being lost. Once destroyed, these resources are gone forever, with irrevocable loss of human heritage and scientific data. Often defined as the territory north of the +10°C July isotherm, the Arctic (Figure 1) is a bellwether for current large-scale repercussions of climate change, and for the future changes predicted to occur around the world. The Arctic has warmed at a rate of more than twice the global average since the 1980s (Stocker et al. 2013). While some historical changes in climate result from natural causes and variations, the strength of current trends indicates clearly that human influences have become a dominant factor (ACIA 2004; Stocker et al. 2013). Due to increasing concentrations of greenhouse gasses in the Earth's atmosphere, currently observed climatic trends are predicted to accelerate (ACIA 2004; Stocker et al. 2013). Climate change will cause wide-ranging alteration to the Arctic, with some impacts already observable. Rising air temperatures, permafrost thaw, fluctuations in precipitation, melting glaciers and rising sea levels are just some of the changes affecting the natural system (ACIA 2004), and causing physical and chemical damage to archaeological sites and materials. The potential scale of this threat to archaeological sites has led to growing concern among polar archaeologists (Blankholm 2009; see OSM 1, references 1 & 5). The subject has, however, received limited attention within the wider research community, and little is known about how sites are being, and will be, affected. Here we present the first broad synthesis on the most significant climate change impacts on the Arctic, and describe how these changes are currently affecting archaeological sites. We also give examples of current management strategies and mitigation measures, including awareness-raising initiatives. Finally, we propose the next generation of research and response strategies, and suggest how to capitalise on existing successful connections among research communities and between researchers and the public. Focusing on Alaska, northern Canada, Greenland, northern Norway (including Svalbard) and northern Russia (Figure 1), we also draw parallels with archaeological sites from outside the region.
Arctic archaeological potential The Arctic's cold and wet conditions have led to the extraordinary long-term preservation of archaeological material, including both artefacts and environmental evidence. The lack of modern development has also left many sites relatively undisturbed. Researchers therefore have unique opportunities to learn about past environments and cultures, many of which connect directly to modern indigenous cultures. Arctic archaeological sites often provide concrete connections to cultural heritage that language and other intangible aspects of culture cannot. Furthermore, they provide an ideal medium through which to engage younger generations with local heritage and culture (Lyons 2016). Spectacular finds and surviving structures have provided many novel contributions to the understanding of our common cultural history ( Figure 2). Recent methodological advances are providing new results (e.g. Lee et al. 2018;see OSM 1, references 6-8). The archaeological deposits also contain a diverse range of animal, plant and insect remains, and anthropogenic soils and sediments that enable us to move beyond the human-mediated aspects of the environmental system to address questions within other research fields (Pitulko & Nikolskiy 2012; see OSM 1, references 9-15). Causey et al. (2005), for example, used avifauna from multiple archaeological sites in the Aleutian Islands to model the impact of climate change on regional ecosystems. As there is no official record of the total number of archaeological sites in the Arctic, we collected data from national cultural heritage databases and found that ∼180 000 sites are currently registered (Table 1; OSM 2). This approximation is, however, somewhat uncertain due to a lack of official site numbers in the Russian Arctic and differences in how 'sites' are defined from country to country. Regardless of total number, very few of the sites have been excavated, and we anticipate that many more sites await discovery in those parts of the Arctic yet to be surveyed. Thus, archaeological sites in this region continue to offer great potential for further spectacular discoveries and novel scientific contributions. These archaeological sites may, however, be under serious threat from climate change, which is influencing a range of processes that can accelerate site destruction. The following overview is based on a detailed compilation and review of published articles and publicly available reports that identify impacts of climate change on archaeological sites in the Arctic, or that provide information about archaeological resources already damaged by climate change. Figure 3. Examples from Walakpa in Alaska of newly exposed archaeological layers that are quickly degrading due to multiple processes (permafrost thaw, frost/thaw processes, microbial degradation and wave action during storms) (photograph by Anne M. Jensen). Coastal erosion Sea-level rise, the lengthening of open-water periods due to sea-ice decline and a predicted increase in the frequency of major storms are all expected to intensify erosion of the Arctic coastline (Lantuit et al. 2012). Coastal erosion poses a widespread threat to many archaeological sites in this region due to the predominantly coastal lifeways of Arctic people. The permafrost coasts of north and north-west Alaska and the western Canadian Arctic are characterised as one of the largest areas of high-sensitivity shoreline in the circumpolar Arctic (Lantuit et al. 2012). 
While not a new phenomenon in this area, coastal erosion is currently widespread and is greatly affecting the archaeology in the region (Jones et al. 2008;Friesen 2015;Gibbs & Richmond 2015;Jensen 2017; O'Rourke 2017; see OSM 1, references 16-22). Jones et al. (2008), for example, focused on a stretch of the Beaufort Sea coastline near Drew Point in north Alaska, and found that three out of four known archaeological sites had disappeared, with the remaining site heavily damaged by erosion. Furthermore, the coastlines near Barrow on Alaska's North Slope, having been inhabited by semi-sedentary Alaska Natives for at least 4000 years, are quickly being lost to erosion and thawing permafrost ( Figure 3). Twenty years ago, rapidly eroding coastal bluffs began exposing human remains at Nuvuk, a key site for understanding the Thule migration across the North American Arctic (Jensen 2017; see OSM 1, references 20-21). Since then, sea-level rise, fierce coastal storms and permafrost thaw have removed over 100m of land. This has destroyed several Ipiutak structures, and has heavily eroded a cemetery containing over 100 individuals ( Figure 3). The most important archaeological sites of the Inuvialuit-the aboriginal inhabitants of north-westernmost Canada-are endangered by erosion (Friesen 2015;O'Rourke 2017). In the Russian Arctic, erosion is severe along the Laptev and East Siberian Seas ( Figure 1) (Lantuit et al. 2012; see OSM 1, references 23-24), although how this is affecting archaeological sites is virtually unknown. Erosion rates of 5-6m per year have, however, been measured over a 10-year period at the archaeological site of Yana (Pitulko 2014). This site represents the earliest-known occupation in the Arctic region (25 000 kya) and is a key site for understanding the first peopling of the Americas. Although the Chukchi shorelines (north-west of the Bering Strait) are considered less vulnerable, erosion is still removing local archaeological sites (Dikov 1977;Gusev 2010;Lantuit et al. 2012). The coastlines of the Canadian Archipelago, Greenland and Svalbard are considered stable due to their predominantly rocky nature, the persistence of sea ice throughout the summer season (for the Canadian Archipelago) and because of a strong post-glacial rebound (rise of land) (Lantuit et al. 2012). Nevertheless, erosion on a local scale may still be a major threat to archaeological sites, such as Fort Conger on northern Ellesmere Island in Canada, Iita in north-west Greenland and Herjolfnes in south Greenland, where the remains of a Norse settlement are threatened by coastal erosion (Dawson et al. 2015;see OSM 1, references 25-27). Several sites in northern Norway and Svalbard have also been categorised as threatened by coastal erosion (Flyen 2009; see OSM 1, references 28-31). Permafrost thaw and microbial degradation Large parts of the exposed land surface of the circumpolar north contain permafrost (perennially frozen ground) (Figure 1). Permafrost often preserves organic archaeological materials, as cold temperatures and high saturation levels slow the decay of organic materials (Hollesen et al. 2017). Model predictions show that a warmer climate will affect both the spatial extent of permafrost and the depth of the active layer, which thaws during summer (Slater & Lawrence 2013;Hollesen et al. 2015). 
An increase in active layer depths in response to warmer temperatures is significant because it exposes the previously frozen soil layers to accelerated erosion, to wet/dry and freeze/thaw cycles and to increased microbial activity (Hollesen et al. 2017). Studies from north-western Canada, northern Alaska and Siberia show that permafrost destabilisation is leading to severe erosion and landscape change, with dramatic effects on the preservation of archaeological sites (e.g. Solsten & Aitken 2006; Jones et al. 2008; Pitulko 2014; Andrews et al. 2016). In Auyuittuq National Park Reserve, Nunavut, Canada, 24 out of 48 archaeological sites are categorised as being at high risk of soil disturbance (Solsten & Aitken 2006). In Russia, the speed of slope erosion (up to 10 m per year) is shown to be highly dependent on the ice content of the soil, mean summer temperature and the amount of incoming solar radiation (Pitulko 2014). Clear evidence of hydro-thermal erosion has also been reported in Greenland (Hollesen et al. 2017). The physical erosion of sites is relatively easy to document by, for example, remote sensing or repeated site visits. It is, however, more difficult to discover, quantify and predict ongoing microbial or chemical degradation of archaeological deposits and similar processes in archaeological wood. These degradation processes have been scientifically documented (e.g. Mattsson et al. 2010; Matthiesen et al. 2014; Hollesen et al. 2015, 2016a, 2017; see OSM 1, references 32-38). The results show that microbial and fungal communities in archaeological deposits and surviving wooden structures have adapted to the cold Arctic environment; they are sensitive to increasing soil temperatures, especially when water is drained and increasing oxygen availability triggers degradation. The deterioration of organic archaeological deposits is accompanied by high microbial heat production. In some cases, this increases soil temperatures, thereby accelerating the decomposition processes and intensifying significantly the impact of climate change (Hollesen et al. 2015). Increasing soil temperatures and changes in the soil's water content are also important in areas without permafrost (Hollesen et al. 2016a). Recent studies of organic archaeological deposits in northern Norway indicate that a predicted air temperature increase of 3°C during the twenty-first century could accelerate the overall decay rate by ∼50 per cent (Hollesen et al. 2016b; Martens et al. 2016). This will be highly dependent, however, on the timing of precipitation, the frequency of dry periods and on evaporation rates (Hollesen et al. 2016b).

Vegetation increase and tundra fires

Several studies using satellite-based remote sensing and field observations show that the circumpolar Arctic tundra has undergone a 'greening' during recent decades (e.g. Tape et al. 2006; see OSM 1, references 39 & 40). Over time, climate change is projected to cause a shift in vegetation zones and to promote the expansion of boreal forests into the Arctic tundra, and of tundra into the polar deserts. A direct consequence of vegetation increase is that archaeological sites will become overgrown and eventually hidden (Figure 4). Furthermore, thicker vegetation and the spread of trees will increase summer evapotranspiration (Swann et al. 2010), which may lower the soil's water content and contribute to the decay rate of organic archaeological deposits (described above).
An increase in root depth may also represent a risk to sub-soil archaeology (Crow & Moffat 2005). When roots exploit the soil for water and nutrients, they may penetrate and cause physical damage to organic archaeological material, including bone and wood (Tjelldén et al. 2015). Additionally, roots may disturb the archaeological stratigraphy, which is crucial to site interpretations (Tjelldén et al. 2015). Together with the shift in vegetation, wildfire activity is expected to increase dramatically (Young et al. 2017; see OSM 1, references 41 & 42), with strong impacts on permafrost stability and the loss of organic material in the soil (Mack et al. 2011). Tourism and the impact of local communities Climate change is responsible for longer and more extensive seasonal sea-ice melt in the Arctic, which has increased the accessibility of the region. The Intergovernmental Panel on Climate Change (Stocker et al. 2013) predicts that the Arctic Ocean will be nearly ice-free during summers before the end of this century, thereby opening up new shipping routes and extending the use of those that already exist. This will probably drive an increase in the development of coastal infrastructure and cruise tourism (Larsen et al. 2014). Such changes will open up more archaeological sites to visitors, bringing more traffic into sensitive environments. Improved accessibility to cultural heritage sites-which are often marketed together with the natural landscape as integral parts of the wilderness experience-is already challenging resource managers to balance the use and protection of sensitive sites (Høgvard 2003;Hagen et al. 2012). The impacts of uncontrolled or poorly planned tourism on archaeological sites have been well documented in other parts of the world (Markham et al. 2016), but there is currently limited information for the Arctic. In Norway, damage to archaeological sites at Kautokeino (Finnmark) and Lake Leinavatnet (Troms County) from all-terrain vehicles, illegal campsites and hikers' paths has been documented (Blankholm 2009). Increased visitor numbers to Svalbard has caused clear damage to cultural heritage sites, such as at the early twentieth-century marble mining settlement of London (Hagen et al. 2012;Thuestad et al. 2015). Melting ice, thawing permafrost and coastal erosion is exposing archaeological sites in the Arctic to potential damage not just from tourists, but also from commercial and non-commercial collectors. This includes some local Arctic communities who collectoften legally, but sometimes illegally-artefacts and other archaeological resources found on the ground surface or eroding from the shoreline (Staley 1993;Hollowell 2006). An increase in this type of collecting should be expected as the erosion of coastal archaeological sites accelerates and melting ice and thawing permafrost expose more remains. There is, however, also the danger of large-scale plundering of archaeological resources, as has been reported in north-east Siberia. Here, high-pressure hydraulic pumps have been used to 'mine' concentrations of mammoth remains at kill and butchery sites such as Berelekh, Yana and Buor-Khaya (Pitulko pers. comm., 2014;see OSM 1, references 43-44). Discussion Our research has reviewed 46 articles and reports that identify impacts of coastal erosion, permafrost thaw, vegetation increase, tundra fires and increased accessibility to archaeological sites in the Arctic (OSM 3). 
That 42 of these articles are published after 2000 demonstrates the recent increase in evidence of damage to Arctic archaeological sites. The increase may be due partly to a rising awareness of the issues, but it also signals a real increase in the number of sites that are being damaged. In light of both the damage already documented and predicted (Stocker et al. 2013), we should prepare for a new reality where archaeologists and heritage managers must deal with a growing number of vulnerable and degrading sites. An effective response to this emerging situation requires the development of new methods and strategies to detect, monitor and mitigate vulnerable sites, and, where necessary, to prioritise between them. Detecting vulnerable sites The Arctic contains at least 180 000 archaeological sites (Table 1). Very few of these sites have been investigated and we know little about their current state of preservation. It is often assumed that the remoteness and the climate associated with these sites provide protection enough. As the examples highlighted here demonstrate, however, climate change means that this may no longer be the case. Paradoxically, remoteness now compounds the problem: sites far from population centres or popular travel routes cannot be visited often, and may be damaged or disappear completely before being documented. As it is impossible to visit and survey all the sites in the Arctic, new methods to detect and quantify site changes on a regional scale must be developed. This will allow for more effective and targeted site inspection and monitoring or mitigation efforts in the future. Studies have shown that the impact of sea-level rise on archaeological sites can be assessed at a regional scale, using techniques such as remote sensing (e.g. unmanned aerial vehicles or satellite imagery) combined with geographic information systems (GIS) (e.g. Solsten & Aitken 2006;see OSM 1, references 45-46). The value of such methods is highly dependent upon the quality of the input data. This can be highly variable for the Arctic, large areas of which remain poorly mapped, with positions and elevations of archaeological sites often inaccurately recorded. This has serious negative consequences for the development of predictive models and assessments that are so vital for effective prioritisation between sites. Reliable estimates for impact rates of erosion, permafrost thaw, vegetation increase and human access are also currently lacking. To a certain extent, however, such estimates have been advanced for the modelling of the natural environment (e.g. Lantuit et al. 2012;Slater & Lawrence 2013), but the resolution is often too low to be useful for the monitoring of archaeological sites. Furthermore, the physical and chemical composition of archaeological deposits is very different from the natural soils for which such estimates were originally developed. Increased research effort is therefore required to investigate how archaeological sites and artefacts are being affected by ongoing climate change. Monitoring and mitigation Monitoring vulnerable sites in the vast and remote Arctic (Table 1) presents an enormous challenge, especially considering the limited number of archaeologists working here. One method to increase capacity in response to this challenge is to work with local people. 
Scotland's Coastal Heritage at Risk Project (SCHARP), for example, asked volunteer citizen scientists to use a smartphone and tablet applications to assist with the identification and monitoring of vulnerable sites (Dawson 2015). Several other studies have also used vulnerability protocols to monitor the state of archaeological sites (e.g. Daly 2014; see OSM 1, references 47-48). If future archaeological surveys in the Arctic were equipped with a standard protocol for evaluating site vulnerability, the systematic data collected could serve as baselines for monitoring change. Observations by archaeologists or local informants will determine the necessity of establishing more detailed environmental monitoring of relevant parameters. The collected data will help to provide a stronger knowledge base for protection and mitigation strategies (Rytter & Schonhowd 2015;Sidell & Panter 2016). We currently lack a full understanding of which parameters have the most effect on preservation conditions and therefore of how to set threshold values for when to respond (Martens 2016). Although there are many examples of different mitigation measures being applied to protect archaeological deposits from erosion, microbial degradation and vegetation increase (e.g. Rytter & Schonhowd 2015), such strategies have seldom been applied in the Arctic. This is probably due to the high costs and significant logistical challenges of applying such protective measures here. In some cases, however, low-tech mitigation measures could be an option to slow the degradation processes. Snow fences, for example, could be used to increase the soil-water content, and soil covers could be used to insulate the ground surface. These measures would buffer against variations in soil-water content, and may also prevent erosion, although such measures would require thorough testing before large-scale application. Rescue and prioritisation of sites The erosion processes occurring along the north coasts of Alaska, the western Canadian Arctic and in parts of Siberia are already so frequent and destructive that immediate action is needed. Excavations in both Alaska and Siberia demonstrate that anything not excavated will be lost within a few years of exposure (Jensen pers. comm.; Pitulko pers. comm.). Excavations in the Arctic are often more challenging than those in other regions and hence can be very expensive and time-consuming. The existing mechanisms for response-including rescue excavations-are already regularly overwhelmed, and pressures will become more acute in years to come. In addition, conventional science funding models are insensitive to the rate at which sites are now being destroyed. In Alaska, it is already impossible to manage all threatened sites using existing resources. It is therefore essential to find effective methods of evaluating the significance and potential of sites in order to prioritise those that should be excavated and those that must be allowed to decay. Existing methods of prioritising eroding coastal sites in Scotland (Dawson 2013) could be adapted for the Arctic. The way forward Awareness of the climatic threats faced by cultural heritage around the world is increasing. A range of 'bottom-up' initiatives, such as IHOPE and the Pocantico Call, have emerged (see OSM 3), and several national initiatives aimed at monitoring and responding to the impacts of climate change on cultural heritage have also been developed. The US National Park Service (NPS) established one such approach in 2009. 
The NPS Climate Change Response Program recognises the need to address the impact of climate on cultural heritage, and to learn from cultural heritage for all areas of climate response (Rockman 2015; see OSM 1, references 49-50). As archaeologists and cultural resource managers produce strategies to handle this growing problem, however, it must be understood that it cannot be engaged effectively by any single organisation or nation. It is therefore crucial that knowledge is shared between cultural resource managers, researchers and those engaged in international projects dealing with the issue of climate effects on heritage. Current scientific projects, such as the Arctic CHAR Project (Canada), REMAINS of Greenland, NABO (Greenland), InSituFarms (Norway) and SPARC (Norway), focus on how archaeological sites and materials are, and will be, affected by climate change in the Arctic (see OSM 3). Financially stretched regional and national research-oriented funding agencies, however, cannot bear the burden of supporting the large-scale, sustained response required to face these challenges. New funding models, staff education and recruitment, public engagement and research must be developed and implemented. Archaeologists and allied scientists must also publicise research on climate threats to cultural heritage, both in scientific journals and in more popular forms. Media coverage, especially in the Arctic, has a key role to play in creating awareness and increasing the public pressure required to direct resources to research and mitigation. Conclusions Coastal erosion, permafrost thaw, increasing vegetation, tundra fires and increased accessibility are all part of the broad picture of climate change, with significant implications for the continued preservation of archaeological sites in the Arctic. Each type of impact has different effects, causing damage at timescales varying from days to decades, or even centuries. Consequently, some sites are in immediate danger and others are safe, at least for now. How many sites fall into each of these categories is unknown, so we must develop methods to detect the most vulnerable sites. Methods are also required for effectively managing sites currently characterised as vulnerable. In some areas, it should be possible to monitor sites through the combined use of citizen science projects, vulnerability protocols and environmental monitoring programmes. We must also acknowledge the particular challenges of monitoring such sites, given the size and remoteness of the Arctic. Given that very little has been done to develop methods for mitigation, excavation may seem to be the only currently applicable solution for managing archaeological deposits at risk of degradation. Excavations in permafrost regions are, however, expensive and timeconsuming, and archaeologists are already overwhelmed. With the climate continuing to change, this situation will undoubtedly deteriorate. As natural processes are causing the damage, most jurisdictions have no designated funds or programmes for archaeological mitigation. This must change if we are to respond in a serious and efficient manner to the problem of natural threats to heritage. Concurrently, we must be realistic and acknowledge that it will be necessary to prioritise between sites in order to direct limited resources to the most valuable sites. 
The current situation in parts of the Arctic clearly demonstrates that we are poorly prepared to respond to a scenario where system-wide, natural processes affect thousands of archaeological sites at once. There are no easy solutions, but the longer we wait, the more difficult the challenges will become.

Supplementary material

To view supplementary material for this article, please visit https://doi.org/10.15184/aqy.2018.8
Chemical Bath Deposition of PbS:Hg2+ Nanocrystalline Thin Films

1 Facultad de Ciencias Físico Matemáticas, Posgrado en Física Aplicada, Benemérita Universidad Autónoma de Puebla, Avenida San Claudio y 18 Sur, Colonia San Manuel, Ciudad Universitaria, 72570 Puebla, PUE, Mexico
2 Facultad de Ciencias Químicas, Benemérita Universidad Autónoma de Puebla, P.O. Box 1067, 72001 Puebla, PUE, Mexico
3 Laboratorio de Síntesis de Complejos, Facultad de Ciencias Químicas, Benemérita Universidad Autónoma de Puebla, P.O. Box 1067, 72001 Puebla, PUE, Mexico
4 Centro de Física Aplicada y Tecnología Avanzada, Universidad Nacional Autónoma de México, Boulevard Juriquilla 3001, 76230 Santiago de Querétaro, QRO, Mexico

Introduction

There is increasing interest in the deposition of ternary derivative materials due to their potential for designing and tailoring not only the lattice parameters, but also the forbidden band gap energy (Eg), by means of the growth parameters [1,2]. Accordingly, two techniques have been successfully employed: successive ionic layer adsorption and reaction (SILAR) [3] and sol-gel methods [4]. However, most of the reported studies have focused on the deposition of ternary derivative materials as thin films, such as Cd1−xZnxS [5], Cd1−xSCux [6], HgxCd1−xS [7], and PbS1−xNix. It must be pointed out that PbS thin films are promising photovoltaic materials, since their Eg can be adjusted to match the ideal Eg of ∼1.5 eV required for an efficient solar cell [8]. Also, new size-dependent physical aspects have generated an ongoing thrust for new practical applications. PbS nanocrystals with grain-size (GS) dimensions in the range 1-20 nm are of technological interest for advanced optoelectronic applications, showing a stronger quantum confinement effect when the crystallite size matches the dimension of the Bohr exciton [9]. In this context, there are two situations: the weak and the strong confinement regimes [10]. In the weak regime, the radius of the electron-hole pair causes a blue shift in the absorption spectrum, but the range of motion of the exciton is limited. The confinement effect appears as a shift of the absorption spectra to lower wavelengths, due to the change in Eg, which can be controlled through the modification of the surface functionalization [11,12]. Several schemes for using nanocrystals in solar cells are under active consideration, including nanocrystal-polymer composites [13,14], and, in the present work, PbS and Hg2+-doped PbS nanostructured films were prepared by chemical bath (CB) deposition, in order to investigate the structural and optical properties of undoped and doped PbS films. Semiquantitative measurements of the atomic concentrations of the elements and the micrographs were obtained by energy-dispersive spectroscopy (EDS) and scanning electron microscopy using a JEOL JSM-6610LV SEM. The crystalline structure characterization was carried out on X-ray diffraction (XRD) patterns recorded with a Bruker D8 Discover diffractometer, using the Cu Kα1 line. The grain size was determined from the Scherrer formula. The optical absorption spectra, measured with a Varian CARY 100 spectrophotometer, allowed the forbidden band gap energy (Eg) to be calculated using the (αhν)² versus energy plot, where α is the optical absorption coefficient and hν the photon energy. The Raman spectra were determined with a micro-Raman System LabRam-Idler apparatus with an excitation line of 632.8 nm.
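As a worked illustration of the characterization steps just listed, the sketch below estimates the grain size from XRD peak broadening with the Scherrer formula and the optical gap from a (αhν)² versus hν (Tauc-type) extrapolation. The numerical inputs are placeholders for demonstration, not measured values from this work.

```python
import numpy as np

def scherrer_grain_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, k=0.9):
    """Grain size (nm) from the Scherrer formula D = K*lambda / (beta*cos(theta)),
    with beta the peak FWHM in radians and lambda the Cu K-alpha wavelength."""
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)
    return k * wavelength_nm / (beta * np.cos(theta))

def tauc_gap(energy_eV, alpha, fit_window):
    """Direct-gap estimate: fit (alpha*h*nu)^2 vs. h*nu over a linear window near the
    absorption edge and extrapolate to zero to obtain Eg."""
    y = (alpha * energy_eV) ** 2
    sel = (energy_eV >= fit_window[0]) & (energy_eV <= fit_window[1])
    slope, intercept = np.polyfit(energy_eV[sel], y[sel], 1)
    return -intercept / slope  # intersection with the energy axis

# Placeholder example: a PbS [111]-type reflection near 2-theta = 25.9 deg, 0.25 deg FWHM.
print(round(scherrer_grain_size(25.9, 0.25), 1), "nm")
```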
Experimental Procedure

The chemical reactions for the growth of PbS films doped with Hg2+ were determined by employing the reported cell potential values in basic media. The cell potential and the Gibbs free energy are related through the Nernst equation ΔG° = −nFE°, where n is the number of equivalents, F is the Faraday constant, and E° is the cell potential. The numerical value of ΔG° provides thermodynamic information on the possibility of spontaneous chemical reactions. The formation of the coordination complex [M(NH3)4]2+ is key to releasing M2+ ions (M2+ = Cd2+, Pb2+, Zn2+, etc.), and their slow recombination with S2− ions leads to the spontaneous formation of the MS precipitate in an easily controlled process. The growth of PbS is therefore carried out according to the following steps: (a) by mixing Pb(CH3CO3)2, KOH, and NH4NO3, the coordination complex [Pb(NH3)4]2+ is generated indirectly; (b) the S2− ions are generated in the solution by thiourea decomposition in alkaline medium; (c) the aforementioned steps allow the slow process at the substrate surface to take place predominantly over direct hydrolysis of thiourea in the bulk of the reaction bath [15]. Doped films with different [Hg2+] were obtained by the in situ addition of 5, 10, 15, 20, and 25 mL of the doping solution to the solutions for PbS growth: Pb(CH3CO3)2 (0.01 M), KOH (0.5 M), NH4NO3 (1.5 M), and SC(NH2)2 (0.2 M). The solutions were mixed and the final solution kept at 40 ± 2 °C for 0.5 h, while the substrate remained inside the solution. The optimal doping concentration [Hg2+], provided by a Hg(NO3)2 solution (0.031 M), was determined after several trials, when the films had attained good adherence. This solution is routinely added to the reaction mixture during the growth of the PbS films. All the solutions used were prepared with deionised water with a resistivity of 18.2 MΩ. The samples were labelled PbSHg0 for the undoped sample and PbSHgx (x = 5, 10, 15, 20, and 25) for the doped samples. The total volume of the growth solution consisted of the solution volume for the PbS growth (VPbS) plus the solution volume containing the doping Hg2+ chemical agent (V[Hg2+]): VPbS + V[Hg2+] = Vtot. The films were silver-colored, polycrystalline, of homogeneous consistency and with good adhesion to the substrate. The substrates were fixed vertically in the chemical bath at the corresponding deposition temperature for different periods. For the first 13 min of reaction time, the solution remained transparent, indicating the occurrence of the decomposition reaction. Beyond 15 min, the solution turned dark gray, indicating the formation of PbS nuclei. After completion of the deposition time, samples were removed from the solution, rinsed ultrasonically in hot deionized water for 5 min, and dried in air. Mirror-like gray thin film surfaces were obtained after removal of one side of the glass slide using cotton with an acid-chromium K2Cr2O7/HCl/H2O mixture.
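To make the thermodynamic screening described at the beginning of this section explicit, a minimal numerical sketch of the ΔG° = −nFE° relation follows; the electron count and cell potential used here are placeholder values, not the tabulated potentials employed by the authors.

```python
F = 96485.0  # Faraday constant, C per mole of electrons

def gibbs_free_energy(n_electrons, cell_potential_V):
    """Delta G0 = -n * F * E0 (J/mol); a negative value indicates a spontaneous reaction."""
    return -n_electrons * F * cell_potential_V

# Placeholder: a two-electron process with an assumed overall cell potential of +0.45 V.
dG = gibbs_free_energy(2, 0.45)
print(f"{dG / 1000:.1f} kJ/mol ->", "spontaneous" if dG < 0 else "non-spontaneous")
```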
SEM-EDS. The elemental analysis was performed only for Pb, S, and Hg, and the average atomic percentages and the Pb/S ratio of the undoped and doped films were calculated. The semiquantitative analysis of the films was carried out using the EDS technique, on undoped and doped PbS thin films at different locations, in order to study their stoichiometry. Table 1 contains the atomic concentrations of Pb, S, and Hg. For the doped samples, the increase of the Hg concentration in the PbS films is clearly observed, reaching a value of 11.51 at.%. When the Hg2+ ion enters as a substitute for the Pb2+ ion, the samples are observed to be slightly deficient in the S2− ion. Therefore, for the higher [Hg2+]x values considered here, the grown material can be regarded as a doped semiconductor, but it can also be regarded as close to a solid solution of the Pb1−xHgxS type. The micrographs of the undoped and doped PbS films are shown in Figure 1, with scale bars of 30 nm, labeled as (a) undoped PbSHg0, (b) doped PbSHg15, and (c) doped PbSHg25. The surface morphology is uniform, and its aspect is compact and of polycrystalline nature. The SEM micrographs show that the grain size decreases as the [Hg2+]x concentration increases. The granules appear with different sizes, and it can be concluded that the doping plays a vital role in the morphological properties of the PbS thin films. In the micrographs of the doped films, crystals appear as small spheres, and the degree of crystallinity decreases as the Hg concentration increases, as shown in Figure 2.
A very adherent film with a metallic gray-black colour was obtained for the doped films, revealing continuous and compact polycrystalline layers. Similar morphologies have been reported [7,16]. Based on systematic studies varying the [Hg2+]x values, we identified the critical factor determining the architectural features of the PbSHg nanocrystals.
XRD. The XRD patterns of the undoped and doped PbS samples show diffraction peaks at 2θ ≈ 26.0°, 30.1°, 43.1°, 51.0°, and 53.5°, corresponding to the [111], [200], [220], [311], and [222] planes, respectively. These diffraction peaks can be indexed to the (ZB) crystalline phase, according to the reference pattern JCPDS 05-0592. The XRD spectra for the PbSHg0-PbSHg25 films indicate that [111] is the preferred orientation. The diffraction peak along the [111] plane shows the highest intensity and is well defined and sharp, indicating high crystallinity. Using the most intense [111] peak and Bragg's law, the lattice constant is a = 5.94 Å, since the (ZB) phase belongs to the cubic crystal system. A maximum value of the peak intensity is reached for the prepared sample, indicating either the existence of a larger number of [111] planes or that the [111] planes have a lower number of defects. This phenomenon may be attributed to the doping effect. The low-intensity peaks observed in the XRD patterns of the doped PbSHg20 and PbSHg25 samples indicate that these films consist of fine crystallites, that is, they are nanocrystalline. The background in the patterns is due to the amorphous glass substrate and also to some amorphous phase present in the PbSHg films. There are two main possible causes of peak broadening: the increase in heterogeneity of the films due to the incorporation of Hg2+ into the host lattice, and the decrease in crystallite size. These effects are associated with doped PbSHg nanocrystals grown at [Hg2+]x in the regime where the cluster mechanism dominates, contrary to films grown via the ion-ion mechanism, where the crystal size was larger, consisting of PbSHg nanocrystals embedded in an apparent matrix of PbS. A possible explanation is as follows: the ionic radii are Pb2+ = 1.21 Å, S2− = 1.84 Å, and Hg2+ = 1.10 Å; therefore, for a relatively low concentration of Hg2+ ions, the majority can be located (i) in Pb2+ vacancy sites, which would otherwise be empty, (ii) in Pb2+ sites, causing the appearance of Pb interstitials, and (iii) in interstitial positions. As Hg2+ occupies more and more Pb2+ sites in the host lattice, the internal strain increases and the crystal structure of the PbSHg solid solution becomes unstable. In order to stabilize the crystal structure, the grain size is reduced to release the strain. As the Hg2+ concentration is increased, the diffraction peaks become broader due to the reduction in grain size. At this level of [Hg2+]x, the PbSHg can be considered a doped material [6]. The incorporation of Hg2+ has proven to be more effective in Pb chalcogenides than in Zn chalcogenides, a result explained in terms of the cation size. The inset of Figure 3 shows the average grain size (GS) versus [Hg2+]x for the undoped and doped PbS samples, obtained from the [220] plane. The GS decreases with doping, from GS ≈ 38 nm for PbSHg0. A decrease of the GS due to doping has been reported for doped PbS films [6]. A decrease in the degree of order of the crystallites is expected to lead to enhanced formation of stable nuclei at the initial stages of growth, followed by impaired grain growth, hence resulting in smaller grains as the Hg content increases.
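The lattice-constant estimate quoted above can be reproduced from Bragg's law. A short sketch, assuming Cu Kα1 radiation and the cubic indexing used in the text; the 2θ values are those listed in the Conclusions:

```python
import numpy as np

# Lattice constant of a cubic phase from Bragg's law:
#   d = lambda / (2 sin(theta)),   a = d * sqrt(h^2 + k^2 + l^2)
lam = 1.5406  # Cu K-alpha1 wavelength, angstrom

peaks = {  # 2-theta (deg) -> (h, k, l)
    26.00: (1, 1, 1),
    30.07: (2, 0, 0),
    43.10: (2, 2, 0),
    51.00: (3, 1, 1),
    53.48: (2, 2, 2),
}

for two_theta, (h, k, l) in peaks.items():
    d = lam / (2 * np.sin(np.radians(two_theta / 2)))
    a = d * np.sqrt(h**2 + k**2 + l**2)
    print(f"2theta = {two_theta:5.2f} deg, (hkl) = ({h}{k}{l}), a = {a:.3f} A")
```

All five reflections give a ≈ 5.93-5.94 Å, consistent with the value quoted from the [111] peak.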
Optical Absorption. Through the intersection of the extrapolated straight line with the photon energy axis, an Eg value is obtained in the same way for all samples. Figure 3 shows a graph of Eg versus [Hg2+]x. In this plot, it can be observed that Eg = 1.4 eV for the PbSHg0 sample. The confinement effect appears as a shift of the absorption edge toward lower wavelengths, possibly due to the decrease in GS, the decrease in the number of defects, and the change in color. The optical spectra clearly show an absorption edge shift toward lower wavelengths in the doped films, which indicates an increase in the band gap as a result of Hg doping. Doping PbS with Hg is expected to shift the optical band gap away from the 0.41 eV value of PbS in the resulting ternary PbSHg alloy. Thus, the large modification observed for the ternary PbSHg alloy shows the existence of strong quantum confinement in this system. The experimentally observed values of the shift indicate alloying in nanocrystalline PbS. Such an increase has been observed previously [17,18]. The Eg of the doped samples, in the 1.4-2.4 eV range, shows the extent of the quantum size effect in the nanoparticle films. The fundamental optical transition of PbS (Eg = 0.41 eV) is not observed in these doped films, presumably because of complete mixing of PbS with Hg2+, affording a single ternary compound of the PbxHg1−xS type [1]. It is observed that the size effect on the optical band gap is stronger in these nanoparticle films than in PbS nanoparticles of 24-10 nm average crystallite size with Eg from 2.22 to 2.65 eV [2]. The observed increase in the quantum size effect could possibly be attributed to a decrease in the effective mass [19][20][21]. The increase of Eg with the [Hg2+]x concentration in the films is accompanied by the presence of an excitonic structure in the material. Excitonic structures are readily observed in semiconductors with a large exciton binding energy, such as CdSe [22]. The optical Eg of the doped films varied from 1.4 to 2.4 eV as the [Hg2+]x doping increased. A similar shift of the excitonic peak toward higher energies in CdSe crystallites has been explained by a decrease in crystallite size [23]. The blue shift of the band gap is associated with the decrease of the GS. It is clear that Eg increases as [Hg2+]x increases. As mentioned earlier, we observed a systematic decrease in the crystallite size with increasing concentration. Since the estimated mean crystallite size in this case is approximately half the value of the exciton Bohr radius in PbS, we observe strong confinement in the doped PbSHg films. Using published data, nanocrystalline sizes of 4-5 nm corresponding to Eg = 1-1.25 eV, 3.8 nm for Eg = 1.4 eV, 2.7 nm for Eg = 2.0 eV, and 2 nm for Eg = 2.7-3.8 eV, respectively, were estimated [19]. Figure 4 shows a plot of Eg versus [Hg2+]x; the absorption spectrum of PbSHg0 is shown in the lower part. In our case, the introduction of Hg2+ ions into PbS induces an increase of strain. On the other hand, the creation of S2− vacancies relaxes the lattice. Strain in PbSHg tends to reduce the GS. Upon doping, the number of nucleation centers increases both on the substrates and in the solution; in this way, the nucleation rate becomes larger than the growth rate, leading to a broader dispersion in GS and to a decrease of its mean value.
3.4. Raman. Raman spectroscopy with the 514.5 nm laser line was used to analyze the films. The spectra of the undoped and doped PbS films displayed in Figure 5 show the same bands at 135, 217, 433, and 647 cm−1; the last three peaks correspond to the fundamental longitudinal optical (LO) phonon mode of the rock-salt structure, its first overtone (2LO), and its second overtone (3LO), respectively. The strong band at ∼133-140 cm−1 is attributed to a combination of longitudinal and transverse acoustic modes [24]. It has been reported that a band centred at 961 cm−1 can be due to sulphates in the sample rather than to laser-induced degradation [24,25], which is consistent with the results reported in [26]. However, in our samples, the XRD patterns of the undoped and doped PbS structures confirm that the product consists of pure cubic PbS without the presence of sulphates. It should be noted that data from PbS nanoparticles (18 nm in diameter) in air at room temperature have shown the LO band at ∼210 cm−1 with a small shoulder attributed to a surface phonon (SP) mode at 205 cm−1; however, a downward trend in wavenumber was reported for the former as the particle size increased to 38 nm in diameter [26].
Conclusions
We have reported the growth of PbS doped with Hg2+ ions, affording nanocrystalline films, by the chemical bath technique. The X-ray patterns show peaks at 2θ = 26.00°, 30.07°, 43.10°, 51.00°, and 53.48°, which belong to the ZB phase. The grain size ranges from ∼32 to 20 nm. The optical absorption spectra were quantified for the PbSHg films, in which the blue shift of the band gap is associated with the decrease of the average GS. The forbidden band gap energy shifts in the range 1.4-2.4 eV. Raman spectroscopy (RS) exhibited a band at ∼135 cm−1, displaying only the PbS ZB structure.
Figure 4: Band gap energy (Eg) as a function of [Hg2+]x. The inset illustrates the method used to calculate Eg from the optical absorption measurements.
Figure 5: Raman spectra for the undoped and doped PbS films.
Table 1: The atomic concentrations of Pb, S, and Hg.
v3-fos-license
2014-10-01T00:00:00.000Z
2012-11-22T00:00:00.000
2765921
{ "extfieldsofstudy": [ "Computer Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://mathematical-neuroscience.springeropen.com/track/pdf/10.1186/2190-8567-2-13", "pdf_hash": "a0a401412075a8bf99736b65774c89dc3ed457e1", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:398", "s2fieldsofstudy": [ "Computer Science", "Mathematics", "Physics" ], "sha1": "5dc007b13263ff3b2384dc92b457918028c779ad", "year": 2012 }
pes2o/s2orc
Multiscale analysis of slow-fast neuronal learning models with noise This paper deals with the application of temporal averaging methods to recurrent networks of noisy neurons undergoing a slow and unsupervised modification of their connectivity matrix called learning. Three time-scales arise for these models: (i) the fast neuronal dynamics, (ii) the intermediate external input to the system, and (iii) the slow learning mechanisms. Based on this time-scale separation, we apply an extension of the mathematical theory of stochastic averaging with periodic forcing in order to derive a reduced deterministic model for the connectivity dynamics. We focus on a class of models where the activity is linear to understand the specificity of several learning rules (Hebbian, trace or anti-symmetric learning). In a weakly connected regime, we study the equilibrium connectivity which gathers the entire ‘knowledge’ of the network about the inputs. We develop an asymptotic method to approximate this equilibrium. We show that the symmetric part of the connectivity post-learning encodes the correlation structure of the inputs, whereas the anti-symmetric part corresponds to the cross correlation between the inputs and their time derivative. Moreover, the time-scales ratio appears as an important parameter revealing temporal correlations. Introduction Complex systems are made of a large number of interacting elements leading to nontrivial behaviors. They arise in various areas of research such as biology, social sciences, physics or communication networks. In particular in neuroscience, the nervous system is composed of billions of interconnected neurons interacting with their environment. Two specific features of this class of complex systems are that (i) external inputs and (ii) internal sources of random fluctuations influence their dynamics. Their theoretical understanding is a great challenge and involves high-dimensional non-linear mathematical models integrating non-autonomous and stochastic perturbations. Modeling these systems gives rise to many different scales both in space and in time. In particular, learning processes in the brain involve three time-scales: from neuronal activity (fast), external stimulation (intermediate) to synaptic plasticity (slow). Here, fast time-scale corresponds to a few milliseconds and slow time-scale to minutes/hour, and intermediate time-scale generally ranges between fast and slow scales, although some stimuli may be faster than neuronal activity time-scale (e.g., submilliseconds auditory signals [1]). The separation of these time-scales is an important and useful property in their study. Indeed, multiscale methods appear particularly relevant to handle and simplify such complex systems. First, stochastic averaging principle [2,3] is a powerful tool to analyze the impact of noise on slow-fast dynamical systems. This method relies on approximating the fast dynamics by its quasi-stationary measure and averaging the slow evolution with respect to this measure. In the asymptotic regime of perfect time-scale separation, this leads to a slow reduced system whose analysis enables a better understanding of the original stochastic model. Second, periodic averaging theory [4], which has been originally developed for celestial mechanics, is particularly relevant to study the effect of fast deterministic and periodic perturbations (external input) on dynamical systems. This method also leads to a reduced model where the external perturbation is time-averaged. 
It seems appropriate to gather these two methods to address our case of a noisy and input-driven slow-fast dynamical system. This combined approach provides a novel way to understand the interactions between the three time-scales relevant in our models. More precisely, we will consider the following class of multiscale stochastic differential equations (SDEs), with ε1, ε2 > 0 two small parameters:

dv = (1/ε1) F(v, w, u(t/ε2)) dt + (1/√ε1) Σ dB(t),
dw = G(v, w) dt,    (1)

where u(t/ε2) represents the value of the external input at time t. Random perturbations are included in the form of a diffusion term, and (B(t)) is a standard Brownian motion. We are interested in the double limit ε1 → 0 and ε2 → 0 to describe the evolution of the slow variable w in the asymptotic regime where both the variable v and the external input are much faster than w. This asymptotic regime corresponds to the study of a neuronal network in which both the external input u and the neuronal activity v operate on a faster time-scale than the slow plasticity-driven evolution of the synaptic weights W. To account for the possible difference of time-scales between v and the input, we introduce the time-scale ratio μ = ε1/ε2 ∈ [0, ∞]. In the interesting case where μ ∈ (0, ∞), one needs to understand, for any fixed w0, the long-time behavior of the rescaled periodically forced SDE obtained by freezing w = w0 in the fast equation (see equation (6) below). Recently, in an important contribution [5], a precise understanding of the long-time behavior of such processes has been obtained using methods from partial differential equations. In particular, conditions ensuring the existence of a periodic family of probability measures to which the law of v converges as time grows have been identified, together with a sharp estimation of the speed of mixing. These results are at the heart of the extension of the classical stochastic averaging principle [2] to the case of periodically forced slow-fast SDEs [6]. As a result, we obtain a reduced equation describing the slow evolution of the variable w in the form of an ordinary differential equation dw/dt = Ḡ(w), where Ḡ is constructed as an average of G with respect to a specific probability measure, as explained in Section 2. This paper first introduces the appropriate mathematical framework and then focuses on applying these multiscale methods to learning neural networks. The individual elements of these networks are neurons or populations of neurons. A common assumption at the basis of mathematical neuroscience [7] is to model their behavior by a stochastic differential equation which is made of four different contributions: (i) an intrinsic dynamics term, (ii) a communication term, (iii) a term for the external input, and (iv) a stochastic term for the intrinsic variability. Assuming that their activity is represented by the fast variable v ∈ R^n, the first equation of system (1) is a generic representation of a neural network (the function F corresponds to the first three terms contributing to the dynamics). In the literature, the level of nonlinearity of the function F ranges from a linear (or almost-linear) system to spiking neuron dynamics [8], yet the structure of the system is universal. These neurons are interconnected through a connectivity matrix which represents the strength of the synapses connecting the real neurons together. The slow modification of the connectivity between the neurons is commonly thought to be the essence of learning. Unsupervised learning rules update the connectivity exclusively based on the value of the activity variable.
Therefore, this mechanism is represented by the slow equation above, where w ∈ R n×n is the connectivity matrix and G is the learning rule. Probably the most famous of these rules is the Hebbian learning rule introduced in [9]. It says that if both neurons A and B are active at the same time, then the synapses from A to B and B to A should be strengthened proportionally to the product of the activity of A and B. There are many different variations of this correlation-based principle which can be found in [10,11]. Another recent, unsupervised, biologically motivated learning rule is the spike-timing-dependent plasticity (STDP) reviewed in [12]. It is similar to Hebbian learning except that it focuses on causation instead of correlation and that it occurs on a faster time-scale. Both of these types of rule correspond to G being quadratic in v. Previous literature about dynamic learning networks is thick, yet we take a significantly different approach to understand the problem. An historical focus was the understanding of feedforward deterministic networks [13][14][15]. Another approach consisted in precomputing the connectivity of a recurrent network according to the principles underlying the Hebbian rule [16]. Actually, most of current research in the field is focused on STDP and is based on the precise times of the spikes, making them explicit in computations [17][18][19][20]. Our approach is different from the others regarding at least one of the following points: (i) we consider recurrent networks, (ii) we study the evolution of the coupled system activity/connectivity, and (iii) we consider bounded dynamical systems for the activity without asking them to be spiking. Besides, our approach is a rigorous mathematical analysis in a field where most results rely heavily on heuristic arguments and numerical simulations. To our knowledge, this is the first time such models expressed in a slow-fast SDE formalism are analyzed using temporal averaging principles. The purpose of this application is to understand what the network learns from the exposition to time-dependent inputs. In other words, we are interested in the evolution of the connectivity variable, which evolves on a slow time-scale, under the influence of the external input and some noise added on the fast variable. More precisely, we intend to explicitly compute the equilibrium connectivities of such systems. This final matrix corresponds to the knowledge the network has extracted from the inputs. Although the derivation of the results is mathematically tough for untrained readers, we have tried to extract widely understandable conclusions from our mathematical results and we believe this paper brings novel elements to the debate about the role and mechanisms of learning in large scale networks. Although the averaging method is a generic principle, we have made significant assumptions to keep the analysis of the averaged system mathematically tractable. In particular, we will assume that the activity evolves according to a linear stochastic differential equation. This is not very realistic when modeling individual neurons, but it seems more reasonable to model populations of neurons; see Chapter 11 of [7]. The paper is organized as follows. Section 2 is devoted to introducing the temporal averaging theory. Theorem 2.2 is the main result of this section. It provides the technical tool to tackle learning neural networks. 
Section 3 corresponds to application of the mathematical tools developed in the previous section onto the models of learning neural networks. A generic model is described and three different particular models of increasing complexity are analyzed. First, Hebbian learning, then trace-learning, and finally STDP learning are analyzed for linear activities. Finally, Section 4 is a discussion of the consequences of the previous results from the viewpoint of their biological interpretation. Averaging principles: theory In this section, we present multiscale theoretical results concerning stochastic averaging of periodically forced SDEs (Section 2.3). These results combine ideas from singular perturbations, classical periodic averaging and stochastic averaging principles. Therefore, we recall briefly, in Sections 2.1 and 2.2, several basic features of these principles, providing several examples that are closely related to the application developed in Section 3. Periodic averaging principle We present here an example of a slow-fast ordinary differential equation perturbed by a fast external periodic input. We have chosen this example since it readily illustrates many ideas that will be developed in the following sections. In particular, this example shows how the ratio between the time-scale separation of the system and the time-scale of the input appears as a new crucial parameter. Example 2.1 Consider the following linear time-inhomogeneous dynamical system with 1 , 2 > 0 two parameters: This system is particularly handy since one can solve analytically the first ordinary differential equation, that is, where we have introduced the time-scales ratio μ := 1 2 . In this system, one can distinguish various asymptotic regimes when 1 and 2 are small according to the asymptotic value of μ: • Regime 1: Slow input μ = 0: First, if 1 → 0 and 2 is fixed, then v(t) is close to sin( t 2 ), and from geometric singular perturbation theory [21,22] and when 1 → 0, one does not recover the same asymptotic behavior as in Regime 1. • Regime 3: Time-scales matching 0 < μ < ∞: Now consider the intermediate case where 1 is asymptotically proportional to 2 . In this case, v can be approximated on the fast time-scale t/ 1 by the periodic solutionv μ (t) = 1 1+μ 2 (sin(μt) − μ cos(μt)) of dv dt = −v + sin(μt). As a consequence, w will be close to the solution of , Thus, we have seen in this example that 1. the two limits 1 → 0 and 2 → 0 do not commute, 2. the ratio μ between the internal time-scale separation 1 and the input time-scale 2 is a key parameter in the study of slow-fast systems subject to a time-dependent perturbation. Stochastic averaging principle Time-scales separation is a key property to investigate the dynamical behavior of non-linear multiscale systems, with techniques ranging from averaging principles to geometric singular perturbation theory. This property appears to be also crucial to understanding the impact of noise. Instead of carrying a small noise analysis, a multiscale approach based on the stochastic averaging principle [2] can be a powerful tool to unravel subtle interplays between noise properties and non-linearities. 
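Before moving to the stochastic case, the closed-form periodic solution quoted in Regime 3 of Example 2.1 can be checked numerically. A minimal sketch (the slow w-equation is omitted, since its displayed form is not reproduced above); it integrates dv/dt = −v + sin(μt) and compares the long-time behavior with v̄_μ(t) = (sin(μt) − μ cos(μt))/(1 + μ²) for small, matched, and large μ:

```python
import numpy as np

def integrate(mu, t_end=200.0, dt=1e-3):
    """Explicit Euler integration of dv/dt = -v + sin(mu*t), v(0) = 0."""
    n = int(t_end / dt)
    t = np.arange(n) * dt
    v = np.zeros(n)
    for k in range(n - 1):
        v[k + 1] = v[k] + dt * (-v[k] + np.sin(mu * t[k]))
    return t, v

for mu in [0.1, 1.0, 10.0]:
    t, v = integrate(mu)
    v_bar = (np.sin(mu * t) - mu * np.cos(mu * t)) / (1 + mu**2)
    tail = t > 0.8 * t[-1]                      # discard the transient
    err = np.max(np.abs(v[tail] - v_bar[tail]))
    print(f"mu = {mu:5.1f}: max |v - v_bar| on the tail = {err:.2e}")
```

For μ = 0.1 the periodic solution is close to sin(μt) (slow-input regime), while for μ = 10 its amplitude shrinks like 1/μ, qualitatively matching the two extremal regimes discussed above.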
More precisely, consider a system of SDEs in R p+q : with initial conditions v (0) = v 0 , w (0) = w 0 , and where w ∈ R q is called the slow variable, v ∈ R p is the fast variable, with F , G, smooth functions ensuring the existence and uniqueness for the solution (v , w ), and B(t) a p-dimensional standard Brownian motion, defined on a filtered probability space ( , F , P). Timescale separation in encoded in the small parameter , which denotes in this section a single positive real number. In order to approximate the behavior of (v , w ) for small , the idea is to average out the equation for the slow variable with respect to the stationary distribution of the fast one. More precisely, one first assumes that for each w ∈ R q fixed, the frozen fast SDE, admits a unique invariant measure, denoted ρ w (dv). Then, one defines the averaged drift vector fieldḠḠ and w the solution of dw dt =Ḡ(w) with the initial condition w(0) = y 0 . Under some dissipativity assumptions, the stochastic averaging principle [2] states: As a consequence, analyzing the behavior of the deterministic solution w can help to understand useful features of the stochastic process (v , w ). Example 2.2 In this example we consider a similar system as in Example 2.1, but with a noise term instead of the periodic perturbation. Namely, we consider (v , w ) the solution of the system of SDEs, with > 0 a small parameter and σ > 0 a positive constant. From Theorem 2.1, the stochastic slow variable w can be approximated in the sense of (3) by the deterministic solution w of where ρ(dv) is the stationary measure of the linear diffusion process, that is, Consequently, w can be approximated in the limit → 0 by the solution of Applying (3) leads to the following result: for any T > 0 and δ > 0, Interestingly, the asymptotic behavior of w for small is characterized by a deterministic trajectory that depends on the strength σ of the noise applied to the system. Thus, the stochastic averaging principle appears particularly interesting when unraveling the impact of noise strength on slow-fast systems. Many other results have been developed since, extending the set-up to the case where the slow variable has a diffusion component or to infinite-dimensional settings for instance, and also refining the convergence study, providing homogenization results concerning the limit of −1/2 (w − w) or establishing large deviation principles (see [23] for a recent monograph). However, fewer results are available in the case of non-homogeneous SDEs, that is, when the system is perturbed by an external timedependent signal. This setting is of particular interest in the framework of stochastic learning models, and we present the main relevant mathematical results in the following section. Double averaging principle Combining ideas of periodic and stochastic averaging introduced previously, we present here theoretical results concerning multiscale SDEs driven by an external time-periodic input. Consider (v , w ) the solution of with t → F (v, w, t) ∈ R p a τ -periodic function and = ( 1 , 2 ) ∈ R 2 + . Parameter 1 represents the internal time-scale separation and 2 the input time-scale. We consider the case where both 1 and 2 are small, that is, a strong time-scale separation between the fast variable v ∈ R p and the slow one w ∈ R q , and a fast periodic modulation of the fast drift F (v, w, ·). The following assumption is made to ensure existence and uniqueness of a strong solution to system (4). 
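Before stating the assumptions for the doubly averaged case, here is a small numerical illustration of the classical stochastic averaging of Example 2.2 above. The displayed equations of that example are not reproduced in the text, so the script assumes, purely for illustration, the system dv = −(1/ε)v dt + (σ/√ε) dB, dw/dt = −w + v²; its averaged limit is then dw/dt = −w + σ²/2, since v is an Ornstein-Uhlenbeck process with stationary variance σ²/2, which is consistent with the σ-dependence of the limit discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Euler-Maruyama simulation of the assumed slow-fast system.
eps, sigma = 1e-3, 1.0
T, dt = 5.0, 1e-4
n = int(T / dt)

v, w = 0.0, 0.0
for _ in range(n):
    v += dt * (-v / eps) + (sigma / np.sqrt(eps)) * np.sqrt(dt) * rng.standard_normal()
    w += dt * (-w + v**2)

print(f"w(T)           = {w:.3f}")
print(f"averaged limit = {sigma**2 / 2:.3f}")   # fixed point of dw/dt = -w + sigma^2/2
```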
In the following, z 1 , z 2 will denote the usual scalar product for vectors. Assumption 2.1 Existence and uniqueness of a strong solution (i) The functions F , G, and are locally Lipschitz continuous in the space variable z. More precisely, for any R > 0, there exists a constant α R such that for any z, z ∈ R p+q with z ≤ R and z ≤ R. (ii) There exists a constant R > 0 such that To control the asymptotic behavior of the fast variable, one further assumes the following. (ii) There exists r 0 < 0 such that for all t ≥ 0 and for all z, x ∈ R p+q , According to the value of μ ∈ {0, R * + , ∞}, the stochastic averaging principle is based on a description of the asymptotic behavior of various rescaled fast frozen processes. More precisely, under Assumptions 2.1 and 2.2, one can deduce that: • For any fixed w 0 ∈ R q and t 0 > 0 fixed, the law of the rescaled time-homogeneous frozen process, converges exponentially fast to a unique invariant probability measure denoted by ρ w 0 ,t 0 (dv). • For any fixed w 0 ∈ R q , there exists a τ μ -periodic evolution system of measures ν w 0 μ (t, dv), different from ρ w 0 ,t (dv) above, such that the law of the rescaled timeinhomogeneous frozen process, converges exponentially fast towards ν w 0 μ (t, ·), uniformly with respect to w 0 (cf. the Appendix Theorem A.1). • For any fixed w 0 ∈ R q , the law of the rescaled time-homogeneous frozen process, converges exponentially fast towards a unique invariant probability measure denoted byρ w 0 (dv). According to the value of μ, we introduce a vector fieldḠ μ which will play a role similar toḠ introduced in equation (2). Definition 2.2 We defineḠ μ : R q → R q as follows. In the time-scale matching case, that is, when 0 < μ < ∞, then Notation We may denote the periodic system of measures ν w μ (t, dv) associated with (6) by ν w μ [F, ](t, dv) to emphasize its relationship with F and . Accordingly, we may denoteḠ μ (w) byḠ [F, ] μ (w). We are now able to present our main mathematical result. Extending Theorem 2.1, the following theorem describes the asymptotic behavior of the slow variable w when → 0 with 1 / 2 → μ. We refer to [6] for more details about the full mathematical proof of this result. then the following convergence result holds, for all T > 0 and δ > 0: However, the study of the sequential limits 1 → 0 followed by 2 → 0 or 2 → 0 followed by 1 → 0 can be deduced from an appropriate combination of classical periodic and stochastic averaging theorems: • Slow input: If one considers the case where the limit 1 → 0 is taken first, so that from Theorem 2.1 with fast variable v and slow variables w and t (with the trivial equationṫ = 1), w is close in probability on finite time-intervals to the solution of the following inhomogeneous ordinary differential equation: Then taking the limit 2 → 0, one can apply the deterministic averaging principle to the fast periodic vector fieldG(w, t/ 2 ), so thatw converges when 2 → 0 to the solution of • Fast input: If the limit 2 → 0 is taken first, one first has to perform a classical averaging of the periodic drift F (v, w, t/ 2 ) leading to the homogeneous system of SDEs (4), but withF (v, w) instead of F (v, w, t/ 2 ). Then, an application of Theorem 2.1 on this system gives an averaged vector field 2. To study the extremal cases μ = 0 and μ = ∞ in full generality, one would need to consider all the possible relationships between 1 and 2 , not only the linear one as in the present article, but also of the type 1 = α 2 for example. 
In this case, we believe that the regime α < 1 converges to the same limit as taking the limit 2 first and the regime α > 1 corresponds to taking the limit 1 first. The intermediate regime α = 1 seems to be the only one for which the limit cannot be obtained by combining classical averaging principles. Therefore, the present article is focused on this case, in which the averaged system depends explicitly on the scaling parameter μ. Moreover, in terms of applications, this parameter can have a relatively easy interpretation in terms of the ratio of time-scales between intrinsic neuronal activity and typical stimulus time-scales in a given situation. Although the zeroth order limit (i.e., the averaged system) seems to depend only on the position of α with respect to 1, it seems reasonable to expect that the fluctuations around the limit would depend on the precise value of α. This is a difficult question which may deserve further analysis. The case 0 < μ < ∞ is already very rich in the sense that it combines simultaneously both the periodic and stochastic averaging principles in a new way that cannot be recovered by sequential applications of those principles. A particular role is played by the frozen periodically-forced SDE (6). The equivalent of the quasi-stationary measure ρ w of Theorem 2.1 is given by the asymptotically periodic behavior of equation (6), represented by the periodic family of measures ν w μ (t, dv). 3. By a rescaling of the frozen process (6), one deduces the following scaling relationships: Therefore, if one knows, in the case μ = 1, the averaged vector field associated with the fast process generated by a drift F and a diffusion coefficient σ , denoted G 1 [F, ], it is possible to deduceḠ μ in the general case μ ∈ (0, ∞) with a change F → μF and → √ μ . 4. It seems reasonable to expect that the above result is still valid when considering ergodic, but not necessarily periodic, time dependency of the function F (v, w, ·). In equation (7), instead of integrating ν w μ (t, dv) over one period, one should integrate it with respect to an ergodic stationary measure. However, this extension requires non-trivial technical improvements of [5] which are beyond the scope of this paper. Case of a fast linear SDE with periodic input We present here an elementary case where one can compute explicitly the quasistationary time-periodic family of measures ν w μ (t, x), when the equation for the fast variable is linear. Namely, we consider v ∈ R p the solution of with initial condition v(0) = v 0 ∈ R p , and where A ∈ R p×p is a matrix whose eigenvalues have positive real parts and u(·) is a τ -periodic function. We are interested in the large time behavior of the law of v(t), which is a timeinhomogeneous Ornstein-Uhlenbeck process. From [5] we know that its law converges to a τ -periodic family of probability measures ν(t, dv). Due to the linearity in the previous equation, ν(t, dv) is Gaussian with a time-dependent mean and a constant covariance matrix and Q is the unique solution of the Lyapunov equation Indeed, if one denotes c(t) = v(t) −v(t), then c(t) is a solution of the classical homogeneous Ornstein-Uhlenbeck equation whose stationary distribution is known to be a centered Gaussian measure with the covariance matrix Q solution of (9); see Chapter 3.2 of [24]. Notice that if A is selfadjoint with respect to ( · ) −1 (i.e., A · ( · ) = ( · ) · A ), then the solution , which will be used in Appendix B.2. 
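The constant covariance Q of the fluctuation c(t) = v(t) − v̄(t) can be computed directly from the Lyapunov equation (9), which for an Ornstein-Uhlenbeck process with drift −A and diffusion coefficient Σ reads A Q + Q Aᵀ = Σ Σᵀ. A short sketch with illustrative matrices (not taken from the paper); when A is symmetric and Σ = σ·Id, it recovers the closed form Q = (σ²/2) A⁻¹ used later in Section 3.2:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

sigma = 0.5
A = np.array([[2.0, -0.3],
              [-0.3, 1.5]])          # symmetric, positive definite (illustrative)
Sigma = sigma * np.eye(2)

# Solves A Q + Q A^T = Sigma Sigma^T.
Q = solve_continuous_lyapunov(A, Sigma @ Sigma.T)
print("Q from Lyapunov solver:\n", Q)
print("(sigma^2/2) * inv(A):\n", sigma**2 / 2 * np.linalg.inv(A))
```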
Hence, in the linear case, the averaged vector field of equation (7) becomes where N x,Q is the probability density function of the Gaussian law with mean x ∈ R q and covariance Q ∈ R p×p . Therefore, due to the linearity of the fast SDE, the periodic system of measure ν is just a constant Gaussian distribution shifted by a periodic function of time v(t). In case G is quadratic in v, this remark implies that one can perform independently the integral over time and over R p in formula (10) (noting that the crossed term has a zero average). In this case, contributions from the periodic input and from noise appear in the averaged vector field in an additive way. Example 2.3 In this last example, we consider a combination between Example 2.1 and Example 2.2, namely we consider the following system of periodically forced SDEs: As in Example 2.1 and as shown above, the behavior of this system when both 1 and 2 are small depends on the parameter μ defined in (5). More precisely, we have the following three regimes: • Regime 2: fast input:Ḡ • Regime 3: time-scale matching: Truncation and asymptotic well-posedness In some cases, Assumptions 2.1-2.2 may not be satisfied on the entire phase space R p × R q , but only on a subset. Such situations will appear in Section 3 when considering learning models. We introduce here a more refined set of assumptions ensuring that Theorem 2.2 still applies. Let us start with an example, namely the following bi-dimensional system with white noise input: For the fast drift −(l − w)v to be non-explosive, it is necessary to have w < l − α with α > 0 for all time. The concern about this system comes from the fact that the slow variable w may reach l due to the fluctuations captured in the term v 2 , for instance, if κ is not large enough. Such a system may have exponentially growing trajectories. However, we claim that for small enough , w will remain close to its averaged limit w for a very long time, and if this limit remains below l − α, then w can be considered as well-posed in the asymptotic limit → 0. To make this argument more rigorous, we suggest the following definition. We give in the following proposition sufficient conditions for system (4) to be asymptotically well posed in probability and to satisfy conclusions of Theorem 2.2. Let us introduce the following set of additional assumptions. Assumption 2.3 Moment conditions: (i) There exists p > 2 such that (ii) For any T > 0 and any bounded subset K of R q , Remark 2.2 This last set of assumptions will be satisfied in all the applications of Section 3 since we consider linear models with additive noise for the equation of v, implying this variable to be Gaussian and the function G only involves quadratic moments of v; therefore, the moment conditions (i) and (ii) will be satisfied without any difficulty. Moreover, if one considers non-linear models for the variable v, then the Gaussian property may be lost; however, adding sigmoidal non-linearity has, in general, the effect of bounding the dynamics, thus making these moment assumptions reasonable to check in most models of interest. Then for any initial condition w 0 ∈ E, system (4) is asymptotically well posed in probability and w satisfies the conclusion of Theorem 2.2. Proof See Appendix A.2. Here, we show that it applies to system (11). First, with E α = {w ∈ R, w < l − α}, for some α ∈ ]0, l[, it is possible to show that Assumptions 2.1-2.2 are satisfied on R p × E α . 
Then, as a special case of (10), we obtain the following averaged system: It remains to check that the solution of this system satisfies that is, the subset E α is invariant under the flow ofḠ. This property is satisfied as soon as Indeed, one can show thatḠ(w) = 0 admits two solutions iff η < 1, Averaging learning neural networks In this section, we apply the temporal averaging methods derived in Section 2 on models of unsupervised learning neural networks. First, we design a generic learning model and show that one can define formally an averaged system with equation (7). However, going beyond the mere definition of the averaged system seems very difficult and we only manage to get explicit results for simple systems where the fast activity dynamics is linear. In the three last subsections, we push the analysis for three examples of increasing complexity. In the following, we always consider that the initial connectivity is 0. This is an arbitrary choice but without consequences, because we focus on the regime where there is a single globally stable equilibrium point (see Section 3.2.3). A generic learning neural network We now introduce a large class of stochastic neuronal networks with learning models. They are defined as coupled systems describing the simultaneous evolution of the activity of n ∈ N neurons and the connectivity between them. We define v ∈ R n , the activity field of the network, and W ∈ R n×n , the connectivity matrix. Each neuron variable v i is assumed to follow the SDE where the function f i characterizes the intrinsic non-linear dynamical behavior of neuron i and u i is the input received by neuron i. The stochastic term · dB i (t) is added to account for internal sources of noise. In terms of notations, (B(t)) t≥0 is a standard n-dimensional Brownian motion, is an n × n matrix, possibly function of v or other variables, and · dB i (t) denotes the ith component of the vector · dB(t). The input u i to neuron i has mainly two components: the external input u ext i and the input coming from other neurons in the network u syn i . The latter is a priori a complex combination of post-synaptic potentials coming from many other neurons. The coefficient W ij of the connectivity matrix accounts for the strength of a synapse j → i. Note that neurons can be connected to themselves, i.e., W ii is not necessarily null. Thus, we can write where S : R → R and H is a function taking the history of v i and v j and returning a real for each time t (to take convolutions into account). In practical cases, they are often taken to be sigmoidal functions. We abusively redefine S and H as vector valued operators corresponding to the element-wise application of their real counterparts. We also define the function F : Together with a slow generic learning rule, this leads to defining a stochastic learning model as the following system of SDEs. Before applying the general theory of Section 2, let us make several comments about this generic model of neural network with learning. This model is a nonautonomous, stochastic, non-linear slow-fast system. In order to apply Theorem 2.2, one needs Assumptions 2.1, 2.2, and 2.3 to be satisfied, restricting the space of possible functions S, H, F, , and G. In particular, Assumption 2.2(ii) seems rather restrictive since it excludes systems with multiple equilibria and suggests that the general theory is more suited to deal with rate-based networks. 
However, one should keep in mind that these assumptions are only sufficient, and that the double averaging principle may work as well in systems which do not satisfy readily those assumptions. As we will show in Section 3.3, a particular form of history-dependence can be taken into account, to a certain extent. Indeed, for instance, if the function F is actually a functional of the past trajectory of variable v which can be expressed as the solution of an additional SDE, then it may be possible to include a certain form of history-dependence. However, purely time-delayed systems do not enter the scope of this theory, although it might be possible to derive an analogous averaging method in this framework. The noise term can be purely additive or set by a particular function (v, W) as long as it satisfies Assumption 2.2(i), meaning that it must be uniformly nondegenerate. In the following subsection, we apply the averaging theory to various combinations of neuronal network models, embodied by choices of functions S, H, F, , and various learning rules, embodied by a choice of the function G. We will also analyze the obtained averaged system, describing the slow dynamics of the connectivity matrix in the limit of perfect time-scale separation and, in particular, study the convergence of this averaged system to an equilibrium point. Symmetric Hebbian learning One of the simplest, yet non-trivial, stochastic learning models is obtained when considering • A linear model for neuronal activity, namely f i (v i ) = −lv i with l a positive constant. • A linear model for the synaptic transmission, namely Actually, it corresponds to the tensor product: This model can be written as follows: where neurons are assumed to have the same decay constant: L = lI d ; u is a periodic continuous input (it replaces u ext in the previous section); σ, 1 , 2 , κ ∈ R + with 1 , 2 1 and B(t) is n-dimensional Brownian noise. The first question that arises is about the well-posedness of the system: What is the definition interval of the solutions of system (12)? Do they explode in finite time? At first sight, it seems there may be a runaway of the solution if the largest real part among the eigenvalues of W grows bigger than l. In fact, it turns out this scenario can be avoided if the following assumption linking the parameters of the system is satisfied. It corresponds to making sure the external (i.e., u m ) or internal (i.e., σ ) excitations are not too large compared to the decay mechanism (represented by κ and l). Note that if p ∈ ]0, 1[, u m and d are fixed, it is sufficient to increase κ or l for this assumption to be satisfied. Under this assumption, the space is invariant by the flow of the averaged systemḠ, where W ≥ 0 means W is semidefinite positive and W < pL means pL − W is definite positive. Therefore, the averaged system is defined and bounded on R + . The slow/fast system being asymptotically close to the averaged system, it is therefore asymptotically well-defined in probability. This is summarized in the following theorem. (12) is asymptotically well posed in probability and the connectivity matrix W , the solution of system (12), converges to W in the sense that for all δ, T > 0, In the following, we focus on the averaged system described by (13). Its right-hand side is made of three terms: a linear and homogeneous decay, a correlation term, and a noise term. The last two terms are made explicit in the following. 
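The comparison between the stochastic system and its averaged version (illustrated in Figure 2) can be sketched numerically. Since the displayed system (12) is not reproduced above, the script below uses the natural reading of its description: linear activity with decay l, linear synaptic transmission, additive noise of strength σ, a periodic input, the Hebbian rule Ẇ = −κW + v vᵀ on the slow time-scale, and μ = 1 (a single ε); the parameter values follow the Figure 2 caption. This is a hedged sketch of that reading, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed reading of system (12):
#   dv = (1/eps) [ (W - l*Id) v + u(t/eps) ] dt + (sigma/sqrt(eps)) dB,
#   dW = ( -kappa*W + v v^T ) dt.
n, l, kappa, sigma, eps = 3, 12.0, 100.0, 0.05, 1e-3
T, dt = 2.0, 1e-5
steps = int(T / dt)

U = rng.standard_normal((n, 2))          # frozen random spatial structure
def u(t):                                # sinusoidal temporal evolution
    return U @ np.array([np.sin(t), np.cos(t)])

v = np.zeros(n)
W = np.zeros((n, n))                     # initial connectivity is 0, as above
for k in range(steps):
    t = k * dt
    dB = np.sqrt(dt) * rng.standard_normal(n)
    v = v + (dt / eps) * ((W - l * np.eye(n)) @ v + u(t / eps)) \
          + (sigma / np.sqrt(eps)) * dB
    W = W + dt * (-kappa * W + np.outer(v, v))

print("connectivity after learning:\n", W)
```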
Noise term As seen in Section 2, in the linear case, the noise term Q is the unique solution of the Lyapunov equation (9) with A = W − L and = σ I d. Because the noise is spatially uncorrelated and identical for each neuron and also because the connectivity is symmetric, observe that Q = σ 2 2 (L − W) −1 is the unique solution of the system. In more complicated cases, the computation of this term appears to be much more difficult as we will see in Section 3.4. Correlation term This term corresponds to the auto-correlation of neuronal activity. It is only implicitly defined; thus, this section is devoted to finding an explicit form depending only on the parameters l, μ, τ , the connectivity W, and the inputs u. Actually, one can perform an expansion of this term with respect to a small parameter corresponding to a weakly connected expansion. Most terms vanish if the connectivity W is small compared to the strength of the intrinsic decaying dynamics of neurons l. The auto-correlation term of a τ μ -periodic function can be rewritten as With this notation, it is simple to think of v as a 'semi-continuous matrix' of R n×[0, τ μ [ . Hence, the operator '·' can be though of as a matrix multiplication. Similarly, the transpose operator turns a matrixv It is common knowledge, see [17] for instance, that this term gathers information about the correlation of the inputs. Indeed, if we assume that the input is sufficiently slow, thenv has enough time to converge to u(t) for all t ∈ [0, +∞[. Therefore, in the first orderv(t) (W − L) −1 · u(t). This leads tov ·v (W − L) −1 · u · u · (W − L) −1 . In the weakly connected regime, one can assume that W − L −L leading tō v ·v 1 l 2 u · u which is the auto-correlation of the inputs. Actually, without the assumption of a slow input, lagged correlations of the input appear in the averaged system. Before giving the expression of these temporal correlations, we need to introduce some notations. First, define the convolution filter where H is the Heaviside function. This family of functions is displayed for different values of l μ in Figure 4(a). Note that g l/μ → δ 0 when l μ → +∞, where δ 0 is the Dirac distribution centered at the origin. In this asymptotic regime, the convolution filter and its iterates g l/μ * · · · * g l/μ are equal to the identity. We also define the filtered correlation of the inputs C k,p ∈ R n×n by l/μ = g l/μ * · · · * g l/μ is the kth convolution of g l/μ with itself and u m = sup t∈R + u(t) 2 . This is the correlation matrix of the inputs filtered by two different Fig. 1 This shows the (k, q)-temporal profiles with l μ = 1, i.e., the functions g (k+1) 1 * g 1 (q+1) for q = 0 and k ranging from 0 to 6. For k = q = 0, the temporal profile is even and this also occurs to be true for any k = q. When k > q, the function reaches its maximum for strictly positive values that grow with the difference k − q. Besides, the temporal profiles are flattened when k + q increases. functions. It is easy to show that this is similar to computing the cross-correlation of the inputs with the inputs filtered by another function, which motivates the definition of the (k, p)-temporal profile g l/μ (−t). This notation is deliberately similar to that of the transpose operator we use in the proofs. These functions are shown in Figure 1. We have not found a way to make them explicit; therefore, the following remarks are simply based on numerical illustrations. When k = q, the temporal profiles are centered. 
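These numerical observations about the temporal profiles can be reproduced with a short script. The explicit form of the filter is not reproduced in the text, so, following the analogous definitions used later (g_γ(t) = γ e^{−γt} H(t)), we assume g_a(t) = a e^{−at} H(t) with a = l/μ; the exact normalization of C^{k,q} (involving u_m and the period) is likewise not reproduced, so the sketch simply computes an un-normalized time-average of the filtered inputs over one period.

```python
import numpy as np

a = 1.0                       # l/mu (assumed)
tau = 2 * np.pi               # input period
dt = 0.01
t = np.arange(0, 20 * tau, dt)

g = a * np.exp(-a * t)        # assumed filter g_{l/mu}(t) for t >= 0

def iterate_filter(g, k, dt):
    """k-fold convolution g * g * ... * g (k factors), truncated to the grid."""
    out = g.copy()
    for _ in range(k - 1):
        out = np.convolve(out, g)[: len(g)] * dt
    return out

u = np.vstack([np.sin(2 * np.pi * t / tau),           # toy 2-d periodic input
               np.sin(2 * np.pi * t / tau + 0.7)])

def filtered_corr(k, q):
    """Time-average over one period of (u*g^(k))(t) (u*g^(q))(t)^T (un-normalized)."""
    gk, gq = iterate_filter(g, k, dt), iterate_filter(g, q, dt)
    uk = np.array([np.convolve(ui, gk)[: len(t)] * dt for ui in u])
    uq = np.array([np.convolve(ui, gq)[: len(t)] * dt for ui in u])
    sel = t >= 19 * tau                                # last period, post-transient
    return (uk[:, sel] @ uq[:, sel].T) * dt / tau

print("C(1,1) ~\n", filtered_corr(1, 1))
print("C(2,1) ~\n", filtered_corr(2, 1))
```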
The larger the difference k − q, the larger the center of the bell. The larger the sum k + q, the larger the standard deviation. This motivates the idea that C k,p can be thought of as the k − q lagged correlation of the inputs. One can also say that C 10,10 is more blurred than C 0,0 in the sense that the inputs are temporally integrated over a 'wider' window in the first case. Observe that g Thanks to Young's inequality for convolutions, which says that u * g l/μ 1 , it can be proved that C k,q 2 ≤ 1. We intend to express the correlation term as an infinite converging sum involving these filtered correlations. In this perspective, we use a result we have proved in [25] to write the solution of a general class of non-autonomous linear systems (e.g., dv dt = (W − L) ·v + u(t)) as an infinite sum, in the case μ = 1. Lemma 3.2 Ifv is the solution, with zero as initial condition, of dv dt = (W − L) ·v + u(t) it can be written by the sum below which converges if W is in This is a decomposition of the solution of a linear differential system on the basis of operators where the spatial and temporal parts are decoupled. This important step in a detailed study of the averaged equation cannot be achieved easily in models with non-linear activity. Everything is now set up to introduce the explicit expansion of the correlation we are using in what follows. Indeed, we use the previous result to rewrite the correlation term as follows. Property 3.3 The correlation term can be written This infinite sum of convolved filters is reminiscent of a property of Hawkes processes that have a linear input-output gain [26]. The speed of inputs characterized by μ only appears in the temporal profiles g (k) l/μ * g l/μ (q) . In particular, if the inputs are much slower than neuronal activity time-scale, i.e., μ = 0, then g +∞ = δ 0 and u * g +∞ = u. Therefore, C k,q = C 0,0 and the sums in the formula of Property 3.3 are separable, leading tov ·v = (L − W) −1 · u · u · (L − W ) −1 , which corresponds to the heuristic result previously explained. Therefore, the averaged equation can be explicitly rewritten In Figure 2, we illustrate this result by comparing, for different = 1 = 2 (i.e., we choose μ = 1 in this example), the stochastic system and its averaged version. The above decomposition has been used as the basis for numerical computation of trajectories of the averaged system. Global stability of the equilibrium point Now that we have found an explicit formulation for the averaged system, it is natural to study its dynamics. Actually, we prove in the following that if the connectivity W Fig. 2 The first two figures, (a) and (b), represent the evolution of the connectivity for original stochastic system (12), superimposed with averaged system (13), for two different values of : respectively = 0.01 and = 0.001, where we have chosen = 1 = 2 . Each color corresponds to the weight of an edge in a network made of n = 3 neurons. As expected, it seems that the smaller , the better the approximation. This can be seen in the picture (c) where we have plotted the precision on the y-axis and on the x-axis. The parameters used here are l = 12, μ = 1, κ = 100, σ = 0.05. The inputs have a random (but frozen) spatial structure and evolve according to a sinusoidal function. is kept smaller than l 3 , i.e., Assumption 3.1 is verified with p ≤ 1 3 , then the dynamics is trivial: the system converges to a single equilibrium point. 
Indeed, under the previous assumption, the system can be writtenḠ(W) = −κW + F (W), where F is a contraction operator on E 1 3 . Therefore, one can prove the uniqueness of the fixed point with the Banach fixed point argument and exhibit an energy function for the system. The fact that the equilibrium point is unique means that the 'knowledge' of the network about its environment (corresponding by hypothesis to the connectivity) eventually is unique. For a given input and any initial condition, the network can only converge to the same 'knowledge' or 'understanding' of this input. Explicit expansion of the equilibrium point When the network is weakly connected, the high-order terms in expansion (15) may be neglected. In this section, we follow this idea and find an explicit expansion for the equilibrium connectivity where the strength of the connectivity is the small parameter enabling the expansion. The weaker the connectivity, the more terms can be neglected in the expansion. In fact, it is not natural to speak about a weakly connected learning network since the connectivity is a variable. However, we are able to identify a weak connectivity index which controls the strength of the connectivity. We say the connectivity is weak when it is negligible compared to the intrinsic leak term, i.e., |W | l is small. We show in the Appendix that this weak connectivity index depends only on the parameters of the network and can be writtenp In the asymptotic regimep → 0, we have W pl = O(1). This index is the 'small' parameter needed to perform the expansion. We also define λ = σ 2 l 2u 2 m , which has information about the wayp is converging to zero. In fact, it is the ratio of the two terms ofp. With these, we can prove that the equilibrium connectivity W * has the following asymptotic expansion inp. Proof See Theorem B.5 in Appendix B.2. At the first order, the final connectivity is C 0,0 , the filtered correlation of the inputs convolved with a bell-shaped centered temporal profile. In the case of Figure 3, this is quite a good approximation of the final connectivity. Not only the spatial correlation is encoded in the weights, but there is also some information about the temporal correlation, i.e., two successive but orthogonal events occurring in the inputs will be wired in the connectivity although they do not appear in the spatial correlations; see Figure 3 for an example. Trace learning: band-pass filter effect In this section, we study an improvement of the learning model by adding a certain form of history dependence in the system and explain the way it changes the results of the previous section. Given that Theorem 2.2 only applies to an instantaneous process, we will only be able to treat the history-dependent systems which can be reformulated as instantaneous processes. Actually, this class of systems contains models which are biologically more relevant than the previous model and which will exhibit interesting additional functional behaviors. In particular, this covers the following features: • Trace learning. It is likely that a biological learning rule will integrate the activity over a short time. As Földiàk suggested in [27], it makes sense to consider the learning equation as being where * is the convolution and g 1 : t ∈ R → β 1 e −β 1 t H (t). Rolls and Deco numerically show [15] that the temporal convolution, leading to a spatio-temporal learning, makes it possible to perform invariant object recognition. 
Besides, trace learning appears to be the symmetric part of the biological STDP rule that we detail in Section 3.4. • Damped oscillatory neurons. Many neurons have an oscillatory behavior. Although we cannot take this into account in a linear model, we can model a neuron by a damped oscillator, which also introduces a new important time-scale in the system. Adding adaptation to neuronal dynamics is an elementary way to implement this idea. This corresponds to modeling a single neuron without inputs by the equivalent formulations where g 2 (t) = β 2 e −β 2 t H (t). • Dynamic synapses. The electro-chemical process of synaptic communication is very complicated and non-linear. Yet, one of the features of synaptic communication we can take into account in a linear model is the shape of the post-synaptic potentials. In this section, we consider that each synapse is a linear filter whose finite impulse response (i.e., the post-synaptic potential) has the shape g 3 (t) = β 3 e −β 3 t H (t). This is a common assumption which, for instance, is at the basis of traditional rate based models; see Chapter 11 of [7]. For mathematical tractability, we assume in the following that β = β 1 = β 2 = β 3 ∈ R + such that g β = g 1 = g 2 = g 3 , i.e., the time-scales of the neurons, those of the synapses and those of the learning windows are the same. Actually, there is a large variety of temporal scales of neurons, synapses, and learning windows, which makes this assumption not absurd. Besides, in many brain areas, examples of these time constants are in the same range ( 10 ms). Yet, investigating the impact of breaking this assumption would be necessary to model better biological networks. This leads to the following system: where the notations are the same as in Section 3.2. The behavior of a single neuron will be oscillatory damped if = 1 − 4 l β is a pure imaginary number, i.e., 4l > β. This is the regime on which we focus. Actually, the Hebbian linear case of Section 3.2 corresponds to β = +∞ in this delayed system. To comply with the hypotheses of Theorem 2.2 (i.e., no dependence of the history of the process), we can add a variable z to the system which takes care of integrating the variable v over an exponential window. It leads to the equivalent system (in the limit This trick makes it possible to deal with some history-based processes where the dependence on the past is exponential. It turns out most of the results of Section 3.2 remain true for system (16) as detailed in the following. The existence of the solution on R + is proved in Theorem B.6. The computations show that in the averaged system, the noise term remains identical, whereas the correlation term is to be replaced by μ τ (v * g β ) · (v * g β ) . Besides, Lemma 3.2 can be extended to our delayed system by changing only the temporal filters; see Lemma B.7. Together with Lemma C.3, this proves the result of Theorem B.8. where Observe that applying Young's inequality to convolutions leads to C k,q where J n (z) is the Bessel function of the first kind. The value of the L1 norm of v is computed in Appendix C.3. It leads to v 1 = coth( π 2 ) if is a pure imaginary number and v 1 = 1 else. Therefore, the averaged system can be rewritten As before, the existence and uniqueness of a globally attractive equilibrium point is guaranteed if Assumption 3.1 is verified for p ≤ ; see Theorem B.9. Besides, the weakly connected expansion of the equilibrium point we did in Section 3.2.4 can be derived in this case (see Theorem B.10). 
At the first order, this leads to the equilibrium connectivity The second order is given in Theorem B.10. The main difference with the Hebbian linear case is the shape of the temporal filters. Actually, the temporal filters have an oscillatory damped behavior if is purely imaginary. These two cases are compared in Figure 4. These oscillatory damped filters have the effect of amplifying a particular frequency of the input signal. As shown in Figure 5, if is a pure imaginary number, then D 0,0 is the cross-correlation of the band-pass filtered inputs with themselves. Fig. 5 This is the spectral profile | v * v |(ξ ) for β = 1 and l ∈ ]0, 2], where v * v denotes the Fourier transform of v * v . When 4l < β, the filter reaches its maximum for the null frequency, but if l increases beyond β 4 , the filter becomes a band-pass filter with long tails in 1 ξ 2 . This band-pass filter effect can also be observed in the higher-order terms of the weakly connected expansion. This suggests that the biophysical oscillatory behavior of neurons and synapses leads to selecting the corresponding frequency of the inputs and performing the same computation as for the Hebbian linear case of the previous section: computing the correlation of the (filtered) inputs. Asymmetric 'STDP' learning with correlated noise Here, we extend the results to temporally asymmetric learning rules and spatially correlated noise. We consider a learning rule that is similar to the spike-timing-dependent plasticity (STDP) which is closer to biological experiments than the previous Hebbian rules. It has been observed that the strength of the connection between two neurons depends mainly on the difference between the time of the spikes emitted by each neuron as shown in Figure 6; see [12]. Assuming that the decay time of the positive and negative parts of Figure 6 are equal, we approximate this function by t → a + g γ (−t) − a − g γ (t), where g γ (t) = γ e −γ t H (t). Actually, this corresponds toẆ ij = −κW ij + a + v i (v j * g γ ) − a − (v i * g γ )v j . If the neuron has a spiking behavior, then the term a + v i (t)(v j * g γ )(t) is significant when the post-synaptic neuron i is spiking at time t, and then it counts the number of previous spikes from the pre-synaptic neuron j that might have caused the post-synaptic spike. This calculus is weighted by an exponentially decaying function. This accounts for the left part of Figure 6. The last term −a − (v i * g γ )v j takes the opposite perspective. It is significant when the pre-synaptic neuron j is spiking and counts the number of previous spikes from the post-synaptic neuron i that are not likely to have been caused by the pre-synaptic neuron. The computation is also weighted by the mirrored function of an exponentially decaying function. This accounts for the right part of Figure 6. This leads to the coupled system where the non-linear intrinsic dynamics of the neurons is represented by f . Indeed, is negligible when the neuron is quiet and maximal at the top of the spikes emitted by neuron i. Therefore, it records the value of the pre-synaptic membrane potential weighted by the function g γ when the post-synaptic neuron spikes. This accounts for the positive part of Figure 6. Similarly, the negative part corresponds to −a − (v * g γ ) ⊗ v . Actually, this formulation is valid for any non-linear activity with correlated noise. 
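Although the spiking case is not treated here, the rule itself is straightforward to discretize. The sketch below is ours (the sinusoidal toy activity and the explicit Euler step are assumptions, not the model of this section): the convolution with g_gamma is realized as an online exponential trace, and the update dW/dt = -kappa*W + a+ v (v * g_gamma)' - a- (v * g_gamma) v' is applied at each step. With a+ = a- the learned matrix is expected to be essentially antisymmetric.

```python
import numpy as np

def stdp_step(W, v, trace, dt, kappa, gamma, a_plus, a_minus):
    # online exponential trace implementing the convolution v * g_gamma
    trace = trace + dt * gamma * (v - trace)
    # asymmetric Hebbian update: post x filtered(pre) minus filtered(post) x pre
    dW = -kappa * W + a_plus * np.outer(v, trace) - a_minus * np.outer(trace, v)
    return W + dt * dW, trace

# toy run: neuron 1 lags neuron 0 by 50 ms, so the potentiation is directional
n, dt, T = 2, 0.001, 20.0
W, trace = np.zeros((n, n)), np.zeros(n)
for step in range(int(T / dt)):
    t = step * dt
    v = np.array([np.sin(2 * np.pi * t), np.sin(2 * np.pi * (t - 0.05))])
    W, trace = stdp_step(W, v, trace, dt, kappa=1.0, gamma=3.0,
                         a_plus=1.0, a_minus=1.0)
print(W)   # with a_plus = a_minus the learned matrix is (close to) antisymmetric
```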
However, studying the role of STDP in spiking networks is beyond the scope of this paper since we are only able to have explicit results for models with linear activity. Therefore, we will assume that the activity is linear while keeping the learning rule as it was derived in the spiking case, i.e., we assume f (v) = −lv = −L · v in the system above. We also use the trick of adding additional variables to get rid of the historydependency. This reads In this framework, the method exposed in Section 3.2 holds with small changes. First, the well-posedness assumption becomes Assumption 3.2 There exists p ∈ ]0, 1[ such that where s 2 is the maximal eigenvalue of · . Under this assumption, the system is asymptotically well posed in probability (Theorem B.11). And we show the averaged system is where we have used Theorem B.12 to expand the correlation term. The noise term Q is equal to Q 11 · (L + γ − W ) −1 , where Q 11 is the unique solution of the Lyapunov equation (W − L) · Q 11 + Q 11 · (W − L) + · = 0. Lemma D.1 gives a solution for this equation which leads to Q = γ +∞ k=0 W k · · · (2L − W ) −(k+1) · (L + γ − W ) −1 . In equation (18), the correlation matrices D k,q are given by According to Theorem B.13, the system is also globally asymptotically convergent to a single equilibrium, which we study in the following. We perform a weakly connected expansion of the equilibrium connectivity of system (18). As shown in Theorem B.14, the first order of the expansion is According to Lemma C.1, the symmetric part is very similar to the trace learning case in Section 3.3. Applying Lemma C.2 leads to S = (a + − a − )(u * g l/μ * g γ ) · (u * g l/μ * g γ ) , Therefore, the STDP learning rule simply adds an antisymmetric part to the final connectivity keeping the symmetric part as the Hebbian case. Besides, the antisymmetric part corresponds to computing the cross-correlation of the inputs with its derivative. For high-order terms, this remains true although the temporal profiles are different from the first order. These results are in line with previous works underlying the similarity between STDP learning and differential Hebbian learning, where G(v) ∼v ⊗ v; see [29]. Figure 7 shows an example of purely antisymmetric STDP learning, i.e., a + = a − . The final connectivity matrix is therefore antisymmetric as shown in Figure 7(b) and the noise has no impact on learning. It shows the network finally approximates the connectivity given in (19). Discussion We have applied temporal averaging methods on slow/fast systems modeling the learning mechanisms occurring in linear stochastic neural networks. When we make The colors correspond to those of (b). The connectivity of system (17) corresponds to the plain thin oscillatory curves. The connectivity of the averaged system (18) (with k, q < 4) corresponds to the plain thick lines. Note that each curve corresponds to the superposition of three connections which remain equal through learning. The dashed curves correspond to the antisymmetric part in (19). The parameters chosen for this simulation were l = 10, κ = 100, γ = 3, a + = a − = 1, τ = 3, σ = 0.001, μ = 1, = 0.001. The system was simulated on the fast time-scale during T = 10,000 time steps of size dt = 0.01. sure the connectivity remains small, the dynamics of the averaged system appears to be simple: the connectivity always converges to a unique equilibrium point. 
Then, we performed a weakly connected expansion of this final connectivity whose terms are combinations of the noise covariance and the lagged correlations of the inputs: the first-order term is simply the sum of the noise covariance and the correlation of the inputs. • As opposed to the former input/ouput vision of the neurons, we have considered the membrane potential v to be the solution of a dynamical system. The consequence of this modeling choice is that not only the spatial correlations, but also the temporal correlations are learned. Due to the fact we take the transients into account, the activity never converges but it lives between the representation of the inputs. Therefore, it links concepts together. The parameter μ is the ratio of the time-scales between the inputs and the activity variable. If μ = 0, the inputs are infinitely slow and the activity variable has enough time to converge towards its equilibrium point. When μ grows, the dynamics becomes more and more transient, it has no time to converge. Therefore, if the inputs are extremely slow, the network only learns the spatial correlation of the inputs. If the inputs are fast, it also learns the temporal correlations. This is illustrated in Figure 3. This suggests that learning associations between concepts, for instance, learning words in two different languages, may be optimized by presenting two words to be associated circularly with a certain frequency. Indeed, increasing the frequency (with a fixed duration of exposition to each word) amounts to increasing μ. Therefore, the network learns better the temporal correlations of the inputs and thus strengthens the link between these two concepts. • According to the model of resonator neuron [30], Section 3.3 suggests that neurons and synapses with a preferred frequency of oscillation will preferably extract the correlation of the inputs filtered by a band pass filter centered on the intrinsic frequency of the neurons. Actually, it has been observed that the auditory cortex is tonotopically organized, i.e., the neurons are arranged by frequency [31]. It is traditionally thought that this is achieved thanks to a particular connectivity between the neurons. We exhibit here another mechanism to select this frequency which is solely based on the parameters of the neurons: a network with a lot of different neurons whose intrinsic frequencies are uniformly spread is likely to perform a Fourier-like operation: decomposing the signal by frequency. In particular, this emphasizes the fact that the network does not treat space and time similarly. Roughly speaking, associating several pictures and associating several sounds are therefore two different tasks which involve different mechanisms. • In this paper, the original hierarchy of the network has been neglected: the network is made of neurons which receive external inputs. A natural way to include a hierarchical structure (with layers for instance), without changing the setup of the paper, is therefore to remove the external input to some neurons. However, according to Theorem 3.5 (and its extensions Theorems B.10 and B.14), one can see that these neurons will be disconnected from the others at the first order (if the noise is spatially uncorrelated). Linear activities imply that the high level neurons disconnect from others, which is a problem. In fact, one can observe that the second-order term in Theorem 3.5 is not null if the noise matrix is not diagonal. 
In fact, this is the noise between neurons which will recruit the high level neurons to build connections from and to them. It is likely that a significant part of noise in the brain is locally induced, e.g., local perturbations due to blood vessels or local chemical signals. In a way, the neurons close to each other share their noise and it seems reasonable to choose the matrix so that it reflects the biological proximity between neurons. In fact, specifies the original structure of the network and makes it possible for close-by neurons to recruit each other. Another idea to address hierarchy in networks would be to replace the synaptic decay term −κW by another homeostatic term [32] which would enforce the emergence of a strong hierarchical structure. • It is also interesting to observe that most of the noise contribution to the equilibrium connectivity for STDP learning (see Theorem B.14) vanishes if the learning is purely skew-symmetric, i.e., a + = a − . In fact, it is only the symmetric part of learning, i.e., the Hebbian mechanism, that writes the noise in the connectivity. • We have shown that there is a natural analogous STDP learning for spiking neurons in our case of linear neurons. This asymmetric rule converges to a final connectivity which can be decomposed into symmetric and skew-symmetric parts. The first one is similar to the symmetric Hebbian learning case, emphasizing that the STDP is nothing more than an asymmetric Hebbian-like learning rule. The skew-symmetric part of the final connectivity is the cross-correlation between the inputs and their derivatives. This has an interesting signification when looking at the spontaneous activity of the network post-learning. In fact, if we assume that the inputs are generated by an autonomous system du dt = ζ(u), then according to the bottom equation in formula (19), the spontaneous activity is governed by In a way, the noise terms generate random patterns which tend to be forgotten by the network due to the leak term −lv. The only drift is due to ζ(u) · u · v E v,u (ζ (u)) which is the expectation of the vector field defining the dynamics of inputs with a measure being the scalar product between the activity variable and the inputs. In other words, if the activity is close to the inputs at a given time t * ∈ R + , i.e., v, u(t * ) is large, then the activity will evolve in the same direction as what this input would have done. The network has modeled the temporal structure of the inputs. The spontaneous activity predicts and replays the inputs the network has learned. There are still numerous challenges to carry on in this direction. First, it seems natural to look for an application of these mathematical methods to more realistic models. The two main limitations of the class of models we study in Section 3 are (i) the activity variable is governed by a linear equation and (ii) all the neurons are assumed to be identical. The mathematical analysis in this paper was made possible by the assumption that the neural network has a linear dynamics, which does not reflect the intrinsic non-linear behavior of the neurons. However, the cornerstone of the application of temporal averaging methods to a learning neural network, namely Property 3.3, is similar to the behavior of Poisson processes [26] which has useful applications for learning neural networks [19,20]. This suggests that the dynamics studied in this paper might be quite similar to some non-linear network models. 
Studying more rigorously the extension of the present theory to non-linear and heterogeneous models is the next step toward a better modeling of biologically plausible neural networks. Second, we have shown that the equilibrium connectivity was made of a symmetric and antisymmetric term. In terms of statistical analysis of data sets, the symmetric part corresponds to classical correlation matrices. However, the antisymmetric part suggests a way to improve the purely correlation-based approach used in many statistical analyses (e.g., PCA) toward a causality-oriented framework which might be better suited to deal with dynamical data. A.1 Long-time behavior of inhomogeneous Markov processes In order to construct the averaged vector fieldḠ μ (w) in the time-scale matching case (0 < μ < ∞), one needs to understand properly the long-time behavior of the rescaled inhomogeneous frozen process Under regularity and dissipativity conditions, [5] proves the following general result about the asymptotic behavior of the solution of where t → b(x, t) and t → σ (x, t) are τ -periodic. The first point of the following theorem gives the definition of evolution systems of measures, which generalizes the notion of invariant measures in the case of inhomogeneous Markov processes. The exponential estimate of 2. in the following theorem is a key point to prove the averaging principle of Theorem 2.2. such that for all functions φ continuous and bounded, Such a family is called evolution systems of measures. 2. Furthermore, under stronger dissipativity condition, the convergence of the law of X to μ is exponentially fast. More precisely, for any r ∈ (1, +∞), there exist M > 0 and ω < 0 such that for all φ in the space of p-integrable functions with respect to μ(t, ·), L r (R p , μ(t, ·)), (7). Then for any initial condition w 0 ∈ E, system (4) is asymptotically well posed in probability and w satisfies the conclusion of Theorem 2.2. Proof The idea of the proof is to truncate the original system, replacing G by a smooth truncation which coincides with G on E and which is close to 0 outside E. More precisely, for β > 0, we introduce ψ β : R q → R q a regular function (locally Lipschitz) such that ψ β (w) = 0 if w / ∈ E or w ∈ ∂E and lim β→0 ψ β (w) = 1 if w ∈ E − ∂E . We defineG Then, we introduce (ṽ ,β ,w ,β ) the solution of the auxiliary system with the same initial condition as (v , w ). Let T , δ, η > 0 be three positive reals. Let us introduce a few more notations. We will need to consider a subset of E defined by We also introduce the following stopping times: Finally, we define T := min(T , τ ,τ ) and T β := min(T , τ β ,τ β ). Let us remark at this point that in order to prove that P[τ ≥ T ] → 1 (which is our aim), it is sufficient to work on the bounded stopping time min(T , τ ), since P[τ ≥ T ] = P[min(T , τ ) ≥ T ]. In other words, the realizations of w which stay longer than T inside E are not problematic. Therefore, we introduceτ := min(T , τ ). Our first claim is that on finite time intervals [0, T ],w ,β is a good approximation of w inside E as long as one chooses β sufficiently small. To prove our claim, we proceed in two steps, first working inside E β and then in E − E β : 1. For any β > 0, one controls the difference between w andw ,β on E β since one controls the difference between the drifts. By an application of Lemma A.3 below (we need here the moment Assumption 2.3(i)), there exists a constant C (which may depend on T , β, . . .) 
such that We conclude by an application of the Markov inequality, implying 2. One needs now to control the situation outside E β , that is, on E − E β . The idea is that while one does not control the difference between G andG β anymore, one can still choose β sufficiently small such that E β becomes arbitrary close to E, hence implying thatτ and T β are arbitrary close with high probability, namely With θ = (δη) 2 and λ = δη, one obtains that for sufficiently small β, Let us denote S := sup T β ≤t≤τ w t −w where we have used the Cauchy-Schwarz inequality and the moment Assumption 2.3(ii) (yielding the constant K G ) in the second line. So, we deduce by the Markov inequality that sup T β ≤t≤τ w t −w ,β t is arbitrary small in probability. From the combination of 1. and 2., we deduce that one can choose β small enough such that P sup 0≤t≤T ∧τ We can now proceed to the application of Theorem 2.2 to the truncated system. As (ṽ ,β 0 ,w ,β 0 ) remains in R p × E, one can extend smoothly F and outside E so that (F, ) satisfies Assumptions 2.1-2.2. Therefore, one can apply Theorem 2.2 to the auxiliary system: for all δ, T > 0, where w is defined by (8). As a consequence, there exists 0 such that for all with < 0 , P sup We know by assumption 2. of the statement of Property 2.3, for all t ≥ 0, w t ∈ E, so we conclude the proof by observing that for all T > 0, lim →0 P[τ ≥ T ] = 1. In the following lemma, we show that the solutions of two SDEs, whose drifts are close on a subset of the state space, remain close on a finite time interval. The difficulty here lies in the fact that we deal with only locally Lipschitz coefficients. Lemma A.3 Suppose x and y are solutions, with identical initial conditions in H ⊂ R n , of the following stochastic differential equations in R n : Let T > 0 be a fixed time. We define We make the following assumptions: 1. Approximation assumption: 2. Local Lipschitz assumption: for all a, b ∈ R n with max( a , b ) ≤ R, there exists a constant C R such that 3. Boundedness assumption: there exists p > 2 and A > 0 such that and if x ≤ R, then there exists K R such that a(x) ≤ K R . Under the above assumptions, there exists a constant C (depending on the quantities defined above, but not on ξ ) such that Proof Although the Lipschitz constant is not bounded on H, we can use the boundedness assumption to show that the probability of reaching a level R before time T will be very small for large R, and then use the classical strategy inside { x t ≤ R} where everything works as if the coefficients were globally Lipschitz. A similar strategy is used in [33] to prove a strong convergence theorem for the Euler scheme without the global Lipschitz assumption. We adapt here the ideas of their proof to our setting. Therefore, we introduce the following stopping times: We also denote e(t) := x t − y t . Splitting the following expectation according to the value of ρ, and applying the Young inequality, ab ≤ d r a r + 1 qd q/r b q for r −1 + q −1 = 1 and any a, b, d > 0, we obtain, for any d > 0, Then we use the boundedness assumption and the Markov inequality to deduce that x min(r,ρ) − y min(r,ρ) 2 ds + 2T 2 K 2 R ξ 2 . We then apply the Gronwall lemma Finally, we choose d small enough such that and R large enough such that Appendix B: Proofs of Section 3 B.1 Notations and definitions Throughout the paper, lower-case normal letters are constants, lower-case bold letters are vectors or vector-valued functions, and upper-case bold letters are matrices. 
• v ∈ C 1 (R + , R n ) is the field of membrane potential in the network. • u ∈ C 1 (R + , R n ) is the field of inputs to the network. We write • v ⊗ u ∈ C 1 (R + , R n×n ) is the tensor product between u and v, which simply means {u ⊗ v} ij (t) = u i (t)v j (t). • W ∈ C 1 (R + , R n×n ) is the connectivity of the network. Throughout the paper, we assume W(0) = 0. • x, y is the scalar product between two vectors x, y ∈ R n . • u(t) p for p = 1, 2 is the L p norm of u(t) ∈ R n , i.e., u(t) p = ( n i=1 |u i (t)| p ) 1 p . And similarly for the connectivity matrices of R n×n with a double sum. • J is the transpose of the matrix J ∈ R n×n . • x · y ∈ R n×n is the cross-correlation matrix of two compactly supported and differentiable functions from R to R n , i.e., • H is the Heaviside function, i.e., • The real functions are integrable on R. B.1.1 Notations for the Appendix The computations involve a lot of convolutions and, for readability of the Appendix, we introduce some new notations. Indeed, we rewrite the time-convolution between u and g, an integrable function on R, This suggests one should think of v as a semi-continuous matrix of R n×R and of G γ as a continuous matrix of R R×R , such that u it = u i (t) and G st = g(t − s). Indeed, in this framework the convolution with g is nothing but the continuous matrix multiplication between v and a continuous Toeplitz matrix generated row by row by g. Hence, the operator '·' can be though of as a matrix multiplication. Therefore, it is natural to define (u * g) = (u · G) = G · u , where G ∈ R R×R is the transpose of G, i.e., the continuous Toeplitz matrix generated row by row by g(−·) : t → g(−t) and u ∈ R R×n . Thus, for g and h, two integrable functions on R, we can rewrite where G and H are their associated continuous matrices. More generally, the bold curved letters G, V, W represent these continuous Toeplitz matrices which are well defined through their action as convolution operators with g, v, and w. The previous formulation naturally expresses the symmetry of relation (14). B.2 Hebbian learning with linear activity In this part, we consider system (12). B.2.1 Application of temporal averaging theory Theorem B.1 If Assumption 3.1 is verified for p ∈ ]0, 1[, then system (12) is asymptotically well posed in probability and the connectivity matrix W , the solution of system (12), converges to W, in the sense that for all δ, T > 0, Proof We are going to apply Property 2.3. For p ∈ ]0, 1[, consider the space First, since L − W is strictly positive for W in E p , Assumptions 2.1-2.2 are satisfied on R n × E p . Then, we only need to compute the averaged vector fieldḠ and show that E p is invariant under the flow ofḠ. 1. Computation of the averaged vector fieldḠ: The fast variable is linear, the averaged vector field is given by (10). This reads where N v,Q is the probability density function of the Gaussian law with mean v and covariance Q. And Q is the unique solution of (9), with = σ I d. This leads to Q = σ 2 2 (L − W) −1 . Therefore, The integral term in the equation above is the correlation matrix of the τ μ -periodic functionv. To rewrite this term, we definev ∈ R n×[0, τ μ [ such thatv(i, t) =v(t) i .v can be seen as a matrix gathering the history ofv, i.e., each column ofv corresponds to the vectorv(t) for a given t ∈ [0, τ μ [. It turns out τ μ 0v (t) ⊗v(t) dt =v ·v . Therefore,Ḡ According to the results in Section 2, the solutions of a differential system with such a right-hand side are close to that of the initial system (12). 
Hence, we focus exclusively on it and try to unveil the properties of its solutions which will be retrospectively extended to those of the initial system (12). 2. Invariance of E p under the flow of (13): Here we assume that W(0) ∈ E p and we want to prove that the trajectory of W is in E p , too. (a) Symmetry: It is clear that each term inḠ is symmetric. Their sum is therefore symmetric and so is W(t). The correlation termv ·v is a Gramian matrix and is therefore positive. Because L − W is assumed to be positive, therefore, its inverse is also positive. Therefore, if e i is an eigenvector of W ≥ 0 associated with a null eigenvalue, then e i ·Ḡ(W) · e i ≥ 0. Thus, the trajectories of (13) remain positive. (c) Inequality |W | < lp: The argument here is that of the inward pointing subspace. We intend to find a condition under which the flowḠ is pointing inward the space {W : |W | < lp}. Roughly speaking, this will be done by defining a real valued function g strictly negative on the subspace and positive outside and then showing that its gradient (or differential) on the border goes in the opposite direction of the flow, i.e., d W g(Ḡ(W)) < 0 for W ∈ g −1 (0). For all x ∈ C n such that x = 1, define a family of positive numbers (α x ) whose supremum is written α * and a family of functions (g x ) such that Observe that the differential of g x at W applied to J is dg x • Upper bound of A: Applying Cauchy-Schwarz leads to • Upper bound of B: Observe that for J a positive definite matrix whose eigenvalues are the λ i , then the spectrum of Using the previous observation and Cauchy-Schwarz leads to Therefore, for α * < l Now write α * = pl with p ∈ ]0, 1[. Equation (31) becomes When there exists p such that P (p) < 0 (which corresponds to Assumption 3.1), then their exists a ball of radius pl on which the dynamics is pointing inward. It means any matrix W whose maximal eigenvalue is α * = pl will see this eigenvalue (and those which are sufficiently close to it, i.e., for which α * − α x > 0 is sufficiently small) decreasing along the trajectories of the system. Therefore, the space E p is invariant by the flow of the system iff Assumption 3.1 is satisfied. The trajectories of system (13) with the initial condition in E p are defined on R + and remain bounded. Indeed, if W(0) ∈ E p , the connectivity will stay in E p , in particular 0 < L − W ≤ L along the trajectories, more precisely L − W is a strictly positive constant since p ∈ ]0, 1[. Becausev is also bounded by u m l (1−p) ,v ·v + σ 2 2 (L − W) −1 is bounded. The right-hand side of system (13) is the sum a bounded term and a linear term multiplied by a negative constant; therefore, the system remains bounded and it cannot explode in finite time: it is defined on R + . B.2.2 An expansion for the correlation term We first state a useful lemma. Lemma B.2 Ifv is the solution, with zero as initial condition, of dv dt = (W − L) ·v + u(t), it can be written by the sum below which converges if W is in Proof It can be proven as a trivial rewriting of the variation of parameters formula for linear systems. A more general approach, which extends to delayed systems, was developed by Galtier and Touboul [25]; see the first example for the proof of this lemma. This is useful to find the next result. Property B.3 The correlation term can be written Proof We can use Lemma 3.2 with μ = 1 and compute the cross productv ·v . Therefore, consider u(μ·) : t → u(μt) instead of u. A change of variable shows that (u(μ·) * g l ( · μ ))(μt). 
Therefore, First, we compute the differential of F and show it is a bounded operator. Second, we show it implies the existence and uniqueness of an equilibrium point under some condition. Then, we find an energy for the system which says the fixed point is a global attractor. Finally, we show the stability condition is the same as Assumption 3.1 for p ≤ 1 3 . 1. We compute the differential of each term in F : • Formally write the second termv ·v (W) = +∞ k,q=0 W k l k · C k,q · W q l q . To find its differential, computev ·v (W + J) −v ·v (W) and keep the terms at the first order in J. Before computing the whole sum, observe that This leads to • Write Q : W → (L − W) −1 . We can write (L − W) · Q(W) = I d and use the chain rule to compute the differential of Q at W, which gives −J · Q(W) + (L − W) · dQ W (J) = 0. Therefore, The differential of F at W is the sum of these two terms. 2. We want to compute the norm of dF W (J) 2 for J 2 = 1. First, observe that for three square matrices A, B, and C, for e i the vectors of the canonical basis of R n . This leads to A · B · C 2 ≤ B 2 |A | |C |. Therefore, because |A | ≤ A 2 , Therefore, This inequality is true for all J with J 2 = 1; therefore, it is also true for the operator norm Therefore, F is a k-Lipschitz operator where k = then 1 κ F is a contraction map from E p to itself. Therefore, the Banach fixed point theorem says that there is a unique fixed point which we write W * . 4. We now show that, under assumption (32), W → W − W * 2 2 is an energy function for the system dW dt = −W + 1 κ F (W) (which is a rescaled version of system (15)). Indeed, compute the derivative of this energy along the trajectories of the system The energy is lower-bounded, takes its minimum for W = W * and the decreases along the trajectories of the system. Therefore, W * is globally asymptotically stable if assumption (32) is verified. 5. Observe that if Assumption 3.1 is verified for p ≤ 1 3 , then 1 1−p < 2 1−p ≤ 1 p . Therefore, Assumption 3.1 implies that (32) is also true. This concludes the proof. Proof Define p * the smallest value in ]0, 1[ such that Assumption 3.1 is valid. This implies The weak connectivity indexp controls the ratio of the connection over the strength of intrinsic dynamics. Indeed, these two variables are of the same order because We want to approximate the equilibrium W * , i.e., the solution ofḠ(W * ) = 0, in the regimep 1. Define = W pl such that | | = O(1). We abusively writē G( ) =Ḡ(pl ) such that Now, we write a candidate (m) = m a=0p a a , then we chose the terms a = O(1) so that the first mth orders inḠ( (m) ) vanish. This implies that Ḡ ( * ) − G( (m) ) = O(p m+1 ), where * = W * pl . Then, we use the fact that the minimal absolute value of the eigenvalues ofḠ is larger than κ − ( i.e., (m) = * + O(p m+1 ). Thus, we need to find the a such that the first mth orders inḠ( (m) ) vanish. Therefore, we need to expand all the terms inḠ( ). The first term is obvious. In the following, we write the second term F ( ) associated to the correlations and look for an explicit expression of the F a such that F ( ) = +∞ a=0p a F a . Second, we write the third term Q( ) associated to the noise and look for an explicit expression of the Q a such that Q( ) = +∞ a=0p a Q a . This equation is scary but it reduces to simple expressions for small a ∈ N. where V is the convolution operator generated by v(t) = l μ (e − β 2μ (1− )t − e − β 2μ (1+ )t )H (t) (see Appendix C for details). 
Observe that applying Young's inequality for convolutions leads to C k,q 2 ≤ 1. Therefore, we can rewrite Theorem 3.3 into Theorem B.8 Proof Similar to that of Theorem 3.3. Theorem B.9 If Assumption 3.1 is verified for , there is a unique equilibrium point which is globally, asymptotically stable. Proof Similar to the proof of Theorem B.4. With the same definitions forp = Proof Define = W pl so that So, the expansion will be in orders ofp v 1 with v 1 ≥ 1. 0 ·C 2,0 +C 0,2 · 2 0 + 0 ·C 1,1 · 0 + 1 ·C 1,0 +C 0,1 · 1 Actually, it is possible to compute recursively the nth terms, although their complexity explodes. Therefore, it is easy to compute a = F a + Q a for a ∈ N. By definition W =pl =pl(F + Q), which leads to the result. B.4 STDP learning with linear neurons and correlated noise Consider the following n-dimensional stochastic differential system: where u is a continuous input in R n , l, 1 , 2 , κ ∈ R + , a + , a − ∈ R, ∈ R n×n and B(t) is n-dimensional Brownian noise, and for all γ > 0, g γ : t → γ e −γ t H (t) where H is the Heaviside function. Recall the well-posedness Assumption 3.2 wherev(t) is the τ μ -periodic attractor of dv dt = (W − L) ·v + u(μt), where W ∈ R n×n is supposed to be fixed. And Q 12 is described below. • Proving Q 22 ≥ 0: Q 22 is the covariance matrix of the random value z, therefore, it is positivesemi-definite. Inequality |W | < lp: For all x ∈ C n such that x = 1, define a family of positive numbers (α x ) whose supremum is written α * and a family of functions (g x ) such that Because g is linear, dg x W (J) = x, J · x . For W ∈ g x−1 (0), i.e., x, W · x = α x , compute • Upper bound of A: Cauchy-Schwarz leads to |A| ≤ |a + | v · G γ ·v · x + |a − | v · G γ ·v · x . As before, we can use Young's inequality for convolutions to find an upper bound of A which reads A ≤ τ u 2 m (|a + | + |a − |) (l − α * ) 2 . The rest of the proof is identical to the Hebbian case. Assumption 3.1 is changed to Assumption 3.2 for E p to be invariant by the flowḠ. Define D k,q = 1 u 2 m τ (|a + | + |a − |) such that D k,q 2 ≤ 1. In this framework, one can prove Theorem B. 12 The correlation term can be written Proof Similar to that of Theorem 3.3.
Algorithm for appearance simulation of plant diseases based on symptom classification Plant disease visualization simulation belongs to an important research area at the intersection of computer application technology and plant pathology. However, due to the variety of plant diseases and their complex causes, how to achieve realistic, flexible, and universal plant disease simulation is still a problem to be explored in depth. Based on the principles of plant disease prediction, a time-varying generic model of diseases affected by common environmental factors was established, and interactive environmental parameters such as temperature, humidity, and time were set to express the plant disease spread and color change processes through a unified calculation. Using the apparent symptoms as the basis for plant disease classification, simulation algorithms for different symptom types were propose. The composition of disease spots was deconstructed from a computer simulation perspective, and the simulation of plant diseases with symptoms such as discoloration, powdery mildew, ring pattern, rust spot, and scatter was realized based on the combined application of visualization techniques such as image processing, noise optimization and texture synthesis. To verify the effectiveness of the algorithm, a simulation similarity test method based on deep learning was proposed to test the similarity with the recognition accuracy of symptom types, and the overall accuracy reaches 87%. The experimental results showed that the algorithm in this paper can realistically and effectively simulate five common plant disease forms. It provided a useful reference for the popularization of plant disease knowledge and visualization teaching, and also had certain research value and application value in the fields of film and television advertising, games, and entertainment. Introduction Dynamic visualization of plant diseases (Dickinson, 2020) can not only promote the development of agricultural informatics, but also has important implications for the study of plant phenomics (Pieruschka and Schurr, 2019). At a time of recurring epidemics, it can provide an innovative approach to the traditional study of plant diseases and can add interest to teaching in agriculture and forestry. It also exists in increasing demand in the film and television advertising and game entertainment industries, and can be applied to virtual space construction, virtual reality (VR) interaction, and game specific scene modeling. Combining the interrelationship between disease and environment in plant disease ecology and the description of plant disease pathogenesis patterns in epidemiology, one of the hot issues today is the realization of reasonable and realistic plant disease simulations. Plant disease visualization simulation includes the simulation of characteristics such as disease spot distribution, color, geometry and textural properties. Kider et al. (2011) developed a fungal-bacterial reaction-diffusion model to parameterize the physical properties involved in fruit decay as a way to simulate the aging and decay process of fruits. Based on this, Fan et al. (2013) used an improved reaction-diffusion model to model the appearance of fruit ring-spot decay. Miao et al. 
(2014) modeled the spatial movement of cucumber powdery mildew spots using the cellular texture proposed by Worley (1996) to model the mildew layer formed by powdery mildew using Shell rendering, taking the distribution, movement mode, and final morphology of the spots as three spatial information of the spots. Xu et al. (2017) proposed a time-varying appearance model by extracting information on the apparent characteristics of the disease from real disease images and reasonably extrapolating the disease spot infestation process, which was applied to the apparent modeling of plant diseases. Liu and Fan (2015) proposed a modified plasto-spring model combined with cell mechanics to implement simulation modeling of fruit sunburn disease. Wu et al. (2018) proposed a 3D visualization model for controlling the fruit decay process using global decay parameters and decay resistance parameters, which can flexibly and quickly perform each point on the fruit model manipulated to complete the simulation of fruit shape deformation and decay appearance. Leaf discoloration or wilting is also a manifestation symptom of common plant diseases. Tang et al. (2013) implemented leaf deformation based on a modified mass-spring model, which regarded the color change as a sequence of continuous discrete states, and combined these two parts based on a Markov chain model to realize the leaf change process under different environmental parameter settings. Jeong et al. (2013) represented the leaf as a triangular-Voronoi bilayer structure and simulated the complex curl and fold of the leaf by uneven contraction. As can be seen from the above, there are abundant studies related to plant disease simulation, but most of the proposed simulation algorithms are aimed at a particular plant disease symptom to analyze its apparent morphological characteristics to realize the simulation, lacking the exploration of common problems existing in different plant diseases, with complicated methods and large constraints. Plant morphology can reflect the gene expression, reproductive growth and resource acquisition of plants. The implementation of morphological modeling of plants using computer languages, as opposed to the graphical information of plants kept in the form of pictures, is also an important reference for this paper. Geometric topology-based modeling is the closest modeling approach to plant morphological structure. Chen et al. (2018) extracted modeling constraint rules and improved the parametric L system to generate complex 3D models of trees based on tree observation data and forestry theory knowledge. Wen et al. (2021) defined the mathematical representation of 3D plant nodes, specified the conversion method between its skeleton model and network model, and completed the plant population of different maize varieties by assembling 3D plant nodes 3D modeling. Such methods generate models with a strong sense of realism, but require professionals to provide specific plant growth rules and parameters that can describe plant morphology, which is more difficult for nonagroforestry professional users. Sketch-based modeling is a relatively flexible and interactive approach. Liu et al. (2019) built a system for interactive modeling of trees in VR based on 3D gestures with the help of a head-mounted display and a 6-DOF motion controller for interaction. Zhang et al. 
(2021) defined 3D sketches drawn by users in VR as an envelope of tree leaves and trunks that can automatically generate a complete 3D-tree model, and it can be edited twice. Such methods support direct user control over the generation of plant forms, but there are trade-offs to be made in terms of interactivity, usability, and fine-grained control over plant forms. Modeling based on measurement data mainly includes image-based and point cloud-based modeling. Chen et al. (2017) proposed a hierarchical denoising method based on multi-viewpoint image sequences to build 3D models of crops in order to improve the accuracy of 3D point cloud reconstruction. Liu et al. (2021) used conditional generative adversarial networks to predict the 3D skeleton of trees from individual images and 2D contours drawn by users, respectively. A tree model was generated using procedural modeling techniques. Such methods often face problems such as expensive collection equipment and cumbersome data processing. Curvilinear surface-based plant morphology modeling can better establish the connection between morphological structure and physiological function. Alsweis et al. (2017) extracted image contours using a curvature-scale spatial angle detection algorithm and proposed a procedural biologically motivated method to model leaf vein morphology at different levels. Isokane et al. (2018) used Bayesian expansion to infer plant branching probabilities and proposed a method to observe and infer the 3D plant branching structure hidden beneath the leaves from multiple perspectives. Such methods need to ensure smooth and continuous boundary, complicated operation and low efficiency of the algorithm. In general, the above approaches to modeling plant morphology have mainly focused on the organ structure and growth changes of the plant itself in a healthy state, while modeling the morphology of plants affected by disease infestation is lacking. The phenomena of discoloration, aging, and corrosion occurring on the surface of an object are to some extent common to the different disease symptoms on the plant epidermis due to disease infestation, so the research on texture simulation can provide effective reference for the simulation of plant diseases. Zhang et al. (2014) extracted the texture features of real rust spot pictures, which can be selected and set texture weights when drawing the model parameters to achieve texture blending and obtain different states of rust simulation. Kamata et al. (2014) considered the factors of surface geometric features (convexity, occlusion, orientation, and location) of metals and their anticorrosive coating peeling areas for corrosion calculations to simulate corrosion phenomena in peeling areas. Bellini et al. (2016) calculated the estimated age map of weathering phenomena in a texture of a given input image based on the prevalence of plaque-like patches in that image, generated a complete weathering texture and simulate the de-weathering and weathering processes. Zhang et al. (2018) proposed a first-order quasi-static cracking node method (CNM) to simulate cracking in a 3D surface model and established a new stress and energy combined cracking criterion to deal with crack generation and extension. Munoz-Pandiella et al. (2018) proposed a technique based on a fast physicsinspired method that Ishitobi et al. 
(2020) used a triangular grid to simulate the weathering of a rust-proof coated metal surface after mechanical deterioration in three steps based on fundamental mechanics: "separation-splitting-exfoliation." In texture representation and synthesis, Guingo et al. (2017) propose a two-layer representation of textures, with a noise layer capturing fine Gaussian patterns and a structure layer capturing non-Gaussian patterns and structures, synchronizing the two layers by a set of masks to make them consistent. Cavalier et al. (2019) propose a method based on local control of speckle noise by controlling the pulse distribution and a spatially defined kernel to create the desired texture appearance in a userinteractive manner. Due to the essential difference between the object of application and the principle of texture generation, a generation algorithm suitable for plant disease apparent texture needs to be explored on the basis of the reference. In summary, it is of high research and application value to realize a plant disease visualization simulation with high realism, high universality and stable operation. In this paper, we deconstruct and analyze five common and distinctive disease symptom patterns in plant diseases, and propose a time-varying generic model of plant disease without violating the theory of plant pathology to show the dynamic process of plant disease infestation under different environmental conditions. Using disease symptoms as the basis for plant disease classification, we propose simulation algorithms corresponding to five common disease symptoms respectively, and realize visual simulation of different types of multiple plant diseases. Deep learning is used to check the similarity of simulation results in terms of the accuracy of symptom type recognition. Algorithms for plant disease simulation Time-varying generalized model for plant diseases The infected host plant, the pathogenic agent and the environmental conditions conducive to disease development are known as "the disease triangle" (Scholthof, 2007). The occurrence of disease is the result of the fulfillment of these three necessary conditions. Therefore, the external environment in which the plant is located directly influences the growth of the plant and the spread of the disease. Meteorological factors are most closely related to the occurrence and prevalence of plant diseases, mainly including temperature, humidity, rainfall, light, wind, etc. In agroforestry research, data recording and analysis of environmental and plant diseases enable monitoring and prediction of plant disease development (Moyer et al., 2016;Shang et al., 2018;Chappell et al., 2020). The timevarying generic model is established with the principles of plant disease prediction as the main theoretical basis, and they are the three principles of continuity, analogy, and relevance. Due to professional and equipment limitations and lack of accurate meteorological measurement data, this paper combines the description of the process of different plant disease epidemics, ignores the influence of other factors, and assumes that the acceleration of the actual spread of disease spots is mainly influenced by two conditions: atmospheric temperature and humidity. Within the scope of the existence of unidirectional effects of temperature and humidity on phytophthora, a timevarying generic model is developed to represent the common relationship between different plant diseases under the influence of environmental conditions. 
For any plant disease, define the current spot morphology as State, expressed as Equation (1): where M refers to M 1 , M 2 , M 3 , M 4 , and M 5 in sequence for the disease symptom types described in the text in the actual code operations, denoted as variable parameters controlling the extent (size or number) of disease spot spread in the simulation algorithm. It will be described in detail in specific sections. C denotes the color component matrix of the disease spots. In this paper, V denotes the rate of disease diffusion, a denotes the acceleration of disease spot diffusion, and t denotes the diffusion time. To simplify the model and ignore the influence of other factors, the actual acceleration of disease spot diffusion is assumed to be influenced by two conditions: atmospheric temperature and humidity, and a quantitative relationship of uniform form is established in the range where temperature and humidity play a unidirectional role on plant diseases. Tan (1991) had proposed the Richards function as a general model to simulate the temporal dynamics of plant disease epidemics, and through detailed derivation, proved that it can reflect the epidemiological pattern of many plant diseases. Based on this theoretical formula, the following definition is made in this paper, as shown in Equations (2-4): where m is the shape parameter of the growth curve, reflecting the type of disease growth function. α and β are the influence coefficients of temperature and humidity on the disease, respectively, and the corresponding values of α and β are different for different plant diseases. T denotes temperature ( • C) and Q denotes relative atmospheric humidity (%). ε is the value of random error caused by other factors on acceleration, which is neglected in the actual operation. M max is the maximum value of M. For the color of the disease spot C, in order to make the color change process tend to be smooth, this paper uses the key frame linear interpolation method for simulation, and the color is updated once for each rendering of the screen. The color value in the most severe state is C max , and the initial spot color value is C min , then the color value C at time t is shown in Equation (5): where t max denotes the maximum value of the diffusion time. Types of symptoms with continuous area changes Discoloration symptom simulation Discoloration refers to a change in color of the diseased plant. In this section, Ginkgo yellows disease is selected for the study to simulate the discoloration symptoms. The leaf yellowing shows a gradual process from green to yellow. As shown in Figure 1B, the grayscale remapping transformation is represented by a right-angle coordinate system, with the x-axis being the grayscale value before mapping and the y-axis being the grayscale value after mapping. In order to represent the color change process more richly, after normalization, the initial ginkgo grayscale gradient map ( Figure 1A) is grayscale remapped. Three key points[Key 1 (x 1 , y 1 ), Key 2 (x 2 , y 2 ), Key 3 (x 3 , y 3 )] divide the whole process into three processing segments, and the default starting coordinates of the first segment are (0, 0), as shown in Equation (6): where x 1 , y 1 , x 2 , y 2 , x 3 , y 3 are the exact values in practical application to determine the mapping function of each segment. 
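A minimal sketch of such a segmented remapping, written here in NumPy rather than as a shader and intended only as an illustration, is given below. It interpolates piecewise-linearly through the origin and the three key points, and sliding the x-coordinates of Key1 and Key2 by the spread parameter M1 reproduces the gradual brightening of the gray mask; the key coordinates are the values quoted in the next paragraph, while everything else is an assumption.

```python
import numpy as np

def remap_gray(gray, m1=0.0):
    # piecewise-linear mapping through (0,0), Key1, Key2, Key3 and (1,1);
    # the x-coordinates of Key1 and Key2 slide left by m1 as the disease spreads
    keys_x = [0.0, 0.43 - m1, 0.76 - m1, 0.94, 1.0]   # keep m1 small so x stays increasing
    keys_y = [0.0, 0.14, 0.60, 1.0, 1.0]
    return np.interp(gray, keys_x, keys_y)

gradient = np.linspace(0.0, 1.0, 11)      # stand-in for the normalized gray gradient map
print(remap_gray(gradient, m1=0.0))       # initial mapping
print(remap_gray(gradient, m1=0.2))       # later stage: more pixels map to bright (yellow) values
```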
Figure 1C shows the grayscale gradient after the three-stage mapping, where the coordinates of Key 1 , Key 2 , and Key 3 are taken as (0.43, 0.14), (0.76, 0.60) and (0.94, 1.0), respectively, and the rendering results are shown in Figure 1D. Keeping the vertical coordinates unchanged, the horizontal coordinates of Key 1 and Key 2 are dynamically assigned from large to small, and the amount of change is M 1 . The calculation of the grayscale mask image for generating uniformly discolored ginkgo yellowing disease is shown by Equation (7): where (x, y) denotes the position of the pixel point. All image calculation formulas are performed simultaneously for each pixel point in the image, which essentially indicates the calculation of the value of each pixel point and is not repeated below. I 1 (x, y) denotes the image after segmented gray linear transformation, I 0 (x, y) denotes the initial gray gradient image, and Slope is the slope of the line between the key points, which takes the value of 1.4. Powdery mildew symptom simulation Powdery mildew symptom is characterized by the appearance of powdery or moldy material visible to the naked eye on the surface of the disease. In this paper, we take cucumber powdery mildew as the research object, use Worley noise to control the location of the occurrence of the disease spot and the geometry of the spot itself, and use Perlin noise (Perlin, 1985) to simulate the powdery mildew layer formed by the disease spot block to simulate the powdery mildew symptom. The texture edges of the Worley noise-like Voronoi map are clearly straight lines, so further transformations are needed. First, the Unity Shader is used to fill the noise after grayscale processing, so that the grayscale of each cell is randomized, and then blurred, and finally the threshold is set to binarize the image, so that the grayscale map can describe the shape of the lesion. After trial and error, a threshold value of 0.71 worked best. The process is shown in Figure 2A. In order to reflect the granularity characteristics of the spots, the Perlin noise with different parameters is superimposed to generate fractal noise to simulate the effect of powdery mildew. It can be adjusted by changing the frequency and amplitude of the two parameters. Users can choose the number of superimpositions according to the actual simulation needs, and the generated Perlin noise function is shown in Equation (8): where Scale() is the two-dimensional noise range, n is the number of noise functions superimposed, Noise() is the Perlin noise function. In this paper, we take n as 3, and the simulated noise effect after superposition is shown in Figure 2B. Combining the above steps, Ahpha blending of the two in the Unity Shader generates a grayscale map of the spot texture of cucumber powdery mildew, which is I 2 (x, y), expressed by Equations (9, 10): where I worley (x, y) is the grayscale image generated by Worley noise and I perlin (x, y) is the grayscale image generated by Perlin noise. After color mapping, the simulation result is rendered on the model, as shown in Figure 2C. The equal scale deflation of the crystals in Worley noise can control the size of the lesions. For some cells that are already small, the cells are scaled to a certain level and the small cells will disappear. Therefore, the cell is dynamically deflated from large to small to simulate the dynamic process of the spot from nothing to something, from small to large. The grayscale mapping representing the disease spots is updated in real time. 
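A stand-alone sketch of this Worley-plus-fractal-noise construction is given below; it is ours (NumPy instead of a Unity Shader), uses a crude value-noise stand-in for the Perlin octaves of Equation (8), and its scale argument loosely plays the role of the deflation parameter M2 defined in Equation (11) just below.

```python
import numpy as np

rng = np.random.default_rng(2)

def worley_gray(size=256, n_pts=24, scale=1.0):
    # distance to the nearest feature point, normalized; increasing `scale`
    # shrinks the area that survives the 0.71 threshold below
    pts = rng.random((n_pts, 2)) * size
    yy, xx = np.mgrid[0:size, 0:size]
    d = np.min(np.hypot(xx[..., None] - pts[:, 0], yy[..., None] - pts[:, 1]), axis=-1)
    return np.clip(d / d.max() * scale, 0.0, 1.0)

def fractal_noise(size=256, octaves=3):
    # octaves of coarse random noise, a crude stand-in for the Perlin fractal sum of Eq. (8)
    out = np.zeros((size, size))
    for o in range(octaves):
        g = rng.random((2 ** (o + 3), 2 ** (o + 3)))
        idx = np.linspace(0, g.shape[0] - 1, size).astype(int)   # nearest-neighbour upsampling
        out += g[np.ix_(idx, idx)] / 2 ** o
    return (out - out.min()) / (np.ptp(out) + 1e-9)

cells = worley_gray()
mask = (cells < 0.71).astype(float)          # thresholded Worley field: lesion shapes
powder = fractal_noise()
texture = 0.5 * mask + 0.5 * mask * powder   # alpha-blend the powdery grain inside the lesions
print(texture.shape, float(texture.min()), float(texture.max()))
```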
The amount of deflation change is M_2, as shown in Equation (11), where Noise_w() denotes the function that deflates the cell size in the noise and Cell denotes a cell in the Worley noise.

Ring pattern symptom simulation

The ring pattern symptom is characterized by ring-shaped spots. Initially, the plant surface produces brown, round, water-stained spots, which gradually form concentric whorls of varying shades of color as the spots spread. We take apple ring rot as the specific object of study to describe the simulation of the ring pattern symptom. The entire spot is split into two parts, the initial water-stained spot and the concentric whorl, and the generated grayscale image of the spot morphology, denoted I_3(x, y), is expressed by Equation (12), where I_S(x, y) is the grayscale image of the water-stained spots and I_Y(x, y) is the grayscale image of the concentric whorls. It is reasonable to assume that the small water-stained spots produced at the onset of disease determine the overall size, color basis, and outermost morphology of the spots as they spread and grow. This part is decomposed step by step: the regular circle is randomly perturbed using Gaussian noise, and the simulation result is then obtained by combining image operations. The specific steps are as follows. (1) Four regular circular grayscale maps are generated in turn, with sizes satisfying Circle_1 > Circle_2 > Circle_3 > Circle_4. Shape_1 and Shape_4 are obtained by preprocessing Circle_1 and Circle_4; the result is shown in Figures 3A,B. Circle_2 and Circle_3 are each perturbed by Gaussian noise twice, and the result of the first perturbation is subtracted from the result of the second to obtain Shape_2 and Shape_3; the result is shown in Figures 3C,D. (2) The shapes obtained above are subtracted from one another three times in turn, and the transparency of the result is adjusted to facilitate the subsequent superimposition of the whorl part, giving the shape of the initial water-stained spot, as shown in Figure 4. From a microscopic point of view, the whorls can be seen as formed by colonies through alternating phases of growth and movement followed by cessation and aggregation. Each circle of the whorl, from deep to shallow, is regarded as one layer of a radial gradient mapping whose number of layers increases with time. The amount of change in the overall deflation of the spot shape is M_3, and the current number of circles is determined by rounding the value of M_3. The grayscale image of the water-stained spot part is then calculated as shown in Equations (13, 14), where Shape denotes the initial water-stained spot, Shape_S denotes the water-stained spot after deflation, and Image() denotes a function that converts its input into an image of the same size as the plant texture mapping. The grayscale image of the concentric whorl part is calculated as shown in Equations (15, 16), where Rand() is the random function used for perturbation, GradientMap() is the gradient mapping function from 0 to 255, r_0 is the radius of the initial circle, and N_Turns is the value of M_3 rounded, representing the number of whorl circles. A random function is added to perturb the regular concentric whorl pattern (Figure 5A), which brings it closer to the real one. The new texture mapping generated at each moment is continuously stored and updated.
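As a rough illustration of the layer-by-layer radial gradient in Equations (15, 16), the sketch below builds a concentric-whorl grayscale map with N_Turns rings of base radius r_0 and a small Gaussian perturbation of the radius. The direction of the gradient (darker toward the centre) and the exact gray levels are illustrative assumptions.

```python
import numpy as np

def whorl_grayscale(size, r0, n_turns, jitter=2.0, seed=0):
    """Concentric-whorl grey map: rings of radius r0, 2*r0, ..., n_turns*r0,
    each inner ring darker than the one outside it, with a Gaussian radial
    perturbation so the pattern is not perfectly regular (cf. Eqs. (15, 16))."""
    rng = np.random.default_rng(seed)
    ys, xs = np.mgrid[0:size, 0:size]
    cx = cy = size / 2.0
    # perturbed distance from the centre of the spot
    r = np.hypot(xs - cx, ys - cy) + rng.normal(0.0, jitter, (size, size))
    img = np.full((size, size), 255.0)
    for k in range(n_turns, 0, -1):
        # gradient-map each ring: smaller k (inner ring) -> lower grey value
        img[r <= k * r0] = 255.0 * k / (n_turns + 1)
    return img.astype(np.uint8)
```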
The mapping of the initial water-stained spots after superimposed diffusion (Figure 5B) is rendered to obtain the result for apple ring rot, as shown in Figure 5C.

Rust spot symptom simulation

The rust spot symptom is characterized by spots of different shapes on the plant surface, formed by aggregations of small raised particles of varying sizes. The rust spot symptom is simulated using wheat stripe rust as the specific study object. Based on the parallel-stripe pattern of wheat stripe rust, a mask mapping is used to mark the affected areas: white marks the areas where stripe rust develops and black marks the areas where it does not. The corresponding mask map is shown in Figure 6A. The spore mounds are viewed as a dense distribution of raised granules. This granularity is represented by drawing a near-elliptical two-dimensional shape that can be used for gradient mapping; the results after different color mappings are shown in Figure 6B. A number of granular points of 2 × 2 pixels (at most 1,000 × 1,000 of them) are placed, and their positions are randomly perturbed using a Gaussian random function. A normal map is used to simulate the bumps of the rust particles. When the texture type is set to Normal map in Unity, the built-in function UnpackNormal can be used to sample the normal texture, and the normal information is extracted by adjusting the bump level; the result is applied to the Surface Shader for output. The detail is shown in Figure 6C and the rendering result in Figure 6D. Based on the description of the rust disease process, a Unity Shader updates the mask mapping in real time through dynamic color scale adjustment, which makes it possible to render the dynamic progression of the spots from absent to present and from sparse to dense. For grayscale images, the input color scale adjustment first computes the difference Diff between the white-field threshold threHigh and the black-field threshold threShadow. The algorithm then traverses each pixel of the mask mapping and computes the difference GrayDiff between the input gray value Gray and threShadow. If GrayDiff is less than or equal to 0, the adjusted gray value Gray' is 0; otherwise, the adjusted value is obtained by raising the ratio of GrayDiff to Diff to the power of the inverse of the midtone and multiplying by 255, as shown in Equations (17-20), where Midtone_0 is the initial midtone value and M_4 is the amount of change. After this adjustment, the grayscale image I_in(x, y) adjusted by the input color scale is obtained. Then the ratio coefficients between 255 and the deviations of the white-field threshold outHigh and the black-field threshold outShadow of the output color scale are calculated. After these calculations, the color-adjusted grayscale image is obtained as the updated mask mapping I_4(x, y), as shown in Equation (21). In this paper, threShadow is 86 and threHigh is 255; outShadow is 0 and outHigh is 255.
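The input/output color scale adjustment of Equations (17-21) amounts to the following NumPy sketch. The power-curve remapping follows the description above, while the final linear mapping to the output range [outShadow, outHigh] is an assumed form.

```python
import numpy as np

def adjust_levels(gray, thre_shadow=86, thre_high=255,
                  out_shadow=0, out_high=255, midtone=1.0):
    """Colour-scale adjustment used to update the rust mask (cf. Eqs. (17-21)):
    pixels at or below the black-field threshold go to 0, the rest are remapped
    by a power curve whose exponent is the inverse of the midtone value
    (midtone > 0), then mapped linearly into the output range."""
    gray = np.asarray(gray, dtype=float)
    diff = float(thre_high - thre_shadow)
    gray_diff = gray - thre_shadow
    adjusted = np.where(
        gray_diff <= 0.0,
        0.0,
        np.power(np.clip(gray_diff, 0.0, diff) / diff, 1.0 / midtone) * 255.0,
    )
    # map the adjusted values into [out_shadow, out_high]
    out = out_shadow + adjusted * (out_high - out_shadow) / 255.0
    return np.clip(out, 0, 255).astype(np.uint8)
```

Dynamically decreasing `midtone` (the M_4 adjustment) brightens more of the mask over time, which is what drives the sparse-to-dense progression of the rendered rust spots.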
The dynamic adjustment of the color scale is done by dynamically and linearly adjusting the middle tone value M 4 for real-time rendering to simulate the change process of wheat stripe rust. Scatter symptom simulation Scatter symptom is characterized by the natural distribution of the spots on the plant surface, mostly scattered, rarely in patches, with a relatively smooth surface. In this paper, we take rose black spot as the specific object of study to realize the simulation of scatter symptom. In this paper, we use the Perlin noise function to perturb the regular circular spots in two-dimensional space in terms of distribution and shape, respectively, so that we can generate the disease spots that meet the characteristics of scattered morphological symptoms, as shown in Figure 7. The algorithm process steps are as follows. (1) The Perlin noise function is used as a random function to generate a number of regular circles for random distribution in the 2D plane, and the dynamic scaling of the radius of the circles can control the size of the spots. After adjustment, the scaling value of Perlin noise used here for position perturbation is set to 32. The larger the scaling value, the more intensive the Perlin noise calculation. (2) A random function is used to affect the size of the generated regular circles, setting the random range of shape scaling multipliers between 0.5 and 1.0. The Perlin noise function is again used, here scaled to a value of 8, to perturb the regular circular shape to deform it, thus generating an irregular speckle pattern. (3) The white patches generated above to represent the diseased spots are adjusted in gray scale. After performing color mapping, the color of the disease spots is adjusted by adjusting the value of HSI (Zhi et al., 2020). An image subtraction operation is performed with the original leaf texture mapping to generate the scattered spots of the disease in 2D view. After applying it to the 3D model, the final rendering result is obtained. The number of scattered spots is predetermined for the background program. According to step (1) above, the scattered spot locations of rose black spot are determined by Perlin noise as a random generator. Each random point generated by the random function corresponds to some random value in the interval. The random value corresponding to the i random point is value i , and the threshold that can be changed in real time for judgment is M 5 . Display(i) is the function that determines whether each random point will be shown to be rendered as a disease spot, as shown in Equation (22). Each random point initially generated is traversed, and when the corresponding random value is less than or equal to the set threshold value indicates that the point is displayed, otherwise, the point is not displayed, thereby updating the current spot texture mapping I 5 (x, y). Simulated similarity test Convolutional neural network is the leading architecture for image classification, recognition, and detection tasks in deep learning (Rawat and Wang, 2017;Li et al., 2020). In this paper, real images are used as the training set and the model is trained using ResNet (He et al., 2016). The simulation results are used as a test set to get their recognition accuracy of disease symptom types as a way to complete the simulation similarity test. Structural design of ResNet model The advanced nature of the ResNet model allows its structure to be changed and adapted flexibly according to the requirements. 
The network structure built in this paper is shown in Figure 8. It consists of 56 layers of network. Among them, Conv is the convolutional layer and stride is the step size. BN is Batch Normalization, which aims to regularize the image (Zhu et al., 2017). The activation function is Relu. Pool is the pooling layer. FC is the Fully Connected Layers. Because there are two sequences of steps with repeated operations in the ResNet model for feature extraction of image information, the steps with repeated operations are directly summarized into two different modules B1 and B2 to simplify the structure diagram in order to represent the network structure more clearly. The practical role of both modules is to continuously extract the feature information of the image. The algorithm flow steps of the model are as follows. (1) Initial feature extraction is performed on the training set images using a convolution kernel of size 3 × 3 with a step size of 1. The BN layer is used to normalize the disease features. ReLU activation function (Lin and Shen, 2018) is used to non-linearize the disease features. Each subsequent convolutional calculation is followed by batch normalization and activation function, which will not be reviewed later. (2) The image features are further extracted and fused with the input feature information using 2 sets of convolution kernels of size 3 × 3 with a step size of 1. This process is seen as the overall module B1 (Figure 8A), and the B1 calculation operation is repeated twice. (3) Deep features are extracted using one set of convolutional kernels of size 3 × 3 with a step size of 2 and one set of convolutional kernels of size 3 × 3 with a step size of 1. The result of this step is added to the result obtained by computing the original input image using a set of convolutional kernels of size 1 × 1 with step size 2, and the result after the summation is converted to a function using the activation function. This process may be seen as overall module B2 ( Figure 8B). (4) The calculation process of B1 and B2 is repeated three times in order to further extract the deep features of the image. (5) When the network finishes processing the image with feature extraction, the feature map is compressed using the average pooling operation to reduce the amount of network computation. Finally, Softmax classifier (Zeng et al., 2014) is used to output the probabilities of the corresponding categories through a fully connected layer of size 5. The label with the highest probability is output as the predicted classification result. Data acquisition and pre-processing The experimental images collected for training in this paper consist of PlantVilage (Hughes and Salathe, 2015), a publicly available online dataset for plant disease image classification, and a self-built dataset. A total of 1,000 RGB color images of different disease symptom types are collected. The experimental images collected for testing are all obtained simulated result maps, with a total of 200 RGB color images. The self-built dataset was obtained from the image data crawled in the Agroforestry Science website using crawler software, and the useless data were removed by manual screening. Under the guidance and advice of agroforestry-related professionals, we finished organizing the image data and tagging the category labels. In this paper, the pixel size of all images was adjusted to 256 × 256 × 3, and the original images with less than 256 × 256 × 3 pixels were zero-filled. 
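To make the two repeated modules concrete, here is a PyTorch-style sketch of B1 (identity shortcut, two 3 × 3 stride-1 convolutions) and B2 (one 3 × 3 stride-2 and one 3 × 3 stride-1 convolution with a 1 × 1 stride-2 projection shortcut), following the description above. The channel widths and exact layer ordering are assumptions, not the authors' published configuration.

```python
import torch.nn as nn
import torch.nn.functional as F

class B1(nn.Module):
    """Identity residual block: two 3x3 stride-1 convolutions, each followed
    by batch normalization; ReLU after the skip addition."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, stride=1, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)

class B2(nn.Module):
    """Down-sampling residual block: 3x3 stride-2 then 3x3 stride-1
    convolutions, with a 1x1 stride-2 projection on the shortcut branch."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.proj = nn.Conv2d(in_ch, out_ch, 1, stride=2, bias=False)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + self.proj(x))
```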
To obtain experimental images that better meet the training requirements, some of the images were further adjusted for sharpness, contrast, sharpening, and removal of interfering information.

Result of similarity test

The training parameters were set as follows: a learning rate of 0.005, 600 training iterations, and the cross-entropy loss function; the training accuracy reached 98.1%. The model performance was evaluated by randomly holding out 20% of the real image dataset as a validation set, on which the accuracy was 92%. This high recognition accuracy indicates that the model is suitable for the simulation similarity test. The simulated result maps of each type of plant disease were then used as the test set to carry out the similarity test proposed in this paper, and the overall test accuracy was 87%. Because there are unavoidable differences between simulated results and real images, including the difference between 3D models and real plants, the difference between the apparent texture of simulated and real diseases, and the color space proportions, the recognition accuracy on the simulated test set is noticeably lower than on the validation set. The recognition accuracy of each symptom type is calculated as shown in Equation (23), where Ob_k is the recognition accuracy of the k-th symptom type, Correct_k is the number of correctly recognized image samples of that type, and Error_k is the number of incorrectly recognized samples of that type. The recognition accuracy obtained for each type is shown in Table 1. Overall, the results of the deep-learning similarity test are good. Ring pattern has the most distinctive features, differs markedly from the other symptom types, and has the highest recognition accuracy. In contrast, rust spot is more easily misidentified as powdery mildew or scatter: at certain stages its distribution and shape resemble those two symptom types, and its identification accuracy is the lowest.

Display interface operation

We analyzed the functional requirements of the plant disease simulation user interface and used Unity to design a simulation display interface based on a message-driven model rather than a command-line program. Users can: (1) select the simulation object, open the model file (.obj), set the temperature and humidity growth conditions, and start the simulation algorithm for the corresponding object; (2) slide the time module to observe the change process, while the system writes the current rendering time and real-time frame rate to the information area; (3) rotate the model with the right mouse button, while the W, S, A, and D keys control zoom in, zoom out, and left and right movement of the model, respectively.

Experimental results

To show the simulation effect clearly and intuitively, the complex plant model is pre-processed and only the diseased plant organs are kept for display.
The average frame rates of different plant disease simulations are shown in Table 2, indicating that the simulation can be performed efficiently in real time. Figure 9 shows the simulation results of the above plant diseases under different environmental conditions and different disease occurrence times. It can be seen that the severity of plant disease damage to the plant epidermis increases with time, gradually spreading to infest the entire surface of the organ when the temperature and humidity are in the right range for the growth of the disease. Discussion Plant diseases are diverse and complex. The number of phenological patterns under the influence of different exogenous and endogenous conditions is even more uncountable. When visualizing them, it is difficult to classify and simulate plant diseases from a plant pathology perspective. In this regard, this paper defines a common relationship between the diffusion process of plant diseases under the influence of environment and designs a generic time-varying model of plant disease. In this paper, by observing and analyzing the apparent symptoms of diseases, we classify plant diseases by symptom characteristics and propose an apparent simulation algorithm to realize visual simulation of different plant diseases. To verify the generalizability of the algorithm, this paper implements the apparent simulation of five other plant diseases using the proposed five symptom simulation algorithms, respectively, as shown in Figure 10. In addition to the proposed deep learning-based similarity check method, to be able to evaluate the simulation results more comprehensively, this paper designs the "Questionnaire on the Effectiveness of Plant Disease Simulation Based on Feature Classification." We invite users to visually compare the simulation results with real pictures. Using a Likert scale, users rated the simulation results quantitatively and made suggestions for optimization, and the questionnaire data were analyzed using SPSS software (Pallant, 2013). In order to be able to cover different types of users to participate in the evaluation, users of different age groups, different educational stages and different industries were invited to this paper, and a total of 242 valid questionnaires were collected. The age groups covered from below 16 to above 45 years old, with the age group of 16-35 years old dominating; the education levels covered from junior high school to above master's degree, with bachelor and master's degree dominating; the professions included agriculture and forestry related, computer related and other professions, with reasonable composition. Descriptive analysis of the overall effect evaluation was conducted, and the results obtained are shown in Table 3. It can be obtained that the median evaluation score of each symptom type is 4, which indicates that it is similar, indicating that the overall simulation effect meets expectations and is recognized by users. Combined with the shortcomings and suggestions made by users on the simulation results collected from the nonscale questions in the questionnaire, they are summarized as follows: in terms of details, users suggested that the gradient texture of the simulated area of discoloration is not obvious, the stacking effect of powdery mildew needs to be refined, the boundary treatment of ring pattern is not detailed enough, the color of rust spot needs to be further processed, and the apparent differentiation between different periods of scatter is not enough. 
These also provide valuable reference directions for the subsequent optimization work. Conclusion The time-varying generic model proposed in this paper simplifies the unqualified and complex processes into quantitative common relationships in a uniform computational manner. It can also set different influence coefficients to express the variability of plant diseases by the action of influencing factors, effectively integrating algorithmic resources. The simulation algorithm proposed in this paper for different disease symptoms generates the texture of disease spots in two-dimensional space, and then renders them on the threedimensional model to get the final effect. For discoloration, this paper mainly uses the three-stage gray-scale remapping to realize the discoloration simulation with a sense of hierarchy; for powdery mildew, this paper combines Worley noise and Perlin noise application to realize the simulation; for ring pattern, this paper combines image processing and noise disturbance deformation to simulate the pattern of spots into two parts: initial water-stained spots and concentric circles; for rust spot, this paper uses mask mapping to mark specific onset areas, simulates the raised particles of rust spots through bump mapping, and uses color scale adjustment to complete the changes of spot texture; for scatter, this paper makes double application of Perlin noise to represent the distribution of spots and disturbance rule shape, and sets dynamic thresholds to complete the simulation of scatter from less to more. In the simulation similarity test, the recognition accuracy reached 87%, indicating that the disease phenology simulation algorithm in this paper can effectively and realistically realize the process simulation of different plant diseases. The overall complexity of the algorithm is moderate, and it operates efficiently, which provides a new solution for disease simulation research and can be extended to more types of disease simulation. In the future, we will work on three aspects: enriching the types of disease symptoms, optimizing the general model of disease time variation, and improving the overall functions to increase the freedom of simulation. Data availability statement The original contributions presented in this study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.
Constrained Fact Verification for FEVER Fact-verification systems are well explored in the NLP literature with growing attention owing to shared tasks like FEVER. Though the task requires reasoning on extracted evidence to verify a claim’s factuality, there is little work on understanding the reasoning process. In this work, we propose a new methodology for fact-verification, specifically FEVER, that enforces a closed-world reliance on extracted evidence. We present an extensive evaluation of state-of-the-art verification models under these constraints. Introduction A rapid increase in the spread of misinformation on the Internet has necessitated automated solutions to determine the validity of a given piece of information. To this end, the Fact Extraction and VERification (FEVER) shared task (Thorne et al., 2018a) 1 introduced a dataset for evidencebased fact verification. Given a claim, the task involves extracting relevant evidence sentences from a given Wikipedia dump and assigning a label to the claim by reasoning over the extracted evidence (SUPPORTS / REFUTES / NOTENOUGHINFO). Several recent works (Liu et al., 2020;Soleimani et al., 2020;Zhao et al., 2020) leverage representations from large pre-trained language models (LMs) like BERT (Devlin et al., 2019), and RoBERTa to achieve state-of-the-art results on FEVER. However, it is unclear how factual knowledge encompassed in these LMs influences the verification process. More recently, Lee et al. (2020) developed a fact verification system solely based on large pretrained LMs and presented their superior zero-shot performance on FEVER compared to a random baseline. This result clearly shows the influence of factual knowledge embedded inside these LMs, but relying entirely on such knowledge directly contrasts to the evidence-based paradigm of factverification. Such reliance can be problematic, especially with evolving evidence (Wikipedia pages are constantly updated to reflect the latest events). Schuster et al. (2019) illustrate this phenomenon through an example fact, "Halep failed to ever win a Wimbledon title", which was valid until July 2019 but not thereafter. In this work, we propose methods to train factverification models that explicitly reason on the available evidence instead of relying on the factual knowledge in pre-trained LMs, thereby emulating a closed-world setting. This is particularly important in the context of the FEVER dataset because of the overlap between the source corpus used for compiling FEVER and the ones commonly used to pre-train LMs (Wikipedia). We build upon the work of Clark et al. (2020) that demonstrated the ability of transformers (BERT, RoBERTa) to function as soft theorem provers. They induce a closed-world reasoning process by fine-tuning on a carefully curated synthetic natural language rulebase. In this work, we transfer this ability to FEVER and gauge the feasibility of such closed-world reasoning. Additionally, we also construct an entity-anonymized version of FEVER following Hermann et al. (2015) for evaluating our proposed models. We construct the anonymized version by masking prominent named entities in the claim-evidence pairs, thereby reducing any reliance on pre-trained factual knowledge. Our experiments adopt the popular three-stage pipeline of FEVER task, comprising document selection, evidence sentence extraction, and claim verification (Thorne et al., 2018b). 
We primarily focus on the claim verification stage of FEVER, while using the state-of-the-art document selec-tion and evidence sentences extraction from Liu et al. (2020). Our focus is motivated since only the claim verification step involves a joint (often complicated) reasoning over the extracted evidence. Our main contributions are, • We propose various pre-training strategies for large pre-trained LMs to induce a closedworld setting during fact verification in FEVER. • We adapt an existing synthetic natural language rulebase to FEVER by incorporating NOTENOUGHINFO label. • We create an anonymized version of the FEVER dataset to facilitate investigation into the factual knowledge through named entities. Our datasets and code are publicly available. 2 Constrained Verification Traditionally, most FEVER systems rely on large pre-trained language models (LMs) to encode the claim and extracted evidence sentences. Previously, Schuster et al. (2019) studied various reasons for the surprisingly good performance of claim-only classifiers on FEVER and reported dataset idiosyncrasies to be the primary reason as opposed to world knowledge in word embeddings. However, they present only a preliminary analysis of the impact of world knowledge from GloVe embeddings (Pennington et al., 2014). In this work, we present an in-depth analysis because the issue is particularly relevant in the context of large pre-trained LMs. To the best of our knowledge, we are not aware of any other works that look into the impact of embedding's world knowledge on FEVER. In a nutshell, we model the task under a closedworld setting with the extracted evidence as the only available factual information to the model. Overall, we believe the methods proposed in this paper are general enough to apply to any factverification task. However, we show a case study only on FEVER due to its wide-spread popularity. To this end, we first present an entityanonymized version of the FEVER dataset and then propose pre-training strategies to enforce the above described closed-world setting on FEVER models. Anonymization A straightforward way to discourage the use of prior factual knowledge in fact-verification systems 2 https://github.com/adithya7/ constrained-fever Kung Fu (TV series) ent1 Kung Fu ent1 is an American action adventure martial arts western drama television series starring David Carradine ent0 is to anonymize the named entities. An intuitive way to achieve this is to replace them with a custom list of abstract entity markers. We adapt a related technique from reading comprehension literature (Hermann et al., 2015) to our task. Given a pair of claim and extracted evidence sentences, we first identify the set of named entities from Wikititles of evidence sentences. We then replace all the occurrences of these named entities with abstract markers sampled randomly from a predefined list. We present an anonymized FEVER instance in Table 1. We use the resulting anonymized FEVER dataset to evaluate our proposed methods. Clark et al. (2020) analyze the logical reasoning capabilities of transformer-based models on a variety of question-answering and reading comprehension tasks. Given a question and a context comprising of a set of simple facts and rules in natural language, models are expected to reason only based on the provided context, thereby emulating the ability to perform closed-world reasoning. They propose a synthetic training dataset (henceforth referred to as RuleTaker dataset) to fine-tune pre-trained models like RoBERTa. 
They observe high performances (≥95% accuracy) on the synthetic test set, motivating us to adapt a similar training methodology for FEVER. Table 2 shows an example context from the RuleTaker dataset. Each question-context pair in this dataset belongs to one of the following types, Type-A: provable/disprovable statements, can be labeled by reasoning directly over the context, Type-B: unprovable statements, reasoning Facts/Triples F 1 : Bob is blue. F 2 : Fiona is kind. Rules R 1 : All white people are red. R 2 : Blue people are white. R 3 : If someone is red then they are kind. over the context is not sufficient to conclude these statements. 3 The RuleTaker dataset assigns a TRUE or FALSE label to each question-context pair. Type-A were labeled by reasoning over the context, whereas Type-B were labeled by invoking the closed-world assumption (CWA) (Q4, Q5 in Table 2). The provided context (facts and rules) constitutes the closed-world setup. Moreover, Type-A are additionally annotated with a proof constituting a reasoning chain over a subset of facts and rules. RuleTaker-CWA: Questions We adapt the RuleTaker dataset to FEVER by introducing a new NOTENOUGHINFO label for unprovable question-context pairs. In particular, we construct two FEVER-style RuleTaker datasets, namely RuleTaker-CWA and RuleTaker-Skip-Fact (example in Table 2). RuleTaker-CWA: We convert all the labels for Type-B pairs into NOTENOUGHINFO (Q4, Q5 in Table 2) and relabel TRUE and FALSE from Type-A into SUPPORTS and REFUTES respectively (Q1, Q2, Q3 in Table 2). RuleTaker-Skip-Fact: For each Type-A question, we create a contrastive setting by removing a necessary fact (i.e., required in proof) from the original context. The label for the modified questioncontext pair becomes NOTENOUGHINFO because the question can no longer be answered under the modified context (Q6, Q7, Q8 in Table 2). We also retain the original Type-A pairs by converting all TRUE and FALSE labels to SUPPORTS and RE-FUTES respectively (Q1, Q2, Q3 in Table 2). To maintain a balanced dataset, we randomly sample a fraction of newly created NOTENOUGHINFO labels. Note that we only work with Type-A pairs in this variant. Occasionally there could be multiple valid proofs for the same question-context pair. We currently ignore these questions to avoid inconsistencies arising from other valid reasoning methods over the modified context. Table 3 presents the statistics for the train, dev and test splits in the proposed RuleTaker-CWA and RuleTaker-Skip-fact datasets. As a natural adaptation, we also considered creating a similar Skip-fact variant of the FEVER dataset. Each claim in FEVER was annotated with potentially many evidence sets, and each evidence set can consist of multiple evidence sentences. Ideally, we need all sentences within single evidence set to validate the claim, i.e., it requires multi-hop reasoning. Unfortunately, we noticed cases where a proper subset of an evidence set is enough to prove/disprove the claim (see Table 4). Methodology We now present the methodology to train constrained fact-verification models for the FEVER shared task. Many state-of-the-art FEVER models use the standard BERT encoder (Devlin et al., 2019) to encode a concatenation of claim and evidence sentences. To enforce closed-world reasoning over available evidence, we first pre-train the BERT encoder on the proposed variants of Rule- Claim Roman Atwood is a content creator. (Roman Atwood) Roman Bernard Atwood (born 2. 
(Comedian) A popular saying, variously quoted but generally attributed to Ed Wynn, is, "A comic says funny things; a comedian says things funny", which draws a distinction between how much of the comedy can be attributed to verbal content and how much to acting and persona. Taker datasets following Clark et al. (2020). Firstly, the reasoning models in Clark et al. (2020) were first trained on the RACE multi-choice question answering dataset (Lai et al., 2017) and then fine-tuned on the RuleTaker dataset. In our experiments, we follow the same pipeline (including hyper-parameters) except to replace original RuleTaker dataset with our adaptations, RuleTaker-CWA and RuleTaker-Skip-fact. 4 In Table 5, we present the results of the pretrained RuleTaker-CWA and RuleTaker-Skip-fact on their respective test sets. In general, we notice high performance on the synthetic test sets, indicating the model's ability to rely only on available evidence. We now utilize the above fine-tuned BERT en-coders (CWA, Skip-fact) with two state-of-the-art graph-based reasoning networks for claim verification, KGAT (Liu et al., 2020) and Transformer-XH (Zhao et al., 2020), as well as a robust BERTbased classifier. BERT-concat: Evidence sentences retrieved before claim verification are concatenated to the claim along with their Wiki-titles and are encoded using a pretrained BERT encoder. The [CLS] representation from the encoder is then directly used for classification. 5 KGAT (Liu et al., 2020): A kernel-based graph attention network over the evidence graph. Each node in the graph encodes a concatenation of individual evidence sentence (along with Wiki-title) and the claim. Knowledge propagation between the nodes of this graph is achieved using a Gaussian edge kernel on a word-word similarity matrix, while individual node importance is measured using a separate node kernel. The initial node representations are refined using the above kernels and a single graph attention layer. Transformer-XH (Zhao et al., 2020): Evidence graph is constructed and initialized in a way similar to KGAT, but the knowledge propagation between the nodes is achieved using special eXtra-Hop attention mechanism. For each node, the [CLS] token embedding from BERT is considered as an attention hub and is revised using a combination of the extra-hop attention and the traditional in-sequence attention. 6 We compare the above-proposed curricula (CWA, Skip-fact) against a baseline curriculum (Original) where we initialize the verification models with standard pretrained BERT weights (bert-base-cased). We use huggingface transformers (Wolf et al., 2019) in all of our experiments. 7 Experiments For each of the three models, BERT-concat, Transformer-XH, and KGAT, we show results on the three different training curricula, Original, CWA, and Skip-fact in Table 6. We evaluate all our trained models on three datasets, the official devset of FEVER task (Std.), symmetric FEVER v0.2 On most evaluation sets, we found the models trained with Original curriculum performed better than our proposed curricula (CWA, Skip-fact) except on symmetric FEVER where Transformer-XH with Skip-fact does slightly better. Across the models, we notice a considerable drop in performance on Anon. set, validating our hypothesis about existing reliance on factual knowledge. To see the individual impact of the entity-anonymization, we train the BERT-concat model on train split of Anon. FEVER dataset. 
We observe improvements across the three curricula, with Original still outperforming the proposed curricula (Table 7). Through our constrained verification setup, we expect the models to reason using only the extracted evidence. The evidence retrieval from Liu et al. (2020) achieves a recall of 94%, indicating the feasibility of reasoning only on extracted evidence in FEVER. With Original outperforming the proposed strategies on both the standard and anonymized FEVER, we find that world knowledge is helpful for FEVER. Limitations Firstly, our anonymization is a regex-based method and relies only on the entities in Wiki-titles, and this might be insufficient for handling ambiguous titles. Secondly, the Rule-Taker dataset's domain is significantly different from that of the FEVER dataset, thereby presenting a challenge in re-using the pretrained encoder. Additionally, it is not entirely clear as to what constitutes the world (or factual) knowledge for a given task and as highlighted by Clark et al. (2020), effectively combining implicit pretrained knowledge (from encoders) with explicitly stated knowledge (from evidence) remains a challenge. Related Work We adopted the widely used document selection method from Hanselowski et al. (2018). Many recent state-of-the-art FEVER systems involve reasoning over evidence graphs Zhong et al., 2019;Liu et al., 2020;Zhao et al., 2020) along with competitive LMbased models (Soleimani et al., 2020). Dataset specific idiosyncrasies have been identified in FEVER (Thorne et al., 2019;Schuster et al., 2019) as well as in NLI (Gururangan et al., 2018;Poliak et al., 2018;Naik et al., 2018;McCoy et al., 2019), but is not the focus of this work. Conclusion We identify a critical issue with existing claim verification systems, especially the recent models that utilize large pre-trained LMs. We propose to perform fact verification under a closed-world setting and present our results on the task of FEVER. While it is hard to evaluate the reliance on implicit pretrained knowledge, our initial results indicate that such reliance is helpful for FEVER.
Data-driven model reduction by moment matching for linear and nonlinear systems Theory and methods to obtain reduced order models by moment matching from input/output data are presented. Algorithms for the estimation of the moments of linear and nonlinear systems are proposed. The estimates are exploited to construct families of reduced order models. These models asymptotically match the moments of the unknown system to be reduced. Conditions to enforce additional properties, e.g. matching with prescribed eigenvalues, upon the reduced order model are provided and discussed. The computational complexity of the algorithms is analyzed and their use is illustrated by two examples: we compute converging reduced order models for a linear system describing the model of a building and we provide, exploiting an approximation of the moment, a nonlinear planar reduced order model for a nonlinear DC-to-DC converter. © 2017 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/). Introduction The availability of mathematical models is essential for the analysis, control and design of modern technological devices. As the computational power has advanced, the complexity of these mathematical descriptions has increased. This has kept the computational needs at the top or over the available possibilities (Åström & Kumar, 2014). A solution to this problem is represented by model reduction which consists in finding a simplified mathematical model which maintains some key properties of the original description. In the linear framework several techniques have been developed by the systems and control community (Antoulas, 2005). These techniques can be divided into two main groups: approximation methods based on singular value decomposition (SVD) and approximation methods based on Krylov projectors, also known as Willcox, and Backx (2008), Hinze and Volkwein (2005), Volkwein (1999, 2008), Rowley, Colonius, and Murray (2004) and Willcox and Peraire (2002). A nonlinear enhancement of the moment matching approach has eluded researchers until recently. In Gallivan et al. (2004Gallivan et al. ( , 2006, a characterization of the moments in terms of the solution of a Sylvester equation has been given. Exploiting this relation, in Astolfi (2007Astolfi ( , 2010 a new interpretation of the moment matching approach, based on a steady-state description of moment, has been proposed. This has led to further developments in the model reduction field, e.g. the extension of the model reduction theory to linear and nonlinear time-delay systems (Scarciotti & Astolfi, 2016b), see also Ionescu and Astolfi (2013b), Ionescu, Astolfi, and Colaneri (2014), Scarciotti (2015) and Astolfi (2015a, 2016a). When dealing with model reduction, one usually starts with a high-dimensional model to be reduced. In fact, most of the methods assume the knowledge of a state-space model of the system to be reduced. However, in practice this model is not always available. Among the moment matching methods, a datadriven approach to reduce linear systems has been proposed under the name of Loewner framework (Mayo & Antoulas, 2007). This method constructs reduced order models by using matrices composed of frequency-domain measurements, which makes intrinsically difficult to extend the method to nonlinear systems. 
In this paper, inspired by the learning algorithm given in Bian, Jiang, and Jiang (2014) and Jiang (2012, 2014) to solve a model-free adaptive dynamic programming problem (see also the references therein, e.g. Baird, 1994;Vrabie, Pastravanu, Abu-Khalaf, & Lewis, 2009), we propose time-domain data-driven online algorithms for the model reduction of linear and nonlinear systems. Collecting time-snapshots of the input and output of the system at a given sequence of time instants t k , two algorithms to define families of reduced order models (in the framework introduced in Astolfi, 2010) at each instant of the iteration t k are devised. The reduced order models asymptotically match the moments of the unknown system to be reduced. These algorithms have several advantages with respect to an identification plus reduction technique: there is no need to identify the system, which is expensive both in terms of computational power and storage memory; since the reduced order model matches the moments of the unknown system, it is not just the result of a low-order identification but it actually retains some properties of the larger system; since the proposed algorithm determines directly the moment of a nonlinear system from the input and output data, it does not involve the computation of the solution of a partial differential equation, which is usually a difficult task; in addition, for linear systems the capability of determining reduced order models from data matrices (of order proportional to ν) is computationally efficient when compared with model reduction obtained by manipulating the system matrices (of order n ≫ ν). Thus, the method is of computational value, for both linear and nonlinear systems, even when the system to be reduced is known. This work is at the intersection of large and active research areas, such as model reduction and system identification. For instance, the use of time-snapshots and data matrices is reminiscent of the methods of POD, see e.g. Lorenz (1956) and Noack, Afanasiev, Morzynski, Tadmor, and Thiele (2003) and of subspace identification, see e.g. van Overschee and de Moor (1996) and Verhaegen and Verdult (2007). Moreover, while the Loewner framework is a frequency-domain data-driven method proposed in model reduction, various time-domain data-driven techniques have been presented in system identification, such as the eigensystem realization algorithm, see e.g. Cooper (1999), Houtzager, van Wingerden, andVerhaegen (2012), Majji, Juang, and Junkins (2010) and Rebolho, Belo, and Marques (2014), and the dynamic mode decomposition, see Hemati, Williams, and Rowley (2014). Hence, this paper can be seen as part of the model reduction literature or of the system identification literature. However, since the results of the paper have strong connections with the notion of moment, we present the results using the terminology of model reduction. The rest of the paper is organized as follows. In Section 2.1 (3.1) the definition of moment and the model reduction techniques, as developed in Astolfi (2010), are recalled for linear (nonlinear) systems. In Section 2.2 we give a preliminary analysis to compute on-line estimates of the moments of a linear system. In Section 2.3 (3.2) approximations which converge asymptotically to the moments of the linear (nonlinear) system are given. 
Therein, a discussion of the computational complexity associated with the evaluation of these approximations is presented, a recursive least-square formula is given and a moment estimation algorithm is provided. In Section 2.4 (3.3) we give a family of reduced order models for linear (nonlinear) systems. In Section 2.5 we discuss how several properties, e.g. matching with prescribed eigenvalues or zeros, can be enforced in the present scenario. In Section 2.6 a linear reduced order model describing the dynamics of a building is estimated with the method proposed in the paper. In Section 3.4 a nonlinear reduced order model constructed using an approximation of the moment of a DC-to-DC converter provides a further example. Finally Section 4 contains some concluding remarks. Preliminary versions of this paper have been published in Scarciotti and Astolfi (2015b,c). The additional contributions of the present paper are as follows: the results are presented in a more formal and organized way (by means, e.g. of theorems and propositions) and all the results are directly proved; two recursive algorithms are given; properties of the exponentially converging models, such as matching with prescribed eigenvalues, are discussed; the reduction of the linear model describing the dynamics of a building provides a new example based on a physical system; the so-called ''U/Y'' variation of the nonlinear algorithm is given. Finally, the example in Scarciotti and Astolfi (2015c) is extended providing a nonlinear planar reduced order model estimated with the presented techniques. While this paper was under review, the proposed algorithm has been applied to the problem of model reduction of power systems (Scarciotti, 2017). Therein the authors provide an extensive testing of the linear algorithm. This suggests that the results of the present paper can be of relevance for applications in diverse research domains. Notation. We use standard notation. R ≥0 (R >0 ) denotes the set of non-negative (positive) real numbers; C <0 denotes the set of complex numbers with negative real part; C 0 denotes the set of complex numbers with zero real part. The symbol I denotes the identity matrix and σ (A) denotes the spectrum of the matrix A ∈ R n×n . The symbol ⊗ indicates the Kronecker product and ∥A∥ indicates the induced Euclidean matrix norm. The symbol ϵ k indicates a vector with the kth element equal to 1 and with all the other elements equal to 0. The vectorization of a matrix A ∈ R n×m , denoted by vec(A), is the nm × 1 vector obtained by stacking the columns of the matrix A one on top of the other, namely vec(A) = [a ⊤ 1 , a ⊤ 2 , . . . , a ⊤ m ] ⊤ , where a i ∈ R n is the ith column of A and the superscript ⊤ denotes the transposition operator. Model reduction by moment matching -recalled To render the paper self-contained in this section we recall the notion of moment for linear systems as presented in Astolfi (2010). Consider a linear, single-input, single-output, continuoustime, system described by the equationṡ with x(t) ∈ R n , u(t) ∈ R, y(t) ∈ R, A ∈ R n×n , B ∈ R n×1 and C ∈ R 1×n . Let W (s) = C (sI − A) −1 B be the associated transfer function and assume that (1) is minimal, i.e. controllable and observable. 
In Astolfi (2010), exploiting the observation that the moments of system (1) can be characterized in terms of the solution of a Sylvester equation (see Gallivan et al., 2004Gallivan et al., , 2006, it has been noted that the moments are in one-to-one relation with the well-defined steady-state output response of the interconnection between a signal generator and system (1). This interpretation of the notion of moment, which relies upon the center manifold theory, has the advantage that it can be extended to nonlinear systems and it is of particular interest for the aims of this paper. Moreover, system (1) has a global invariant manifold described by M = {(x, ω) ∈ R n+ν : x = Πω}. Hence, for all t ∈ R, Finally, as shown in Astolfi (2010), the family of systemṡ with G any matrix such that σ (S) ∩ σ (S −GL) = ∅, contains all the models of dimension ν interpolating the moments of system (1) at the eigenvalues of S. Hence, we say that system (5) is a model of (1) at S. A preliminary analysis In this section we provide a preliminary analysis assuming to know the matrices A, B, C and the state x(0) in Eq. (1). This analysis is used in the following section for the development of an estimation algorithm which, this time, does not use the matrices A, B, C and the state x(0). To this end we make the following assumptions. Assumption 1. The input u of system (1) is given by the signal generator (3), with S such that σ (S) ⊂ C 0 with simple eigenvalues. In addition, assume that the triple (L, S, ω(0)) is minimal. 1 A matrix is non-derogatory if its characteristic and minimal polynomials coincide. 2 The matrices A, B, C and the points s i identify the moments. Then, given any observable pair (L, S) with s i ∈ σ (S), there exists an invertible matrix T ∈ R ν×ν such that the elements of the vector C ΠT −1 are equal to the moments. Assumption 1 has a series of implications. The hypothesis on the eigenvalues of S is reasonable since the contribution of the negative eigenvalues of S to the response of the system decays to zero. The minimality of the triple (L, S, ω(0)) implies the observability of the pair (L, S) and the ''controllability'' of the pair (S, ω(0)). This last condition, called excitability of the pair (S, ω(0)), is a geometric characterization of the property that the signals generated by (3) are persistently exciting, see Åström and Wittenmark (1995), and Verhaegen and Verdult (2007). The choice of the particular structure (3) for the input u is limiting in applications in which the input cannot be arbitrarily chosen. We suggest in Section 2.3 a possible way to deal with alternative input signals. Note that the two assumptions imply that σ (A) ∩ σ (S) = ∅, which in turn implies that Eq. (2) has a unique solution or, equivalently, that the response of system (1) driven by (3) is described by (4). We evaluate Eq. (4) over a set of sample times T p k = {t k−p+1 , . . . , t k−1 , t k } with 0 ≤ t 0 < t 1 < · · · < t k−p < · · · < t k < · · · < t q , with p > 0 and q ≥ p. The set T p k represents a moving window of p sample times t i , with i = 0, . . . , q. We call Π k the estimate of the matrix Π at T p k , namely the estimate computed at the time t k using the last p instants of time t k−p+1 , . . . , t k . The estimate can be computed as follows. Theorem 2. Let the time-snapshots Q k ∈ R np×nν and χ k ∈ R np , with p ≥ ν, be defined as respectively. Assume the matrix Q k has full column rank, then Proof. Eq. 
(4) can be rewritten as Using the vectorization operator and the Kronecker product on Eq. (7) yields Computing Eq. (8) at all elements of T p k yields If the matrix Q k has full column rank, then we can compute Π k from the last equation yielding Eq. (6). Note that the selection of the set T p k can affect the quality of the data and the rank of the matrix Q k . Thus, to assure that T p k is nonpathological (Chen & Francis, 1995) we introduce the following technical assumption. If Assumption 1 holds then it is always possible to choose the elements of T ν k , i.e. the sampling times, such that Assumption 3 holds (see Lemma 1 in Padoan, Scarciotti, & Astolfi, in press, for a proof of this fact). We now show that this assumption can be used to make the matrix Q k full rank. Lemma 1. Suppose Assumptions 1-3 hold. If p = ν, then Q k is square and full rank. . Since σ (A) ⊂ C <0 and σ (S) ⊂ C 0 , the excitability of (S, ω(0)) implies that the i-block element of the matrix Q k is a n × nν matrix of rank n. Since the blocks are chosen corresponding to the elements of T ν k , by Assumption 3 the linear independence holds for all k. As a result Q k is a square full rank matrix. The claim follows noting that these blocks are chosen according to the elements of T ν k . Thus by Assumption 3 the blocks are linearly independent for all k. Since real data are affected by noise, the assumptions of Lemma 1 may not hold. In this case p, i.e. the number of samples in the moving window, can be selected larger than ν and, as well-known from linear algebra and remarked in Åström and Wittenmark (1995) and Jiang and Jiang (2012), the solution of Eq. (6) is the least squares solution of (9). We now prove that the estimate Π k is actually equal to Π. Theorem 3. Suppose Assumptions 1-3 hold. Let Π be the solution of Eq. (2). Then Π k computed by Eq. (6) is equal to Π. Proof. The matrix Π k defined in Eq. (6) is such that the equations hold. Consider the first equation of system (1) computed at t k , namelẏ Substituting Eqs. (10) and (11) in Eq. (12) yields The discussion carried out so far has the drawback that information on the state of the system is required. In practice, this is usually not the case and only the output y is available. Thus, we look for a counterpart of Theorem 2 in which the output is used in place of the state. Theorem 4. Let the time-snapshots R k ∈ R w×nν and Υ k ∈ R w , with w ≥ nν, be defined as respectively. Assume the matrix R k has full column rank, then Proof. The result can be proved following the same steps used to obtain Eq. (6). Similarly to Lemma 1, the following result guarantees that the matrix R k is full rank. Lemma 2. Suppose Assumptions 1-3 hold. If w = nν, then R k is square and full rank. Proof. The proof is omitted because similar to the one of Lemma 1, although this time also the observability of (C, A) is used. On-line moment estimation from data Eq. (6) contains terms that depend upon the matrix A and the initial states x(0) and ω(0). However, exploiting the fact that these terms enter the response as exponentially decaying functions of time, i.e. ω(0) ⊤ ⊗ e At and e At x(0), we present now an approximate version of the results of the previous section. Assume the matrix  Q k has full column rank, then following the same steps used to obtain Eq. (6), we define Note that if Assumption 3 holds and p = ν, then  Q k is square and full rank (the proof of this fact is similar to the proof of Lemma 1 when x(0) = Πω(0), thus, it is omitted). 
We now prove that  Π k converges to Π. To this end, we first present a preliminary result. Proof. By Assumptions 1 and 2 there exists a matrixΠ such that the steady-state response x ss (t) of the interconnection of system (1) and the generator (3) is described by the equation (14), (14) is such that Substituting Eqs. (15) and (11) in Eq. (12) yields (0)), from which, using Eq. (2) and Assumption 2, the equation (0)) follows. By Assumption 1 there exists a sequence {t k }, with lim k→∞ t k = ∞, such that for any t i ∈ {t k }, ω(t i ) ̸ = 0 and Assumption 3 holds. By Assumption 2 (0) where the superscript * indicates the complex conjugate transpose. The dimensions of ∆ are related to the number of samples, whereas the dimensions of Π are related to the order of the system to be reduced and of the signal generator. In fact, the POD is a decomposition of the entire cloud of data {x(t i )} along the vectors µ(t i ), called principal directions of {x(t i )} (Antoulas, 2005). By contrast, in the technique proposed in this paper the oldest data are discarded as soon as new data satisfying Assumption 3 is collected. As a consequence, while ∆ is built to describe the entire dynamics of {x(t i )}, Π is built to describe only the steady-state response of the system to be reduced. The result is that the POD is usually used with the Petrov-Galerkin projection for a SVD-based approximation (Rowley et al., 2004;Willcox & Peraire, 2002), whereas this technique is a moment matching method. A similar discussion can be carried out for Eq. (13) that contains also terms which depend upon the matrix C . In this case note that Eq. (4) can be written as (0)) an exponentially decaying signal. Thus, an approximate version of Theorem 4 follows. Theorem 6. Define the time-snapshots  R k ∈ R w×ν and  Υ k ∈ R w , with w ≥ ν, as Assume the matrix  R k has full column rank, then is an approximation of the on-line estimate C Π k , namely there exists Proof. Eq. (16) can be derived following the same steps used to obtain Eq. (6). The convergence of the limit to C Π is proved repeating the proof of Theorem 5. Note that if Assumption 3 holds and w = ν, then  R k is square and full rank (the proof of this fact is similar to the proof of Lemma 1, thus, it is omitted). Since  R k is smaller than R k , the determination of  C Π k is computationally less complex than the computation of  Π k . Note also that, from Eq. (16) we are not able to retrieve the matrix  Π k , but only  C Π k . However, as shown in Eq. (5), we only need C Π to compute the reduced order model, i.e. Π is not explicitly required. Eq. (16) is a classic least-square estimation formula. Thus, we can provide a recursive formula. and holds for all t ≥ t r . Proof. The formula is obtained adapting the results in Åström and Wittenmark (1995) (see also Ben-Israel & Greville, 2003;Greville, 1960;Wang & Zhang, 2011) to the present scenario, in which at each step we acquire a new measure and we discard an old measure: for completeness we provide the details of the proof. Note that Substituting the first equation in the second we obtain which substituted in (16), namely (17). Finally Eqs. (18) and (19) are obtained applying recursively the matrix inversion lemma (Åström & Wittenmark, 1995) Note that the construction of the initial values vec(  C Π r ), Φ r and Ψ r needed to start the recursion can be done in two ways: the first consists in using Eq. (16) to build vec(  C Π r ), Φ r and Ψ r and then updating the estimate with the equations in Theorem 7. 
However, this method has the drawback of requiring the inversion The second method consists in starting with dummy initial values vec(  C Π r ), Φ r and Ψ r . Since the formulas ''forget'' the oldest measurements, after a sufficient number of iterations all the dummy measurements are forgotten. Since for single-input, single-output systems the terms (I + In comparison, the Arnoldi or Lanczos procedure for the model reduction by moment matching has a computational complexity of O(νn 2 ) (Antoulas, 2005, Section 14.1) (or O(ανn) for a sparse matrix A, with α the average number of non-zero elements per row/column of A). In addition, note that these procedures require a model to be reduced and thus further expensive computation has to be considered for the identification of the original system. The approximations  Π k and  C Π k can be computed with the following algorithm. 1: Construct the matrices  Q k and  χ k (  R k and  Υ k , respectively). Else increase w. If k−w < 0 then restart the algorithm selecting a larger initial 4: Stop. As already noted, it is more realistic to approximate C Π k , using output measurements, than to approximate Π k , which needs state measurements. Moreover, the determination of  C Π k is computationally more efficient since the number of unknown elements is smaller. Nevertheless, the determination of  Π k is not irrelevant. From a theoretical point of view,  Π k provides a way to determine the solution of the Sylvester equation from measurements. Note also that  Π k contains more information than  C Π k . In particular, it provides information regarding the order of the unknown system. Thus, the estimation of Π k paves the way for an extension of this work in the direction of system identification. 3 This is the computational complexity of the fastest algorithm (Le Gall, 2014) for the inversion and multiplication of matrices. If the classical Gauss-Jordan elimination is used, then the computational complexity is O(ν 3 ). Remark 2. It is not always possible to arbitrarily select the input of the system to be reduced. For instance the input signal may be composed of several frequencies. Instead of system (3), consider the input described by the equationṡ with v(t) ∈ R n an unknown signal. In this case the output response of system (1) is which can be written as (0)). One can then apply the filtering techniques given in Åström and Wittenmark (1995, Chapter 11): we filter out v from y and u with a band-pass filter and apply the results of the paper to the filtered y f and u f . Remark 3. When the class of inputs is not given, it may be necessary to carry out a preliminary analysis to estimate specific frequencies at which we would like to interpolate the transfer function. This analysis requires an additional step. However, note that the estimation of some of the frequency peaks of the transfer function is not as computationally expensive as a full system identification procedure for two reasons: on one hand we are interested in ν ≪ n frequency features; on the other hand, since our aim is to obtain a reduced order model, we have the additional benefit of obtaining directly a model of order ν (in contrast with the additional computational cost of determining a reduced order model after a high-order identification procedure). Families of reduced order models Using the approximations given by Algorithm 1 a reduced order model of system (1) can be defined at each instant of time t k . Definition 3. Consider system (1) and the signal generator (3). 
Suppose Assumptions 1-3 hold. Then the systeṁ , is a model of system (1) at S at time t k , if there exists a unique solution P k of the equation such that where  C Π k is the solution of (16). With this definition we can formulate the following result. Proposition 1. Select P k = I, for all k ≥ 0. If σ (F k ) ∩ σ (S) = ∅ for all k ≥ 0, then the model is a model of system (1) at S for all t k . Note that for most purposes, models (22) and (25) can be defined using directly the asymptotic value of  C Π k . However, defining the reduced order model at each instant of time t k allows to implement the algorithm online. This is particularly advantageous when the unknown system has a parameter which is subject to variation. If the variation is sufficiently slow, then the algorithm would be able to produce updated reduced order models at each t k . Remark 4. The so-called Loewner framework represents an alternative data-driven approach to model reduction by moment matching (Mayo & Antoulas, 2007). One of the main differences between our approach and the Loewner framework is that we use time-domain measurements, while the Loewner framework makes use of frequency-domain measurements. Thinking in a classical interpolation/Krylov fashion, in the Loewner framework the measurements are used to build projectors with a particular structure. In the framework introduced by Astolfi (2010) Remark 5. The results developed so far can be easily extended to linear time-delay systems. In fact, Algorithm 1 can be used to estimate the moments of linear time-delay systems without any modification. On the other hand the choice of the structure of the time-delay reduced order model is more difficult. We refer the reader to Astolfi (2015b, 2016b) for more details. Properties of the exponentially converging models In Astolfi (2010), Ionescu and Astolfi (2013a), Ionescu et al. (2014) and Scarciotti and Astolfi (2016b) the problem of enforcing additional properties and constraints on the reduced order model has been studied. In this section we go through some of these properties to determine if they hold for the families (25) and under which conditions. (1) Matching with prescribed eigenvalues: Consider system (25) and the problem of determining at every k the matrix G k such that σ (F k ) = {λ 1,k , . . . , λ ν,k } for some prescribed values λ i,k . The solution of this problem is well-known and consists in selecting G k such that σ (S − G k L) = σ (F k ). This is possible for every k and for all λ i,k ̸ ∈ σ (S) because G k is independent from the estimate  C Π k . Note also that by observability of (L, S), G k is unique at every k. (2) Matching with prescribed relative degree, matching with prescribed zeros, matching with compartmental constraints: These problems can be solved at every k as detailed in Astolfi (2010) for all s ∈ σ (S) at k. Even though the asymptotic value of  C Π k satisfies this condition there is no guarantee that the condition holds for all k. However, if the condition holds for the asymptotic value, then there existsk ≫ 0 such that for all k ≥k condition (26) holds. Los Angeles University Hospital building model In this section we apply Algorithm 1 to the model of a building (Los Angeles University Hospital) with 8 floors, each having three degrees of freedom (Antoulas, Sorensen, & Gugercin, 2001;Chahlaoui & Van Dooren, 2005). The model is described by equations of the form (1) with a state of dimension n = 48. The output of the system is the motion of the first coordinate. 
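As an illustration of item (1) above (our own sketch, not code from the paper), the selection of G_k reduces to a pole-placement problem on the dual pair (S^T, L^T), since (L, S) is observable. The snippet below assembles the matrices of a model of the family (25) for a generator containing the interpolation points 0 and ±5.22ι, two of the points used in the building example that follows; the moment estimate and the prescribed eigenvalues are placeholders, and SciPy's place_poles routine is assumed to be available.

```python
import numpy as np
from scipy.signal import place_poles

def reduced_model_matrices(S, Lrow, cpi_estimate, desired_poles):
    """Assemble F_k = S - G_k*L, G_k and the output row of a model of the family (25).

    S, Lrow        : matrices of the signal generator (3), with Lrow of size 1 x nu
    cpi_estimate   : current estimate of C*Pi (length nu), e.g. from Algorithm 1
    desired_poles  : nu prescribed eigenvalues for F_k (disjoint from sigma(S))
    """
    # sigma(S - G*L) is assigned via pole placement on the dual pair (S^T, L^T): G = K^T.
    K = place_poles(S.T, Lrow.T, desired_poles).gain_matrix
    G = K.T
    F = S - G @ Lrow
    H = np.atleast_2d(cpi_estimate)          # output map psi = (C*Pi) * xi
    return F, G, H

# Generator with interpolation points 0 and +/- 5.22i (nu = 3).
S = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, 5.22],
              [0.0, -5.22, 0.0]])
Lrow = np.array([[1.0, 1.0, 0.0]])
cpi = np.array([0.3, -0.1, 0.2])             # placeholder moment estimate, not real data
F, G, H = reduced_model_matrices(S, Lrow, cpi, [-1.0, -2.0, -3.0])
assert np.allclose(np.sort(np.linalg.eigvals(F).real), [-3.0, -2.0, -1.0])
```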
Note that this model has been reduced with various methods in Antoulas (2005), obtaining a reduced-order model of order ν = 31. In this paper we reduce the system with a model of order ν = 19, interpolating at the points 0, ±5.22ι, ±10.3ι, ±13.5ι, ±22.2ι, ±24.5ι, ±36ι, ±42.4ι, ±55.9ι and ±70ι, corresponding to the main peaks in the magnitude of the frequency response of the system. A reduced order model (25) at time t k has been constructed assigning the eigenvalues of F k (determined with the data-driven technique given in Scarciotti, Jiang, & Astolfi, 2016). Fig. 1 shows the Bode plot of the system (solid/blue line) and of the estimated reducedorder model at t = 100 (dotted/red line). The green circles indicate the interpolation points. The surface in Fig. 2 (Fig. 3, respectively) represents the magnitude (phase, respectively) of the transfer function of the reduced order model as a function of t k , with 4.1 ≤ t k ≤ 23.7 s. The solid/blue line indicates the magnitude (phase, respectively) of the transfer function of the reduced order model for the exact moments C Π. The figures show how the approximated magnitude and phase of model (25) at S of system (1) evolve over time and approach the exact reduced order model as t k → ∞. We also check the qualitative behavior of the reduced order model when the input is not an interpolating signal. To this end, we select the input u = c 1 cos(0.001t) + c 2 sin(0.1t) + c 3 sin(3t) + c 4 cos(15t) + c 5 sin(23t) + c 6 sin(37t) + c 7 sin(50t), (27) where c i , with i = 1, . . . , 7, are randomly generated weights such that  7 i=1 c i = 1. Fig. 4 (top graph) shows the output response of the building model in solid/blue line and the output response of the asymptotic reduced order model in dotted/red line when the input to the two systems is selected as in (27). The bottom graph shows the normalized absolute error between the two output responses, i.e. |y(t) − Ψ ∞ (t)|/ max(y(t)). We see that the error is fairly small even though the input is not an interpolation signal. Model reduction by moment matching -recalled Similarly to the linear case, to render the paper self-contained we recall the notion of moment for nonlinear systems as presented in Astolfi (2010). Consider a nonlinear, single-input, single-output, continuous-time, system described by the equationṡ with x(t) ∈ R n , u(t) ∈ R, y(t) ∈ R, f and h smooth mappings, a signal generator described by the equationṡ with ω(t) ∈ R v , s and l smooth mappings, and the interconnected In addition, suppose that f (0, 0) = 0, s(0) = 0, l(0) = 0 and h(0) = 0. The following two assumptions play, in the nonlinear framework, the role that Assumptions 1 and 2 play in the linear case. These two assumptions can be used to give a description of steadystate response for the nonlinear system (28). Lemma 4 implies that the interconnected system (30) possesses an invariant manifold described by the equation x = π (ω), which can be used to define the moment for nonlinear systems. On-line moment estimation from data Solving Eq. (31) with respect to the mapping π is a difficult task even when there is perfect knowledge of the dynamics of the system, i.e. the mapping f . When f is not known Eq. (31) can be solved numerically, however this has the additional drawback of requiring information on the state of the system. In practice, this is usually not the case and only the output y is available, with the consequence that also the mapping h has to be known. 
Note that given the exponential stability hypothesis on the system and Lemma 4, the equation where ε(t) is an exponentially decaying signal, holds. We introduce the following assumption. The assumption that the mapping to be approximated can be represented by a family of basis functions is standard, see e.g. Toth (2010). For some families of basis functions, e.g. radial basis functions, there exist results of ''universal'' approximation (Park & Sandberg, 1991;Rocha, 2009). Practically a trial and error procedure can be implemented, for instance starting with the use of a polynomial expansion or with the use of an expansion based on functions belonging to the same class as the ones generated by the signal generator (e.g. sinusoids, for sinusoidal inputs). Thus, let with N ≤ M. Using a weighted sum of basis functions, Eq. (33) can be written as where e(t) =  M N+1 γ j ϕ j (ω(t)) is the error resulting by terminating the summation at N. Consider now the approximation which neglects the error e(t) and the transient error ϵ(t). Let Γ k be an on-line estimate of the matrix Γ computed at T w k , namely computed at the time t k using the last w instants of time t i assuming that e(t) and ϵ(t) are known. Since this is not the case in practice, define  Γ k as the approximation, in the sense of (35), of the estimate Γ k . Finally we can compute this approximation as follows. Theorem 8. Define the time-snapshots  U k ∈ R w×N and  Υ k ∈ R w , with w ≥ ν, as and If  U k is full column rank, then is an approximation of the estimate Γ k . To ensure that the approximation is well-defined for all k, we need that the elements of T w k be selected such that  U ⊤ k  U k is full column rank. This condition expresses a property of persistence of excitation that is guaranteed by the following assumption . Assumption 7. The initial condition ω(0) of system (29) is almost periodic 6 and all the solutions of the system are analytic. In addition, system (29) satisfies the excitation rank condition 7 at ω(0). To ease the notation we introduce the following definition. Definition 5. The estimated moment of system (28) is defined as for all t ∈ R, with  Γ k computed using (38). Note that a nonlinear counterpart of Theorem 7 can also be formulated. The recursive least-square algorithm is obtained with Ω(ω(t k )) playing the role of ω(t k ),  U k playing the role of  R k and )Ω(ω (t k )) ⊤ ) −1 . Similarly, Algorithm 1 can be adapted to the present scenario to determine the approximation  h • π N,k (if the system is linear, Algorithm 1 is recovered selecting N = ν and φ j = ϵ j ). Algorithm 2. Let k be a sufficiently large integer. Select η > 0 sufficiently small. Select w ≥ ν. (40)). Else increase w. If k−w < 0 then restart the algorithm selecting a larger initial then k = k + 1 go to 1. 4: Stop. The convergence of the estimated moment is guaranteed by the next result. Theorem 9. Suppose Assumptions 4-7 hold. Then lim t→∞ Proof. Assumption 7 guarantees that the approximation  Γ k is well-defined for all k, whereas Assumptions 4 and 5 guarantee that Lemma 4 holds and thus that h • π is well-defined. The quantity ∥ε(t k )∥ vanishes exponentially by Assumption 5. Note that if we try to apply the linear algorithm to data generated by a nonlinear system, then the algorithm would not converge. In fact, the algorithm would try to encode in a linear term information regarding higher order terms. Hence,conditions (20) or (21) would not be satisfied. Up to this point we have always considered one trajectory ω(t). 
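Before moving to the case of multiple trajectories, the single-trajectory estimate (38) can be illustrated with a short least-squares sketch (ours, not the paper's implementation). The degree-two polynomial basis and the synthetic data below are placeholder choices: the array omega stands in for sampled generator states and y for the corresponding measured outputs.

```python
import numpy as np

def fit_moment_nl(omega_samples, y_samples, basis):
    """Least-squares fit of the weights gamma_j in the expansion (38).

    omega_samples : (w, nu) array of sampled generator states omega(t_i)
    y_samples     : (w,)    array of measured outputs y(t_i)
    basis         : list of callables phi_j(omega) -> float
    Returns the weight vector gamma and a callable approximating h o pi.
    """
    U = np.array([[phi(w) for phi in basis] for w in omega_samples])     # regressor matrix U_k
    gamma, *_ = np.linalg.lstsq(U, y_samples, rcond=None)
    return gamma, (lambda om: float(np.dot(gamma, [phi(om) for phi in basis])))

# Placeholder basis: polynomials in omega up to degree two.
poly_basis = [lambda w: 1.0,
              lambda w: w[0], lambda w: w[1],
              lambda w: w[0] ** 2, lambda w: w[0] * w[1], lambda w: w[1] ** 2]

rng = np.random.default_rng(0)
omega = rng.uniform(-0.5, 0.5, size=(400, 2))         # stand-in for generator states along one or more trajectories
y = 0.8 + 0.3 * omega[:, 0] - 0.2 * omega[:, 1] ** 2  # noise-free stand-in for the measured output h(pi(omega))
gamma, h_pi = fit_moment_nl(omega, y, poly_basis)
assert abs(h_pi(np.array([0.1, -0.2])) - 0.822) < 1e-6
```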
While this is sufficient in a linear setting, in which local properties are also global, it may be restrictive in the nonlinear setting. To solve this issue, Algorithm 2 can be easily modified to operate with multiple trajectories. To this end, it suffices to implement the algorithm replacing the matrices  U k and  Υ k with the matrices respectively, where  U i k and  Υ i k are the matrices in (36) and (37), respectively, sampled along the trajectory of system (29) starting from the initial condition ω(0) = ω i 0 , with q ≥ 1. We refer to this method as the ''U/Y'' variation. Families of reduced order models Using the approximation given by (39) a reduced order model of system (28) can be defined at each instant of time t k . Definition 6. Consider system (28) and the signal generator (29). Suppose Assumptions 4-7 hold. Then the systeṁ with ξ (t) ∈ R ν , u(t) ∈ R, ψ(t) ∈ R and φ k and κ k smooth mappings, is a model of system (28) at (s, l) at time t k , i.e. system (43) has the same moment of system (28) at (s, l), if the equation has a unique solution p k such that where  h • π N,k (ω) is obtained by (38). The next result is a direct consequence of Definition 6 and Astolfi (2010). Then the system described by the equationṡ is a model of system (28) at (s, l) for all t k . Similarly to the linear case (see Section 2.5), the conditions given in Astolfi (2010) to enforce additional properties upon the reduced order model can be adapted to hold in the present scenario. Moreover, the results of this section can be extended to time-delay systems. In fact, Algorithm 2 can be used to estimate the moments of nonlinear time-delay systems without any modification, see Astolfi (2015c, 2016b). An approximated nonlinear reduced order model of the DC-to-DC Ćuk converter In this section we revisit the example given in Scarciotti and Astolfi (2015c). Therein, using a linear signal generator, a reduced order model of the DC-to-DC Ćuk converter with linear state dynamics has been given. Herein, we obtain a planar reduced order model with nonlinear state dynamics using an input generated by a nonlinear mapping l. Note that since the system is of loworder, the example serves only illustrative purposes. In particular, we provide a proof of principle that the method allows to obtain a nonlinear reduced order model without solving a partial differential equation. The averaged model of the DC-to-DC Ćuk converter is given by the equations (Rodriguez, Ortega, & Astolfi, 2005) where x 1 (t) ∈ R ≥0 and x 3 (t) ∈ R ≤0 describe the averaged currents, x 2 (t) ∈ R ≥0 and x 4 (t) ∈ R ≤0 the averaged voltages, L 1 , C 2 , L 3 , E and G positive parameters and u(t) ∈ (0, 1) a continuous control signal which represents the slew rate of a pulse width modulation circuit used to control the switch position in the converter. In the remaining of the paper we used the numerical values given in Rodriguez et al. (2005) to simulate system (47). Consider the input generated by the equationṡ ω 1 = −75ω 2 ,ω 2 = 75ω 1 , which generates a positive input signal with higher order harmonics. This polynomial approximation fits well for u < 0.8, whereas it does not decrease as fast as the actual output of the system when the input is close to 1: for this value the output of the converter becomes negatively unbounded. This suggests that the following results can be improved if other basis functions are used. The reduced order model is chosen as in Proposition 2, with δ η = 220  The top graph in Fig. 
5 shows the time histories of the output of system (47) (solid/blue line) and of the reduced order model (dotted/red line) for the input sequence represented in the bottom graph. (Simulations showed that polynomial surfaces of higher order are necessary to have a satisfactory approximation in a larger region W.) The input is obtained by switching ω(0) every 0.5 s (the switching times are indicated by solid/gray vertical lines). ω(0) takes, in order, the values (−0.45, −0.45), (−0.25, −0.45), (0.15, 0.05) and (0.5, 0.5). The middle graph in Fig. 5 shows the absolute error (dashed/green line) between the two outputs. We note that the error is mostly due to the poor transient performance. This problem could be alleviated by selecting δ η as a function of ξ. The already small steady-state error could be further reduced with a different selection of basis functions. Conclusion We have presented a theoretical framework and a collection of techniques to obtain reduced order models by moment matching from input/output data for linear and nonlinear systems. The approximations proposed in the paper have been exploited to construct families of reduced order models. We have shown that these models asymptotically match the moments of the unknown system to be reduced. Conditions to enforce additional properties upon the reduced order model have been discussed. The use of the algorithms is illustrated by several examples.
Optimizing Smartphone-Delivered Cognitive Behavioral Therapy for Body Dysmorphic Disorder Using Passive Smartphone Data: Initial Insights From an Open Pilot Trial Background: Smartphone-delivered cognitive behavioral therapy (CBT) is becoming more common, but research on the topic remains in its infancy. Little is known about how people typically engage with smartphone CBT or which engagement and mobility patterns may optimize treatment. Passive smartphone data offer a unique opportunity to gain insight into these knowledge gaps. Objective: This study aimed to examine passive smartphone data across a pilot course of smartphone CBT for body dysmorphic disorder (BDD), a psychiatric illness characterized by a preoccupation with a perceived defect in physical appearance, to inform hypothesis generation and the design of subsequent, larger trials. Methods: A total of 10 adults with primary diagnoses of BDD were recruited nationally and completed telehealth clinician assessments with a reliable evaluator. In a 12-week open pilot trial of smartphone CBT, we initially characterized natural patterns of engagement with the treatment and tested how engagement and mobility patterns across treatment corresponded with treatment response. Results: Most participants interacted briefly and frequently with smartphone-delivered treatment. More frequent app usage ( r =–0.57), as opposed to greater usage duration ( r =–0.084), correlated strongly with response. GPS-detected time at home, a potential digital marker of avoidance, decreased across treatment and correlated moderately with BDD severity ( r =0.49). Conclusions: The sample was small in this pilot study; thus, results should be used to inform the hypotheses and design of subsequent trials. The results provide initial evidence that frequent (even if brief) practice of CBT skills may optimize response to smartphone CBT and that mobility patterns may serve as useful passive markers of symptom severity. This is one of the first studies to examine the value that passively collected sensor data may contribute to understanding and optimizing users’ response to smartphone CBT. With further validation, the results can inform how to enhance smartphone CBT design. Background The supply and demand imbalance between those who need psychological treatment and those who are able to receive it represents a serious public health concern [1,2]. Indeed, only 43.6% of those with psychiatric illnesses in the United States receive treatment and fewer receive gold-standard treatment [2]. Moreover, certain psychiatric illnesses are less well-recognized than others, and under-recognized illnesses likely have the biggest access to care gaps. For example, 35.1% of adults with body dysmorphic disorder (BDD), a psychiatric illness characterized by a preoccupation with a perceived defect in physical appearance [3], receive psychotherapy; only 17.4% with BDD receive the gold-standard cognitive behavioral therapy (CBT) [4], despite strong research demonstrating its efficacy [5][6][7]. Fortunately, the development of smartphone-delivered CBT treatments may help address this access gap. Compared with in-person therapy, smartphone-delivered CBT is less expensive, more widely accessible, and highly flexible (eg, it can be used anywhere and anytime patients have their phones). The potential benefits of smartphone-delivered CBT are compounded by the growth of smartphone ownership. At present, 81% of the US population own a smartphone, a rate that has more than doubled since 2011 [8]. 
Not surprisingly, therefore, there is mounting enthusiasm among clinical researchers for developing and deploying smartphone CBT treatments [9,10]. Despite growing excitement, our understanding of smartphone-delivered CBT remains in its infancy, with a dramatic gap between the number of publicly available mental health apps and the paucity of scientific papers reporting on their evaluation [11]. In particular, very little is known about how people naturally engage with smartphone-delivered CBT compared with traditional in-person treatments, but it is likely that usage patterns differ dramatically. For example, in-person CBT is most commonly administered in once weekly, 50-min sessions, representing a concentrated and infrequent format. Next-generation internet-based CBT (ICBT) treatments, which have garnered substantial empirical support [12,13], are often built to mimic this style of longer duration, spaced out, formalized sessions because they were designed to be completed on one's home computer. In both traditional CBT and ICBT, patients are instructed to practice skills between sessions to reinforce learning within real-world settings. Whereas the practice of skills between sessions has been associated with better CBT outcomes [14][15][16], many patients struggle to practice skills on their own between sessions. On the other hand, because people carry their phones at most times, smartphone-delivered treatments can be accessed by users at nearly any time and place. Having smartphone-delivered support available at all times may encourage practicing CBT skills with greater frequency and in a wider variety of settings than traditional in-person CBT, potentially opening doors to highly distinct engagement patterns. However, to date, we know very little about how often, for how long, or where people naturally engage with smartphone-delivered CBT treatments. Moreover, very little is known about which engagement patterns correspond with an optimal response to smartphone-delivered CBT. Understanding optimal engagement patterns can allow for the design of more potent treatments by seeking to promote the most effective patterns of CBT app use. For example, gaining information about whether one's frequency of use or duration of use matters more in terms of treatment response can inform whether apps should be designed to promote bursts of brief engagement or longer, less frequent sessions. Finally, little is currently known about how the mobility patterns of patients change over the course of smartphone-delivered CBT. Previous research suggests that time spent at home, measured via a GPS, can serve as a digital marker of avoidance [17] and may correlate with symptom severity in depressive disorders [18]. Therefore, obtaining initial information about how mobility patterns change across smartphone treatment, and how these changes correspond with changes in severity, can inform treatment optimization by passively detecting changes in severity and triggering just-in-time interventions. Altogether, in the field's current, early stage of developing smartphone-delivered CBT treatments, we can benefit from examining pilot engagement and mobility data, to shape how we design optimal digital services and their clinical trials in the future. 
Smartphones offer a unique avenue for gaining rich insights into patterns of treatment engagement and predictors of treatment response because smartphones can unobtrusively (ie, in the background, without user input) collect a wide variety of sensor-based data over the course of treatment. For example, with patient consent, smartphones can be configured to passively collect objective information about patients' engagement with the app (ie, how often and for how long patients use the program) as well as patients' behavioral patterns over the course of treatment (eg, where patients typically use the app, changes in mobility patterns across treatment, via GPS). Passive data offer notable strengths for learning how to optimize smartphone-delivered treatments compared with more traditional assessment methods such as clinician interviews and self-reports. Passive smartphone data are sampled at a far greater frequency than traditional clinical assessments, which, at most, might be administered weekly. Frequent assessment that is conducted as one lives daily life captures richer contextual information, has higher temporal resolution to detect changes in symptoms or severity, and reduces the influence of recall biases that arise from subjective recollection of experiences over a broad time frame [19]. Altogether, passive smartphone data can offer valuable, low-burden insights into patterns of treatment engagement and digital markers of progress or deterioration, to optimize future design and research of smartphone-delivered treatments [20]. Objectives To this end, this study exploratorily examines passive smartphone data from a 12-week open pilot trial of a smartphone-delivered CBT (Perspectives) for patients with BDD (N=10) to inform the study design, variables of interest, and hypothesis generation for future trials of smartphone-delivered CBT services. First, we aimed to initially characterize typical patterns of engagement with smartphone-delivered CBT for BDD in our sample, to obtain a preliminary understanding of how engagement may be similar to or different from participation in traditional in-person CBT. Second, we aimed to initially test how patterns of engagement corresponded with treatment response to inform early hypotheses about how we may design apps to optimize engagement and response. Third, we aimed to initially characterize the mobility patterns of participants across treatment, to preliminarily test whether GPS-based mobility patterns could serve as a digital marker of disorder severity. If validated in larger trials, digital markers of severity could be used to enhance treatments by triggering just-in-time interventions. Participants and Recruitment A paper by Wilhelm et al [21] gives detailed information on study methods, including a Consolidated Standards of Reporting Trials diagram, participant demographic information, and a description of the smartphone-delivered CBT for BDD treatment (ClinicalTrials.gov Identifier: NCT03221738). A total of 10 adults with a primary psychiatric diagnosis of BDD were enrolled nationally in the open pilot trial (female: n=8, male: n=2; mean age 27.6, SD 5.66 years). Other inclusion criteria required that participants had at least moderately severe BDD symptoms (defined as a Yale-Brown obsessive compulsive scale modified for BDD [BDD-YBOCS] score >20), an acuity level appropriate for an outpatient level of care and lived in the United States. 
Exclusion criteria prohibited participation if the individual had a current severe major depressive disorder; borderline personality disorder; substance use disorder or acute, active suicidal ideation; had a lifetime diagnosis of bipolar disorder or a psychotic disorder; had cognitive impairment or intellectual disability that would interfere with participation; had engaged in previous CBT for BDD, or did not own an iPhone that supported the app software. Participants were either unmedicated or those on medication were required to be on a stable dose for at least two months before starting the study and were instructed not to change their medication regimen during the trial. Procedures Procedures were approved by the hospital's institutional review board, and participants provided informed consent before beginning the study. Informed consent included a description of each type of passive smartphone data to be collected, a description of how those data were securely transmitted and deidentified before storage, the rationale for collecting those data, and a description of who would have access to the data. Assessments Clinical assessments were conducted by reliable, independent evaluators with a Master's degree or doctorate, who were trained in primary diagnostic and outcome measures. Assessments for this study were conducted at the screening and baseline (same visit; week 0), midpoint (week 6), and posttreatment (week 12) assessments, and participants were compensated US $25 for completing the week 6 and week 12 assessments. Clinician-administered measures were collected via secure video calls that were Health Insurance Portability and Accountability Act (HIPAA) compliant. Self-report data were collected via Research Electronic Data Capture [22], a secure, HIPAA-compliant web-based survey collection platform. In addition to providing clinical and outcome data, participants also provided qualitative feedback on the CBT app at several time points across the study. Specifically, written feedback was collected at the posttreatment assessment; oral feedback was gathered by members of the design team via separate interviews conducted shortly after the baseline, midpoint, and posttreatment clinical assessments. Treatment Following the screening and baseline assessment, the study staff instructed eligible participants on how to download and activate the Perspectives app onto their personal smartphones. The 12-week treatment consisted of psychoeducation and self-paced interactive exercises presented in a fixed order, which taught each of the core CBT skills for BDD (ie, cognitive restructuring, exposure with ritual prevention, mindfulness and perceptual retraining, core beliefs and self-esteem, engagement in value-based activities, and relapse prevention). The treatment was delivered via the smartphone app and was supported by light-touch communication with a doctoral-level therapist, whose primary role was to enhance motivation, address roadblocks, and answer questions [21]. Note that in this trial, Perspectives was developed for iPhones only; in 2018, iPhone operating systems represented approximately 44% of smartphones in the United States [23]. Passive Smartphone Data Collection Perspectives was configured to passively collect information about app usage and mobility patterns of participants via GPS (the Measures section gives further details). We chose to collect these 2 types of passive data based on previous literature that points to their utility. 
In particular, app usage data may offer valuable insights into which engagement patterns are optimal for promoting treatment response [20], whereas mobility patterns from GPS can detect the proportion of time spent at home, a potential digital marker of avoidance [17]. As BDD is characterized by substantial avoidance (including housebound avoidance) [24], mobility patterns, therefore, have the potential to passively detect signs of symptom severity. By carefully selecting data categories and sampling rates (by default, the location was sampled whenever location changed by at least 100 m), the app was optimized to balance battery life and allowance of natural phone use. To this end, no participants complained about battery problems during the study. Clinical Assessments The Mini-International Neuropsychiatric Interview (version 7.0.2) [25] is a semistructured, clinician-administered diagnostic assessment of psychiatric illnesses. It was administered at the screening assessment to evaluate the inclusion and exclusion criteria. The BDD-YBOCS [26] is a semistructured, clinician-administered, gold-standard assessment of current BDD symptom severity. The BDD-YBOCS is a 12-item Likert scale. Total scores range from 0 to 48, with higher scores corresponding to greater BDD severity. The BDD-YBOCS has strong psychometric properties, including internal consistency, interrater reliability, and test-retest reliability [26,27]. The BDD-YBOCS was administered at each assessment to evaluate the eligibility criteria (at screening) and changes in BDD severity. Percentage improvement in severity, a primary outcome in this study, was computed by dividing the difference between baseline and posttreatment (week 12) BDD-YBOCS scores by the baseline value. Passive Smartphone Features To quantify and analyze the patterns of engagement with Perspectives and mobility across treatment, we computed several variables based on passive smartphone data. Quantity of App Use The quantity of app use was calculated as the total duration in minutes that a participant used the app. This was calculated by adding together all app sessions, or the periods of on-app time devoted to the therapy. Before analyses, together with designers of the Perspectives app, we considered various cutoff points for outliers in session length. Taking into account the possibility that participants might occasionally engage in multiple longer components of the app in sequence (eg, a mindfulness audio exercise, responding to coach messages, and completing an exposure exercise), we decided a priori on a session length cutoff of approximately 60 min, and outliers beyond this length were removed. To account for bursty usage (ie, multiple brief usages separated by short breaks of <60 min in between), app usages that were separated by <60 min were summed together into a single session. For example, a participant who used the app for two 10-min increments with a 5-min break in between would be logged as having one 20-min session during this span. Quantity of app use was computed for the first half (6 weeks) and for the full 12 weeks of the CBT program ( Table 1). Frequency of App Use This metric measured the extent to which a participant tended to use the app frequently or infrequently, expressed as the mean duration between 2 consecutive sessions, or periods of uninterrupted use. Frequency of app use was computed for the first half (6 weeks) and the full 12 weeks of the CBT program (Table 1). 
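To make the two usage features concrete, the following Python sketch implements one plausible reading of the session rules described above (merging usages separated by less than 60 minutes, summing on-app time within a burst, and dropping sessions longer than about 60 minutes). The function and field names are our assumptions; the trial's actual processing pipeline may have differed in detail.

```python
from datetime import timedelta

MERGE_GAP = timedelta(minutes=60)      # usages separated by < 60 min are merged into one session
MAX_SESSION = timedelta(minutes=60)    # a priori cutoff used to drop outlier sessions

def usage_features(events):
    """Compute quantity and frequency of app use from raw usage events.

    events : list of (start, end) datetime pairs, sorted by start time
    Returns (total_minutes, mean_gap_hours): total on-app time across retained
    sessions, and the mean time between consecutive sessions.
    """
    sessions = []                                   # each item: [first_start, last_end, on_app_seconds]
    for start, end in events:
        duration = (end - start).total_seconds()
        if sessions and start - sessions[-1][1] < MERGE_GAP:
            sessions[-1][1] = end                   # burst continues: extend the session boundary
            sessions[-1][2] += duration             # ...but only on-app time is counted, not the break
        else:
            sessions.append([start, end, duration])
    sessions = [s for s in sessions if s[2] <= MAX_SESSION.total_seconds()]
    total_minutes = sum(s[2] for s in sessions) / 60.0
    gaps = [(sessions[i + 1][0] - sessions[i][1]).total_seconds() / 3600.0
            for i in range(len(sessions) - 1)]
    mean_gap_hours = sum(gaps) / len(gaps) if gaps else float("nan")
    return total_minutes, mean_gap_hours
```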
Mobility Patterns Using GPS data, we calculated the percentage of time spent at home during 1-week time intervals that overlapped with baseline, midpoint, and posttreatment BDD-YBOCS assessments (including 3 days before, 3 days after, and the day of BDD-YBOCS administration). Of note, at baseline, the BDD-YBOCS was typically administered on the same day the app was installed. Therefore, GPS data were not generally available for the 3 days before the baseline BDD-YBOCS assessment. Home location was inferred as the most common location ID captured between 3 AM and 6 AM per individual. All the remaining location IDs were labeled as outside of home. The various locations of participants were collected in a privacy-preserving way; each location where a participant spent at least 30 min was assigned a unique and random location ID (eg, ID78) and stored in the logs. This procedure was performed locally on the phone, and raw locations were removed before transferring the data to the server. The GPS sampling rate was set to 15 min, yet GPS readings were missing for 60% of the days. Statistical Analyses Data were analyzed using Python 3.6 (Python Software Foundation). Descriptive Patterns of App Usage To characterize the overall patterns of app usage, we visually inspected longitudinal patterns of usage by the participants across the 12-week treatment and we calculated the number of times the participants were engaged with the app for different lengths of time (ie, session durations). We elected not to identify subsamples based on usage (ie, clusters of users with similar engagement patterns) either visually or quantitatively, because of the small sample size. App Usage Patterns as Correlates of Percentage Improvement in the Yale-Brown Obsessive Compulsive Scale Modified for Body Dysmorphic Disorder To examine how the patterns of engagement of participants with Perspectives corresponded with their percentage improvement in BDD severity, we focused on 2 types of app usage patterns: quantity of app usage and frequency of app usage across the treatment. Normality was tested using the Shapiro-Wilk test and visual inspection. As the frequency of app use variable followed a long-tail distribution, log-transformation was performed before the analysis. Two bivariate correlations were conducted, to preliminarily explore the relationships between the variables measuring (a) quantity and (b) frequency of app usage with percentage improvement in BDD-YBOCS from the baseline to week 12. Next, to initially examine the relative effect of quantity versus frequency of app use, a regression analysis of percent improvement was conducted, with both quantity and frequency of app use as independent variables. We primarily evaluated effect sizes, as opposed to statistical significance, for correlation and regression analyses, given the pilot nature of the data. GPS Data as a Correlate of the Yale-Brown Obsessive Compulsive Scale Modified for Body Dysmorphic Disorder Scores The relationship between symptom severity and mobility was explored via a bivariate correlation between BDD-YBOCS scores and the percentage of time spent at home during the week the BDD-YBOCS was measured. Note that absolute BDD-YBOCS scores were used for this analysis instead of percentage improvement, given the goal of exploring the predictive power of a GPS marker in assessing the current acuity of participants. 
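As an illustration, the time-at-home fraction used in this correlation can be derived from the anonymized location IDs roughly as sketched below (Python; the field names are our assumptions, and the study's actual pipeline additionally handled on-device anonymization and missing GPS days):

```python
from collections import Counter

def percent_time_at_home(samples):
    """Fraction of GPS samples recorded at the inferred home location.

    samples : list of (timestamp, location_id) pairs for one participant over one week,
              collected at a roughly regular sampling interval.
    Home is taken to be the most common location ID observed between 3 AM and 6 AM.
    """
    if not samples:
        return float("nan")
    night = [loc for ts, loc in samples if 3 <= ts.hour < 6]
    if not night:
        return float("nan")
    home_id = Counter(night).most_common(1)[0][0]
    return sum(loc == home_id for _, loc in samples) / len(samples)
```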
The correlation analysis included 30 pairs of location variables and BDD-YBOCS scores (ie, 3 per participant, at baseline, midpoint, and posttreatment); thus, each participant was equally represented in the correlation analysis. Given that this analysis included multiple time points per participant, we followed up with a secondary analysis to verify that the results were not inflated based on the longitudinal nature of the data. Namely, 6000 correlation analyses were run by randomly selecting 1 of the 3 time points per participant (pre-, mid-, or posttreatment). This approach resulted in a very similar median correlation value to the analysis with 3 time points per participant; thus, secondary results are not presented. Again, we primarily evaluated the effect size, as opposed to statistical significance, for this correlation analysis, to best account for the pilot nature of the study. Results Wilhelm et al [21] report the feasibility and acceptability of Perspectives, as well as the symptom improvement from baseline to posttreatment. Descriptive Patterns of App Usage We visually examined the longitudinal patterns of engagement with Perspectives across the 12-week treatment ( Figure 1). Overall, app usage showed a great deal of variety between participants, in terms of the total duration of use (mean duration 398 min, SD 310 min; range 53 to 913 min), the number of days used (mean 30 days, SD 16 days; range 8 to 64 days), and the length of time between consecutive app uses. This variety was also reflected in the qualitative descriptions of how participants used the app. Whereas some participants described using the app daily (eg, "in the evenings every day -I am not a morning person" and "when at my desk 30 minutes a day"), others engaged with it less frequently (eg, "usually once or several times a week"). Despite the diversity in app usage across participants, several common usage patterns also emerged. First, most participants used the app with higher and lower intensities in the first and last weeks of the treatment, respectively (Figure 1). Additionally, data from both the app usage logs and GPS revealed that-within participants-participants generally preferred using the app at home over the first 8 weeks (1040/1488, 69.90% at home on average). During the ninth and tenth weeks, the proportion of app use at home and outside of home became more evenly distributed (60/105, 56.9% at home), and in the final 2 weeks of treatment, participants predominantly used the app outside of home (with only 16/98, 17% at home). Moreover, unlike in-person therapy, most interactions with Perspectives were frequent ( Figure 1) and very brief (Multimedia Appendix 1). The majority (374/510, 73.3%) of app sessions lasted ≤5 min. In only 11.7% (60/510) of cases, the app was used in sessions lasting 5 to 10 min, followed by 7.1% (36/510) and 6.4% (33/510) of cases in which the content was accessed for 10 to 20 or 20 to 40 min, respectively. Longer app usage was registered in only 1.4% (7/510) of the sessions. This pattern of brief engagement is consistent with how participants described their app usage in qualitative feedback. For instance, participants reported that they used the app during "dead time" while waiting (eg, in line at the store) or "for a few minutes each day to keep the lessons in mind" and described the app as "fast and easy to fit into your busy schedule." 
App Usage Patterns as Correlates of Percentage Improvement in the Yale-Brown Obsessive Compulsive Scale Modified for Body Dysmorphic Disorder Means, standard deviations, and bivariate correlations of the percentage improvement in the BDD-YBOCS, the quantity of app usage, and frequency of app usage are provided in Table 1. Table 1. Descriptive statistics and correlations between patterns of engagement with smartphone-delivered cognitive behavioral therapy for body dysmorphic disorder and treatment response. The quantity of app usage was uncorrelated with percentage improvement in the BDD-YBOCS, whereas the frequency of app usage correlated strongly with treatment response and trended toward significance. The strong, negative relationship between mean (log) length of breaks between sessions (ie, frequency of app use) and improvement in the BDD-YBOCS initially suggests that shorter breaks between sessions corresponded with greater improvements (Table 1). To follow up on patterns elucidated in bivariate correlations, we used regression analysis to preliminarily examine whether the frequency of app usage corresponded with treatment response more so than the quantity of app usage. When the primary outcome (percentage improvement in the BDD-YBOCS) was entered as a dependent variable, the frequency of app usage (ie, mean (log) duration between 2 consecutive sessions) predicted percentage improvement in the BDD-YBOCS with a small effect (beta=-0.13; P=.03; 95% CI -0.231 to -0.019), whereas the total quantity of app usage during the 12-week treatment did not predict improvement in the BDD-YBOCS (beta=-0.08; P=.13; 95% CI -0.184 to 0.027). GPS Data as a Correlate of the Yale-Brown Obsessive Compulsive Scale Modified for Body Dysmorphic Disorder Scores We used a scatterplot to visually inspect the relationship between time spent at home (based on GPS data) and symptom severity (measured with the BDD-YBOCS; Figure 2). The plot indicates that a shift occurred from baseline to posttreatment, characterized by a corresponding decrease in time spent at home and symptom severity. A follow-up correlational analysis suggests a moderately strong association between time spent at home and BDD symptom severity (r=0.49; P=.005). Principal Findings Although enthusiasm for smartphone-delivered CBT is growing rapidly, there has not yet been substantial research on ways to enhance smartphone treatment. Before the widespread development and deployment of smartphone CBT treatments, it is important to first examine pilot data that characterizes the natural engagement patterns of users with smartphone-delivered CBT and identifies which usage and mobility patterns may optimize treatment. Such pilot data will provide timely information to researchers about variables and hypotheses of focus, in advance of larger, more costly validation trials, and can elucidate how we may explore enhancing smartphone-delivered CBT for optimum response in larger trials. In particular, passively collected usage and sensor data from smartphones offer a unique, low-burden approach for gaining these important insights. Although a variety of passive data (eg, typing speed, activity level, phone usage, acoustic level) can be collected by smartphones [28], collecting sensor data involves a trade-off between gaining potentially useful information and depleting phone battery life (as well as risking user trust when collecting unnecessary data). 
Thus, initial signals from pilot research can shed light on which variables may be more or less fruitful to collect in clinical trials. To this end, this study used passive data from an open pilot trial of smartphone-delivered CBT for BDD, with the aim of preliminarily (1) characterizing the patterns of app usage of participants, (2) examining usage patterns that correspond with treatment response, and (3) examining mobility patterns that correspond with symptom severity. Although app usage patterns varied substantially across participants, visual examination and descriptive analysis of usage data revealed several common patterns of engagement in our sample. First, participants tended to use the app more frequently and for a greater overall duration at the beginning of the 12-week treatment, with considerably lower usage later in treatment. This result is not surprising and may reflect that early on, participants required more time on the app to learn new information and skills. Later in the treatment, the participants may have transitioned to practicing greater applied skills, offline and in the real world [29]. In fact, qualitative feedback reflects that once participants learn skills, they practice them offline. For example, one participant reported, "I use the exercises all the time without the app. I have the big picture view of what I am trying to do." Learning to use the treatment skills offline is likely an effective way to engage with smartphone-delivered CBT over time, as ultimately (like with in-person CBT), we hope for patients to internalize skills well enough to use them naturally as symptoms arise. Similarly, the results could reflect that participants simply received the necessary dose of treatment in a shorter time than the allotted 12 weeks [29]. On the other hand, lower usage at the end of treatment may reflect drops in engagement unrelated to CBT mastery (eg, because of boredom, lack of new content, loss of motivation). One participant's posttreatment qualitative feedback supports this hypothesis; the participant reported that toward the end of the 12 weeks, there was less new material, and the participant was therefore not on the app as often. Reduced engagement over time is a very common challenge for app-based treatments [30]. Additional research is needed to fully understand the reasons for the reductions in app usage over time. Second, descriptive results highlighted that participants typically used the app at home during the first two-thirds of treatment; later, the participants tended to use the app more when out of the house. This within-person pattern of increased usage outside of the house over time is consistent with the hypothesis that as participants gained CBT skills across treatment, they may have transitioned to using those skills offline and in the real world. Finally, we observed that overall, the participants tended to use the app in brief and frequent sessions. In fact, most app sessions lasted <5 min each. This pattern reflects the way in which most people use smartphones in general: engaging with them often during short moments of downtime throughout the day [31]. This pattern also aligns with how we designed the app to be used. That is, we intentionally pared down content into brief text and exercises that could be completed quickly and repeated as often as one wished. On the other hand, this pattern of brief and frequent sessions is notably distinct from how patients engage with face-to-face CBT or ICBT. 
Given the distinctive pattern of engagement we observed compared with better-established CBT modalities, it is critical to examine whether the naturally brief usage patterns of participants with smartphone-delivered CBT are effective or whether longer sessions are needed for response. Interestingly, preliminary correlation and regression results suggest that more frequent app usage, as opposed to greater duration of app usage, correlated strongly with treatment response-and trended toward statistical significance-in our (albeit small) sample. Consistent with these results, a previous review showed that overall time spent on web-based treatments for depression does not typically correlate with response to treatment [32]. In line with the aforementioned hypothesis that participants often practiced skills offline once learned, it is possible that the total duration of app usage does not fully capture the time participants spent engaging in treatment skills. Altogether, the results provide early, novel evidence that frequent (even if brief) practice of CBT skills may optimize the smartphone-delivered CBT response. It is possible that frequent doses of practice help with learning CBT skills, as regular reinforcement of skills across broad contexts may enhance consolidation and generalization [33]. Researchers who are in the process of designing clinical trials to test smartphone-delivered CBT should consider collecting both quantity and frequency usage metrics to further validate optimum usage patterns. If validated in subsequent trials, the results have implications for the design of smartphone-delivered CBT. For example, findings suggest that information should be provided in brief chunks, as opposed to packing long, self-help-style psychoeducation into smartphone-delivered treatments. Moreover, it may be beneficial to design apps that are discreet, to promote frequent app use not only at home but also as symptoms arise in day-to-day life. App design can actively promote frequent use by incorporating reminders or rewards for use, in addition to including instructions to engage with the app often. Future research could test these design strategies using experimental designs to investigate which are effective for promoting frequent use. In addition to usage patterns, we also examined mobility patterns from GPS data that correspond to BDD severity. Preliminary results showed that across treatment, the proportion of time spent at home-a potential digital marker of avoidance [17]-decreased. Time spent at home correlated positively with BDD severity across treatment, with a medium-to-large effect. Whether the proportion of time at home is truly tapping into avoidance behaviors (versus other aspects of BDD severity) is speculative and requires validation through future research. This is the first study to examine the time spent at home in relation to BDD severity. Whereas previous research has documented a link between time spent at home and depressive symptoms [18], because of the small sample, we did not examine this relationship when controlling for depression severity. However, as depression severity did not decrease across treatment in this sample [21], it is unlikely that the observed link is better accounted for by changes in depressive symptoms. Future research in a larger trial could parse apart the degree to which time spent at home serves as a digital marker of depressive versus BDD severity. 
Altogether, strong initial GPS results underscore one variable for which the gains of data collection may outweigh the costs; researchers designing upcoming smartphone-delivered CBT trials should consider measuring time spent at home to further validate this potential unobtrusive marker of clinical severity. With further validation, detecting changes in one's time spent at home could enhance smartphone-delivered CBT by unobtrusively triggering just-in-time interventions, a promising yet underdeveloped area of research [34]. For example, upon detecting increases in time spent at home, smartphone-delivered CBT treatments could send notifications to the user that reflect this observation (eg, "It looks like you've been spending more time at home") and suggest adaptive strategies (eg, "Would you like to schedule an activity with a friend?"). Moreover, in larger trials, researchers can explore the utility of applying machine learning methods to predict changes in BDD severity from GPS-derived time spent at home.

Limitations
Results from this study should be interpreted bearing in mind its limitations. Most notably, this pilot study had a small sample size. Thus, it is possible for 1 or 2 participants' outlying usage patterns to unduly influence the results. That said, Kazdin [35] outlines a strong rationale for the ability to meaningfully examine data from small samples when data are collected at multiple time points across the treatment. Given the small sample size, we limited the scope of our aims and analyses to an exploratory examination of select patterns of interest, and we focused on robust effects that may indicate meaningful signals to follow up. Follow-up in a larger sample would provide an opportunity to reliably test for statistical significance. To this end, the results are intended to hone researchers' decisions (eg, variables and hypotheses of focus) in advance of larger, more costly clinical trials of smartphone-delivered CBT treatments rather than to provide conclusive evidence in and of themselves. In addition to the small sample, this pilot trial focused specifically on smartphone-delivered CBT for BDD. It is possible that the insights will not generalize to smartphone-delivered CBT treatments for other disorders. However, given the core similarities between CBT for BDD and CBT for many other psychiatric conditions, such as anxiety disorders, obsessive-compulsive-related disorders, and eating disorders, we anticipate that the findings will be relevant to the design of smartphone-delivered CBT treatments for related conditions. Finally, our strong initial GPS results should be interpreted bearing in mind the high degree of missing GPS data (683/1134, 60.23% of the days) in our sample. Although the specific reasons for missing GPS data in our study are unknown, a high rate of missing geolocation data in mobile research is typical (eg, ranging from 40% to 90% missing) [36][37][38][39] and may be attributed to a range of factors, including participants switching off the device, participants activating a mode that does not permit location services (eg, airplane mode), or problems with permission to access the location sensor that can occur with the iPhone platform [36]. Importantly, missing GPS data in our study did not correlate with participants' BDD symptom severity and therefore were likely random with respect to BDD symptoms. Thus, it is unlikely that patterns of missingness meaningfully influenced this correlation result.
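Returning to the just-in-time intervention idea raised earlier in this section, a toy trigger rule might compare recent time-at-home against a personal baseline and surface a supportive prompt when it rises markedly. The window length, threshold, and message below are invented for illustration and do not describe a validated or deployed rule.

```python
# Toy illustration (invented thresholds, not a validated rule): flag a possible
# increase in avoidance when recent time-at-home rises well above a personal baseline.
import pandas as pd

def jitai_trigger(daily_home_prop: pd.Series, window: int = 7, rise: float = 0.15) -> bool:
    """daily_home_prop: one value per day (0-1). Trigger if the mean of the last
    `window` days exceeds the mean of the preceding `window` days by at least `rise`."""
    if len(daily_home_prop) < 2 * window:
        return False  # not enough history to compare
    recent = daily_home_prop.iloc[-window:].mean()
    baseline = daily_home_prop.iloc[-2 * window:-window].mean()
    return (recent - baseline) >= rise

series = pd.Series([0.55] * 7 + [0.75] * 7)  # a week at baseline, then a marked rise
if jitai_trigger(series):
    print("It looks like you've been spending more time at home. "
          "Would you like to schedule an activity with a friend?")
```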
As with other results in this pilot study, these initial findings should be used for hypothesis generation at this stage.

Conclusions
This study also had several notable strengths. First, whereas many existing smartphone-delivered CBT trials use nonclinical or convenience samples, we used a clinical sample that was diagnosed and assessed via gold-standard, clinician-administered measures. Participants were recruited nationally, which may enhance the generalizability of our initial findings. Finally, the correlation results for app usage and GPS patterns were robust despite our small sample, suggesting that these novel insights have strong potential to enhance costly, well-powered future trials. Altogether, the results suggest that as researchers design efficacy trials to test smartphone-delivered CBT, it is worthwhile to collect data on patterns of use (with a focus on frequency versus quantity of use) and time spent at home. These novel study results suggest that these variables may correspond meaningfully with the response to treatment and, with further validation, may inform how to enhance smartphone-delivered CBT interventions.

HW and AM wrote the manuscript. AM and RC conducted the statistical analyses. All authors read, edited, and approved the manuscript before submission. This work was supported by Telefonica Innovation Alpha.

Conflicts of Interest
The sponsor and investigators from Massachusetts General Hospital collaborated in the development of the Perspectives digital service, and they collaborated on the analysis and writing of this manuscript. HW, JG, and SW received salary support from Telefonica Innovation Alpha and are presenters for the Massachusetts General Hospital Psychiatry Academy in educational programs supported through independent medical education grants from pharmaceutical companies. SW has received royalties from Elsevier Publications, Guilford Publications, New Harbinger Publications, and Oxford University Press. SW has also received speaking honoraria from various academic institutions and foundations, including the International Obsessive Compulsive Disorder Foundation and the Tourette Association of America. In addition, SW received payment from the Association for Behavioral and Cognitive Therapies for her role as Associate Editor for the Behavior Therapy journal as well as from John Wiley & Sons, Inc for her role as Associate Editor on the journal Depression & Anxiety. OH, RG, and AM are employees of Telefonica Innovation Alpha.
A user satisfaction model for mobile government services: a literature review

User satisfaction is essential for the success of an organisation. With the development of government service delivery through mobile platforms, a compatible measurement model must be found to measure user satisfaction with performing such services through a mobile government portal. Measuring user satisfaction with mobile government services is necessary nowadays due to the increasing popularity of smart devices. Research on mGovernment users' satisfaction is lacking, leading to difficulties in understanding users' expectations. In the present study, a systematic literature review was used to analyse user satisfaction with mGovernment portals and to propose a comprehensive model compatible with such contexts. The results show that government agencies can evaluate user satisfaction using the proposed model of six quality constructs: usability, interaction, consistency, information, accessibility, and privacy and security. The study recommends regularly improving the evaluation strategies of mGovernment portals to ensure they remain fit for current challenges. Measuring user satisfaction with mGovernment services encourages users to perform transactions through such online platforms, advancing the digitalization process and reducing cost and effort for both the service provider and end-users.

INTRODUCTION
Customer satisfaction is a fundamental approach to quality management. Identifying the needs and expectations of different customer segments forms the basis for obtaining satisfied customers. Therefore, analysing customer satisfaction is a critical element in understanding the quality of organisations' products and services, so that product and service characteristics can be adjusted to the quality demanded. Gundersen (2004) defined customer satisfaction as a post-consumption assessment decision taken by the customer in relation to a product or service. This evaluation is based on comparing the "customer's prepurchase expectation with the perception of performance during and after the consumption experience" (Oliver, 1980). However, the development of information and communication technologies (ICT) has influenced how organisations deal with clients. Unlike traditional businesses, online organisations create virtual communication between service providers and clients through their websites or smart devices connected to the internet. Mobile government (mGovernment) service is a technology used nowadays by government agencies to deliver government services to the public through mobile applications. MGovernment extends e-Government to create an attractive and smart environment between the government service provider and the public (Chanana, Agrawal & Punia, 2016). Government agencies worldwide deliver online services through mobile devices, but the success or failure of such services depends on user satisfaction. Online services differ from the offline environment in that they create different experiences (Verma, Chaurasia & Bhattacharyya, 2019), so applying offline customer satisfaction models to online platforms produces inaccurate results. Measuring the service quality of mGovernment is still at an early stage of investigation by academic researchers. Compatible service quality measurement scales that target the area of mGovernment are lacking, which causes difficulties in understanding the behaviours and expectations of users.
Previous studies have discussed the concept of customer satisfaction in electronic settings (e.g., website retailing, electronic government services) by analysing the concept, identifying the constructs, and proposing models for each case. The absence of customer satisfaction analysis in the field of mGovernment services leads government agencies to use scales that are not compatible with this smart environment, which causes inaccurate analysis and a weak understanding of end-users. This article aims to propose a measurement model for customer satisfaction with mGovernment services. To that end, the theoretical base models of customer satisfaction are reviewed to give researchers a clear view of the main elements associated with the concept. A review of previous studies on online customer satisfaction has been conducted to enhance the current research by elaborating on customer satisfaction constructs in the online service environment. Since the targeted area is mGovernment services, the present study considers the unique characteristics of mobile services, which guide the identification of criteria for evaluating customer satisfaction with mGovernment service portals. The outcome of this research encourages online government agencies to understand user satisfaction and conduct regular updates of such services to meet users' needs. Therefore, the main research question is: what measurement model is appropriate for measuring customer satisfaction with mGovernment services?

Literature review

The concept of offline customer satisfaction
Previous literature has defined the satisfaction concept from different approaches, based on cognitive and practical aspects that reflect the specific character of the transaction. Varying definitions within this research scope make it difficult for researchers to analyse the origin of the satisfaction concept, develop measurement scales, or critique empirical results (Giese & Cote, 2000). The disconfirmation theory is considered to deal with this confusion, as it is a proper technique for simplicity of operationalisation. This approach, located in the cognitive perspective, implies that satisfaction results from comparing performance with related standards (Oliver, 1997). The study of customer satisfaction has been associated with marketing research and practice since Cardozo's (1965) work on customers' efforts, expectations, and satisfaction. Since that time, many attempts have been made to explain and measure customer satisfaction, but to date, no standard definition of the concept has been agreed upon among researchers (Park et al., 2019; Dianat et al., 2019; Sandro et al., 2019). One definition of customer satisfaction, stated by Gundersen, Heide & Olsson (1996, 74), is a "post-consumption evaluative judgment concerning a specific product or service." Achieving customer satisfaction can benefit organisations, for example when customers return to buy the product or service again; the organisation thereby obtains customer loyalty and creates the possibility of future dealings with the service provider (Caggiani et al., 2018). When a customer is satisfied with an organisation, the positive experience is shared with the customer's relatives and friends. The satisfied customer thus generates positive value that translates into a favourable position and profit in the market (Tzavlopoulos et al., 2019).
Customer satisfaction is built on three main elements: (i) perceived performance, (ii) expectations, and (iii) satisfaction level. The first element, perceived performance, refers to the value the client perceives they have obtained after acquiring a product or service; in other words, it is the result that the customer perceives from the product or service. The second element, expectations, comprises the hopes that customers hold due to one or more situations, such as promises made by organisations, previous experiences, opinions of others, and competitors' guarantees. The third element is the level of satisfaction that is experienced after purchasing the product or service (Hu, Kandampully & Juwaheer, 2009). Although customer satisfaction is a metric that helps organisations ensure their products or services meet or exceed consumer expectations (Othman, Hamzah & Abu Hassan, 2020), organisations must also recognise how managing these values helps to manage and improve the business (Mannan et al., 2019; Tzavlopoulos et al., 2019).

Electronic customer satisfaction
The criteria for defining the concept of electronic customer satisfaction through online platforms are grounded in traditional business. One definition of e-customer satisfaction is "the contentment of the customer concerning his or her prior purchasing experience with a given electronic commerce firm" (Anderson & Srinivasan, 2003, 125). A study by Novak, Hoffman & Yung (2003) defined the concept of e-customer satisfaction as a "cognitive state experienced during navigation" (p. 22), while other studies define it as a psychological state that is constructed through online interaction with the website (Rose et al., 2012, 309). Trevinal & Stenger (2014) described e-customer satisfaction from a shopping practice viewpoint and defined it as the interaction process that arises between customers, shopping practice tools, and online portals. As these definitions indicate, electronic customer satisfaction is constructed on the emotional aspect of the user's interaction with online portals. Previous research has proposed measurement scales for electronic customer satisfaction by identifying scale dimensions associated with online website features. A standard dimension across these scales is "ease of use," which describes a customer's ability to perform the online transaction with few difficulties. The dimension appears under the exact name "ease of use" in the scales proposed by Reynolds (2011), Tang & Wang (2004), and Cho & Park (2001). In other cases it reflects the status of the system, as in the study by Liu & Arnett (2000), which uses the dimension "ease of the system." A study by Novak, Hoffman & Yung (2003) uses the dimensions "ease of contact, easy ordering, ease of cancellation," all of which relate back to ease of use. Measuring the quality of information is also common: James & Sammy (1983) label it "information product," Reynolds (2011) labels it "format," Chen (2002) labels it "informativeness," and Novak, Hoffman & Yung (2003) label it "information quality."
The importance of customer support in an online environment is considered in most e-customer satisfaction scales; for example, James & Sammy (1983) use the dimension name "vendor support" and Novak, Hoffman & Yung (2003) use the dimension name "ease of contact". However, based on previous literature on electronic customer satisfaction, these proposed scales describe the general virtual environment that may influence the level of customer satisfaction.

MGovernment services
MGovernment is a form of government service delivered through smart device applications (apps) and interactive SMS services, reaching the public in a flexible and comfortable way. MGovernment is at the initial stages of delivering mobile application services, and many countries have updated their regulatory policies to be compatible with such services. The responsibilities of mGovernment services are not separate from those of e-Government services. Both e-Government and mGovernment portals aim to provide government services to the public, such as health services, education services, employee services, and business services. Studies that consider mGovernment an extension of e-Government include Kassen (2017) and Santa, MacDonald & Ferrer (2019), while other scholars regard it as a "separate channel" that provides the services through smart wireless devices (Janita & Miranda, 2018; Chen, Vogel & Wang, 2016). MGovernment is a more flexible way to deliver government services to the public due to the low cost of smart hand-held devices and their ease of use for most people. The success of services on mGovernment portals depends on user satisfaction because users are the central element in the online services environment. Various quality dimensions are associated with the evaluation of mGovernment service quality, and user satisfaction is one of them. Measuring user satisfaction requires a broad evaluation scale that reflects the nature of smart devices and considers unique features such as portability, personalisation, limited technical features, input features, location, and interaction features (Demir et al., 2020; Khan, Zubair & Malik, 2019). Reliability is another mGovernment quality factor; it measures system performance using the attributes of "timeliness, accuracy, error-free, service promise, and confidentiality" (Desmal et al., 2019a, 2019b, 2019c). Using other online measurement scales, such as those for e-Government, e-Commerce, and e-Retailing, in mGovernment services can lead to difficulties in understanding user satisfaction because each context has its own features and requires a particular measurement scale (Song & Christen, 2019). Jaafar Mohamed et al. (2019) stated that mGovernment is an electronic interaction portal that mediates communication between the government service provider and the user, and that its quality can be measured using the factors of "user control, synchronicity, two-way communication, and responsiveness". Desmal, Othman & Hamid (2021) characterise the uniqueness of the mGovernment portal in terms of the factors of "location-based services, smart interactions, consistency, accessibility, and efficiency".

Proposed model for customer satisfaction with mGovernment
The present study aims to understand user satisfaction by proposing a compatible measurement scale model for mGovernment service portals.
Due to the lack of studies directly reporting on user satisfaction with mGovernment (Al-Hubaishi, Ahmad & Hussain, 2017; Shareef et al., 2014), literature from adjacent areas such as e-Government and e-Commerce was reviewed to construct a model for measuring user satisfaction with mGovernment portals. These portals are unique and require particular attention to ensure continued use. To achieve user satisfaction, it is essential to review the service delivery process regularly, which helps government agencies reengineer their strategy to meet users' expectations. Considering the unique features of mGovernment portals, Al-Hubaishi, Ahmad & Hussain (2017) proposed a scale to measure the quality of services using the dimensions of "interaction quality, environment quality, information quality, system quality, network quality, and outcome quality". Shareef et al. (2014) proposed a model for mGovernment service quality consisting of four dimensions: "connectivity, interactivity, understandability, and authenticity". In the field of mobile banking, Karjaluoto et al. (2018) measured mobile application end-user satisfaction using the dimensions of "personal innovativeness, self-congruence, perceived risk, new product novelty, perceived value, overall satisfaction, commitment", while Khan, Lima & Mahmud (2018) use the dimensions of "tangibility, reliability, responsiveness, assurance, and empathy" to measure customer satisfaction with mobile banking applications. In other sectors, a study by Othman & Razak (2010) measured mobile application satisfaction with a school dental service using the dimensions of "technical competency, interaction, efficiency, environment." Therefore, to measure the satisfaction of mGovernment users, the present article proposes an mGovernment satisfaction scale consisting of unique dimensions relevant to the characteristics of services delivered through smart devices.

Usability
The term usability refers to "the extent to which specified users can use a product to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use" (International Organization for Standardization, 2013). When software or mobile applications are released to the market, organisations expect users to accept them; the degree of acceptance depends on characteristics that each user considers essential. Dynes & Whisler (2006) defined ease of use as the "degree to which users can use the system with the skills, knowledge, stereotype, and experience they can bring to bear". In previous research, Liu & Arnett (2000), Novak, Hoffman & Yung (2003), Tang & Wang (2004) and Reynolds (2011) used usability/ease of use to measure electronic customer satisfaction. Government agencies use mobile applications to deliver services to members of the public with different educational levels. When the usability features of mGovernment portals are well considered, more users perform transactions through such services because they are satisfied. MGovernment portals have unique technical features for measuring usability that differ from other electronic services (Jaafar Desmal, 2017; Desmal et al., 2019a; Belgaum et al., 2021). Based on previous theories, the following hypothesis is proposed:
H1: Usability has a significant impact on users' satisfaction with mGovernment services.
Interaction
Interaction refers to the user's engagement with the service and system through the mGovernment portal during the delivery of such services. Kim et al. (2015) argue that the mobile application is the main channel conducting interactivity between the service provider and end-users; mobile interactivity thus runs from user to machine in order to perform a service (Lee, Lee & Kim, 2019; Monica et al., 2022; Isidro & Ashour, 2022). The interaction element is intangible and occurs during service delivery to the customer, which influences customer satisfaction. Since goods are not involved in mGovernment portals, the government agency's service provider aims to satisfy users by targeting their expectations, which affects the relationship between them (Lu, Wu & Hsiao, 2019; Ashour, Hussin & Mahar, 2008). Measuring the interactivity dimension cannot be done in isolation; it is a complex dimension consisting of related elements, processes, operations, and perceptions. In this context, Heeter (1989) uses six elements to construct a complete context of interactivity: "complexity of choice available, user's effort, user's responsiveness, monitoring of information use, ease of adding information, and facilitation of interpersonal communication". To achieve user satisfaction with mGovernment, managers must pay more attention to their employees' skills in order to provide the best level of service to the public. Interaction with the mGovernment portal can occur through online chat, voice, video, or email (Yang & Zeng, 2018; Cupertino et al., 2019; Senthil Kumar et al., 2022; Uma Maheswari, Aluvalu & Mudrakola, 2022). Providing more options for interaction with mGovernment means that users can get in touch quickly with a government agency, which influences the continued use of such services.
H2: Interaction has a significant impact on users' satisfaction with mGovernment services.

Consistency
The concept of consistency in mobile application services refers to the compatibility of application elements, design, interface, navigation, and operational processes with the nature of the services (Li et al., 2019). Measuring the consistency of service applications is necessary because smart devices' unique features may create difficulties for end-users. Heinrich et al. (2018) argue that the goal of any desktop or application software is to keep end-users satisfied with the service and to spare them from relearning the transaction process in the future. Introducing mobile service features that are unfamiliar due to a lack of consistency can increase users' effort when using a mobile device (Vidyasankar, 2018; Alansari et al., 2020; Shuja, Humayun & Rehman, 2021). Previous researchers have measured the dimension of consistency when measuring the satisfaction of online users. A study by Reynolds (2011) uses two main dimensions (content and format) to measure the concept of consistency for electronic commerce websites and its impact on user satisfaction; Chen (2002) uses the dimension of "organization"; Liu & Arnett (2000) use the dimension of "quality of system design". This shows that the concept of consistency is vital when measuring online user satisfaction. Presenting the parallel steps of service performance gives users an overview of the functional process, which helps them prepare any required documents or information before starting the service (Lai & Liu, 2019).
Consistency can help improve mobile services' usability because users can deduce how an application works if it resembles other application structures (Li et al., 2019; Shuja et al., 2021b). Consistency in mobile services must be practised in the overall design to avoid failures during the execution of the application (Jung, 2017; Shuja et al., 2021a). Inconsistent design of a mobile application service may create a messy application that will be disappointing for its users (Park et al., 2019; Jung, 2017). Ensuring consistency of the mGovernment service allows each section of the application to develop properly and generate an exceptionally fluid flow. Based on previous studies, the following hypothesis is proposed:
H3: Consistency has a significant impact on users' satisfaction with mGovernment services.

Information
From the quality perspective, information has been defined along "intrinsic, contextual, representational, and accessibility" dimensions (Lee et al., 2002), which cover interconnected elements such as the format, currency, and completeness of the information. Some researchers measure the satisfaction of users with online platforms using alternative dimensions. For example, James & Sammy (1983) use the term "service information product", Reynolds (2011) uses the term "content", and Cho & Park (2001) use the term "product information". When services are delivered through smart devices, many attributes affect the satisfaction of users, and the dimension of information is one of the main elements (Riesener et al., 2019; Heinrich et al., 2018; Gharib & Giorgini, 2019; Sayed & Ashour, 2022). The information provided to users through the service provider's online platform must enhance their understanding, be received on time, and be accurate and understandable (Oliveira & Chan, 2019; Torres & Sidorova, 2019; Gharib & Giorgini, 2019). In mGovernment portals, the information may depend on multiple government agencies processing the transaction before the final result is sent to end-users, which requires quick, accurate, and complete processing of the service to ensure users' satisfaction. Based on previous studies, the following hypothesis is proposed:
H4: Information has a significant impact on users' satisfaction with mGovernment services.

Accessibility
The concept of accessibility refers to the use of online services, products, frameworks, or resources in an effective, efficient, and satisfying way by people with different abilities (Işeri, Uyar & Ilhan, 2017; Yoon et al., 2016; Ashour et al., 2014; Alansari, Siddique & Ashour, 2022). The concept of ICT accessibility is essential to ensure equal opportunities for all people to use and access online resources, products, and services (Crespo, Espada & Burgos, 2016). Previous research shows that websites do not meet the needs of people with various disabilities (Southwell & Slater, 2012; Lewis, 2013; Lazar, Olalere & Wentz, 2012), which makes it difficult for these people to utilise online services. One statistic indicates that 36 million people are blind (Cupertino et al., 2019). Considering this figure, creating standards that offer accessibility options for government services, especially mGovernment services, will enhance most people's ability to perform transactions using their smart devices and save time and effort. The authors of the present study noted a lack of research measuring accessibility in mGovernment portals.
Hence, the current study measures accessibility as the extent to which the mGovernment service application provides options to access and perform the services with less time and effort by users. Based on these findings, the following hypothesis is proposed:
H5: Accessibility has a significant impact on users' satisfaction with mGovernment services.

Privacy and security
The online environment's two main elements are privacy and security (Widjaja et al., 2019; Barth et al., 2019). These two elements are essential to satisfy end-users (Abedi, Zeleznikow & Brien, 2019). Information privacy is understood as the control exercised by the user over their information to prevent unauthorised parties from accessing it (Cui et al., 2019). This information may include data, photos, and files (Merhi, Hone & Tarhini, 2019). Meanwhile, information security refers to preventing all threats that affect online transactions (Alomar, Alsaleh & Alarifi, 2019). Ensuring complete privacy and security is essential for both users and service providers in transaction processing (Liao & Shi, 2017). When privacy or security in online services is weak, users will not conduct any type of transaction, especially those involving financial data (Ma, Chen & Zhang, 2019; Sá et al., 2017; Widjaja et al., 2019). Users' satisfaction with online services is affected by the strength of privacy and security (Cui et al., 2019; Barth et al., 2019). Since mGovernment is provided through mobile devices, it is necessary to ensure that the mobile application is developed with highly professional techniques to ensure public satisfaction with mobile services. Based on these findings, the following hypothesis is proposed:
H6: Privacy and security have a significant impact on users' satisfaction with mGovernment services.
The output of the previous literature guided the authors in formulating the proposed model to measure user satisfaction with mGovernment services, as shown in Fig. 1.

METHODOLOGY
To propose a model that can measure user satisfaction with mGovernment service portals, a review of the literature was conducted to obtain a comprehensive view of present approaches and identify the study's requirements. The selected articles are from the year 2010 or later and measured or evaluated user satisfaction with various online platforms. This starting period was selected because it is when most studies in the field of online services began analysing the importance of measuring the satisfaction of online users. The first step was the collection of literature from the digital libraries of Emerald, SAGA, ScienceDirect, Scopus, Taylor and Francis, and Web of Science. The top search strings used in the present research were "mobile government services", "mobile application satisfaction", "mobile satisfaction", "application satisfaction", "e-satisfaction", "electronic satisfaction", "website satisfaction", "e-service satisfaction", "electronic service satisfaction", "online satisfaction", "e-government satisfaction", "electronic government satisfaction", "user satisfaction", and "customer satisfaction". The article selection criteria guided the authors in formulating and validating the proposed satisfaction model for mGovernment services, which consists of the basic elements for measuring the satisfaction of online users. Most of the publications in the literature review appeared between 2015 and 2019.
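The present review stops at formulating hypotheses H1-H6 and does not test them empirically. Purely as an illustration of how a future survey study might do so, the sketch below regresses an overall satisfaction score on the six construct scores; the respondent count, Likert items, and effect sizes are invented and do not come from this review.

```python
# Purely illustrative sketch of how H1-H6 could be tested in a future survey study;
# the data and data-generating weights below are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400  # hypothetical survey respondents, items on 1-5 Likert scales
constructs = ["usability", "interaction", "consistency",
              "information", "accessibility", "privacy_security"]
df = pd.DataFrame({c: rng.integers(1, 6, n) for c in constructs})
# Invented weights, only so the example has a signal to recover.
df["satisfaction"] = (0.35 * df["usability"] + 0.20 * df["interaction"]
                      + 0.10 * df["consistency"] + 0.15 * df["information"]
                      + 0.10 * df["accessibility"] + 0.25 * df["privacy_security"]
                      + rng.normal(0, 0.5, n))

model = smf.ols("satisfaction ~ " + " + ".join(constructs), data=df).fit()
print(model.summary().tables[1])  # one coefficient (and p-value) per hypothesis H1-H6
```

In practice, researchers would more likely use validated multi-item scales for each construct and structural equation modelling, but the regression above conveys the basic logic of estimating one effect per hypothesis.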
Due to the absence of literature on satisfaction with mGovernment portals, the present study used fields adjacent to mGovernment to guide the authors in proposing the targeted model. The types of publication used were short articles and full articles from peer-reviewed sources. The abstracts were reviewed based on the inclusion criteria below:
1. The research focuses on online services (desktop or mobile devices).
2. The research aims to measure or propose a model or framework for user or customer satisfaction.
3. The research evaluates user or customer satisfaction in a way that leads to proposing the constructs, sub-dimensions, or items of a model or framework.
Publications that did not meet the above three criteria were removed from the analysis. Publications were excluded according to the criteria below:
1. The research measures customer satisfaction in an offline service environment.
2. The research does not consider satisfaction as a primary model or sub-dimension.
3. The research does not propose a unique model or framework to measure user or customer satisfaction.
In conducting the literature review, the abstract of each article was reviewed to ensure that the scope of the research fit the area of the present research. Table 1 and Fig. 2 show the number of publications in the literature review. The attributes of the user satisfaction model for mGovernment services were extracted from previous related literature. As shown in Table 2 and Fig. 3, the attribute of "consistency" has little literature analysing it in the context of mobile applications, while the attribute of "privacy and security" has the largest percentage of focused literature. Models of online services from previous studies have been classified according to the type of user platform (desktop/online and mobile). However, as shown in Table 3 and Fig. 4, the highest number of available proposed models belongs to the category of e-Services, with a percentage of 48%, while no model measures user satisfaction with mGovernment service platforms. This gap is the reason for studying, analysing, and proposing such a model with its attributes in the present article.

Research implications
The proposed model of user satisfaction with mGovernment service platforms can be used for further study of the service delivery process from the perspective of end-users (see Appendix Table 3). Using other online models in the mGovernment environment leads to difficulties in understanding exact levels of user satisfaction, which affects the continued use and the future application of such services on mGovernment portals. The present article provides a model proposed for use in the context of mGovernment services while considering the unique features of this type of service. A total of six attributes of the proposed model were described to help government agencies measure each aspect that may influence the level of end-user satisfaction (see Fig. 1). Researchers and practitioners can use this model for further quantitative or qualitative research to analyse the attributes that influence the satisfaction of end-users with mGovernment portals, helping government agencies focus on the attributes that most affect the service delivery process.

Research limitations
The present research consists of six constructs related to the model of user satisfaction.
These constructs were extracted based on theories from previous literature, and further practical studies are required to measure the impact of each construct on user satisfaction.

CONCLUSIONS
User satisfaction is a source of success in every sector. MGovernment services aim to deliver government services to the public through smart devices and to ensure that end-users are satisfied with such services. It is essential to review and reengineer the service delivery process until the public is satisfied. Previous literature shows a lack of focus on mGovernment services in terms of user satisfaction. When measuring the delivery of services based on mobile devices, it is crucial to find attributes that fit the unique features of mobile devices, such as portability, small screens, limited features compared with desktop devices, wireless access, and touch screens. Hence, using other models to measure user satisfaction with mGovernment can lead to difficulties and inaccurate results; each model has its own features and attributes and is constructed for its own context. Based on the literature review, the current article therefore proposed a comprehensive model with six related attributes (usability, interaction, consistency, information, accessibility, and privacy and security) that can measure user satisfaction with mGovernment. It is a guide for decision-makers at government agencies to improve the services on mGovernment portals based on user satisfaction.
Predictors of all-cause 1-year mortality in myocardial infarction patients

Abstract
Compared with the general population, myocardial infarction (MI) survivors have a higher risk of mortality in the first year after the index event. The aim of this study was to determine the associations between variables obtained during the index admission and 1-year all-cause mortality on follow-up. A cohort of 296 patients was enrolled in the study, with a median age of 63.8 ± 12.68 years. All patients received a coronary angiography and stent implantation by percutaneous coronary intervention. Each variable was tested for association with all-cause mortality, using chi-square tests for categorical and binary variables and t tests for continuous variables. The relative prognostic power of each significant variable was further evaluated by logistic regression before and after adjustment for differences in baseline characteristics. Patients who were deceased 1 year after MI had a significantly higher mean age, increased prevalence of diabetes, and elevated heart rate compared to those who were surviving. Univariate analysis indicated that patient mortality within 1 year of MI was strongly correlated with higher rates of pump failure on admission (P < .0001), bleeding complications (P = .02), the severity of coronary artery disease measured by Gensini score (P = .04), and decreased left ventricular ejection fraction (LVEF) (P < .0001). After adjustment for baseline variables, only pump failure (P = .006) and reduced LVEF (P < .0001) were independently associated with 1-year mortality. Our study shows that LVEF dysfunction and pump failure are independent predictors of 1-year all-cause post-MI mortality, while the severity of coronary artery disease and bleeding did not qualify as independent predictors. Also, age, history of diabetes, and elevated heart rate may be used as markers for increased mortality rates.

Introduction
Long-term survival after myocardial infarction (MI) has improved over the last 3 decades in developed countries. [1][2][3][4][5][6][7][8] However, approximately 20% of patients experiencing an acute MI die within 1 year of the event, with over half the first-year mortality occurring after 30 days of MI. [2] To accurately predict survival after MI, one has to take into account multiple organ systems and comorbidities that may interact with heart disease and affect overall mortality. [3] The list of variables affecting post-MI mortality rates includes gender, [6] age, [7] smoking, history of diabetes, [4] renal failure, [5] hypertension, peripheral artery disease, stroke, chronic obstructive pulmonary disease, chronic liver disease, and cancer. [2,8] In addition to these risk factors, numerous reports also show an association between increased annual mortality after MI and additional clinical parameters, such as elevated resting heart rate, [9] diagnosed pump failure on admission, [10] left ventricular ejection fraction (LVEF) dysfunction, [11] bleeding complications, [12,13] and a history of obstructive coronary artery disease. [8] The goal of this paper is to evaluate whether any of these parameters could be independent predictors of death 1 year after MI. Identification of these variables would help to develop and validate statistical models that can be used to determine 1-year mortality after an acute MI, and to ensure intensive follow-up and risk factor modification.

Study design
This retrospective study, approved by the institutional ethics committee (approval no.
2019KY11), included 330 patients with a diagnosis of MI admitted to the First Affiliated Hospital of USTC between January 1, 2018 and December 30, 2019. Since the data were obtained from de-identified medical records and involved no patient interaction, informed consent was waived for the purpose of this study. All included patients had received a coronary angiography and stent implantation by percutaneous coronary intervention during their admission. The following data were recorded for all patients at baseline admission: demographic details, medical history, and cardiovascular clinical details such as pump failure, cardiogenic shock, malignant arrhythmia, recurrent MI, apoplexy, and bleeding based on the Bleeding Academic Research Consortium (BARC) classification. Further clinical data recorded included angiography details, such as the number of diseased vessels and specification of the occluded artery; intraoperative medications administered (heparin, bivalirudin, tirofiban); echocardiography data (LVEF, left ventricular systolic function, left ventricular diastolic function); and the Gensini risk score for severity of coronary artery disease. After a follow-up period of 1 year, we sourced data on recurrent MI, apoplexy, heart failure, BARC bleeding classification, and mortality from the patient records. Fifteen patients who were lost to follow-up and 19 patients with missing data on all-cause mortality were excluded from the study, leaving a patient cohort of 296 valid cases.

Statistical analysis
Baseline variables were tested for association with all-cause mortality, using chi-square tests for categorical and binary variables and t tests for continuous variables. All data were checked for quality, including reasonableness and consistency of units of measure. All variables that showed a significant association with all-cause mortality were further tested using logistic regression to assess whether associations remained significant after adjusting for age and all significant patient history variables. Age was included in all models because it was consistently and strongly associated with all-cause mortality for each variable tested and could otherwise confound the effects of patient history variables. Statistical analyses were performed using R Software Version 3.5.3 (R Core Team, 2019). Continuous data were presented as mean ± standard deviation, and categorical data were presented as counts (% of total). For comparisons that were significant, multiple variable logistic regression was used to calculate odds ratios (ORs) with 95% confidence intervals (CIs) after adjusting for demographic and patient history variables. For all tests, P-values <.05 were considered statistically significant.
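The described workflow can be illustrated schematically as follows. The original analyses were performed in R; the Python sketch below uses hypothetical column names and simulated values purely to show the shape of the univariate screening and the adjusted logistic regression with odds ratios and 95% CIs, not the study's actual code or data.

```python
# Schematic analogue of the described workflow (original analyses were run in R);
# column names and data are hypothetical/simulated.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 296
df = pd.DataFrame({
    "died_1y": rng.integers(0, 2, n),
    "age": rng.normal(64, 13, n),
    "diabetes": rng.integers(0, 2, n),
    "pump_failure": rng.integers(0, 2, n),
    "lvef": rng.normal(50, 10, n),
})

# Univariate screening: chi-square for binary variables, t test for continuous ones.
table = pd.crosstab(df["pump_failure"], df["died_1y"])
chi2, p_chi, _, _ = stats.chi2_contingency(table)
t, p_t = stats.ttest_ind(df.loc[df["died_1y"] == 1, "lvef"],
                         df.loc[df["died_1y"] == 0, "lvef"])
print(f"pump failure chi-square p={p_chi:.3f}, LVEF t-test p={p_t:.3f}")

# Adjusted logistic regression; exponentiated coefficients give ORs with 95% CIs.
model = smf.logit("died_1y ~ age + diabetes + pump_failure + lvef", data=df).fit(disp=0)
odds_ratios = pd.concat([np.exp(model.params), np.exp(model.conf_int())], axis=1)
odds_ratios.columns = ["OR", "2.5%", "97.5%"]
print(odds_ratios)
```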
Baseline characteristics
Baseline characteristics for the study population are presented in Table 1. Two hundred ninety-six patients were included in the study, with a median age of 63.8 ± 12.68 years and a median weight of 69.79 ± 13.6 kg. The majority of the patients were males (75.1%). One-year survival of the cohort was 91.2% (270 patients). To compare baseline differences, the patients were divided into 2 groups, alive and deceased. Patients in the deceased group were significantly older than those in the surviving group, with a median age of 72.5 ± 10.4 as opposed to 63 ± 12.9 in the surviving group (P = .0003). As summarized in Table 2, there were no differences between the 2 groups in history of cerebrovascular disease (CVD) or hypertension, or in baseline systolic and diastolic blood pressures. However, patients in the deceased group had a significantly higher prevalence of diabetes (P = .01) and elevated heart rate on admission (P = .02). The deceased group had a significantly higher incidence of pump failure (P < .0001) and BARC-defined bleeding complications, ranging from type 1 to type 5b (P = .01). The Gensini risk score in the deceased group was significantly higher than in the surviving group (84.6 ± 34.8 vs 68.7 ± 37.3) (P = .05). LVEF was also significantly reduced in the deceased group compared to the surviving group (P < .0001).

Prognostic factors and predictors of 1-year mortality
We next performed a multivariate logistic regression analysis to identify possible predictors of 1-year mortality after MI. Unadjusted models are summarized in Table 3. After adjustment for age, medical history parameters, diabetes, and heart rate, only pump failure and LVEF parameters remained statistically significant independent predictors of 1-year post-MI mortality, while the severity of coronary artery disease measured by the Gensini score and bleeding were not found to be statistically significant predictors of mortality (Table 4).

Discussion
Our data suggest that the presence of pump failure and LVEF dysfunction on admission are strong independent predictors of 1-year mortality following MI. We also confirm age, history of diabetes, and increased heart rate as significant risk factors for increased post-MI mortality. Our findings are consistent with the results of previous investigators. Numerous studies have shown that the risk of cardiovascular events and patient mortality is highest in the first year following MI. [1,14,15] Patients with accompanying conditions such as hypertension, diabetes, peripheral artery disease, or a history of stroke are known to have significantly higher rates of mortality. [16] Accurate prediction of post-MI mortality, therefore, has to take into account multiple variables and comorbidities that impact heart disease. [2] In this study, we identified baseline parameters such as age, diabetes, and heart rate on admission as strongly correlating with the survival of patients 1 year after MI. Age is one of the most significant prognostic parameters of post-MI death, partially due to the high risk of vascular events after MI in elderly patients. [17] Our data confirm that there is a strong association between the risk of death and age, with almost a 10-year difference in the mean age of patients who did not survive the first year post-MI compared to the surviving group (75.5 years versus 63 years). Our results also indicate that 1-year mortality post-MI is significantly higher in patients with a history of diabetes. Our observations are in agreement with studies that have shown the impact of diabetes on mortality rates after MI. [4,18] Population studies have shown that post-MI mortality rates are doubled in patients with diabetes, an effect equivalent to 15 years of ageing. [19] Elevated heart rate at the time of hospital admission for acute MI is known to be an independent predictor of short- and long-term mortality. It is also a reliable measure of autonomic tone and physiologic stress. [20][21][22] In our study, increased heart rate was associated with increased odds of mortality 1 year post-MI. Taken together, our results further confirm that survivors of MI are at higher risk of mortality 1 year after the event when risk factors such as old age, diabetes, and elevated heart rate are present.
These variables could serve as simple markers that can be easily used in the risk assessment of patients. In addition to baseline variables, we looked at several clinical parameters collected on admission, such as the severity of coronary artery disease, bleeding complications, LVEF dysfunction, and pump failure, as potential predictors of all-cause mortality in our cohort. In our study, when no other variables were taken into account, bleeding (types 1-5b, BARC classification), lowered LVEF, a Gensini score indicating obstructive coronary artery disease, and pump failure on admission were associated with a statistically significant increase in post-MI death rates. However, after adjusting for baseline variables such as age, patient medical history, heart rate, and diabetes, only the prognostic values of pump failure and LVEF dysfunction persisted, while bleeding and a high Gensini score did not qualify as independent predictors of 1-year mortality after MI. Pump failure and LVEF dysfunction are considered among the leading causes of mortality in patients with heart failure. Narang et al [23] reported that circulatory failure was the most frequently reported mode of death in chronic heart failure, accounting for up to 42% of all deaths, and noted that pre-existing chronic heart failure significantly impacted the survival of MI patients. Pump failure is becoming a leading cause of mortality in patients with newly diagnosed or severe heart failure, and in patients with heart failure associated with Chagas' disease. [24] In our study, pump failure on admission was also a strong independent predictor of 1-year all-cause mortality after MI, with its predictive value unaffected by correction for any baseline variables. This finding supports previous reports that over 70% of patients who survived MI died of uncontrolled pump failure during the follow-up period. [25] Low LVEF values were a significant independent predictor of 1-year death in our study. This is in agreement with earlier reports that an LVEF of <40% is an independent predictor of mortality after MI. [26] (Table 3 lists the unadjusted risk factors of all-cause post-MI mortality after 1 year of follow-up.) Our data further strengthen the importance of timely treatment of heart failure with reduced LVEF, as implantable cardioverter-defibrillators and pharmacologic interventions with agents such as angiotensin-converting enzyme inhibitors and beta-blockers may significantly increase the survival rate of MI patients. [27] The study results should be interpreted keeping in mind the study limitations. Firstly, the sample size of the study was not very large. Secondly, due to the limited follow-up, our study presents data on 1-year mortality only. With the present information, it is not known whether the analyzed variables also predict long-term mortality after MI. To conclude, in our current analysis, we explored the impact of older age, comorbidities such as diabetes and coronary artery disease, and clinical parameters on admission on all-cause mortality 1 year after MI. Pump failure, severity of coronary artery disease, LVEF dysfunction, or bleeding complications may serve as reliable preliminary predictors of post-MI mortality, but only pump failure on admission and lower LVEF may be considered independent predictors. Age, history of diabetes, and elevated heart rate were associated with increased mortality rates in our cohort and could be used as markers for risk assessment.

Author contributions
QY and LM conceived and designed the study.
QY and JZ collected and analyzed the data. QY was involved in the writing of the manuscript. All authors have read and approved the final manuscript.
Humic Substances as a Feed Supplement and the Benefits of Produced Chicken Meat

Humic substances with a high proportion of humic acids (more than 40%) have been classified by the European Commission as feed materials that can be used in animal nutrition since 2013. A protective effect on the intestinal mucosa, as well as anti-inflammatory, adsorptive and antimicrobial properties, were recorded. Nutrient absorption, nutritional status and the immune response in chickens supplemented with HSs were significantly improved. HSs have the ability to enhance protein digestion as well as the utilization of calcium and trace elements. They are known to improve feed digestibility as a result of maintaining an optimal pH in the gut, leading to lower levels of nitrogen excretion and less odor in the husbandry environment. HSs not only increase digestibility and result in greater utilization of the feed ration but also improve the overall quality of the meat produced. They increase the protein content and reduce the fat content in breast muscles. They also contribute to improving the sensory characteristics of the meat produced. Their antioxidant properties improve the oxidative stability of meat during storage. The influence of HSs on fatty acid composition may be one of the reasons that the meat has a more beneficial effect on the health of consumers.

Introduction
Humic substances (HSs) are natural organic compounds found in soil, coal, water and other sources. They are formed by the biological and chemical decomposition of plant biomass via the activity of microorganisms. Their heterogeneous macromolecular structures and compositions may vary depending on the site of occurrence. According to their solubility, HSs are divided into humic acids, fulvic acids and humins. Humin has a high molecular weight and is insoluble in water regardless of the pH. Humic acids have a medium molecular weight and are insoluble in acidic environments with a pH less than 2; however, they become soluble in alkaline environments. Fulvic acids are soluble regardless of the pH of the environment and have the lowest molecular weight [1]. As the molecular weight increases, the carbon and oxygen contents, acidity and degree of polymerization also change [2]. The chemical structures of HSs are not fully known; they contain different functional groups (carboxylic, phenolic, carbonyl, hydroxyl, amine, amide and aliphatic). Due to their diverse molecular structures, HSs have many proven benefits in agriculture. They aid in the transport of micronutrients from the soil into plants, increase water retention and stimulate the growth of beneficial microorganisms in the soil. The ability of HSs to form chelate complexes with micronutrients and facilitate nutrient uptake by plants is also used in plant breeding [3]. Due to their diverse content of functional groups, HSs, along with other natural substances, are among the most potent chelating agents. Compared to inorganic adsorbents, such as zeolites, their adsorption capacity is several times higher. The ability of HSs to bind heavy metals, such as cadmium and lead, also increases with increasing atomic weight [4,5]. They are good adsorbents of heterogeneous substances and can eliminate or reduce the toxicity of endogenous or exogenous toxins. Several studies have confirmed the effectiveness of HSs in reducing toxicity caused by aflatoxins [4,6].
Naturally occurring mycotoxins in contaminated feed are known to significantly impair animal growth parameters, organ morphology and the values of most blood biochemical parameters; they do not, however, cause clinical signs during short periods of feeding. HSs added to feed with low levels of mycotoxins act as adsorbents and thus modify the values of growth and biochemical parameters. Fulvic acids present in HSs form complexes with minerals and change their electrical charges, thus facilitating their faster uptake into the body. HSs induce an increase in the permeability of cell membranes and consequently facilitate the transport of minerals from the blood into cells [1]. The Use of HSs in Broiler Fattening In the past, antimicrobials have been used as growth promoters in livestock nutrition. Due to the risk of drug residues and the increase in the resistance of microorganisms to antibiotics used in both veterinary and human therapeutics, their use in animal nutrition has been banned. Nowadays, HSs are considered to be one of several classes of suitable alternatives, and many positive effects on production parameters, the immune system and animal health have been attributed to them. They are able to bind various toxic substances and form insoluble complexes with them. Due to this property, they are also suitable for use as adsorbents and consequently are able to reduce the absorption of various endotoxins, which is of paramount importance in the protection of animal and human health. HSs have antibacterial, antiviral and antimicrobial effects in animal husbandry, thus improving the economics and ecology of livestock production (mainly by increasing growth, reducing the cost per kilogram of gain and minimizing the risk of disease). Moreover, they are neither toxic nor teratogenic [7,8]. In 2013, according to Commission Regulation 68/2013 [9], leonardite as a source of humic substances was included in the catalogue of feed materials that can be used in animal nutrition in the EU. In this regulation, leonardite is defined as a naturally occurring mineral complex of phenolic hydrocarbons, also known as humate, which originates from the decomposition of organic matter over the course of millions of years. HSs, as organic mineral feed with a high proportion of humic acids (more than 40%), have been classified as feed supplements used in the EU. In horses, cattle, sheep, goats, pigs and poultry, HSs serve as treatments for diarrhea, dyspepsia and acute intoxications. They also show a marked tendency to inhibit pathogenic bacterial and microscopic fungal growth, and therefore may reduce mycotoxin levels. A protective effect on the intestinal mucosa as well as anti-inflammatory, adsorptive and antimicrobial properties have also been recorded. HSs improve gut health, nutrient absorption, nutritional status and the immune response in animals. They have the ability to improve protein digestion as well as the utilization of calcium and trace elements significantly. They are known to improve food digestibility due to their property of maintaining an optimal pH in the intestines, resulting in lower levels of nitrogen excretion and less odor in the husbandry environment. Humic acids not only increase the digestibility and utilization of food, but they also improve the overall environment in the gastrointestinal tract [1]. 
Effects of Humic Substances on Growth Parameters and Feed Conversion Additives of natural origin can be added to feed for the purpose of improving growth parameters, animal health and/or improving the quality of the meat produced [10]. In 1999, the EMEA (The European Agency for the Evaluation of Medicinal Products) issued the approval of the oral administration of humic acids to all food-producing animals. In animal production, the addition of humic acids to feed can positively influence all production parameters. Humates included in the feed or water of poultry promote their growth [11]. The positive effects of HSs added to the feed and water of broiler chickens at different concentrations on growth parameters (chick weight, gain, feed consumption and feed conversion) has been reported by several authors [7,[12][13][14]. These effects ensure, among other things, the proper composition of the intestinal microflora [15,16]. The presence of organic acids suppresses the production of toxic products by bacteria and prevents the colonization of the intestine by pathogenic microorganisms [17]. HSs support the formation of a protective layer of the intestinal mucosa against pathogens and toxic substances that could cause reductions in weight gain and thus the final weight of chickens [18]. Feed conversion is also an important parameter that is monitored during fattening of chickens. Its value is calculated based on weight gain and feed consumption during the fattening period. Ozturk et al. [12] stated that the improvement of feed conversion and feed increments in poultry fed with the addition of 1.5 g of HSs per kg of compound feed proves their utilization as a suitable alternative to antimicrobial feed additives used as growth promoters, which was, furthermore, confirmed by other authors as well [11]. Arif et al. [18] found that the addition of HSs at 0.75, 1.5 and 2.25 g·kg −1 of feed resulted in an increase in final weight, feed intake and weight gain and also improved feed conversion values in quails. They noted that, as the concentration of HSs in feed increased, the final weight of chicks increased, feed intake decreased and feed conversion improved. A pronounced effect on growth parameters with an increasing concentration of HSs in feed was also observed by Jad'uttová et al. [14], where after feeding higher amounts of HSs in broiler feed (0.8 and 1.0%) there was a slight increase in final chick weight and gain, and, moreover, the differences in values were balanced. This finding is in agreement with the results of Eren et al. [19], who reported that feeding feed supplemented with HSs in a concentration of 2.5 g·kg −1 of compound feed significantly improved chick gains and feed conversion. Hudák et al. [20] added HSs to broiler chickens' diets in natural and acidified forms at a 0.7% concentration. The acidified form contains formic acid that functions to increase feed digestibility. They noted that both forms of HSs had an effect on improving the final weight and achieving higher gains during fattening as well as better feed conversion compared to the control group. However, it is important to note that the acidified form had no effect on the growth parameters of chicks compared to the natural form of HSs. The effect of HS administration in drinking water on the final weight and weight gain was described by Lala et al. [7]. As the concentration of HSs in the water fed to chickens increased, their final weight and gains also increased. 
Similarly, feed conversion was better in chickens supplemented with HSs in water. A positive finding for poultry farmers is that, after the addition of HSs to chick feed, feed conversion was improved and chickens achieved higher final weights, although statistical differences in final weight were not always noted. However, for farmers, an increase in chicken weight of 70-90 g per chicken on average is a significant economic benefit, which may represent a considerable economic benefit when the number of chickens reared per pen is large. On the contrary, no effect of HSs on growth parameters has been recorded [21]. Kaya and Tuncer [22] reported no improvement in feed conversion with the implementation of HSs in feed at a rate of only 0.25%. For the action of HSs to effectively improve growth parameters, the concentrations used are of chief importance. Regarding the above-mentioned experiments, the optimum dose of HSs was 0.5 to 1.0%. The exact concentration is always dependent, also, on the humic acid content, which should be at least 40%, and on the way the HSs are treated before use. Effect of Humic Substances on Carcass Yield An important indicator of the efficiency of fattening and rearing poultry is the carcass yield, as is the yield of individual body parts of chickens. A positive effect of HSs on carcass yield was observed at concentrations ranging from 0.25 to 1.0%. The addition of 1% of HSs to chicken feed will ensure better carcass yield. A significant increase in breast and thigh muscle yield was observed in chickens fed with 1% HSs supplemented in broiler feed compared to the control group [14]. Feeding 0.75% HSs increased carcass yield as well. Breast muscle yield was comparable to that of the control group [20]. The addition of 0.6% HSs to chicken feed during a fattening period of 39 days had a significant effect on body yield, with higher body weights, as well as breast and thigh muscle weights, recorded [23]. These claims are in agreement with the work of Celik et al. [24], who reported significant increases in carcass and breast muscle yields of poultry after the addition of HSs at concentrations as low as 0.25%, and also with the results of Arif et al. [18], who confirmed the best values of carcass and breast muscle yields after feeding HSs at 2.25 g·kg −1 of the feed mixture. Effect of Humic Substances on the Digestive Tract The digestive tract of chicks is immature and sterile shortly after hatching. Chickens are very susceptible to pathogenic microorganisms until their gut microbiome develops. Antimicrobial feed additives have often been administered to suppress pathogenic microorganisms and to improve growth and fattening efficiency. The benefits of antibiotic administrations as promoters of animal health and growth have been well documented in the scientific literature. Unfortunately, the risks of pathogen resistance to antibiotics posing a serious threat to animal and human health have been equally well described [7]. For this reason, various alternatives, such as probiotics, prebiotics, plant-based ingredients and organic acids, have started to be used in poultry farming. Most of these alternative additives work by affecting the gut microbiome and the digestive process. HSs, representing one of these alternatives, have been said to inhibit the growth of bacteria and microscopic fungi and thus reduce mycotoxin levels in feed [25]. 
Acidification of the digestive tract by various organic acids reduces the formation of toxic bacterial products and colonization of the intestinal wall by pathogens, thereby preventing damage to the epithelial cells of the intestines [26]. Mudroňová et al. [15] investigated the effect of HSs on the microbiome of the small intestine and appendix; in addition, the contents of lactic acid bacteria and enterobacteria were also monitored. They noted that HSs at a 0.8% concentration in chicken feed had a positive effect on the gut microbiome, represented by a decrease in Enterobacteriaceae and, conversely, a significant increase in lactic acid bacteria, compared to the control group. A positive effect of HAs on the composition of the gut microbiome was also observed by Arif et al. [18]. They stated that, after the addition of 0.25 g·kg −1 of HSs, the contents of coliform bacteria, E. coli as well as Clostridium perfringens decreased in the ceca of quails. They also noted a decrease in the pH of the environment in the cecum. In broilers, benefits of HS addition, such as increase in the length of the villi of the small intestinal mucosa and reduction in the depth of the villi due to the formation of a protective HS layer, have been reported [27,28]. Due to colloidal characteristics and a high capacity of HSs to form aggregates within solutions, it has been proposed that HSs have the ability to create protective layers on the epithelial mucous membrane of the digestive tract, preventing the penetration of pathogenic bacteria or toxic substances produced by bacteria [29] and improving the utilization of nutrients from feed [1]. HSs also interact with biomolecules, such as collagen, promoting the resistance and maturity of its fibers, resulting in an increase in intestinal villi integrity [29,30]. Ceylan and Ciftci [31] state that HSs can increase the uptake of nitrogen, phosphorus and other nutrients due to their chelating properties. The acids' anions bind with calcium, phosphorus, magnesium and zinc, resulting in the improved digestibility of these minerals, which serve as substrates for intermediary cell metabolism [7]. Influence on the Immune System HSs also exert a beneficial effect on the immune system of poultry. Nagaraju et al. [21] noted an improvement in the production parameters and the immune status of broilers after the addition of HSs to antibiotic-free feed. There are several ways in which substances can influence the immune system. One of the modes of action is the formation of solid complexes of HSs with carbohydrates which subsequently enable the formation of glycoproteins capable of binding to NK cells and T lymphocytes [29,32]. These glycoproteins act as modulators of intercellular communication and therefore regulate the immune response, including regulation of cytokine production and preventing excessive activity of cytotoxic T lymphocytes and natural killer cells. Subsequently, cytokines affect/regulate a number of immune reactions in the organism [32,33]. In several experiments, an effect on the representation of poultry lymphocytes was noted. An increase in total lymphocyte numbers after HS application was noted in laying hens [34], broilers [35] and Japanese quails [36]. Cetin et al. [34] reported that feeding humic acids (at 0.15%) to poultry resulted in a significant increase in the number of lymphocytes via increased IL-2 production and expression of IL-2 receptors on lymphocytes, which led to an increase in the IL-2 production activity of the cells. 
In addition, changes in the representation of individual lymphocyte subpopulations were also observed. After feeding 0.8% HSs from leonardite to broiler chickens, there was an increase in the representation of T lymphocytes-specifically, helper T lymphocytes-whereas cytotoxic T lymphocytes were reduced. The gene expression of IgA was not changed [15]. In contrast, in laying hens receiving 0.5% HSs, an increased percentage of the B lymphocyte subpopulation was noted, which corresponds to the increased gene expression of IgA in the intestines. The expression of genes for mucin production (MUC-2) was also increased, which, together with IgA, is significantly involved in the protection of mucosal surfaces, thereby improving feed utilization [16]. Rath et al. [35] found that, after feeding HSs at 0.25%, there was an increase in the weight of the bursa of Fabricius-a key organ for the development and differentiation of B lymphocytes in birds. The activation of B lymphocytes was confirmed, also, by significant increase in serum IgM and IgG levels in laying hens after 0.1 and 0.5% HS employment [37] and by increased serum gamma globulins in broilers [38]. Another mechanism which is strongly influenced by HSs is phagocytosis. In our previous experiments during which broilers received 0.8% and laying hens 0.5% HSs in feed, we noted significantly higher values of active phagocytes as well as their engulfing capacity [15,16]. Similar results were observed, also, in other animal species [39]. Sanmiguel and Rondón [40] found that the effect of HSs on phagocytes depends on time. The addition of 0.1 and 0.2% HSs to laying hens' diets was associated with stimulated phagocytosis after 8 and 30 days of application; however, phagocytic activity was significantly reduced after 60 days as compared to the control group. Similarly, they also noted an increase in the oxidative burst of phagocytes at day 30 and a decrease at day 60. It is not completely clear how HSs act on phagocytosis, but it has been confirmed that HSs stimulate the adhesion abilities of phagocytes and the production of reactive oxidative intermediates and that they are able to induce nuclear factor κB, which is decisive for the transcription of many genes involved in the process of phagocytosis (e.g., GMCSF, IL-8 and TNF-α) [41]. Based on the results of scientific studies, it can be assumed that the effect of humic substances on the immune system is affected by the concentration and duration of application and by the category and species of chickens to which the HSs are administered. These facts should be taken into account when applying HSs on farms in order to achieve an optimal effect. Bone Mineralization Calcium and phosphorus are macronutrients that are essential for bone formation. Insufficient dietary calcium sources can lead to hypocalcemia in the blood, which may lead to decreased bone strength and mineralization. HSs are considered to be excellent natural sources of minerals, as they have a high complexation capacity and are able to form chelates with different ions [29]. Their application is therefore associated with improved mineral utilization by plants and animals [2]. Angeles et al. [42] investigated the effect of HSs in water on the calcium and phosphorus contents of tibia bones. Chickens were administered HSs at concentrations of 161, 322, 483 and 644 µg·L −1 . 
They noted that increasing concentrations of HSs in water resulted in increased tibial Ca levels and improved bone mineralization in broiler chickens at 21 and 42 days of age. The effect of HSs (0.8 and 1.0% supplementation in feed mixture) on bone mineral composition was also evaluated by Jad'uttová et al. [14]. They observed a significant increase in the amount of calcium and a decrease in the amount of phosphorus in the long bones (tibias) of broilers, with a decrease in the amount of these macronutrients in the blood. They also noted better mineralization as well as bone quality. A possible explanation of the lower amount of calcium in the blood of the chickens could be that there was a higher accumulation of calcium in the bones of the experimental groups fed with HSs. The high accumulation of calcium in bones and the pronounced growth rate of broiler chickens may have caused a drop in calcium blood levels at the time of slaughter. Similar findings are also presented in the works of Rath et al. [35] and Ozturk et al. [43], in which reductions in the serum concentrations of calcium, magnesium and phosphorus in the blood of broiler chickens were also recorded. A lower concentration of calcium and phosphorus in blood serum may be due to the ability of HSs to chelate metals, which is influenced by the large number of carboxylic acid side chains [2,35] that they have. Humic Substances and Meat Quality of Broiler Chickens Poultry meat is popular among consumers due to its high protein content, low fat content and its being a source of vitamins and minerals. The quality of meat can vary depending on its chemical composition, which is influenced by the diet and substances added to the animal feed. HSs have been tested quite intensively as feed additives in recent years. However, their effects are mainly studied in regard to growth parameters as replacements for antibiotic growth promoters, improvement of the health status of broiler chickens and reduction in the use of antibiotics for the treatment of chickens [12]. In terms of meat quality, there are fewer records regarding the effects of humic substances [20,[44][45][46][47]. Humic substances administered to broiler chickens influenced the basic chemical composition of the meat. In the conducted experiments, a decrease in fat content and an increase in protein content in breast muscles was observed, along with an increase in fat content and a slight decrease in protein content in the thigh muscles of chickens [12]. According to Wang et al. [48], the decrease in fat thickness and increase in the marbling of the meat produced after feeding humic ingredients suggests that humic ingredients are capable of influencing the distribution of fats and proteins in the body and thus altering the composition of the meat. Semjon et al. [47] and Hudák et al. [20] reported that after feeding humic substances there is a decrease in fat content and an increase in protein content in breast muscles. However, Semjon et al. [47] reported a significantly higher fat content in chicken thigh muscles after the addition of 0.8% HSs. Similarly, Ozturk et al. [12] reported that the addition of 0.5, 1.0 and 1.5% concentrations of HSs affected the fat and total protein contents differently. While the total protein content decreased after feeding feed supplemented with 1.0% HSs, no significant effect on the total protein content of breast muscles after feeding 0.5 and 1.5% concentrations was observed. 
On the contrary, all three experimental groups showed decline in the breast meat fat content compared to the control group. The results indicate that diets with HS addition cause reduction in fat content and, conversely, increase in the protein content of chicken breast muscles. This finding is very encouraging, especially regarding human dietary recommendations, where a lower fat diet and a higher protein diet is preferred. Meat of broiler chickens fed with feed supplemented with HSs can be considered a very valuable meat type compared to conventional commercial meat due to its improved nutritional composition. Several indicators of meat quality are known. One of the main parameters of meat quality is pH [49]. Levels of 5.8 or less 24 h after slaughter are recommended; higher values cause changes in meat quality, especially regarding color and tenderness [50,51]. Semjon et al. [47] reported a lower pH in the breast and thigh muscles of broiler chickens after the addition of 0.8% and 1.0% humic substances in diets. Similarly, Hudák et al. [20] also reported a decrease in the pH of thigh muscles after the addition of 0.7% HSs in the diet of broiler chickens. In comparison, Akaichi et al. [52] observed no effect on the pH of chicken breast muscles after feeding humic acids (0.1%) in feed. Similarly, Ozturk et al. [12] did not report changes in the pH of chicken breast muscles after feeding 0.5-1.5 g·kg −1 HSs in feed. In the thigh muscles, the pH of the meat was slightly higher compared to the control group. However, it is of importance that more experiments need to be conducted to unambiguously determine the effect of HSs on meat pH. Meat color is an important factor for the market. Meat color can be evaluated instrumentally or by a sensory panel. The instrumental approach is based mainly on the CIE system (International Commission on Illumination) that functions as the standard for color specification and measurement universally accepted. It takes three fundamental aspects into consideration, namely, luminosity or lightness (L*), red tones or redness (a*), and blue-yellow tones or yellowness (b*) [53]. The use of humic substances causes a change in the color of the breast muscles. However, the concentration of HSs used in feed plays an important role regarding the color of the meat. Hudák et al. [20] reported significantly lighter breast muscle meat after feeding 0.7% HSs in acidified forms. Akaichi et al. [52] did not observe any changes in the color of the breast muscles of chickens after feeding HSs compared to the control group. Disethle et al. [54] reported similar results. Semjon et al. [47] noted a change in the lightness and redness of breast meat with a 1.0% supplementation of HSs. The meat was observed to be significantly darker and redder in color than the meat of chickens from the control group. Even though the results on the effect of HSs on meat color are not unambiguous, we can conclude that HS feeding has a beneficial rather than a negative effect and is dependent on the concentration of HSs used in the feed. An important parameter for the acceptance of meat as an important food is its sensory qualities. Feed ingredients that can influence the sensory evaluation of meat are also of interest to poultry farmers and meat producers, as they can increase the attractiveness of products to consumers. HSs positively affect the sensory quality of meat. Semjon et al. 
[47] noted in the sensory evaluation of chicken breast meat a positive response with regard to the perception of meat quality in relation to the supplementation of humic substances in the diet. In particular, they noted a significant improvement in meat flavor after feeding 1.0% HSs. Sensory evaluation of taste indicated a positive response with respect to the perception of meat quality in relation to increased supplementation of HSs in the diet. A beneficial effect of HSs on the sensory evaluation of breast muscles of chickens was also reported by Akaichi et al. [52]. They noted a significant improvement in meat color and aroma.Čurlej et al. [46] noted an improvement in meat texture after the addition of 0.6% HSs to chicken feed. These results also confirm the beneficial effect of HSs in improving the sensory quality of meat and thus increasing its attractiveness to consumers. Beneficial effects of adding HSs to chicken feed regarding the water-binding capacity of meat and the water loss of meat after cooking were also observed. HSs had the effect of increasing the water-holding capacity of both breast and thigh muscles [12]. Another interesting finding from a consumer perspective is the increased water-holding capacity of meat after heat treatment [47]. This does not significantly reduce the weight of the meat and also preserves the juiciness of the meat. This may also be the reason why the meat of HS-supplemented chickens is evaluated as better from the sensory aspect after HS feeding. Effect of HSs on the Fatty Acid Profile of Meat Fat The effect of HSs on fatty acid profiles has not been investigated so far. Therefore, our research is focused on the effect of adding humic substances to broiler chicken feed mixtures on the fatty acid composition of meat and cavity fat. A significant finding is that, after feeding humic substances at concentrations of 0.8 and 1.0%, the n-6/n-3 PUFA ratio in the breast muscles decreased. There was also a significant increase in the proportion of oleic acid and a decrease in the proportion of arachidonic acid compared to the control group's breast muscles (Table 1). When fed 0.6% HSs, the n-6/n-3 PUFA ratio remained unchanged in both breast and thigh muscles (Table 2). However, there was a decrease in the proportion of oleic acid and an increase in arachidonic acid in the thigh muscles. We observed a desirable increase in the proportion of n-3 PUFAs, DPA, EPA and DHA after feeding with all three tested concentrations. The body fat profile was minimally affected by a decrease (0.6%) or increase (0.8 and 1.0%) in the proportion of oleic acid. In general, it can be concluded that humic substances have a different effect on the fatty acid profile of meat fat (breast and thigh muscles) and a different effect on body fat. A more pronounced effect of feeding humic substances on the fatty acid composition of meat fat was observed. The fatty acid profile of body fat was affected minimally and rather adversely. We can also conclude that humic substances, as natural substances, are able to reduce the proportion of n-6/n-3 PUFAs in meat fat. This is mainly due to a reduction in the proportion of arachidonic acid and a slight increase in EPA, DPA and DHA in the fat of chicken meat. Interestingly, different HS concentrations in broiler feed had different effects on the fatty acid profile. However, we cannot state unequivocally that the proportions of certain acids increased or decreased with increasing concentrations of humic components in chicken feed. 
Further studies, especially in terms of fat quality, will be needed in the future to confirm unequivocally the influence of different concentrations of humic substances on the fatty acid profile of the meat produced.
[Table 1 (footnote): C - control group with standard diet; 0.6% HS - 0.6% addition of HSs to feed; 1.0% HS - 1.0% addition of HSs to feed; C18:1 n-9 - oleic acid; C18:1 n-7 - vaccenic acid; C18:2 n-6 - linoleic acid; C18:3 n-6 - gamma-linolenic acid; C18:3 n-3 - alpha-linolenic acid; C20:3 n-6 - dihomo-gamma-linolenic acid; C20:4 n-6 - arachidonic acid; C20:5 n-3 - eicosapentaenoic acid; C22:6 n-3 - docosahexaenoic acid; SFAs - saturated fatty acids; UFAs - unsaturated fatty acids; PUFAs - polyunsaturated fatty acids. Values in rows with a different mark (a-c) are significantly different (p < 0.05).]
[Table 2 (fragmentary; only a few values survived extraction, e.g., 4.16 ± 0.03 a vs. 3.71 ± 0.08 b; ∑PUFAs n-6: 26.61 ± 0.17 a vs. 24.90 ± 0.33 a; n-6/n-3: 6.40 ± 0.26 vs. 6.72 ± 0.16). Footnote: C - control group; 0.6% HS - 0.6% addition of HSs to the broilers' diet; C14:0 - myristic acid; C18:1 n-9 - oleic acid; C18:2 n-6 - linoleic acid; C18:3 n-6 - gamma-linolenic acid; C18:3 n-3 - alpha-linolenic acid; C20:3 n-6 - dihomo-gamma-linolenic acid; C20:4 n-6 - arachidonic acid; C20:5 n-3 - eicosapentaenoic acid; C22:5 n-3 - docosapentaenoic acid; C22:6 n-3 - docosahexaenoic acid; SFAs - saturated fatty acids; UFAs - unsaturated fatty acids; PUFAs - polyunsaturated fatty acids. Values in rows with a different mark (a, b) are significantly different (p < 0.05).]
Oxidative Changes in Lipids after Humic Acid Supplementation
Lipid oxidation affects the most important quality attributes of meat, including sensory (flavor, color and texture) and functional properties (water-holding capacity and emulsifying ability). Therefore, the prevention of lipid oxidation in meat is important for meat quality and also for human health [55]. Humic acids are active substances with antioxidant effects: they promote the activity of antioxidant enzymes [56]. Vašková et al. [57] indicate strong antioxidant effects of humic acids in the body through the promotion of antioxidant enzymes. Aeschbacher et al. [58] report that the antioxidant effects may be based on the phenolic and quinone groups present in the structures of humic substances, which act as electron donors and acceptors. Polyphenols (lignin-derived compounds) are considered to be some of the main components of humic substances that contribute to their antioxidant effects [59]. In addition, they are able to chelate metals, especially iron and copper, inhibiting the formation of free radicals via transition metal catalysis, thereby controlling lipid peroxidation and DNA fragmentation [60]. Their antioxidant effects, although not precisely described, can also be seen in the stabilization of meat fats. The addition of 0.7% humic substances can ensure decreased lipid oxidation and increased antioxidant activity of broiler meat during chilled storage conditions [20]. Aksu et al. [44] observed a decrease in lipid oxidation in vacuum-packed breast and thigh muscles during chilled storage after the use of 0.1, 0.2 and 0.3% commercial humate in broiler chicken diets. Reitznerová et al. [61] reported that the oxidative stability of poultry meat after humic substance feeding was favorably affected and that breast meat stored for 12 months in the freezer had higher oxidative stability compared to that of the control group. Similarly, Marcinčáková et al.
[62] reported that, after feeding humic substances at a dose of 0.8% to chickens in the feed, the antioxidative stability of meat stored in a refrigerator and meat frozen for 12 months was similar to the antioxidative activity of meat from the control group. The results of Semjon et al. [47] also confirmed that humic substances in the diet of broiler chickens did not negatively affect nor increase the oxidative stability of the meat produced. The oxidative stability of meat after feeding 0.8 and 1.0% humic substances in the diet was not affected by the humic substances. On the contrary, the addition of humic substances slightly improved the oxidative stability of meat stored for 7 days in the refrigerator (Table 3). The better oxidative stability of the meat of broilers fed with HS addition could have been caused by a lower content of fat in comparison to meat of the control group, as noted also by Ozturk et al. [12]. The fatty acid profile of the meat after humic acid digestion did not change significantly, and, interestingly, there was an increase in oleic acid, which is oxidatively stable. Oxidation of thigh muscle fat is higher during storage, as thigh meat contains more fat and also a slightly higher proportion of unsaturated fatty acids than breast muscle meat [61]. However, the effect of HSs, as antioxidant components, was higher in thigh muscle meat, which was reflected in the lower amounts of oxidation products compared to the control group. This also supports the observation that, as the proportions of monounsaturated and polyunsaturated fatty acids in broiler chicken meat increased, fat oxidation also increased during storage of the samples [63]. Although the antioxidant effect of HSs in living organisms has been demonstrated, the effect of HSs on oxidative changes in meat has not been clearly confirmed. Recent work carried out suggests that HSs have an effect on the oxidative stability of fats at higher concentrations (1%) and in meat with a higher fat content. Conclusions The present review presents information on the beneficial effects of HSs in the diet of broiler chickens. The main confirmed benefits of using HSs as feed materials from the perspective of broiler chicken breeding are the formation of protective barriers on the intestinal mucosa, the stabilization of the intestinal microflora and the improvement of the immune system of broiler chickens. Equally important benefits of HS feeding include the improved bone quality of chickens, improved feed conversion and, in part, the final weight of chickens. In terms of the quality of the meat produced, an important benefit of HS feeding is the increase in protein content and the decrease in fat content of the breast meat produced. Positive findings also include the effect on the sensory characteristics of the meat, such as improved color and flavor. Among the confirmed effects of HSs on meat quality are the proven higher water-binding capacity of meat and thus lower water loss in meat after cooking. The addition of HSs can ensure lower lipid oxidation and higher antioxidant activity of broiler meat during chilling and freezing storage. A beneficial effect of HSs was also observed on the fatty acid profile of chicken breast muscle fat. However, at present, there are fewer data in this area and more experiments need to be carried out. 
Although there are many scientific papers confirming the beneficial effects of HSs on the health and production parameters of broiler chickens, as well as on meat quality, their mechanism of action remains unclear. However, their use in broiler chicken fattening is, according to scientific experiments, the right choice to improve the health and productivity of broiler chickens, as well as to increase the quality of the meat produced.
JPEG image steganography payload location based on optimal estimation of cover co-frequency sub-image
Accurate cover estimation is very important for the payload location of JPEG image steganography, but it is still hard to estimate the quantized DCT coefficients of a cover JPEG image exactly. Therefore, this paper proposes a JPEG image steganography payload location method based on optimal estimation of the cover co-frequency sub-images, which estimates the cover JPEG image using a Markov model of the co-frequency sub-images. The proposed method combines the coefficients at the same position in each 8 × 8 block of the JPEG image to obtain 64 co-frequency sub-images and then uses the maximum a posteriori (MAP) probability algorithm to find the optimal estimations of the cover co-frequency sub-images under the Markov model. Then, the residual of each DCT coefficient is obtained by computing the absolute difference between it and its estimated cover version, and the average residual over the coefficients at the same position of multiple stego images embedded along the same path is used to identify the stego positions. The experimental results show that the proposed payload location method can significantly improve the locating accuracy for the stego positions in low frequencies.
When the scheme of stego positions is known, if the investigator can locate the steganography payload with accuracy higher than random guessing, he (or she) can extract the hidden information by a collision attack. Although Quach [17] has proved the locatability of modified pixels in a single stego image, practical payload location algorithms designed for a single stego image can only locate the steganography payload with low accuracy, because it is very difficult to precisely estimate the cover of the given stego image and about half of the stego elements remain unchanged [18]. However, for the convenience of communication, many communication participants use the same key for a certain period of time and limit the embedding ratio. In that case, if they use multiple images of the same size to embed a large amount of data, the investigator may possess a number of stego images each containing payload at the same locations. Under such a scenario, in 2008, Ker [19] first proposed a payload location algorithm based on weighted stego-image (WS) residuals for least significant bit (LSB) replacement. After that, many payload location algorithms were proposed for spatial image steganography under this condition. Chiew and Pieprzyk [20] modified Ker's algorithm to locate the payload of binary image replacement steganography under the same condition. Ker and Lubenko [21] proposed a payload location algorithm for LSB matching, which filters the horizontal, vertical, and diagonal wavelet subbands of stego images with a Wiener filter and locates the stego pixel positions according to the absolute sum of the wavelet residuals at the same positions of multiple images that had messages embedded into the same positions. Quach [22,23] proposed several payload location algorithms for LSB replacement and LSB matching, which employ the Viterbi decoding algorithm or the Quadratic Pseudo-Boolean Optimization (QPBO) algorithm to find the optimal estimate of the cover image and compute the residuals between the estimated cover images and the stego images to locate the payload. Gui et al.
[24] proposed a payload location algorithm for LSB matching steganography by fusing the mean of the 4-neighborhood pixels with 8 residuals computed along 8 different directions by the algorithm proposed by Quach [22]. Liu et al. [25] proposed a payload location algorithm for messages embedded by LSB replacement or LSB matching into spatial images that had undergone JPEG compression, which estimates the cover images by JPEG recompressing the stego images and decompressing the recompressed versions. Yang et al. [15] proved the properties of the optimal stego subset of multiple least significant bits (MLSB) steganography, then proposed a payload location algorithm and a stego key recovery algorithm based on the optimal stego subset. Sun et al. [26] proposed a payload location algorithm based on a tailored deep neural network (DNN) equipped with an improved feature named the "mean square of adjacency pixel difference." The above algorithms can locate the payload of LSB replacement, LSB matching, and MLSB replacement steganography with high accuracy and can even be used to estimate groups in group parity steganography or to extract the hidden message in some special cases. However, they cannot work for steganography algorithms that use JPEG images as covers. For messages embedded into JPEG images, the authors [27] recently proposed a payload location method based on co-frequency sub-image filtering for a category of pseudo-randomly scrambled JPEG image steganography. The accuracy of this payload location method is influenced by the fidelity of the estimated cover images and can be improved if a more precise estimator can be designed. Motivated by the optimal cover estimation method proposed by Quach in [22] for spatial image steganography, this paper proposes a payload location method for JPEG image steganography based on the optimal estimation of cover co-frequency sub-images. Instead of directly applying the maximum a posteriori (MAP) probability algorithm to the given stego spatial image to estimate the cover spatial image, as in [22], the proposed method divides the stego JPEG image into 64 co-frequency sub-images, then applies the MAP algorithm to estimate the optimal cover co-frequency sub-images, and combines them to obtain the optimal cover JPEG image. This makes use of the correlation between the coefficients at the same position of adjacent 8 × 8 blocks. The structure of this paper is as follows: Section 2 briefly introduces the random JPEG image steganography targeted in this paper. Section 3 proposes the payload location method based on the optimal estimation of cover co-frequency sub-images. Section 4 gives a specific payload location algorithm for F5 steganography. Section 5 presents the experimental results and discussion. Finally, the paper is summarized in Section 6.
Related work - Pseudo-random JPEG image steganography
In order to improve the security of JPEG image steganography, the steganographer often embeds secret messages into pseudo-randomly scrambled quantized DCT coefficients. Because there are many quantized DCT coefficients with a value of 0 in JPEG images, embedding messages into these coefficients would leave a suspicious artificial clue for the steganalyzer. Thus, many JPEG image steganography methods do not embed message bits into these coefficients, nor into coefficients whose values would be changed to 0. These JPEG image steganography methods can be described as follows.
Input: a cover JPEG image C = c_1 c_2 … c_N, a secret message bit sequence M = m_1 m_2 … m_L, and a stego key K.
Output: a stego JPEG image.
Steps:
1. Scramble the quantized DCT coefficients in the cover JPEG image C according to the stego key K to generate the scrambled coefficient sequence C' = c'_1 c'_2 … c'_N.
2.5. Embed the ith message bit into the jth coefficient c'_j.
2.6. If the embedding changes the value of coefficient c'_j to a value which cannot carry a message (for example, F5 steganography changes the coefficient value 1 to 0), increase the index of the scrambled coefficient, viz. j = j + 1. If j > N, return 0; otherwise go to step 2.3.
2.7. Increase the index of the secret message bit, viz. i = i + 1. If i > L, go to step 3.
2.8. Increase the index of the scrambled coefficient, viz. j = j + 1. If j > N, return 0; otherwise go to step 2.2.
3. Inverse scramble the coefficient sequence after embedding according to the stego key K.
4. Encode the obtained coefficient sequence into a stego JPEG image and return the generated stego JPEG image.
3 Methods - Payload location based on optimal estimation of cover co-frequency sub-image
Principle
When the secret messages are embedded into the pseudo-randomly scrambled coefficients as described in Section 2, if the investigator possesses T stego images S_1, S_2, ⋯, S_T embedded along the same embedding path, then one of the following two cases applies to the coefficients S_1(i, j), S_2(i, j), …, S_T(i, j) at the same position (i, j) of the T stego images:
1) If the position (i, j) is a stego position, the steganographer decides whether to embed a message bit into the coefficient at this position according to whether the coefficient is available. Thus, each of the coefficients S_1(i, j), S_2(i, j), …, S_T(i, j) is either an unavailable coefficient or a stego coefficient carrying a message bit.
2) If the position (i, j) is a non-stego position, the steganographer does not embed a message bit into the coefficient at this position regardless of whether the coefficient is available. Thus, none of the coefficients S_1(i, j), S_2(i, j), …, S_T(i, j) carries a message bit.
Let C_1, C_2, …, C_T denote the corresponding cover images of the stego images S_1, S_2, …, S_T. A residual r_t(i, j) of the coefficient at position (i, j) of the tth stego image is defined as
r_t(i, j) = |S_t(i, j) − C_t(i, j)|. (1)
Let r̄(i, j) denote the mean of all r_t(i, j) over the T stego images at position (i, j). If the position (i, j) is a non-stego position, r̄(i, j) must equal 0, viz. r̄(i, j) = 0. If the position (i, j) is a stego position, r̄(i, j) must be larger than or equal to 0, viz. r̄(i, j) ≥ 0, where equality holds only when none of the coefficients C_1(i, j), C_2(i, j), …, C_T(i, j) is modified. When one possesses enough stego images, the probability that none of the coefficients C_1(i, j), C_2(i, j), …, C_T(i, j) is modified is small. Thus, the investigator should be able to distinguish the stego positions from the non-stego positions according to the means of the residuals if he can obtain the cover images. However, the investigator usually does not know the cover JPEG images.
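To make the decision rule implied by Eq. (1) concrete, the following minimal sketch (Python/NumPy, not from the original paper) computes the per-position mean residual over T images and thresholds it, assuming the true covers are available; the array shapes, the function name locate_payload and the threshold thr are illustrative assumptions. The rest of this section replaces the unknown true covers with estimates.

```python
import numpy as np

def locate_payload(stego_coeffs, cover_coeffs, thr):
    """Oracle version of the residual-mean rule.

    stego_coeffs, cover_coeffs: arrays of shape (T, H, W) holding the
    quantized DCT coefficients of T stego images and their covers.
    Returns a boolean map: True where the mean residual exceeds thr.
    """
    residuals = np.abs(stego_coeffs - cover_coeffs)   # r_t(i, j) of Eq. (1)
    mean_residual = residuals.mean(axis=0)            # average over the T images
    return mean_residual > thr                        # candidate stego positions
```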
In this case, if the investigator can estimate the cover images, denoted by Ĉ_1, Ĉ_2, …, Ĉ_T, he can compute the mean of the estimated residuals at the same position (i, j) of the different stego images as follows:
r̂(i, j) = (1/T) Σ_{t=1}^{T} |S_t(i, j) − Ĉ_t(i, j)|. (2)
If the investigator possesses enough stego images embedded along the same path and can estimate their covers accurately enough, he may also be able to distinguish the stego positions from the non-stego positions with a success rate higher than a random guess based on the averaged estimated residuals as follows:
f(i, j) = 1 if r̂(i, j) > Thr, and f(i, j) = 0 otherwise, (3)
where f(i, j) = 1 denotes that the position (i, j) is determined to be a stego position, f(i, j) = 0 denotes that the position (i, j) is determined to be a non-stego position, and Thr is a decision threshold. Certainly, the more accurately the cover JPEG images are estimated, the higher the accuracy of payload location. Therefore, in the following subsection, a method is proposed to estimate the optimal cover co-frequency sub-images and then combine them to estimate the cover JPEG image.
Optimal cover JPEG image estimation
In [22], Quach et al. considered the strong correlation between neighboring pixels of a spatial image and used the maximum a posteriori (MAP) probability algorithm to estimate the optimal cover image corresponding to a stego image of LSB replacement or LSB matching steganography, which was then used to locate the hidden information. In JPEG compression, the DCT transformation of pixel values greatly reduces the correlation between adjacent coefficients. Moreover, in order to improve the efficiency of JPEG compression, the DCT transformation is performed on each non-overlapping pixel block with a size of 8 × 8. Since the coefficients at the same position represent the magnitude of energy at the same frequency, and adjacent blocks in an image still have strong similarity, the coefficients at the same position of adjacent blocks still have a strong correlation. Based on this property, this section uses the same method as in [27] to divide the given JPEG images into 64 co-frequency sub-images, then uses the maximum a posteriori probability algorithm to estimate the optimal cover co-frequency sub-images, and combines them to get the optimal estimation of the cover JPEG image.
Markov model of co-frequency sub-image
Let S_t^d and C_t^d denote the co-frequency sub-images composed of the dth quantized DCT coefficients in all 8 × 8 blocks of the tth stego image and its cover image, d = 1, 2, …, 64. In a statistical sense, the optimal estimation of the cover co-frequency sub-image corresponding to S_t^d should be the cover co-frequency sub-image estimation Ĉ_t^d that maximizes the posterior probability, i.e.,
Ĉ_t^d = argmax_{C_t^d} P(C_t^d | S_t^d). (4)
Then, the optimal cover co-frequency sub-image estimation is transformed into a problem of maximum a posteriori probability estimation. Similar to [22], the following two assumptions are set:
P(S_t^d | C_t^d) = ∏_i P(S_t^d(i) | C_t^d(i)), (5)
P(C_t^d(i) | C_t^d(i−1), …, C_t^d(1)) = P(C_t^d(i) | C_t^d(i−1), …, C_t^d(i−k)), (6)
where k is a given positive integer. Eq. (5) indicates that each quantized DCT coefficient in the stego co-frequency sub-image is only related to the corresponding quantized DCT coefficient in the cover co-frequency sub-image, while Eq. (6) indicates that the cover co-frequency sub-image C_t^d is modeled with a k-order Markov model. For a given steganography algorithm, one can calculate the probabilities that a quantized DCT coefficient value changes to different possible values under a specific embedding rate α, viz. the transition probability in assumption (5). Besides, the prior probability in (6) can be computed from a large number of cover images.
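As an illustration of the data layout used below, the following sketch (Python/NumPy, not part of the original paper) shows one way to regroup a plane of quantized DCT coefficients stored in 8 × 8 blocks into 64 co-frequency sub-images, and to accumulate simple first-order co-occurrence counts from a cover sub-image, from which the prior in (6) can be normalized; the row-wise scan is only one of the four modes mentioned below, and all names and shapes are assumptions.

```python
import numpy as np
from collections import Counter

def co_frequency_subimages(coeff_plane):
    """Regroup an (H, W) array of quantized DCT coefficients, stored as 8x8
    blocks, into 64 co-frequency sub-images of shape (H//8, W//8).
    Sub-image d holds the coefficient at intra-block position d (row-major)."""
    h, w = coeff_plane.shape
    blocks = coeff_plane.reshape(h // 8, 8, w // 8, 8)
    return blocks.transpose(1, 3, 0, 2).reshape(64, h // 8, w // 8)

def first_order_counts(cover_subimage):
    """Count horizontally adjacent coefficient pairs in a cover sub-image;
    normalizing these counts gives an estimate of p(c_k | c_{k-1})."""
    counts = Counter()
    for row in cover_subimage:
        for prev, cur in zip(row[:-1], row[1:]):
            counts[(int(prev), int(cur))] += 1
    return counts
```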
After dividing all quantized DCT coefficients into 64 co-frequency sub-images, each sub-image is scanned in four modes, as shown in Fig. 1, to calculate the co-occurrence matrices of adjacent elements. In a JPEG image, the distributions of coefficient values in different co-frequency sub-images show obvious differences. As shown in Fig. 2, the absolute values of coefficients in the low frequencies (corresponding to the upper-left positions) are usually larger and equal zero with the lowest probabilities, while most of the absolute values of coefficients in the high frequencies (corresponding to the lower-right positions) equal zero. Figure 3 presents the frequencies of zero coefficients in the different sub-images, where 10,000 images with a size of 512 × 512 from BOSSbase 1.01 (http://agents.fel.cvut.cz/stegodata/) are JPEG compressed with a quality factor of 75. The abscissa is the index of the position in the 8 × 8 block, from left to right and top to bottom. It can be seen that the relative frequencies of zero coefficients in the sub-images corresponding to the lower-right positions are close to 1.
Optimal cover JPEG image estimation based on first-order Markov model
In theory, we should compute the probabilities of all possible covers and search for the cover which satisfies Eq. (4). However, there are too many possible coefficient values in the cover image to search the whole space. Fortunately, the co-frequency sub-image can be modeled by a hidden Markov model, and the Viterbi algorithm is a common method for solving such problems. It has been used for cover image estimation of spatial steganography such as LSB replacement and LSB matching in [22]. Therefore, the Viterbi algorithm is also adopted here to search for the optimal cover co-frequency sub-image. The Viterbi algorithm first computes the scores of the possible values of the first cover element as follows:
δ_1(c_1) = p(c_1) · p(s_1 | c_1). (7)
Then, the scores of the possible values of the subsequent cover elements are computed as follows:
δ_k(c_k) = max_{c_{k−1}} [δ_{k−1}(c_{k−1}) · p(c_k | c_{k−1})] · p(s_k | c_k), (8)
where s_k denotes the kth element of the stego co-frequency sub-image and c_k ranges over the possible values of the kth cover element. Take a stego co-frequency sub-image with four quantized DCT coefficients S = (2, 0, −1, 1) of the typical F5 steganography as an example, where the embedding ratio is 0.5. According to the embedding rule of F5 steganography, the possible values of the four cover coefficients are c_1 ∈ {2, 3}, c_2 ∈ {−1, 0, 1}, c_3 ∈ {−1, −2}, and c_4 ∈ {1, 2}. Figure 4 shows the trellis for the Viterbi algorithm, which takes the possible values of the four cover coefficients as nodes. The Viterbi algorithm first computes the scores of the nodes in the first column of the trellis, where the value of p(c_1) can be obtained from the statistics of a large number of cover JPEG images. For ease of understanding, it is assumed that the values of p(c_1) are as shown in the second column of Table 1. When the embedding ratio of F5 steganography is q, the coefficient value transition probability of F5 steganography is as follows:
p(s | c) = 1 − q/2 if s = c ≠ 0; p(s | c) = q/2 if |c| ≥ 1 and s = c − sign(c); p(s | c) = 1 if s = c = 0; and p(s | c) = 0 otherwise. (9)
Then the scores of the subsequent nodes are computed in sequence by Eq. (8), and each node is connected to the previous node which maximizes its score. The values of p(c_k | c_{k−1}) can also be obtained from the statistics of a large number of cover JPEG images. It is assumed that the values of p(c_k | c_{k−1}) are as shown in the last column of Table 1.
Fig. 4. The trellis for the Viterbi algorithm based on the first-order cover probability model.
Table 1. Example of the first-order cover probability model.
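The trellis computation for this toy example can be sketched as follows (Python, not from the paper). Because the prior and transition probabilities of Table 1 are not reproduced here, the prior and trans dictionaries passed in are illustrative stand-ins, and p_stego_given_cover encodes the assumed F5 transition model stated above (a usable coefficient is left unchanged with probability 1 − q/2 and moved one step toward zero with probability q/2).

```python
q = 0.5  # embedding ratio

def candidate_covers(s):
    """Possible cover values for an observed F5 stego coefficient s."""
    if s == 0:
        return [-1, 0, 1]                 # +/-1 may have shrunk to 0; 0 stays 0
    return [s, s + 1] if s > 0 else [s, s - 1]

def p_stego_given_cover(s, c):
    """Assumed F5 transition model (see Eq. (9) above)."""
    if c == 0:
        return 1.0 if s == 0 else 0.0
    if s == c:
        return 1.0 - q / 2
    if s == c - (1 if c > 0 else -1):     # |c| decreased by one
        return q / 2
    return 0.0

def viterbi(stego, prior, trans):
    """prior[c] ~ p(c_1), trans[(c_prev, c)] ~ p(c_k | c_{k-1}); both illustrative."""
    cands = [candidate_covers(s) for s in stego]
    score = {c: prior.get(c, 1e-6) * p_stego_given_cover(stego[0], c) for c in cands[0]}
    back = []
    for k in range(1, len(stego)):
        new_score, new_back = {}, {}
        for c in cands[k]:
            best_prev = max(score, key=lambda p: score[p] * trans.get((p, c), 1e-6))
            best_score = score[best_prev] * trans.get((best_prev, c), 1e-6)
            new_score[c] = best_score * p_stego_given_cover(stego[k], c)
            new_back[c] = best_prev
        back.append(new_back)
        score = new_score
    path = [max(score, key=score.get)]    # best node in the last column
    for bp in reversed(back):
        path.append(bp[path[-1]])         # trace back through the trellis
    return list(reversed(path))
```

With priors and transition probabilities that sufficiently favor the modified candidates, viterbi([2, 0, -1, 1], prior, trans) yields [3, -1, -2, 2], matching the estimate discussed next; with other assumed probabilities the selected path can differ.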
Finally, take the coefficient values on the path ending at the node with the largest score in the last column as the optimal estimation of the cover coefficients, as shown by the gray node in Fig. 4. It can be seen that when the embedding ratio is 0.5, the optimal estimation of the cover coefficient sequence of S = (2, 0, −1, 1) is ĉ = (3, −1, −2, 2). After the optimal estimation of each cover co-frequency sub-image is obtained by the Viterbi algorithm, one can place the coefficients of all estimated cover co-frequency sub-images back at their original positions to combine them into the optimal estimation of the cover JPEG image. The whole process is shown in Fig. 5 and described in Algorithm 1. In theory, each cover co-frequency sub-image may be estimated more precisely by the first-order Markov model of the corresponding frequency. However, in many frequencies there are a large number of coefficients with a value of 0, so the statistics of the non-zero coefficients are not significant. Thus, in what follows, the first-order Markov model merged over different positions is used to estimate the cover co-frequency sub-images.
Payload location algorithm for F5 steganography without Matrix Encoding
The F5 steganography algorithm improves F4 by using shuffling. In F5 steganography, positive odd and negative even coefficients represent the bit 1, while positive even and negative odd coefficients represent the bit 0; the DCT coefficients with a value of 0 and the DC coefficients do not carry secret information. The coefficient value transition probability of F5 steganography is given by (9). When T stego JPEG images of F5 steganography are given, we can adopt existing quantitative steganalysis algorithms to estimate the embedding ratios and then use the proposed Algorithm 1 of Section 3 to estimate the corresponding cover JPEG images. Each given stego JPEG image can be scanned in the 4 different modes shown in Fig. 1, so that 4 estimated cover JPEG images are obtained by Algorithm 1. After that, the residuals between the given stego image and the estimated cover JPEG images are computed by Eq. (10), which is slightly different from the previous residual calculation in Eq. (1). For each position, 4T residuals can be computed from the given T stego JPEG images and the 4T estimated cover JPEG images by (10) and then averaged. The averaged value is used to determine whether the position is a stego position. The detailed steps of the payload location for F5 steganography are given in Algorithm 2.
Experimental setup
In total, 10,000 PGM images with a size of 512 × 512 were downloaded from BOSSbase 1.01 and converted to cover JPEG images with a quality factor of 75. Nine thousand images were randomly selected from the generated cover JPEG images to estimate the first-order Markov model of the cover co-frequency sub-images. The remaining 1000 images were used to test the performance of the proposed algorithm. A pseudo-random path was generated by scrambling the integer sequence 1, 2, …, 512 × 512. Then, along the generated path, pseudo-random message bits were embedded into the remaining 1000 images by F5 steganography (without matrix encoding) with ratio q = 0.5.
Markov model selection
From Algorithms 1 and 2, it can be seen that the payload location accuracy is highly affected by the adopted first-order Markov model.
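To fix ideas, the overall pipeline of Algorithm 2 as described above can be summarized in the following sketch (Python/NumPy, not from the paper). Here estimate_embedding_ratio and estimate_cover stand in for a quantitative steganalyzer and for Algorithm 1 run with a given scan mode; the exact form of the residual in Eq. (10) is not reproduced in the text, so an absolute difference is assumed, and all names are placeholders.

```python
import numpy as np

SCAN_MODES = ("rows", "columns", "diagonal", "anti_diagonal")  # the four modes of Fig. 1

def locate_f5_payload(stego_images, thr, estimate_embedding_ratio, estimate_cover):
    """Sketch of Algorithm 2: average the 4*T residuals per position and threshold.

    stego_images: list of (H, W) arrays of quantized DCT coefficients.
    estimate_embedding_ratio, estimate_cover: injected helper callables.
    """
    residual_sum = np.zeros_like(stego_images[0], dtype=float)
    count = 0
    for stego in stego_images:
        ratio = estimate_embedding_ratio(stego)            # quantitative steganalysis
        for mode in SCAN_MODES:
            cover_hat = estimate_cover(stego, ratio, mode)  # Algorithm 1 (MAP/Viterbi) per mode
            residual_sum += np.abs(stego - cover_hat)       # assumed absolute-difference residual
            count += 1
    mean_residual = residual_sum / count
    return mean_residual > thr                              # positions flagged as stego positions
```

The quality of estimate_cover, and hence of the whole pipeline, depends directly on the first-order Markov model it uses, which is the subject of the model selection discussed next.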
In Section 3, we suggested merging the Markov models over different frequencies to estimate the cover co-frequency sub-images more precisely. Thus, we tried to merge the proper Markov models. First, the 64 Markov models m_1, …, m_64, counted from the sub-images corresponding to the 64 positions in the 8 × 8 matrix, were applied to estimate the cover JPEG images separately, and the Markov model m_i with the highest payload location accuracy was selected. Then, each of the remaining 63 models was merged with m_i to obtain 63 new merged models m_i1, …, m_i63, and the merged Markov model m_ij with the highest payload location accuracy was selected. This operation was repeated until all models were merged. The merged model with the highest payload location accuracy was selected as the final model. One thousand test stego JPEG images with an embedding ratio of 0.5 were used to select the proper merged Markov model. Table 2 presents the location correctness of each co-frequency sub-image with the single corresponding Markov model, namely, the 64 co-frequency sub-image models are used for the corresponding sub-images, respectively. Table 3 shows the results when the optimal merged Markov model was used. In Tables 2 and 3, the correctness of the upper-left position is not shown because the DC coefficients are not changed by F5 steganography. Comparing Table 2 with Table 3, we can see that for most positions, the location accuracy obtained with the optimal merged Markov model is much higher than that obtained with the individual models. In particular, the algorithm with the optimal merged Markov model can correctly distinguish the stego positions in low frequencies with accuracy close to 90%, and even close to 95%. For the high-frequency positions, because there are very few available coefficients, it is still hard to distinguish the stego positions. Figure 6 shows the payload location accuracy of MAP-F5 with the optimal merged Markov model for different numbers of stego images when the embedding ratio is 0.5. It can be seen that the more stego images are available, the higher the accuracy. As the number of images increases, the fluctuation of the residual means becomes smaller, and the residual means reflect more closely the changes caused by information embedding. Therefore, the number of stego images is very important for locating the stego positions. Figure 7 compares the accuracies of the proposed algorithm and the payload location algorithm based on co-frequency sub-image wavelet filtering (CSW-F5) [27]. The 1000 stego images were generated with the same embedding path and an embedding ratio of 0.5. In the upper-left corner of the 8 × 8 block, where the number of 0 coefficients is relatively small, MAP-F5 obtains better results than CSW-F5. In practice, the results of the two payload location algorithms can be further combined.
Conclusions
This paper proposes a payload location method based on optimal estimation of cover co-frequency sub-images. The proposed method divides each given stego JPEG image into 64 co-frequency sub-images, then estimates the optimal cover JPEG image by applying the maximum a posteriori probability algorithm to the co-frequency sub-images, and finally determines the stego positions according to the averaged residuals between the multiple given stego images embedded along the same path and the estimated cover images. The proposed method is applied to payload location for F5 steganography without matrix encoding, and the experimental results show that the proposed algorithm can locate the stego positions with higher accuracy than prior works.
However, the proposed payload location method does not work for modern adaptive JPEG steganography such as J-UNIWARD, UERD, and GUED. In future work, we will therefore try to adapt the proposed cover JPEG image estimation method to these adaptive schemes. We will also try to improve performance by using unsupervised learning to cluster image blocks with similar content [28].
Analyses Using SSR and DArT Molecular Markers Reveal that Ethiopian Accessions of White Lupin ( Lupinus albus L . ) Represent a Unique Genepool PCR-based genic and microarray-based Diversity Arrays Technology (DArTTM) markers were used to determine genetic diversity in 94 accessions of white lupin (Lupinus albus L.) comprising Australian and foreign cultivars, landraces, and advanced breeding lines from Australian breeding programs. A total of 345 (50 PCR-based and 295 DArT-based) polymorphic fragments were identified, which were used to determine the genetic diversity among accessions. Both cluster analysis of bivariate marker data using UPGMA, and principal coordinate analysis, indicated a high level of genetic diversity in the germplasm. Our results showed that both types of markers used in this study are suitable for estimation of genetic diversity. Landrace accessions from Ethiopia formed a very distinct and separate grouping with both marker systems. Australian cultivars and breeding lines were clustered together and tended to be distinct from European landraces. These findings will allow breeders to select appropriate, diverse parents to broaden the genetic base of white lupin breeding populations. Introduction Lupinus albus L. (white or broad-leaf lupin, 2n = 50), a member of the Leguminosae, is an annual grain-legume crop grown in Australia and other parts of the world.In Australia, L. albus is grown as an important, high-protein rotational grain crop and is useful in controlling cereal diseases in a mixed-farming crop rotation, and provides alternative herbicide options for weed control.L. albus fixes atmospheric nitrogen through its symbiosis with rhizobia and it is an efficient scavenger of phosphorus due to the presence of proteoid roots which secrete organic acids, increase phosphorus solubility and make it more accessible for plant uptake [1]- [5].Over the past 3000 years, white lupins have been utilised as feed for livestock (cattle, dairy cows, sheep, horses, and poultry), aquaculture, and food for human consumption [6]- [8]. Previous work has indicated that the Balkans region in the Mediterranean basin is the likely centre of origin of the L. albus species [9].In these locations the primitive brown seed colour (graecus) is found, along with genotypes which have shattering pods, hard-seeds, plus high-alkaloid content in the seeds and foliage to deter herbivores [10].These characteristics have all been replaced with important alternatives (low-alkaloid, white soft seeds, and non-shattering pods) during domestication and the modern breeding process [11].White lupin and other Lupinus species have been fully domesticated only recently when compared with most crops [12]. A number of ex situ germplasm collections are in existence containing genetic resources for L. 
albus.The world's largest lupin germplasm collection is located at Perth, Western Australia, despite the genus Lupinus being totally absent from the wild in that continent apart from recently naturalised introductions.Like many species exploited by Australian agriculture, the genetic material is sourced 100% from elsewhere, although significant progress in white lupin breeding has been made within Australia for yield, grain quality and disease resistance [6] [13] [14].Lupin germplasm collections can be exploited to identify novel genotypes which may contain novel genes for traits of commercial value (e.g., disease resistance), and to broaden the genetic base of lupin breeding programs.Genotypes from Ethiopia are the source of important genes for worldwide L. albus breeding, particularly resistance to the wide-spread and devastating fungal disease anthracnose [15] [16]. Genetic diversity in lupins has been characterized using morphological and agronomical attributes [17], and isozymes [18].The assessment of genetic diversity on the basis of morphological traits is not very reliable, as it may be influenced by the environment, and the list of traits with known inheritance is often limited.PCR based markers have the distinct advantages of being independent of the external environment, abundant, and relatively inexpensive and quick to assay.Molecular markers, including randomly amplified polymorphic DNAs (RAPDs), inter-simple sequence repeats (ISSRs), amplified fragment length polymorphisms (AFLPs), and randomly amplified microsatellite polymorphism (RAMP) have been used to assess genetic diversity in white lupin and other species of Lupinus [19]- [21]. Croxford et al. [41] developed STS markers in white lupin and used them to construct a genetic linkage map.Phan et al. [22] located 105 gene-based PCR markers in a RIL mapping population of white lupin.These markers were based on Intron Targeted Amplified Polymorphisms (ITAP), EST-derived SSR motifs, and Medicago truncatulata cross-specific amplicons [23].In this study, we have referred to them as PCR-based "genic" markers.The majority of these markers were locus-specific and evenly distributed on the chromosomes.Recently, Diversity Arrays Technology markers (DArT™) [24] were developed in white lupin [25].DArT™ markers are microarray based and are amenable to high-throughput genotyping and are cost-effective per data point, making them suitable for screening large number of individuals.These markers have been employed for genetic fingerprinting and diversity assessment, molecular mapping of different genomes, and development of marker-trait associations in several crops including: wheat, barley, cassava, canola, rice, and white lupin [24] [26]- [31].However, the usefulness of these markers in the assessment of L. albus genetic diversity has not been determined. The availability of a suite of markers based upon structural and functional genes [22] plus a DArT lupin chip, provided an opportunity to assess the genetic diversity and population structure in the germplasm available to the white lupin breeding program at Wagga Wagga, and the opportunity to compare the results from the two marker systems. 
Plant Material and DNA Extraction Seeds of 94 accessions of white lupin (Table 1) representing local and foreign varieties, landraces, and advanced breeding lines were provided by the lupin breeding program, located at the New South Wales Department of Primary Industries, Wagga Wagga, Australia.Non-breeding accessions were originally provided by the Australian Lupin Germplasm collection, DAFWA, Perth, Western Australia.They had been imported through quarantine from other collections and following field collection trips [9].Seeds were grown either in 250 mm diameter pots in an evaporatively-cooled glasshouse using sandy loam potting mix, or in row-plots in an insect-proof screen house with an irrigated and fertile chromic luvisol soil [32] of pH 5.0, both located at Wagga Wagga Agricultural Institute (35˚03'07"S; 147˚21'06"E).Group "G" rhizobia were added where required to facilitate good nodulation.One leaflet was taken from each of 10 individual plants per genotype.The leaflets were bulked for DNA isolation.The bulk sampling procedure [20] was followed as an efficient way to determine genetic diversity both between and within germplasm accessions.Total genomic DNA was isolated from the pooled leaflet tissue as described by Raman et al. [33].Molecular marker analysis was performed as described below. Phase 1 Initially a subset of eight accessions (Table 1) thought to be potentially diverse was tested for polymorphisms: Kiev-mutant, Rosetta, Lucky-1, P27174, P25758, P27593, XA100 and Start.These genotypes were important because they included the parents of the mapping population used to produce the L. albus linkage maps [22] [25], to locate the loci for low seed-alkaloid content (pauper) [34], and to develop PCR markers for resistance to anthracnose [35] and phomopsis [31].In addition, they included parents of other mapping populations made for use in research to identify markers for loci controlling Pleiochaeta Root Rot resistance, and low seed alkaloid content locus exiguus [6]. Sixty-three published primer sequences of Lupinus angustifolius and L. albus [22] [23] were tested across the eight L. albus genotypes.A total of 30 combinations of primer and restriction enzyme were employed to generate 70 resolvable polymorphic fragments.The number of fragments varied from 1 to 8 per marker (mean = 2.23).Six random individuals per genotype were visually examined for their phenotypic uniformity.All genotypes were uniform, which was not surprising given that these genotypes had been carefully grown to prevent insectmediated cross pollination for several generations at Wagga Wagga. Phase 2 For the screening of the complete 94 accession/genotype set (Table 1), 20 polymorphic markers that were easy to score were employed for genetic diversity analysis (Table 2).PCR analyses were performed following the recommended PCR thermocycler programs [22] [23].The 5' ends of the primers generating amplicons below 400 base pairs in size were tailed with the M13 sequence as described previously [36].SSR genotyping was performed using tailed and labelled M13 primers as described by Raman et al. [33].Amplified DNA fragments were separated and visualised on a CEQ8000 DNA sequencer (Beckman Coulter Inc.) 
and their sizes measured using fragment analysis software from the manufacturer [33].Primers generating amplicons over 400 bp were used as standard oligonucleotides and PCR products were separated by electrophoresis on either 2.5% (w/v) agarose or 8% (w/v) polyacrylamide gels.Restriction enzyme digestion of PCR products (CAPS analyses) were performed as described previously [22] [23].The digested products were resolved on 2% (w/v) agarose gels.All gels were stained with ethidium bromide and visualized on UV transilluminator. Data Analysis The allele data for PCR-based genic markers were converted to a presence (1)-absence (0) matrix for analysis.Markers which involve one or more restriction enzyme digests of the PCR products are CAPS markers [45].$ L. albus genetic map linkage group as defined by [22]. Binary data resulting from the DArT marker analysis was also analysed. Tree Construction and Principal Coordinate Analysis Dissimilarity matrices were calculated for single data based on the presence/absence of alleles using the Jaccard coefficient as implemented in DARwin 5 software (URL: http://darwin.cirad.fr/Home.php)[38].Cluster analysis was performed using the unweighted neighbour-joining method [39] with 1,000 bootstraps.A cophenetic correlation was calculated to compare the dissimilarities and the distances between accessions as represented in the dendrograms from the PCR-based genic and DArT sources.Principal Coordinate (PCO) analysis was conducted to visualise the genetic relationships among the accessions as described by Anderson et al. [40].The first two dimensions representing the largest components of the total variance were used to generate a diagnostic scatter plot. Results Forty eight per cent (30/63) of the PCR-based markers were found to be polymorphic among the initial set of eight accessions (Table 1).For the 20 easy-to-score markers that were subsequently employed to estimate genetic diversity among the 94 accessions of white lupin, a total of 50 polymorphic fragments were identified.The most informative marker locus (an SSR) was PT1, at which nine alleles were identified among the 94 accessions (Table 2).295 of the DArT markers were polymorphic with a call rate of more than 80% (quality threshold).Neighbour-joining trees constructed using 21 PCR-based and 295 DArT markers (Figure 1 and Figure 2) showed a high level of genetic diversity in the germplasm collection.High cophenetic coefficients of 0.91 and 0.97 for PCR based and DArT markers, respectively, indicated that both types of markers gave a good correlation between genetic distance matrices and tree structures. The dendrogram constructed from the PCR-based marker data identified only two large clades plus one smaller one (Figure 1).Groupings of genotypes in these clades, based on their geographic origin, were not obvious.However, the dendrogram resulting from the DArT data showed a clear grouping of accessions into four clades (Figure 2).Most of the landraces (Table 1) were grouped in Clade 2, and the breeding lines and varieties into Clade 1, with few exceptions.Six of the seven Ethiopian landraces (P27172, P27174, P28507, P28552, P28561 and P28573) formed a separate, distinct group in Clade 3, along with two progeny from crosses with Ethiopian lines (Andromeda and WALAB2008).Clade 4 contained only three accessions of which two are landraces from the Mediterranean region. 
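For readers who want to reproduce this kind of analysis outside DARwin 5, the snippet below sketches the two core computations named in the Data Analysis section: a Jaccard dissimilarity matrix from a binary presence/absence marker table and a principal coordinate (classical multidimensional scaling) ordination. It is a generic illustration and makes no claim to match the exact settings behind the published figures.

```python
import numpy as np

def jaccard_dissimilarity(markers):
    """markers: (n_accessions, n_markers) binary 0/1 presence/absence matrix."""
    m = np.asarray(markers, dtype=bool)
    n = m.shape[0]
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            shared = np.sum(m[i] & m[j])     # fragments present in both accessions
            either = np.sum(m[i] | m[j])     # fragments present in either accession
            d[i, j] = d[j, i] = 0.0 if either == 0 else 1.0 - shared / either
    return d

def principal_coordinates(d, k=2):
    """Classical MDS (PCoA) on a dissimilarity matrix; returns first k axes."""
    n = d.shape[0]
    centre = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * centre @ (d ** 2) @ centre    # double-centred squared distances
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1]           # largest eigenvalues first
    vals, vecs = vals[order], vecs[:, order]
    pos = np.clip(vals, 0.0, None)
    coords = vecs[:, :k] * np.sqrt(pos[:k])
    explained = pos[:k] / pos.sum()          # fraction of variation per axis
    return coords, explained
```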
The first two PCO dimensions explained 30% of the total observed variation.A 2-D plot of dimension 1 × dimension 2 confirmed that most of the Ethiopian accessions are highly diverse and clustered together (Figure 3).The plot also showed that the Australian lupin varieties and breeding lines are quite genetically similar, clus-tered well away from the Ethiopian material, and away from most of the landraces of European origin. Two Western Australia-bred genotypes, cv.Andromeda and breeding line WALAB2008, fall midway between their anthracnose-resistant Ethiopian parent (P27174) and their Ukrainian parent (cv.Kiev-mutant) (Figure 3).It remains to be seen whether such diverse new cultivars have the necessary adaptation to produce high-yield under local conditions in Australia or whether further breeding is required. Several landraces lay on the periphery of the main PCO groupings, namely, P28997 and P27154 (ex-Spain), P28989 (ex-Greece), P27840 (ex-Syria); with a breeding line UK (P25863) (Figure 3).These genotypes are potentially very useful as sources of new genes for breeding.The Chilean determinant cultivar Typtop was also a relative outlier in the distribution. Discussion The application of DArT and PCR-based markers for the assessment of genetic diversity in white lupin has not been previously reported.Recently, a newer DArT chip was developed and utilised by Vipin et al. [25]; it was assembled primarily from L. albus accessions and showed greater polymorphism but was not available when the Figure 3. Principal coordinate analysis of 94 accessions of Lupinus albus L. based upon combined data from 315 DArT and PCR-based markers.Accessions labelled as "A" and "E" are breeding lines and cultivars from Australia, and landraces from Ethiopia, respectively (see Table 1).Axis (dimension) 1 and axis 2 explained 17% and 13% of the genetic variation, respectively.Some diverse and unusual accessions are labelled (see text for explanation).Scale: origin to top of Y-axis = 0.25 (tick marks and labels omitted for clarity).work described here was undertaken.The metagenomic chip of Lupinus species used in this work was a compromise, and the low level of observed useful polymorphism for this diversity set (295/15,000 = 2% of the total clones on the chip) was not surprising since L. albus accessions only contributed 7.3% (7/94) of the metagenome used to construct the array. It is difficult to anticipate how many markers are sufficient for complete germplasm characterisation within a while lupin germplasm collection.Well-distributed markers per chromosome should be used, as the accuracy of genetic distance depends on the number and distribution of markers on the genome [42].For the purpose of measuring genetic distance, 20 well-spread markers per chromosome are probably sufficient [43].In this study, we employed 315 markers across 25 chromosomes-a less-than-ideal number for an assessment of genetic diversity. The principal advantage of the DArT markers is that they are microarray-based and several hundred markers can be screened in a single experiment.They are therefore cheaper than PCR-based SSR or CAPS markers but more expensive than SNP markers.Currently, SNP markers are recognised as the marker of choice in plant improvement programs as they are highly polymorphic, chromosome specific, ubiquitous in genomes.In some cases, particularly when a SNP is present within a gene controlling a trait of interest, it is directly responsible for an observed mutation. 
It has always been a challenge to characterise germplasm precisely; accessions from gene banks and breeding programs may be heterogenous and/or heterozygous, depending on their origin, history, and the breeding system of the species.Our findings suggest that molecular markers, both PCR-based and DArT, are suitable for assessment of genetic diversity in white lupin.These results will allow breeders to increase their efficiency when phenotyping the germplasm for new traits of interest. Most of the Australian breeding material examined in this work clusters with the European cultivars and breeding lines, no doubt reflecting their pedigree and breeding history.It may also indicate that certain linkage blocks have been retained as necessary for adaptation to modern farming systems.In contrast, the Ethiopian landraces examined here were tightly clustered (except for P28233 in clade 2) and they were very distinct from all other genotypes.P28233 may have been misclassified in the genebank collection.Such a distinct separation is evidence that the Ethiopian material has evolved in isolation from the L. albus populations of the Mediterranean basin.The genetic differences could be due to ancient founder effects and subsequent divergence from original genotypes sourced from the purported Balkan centre of origin [9].This is perhaps the most likely scenario since the Ethiopian landraces are not wild types, that is, they possess most of the domestication characteristics, and none have brown graecus seeds.However, Luckett et al. [44] showed that Ethiopian and Greek landraces had different genetic control of white seed colour, and this could have been selected in separate gene pools in the two regions. The 94 accessions used in this study are not an exhaustive list of the available L. albus germplasm world-wide.Now that an improved DArT array is available [25], there would be merit in extending this analysis to all accessions held in collections world-wide.Nevertheless, we have identified significant genetic diversity among the landraces, varieties and breeding lines-genetic variability that is ready to be explored by breeders and used to donate new or rare alleles to their breeding gene pools. Table 1 . Accessions used for assessment of genetic diversity in white lupin (Lupinus albus L.) using molecular markers. * ISO 3-letter country codes.$ An appended number after a hyphen in the accession name indicates a single plant selection from the original cultivar or breeding line.# Eight genotypes used in Phase 1 of the analysis.
“Milankovitch cycles and microfossils: principals and practice of palaeocological illustrated by Cenomanian chalk-marl Rhythms” by C.R. Paul - a comment Sir - In his paper on Milankovitch cycles and microfossils, Paul (1992) has launched a comprehensive attack on the use of standard counts and percentages in palaeoecology, with particular reference to the methods used by micropalaeontologists studying Upper Cretaceous chalk and marl assemblages. We commend him for the diligent and painstaking way in which he has constructed his argument. He is, however, wrong. In presenting our counter-attack, we wish to take issue with several of his statements. In the following discussion, direct quotes from Paul (1992) are given thus in italics and quotes. Sir -In his paper on Milankovitch cycles and microfossils, Paul (1992) has launched a comprehensive attack on the use of standard counts and percentages in palaeoecology, with particular reference to the methods used by micropalaeontologists studying Upper Cretaceous chalk and marl assemblages. We commend him for the diligent and painstaking way in which he has constructed his argument. He is, however, wrong. In presenting our counter-attack, we wish to take issue with several of his statements. In the following discussion, direct quotes from Paul (1992) are given thus "in italics and quotes". "Percentages are, in effect, standard counts of 100". This is not so. A standard count of 100 is unlikely to give a true representation of the YO composition of an assemblage. That is whyastandardcountof at least300specimensisrecommended. The practice of making standard counts has developed not only "to ensure that key taxa are not overlooked" but equally importantly to ensure that the proportion of each taxon in the assemblage is determined with a high degree of confidence. The recommended figure of 300 originates from a study by Dryden (1931;fide Phleger, 1960) on accuracy in percentage representation of heavy mineral frequencies. Phleger (1960; Ch.1) discusses the topic in detail; he concludes (p. 35): "...that little if anything is to be gained by counting samples much larger than approximately 300 specimens and that the illusion of accuracy tends to be misleading". Weagree that "...standard countsorpercentagesforall taxaare interdependent". However, Paul's distinction between signal ("a genuine change in the abundance of a taxon") and echo ("a passive response to a change in the abundance of another taxon") is inappropriate. He illustrates his argument with Fig. 2, which "was constructed on the assumption that the taxon concerned was present at an absolutely invariant abundance in terms of specimens per square metre of seafloor or per gramme of sediment". The first part of this assumption is unwarranted since the standing crop or population density of the taxon is likely to have varied considerably (and at a very much higher frequency than the sampling interval) through time; a 6cm thick sample of chalk or marl does not represent the sea floor at a given instant in time, but the cumulative result of at least hundreds of years' worth of superimposed "sea floor", mixed by currents and bioturbation. The specimens of a taxon in a fossil assemblage do not constitute a population, nor do fossil assemblages truly represent communities; they are (to quote Griffiths & Evans (1992) in the same issue) "time-averaged taxocenes which have undergone a variety of processes of sortage and attrition". 
The second part of the assumption is unwarranted since it also assumes a constant rate of sediment deposition. Absolute abundance of specimens, expressed as numbers per weight or volume of sediment, is subject to variation due to changes in the rate of sedimentation. Paul's Fig. 2 shows fluctuations in the relative abundance of a taxon which are independent of sedimentation rate; they may be held to reflect real changes in assemblage composition through time, which may be interpreted as responses by taxa to changes in environmental parameters of ecological significance. In other words, the signal and the echo both contain useful information; that provided by the echo is arguably the more relevant to palaeoecology . "...standard counts and percentages may give misleading impressions and suggest inappropriate conclusions" Paul illustrates this point by showing that in terms of percentage (i.e. relative abundance) Gavelinella and Hedbergella were more abundant in chalks than in marls, while in terms of absolute abundance (numbers of specimens per 500g sample) the reverse was the case (his Tables 1 & 2). However, in comparing absolute abundances from two different lithologies he makes the implicit assumption that sedimentation rates were the same during chalk deposition as during marl depositionand as he himself argues in a later part of his paper, this was almost certainlynot thecase.Thedifferentresultsgivenbyprecentages and absolute abundances can be explained if the sedimentation rates in the chalks were higher than those in the marls. Consider the three chalk/marl rhythms on which his Tables 1 & 2 are based. Taking the durations estimated by Paul (his Table 6) and the thicknesses given in Fig. 2 of Leary et al. (1989), we find that the sedimentation rates of the chalk beds were more than twice those of the marls. The calculations for Paul's Table 6 may be slightly suspect since they are not actually based on three complete rhythms, but on "two complete marl chalk-marl couplets and most of a third one" (Leary et al., 1989). However, if one accepts that these are at least reasonable approximations then the foraminifera1 assemblages of the chalks have been diluted by higher sedimentation rates, and should be multiplied by a factor of at least two to allow direct comparison with those of the marls. If this is done, absolute abundance shows the same relationship between chalks and marls as percentages. Once again, it is clear that percentage (i.e., relative abundance) is a more reliable measure for palaeoecology than absolute abundance. Of course, it could be argued that thesedimentation rate was itself an important ecological parameter. However, the sedimentation rates in question appear to be less than 0.5cm/100yr -this rate of influx of sediment is unlikely to cause problems for even the most lethargic of benthonic foraminifera, notwithstanding Paul's comment about them being "unable to leave the area if sedimentation rates became uncomfortably high". Absolute abundance data are useful, of course, but should be considered in conjunction with (not instead of) relative abundancedata. It is essential, furthermore, to have a clear idea of what "absolute" abundance actually means. In a clay or silt lithology, for example, absolute abundance of microfossils may be measured against a baseline of minerogenic sediment. 
In a chalk, the baseline is a biogenic sediment composed of microfossils; in such a case, "absolute" abundance of foraminifera might be set against a coccolith baseline, for example, but in terms of complete assemblages (including all the microfossils and macrofossils preserved, and still representing a biased and incomplete record of the original living community) it would be, in fact, relative abundance. Finally, whether dealing with absolute or relative abundances, the data obtained are only as good as the sampling method. Micropalaeontologists usually sample fossil assemblages three times: 1. when collecting a sample in the field (sample size and interval relative to bed thicknesses are significant variables); 2. when processing that sample (this effectively subsamples the original sample in some way), where biases are likely to be exaggerated when dealing with marls and chalks, since the former will break down more completely in one freeze-thaw cycle, thus yielding a "higher" faunal density; and 3. when picking microfossils from the processed sample residue. The bias or errors that may be introduced by this third stage of sampling can be avoided by picking the entire sample; in practice this would often be far too time-consuming, so the sample residue must be sub-sampled. This is often done by sieving the residue into size fractions, which certainly makes picking much easier. Unfortunately it appears to be common practice (and one which Paul endorses) to then use only one of the fractions, usually the <500 µm >250 µm fraction. In Cretaceous chalk and marl samples, very small planktonic foraminifera (e.g. heterohelicids) are often abundant, but since they occur almost entirely in the <250 µm fraction they are habitually left out of calculations of Planktonic/Benthonic ratios (e.g. by Paul, 1992 and by Leary et al., 1989; see also comments by Curry, 1982). In some Cenomanian-Turonian boundary (Oceanic Anoxic Event) samples, the finest residues (>63 µm) examined by one of us (DJH) were dominated by calcispheres and heterohelicids, yet the latter were not included in P/B ratio calculations by Jarvis et al. (1988) (Leary, pers. comm.); a pity, since heterohelicids may be useful indicators of strong oxygen minimum zones (Sliter & Premoli Silva, 1990; Boersma & Premoli Silva, 1989). At the other end of the scale, larger specimens (>500 µm) are also excluded; what can be the justification for ignoring larger benthonic foraminifera (e.g. orbitolinids) when they are just as much a part of an assemblage as Gavelinella and must have played a role in chalk sea-floor communities? Paul even argues for the exclusion of large specimens of genera (e.g., Lenticulina) which are also represented in the <500 >250 µm fraction. Similar problems arise with chalk ostracod assemblages; coarse fractions are likely to be dominated by bairdiaceans and platycopids, while fine fractions may yield common and diverse cytherurids (Weaver, 1981). The fractions chosen for sieving are entirely artificial. P/B ratios calculated from such an arbitrary selection of specimens may be useful in biostratigraphy and contain at least some of the original signal, but they are a poor excuse for palaeoecological data.

First I would like to thank Horne and Slipper for their comments. They make some cogent points and enable me to clarify an implicit assumption behind my arguments that I omitted to state explicitly in the original paper, although I have made it elsewhere (Paul, 1992, p. 130).
However, I do find some of Horne & Slipper's arguments paradoxical. In the first paragraph I am taken to task for launching a "comprehensive attack on the use of standard counts and percentages in palaeoecology". Horne & Slipper assert flatly that I am wrong, by which I presume they mean that one should make standard counts. Later in the article, they state "Absolute abundance data are useful, of course, but should be considered in conjunction with (not instead of) relative abundance data." I could not agree more. That sentence succinctly summarizes the first aim of my paper. The implicit assumption that I omitted to spell out is that the two types of data are not, and cannot be, alternatives. This is a one-sided test. If complete data on absolute abundance are available, anyone can calculate percentages (i.e. relative abundance). If standard counts are made, no-one can estimate absolute abundance, not even the person who made the original counts, and not even for the whole fauna, let alone for each constituent taxon. My "attack" was a plea to all palaeoecologists who record quantitative data (not just those working on the chalk) to do so in a way that makes both types of data available. One way to do so would be to count every microfossil present in a sample, but that would be extremely time-consuming and very inefficient. I suggested a technique which is only slightly more time-consuming than making standard counts, but which yields estimates of both absolute and relative abundance. Even if I am wrong, as Horne and Slipper assert, this can only be confirmed by recording data on both absolute and relative abundance and demonstrating repeatedly that the former are consistently irrelevant or misleading. I am fairly confident that this will not prove to be the case, but I am absolutely certain that I will never be proved wrong so long as everyone continues to make standard counts. (This should not be taken as a coded plea to continue making standard counts. I am quite content to be proved wrong. That is how science advances.)

Horne & Slipper make four specific comments; the first three start with a direct quote from my paper, the fourth concerns sampling methods. I would like to consider each in turn and will number them 1-4.

1. "Percentages are, in effect, standard counts of 100". This quotation is taken out of context. All I meant there was that all three disadvantages of standard counts apply equally to percentages, no matter how large the counts on which they are based. However, Horne & Slipper go on to make some additional points with which I would like to take issue. First, no count will give a "true" representation of the composition of an assemblage. This can only ever be estimated. They are correct to point out that a count of 300 specimens will give a more accurate estimate than a count of 100. Their quotation from Phleger (1960), "that little if anything is to be gained by counting samples much larger than approximately 300 specimens and that the illusion of accuracy tends to be misleading", may be empirically acceptable for samples with low numbers of taxa (as one assumes is true of most heavy mineral assemblages). However, it will certainly not hold for a diverse fauna of more than 100 taxa, since a count of 300 only gives a 95% probability of detecting species present at 1% of the fauna. It does not hold, for example, if one wished to detect the nodosarian genera present in my Cenomanian samples (which have diversities well below 100) because the nodosarians are so rare.
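The counting argument above is easy to check numerically. The short routine below solves the miss-probability relation Q = (1 − p)^n, which is spelled out in the next paragraph, for the count n needed to detect a taxon at a given relative abundance; the example values are illustrative only.

```python
import math

def required_count(p, q_miss):
    """Smallest count n such that a taxon at proportion p of the fauna is
    overlooked with probability at most q_miss, from Q = (1 - p)**n."""
    return math.ceil(math.log(q_miss) / math.log(1.0 - p))

def miss_probability(p, n):
    """Probability Q of not seeing the taxon at all in a count of n specimens."""
    return (1.0 - p) ** n

# Examples: roughly 300 specimens for a taxon at 1% with 95% confidence,
# but far more for rarer taxa or for higher confidence.
print(required_count(0.01, 0.05))            # 299
print(required_count(0.01, 0.10))            # 230
print(required_count(0.001, 0.05))           # 2995
print(round(miss_probability(0.01, 300), 3)) # 0.049
```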
The fundamental relationship here is given by the equation Q = (1 − p)^n, where Q is the probability of overlooking a rare taxon, p is the proportion of the total fauna which the taxon constitutes, and n is the number of trials, which in this context is the number of identified specimens (i.e. the count). Selecting values of Q and p determines the size of the count. With a typical population structure where a few species dominate and most are relatively rare, and with a diversity of over 100 taxa, p would have to be less than 0.01 (1%) and a suitable count would be considerably in excess of 300 to be even 90% certain of not overlooking the rarer forms. Shaw (1964, chapter 18) outlined the theory behind these calculations in detail, while Dennison & Hay (1967) and Hay (1972) have published extremely wide-ranging graphs of values for Q, p and n.

2. Horne & Slipper's second criticism, concerning the interdependence of counts and percentages, initially misses the point. My Fig. 2 was simply constructed to demonstrate that patterns can be generated by echoes (i.e. by a passive response to changes in abundance of other taxa) when no genuine signal (i.e. a real change in abundance) occurs. Of course the example is totally artificial; it has to be, because real samples are subject to all the vagaries which Horne & Slipper rightly document. To illustrate my point the reader has to know what the truth is. I chose to state that the taxon did not vary in abundance whatsoever because this makes the resulting diagram simpler. Any other predetermined pattern could be substituted, but most would be swamped out by the echoes from the two taxa that do vary in abundance in this example. This artificial example makes no assumptions whatsoever about rate of sedimentation. Finally, I wholeheartedly concur with Horne & Slipper's statement that signal and echo both contain valuable information. However, I cannot for the life of me see how anyone can test their assertion that the information "provided by the echo is arguably more relevant to palaeoecology" unless data are recorded in a way which allows one to distinguish between signal and echo. Standard counts and percentages do not allow one to do this. Again, assume Horne & Slipper are right and I am wrong. How can this be proved unless data are recorded in the way that I advocated?

3. As regards the third point, concerning trends in relative versus absolute abundance, Horne & Slipper have again taken my example too literally. I did not seek to explain the differences reported by Leary and Ditchfield (1989) in the abundances of Gavelinella and Hedbergella in chalks compared with marls; I merely wished to point out that the trend in relative abundance is the reverse of that in absolute abundance, and that these reversed trends might lead to different interpretations if considered alone. I suspect Horne & Slipper are perfectly correct in their explanation of these opposite trends, but they could not possibly have arrived at their explanation without knowing what the absolute abundances of these genera are. Had Paul Leary not recorded total numbers, but just identified the first 300 specimens he saw in each sample, none of us would be any the wiser.

4. Horne & Slipper make important points concerning biases that can creep in during sampling, processing and picking, with which I wholeheartedly agree. This serves to emphasize that standardization of techniques is essential. For example, I regret very much that I lost count of the number of freeze-thaw cycles my first batch of samples went through.
Hence I cannot state exactly how many cycles they were subjected to, only that both batches were processed approximately equally thoroughly. More importantly, no-one can reproduce my experiments exactly, not even me. Horne & Slipper take issue with the practice of restricting counts to a single size fraction. They have an important point to which I cannot see a simple solution, and they make no suggestions. Of course larger benthic foraminifera are important in palaeoecology; of course heterohelicids are too; but how can quantitative data from different size fractions be combined in a way that is both reproducible and meaningful? The P/B ratio (which is never recorded as a ratio but as a percentage) is widely used in foraminiferal studies, but what does it mean if different researchers record it in different ways? And how can we tell if they do, since some researchers do not record their method? I have shown how variable the so-called P/B ratio can be if one combines data from two size fractions, let alone from three or four to include the heterohelicids. The only suggestion I can make is to record in the way that I advocated from each size fraction, but this would involve at least four times as much effort. Would the results be worth it? In quantitative studies on molluscs, with which I am more familiar, it is standard practice to make a cut-off at 0.5 mm and count everything above that size, combining data from all fractions. However, this rarely results in counts over 1000 individuals. My richest Cenomanian sample had an estimate of over 7000 individuals in the >250 micron fraction alone. I cannot imagine what the total of individuals larger than 63 microns would be. I chose a compromise which I believe combined adequate data with a reasonably small amount of time and effort. I recorded explicitly what I did so others could test my results by repeating my experiments as nearly as possible under the same conditions. I may not have chosen the best method, but my results are testable. That is the fundamental point. Unless details of sampling, processing and picking procedures are recorded, experiments are not reproducible and results cannot be tested. I would welcome Horne & Slipper's views on this. Simply stating that different size fractions contribute valuable information does not solve the problem of how best to gather and record these data.

I have spent a good deal of the last twelve years trying to convince the scientific world in general, and palaeontologists in particular, that the fossil record is by no means as incomplete as we are often led to believe. In doing so I have also been trying to convince palaeontologists to extract and record as much data as possible from their samples. In this case I would argue that with a little more effort twice the amount of data can be obtained (i.e. absolute and relative abundance). Interestingly, Horne & Slipper do not apparently dispute my interpretations of Milankovitch control on microfossil assemblages. Yet most of my conclusions could not have been formulated, let alone tested in the future, without data on absolute abundance.
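As a purely hypothetical illustration of the point about size fractions, the toy counts below show how strongly the reported planktonic/benthonic figure (conventionally quoted as %P) can shift when a heterohelicid-rich fine fraction is included or excluded; the numbers are invented for the example and are not from any of the samples discussed.

```python
def percent_planktonic(planktonic, benthonic):
    """The so-called P/B ratio, conventionally reported as a percentage."""
    total = planktonic + benthonic
    return 100.0 * planktonic / total if total else float("nan")

# Hypothetical counts for one sample (not real data)
coarse = {"planktonic": 120, "benthonic": 180}   # >250 um fraction
fine   = {"planktonic": 400, "benthonic": 100}   # 63-250 um, heterohelicid-rich

print(percent_planktonic(coarse["planktonic"], coarse["benthonic"]))   # 40.0
print(percent_planktonic(coarse["planktonic"] + fine["planktonic"],
                         coarse["benthonic"] + fine["benthonic"]))     # 65.0
```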
Remote THz generation from two-color filamentation : long distance dependence Remote terahertz (THz) generation from two-color filamentation is investigated as a function of the onset position of filaments. THz signals emitted by filaments produced at distances up to 55 m from the laser source were measured. However, from 9 m to 55 m, the THz signal decayed monotonically for increasing onset positions. With a simple calculation, the dominant factors associated to this decay were identified as group velocity mismatch of the two-color pulses and linear diffraction induced by focusing and propagating the second harmonic pulse. ©2012 Optical Society of America OCIS codes: (190.4380) Nonlinear optics, four-wave mixing; (010.1300) Atmospheric propagation. References and links 1. M. Tonouchi, “Cutting-edge Terahertz technology,” Nat. Photonics 1(2), 97–105 (2007). 2. M. C. Kemp, C. Baker, and I. Gregory, “Stand-off explosives detection using terahertz technology,” Stand-off Detection of Suicide Bombers and Mobile Subjects, pp. 151–165 (Springer, New York, 2006). 3. S. Nakajima, H. Hoshina, M. Yamashita, C. Otani, and N. Miyoshi, “Terahertz imaging diagnostics of cancer tissues with a chemometrics technique,” Appl. Phys. Lett. 90(4), 041102 (2007). 4. M. Lu, J. Shen, N. Li, Y. Zhang, C. Zhang, L. Liang, and X. Xu, “Detection and identification of illicit drugs using terahertz imaging,” Appl. Phys. (Berl.) 100, 103104 (2007). 5. A. Braun, G. Korn, X. Liu, D. Du, J. Squier, and G. Mourou, “Self-channeling of high-peak-power femtosecond laser pulses in air,” Opt. Lett. 20(1), 73–75 (1995). 6. A. Couairon and A. Mysyrowicz, “Femtosecond filamentation in transparent media,” Phys. Rep. 441(2-4), 47– 189 (2007). 7. S.L. Chin, “Femtosecond laser filamentation,” Springer series on Atomic, Optical and Plasma physics, LLC978– 1-4419–0687–8 (Springer Science + Business media, New York, 2010). 8. V. P. Kandidov, S. A. Shlenov, and O. G. Kosareva, “Filamentation of high-power femtosecond laser radiation,” Quantum Electron. 39(3), 205–228 (2009). 9. H. Hamster, A. Sullivan, S. Gordon, W. White, and R. W. Falcone, “Subpicosecond, electromagnetic pulses from intense laser-plasma interaction,” Phys. Rev. Lett. 71(17), 2725–2728 (1993). 10. N. Karpowicz, X. Lu, and X.-C. Zhang, “Terahertz gas photonics,” J. Mod. Opt. 56(10), 1137–1150 (2009). 11. F. Théberge, W. Liu, P. T. Simard, A. Becker, and S. L. Chin, “Plasma density inside a femtosecond laser filament in air: strong dependence on external focusing,” Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 74(3), 036406 (2006). 12. G. Méchain, C. D’Amico, Y.-B. André, S. Tzortzakis, M. Franco, B. Prade, A. Mysyrowicz, A. Couairon, E. Salmon, and R. Sauerbrey, “Range of plasma filaments created in air by a multi-terawatt femtosecond laser,” Opt. Commun. 247(1-3), 171–180 (2005). 13. J.-F. Daigle, Y. Kamali, M. Châteauneuf, G. Tremblay, F. Théberge, J. Dubois, G. Roy, and S. L. Chin, “Remote sensing with intense filaments enhanced by adaptive optics,” Appl. Phys. B 97(3), 701–713 (2009). 14. J. Kasparian, R. Sauerbrey, and S. L. Chin, “The critical laser intensity of self-guided light filaments in air,” Appl. Phys. B 71, 877–879 (2000). 15. A. Becker, A. D. Bandrauk, and S. L. Chin, “S-matrix analysis of non-resonant multiphoton ionisation of innervalence electrons of the nitrogen molecule,” Chem. Phys. Lett. 343(3-4), 345–350 (2001). #159910 $15.00 USD Received 12 Dec 2011; revised 13 Jan 2012; accepted 19 Jan 2012; published 9 Mar 2012 (C) 2012 OSA 12 March 2012 / Vol. 
20, No. 6 / OPTICS EXPRESS 6825 16. C. D’Amico, A. Houard, M. Franco, B. Prade, A. Mysyrowicz, A. Couairon, and V. T. Tikhonchuk, “Conical forward THz emission from femtosecond-laser-beam filamentation in air,” Phys. Rev. Lett. 98(23), 235002 (2007). 17. Y. Chen, C. Marceau, W. Liu, Z.-D. Sun, Y. Zhang, F. Théberge, M. Châteauneuf, J. Dubois, and S. L. Chin, “Elliptically polarized terahertz emission in the forward direction of a femtosecond laser filament in air,” Appl. Phys. Lett. 93, 1116 (2008). 18. D. J. Cook and R. M. Hochstrasser, “Intense terahertz pulses by four-wave rectification in air,” Opt. Lett. 25(16), 1210–1212 (2000). 19. T.-J. Wang, J.-F. Daigle, S. Yuan, F. Théberge, M. Châteauneuf, J. Dubois, G. Roy, H. Zeng, and S. L. Chin, “Remote generation of high-energy terahertz pulses from two-color femtosecond laser filamentation in air,” Phys. Rev. A 83(5), 053801 (2011). 20. I. Babushkin, W. Kuehn, C. Köhler, S. Skupin, L. Bergé, K. Reimann, M. Woerner, J. Herrmann, and T. Elsaesser, “Ultrafast spatiotemporal dynamics of terahertz generation by ionizing two-color femtosecond pulses in gases,” Phys. Rev. Lett. 105(5), 053903 (2010). 21. W. Liu, F. Théberge, J.-F. Daigle, P. T. Simard, S. M. Sharifi, Y. Kamali, H. L. Xu, and S. L. Chin, “An efficient control of ultrashort laser filament location in air for the purpose of remote sensing,” Appl. Phys. B 85(1), 55–58 (2006). 22. N. Aközbek, A. Becker, and S. L. Chin, “Propagation and filamentation of femtosecond laser pulses in optical media,” Laser Phys. 15, 607–615 (2005). 23. C. Marceau, Y. Chen, F. Théberge, M. Châteauneuf, J. Dubois, and S. L. Chin, “Ultrafast birefringence induced by a femtosecond laser filament in gases,” Opt. Lett. 34(9), 1417–1419 (2009). 24. O. G. Kosareva, N. A. Panov, R. V. Volkov, V. A. Andreeva, A. V. Borodin, M. N. Esaulkov, Y. Chen, C. Marceau, V. A. Makarov, A. P. Shkurinov, A. B. Savel’ev, and S. L. Chin, “Analysis of dual frequency interaction in the filament with the purpose of efficiency control of THz pulse generation,” J. Infrared Milli Terahz Waves 32(10), 1157–1167 (2011). 25. Y. Liu, A. Houard, M. Durand, B. Prade, and A. Mysyrowicz, “Maker fringes in the Terahertz radiation produced by a 2-color laser field in air,” Opt. Express 17(14), 11480–11485 (2009). 26. S. A. Hosseini, Q. Luo, B. Ferland, W. Liu, S. L. Chin, O. G. Kosareva, N. A. Panov, N. Akozbek, and V. P. Kandidov, “Competition of multiple filaments during the propagation of intense femtosecond laser pulses,” Phys. Rev. A 70(3), 033802 (2004). 27. J. H. Marburger, “Self-focusing: theory,” Prog. Quantum Electron. 4, 35–110 (1975). 28. P. E. Ciddor, R. J. Hill, “Refractive index of air. 2. Group index,” Appl. Opt. 38(9), 1663–1667 (1999). 29. D. Strickland and G. Mourou, “Compression of amplified chirped optical pulses,” Opt. Commun. 56(3), 219–221 (1985). 30. J.-F. Daigle, A. Jaroń-Becker, S. Hosseini, T.-J. Wang, Y. Kamali, G. Roy, A. Becker, and S. L. Chin, “Intensity clamping measurement of laser filaments in air at 400 and 800 nm,” Phys. Rev. A 82(2), 023405 (2010). 
Introduction Due to recent achievements in laser sciences and technologies, terahertz (THz) generation and detection has attracted a lot of interest in multiple fields [1].THz waves have strong material penetration [2] and specific absorption bands for identification of chemical products.These characteristics have motivated the recent development of THz 'tools' dedicated to environmental monitoring, medical sciences [3] and homeland security [4], that is now very active worldwide.However, because radiation in this spectral region is strongly absorbed and highly affected by linear diffraction during atmospheric propagation, it is still a hard task to deliver intense THz pulses at a remote position. Laser filamentation [5][6][7][8] is currently one of the leading candidates that could potentially solve this problem.Experiments have shown that filaments could generate THz signals [9,10] but also, that these filaments could be projected at any distance from the laser source ranging from centimetres [11] to several hundreds of metres [12,13].This promising behaviour could possibly circumvent the two issues mentioned above.With a simple modification of the effective focal length of a focusing telescope placed at the laser system output (see Fig. 1), this THz source could be arbitrarily positioned, close to the object to be illuminated. Laser filaments result from a dynamic interplay between two nonlinear effects, namely Kerr self-focusing and defocusing from the self-induced plasma produced by multiphoton/tunnel ionization.This combination allows a propagation regime where the intensity is fixed over extended distances, much longer than the Rayleigh length.For a laser pulse at 800 nm, the Ti:Sapphire laser peak emission wavelength, the light intensity inside the filament core is about 50 TW/cm 2 [14] which is sufficiently high to induce high-order multiphoton processes such as 11-photon ionization of N 2 [15]. Even though measurement of THz signals has been reported during single color filamentation (at 800 nm) [16,17], the most promising scheme is probably the two-color technique [18] where the fundamental pulse co-propagates with its second harmonic (SH), at 400 nm, to produce a THz waves.A broadband THz pulse of 2.8 µJ was produced with this technique at a distance of 16 m from the focusing telescope [19]. It has been demonstrated, for point source plasmas, that the THz pulse is the result of a photocurrent induced by the asymmetric electric field distribution [20].On the other hand, for elongated filaments, there is still no consensus whether the dominant mechanism is four-wave mixing from the two-color pulse or the photocurrent.In fact, because the conversion efficiency is highly dependent on the relative phase between the two co-propagating pulses, it is difficult to explain THz emission over such extended distances with the photocurrent model.Moreover, the plasma density inside the filament, required for the photocurrent, quickly decreases when generated at longer distance [11,21].However, this debate is beyond the scope of this paper and shall not be further discussed. 
In this work, we put this remote THz source to the test by investigating how increasing the distance of filamentation onset, with the focusing telescope, impacts the produced THz signal.Using systematic measurements, the THz emission produced by laser filaments positioned at various distances from the laser source was characterized using a bolometer equipped with adequate filters.Even though the measured signal decayed monotonically as a function of distance, strong THz pulses were still observed when the filaments were positioned 55 m from the source, more than three times the longest distance previously reported.A simple numerical model was used to reproduce the observed behaviour and allowed the identification of the principal decaying factors involved.Fig. 1.Experimental setup for the generation and detection of THz pulses from two-color filamentation.60 mJ transform limited pulses of 50 fs duration are directed to an interferometer.The first arm is used for second harmonic generation while the second is a delay line for adequate temporal superposition of the two pulses in the filament zone.The two pulses are recombined with a dichroic mirror and directed to an all-reflective telescope with a variable focusing length.The second harmonic pulse is focused together with the 800 nm to form filaments at the desired position.The produced THz pulses are measured with a heliumcooled bolometer protected with adequate filters.The off-axis parabola imaged the strong part of the filaments on the detector's surface. Experimental scheme The experiments were performed at Laval University where a 30 m long indoor horizontal path is available.Figure 1 presents a schematic view of the experimental setup.60 mJ laser pulses of 50 fs transform limited duration at full width at half maximum were emitted at a 10 Hz repetition rate by a commercial Ti:Sapphire laser chain and directed to an interferometer. A 50:50 beamsplitter divided the laser pulse in two parts.The reflected beam passed through a KDP crystal to produce a pulse at 400 nm (SH).A telescope composed of two lenses of focal lengths 75 cm and −50 cm was used to control the divergence of the 400 nm pulses and ensure an optimal overlap of the two beams at the interaction zone.In the other arm, the transmitted fundamental pulse passed through a delay stage before it recombined with SH using a dichroic mirror.The energy in the transmitted beam was thus 30 mJ in the fundamental and ~1 mJ in the SH.A half-wave plate at 400 nm was used to align the polarisations.The delay stage controlled the temporal overlap of the two pulses and was used to optimize every measurement. The two co-propagating pulses passed through an all-reflective focusing telescope of variable focal length which was adjusted to produce filaments at the desired distance.The maximum propagation distance was increased to 60 m when a silver mirror was used at the end of the corridor to reflect both pulses.The aberrations introduced by the reflective-type focusing telescope were minimized by the telescope output mirror which consisted of an offaxis parabola. 
The THz detection system was centered on the beam axis and positioned after the filaments.The signals were measured with a liquid helium-cooled bolometer whose sensitive surface was permanently protected by a polyethylene window that transmitted approximately 80% of the radiation above 12 µm (< 25 THz).Behind this window, there was a movable filter consisting of a sapphire plate coated with zinc oxide and diamond layers which transmitted radiation above 30 µm (< 10 THz).A 5 cm diameter off-axis parabola was used to image the strongest part of the filament onto the detector's sensitive surface.A silicon wafer, positioned before the parabola, blocked all the visible and near-infrared spectral components.Very weak THz signals were detected when SH was blocked.Therefore, a THz contribution resulting from the interaction of the strong post-filament core with the Si wafer was ruled out.For each filamentation distance, the position of the detection system with respect to the filaments was adjusted to optimize the measured signal.Moreover, we also measured the detected signals with other filters (e.g.fused silica (> 100 THz), germanium (< 150 THz), Teflon (< 5 THz) and a quartz window covered with garnet (< 3.3 THz)) to verify for contamination in the near and mid-infrared spectral regions and coarsely characterize the THz pulse spectral distribution.It was concluded that the measured radiation was almost entirely at frequencies smaller than 10 THz (see inset of Fig. 2).In addition, the measured THz spectrum remained almost constant as a function of the focal distance such that any effect of a frequency-dependent sensitivity of the bolometer would not affect the measurement. The THz signal measured as a function of the filamentation distance is presented in Fig. 2 as red squares.The horizontal axis corresponds to the distance from the last optics in the focusing telescope to the strongest part of the filaments.Starting at 9 m, the signal received by the bolometer was rather strong.However, as the focusing distance was increased, the THz signal started to decrease rather slowly till the distance around 25 m.From here on, the signal suddenly shot down by more than one order of magnitude till 55 m.This distance was limited by the available space in the lab.The signal at this position was still about 9 times higher than the noise level. 
Based on our knowledge of filamentation, such a steep signal decrease was not anticipated.In fact, experiments have shown that similar pulses projected at a remote position could efficiently produce filaments at 110 m and beyond [13].Also, efficient THz generation due to the guiding [22,23] of SH in the filament core was expected.Because the 800 nm pulse's peak power was fixed for all focal lengths, the number of filaments formed would not change significantly as the focal length increased; however, there would be a reduction of the plasma density as well as the clamped intensity [11].The reduction of intensity inside the filaments with longer focusing would lead to a reduction of the THz signal.This tendency should continue along a certain experimental trend (slope) without break, in principle.The sudden break in the slope would mean that something else was happening.We explain this sudden steep decrease of the THz signal as being due to the temporal walk-off between the 800 nm and 400 nm pulses in spite of the fact that the SH was guided (cross phase locking) in the filament core.This is because the filament core is fed from the surrounding energy reservoir.The latter, being at a relatively low intensity is subject to a group index that is close to the linear value.The filament core simply follows the reservoir.The low intensity SH also propagates at a group velocity that is close to the value imposed by linear interaction with the medium.Thus, the walk-off between the pump and the SH does not depend upon the interaction between the two or guiding of the SH in the filament core via cross-phase-modulation.Consequently, any increase in filament length beyond this limit no longer contributes to the enhancement of the THz signal.The effect of this walk-off for THz generation during filamentation was described numerically for short focal lengths in [24]. Modeling and discussion In order to understand what the main causes for the signal decay with increasing focal length f were, the THz signals produced by the two-color pulses were simulated with a modified version of the model used in [25].Even though this model contained important approximations, it could reproduce very well the experimental data obtained.The purpose of this calculation is to demonstrate that the dominant factors leading to the observed reduction of the signal are not related to any nonlinear properties of the filaments or the THz generation mechanism.In fact, most of the equations used in this model assume multiple approximations that would normally be unavoidable in most filamentation modeling.Among those approximations we have neglected multi-filamentation competition [26], focal lengthdependent filament properties (plasma density, intensity, number of filaments etc.) and selffocusing of the SH pulses.These approximations were made to emphasize the fact that the dominant mechanisms leading to the observed behaviour are not related to a modification of the filaments' properties when formed at long distance.Instead, more basic and fundamental optical phenomena such as linear diffraction and group velocity mismatch can explain the observations. 
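A rough calculation makes the scale of this walk-off concrete. The sketch below uses approximate group refractive indices for air at 800 nm and 400 nm; the values are order-of-magnitude assumptions rather than the Ciddor-model figures used in [28], so the numbers it prints should be read as estimates only.

```python
C = 299_792_458.0          # speed of light, m/s

# Approximate group refractive indices of dry air at standard conditions.
# Illustrative values only (the difference is of order a few 1e-5).
N_G_800 = 1.000282         # fundamental, 800 nm
N_G_400 = 1.000311         # second harmonic, 400 nm

def walkoff_per_metre():
    """Temporal slip between the two pulses per metre of propagation (s/m)."""
    return (N_G_400 - N_G_800) / C

def walkoff_length(pulse_duration_s):
    """Distance over which the slip equals one pulse duration (m)."""
    return pulse_duration_s / walkoff_per_metre()

if __name__ == "__main__":
    rate = walkoff_per_metre()
    print(f"walk-off: {rate * 1e15:.0f} fs per metre")                   # ~100 fs/m
    print(f"overlap length for a 50 fs pulse: {walkoff_length(50e-15):.2f} m")
```

On this estimate the two colours slip by roughly their own 50 fs duration after only about half a metre of propagation, which is consistent with the argument that lengthening the filament beyond a certain point adds little THz-generating volume.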
A four-wave mixing model for THz emission originating from the interaction of a single, cylinder-like filament of finite length and fixed intensity with a focused SH Gaussian pulse, whose focal waist is much larger than the filament diameter, was used as an approximation of our experiment. Because the 800 nm pulse had a fixed peak power, the number of filaments produced is nearly constant; they 'crowd' around the focal region, each with an essentially constant intensity inside because of intensity clamping. We therefore approximate this bundle of filaments as an effective single filament. The modeling conditions are presented in Fig. 3, where an 800 nm pulse, focused to form a filament of length L_fil, is superposed on a focused Gaussian pulse at 400 nm. A screen, positioned at a distance d from the filament end, represents the parabola used to collect the signal onto the detector. When the focusing position f moves further away, the filament length becomes longer and, therefore, the amount of THz signal produced during the interaction should increase. In fact, the filament length L_fil can, roughly speaking, be estimated from the telescope focal length f and the self-focusing distance z_f of the collimated beam; z_f depends on the initial beam diameter and the pulse power [27]. This estimate implies that the filament always ends at the telescope focal position, which was the case in our experiment. Because z_f is rather hard to evaluate for real-world beams whose transverse intensity profiles are not Gaussian, it was left as a fitting parameter and ultimately fixed to z_f = 180 m. Using Marburger's formula [27], which gives the self-focusing distance z_f as a function of the pulse peak power, z_f = 180 m can be achieved in air with 0.6 TW laser pulses of 2.5 cm transverse radius. Interestingly, in our experiment, the peak power of the 800 nm pulses was also 0.6 TW. In addition to reflecting the high quality of this laser beam line, this result further supports the data obtained with this model.
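A minimal numerical sketch of this estimate is given below. It assumes the lens-transformation form L_fil = f^2/(f + z_f), which grows as f^2/z_f for f much shorter than z_f, together with Marburger's formula and an assumed critical power of about 10 GW for air at 800 nm. These are illustrative assumptions rather than the exact expressions used in the original analysis, which is also why z_f is treated above as a fitting parameter.

import math

# Illustrative sketch (not the paper's exact expressions): Marburger's formula for the
# self-focusing distance of a collimated Gaussian beam, and the assumed filament length
# L_fil = f^2 / (f + z_f), which grows as f^2/z_f for f << z_f and ends at the focus f.
# P_cr of air at 800 nm is taken as ~10 GW; the result is sensitive to this choice and
# to the real (non-Gaussian) beam profile.

def marburger_zf(peak_power_w, p_crit_w, waist_radius_m, wavelength_m):
    """Self-focusing (collapse) distance of a collimated Gaussian beam."""
    z_rayleigh = math.pi * waist_radius_m ** 2 / wavelength_m
    p = math.sqrt(peak_power_w / p_crit_w)
    return 0.367 * z_rayleigh / math.sqrt((p - 0.852) ** 2 - 0.0219)

def filament_length(f_m, z_f_m):
    """Assumed filament length for a telescope focal length f."""
    return f_m ** 2 / (f_m + z_f_m)

z_f = marburger_zf(0.6e12, 10e9, 2.5e-2, 800e-9)
print(f"Marburger estimate: z_f ~ {z_f:.0f} m (same order of magnitude as the fitted 180 m)")
for f in (10, 20, 30, 40, 55):
    print(f"f = {f:2d} m  ->  L_fil ~ {filament_length(f, 180.0):.2f} m")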
Both pulses were considered perfectly Gaussian in the temporal domain. The SH was assumed Gaussian in the transverse plane and, even though its peak power was higher than the critical power, its self-focusing was neglected. If we suppose that the filament consists of a series of THz point sources aligned along the propagation axis z and that the polarisations of both pulses are linear and parallel, the THz field emitted by each point source is given by Eq. (1). In this equation, z = 0 corresponds to the onset position of the filaments, E_ω and E_2ω are the peak electric fields measured at the focal position for the fundamental and SH pulses respectively, τ is the pulse duration measured at e^-1, z_R,2ω(f) is the calculated Rayleigh range of the focused second-harmonic pulse, and D_2ω is the initial SH beam diameter. θ(z, f), which describes the temporal overlap of the two pulses at a position z, and φ(z, f), which corresponds to the phase of the generated THz pulse, are given by Eqs. (2) and (3). In Eq. (2), the n_g,i are the group-velocity refractive indices of the two pulses in normal atmosphere and c is the speed of light. Eq. (3), on the other hand, is the phase of the THz pulse, where the n_i are the refractive indices of air at the pump wavelengths, ω_THz is the angular frequency of the THz pulse produced, and ω_F and ω_SH correspond to the angular frequencies present within the broadband pump pulses that are required to produce ω_THz. The two terms with the arctangent account for the Gouy phase shift of each beam that occurs at the waist, but since the interaction zone (the filament) ends at the focus, their effect is rather small. The first term describes the group-velocity mismatch between the fundamental and the SH pulse. The refractive-index modifications induced by self-focusing and by the presence of plasma were neglected; the values of n_g,i were obtained from the model developed in [28]. This assumption is valid because, even though both pulses interact in the filament core (guiding of the SH and cross-phase modulation), it is the energy reservoir, which is at a much lower intensity, that imposes the pulse propagation speed. The filament core only follows the peak intensity of the pump pulse. In addition, if the nonlinear contributions to the index of refraction were included in the model, the difference in group velocities would increase, leading to an even larger walk-off between the two pulses. In fact, because the reservoir of the fundamental pulse is much more intense than the SH, the refractive-index contribution from cross-phase modulation is larger for the SH, so that its group velocity decreases more than that of the fundamental pulse. Once again, because the properties of air are not well known in the THz spectral range, approximations were made in Eq. (3): the refractive index of air at THz frequencies was assumed equal to unity and the THz pulse was taken to travel at the speed c. All these assumptions were made to keep the focus on the main causes of the signal decay at long focal lengths.

As a result, for filaments formed with a focal length f, the total THz field reaching a target of transverse coordinate x positioned at a distance d from the end of the filament is given by Eq. (4). Finally, the total THz power collected by the parabola of radius r is obtained from Eq. (5). The calculation was performed for ten THz frequencies between 0.1 and 10 THz, and the output, shown as a blue line in Fig. 2, corresponds to the sum of all these components, corrected according to the spectral measurements shown in Fig. 2. In this simulation, E_ω = 1, E_2ω = 1/30, τ = 75 fs for both pulses (at e^-1), d = 50 cm, and r = 2.5 cm corresponds to the radius of the collecting parabola. Good agreement was obtained even though multiple effects were neglected, which shows that the dominant factors responsible for this decay are not related to the filaments' properties.

In the experiment, the limited-diameter parabola only captured a fraction of the total THz signal produced by the filaments, especially for long focal lengths of the telescope. In order to verify whether this could have led to the formation of the knee at 25 m, the calculation was repeated for a 10 m diameter parabola. The result is shown in Fig. 2 as a dotted line; apart from a change in the slopes at long distances, it follows a very similar trend, including the inflection around 25 m. Therefore, the sudden slope change observed is not related to a reduced collection efficiency attributable to the limited diameter of the parabola.
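The coherent summation behind Eqs. (4) and (5) can be sketched numerically as follows. The Gaussian temporal-overlap window, the assumed group-index mismatch of 1.25e-5, and the filament-length formula are illustrative stand-ins for the full model; the sketch only aims to reproduce the qualitative behaviour, namely interference between the point sources and saturation of the useful filament length.

import numpy as np

# Qualitative sketch of the point-source summation of Eqs. (4)-(5): the filament is a
# line of THz emitters whose fields are summed coherently on a screen of radius r at a
# distance d behind the filament end. The Gaussian overlap window, the group-index
# mismatch (1.25e-5) and L_fil = f^2/(f + z_f) are illustrative assumptions only.

C = 3e8
TAU = 75e-15
WOL = C * TAU / 1.25e-5          # walk-off length, ~1.8 m

def thz_power(f, z_f=180.0, nu=1e12, d=0.5, r=2.5e-2, n_src=400, n_pix=80):
    l_fil = f ** 2 / (f + z_f)                    # assumed filament length
    z = np.linspace(0.0, l_fil, n_src)            # point-source positions along the filament
    weight = np.exp(-((z / WOL) ** 2))            # overlap lost after ~one walk-off length
    x = np.linspace(1e-4, r, n_pix)               # radial positions on the collecting screen
    field = np.zeros_like(x, dtype=complex)
    for zi, wi in zip(z, weight):
        path = np.sqrt((d + l_fil - zi) ** 2 + x ** 2)
        field += wi * np.exp(2j * np.pi * nu * path / C) / path
    dx = x[1] - x[0]
    return float(np.sum(np.abs(field) ** 2 * 2 * np.pi * x) * dx)   # integrate over the screen

for f in (10, 15, 20, 25, 35, 45, 55):
    print(f"f = {f:2d} m  ->  relative THz power {thz_power(f):.3e}")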
The first reason why the signal decreases with increasing f is destructive interference between the multiple THz sources aligned along the filament, and this holds independently of the physical process producing the THz waves, i.e., the photocurrent or the four-wave mixing. Indeed, the electromagnetic wavefronts emitted by the THz point sources interfere as they travel towards the parabola. Elongating the filament increases the probability of finding two point sources whose electromagnetic fields cancel at the parabola. As a result, destructive interference decreases the on-axis intensity, which produces a rather divergent THz beam. Since most of the THz energy then propagates at an angle with respect to the propagation axis, this effect further deteriorates the collection efficiency because, beyond a certain filament length, the limited-radius parabola can no longer capture all the produced THz radiation.

Another important factor is related to the diffraction experienced by the SH pulse when it is focused and propagates over extended distances. During the experiment, even though self-focusing could have been significant, no filamentation of the SH was observed. Under these circumstances, the SH was governed by diffraction and its focal spot diameter increased linearly with increasing f. Based on the Rayleigh criterion, the focal intensity of the SH should behave as

I_2ω(f) ∝ P D_2ω^2 / (λ_2ω^2 f^2),   (6)

where P is the pulse peak power, D_2ω its initial diameter, and λ_2ω the central wavelength; the focal intensity is thus inversely proportional to the square of the focusing distance. In the model, since the THz intensity produced by the filament is proportional to the intensity of the SH pulse, increasing the focal length from f to f + ∆f intrinsically reduces the THz intensity by a factor proportional to f^2/(f + ∆f)^2. Indeed, the calculated total signal decreases with a slope of -2 beyond 25 m.

As matters stand, it seems that increasing the focal length, or increasing the f-number, in order to generate the filaments further away can only reduce the THz signal captured by the detector. However, as mentioned earlier, increasing f produces a longer filament, thus a longer interaction zone, and ultimately we should expect stronger THz signals. When f << z_f, the filament length grows as f^2, with a reduced growth rate as the two distances become comparable. This resulted in a reduced decaying slope, measured to be -0.9 for the THz signal in Fig. 2 for focal lengths shorter than 20 m; this decay was mainly due to the decrease of the SH intensity and the lower collection efficiency at longer distance. However, THz generation has another limitation, attributed to the group-velocity mismatch of the two pulses, which imposes a maximal length over which they can remain superposed while propagating in air. Even though self-group-phase locking [22] of the two pulses and guiding [23] of the SH pulse could have occurred in the filament core, the surrounding energy reservoir still propagates in a quasi-linear way at a group velocity that depends on the pulse wavelength. Group-velocity mismatch between the two pulses should therefore occur during filamentation in air, and for femtosecond pulses this limit can appear after short propagation distances. The walk-off length (WOL) of two pulses at different wavelengths propagating in air is defined as

WOL = τ / |1/v_g,ω − 1/v_g,2ω|,   (7)

where the v_g are the group velocities of the two pulses in air. Assuming that both the fundamental and SH pulses had a pulse duration of 75 fs, the maximum interaction length was limited to WOL = 1.8 m, such that any section of a filament whose length exceeded WOL would not produce significant THz.
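The two scalings just quoted can be checked with a few lines of arithmetic; the group-index difference between 800 nm and 400 nm in air (about 1.25e-5) is taken here as a representative assumed value, whereas the text derives its dispersion data from a published model.

# Arithmetic check of the two scalings above. The group-index difference between
# 800 nm and 400 nm in air (~1.25e-5) is a representative assumed value.

C = 3.0e8            # speed of light (m/s)
TAU = 75e-15         # pulse duration at e^-1 (s)
DELTA_NG = 1.25e-5   # assumed |n_g,2w - n_g,w| of air

wol = C * TAU / DELTA_NG
print(f"walk-off length WOL ~ {wol:.2f} m")     # ~1.8 m, as quoted in the text

# SH focal intensity ~ P * D^2 / (lambda^2 * f^2): doubling f quarters the intensity,
# i.e. a slope of -2 on a log-log plot.
for f in (10, 20, 40):
    print(f"f = {f:2d} m -> SH focal intensity relative to f = 10 m: {(10.0 / f) ** 2:.2f}")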
The slow decay before 25 m and the knee observed in Fig. 2, both in the experiment and in the calculation, are a direct consequence of this limitation. In fact, in our model, filaments longer than 1.8 m were obtained for focal lengths ranging between 18 m and 25 m. The interaction length then becomes constant for longer focal distances, and the signal starts to decrease at a faster rate, resulting in a slope change in the log-log plot.

The results presented show the difficulties we face in producing strong THz pulses beyond 100 m from the source with the current method and laser. We have, however, decided to regard these problems as new challenges to overcome and would like to propose possible avenues other than the usually suggested solutions. Among those intuitive propositions are increasing the initial beam size to reduce the f-number and increasing the SH pulse intensity. Another possible path consists in finding a method to increase the WOL. Based on Eq. (7), this could easily be done by increasing the pulse duration. However, this method should be used with care because, in current CPA systems [29], increasing the pulse duration is often realized by chirping the pulse at the expense of its peak power; this could rapidly result in a reduction of the filaments' length and robustness.

Because the difference in group refractive index between the fundamental and SH pulses becomes smaller, increasing the fundamental wavelength also lengthens the WOL. With a quick estimation using Eq. (7) and the group velocities v_g obtained from the model presented in [27], the total THz signal could be increased by a factor of 5 if the fundamental wavelength were shifted from 800 nm to 1.7 µm. However, this technique also has a drawback: as expressed in Eq. (6), the focal intensity is inversely proportional to the square of the wavelength. This reduction of the SH focal intensity shrinks the 500% enhancement obtained from the elongated WOL to a mere 11%.

Even though remote generation of powerful THz is challenging, we still believe that increasing the wavelength and the pulse width of the fundamental pulse could lead to a significant enhancement of the THz signal produced during two-color filamentation at long distance. Because an increasing number of photons is required to ionize atmospheric molecules at longer pump wavelengths, the formation of a stabilized filament could occur at a clamped intensity that increases with the fundamental wavelength. Assuming that four-wave mixing is the dominant mechanism for extended filaments, the THz signal would be proportional to the square of the filament's clamped intensity; in this scenario, two photons of the fundamental pulse are required to produce a single THz photon. However, because very few laser sources available worldwide are sufficiently powerful to induce filamentation in air at wavelengths longer than 800 nm, neither of these hypotheses has yet been verified experimentally. Previous measurements revealed that the clamped intensity inside filaments increased by a factor of 2.5 when the pulse peak wavelength was changed from 400 nm to 800 nm [30].
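The trade-off described in the preceding paragraph amounts to multiplying the quoted WOL gain by the λ^-2 focal-intensity penalty; a one-line check, taking the factor of 5 quoted above as given, reproduces the ~11% figure.

# Back-of-the-envelope version of the estimate above: the WOL gain quoted in the text
# (a factor of ~5 for 0.8 um -> 1.7 um) is partially cancelled by the lambda^-2 scaling
# of the SH focal intensity (Eq. (6)).

wol_gain = 5.0
intensity_penalty = (0.8 / 1.7) ** 2   # same ratio for the corresponding SH wavelengths
net = wol_gain * intensity_penalty
print(f"net THz gain ~ {net:.2f}x, i.e. roughly {100 * (net - 1):.0f}%")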
Conclusions

In this work, remote THz generation from two-color filamentation in air was put to the test, and strong THz signals were still observed when the filaments were positioned 55 m from the source, more than three times the longest distance previously reported. The results revealed that, as the filament bunch moved to longer distances, the produced THz signal decreased monotonically. A simple numerical model showed that the dominant factors behind this decay are not related to the filaments' properties; the decay was instead attributed to the group-velocity mismatch between the two-color pulses and to the diffraction caused by the long propagation of the SH pulse.

The conclusions drawn from this study show that the production of strong THz pulses beyond 100 m from the current laser source is very challenging. However, two possible scenarios were proposed to improve the technique. The most promising method consists in increasing the clamped intensity by using a laser pulse with a longer wavelength. Laser sources that could put this idea to the test are starting to emerge; perhaps the experiment will be performed shortly.

Fig. 2. THz signal measured as a function of the focusing distance, presented on a log-log scale. The horizontal axis corresponds to the distance between the telescope and the strongest part of the filaments. The dotted line corresponds to the total THz signal, independent of the parabola diameter, calculated for each focal position. Inset: spectral distribution of the THz signal measured using different combinations of filters in front of the bolometer.
Uremic Toxin Indoxyl Sulfate Promotes Macrophage-Associated Low-Grade Inflammation and Epithelial Cell Senescence

In this study, we investigated the impact of the uremic toxin indoxyl sulfate on macrophages and tubular epithelial cells and its role in modulating the response to lipopolysaccharide (LPS). Indoxyl sulfate accumulates in the blood of patients with chronic kidney disease (CKD) and is a predictor of overall and cardiovascular morbidity/mortality. To simulate the uremic condition, primary macrophages and tubular epithelial cells were incubated with indoxyl sulfate at low concentrations as well as at concentrations found in uremic patients, both alone and upon LPS challenge. The results showed that indoxyl sulfate alone induced the release of reactive oxygen species and low-grade inflammation in macrophages. Moreover, combined with LPS (proinflammatory conditions), indoxyl sulfate significantly increased TNF-α, CCL2, and IL-10 release but did not significantly affect the polarization of macrophages. Pre-treatment with indoxyl sulfate following LPS challenge induced the expression of aryl hydrocarbon receptor (Ahr) and NADPH oxidase 4 (Nox4), which generate reactive oxygen species (ROS). Further, experiments with tubular epithelial cells revealed that indoxyl sulfate might induce senescence in parenchymal cells and therefore participate in the progression of inflammaging. In conclusion, this study provides evidence that indoxyl sulfate provokes low-grade inflammation, modulates macrophage function, and enhances the inflammatory response associated with LPS. Finally, indoxyl sulfate signaling contributes to the senescence of tubular epithelial cells during injury.

Introduction

Chronic kidney disease (CKD) is a growing global health concern. According to recent studies, approximately 10-15% of the adult population in western countries suffers from CKD [1,2]. The prevalence of CKD is higher in older individuals, those with a history of cardiovascular disease, and individuals with diabetes [3]. CKD is characterized by declining kidney function and leads to an accumulation of waste products, including uremic toxins, in the bloodstream and tissues, which can contribute to the progression of the disease and further deterioration to kidney failure [4]. The burden of CKD is expected to continue to increase in the coming years, highlighting the need for effective prevention and management strategies to reduce its impact on public health [5,6]. Classification of uremic toxins in patients with CKD is based on their behavior during dialysis and their physicochemical properties. According to the European Uremic Toxin Work Group (EUTox) database, there are over 100 uremic solutes/metabolites listed, and the number is expected to grow [7][8][9][10]. The accumulation of these uremic toxins in patients with CKD can contribute to disease progression and to systemic complications.

Indoxyl Sulfate Induces Proinflammatory Transcripts in Primary Macrophages In Vitro and Modulates Their Immune Response and Metabolic Activity upon LPS Stimulation

Several studies have suggested effects of indoxyl sulfate on macrophage function; however, its immunomodulatory role is still under-researched. We first investigated the induction of proinflammatory cytokines by low concentrations of indoxyl sulfate in macrophages and found unchanged mRNA expression of proinflammatory cytokines as well as unchanged metabolic activity in these cells (Supplementary Figure S1). In further experiments, we used higher concentrations of indoxyl sulfate (60 µg/mL), which correspond to patients with ESKD [27,28].
Indoxyl sulfate at a high concentration of 60 µg/mL did not produce a strong, specific proinflammatory signature; it induced a significant increase only in Tnf-α and Il10 expression in macrophages (Figure 1). Furthermore, treatment of macrophages with LPS in the presence of indoxyl sulfate led to higher expression of the proinflammatory factors Tnf-α and Ccl2 and downregulation of Irak-m expression compared with the LPS-treated group (Figure 1). Interestingly, IRAK-M is known to induce the expression of negative regulators such as SOCS1, SHIP1, and A20 that control overshooting inflammation in myeloid cells to restrict tissue damage upon an excessive immune response [29][30][31]. Similar results were observed with human PBMCs (Supplementary Figure S2). Thus, indoxyl sulfate mediates transcriptomic changes and promotes proinflammatory activation in macrophages.

Figure 1. Indoxyl sulfate significantly affects the inflammatory gene expression of bone marrow-derived macrophages (BMDMs). The figure shows the mRNA expression levels of inflammation-associated genes in BMDMs, as described in the Materials and Methods (Section 4). We cultured the cells in a medium containing 60 µg/mL of indoxyl sulfate (IS) and stimulated them with LPS (2 ng/mL) for 4 h.
Data are shown as means ± SD; dots represent biological replicates; * p < 0.05, ** p < 0.01.

To investigate whether indoxyl sulfate affects the cell viability and metabolic activity of macrophages, we assessed their mitochondrial ability to metabolize 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT). The analysis showed that cells treated with high concentrations of indoxyl sulfate did not significantly change their metabolic activity. Measuring the metabolic activity of macrophages after 48 h of incubation with LPS and indoxyl sulfate revealed that the LPS-associated metabolic activity was reduced by the abundance of the uremic toxin (Supplementary Figure S3A). The analysis of cell death did not reveal a statistically significant difference between the groups, as indicated by lactate dehydrogenase (LDH) measurement. Primary macrophages turned out to be less sensitive to indoxyl sulfate- and LPS-related stress than immortalized monocyte/macrophage cell lines such as human THP1 and murine J774 (Supplementary Figure S3B). Thus, indoxyl sulfate inhibits macrophage metabolic activity under inflammatory conditions without affecting cell death.

Indoxyl Sulfate Does Not Affect the Self-Limiting Nature of NF-kB-Associated Inflammatory Signaling

Macrophage homeostasis depends on extensive regulatory mechanisms that orchestrate and sequester inflammatory signals [32][33][34][35]. We hypothesized that macrophages cultured with indoxyl sulfate would display dysregulation of homeostatic transcripts under inflammatory conditions and therefore analyzed the expression of molecules that restrain inflammatory responses. Our preliminary results, summarized in a heat map, showed that the balance of negative regulators of inflammation is not significantly disturbed by indoxyl sulfate (Figure 2). In the heat map, Z-scores were used as a scaling method for visualization; Z-scores were calculated and plotted for each gene so that the expression patterns are not overwhelmed by the expression values of highly affected genes. Further investigation revealed that only one group of negative regulators was significantly expressed (above a threshold and the non-template control) and induced by LPS but unaffected by indoxyl sulfate, including A20, Mcpip1, and Socs3 (Figure 2). In summary, these data show that indoxyl sulfate does not change the transcript levels of homeostatic genes in macrophages.

Figure 2. Expression of negative regulators of inflammation in BMDMs upon indoxyl sulfate and LPS treatment. The heat map shows the expression analysis of pre-selected transcripts. Genes indicated in green are upregulated and genes indicated in pink are downregulated to highlight differences between the samples. The rows are Z-score scaled for each gene separately to ensure that the expression patterns are not overwhelmed by the expression values of highly expressed transcripts. Dot plots represent the expression of selected genes and demonstrate that the distinction in color in the heat map could be a consequence of a variation in expression or a low expression level. Data are shown as means ± SD; dots represent biological replicates; * p < 0.05.

Indoxyl Sulfate Induces Moderate Inflammation by Enhancing Inflammatory Cytokine Production in Macrophages

Further, we determined cytokine production in the supernatants collected from the macrophage cultures. Interestingly, a significant difference in the levels of MCP1, TNF-α, and IL-10 between the LPS-stimulated groups (medium and pre-stimulation with indoxyl sulfate) was found in the supernatants (Figure 3).
We found no statistically significant difference for IL-6, IL-12, and IFN-γ production. Stimulation with indoxyl sulfate alone significantly increased TNF levels but did not have a significant effect on the levels of the other cytokines tested (Figure 3). Thus, indoxyl sulfate modifies LPS-induced cytokine production and possibly induces a proinflammatory phenotype of macrophages.

Figure 3. Indoxyl sulfate induces the production of inflammation-related cytokines. The secretion of inflammatory mediators was measured by a bead-based flow cytometric assay after 24 h in supernatants from bone marrow-derived macrophages (BMDMs); we cultured the cells in medium containing 60 µg/mL of indoxyl sulfate (IS) and stimulated them with LPS (2 ng/mL) for 24 h. Data are shown as means ± SD; dots represent biological replicates; * p < 0.05, ** p < 0.01.

Indoxyl Sulfate Induces ROS Production, Enhances LPS-Induced ROS Release, and Increases Mitochondrial Superoxide Production

To determine the functional importance of the proinflammatory properties of indoxyl sulfate on macrophages, we studied ROS production and the bactericidal properties of these cells. The increased oxidative stress observed in patients who suffer from CKD is associated with a proinflammatory state of the immune system [36]. To examine whether indoxyl sulfate affects ROS production in macrophages, we used a dichloro-dihydro-fluorescein diacetate (DCFH-DA) assay and assessed oxidative stress. Macrophages significantly enhanced ROS production as early as 20 minutes after treatment with indoxyl sulfate, LPS, and indoxyl sulfate with LPS (Figure 4A). Thus, indoxyl sulfate significantly induces ROS production in macrophages and enhances ROS production under inflammatory conditions. Under the same experimental conditions, we measured mitochondrial superoxide using MitoSOX Red to determine whether mitochondria are a major source of ROS. Consistent with the DCFH-DA data, mitochondrial superoxide levels were found to be higher in cells grown in a medium containing indoxyl sulfate and LPS (Figure 4B). Notably, under inflammatory conditions, no significant differences between the indoxyl sulfate-treated and untreated groups were observed (Figure 4C).
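For readers unfamiliar with the "MFI fold induction" readout used for the MitoSOX data, the sketch below shows the underlying arithmetic: the median fluorescence intensity of each treated sample is divided by that of the untreated control. The event values are random placeholders, not measured data.

import numpy as np

# Hedged sketch of the MFI fold-induction readout: placeholder fluorescence events are
# drawn from lognormal distributions, then each sample's median is divided by the
# median of the untreated control.

rng = np.random.default_rng(0)
samples = {
    "medium": rng.lognormal(mean=6.0, sigma=0.4, size=5000),
    "IS":     rng.lognormal(mean=6.3, sigma=0.4, size=5000),
    "LPS":    rng.lognormal(mean=6.5, sigma=0.4, size=5000),
    "IS+LPS": rng.lognormal(mean=6.7, sigma=0.4, size=5000),
}

control_mfi = np.median(samples["medium"])
for name, events in samples.items():
    fold = np.median(events) / control_mfi
    print(f"{name:7s} MFI fold induction: {fold:.2f}")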
Next, we tested the expression of oxidative stress- and ER-stress-related transcripts. Consistent with the other results, we observed enhanced expression of Nrf2, which is involved in redox homeostasis [37]. Moreover, transcripts linked to ER stress, such as Xbp1 and Atf6, were also upregulated in BMDMs stimulated with indoxyl sulfate.

We then evaluated the efficacy of macrophage-mediated bacterial uptake by quantifying the number of bacteria that remained associated with the cells after a 24 h phagocytosis period. Interestingly, the number of live intracellular bacteria after 24 h was significantly higher in the indoxyl sulfate group (Figure 5A). Later on, we performed a bead-based phagocytosis assay to eliminate variables associated with bacterial growth and the bactericidal properties of macrophages. The results of this assay demonstrated that indoxyl sulfate improves the phagocytic capability of LPS-stimulated macrophages, which indicates enhanced cell activation (Figure 5B). However, we did not observe any differences in the transmigration of macrophages toward a chemokine gradient (as shown in Supplementary Figure S4). These findings suggest that indoxyl sulfate induces oxidative stress, mitochondrial redox reactions, and phagocytosis without affecting the migratory ability of macrophages.

Indoxyl Sulfate Promotes Macrophage Polarization toward a Proinflammatory Phenotype

The original concept of classical and alternative macrophage polarization is associated with their role in inflammation and disease and is often linked to the balance of immune responses [38]. Therefore, we investigated the role of indoxyl sulfate in macrophage polarization to elucidate its role in this complex spatiotemporal process. Surface expression of macrophage differentiation and activation markers was analyzed by flow cytometry and presented as the single-stain mean fluorescence intensity (MFI) in indoxyl sulfate- and LPS-stimulated BMDMs. F4/80 and CD11b markers were expressed at high levels on LPS-stimulated cells, confirming the sufficient differentiation of the primary cells (Figure 6). As expected, the cells also showed expression of the costimulatory molecule CD80, the mannose receptor CD206, and MHCII. LPS-treated macrophages exhibited the typical M1-like phenotype characterized by strong induction of CD80 and MHCII and downregulation of CD206 compared to plain medium-cultured macrophages, in terms of both the percentage of positive cells and the MFI.
The cells treated with indoxyl sulfate displayed phenotypic features similar to those of cells cultured in plain medium, suggesting that there were only minor effects on macrophage phenotype under the tested conditions. Similarly, under proinflammatory conditions, we did not observe any differences between macrophages stimulated with both indoxyl sulfate and LPS and those treated with LPS only, in terms of either the percentage of positive cells or the MFI.

Figure 4. Indoxyl sulfate significantly enhances ROS production and affects redox regulation in macrophages. (A) BMDMs were generated, primed, and left either unstained or stained with ROS probes; dots represent biological replicates. (B) Mitochondrial superoxide production was detected with MitoSOX and presented as MFI fold induction (n = 5). (C) The figure shows the fold induction of mRNA expression levels of redox/ER stress-associated genes in BMDMs (n = 4). We cultured the cells in medium containing 60 µg/mL of indoxyl sulfate (IS) for 4 h. Data are shown as means ± SD; * p < 0.05, ** p < 0.01.

To investigate the effects of indoxyl sulfate on macrophage polarization in intact and LPS-stimulated macrophages, the cells were "gated" using the F4/80 and CD11b markers. Flow cytometry analysis revealed the proportions of CD80+ (M1-like macrophage marker) and CD206+ (M2-like macrophage marker) macrophages. Indoxyl sulfate slightly increased the proportion of CD80+ cells in unstimulated macrophages compared to macrophages treated with LPS; however, indoxyl sulfate caused no major change in the proportion of CD206+ cells during the LPS challenge (Figure 7). These data indicated that indoxyl sulfate could promote M1-like macrophage polarization and that this effect is lost in LPS-treated cells due to predominant LPS signaling. The analysis reveals significant but marginal changes in the M1-like polarization of macrophages cultured with indoxyl sulfate under basal conditions. This finding was consistent with the levels of cytokines/chemokines associated with macrophage activation measured in the supernatants, shown above. The significantly higher levels of TNF-α and MCP-1, as well as the higher levels of the anti-inflammatory cytokine IL-10, indicated a mixed phenotype of indoxyl sulfate-treated macrophages. The differences in the production of the immunosuppressive cytokine IL-10 could be due to increased oxidative stress in the indoxyl sulfate-treated groups, since autocrine IL-10 has been shown to regulate macrophage nitric oxide (NO) production [39]. Collectively, these data indicate that indoxyl sulfate has a marginal effect on macrophage polarization, promoting a proinflammatory environment.

Indoxyl Sulfate and LPS Co-Stimulation Induce Ahr Expression

We next explored whether indoxyl sulfate could affect the expression of its endogenous receptor, the aryl hydrocarbon receptor (Ahr), in macrophages, as well as that of the organic anion transporters Oat1 and Oat3, which have been described as mediating the uptake of indoxyl sulfate from the plasma into the cytoplasm of cells [40,41]. Our data showed that only the combination of indoxyl sulfate and LPS significantly induces Ahr expression (Figure 8). Furthermore, the expression of Oat1, but not Oat3, increased, indicating possible differences in the elimination of harmful endogenous compounds (Figure 8). The NOX complex located in the plasma membrane, which acts as a major generator of ROS in phagocytic cells and triggers a metabolic shift toward an oxidative phenotype, was also directly involved in indoxyl sulfate signaling [42]. The combination of indoxyl sulfate and LPS significantly increased the expression of Nox4 under inflammatory conditions. Thus, indoxyl sulfate enhances Ahr and Nox4 expression under inflammatory conditions, suggesting efficient pro-IS priming of macrophages, indoxyl sulfate uptake, and increased oxidative stress.

Indoxyl Sulfate Affects the Proliferation and Metabolic Activity of Renal Tubular Epithelial Cells

The complex and dynamic interplay between macrophages and parenchymal cells is a crucial aspect of the immune system's response to injury and disease.
Moreover, parenchymal cells are the functional cells of an organ and can modulate the behavior of macrophages by releasing signaling molecules and cytokines. Therefore, apart from macrophages, we investigated the response of parenchymal cells to indoxyl sulfate to gain knowledge about tissue homeostasis. Since the main type of parenchymal cell in the kidney is the renal tubular epithelial cell, we considered these cells for our further investigations. First, we studied the impact of indoxyl sulfate on the proliferation of primary mouse renal proximal tubular cells (mTECs) and the human cell line HK2. We used 60 µg/mL of indoxyl sulfate and combined it with LPS to mimic inflammatory conditions. We incubated the cells for 72 h and observed that indoxyl sulfate suppressed serum-dependent cell proliferation and might affect cell death under inflammatory conditions (Figures 9 and 10, and Supplementary Figure S5A-C). These findings suggest that indoxyl sulfate might be associated with poor regeneration upon injury.

Indoxyl Sulfate Promotes Cellular Senescence by Activating Oxidative Stress

Since senescence is a biological process characterized by a decline in cell proliferation and an accumulation of cellular damage, we investigated its main marker, senescence-associated β-galactosidase (SA-β-gal), in tubular epithelial cells. Our findings demonstrate that indoxyl sulfate increased the percentage of SA-β-gal-positive cells, and this effect was even more prominent under inflammatory conditions (as shown in Figure 11A). Consequently, the size of the senescent cells was also significantly increased, as measured with light microscopy (Figure 11B). Furthermore, since the association between oxidative stress and senescence is complex and bidirectional, oxidative stress can be expected to induce senescence by causing cellular damage. Conversely, senescent cells can also contribute to oxidative stress by producing and releasing ROS and other proinflammatory cytokines (Figure 11C,D).
To determine whether the elevated levels of ROS also included mitochondrial ROS (mtROS), we employed the fluorescent dye MitoSOX, which specifically detects ROS generated as a by-product of mitochondrial respiration. We observed that indoxyl sulfate-induced ROS were also detected with the MitoSOX assay (Supplementary Figure S5D). Senescent cells often exhibit altered mRNA splicing patterns, changes in RNA processing and translation rates, and reduced RNA synthesis. Therefore, in combination with other senescence markers such as SA-β-gal activity and increased cell size, reduced RNA turnover or concentration can contribute to identifying senescence-like changes. Our data showed a consistently lower RNA yield from all indoxyl sulfate-stimulated parenchymal cell types used in this study (Figure 12A).

Based on the data presented above, we hypothesized that indoxyl sulfate may be sufficient to stimulate an inflammatory phenotype in renal tubular epithelial cells. We investigated the expression of major cytokines in both primary isolated TECs and HK2 cells. Since the changes in expression levels were minor in HK2 cells, we decided to use primary tubular cells for the gene expression experiments. We hypothesized that tubular epithelial cells cultured with indoxyl sulfate would display dysregulation of homeostatic transcripts under inflammatory conditions. However, our preliminary results showed that, similarly to immune cells, the delicate balance of negative regulators of inflammation is not significantly disturbed by indoxyl sulfate (as shown in Supplementary Figure S6). These findings suggest that indoxyl sulfate induces moderate inflammation in tubular epithelial cells.
Furthermore, we did not observe significant differences in the expression of senescence and oxidative stress markers (as shown in Supplementary Figure S7). Nevertheless, since the kidney has been shown to accumulate senescent cells with age [42], we decided to isolate primary tubular epithelial cells from mice aged 4-6 weeks and 6 months and to stimulate them with indoxyl sulfate. We observed a significant difference in the expression of selected senescence markers upon indoxyl sulfate stimulation (Figure 12B-D), including, in particular, the classical hallmark of cellular senescence, Cdkn1a (p21) (Figure 12D). Thus, indoxyl sulfate contributes to oxidative stress and senescence in renal tubular cells. This effect was more pronounced in aging cells, suggesting that indoxyl sulfate may play a critical role in elderly individuals and during inflammaging.

Discussion

CKD is associated with the accumulation of uremic toxins/metabolites in the bloodstream, causing immune dysregulation (e.g., altered immune and non-immune cell functions) that contributes to various complications, including cardiovascular disease, neurocognitive dysfunction, dysbiosis, and an increased risk of infections and metabolic disorders [11,23,[43][44][45]. Our study investigated the effects of indoxyl sulfate, a microorganism-derived uremic toxin, on macrophage function. Since the homeostasis of tissues depends not only on immune cells but also on parenchymal cells, we also investigated the effects of indoxyl sulfate on tubular epithelial cells. We report that indoxyl sulfate significantly contributes to a systemic inflammatory state by (a) activating a prooxidant response, (b) inducing a low-grade inflammatory response in macrophages, and (c) inducing tubular epithelial cell senescence that contributes to inflammaging.

A growing body of evidence indicates that indoxyl sulfate may significantly contribute to the progression of CKD. In vitro studies have demonstrated that indoxyl sulfate affects the biology of tubular cells, leading to increased levels of oxidative stress, inflammation, and fibrosis [46][47][48][49]. Animal models have also indicated that indoxyl sulfate can accelerate CKD progression through nephrotoxic effects [49,50]. Additionally, serum levels of indoxyl sulfate have been linked to surrogate markers of cardiovascular disease, such as intima-media thickness and pulse wave velocity in children [51], as well as to diminished endothelial function in adults with CKD [52]. Our experimental results demonstrate that macrophage activation is a significant mechanism in the systemic action of indoxyl sulfate and indicate that indoxyl sulfate can induce low-grade inflammation or accelerate an existing inflammatory state. Although the effects may appear marginal, they are consistently significant, and the continuous abundance of indoxyl sulfate in tissues may substantially influence the cytokine milieu. Our findings suggest that indoxyl sulfate induces oxidative stress and affects the phagocytic capabilities of macrophages, which is in line with previous research showing that oxidative stress is prevalent in patients with CKD [53]. Adesso et al. reported that indoxyl sulfate stimulates macrophage function and enhances the inflammatory response associated with LPS, which contributes to the immune dysfunction observed in CKD patients; the authors observed a rapid and significant increase in ROS release from macrophages, reflecting the induction of an oxidative stress state [54].
Similar findings, indicating that indoxyl sulfate induces oxidative stress, have been reported for endothelial cells [55,56], vascular smooth muscle cells [47], and tubular epithelial cells [57]. Consistent with the increased ROS, we observed polarization of macrophages toward an M1-like phenotype and increased phagocytic activity. Our data contradict the effects of indoxyl sulfate on phagocytic activity previously reported in differentiated human macrophages (HL-60) [58,59]; therefore, further investigations are needed to understand the impact of indoxyl sulfate on macrophages. Although the effects of indoxyl sulfate might vary under different conditions and activation statuses, these studies suggest that macrophages are one of the targets of indoxyl sulfate. Our study did not find statistically significant differences in macrophage apoptosis following treatment with indoxyl sulfate. Although we observed trends in apoptosis rates, the results did not reach statistical significance, suggesting that further investigation may be needed. There may be a discrepancy between our data and the results obtained for the effects of indoxyl sulfate on apoptosis in UT7/EPO cells [60]. The authors used comparable concentrations (250 µM) of indoxyl sulfate for 48 h of treatment and observed an increase in apoptosis compared to the control condition; however, when the treatment duration was reduced to 24 h, no differences were observed. In our studies, we used macrophages, which could be more resistant to stress and to an abundance of indoxyl sulfate. Further experiments must be conducted to reveal whether UT7/EPO cells express different levels of the aryl hydrocarbon receptor than macrophages.

Inflammation is a crucial aspect of the immune system, and macrophages employ a variety of extra- and intracellular factors to regulate it [32,33,[61][62][63]. This regulation is essential for maintaining stability and constancy in immune responses and tissue regeneration. To achieve this, the immune system relies on a range of modulatory mechanisms that trigger allostasis, which refers to the ability to achieve stability through change [33]. Our results show that the tested transcripts responsible for homeostasis and allostasis did not change significantly in the presence of indoxyl sulfate. As expected, we observed upregulation of some crucial negative regulators of inflammation, such as A20, Mcpip1, or Socs3, upon LPS stimulation. Interestingly, the macrophage regulatory molecule Irak-m showed significantly decreased expression in cells stimulated with LPS and indoxyl sulfate. Previous work from our group showed that mice deficient in IRAK-M displayed a lower number of alternatively activated macrophages [29]; a lower level of IRAK-M could therefore skew macrophages toward a proinflammatory phenotype. This finding was consistent with the flow cytometry analysis, where indoxyl sulfate triggered M1-like macrophage development. Thus, the concept of indoxyl sulfate affecting macrophages and promoting "balance" in tissue homeostasis could be essential for the function and physiology of various tissues. Indoxyl sulfate could be partially responsible for the chronic low-grade inflammatory state observed in a wide range of chronic conditions, such as metabolic syndrome (MetS), nonalcoholic fatty liver disease (NAFLD), type 2 diabetes mellitus (T2DM), and cardiovascular disease (CVD) [64][65][66].
Experimental studies have also linked low-grade inflammation to insulin resistance and have suggested the microbiome and microbiome-related metabolites as factors affecting the development of the syndrome [67]. Rahtes et al. showed that murine monocytes persistently challenged with super-low-dose LPS can be polarized into a low-grade inflammatory state through a TRAM/Keap1-dependent mechanism [67]. Decades of research have provided extensive knowledge regarding macrophage function upon treatment with highly immunostimulatory substances. However, further research efforts are required to study the effects of subclinical, low-concentration, or weakly stimulatory agonists. Such studies would explain the conditions required for the establishment of a low-grade inflammatory state and its effects on macrophage phenotype and function. In this context, our research could be used to elucidate whether local infections could be a source of considerable concentrations of indoxyl sulfate that change the milieu and participate in tissue remodeling and repair.

Our data demonstrate that stimulation with LPS and indoxyl sulfate together changes the expression of the indoxyl sulfate receptor Ahr. This indicates that the presence of this metabolite might exert greater effects during infection, and proinflammatory conditions might prime the cells and enhance their reactivity to indoxyl sulfate. However, research studies suggest that AhR represents a negative feedback mechanism that limits the strength and duration of the inflammation triggered by indoxyl sulfate. For instance, AhR signaling decreases proinflammatory signals and induces the differentiation of anti-inflammatory Treg cells via various mechanisms [3,[68][69][70]. Since the human gut is home to a vast array of microorganisms, it is not surprising that the gut microbiota can have a significant impact on host physiology through both direct cell-to-cell interactions and indirect modulation via the production of microbial metabolites.

Another relevant aspect is the need for novel biomarkers that could be used in larger populations and independent cohorts. Metabolic biomarkers such as indoxyl sulfate could be beneficial for patients [71]; they could provide insight into the metabolic status and the changes occurring within the gut, the kidney, and the systemic circulation. A meta-analysis comprising data from 11 studies revealed that indoxyl sulfate and p-cresyl sulfate were independently linked to an increased risk of cardiovascular events and mortality in patients with CKD [72]. Another study suggests that only indoxyl sulfate, and not p-cresyl sulfate signaling, can be linked to altered miR-126 expression, which has been implicated in vascular endothelial functions, angiogenesis, and consequently the pathogenesis of CKD [73].

Chronic low-grade inflammation, also known as "inflammaging", is a hallmark of aging and has been linked to many age-related diseases [74][75][76]. It is thought to result from a complex interplay of genetic and environmental factors, including oxidative stress, changes in the gut microbiome, and exposure to toxins. An important physiological aspect related to inflammaging is senescence, a state of permanent cell cycle arrest that occurs as a result of various cellular stressors, including oxidative stress, telomere shortening, and DNA damage [76]. Our results briefly introduce the concept of indoxyl sulfate-mediated senescence without deep insights regarding telomerase activity or DNA damage.
We observed changes in parenchymal cells that indicated senescence and a senescence-associated secretory phenotype; however, further investigation is needed. Such a phenotype has been shown to drive chronic inflammation and contribute to the development of age-related diseases. Our results support findings from the literature. For instance, Niwa et al. demonstrated that indoxyl sulfate inhibited Klotho expression through the production of ROS and activation of NF-κB in proximal tubular cells, and induced the expression of SA-β-gal, p53, p21, p16, and retinoblastoma protein in the aorta of hypertensive rats, consequently triggering endothelial dysfunction [77]. A similar observation was made using HUVEC endothelial cells [78,79]. The authors concluded that indoxyl sulfate accelerates the progression of CKD and cardiovascular disease by inducing nephrovascular cell senescence.

In summary, the accumulation of indoxyl sulfate can have significant effects on homeostasis, leading to oxidative stress and altered immune responses. These effects highlight the importance of removing uremic toxins through dialysis or transplantation to maintain homeostasis and improve health outcomes in individuals with kidney disease. Furthermore, it is important to consider local infections as a potential source of tryptophan metabolites and their systemic effects on immune responses and a persistent low-grade inflammatory state.

Materials and Methods

Generation of primary cells: Mouse tubular epithelial cells (TECs) were seeded (5 × 10⁵ cells/mL) in 10% FCS, 1% PS K1 medium in six-well plates and grown to 50% confluence. In brief, the mouse kidney capsule was peeled off and the kidneys were minced finely with the back of a syringe and digested with Collagenase D (working concentration 1.5 mg/mL) for 30 min at 37 °C. The digested kidneys were sieved through a 70 µm filter and centrifuged at 1500 rpm for 5 min at 4 °C, with a brake. The pellet was resuspended in 2 mL of PBS, layered very carefully on 10 mL of 31% Percoll, and centrifuged at 3000 rpm for 10 min at 4 °C, without a brake. Cells were washed twice with PBS, and TECs were grown from proximal tubular segments cultured in K1 medium composed of Dulbecco's Modified Eagle Medium supplemented with 1 M HEPES (pH 7.55), 10% FCS, hormone mix (HBSS, 31.25 pg/mL PGE-1, 3.4 pg/mL T3, 18 ng/mL hydrocortisone), 9.6 µg/mL ITSS, 20 ng/mL EGF, and 1% PS. The medium was changed two to three days after isolation.

BMDM: Bone marrow was isolated from the femur and tibia. An 18 G needle was pushed through the bottom of a 0.5 mL Eppendorf tube, which was placed into a 1.5 mL Eppendorf tube. The bones were placed into the 0.5 mL Eppendorf tube and centrifuged at 10,000 rpm at 4 °C for 15 s. The pellet was resuspended in 1 mL of 0.155 M NH4Cl (RBC lysis buffer at room temperature) by slow pipetting, and 2 mL more was added. The mixture was kept at room temperature for 1 min. The reaction was stopped by diluting the lysis buffer with medium (10-20 mL), followed by centrifugation at 1500 rpm at 4 °C for 2 min. Cells were washed with medium and centrifuged again under the same conditions. The cell suspension was passed through a cell strainer (70 µm) and centrifuged again. The pellet was resuspended in 1 mL of medium and cells were counted. Cells were seeded in 12- or 6-well plates (1.5 × 10⁶ per well of a 12-well plate or 3 × 10⁶ per well of a 6-well plate) in 1 or 2 mL, respectively, of Dulbecco's Modified Eagle Medium supplemented with 10% FCS (or mouse serum), 1% PS, and rmM-CSF at a concentration of 2 ng/mL.
After 2-3 days, 1 or 2 mL of medium with rmM-CSF was added to the seeded cells (12- and 6-well plates, respectively). On day 5, the medium was removed and replaced with fresh medium supplemented with rmM-CSF. On day 7, cells were ready for stimulation. The animals were housed in accordance with international standards for the humane care and use of animals. As part of our commitment to reducing the number of animals used in research, we utilized tissue from animals that were humanely euthanized as part of approved research or breeding projects (tissue sharing). The collection of postmortem animal tissues was conducted in a registered animal facility, ensuring compliance with regulatory requirements.

Immortalized J774, THP-1, and HK-2 cells were grown in 75 cm² flasks in Dulbecco's Modified Eagle Medium containing 10% FCS and 1% PS. In the case of adherent cell lines, subcultures were prepared by scraping. For the 75 cm² flasks, all but 10 mL of the culture medium was removed; cells were dislodged from the flask with a cell scraper, aspirated, and dispensed into new flasks at a ratio of 1:3 to 1:6. The medium was replaced two or three times a week. All in vitro experiments were performed a minimum of two independent times. Stimulation experiments were performed as indicated in the figures (stimulation time points of 4-72 h; 1-200 ng/mL LPS; 60-200 µg/mL indoxyl sulfate).

Phagocytosis assay: For this assay, the Phagocytosis Quantification Kit provided by Cayman (Item Number 500290, Ann Arbor, MI, USA) was used. Primary BMDMs were isolated from 3-month-old BL6 mice and cultured in DMEM medium supplemented with 10% penicillin-streptomycin and 1% FCS to a density of 1.5 million cells per well (12-well plate). On day 5, BMDMs were stimulated with LPS (2 ng/mL), indoxyl sulfate (60 µg/mL), or a 1:1 mixture of LPS (2 ng/mL) plus indoxyl sulfate (60 µg/mL) for 24 h. The next day, cells were incubated for 3 h at 37 °C, 5% CO2 with IgG-FITC beads to allow for bead uptake. Beads bound nonspecifically to the cell surface were quenched with 0.4% trypan blue solution after the incubation period. Cells with no beads and cells incubated with beads at 4 °C were used as negative controls. After BMDM collection and resuspension in FACS buffer, data were collected using flow cytometry and analyzed with FlowJo.

Bactericidal assay: Primary BMDMs were isolated from 3-month-old BL6 mice 5 days before the experiment and cultured in DMEM medium supplemented with 1% FCS to a density of 1.5 million cells per well (12-well plate). On day 5, cells were stimulated with LPS (2 ng/mL), indoxyl sulfate (60 µg/mL), or a 1:1 mixture of LPS (2 ng/mL) plus indoxyl sulfate (60 µg/mL) for 24 h. One day before co-incubation, Mach1T1R E. coli were cultivated in 1× LB medium overnight at 37 °C to an optical density at 600 nm (OD600) corresponding to approximately 1 × 10⁸ cells. On the day of the experiment, BMDMs and E. coli were co-incubated at an MOI of 1:10 at 37 °C, 5% CO2 in DMEM medium supplemented with 1% FCS for 4 h to allow bacterial phagocytosis. After the incubation period, wells were washed twice with pre-warmed PBS, and extracellular bacteria were killed with fresh DMEM medium supplemented with 1% FCS and 10% penicillin-streptomycin for 90 min. After lysis of the cells with distilled water, the cell lysate was plated on LB agar plates to determine bacterial CFU.
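To make the arithmetic behind this bactericidal readout explicit, the sketch below computes the bacterial inoculum needed for a given MOI and converts colony counts back to CFU per well; the macrophage number matches the seeding density above, but the colony count, dilution factor, and plating volume are hypothetical placeholders rather than values from the experiment.

def inoculum_for_moi(macrophages_per_well: float, moi: float) -> float:
    """Number of bacteria to add per well to reach the desired MOI."""
    return macrophages_per_well * moi

def cfu_per_well(colonies: int, dilution_factor: float,
                 plated_volume_ml: float, lysate_volume_ml: float) -> float:
    """Back-calculate CFU per well from colonies counted on an LB plate."""
    cfu_per_ml = colonies * dilution_factor / plated_volume_ml
    return cfu_per_ml * lysate_volume_ml

if __name__ == "__main__":
    # 1.5e6 BMDMs per well, reading the 1:10 ratio as ten bacteria per macrophage
    print(inoculum_for_moi(1.5e6, 10))        # 15,000,000 bacteria per well
    # e.g. 120 colonies from 0.1 mL of a 1:10,000 dilution of a 1 mL lysate
    print(cfu_per_well(120, 1e4, 0.1, 1.0))   # 12,000,000 CFU per well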
Transmigration assay: Primary BMDMs were isolated from 3-month-old BL6 mice 5 days before the experiment and cultured in DMEM medium supplemented with 10% penicillin-streptomycin and 1% FCS to a density of 1.5 million cells per well (12-well plate). Cells were then seeded at a density of 60,000 cells per well in the upper chamber of 24-well Corning Transwells with a pore
v3-fos-license
2020-09-10T10:09:49.888Z
2020-07-17T00:00:00.000
221741912
{ "extfieldsofstudy": [ "Psychology" ], "oa_license": "CCBYSA", "oa_status": "GOLD", "oa_url": "https://doi.org/10.31004/obsesi.v5i1.643", "pdf_hash": "5cacbec25cca1d87d2bfe9afed31ff2d14858693", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:417", "s2fieldsofstudy": [ "Linguistics" ], "sha1": "f36ecdb0e5fdf6c982152982f85cce00244a0c78", "year": 2020 }
pes2o/s2orc
Application of Philosophy Values of Bhinci-Bhinciki Kuli in Early Childhood at Wolio Community

This research aimed to describe the application of the philosophical values of Bhinci-bhinciki Kuli in early childhood in the Wolio community. The research used a descriptive method with an ethnographic approach, and its subjects were the community chiefs. The research was conducted in the Wolio community in Buton, with data collected through observation and interviews. The results of the data analysis showed that the values of Bhinci-bhinciki Kuli include Poma-masiaka, meaning mutual love; Poangka-kataka, meaning mutual respect; Pomae-maeaka, meaning mutual respect for dignity; and Popia-piara, meaning caring for one another. The family has a very important role in instilling the philosophical values of Bhinci-bhinciki Kuli as a foundation of character, through habituation and modelling from early childhood in daily life. Applying the philosophical values of Bhinci-bhinciki Kuli in early childhood instills moral values, so that children grow up with a personality of good character.

INTRODUCTION

The philosophical values of "Bhinci bhinciki kuli" were the highest principle of life in the Khalifatul Khamis country of its time. This philosophy contained noble values as the basis for human thinking and acting in realizing a civil society; these values include Poma-masiaka, Poangka-kataka, Popia-piara, and Pomae-maeka. Currently, our society is experiencing a crisis of morality caused by a lack of knowledge, understanding, and application of the values of "Bhinci bhinciki kuli" in the family, as well as by global challenges marked by technological developments. The moral crisis engulfing the nation's younger generation includes the misuse of narcotics and drugs, theft, immoral acts, and brawls between groups, and most of the perpetrators are adolescents. The research of (Raman et al., 2014) concluded that knowledge about culture as the basis of life has a good impact on children's development, and that such cultural values can also provide knowledge about childcare.

Early childhood is a potential age at which children can absorb information, mainly through the language in their environment. Early childhood is also a period that is very sensitive to the various information in the environment. At this age, children have an intellectual capacity of 80%, meaning that they have a strong grasp of the information they obtain. In her theory, Maria Montessori said that children at an early age have quick absorption, better known as the Absorbent Mind (Montessori, 1959). Information that enters through the child's senses is quickly absorbed into the brain; a child's brain can be compared to a sponge that absorbs water quickly, and educators therefore should not be mistaken in the concepts they provide to children. Preparing a competitive generation through language and culture is a goal of the United Nations Educational, Scientific and Cultural Organization (UNESCO). A community is a group of families in a certain place, and within such groups a harmonious concept of social life is born, including social norms, values, beliefs, rules, and obligations. These values are instilled in the family to form a generation of character.
The family is a system that plays an important role in passing cultural values down through the generations. According to Berns (Zink et al., 2004: 91p), in the microsystem the family acts as a representative of cultural values, socializing culture to children through interactions within the family and with neighbors and society. The research of (Trampe et al., 2015) concluded that a culture of positive interaction in the family environment influences the development of positive emotional behavior: the more positive interaction becomes a culture in the family environment, the more positive children's emotional behavior becomes. Children are familiarized with pleasant communication, courtesy, and hugs; with a culture of good interaction in the family, lasting values can be engraved in this nation's generation. Given this phenomenon, parents should re-open the pieces of wisdom of the land of Buton, so that these values can be etched back into children from an early age, in the hope of a nation of character. The young generation is the successor of the nation, and it is in their hands that this nation will advance together with global civilization. (Nishi et al., 2017) concluded that the cultural background of subjects in their real life affects the speed of cooperative decision making differently in different social environments. According to (Dewantara, 2004), cultural values must be instilled in children from early childhood, because cultural values can act as a filter against foreign cultures that threaten children's morality.

The philosophical values of "Bhinci Bhinciki Kuli" today survive only as historical stories from the culture of the people's life in their time. As the times have changed, marked by the civilization of science and technology, these values have been eroded by an ever-evolving era. Wolio children are born and grow up in the land of a thousand fortresses but are increasingly threatened by the times; if these philosophical values of life do not become the principle of life and a filter for the foreign cultures that lurk around the behavior of the current generation, what follows is the emergence of a generation that triggers conflict in the community. By whom must these values be instilled first and foremost? The family. The family is the main party to implement the values of "Bhinci bhinciki kuli" in family and social life from an early age. At this time children are in what is called "the golden age", characterized by an absorbent mind, that is, a strong absorption of thought. This is where parents take on the role of role models and examples, "ingarso sung tulodo". (Lamm et al., 2018) concluded that children's cognitive development varies depending on the culture in which the child lives: culture plays an important role in children's growth and cognitive development and can affect their behavior, and child development is determined by how much cultural interaction the child obtains in the family environment. Poma-masiaka means mutual love, Poangka-ngkataka means mutual respect, Pomae-maeka means mutual respect for dignity, and Popia-piara means caring for one another. These values need to be applied early, with methods appropriate to the child's development.
One method that can be applied is the concept of Ki Hadjar Dewantoro (Sujiono, 2009): teachers and parents, in educating their sons and daughters, hold three concepts, namely Ing ngarso sung tulodo, meaning that both teacher and parent are role models for their children; Ing madyo mangun karso, acting as a guide in providing instruction; and Tut wuri handayani, meaning that teachers and parents always provide motivation and encouragement to develop children's potential. In addition, the values can be applied through habituation and modelling methods in both the family and school environments. Research conducted by (Abulizi et al., 2017) found that children's behavior is influenced by the environmental context, and that parental interaction can shape and strengthen the child's emotional development. Based on an interview with Mr. La Umbu Zaadi (3/3/2019), for the people of Buton the Bhinci-bhinciki kuli philosophy comprises character values instilled in children from an early age; however, as times have changed, these character values in society, and especially in the family, have begun to disappear because of parents' workloads, parents' limited knowledge of these values, the lack of available references, and advancing technology. Referring to these theories, concepts, and the results of various ethnographic studies, as well as phenomena in the community, the researcher was motivated to analyze and elevate these values so that they become a cultural reference for local character values. Through the values contained in the philosophy "Bhinci-bhinciki kuli", it can be known what concepts of value it contains and how those values were implemented in their time. This research aims to describe and analyze the application of the Bhinci-bhinciki Kuli philosophy in the Buton community and the values contained in the philosophy of Bhinci-bhinciki Kuli.

RESEARCH METHOD

This research used ethnographic research with a qualitative approach, that is, a research method based on the philosophy of phenomenology, used to examine objects in natural conditions. The research subjects were the people expected to provide information about matters relating to the problem under study, namely the community chiefs. The data collection tool used was the interview technique. The analysis technique was interactive model analysis, namely data collection, data reduction, data presentation, and drawing conclusions during data collection.

The philosophy values of "Bhinci Bhinciki kuli"

The philosophical values of "Bhinci bhinciki kuli" are the highest principle of life in the Buton community. This philosophy contains noble values as the basis for human thinking and acting in realizing a civil society; these values include Poma-masiaka, Poangka-kataka, Popia-piara, and Pomae-maeka. Currently, our society is experiencing a crisis of morality caused by a lack of knowledge, understanding, and application of the values of "Bhinci bhinciki kuli" in the family and by global challenges marked by technological developments. Law Number 23 of 2002 concerning the Protection of Children's Rights seeks to protect children's rights from various forms of violence, including trafficking, exploitation, child abuse, verbal abuse, physical abuse, intimidation, sexual abuse, and other activities that can hinder children's growth and development.
These various obstacles only make it more difficult for children to define their personal identity, character, and patterns of life in society when they become adults. (Costa-Font et al., 2018) concluded that culture carries the social norms that shape children's behavior; through culture, children acquire a personal identity in social interaction. Culture is the principle of life of a social community, so a strategy is needed to implement it in family life and instill it from early childhood. With a better approach, it is expected that objective values can be provided that apply to children at every opportunity, so that they better understand people at all times in a more mature and wise context. The philosophical values of Bhinci bhinciki kuli accord with the definition of (Sjarkawi & Pd, 2006): value means that something is useful, capable, empowered, valid, and strong, and a value is the quality of a thing that makes it likeable, desirable, useful, valued, and an object of importance. According to the relativist view: 1) values are relative because they relate to preferences (attitudes, desires, dislikes, feelings); 2) values differ from one culture to another; 3) judgments such as right and wrong, good and bad, vary; 4) universal, absolute, and objective values can be applied to all people at all times. This definition is corroborated by the research of (Kapitány et al., 2018), which concluded that cultural values are living traditions that contain socio-cultural values; children who are raised with cultural rituals can be formed into individuals who hold social values in society. The cultural values of life adopted by the ancestors, which contain moral and religious values, need to be instilled in children from an early age through daily interactions in the family and in society, in line with the theory of (Bronfenbrenner, 1979: 5p) that the microsystem (family) environment plays an important role. This age is known as the golden age, in which a child's potential develops rapidly. According to Montessori (Morrison, 2007), children of this age have brains that absorb easily, called absorbent minds, and they also pass through very sensitive periods. Therefore, parents are expected to apply optimal care and involvement, together with the cultural values of their particular community, so that all the potential of early childhood can be developed.

Based on the research results, the values of "Bhinci bhinciki kuli" are the highest principle of life in the Wolio community; these values include Poma-masiaka, Poangka-kataka, Popia-piara, and Pomae-maeka. In family interaction, parents act as models and carry out habituation in implementing Poma-masiaka, mutual love: loving friends, parents, siblings, and society. Poangka-kataka means mutual respect: being polite in communication with parents, siblings, and friends and respecting teachers and others. Pomae-maeaka means mutual respect for dignity, and Popia-piara means caring for one another: helping friends, parents, siblings, and others. In this research, the values of "Bhinci bhinciki kuli" must be applied through habituation and modeling from early childhood; the values instilled in early childhood will shape children into people of good character and personality in the future.
This finding is supported by the research conducted by (Raman et al., 2014), which concluded that the pattern of life, the behavior of parents, and the ways of educating and caring for children are influenced by the cultural values understood by society. In the life of Indian society this is known as the Maamuli culture, a socio-cultural tradition containing moral values, childbirth ethics, and patterns of educating children; these values are understood within Latin Indian families. Maamuli cultural values have been applied by the ancestors and contain moral and religious values, so they need to be instilled in children in India from an early age through daily interactions in family and social life.

Habituation

Early childhood is a potential age, known as the golden age, which means that children at this age absorb information easily and construct it into knowledge. Among the potentials of early childhood are the absorbent mind, with which children easily obtain various information from their environment, and the ZPD (Zone of Proximal Development), meaning that the child has a certain space and time as a sensitive zone to be stimulated properly by the parents through habituation. Habituation was one of the steps or strategies of parents in the Wolio family to instill the values of Bhinci-bhinciki kuli. Habituation shapes personality through good actions that are deliberately repeated because they have a positive impact on the person and on others. This is supported by the research of (Nielsen et al., 2012), which concluded that culture can be transmitted to shape children's behavior through habituation in play. The strategy is reinforced by Pavlov's theory (Santrock, 2007: 23p) that children's behavior can be formed through accustomed activities, called "classical conditioning". When behavior is habituated within the family, at school, and in the community, the child grows into a person with good moral values. The noble values imprinted in the child are a form of behavior usually practiced in the family when building social interactions and adapting to the wider environment. (Escalante-Barrios et al., 2020) concluded that parent-child interaction is a cornerstone of early childhood development and one way early childhood programs can positively influence early development. In the Wolio family, instilling the cultural values of the Bhinci-Bhinciki Kuli philosophy played a very important role; these are character values that should be passed from generation to generation, and with these values children grow into civilized individuals, able to build harmonious social communication and to be responsible. For the Miana Wolio in the sultanate era, these values were highly valued in family life and were socialized even in government. Wolio children were accustomed to interacting with the values of Poma-masiaka, which means to love each other, Poangka-kataka, which means mutual respect, Pomae-maeka, which means mutual respect for dignity, and Popia-piara, mutual help, protection, and care, so that the civil society of that era could be realized. This is supported by the research of (Gardner et al., 2018), which concluded that parenting behavior influences child behavior.
According to (Dewantara, 2004), when children hold good values from an early age, these values lead them to become individuals with good moral behavior, moral feeling, and moral knowledge. In everyday life, children were accustomed to speaking with courtesy, sharing with peers, and being guided to be responsible and fair. Children accustomed to noble cultural values grow up with a personality of character. According to (Bronfenbrenner, 1979), the microsystem environment, which includes the family, plays a very important role in the formation of character values; good character values are formed in the family through interaction, both in communication and in moral action. Research conducted by (Metwally et al., 2016) showed that a good family culture influences the development of social-emotional behavior, and that parents' knowledge of the importance of values in socio-cultural behavior is a determinant of children's social-emotional behavior. From these findings it can be concluded that cultivating interactions imbued with social values, in the family and in society, can influence the development of children's social behavioral values. The results of (Purzycki et al., 2018) show that instilling cultural and religious moral values can form a good personality in children, who are then able to position themselves well when interacting in a social environment. Therefore, good cultural values are the responsibility of the family to pass on to the next generations, so that these values are cultivated in family life and in society. According to (Dewantara, 2004), civility should be planted by the family early on; adab values concern the study of character so that children are able to distinguish good and bad deeds, leading them to act in accordance with the understanding and habits experienced in their lives. As (Lickona, 2013) notes, in many situations habits are a factor forming moral behavior; William Bennett said that people of good character act sincerely, loyally, courageously, virtuously, and justly, even making the right choices unconsciously. They do the right thing because it is based on habit.

Modelling

Early childhood is a potential age, known as the golden age, which means that children at this age absorb information easily and construct it into knowledge. Children of this age have an absorbent mind, which allows them to obtain various information from their environment, and a ZPD (Zone of Proximal Development), meaning a certain space and time as a sensitive zone to be stimulated properly by the parents. Modelling is one of the learning methods for shaping learners' personalities. Through modelling, parents provide examples and role models to be followed, emulated, and imitated by children, both in the family and in the school environment. Wolio parents in the past made themselves personal role models. The values of Bhinci-bhinciki kuli were imprinted in the Wolio family, with parents modelling the behaviors of Poma-masiaka, Popia-piara, Poangka-kataka, and Pomae-maeka to their children.
The mutual behavior of Poma-masiaka is seen, observed, and even felt directly by the child while being cared for and guided by both parents, so that the child pays attention to the admired figure. The research conducted by (Fernández-Ballesteros et al., 2020) concluded that a good culture affects behavior: children become friendly and competent and show happiness, satisfaction, and social participation. The behavior of Bhinci bhinciki kuli, once imprinted in the child, is then applied in social interactions. According to Ki Hadjar Dewantoro (Sujiono, 2009: 126p), teachers and parents providing education in early childhood must hold three concepts, namely Ing ngarso sung tulodo, meaning that both teacher and parent are role models for their children; Ing madyo mangun karso, building the spirit and acting as a guide in providing instruction; and Tut wuri handayani, meaning that teachers and parents always provide motivation and encouragement to develop children's potential. In addition, the method can be applied through habituation and modelling in both the school and the family environment. Research conducted by (Gould et al., 2018) concluded that social culture must be instilled through action and teaching. According to (Lickona, 2013), parenting is the behavior of parents in disciplining children and making them responsible through guidance and assertiveness. The family is the first and foremost educator in a child's life, because it is from the family that the child receives education for the first time, and the family is the basis of the child's later development and life. Pestalozzi (Morrison, 1988: 46) said that parents are the best teachers for their children: the family provides the basis for the formation of behavior, character, morals, and education from an early age. Parental involvement can be interpreted as a parent's perception of their involvement in childcare, in the form of active participation in play and free time. Research by (Atchley et al., 2011) concluded that children can develop their social-moral abilities through direct teaching, modelling, and learning by doing; teachers and parents should therefore set a good example to develop children's social and moral abilities. Exemplary conduct is the main key for parents in educating children to become people of character. As stated by (Lickona, 2013), values are caught, not taught, meaning that good cultural values are captured by children through good modelling and taught through direct explanation.

CONCLUSION

Based on the results of the research, it is concluded that the values of "Bhinci Bhinciki Kuli", namely Poma-masiaka, which means mutual love, Poangka-kataka, which means mutual respect, Pomae-maeaka, which means mutual respect for dignity, and Popia-piara, which means caring for one another, must be instilled from early childhood in daily family life through habituation and modelling. From early childhood, parents should accustom children to, and model, speaking soft and polite words when talking to parents, older children, and friends of their own age; helping one another; having the spirit of mutual cooperation, sympathy, and empathy; and loving to do humanitarian activities, so that children grow up with a personality of good character.
v3-fos-license
2018-12-08T14:51:26.196Z
2018-12-01T00:00:00.000
54452216
{ "extfieldsofstudy": [ "Chemistry", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/s41598-018-36144-2.pdf", "pdf_hash": "77aec8b161d4f5919ce9d3c96faac205b4823583", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:418", "s2fieldsofstudy": [ "Medicine", "Biology" ], "sha1": "77aec8b161d4f5919ce9d3c96faac205b4823583", "year": 2018 }
pes2o/s2orc
DPTIP, a newly identified potent brain penetrant neutral sphingomyelinase 2 inhibitor, regulates astrocyte-peripheral immune communication following brain inflammation

Brain injury and inflammation induces a local release of extracellular vesicles (EVs) from astrocytes carrying proteins, RNAs, and microRNAs into the circulation. When these vesicles reach the liver, they stimulate the secretion of cytokines that mobilize peripheral immune cell infiltration into the brain, which can cause secondary tissue damage and impair recovery. Recent studies suggest that suppression of EV biosynthesis through neutral sphingomyelinase 2 (nSMase2) inhibition may represent a new therapeutic strategy. Unfortunately, currently available nSMase2 inhibitors exhibit low potency (IC50 ≥ 1 μM), poor solubility and/or limited brain penetration. Through a high throughput screening campaign of >365,000 compounds against human nSMase2 we identified 2,6-Dimethoxy-4-(5-Phenyl-4-Thiophen-2-yl-1H-Imidazol-2-yl)-Phenol (DPTIP), a potent (IC50 30 nM), selective, metabolically stable, and brain penetrable (AUCbrain/AUCplasma = 0.26) nSMase2 inhibitor. DPTIP dose-dependently inhibited EV release in primary astrocyte cultures. In a mouse model of brain injury conducted in GFAP-GFP mice, DPTIP potently (10 mg/kg IP) inhibited IL-1β-induced astrocyte-derived EV release (51 ± 13%; p < 0.001). This inhibition led to a reduction of cytokine upregulation in liver and attenuation of the infiltration of immune cells into the brain (80 ± 23%; p < 0.01). A structurally similar but inactive analog had no effect in vitro or in vivo.

[…] nSMase2 5. Upregulation of nSMase2 activity is associated with cognitive impairment in HIV infection 6, and with plaque deposition in AD 7,8. Moreover, astrocyte-derived EVs (ADEVs) isolated from the plasma of AD patients contain increased amounts of complement proteins, implying that glial activation leads to the release of EVs that may play some role in regulating innate immunity 9. Our group has shown that brain inflammation, a common theme in many neurodegenerative disorders 10, can trigger the release of EVs from astrocytes which primes the infiltration of immune cells into the brain via upregulation of cytokines in the periphery 11. Taken together, inhibition of EV secretion through inhibition of nSMase2 is emerging as a novel avenue for the treatment of diseases associated with aberrant exosomal intercellular communication 11-13. Unfortunately, limitations of currently available nSMase2 inhibitors have prevented a detailed evaluation of the role of nSMase2 in disease models and the advancement of drug-like nSMase2 inhibitors to the clinic. Currently available nSMase2 inhibitors have low potency (IC50s in the µM range), poor aqueous solubility, and/or limited brain penetration. GW4869 14, the most widely used inhibitor, has low inhibitory potency (IC50 = 1 µM) in biochemical assays and very poor solubility (practically insoluble in water, with poor solubility in organic solvents such as DMSO, 0.2 mg/ml). These attributes have hampered GW4869's clinical development. Cambinol, an inhibitor our group identified from a pilot screen of commercially available small chemical libraries 15, showed better solubility, but it was metabolically unstable and exhibited a poor in vivo pharmacokinetic profile. Chemistry efforts by our laboratory to improve cambinol's potency (IC50 = 5 µM) and stability were unsuccessful.
Herein, we report on a high throughput screening (HTS) campaign of over 365,000 compounds that identified a potent inhibitor of nSMase2 termed DPTIP, with an excellent pharmacokinetic profile including significant brain penetration, which was capable of dose-dependently blocking EV release from primary astrocytes. Moreover, in a mouse model of brain inflammation that recapitulates common features of neurodegenerative diseases, DPTIP potently inhibited IL-1β-induced ADEV release, peripheral cytokine upregulation and neutrophil migration into the brain.

Results and Discussion

Development of a 1536-well cell-free human recombinant nSMase2 enzyme activity assay. Human nSMase2 catalyzes the hydrolysis of sphingomyelin (SM) to phosphorylcholine and ceramide. As we reported previously, we used the Amplex Red system to monitor nSMase2 activity 15. In this reaction, one of the enzymatic products, phosphorylcholine, is stoichiometrically converted through a series of enzyme-coupled reactions to fluorescent resorufin, so that the fluorescence signal is directly proportional to nSMase2 activity (Fig. 1A). An enzymatic assay protocol was developed in 1536-well format for implementation in HTS. Several parameters were first optimized through measurement of the fluorescence signal. Fluorescence signal increased with longer times of incubation (15-150 min) and increasing nSMase2 concentrations (0.03 to 0.5 µg protein/mL) at a constant SM concentration (20 µM) (Fig. 1B). Similarly, fluorescence signal increased with longer times of incubation (30-150 min) and increasing SM concentrations (5-40 µM) at a constant enzyme concentration (0.063 µg protein/mL) (Fig. 1C). Based on these results, we chose 0.1 µg protein/mL human nSMase2 cell lysate, 20 µM SM in a total volume of 4 µL and 2 h incubation at 37 °C to assess assay performance in HTS format. Under these conditions, the reaction rate was linear with a robust fluorescence signal of approximately 2500 relative fluorescent units (RFU). Cambinol was used as the positive inhibitor control 15; it was pre-incubated with human nSMase2 for 15 min prior to addition of SM. The final DMSO concentration was 0.57%. The assay exhibited signal/background = 21 and Z' = 0.8 (Fig. 1D). We also evaluated the dose response of inhibition by cambinol and GW4869 to determine variability in the IC50 values from plate to plate. GW4869 was insoluble in DMSO and appeared as a yellow pellet at the 3 highest concentrations, so it was excluded as a positive control. Cambinol's average IC50 from 4 independent determinations was 27 ± 1 µM (Fig. 1E). The final stage of validation of the assay for HTS was the screening of the Library of Pharmacologically Active Compounds (LOPAC) in 1536-well plates using the same assay conditions at four different inhibitor concentrations (0.4, 2, 11 and 57 µM). Overall, the sample field was even, there were no plate positional effects, and the number of active hits increased as the concentration increased.

HTS campaign and data analysis of hits led to the identification of seven potent nSMase2 inhibitors. Following assay validation, we screened 365,000 compounds from the Molecular Libraries Small Molecule Repository (MLSMR) and 2816 compounds from the NCGC pharmaceutical collection (NPC) library for human nSMase2 inhibitors. Compounds were screened at 4 concentrations: 1.1, 11, 57 and 114 µM. Cambinol (full dose response in each plate) was used as positive control.
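For reference, the screening-window coefficient quoted in the assay-validation results above (Z' = 0.8) is conventionally computed with the standard Z'-factor formula; this is a general definition rather than anything specific to the present assay, with μ and σ denoting the mean and standard deviation of the positive (uninhibited enzyme) and negative (fully inhibited or no-enzyme) control wells:

Z' = 1 - \frac{3\,(\sigma_{p} + \sigma_{n})}{\lvert \mu_{p} - \mu_{n} \rvert}

Values above roughly 0.5 are generally taken to indicate a control separation robust enough for HTS, so a Z' of 0.8 corresponds to a comfortable screening window.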
After eliminating promiscuous compounds, 1990 compounds that had maximal inhibitory responses >50% at the highest concentration tested and robust curve response classes (CRC) 16 were selected for re-testing in the same human nSMase2 activity assay and counter screen. The purpose of the counter screen was to identify false positives, i.e., compounds that inhibited the enzyme-coupled reactions of the assay system; it was carried out in the absence of human nSMase2 and SM and using phosphorylcholine as substrate. Out of the 1990 compounds, 1782 (90%) were confirmed in the 7-dose-response hnSMase2 confirmatory assay, but most (1718; 86%) were found to be false positives in the counter screen, resulting in 64 bona fide nSMase2 inhibitors. We also considered the difference between potency and response in the counter screen to select 156 additional hits that showed robust inhibition of the overall reaction but were weakly active in the counter screen. There were a total of 220 compounds for follow-up confirmation (Fig. 2A). Out of the 220 compounds tested, 7 compounds exhibited dose responses with IC50 ≤ 10 µM that were also inactive in the counter assay (Fig. 2B).

DPTIP is the most potent nSMase2 inhibitor reported to date. Filtering of the HTS hits as outlined above resulted in the identification of MLS000523327 or DPTIP (2,6-Dimethoxy-4-(5-Phenyl-4-Thiophen-2-yl-1H-Imidazol-2-yl)-Phenol) as the most promising compound based on potency and chemical optimization feasibility. The IC50 for DPTIP using an extended inhibitor concentration range (10 pM - 100 µM) was 30 nM (Fig. 3A). This IC50 is 30- and 160-fold more potent than the prototype inhibitors GW4869 (1 µM) 14 and cambinol (5 µM) 15. To our knowledge, this is the first nSMase2 inhibitor described with nanomolar potency. Because DPTIP contains a hydroxyl group which could be a metabolic liability in vivo (Fig. 3A), we determined the importance of this group for inhibitory activity. We synthesized the des-hydroxyl analog of DPTIP (Fig. 3B) and showed that it was inactive against human nSMase2 (IC50 > 100 µM) (Fig. 3B). These results demonstrate the importance of the hydroxyl group for inhibition, and also provide a structurally similar inactive DPTIP analog for use as a comparison compound in subsequent pharmacological assays.

DPTIP exhibited a non-competitive mode of inhibition and showed selectivity for nSMase2 versus related enzymes. DPTIP exhibited the hallmarks of noncompetitive inhibition; when the rate of reaction with respect to SM concentration was monitored at increasing inhibitor concentrations, there was a decrease in the maximal rate (Vmax) while the Michaelis constant (Km) was unchanged (Fig. 3C). Vmax and Km for each data set at a given inhibitor concentration were obtained from non-linear regression fits to Michaelis-Menten kinetics (Fig. 3C). DPTIP did not inhibit members of two related enzyme families, including alkaline phosphatase (IC50 > 100 µM in the counter screen), a phosphomonoesterase, or acid sphingomyelinase (IC50 > 100 µM), a phosphodiesterase closely related to nSMase2 (results not shown). Inhibitor selectivity with respect to enzymes from related families is consistent with a noncompetitive mode of inhibition, as DPTIP is likely acting at a site other than the catalytic site. Additional data also indicate that DPTIP exhibits specificity for nSMase2; DPTIP has been screened in 759 bioassays at NCATS and only weak activity (2-50 µM) was observed in 19 (2.5%) of these assays (https://pubchem.ncbi.nlm.nih.gov/compound/5446044#section=BioAssay-Results).
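The kinetic behavior described above (a declining apparent Vmax with an unchanged Km) matches the textbook rate law for a purely noncompetitive inhibitor; the expression below is that general model rather than a fit to the DPTIP data, with K_i denoting the inhibitor dissociation constant:

v = \frac{V_{\max}[S]}{\left(1 + [I]/K_i\right)\left(K_m + [S]\right)} = \frac{V_{\max}^{\mathrm{app}}[S]}{K_m + [S]}, \qquad V_{\max}^{\mathrm{app}} = \frac{V_{\max}}{1 + [I]/K_i}

Increasing [I] therefore scales down the apparent Vmax without shifting Km, which is the pattern seen in the Michaelis-Menten fits of Fig. 3C.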
DPTIP showed metabolic stability in mouse and human liver microsomes. One potential liability when using chemical probes in vivo is lack of metabolic stability, which structurally inactivates the compound before it can reach its molecular target. We evaluated DPTIP for metabolic stability using human and mouse liver microsomes as we have previously described 17. Percent of drug remaining over time was determined by liquid chromatography-tandem mass spectrometry (LC/MS/MS) analysis. In the presence of NADPH, DPTIP remained intact (100% remaining at 1 h) in both mouse and human liver microsomes (Fig. 4A), indicating that the compound is not affected by CYP-450-mediated metabolism. These in vitro results indicate DPTIP does not have major liver metabolic liabilities that would preclude its use as an in vivo probe.

DPTIP exhibited plasma exposure and brain penetration after systemic dosing in mice. In the next set of experiments, we evaluated the in vivo pharmacokinetic profile of systemically administered DPTIP. Mice were given DPTIP (10 mg/kg IP) and plasma and brain levels of DPTIP were measured at 0.25, 0.50, 1, 2, 4 and 6 h post dose (n = 3 per time point). DPTIP peak concentration in both plasma and brain was at 0.5 h (Cmax plasma = 11.6 ± 0.5 µM; Cmax brain = 2.5 µM) (Fig. 4B). The AUC0-∞ of DPTIP in plasma and brain was 10 ± 1 and 2.6 ± 0.5 µM·h, respectively, resulting in an AUCbrain/AUCplasma = 0.26. Brain levels of DPTIP exceeded its IC50 for inhibition of nSMase2 up to 4 h following 10 mg/kg systemic dosing (Fig. 4B).

DPTIP inhibited EV release from primary astrocytes whereas its inactive analog had no effect. Independent laboratories have shown that pharmacological and genetic inhibition of nSMase2 blocks EV secretion from glial cells 12. Consequently, we evaluated DPTIP for its ability to inhibit EV release from primary glial cells in vitro. Mouse primary astrocytes were activated by FBS withdrawal as we have previously described 11 and treated with DPTIP or its closely related inactive des-hydroxyl analog (Fig. 5A) over a range of concentrations (0.03-30 µM), using DMSO (0.02%) as vehicle control. Two hours after treatment, EVs were isolated from the media and quantified by nanoparticle tracking analysis. DPTIP inhibited EV release from astrocytes in a dose-dependent manner (Fig. 5A). In contrast, its closely related inactive analog had no effect on EV release, suggesting DPTIP inhibits EV release via inhibition of nSMase2. We also determined the activation status of serum-deprived and non-deprived astrocytes after DPTIP treatment. Rat primary astrocytes were treated with DPTIP (10 μM) or the inactive analog for two hours, with or without serum deprivation-induced stress. Cells were fixed and immunofluorescence labeling for GFAP was performed. DPTIP and the inactive analog without serum starvation did not change GFAP levels (Fig. 5B,C). Serum deprivation resulted in activation of astrocytes, as evidenced by an increase in GFAP fluorescence intensity compared to non-treated controls. Treatment with DPTIP prevented astrocyte activation in response to serum starvation, while the inactive analog failed to prevent astrocyte activation (Fig. 5B,C).

DPTIP inhibited biomarkers of brain inflammation in vivo whereas its inactive analog had no effect.
Given DPTIP's brain penetration in mice and its ability to inhibit EV release in vitro, we next evaluated the ability of DPTIP to ameliorate EV release from astrocytes, cytokine upregulation in liver, and neutrophil migration into brain in an in vivo mouse model of brain inflammation. As we have previously shown 11,18, striatal injection of IL-1β in mice expressing GFP-GFAP in astroglia triggers a release of GFP-labelled EVs that rapidly enter the plasma, resulting in cytokine upregulation in liver and peripheral immune cell migration into brain 11. Mice were dosed (10 mg/kg IP, DPTIP or inactive analog) 0.5 h prior to IL-1β striatal injection. At this dose, brain concentrations of DPTIP are above its IC50 for nSMase2 inhibition for at least 4 h after compound administration (Fig. 4C). There were two groups of mice: the first group was sacrificed 2 h after IL-1β administration by heart puncture, and GFP-labeled circulating EVs were measured together with liver cytokines. Mice in the second group were dosed a second time with DPTIP or inactive analog at 12 h and sacrificed at 24 h after IL-1β administration to measure brain neutrophils (Fig. 6A). Counting of astrocyte-released EVs (GFP+) in blood and analysis of liver cytokines were conducted after a single injection of DPTIP. Although release of EVs from astrocytes can be initiated immediately after intracranial injection of IL-1β, infiltration of neutrophils into brain parenchyma occurred 12-24 h after the IL-1β injection. Since the pharmacokinetic profiles of DPTIP in plasma and brain following a 10 mg/kg IP dose showed that brain levels of DPTIP exceeded its IC50 for nSMase2 for only 4 h post dose, we administered DPTIP twice after IL-1β injection to ensure that inhibition of nSMase2 was sustained during the experiment. When mice were dosed with DPTIP, the number of astrocyte-derived EVs was reduced by 51 ± 13% 2 h post IL-1β administration (Fig. 6B). Western analysis using the isolated exosomal fraction confirmed the presence of CD63 (transmembrane protein), TSG101 (cytosolic protein) and Flotilin-1 (lipid raft-associated protein), commonly used EV markers 19,20. The GFP signal was an indication that these EVs originated in brain 11, while the lack of mitofilin and α-actinin signals indicated that the vesicles were not of mitochondrial 21 or cytoskeletal 22 origin, respectively (Fig. 6B). Upregulation of liver cytokines upon IL-1β treatment was inhibited by DPTIP (Fig. 6C). Neutrophils, as measured by immunohistochemistry of coronal brain sections using LY6b antibody, showed reduced staining in sections from animals treated with DPTIP compared to IL-1β-treated animals (Fig. 6D); corresponding quantification showed that neutrophil migration into brain was reduced by 80 ± 23% compared to IL-1β-treated animals (Fig. 6E). Administration of the closely related inactive analog had no statistically significant effect on IL-1β-induced EV release (Fig. 6B). The effects of the inactive des-hydroxyl DPTIP on production of TNF-α and IL-6 were marginal and not statistically significant. Although the magnitude of the reduction in CCL2 production by the inactive analog was high, the data were variable and also not statistically significant (Fig. 6C). Finally, des-hydroxyl DPTIP had no effect on neutrophil migration (Fig. 6D,E). Results with the inactive analog were consistent with the suggestion that DPTIP effects occur through nSMase2 inhibition.
Importantly, these results are in agreement with our previous findings that co-injection of IL-1β with nSMase2 inhibition (either GW4869, altenusin, lentivirus targeting astrocytic nSMase2, or nSMase2 KO mice) suppresses neutrophil infiltration into brain parenchyma 11. The same studies also indicated that nSMase2 inhibition suppressed activation of astrocytes and microglia 11. Within this study, we focused our efforts on astrocytes because of the intimate association of these cells with the blood-brain barrier (BBB), and because in our previous study we knocked down nSMase2 expression selectively in astrocytes and showed that this inhibited the release of astrocyte-derived EVs (ADEVs) and prevented the liver cytokine response and leukocyte trafficking into brain following parenchymal injection of IL-1β 11. Although it remains possible that neuronal- or microglial-derived EVs are also affected by nSMase2 inhibition, these earlier findings suggest that ADEVs are a major source of brain EVs that regulate the peripheral response to CNS injury. Future studies will include the use of neuronal and microglial derived EVs. The exact mechanism of serum deprivation-induced EV release is not known. Serum deprivation is known to produce a stress response that stimulates secretory pathways in astrocytes 23. Additionally, nutrient deprivation has been shown to cause accumulation of ceramides in astrocytes, likely due to stress-response activation of nSMase2 24. Nutrient starvation has been reported to increase nSMase2 activity and induce its expression in other cell types 25. The serum deprivation-induced EV release observed in our experiments may therefore be the result of nSMase2 activation in response to nutrient deprivation stress.

A schematic illustration of the in vivo experiment is shown in Fig. 7, consistent with the data detailed above as well as with the previous literature. In brief, striatal IL-1β injection activates the IL-1β receptor on the plasma membrane of astrocytes, which in turn activates nSMase2 enzymatic activity to catalyze the hydrolysis of sphingomyelin and produce ceramide 26. Ceramide is used to manufacture intracellular vesicles (IVs) 1 that are released from astrocytes as EVs and migrate into plasma, where they induce a peripheral acute cytokine response, mainly in liver, and prime immune cells to transmigrate to the brain 11. In the presence of DPTIP, inhibition of nSMase2 prevents ceramide production and EV formation and secretion (Fig. 6B), cytokine upregulation (Fig. 6C), and neutrophil migration (Fig. 6D).

In summary, DPTIP is the most potent nSMase2 inhibitor identified to date (IC50 30 nM), exhibits selectivity, and is metabolically stable and brain penetrant. DPTIP is an inhibitor of EV release in primary glial cells and in vivo. In addition, biomarkers that have been associated with EV release from brain, including cytokine upregulation and immune cell migration to brain, were also inhibited by DPTIP. The des-hydroxyl inactive analog of DPTIP did not inhibit EV release in vitro and had no effect on IL-1β-induced cytokine regulation or neutrophil migration to brain in vivo. DPTIP is a considerable improvement over other nSMase2 inhibitors identified to date; it can be used as a probe in animal models of disease associated with EV dysregulation, and it contains a structural scaffold that is actively being optimized for clinical translation.

Methods

Expression of human nSMase2.
Full-length human nSMase2 cDNA with a C-terminal Flag tag cloned into a pCMV6-Entry expression vector (Origene) was transfected into HEK293 cells using Lipofectamine 2000 (Life Technologies). Selection of transfected cells was carried out for two weeks with 500 µg/ml G418 in EMEM containing 10% FBS (ATCC) and 2 mM glutamine (Life Technologies). Expression of human nSMase2 was confirmed by Western blot analysis using an antibody specific against nSMase2 (R&D) diluted to 0.4 µg/ml in Tris-buffered saline with 0.1% Tween 20 and 5% bovine serum albumin. Cells expressing human nSMase2 were grown to confluency in 150 mm dishes, washed twice with cold PBS, and harvested using a cell scraper in lysis buffer (pH 7.5; 100 mM Tris-HCl, 1 mM EDTA, 100 mM sucrose, 100 µM PMSF, 1X protease inhibitor cocktail III (Calbiochem)), 1 mL per dish. Cell lysis was achieved by sonicating 3 times on ice for 30 s. Protein concentration was determined using the bicinchoninic acid (BCA) assay. Aliquots of cell lysate were snap frozen and stored at −80 °C. Activity of recombinant human nSMase2 from cell lysates remained stable for at least six months.

Fluorescence-based nSMase2 activity assay in 1536-well format. Measurement of nSMase2 activity using fluorescence as readout was optimized for dose-response quantitative HTS (qHTS). The assay was carried out in black, solid-bottom, medium-binding 1536-well plates (Greiner, 789176-F). The fluorescence response was optimized with respect to nSMase2 concentration, incubation time, and SM concentration. Resorufin was monitored with Viewlux µHTS Microplate Imagers (Perkin Elmer) at energy levels of 1,000 or 3,000 and exposure times of 1 or 2 s. Fluorescence readings varied when using Viewlux offline (assay characterization) vs. Viewlux online (HTS); in order to account for differences in fluorescence efficiency, assay performance was monitored from machine to machine based on assay dynamic range and cambinol IC50 reproducibility. Based on the results of the different conditions outlined above, the HTS campaign was carried out using 0.1 µg protein/µL nSMase2 preparation, 20 μM SM, and a 2 h incubation. Control inhibitors or test compounds (23 nL) were added at various concentrations in DMSO solution to the nSMase2 preparation and incubated for 15 min prior to the addition of substrate and enzyme-coupling detection reagents.

Figure 6. Effects of DPTIP in a mouse model of brain inflammation. (A) Experiment timeline: Four groups of GFAP-EGFP mice were administered saline, IL-1β, IL-1β + DPTIP (10 mg/kg), or IL-1β + inactive analog (10 mg/kg). Compounds were given 0.5 h before IL-1β dosing. One group of mice was sacrificed 2 h after IL-1β administration to determine the effects of the various treatments on extracellular vesicle (EV) release from brain and on liver cytokines. The second group was dosed a second time 12 h after IL-1β administration and sacrificed at 24 h to evaluate the effects of the different treatments on neutrophil infiltration into brain. (B) GFP-labeled EVs in plasma under different treatments. Data are mean ± SD, n = 5 mice per condition. *p < 0.05 compared to saline control; ### p < 0.001 compared to IL-1β group; ***p < 0.001 compared to saline group. There was no difference observed between the IL-1β and IL-1β plus des-hydroxyl analog groups. The panel to the right shows Western analysis of EVs evaluated against GFP, exosomal (CD63, flotilin-1, TSG101), mitochondrial (mitofilin) and cytoskeletal (α-actinin) markers. Full blot rows and columns are shown in the Supplementary Information (Fig. 1S).
(C) Liver cytokine levels under different treatments as measured by qRT-PCR of RNA isolated from fresh frozen liver tissue. Samples were analyzed in triplicate. **p < 0.01 and *p < 0.05 compared to saline control; ## p < 0.01 and # p < 0.05 compared to IL-1β group. (D) Neutrophil levels in brain as measured by immunohistochemistry using coronal brain sections and Ly6b antibody. (E) Quantitation of (D); **p < 0.01 compared to saline control; ## p < 0.01 compared to IL-1β group.

Compounds were screened at 4 doses, starting at 57 µM, with 5-fold dilutions. A customized screening robot (Kalypsys) was used for the primary screen. A step-by-step HTS assay protocol is given in the Supplementary Data (Table S1). Inhibitors of nSMase2 were selected using compound dose-response curve algorithms developed at NCGC to score actives, which assign each tested compound a compound response class (CRC) number 16. This method classifies primary hits into different categories according to their potency (IC50), magnitude of response (efficacy), quality of curve fitting (r²), and number of asymptotes. For example, a CRC of −1.1 represents a complete curve and high efficacy; a CRC of −1.2 represents a complete curve but partial efficacy. Compounds with CRCs of −1.1, −1.2, −2.1 and −2.2 were generally selected for confirmation and validation. Structural analysis of selected compounds was performed and promiscuous compounds were filtered out. A counter-assay to rule out compounds that inhibited the detection reaction was carried out in the absence of human nSMase2. The reaction was initiated with the addition of phosphorylcholine (alkaline phosphatase substrate) at a final concentration of 2 μM. Compounds that showed inhibitory activity in the counter-assay were removed from further validation.

Metabolic stability. The metabolic stability assay was conducted in mouse or human liver microsomes as we have described previously, with minor modifications 17. Briefly, the reaction was carried out in potassium phosphate buffer (100 mM, pH 7.4) in the presence of an NADPH regenerating system (compound final concentration 1 μM; 0.2 mg/mL microsomes). Compound disappearance was monitored over time using a liquid chromatography and tandem mass spectrometry (LC/MS/MS) method. Chromatographic analysis was performed using an Accela ultra-high-performance system consisting of an analytical pump and an autosampler coupled with a TSQ Vantage mass spectrometer (Thermo Fisher Scientific Inc., Waltham, MA).

In vivo pharmacokinetics. Pharmacokinetic studies in mice were approved by the Animal Care and Use Committee at Johns Hopkins University. Male CD1 mice between 25 and 30 g were obtained from Harlan and maintained on a 12 h light-dark cycle with ad libitum access to food and water. Test compounds were dosed at 10 mg/kg IP at a dosing volume of 10 mL/kg. Blood and brain tissue were collected at 0.25, 0.5, 1, 2, 4 and 6 h post dose (n = 3 per time point). Blood was obtained via cardiac puncture, and plasma was harvested from blood by centrifugation at 3000 × g for 15 min and stored at −80 °C. Brain tissues were harvested following blood collection, immediately snap frozen in liquid nitrogen, and stored at −80 °C until LC-MS analysis. Calibration standards were prepared using naïve mouse plasma or brain spiked with DPTIP.
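As a worked illustration of the dose-response curve fitting that underlies the potency (IC50) component of the CRC scoring described earlier in this section, the sketch below fits a four-parameter logistic (Hill) model to invented percent-activity data; the concentrations, responses, and starting guesses are placeholders rather than values from the screen.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    # Four-parameter logistic: activity remaining as a function of inhibitor concentration.
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([1e-9, 1e-8, 3e-8, 1e-7, 1e-6, 1e-5])   # mol/L, illustrative only
resp = np.array([98.0, 80.0, 52.0, 25.0, 6.0, 2.0])     # % activity remaining, illustrative only

p0 = [0.0, 100.0, 3e-8, 1.0]                             # bottom, top, IC50 guess, Hill slope
params, _ = curve_fit(four_pl, conc, resp, p0=p0, maxfev=10000)
bottom, top, ic50, hill = params
print(f"Estimated IC50 = {ic50:.2e} M, Hill slope = {hill:.2f}")

A curve of this shape, with well-defined upper and lower asymptotes, is what the CRC scheme would score as a complete response class.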
DPTIP standards and samples were extracted from plasma and brain by a one-step protein precipitation using acetonitrile (100% v/v) containing internal standard (losartan: 0.5 µM). The samples were vortex mixed for 30 secs and centrifuged at 10000 × g for 10 min at 4 °C. Fifty microliter of the supernatant was diluted with 50 µL water and transferred to a 250 µL polypropylene vial sealed with a Teflon cap and analyzed via LC/MS/MS as described above. Plasma concentrations (pmol/mL) as well as tissue concentrations (pmol/g) were determined and plots of mean plasma concentration versus time were constructed for PK analysis. Non-compartmental-analysis modules in Phoenix WinNonlin version 7.0 (Certara USA, Inc., Princeton, NJ) were used to assess pharmacokinetic parameters including maximal concentration (C max ), time to C max (T max ), and area under the curve extrapolated to infinity (AUC 0-∞ ). Inhibition of EV release from primary glial cells. Potential inhibition of test compounds on EV release from primary astrocytes was carried out as previously described (Dickens et al., 2017). Briefly, rat primary astrocytes were seeded onto 6-well plates at a density of 20,000 cells/well. Twenty-four hours after seeding, astrocytes were washed with PBS and the medium changed to media without FBS. Absence of FBS mimics a trophic factor withdrawal stimulus causing EVs to be released from astrocytes via an nSMase2-dependent pathway. Astrocytes were then treated with test compounds at different concentrations: 0.03, 0.1, 0.3, 1, 3, and 10 μM. DMSO (0.02%) was used as control. Two hours after treatment, media was collected and centrifuged at 2700 × g for 15 min at 4 °C. The supernatant was collected and the number of EVs quantified using ZetaView Nanoparticle Tracker (Particle Metrix GmBH, Meerbusch, Germany) and the corresponding ZetaVeiw software (8.03.04.01). Nanosphere size standard 100 nm (Thermo Scientific) was used to calibrate the instrument prior to sample readings. Instrument pre-acquisition parameters were set to 23 °C, a sensitivity of 65, a frame rate of 30 frames per second (fps), a shutter speed of 100, and laser pulse duration equal to that of shutter duration. Post-acquisition parameters were set to a minimum brightness of 25, a maximum size of 200 pixels, and a minimum size of 10 pixels. For each sample 1 mL of the supernatant was injected into the sample-carrier cell and the particle count measured at 5 positions, with 2 cycles of reading per position. The cell was washed with PBS after every sample. Mean concentration of EVs/mL (±SEM) was calculated from 4 replicates. Inhibition of EV release in vivo. All experimental protocols using vertebrate animals were reviewed by the Institutional Animal Care and Use Committee at Johns Hopkins University and are in accordance with the guidelines of the NIH guide for the care and use of laboratory animals. Striatal injections and EV measurements were performed as previously described by our group in adult (2-3 month) male GFAP-GFP mice (Jackson Laboratories) 11,18 . Mice were anesthetized with 3% Isoflourane (Baxter) in oxygen (Airgas), and placed in a stereotaxic frame (Stoelting Co.). A small burr hole was drilled in the skull over the left striatum using a dental drill (Fine Scientific Tools). IL-1β (0.1 ng/3 µL) was injected (total volume of 3 μL) at the rate 0.5 µL/min via a pulled glass capillary tip diameter <50 µm 18 ; using the stereotaxic coordinates: A/P + 0.5; M/L −2; −3 D/V. Saline was used as a control. 
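The non-compartmental parameters reported in the pharmacokinetics section above were obtained with Phoenix WinNonlin; the sketch below only illustrates how Cmax, Tmax and AUC can be derived from a mean concentration-time profile (linear trapezoidal rule with log-linear extrapolation of the terminal phase). The sampling times match the study design, but the concentration values are hypothetical.

```python
import numpy as np

# Sampling times (h) used in the study and hypothetical mean plasma
# concentrations of DPTIP (pmol/mL); real values come from LC/MS/MS.
t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 6.0])
c = np.array([950.0, 820.0, 560.0, 300.0, 110.0, 40.0])

cmax = c.max()
tmax = t[c.argmax()]

auc_last = np.trapz(c, t)                       # linear trapezoidal rule, 0.25-6 h

# Terminal elimination rate constant from a log-linear fit of the last 3 points
k_el = -np.polyfit(t[-3:], np.log(c[-3:]), 1)[0]
auc_inf = auc_last + c[-1] / k_el               # extrapolation to infinity

print(f"Cmax = {cmax:.0f} pmol/mL at Tmax = {tmax} h")
print(f"AUC(0.25-6 h) = {auc_last:.0f}, AUC(0-inf) ~ {auc_inf:.0f} pmol*h/mL")
print(f"terminal half-life ~ {np.log(2) / k_el:.1f} h")
```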
When DPTIP or its des-hydroxyl analog were used, they were given IP (10 mg/kg, 5% DMSO, 5% Tween-80 in saline) 30 min before IL-1β injection. Following infusion, the capillary was held in place for 5 min to allow for solution to diffuse into the tissue. Animals were sacrificed at 2 h by an overdose of anesthetic, and transcardially perfused with ice-cold saline containing heparin (20 µL per 100 ml, Sigma). Blood was collected via cardiac puncture using a heparin (Sigma Aldrich) coated syringe and EDTA tubes (BD) 2 h following striatal injections. Blood was immediately centrifuged at 2,700 x g for 15 min (20 °C) to obtain plasma. Plasma was further centrifuged at 10,000 g for 15 min (4 °C) to generate platelet free plasma. This procedure removes large particles such as apoptotic bodies. Cytokine measurements. RNA was isolated from fresh frozen tissues (10 to 50 mg) using the RNeasy Mini Kit (Qiagen). Total RNA was reverse-transcribed and quantified using previously published methods 27 . For quantitative real-time PCR (qRT-PCR), each reaction contained SYBR Green Master Mix (12.5 ml; Life Technologies), diethyl pyrocarbonate H 2 O (10.5 ml), forward and reverse primers to CCL2, TNFα, IL-6, IL-1b, IL-17a, IL-10, IGFR1, and CXCL1 (0.5 ml each; Sigma-Aldrich), and cDNA (1 ml). Each 96-well plate included a nontemplate control, and samples were analyzed in triplicate on an Applied Biosystems 7300 (Life Technologies). Cycling parameters were as follows: one cycle for 2 min at 50 °C, one cycle for 10 min at 95 °C, and 40 cycles for 15 s at 95 °C and for 1 min at 60 °C. The change in threshold cycle (ΔC t ) for each sample was normalized to β-actin, and ΔΔC t was calculated by comparing ΔC t for the treatment group to the average ΔC t of the control group 28 . Immunohistochemistry. Coronal brain sections (30 µm) were prepared using a cryostat microtome (Leica). Endogenous peroxidase activity was quenched using a 1% solution of H 2 O 2 in methanol, and primary antibody Ly6b (1:1000, AbD Serotec), was incubated at 4 °C overnight. Sections were washed (3 × PBS), and biotinylated secondary antibody (1:100, Vector Laboratories) was added at room temperature for 2 hours. Staining was visualized using an avidin-biotin complex (1:100 of A and B, Vector Laboratories) and DAB-HCl using a microscope to monitor staining progression. Stereological quantitation was performed using a one-in-five series (270-µm spacing), from the rostral point of bregma +1.10 mm to the caudal point of bregma −0.58 mm as previously described 29 . Ethical approval. All experimental protocols using vertebrate animals were reviewed by the Institutional Animal Care and Use Committee at Johns Hopkins University and are in accordance with the guidelines of the NIH guide for the care and use of laboratory animals. Johns Hopkins Medical Institution is fully accredited by the American Association for Accreditation in Laboratory Animal Care (AAALAC). Data Availability Statement Experimental data used to generate the results reported in this manuscript are available upon request.
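The ΔΔCt normalization used for the cytokine qRT-PCR in the Methods above reduces to a few lines of arithmetic: each sample's target Ct is normalized to β-actin (ΔCt), referenced to the mean ΔCt of the control group (ΔΔCt), and expressed as a fold change 2^(−ΔΔCt). The sketch below is a minimal illustration with hypothetical triplicate Ct values.

```python
import numpy as np

def fold_change(target_ct, actin_ct, ctrl_target_ct, ctrl_actin_ct):
    """2^-ddCt fold change of a target gene vs. the control-group mean."""
    d_ct = np.asarray(target_ct) - np.asarray(actin_ct)                  # per-sample dCt
    d_ct_ctrl = np.mean(np.asarray(ctrl_target_ct) - np.asarray(ctrl_actin_ct))
    dd_ct = d_ct - d_ct_ctrl
    return 2.0 ** (-dd_ct)

# Hypothetical triplicate Ct values for one cytokine in treated vs. saline liver
treated = fold_change(target_ct=[22.1, 22.4, 21.9], actin_ct=[16.0, 16.2, 15.9],
                      ctrl_target_ct=[25.8, 26.1, 25.9], ctrl_actin_ct=[16.1, 16.0, 16.2])
print("fold change per treated sample:", np.round(treated, 2))
print("group mean fold change:", round(float(treated.mean()), 2))
```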
Impaired B Cell Function in Mice Lacking Perforin-2 Perforin-2 (P2) is a pore-forming protein with cytotoxic activity against intracellular bacterial pathogens. P2 knockout (P2KO) mice are unable to control infections and die from normally non-lethal bacterial infections. Here we show that P2KO mice as compared to WT mice show significantly higher levels of systemic inflammation, measured by inflammatory markers in serum, due to continuous microbial translocation from the gut which cannot be controlled as these mice lack P2. Systemic inflammation in young and old P2KO mice induces intrinsic B cell inflammation. Systemic and B cell intrinsic inflammation are negatively associated with in vivo and in vitro antibody responses. Chronic inflammation leads to class switch recombination defects, which are at least in part responsible for the reduced in vivo and in vitro antibody responses in young and old P2KO vs. WT mice. These defects include the reduced expression of activation-induced cytidine deaminase (AID), the enzyme for class switch recombination, somatic hypermutation and IgG production and of its transcriptional activators E47 and Pax5. Of note, the response of young P2KO mice is not different from the one observed in old WT mice, suggesting that the chronic inflammatory status of mice lacking P2 may accelerate, or be equivalent, to that seen in old mice. The inflammatory status of the splenic B cells is associated with increased frequencies and numbers of the pro-inflammatory B cell subset called Age-associated B Cells (ABCs) in the spleen and the visceral adipose tissue (VAT) of P2KO old mice. We show that B cells differentiate into ABCs in the VAT following interaction with the adipocytes and their products, and this occurs more in the VAT of P2KO mice as compared to WT controls. This is to our knowledge the first study on B cell function and antibody responses in mice lacking P2. INTRODUCTION Perforin-2 (MPEG1, P2) is a pore-forming protein with broad-spectrum activity against infectious bacteria in both mice and humans (1). P2 is constitutively expressed in phagocytes and other immune cells and can be induced in parenchymal, tissue-forming cells (2,3). In vitro, P2 prevents the intracellular replication of bacterial pathogens (3). P2 knockout (P2KO) mice are unable to control the systemic dissemination of bacterial pathogens and die from bacterial infections that are normally not lethal (3). Other bactericidal molecules have been found to be less effective in the absence of P2, suggesting that P2 is essential for the activity of mammalian immune defense mechanisms. It has recently been shown that P2 facilitates the delivery of proteases and other antimicrobial effectors to the sites of bacterial infection leading to effective killing of phagocytosed bacteria (4). Translocation of bacteria and their products from the gastrointestinal tract to extra-intestinal sites (lymph nodes, liver, spleen, kidney, blood) is a phenomenon that may occur spontaneously in healthy conditions in humans and mice without apparent deleterious consequences (5). Bacterial translocation is increased in different clinical pathological conditions and is certainly involved in the pathophysiological mechanisms of many diseases. Translocation of bacteria and/or their toxic products from the gastro-intestinal tract is strongly suspected to be responsible for the establishment of systemic chronic inflammation. This condition may be exacerbated in P2KO mice. 
We have previously shown in mice (6) and humans (7,8) that B cell function decreases with age and this decrease is associated with chronic low-grade systemic inflammation, called "inflammaging" (9). Higher levels of inflammaging, measured by serum TNF-α, induce higher TNF-α production by B cells from old mice and humans in vivo and in vitro, leading to significant decreases in their capacity to make protective antibodies in response to antigenic/mitogenic stimulation (6,7). Serum TNFα has been shown to up-regulate the expression of its receptors (TNFRI and TNFRII) on B cells, and interaction of TNF-α with its receptors induces NF-kB activation and secretion of TNF-α as well as of other pro-inflammatory cytokines and chemokines (10). Importantly, blocking TNF-α with specific antibodies has been shown to increase B cell function, at least in vitro, in both mice (6) and humans (7). The purpose of this study is to evaluate B cell function in P2KO mice. We hypothesized that P2KO mice are unable to control the translocation of bacteria and/or toxic bacterial products and this would generate a systemic low-grade chronic inflammation which negatively affects B cell function and antibody responses. Our results herein show that this is indeed the case. P2KO mice show significantly higher levels of systemic and intrinsic B cell inflammation which are negatively associated with protective antibody responses to a vaccine. This is to our knowledge the first study evaluating B cell function and antibody responses in mice lacking P2. Mice Male P2KO and wild type (WT) mice, both on a 129/SvJ background, were generated as previously described (3). Mice were young (3-4 months) and old (>18 months), bred at the University of Miami, Miller School of Medicine Transgenic Core Facility. Mice were allowed to freely access food and water and were housed at 23 • C on a 12 hr light/dark cycle under specific pathogen-free conditions. All studies adhered to the principles of laboratory animal care guidelines and were IACUC approved (protocols #16-252 and #16-006). Influenza Vaccine Response Mice were injected intramuscularly with 4 µg of the quadrivalent influenza vaccine (Fluzone Sanofi Pasteur 2017-2018) in alum (Aluminum Potassium Sulfate Dodecahydrate, SIGMA A-7210). Total volume of injection was 100 µl. Mice were sacrificed 28 days after the injection (peak of the response). B Cell Enrichment B cells were isolated from the spleens after 20 min incubation at 4 • C using CD19 MicroBeads (Miltenyi Biotec 130-121-301), according to the MiniMACS protocol (20 µl Microbeads + 80 µl PBS, for 10 7 cells). At the end of the purification procedure, cells were 90-95% CD19-positive by cytofluorimetric analysis. They were then maintained in PBS for 3 hrs at 4 • C to minimize potential effects of anti-CD19 antibodies on B cell activation. After positive selection, B cells were divided in two aliquots: one aliquot was used for culture stimulation, the other aliquot for RNA extraction after cells were resuspended in TRIzol (ThermoFisher Scientific). B Cell Culture Splenic B cells (10 6 /ml) were cultured in complete medium (RPMI 1640, supplemented with 10% FCS, 100 U/ml Penicillin-Streptomycin, 2 × 10 −5 M 2-ME, and 2 mM L-glutamine). FCS was certified to be endotoxin-free. B cells were stimulated in 24 well culture plates with 1 µg/ml of LPS (from E. coli, SIGMA L2880) for 1-7 days. 
At the end of the stimulation time, B cells were counted in a solution of trypan blue to evaluate viability which was found comparable in cultures of WT and P2KO mice. Isolation of Epididymal VAT Epididymal VAT was collected, weighed, washed with 1X Hanks' balanced salt solution (HBSS), resuspended in Dulbecco's modified Eagle Medium (DMEM), minced into small pieces, passed through a 70 µm filter and digested with collagenase type I (SIGMA C-9263) for 1 hr at 37 • C in a water bath. Digested cells were passed through a 300 µm filter, centrifuged at 300 g in order to separate the floating adipocytes from the stromal vascular fraction (SVF) containing the immune cells. The cells floating on the top were transferred to a new tube as adipocytes. The cell pellet (SVF) on the bottom was resuspended in a solution of Ammonium Chloride Potassium (ACK) for 3 min at RT (room temperature) to lyse the red blood cells. Both adipocytes and SVF were washed 3 times with DMEM. B cells were isolated from the SVF as indicated immediately below. Adipocytes were sonicated for cell disruption in the presence of TRIzol, and then centrifuged at 1,000 × g at 4 • C for 20 min to separate the soluble fraction from the lipids and cell debris. The soluble fraction was then used for RNA isolation. Cell Sorting FO B cells were sorted with the Sony SH800 cell sorter. FO B cells were incubated with adipocytes in transwell as detailed below. RNA Extraction and cDNA Preparation The mRNA was extracted from LPS-stimulated B cells at day 1 (to evaluate E47, Pax5) and at day 5 (to evaluate AID), using the µMACS mRNA isolation kit (Miltenyi), according to the manufacturer's protocol, eluted into 75 µl of preheated elution buffer, and stored at −80 • C until use. Total RNA was extracted from unstimulated VAT B cells, as well as from adipocytes, resuspended in TRIzol, according to the manufacturer's protocol, eluted into 10 µl of preheated H 2 O, and stored at −80 • C until use. Reverse Transcriptase (RT) reactions were performed in a Mastercycler Eppendorf Thermocycler to obtain cDNA. Briefly, 10 µl of mRNA or 2 µl of RNA at the concentration of 0.5 µg/µl were used as template for cDNA synthesis in the RT reaction. Conditions were: 40 min at 42 • C and 5 min at 65 • C. Quantitative PCR (qPCR) Reactions were conducted in MicroAmp 96-well plates, and run in the ABI 7300 machine. Calculations were made with ABI software. Briefly, we determined the cycle number at which transcripts reached a significant threshold (Ct). A value for the amount of the target gene, relative to GAPDH, was calculated and expressed as Ct. Results are expressed as 2 − Ct . Reagents and primers for qPCR amplification were from ThermoFisher. Primers were: Enzyme-Linked Immunosorbent Assay (ELISA) To measure microbial translocation in serum, Lonza QCL-1000 kit was used for the detection of Gram-negative bacterial endotoxin. To measure influenza vaccine serum IgG and IgA responses, the influenza vaccine was used for coating ELISA plates. The vaccine was used at the concentration of 10 µg/ml. Detection antibodies were HRP-conjugated affinity-purified F(ab') 2 of a goat anti-mouse IgG (Jackson IR Labs 115-036-062) and HRPconjugated goat anti-mouse IgA (ThermoFisher 62-6720). To measure stool-specific IgG antibodies in serum, we first obtained total protein lysates from stools of WT and P2KO mice that were used for coating ELISA plates. Stool sample collection and processing was performed as described (11). 
Total protein lysates were obtained using the M-PER mammalian protein extraction reagent (ThermoFiscer 78501), according to the manufacturer's protocol. Protein lysates were used at the concentration of 10 µg/ml. Detection antibody was an HRPconjugated affinity-purified F(ab') 2 of a goat anti-mouse IgG (Jackson IR Labs 115-036-062). To measure LPS-induced IgG3 in culture supernatants, purified IgG3 subclass-specific antibodies were used for coating (Southern Biotech 1101-01), at the concentration of 2 µg/ml. Detection antibody was the same as above. Co-culture of Adipocytes and Splenocytes The ratio between adipocytes and splenic lymphocytes in cocultures was equal to that which we measured in ex vivo isolated VAT (ratio adipocytes:lymphocytes). In the transwells, cells were co-cultured by using inserts with a 0.4 µm porous membrane (Corning) to separate adipocytes and splenic lymphocytes. Cells were left unstimulated. After 72 h, cells in the upper wells (splenic lymphocytes) were harvested, washed and stained to evaluate percentages and numbers of B cell subsets. Statistical Analyses To examine differences between 4 groups, two-way ANOVA was used. Group-wise differences were analyzed afterwards with Bonferroni's multiple comparisons test, with p < 0.05 set as criterion for significance. To examine differences between 2 groups, Student's t-tests (two-tailed) were used. To examine the relationships between variables, bivariate Pearson's correlation analyses were performed, using GraphPad Prism 5 software. Principal Component Analyses (PCA) were generated using RStudio Version 1.1.463. Increased Microbial Translocation in the Serum of P2KO vs. WT Mice We first measured microbial translocation by quantifying serum levels of LPS, the major component of Gram-negative bacterial cell walls. LPS in serum indicates microbial translocation (12). Results in Figure 1A show increased serum LPS in young and old P2KO mice as compared to WT controls, the highest levels being observed in old P2KO mice. Serum LPS levels in young P2KO mice are comparable to those observed in old WT mice. These results confirm our initial hypothesis that translocation of bacteria and their products from the gastro-intestinal tract occurs in P2KO mice and this may be responsible for the establishment of systemic chronic inflammation. We have also measured bacterial translocation by serum levels of IgG antibodies specific for stool-derived proteins. Results have indicated higher stoolspecific IgG in the serum of P2KO as compared to that of WT mice, confirming LPS results (data not shown). receptor for LPS, TLR4, is one of the several markers of IA so far identified. It is known that there is a negative association between the expression of IA markers in immune cells before stimulation and the response of the same immune cells after in vivo or in vitro stimulation. Therefore, IA is negatively associated with functional immune cells. This has been shown in chronic inflammatory conditions (aging and age-associated conditions) as well as in chronic infections (HIV, malaria) (7, 13-16). We measured in vivo antibody production in young and old WT and P2KO mice by measuring the serum response to the influenza vaccine by ELISA. Results in Figure 1B show that P2KO mice of both age groups have significantly decreased in vivo responses to the vaccine and make significantly less influenza vaccine-specific IgG antibodies as compared to WT controls. 
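The statistical workflow described in the Methods above (two-way ANOVA with Bonferroni-corrected pairwise comparisons, two-tailed t-tests, Pearson correlations, and PCA) was run in GraphPad Prism and RStudio. Purely as an illustration of the same analyses in code, the sketch below uses small hypothetical measurements with genotype (WT/P2KO) and age (young/old) as factors; none of the numbers are from the study.

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multitest import multipletests
from sklearn.decomposition import PCA

# Hypothetical serum LPS measurements (arbitrary units), n = 4 mice per group
df = pd.DataFrame({
    "genotype": ["WT"] * 8 + ["P2KO"] * 8,
    "age":      (["young"] * 4 + ["old"] * 4) * 2,
    "lps":      [1.0, 1.2, 0.9, 1.1, 1.8, 2.0, 1.7, 1.9,
                 1.9, 2.1, 1.8, 2.0, 2.9, 3.1, 2.8, 3.0],
})

# Two-way ANOVA with interaction term
model = ols("lps ~ C(genotype) * C(age)", data=df).fit()
print(anova_lm(model, typ=2))

# Bonferroni-corrected pairwise t-tests of each group against WT-young
groups = df.groupby(["genotype", "age"])["lps"]
ref = groups.get_group(("WT", "young"))
pvals = [stats.ttest_ind(ref, g).pvalue for key, g in groups if key != ("WT", "young")]
print("Bonferroni-adjusted p:", multipletests(pvals, method="bonferroni")[1])

# Pearson correlation, e.g. serum LPS vs. vaccine-specific IgG (hypothetical values)
igg = np.array([3.2, 3.0, 3.4, 3.1, 2.4, 2.2, 2.5, 2.3,
                2.2, 2.1, 2.3, 2.2, 1.4, 1.2, 1.5, 1.3])
r, p = stats.pearsonr(df["lps"], igg)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")

# PCA on a small (samples x variables) matrix to visualize group clustering
pcs = PCA(n_components=2).fit_transform(np.column_stack([df["lps"], igg]))
print("first two principal components (first 4 samples):\n", np.round(pcs[:4], 2))
```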
Noteworthily, the response of young P2KO mice is not different from the one observed in old WT mice. Influenza vaccine-specific IgA ( Figure 1C) and total IgG show a similar pattern (Figure 1D). Reduced in vivo The influenza vaccine response, as expected, was negatively correlated with microbial translocation (Figure 1E). Reduced in vitro Class Switch in B Cells From P2KO VS. WT Mice We then measured in vitro class switch, IgG secretion and plasma cell frequencies in LPS-stimulated splenic B cells from young and old WT and P2KO mice. We evaluated E47, Pax5, Prdm1 (Blimp-1), and activation-induced cytidine deaminase (AID) mRNA expression by qPCR. This was done at time points that we found optimal in our previously published work measuring in vitro class switch in splenic B cells from young and old C57BL/6 mice. Briefly, we found that E47 mRNA is higher at day 1 and then decreases at days 2-3 after stimulation (17,18). Pax5 mRNA expression has a kinetic similar to E47 (unpublished). AID mRNA is already detectable at day 3 but peaks at day 5, to decrease later on (17). Prdm1(Blimp-1) is detectable at day 2 and increases at later days, peaking at day 4, and it stays up until day 7 (18). E47 (19,20), and Pax5 (21,22) are transcriptional regulators of AID, the enzyme necessary for class switch recombination, the process leading to the production of secondary, classswitched antibodies, and somatic hypermutation (23)(24)(25). AID is a measure of optimal B cell function. Prdm1 (Blimp-1) is the transcription factor for plasma cell differentiation (26). In addition to transcription factors for class switch recombination and plasma cell differentiation, and AID, we also measured IgG3 secretion by ELISA. IgG3 is the Ig subclass secreted in larger amounts in response to LPS alone. In response to LPS and class switch cytokines or B lymphocyte stimulator (BlyS), a key survival factor for B cells also known to induce class switch (27) splenic B cells from 129/SvJ mice make predominantly IgG1 followed by IgG2b (28). Frequencies of plasma cells by flow cytometry were also evaluated in LPS-stimulated splenic B cells from young and old WT and P2KO mice. Results in Figure 2 show that B cells from P2KO mice, both young and old, express significantly less mRNA for E47 (A), Pax5 (B), AID (C), and Prdm1 (Blimp-1) (D) and secrete significantly less IgG3 antibodies (E), as compared to WT controls. Also the frequencies of plasma cells are less in cultured B cells from P2KO as compared to those from WT mice (F). Again, the response of young P2KO mice is not different from the one observed in old WT mice. Increased Intrinsic Inflammation in Splenic B Cells From P2KO vs. WT Mice We have previously shown in both mice (6) and humans (7) that high TNF-α mRNA levels in resting B cells negatively correlate with the response of the same B cells when stimulated in vivo or in vitro with mitogens and/or vaccines, clearly demonstrating that the inflammatory status of the B cells impacts their own function. We therefore measured mRNA expression of the proinflammatory cytokines TNF-α and IL-6 in unstimulated splenic B cells from young and old WT and P2KO mice. Results in Figure 3 show that TNF-α (top) and IL-6 (bottom) mRNA expression in unstimulated B cells from from P2KO mice are significantly higher as compared to those in B cells from WT mice (A). 
Moreover, TNF-α (top) and IL-6 (bottom) mRNA expression in unstimulated B cells are negatively associated with the in vivo influenza vaccine response (B) and with the in vitro AID mRNA expression (C). These results altogether confirm and extend our previous findings that higher mRNA expression of the inflammatory cytokines TNF-α and IL-6 in B cells, prior to any stimulation, renders the same B cells incapable of being optimally stimulated by vaccines or mitogens. Increased Frequencies and Numbers of Pro-inflammatory B Cells in the Spleen of P2KO vs. WT Mice The above results, showing higher inflammation (TNF-α and IL-6 mRNA expression) in unstimulated B cells from P2KO mice, as compared to WT controls, are supported by the findings of higher frequencies of pro-inflammatory B cell subsets in the spleens of P2KO vs. WT mice, as shown in Figure 4. We previously showed in mice (29) and humans (7) conditions and pro-inflammatory B cell subsets contributing to reduced function in the aged. We measured by flow cytometry the percentages of FO, ABC and MZ B cell subsets in the spleens of WT and P2KO old mice (the ones with the highest levels of inflammation). Results show significantly reduced frequencies (A) and numbers (B) of the anti-inflammatory FO subset, and significantly increased frequencies and numbers of the pro-inflammatory ABC subset, in the spleens of old P2KO vs. WT mice. No differences in frequencies and numbers of MZ B cells were observed between WT and P2KO mice. No Difference Between WT and P2KO Mice in Fat Measures but Increased Frequencies of ABCs in the VAT of P2KO vs. WT Mice Fat mass increases with age in mice (30) and humans (30,31). The increase in fat mass with age is responsible for increased local and systemic levels of pro-inflammatory mediators that are markers of inflammaging (9). Higher fat mass also induces proinflammatory B cells and impairs B cell function in old mice (29) and humans (32,33). Therefore, obesity may be considered a mechanism of aging. We analyzed the VAT to identify contributors to the phenotypic and functional changes observed in splenic B cells from old P2KO mice as compared to WT controls. Results in Figure 5 show that both mouse weight (A) and epididymal VAT weight (B) are comparable in WT and P2KO mice. The 2 measures are positively correlated (C). Additionally, mouse weight is negatively associated with in vitro class switch, measured by AID mRNA expression in stimulated splenic B cells (D). To explain the results in D and identify mechanisms responsible for the VAT-driven inflammation leading to the down-regulation of AID, we compared frequencies of ABCs in the VAT of P2KO vs. WT old mice. Results in Figure 6 show that FO B cells significantly decrease in frequencies and numbers, while ABCs significantly increase, in the VAT of old P2KO mice as compared to age-matched WT controls. These results demonstrate that although no significant differences were observed in mouse weight and epididymal VAT weight between P2KO and WT mice, frequencies and numbers of ABCs, the most pro-inflammatory B cell subset, are increased in the VAT of P2KO mice and they contribute to local and systemic inflammation which negatively impacts B cell function. Increased Differentiation of ABCs in the VAT of P2KO vs. WT Mice To understand if ABC frequencies in the VAT of P2KO mice increase as a consequence of increased differentiation of ABCs, we performed the following experiment in which we evaluated the ability of adipocytes to induce ABCs. 
We co-cultured in transwells adipocytes from the VAT of WT or P2KO old mice with splenic B cells from WT mice. These experiments were Figure 4. Mean comparisons between groups were performed by two-way ANOVA followed by Bonferroni's multiple comparisons test. *p < 0.05, **p < 0.01. performed in the absence of any exogenous stimulation. Results in Figure 7 show that co-culture of 72 hrs significantly changed the relative percentages of the B cell subsets, leading to a significant increase in ABC percentages, similar to what we have observed in the VAT (Figure 6). The reason why the co-culture of WT adipocytes and splenic B cells also changes the relative proportions of FO and ABC (reducing FO and increasing ABC percentages) is because WT adipocytes are also inflammatory, although not as much as P2KO adipocytes. To further confirm that FO do not decrease because they die but because they differentiate into ABCs, as we have previously shown in C57BL/6 mice (29), we co-cultured adipocytes from P2KO old mice with sorted splenic FO B cells from the same mice and we compared gene expression profiles of these FO B cells before and after 72 h in transwell. We measured Prdm1 (Blimp-1), a marker up-regulated in ABCs vs. FO, as we (29) and others (34) have previously shown. Results in Figure 8 show that the co-culture with adipocytes induced differentiation of FO B cells into ABCs, as splenic FO B cells acquired markers typical of ABCs. It is relevant to note that adipocyte-driven ABC differentiation occurred in the absence of any exogenous (antigen/mitogen) stimulation. This culture condition is different from that in which FO are stimulated with antigens/mitogens in vitro to generate Prdm1 (Blimp-1) expressing plasma cells. Adipocytes From P2KO Mice Are More Inflammatory Than Those From WT Mice We then compared the inflammatory profile of adipocytes from the VAT of WT and P2KO mice, which is responsible for the recruitment of inflammatory B cell subsets to the VAT and for their differentiation. We measured in particular RNA expression of pro-inflammatory cytokines (TNF-α, IL-6) and chemokines (CXCL10, CCL2, CCL5). Results in Figure 9 show significantly higher expression levels of the RNA for pro-inflammatory cytokines (A) and chemokines (B) in adipocytes from P2KO mice as compared to WT controls. In the PCA analysis (C) we show distinct clustering of the 2 groups of adipocytes. DISCUSSION The mouse and human gastrointestinal tracts are colonized by a huge number of microorganisms. Although the gut provides a functional barrier between these microorganisms FIGURE 8 | Splenic FO B cells co-cultured with P2KO adipocytes show markers of ABCs. FO B cells sorted from the spleen of P2KO old mice were co-cultured with adipocytes from the same old mice. RNA was extracted before and after 72 h in transwell and expression of Prdm1 was evaluated by qPCR. Results show qPCR values (2 − Ct ). Mean comparisons between groups were performed by paired Student's t-test (two-tailed). **p < 0.01. and the host, translocation of bacteria and/or their products is still occurring even in normal, healthy conditions. Our study is based on the hypothesis that these events of microbial translocation are strongly suspected to lead to the establishment of systemic chronic inflammation, intrinsic B cell inflammation and dysfunctional antibody responses. 
P2KO mice, lacking the mechanisms to control the proliferation and dissemination of the different microbes, are characterized by higher intrinsic B cell inflammation and more dysfunctional antibody responses as compared to WT controls. This is clearly shown by increased microbial translocation in the serum of P2KO mice as compared to WT controls, which is negatively associated with a protective response against the influenza vaccine. This is to our knowledge the first study evaluating B cell function and antibody responses in mice lacking P2. Studies in mice have clearly demonstrated that intestinal components also regulate the VAT [reviewed in Tilg and Kaser (35)] and results have shown that gut permeability is increased in obesity (36,37) leading to the release of LPS in the circulation. LPS, as well as other intestinal antigens, has been shown to be absorbed in the VAT through lipid-driven mechanisms (38,39). Based on our previous data in aged mice and humans, we know that the inflammatory status of the individual and of B cells themselves impacts B cell function. Here we show that the ability to generate an in vivo specific antibody response to the influenza vaccine is reduced in P2KO mice as compared to WT controls. The class switch recombination defects at least in part responsible for the reduced in vivo and in vitro antibody responses include the reduced expression of AID and of its transcriptional activators E47 and Pax5. Moreover, splenic unstimulated B cells from P2KO mice make higher levels of TNFα and IL-6 mRNA than those from WT mice and these negatively correlate with B cell function, measured in vivo by the response to the influenza vaccine and in vitro by AID mRNA expression in stimulated B cell cultures. These results confirm and extend our previously published results showing a negative impact of systemic chronic inflammation on B cell function and antibody production in vivo and in vitro. This inflammatory status of the splenic B cells is associated with increased frequencies and numbers of the pro-inflammatory B cell subset called ABCs. These cells have been reported to increase in aging and in age-associated inflammatory conditions in both mice (29,34,40,41) and humans (42)(43)(44)(45)(46). These cells have a unique transcriptomic phenotype (34) and are characterized by a senescence-associated secretory phenotype responsible for the secretion of several pro-inflammatory markers, including chemokines, cytokines, growth factors and matrix metalloproteinases (47). ABCs not only increase in the spleens but also in the SVF of the VAT of P2KO mice, despite a lack of increase in mouse weight and fat mass. The reason for us to evaluate the VAT is because with aging the VAT undergoes significant changes in abundance, distribution, cellular composition, endocrine signaling and it has been shown to affect the function of other systems including the immune system. ABCs differentiate in the VAT following interaction with the adipocytes and this occurs more in the VAT of P2KO mice as compared to WT controls. Differentiation of ABCs in the VAT is accompanied by the acquisition of markers typical of this B cell subset and expressed at almost indiscernible levels in FO B cells. We measured Prdm1, the gene coding for Blimp-1, the transcription factor for plasma cells, among others, as the RNA expression of this marker was found 10-fold higher in unstimulated splenic ABCs vs. FO in our previously published study (29). 
Although the major function of adipocytes is to store excess energy, several recent findings have indicated that the adipocytes are also endocrine cells able to secrete adipokines and several pro-inflammatory molecules that modulate immune cell infiltration, immune cell activation and differentiation. We have preliminary evidence that leptin, the major adipokine secreted by the adipocytes, induces in vitro differentiation of splenic naïve B cells into ABCs secreting IgG2c autoantibodies (data not shown). Experiments currently under way in our laboratory are evaluating other adipocyte-derived molecules that may be involved in B cell differentiation in the VAT. In conclusion, our results show for the first time that P2KO mice have decreased antibody responses, likely consequent to changes in B cell characteristics/function, chronic systemic inflammation supported by a continuous microbial translocation from the gut which cannot be controlled as these mice lack P2. These results are physiologically relevant for patients, although not frequent, with P2 deficiency who contract infections with intracellular bacteria (48) and therefore may need to be treated to improve their humoral immunity. DATA AVAILABILITY STATEMENT The data generated in this study are available upon request to the corresponding author. ETHICS STATEMENT The animal study was reviewed and approved by University of Miami IACUC approved protocols #16-252 and #16-006.
A Simple and Efficient RNA Extraction Method from Deep-Sea Hydrothermal Vent Chimney Structures RNA-based microbiological analyses, e.g., transcriptome and reverse transcription-quantitative PCR, require a relatively large amount of high quality RNA. RNA-based analyses on microbial communities in deep-sea hydrothermal environments often encounter methodological difficulties with RNA extraction due to the presence of unique minerals in and the low biomass of samples. In the present study, we assessed RNA extraction methods for deep-sea vent chimneys that had complex mineral compositions. Mineral-RNA adsorption experiments were conducted using mock chimney minerals and Escherichia coli total RNA solution, and showed that detectable RNA significantly decreased possibly due to adsorption onto minerals. This decrease in RNA was prevented by the addition of sodium tripolyphosphate (STPP), deoxynucleotide triphosphates (dNTPs), salmon sperm DNA, and NaOH. The addition of STPP was also effective for RNA extraction from the mixture of E. coli cells and mock chimney minerals when TRIzol reagent and the RNeasy column were used, but not when the RNeasy PowerSoil total RNA kit was used. A combination of STPP, TRIzol reagent, the RNeasy column, and sonication resulted in the highest RNA yield from a natural chimney. This indirect extraction procedure is simple, rapid, inexpensive, and may be used for large-scale RNA extraction. Advances in sequencing technologies have increased the significance of culture-independent analyses in microbial ecology, in which high-throughput sequences are obtained directly from nucleic acids extracted from various microbial habitats. Many methods for nucleic acid extraction have been developed and some of them have been optimized for specific sample types and research goals (1,16,20). A number of DNA extraction methods for marine sediments have been evaluated in detail (1,26). These studies provided solutions for crucial steps including the removal of PCR inhibitors, cell disruption, prevention of DNA adsorption, and release of DNA (1,26). DNA-based analyses provide information on microbial communities including dead, inactive, or dormant populations (6,14,51), while RNA-based methods allow for more precise assessments of the composition and function of active microbial populations (3,38). Therefore, RNA-based methods (e.g. a transcriptome analysis) have been applied to many microbial habitats, including soils (52), seawater, and marine sediments (9,55). However, RNA-based methods have only been successfully applied to free-living microbial communities in deep-sea hydrothermal environments in a few studies, and this may be due to the difficulties associated with the extraction of high-quantity and -quality RNA (13,25,27,44). Furthermore, RNA extraction methods for chimney habitats have not yet been evaluated. Therefore, we herein tested two RNA extraction methods and optimization protocols, and reported a simple, rapid, and cost-effective protocol for deep-sea vent chimney habitats. Preparation of a mock chimney and RNA A mock chimney was prepared by pulverizing pyrite (FeS 2 ) and barite (BaSO 4 ) (3:2 [w/w]) with a mortar and pestle, followed by sterilization at 230°C for 30 min. Sulfide and sulfate minerals are major constituents of deep-sea vent chimney structures (15,24,42). Total RNA was prepared from Escherichia coli cells with TRIzol reagent (Thermo Fisher Scientific, Waltham, MA) and the RNeasy mini kit (Qiagen, Hilden, Germany). 
When TRIzol reagent and the RNeasy column were used together, the aqueous phase after the TRIzol treatment was loaded into the RNeasy column after mixing with an equal volume of ethanol. RNA was eluted after washing processes according to the manufacturer's instructions. Adsorption experiments on the mock chimney and RNA The RNA adsorption experiment was conducted by mixing 100 μL of the RNA solution (28 ng μL -1 ), 100 μL of a potential adsorption inhibitor, and 10 mg of mock chimney powder on ice. We assessed each of the following potential inhibitors: sodium tripolyphosphate (STPP; 100 μL, 0.6 M), deoxynucleotide triphosphates (dNTPs; 100 μL, 2.5 mM each), salmon sperm DNA (100 μL, 0.8 μg μL -1 ), and NaOH (100 μL, pH 10). dNTPs, salmon sperm DNA, and NaOH were previously assessed for nucleic acid extraction from various environmental samples (26). Although STPP was not assessed by Lever et al. (26), we used it as a PO 4 source because STPP is cheap and safe. One hundred microliters of diethyl pyrocarbonate (DEPC)treated water was used as a control. After being incubated on ice for 0 h, 4 h, and 14 h, the mixture was vortexed for 2 s and centrifuged at 2,000×g for 20 s. The supernatant was recovered into a new tube, purified by the RNeasy column (Qiagen), and the RNA recovered was reverse transcribed into cDNA using random hexamers with the PrimeScript RT reagent kit with gDNA Eraser (TaKaRa Bio, Otsu, Japan). E. coli 16S rRNA was quantified by qPCR (Thermal Cycler Dice Real Time System II; TaKaRa Bio) with the primer EUB338F-U533R (2, 54) and SYBR Premix Ex Taq II (TaKaRa Bio), following the manufacturer's instructions. PCR without reverse transcription was used as a control. A standard curve was obtained for each run using the PCR-amplified 16S rRNA gene of E. coli. 16S rRNA in some samples was also quantified with Bioanalyzer 2100 (Agilent Technology, Santa Clara, USA) with the RNA 6000 pico chip kit (Agilent Technology). These experiments were run in triplicate. RNA extraction from a mixture of the mock chimney and E. coli cells E. coli cells were grown in LB medium and harvested by centrifugation in the late exponential growth phase. Cells (10 8 cells) and 0.25 g of the mock chimney were mixed and then frozen at -80°C for 48 h. RNA was extracted using the RNeasy PowerSoil total RNA kit (Qiagen; formerly the RNA PowerSoil total RNA isolation kit [MO BIO Laboratories, Carlsbad, CA]) or TRIzol reagent (Thermo Fisher Scientific) and the RNeasy column (Qiagen), in the presence or absence of 100 μL of 0.6 M STPP. RNA quality (RNA integrity number, RIN) was assessed using Bioanalyzer 2100 (Agilent Technology). 16S rRNA was quantified by qPCR as described above. All experiments were run in triplicate. RNA extraction from a natural chimney structure A chimney structure was obtained from the Noho site of the Sakai field (27°31.386'N, 126°59.209'E) (34), Mid-Okinawa Trough, Japan, at a depth of 1,550 m by means of the ROV Hyper-Dolphin (Dive#1860) during R/V Natsushima cruise NT15-13 (JAMSTEC) in August 2015. Immediately after its recovery onboard, the chimney structure was stored at -80°C until used. The chimney sample was pulverized with a mortar and pestle to a fine powder of micron-size particles in liquid nitrogen. Total RNA was directly extracted from 1.79-1.92 g of the sample using TRIzol reagent (Thermo Fisher Scientific) and the RNeasy column (Qiagen), in the presence or absence of 100 μL of 0.6 M STPP. 
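Absolute quantification against a standard curve, as used above for E. coli 16S rRNA, can be expressed compactly: Ct is regressed on log10 copy number for the standards, unknowns are interpolated from the fitted line, and the slope gives the amplification efficiency. The sketch below is illustrative only; all Ct values and copy numbers are hypothetical.

```python
import numpy as np

# Hypothetical standard curve: 10-fold dilutions of the PCR-amplified E. coli 16S rRNA gene
std_copies = np.array([1e7, 1e6, 1e5, 1e4, 1e3])
std_ct     = np.array([12.1, 15.5, 18.9, 22.4, 25.8])

slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0          # ~1.0 corresponds to 100% efficiency

def copies_from_ct(ct):
    """Interpolate the copy number of an unknown sample from its Ct value."""
    return 10 ** ((ct - intercept) / slope)

# Unknown supernatants after 0, 4 and 14 h of mixing with the mock chimney (hypothetical Ct)
for label, ct in [("0 h", 16.2), ("4 h", 17.6), ("14 h", 17.8)]:
    print(f"{label}: Ct = {ct}  ->  {copies_from_ct(ct):.2e} copies per reaction")
print(f"slope = {slope:.2f}, amplification efficiency = {efficiency:.0%}")
```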
In addition, the mixture of the pulverized chimney structure and STPP was sonicated using TAITEC VP-050 (TAITEC, Koshigaya, Japan) at 10W for 20 s. After 30 min on ice, the supernatant was recovered, and RNA was extracted using TRIzol reagent (Thermo Fisher Scientific) and the RNeasy column (Qiagen) (indirect extraction). STPP was used in the indirect RNA extraction procedure because the supernatant potentially includes tiny particles of the chimney structure and/or RNA accidentally released from cells by sonication. RNA quality and the 16S rRNA copy number were evaluated as described above. 16S rRNA sequence analysis of extracted RNA Total RNA samples from the chimney structure were reverse transcribed to cDNA as described above. The V4-V5 regions of 16S rRNA cDNA were amplified and analyzed using Illumina sequencing (MiSeq) as previously described (36). Sequences were processed using the QIIME software package (7). OTUs were selected at the 97% similarity level using UCLUST (10) and subsequently assigned to a taxon by comparisons with SILVA 119 (39) RNA adsorption to the mock chimney Mock chimney minerals were expected to trap RNA molecules to a certain extent, while some chemical treatments may prevent RNA adsorption to these minerals. We incubated mixtures of mock chimney minerals and E. coli total RNA solution, and then quantified the amounts of dissolved 16S rRNA in supernatants. After 4 h of mixing, dissolved RNA decreased to approximately 40% of the initial amount (Fig. 1). The RNA fragmentation pattern was examined by a Bioanalyzer electrogram and even after 14 h of mixing, no apparent fragmentation of RNA was observed (Fig. S1), suggesting that the decrease in RNA resulted from adsorption to minerals, and not from degradation. RNA adsorption to minerals was prevented by the addition of STPP, dNTPs, salmon sperm DNA, and NaOH; however, detectable RNA was significantly decreased after 14 h in the presence of salmon sperm DNA (Fig. 1). These results are consistent with previous findings showing that nucleic acid adsorption onto positively charged mineral surfaces may be prevented by phosphates, nucleic acids (4,18,37,40), and alkaline pH (4,49). Among the treatments tested in this study, we focused on STPP in subsequent experiments because it exerted some of the strongest effects, is cost-effective, and may be removed with a silica column if necessary (41). RNA extraction from the mock chimney The effects of STPP on RNA extraction were evaluated using a mixture of mock chimney minerals and E. coli cells. In the presence or absence of STPP, RNA was directly extracted by two different methods. The amount of RNA extracted significantly decreased when cells were mixed with mock chimney minerals in the absence of STPP ( Fig. 2A). When TRIzol reagent and the RNeasy column were used, the amount of RNA extracted markedly decreased. In contrast, the amount of RNA extracted only slightly decreased when the RNeasy PowerSoil total RNA kit was used, even in the absence of STPP (i.e. the mean RNA yield [±standard error] was 4.7×10 5 [±4.1×10 3 ] copies μL -1 of the culture in the absence of the mock chimney and 2.8×10 5 [±1.1×10 3 ] copies μL -1 of the culture in the presence of the mock chimney; n=3) ( Fig. 2A). This is potentially because RNeasy PowerSoil total RNA kit constituents may release RNA from the mineral surface; however, the chemical ingredients of the kit are not disclosed. 
In the presence of STPP, the amount of 16S rRNA recovered was significantly improved when TRIzol reagent and the RNeasy column were used ( Fig. 2A). In contrast, when the RNeasy PowerSoil total RNA kit was used with STPP, a significantly lower amount of RNA was extracted. This is potentially because phosphate carryover interfered with the RNeasy PowerSoil total RNA kit and/or subsequent qPCR. When combined with STPP, TRIzol reagent and the RNeasy column provided efficient RNA extraction from the mixture of mock chimney minerals and E. coli cells that was similar to that with the RNeasy PowerSoil total RNA kit without STPP. In addition, the quality of extracted RNA with STPP using TRIzol reagent and the RNeasy column was superior to that extracted using the RNeasy PowerSoil total RNA kit in the presence of STPP. The RNeasy PowerSoil total RNA kit resulted in RNA fragmentation and a lower RIN (43) (Fig. 2B), and this was potentially due to the bead beading step. A previous study indicated that RIN values greater than 7.0 are ideal for reproducible RT-qPCR (19). In addition, TRIzol reagent and the RNeasy column allowed RNA extraction in a shorter time (approximately 30 min) than the RNeasy PowerSoil total RNA kit (approximately 1.5 h). Although not assessed in the present study, other commercial kits, e.g. the NucleoSpin Soil kit (Macherey-Nagel, Düren, Germany) and Fast RNA Pro Soil-Direct kit (MP Biomedicals, Santa Ana, USA), generally take 1.5-2 h for RNA extraction. Furthermore, TRIzol reagent and the RNeasy column may reduce the cost of RNA extraction by approximately 40-80% from the commercial kits described above. TRIzol reagent and the RNeasy column may be easily used in larger scale experiments and the simultaneous extraction of RNA and DNA. Direct and indirect RNA extraction methods for a natural chimney sample A combination of TRIzol reagent and the RNeasy column was found to be a potentially effective method for RNA extraction from a natural chimney sample (Fig. 3). Although not significant, the amount of RNA extracted from the same sample was increased by the addition of STPP (Fig. 3). Since previous studies indicated the biofilm formation of microbial communities within natural chimney structures (8,44,48), the effects of sonication (22) during RNA extraction were also evaluated. This indirect extraction method significantly increased the amount of RNA extracted by more than 20-fold (Fig. 3), suggesting that sonication releases cells from chimney minerals and improves the efficiency of cell lysis, as previously indicated for microbial communities in sediments (29). Since the physical properties and mineral compositions of chimney structures vary in different deep-sea vents and even in different parts of the same chimney structure (50), further adjustments of the STPP amount and sonication intensity are necessary for optimization. Microbial 16S rRNA analysis Microbial 16S rRNA compositions were assessed using RNA extracted from the natural chimney structure with or without STPP and the sonication step. The same OTUs were dominantly detected in all 16S rRNA libraries, and neither sonication nor STPP markedly affected subsequent RNA analyses. At the class level, the microbial phylotype composition was significantly dominated by the phylotypes of Epsilonproteobacteria (80-98%), followed by the phylotypes of Methanococci (0.4-10.1%) and Deltaproteobacteria (0.8-6.1%) (Fig. 4). 
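The class-level percentages quoted above come from collapsing the OTU table over taxonomic assignments and normalizing to total reads per library. A minimal sketch of that aggregation is shown below using a hypothetical miniature OTU table (not the study's data); the three columns stand for the direct, direct + STPP, and indirect libraries.

```python
import pandas as pd

# Hypothetical OTU counts for the three 16S rRNA libraries
otu_counts = pd.DataFrame(
    {"direct":      [4100, 2100, 900, 450, 300],
     "direct_STPP": [4500, 2300, 800, 400, 250],
     "indirect":    [5200, 2900, 1200, 150, 310]},
    index=["OTU_1", "OTU_2", "OTU_3", "OTU_4", "OTU_5"])

# Class-level assignment of each OTU (e.g., taken from the SILVA-based taxonomy)
taxonomy = pd.Series(
    {"OTU_1": "Epsilonproteobacteria", "OTU_2": "Epsilonproteobacteria",
     "OTU_3": "Epsilonproteobacteria", "OTU_4": "Methanococci",
     "OTU_5": "Deltaproteobacteria"})

# Collapse OTUs by class and convert counts to relative abundance (% of reads per library)
class_counts = otu_counts.groupby(taxonomy).sum()
relative_abundance = 100 * class_counts / class_counts.sum()
print(relative_abundance.round(1))
```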
Members of the class Epsilonproteobacteria have been dominantly detected in the chimney habitats of most deep-sea hydrothermal fields (12,31,32). At the genus level, the phylotypes of Thioreductor, Sulfurimonas, Sulfurovum, and Lebetimonas were commonly found as the predominant populations in all three libraries. The most abundantly detected OTU was closely related to the genus Thioreductor (30) in all three libraries (Fig. 4). Besides the 16S rRNA sequences of bacteria, sequences closely related to the methanogenic archaea, members of the genus Methanocaldococcus (23,35), were dominantly detected. The relative abundance of Methanocaldococcus sequences was significantly decreased by the sonication step (indirect extraction) (Fig. 4), suggesting that the cells of bacteria, particularly Epsilonproteobacteria, with a potentially high RNA content were in biofilms and became accessible to TRIzol after sonication. Conclusion There has been a growing interest worldwide in seafloor and subseafloor energy and mineral resources. Deep-sea vent ecosystems including microbial communities are faced with anthropogenic environmental disturbances, and temporal and spatial variabilities in in situ microbial diversity and function need to be monitored using a polyphasic approach. Although the STPP amount and sonication step need to be adjusted to each chimney sample, the indirect RNA extraction method developed in this study is simple, rapid, and cost-effective, and may be used for large-scale RNA extraction. This new method may extend analytical methods for microbial communities within deep-sea hydrothermal vent chimneys, and thus may further our understanding of microbial activities in deep-sea hydrothermal fields.
Construction of Chiral Cyclic Compounds Enabled by Enantioselective Photocatalysis Chiral cyclic molecules are some of the most important compounds in nature, and are widely used in the fields of drugs, materials, synthesis, etc. Enantioselective photocatalysis has become a powerful tool for organic synthesis of chiral cyclic molecules. Herein, this review summarized the research progress in the synthesis of chiral cyclic compounds by photocatalytic cycloaddition reaction in the past 5 years, and expounded the reaction conditions, characters, and corresponding proposed mechanism, hoping to guide and promote the development of this field. Introduction In recent years, enantioselective photocatalysis has been successfully applied to extensive practical work of organic synthesis [1][2][3], providing an alternative method for the production of valuable chiral molecules. In this regard, many chemists such as Yoon, MacMillan, Bach, etc., made great contributions to the development of milder and more efficient enantioselective cycloaddition reaction through the organic photocatalysis. List and MacMillan were awarded the 2021 Nobel prize in chemistry for their "development in asymmetric organocatalysis". This review mainly discusses the research progress of enantioselective photocatalysis for constructing chiral cyclic compounds by photo-induced asymmetric cycloaddition reaction in the past 5 years, though some comprehensive contents regarding this topic were reported by the pioneers [4][5][6][7][8]. This paper is divided into seven parts according to the structural types of rings: the construction of 3-membered rings, 4-membered rings, 5-membered rings, 6-membered rings, 7-membered rings, macroring, and multi-rings. In contrast, there are significantly more reports about the construction of 4-membered rings via enantioselective photocatalysis. In these transformations, the use of chiral catalyst could furnish an appropriate chiral environment and improve the photocycloaddition enantioselectivity; some of them have become the representative chiral photocatalyst for enantioselective photocatalysis, such as chiral oxazaborolidine Lewis acid, chiral thioxones, ruthenium catalysts, iridium catalysts, chiral amine catalysts, and chiral phosphoric acids. Especially in recent years, chiral organocatalysts are more and more widely used in enantioselective photocatalysis [9]. In addition, the catalytic system had also been developed from the original single catalysis system to the current double catalysis, or even triple catalysis, system. struction of 3-membered ring by enantioselective photocatalysis in the past five years. In 2019, Bach et al. reported a simple, efficient, and enantioselective route to obtain the cyclopropyl substituted quinolone compounds [10]. Under the irradiation of visible light (λ = 420 nm), the 3-allyl-substituted quinolones (1) underwent a triplet sensitized di-πmethane rearrangement reaction to form 3-cyclopropylquinolones (3) in the presence of a chiral hydrogen bonding sensitizer thioxanthone (2) (Scheme 1). The reaction showed excellent yields in most cases and moderate enantioselectivity (88-96% yield, 32-55% ee). The author proposed a mechanism as follows: (1) was associated with a chiral hydrogen bonding sensitizer (2) to form the 1,3-diradical intermediate (1a), which further closed the ring to form the complex (ent-4) or complex (4). Owing to the geometric constraints in complex (1a), generating (ent-4) is expected to decay preferentially. 
In other words, this process favours formation of (4) while the latter process shows a preference for ent-3a (higher association constant Ka than [3a]), thus reducing the enantioselectivity of the deracemization process. Finally, the major enantiomer 3-cyclopropyl-quinolones (3) were obtained. Scheme 1. Enantioselective formation of 3-cyclopropyl-quinolones. Enantioselective Formation of 4-Membered Ring by Visible Light Catalysis The technology for the construction of chiral 4-membered ring compounds by enantioselective photocatalysis has become more mature, and [2+2] photocycloaddition is the most common synthesis method. In 2017, an effective and enantioselectivie chiral iridium catalyzed [2+2] photocycloaddition was reported by Yoon et al. [11], who used structurally related 3-alkoxyquinolones (5) irradiated by blue LED light with Ir(III) photosensitizer (6) to synthesize products (7) in good yields and enantioselectivitiy (up to 98% yield, up to 91% ee) (Scheme 2). Chloro-and bromo-substituted quinolones performed well but iodinated substrate displayed lower enantioselectivity. The excellent performance is still capable of modified alkene moiety with small enantioselective decline. Scheme 2. Enantioselective photocycloaddition of 3-alkoxyquinolones. Earlier, Yoon et al. developed a new strategy to achieve enantioselective [2+2] photocycloaddition of 2 -hydroxychalcones via Lewis acid-catalyzed triplet energy transfer [12]. Subsequently, they reported a chiral Lewis acid catalyzed triplet sensitization for enantioselective crossed photocycloaddition to synthesize highly enantioenriched cyclobutanes in 2017 [13]. In this work, 2 -hydroxychalcones (8) could couple with styrenes (9) to construct diarylcyclobutanes (10) in the presence of Sc(OTf) 3 , t-BuPybox, and Ru(bpy) 3 2+ upon the irradiation of 23 W CFL (Scheme 3). The transformations showed excellent yields and high enantioselectivity (up to 97% yield, up to 99% ee). The styrene ring could be substituted by a variety of electron-donating groups or electron-withdrawing groups, and the styryl double bond was also modified by some substituents with high ee. This method also provided a direct approach to the synthesis of diarylcyclobutane natural products, such as norlignan 3. The proposed mechanism was conducted as follows. 2 -hydroxychalcones (8) initially cooperated with Lewis acid to form the Lewis-acid-bound substrates (11), which could be transform into (11*) via triplet energy transfer by Ru(bpy) 3 2+ under the irradiation of 23 W CFL, then styrenes (9) captured with 1,4-diradical intermediates (12) to produce diarylcyclobutanes (10). Yoon et al. reported a highly enantioselective intermolecular [2+2] photocycloaddition reaction catalyzed by chiral hydrogen bond ion iridium photosensitizer (49). 3-Hydroxyquinolones (50) reacted with maleimide (51) to generate cycloaddition products (52) under the irradiation of blue LED light in excellent yields and enantioselectivity (up to 99% yield, up to 99% ee) (Scheme 12) [22]. The reaction has high enantioselectivity when the substitutions at the 6-position of 3-hydroxyquinolones are alkyl, halogen, and alkoxy groups, and the substituted 3-hydroxyquinolones at 5-and 7-positions also have good tolerance. However, the substitution at the 8-position has a great influence in enantioselectivity. Furthermore, the reaction is also applicable to alkyl, propyl, allyl, and carbamoyl substituted maleimide. 
In this reaction, the quinolone substrates (50) partially combined with the pyrazole unit of the iridium complex to afford complex (53), which was then promoted to an excited state (53a) under the irradiation of blue LED light. The excited state (53a) reacted with maleimide (51) by bimolecular energy transfer to give complex (53b), and the cycloaddition products (52) were released from (53c).

Recently, Bach et al. reported an enantioselective photocycloaddition reaction catalyzed by a chiral thioxanthone (54). Under the irradiation of visible light (λ = 420 nm), intramolecular cyclization of 3-alkylquinolones (55) bearing 4-O-tethered alkenes or allenes occurred to form cycloaddition products (56) in good yields and enantioselectivity (72-99% yield, 81-99% ee) (Scheme 13) [23]. Methyl, chloro, cyano, methoxy, and fluoro substituents on the benzo ring of the quinolones were well tolerated. Among the olefin components, allenes (propadienes) and trifluoromethyl-substituted olefins were also suitable for this reaction. Mechanistically, the alkylquinolones (55c) reacted with the thioxanthone (54) to deliver complex (57), which gave the quinolone triplet (57a) by energy transfer; addition to the internal carbon atom of the olefin then formed the 1,4-diradical (57b), which underwent intersystem crossing (ISC) to produce (57c) and further gave the desired product (56c).

In 2020, Bach et al. reported a photocycloaddition reaction in which heterocyclic compounds (58) could be synthesized using thioxanthone (59) as a chiral catalyst. Under the irradiation of visible light (λ = 420 nm), 3-substituted quinoxalin-2(1H)-ones (60) and olefins (61) could undergo an intermolecular aza-Paternò-Büchi reaction in good yields and enantioselectivity (50-99% yield, 86-98% ee) (Scheme 14) [24]. The para-position of the olefin's aromatic ring could carry substituents such as methyl, tert-butyl, and halogen groups. Ethyl and trifluoromethyl groups at the C3 position of the quinoxalinones were also well tolerated. The reaction mechanism is similar to that described above (Scheme 13).

In 2020, Takagi et al. reported an enantioselective intramolecular [2+2] photocycloaddition of 4-bishomoallyl-2-quinolones (62). When the phosphoric acid (63) was used as a photocatalyst, cycloaddition products (64) were obtained in good yields and enantioselectivity (up to 88% yield, up to 92% ee) under irradiation (λ > 290 nm) (Scheme 15) [25]. Methyl groups at the 6- and 8-positions of the substrates were well tolerated, while oxygen atoms could reduce the enantioselectivity of the products. The reaction proceeded from a complex (65) formed between the substrate (62b) and the phosphoric acid (63) through dual hydrogen bonding; the olefin moiety of the complex then reacted with the enone moiety to form the cycloadduct (64b) via photocycloaddition.

The para-substituents on the phenyl ring, such as methyl, bromo, chloro, methoxy, and boronate groups, were well tolerated. Among the olefins, styrenes, 1,3-enynes, and 1,3-dienes could give products with good enantioselectivity. The authors' study shows that the reaction proceeded through the formation of complex intermediates (70), which were assembled from the substrates (67) and the catalyst (66). The reaction was catalyzed by amines (72), which are easily converted into iminium ions. Under the irradiation of blue light (λ = 459 nm), the modified salicylaldehydes (73) could be successfully converted into the desired products (71) in good yields and enantioselectivity (38-63% yield, 65-91% er) [27].
The enantioselectivity of the products could be improved when strong electron-donating groups were present on the aryl ring of the salicylaldehyde core. Different substituents on the aryl group of the styrene chain, such as methyl and fluoro groups, allowed the reaction to proceed smoothly. The proposed mechanism is as follows. First, the substrates (73a) combined with the catalysts (72) to furnish iminium ion intermediates (74). The excited complex (74a) was formed by SET under blue LED irradiation; the excited complex (74a) could then transform into biradical intermediates (74b), which underwent [2+2] photocycloaddition to give the cyclobutyl iminium ions (74c). Finally, the cyclobutyl iminium ions (74c) released the desired product (71a).

Enantioselective Formation of 5-Membered Rings by Visible Light Catalysis

Five-membered ring compounds are widespread in nature, and some five-membered heterocyclic compounds, such as furan, pyrrole, and thiophene, are widely used in organic synthesis and display a variety of physiological activities as drugs. In 2017, MacMillan et al. reported an intramolecular α-alkylation of aldehydes (79) via a co-catalytic system (amine catalyst (80), iridium photocatalyst (81), and HAT catalyst (82)) to obtain five-, six-, or seven-membered cyclic aldehydes (83) under the irradiation of blue LED light in good yields and enantioselectivity (up to 91% yield, up to 95% ee) (Scheme 19) [29]. This reaction could be used to prepare a variety of nitrogen-containing heterocyclic compounds and to synthesize tetrahydropyrans. In the alkene scope, trisubstituted and 1,2-disubstituted olefins were well tolerated. The proposed mechanism is as follows. The substrates (79) combined with the amine catalyst (80) to afford enamines (84). Under the irradiation of visible light, the enamines (84) formed the electrophilic radical (84a) through SET initiated by the iridium photocatalyst, which added to the olefin to produce the nucleophilic radical (84b). The nucleophilic radical (84b) underwent HAT to generate the iminium ion (84c); finally, the desired products (83) were obtained upon release of the amine catalyst (80) from the iminium ion (84c).

In 2017, Luo et al. reported a chiral ion-pair photoredox organocatalyst (85), which was used for the enantioselective anti-Markovnikov hydroetherification of alkenols (86) to synthesize five-membered oxygen-containing heterocyclic adducts (87) under the irradiation of blue LED light (λ = 450 nm) in good yields and enantioselectivity (50-90% yield, up to 64% ee). The chiral ion pair is composed of a chiral BINOL-based sodium phosphate and 9-mesityl-10-methylacridinium tetrafluoroborate (Scheme 20) [30]. Aryl substituents at the α-position of the hydroxyl group were well tolerated. Studies have shown that the reaction begins with a chiral ion-pair-catalyzed SET step, which converts the substrates (86) into radical intermediates (88); the radical intermediates (88) combine with the chiral phosphate anion to form complex (89), and complex (89) undergoes cyclization to yield cyclic adducts (90), which give the desired products (87) through chiral-phosphate-anion-mediated hydrogen transfer.

In 2017, Bach et al. reported an enantioselective photocyclization reaction that converted 2-aryloxy-cyclohex-2-enones (91) to cis-2,3,4a,9b-tetrahydro-1H-dibenzofuran-4-ones (92) in moderate yields and enantioselectivity (26-76% yield, up to 60% ee) (Scheme 21) [31].
In the presence of Cu(ClO4)2·6H2O and the bisoxazoline ligand (93), the reaction could be carried out under irradiation at λ = 368 nm, or under the irradiation of visible light (λ = 418 nm) with the addition of 50 mol% of thioxanthone. Electron-donating groups at the para-position of the aryl ring had no effect, while electron-withdrawing groups led to a decrease in enantioselectivity. Studies have shown that the substrate (91a) forms the complex (94) with the chiral copper-bisoxazoline complex, so that the β-carbon atom of the enone can be attacked to generate the cyclic adduct (92a).

In 2018, Knowles et al. reported a photocatalytic reaction to synthesize pyrroloindolines (95) from tryptamine substrates (96) under the irradiation of blue LED light in good yields and enantioselectivity (59-81% yield, 87-92% ee) (Scheme 22) [32]. Ir(ppy)3 and an 8H-TRIP BINOL phosphate (97) were used as catalysts. Several substituents on the indole core were well tolerated, such as bromo, chloro, methoxy, and alkyl substituents. Moreover, the reaction could also be applied to the synthesis of alkaloid natural products. The proposed mechanism is as follows. The chiral phosphate first forms hydrogen-bonded adducts (98) with the substrates (96). Under the irradiation of visible light, electron transfer occurs, and the resulting intermediate reacts with the stable nitroxyl radical TEMPO· to produce closed-shell intermediates (99); the iminium ions then undergo nucleophilic addition of the pendant amine to give the alkoxyamine-substituted pyrroloindoline products (95).

In 2018, Meggers et al. reported a [3+2] photocycloaddition catalyzed by a chiral-at-metal rhodium complex (100). Under the irradiation of blue LED light, cyclopropanes (101) reacted with alkenes (102) or alkynes (103) to deliver chiral cyclopentanes (104) or cyclopentenes (105) in good yields and enantioselectivity (63-99% yield, up to >99% ee) (Scheme 23) [33]. The alkenes have a wide range of applicability: olefins substituted with Michael acceptors, styrenes, enynes, and aromatic rings were well tolerated, and pyridine could also serve as a substituent in the reaction. In the alkyne scope, various aryl-substituted alkynes were well tolerated. The proposed mechanism is as follows. Bidentate coordination between the cyclopropane substrates (101) and the rhodium complex RhS (100) generates intermediates (106), which are excited to intermediates (106a) under the irradiation of visible light. Intermediates (106a), acting as strong oxidants, are reduced to intermediates (106b) by a tertiary amine. Intermediates (106b) are converted into radical intermediates (106c), which add to the alkenes (102) to generate the ketyl radical (106d). The ketyl radical (106d) then releases the cycloaddition products (104) to complete the catalytic cycle.

In 2019, Hyster et al. reported a photoenzymatic transformation catalyzed by a flavin-dependent "ene"-reductase. The methodology could convert chloroacetamides (107) to five-, six-, seven-, and eight-membered lactams (108) under the irradiation of 50 W cyan light (λ = 497 nm) in good yields and enantioselectivity (up to 99% yield, up to >99% er); GluER-T36A (109) was used as the main chiral catalyst (Scheme 24) [34]. Aromatic-substituted alkenes could participate in this reaction smoothly, and a variety of alkyl substituents on the olefin were well tolerated. The proposed mechanism is as follows.
Substrates (107) could combine with the catalyst (109) to yield complex (110), which underwent electron transfer to give radical intermediates (111). Intermediates (111) formed the exocyclic radical (111a) via cyclization, which gave the desired products (108) through hydrogen atom transfer.

Cyclic adducts (115) were formed in good yields and enantioselectivity (up to 98% yield, up to 96% er) (Scheme 25) [35]. Many kinds of substituted furoindolines and pyrroloindolines were produced by this reaction with high enantioselectivity. In addition, substituted indolo[2,3-b]quinolines could also be constructed. The proposed mechanism is as follows. In the presence of an iridium photocatalyst, substrate (113d) first underwent SET oxidation to form the radical cation intermediate (116).

In 2020, Knowles et al. reported an enantioselective intramolecular hydroamination of alkenes with sulfonamides (118) catalyzed by an iridium photocatalyst and a chiral phosphate (119). Pyrrolidines (120) were successfully obtained under the irradiation of blue LED light in good yields and enantioselectivity (up to 98% yield, up to 98% er) (Scheme 26) [36]. In the scope of the sulfonamide moieties, substituents at the para- and meta-positions of the sulfonamide arenes provided products with high er; the reaction was also suitable for benzofuran, thiophene, and thiazole heterocycles. Benzyl-substituted, phenethyl-chain, sulfamate ester, and sulfamide substrates were well tolerated. In addition, some complex sulfonamide substrates could also participate in this reaction. In the alkene scope, cyclohexyl-substituted and cyclobutyl-substituted substrates showed better enantioselectivity.

In 2021, Gao et al. reported a method for the construction of polycyclic structures (A) from substituted 2-methylbenzaldehydes (147) and dienophiles (148) via a chiral titanium (149)-mediated enantioselective photoenolization/Diels-Alder reaction [42]. The reaction gave good yields and enantioselectivity (up to 98% yield, up to 99% ee) (Scheme 32) and could be used to synthesize a variety of complex natural products and drugs. Shortly before, Gao et al. had identified another chiral TADDOL-type ligand (150) for an exo-selective and enantioselective photoenolization/Diels-Alder reaction [43]. Under the irradiation of visible light (λ = 366 nm), electron-rich 2-methylbenzaldehydes (151) reacted with dienophiles bearing a benzoyl group at the α-position (152) to form a variety of D-A addition products (B) in good yields and enantioselectivity (up to 92% yield, up to 99% ee) (Scheme 33). The outcome of the reaction depended on the structures of the dienophiles and the chiral ligands, and the chiral dinuclear Ti-TADDOLate species provided an excellent enantioselective environment for the [4+2] cycloaddition.

Enantioselective Formation of Macrorings by Visible Light Catalysis

Compared with mesocyclic molecules, chiral macrocyclic molecules are relatively rare, and the corresponding synthesis methods are not mature. In 2020, Xiao et al. reported palladium-catalyzed asymmetric [8+2] dipolar cycloadditions. In the presence of a chiral ligand (164) and Pd2(dba)3·CHCl3, vinyl carbamates (165) could react with photogenerated ketenes (166) to deliver 10-membered cycloadducts (167) upon irradiation with blue LED light in excellent yields and enantioselectivity (up to 97% yield, up to 97% ee) (Scheme 36) [46].
A variety of vinyl carbamates bearing different aryl groups afforded the desired cycloadducts with high er, and vinyl carbamates substituted with an additional unsaturated vinyl group were well tolerated in the asymmetric [8+2] cycloaddition. Electronically varied substituents on the phenyl ring of the α-diazoketones showed excellent applicability; α-diazoketones with alkyl groups, such as methyl, ethyl, and n-butyl, are also applicable in this reaction. However, 2-aryl- or alkyl-substituted vinyl carbamates and 2-aryl- or alkenyl-substituted α-diazoketones could not participate in the reaction. This method is the first visible-light-induced asymmetric [8+2] cycloaddition reaction.

Enantioselective Formation of Multi-Ring Systems by Visible Light Catalysis

Compared with traditional synthesis methods, the photocatalytic synthesis of chiral polycyclic compounds is new and effective. In 2018, Nicewicz et al. reported an asymmetric cation radical intramolecular Diels-Alder reaction, utilizing an oxidizing pyrylium salt bearing a chiral N-triflyl phosphoramide anion (168) to synthesize cycloaddition products with a bicyclic structure (169) from trienes (170) upon irradiation with blue LED light (λ = 470 nm) in good yields and enantioselectivity (up to 72% yield, up to 75% er) (Scheme 37) [47]. Moreover, this reaction could also be used to access [2.2.1]-bicycloheptenes. The proposed mechanism is as follows. The electron-rich dienophile of the substrates (170) underwent one-electron oxidation by the photoredox catalyst upon irradiation with blue LED light to give radical cation intermediates (171), which cyclized to radical intermediates (171a). One-electron reduction of intermediates (171a) furnished the bicyclic products (169).

Conclusions

In the past 5 years, enantioselective visible light catalysis has become an important strategy for the synthesis of chiral organic molecules. In this review, we have mainly summarized the new methods for the construction of chiral cyclic compounds via photoinduced transformations. The substrate applicability and mechanism of the various methods are briefly described. It is worth noting that these photochemical synthesis methods provide a good supplement for the construction of polychiral and polycyclic compounds that are difficult, or even impossible, to synthesize with previous methods. It can be predicted that photocatalysis will become a greener and more environmentally friendly synthesis method and will play an important role in the synthesis of a variety of corresponding chiral compounds, providing new ideas for the total synthesis of natural products and drugs.

Conflicts of Interest: The authors declare no conflict of interest.
Sudden-Onset Severe Back Pain Caused by Acute Gastric Anisakiasis

Anisakiasis is a parasitic disease that usually causes acute abdominal pain, nausea, and vomiting after the ingestion of raw seafood. We present a case of anisakiasis in an 80-year-old man who complained of sudden-onset severe back pain that was reminiscent of aortic dissection. This case shows that anisakiasis should be considered as a possible differential diagnosis in patients with not only abdominal pain but also back pain. In addition, for "diagnostic excellence," it is essential to return to a comprehensive medical history that allows the reassessment of the diagnosis even when it differs from the initial differential diagnosis.

Introduction

Anisakiasis is a parasitic disease caused by the nematode Anisakis simplex, which can generally invade the gastrointestinal wall of humans and cause strong allergic reactions [1]. The main symptoms include mild-to-severe abdominal pain not confined to specific areas, nausea, and vomiting within a few minutes to four hours after consuming raw or undercooked seafood, particularly bonito and mackerel [1]. Diagnosis and treatment are usually by endoscopy and extraction and identification of the larvae [2]. Anisakiasis has been reported in significantly higher numbers particularly in Japan, Spain, and South Korea. In recent years, the number of anisakiasis reports has been increasing in many more countries across the world [3]. In this report, we describe a case of gastric anisakiasis that presented with back pain and had a rare clinical course.

Case Presentation

An 80-year-old man with hypertension and dyslipidemia presented with severe middle back pain (Th4-8 midline of the trunk). While he was asleep, he suddenly experienced severe and dull back pain, with a numerical rating scale score of nine over his entire back area. He could not sleep well due to a feeling of dyspnea caused by persistent pain. The following day, when he presented to our hospital, he was still experiencing back pain in the entire area. His medical history included hypertension and dyslipidemia. He also reported heavy alcohol consumption habits. He did not have any other symptoms such as diarrhea or black stool, except for back pain and nausea. His only medication was angiotensin II receptor blocker (ARB) for hypertension. In addition, he had eaten raw squid sashimi prepared by a cook approximately 12 hours before the symptom onset.

At presentation, his consciousness was clear, and his other vitals were as follows: temperature, 36.5°C; heart rate, 56 beats/min; blood pressure, 148/64 mmHg (no difference between right and left); respiratory rate, 12 breaths/min; and SpO2, 98% at room air. The abdomen was soft and flat with no tenderness. Physical examination did not reveal a pulse deficit, aortic bruit, unequal blood pressure in either arm, or costovertebral angle tenderness. Based on the patient's clinical history, acute aortic dissection and acute pancreatitis were suspected. In addition, acute coronary syndrome, esophageal rupture, and pulmonary embolism were considered as the differential diagnoses. Because of the history of raw fish consumption, anisakiasis was also listed as a differential diagnosis, but we considered anisakiasis less likely because it is atypical for the primary symptom to be back pain rather than abdominal pain.
Laboratory tests showed a leukocyte count of 5400/µL (reference range: 3900-9700) with 3.3% eosinophils, a C-reactive protein level of 1.56 mg/dL (reference range: 0.0-0.29), D-dimer of 1.8 µg/mL (reference range: 0.0-1.0), serum amylase of 75 U/L (reference range: 43-124), and serum pancreatic lipase of 45 U/L (reference range: 14-56). The electrocardiogram results were also within the normal range. Contrast-enhanced computed tomography (CT) showed no findings suggestive of aortic dissection or pancreatitis; however, local thickening of the anterior portion of the gastric wall was observed (Figure 1). We suspected anisakiasis or gastric ulcer and performed an upper gastroscopy, which confirmed the presence of a single live anisakid nematode larva penetrating the gastric mucosa in the greater curvature of the lower body (Figure 2). The patient was then diagnosed with gastric anisakiasis. The Anisakis larva was removed endoscopically (Figure 3). There were no specific endoscopic findings other than gastric anisakiasis. Symptoms resolved after removal of the larva.

Discussion

We experienced an atypical case of gastric anisakiasis with sudden-onset severe back pain that was reminiscent of aortic dissection. Anisakiasis may sometimes require differential diagnosis from cardiovascular disorders because it has been reported to cause severe chest pain [4,5]. To the best of our knowledge, however, this is the first report of gastric anisakiasis with sudden-onset severe back pain. Other diseases were ruled out, and the symptoms disappeared after the anisakid nematode larvae were eradicated, suggesting that the back pain was caused by gastric anisakiasis. In addition, although the symptoms of gastric anisakiasis in this case appeared long after raw fish ingestion, a few previous reports have suggested that the disease can occur up to 72 hours after ingestion of raw fish [2]. Other causes of the pain were differentiated retrospectively. If the back pain had been visceral in origin, urinary tract stones and cholecystitis would be the most likely differentials; both diseases could be ruled out using CT. Somatic pain was less likely because the pain was not aggravated by body movement. Neuropathic pain was not consistent with the location, and psychogenic pain was not consistent with the improvement of symptoms. The actual mechanism by which the gastric lesion caused back pain, and not abdominal pain, in this case is unknown. In fact, the mechanism by which gastric anisakiasis pain develops remains largely unknown [6]. However, as with back pain caused by duodenal ulcers, it is likely that the back pain was caused by visceral pain in the stomach, which resulted in afferent stimulation of spinal nerves [7,8]. The symptoms of the disease were likely to be associated with pain from the stomach, especially since the range of stomach-associated pain is Th5-10 and the patient had pain at Th4-8. In addition, previous studies have suggested age-related changes in pain processing, such as an elevation in the pain threshold, with alterations in peripheral neural elements [9]. These changes may possibly have contributed to the atypical pain related to anisakiasis in this case.
Conclusions

In this case, a comprehensive interview, including a thorough dietary history, was conducted at an early stage. The possibility of gastric anisakiasis was promptly considered after excluding aortic dissection on CT imaging. For "diagnostic excellence," it is crucial to collect the patient's basic information as well as to reevaluate the diagnosis if it differs from the expected one. In addition, this case report highlights the need for doctors to include gastric anisakiasis in the differential diagnosis of not only abdominal pain but also back pain.

FIGURE 1: Contrast-enhanced CT of the transverse section. Focal edematous wall thickening was seen in the anterior wall of the gastric body (arrow). CT, computed tomography.
Engaging Foreign Curriculum Experts in Curriculum Design: A Case Study of Primary School Curriculum Change in Lesotho

Foreign consultants have been involved in Lesotho's curriculum design since the beginning of formal education in the country around 1833, and this involvement continues to date. The expectation was that, with time, Lesotho would produce enough quality curriculum specialists who would be entrusted with the task of curriculum design. However, the trained citizens continued to engage foreign consultants in curriculum design. This paper presents some of the reasons for the continued engagement of foreign consultants in the Lesotho curriculum centre. The study is a case study in which a qualitative approach was used as the conceptual framework to interrogate the value of the curriculum specialists mandated to design the Lesotho curriculum. Data used in this study were solicited from one-on-one interviews of curriculum specialists. The findings reveal that one of the main reasons for engaging foreign consultants is the lack of training for newly employed curriculum specialists; at the same time, less qualified and inexperienced curriculum specialists are employed, and unfavourable working conditions in the Lesotho curriculum development centre have led to a huge staff turnover. This has resulted in a lack of knowledge of curriculum design, with curriculum specialists taking too long to design a low-quality curriculum.

Introduction

Curriculum design and development is one of the initial processes in teaching and learning since it informs what should be taught and learnt in schools. In curriculum design, a country's educational policies, goals, mission and vision are interpreted and transformed into general objectives, which can be easily transferred by school teachers into instructional objectives and learning standards.

The Nature of Curriculum

One of the properties of curriculum is that it is not stagnant but changes frequently to cater for modernization. However, curriculum design and curriculum change are among the most expensive processes in any education system. Some of the expenses emanate from the production of instructional materials and the training of teachers, curriculum developers and stakeholders. In most cases, curriculum change and design involve the engagement of foreign curriculum experts. An example of such is the South African Curriculum 2005 (C2005), which was accompanied by a strategy called Outcomes-based Education (OBE). OBE was imported from the USA and New Zealand by the new ANC government to phase out the Apartheid type of education. As observed by Reference [1], the goal of OBE in South Africa was to move away from the Apartheid curriculum and to promote the important problem-solving and critical-thinking skills the country needed. OBE focused on learner-centred, self-discovery learning and de-emphasised content. However, the implementation of OBE proved to be problematic because it requires well-trained teachers, small class sizes and resources. The problems observed in the C2005 of South Africa, as argued by Jansen [2], could also be attributed to the use of foreign curriculum experts without paying attention to the local context, which Reference [3] refers to as the stage of ownership, internal initiative, internalisation or appropriation.
As much as it is an example of a change influenced by politics, the C2005 of South Africa is also an example of a change of curriculum to organise subjects in the form of integration as a number of countries including Lesotho and Jamaica have adopted. This type of curriculum orientation which Reference [4] refers to as interdisciplinary concepts where topics are placed under one theme, is not well known by local teachers and curriculum experts. Foreign Curriculum Experts' History in Lesotho This lack of expertise usually calls for foreign curriculum experts to be engaged. In Lesotho, there has been a series of foreign curriculum experts that have assisted the country in curriculum design and development. Table 1 below shows some of the foreign curriculum experts who have been assisting Lesotho in curriculum design and development. Situating Lesotho Lesotho is a small mountainous country that is landlocked by South Africa and has a population of roughly two million. According to [Reference 28], "the country has a low primary school completion, at only 64 percent in 2014. The adult literacy rate in Lesotho of 76 percent in 2009 was below the national rate of 86 percent in 2000 but above the sub-Saharan Africa average of 60 percent in 2010." [ Reference 29] reports that the introduction of Free Primary Education (FPE) in 2000, starting with Grade 1, resulted in a dramatic rise in intake and overall enrolment, which increased from 364,951 in 1999 to 410,745 in 2000; a rise of 12.5 percent. Since the paper focuses on primary education, it is also worth mentioning that, of this enrolment, there were 214 746 boys and 214 974 girls. Lastly, Lesotho is a country with the third lowest English-reading performance and the fourth lowest math performance [reference 28]. Research Design This paper reports on a qualitative case study research of one curriculum institution in Lesotho in which one-on-one interviews were conducted to collect rich in-depth data from curriculum specialists. Research Type The methods of data collection in qualitative design favour mostly interviews whereby there is much text collected in the form of words (Reference [5,6]). In this study curriculum developers were encouraged to participate and interact fully where the interviewer probed for clarity of specific aspects relating to Lesotho curriculum design and stimulated participants to give full answers (Reference [7,8]). The interviews allowed for in-depth information from the curriculum developers (Reference [9]). In other words, the interviews produced very detailed and descriptive information collected in the form of words (Reference [10]). The detailed transcripts of interviews produced permitted the researcher to identify themes (Reference [11,12]). Interviews allowed the researcher to enter into curriculum developers' perspectives and as a result, the researcher was in a position to understand and portray their perceptions and understanding of curriculum design in Lesotho (Reference [13]). Interviews may be structured, unstructured or semi-structured (Reference [14]). In the case of structured interviews, specific questions and the order in which they are asked are determined prior to the interview process, whereas unstructured interviews explore the topic areas without specific questions or a pre-determined order (Reference [15]). This research utilised the semi-structured interviews whereby an interview schedule was developed prior to the interview but the interviewer probed for insufficiently answered questions. 
Inclusion/Exclusion Criteria The focus of the study is on involvement of foreign consultants in primary school science curriculum design, but the challenge is, the engagement of consultants was not only for primary science but also for all subjects and projects. For this reason, the selection of participants was on science curriculum specialists and project coordinators. The curriculum specialists selected for this study were as follows: Four members of the panels, who are responsible for the technical work in the development of primary curriculum, were interviewed. These consisted of a primary school science curriculum specialist, Life Skills education (LSE) curriculum specialist, one member of the panel who is not an employee of the Lesotho government but works as a lecturer in the National University of Lesotho; and the lead in-house contact person (Reference [16]) who was responsible for the selection of consultants and the drafting of terms of reference for at least three consultants recently engaged at the NCDC. Two directors (one former and one current) of curriculum development centre and CEO were selected to participate. Moreover, any number of available international consultants who had been engaged to help in the design and development of primary school science curriculum were selected to participate. One consultant focused on the Life Skills Education syllabus while the other assisted the development of the integrated primary curriculum of which science was one. Choice of Subjects There were four (4) curriculum specialists, two (2) directors and one (1) CEO selected as participants for the study as indicated in table 2 below. Sample Collection There were seven (7) participants invited to participate but only five (5) volunteered to take part in this study. The profile of curriculum specialists who took part in the study is shown in table 3 below. Data were collected from four curriculum specialists and one NCDC director. Interviews with the Director of NCDC were held in order to obtain information on the rationale for the involvement of foreign consultants in the curriculum design of Lesotho. Director-NCDC is responsible for the acquisition of services of foreign consultants once the need for a consultant has been identified. He then submits the requisition for approval to the Chief Education Officer for Curriculum and Assessment Services (CEO-CAS). There were plans to interview CEO-CAS, prior to and during the engagement of two consultants but she declined at the last minute despite having agreed to the interview and having seen the interview schedule. Cognizant of the fact that qualitative research interviews are collaboratively produced between the interviewer and the interviewee (Silverman [17]), we held all the interviews in a place where the respondents would be free to talk without feeling intimidated. For curriculum specialists, interviews were held in their offices at the curriculum development centre (NCDC). As most of them occupy offices individually, these were quiet rooms with no disturbances from other offices. All the interview meetings were planned for the convenience of the officers and where there were two officers in one office, the interviews were held in a vacant separate room to avoid interruptions from the observer. Two of the interviews were held at workshop venues (Lake-Side Hotel) as the officers held residential dissemination workshops for the curriculum developed with the assistance of foreign consultants. 
The workshops were for the education officers and District Resource Teachers (DRTs) who, in turn, were supposed to cascade the information to schools' representatives, who would then pass the knowledge on to teachers in schools. We attended workshops as per the invitation of the participants (curriculum specialists and Director-NCDC), where we were assured that they would have spare time within their workshop schedule for interviews. These interviews were also held in separate quiet rooms with no disturbances from the workshop attendees.

The fact that data in this research were mainly collected through interviews might raise concerns about quality and quantity. However, Gubrium and Holstein (2001) [reference 30] note that interviewing provides a practical basis for learning from strangers. In that case, the data for this research were collected from a relevant audience (not strangers): the primary school science curriculum specialists, who design and develop the curriculum on a daily basis. Interviews are regarded as one of the effective methods of data collection. [Reference 30, p.10] argues that "we all live in what might be called an interview society in which interviews seem central to making sense of our lives". They further claim that the confessional properties of interviews deepen and broaden the subjects' experiential truths. Stated differently, they describe the interview as "part and parcel of our society and culture … it is now an integral, constitutive feature of our everyday lives." Therefore, we chose this method for this research to solicit knowledge from the role players about the rationale for involving foreign curriculum experts in local curriculum design.

Data Processing

Data in this research were collected mainly through interviews, as stated above. In interviews, much text is collected in the form of words (Reference [5,18]). For this research, we analysed the data generated through the interviews of curriculum specialists using content analysis. This qualitative approach to data analysis helped us to describe how NCDC curriculum specialists are developing a science curriculum with the assistance of curriculum consultants. Reference [7] argues that content analysis takes texts and analyses, reduces and interrogates them into summary form through the use of both pre-existing categories and emergent themes in order to generate or test a theory. To analyse data from curriculum specialists, we used features of the process of content analysis described by Reference [7]. The first feature is "breaking down text into units of analysis". This incorporates a number of steps we took to process data, such as defining the research questions to determine the text to be analysed. In this case, we started with the identification of the type of population from which units of text were to be sampled, namely the interview transcripts of curriculum specialists. The curriculum specialists were categorised into three groups: subject specialists employed by NCDC to design the Lesotho curriculum; subject specialists who are members of the primary school science curriculum panel from stakeholder institutions but not employed by NCDC; and the authorities at NCDC in the form of directors and CEO-CS. These categories permitted us to identify the coding units from the text and context, and we then decided on the codes to be used in the analysis in order to pull together a wealth of material into some order and structure.
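As a rough illustration of this coding-and-tallying step, the minimal sketch below counts how often each code appears overall and per participant group. The group names, code labels, and coded segments shown are hypothetical placeholders, not data from the study; in practice the pairs would come from manually coded transcript units.

```python
from collections import Counter, defaultdict

# Hypothetical coded interview segments: (participant_group, code) pairs.
coded_segments = [
    ("NCDC specialist", "lack_of_training"),
    ("NCDC specialist", "staff_turnover"),
    ("panel member", "lack_of_training"),
    ("director", "consultant_engagement"),
    ("NCDC specialist", "consultant_engagement"),
]

# Tally code frequencies overall and per participant group.
overall = Counter(code for _, code in coded_segments)
by_group = defaultdict(Counter)
for group, code in coded_segments:
    by_group[group][code] += 1

print("Overall code frequencies:", dict(overall))
for group, counts in by_group.items():
    print(f"{group}: {dict(counts)}")
```

Counts of this kind are only the descriptive step; interpreting the patterns they reveal remains a qualitative judgement, as described next.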
In this research, we have used "situation codes" that fall under descriptive codes depicting the perspectives held by curriculum specialists and ways of thinking about people and objects (Reference [7]). This descriptive and analytic data enabled us to identify patterns that could be used critically to interpret the rationale for engagement of foreign consultants in science curriculum documents' development. The second feature we used is "undertaking statistical analysis of the units". This is where after looking for patterns, regularities and relationships between segments of the text, we employed statistical techniques to categorize text by calculating trends, frequencies, priorities and relationships. This was then followed by factor analysis to group the kinds of responses from curriculum specialists. Results Lesotho has always depended on donor assistance for the development of its education system due to factors such as political instability and escalating trade deficit [Reference 32]. Similarly, [Reference 33] argue that the Ministry of Education and Training is dependent on donor funding to achieve their intended goals. The major contributors that support the education sector in Lesotho are World Bank, GTZ and IRISH AID [ Reference 34]. The curriculum change of 2009 to 2016 in Lesotho, on which this paper is based, was mainly funded by the World Bank. Therefore, the procedures followed to select consultants for this innovation were the World Bank procurement procedures. The selection procedure used as per World Bank requirements is called the quality-cost based selection method. In this procedure, there are three aspects described by Reference [19] that guide selection of consultants. To qualify for the engagement of a consultant in World Bank projects, the candidate should have: 1. General qualifications, 2. Adequacy for assignment and 3. Experience in the region and language (Reference [19]). We decided to utilize these criteria by the World Bank to evaluate the conformity to the quality and standard for World Bank projects by curriculum specialists mandated to design the curriculum of Lesotho. Table 4 below summarizes the findings from the interviews with curriculum specialists: All specialists know context, culture, local language and have taught in local schools for the minimum of five years. General Qualification None of the curriculum specialists has a PhD level of study. The highest level of education and training for curriculum specialists are Masters' degrees as mentioned by one of the curriculum specialists thus: "The local consultants (curriculum experts) are at a low level; they only do masters in curriculum". Adequacy of Assignment At the level of Masters' degrees, some curriculum specialists have appropriate qualifications in curriculum while some have Masters' degrees in other fields different from curriculum. There are also curriculum specialists who do not have any training in curriculum. Experience We investigated the experience of participants in two categories. The first is professional activity and the second is knowledge of culture and local language. Professional activity Curriculum specialists do not involve themselves in curriculum-related activities both at the Curriculum Development Centre and in the world of education. Interviewees complain that curriculum specialists return from schools to be office bearers and are not involved in educational research. 
They do not interact with the outside world, they do not attend and present papers in conferences, and are not writing books and academic papers. Knowledge of culture and language It is important for a consultant to have cultural awareness and sensitivity of the area in which the assignment is taking place (Reference [20]) and fortunately, data shows that curriculum specialists know the context, culture of the country, understand and speak local language and have taught in local schools for the minimum of five years. General Qualification Curriculum institutions, such as NCDC, should have access to the most qualified individuals in the field (Reference [21]). This is because general educational levels of staff and their specific preparation in education predict the richness of the curriculum (Reference [22]). In addition, curriculum specialists should be experts and intellectually productive academics (Reference [23]). To employ unqualified curriculum specialists and expect them to do quality work leads to certain failure (Reference [24]). In agreement with this argument, the American National Association for the Education of Young Children (NAEYC) provides the standards that early childhood educators should possess within childhood settings (Reference [27]). The analysis of these guidelines shows three main categories of knowledge for educators as content knowledge, pedagogical knowledge and language and culture. Adequacy for the Assignment "Reference [25] outlines the specifications required for the curriculum specialists as follows: 1. Master's Degree in Education plus a minimum of three (3) years' work experience as a Curriculum/Subject Specialist. Specialization in Curriculum Design and Development as well as assessment will be an added advantage. OR 2. Bachelor's Degree in Education plus five (5) years as a Curriculum/Subject Specialist in Science." As discussed earlier, adequacy for the assignment is the education and training in the specific field and subject directly relevant to the assignment (Reference [26]), curriculum specialists need to meet requirements specified above in the job advertisement to have adequacy for the assignment. Data shows that some curriculum specialists do not have adequacy for the assignment of curriculum design and development. Curriculum specialist 4 admits NCDC employs curriculum specialists with no proper qualifications: "Some of them are still from teaching without any training in curriculum development. So the new curriculum specialists join the center (NCDC) straight from teaching, they are not even given the basics of curriculum development. Director-NCDC explains why some curriculum specialists do not have relevant qualifications: "…sometimes the candidate with experience but without qualifications is selected (employed) over the one with qualifications but lacks experience... These people have been involved with developing the curriculum for some years but their qualifications lack curriculum development modules." Curriculum specialists do not meet the specifications since they have low levels of education and some do not have qualifications relevant to curriculum. It is not surprising that NCDC engages consultants to assist curriculum specialists to design curricula whenever there is curriculum innovation. This is because "educators who are qualified, well-resourced and supported are critical to program success" (Reference [22]), and the opposite is also true. 
The government of Lesotho cannot expect a quality curriculum if it does not train curriculum specialists and employ appropriately qualified curriculum specialists. Therefore, if appropriately qualified curriculum experts are found outside the borders of Lesotho, then, for the sake of the quality of education in Lesotho, the government has done a sensible thing in engaging foreign curriculum experts. Besides, curriculum developments are not sealed airtight within national boundaries. "Just as economic, political, and ecological phenomena increasingly ignore national boundaries, so do educational issues" (Pinar, et al., 2014) [reference 31, p.793].

Concluding Remarks

In conclusion, the paper highlighted that the curriculum specialists mandated to design and develop the curriculum of Lesotho fall short of some of the World Bank's specifications for the procurement of consultants. Firstly, regarding general qualifications, which refer to the level of education, most curriculum specialists do not have a high level of qualifications. Their qualifications range from first degrees to Masters' degrees, and they are never encouraged to pursue a PhD, since the Ministry of Education and Training does not offer study leave for anyone intending to further his/her studies beyond a Master's degree. Secondly, regarding adequacy for the assignment, curriculum specialists also do not conform to the specification. Adequacy for the assignment refers to having appropriate qualifications for the position one holds. As curriculum specialists, they should be adequately qualified in the curriculum field. The paper finds that some hold qualifications in curriculum studies, but the majority either have Masters' degrees in other fields not specific to curriculum studies or have no Masters' degrees at all. Those who have Masters' degrees majored in other fields such as management and leadership. The reason provided for having unqualified curriculum specialists is that there was hope that they would be trained, as this used to be the norm. Moreover, the curriculum specialist post is not an attractive job in Lesotho. There is a high turnover of curriculum specialists, who leave NCDC for other institutions such as the College of Education, the National University of Lesotho, the Examinations Council of Lesotho and other government and UN agencies. In short, this means that there are curriculum specialists mandated to design and develop the curriculum of the country but with no training in curriculum matters. Finally, regarding the experience of curriculum specialists, there are two categories. The first is experience in the region and knowledge of culture and language, to which all local curriculum specialists conform since they are all citizens of Lesotho; they are familiar with the culture, beliefs, context and language of Lesotho. The second is professional activity, which refers to engagement in curriculum activities nationally and internationally. The research reveals that curriculum specialists do not involve themselves in curriculum-related activities. The participants stated that curriculum specialists conduct no research studies in Lesotho schools and therefore write no academic papers, which in turn results in non-attendance of educational symposiums. This means that they do not interact with the outside world, since they do not attend and present papers in conferences, and they are not writing books and academic papers.
There is potential for future studies related to this research, such as an investigation into other factors that could have influenced the decision to choose foreign consultants over local consultants. Another study could be a desktop comparison of curriculum documents designed and developed with the assistance of foreign consultants and those developed without such assistance. Lastly, a study investigating the implementation of the new curriculum in schools should be conducted to determine its success.
Preoperatively diagnosed gastric collision tumor with mixed adenocarcinoma and gastrointestinal stromal tumor: a case report and literature review

Reports of gastric collision tumors comprising adenocarcinoma and gastrointestinal stromal tumor are extremely rare. Here, we report the case of a 68-year-old male who was diagnosed with a lower-body, moderately differentiated, tubular-type adenocarcinoma and a submucosal tumor and underwent an elective D2 distal gastrectomy. The tumor cells of the gastrointestinal stromal tumor were positive for H-caldesmon and CD117, weakly positive for smooth muscle actin and DOG-1, and negative for desmin, S-100 protein, CD31, and AE1/AE3. The tumor had grown into a mixed form of adenocarcinoma and gastrointestinal stromal tumor. Thus, we report the first case of a preoperatively diagnosed collision tumor in the stomach consisting of adenocarcinoma and gastrointestinal stromal tumor.

Introduction

A gastric collision tumor is characterized by two tumors that are in contact with each other without one being included within the other, and it is relatively rare [1]. In general, adenocarcinoma accounts for 95% of the malignant neoplasms of the stomach; gastrointestinal stromal tumors (GISTs), in contrast, are rarely observed [2][3][4][5]. Adenocarcinoma and GIST emerge from different layers of the stomach; therefore, a collision tumor consisting of these two tumor types is rare [6]. Here, we report a very rare case of a collision tumor of the stomach comprising both adenocarcinoma and GIST.

Case report

A 68-year-old male with a height of 174.5 cm and a weight of 43.6 kg, and a primary complaint of anemia, was admitted to our facility. Abdominal examination revealed that the abdomen was flat and soft without tenderness, and no palpable masses or superficial lymph nodes were observed. Hematological tests showed a hemoglobin (Hb) level of 11.8 g/dL, indicating anemia, and no abnormalities in the CEA, CA19-9, and AFP tumor markers were observed. Esophagogastroduodenoscopy revealed a submucosal tumor-like mass with a delle and bridging folds in the greater curvature of the lower stomach, and the tumor was suspected as the cause of the anemia (Fig. 1). Ultrasound endoscopy revealed a submucosal tumor (SMT) (with the fourth layer as the main locus) and a tumor (with the third layer as the main locus), with some areas of indistinct borders (Fig. 1b). Tissue biopsy of this tumor indicated group 5 tubular adenocarcinoma, and thoracoabdominal computed tomography (CT) showed a tumor with a mottled contrast stain in the stomach body and no regional lymph node or distant metastases (Fig. 2). Based on the above findings, the patient was diagnosed with a collision tumor of gastric cancer and an SMT (GIST was highly suspected). Gastrectomy was performed via laparotomy, and the surgical findings corresponded to H0P0CY0 according to the 3rd English edition of the Japanese classification of gastric carcinoma issued by the Japan Gastric Cancer Society [7]. Distal gastrectomy was performed with D2 lymph node dissection according to the Japanese Gastric Cancer Treatment Guidelines [8]. The tumor was located in the greater curvature of the antrum of the stomach and measured 50 × 45 mm in size (Fig. 3a). On the cut surface, a nodular tumor was mainly observed in the submucosal layer, and the mucosal surface was ulcerated (Fig. 3b). A large part of the tumor showed a mixed pattern, and areas of carcinoma alone and of GIST alone were also observed (Fig. 3c).
The nodular tumor was characterized by the intermingled proliferation of carcinoma cells and spindle cells (Fig. 3d). The spindle cells formed a nodular tumor in the submucosal and subserosal layers. Carcinoma cells invaded the nodular tumor of the spindle cells. Pathological examination revealed that Ki-67-positive cells comprised 2% of the GIST. The average mitotic count was ≤ 5 mitotic figures per 50 high-power fields. The GIST was classified as very low risk according to the guidelines for GIST risk stratification [3]. The tubular adenocarcinoma had reached the subserosa, and no lymph node metastases were observed. The pathological findings corresponded to pT3N0M0, pStage IIA, according to the 3rd English edition of the Japanese classification of gastric carcinoma issued by the Japan Gastric Cancer Society [7]. Immunohistochemical staining demonstrated a positive reaction for cytokeratin (AE1/AE3) in the carcinoma cells (Fig. 4a). Further, the nuclei of the adenocarcinoma cells were positive for TP53 (Fig. 4b). Although the spindle cells were negative for cytokeratin (AE1/AE3) and TP53, these cells were positive for c-kit (Fig. 4c), weakly positive for DOG-1, and negative for desmin and S-100. As a result, the tumor was diagnosed as a collision tumor of GIST and adenocarcinoma. Lymph node metastasis of the GIST or adenocarcinoma was not observed, and the resected margins were negative for neoplastic cells. Different intratumoral GIST sections were examined for KIT mutations, and a KIT exon 17 mutation corresponding to the amino acid substitution Asp820Lys was observed. On the other hand, no KIT mutation was detected in the adenocarcinoma section. No complications were observed postoperatively, and the patient was discharged from the hospital on postoperative day 14. No adjuvant therapy was required for the GIST, as per the international guidelines for GIST risk stratification [3]. The patient was clinically and radiographically disease-free at 2.5 years after surgery.

Discussion

A collision tumor of adenocarcinoma and GIST in the stomach is extremely rare in gastric surgery. Only nine cases of GISTs in gastric collision tumors with carcinoma have been reported, including the present case, and these are shown in Table 1. In this report, collision tumors of adenocarcinoma and GIST are classified into two types: the contact type, wherein the tumor components are in contact with each other, and the mixed type, wherein the tumor components are intermingled. We report the first case of a preoperatively diagnosed collision tumor in the stomach consisting of adenocarcinoma and GIST. In 1919, Meyer [16] defined collision tumors as two unrelated tumor types in the same organ that come into contact or partially infiltrate each other. In 1980, Spagnolo et al. [17] established the diagnostic criteria for collision tumors as follows: (1) the distribution of two distinct tissue types can be clearly distinguished, (2) each tissue type can be identified at adjacent sites, and (3) both components are mixed at the collision site, implying that a portion that appears to be a transition between the two components may in fact be an intermingling of both. Eight cases of collision between GIST and gastric adenocarcinoma have been reported (Table 1): five of these cases were males and three were females. The mean age of the patients was 72.3 years (range 54-86 years). In all these cases, the GISTs were of low malignant potential, whereas the gastric adenocarcinomas were usually advanced.
We redefined Meyer's classification of collision tumors as the contact type and Spagnolo's classification as the mixed type. According to our literature review, the reported cases therefore comprise four contact-type cases and four mixed-type cases, including our own. Regarding diagnosis, all previously reported cases were diagnosed as collision tumors only after surgery, and no case other than the present one underwent preoperative ultrasound endoscopy. Preoperative ultrasound endoscopy and endoscopic ultrasound-guided needle biopsy are therefore considered useful for the diagnosis of collision tumors. Kleist et al. [15] described a rare case of a gastric adenocarcinoma inside a GIST. They reported that this might have resulted from dysplastic epithelium trapped inside a GIST sustaining the tumorigenic effect of the intratumoral microenvironment, or from tumor-to-tumor metastasis from an independent gastric adenocarcinoma. Regarding GIST and gastric cancer found in the same surgically resected specimen of the esophagogastric junction and stomach, Abraham [18] reported, in a detailed review of 150 surgical specimens resected for esophagogastric junction cancer, that incidental GIST (median tumor diameter 1.3 mm) was identified in 10% of the specimens, with no continuity between any of them and the cancer. Kawanowa et al. [19] reviewed 100 surgical specimens resected for gastric carcinoma and found GISTs as small as 5 mm in 50 lesions (35 patients), 90% of which were in the upper gastric body. Although the true prevalence is unknown because these examinations were performed postoperatively, micro-GISTs that are not clinically problematic may be relatively frequent. Several cases describing various other combinations of tumors have been reported [6,15]. Various hypotheses have been proposed regarding the synchronous occurrence of GIST and adenocarcinoma. Yan [20] reported that KIT and PDGFRA mutation analysis of the gastric tumors, background gastric mucosa, and GISTs in 15 cases, in which gastric GIST was found incidentally in surgical specimens resected for gastric cancer, showed no genetic association. Therefore, we cannot exclude the possibility of the involvement of other, unknown genes; however, if a specific factor triggered the development of both tumors, there would likely be more reports. Here, we report the ninth case of a collision tumor of gastric adenocarcinoma and GIST. The cause of collision tumors comprising gastric adenocarcinoma and GIST has not been identified. Besides the likelihood of a coincidental co-occurrence, the possibilities of a shared genetic abnormality or of one tumor inducing the other have been cited. Molecular genetic analyses were performed in three of the eight previously reported cases, but the only genes examined were c-kit and PDGFRA. In our case, the GIST section showed mutations in the c-kit gene; however, as in previous reports, no mutations were observed in the cancer section. Nevertheless, it cannot be ruled out that mutations in genes other than c-kit and PDGFRA could be the cause. If mutation screening across all genes could be performed, a common genetic mutation shared by the gastric cancer and the GIST might yet be identified. Clinically, collision tumors are usually indistinguishable from the dominant tumor type, and the diagnosis is almost always established postoperatively on histological findings. In summary, in this case we were able to diagnose a collision tumor of gastric adenocarcinoma and GIST preoperatively using ultrasound endoscopy.
Ultrasound endoscopy is thus considered useful for diagnosis when gastric cancer presents with an SMT-like lesion. Although this was a very rare collision tumor, we were able to characterize its unique features and the details of its tumorigenesis preoperatively. When such a rare collision tumor is encountered, it should be diagnosed carefully preoperatively and examined in detail postoperatively, including by genetic analysis.
STOCK RETURN OF MANUFACTURING COMPANIES IN INDONESIA: THE INFLUENCE OF BUSINESS STRATEGY, EVA, MANAGERIAL OWNERSHIP AND SIZE The purpose of this study is to determine the effect of business strategy, economic value added (EVA), managerial ownership and size on the stock returns of manufacturing companies in the basic industry and chemical sectors listed on the Indonesia Stock Exchange. The population of this study comprises manufacturing companies listed on the Indonesia Stock Exchange (IDX) for the period 2016-2020. The sample was selected using a purposive (targeted) sampling method. The data used are secondary data, analyzed with panel data regression. The results show that business strategy and managerial ownership have a positive impact on stock returns, whereas EVA and size do not affect stock returns. INTRODUCTION The effects of the Covid-19 pandemic are still felt on world stock markets (Ashraf, 2020). Various public and business sectors have also been significantly affected by this global event (Prem, 2020). Some countries have signaled the direction of economic recovery after the Covid-19 pandemic (Liu, 2020); however, several countries have not yet signaled a better economic recovery, one of which is Indonesia (D. S. Abbas, Ismail, Taqi, Yazid, et al., 2021). The Indonesia Stock Exchange (IDX) announced that all stock exchanges in the world were experiencing a decline in the value of securities (Caraka, 2020). This is evident from the simultaneous decline in global stock price indices during the Covid-19 pandemic. Likewise, the IDX stock price index, falling since January 2020, plummeted in early April but began to show a slow upward trend entering May 2020 (Olivia, 2020). This is an opportunity for investors who want to invest in shares on a large scale, because they can buy shares at relatively low prices; the price range is comparable to stock prices seven years earlier. In addition, it can be a boon for investors to realize profits when the world economy improves and develops rapidly again (D. S. Abbas et al., 2022; Fernando et al., 2023; Saleh et al., 2022). This condition is not experienced only by the stock exchange in Indonesia; all stock exchanges in the world have been affected equally (Al-Awadhi, 2020; Ashraf, 2020; Liu, 2020). In Japan, for example, the Nikkei Index, an indicator of stock trading in Japan, recorded its highest level at the beginning of 2020 in the range of 24,083. In mid-March, when the Covid-19 outbreak was still peaking, the Nikkei corrected to a low of 16,552. By the third week of May, the Nikkei had begun to rise to the range of 20,595 (Chien, 2021; Narayan, 2020; Zhang, 2021). The Dow Jones Industrial Average (DJIA), one of the leading indicators of stock trading in the United States, was still at its highest level of the year in February, namely 29,551 (Chien, 2021; Narayan, 2020; Zhang, 2021). The index then declined significantly until it reached its lowest level, 18,591, in late March. By the third week of May, the DJIA had risen again to 24,206 (Chien, 2021; Narayan, 2020; Zhang, 2021).
What about the stock exchange in China? This "bamboo curtain" country was the first location of the spread of the Covid-19 virus, and it has since managed to reduce the spread of the virus significantly (Akhtaruzzaman, 2021; He, 2020). In early January 2020, when the outbreak was not yet widespread, the Shanghai stock index stood at 3,116. In the fourth week of March, the Shanghai index slumped to a yearly low of 2,660 (Akhtaruzzaman, 2021; He, 2020). In line with the subsiding outbreak, by the third week of May the Shanghai stock index was already back at the level of 2,899. Stock exchanges in Europe show much the same picture. The British FTSE index, for example, had touched the level of 7,675 in January 2020 (Ashraf, 2020). The FTSE corrected until the end of March to below the 5,000 level; however, by the third week it had moved back to the range of 6,002. Consider also the Singapore stock exchange index (Straits Times Index/STI). The STI stood at 3,281 in mid-January 2020; its low was recorded at the end of March, when it corrected to 2,233, and by the third week of May 2020 the STI was already back at the level of 2,581 (Liu, 2020; Yong, 2021). Conditions on the various world exchanges show the same pattern (Akhtaruzzaman, 2021; Al-Awadhi, 2020; Ashraf, 2020; Chien, 2021; He, 2020; Liu, 2020; Narayan, 2020; Zhang, 2021). Investors around the world experienced the same negative effects, namely suffering huge potential losses due to the Covid-19 pandemic (Caraka, 2020; Prem, 2020). However, in the past two months, investors around the world have also felt the momentum of global stock indices, giving a positive signal for the growth prospects of stock indices in the future (Youssef, 2021). In general, investors around the world have a similar opportunity if they re-enter the stock market today, to seize the potential for large profits in the future (Youssef, 2021). One of the things that investors can take into consideration in determining their investment decisions is their own financial position (Chiang, 2022; Youssef, 2021). Because of the information imbalance between managers and investors, it is very difficult for investors to objectively distinguish between good-quality companies (Byun & Oh, 2018) and poor-quality companies (Su et al., 2016). Meanwhile, managers of both good-quality and poor-quality companies will claim impressive growth or implicitly suggest that the companies they manage are of good quality (Phornlaphatrachakorn & Na-Kalasindhu, 2020; Surroca, 2020). Managers also often claim to have attractive profitability prospects (D. Abbas et al., 2018). Until time proves which claims are true, low-quality companies benefit from making untrue claims as long as investors believe them. That is, companies that are not actually of good quality benefit by implying certain qualities or actions.
Signal theory was initially developed to explain the problem of information asymmetry in labor markets (D. S. Abbas, Ismail, Taqi, & Yazid, 2021). As it developed, signal theory was applied to answer questions regarding matters specifically inherent to the firm; that is, it has been extended into various applications within the company. The existence of this information-imbalance problem, however, has made investors assign low valuations to all companies (D. S. Abbas & Hidayat, 2022). Overshadowed by doubts about a company's true quality, and by the shared assumption that companies are generally not good, investors arrive at the general presumption that all companies are bad (D. S. Abbas, Ismail, Taqi, & Yazid, 2021). In signaling theory, this is called a pooling equilibrium: good and bad companies are placed at the same valuation, and all companies are treated as not good. Stock return is the result obtained from investment activities. Return is divided into two types, namely realized return (the return that actually occurs, also referred to as real return) and expected return (the return expected by investors) (Ashraf, 2020; Liu, 2020; Prem, 2020). The expectation of obtaining a return also applies to financial assets (Chiang, 2022; Olivia, 2020). A financial asset reflects an investor's willingness to provide a certain amount of funds now in order to obtain a future flow of funds, as compensation for the time during which the funds are invested and the risks borne (Chiang, 2022; Olivia, 2020). Stock returns can be measured using technical and fundamental analysis. Fundamental analysis calculates the intrinsic value of stocks using company financial data, such as profits, dividends paid, sales, and so on, and is widely used by academics. This research uses business strategy, economic value added, managerial ownership and company size, which are expected to produce a more accurate assessment of stock returns by observing market behavior and internal conditions simultaneously (Ji, 2020). Business strategy is a set of actions aimed at achieving long-term goals and building the company's strength to face business competition (Chowdhury, 2022). The implementation of business strategy is an important managerial task in achieving organizational success (Ismail, 2013). The managerial task of implementing and executing these strategic options requires an assessment that develops the organization's capability needs and the achievement of targeted goals (Chowdhury, 2022; Sumirat, 2020). The right choice of strategy will achieve superior performance for the organization. This choice of strategy needs to be considered in creating value for consumers and generating competitive advantage for the company (Chowdhury, 2022). Chowdhury (2022) and Tang (2021) show that business strategy affects stock movements.
Economic value added (EVA) is an indicator used to measure the creation of added value from an investment (Dias, 2020). The strength of the EVA concept is that companies can determine how successfully added value has been created from the investments made, while also knowing the actual cost of the capital invested, so that the net return on capital can be shown clearly (Bernardelli, 2021; Kristanti, 2022). In addition, EVA is a measure of the economic value added that a company generates as a result of management activities or strategies (Zakirova et al., 2021). A positive EVA indicates that the company has succeeded in creating value for the owners of capital because it is able to generate income exceeding the cost of capital (Hutorov et al., 2018); this is in line with the goal of maximizing firm value. Conversely, a negative EVA indicates that the value of the company decreases because the rate of return is lower than its cost of capital (Zakirova et al., 2021). Managerial ownership is share ownership by company management, measured as the percentage of shares owned by management (Basheer, 2018). Managerial ownership helps unify the interests of managers and shareholders: it aligns the interests of management with those of shareholders, so that managers directly benefit from the decisions taken and also bear the losses of wrong decisions (Mohammadi et al., 2020). Firm size is the scale of a company; based on firm size, companies are divided into large and small companies (Loang, 2021). In other words, firm size can be expressed as the market value of a company, obtained by multiplying the market price of the stock by the number of shares issued (D'Souza, 2018). Small companies are marginal in capability, so their share prices tend to be more sensitive to economic changes and they are less likely to thrive in difficult economic conditions (Muhammad Anwar, 2018). As the Covid-19 pandemic has come to an end, three major groups of entrepreneurs have emerged around the world (Naseem, 2021; Shahzad, 2021). Does this indicate that, based on firm size, companies are divided into large and small companies (Loang, 2021)? The first group comprises those whose business fields were most affected by the Covid-19 pandemic, such as the tourism sector, lifestyle goods, shopping centers, and cafes. They currently only need to survive, and it is relatively difficult for them to invest (Naseem, 2021; Shahzad, 2021). Does this indicate that such a company will pursue a business strategy that keeps its business stable over time by delaying bad news to some extent (Bentley et al., 2013)?
The second group consists of entrepreneurs whose business fields were affected by the pandemic but experienced a decrease in turnover of only 30-50% (Naseem, 2021). They generally still have adequate cash flow. Instead of using existing cash flow for uncertain business development, they can seize the opportunity presented by relatively low stock prices by investing in portfolios in the stock market (Mishra, 2020; Shahzad, 2021). Does this indicate that a negative economic value added signals that the value of the company decreases because the rate of return is lower than its cost of capital (Zakirova et al., 2021)? The third group consists of business people who actually earned large profits during the Covid-19 pandemic, for example food businesses, mask and PPE manufacturers, and other sectors producing goods needed during the pandemic (Chiang, 2022; Mishra, 2020; Naseem, 2021; Shahzad, 2021). Those in this group take advantage of the opportunity to invest in the current stock market by allocating business profits to stock instruments listed on the IDX. Does this indicate that managers will directly benefit from the decisions made and also bear the losses of wrong decisions (Mohammadi et al., 2020)? Based on this description of the three major business groups and the associated research questions, further confirmation is needed. The manufacturing sector was chosen because manufacturing companies experience fluctuating and significant capital structure growth every year; in addition, manufacturing companies can describe the economic performance of all companies in Indonesia. This confirms that the motivation of this research is important to develop further, so that it can contribute to the existing research on the factors that affect stock returns. In addition to the phenomena described above, which form the background of this research, the researchers suspect that business strategy, as an independent variable, will be able to affect companies' stock returns after the Covid-19 period. Therefore, the researchers are interested in examining further how business strategy, economic value added, managerial ownership and firm size, as independent variables, affect the stock returns of manufacturing companies during the Covid-19 period of 2016-2020. METHODS This study uses quantitative methods with an associative level of explanation, namely using data sourced from the companies' annual financial statements, where the data obtained are in the form of numbers (Sekaran & Bougie, 2016). The level of explanation, or problem formulation, in this study is used to determine the relationship between two or more variables whose relationship is causal, that is, one (independent) variable affects the other (dependent) variable (Sekaran & Bougie, 2016). The population in this study is all manufacturing companies in the Consumer Goods Industry sector listed on the Indonesia Stock Exchange (IDX) for the 2016-2020 period. The sample was selected using a purposive sampling technique, which yielded 8 companies that met the criteria from a total population of 40 companies.
The data used in this study are secondary data collected using the documentation method, namely by collecting annual report data for companies listed on the Indonesia Stock Exchange (IDX). Documentation, according to Sekaran & Bougie (2016), is a method of data collection that is not addressed directly to the subject of research. In accordance with the identified problem and the model compiled, the operational variables used are listed in Table 1. The data analysis technique used is panel data regression analysis. Panel data regression combines cross-sectional data and time-series data, where the same cross-sectional unit is measured at different points in time (Sekaran & Bougie, 2016). RESULTS The highest standard deviation, 90.82510, was obtained for business strategy, which indicates that the average manufacturing company in Indonesia from 2016 to 2020 is estimated to have made considerable efforts to achieve long-term goals and to build strength against competitors. This is because in 2019-2020 Indonesia experienced the Covid-19 outbreak, resulting in a high level of risk vulnerability in various business sectors in Indonesia, especially manufacturing. The next stage of data testing was to identify the optimal analytical model so that the chosen model could be carried forward to the analysis stage. Based on the regression-model tests, ordinary least squares (i.e., the pooled or common-effect specification) proved to be applicable. The classical assumption tests for this regression were then examined. The multicollinearity test showed no variable values exceeding 0.8, so it can be concluded that the regression model in this study shows no symptoms of multicollinearity. Furthermore, the heteroscedasticity test showed a Breusch-Pagan LM value of 0.1866 > 0.05, which means that the model shows no symptoms of heteroscedasticity. The R-squared value of 0.16116 means that business strategy, economic value added, managerial ownership, and size together explain about 16.11 percent of the variation in stock returns.
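Since the authors report that ordinary least squares is applicable to their panel, the estimation step can be illustrated with a pooled OLS sketch such as the one below. This is only a schematic, not the authors' code: the variable names are placeholders, the data are randomly generated for demonstration, and entity or time effects (which the pooled specification omits) would require a dedicated panel estimator instead.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic firm-year panel purely for demonstration; replace with the real
# tabulated variables (stock return, business strategy, EVA, ownership, size).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "stock_return": rng.normal(size=40),
    "strategy":     rng.normal(size=40),
    "eva":          rng.normal(size=40),
    "mgmt_own":     rng.uniform(0.0, 0.3, size=40),
    "size":         rng.normal(28.0, 1.0, size=40),
})

# Pooled (common-effect) OLS: stock_return ~ const + strategy + eva + mgmt_own + size
X = sm.add_constant(df[["strategy", "eva", "mgmt_own", "size"]])
model = sm.OLS(df["stock_return"], X).fit()
print(model.summary())   # coefficients, t-statistics, and R-squared
```

With the study's actual data, the R-squared reported by this summary would correspond to the 0.16116 value quoted above; fixed- or random-effect alternatives would normally be compared via Chow and Hausman tests before settling on the pooled model.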
DISCUSSION Based on Table 5, these results reflect that signaling theory can describe the actions of a company's management in providing clues to investors about how management views the company's prospects. In addition, the sample companies have implemented a business strategy with a defender pattern in terms of cash flow and profitability. Defender companies place greater emphasis on efficiency, which helps the company to obtain high profits. The share of profits earned by the company from operating activities will eventually be distributed to shareholders, in return for their investment in the company, as dividends. Companies that generate greater profits are able to distribute larger dividends to shareholders, and this increase in dividends results in an increase in the returns received by shareholders. The right choice of strategy will achieve superior performance for the organization; this choice needs to be considered in creating value for consumers and generating competitive advantage for the company (Chowdhury, 2022). Chowdhury (2022) and Tang (2021) show that business strategy affects stock movements. The results of this study are in line with the research of Chowdhury (2022) and Tang (2021), which concludes that business strategy affects stock returns. Based on Table 5, economic value added has no partial effect on stock return. This result reflects that signaling theory has not been able to describe management's actions in providing clues to investors about how management views the creation of added value from an investment. In addition, the sample companies have not applied economic value added. The reason is that even though a company's economic value added rises, the return that investors receive will not necessarily rise as well, and vice versa. This shows that economic value added analysis is not used as a basis for investors' decisions to buy or sell the company's shares, and is also not used by the company's management in setting dividend distribution policy. Changes in stock returns are influenced more by the ups and downs of the stock price: if the stock price increases, the return received by investors also tends to rise. This is in line with the goal of maximizing firm value. Conversely, a negative economic value added indicates that the value of the company decreases because the rate of return is lower than its cost of capital (Zakirova et al., 2021). The results of this study are in line with the research of Kristanti (2022) and Zakirova et al. (2021), which concludes that economic value added has no effect on stock returns.
Based on Table 5, these results reflect that signaling theory can describe management's actions in providing clues to investors about how management helps unify the interests of managers and shareholders. In addition, the sample companies already have managerial ownership, which aligns the interests of management with those of shareholders, so that managers directly feel the benefits of the decisions taken and also bear the losses of wrong decisions (Mohammadi et al., 2020). This means that managerial ownership has a significant and positive influence on stock returns: the greater the proportion of managerial ownership in a company, the more actively management will strive to meet the interests of shareholders, who include themselves. Management that increases the amount of discretionary accruals causes reported profits to increase. In an efficient market, an increase in profit is reacted to positively by the market, so that the market price of the companies' shares rises, which in turn increases the return obtained by shareholders. The results of this study are in line with the research of Mohammadi et al. (2020), which concludes that managerial ownership has an influence on stock returns. Based on Table 5, firm size has no significant effect on stock returns. This result reflects that signaling theory has not been able to convey information about the amount of assets, which reflects operating cash flow, sales, debt levels and company size, as contained in the reports through which management or internal parties account for their performance. In addition, the sample companies do not have a definite scale by which firm size, measured through total assets, net sales, and market capitalization, would guarantee the company easier access to additional funds in the capital market compared with small companies. Moreover, investors should not judge a company by its size alone: large companies do not always derive their large total assets from their own capital, since that capital can come from loans that must later be repaid, which can result in small returns for shareholders. The results of this study are in line with M. Anwar (2018), who states that firm size has no effect on stock returns. CONCLUSION Signal theory basically argues that the information provided as a signal to investors and stakeholders can concern the return on investment; if stock return information has a positive value, the market is expected to respond well to it. This is shown by the contribution of the business strategy and managerial ownership variables, which have a positive effect on the stock returns of manufacturing companies listed on the Indonesia Stock Exchange (IDX). Likewise, company size contributes to influencing stock returns in a positive direction, although only weakly; this shows that the sensitivity of manufacturing companies' stock returns to company size is still relatively low. For manufacturing companies in Indonesia listed on the IDX during the 2016-2020 period, however, EVA had a negative but not significant influence on stock returns.
This research was carried out to the best of the researchers' ability, but owing to limited research resources the study has some shortcomings. First, tabulating the data and measuring the variables still relied on a manual approach, so the risk of calculation errors is high. Second, the number of samples was limited because of the mismatch between the sample criteria and the variable measurements: the situation of manufacturing companies in Indonesia is not completely normal, with many companies having suffered consecutive losses or delisting, so the data obtained for 2016-2020 made the calculation results less than perfect.
Table 1. Operational Variables
Firm size: Size = log(Total Assets_jt)
Stock return: the realized (real) return in period t, defined as the difference between the current price and the prior price, R_it = P_it − P_it−1
Table 5. Summary of Research Hypotheses
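To make the operational variables in Table 1 concrete, the following is a minimal pandas sketch of how such firm-year measures could be constructed. All column names and figures are hypothetical; the stock-return line follows the price-difference definition quoted in Table 1 (the percentage form is included only for comparison); and the EVA line uses the standard textbook formula (NOPAT minus the capital charge), since the paper's exact EVA operationalization is not spelled out in this excerpt. Business strategy is omitted because its measurement is likewise not specified here.

```python
import pandas as pd
import numpy as np

# Hypothetical yearly panel: one row per (firm, year); values are illustrative only.
df = pd.DataFrame({
    "firm":             ["A", "A", "B", "B"],
    "year":             [2019, 2020, 2019, 2020],
    "price":            [1500.0, 1650.0, 820.0, 700.0],   # closing share price
    "nopat":            [120.0, 135.0, 40.0, 22.0],       # net operating profit after tax
    "invested_capital": [900.0, 950.0, 400.0, 410.0],
    "wacc":             [0.10, 0.11, 0.12, 0.12],         # weighted average cost of capital
    "mgmt_shares":      [2.0e6, 2.0e6, 1.0e5, 1.0e5],
    "total_shares":     [1.0e8, 1.0e8, 5.0e7, 5.0e7],
    "total_assets":     [3.2e12, 3.4e12, 8.0e11, 7.9e11],
}).sort_values(["firm", "year"])

# Stock return per Table 1 (price difference); the conventional percentage return is shown alongside.
df["return_diff"] = df.groupby("firm")["price"].diff()        # R_it = P_it - P_it-1
df["return_pct"]  = df.groupby("firm")["price"].pct_change()  # (P_it - P_it-1) / P_it-1

# Textbook EVA: NOPAT minus the capital charge (WACC x invested capital).
df["eva"] = df["nopat"] - df["wacc"] * df["invested_capital"]

# Managerial ownership as the fraction of shares held by management.
df["mgmt_own"] = df["mgmt_shares"] / df["total_shares"]

# Firm size as the logarithm of total assets (Table 1).
df["size"] = np.log(df["total_assets"])

print(df[["firm", "year", "return_diff", "return_pct", "eva", "mgmt_own", "size"]])
```

A frame built this way (one row per firm-year) is the natural input for the pooled regression sketched earlier in the results section.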
Glycerol: An Optimal Hydrogen Source for Microwave-Promoted Cu-Catalyzed Transfer Hydrogenation of Nitrobenzene to Aniline The search for sustainable alternatives for use in chemical synthesis and catalysis has found an ally in non-conventional energy sources and widely available green solvents. The use of glycerol, an abundant natural solvent, as an excellent “sacrificial” hydrogen source for the copper-catalyzed microwave (MW)-promoted transfer hydrogenation of nitrobenzene to aniline has been investigated in this work. Copper nanoparticles (CuNPs) were prepared in glycerol and the efficacy of the glycerol layer in mediating the interaction between the metal active sites has been examined using HRTEM analyses. Its high polarity, low vapor pressure, long relaxation time, and high acoustic impedance mean that excellent results were also obtained when the reaction media was subjected to ultrasound (US) and MW irradiation. US has been shown to play an important role in the process via its ability to enhance CuNPs dispersion, favor mechanical depassivation and increase catalytic active surface area, while MW irradiation shortened the reaction time from some hours to a few minutes. These synergistic combinations promoted the exhaustive reduction of nitrobenzene to aniline and facilitated the scale-up of the protocol for its optimized use in industrial MW reactors. INTRODUCTION The development of more direct catalytic approaches to the synthesis of chemical products is a key goal in achieving chemical sustainability. A synthetic process that combines heterogeneous catalysis in a green sustainable medium and non-conventional selective MW heating to promote fast chemical transformations is an appealing approach to the development of environmentally benign organic transformations. MW processing offers many advantages compared to classic conductive heating because the intrinsic properties of heat transfer by volumetric dielectric heating (Cravotto and Cintas, 2017). Fast heating rates, short processing times, instantaneous and precise electronic control, and clean heating profile are the main features of MW promoted processes (Rattanadecho and Makul, 2016). Since late 1960s the development of industrial applications of MW heating was mainly for drying and other thermal treatments. Today MW technology is exploited in several areas: drying (Feng et al., 2012), calcination, decomposition, polymerization (Kempe et al., 2011), chemical process control (Bhusnure et al., 2015), and production of nanomaterials (Dabrowska et al., 2018). In spite of the outstanding achievements in MW-assisted organic synthesis, its industrialization is limited to few applications. The exploitation of MW toward a sustainable economy, the discovery of novel waste-to-product approaches and the development of large scale protocol, that can provide products on a kg scale is the way to bypass the negative impact of the relatively high operating cost of MW processes, paving the way for a fully accepted technology. One way to pursue green synthesis is to improve the sustainable nature of the solvents, as they are directly responsible for the major environmental drawback generated by chemical processes. The use of bio-based and eco-friendly alternative solvents (Gandeepan et al., 2019) has been developed and evaluated over recent decades. 
Glycerol (1,2,3-propanetriol) is a common natural solvent that is rich in functionalities and is obtained in very large amounts as a co-product in biodiesel production (Cintas et al., 2014;Sudhakar et al., 2016). The rapid development of the biodiesel industry has resulted in an increase in glycerol production yields and a supply of low cost technical grade glycerol with a final purity of around 80-95% (Zhou et al., 2008). More than 90% of the glycerol used today is refined to give purities of higher than 97%, and the process can take purity up to from 99.5 to 99.7%. Intensive research is conducted to find ways to valorise glycerol and many fields of interest focus on its transformation into chemicals and hydrocarbon fuels (Dodekatos et al., 2018). Furthermore, its use as a convenient green reaction medium has been widely documented (Wolfson et al., 2009;Díaz-Álvarez and Cadierno, 2013;Díaz-Álvarez et al., 2014;Tagliapietra et al., 2015;Santoro et al., 2017). Reductive transformations are a vast class of chemical reactions that can be achieved both by catalytic processes that utilize molecular hydrogen, and others that use a less reactive hydrogen source (Filonenko et al., 2018). Rather than pressurized hydrogen, it is metal-catalyzed dehydrogenative transformations that are paving the way for sustainable processes with a high degree of control over selectivity and reaction rate. Coupled transfer hydrogenation-dehydrogenation reactions involve the transfer of one hydrogen from a donor molecule (alcohols, ethers and amines) to an unsaturated bond. A wide range of hydrogentransfer reactions have been studied thus far (Baráth, 2018), with primary and secondary alcohols usually being preferred for use. There is some preference for secondary alcohols as they are better donor molecules than primary alcohols because of the sigma inductive electronic effect. Glycerol can also be successfully employed as an environmentally benign "donor solvent" in transfer hydrogenation-dehydrogenation reactions for the reduction of ketones, aldehydes, olefins and aromatic nitro compounds. The preferred metallic catalytic systems for these processes are based on Ru, Pd, Ir, Ni, and bimetallic catalysts (Wolfson et al., 2009;Díaz-Álvarez and Cadierno, 2013). The reduction of aromatic nitro compounds is an important transformation that has been widely studied because anilines are important building blocks in the synthesis of pharmaceuticals and agrochemicals. Selective and complete reductions of nitrobenzenes in the presence of glycerol, used as a "sacrificial" hydrogen source, have been performed with Ni Raney (Wolfson et al., 2009) and in the presence of a recyclable catalyst made of magnetic ferrite-Nickel NPs (Gawande et al., 2012). Furthermore, bio-based glycerol has been exploited in the Ru-catalyzed, one-pot synthesis of imine and amine using nitrobenzene and alcohol as the starting materials (Cui et al., 2012(Cui et al., , 2013. An example of light-driven nitrobenzene reduction to aniline by transfer hydrogenation of glycerol has been described catalyzed by Pd/TiO 2 (Zhou et al., 2015). Glycerol has been also utilized to prepare 1,2,3trimethoxypropane a green alternative for petroleum-based solvents, such as THF, toluene and dichloromethane. 1,2,3-Trimethoxypropane has given good results in the Fe(acac) 3catalyzed transfer hydrogenation of carboxylic acids, nitriles, esters and nitrobenzene (Sutter et al., 2013). 
Despite its main disadvantage, i.e., its high viscosity at room temperature, glycerol is an optimal solvent for catalysis purposes because of its high polarity and capacity to remain in the liquid phase over a large temperature range (from 17.8 to 290 • C). Moreover, it has low vapor pressure, a long relaxation time and high acoustic impedance, meaning that it can be used under MW and US irradiation conditions. In fact, glycerol has a high loss factor, or loss tangent (tan δ = 0.651), at the standard MW frequency (2.45 GHz), which is indicative of high MW absorption and rapid heating. Several successful examples of MW-promoted organic syntheses in glycerol have therefore been described in the literature (Cravotto et al., 2011;Cintas et al., 2014). Glycerol can be used under sonochemical conditions, although greater amounts of energy must be supplied to overcome the cohesive forces in the liquid, as it is a viscous solvent. Similarly to other polyols (e.g., ethylene glycol and polyethylene glycol), glycerol can act as both a solvent and reducing agent of metal precursors, and several applications have been developed in the field of metal-nanoparticle synthesis. Furthermore, glycerol can act as a stabilizer of nanometric species, leading to the straightforward recycling of the catalytic phase (Chahdoura et al., 2014). The conventional and MW-assisted preparation of NPs in glycerol has already been described in the literature and this field of interest is continuously growing (Wang et al., 2018;Ghosh et al., 2019;Parveen et al., 2019;Vinodhini et al., 2019). CuNPs are an efficient source of Cu, but their potential applicability is restricted by copper's inherent instability under atmospheric conditions (Gawande et al., 2016). Cu(0)NPs have been efficiently prepared in glycerol (Dang-Bao et al., 2017), and some polyolstabilized CuNPs have been used in the reduction of nitrobenzene (Saha and Ranu, 2008;Duan et al., 2012;Santhanalakshmi and Parimala, 2012). Although several applications for copper catalysis in transfer hydrogenation have already been reported in the literature (Štefane and Požgan, 2014;Fan et al., 2018;Zhang and Li, 2019), the use of CuNPs for nitrobenzene reduction via transfer hydrogenation has not received much attention, to the best of our knowledge (Feng et al., 2014). In fact, the common approach to nitro benzene reduction by Cu catalysis is performed in the presence of NaBH 4 , which is used as a hydride donor (Wu et al., 2013;Aditya et al., 2017;de Souza et al., 2017). Herein, glycerol has been exploited as an efficient capping agent in the production of CuNPs, and as a solvent and hydrogen donor in the Cu-catalyzed reduction of nitrobenzene derivatives. Non-conventional, non-contact energy sources have been utilized to create new transfer hydrogenation processes that benefit from additional actuation via intensified mass and heat transfer. The ability of dielectric heating and US irradiation to maximize catalyst dispersion have been explored with the aim of enhancing reaction rate. All of the activities that are presented herein have the final aim of developing a knowledge-based strategy and selecting appropriate technologies for the scale up of the optimized reaction to an industrial MW instrument. RESULTS AND DISCUSSION The high reactivity of CuNPs and their recyclability have driven us to optimize the nitrobenzene reduction to aniline in the presence of CuNPs. NPs were prepared using a slight modification to a published procedure (Zhang et al., 2009). 
We decided to avoid the use of poly N-vinyl pyrrolidone (PVP) when investigating the efficacy of glycerol alone as a means to cap and stabilize the NPs. Cu(0) NPs were prepared according to the "bottom-up" approach; by dispersing CuSO 4 in a basic solution (pH 11) of water and glycerol (5:1), using the polyol as the stabilizer and solvent. NaBH 4 was immediately added and the deep blue solution became a colorless one in which the dark NPs could be identified (Supplementary Figure 7). Transmission electron microscopy (TEM) and particle-size distribution analyses were performed to characterize the prepared catalyst. CuNPs with a roundish shape and an average size of 10.2 ± 3.0 nm were obtained. Moreover, the NPs tended to form aggregates (Figure 1a). HRTEM analyses showed that metallic crystalline Cu is formed, as demonstrated by the presence of SCHEME 1 | CuNPs-catalyzed reduction of nitrobenzene (1a) using glycerol as reducing agent to obtain aniline (2a). diffraction fringes with spacing associated to the (1 1 1) plane of metallic Cu in the cubic phase (JCPDS file number 00-001-1242), as reported in Figures 1b,c. Moreover, an amorphous layer was observed around the NPs (Figure 1b, red arrow). The characteristics of the synthetic procedure and the contrast phase make it reasonable to propose that the CuNPs are coated with a glycerol layer that acts both as a protecting agent and stabilizer. Indeed, the NPs did not coalesce under the electronic beam of the instrument, and presumably do not do so under the reaction conditions, proving that they are quite stable. The synthesized NPs can be described as having a core-shell morphology, in which the core is made up of crystalline Cu 0 and the shell by glycerol molecules, as depicted in Figure 1c. The existence of this morphology suggests that the glycerol layer mediates interactions between the metal active sites and the reagents. Moreover, besides having a stabilizing function, the glycerol layer can moderately promote NP dispersion in the glycerol solvent, therefore enhancing the active metallic surface area of the catalyst. The reduction of nitrobenzene (1a) to aniline (2a) was performed in the presence of CuNPs and glycerol was used as the solvent and "sacrificial" hydrogen source (Scheme 1). As shown in Table 1, the reaction parameters were optimized by varying the nature of the base, the reaction temperatures and catalyst amounts. Several bases were tested: KOH, NaOH, K 2 CO 3 , CsCO 3, and all the reactions were performed at 130 • C, taken as the optimal reaction temperature, while the reaction time was fixed at 5 h. KOH showed the best results ( Table 1, entry 5). Excellent results were obtained in the presence of 2 eq of base and 5 mol% of CuNPs. The influence of US irradiation, given that the CuNPs usually aggregate, was investigated in order to improve reaction rate. US is known for its capacity to enhance particle dispersion and favor mechanical depassivation (Banerjee, 2019). Particlesize distribution was therefore measured before and after US treatment. Freshly prepared CuNPs were sonicated for 10 min [UP50H, F(kHz):30, P(W):50] until a perfectly dispersed black solution was obtained (Supplementary Figure 8). Offline particle-size distribution measurements (based on volume) were acquired and compared with those of freshly prepared NPs (Figure 2, red and blue curves). 
A laser diffractometer (Malvern, MasterSizer 3000 hydro SV) was employed and particle sizes were determined by measuring the intensity of scattered light when laser beam passed through a dispersed particulate sample. 0.5 mL of the sample (c NP = 1 g/L) was injected into 6.5 mL of deionised water in the measurement cell (so that the resulting concentration in the cell was 0.07 g/L) and mixed for 5 min using a built-in magnetic stirrer. The obtained scattering curves were averages of three subsequent measurements. Unlike the TEM observations, the Cu particles here had larger sizes due to the formation of aggregates, with an average size of 100 µm when suspended. US irradiation significantly influenced particle magnitude, with the size decreasing to 20 nm. A kinetic study was performed to assess the influence of US sonication on the reaction rate; the nitrobenzene reduction was carried out under optimal reaction conditions in the presence of freshly prepared CuNPs and pre-sonicated CuNPs. As shown in Figure 3, sigmoidal behavior was observed when the reaction was performed under conventional conditions and full conversion was obtained in 2 h. US pretreatment clearly had a significant effect on the reaction rate, and the complete conversion of nitrobenzene to aniline was achieved in 1 h. Excellent results were also obtained when the transfer hydrogenation of nitrobenzene was performed under MW irradiation, with the reaction time falling to 30 min (Table 1, entry 13). A slight effect on reaction rate was also observed when the reaction was performed under US irradiation, and full conversion was observed after 1 h (Table 1, entry 12). Influence of MW Irradiation Glycerol's high boiling point combined with its low environmental impact, cost, vapor pressure and high dielectric constant make it an optimal candidate for MW-promoted organic syntheses. Furthermore, as described in (Van De Kruijs et al., 2010), the interactions between MW and the heterogeneous metal-catalyst particles generates electrostatic discharges that can lead to the formation of active particles in the reaction media (Van De Kruijs et al., 2010;Horikoshi and Serpone, 2014). It is for both of these reasons that MW irradiation was chosen for the optimization of the CuNPs-catalyzed transfer hydrogenation of nitrobenzene in glycerol. A preliminary evaluation of the influence of MW irradiation on the nitrobenzene reduction was performed in a multimode MW oven (CEM Mars 5, max. power 400 W). The reaction was carried out at 130 • C, and nitrobenzene was completely converted to aniline after 30 min (Table 1, entry 13). In order to identify the best technology for reaction scaleup, a number of MW devices were used for MW-assisted batch syntheses of aniline. Reactions were performed in constant temperature mode and, separately, in constant power mode in two monomode MW instruments (Anton Paar Monowave 300 and CEM Discover SP) and two multimode MW systems (CEM Mars 5 and Milestone MicroSynth). When constant temperature mode was used, the power was automatically adjusted to reach the set temperature as quickly as possible and then maintain it using a dynamic feedback power loop. In constant power mode, the instrument adjusted the power to reach the reaction temperature as quickly as possible and then set the chosen constant power (Figure 4, gray profiles). In order to maintain the desired temperature, the selected power was carefully evaluated in advance using trial-error methodology. 
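The "constant temperature" mode described above relies on a dynamic feedback loop that continually adjusts the applied power, whereas "constant power" mode holds a pre-evaluated fixed value. The vendors' control algorithms are not disclosed, so the sketch below is only a toy illustration of the feedback idea: a proportional controller acting on a crude first-order heating model, with every parameter (gain, heating and loss coefficients) invented for demonstration and not tied to any specific MW reactor.

```python
# Illustrative only: proportional power control on a toy heating model.
def simulate(setpoint_c=130.0, p_max=400.0, gain=100.0, minutes=30, dt_s=1.0):
    temp = 25.0        # starting temperature, degC
    heat = 0.002       # toy heating coefficient, K per (W * s)
    loss = 0.004       # toy heat-loss coefficient, 1/s
    log = []
    for step in range(int(minutes * 60 / dt_s)):
        error = setpoint_c - temp
        power = min(max(gain * error, 0.0), p_max)   # proportional term, clipped to 0..p_max
        temp += (heat * power - loss * (temp - 25.0)) * dt_s
        if step % 60 == 0:                           # record once per simulated minute
            log.append((step / 60.0, round(temp, 1), round(power, 1)))
    return log

for minute, temp, power in simulate():
    print(f"t={minute:4.1f} min  T={temp:6.1f} degC  P={power:6.1f} W")
```

In such a proportional-only loop the temperature settles slightly below the set point (a steady-state offset), which is why practical controllers add integral action; the relevant point here is simply that the delivered power fluctuates continuously in constant-temperature mode, in contrast with the single fixed value used in constant-power mode.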
As demonstrated in Figure 4, the constant power set for monomode MW instruments was 4 W, while 80 W was needed in the multimode MW. As depicted in Table 2, while the reactions were complete after 30 min in all the experiments, substantial differences after 15 min of reaction time depending on the type of MW cavity (monomode vs. multimode MW cavity) and on the temporal heating profile (constant power vs. constant temperature) were observed. Better results were always obtained after 15 min when the power was maintained constant (Table 2, entries 2, 4, 6, 8). However, only in multimode-assisted reactors was full conversion obtained. The temperature and power profiles in all the MW instruments were registered when working with both methods: fixed temperature (Figures 4A,B) and fixed power (Figures 4C,D). Multimode systems have lower power density than monomode devices because of their large chambers, and higher power (80 W) is therefore required to bring the reaction mixture to the desired temperature. Higher MW power produces higher conversions, as MW electron-field effects are very important, meaning that electrostatic discharges can be generated via the interaction between MWs and the heterogeneous CuNPs, leading to the formation of active species in the reaction media (Horikoshi et al., 2011). The differences observed when the reaction time was shortened to 15 min may be due to the amount of energy provided to the sample, as demonstrated by the fluctuation in the power provided when the system is set to work at constant temperature (see Figures 4A,B), resulting in worse conversions than those obtained with constant power. As depicted in Table 2, the results of the reactions in the two monomode instruments (CEM Discover and Anton Paar Monowave 300) were similar to each other, as were those of the reactions in the two multimode instruments (CEM Mars and Milestone Microsynth). In fact, reaction outcome did not depend on the instrument used, but differences between the monomode and multimode reactors were observed. One of the main issues when working with Cu 0 -based NPs is their susceptibility to oxidation. Agglomeration is yet another effect that is usually observed when working with these small particles. Although the influence of MW irradiation on NP agglomeration has not yet been thoroughly studied, a few publications by Serpone et al., have attempted to evaluate the formation of aggregates of activated-carbon-supported Pd as caused by an excessive number of hot spots. In this work, some experiments were carried out in a MW-reactor, Anton Paar Monowave 300, that was equipped with a USB Digital Microscope Supereyes B003+ in order to better understand CuNP behavior inside the MW cavities. This multi-function microscope allowed us to follow the reaction when MW irradiation was applied. Firstly, glycerol was added to the test tube together with sequentially increasing quantities of CuNPs (2.5, 10, 20, and 40 mg). As showed in Figure 5 entries 1-2, the brown-dark CuNPs were suspended in glycerol and after 3 min were dispersed in the solvent. This behavior depends by the high viscosity at room temperature of glycerol that decrease by heating. After approximately 2 min of irradiation the magnetic bar started an efficient stirring. As can be observed, the concentration of CuNPs highly influenced aggregation, and the precipitation of large particles can be observed after a few minutes when working with 10 mg/3 mL or higher concentrations. 
The particle-size distribution of CuNPs was therefore measured after the MW-promoted reduction of nitrobenzene in glycerol was performed in the same instrument (Anton Paar Monowave 300), and a similar size-distribution profile to that of freshly prepared NPs was detected (Figure 2, green profile). MW Scale-Up The scale-up of MW heating constitutes a growing demand for industry thanks to the successes achieved on the lab scale. There are two known paths for this task: the continuousflow method and batchwise. At 2.45 GHz, the most commonly used MW frequency, the MW-penetration depth in common polar solvents is around a few centimeters. Because of this, the heating of bulk samples using MW irradiation has several limitations. Controlling the stirring rate to ensure and maintain the homogeneity of a solution and avoid thermal gradients must therefore be considered. As observed in the literature, most of the reactions that are accomplished under MW irradiation are performed at high temperatures in sealed vessels. Several approaches with different processing techniques can be used to scale-up a wide range of MW-promoted reactions (Bowman et al., 2008), and the use of open reaction vessels in batch mode offers operational advantages that can address scale-up needs. The small cavity of monomode MW apparatus has to be replaced with a larger multimode unit when high volumes are processed. The reduction of nitrobenzene to aniline, which has already been optimized for a 15 mL volume (1 mmol of substrate) in our study, was scaled up to 500 mL to perform the scale-up experiments. All scale-up experiments were performed in a MW instrument MEAM Explorer VP (http://www.meam.be/; Figure 6), which is a MW multimode oven designed as a multipurpose test device for various applications. This unique design provides access on four sides of the cavity, allowing multiple connections to be used for sensors and entrances/exits for gases or liquid products. In this case, the opening on the right side was used to measure the IR temperature of the sample, while the opening on the top was used to insert the glass stirring rod. Emissivity can be defined as the effectiveness of emitting energy in form of thermal irradiation and varies from 0 to 1. It is of high significance when the temperature is measured with an IR camera. Transmissivity refers to the proportion of the radiation that hits a body and ends up being transmitted through it without being absorbed or reflected, and describes the level of infrared radiation that permeates the object. While the emissivity value is intrinsic and only depends on sample nature, the transmissivity value also depends on the shape of the vessel and its material. Both transmissivity and emissivity have to be measured to ensure the optimal calibration of the IR camera, allowing the temperature measured by the system to be as accurate as possible. The emissivity and transmissivity of the reaction mixture were determined by comparing the solution temperature measured by a thermocouple and an IR camera; the factors were 0.95 and 0.48, respectively. It is important to always place the IR camera in the same location in order to maintain constant parameter values. Once those parameters were obtained, a number of experiments were performed. Preliminarily, the reaction was carried out at 130 • C in a 250 mL round-bottom flask and a glass stick was used to stir the 90 mL solution (6 mmol). 
Two reactions were performed: one experiment was carried out at constant power (25-30 W), while power was varied in the second (from 40 to 0 W). The temperature was maintained constant at 130 • C in both experiments. The histogram density of temperature in the cavity was registered using an IR camera (Optris PI Connect, Figure 7). As indicated by the temperatures next to the original figure, the maximum temperature inside the flask is 130 • C and the average of area 2, bulk volume inside the flask, is 114 • C. The input power and reflective power were measured in order to identify the power value that was actually absorbed by the reaction solution. As depicted in Figure 8, a higher percentage of the delivered power was absorbed when power was maintained constant (Figure 8A compared to Figure 8B). A number of reactions were performed to optimize the MW-heating protocol and scale-up the reaction. Results are summarized in Table 3. A 50% yield of aniline was obtained after 20 min of MW irradiation when working with constant power when the reaction was performed on the 6 mmol scale (90 mL glycerol) in the presence of 10 mol% of CuNPs. The conversion was reduced to 35% with varying power, which demonstrates that the efficacy of MW promotion on the Cucatalyzed transfer hydrogenation of nitrobenzene is reduced when fluctuating power is used. In agreement with our previous results on the laboratory scale, the pretreatment of CuNPs with US significantly enhanced the reaction rate (Table 3, entry 3); 66% reduced product was achieved in this case. When presonicated, the amount of catalyst could be reduced from 10 to 5 mol% without a significant decrease in yield ( Table 3, entry 4), while an increase in reaction temperature to 150 • C yielded 78% amino derivative in 20 min. As shown, full conversion of nitrobenzene to aniline was obtained when the reaction time was increased to 45 min ( Table 3, entry 6). When the reaction was performed on a larger scale (18 mmol nitrobenzene/270 mL of glycerol), the reaction time was increased to 1 h in order to complete the reduction. Moreover, the reaction was also performed in a 1 L flask (36 mmol/540 mL of glycerol) with the initial solution being sonicated. A 95% yield was achieved once this reaction mixture was heated for a total time of 1 h. CONCLUSIONS Glycerol has been studied as a hydrogen donor for the exhaustive, fast and reproducible Cu-catalyzed transfer hydrogenation of nitrobenzene to aniline. Small size, roundish-shape CuNPs were prepared in glycerol and, using HRTEM, it was possible to observe that the polyol layer mediates the interaction between the metal active sites and stabilizes NP function. NP dispersion in glycerol was promoted by US irradiation and excellent results (complete conversion and >95% yield) were obtained after 2 h when CuNPs were employed for nitrobenzene reduction under conventional heating conditions at 130 • C. The high polarity and low vapor pressure of glycerol allowed the effects of MW irradiation to be fully explored and, gratifyingly, the reaction was shortened to 15 min. On the basis of this detailed study, a constant power MW protocol has been optimized and the reaction was scaled-up to 36 mmol/500 mL of glycerol in a multimode industrial MW reactor. General All commercially available reagents and solvents were used without further purification. 
Reactions were monitored by TLC on Merck 60 F254 (0.25 mm) plates (Milan, Italy), which were visualized by UV inspection and/or by heating after spraying with 0.5% ninhydrin in ethanol. Reactions were carried out in a conventional oil bath under magnetic stirring, under US irradiation (Hielscher ultrasonic horn UP50H) (Supplementary Figure 1), and in several MW devices operating in both monomode (Anton Paar Monowave 300, CEM Discover SP) (Supplementary Figures 2, 3) and multimode (CEM Mars 5 and MicroSynth) (Supplementary Figures 4, 5) configurations. A 1.2 kW multimode MW oven (Supplementary Figure 6) was used for MW-assisted reaction scale-up. NMR spectra (300 MHz and 75 MHz for 1H and 13C, respectively) were recorded. Chemical shifts were calibrated to the residual proton and carbon resonances of the solvent, CDCl3 (δH = 7.26, δC = 77.16). Chemical shifts (δ) are given in ppm, and coupling constants (J) in Hz. GC-MS analyses were performed on a GC Agilent 6890 (Agilent Technologies, Santa Clara, CA, USA) fitted with an Agilent Network 5973 mass detector, using a 30 m capillary column with an i.d. of 0.25 mm and a film thickness of 0.25 µm. GC conditions were: injection split 1:10, injector temperature 250 °C, detector temperature 280 °C. Gas carrier: helium (1.2 mL/min). Temperature program: from 50 °C (5 min) to 100 °C (1 min) at 10 °C/min, to 230 °C (1 min) at 20 °C/min, to 300 °C (5 min) at 20 °C/min. HRMS was determined using MALDI-TOF mass spectra (Bruker Ultraflex TOF mass spectrometer, Milan, Italy). Copper nanoparticles were characterized by transmission electron microscopy (TEM) and high-resolution TEM (HR-TEM). The measurements were carried out using a JEOL 3010-UHR instrument operating at 300 kV and equipped with a LaB6 filament. Digital micrographs were acquired using a Gatan (2k × 2k)-pixel Ultrascan1000 CCD camera and were processed using Gatan Digital Micrograph. In order to obtain good sample dispersion and avoid modifications that may be induced by the use of a solvent, the powders were briefly contacted with the Cu grids, which were coated with lacey carbon, resulting in some particles adhering to the grid via electrostatic interactions.

General Procedure for the Synthesis of Copper Nanoparticles

A copper(II) sulfate solution (1.5 mL of a 0.01 M solution in water/glycerol 5:1) was stirred and an aqueous 2 M NaOH solution was added dropwise to adjust the solution pH up to 11. After stirring for 10 min, 0.5 M NaBH4 in water was added. Initially, the deep blue solution gradually became colorless and then turned burgundy, which indicates the formation of the copper colloid. The CuNPs were filtered on a Büchner funnel with a sintered glass disc using water and methanol to wash the catalyst.

Nitrobenzene Reduction to Aniline

Optimized Nitrobenzene-Reduction Procedure Under Conventional Heating

CuNPs (3 mg, 5 mol%) were sonicated in a round-bottom flask with 3 mL of glycerol for 10 min [Hielscher ultrasonic horn UP50H, F(kHz): 30, P(W): 50]. A perfectly dispersed black solution was observed. KOH (112 mg, 2 mmol) and nitrobenzene (123 mg, 1 mmol) were then added and the reaction was heated at 130 °C under magnetic stirring for 2 h. The reaction mixture was cooled to room temperature and filtered to remove the CuNPs. Ten milliliters of water was added and the mixture was extracted with ethyl acetate (2 × 10 mL).
Aqueous HCl (0.01 M) was added to the organic phase and, after extraction, the aqueous phase was basified with NaOH (0.01 M), extracted with ethyl acetate (3 × 20 mL) and dried (Na2SO4). The product was analyzed using 1H NMR and 13C NMR spectroscopy. Isolated yield 97% (Supplementary Figures 9, 10).

General Procedure for US-Assisted Nitrobenzene Reduction

Nitrobenzene (625 mg, 5 mmol), KOH (560 mg, 10 mmol), and the nano-copper catalyst (15 mg, 5 mol%) were added to 15 mL of glycerol and the mixture was sonicated using a Hielscher ultrasonic horn UP50H [F(kHz): 30, P(W): 50] for 1 h. The reaction mixture was cooled down to room temperature and filtered to remove the CuNPs. Thirty milliliters of water was added and the mixture was extracted with ethyl acetate (2 × 30 mL). Aqueous HCl (30 mL, 0.01 M) was added to the organic phase and, after extraction, the aqueous phase was basified with NaOH (0.01 M), extracted with ethyl acetate (3 × 60 mL) and dried (Na2SO4). The product was analyzed using 1H NMR and 13C NMR spectroscopy. Isolated yield 98% (Supplementary Figures 9, 10).

FIGURE 7 | Histogram density of temperature recorded by the IR camera (Optris PI Connect). Reaction time: 10 min.

General Procedure for MW-Assisted Nitrobenzene Reduction

When prior catalyst dispersion was required, the CuNPs (15 mg, 5 mol%) were weighed, added to a round-bottom flask with 15 mL of glycerol and sonicated for 10 min [Hielscher ultrasonic horn UP50H, F(kHz): 30, P(W): 50]. A perfectly dispersed black solution was observed. KOH (560 mg, 10 mmol) and nitrobenzene (625 mg, 5 mmol) were then added and the reaction was carried out. Homogeneous MW distribution was ensured by a magnetic stirrer. Several MW devices were employed, both monomode systems (Anton Paar Monowave 300 and CEM Discover SP) and multimode systems (CEM Mars 5 and Milestone MicroSynth). Two different methods were used to apply MW irradiation: (a) fixed temperature and (b) fixed power.

- Monomode systems (Anton Paar Monowave 300, CEM Discover SP): 2 min were required to reach the reaction temperature (130 °C) using the program "heat as quickly as possible" (maximum power 400 W). When performing the reaction at fixed temperature, the program "hold" was selected in order to maintain the temperature constant (130 °C) during the reaction. In this mode, the MW reactor automatically adjusts the power to reach the indicated temperature. Reaction time: 30 min. When performing the reaction at fixed power, the program "constant power" was selected (4 W). In this way, a constant set power maintains the reaction mixture at the desired temperature (130 °C). Reaction time: 15 min.

- Multimode systems (CEM Mars 5 and Milestone MicroSynth): 2 min were required to reach the reaction temperature (130 °C) (maximum power 400 W) (CEM Mars 5: power 100%). When performing the reaction at fixed temperature, the power was set at 400 W and 130 °C was selected as the constant temperature; the MW reactor could then automatically adjust the power. Reaction time: 30 min. When the reaction was performed at fixed power, the mixture was irradiated at constant power (80 W) for the whole reaction time. Reaction time: 15 min.

TABLE 3 | Scale-up in an industrial multimode MW. Reaction conditions: nitrobenzene (1 eq), glycerol (200 eq), KOH (2 eq). (a) Conversions determined by GC-MS. (b) CuNPs were added into the glycerol and sonicated for 10 min, forming a perfectly dispersed black solution.

The reaction mixture was cooled to room temperature and filtered to remove the CuNPs.
Thirty milliliters of water was added and the mixture was extracted with ethyl acetate (2 × 30 mL). Aqueous HCl (30 mL, 0.01 M) was added to the organic phase and, after extraction, the aqueous phase was basified with NaOH (0.01 M), extracted with ethyl acetate (3 × 60 mL) and dried (Na2SO4). The product was analyzed using 1H NMR and 13C NMR spectroscopy. The isolated yield was 98%. When using the Anton Paar Monowave 300 reactor, the reaction temperature was controlled simultaneously by a ruby thermometer (a fiber-optic sensor that is immersed in the reaction mixture and accurately measures the internal temperature over the entire reaction process) and an IR sensor that provides a measurement of the external temperature of the reaction vials. When using the CEM Discover SP, the temperature was measured by an IR sensor, and fiber-optic sensors were installed in both multimode systems (CEM Mars 5 and MicroSynth).
Extension of Murray's law including nonlinear mechanics of a composite artery wall

A goal function approach is used to derive an extension of Murray's law that includes effects of nonlinear mechanics of the artery wall. The artery is modeled as a thin-walled tube composed of different species of nonlinear elastic materials that deform together. These materials grow and remodel in a process that is governed by a target state defined by a homeostatic radius and a homeostatic material composition. Following Murray's original idea, this target state is defined by a principle of minimum work. We take this work to include that of pumping and maintaining blood, as well as maintaining the materials of the artery wall. The minimization is performed under a constraint imposed by mechanical equilibrium. We derive a condition for the existence of a cost-optimal homeostatic state. We also conduct parametric studies using this novel theoretical frame to investigate how the cost-optimal radius and composition of the artery wall depend on flow rate, blood pressure, and elastin content.

Introduction

The inferred target radius of an artery then becomes a function of the flow conditions within the blood vessel (Murray 1926). In the present work, we extend this original idea of Murray so that also the material composition and wall thickness of the artery are determined by a minimum work principle. This development ties together with the previous work (Satha et al. 2014), where we studied how local changes in volumetric blood flow or pressure, due to, for instance, disease, injury, and surgery, trigger growth and remodeling (Humphrey 2002) toward a homeostatic target state. In this paper, we develop a theory that determines this target homeostatic radius, wall thickness, and material composition, the artery wall being a composite of different constituents with nonlinear material properties (Holzapfel et al. 2000). In order to keep the theory as simple as possible, we assume the vessel to be of cylindrical shape, and we use a theory for thin-walled structures.

The blood vessel wall mainly consists of elastin, collagen, and smooth muscle (Boron and Boulpaep 2008, pp. 473-481). Thus, we model the vessel wall as a composite of multiple orthotropic, nonlinear elastic materials that deform together as the vessel stretches in the circumferential direction due to the transmural pressure, as described in the literature (Humphrey and Rajagopal 2002; Gleason and Humphrey 2004; Satha et al. 2014). The target composition and radius are assumed to minimize the cost, that is, the power per unit length of blood vessel, required to maintain and pump the blood contained within the vessel and to maintain the materials of the vessel wall, as previously proposed (Taber 1998; Klarbring et al. 2003; Liu and Kassab 2007). The goal function of the system is then taken to be this cost function subject to the constraints imposed by the mechanical equilibrium of the vessel wall.

Since the elastin content changes very slowly in the vascular system of adult individuals (Tsamis et al. 2013), the amount of elastin is essentially beyond the control of the growth and remodeling process. Therefore, we regard the amount of elastin as a parameter to the system. The goal function is then parameterized by the blood pressure, the volumetric flow rate, and the amount of elastin. These parameters, in turn, are functions of time, and their fluctuations lead to fluctuations of the target geometry and composition.
Experimental studies show that an increased blood pressure p increases the thickness of the vessel wall through growth and that the vessel adapts to achieve a homeostatic state (Matsumoto and Hayashi 1996;Hu et al. 2007). These studies also show that changes in blood pressure affect the material composition of the vessel wall. Similarly, the volumetric flow rate u has a strong impact on a blood vessel's radius and composition: The radius r is increased when the flow rate is increased, so that the shear stress of the fluid on the epithelial cells, that is, the interior lining of the vessel wall, is kept at a homeostatic state (Brownlee and Langille 1991). On a longer timescale, the material composition of the vessel wall also changes with increased flow rate (Kubis et al. 2001). It was suggested in an early work by Murray (1926) that the target dimensions of the blood vessel are governed by the minimization of metabolic power needed to maintain the materials of the vascular system and to overcome the hydrodynamic resistance from the vessel for a given demand of supplied blood. This minimization principle leads to Murray's law which is in fair agreement with the experimental data (Sherman 1981;Taber et al. 2001). Later, Murray's law was modified by taking the metabolic cost of the vessel wall into account (Taber 1998), including the active behavior of smooth muscle. This latter approach relates the shear stress of the homeostatic state to the pressure, the thickness of the vessel wall, and the degree of smooth muscle metabolism. Klarbring et al. (2003) and Liu and Kassab (2007) have further developed the cost function approach by considering minimization of the cost for the vascular tree as a whole in their formulations. To the knowledge of the authors, the fact that the artery wall is composed of several constituents with orthotropic, nonlinear properties (Holzapfel et al. 2000) has not been considered in previous studies of the cost-optimal geometry and composition of artery walls. Because the elastin content of the artery is essentially unchanging at the timescales of growth and remodeling (Tsamis et al. 2013), there is not a unique optimal target composition of the artery wall for a given set of flow parameters; the optimal state depends on the given amount of elastin, and its slow variations due to degradation. The target composition may then be coupled to the material properties of the composite artery wall. To find the cost-optimal geometry and composition of an artery with a nonlinear mechanical behavior, it is necessary to consider a mechanical model of the artery wall in conjunction with a cost function derived from the power required to maintain the materials and blood flow of the artery. We briefly outline the mechanical model, based on constrained mixture theory (Humphrey and Rajagopal 2002;Gleason and Humphrey 2004;Satha et al. 2014), in Sect. 2.1. This yields an equilibrium equation that relates the transmural pressure to the vessel geometry and composition of a homeostatic state. A description of the principle of cost-optimization for the artery wall follows in Sect. 2.3, and a goal function is subsequently formulated, whose minima correspond to a minimal cost of homeostatic states that satisfy the equilibrium equation (Sect. 2.4). We analyze how the cost-optimal state of the vessel varies with volumetric flow rate, pressure, and elastin content in Sect. 3. 
Constrained mixture thin-walled tube theory We consider a cylindrical tube composed of a mixture of n materials, whose respective mechanical properties are represented by their strain energy functions ψ k , k = 1 . . . n. A constrained mixture theory is used, implying that all constituents have the same deformation. This deformation, with respect to a given, fixed reference configuration, is represented by a circumferential strain λ and a supposed constant axial strain δ. For a pressure difference p between the interior and exterior of the tube, integration of the standard radial equilibrium equation gives where ρ is the radial coordinate which varies between an inner radius ρ 0 and an outer radius ρ 1 . For an incompressible material, the stress difference between circumferential stress σ ϕ and the radial stress σ ρ can, cf. Holzapfel and Ogden (2003), be written where φ k denotes the volume fraction of constituent k. Introducing Eq. (3) into Eq. (2) and making a thin-walled tube assumption, cf. Satha et al. (2014) for details, result in where R is the radius of the, now thin-walled, reference configuration, and A k is the effective reference area obtained by multiplying the volume fraction φ k by the total reference cross-sectional area. The radius of a deformed, thin-walled tube is expressed as r = λR. Essentially following Baek et al. (2006), we take the effective areas to be represented by where A k (0) is the original effective area of constituent k, Q k (t) is the fraction of constituent k that was produced before time 0 and remains at time t, A k (t) ≥ 0 is the rate of production of effective area at time t, and q k (t) ≥ 0 is a monotonically decreasing survival function such that q(0) = 1. By assuming that materials created at different time instances contribute to the strain energy in proportion to the remaining area fractions, we obtain (Baek et al. 2006) where, Ψ k λ k (t, τ ) is the strain energy density with respect to a natural, stress-free configuration and characterizes the nonlinear, elastic behavior (Baek et al. 2006). Also, λ k (t, τ ) is the stretch at time t for materials produced at time τ . Hence, (Baek et al. 2006) The ratio λ(t)/λ(τ ) is the stretch developed during the time interval [τ, t], and G k h is the homeostatic prestretch of constituent k, which means the material may attain prestretch at the time of production. Timescales and homeostatic conditions We recognize different timescales in the process of growth and remodeling of the vascular system. The high-frequency scale is that of the heartbeat. It was shown in Satha et al. (2014) that Eq. (4) is approximately valid for average quantities if the change of A k is taken to be much slower than that of the heartbeat timescale. Moreover, we distinguish between two processes in the slow change in A k . First, there is the change of homeostatic values. Secondly, there is the process of approaching these homeostatic target values when, say, a perturbation of the state occurs. The stability of the second type of process was previously investigated in Satha et al. (2014). Complementary to this, in the present paper, we study the target homeostatic state and its dependence on the imposed flow conditions. Such states are defined by a time-constant stretch λ(τ ) ≡λ as well as a time-constant composition of materials A k ≡ k . There are two classes of constituents for which steady-state conditions are possible (Satha et al. 
2014): The set of constituent indices belonging to class (i) and (ii) are denoted by S i and S ii , respectively. Equations (5) and (6) result in (Satha et al. 2014) Here, G k Introducing Eq. (8) into a time-averaged version of Eq. (4), and evaluating for λ = λ(t) = λ(τ ) =λ and for A k = A k (t) = k , we get (Satha et al. 2014) where is called the homeostatic stress. Note that the homeostatic state is associated with a constant homeostatic stress for materials with a finite turnover. Here and in the following, we use the notation d f (s) = d f/ds and d n f (s) = d n f /ds n . Principle of cost-optimization As proposed by Murray (1926), it is assumed herein that the blood vessel growth and remodeling strive toward costoptimization of the vascular system. This assumption has been widely used in previous modeling work (Taber 1998;Klarbring et al. 2003;Liu and Kassab 2007). In this work, we take the target homeostatic state to be governed by such an optimization rule. We assume that the metabolic cost of the materials that constitute the vessel wall is proportional to the amount of each constituent, i.e., there are constants α k such that this cost per unit length of blood vessel in the homeostatic state can be written with the units of power per unit length. Since the homeostatic stress of smooth muscle is constant (Sect. 2.2), it is possible to represent the stress-dependent upkeep of smooth muscle (Taber 1998;Liu and Kassab 2007) by the constant α k . There is also a metabolic cost for the blood. This is again taken as proportional to the volume, i.e., it is proportional to πr 2 δ. Since r =λR, and since a constant axial stretch δ is considered, there is a constant β such that the metabolic cost of the blood per unit length of the blood vessel can be written We have β = πδα b where α b is the metabolic power per unit volume of blood. Finally, we take into account the energy per unit time consumed by the heart to maintain a certain volumetric flow rate. If we assume that the Hagen-Poiseuille equation governs the flow, the power per unit length of blood vessel required to overcome the viscous drag is (Taber 1998) where u is the volumetric flow rate, and η is the dynamic viscosity of the blood, which is assumed to be a Newtonian fluid. There is thus a constant γ = 8η/π such that the cost is per unit length of blood vessel. The total cost P per unit time and length is obtained as the sum of these contributions, becoming The optimization problem and its minima The problem we are considering is thus to minimize the total cost P under the constraint that the equilibrium condition, Eq. (9), is satisfied. This problem can be rewritten as an unconstrained optimization problem by taking an arbitrary j ∈ S i and rewriting Eq. (9) as Thus, j = j (λ, 1 , . . . , j−1 , j+1 , . . . , n ), and when substituted into the expression for P in Eq. (15), we get the goal function The target homeostatic state is now given by the unconstrained minimum of f , assuming that this minimum occurs for positive values of all variables. The model is next simplified by assuming that the blood vessel wall consists of two constituents only: elastin, k = 'e' ∈ S ii , and components with a finite turnover including collagen and smooth muscle, k = 't' ∈ S i . This classification incorporates the assumption that the elastin content is essentially constant over time (Tsamis et al. 2013), while other constituents have a substantially faster turnover, with a timescale of approximately 2 months (Nissen et al. 
1978;Martufi and Gasser 2012). Smooth muscle is metabolically more expensive than collagen, and it is present in the vascular system to help pumping blood and to control high-frequency adaptation to changing demands of blood. The fraction of smooth muscle is then likely related to the fluctuations of the flow conditions rather than their time-averaged values. However, these dynamics are beyond the scope of this study, and we introduce the simplifying assumption that the ratio of the amount of collagen to the amount of smooth muscle is constant for any given artery. For the two constituents, 'e' and 't,' we can express the equilibrium equation (16) aŝ where σ t h is a constant homeostatic stress, and Eq. (10) was used to express σ e h (λ). Substituting Eq. (18) into the total cost P gives the goal function This cost function, retaining only nonconstant terms, becomes and the gradient of the goal function is Straight-forward differentiation of Eqs. (20) and (18) yields The optimal target homeostatic composition of a blood vessel is found at a stationary minimum point defined by Using that ∂ P/∂ t = α t is constant, the second derivative of f is We note that Then, d 2 f /dλ 2 > 0 when α t = 0. If α t > 0, we must consider the sign and magnitude of d 2Ât /dλ 2 : Whether or not this expression is positive at a stationary point can be evaluated when the material model is instantiated. This will be done in Sect. 3.1. However, qualitative insight can be gained by equivalently writing Eq. (28) as Thus, in case the elastin stress σ e h is proportional toλ, so that the second term vanishes, the stationary point will always be a minimum point. On the other hand, if the elastin has a strainstiffening behavior, then d 2Ât /dλ 2 may become negative. Particularly, this may be the case for small pressures. If we assume that the metabolic cost of the vessel wall is much smaller than that of the blood, α t ≈ 0. Then, d f/dλ = 0 gives consistent with Murray's law (Murray 1926). This result can be inserted into Eq. (18) to give a closed expression for the optimum amount of materials with finite turnover. For a finite metabolic cost of the vessel wall, α t > 0, the stretch at the stationary point of the goal function must be computed numerically for any nontrivial choice of strain energy function Ψ e . Results and discussion The cost-optimal target geometry and composition of the vessel wall are found at the minimum stationary point of the goal function. The locus of this stationary point depends on the parameters of the goal function, including pressure p, volumetric flow rate u, elastin content e , and parameters related to the material model for elastin. These parameters vary within a population as well as with time for each individual due to, e.g., aging, changes in body mass, medical treatments, or the development of diseases. In Sect. 3.2, we perform parameter studies to quantify these variations in the optimal state. However, we first need to be explicit about the material model and its parameters. Parameter identification and material model The parameters of our model are quantified using data for the radial artery (arteria radialis) and the common carotid artery (arteria carotis communis). Previous in vivo measurements on normotensive subjects are used, giving ensemble averages for the vessel radiusr , total areaĀ of the cross section, average blood pressurep, and volumetric flow ratē u, as compiled in Table 1. 
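Before detailing the constitutive model and its parameters, a quick numerical check of the α_t ≈ 0 limit discussed above is instructive. The sketch below minimizes the reduced cost per unit length, P(r) = βr² + γu²/r⁴, with β = πδα_b and γ = 8η/π, and confirms the cube-root scaling of Murray's law; the viscosity and blood-maintenance values are of the order quoted below, and the flow rate is purely illustrative, not a value from Table 1.

```r
# Reduced cost per unit length when the wall-material cost is neglected (alpha_t = 0):
#   P(r) = beta*r^2 + gamma*u^2/r^4,  beta = pi*delta*alpha_b,  gamma = 8*eta/pi
eta     <- 3.2e-3    # dynamic viscosity of blood, Pa s
alpha_b <- 51.7      # metabolic power per unit volume of blood, W/m^3
delta   <- 1         # axial stretch
beta    <- pi * delta * alpha_b
gamma   <- 8 * eta / pi

cost <- function(r, flow) beta * r^2 + gamma * flow^2 / r^4

optimal_radius <- function(u) optimize(cost, interval = c(1e-5, 1e-1), flow = u)$minimum

u  <- 10e-6 / 60                     # illustrative flow of 10 mL/min, in m^3/s
r1 <- optimal_radius(u)
r2 <- optimal_radius(2 * u)
c(r_opt_mm  = 1e3 * r1,
  ratio     = r2 / r1,                                 # Murray's law predicts 2^(1/3) ~ 1.26
  closed_mm = 1e3 * (2 * gamma * u^2 / beta)^(1/6))    # analytical minimum r^6 = 2*gamma*u^2/beta
```

Doubling the flow rate increases the optimal radius by the factor 2^(1/3), which is precisely the scaling stated above; for a finite α_t the one-dimensional minimization must instead be carried out over the full goal function f.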
The composition, described by the fraction of elastin φ e and the fraction of other materials φ t , is estimated using histological data from the literature, as described by Satha et al. (2014). We use histological data from Li et al. (2008) for the radial artery and from Sommer et al. (2010) for the carotid artery (Table 1). The stretches in the circumferential, radial and longitudinal directions are λ k , (λ k δ) −1 and δ, yielding the Cauchy-Green tensor (Holzapfel et al. 2000; Ogden 2010) with invariants I k 0 = tr C k and I k 1 = (λ k ) 2 . As previously described (Satha et al. 2014), the strain energy density of the elastin fraction is taken to be isotropic (Holzapfel and Ogden 2010): Giannattasio et al. (2001) b Laurent et al. (1994) c Estimated by Satha et al. (2014) using histology from Li et al. (2008) d Numerical fit by Satha et al. (2014) to data from Laurent et al. (1994) and Girerd et al. (1998) e f Likittanasombut et al. (2006) g Bussy et al. (2000) h Bussy et al. (2000) with a correction for a misrepresented unit i Sommer et al. (2010) j Numerical fit using the method of Satha et al. (2014) with data from Bussy et al. (2000) while the strain energy density of the composite of other constituents is taken to be orthotropic (Holzapfel and Ogden 2010): where c 1 > 0 Pa is a constant and c 2 > 0 is a nondimensional constant. Parameter identification for the radial artery was performed in a previous study (Satha et al. 2014) by leastsquares fitting the two-constituent material model to experimental data (Laurent et al. 1994;Girerd et al. 1998), giving the material parameters shown in Table 1. Using Eq. (10), these parameters yield σ t h = G t h dΨ t (G t h ) = 38.1 kPa. The fitting procedure described by (Satha et al. 2014) is used herein to obtain the parameters of the carotid artery from the data of Bussy et al. (2000), with the Young's modulus of the unloaded wall of the carotid artery estimated to 0.3 MPa, similar to the value for the brachial artery (Kinlay et al. 2001). The resulting material parameters for the carotid artery are compiled in Table 1 and give σ t h = G t h dΨ t (G t h ) = 46.3 kPa. We also choose the constant longitudinal stretch to be δ = 1. The parameters, α t , β, and γ , of the goal function are obtained from the literature. Liu et al. (2012) estimate α b = 51.7 W/m 3 for human blood, giving β = 0.16 kW/m 3 . With a Newtonian fluid assumption, the dynamic viscosity of human blood at 40 % hematocrit is η = 3.2 mPa·s (Boron and Boulpaep 2008), giving γ = 8.1·10 −3 Js/m 3 . The metabolic coefficient α t is assumed to be dominated by smooth muscle and has an active and a passive component, with the active component proportional to the stress of that constituent (Taber 1998). We thus write where α w and k w denote the passive and active metabolic constants, respectively. These constants were estimated by Taber (1998) to be α w = 764 W/m 3 and k w = 0.00872 s −1 for the porcine carotid artery, giving α t = 1.1 kW/m 3 and α t = 1.2 kW/m 3 for the radial and carotid artery, respectively. We take these values for α t as order of magnitude estimates for human arteries and investigate different values α t = {0.0, 0.1, 1.0} kW/m 3 in the parametric studies below. Parametric studies In this section, we consider the effects of the volumetric flow rate, pressure, and elastin content on the radius r of the blood vessel and on the amount of constituents t with a finite turnover. 
The parameter α_t, controlling the cost of the 't'-type wall materials, is varied to highlight its effect on the vessel dimensions and composition. The target state for each set of parameters is found numerically by solving df/dλ = 0 for λ̄ using Eqs. (19) through (24), and then computing Ât using Eq. (18). From the point of view of growth stability, it is of great interest to assess whether the stationary points of the goal function are minima. With the prototypical values from Table 1, we have evaluated Eq. (28) for a wide range of the radius, 0.1r̄ < r < 3r̄, and the pressure, 0.2p̄ < p < 3p̄, and found that d²Ât/dλ² > 0 within these ranges for both the radial and the carotid arteries. This means that the second derivative of the goal function with respect to Ât is strictly positive, asserting that the corresponding stationary points are indeed minima. Note that this validation was conducted for one particular choice of material model. An enhanced strain-stiffening, e.g., due to an anisotropic elastin fraction, would lead to greater nonlinearity in the strain energy density, which would threaten the existence of the minimum. Therefore, we cannot exclude that there exist some physiological conditions at which the minimum of the goal function is lost.

Volumetric flow rate

It has been established experimentally that the volumetric flow rate has a strong impact on the blood vessel radius (Brownlee and Langille 1991; Kubis et al. 2001) and composition (Kubis et al. 2001). In our theoretical framework, this is manifested as a flow rate dependence of the stationary point of the goal function. The vessel radius r and the amount of composite materials Ât are plotted as functions of u in Fig. 1a, b (radial artery) and Fig. 1c, d (carotid artery) for different values of α_t = {0.0, 0.1, 1.0} kW/m³ and a constant pressure p = p̄. When the cost of wall materials is taken to be zero, α_t = 0, the variations of r with u follow Murray's law, r ∝ u^(1/3) (Fig. 1a-c, solid line). Murray's law overpredicts the average vessel radius r̄ given in Table 1 for the average flow rate ū. When the wall material is assigned a finite metabolic cost, Murray's law is modified to suppress the use of wall materials and thus reduce the radius to a more realistic value (Fig. 1a-c, dotted lines). Interestingly, this also introduces a lower bound on the vessel radius, which does not fully contract even at a vanishing flow rate. When examining the relation Ât(u) for the radial artery (Fig. 1b) and the carotid artery (Fig. 1d), it becomes clear that Ât > 0 for all flow rates investigated. There is a minimum of Ât(u) that corresponds to a zero of dÂt/dλ. For a constant pressure, the amount of materials in the vessel wall is a rather weak function of the flow rate. The rise in the amount of 't'-material for low-volume flows corresponds to the elastin being in a state of compression, requiring additional 't'-material, with constant stress σ t h, to balance the pressure p. However, this may be an artifact of the simplistic model used for the strain energy density of elastin in compression.

Pressure

When the cost of the wall materials is taken to be zero, α_t = 0, and Murray's law governs the target state, the pressure does not have any effect on the vessel radius, as shown for both the radial and the carotid arteries by the solid lines in Fig. 2a-c. Also, it is observed in Fig. 2b-d that Ât is linear in pressure, which is trivially explained by the need to balance the pressure at a constant circumferential stress σ t h in the 't'-fraction of materials. Examining the solid lines in Fig.
2b-d, we note that t becomes negative when the pressure is sufficiently reduced. Below this limiting pressure, no realizable homeostatic state can be found which reproduces the prediction of Murray's law. This constitutes a lower limit of pressure for Murray's law. This may also be an artifact of the simplistic model for the strain energy density of elastin in compression, as discussed in Sect. 3.2.1. Under normal circumstances, with a typical pressure p = p, assigning a finite cost to the wall material, α t > 0, leads to a more narrow blood vessel (Fig. 2a- (Table 1). A narrow blood vessel reduces the force per unit length of the vessel wall and thus allows for a thinner wall, which saves expensive materials. It is interesting that the vessel radius increases when the blood pressure is reduced: A reduced blood pressure at a sustained volumetric flow rate then reduces the mechanical stability of the vessel and increases the risk of vessel collapse. The dramatic increase in the radius at low pressure is not physiological, since it occurs at states with t < 0 ( Fig. 2b-d, dashed and dotted lines), which can never be achieved. Elastin content Although the elastin content is essentially constant (Tsamis et al. 2013), it may degrade over very long timescales, e.g., the lifetime of an individual. This motivates a study on how variations-particularly reductions-in elastin content affect the homeostatic target vessel geometry and composition. Figure 3a, c show how the radii of the radial and carotid arteries, respectively, vary with the elastin content. When α t = 0, the vessel radius is maintained at a constant level, owing to the fact that the elastin content does not enter into Murray's law (Fig. 3a-c, solid line). Degradation of elastin is compensated for by an increase in the amount of other mate-rials t . It is shown in Fig. 3b-d that t (solid line) increases linearly when e is reduced. That is, degraded elastin is simply replaced by other materials to balance the transmural pressure. For the case α t = 0, elastin is replaced by metabolically more expensive materials. This is predicted to lead to a reduction of the vessel radius when elastin degrades (Fig. 3a-c, dashed and dotted lines). Comparison between radial and carotid artery To demonstrate the general applicability of the proposed model, two types of arteries, the radial artery and the common carotid artery, are compared. These arteries are very different in terms of diameter and blood flow, but have a similar transmural pressure. The fraction of elastin is much greater in the carotid artery ( Table 1). The predicted variation of the vessel radius r with u deviates significantly from Murray's law for the radial artery (Fig. 1a), whereas the Murray's law appears to hold much better for the carotid artery (Fig. 1c). The same conclusions can be drawn for the amount of 't'-materials t (Fig. 1b-d). In the cases of pressure dependence and elastin content dependence, the radial and carotid arteries display the same qualitative behavior, which clearly differs from Murray's law . Conclusions The design of the vascular system is assumed to be governed by the physiological principle of minimum work (Murray 1926). It is thus an optimization process that governs the architecture of arteries. On this basis, we have formulated a theoretical frame that extends Murray's law to include growth and remodeling, and the nonlinear mechanics of the artery wall. 
A goal function, novel to this application, is formulated using an expression for the power required to pump blood and the total metabolic power needed to maintain the blood and the wall of the artery. We have shown that there exists a minimum stationary point for a wide range of the volumetric flow rate and the pressure around the prototypical parameter values for the radial and the common carotid artery. In theory, however, this minimum could be lost for a strongly strain-stiffening elastin fraction. Taking the cost of wall materials into account reduces the radius of the target homeostatic state and also renders this target radius pressure-dependent. A reduction in the amount of elastin in the artery wall reduces the radius of the target homeostatic state. The greatest value of the present work may be its ability to depict the variations of the target homeostatic state under dynamic flow conditions. This theoretical frame can then be integrated into models for growth and remodeling (Satha et al. 2014;Taber 1998) to capture the coupled dynamics of remodeling and fluctuation of the target state.
Identification of Distinct Molecular Patterns and a Four-Gene Signature in Colon Cancer Based on Invasion-Related Genes

Background: The pathological stage of colon cancer cannot accurately predict recurrence, and to date, no gene expression characteristics have been demonstrated to be reliable for prognostic stratification in clinical practice, perhaps because colon cancer is a heterogeneous disease. The purpose was to establish a comprehensive molecular classification and prognostic marker for colon cancer based on invasion-related expression profiling.

Methods: From the Gene Expression Omnibus (GEO) database, we collected two microarray datasets of colon cancer samples, and another dataset was obtained from The Cancer Genome Atlas (TCGA). Differentially expressed genes (DEGs) further underwent univariate analysis, least absolute shrinkage and selection operator (LASSO) regression analysis, and multivariate Cox survival analysis to screen prognosis-associated feature genes, which were further verified with test datasets.

Results: Two molecular subtypes (C1 and C2) were identified based on invasion-related genes in the colon cancer samples in TCGA training dataset, and C2 had a good prognosis. Moreover, C1 was more sensitive to immunotherapy. A total of 1,514 invasion-related genes, specifically 124 downregulated genes and 1,390 upregulated genes between C1 and C2, were identified as DEGs. A four-gene prognostic signature was identified and validated, and colon cancer patients were stratified into a high-risk group and a low-risk group. Multivariate regression analyses and a nomogram indicated that the four-gene signature developed in this study was an independent predictive factor and had a relatively good predictive capability when adjusting for other clinical factors.

Conclusion: This research provided novel insights into the mechanisms underlying invasion and offered a novel biomarker of a poor prognosis in colon cancer patients.

INTRODUCTION

Colon cancer, which is a malignant tumor of the digestive tract derived from the mucosal epithelium of the colon or rectum, has become the third most frequent cancer among men and the second most frequent cancer among women worldwide (Arnold et al., 2017; Althobaiti and Jradi, 2019). Colon cancer starts insidiously, progresses rapidly, and has a poor prognosis and high mortality rate. There are many treatments available to prolong the survival of patients with advanced disease, and surgery is the main treatment for colon cancer, but the 5-year survival rate is 50% (Zhai et al., 2017). In total, 15-20% of colon cancer patients relapse after treatment (Shi et al., 2014), and CRC recurrence after therapeutic surgery is a major obstacle in improving the overall survival rate of colon cancer patients (Gerger et al., 2011). As a highly heterogeneous disease, colon cancer involves DNA repair defects, DNA methylation, chromosome instability, and other molecular pathogeneses in the course of disease development. Biomarkers have been used as common tools for disease detection and prognosis management in colon cancer patients (Juo et al., 2014). Therefore, the determination of molecular changes in colon cancer patients has become a hotspot in colon cancer research.

With the development of the Human Genome Project and the arrival of the post-genome era, the development of various high-throughput biomedical technologies has led to the exponential growth of biological data, which are currently mostly applied to tumor functional genomics.
Based on biological information service platforms for the construction of public networks and constantly emerging and accumulating biomedical information and clinical data resources, an increasing number of studies have focused on these vast amounts of genetic research and data processing, using bioinformatic analysis to mine the expression data for tumor-associated genes involved in the pathogenesis, progression, and changes in the process of transformation, to find CRC-related changes in the genome, obtain CRC gene expression profiles, and improve the CRC diagnosis threshold, which is of great value for the clinical application of genetic information (Tutar et al., 2015). Transcriptomic analysis has been widely used to describe the prognostic characteristics of colon cancer patients and has produced many candidate biomarkers with potential clinical value (Marisa et al., 2013;Xu et al., 2017;Wei et al., 2018). However, small sample sizes and certain technical factors restrict the consistency of the proposed signatures and provide limited prognostic information. In addition, the high heterogeneity of colon cancer makes it important to establish a reliable signal to identify patients with a high risk of disease recurrence. To this end, the integration of results from multiple studies is expected to yield more reliable prognostic characteristics. We, therefore, attempted to determine and validate a robust prognosis-related feature by integrating multiple datasets from colon cancer patients. This research developed a four-gene signature with solid prognostic performance for colon cancer that may complement traditional clinical prognostic factors and provide effective therapeutic interventions and individualized therapies for treating colon cancer patients. Sources of Obtained Data Clinical follow-up information and RNA-Seq data (FPKM) for colon cancer (COAD) were downloaded from TCGA database 1 1 https://portal.gdc.cancer.gov/ (Author Anonymous, 2018). The expression spectrum was converted to TPM, genes with low expression (genes with less than 1 transcript in more than 50% of all samples) were removed, and Ensembl IDs were converted into gene symbols. The median value was taken as the expression spectrum of gene symbols when multiple Ensembl IDs corresponded to the same gene symbol. Log2 conversion was performed for the expression spectrum data. Two datasets, GSE17538 (Smith et al., 2010) and GSE38832 (Tripathi et al., 2014), in the MINiML format were acquired from the GEO database 2 , both of which were sequencing data generated on the GPL570 platform ([HG-U133_Plus_2] Affymetrix Human Genome U133 Plus 2.0 Array). The chip data set was converted from probes to gene symbols according to the GPL570 annotation file (the middle value was taken as the expression spectrum of the gene symbol when multiple probes corresponded to the same gene symbol; probe expression was removed when there were multiple gene symbols per probe). The microarray dataset included only colon cancer tumor samples with survival time and survival status. The clinical information available after data preprocessing is shown in Table 1. The invasionrelated gene set was derived from the c2.all.v7.0.symbols.gmt file on the GSEA website 3 (Subramanian et al., 2005). There were a total of 1,202 genes involved in the 11 pathways related to invasion. 
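As an illustration of the preprocessing just described, the following minimal R sketch converts FPKM to TPM, removes lowly expressed genes, collapses duplicate gene symbols by their median, and log2-transforms the matrix. The object names ('fpkm', 'symbol') are placeholders, and the pseudo-count used in the log transform is an assumption rather than a detail stated in the study.

```r
# 'fpkm': Ensembl-gene x sample matrix of FPKM values from TCGA-COAD;
# 'symbol': gene symbol mapped to each row (placeholder objects)
fpkm_to_tpm <- function(fpkm) sweep(fpkm, 2, colSums(fpkm), "/") * 1e6
tpm <- fpkm_to_tpm(fpkm)

# Drop genes with fewer than 1 transcript (TPM < 1) in more than 50% of samples
keep <- rowMeans(tpm >= 1) >= 0.5
tpm  <- tpm[keep, , drop = FALSE]
sym  <- symbol[keep]

# Collapse multiple Ensembl IDs mapping to the same symbol by their median,
# then log2-transform (pseudo-count of 1 assumed here)
expr <- apply(tpm, 2, function(x) tapply(x, sym, median))
expr <- log2(expr + 1)
```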
Molecular Typing Based on Invasion-Related Genes

In TCGA dataset, univariate Cox analysis of the 1,202 invasion-related genes was performed using the coxph function of the R package survival (V3.1-12), and the genes associated with colon cancer prognosis were obtained (p < 0.01). Next, the R package NMF (V1.48.0) (Li and Ngom, 2013) was used to conduct molecular typing of colon cancer samples from TCGA dataset, and the optimal typing was selected.

Comparison of Clinical Features and Molecular Mutations Between Molecular Subtypes

The chi-square test was used to identify differences in clinical characteristics between the two molecular subtypes of colon cancer. According to the SNV/indel results of MuTect detection in TCGA database, the "maftools" package (Mayakonda et al., 2018) was used to analyze the mutation annotation format (MAF) files of the TCGA cohort.

Comparison of Molecular Subtypes With Existing Molecular Subtypes

A total of six categories of immune infiltration were identified in human tumors based on corresponding tumor-promoting and tumor suppressor factors, namely, C6 (TGF-beta dominant), C5 (immunologically silent), C4 (lymphocyte depleted), C3 (inflammatory), C2 (IFN-γ dominant), and C1 (wound healing).

Analysis of Molecular Subtypes With Immune Scores and Immunotherapy Outcomes

First, the R software package MCPcounter (V1.2.0) (Dienstmann et al., 2019) was used to determine the immune cell scores of each sample, and then differences in the immune cell scores of the molecular subtypes were compared. In recent years, research on immune checkpoint inhibition (ICI) has achieved breakthroughs in clinical response in a variety of human cancers, but most cancer patients do not benefit from ICI therapy. Studies have reported that a clinical response to an anti-PD-1 antibody, a type of ICI, is more likely to occur in tumors that already have T-cell infiltration and PD-L1 expression. Moreover, IFN-γ functions critically in regulating the expression of PD-L1. High levels of IFN-γ, accompanied by accelerated lymphocyte infiltration, may be the key to recognition of tumor-cytotoxic immunophenotypes, which may lead to treatment with anti-PD-1 therapy. We compared the expression of PDCD1 (PD-L1) and IFNG (IFN-γ) among the molecular subtypes and calculated the Pearson correlation coefficient of PDCD1 and IFNG expression. We also calculated the Pearson correlation coefficients between the expression of these two genes and the immune scores for T cells and CD8 T cells. The above analyses characterize our molecular subtypes with respect to their potential relevance for immunotherapy.

Analysis and Functional Identification of DEGs in Molecular Subtypes

In TCGA dataset, the R software package limma (V3.44.3) (Ritchie et al., 2015) was applied to analyze DEGs in the expression spectrum data of the molecular subtypes, and an FDR < 0.01 and an |FC| > 1.5 were used as the thresholds to screen and filter the differentially expressed genes. The R package clusterProfiler (V3.16.0) (Yu et al., 2012) was used to perform GO functional annotation and KEGG pathway enrichment analyses of the differentially upregulated and downregulated genes, and an FDR < 0.05 was used as the threshold for filtering.

Detection of Prognostic Genes and Their Characteristics

LASSO, univariate regression, and multivariate regression analyses were performed to examine the relationships between the expression of invasion-related genes and the overall survival (OS) of colon cancer patients. In the univariate Cox regression analysis, a gene with a p-value < 0.05 was considered a candidate prognostic gene.
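A minimal sketch of this univariate screening step, using the survival package named above, is shown below; 'expr' and 'clin' are placeholder objects for the expression matrix and the clinical table, and the p < 0.05 cutoff is the one used for this screen.

```r
library(survival)

# 'expr': gene x sample log2 expression matrix; 'clin': data frame with
# follow-up time and event status for the same samples (placeholder objects)
pvals <- apply(expr, 1, function(g) {
  fit <- coxph(Surv(clin$time, clin$status) ~ g)
  summary(fit)$coefficients[, "Pr(>|z|)"]
})
candidate_genes <- names(pvals)[pvals < 0.05]
length(candidate_genes)
```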
Multivariate analysis, LASSO penalization, and stepAIC were applied for subsequent screening. Each gene was evaluated to determine its regression coefficient and hazard ratio (HR), and the qualified mRNAs were ultimately included in a signature for colon cancer.

Establishment of a Prognostic Model Based on Invasion-Related Genes

Prognostic prediction by a signature for colon cancer patients was evaluated based on the expression of each optimal prognostic mRNA multiplied by the relative regression coefficient weight, which was calculated from the multivariate model with the following formula: risk score = Σ coef(i) × gene(i), where coef(i) refers to the coefficient of the ith gene, and gene(i) refers to the expression level of the ith gene. Each sample was evaluated to obtain a risk score value, and the risk score cutoff was set as the median value. Samples with a risk score greater than the median were considered high-risk samples, and those with a risk score less than or equal to the median were considered low-risk samples. The Kaplan-Meier (KM) survival curves for the two groups were plotted. A receiver operating characteristic (ROC) curve for OS prediction was constructed to evaluate the specificity and sensitivity of the model. Cox multivariate analysis of the clinicopathological characteristics of colon cancer patients was also conducted to examine prognostic model independence.

Verification of the Prognostic Risk Model

Patients were classified into a high-risk or low-risk group after comparing the risk scores of TCGA training set, the entire TCGA cohort, and two independent external datasets, the GSE17538 and GSE38832 cohorts. The cutoff values were calculated from the training cohort. KM curve, multivariate Cox, and time-dependent ROC analyses were also performed. Additionally, stratified analyses were performed based on clinicopathological characteristics.

Nomogram

A nomogram and calibration curve were established with the R package "rms" (Jiang et al., 2019). The consistency between the predicted probability and the observed frequency was assessed. Next, the performance of the nomogram was visualized by showing the predicted and observed results in the calibration curve, with the 45° line representing the most accurate prediction. Figure 1 shows the study flow chart.

RESULTS

Identification of Molecular Subtypes of Colon Cancer

Univariate Cox survival analysis was performed on the expression profiles of 1,132 invasion-related genes using the coxph function of the R package survival (V3.1-12), and 56 genes were identified to be associated with the prognosis of colon cancer (p < 0.01). Functional annotation analysis was carried out on the 56 genes, which were related to exosomes, signal transduction, and transporters. The 56 prognostic genes were used to cluster TCGA samples with the R package NMF (K = 2-10), and TCGA samples were classified into a C1 or C2 category according to the clustering results (Figure 2A). KM survival curve analysis showed that the molecular subtype C2 had a better prognosis than C1 (Figure 2B). The distribution of clinical trait statuses between
the two subtypes showed that dead samples, lymphatic invasion, and the incidences of T2, T3, and T4 were significantly higher in the prognostic C1 subtype than in the prognostic C2 subtype, while N0 and stage I samples were significantly less common in the prognostic C1 subtype than in the prognostic C2 subtype (Figure 2C).

Comparison of Mutations Between Molecular Subtypes and Existing Immune Subtypes

The profiles of key mutation genes in colon cancer, such as TP53, KRAS, SYNE1, PIK3CA, BRAF, FAT4, CSMD3, CTNNB1, and RYR2, were selected, and the mutation frequencies of the SYNE1, CSMD3, and BRAF genes in the C1 subtype were higher than those in the C2 subtype, whereas the mutation frequencies of TP53 (Mirgayazova et al., 2019), KRAS, PIK3CA, and FAT4 in the C1 subtype were lower than those in the C2 subtype (Figures 3A,B). When compared with the existing immune subtypes, the majority of colon cancer patients recorded in TCGA dataset were in the C1 or C2 immune subtype (approximately 94.3%), with the C1 immune subtype having a better prognosis than the C2 immune subtype, and the C5 immune subtype was absent from TCGA colon cancer dataset (Figure 3C). Additionally, we compared the distribution of these immune subtypes across our molecular subtypes and found that the immune subtype C1 was predominant in our C2 subtype, which was consistent with a better prognosis for our C2 subtype (Figure 3D).

Evaluation of Immune Cell Scores and Immunotherapy Outcomes

To identify the relationships of immune cell scores with the two molecular subtypes, first, the immune cell scores of each sample were calculated separately using the R software package MCPcounter, and then the differences in immune cell scores between the molecular subtypes were compared. The results showed that the 10 immune cell scores, including those for T cells and CD8 T cells, were higher for the C1 subtype than for the C2 subtype (Figure 4A). In recent years, immune checkpoint inhibition (ICI) research has led to breakthroughs in clinical response in a variety of human cancers, yet the majority of cancer patients do not benefit from ICI. Studies have demonstrated that the clinical response to anti-PD-1 antibodies, a type of ICI, is more likely to occur in tumors that already have T-cell infiltration and PD-L1 expression. Additionally, IFN-γ has an important role in regulating PD-L1 expression, and high levels of IFN-γ accompanied by accelerated lymphocyte infiltration may be critical for recognizing the immune phenotype of tumor cytotoxicity, which could potentially indicate anti-PD-1 therapeutic efficacy. We compared the expression of the three genes PDCD1 (PD-L1), CTLA4, and IFNG (IFN-γ) between the molecular subtypes and observed that, compared with the C2 subtype, the C1 subtype showed significantly higher PDCD1 and CTLA4 expression (Figure 4B). In addition, we calculated the Pearson correlation coefficients between PDCD1, CTLA4, and IFNG gene expression and the immune cell scores and found strong positive correlations (Figure 4C). The above results suggested that our molecular subtype C1 may respond better to immunotherapy than C2.

Identification of Differentially Expressed Genes

A total of 1,514 DEGs between the C1 and C2 molecular subtypes were determined by the limma package (Figures 5A,B).
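The subtype contrast can be sketched with the limma workflow as follows; 'expr' and 'subtype' are placeholder objects for the expression matrix and the C1/C2 assignment, and the thresholds are those stated in the Methods (FDR < 0.01, |fold change| > 1.5).

```r
library(limma)

# 'expr': log2 expression matrix; 'subtype': factor with levels "C1" and "C2"
design <- model.matrix(~ 0 + subtype)
colnames(design) <- levels(subtype)

fit <- lmFit(expr, design)
fit <- contrasts.fit(fit, makeContrasts(C1 - C2, levels = design))
fit <- eBayes(fit)
tab <- topTable(fit, number = Inf)

degs <- tab[tab$adj.P.Val < 0.01 & abs(tab$logFC) > log2(1.5), ]
up   <- rownames(degs)[degs$logFC > 0]   # higher in C1
down <- rownames(degs)[degs$logFC < 0]   # lower in C1
```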
Furthermore, the 1,390 upregulated differentially expressed genes and 124 downregulated differentially expressed genes related to the colon cancer subtype grouped by the R software package ClusterProfiler (v3.16.0) were used for GO functional enrichment analysis and KEGG pathway analysis. Here, the GO data demonstrated that the 1,390 upregulated genes were primarily involved in extracellular matrix organization, myeloid leukocyte migration, positive regulation of cell adhesion, and another 1,307 pathways ( Figure 5C). From the results of the KEGG pathway enrichment analysis, it was found that the 1,390 upregulated genes were related to proteoglycans, focal adhesion, the PI3K-Akt signaling pathway, cancer, and another 70 pathways ( Figure 5D). The GO results showed that the 124 downregulated genes were primarily involved in the innate immune response, antimicrobial humoral immune response, humoral immune response in the mucosa, and 19 other pathways ( Figure 5E). The results of the KEGG pathway enrichment analysis demonstrated that the 124 downregulated genes were related to drug metabolismcytochrome P450, the NOD-like receptor signaling pathway, retinol metabolism, and another seven pathways ( Figure 5F). Establishment of a Prognostic Risk Scoring System With Four Genes Univariate Cox regression analysis was performed on the 1,514 DEGs, and 139 genes associated with colorectal cancer prognosis were detected. Genes that might have been highly correlated with other genes were excluded by LASSO regression. The degree of complexity for LASSO regression was calculated by the parameter lambda (λ), with a larger λ indicating a greater penalty for the linear model with more variables (Figure 6A). When λ = 0.06686829 (Figure 6B), 10 candidate genes were acquired by LASSO regression. For the training cohort, multivariate Cox regression analysis showed that the independent prognostic Table 2), which were, therefore, used to build a prognostic model risk score, which had the formula (0.268 × INHBB expression value) + (0.225 × RBP7 expression value) + (0.514 × RTN2 expression value) + (−0.205 × ATOH1 expression value). Next, the risk scores of the samples in TCGA training data set were obtained based on the calculation formula of the risk score. Then, the median was taken as the cutoff point. If the risk score was higher than the median, the sample was considered high risk; otherwise, the sample was considered low risk. The risk score of the four-gene signature and patient survival are shown in Figures 6C,D. The gene expression heatmap indicated that INHBB, RBP7, and RTN2 were risk factors and that ATOH1 was a protective factor ( Figure 6E). Also, we used RT-qPCR and Western blot assay to validate the level of genes in model in two colorectal cancer cell line. The data showed that mRNA and protein expressions of INHBB, RBP7, and RTN2 were higher, while ATOH1 was downregulated in SW480 cells and HT29 cells in comparison with FHC cells (Supplementary Figures 1A,B). The log-rank test and KM survival curve analysis revealed that patients in the high-risk group tended to have a poor prognosis in TCGA training dataset (Figure 6F). The AUCs of the ROC curves for 1-year survival and 5-year survival were both 0.74, and the AUC for 3-year survival was 0.82 ( Figure 6G). Verification of the Four-Gene Signature With Internal Datasets To assess the robustness of our four-gene signature, we validated the signature in a test dataset and the entire TCGA dataset. 
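As a concrete illustration of how the fitted coefficients translate into patient-level scores, the sketch below applies the four-gene formula reported above and dichotomizes samples at the median, as described for the training cohort; 'expr' is a placeholder for a log2 expression matrix containing the four genes among its rows.

```r
# Coefficients from the multivariate Cox model reported above
coefs <- c(INHBB = 0.268, RBP7 = 0.225, RTN2 = 0.514, ATOH1 = -0.205)

# 'expr': gene x sample log2 expression matrix (placeholder object)
risk_score <- colSums(expr[names(coefs), , drop = FALSE] * coefs)

# Cutoff: median risk score (the training-cohort cutoff is reused for test cohorts)
cutoff     <- median(risk_score)
risk_group <- ifelse(risk_score > cutoff, "high", "low")
table(risk_group)
```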
Based on the above formula, the survival risk scores of patients in the test set were determined. Figures 7A-C display patient survival, a gene expression heatmap for the test set, and the risk score of the four-gene signature. The KM curve demonstrated a significant difference in prognosis between the low-risk group and the high-risk group (p-value of the log-rank test = 0.0011; Figure 7D). From the time-dependent ROC curve data, the four-gene signature was shown to effectively predict the OS of colon cancer patients ( Figure 7E). Moreover, a gene expression heatmap for the entire TCGA dataset, patient survival results, and the risk score of the four-gene signature are shown in Figures 8A-C. The KM curves showed a significant difference in survival time between the patients in the high-risk group and those in the low-risk group (log-rank test p-value < 0.0001, Figure 8D), and the AUCs were 0.68, 0.73, and 0.73 for 1, 3, and 5 years for the entire TCGA dataset (Figure 8E). Validation of the Four-Gene Signature in External Datasets To further verify the accuracy of our risk model with different platforms and different data sets, our risk model was verified with the two independent data sets GSE17538 and GSE38832. Based on the above formula, we calculated the survival risk scores of patients in the test set. Patient survival results, the risk score of the four-gene signature, and a gene expression heatmap for the GSE17538 dataset are displayed in Figures 9A-C. The KM curve showed a significant difference in the prognosis of patients between the high-risk group and the low-risk group (log-rank test p-value = 0.0011; Figure 9D). The time-dependent ROC curve results showed that this four-gene signature could effectively predict the OS of colon cancer patients ( Figure 9E). Moreover, the patient survival results, risk score of the four-gene signature, and gene expression heatmap for the GSE38832 dataset are shown in Figures 10A-C. The KM curves indicated a significant difference in the survival time of patients between the high-risk and low-risk groups (Figure 10D), and the AUCs were 0.69, 0.73, and 0.62 for 1, 3, and 5 years in TCGA dataset ( Figure 10E). Analysis of Clinical Characteristics of the Risk Model By comparing the distribution of risk scores among clinical feature groups in the entire TCGA dataset, it was found that there were significant differences in the T stage, N stage, M stage, stage, lymphatic invasion, and our molecular subtypes (p < 0.05). No differences in age or sex grouping were detected. In the lymphatic invasion-grouped samples, the samples with invasion had a higher risk score. Between our molecular subtypes, the risk score of the C1 subtype, which had a worse prognosis, was significantly higher than that of the C2 subtype, which had a better prognosis (Figure 11). The model also showed good classification of chemotherapy-and radiotherapy-treated samples ( Supplementary Figures 2A-C). The Risk Model Is an Independent Indicator of Colon Cancer Prognosis Univariate and multivariate analyses were conducted to compare the prognostic prediction of risk parameters with clinicopathological parameters (Table 3). According to the present data, RiskType and M stage were determined to be two independent indicators for colon cancer prognosis, as they showed significant differences in the two analyses. 
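The univariate and multivariate Cox step that identifies RiskType and M stage as independent indicators can be sketched as follows. This is an illustrative Python version using the lifelines package (not necessarily the software used in the original analysis); the file name, column names, and the numerical coding of the clinical covariates are assumptions.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Placeholder file: one row per patient with overall-survival time, an event
# indicator (1 = death), the signature risk score, and numerically coded
# clinical covariates (e.g. T/N/M stage as ordinal integers).
clinical = pd.read_csv("tcga_clinical_with_risk.csv")
covariates = ["risk_score", "age", "T_stage", "N_stage", "M_stage"]

cph = CoxPHFitter()
cph.fit(clinical[["os_time", "os_event"] + covariates],
        duration_col="os_time", event_col="os_event")
cph.print_summary()  # hazard ratios, confidence intervals and p-values per covariate
```

Covariates whose hazard ratios remain significant in the multivariate fit, as reported here for the risk score and M stage, are the candidates for independent prognostic indicators.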
We also determined the two analyses on the GSE17538 and GSE38832 datasets, and the results showed that RiskType was also an independent indicator in prediction of colon prognosis in two GEO datasets (Supplementary Tables 1 and 2). Nomogram and Its Clinical Application To provide a quantitative method for the prediction of OS in colon cancer patients, we constructed a nomogram based on the risk score and M stage, which were identified as independent prognostic factors by multivariate analyses (Figure 12A). Risk score features had the greatest impact on survival prediction. The calibration curves of 5-, 3-, and 1-year survival showed that the nomogram was almost an ideal model in terms of predicting colon cancer prognosis ( Figure 12B). These results further supported the reliability of the prognostic model. Additionally, we screened four published signatures (Tian et al., 2017;Xu et al., 2017;Dai et al., 2018;Mo et al., 2019). Their risk scores were first calculated, and then the 1-, 3-, and 5-year AUCs of our model were compared with those of the four published models. The results showed that the 1-, 3-, and 5-year AUCs of our model were higher than those of the other four models (Supplementary Figures 2D-F). DCA also indicated that our model had better performance (Supplementary Figure 2G). DISCUSSION Differences in CRC tumors can affect the prediction of clinical treatment outcomes. The accuracy of patient clinical outcome prediction will be improved by classifying gene expression profiles and identifying subtypes of colorectal cancer with effective prognostic markers, which also provide valuable guidance for appropriate therapeutic interventions (Bramsen et al., 2017). CRC subtypes could advance accurate diagnosis and facilitate drug development. Many attempts have been made to use gene expression datasets to achieve this goal (Muzny et al., 2012;Ren et al., 2016). In a study by Bramsen et al. (2017), the use of a subtype strategy for CRC transcriptional profiling to identify molecular subtype-specific biomarkers helped to improve patient prognosis. Recent studies have established different subtype classifications based on the three molecular pathways that have been identified: chromosomal instability (CIN), CIMP, and microsatellite instability-high (MSI-H) (Sadanandam et al., 2013;Hoadley et al., 2014;Roepman et al., 2014). The CRC Subtype Consortium (CRCSC) identified four robust consensus molecular subtypes (CMSs) using RNA-sequencing numbers from primary tumor samples derived from patients with early-stage colon cancer: CMS1, inflammation/immunityrich genes; CMS2, normative; CMS3, metabolic; and CMS4, mesenchymal (Fontana et al., 2019). Seven DNA methylation subgroups were constructed based on DNA methylation in colon adenocarcinoma patients (Yang et al., 2019). However, there is disagreement among these classifications. Many attempts have been made to reach a consensus in classifying CRC subtypes, and such efforts play a critical part in determining the prognostic and predictive factors for colon cancer patients and in guiding treatment (Bramsen et al., 2017). Currently, due to the difficulty and cost of experimental verification, there is no general consensus on subclassification, and reliable molecular subtype methods are still needed to reveal the clinical potential of these subclassifications. This study developed an accurate method for determining molecular subtypes based on invasion-related genes. Gao et al. 
built gene signature sets based on eight cancer hallmarks to predict the recurrence of stage 2 colon cancer in patients treated with fluorouracil-based chemotherapy (Gao et al., 2016). An "invasiveness" gene signature is associated with metastasis-free survival and OS in patients with medulloblastoma, lung cancer, or prostate cancer (Liu et al., 2007). However, "invasive" genetic markers in colon cancer have not been studied. In this work, we retrospectively identified four bone metastasis-related genes (INHBB, RBP7, RTN2, and ATOH1) and constructed a gene expression signature model for colon cancer patients by bioinformatic analysis. As a proteincoding gene, inhibin subunit beta B (INHBB) is involved in the synthesis of transforming growth factor-β (TGF-β) family members. INHBB expression has been found to be upregulated in colorectal cancer tissues and to be positively related to stromal and immune scores (Yuan et al., 2020). High RBP7 expression has been confirmed to be an independent biomarker for poor cancer-specific survival in patients with late-or early-stage colon cancer. Moreover, a study showed that ectopic expression of RBP7 could enhance the invasion and migration of colon cancer cells (Elmasry et al., 2019). In colon cancer tissues, positive expression of ATOH1 is closely related to a lower grade, a lower TNM stage, and better overall survival (Yang et al., 2018). The spatial cellular expression patterns of RTN2 have not been investigated in colon cancer, but RTN2 expression was found to be positively correlated with degenerative disorder (Montenegro et al., 2012). Thus, we speculated that RTN2 could act as an anticancer target and a biomarker for the prognosis of colon cancer. The current findings indicate that the four-gene signature is an effective marker for predicting the survival prognosis of colon cancer patients. There were still limitations to the current research, as we studied only the mRNA expression of genes, which is not always related to their specific biological activities. Second, the detailed mechanism still needs to be investigated in further experiments. CONCLUSION In summary, we identified two new prognostic subtypes with significant differences in predicting colon cancer patient survival according to gene expression data from TCGA. Furthermore, we constructed a risk score model derived from four genes to predict the prognosis of colon cancer patients. This study suggests that gene expression profiles show the molecular characteristics of different subsets of colon cancer. Our results could facilitate the design of future clinical trials to identify colon cancer patients who could benefit from adjuvant chemotherapy. DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/ Supplementary Material. AUTHOR CONTRIBUTIONS YD designed the study, reviewed, and edited the manuscript. TS and ZC contributed to the literature search. HJ contributed to the data acquisition. XZ contributed to the statistical analysis. TS wrote the initial draft of the manuscript. All authors read and approved the manuscript.
LHC13 forward elastic scattering: Dynamical gluon mass and semihard interactions In the context of a QCD-based model with even-under-crossing amplitude dominance at high-energies, it is shown that the $pp$ and $\bar{p}p$ elastic scattering data on $\sigma_{tot}$ and $\rho$ above 10 GeV are quite well described, especially the recent TOTEM data at 13 TeV. Specifically, we investigate the role of low-$x$ parton dynamics in dictating the high-energy behavior of forward scattering observables at LHC energies, by using a nonpertubative cutoff linked to the dynamical generation of a gluon mass. Unexpected features of the data, such as the rather small $\rho$ value at 13 TeV recently reported by the TOTEM Collaboration, are addressed using an eikonalized elastic amplitude, where unitarity and analyticity properties are readily build in. The model provides an accurate global description of $\sigma_{tot}$ and $\rho$ with pre- and post-LHC fine-tuned parton distributions, CTEQ6L and CT14, even if data at 8 and 13 TeV are not included in the dataset analyzed. These findings suggest that the low-$x$ parton dynamics, as well as the nonperturbative dynamics of QCD, play a major role in the driving mechanism behind the pre-asymptotic $\rho$ decrease at LHC energies. I. INTRODUCTION The elastic hadronic scattering at high energies represents a rather simple kinematic process. However, its complete dynamical description is still a fundamental problem in QCD, since the confinement phenomena precludes a pure perturbative approach. Over the past few years, the LHC has released precise measurements of elastic proton-proton scattering which has become an important guide for selecting models and theoretical approaches, looking for a better understanding of the theory of strong interactions. Among other physical observables, two forward quantities play a fundamental role in the investigation of the elastic scattering at high energies, the total cross section and the ρ parameter, which can be expressed in terms of the scattering amplitude A(s, t) by σ tot (s) = 4πIm A(s, t = 0), where s and t are the Mandelstam variables and t = 0 indicates the forward direction. Recently, the TOTEM Collaboration has provided new experimental measurements on σ tot and ρ from LHC13, the highest energy reached in accelerators. In a first paper [1], by using as input ρ = 0.10, the measurement of the total cross section yielded σ tot = 110.6 ± 3.4 mb. In a subsequent work [2], an independent measurement of the total cross section was reported, together with the first measurements of the ρ parameter: ρ = 0.10 ± 0.01 and ρ = 0.09 ± 0.01. Although the values of σ tot are in consensus with the increase of previous measurements by TOTEM, the ρ values indicate a rather unexpected decrease, as compared with measurements at lower energies and predictions from the wide majority of phenomenological models. This new information has originated a series of recent papers and discussions on possible phenomenological explanations for the rather small ρ-value. The main concern in these theoretical discussions is the full understanding of the Odderon concept (a crossing odd color-singlet with at least three gluons) [3][4][5] and of the Pomeron one (a crossing even color-singlet with at least two gluons) [6,7]. In this rather intricate scenario, we present here a phenomenological description of the forward pp andpp elastic scattering data in the region 10 GeV -13 TeV. In our model the behavior of the forward quantities σ tot (s) and ρ(s), given by Eqs. 
(1) and (2), are expected to be asymptotically dominated by the so-called semihard interactions. This type of process originates from hard scattering of partons which carry a very small fraction of the momenta of their parent hadrons, leading to the appearance of minijets [31,32]. The latter can be viewed simply as jets with transverse energy much smaller than the total center-of-mass energy available in the hadronic collision. The energy dependence of the cross sections is driven mainly by semihard elementary processes that include at least one gluon in the initial state, since at low x they are responsible for the dominant contribution. In our QCD-based formalism these partonic processes are written by means of the standard QCD cross sections convoluted with updated sets of partonic distribution functions. However, these processes are potentially divergent at low transferred momenta, and for this reason they must be regularized by means of some cutoff procedure. In a nonperturbative QCD context, one natural regulator was introduced by Cornwall some time ago [33], and since then has become an important feature in eikonalized models [34][35][36][37][38]. This regularization process is based on the increasing evidence that the gluon may develop a momentum-dependent mass, which introduces a natural scale able to separate the perturbative from the nonperturbative QCD region. Thus, taking into account the possibility that the infrared properties of QCD can, in principle, generate an effective gluon mass, we explore the nonperturbative aspects of QCD in order to describe the total cross section and the ratio of the real-to-imaginary parts of the forward elastic scattering amplitude in pp andpp collisions. Most importantly, two components are considered in our eikonal representation, one associated with the semihard interactions and calculated from QCD and a second one associated with soft contributions and based on the Regge-Gribov phenomenology. Except for an odd under crossing Reggeon contribution, necessary to distinguish between pp andpp scattering at low energies, all the dominant components at high energies (soft and semihard) are associated with even under crossing contributions, namely we have Pomeron dominance and absence of Odderon. The work is organized as follows. In Sect. II a short review on the concept of the dynamical gluon mass is presented. In Sect. III we introduce all the inputs and details concerning our QCD-based model and in Sect. IV we specify the data set and the fit procedures. In Sect. V the fit results are presented, followed by a discussion on the corresponding physical interpretations and implications. Our conclusions and final remarks are the contents of Sect. VI. The paper is complemented by four appendixes, where it is presented: details on the analytical parametrization for the partonic cross section (A), tests related to the effect of the leading soft contribution (B), energy-independent semihard form factor (C) and changes in the dataset (D). II. THE DYNAMICAL GLUON MASS As pointed out in the previous section, scattering amplitudes of partons in QCD contain infrared divergences. One procedure to regulate this behavior is by means of a dynamical mass generation mechanism which is based on the fact that the nonperturbative dynamics of QCD may generate an effective momentum-dependent mass M g (Q 2 ) for the gluons, while preserving the local SU (3) c invariance [39][40][41]. 
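For reference, the functional forms usually quoted from Cornwall's work for the dynamical gluon mass and the infrared-finite effective charge, to which the following section refers as Eqs. (6) and (7), can be written as below. This is a hedged reconstruction of the standard expressions rather than a transcription of the paper's own equations, and the exact conventions used in the fits should be checked against the original references.

```latex
M_g^2(Q^2) \;=\; m_g^2\left[\frac{\ln\!\big((Q^2 + 4m_g^2)/\Lambda^2\big)}
                                 {\ln\!\big(4m_g^2/\Lambda^2\big)}\right]^{-12/11},
\qquad
\bar{\alpha}_s(Q^2) \;=\; \frac{4\pi}{\beta_0\,
      \ln\!\big[(Q^2 + 4M_g^2(Q^2))/\Lambda^2\big]} .
```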
The dynamical mass M g (Q 2 ) introduces a natural nonperturbative scale and is linked to a finite infrared QCD effective chargeᾱ s (Q 2 ). The existence of a dynamical gluon mass is strongly supported by QCD lattice results. More specifically, lattice simulations reveal that the gluon propagator is finite in the infrared region [42][43][44][45][46][47][48][49] and this result corresponds, from the Schwinger-Dyson formalism, to a massive gluon [33,[50][51][52][53][54]. It is worth mentioning that infrared-finite QCD couplings are quite usual in the literature (for a recent review, see [55]). In addition to the evidence already mentioned in the lattice QCD, a finite infrared behavior of α s (Q 2 ) has been suggested, for example, in studies using QCD functional methods [56][57][58], and in studies of the Gribov-Zwanziger scenario [59][60][61]. Since the gluon mass generation is a purely dynamical effect, a formal continuum approach for tackling this nonperturbative phenomenon is provided by the aforementioned Schwinger-Dyson equations that govern the dynamics of all QCD Green's functions [33, 50-54, 62, 63]. These equations constitute an infinite set of coupled nonlinear integral equations and, after a proper truncation procedure, it is possible to obtain as a solution an infrared finite gluon propagator, while preserving the gauge invariance (or the BRST symmetry) in question. In this work we adopt the functional forms of M g andᾱ s obtained by Cornwall [33] via the pinch technique in order to derive a gauge invariant Schwinger-Dyson equation for the gluon propagator and the triple gluon vertex: where Λ is the QCD scale parameter, β 0 = 11−2n f /3 (n f is the number of flavors) and m g is the gluon mass scale to be phenomenologically adjusted in order to yield well founded results in strongly interacting processes. Note that the dynamical mass M 2 g (Q 2 ) vanishes in the limit Q 2 Λ 2 . It is thus evident that in this same limit the effective chargeᾱ s (Q 2 ) matches with the one-loop perturbative coupling: In the limit Q 2 → 0, in turn, the effective chargeᾱ s (Q 2 ) have an infrared fixed point, i.e. the dynamical mass tames the Landau pole. More precisely, if the relation m g /Λ > 1/2 is satisfied thenᾱ s (Q 2 ) is holomorphic (analytic) on the range 0 ≤ Q 2 ≤ Λ 2 [37]. In fact, this is the case, since the values of the ratio m g /Λ obtained phenomenologically typically lies in the interval A. Eikonal Representation The correct calculation of high-energy hadronic interactions must be compatible with analyticity and unitarity constraints, where the latter is satisfied simply by means of eikonalized amplitudes. We adopt the following normalization for the elastic scattering amplitude: where s is the square of the total center-of-mass energy, b is the impact parameter, q 2 t = −t is the usual Mandelstam invariant, with the complex eikonal function denoted by In this picture Γ(s, b) = 1 − e −χ(s,b) is the profile function, which, by the shadowing property, describes the absorption effects resulting from the opening of inelastic channels. In addition, in the impact parameter space and according the unitarity condition of the scattering S-matrix it may be also written as Therefore, the scattering process cannot be uniquely inelastic since the elastic amplitude receives contributions from both elastic and inelastic channels. In this representation P (s, b) = e −2χ R (s,b) can be defined as the probability that neither hadron is broken up in a collision at a given b and s. 
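The eikonal representation described above can be summarised by the following standard relations, written here in one common normalisation consistent with σ_tot(s) = 4π Im A(s, t = 0). The overall sign of χ_I, and hence of the numerator of ρ, depends on the convention adopted for the eikonal, so these expressions should be read as a sketch of the structure rather than the paper's exact equations.

```latex
A(s,q_t^2) \;=\; i\!\int_0^\infty b\,db\,J_0(q_t b)\,\Gamma(s,b),
\qquad
\Gamma(s,b) \;=\; 1 - e^{-\chi(s,b)},
\qquad
\chi(s,b) \;=\; \chi_R(s,b) + i\,\chi_I(s,b),

\sigma_{tot}(s) \;=\; 4\pi\!\int_0^\infty b\,db\left[\,1 - e^{-\chi_R(s,b)}\cos\chi_I(s,b)\right],

\rho(s) \;=\; \frac{-\displaystyle\int_0^\infty b\,db\; e^{-\chi_R(s,b)}\sin\chi_I(s,b)}
                   {\displaystyle\int_0^\infty b\,db\left[\,1 - e^{-\chi_R(s,b)}\cos\chi_I(s,b)\right]},
\qquad
P(s,b) \;=\; e^{-2\chi_R(s,b)} .
```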
Such an absorption factor is crucial to determine rapidity gap survival probabilities in pp andpp scattering at high-energies, which in turn are crucial to disentangle inelastic diffractive (single and double) and central exclusive processes from the dominant minimumbias (non-diffractive) cross section [74,75]. Within the eikonal representation, Eq. (9), the total cross section and the ρ parameter in Eqs. (1) and (2) are given by: The eikonals for elastic pp andpp scattering are connected with crossing even (+) and odd (−) eikonals by Real and imaginary parts of the eikonals can be connected either by Derivative Dispersion Relations (DDR) [76][77][78][79][80][81] or Asymptotic Uniqueness (AU), which is based on the Phragmén-Lindelöff theorems [82,83] (see [84], appendixes B,C,D for a recent short review on these subjects). We have tested both methods and in what follows we present the results with the AU approach, also referred to as asymptotic prescriptions or real analytic amplitudes [83]. B. Semihard and Soft Contributions The eikonal function is assumed to be the sum of the soft and the semihard (SH) parton interactions in the hadronic collision [85,86], with each one related, in the general case, to the corresponding crossing even and odd contributions: In what follows we specify the inputs for each one of the four aforementioned contributions to the eikonal. Semihard Contributions and the Dynamical Gluon Mass The fundamental basis of models inspired upon QCD, or also known as minijet models, is that the semihard scatterings of partons in hadrons are responsible for the observed increase of the total cross section. Here we assume a Pomeron dominance, represented by a crossing even contribution, namely we consider that the semihard odd component does not contribute with the scattering process, In respect to the even contribution, it follows from the QCD improved parton model. At leading order, this semihard eikonal can be factorized as where W SH (s, b) is the overlap density distribution of semihard parton scattering, σ QCD denotes the cross section of hard parton scattering in the region where pQCD can be safely applied, namely above the cutoff Q 2 min . We assume (as in previous studies [37]) that hard parton scattering configuration in the transverse plane of the collision (in b-space) to be given by the Fourier-Bessel transform: where with ν SH = ν SH (s) taken as an energy dependent scale of the dipole. Specifically, we assume a logarithmic dependence for ν SH , namely: where ν 1 and ν 2 are two free fit parameters and the scale √ s 0 = 5 GeV is fixed. Regarding this dependence of the form factor on the energy, though not being formally established in the context of QCD, it is truly supported by the wealth of accelerator data available (as we shall see in Section IV) and seems to us more realistic than taking a static partonic configuration in b-space. In addition, many other phenomenological models have been proposed in literature (see e.g. [87][88][89][90][91][92][93]), in which the energy dependence in form factors play a crucial role in pp andpp elastic scattering dynamics and, therefore, in accurate descriptions of the data beyond √ s ∼10 GeV. The dynamical contribution, σ QCD (s), is calculated using perturbative QCD as follows: where x 1 and x 2 are momentum fraction carried by partons in the hadrons A and B, respectively,ŝ = x 1 x 2 s, |t| ≡ Q 2 stands for Mandelstam invariants of partonparton scatterings such as e.g. 
gg → gg, qg → qg and gg →qq (whose partonic cross sections are given afterwards) and f i/A (x 1 , |t|), f j/B (x 2 , |t|) are the parton distribution functions (PDFs) for partons i and j. The indexes i, j = q,q, g identify quark (anti-quark) and gluon degrees of freedom and Q 2 min represent the minimum momentum transfer scale allowing for pQCD calculations of partonic hard scattering, obeying the constraint 2Q 2 min < 2|t| <ŝ. Concerning the differential cross section at elementary level, the major contribution at high energies are the ones initiated by gluons 1 i. gluon-gluon elastic scattering, ii. quark-gluon elastic scattering, iii. gluon fusion into a quark pair, with kinematical constraints imposed and connected with the dynamical mass, namely: (i)ŝ +t +û = 4M 2 g (Q 2 ), for gluon elastic scattering (gg → gg) and (ii)ŝ +t +û = 2M 2 g (Q 2 ) + 2M 2 q (Q 2 ) for gluon fusion (gg →qq) and quark-gluon scattering qg → qg. Importantly, in what follows we assume the Cornwall's dynamical gluon mass (in Euclidean space) [33], Eq. (6), with the infrared frozen effective QCD charge, Eq. (7), to interpolate two QCD domains: (i) Q 2 ≈ 0, i.e. at infrared, where M 2 g freezes and the gluons carries an effective bare mass, , dynamical mass generation from nontrivial vacuum structure becomes unimportant and perturbative QCD limit is achieved. As discussed in Section II, recent phenomenology and lattice studies support bare gluon masses in the range, m g : 300 − 700 MeV. Here we fix m g = 400 MeV while also accounting, for completeness, the subdominant role of dynamical quark generation at high energies. We assume, for simplicity which also recovers the bare mass m q (with m q < m g ) at infrared and reaches the massless quark limit for Q 2 m 2 q . In all calculations we take m q = 250 MeV as fixed scale. At last, as commented before, the complex eikonal χ + SH (s, b) is determined through the asymptotic even prescription s → −is. The details on this dependence and the evaluation of the real and imaginary parts of σ QCD (s) are presented and discussed in Appendix A. Soft Contributions The full even and odd soft contributions are based on the Regge-Gribov formalism and are constructed in accordance with Asymptotic Uniqueness (Phragmén-Lindelöff theorems). Assuming also leading even component, they are parametrized by where denote analytical even and odd cross sections and A, B, C and D are free fit parameters. Moreover, the impact parameter structure derives from bidimensional Fourier transform of dipole form factors, namely: GeV is a fixed parameter and µ + sof t a free fit parameter. As in the case of the SH form factor, the energy scale is fixed at We notice that in the Regge-Gribov context, the soft even contribution consists of a Regge pole with intercept 1/2, a critical Pomeron and a triple-pole Pomeron, both with intercept 1. The odd contribution is associated with only a Regge pole, with intercept 1/2. IV. DATASET AND FIT PROCEDURES In the absence of ab initio theoretical QCD arguments to determine the parameters A, B, C, D, µ + sof t , ν 1 and ν 2 , we resort to a fine-tuning fit procedure described in what follows. As we are interested in the very high-energy behavior of σ tot and ρ, we shall use only pp andpp elastic scattering data. Moreover, in order to test our QCDbased model in the t = 0 limit, we perform global fits that include exclusively forward data, as described in Section III. A. 
Dataset Our dataset is compiled from a wealth of collider data on pp andpp elastic scattering, available in the Particle Data Group (PDG) database [94] as well as in the very recent papers of LHC Collaborations such as TOTEM [1,2,95,96] and ATLAS [97,98], which span a large c.m. energy range, namely 10 GeV √ s 13 TeV. For the sake of clarity and completeness we furnish in Table I all the recent LHC data on σ tot and ρ, still absent in the PDG2018 review. We call attention to the fact that we do not apply to this dataset, composed of 174 data points on σp p,pp tot and ρp p,pp , any sort of selection or sieving procedure, which might introduce bias in the analysis. TABLE I. Total cross section, σtot, and ρ-parameter data recently measured by TOTEM and ATLAS Collaborations at the LHC, but not compiled in the PDG2018 review [94]. This dataset totalizes 13 new data points on pp forward elastic scattering at high energies, most of which are currently published. For completeness, we provide all the appropriate references to the data we have used in our fits in the last column. B. Fit Procedures To provide statistical information on fit quality, we perform a best-fit analysis, furnishing as goodness of fit parameters the chi-squared per degrees of freedom (χ 2 /ζ) and the corresponding integrated probability, P (χ 2 , ζ) [102]. Since our model is highly nonlinear, numerical data reduction is called for. Despite the limitation of treating statistical and systematical uncertainties at the same foot, we apply the χ 2 /ζ tests to our dataset with uncertainties summed in quadrature 2 . Our fits are done using the TMINUIT class of the ROOT framework [105], through the MIGRAD algorithm. While the number of calls of the MIGRAD routine may vary in the fits with PDFs CETQ6L, CT14 and MMHT, full convergence of the algorithm was always achieved. Moreover, all data reductions were performed with the interval χ 2 − χ 2 min = 8.18, which corresponds to 68.3 % of Confidence Level (1σ) [106] in our case (7 free parameters). Furthermore, in all fits performed we set the low energy cutoff, lowing we present our results, according to the choice of three distinct PDFs: CTEQ6L [107] (pre-LHC), CT14 [108] and MMHT [109] (fine-tuned with LHC data) and setting three different high-energy cutoffs, as previously discussed. In testing different PDFs we look for a better understanding of the impact of low-x parton dynamics in defining the very high-energy behavior of σp p,pp tot and ρp p,pp . For comparison, the behavior of the gluon distribution function in each PDF set in given in Figs. 1 and 2. V. RESULTS AND DISCUSSION The results for the free fit parameters, using each one of the three PDFs (CTEQ6L, CT14, MMHT) and for each high-energy cutoff in the dataset ( √ s max = 13 TeV, 8 TeV and 7 TeV), are displayed in Table II, together with the statistical information on the data reductions (reduced chi square and corresponding integrated probability). The curves of σ tot (s) and ρ(s) for the three PDFs, compared with the experimental data, are shown in Figures Fig. 3, the results are in plenty agreement with all the σ tot data, independently of the PDF employed. For ρ the results with CTEQ6L and CT14 also describe quite well the TOTEM data at 13 TeV (and data at lower energies), but that is not the case with MMHT. Indeed, from Table II, in this case the integrated probability is the smallest one among the three PDFs. Notice that the result with CT14 (finetuned with LHC data) gives exactly ρ = 0.1 at 13 TeV. 
Despite a barely underestimation of the ρ datum from pp at 546 GeV, we conclude that our QCD-based model with CTEQ6L and CT14 provides a consistent description of the forward data in the interval 10 GeV -13 TeV, mainly a simultaneous agreement with the σ tot and ρ [107], CT14 [108] and MMHT [109] for highenergy cutoffs √ smax = 13 TeV, 8.0 TeV and 7.0 TeV. Quality fit estimators, chi-squared per degree of freedom, χ 2 /ζ, and integrated probability, P (χ 2 ; ζ), are also furnished (where ζ specifies the number of degrees of freedom (dof) in each fit). Table II, the integrated probability with √ s max = 7 TeV is the highest one among the three cutoffs and the corresponding predictions at higher energies indicate the decreasing in ρ(s). These results show the powerful predictive character of the results, since the σ tot and ρ data at 13 TeV are simultaneously described in all cases, even with √ s max = 7 TeV (for PDFs CT14 and CTEQ6L) and without Odderon contribution. In addition, looking for some insights into the formalism, it may be important to notice the effects of two phenomenological inputs, one related to the soft even eikonal and the other to the semihard form factor. In the first case, χ + sof t (s, b) as given by Eq. (26), has a component which increases with the energy, namely the term with coefficient C. In the second case, the dipole form factor G SH (s, k ⊥ ; ν SH ), Eqs. (19) and (20), also depends on the energy through the logarithmic. The effect of these terms can be investigated by assuming either C = 0 or ν 2 = 0 and re-fitting the dataset. These tests are presented and discussed in Appendixes B and C. By showing the values in the Table II, we can see that the parameter µ + sof t has, in general, the value 0.90 GeV. This restriction is due to the fact that the inverse of both µ + sof t and µ − sof t parameters characterizes the range of these soft interactions. Since the odd soft eikonal χ − sof t (s, b) is more sensitive to the longer-range ρ and ω exchanges, it is expected the inverse of the odd exchanges, (µ − sof t ) −1 , to be larger than the inverse of the even (a 2 and f 2 ) exchanges, (µ + sof t ) −1 . Thus in our analysis we impose the reasonable condition 1 < µ + sof t /µ − sof t ≤ 1.8. Indeed, in all cases the parameter µ + sof t fall within the expected range. Next we turn the focus to the physical intepretations of our results, mainly concerning high-energy QCD dynamics. In QCD-based (s-channel) models like ours, the driving mechanism behind the rapid rise of the total cross section is linked to the growth with energy of low-p t jets (called minijets). This idea, while proposed many years ago, remains a powerful one in the scope of models of strong interactions at high-energies, as it provides a clear connection between perturbative QCD and hadronic elastic observables, such as σ tot and ρ, in a unitarized framework. Those minijets arise from partonic interactions (mainly gluons) carring very small momentum fraction of their parent hadrons. On the one hand, from eq. (21), we see that the smallest x scale probed by this model is which, taking Q 2 min 1 GeV 2 , yields x min ∼ 10 −10 at LHC13. On the other, it is well-known that at very lowx the PDF's diverge, as gluon emissions -which naturally occur in any partonic process at high energiesare not suppressed by DGLAP evolution at higher momentum transferred. 
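The low-x growth of the gluon density discussed above can be inspected directly. The sketch below assumes the LHAPDF library with its Python bindings and locally installed grids; the set names chosen for CTEQ6L, CT14 and MMHT are assumptions and may differ from the exact members used in the paper, and values requested below a set's tabulated x range are extrapolations.

```python
import lhapdf  # requires LHAPDF with its Python bindings and the grids installed locally

# Assumed LHAPDF set names for the PDFs named in the text.
set_names = {"CTEQ6L": "cteq6l1", "CT14": "CT14lo", "MMHT": "MMHT2014lo68cl"}

Q = 1.3  # GeV, the minimum scale at which the gluon distributions are compared
for label, name in set_names.items():
    pdf = lhapdf.mkPDF(name, 0)          # central member
    for x in (1e-6, 1e-5, 1e-4, 1e-3, 1e-2):
        xg = pdf.xfxQ(21, x, Q)          # PDG id 21 = gluon; returns x*g(x, Q)
        print(f"{label:7s}  x = {x:7.1e}   xg(x, Q) = {xg:9.2f}")
```

Comparing the three sets at the lowest x values makes the faster low-x growth of one set relative to the others immediately visible, which is the behaviour invoked above to explain the different σ_SH(s) curves.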
This behaviour can be readily seen from Figure 1 and 2 where the gluon distribution function from parton distributions CT14, CTEQ6L and MMHT are displayed at the minimum scale Q min = 1.3 GeV and two higher scales, Q = 10 GeV and 100 GeV. From these plots one may notice that MMHT grows faster than CT14 and CTEQ6L, specially at low momentum scales, such as Q min = 1.3 GeV. As matter of fact, very low-x gluons are the key ingredient to understand our results for various PDF's, as shown in Figures 3, 4 and 5. Once the QCD cross section (21) is dominated by low-x partons, and gluon iniciated processes are the leading component of this cross section, one expects the magnitude of σ SH (s) calculated with MMHT to be larger than the corresponding curves for CT14 and CTEQ6L at high energies. As we show in Figure 6 in Appendix A, that turns out to be exactly the case. Table II. VI. CONCLUSIONS In this paper we have presented recent studies of pp and pp elastic scattering within an eikonal QCD-based model, which combines the perturbative parton-model approach to model the semihard interactions among partons, with a Regge-inspired model to describe the underlying soft interactions. We present a phenomenological analysis undertaken to improve the understanding of elastic processes taking place in the LHC. We address this issue by means of a model involving only even-under-crossing amplitudes at very high energies. As a result, we see that Best fit parameters and quality estimators are given in Table II. the QCD-based model allows us to describe successfully the forward scattering quantities σ tot and ρ from √ s = 10 GeV to 13 TeV. Nowadays, with the recent release of LHC13 data by the TOTEM Collaboration, it seems that we have achieved a true impasse: (i) in the Regge phenomenology context, LHC13 data is interpreted as clear evidence for the Odderon discovery, in the maximal strong scenario (namely the maximal Odderon) [8,9] ; (ii) however, in other t-channel approaches, based on eikonal rescatterings, such as [17,18], a small or vanishing Odderon contribution at 13 TeV is found to be compatible with the Table II. real-to-imaginary ratio, ρ = 0.10 ± 0.01, measured by TOTEM; (iii) in addition, violations of t-channel unitarity have also been addressed in [110] and seems to be unavoidable if QCD interactions manifests in the strongest form; (iv) other approaches, based on s-channel unitarity, such as ours, find the LHC13 data on forward observables to be compatible with a vanishing high-energy odd under crossing amplitude. According to this picture, a detailed scrutiny on the asymptotic nature of the C-parity of the scattering amplitude continues to be a core task in physics. Hence, we devote most of this paper to analyzing forward observables in hadron-hadron collisions, bringing up information about the infrared properties of QCD by considering the possibility that the nonperturbative dynamics of QCD generate an effective charge. Our analysis, which follows a previous short letter [111], explores in detail the various effects that could be important in the global fits, in special three major points: (i) the use of three different PDFs (CT14,CTEQ6L and MMHT), investigating not only the difference and similarities among them, but also the effect of being pre or post LHC distributions; (ii) the study of their compatibility with the LHC13 data; (iii) the descriptions and predictions provided according to three high energy cutoffs, namely √ s max = 7, 8 and 13 TeV. 
On general grounds, the present results demonstrate an overall agreement of all PDFs with σ tot at 13 TeV and, apart from MMHT, an excellent agreement with ρ at the same energy. From a rigorous statistical point of view, our results show that the TOTEM measurements can be simultaneously well described by a QCD scattering amplitude dominated by only single crossing-even elastic terms. At first glance, the behavior of the ρ parameter obtained by means of the MMHT set could be regarded as a consequence of its gluon steeply-rising component, as depicted in Figs. 1 and 2. We observe that its gluon distribution function increases rapidly and becomes higher than the CTEQL and CT14 gluon distributions. Note that this rapid variation, around the initial scale Q = 1.3 GeV, occurs in the kinematic region that contributes most to the integral (21). We argue that the success of our model in describing the unexpected ρ decrease at LHC13 may be attributed to the effect of introducing infrared properties of QCD, by considering that the nonperturbative dynamics of QCD generate an effective gluon mass. Specifically, the essential inputs of our model, namely the low-x behavior of parton distribution functions and the dynamical gluon mass scale, are found to be crucial in the phenomenological description of present available data at center-of-mass energies spanning from 10 GeV to 13 TeV. This mass scale is a natural regulator for the potentially divergent partonic processes and apparently also plays an important role in the unexpected decrease of the ρ parameter at high energy. The study of infrared properties of QCD is currently a subject of intense theoretical interest. Our expectation is to improve the understanding about the influence of the dynamical-mass generation mechanism on semihard processes. Appendix A: Parametrization for σQCD(s) One of the most important ingredient of the QCDbased model is the even-under-crossing partonic crosssection σ QCD (s), given by Eq. (21). In this appendix we present the details of the evaluation of this quantity, using three distinct PDFs: CTEQ6L [43], CT14 [44] and MMHT [45]. Some additional results are presented and discussed. The evaluation is based on the steps that follow. First we consider the complex analytic parametrization where b 1 , ..., b 10 are free fit parameters and provides the adequate complex and even character of the analytic function through the substitution s → −is, leading to Re σ QCD (s) and Im σ QCD (s). Next, by means of Eq. (21) and using the three distinct PDFs, we generate around 30 points for each one of these parton distributions, which are then fitted by the Re σ QCD (s), with less than 1% error. With the values of the free fit parameters determined for each PDF, the corresponding Im σ QCD (s) are evaluated. For CTEQ6L, CT14 and MMHT we display in Table III the best-fit parameters b i , i = 1, · · · , 10 and in Fig. 6 the dependencies of Re σ QCD (s) and Im σ QCD (s). From the figure, we see in all cases the steep rise of the partonic cross-sections with the energy. For example at √ s = 10 TeV, most results lie around 580 mb. Notice, however, that this rise is tamed in the physical crosssections, since we have an eikonalized model. We note that among the PDFs post-LHC, MMHT led to the fastest rise of both Re σ QCD (s) and Im σ QCD (s) and CT14 led to the slowest rise. The results with CTEC6L (pre-LHC) lie between these two cases. Table III. 
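The "asymptotic even prescription" s → −is used in the appendix to obtain Re σ_QCD(s) and Im σ_QCD(s) from the analytic parametrization can be illustrated on a generic even form. The polynomial in ln s and its coefficients below are placeholders, chosen purely to show the mechanics of the substitution ln s → ln s − iπ/2; the actual ten-parameter form and the fitted b_i of Table III are not reproduced here.

```python
import numpy as np

# Placeholder coefficients of a generic parametrization sigma(s) = sum_k b_k [ln(s/s0)]^k.
# Purely illustrative; not the ten-parameter form fitted in the appendix.
b = [300.0, 5.0, 1.2, 0.08]   # mb
s0 = 1.0                      # GeV^2, assumed reference scale

def sigma_qcd_complex(s):
    """Apply the even prescription s -> -i s, i.e. ln(s) -> ln(s) - i*pi/2."""
    L = np.log(s / s0) - 1j * np.pi / 2.0
    return sum(bk * L**k for k, bk in enumerate(b))

for sqrt_s in (1e2, 1e3, 1e4):            # GeV
    val = sigma_qcd_complex(sqrt_s**2)
    print(f"sqrt(s) = {sqrt_s:8.0f} GeV   Re = {val.real:8.1f} mb   Im = {val.imag:8.1f} mb")
```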
The extreme fast rise of σ QCD (s) in case of MMHT, may be the responsible for the overestimation of ρ at 13 TeV, a result which is independent of the high-energy cutoff (Figures 1, 2 and 3). Appendix B: Effect of the leading contribution in χ + sof t (s, b) One of the ingredients of the QCD-based model is the soft-even component of the eikonal, Eqs. (26) and (27), which comprise a leading Pomeron contribution given by the quadratic term in Eq. (27), with coefficient C. In order to investigate the relevance of this leading soft contribution at high energies in our global results, we present here a test in which this term is excluded. Specifically, we fix C = 0 in Eq. (27) and refit the dataset. As illustration, we consider the high-energy cutoff at 13 TeV and the three PDFs employed in this work. The results of these fits are presented in Table IV and Fig. 7. Let us compare the results in Fig. 3 (C free fit parameter) with those in Fig. 7 (C = 0 fixed), focusing the TOTEM data at 13 TeV (inserts) in the cases of PDFs CT14 and CTEQ6L. From Fig. 3, the results for σ tot cross the middle of the lower error bar and for ρ they cross the central value of the highest measurement. On the other hand, from Fig. 7 the results for σ tot barely reach the end of the lower error bar and for ρ they cross the middle of the upper error bar. We conclude that, although not being the leading contribution at the highest energies, the triple pole Pomeron in the soft component is important for the correct description of σ tot and ρ at 13 TeV and for an adequate fit result in statistical grounds. Although not so usual in the present phenomenological context, one of the ingredients of the QCD-based model is the energy dependence embodied in the semihard form factor, Eqs. (18) and (19). As commented in our introduction, this assumption is associated with the possibility of a broadening of the spacial gluon distribution as the energy increases. In order to investigate the relevance of this assumption in our global results, we present here a test in which this energy dependence is excluded. Specifically, we fix ν 2 = 0 in Eq. (20), so that ν SH = ν 1 and refit the data set. As illustration, we consider the highenergy cutoff at 13 TeV and the three PDFs employed in this work. The results of these fits are presented in Table V and Fig. 8. Let us compare the results in Fig. 3 (ν 2 free fit parameter) with those in Fig. 8 (ν 2 = 0 fixed), focusing the TOTEM data at 13 TeV (inserts) in the cases of PDFs CT14 and CTEQ6L. For ρ(s), the results with ν 2 = 0 indicate a steeper decrease at high energies, present agree- Table IV. ment with the pp ρ data and also with thepp data at 546 GeV. However, for σ tot (s) with both PDFs the results lie far below the lower error bars. In respect the statistical quality of the fits, comparison of Tables II (ν 2 free fit parameter) and V (ν 2 = 0 fixed) shows that the exclusion of the energy dependence results in a rather unaccepted goodness of fit since χ 2 /ζ increase to 1.3 − 1.4 and P (χ 2 ) decrease to 10 −3 − 10 −4 , at least two order of magnitude smaller. We conclude that the broadening of the spacial gluon distribution, as provided by Eqs. (18) and (19), is an important ingredient for the adequate description of both σ tot and ρ data at the LHC energy region. Here we develop two tests on the efficiency of the QCD-based model related to two different choices of the dataset. 
In the first test the low-energy cutoff is lowered from 10 GeV down to 5 GeV and in the second test the ATLAS data at 7 and 8 TeV are not included in the dataset. We present the results obtained with the three PDFs and as illustration, we consider only the high-energy cutoff at 13 TeV. Since the results are similar to those presented in the main text with our standard dataset, we focus the discussion on those obtained with the PDF CT14. Although the integrated probability decreases two order of magnitudes for √ s min = 5.0 GeV, we see that the visual description of the data is quite good and the quality of the fit is reasonable for this data set (without any sieve procedure), showing that the model can cover efficiently the whole region 5 GeV -13 TeV. D.2: Fits without the ATLAS data It is well known the discrepancies between the TOTEM and ATLAS data on σ tot at 7 and 8 TeV [112]. Here we present two tests with low-energy cutoffs at 10 GeV and 5 GeV, in which the ATLAS data are not included in the data set. The results are presented in Table VII, Figure 10 ( √ s min = 10 GeV) and Table VIII, Figure 11 ( √ s min Table V = 5 GeV). Our results with the complete dataset (ATLAS data included) are shown in Table II, Fig. 1 for √ s min = 10 GeV ( √ s max = 13 TeV, PDF CT14) and Table VIII, Fig. 9 for √ s min = 5 GeV. By comparing the results we see that, without the ATLAS data, for both cutoffs the integrated probability increases as a consequence of the aforementioned discrepancies. In particular, it is interesting to note that with the exclusion of the ATLAS data, for √ s min = 10 GeV we obtain χ 2 /ζ = 1.071 for ζ = 165, resulting in the highest integrated probability: P (χ 2 ; ζ) = 0.25.
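The quoted goodness-of-fit figures can be checked numerically with the chi-square survival function; the snippet below reproduces the integrated probability of about 0.25 from χ²/ζ = 1.071 with ζ = 165.

```python
from scipy.stats import chi2

# Values quoted for the fit without the ATLAS data and sqrt(s)_min = 10 GeV.
chi2_per_dof = 1.071
dof = 165

p_value = chi2.sf(chi2_per_dof * dof, dof)   # integrated probability P(chi^2; dof)
print(f"chi2 = {chi2_per_dof * dof:.1f}, dof = {dof}, P = {p_value:.2f}")  # ~0.25
```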
Differential Gene Expression in the Liver of the African Lungfish, Protopterus annectens, after 6 Months of Aestivation in Air or 1 Day of Arousal from 6 Months of Aestivation The African lungfish, Protopterus annectens, can undergo aestivation during drought. Aestivation has three phases: induction, maintenance and arousal. The objective of this study was to examine the differential gene expression in the liver of P. annectens after 6 months (the maintenance phase) of aestivation as compared with the freshwater control, or after 1 day of arousal from 6 months aestivation as compared with 6 months of aestivation using suppression subtractive hybridization. During the maintenance phase of aestivation, the mRNA expression of argininosuccinate synthetase 1 and carbamoyl phosphate synthetase III were up-regulated, indicating an increase in the ornithine-urea cycle capacity to detoxify ammonia to urea. There was also an increase in the expression of betaine homocysteine-S-transferase 1 which could reduce and prevent the accumulation of hepatic homocysteine. On the other hand, the down-regulation of superoxide dismutase 1 expression could signify a decrease in ROS production during the maintenance phase of aestivation. In addition, the maintenance phase was marked by decreases in expressions of genes related to blood coagulation, complement fixation and iron and copper metabolism, which could be strategies used to prevent thrombosis and to conserve energy. Unlike the maintenance phase of aestivation, there were increases in expressions of genes related to nitrogen, carbohydrate and lipid metabolism and fatty acid transport after 1 day of arousal from 6 months aestivation. There were also up-regulation in expressions of genes that were involved in the electron transport system and ATP synthesis, indicating a greater demand for metabolic energy during arousal. Overall, our results signify the importance of sustaining a low rate of waste production and conservation of energy store during the maintenance phase, and the dependence on internal energy store for repair and structural modification during the arousal phase, of aestivation in the liver of P. annectens. Introduction Lungfishes are an archaic group of Sarcopterygian fishes characterized by the possession of a lung opening off the ventral side of the oesophagus. They hold an important position in the evolutionary tree with regard to water-land transition, during which many important physiological and biochemical adaptations occurred (e.g. air-breathing, urea synthesis, redirection of blood flow, heart partitioning). These adaptations supposedly facilitated the migration of fishes to terrestrial environments, leading to the evolution of tetrapods. There are six species of extant lungfishes, four of which (Protopterus aethiopicus, P. amphibius, P. annectens and P. dolloi) are found in Africa. African lungfishes are obligate air-breathers; they typically inhabit fringing weedy areas of lakes and rivers where dissolved oxygen levels are low, daytime temperatures are high, and seasonal drying is common. Without limbs to facilitate locomotion on land, lungfishes would have to passively tolerate desiccation, and aestivation could be the only means for survival under desiccation at high temperature. Aestivation involves corporal torpor at high environmental temperature with absolutely no intake of food and water for an extended period. 
African lungfishes can aestivate in subterranean mud cocoons for~4 years [1], which could be the longest aestivation period known for vertebrates. Traditionally, aestivation experiments on African lungfishes were performed either in mud or in cloth bags in the laboratory [2][3][4][5]. Chew et al. [6] were the first to achieve induction of aestivation in P. dolloi in pure mucus cocoons in air inside plastic boxes. Subsequently, it has been confirmed that P. annectens, P. aethiopicus [7][8][9][10][11] and P. amphibius (Y.K.I. and S.F.C, unpublished observation) can also be induced to aestivate in pure mucus cocoons in air. There are three phases of aestivation. During the induction phase in air, the fish detects environmental cues and turn them into some sort of internal signals that would instill the necessary changes at the behavioral, structural, physiological and biochemical levels in preparation of aestivation. It secretes a substantial amount of mucus which turns into a dry cocoon within 6-8 days. Aestivation begins when the fish is completely encased in a dried mucus cocoon, and there is a complete cessation of feeding and locomotor activities. During the maintenance phase, the fish has to preserve the biological structures and sustain a slow rate of waste production to avoid pollution of the internal environment. It can perpetuate to aestivate under such conditions for more than a year. The aestivating lungfish can be aroused from aestivation by the addition of water. Upon arousal, the fish struggles out of the cocoon and swims, albeit sluggishly, to the water surface to gulp air. After arousal, it excretes the accumulated waste products, and feeds for repair and growth. Completion of aestivation occurs only if arousal is successful; if not, the animal have had apparently succumbed to certain factors during the maintenance phase. Feeding begins approximately 7-10 days after arousal, and the fish grow and develop as normal thereafter. It is apparent that adaptive (physiological, biochemical and molecular) changes in various organs of the aestivating African lungfish would vary during the three phases of aestivation. However, the majority of studies in the past focused only on the maintenance phase, and there is a dearth of information on the induction and arousal phases of aestivation [12]. Loong et al. [13] pioneered in using suppression subtractive hybridization (SSH) polymerase chain reaction (PCR) to identify aestivation-specific gene clusters in the liver of P. annectens after 6 days (induction phase) of aestivation in a mucus cocoon in air (normoxia). They reported up-or down-regulation of several gene clusters which were involved in urea synthesis, prevention of clot formation, activation of the lectin pathway for complement activation, conservation of minerals (e.g. iron and copper) and increased production of hemoglobin beta. Since there were up-and down-regulation of mRNA expressions of genes related to ribosomal proteins and translational elongation factors, there could be simultaneous increases in protein degradation and protein synthesis during 6 days of aestivation, confirming the importance of reconstruction of protein structures in preparation for the maintenance phase of aestivation [13]. The liver is involved in diverse metabolic activities which include detoxification, oxidative defense, urea synthesis, carbohydrate and amino acid metabolism, and iron and copper metabolism. 
Even during the maintenance phase of aestivation, the liver has to continue functioning to detoxify ammonia to urea; only then, would the aestivating fish be able to mobilize protein and amino acid as an energy source for survival during the aestivation process. Therefore, in this study, we continued to examine the effects of 6 months of aestivation and 1 day arousal from 6 months of aestivation on the up-and down-regulation of genes in the liver of P. annectens using SSH PCR. SSH involves two types of cDNAs: testers (with treatment) and drivers (control). In order to examine differential gene expression in the liver during the maintenance phase (6 months) of aestivation (tester), liver of fish kept in fresh water was used as the driver. Results obtained would indicate changes in gene expression in aestivating fish with reference to non-aestivating fish. However, in order to examine differential gene expression in the liver during the arousal phase (1 day arousal from 6 months of aestivation) of aestivation (tester), liver of fish that had undergone 6 months of aestivation in air were used as driver instead. In this way, results obtained would reveal changes in gene expression in aroused fish with reference to aestivating fish. The zebrafish nomenclature system (see https://wiki.zfin.org/display/general/ZFIN+ Zebrafish+Nomenclature+Guidelines) for genes and proteins of fish origin and the human nomenclature (see http://www.genenames.org/guidelines.html) for genes and proteins of mammalian origin were adopted in this paper. Specifically, for fishes, gene symbols are italicized, all in lower case, and protein designations are the same as the gene symbol, but not italicized with the first letter in upper case. Collection and maintenance of fish Protopterus annectens (80-120 g body mass) were imported from Central Africa through a local fish farm in Singapore. They were maintained in plastic aquaria filled with dechlorinated freshwater at pH 7.0 and at 25°C in the laboratory. Water was changed daily. No attempt was made to separate the sexes. Fish were acclimated to laboratory conditions for at least 1 month before experimentation. During the adaptation period, fish were fed with frozen fish meat and food was withheld 96 h prior to experiments. Ethics Statement Approval to undertake this study was obtained from the Institutional Animal Care and Use Committee of the National University of Singapore (IACUC 035/09). Experimental conditions and tissue sampling Protopterus annectens were induced to aestivate at 27-29°C and 85-90% humidity individually in plastic tanks (L29 cm x W19 cm x H17.5 cm) containing 15 ml of dechlorinated tap water (adjusted to 0.3‰ with seawater) following the procedure of Chew et al. [6]. During the induction phase of aestivation, the experimental fish would secrete plenty of mucus during the first few days, and the mucus would slowly dry up between day 5 and day 7 to form a mucus cocoon. Aestivation was considered to begin when the fish was fully encased in the cocoon and displayed no locomotor activities. Protopterus annectens can be maintained in aestivation for a long period of time and this was regarded as the maintenance phase of aestivation. Fish maintained in freshwater served as controls. Control fish were killed with an overdose of neutralized MS222 (0.2%) followed with a blow to the head. Aestivating fish were killed on day 186 (6 months; prolonged maintenance phase) or after 1 day arousal from 6 months of aestivation with a blow to the head. 
The liver was quickly excised and frozen in liquid nitrogen. The frozen samples were kept at -80°C until analysis.

Total RNA and poly (A) mRNA extraction

Frozen tissues were homogenized using a Polytron homogenizer (Kinematica AG, Lucerne, Switzerland) in 400 μl of chaotropic buffer (4.5 M guanidine thiocyanate, 2% N-lauroylsarcosine, 50 mM EDTA (pH 8.0), 25 mM Tris-HCl (pH 7.5), 0.1 M β-mercaptoethanol, 0.2% antifoam A). Total RNA was extracted from the liver using the chaotropic extraction protocol described by Whitehead and Crawford [14]. The RNA pellet obtained was rinsed twice with 500 μl of 70% ethanol and further purified using the Qiagen RNeasy Mini Kit (Qiagen Inc., Valencia, CA, USA). The concentration and purity of the purified RNA were determined using a NanoDrop ND-1000 spectrophotometer (Thermo Fisher Scientific Inc., Wilmington, DE, USA). RNA quality was assessed by visualizing the 18S and 28S ribosomal RNA bands with a BioRad Universal Hood II gel documentation system (BioRad, Hercules, CA, USA) after electrophoresis of 1 μg of RNA on a 1% (w/v) agarose gel in TAE buffer (40 mM Tris-acetate, 1 mM EDTA, pH 8.0) with the nucleic acid staining dye GelRed (1:20000, Biotium Inc., Hayward, CA, USA) at 100 V for 30 min. The presence of sharp 28S and 18S bands in a proportion of about 2:1 indicates the integrity of the total RNA.

Poly (A) mRNA was extracted from 200 μg of total RNA using the Oligotex mRNA kit (Qiagen Inc.). The RNA sample (200 μg) was mixed with 15 μl of Oligotex suspension (resin), heated at 70°C for 3 min and then cooled at 25°C for 10 min. The Oligotex:mRNA complex was spun at 14,000 ×g, and the pellet obtained was resuspended in 400 μl of Buffer OW2 (Qiagen Inc.) and then passed through a small spin column by centrifuging at 14,000 ×g for 1 min. The column was washed with another 400 μl of Buffer OW2. The resin in the column was resuspended in 50 μl of hot (70°C) Buffer OEB (Qiagen Inc.) and eluted by centrifugation at 14,000 ×g for 1 min to obtain the poly (A) mRNA. Another 50 μl of hot (70°C) Buffer OEB was added to the column and the process was repeated to ensure maximal poly (A) mRNA yield.

Construction of SSH libraries

Two sets of forward (up-regulated genes) and reverse (down-regulated genes) SSH libraries for the liver were generated using the PCR-Select cDNA subtraction kit (Clontech Laboratories, Inc., Mountain View, CA, USA): one set for fish aestivated for 6 months in air (prolonged maintenance phase) with reference to the freshwater control, and the other set for fish that were aroused for 1 day after 6 months of aestivation in air (arousal phase) with reference to 6 months of aestivation in air. Two micrograms of poly (A) mRNA from each condition was used for cDNA synthesis. After the first and second strand synthesis, the double-stranded cDNA from both groups was digested with Rsa I. A portion of the digested cDNA was ligated with either Adapter 1 or Adapter 2R, and the rest was saved for subsequent use as the driver for hybridization. The forward library was generated from the hybridization between adapter-ligated cDNA obtained from fish that had undergone 6 months of aestivation in air or fish that were recovered for 1 day (tester) and Rsa I-digested cDNA from the control fish kept in fresh water or fish aestivated for 6 months in air (driver).
The reverse library was made in the same way, except that the adapter-ligated cDNA from the control in fresh water or from 6 months of aestivation served as the tester, while the Rsa I-digested cDNA from fish aestivated for 6 months in air or fish that were recovered for 1 day acted as the driver, respectively. The driver cDNA was added in excess to remove common cDNA by hybrid selection, leaving over-expressed and novel tester cDNAs to be recovered and cloned. PCR amplification of the differentially expressed cDNAs was performed using the Advantage cDNA polymerase mix (Clontech Laboratories, Inc.) and a 9902 Applied Biosystems PCR thermal cycler (Life Technologies Corporation, Carlsbad, CA, USA). The primary and secondary PCR amplification of these reciprocal subtractions of cDNA from the control and aestivated fish produced one forward and one reverse SSH library enriched in differentially expressed transcripts. Differentially expressed cDNAs were cloned using the pGEM-T Easy vector system kit (Promega Corporation, Madison, WI, USA), transformed into chemically competent JM109 Escherichia coli (Promega Corporation), and plated onto Luria-Bertani (LB) agar with ampicillin, 5-bromo-4-chloro-3-indolyl-β-D-galactopyranoside (X-gal) and isopropyl β-D-thiogalactopyranoside (IPTG). Selected white colonies were grown overnight in LB broth with ampicillin. The plasmids were extracted using a resin-based plasmid miniprep kit (Axygen Biosciences, Union City, CA, USA) and quantified with the NanoDrop ND-1000 spectrophotometer. Approximately 80-100 ng of plasmid DNA was used in the BigDye Terminator v3.1 Cycle Sequencing Kit (Life Technologies Corporation) with 2 μM T7 primers. Excess fluorescent nucleotides and salts were removed from the samples by ethanol precipitation. The dried samples were resuspended in Hi-Di Formamide (Life Technologies Corporation) before loading onto the Prism 3130XL sequencer (Life Technologies Corporation). A total of 500 clones from each of the forward and reverse libraries were selected for sequencing. Sequence output was exported as text and edited manually to remove vector sequences using BioEdit Sequence Alignment Editor software version 7.0.9 [15]. BLAST searches were performed using the tBLASTx algorithm [16] and default search conditions. Matches were considered significant when the E value was <1E-04. The annotated sequences were grouped based on Gene Ontology classification. The sequences were deposited in the GenBank EST database and assigned accession numbers JZ575382 to JZ575617.

Relative quantitative real-time PCR (qPCR)

In order to validate the changes obtained in the SSH studies, nine genes were selected for the determination of mRNA expression using quantitative real-time PCR (qPCR). These included acyl-CoA desaturase (acd), argininosuccinate synthetase 1 (ass1), betaine-homocysteine S-methyltransferase 1 (bhmt1), ceruloplasmin (cp), carbamoyl phosphate synthetase III (cpsIII), fumarate hydratase (fh), ferritin light chain (ftl), glyceraldehyde-3-phosphate dehydrogenase (gapdh) and superoxide dismutase 1 (sod1). Prior to first strand cDNA synthesis, RNA from the liver of fish kept in fresh water, aestivated for 6 months in air, or aroused for 1 day after 6 months of aestivation in air was treated separately with Deoxyribonuclease I (Qiagen Inc.) to remove any contaminating genomic DNA.
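Before moving on to the qPCR validation, the EST annotation step described above (tBLASTx hits retained only when the E value is below 1E-04, then grouped by Gene Ontology category) can be summarized in a rough, hypothetical sketch. This is not the authors' actual pipeline; the input file name and column names are assumptions made for illustration only.

```python
# Illustrative sketch (not the authors' pipeline): keep tBLASTx hits below the
# E-value cut-off used in the study (< 1E-04) and tally annotated clones by a
# Gene Ontology category. Input format and field names are assumptions.
import csv
from collections import defaultdict

E_VALUE_CUTOFF = 1e-4  # threshold used in the study for significant matches

def load_hits(path):
    """Read a tab-separated file with columns: clone_id, best_hit, e_value, go_category."""
    with open(path, newline="") as handle:
        for row in csv.DictReader(handle, delimiter="\t"):
            yield row

def annotate(path):
    significant = defaultdict(list)  # GO category -> list of (clone, best hit)
    for row in load_hits(path):
        if float(row["e_value"]) < E_VALUE_CUTOFF:
            significant[row["go_category"]].append((row["clone_id"], row["best_hit"]))
    return significant

if __name__ == "__main__":
    groups = annotate("forward_library_blast_hits.tsv")  # hypothetical file name
    for category, clones in sorted(groups.items()):
        print(f"{category}: {len(clones)} clones")
```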
First strand cDNA was synthesized from 1 μg of total RNA using random hexamer primers and the RevertAid first strand cDNA synthesis kit, following the manufacturer's instructions (Thermo Fisher Scientific Inc.). mRNA expression of the selected genes was quantified using a StepOnePlus Real-Time PCR System (Life Technologies Corporation). Each PCR reaction contained 5 μl of 2x Fast SYBR Green Master Mix (Life Technologies Corporation), an aliquot of gene-specific primers (listed in Table 1) and 0.1-2 ng of cDNA in a total volume of 10 μl.

Table 1. Primers used for quantitative real-time PCR on acyl-CoA desaturase (acd), argininosuccinate synthetase 1 (ass1), betaine-homocysteine S-methyltransferase 1 (bhmt1), ceruloplasmin (cp), carbamoyl-phosphate synthetase III (cpsIII), fumarate hydratase (fh), ferritin light chain (ftl), glyceraldehyde-3-phosphate dehydrogenase (gapdh) and superoxide dismutase 1 (sod1) from the liver of Protopterus annectens. (Columns: Gene; Primer sequence (5' to 3').)

Samples were run in triplicate. qPCR reactions were performed with the following cycling conditions: 95°C for 20 s (1 cycle), followed by 40 cycles of 95°C for 3 s and 60°C for 30 s. Data were collected at each elongation step. Each run was followed by a melt curve analysis, increasing the temperature from 60°C to 95°C in 0.3°C increments, to confirm the presence of only a single PCR product. In addition, random PCR products were electrophoresed in a 1.8% agarose gel to verify that only one band was present. All data were normalized to the abundance of β-actin mRNA. The amplification efficiencies for β-actin and all selected genes were between 90-100%. The subsequent application of the 2^-ΔΔCT calculation for relative quantification was validated by confirming that the variation between the amplification efficiencies of the target and reference genes remained relatively constant over a 100-fold dilution [17]. The mean fold-change values were transformed into logarithmic values (log2) to enable valid statistical analysis.

Statistical analysis

Results for qPCR are presented as means ± standard errors of the mean (S.E.M.). Student's t-test was used to evaluate the difference between means. Differences with P<0.05 were regarded as statistically significant.

Results

SSH libraries from liver of P. annectens after 6 months of aestivation (with fresh water control as the driver)

Two SSH-generated libraries, forward (Table 2) and reverse (Table 3), were constructed for genes that were up- and down-regulated, respectively, in the liver of P. annectens which had undergone 6 months of aestivation in air. A total of 98 genes were identified from these SSH libraries, of which 20 genes were up-regulated (Table 2) and 78 genes were down-regulated (Table 3). There were 340 unidentified sequences, which could be genes that are yet to be characterized in P. annectens. Ribosomal protein S12 appeared in both the forward and reverse subtraction libraries, indicating that these clones could be false positives or could encode different isoforms of the same protein. The forward library indicated the up-regulation of bhmt1 and fh expression levels in the liver of P. annectens after 6 months of aestivation. Certain genes related to nitrogen metabolism, such as ass1 and cpsIII, and a number of ribosomal genes involved in protein synthesis were also up-regulated (Table 2). The reverse library indicated the down-regulation of expression levels of genes related to antioxidative stress (e.g. sod1) and copper transport (e.g. cp) in the liver of P. annectens after 6 months of aestivation.
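As a worked illustration of the relative quantification described in the methods (2^-ΔΔCT normalized to β-actin, with a log2 transform before a Student's t-test), the following minimal Python sketch may be helpful. The Ct and fold-change values are arbitrary placeholders, not data from this study.

```python
# Minimal sketch of the 2^-ΔΔCt relative quantification described above,
# with normalization to a reference gene (β-actin) and a log2 transform
# for statistical testing. Values below are arbitrary placeholders.
import math
from statistics import mean
from scipy import stats  # Student's t-test on log2 fold changes

def ddct_fold_change(ct_target_treat, ct_ref_treat, ct_target_ctrl, ct_ref_ctrl):
    """Return the 2^-ΔΔCt fold change for one treated/control pair of samples."""
    dct_treat = ct_target_treat - ct_ref_treat  # ΔCt, treated (e.g. aestivation)
    dct_ctrl = ct_target_ctrl - ct_ref_ctrl     # ΔCt, control (e.g. freshwater)
    ddct = dct_treat - dct_ctrl                 # ΔΔCt
    return 2 ** (-ddct)

# Arbitrary example: per-fish fold changes for one gene, log2-transformed
fold_changes_ctrl = [1.0, 1.1, 0.9, 1.05]   # control group (close to 1 by definition)
fold_changes_aest = [2.3, 2.8, 1.9, 2.5]    # aestivation group (illustrative only)
log2_ctrl = [math.log2(x) for x in fold_changes_ctrl]
log2_aest = [math.log2(x) for x in fold_changes_aest]

t_stat, p_value = stats.ttest_ind(log2_aest, log2_ctrl)
print(f"mean log2 fold change = {mean(log2_aest):.2f}, P = {p_value:.3f}")
```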
Table 2. Known transcripts found in the forward library (up-regulation) obtained by suppression subtractive hybridization PCR from the liver of Protopterus annectens aestivated for 6 months in air, with fish kept in fresh water as the reference for comparison. (Columns: Group and gene; Gene symbol; P. annectens accession no.)

The mRNA expression levels of some genes involved in complement activation, blood coagulation and iron transport were also down-regulated (Table 3). Relative quantification of the mRNA expression levels of selected genes was performed using qPCR to verify the up- or down-regulation of selected genes. In agreement with the SSH results of the forward library, there were significant increases in the mRNA expression levels of bhmt1, fh, ass1 and cpsIII in the liver of P. annectens after 6 months of aestivation (Fig. 1A-D). In addition, there were significant decreases in the mRNA expression levels of sod1 and cp, in corroboration of the SSH results (Fig. 1E and F).

SSH libraries from liver of P. annectens after 1 day of arousal from 6 months of aestivation (with 6 months of aestivation as the driver)

Similarly, forward (Table 4) and reverse (Table 5) libraries were constructed to reflect the genes that were up- and down-regulated, respectively, in the liver of P. annectens after 1 day of arousal from 6 months of aestivation. A total of 143 genes were identified from these subtraction libraries, of which 76 genes were up-regulated (Table 4) and 67 genes were down-regulated (Table 5). Out of the 1000 sequences obtained, 391 were unidentified, and they could again be genes that are yet to be characterized in P. annectens. Fructose-bisphosphate aldolase B (aldob) and some genes related to ribosomal proteins appeared in both the forward and reverse subtraction libraries, indicating that they could be false positives or could encode different isoforms of the same protein. As revealed by the forward library (up-regulation), the mRNA expression of aldob and ass1, related to carbohydrate and nitrogen metabolism, respectively, was up-regulated in the liver of P. annectens after 1 day of arousal from 6 months of aestivation. Some genes involved in lipid metabolism (acd, desaturase 2, fatty acid binding protein and stearoyl-CoA desaturase), ATP synthesis and iron metabolism (ftl, ferritin middle subunit and transferrin-a) were also up-regulated (Table 4). The reverse library (down-regulation) revealed the down-regulation of expression levels of certain genes related to carbohydrate metabolism (aldob and plasma alpha-L-fucosidase precursor putative) in the liver of P. annectens after 1 day of arousal from 6 months of aestivation. The mRNA expression levels of some genes related to protein synthesis, signaling and iron metabolism (alpha globin chain, ferritin heavy chain and transferrin) were also down-regulated (Table 5). In support of the SSH results, there were significant increases in the mRNA expression levels of acd, ftl and gapdh in the liver of P. annectens after 1 day of arousal from 6 months of aestivation, as confirmed by qPCR (Fig. 1G-I).

Discussion

Maintenance phase: up-regulation of ornithine-urea cycle (OUC) capacity

African lungfishes are ureogenic and possess a full complement of OUC enzymes, including CpsIII, in their livers [8,18,19].
During the maintenance phase of aestivation, ammonia released through amino acid catabolism must be detoxified because its excretion would have been completely impeded during desiccation [12]. By synthesizing and accumulating urea, which is less toxic, P. annectens can carry out protein catabolism for a longer period without being intoxicated by ammonia [12]. Therefore, there is a need to increase the urea-synthesizing capacity during the maintenance phase of aestivation. Indeed, there were increases in the mRNA expression levels of OUC enzymes, particularly ass1 and cpsIII, in the liver of P. annectens after 6 months of aestivation (Table 2). There was also a significant increase in the expression level of fh. Fh catalyzes the reversible conversion between fumarate and malate and is believed to play an important role in the tricarboxylic acid cycle [20]. It can also be involved in nitrogen metabolism as it could regulate the fumarate levels produced by the OUC [20].

BHMT is a cytosolic zinc metalloprotein belonging to the family of methyltransferases [21]. It catalyzes the transfer of a methyl group to homocysteine to form methionine [22], and contributes to ~50% of methionine synthesis in the liver [23]. In humans, defects in methionine and cysteine metabolism in the liver lead to increased homocysteine concentration in the plasma, i.e. hyperhomocysteinemia, which is associated with vascular diseases [24,25], birth defects such as spina bifida [26], and neurodegenerative diseases such as Alzheimer's disease [27]. When accumulated abnormally in tissues and organs, homocysteine can produce multiple deleterious changes simultaneously [28], leading to multi-organ failure involving the brain, kidney, heart, vascular system and/or musculoskeletal system [29][30][31][32]. Hence, it is highly probable that bhmt1/Bhmt1 expression was up-regulated in the liver of P. annectens to reduce the hepatic homocysteine concentration during the maintenance phase of aestivation, as suggested by Ong et al. [33].

Maintenance phase: down-regulation of genes related to blood coagulation

As the heart rate of the African lungfish P. aethiopicus drops from 22-30 beats min-1 before aestivation to 12-17 beats min-1 by the end of 1-1.5 months in the mud [34], it is probable that a severe decrease in the rate of blood flow would have occurred. Thus, any mechanism that can prevent the formation of a thrombus when the fish is inactive during aestivation would be of considerable survival value. Indeed, several genes related to blood coagulation, including fibrinogen (7 clones), apolipoprotein H (8 clones) and serine proteinase inhibitor clade C (antithrombin) member 1 (serpinc1; 3 clones), were down-regulated in the liver of fish after 6 months of aestivation (Table 3), and this could signify a decrease in the tendency of blood clot formation.

Maintenance phase: down-regulation of sod1

SOD is an antioxidant enzyme that catalyzes the dismutation of two superoxide radicals (O2•-) to H2O2, and therefore plays a central role in antioxidation. An adaptive response against oxidative stress is often marked by the increased production of intracellular antioxidant enzymes such as SOD, catalase, glutathione peroxidase and glutathione reductase to protect macromolecules from stress-induced damage. It was suggested that up-regulation of intracellular antioxidant enzymes during aestivation and hibernation protects against stress-related cellular injury [35,36]. However, the down-regulation of the mRNA expression of sod1 in the liver of P. annectens after 6 months of aestivation (Table 3)
suggests that other antioxidant enzymes, such as Bhmt1, glutathione S-transferase, glutathione reductase, glutathione peroxidase or catalase, may be involved and that their activities would be sufficient to counteract the oxidative stress. Also, these results could be indicative of a decrease in ROS production during the maintenance phase of aestivation due to a slower metabolic rate, including the rate of nitrogen metabolism.

Maintenance phase: down-regulation of genes related to complement fixation

The complement system mediates a chain reaction of proteolysis and assembly of protein complexes that results in the elimination of invading microorganisms [37,38]. Three activation pathways (the classical, lectin and alternative pathways) and a lytic pathway regulate these events. Protopterus annectens utilizes the lectin pathway for protection against pathogens during the induction phase of aestivation [13]. However, our results showed that many genes related to complement fixation appeared in the reverse library. These included the complement C3 precursor alpha chain (11 clones), complement component 4 binding protein alpha (3 clones) and CD46 antigen complement regulatory protein (2 clones), and seven others (Table 3). Hence, P. annectens might down-regulate the classical complement fixation pathway during the maintenance phase of aestivation, possibly for three reasons. Firstly, the dried mucus cocoon was already well formed, which conferred on the aestivating lungfish a certain degree of protection against external pathogens. Secondly, tissue reconstruction would have subsided after the induction phase, and there could be minimal tissue inflammation during the prolonged maintenance phase. Thirdly, it was important to conserve the limited energy resources, and it would be energetically demanding to sustain the increased expression of genes involved in complement fixation during the maintenance phase of aestivation.

Maintenance phase: down-regulation of warm-temperature-acclimation-related 65 kDa protein and hemopexin

The plasma glycoprotein warm-temperature-acclimation-related protein (Wap65) was first identified in the goldfish Carassius auratus [39], and its cDNA showed 31% homology to rat hemopexin, a serum glycoprotein that transports heme to liver parenchymal cells [40]. Hemopexins in mammals are mainly synthesized in the liver and are responsible for the transport of heme resulting from hemolysis to the liver. Therefore, the down-regulation of wap65 and hemopexin in the liver of P. annectens (Table 3) suggested that hemolysis might be suppressed during the maintenance phase of aestivation. There are also indications that Wap65 can be involved in immune responses in the channel catfish Ictalurus punctatus [41]. Hence, its down-regulation suggested that a decrease in immune response might have occurred in the liver of P. annectens during the maintenance phase of aestivation.

Maintenance phase: down-regulation of genes related to iron metabolism

Iron is involved in many cellular metabolic pathways and enzymatic reactions, but it is toxic when in excess [42][43][44]. Transferrin is one of the major serum proteins; it is synthesized mainly in the liver and plays a crucial role in iron metabolism. Under normal conditions, most of the iron in the plasma is bound to transferrin, and iron-transferrin complexes enter cells via a transferrin receptor-mediated endocytic pathway.
Transferrin also has a close relationship with the immune system. It binds to iron, creating an environment with low levels of iron in which few microorganisms can survive and prosper [45]. On the other hand, ferritin is the main iron storage protein in both eukaryotes and prokaryotes; it keeps iron in a soluble and non-toxic form [43,46,47]. Also, up-regulation of ferritin has been observed under oxidative stress [48] and in inflammatory conditions in humans [49][50][51]. Transferrin and ferritin mRNA expression levels are up-regulated in P. annectens during the induction phase of aestivation [13], probably due to oxidative stress and inflammation arising from tissue reconstruction, and/or a high turnover rate of free and bound iron resulting from increased production of certain types of hemoglobins or of hemoglobin in general. By contrast, our results indicated that there could be a decrease in the capacity for iron metabolism and transport in P. annectens during the maintenance phase of aestivation, as transferrin (14 clones) and hemopexin (3 clones) appeared in the reverse library. This correlates well with the aestivation process, as a prolonged torpor state would theoretically lead to a lower rate of ROS production and stabilized expression of hemoglobin genes.

Maintenance phase: down-regulation of genes related to copper metabolism

Ceruloplasmin (CP) is crucial in the oxidation of Fe2+ to Fe3+, which enables the binding of iron to transferrin, facilitating the mobilization of iron in the body. It also represents a tightly bound pool of copper that accounts for >90% of the total plasma copper in most species [52,53]. CP synthesis and/or secretion can be altered by inflammation, hormones, and copper. Plasma concentrations of acute-phase globulins, including CP, increase with tissue injury, localized acute inflammation, and chronic inflammatory diseases [54]. The mRNA expression level of cp was up-regulated in the liver of P. annectens during the induction phase of aestivation [13]. However, our results revealed that 6 months of aestivation led to a down-regulation of cp mRNA expression in the liver of P. annectens. This suggests that tissue degradation or inflammation may be limited during the maintenance phase of aestivation due to a profound decrease in metabolic activity. Consequently, there was no longer a need to up-regulate the expression level of cp.

Maintenance phase: up- or down-regulation of protein synthesis?

Twelve genes related to protein synthesis, transport and folding appeared in the reverse library of lungfish undergoing 6 months of aestivation in air (Table 3). The down-regulation of genes related to protein synthesis, such as eukaryotic translation initiation factors and other ribosomal proteins, is a consistent phenomenon in metabolic rate reduction. Suppression of protein synthesis during aestivation would help the animal to conserve energy and enhance its survival. However, 10 types of ribosomal proteins appeared in the forward library, indicating up-regulation of the mRNA expression of these genes in the liver of P. annectens after 6 months of aestivation (Table 2). Taken together, these results indicate that the capacity for protein synthesis was not suppressed completely during the prolonged maintenance phase of aestivation. This could be an important strategy, since the aestivating lungfish would have to maintain the protein synthesis machinery in preparation for arousal from aestivation when water becomes available.
Arousal phase: up-regulation of ass1 expression and amino acid metabolism

After 1 day of arousal from 6 months of aestivation, ass1 still appeared in the forward library (Table 4), indicating that there was a further increase in the mRNA expression of ass1 in the liver. Since cpsIII and fh could not be found in the reverse library (Table 5), and their mRNA expression was already up-regulated during the maintenance phase of aestivation, it can be deduced that their increased mRNA expression was sustained into the arousal phase. Upon arousal, the fish has to reconstruct cells and tissues that were modified during the induction phase and repair damage that occurred during the maintenance phase of aestivation. Such structural changes would require increased synthesis of certain proteins, and since refeeding would not occur until 7-10 days after arousal, this would imply the mobilization of amino acids of endogenous origin [12]. Both substrate and energy are needed for repair and regeneration. Our results indicate that endogenous amino acids could serve such purposes during arousal. Indeed, there could be increases in the capacity for protein turnover, the electron transport system, lipid biosynthesis and iron metabolism in P. annectens after 1 day of arousal from 6 months of aestivation. The energy that supports these activities could be derived from increased amino acid (and perhaps also carbohydrate) catabolism during this period. The ammonia released through increased amino acid catabolism had to be detoxified to urea through the hepatic OUC. Therefore, it can be understood why there were significant increases in the urea-synthesizing capacity upon arousal from aestivation. Besides being involved in urea synthesis, arginine produced by Ass also acts as a substrate for nitric oxide (NO) production in the liver, where NO is involved in liver regeneration [55] and protection of the liver from ischaemia-reperfusion injury [56]. Indeed, Chng et al. [57] showed that the arginine and NOx concentrations decreased and increased, respectively, in the liver of P. annectens after 6 months of aestivation and after 3 days of arousal from aestivation, supporting the proposition that arginine synthesized through Ass could be used for increased NO production, especially during arousal.

Arousal phase: up-regulation of carbohydrate metabolism?

Compared with the maintenance phase, 1 day of arousal led to increases in the mRNA expression of gapdh and aldob, and a decrease in the expression of another isoform of aldob. Although Gapdh does not catalyze a flux-generating step (unlike hexokinase, glycogen phosphorylase, and pyruvate kinase) or act as a regulatory enzyme (unlike phosphofructokinase) in the glycolytic pathway, it catalyzes an oxidation-reduction reaction, and our results could indicate a tendency towards an up-regulation of carbohydrate metabolism in the liver of P. annectens during the arousal phase of aestivation. Frick et al. [58] reported that P. dolloi conserved the glycogen pool during the maintenance phase of aestivation. Naturally, the fish becomes more active after arousal, and there could be an increase in the utilization of the glycogen store for energy production during this period before feeding is resumed.
Arousal phase: up-regulation of genes involved in lipid metabolism and fatty acid transport

Fatty acid binding proteins (FABPs) are intracellular carriers that transport fatty acids through the cytoplasm, linking sites of fatty acid import/export (plasma membrane), internal storage (lipid droplets), and oxidation (mitochondria) [59]. Stearoyl-CoA desaturase is a lipogenic enzyme that catalyzes the synthesis of monounsaturated fatty acids [60]. Acyl-CoA desaturase is the terminal component of the liver microsomal stearoyl-CoA desaturase system, which utilizes O2 and electrons from reduced cytochrome b5 to catalyze the insertion of a double bond into a spectrum of fatty acyl-CoA substrates, including palmitoyl-CoA and stearoyl-CoA. The up-regulation of the mRNA expression of fabps (4 clones), stearoyl-CoA desaturase (1 clone), desaturase (5 clones) and acyl-CoA desaturase (11 clones) (Table 4) indicates that there could be an increase in fatty acid synthesis and lipid metabolism in the liver of P. annectens after 1 day of arousal. Tissue regeneration would be an important activity during arousal, and cell proliferation requires increased lipid metabolism to generate biomembranes. It is probable that the energy required to sustain these activities was derived from amino acid catabolism.

Arousal phase: up-regulation of electron transport system and ATP synthesis?

Conservation of energy is a key feature of the maintenance phase of aestivation, sustaining life under adverse environmental conditions. Arousal from aestivation marks an increase in the demand for ATP. Indeed, after 1 day of arousal, there were increases in the mRNA expression of ndufa2 (5 clones), cytochrome c oxidase subunit IV isoform 2 (2 clones) and two different types of ATP synthase (mitochondrial Fo and F1 complex; 2 clones each) (Table 4), indicating that mitochondria became more active. It would be essential to maintain mitochondrial redox balance when the activities of oxidation-reduction reactions increased in the mitochondrial matrix. The increase in mRNA expression of 3-hydroxybutyrate dehydrogenase type 1 (5 clones) suggests that mitochondrial activities might not have been fully supported by an adequate supply of oxygen, and that mitochondrial redox balance might have been maintained transiently through hydroxybutyrate formation during this initial phase of arousal.

Arousal phase: up- or down-regulation of iron metabolism and transport

There could be two reasons for the increases in transferrin and ferritin expression in the liver of P. annectens during arousal. Firstly, it could be a response to increased oxidative stress and inflammation. After arousal, the lungfish would immediately swim to the surface to breathe air. A rapid increase in O2 metabolism would lead to increased generation of reactive oxygen species, as the rate of superoxide generation at the mitochondrial level is known to correlate positively with oxygen tension [61,62]. Furthermore, animals experiencing transient metabolic depression followed by restoration of normal O2 uptake also experience oxidative stress; examples include hibernating mammals, anoxia-tolerant turtles, freeze-tolerant frogs and molluscs [35,63,64]. Secondly, it could be due to an increase in the turnover of free and bound iron as a result of increased synthesis of certain types of hemoglobins and/or of hemoglobin in general. Delaney et al. [65] reported that four electrophoretically distinct types of hemoglobins (fractions I, II, III and IV) were present in P. aethiopicus,
and there were increases in the amounts of types II and IV hemoglobins during the maintenance phase of aestivation. Hence, it is logical to deduce that changes in hemoglobin types during the induction phase of aestivation must be reverted to normal during arousal, which could be one of the reasons for the up-regulation of the mRNA expression of transferrin and ferritin in the liver of P. annectens.

Arousal phase: up-regulation of glutathione S-transferase (gst)

GSTs are a major group of detoxification proteins involved in protecting against various reactive chemicals, including chemical carcinogens, secondary metabolites produced during oxidative stress, and chemotherapeutic agents [66]. They catalyze the reaction of glutathione with electrophilic centers of organic compounds [67]. These glutathione-conjugated compounds are rendered more water-soluble and more readily excreted. Besides, some GSTs have secondary catalytic activities, including steroid isomerisation [68] and a selenium-independent peroxidase activity with organic hydroperoxides [69]. The alpha class GSTs (GSTa) may also function as intracellular transporters of various hydrophobic compounds (which are not substrates of GSTs) such as bilirubin, heme, thyroid hormones, bile salts and steroids [70]. The increase in the mRNA expression of gst in the liver of P. annectens after 1 day of arousal (Table 4) is indicative of a possible increase in secondary metabolites of oxidative stress and/or in the transport of heme in the liver. Similarly, increases in the activity of Gst have been observed in aestivating snails and in snails aroused from aestivation [71].

Arousal phase: increase in protein turnover

Based on the variety of genes related to protein synthesis, transport and folding in the forward and reverse libraries, it can be concluded that there was a high rate of protein turnover in the liver of lungfish after 1 day of arousal. It would appear that the machinery (e.g. ribosomal proteins L12, L17 and L19) involved in the maintenance of protein structure during the maintenance phase (Table 4) was different from that (e.g. eIF4E-binding protein, eukaryotic translation elongation factor alpha 1 and elongation factor-1, delta b) involved in the regeneration of protein structure during the arousal phase (Table 5).

Conclusion

Six months of aestivation led to changes in gene expression related to nitrogen metabolism, oxidative defense, blood coagulation, complement fixation, iron and copper metabolism, and protein synthesis in the liver of P. annectens. These results indicate that sustaining a low rate of waste production and conserving energy stores were essential to the maintenance phase of aestivation. On the other hand, there were changes in gene expression related to nitrogen metabolism, lipid metabolism, fatty acid transport, the electron transport system, and ATP synthesis in the liver of P. annectens after 1 day of arousal from 6 months of aestivation. It would appear that the freshly aroused fish depended on internal energy stores for repair and structural modification. Overall, our results indicate that aestivation cannot be regarded as the result of a general depression of metabolism only; rather, it involves a complex interplay between up-regulation and down-regulation of diverse cellular activities. Hence, efforts should be made in the future to identify and differentiate the molecular, biochemical and physiological phenomena in African lungfishes incidental to each of the three phases (induction, maintenance and arousal) of aestivation.
Author Contributions Conceived and designed the experiments: YKI SFC. Performed the experiments: KCH. Analyzed the data: KCH SFC YKI. Contributed reagents/materials/analysis tools: WPW. Wrote the paper: SFC KCH YKI. Took care of the animals: WPW.
BlindSense: An Accessibility-inclusive Universal User Interface for Blind People

A large number of blind people use smartphone-based assistive technology to perform their common activities. In order to provide a better user experience, the existing user interface paradigm needs to be revisited. A simplified, semantically consistent, and blind-friendly adaptive user interface model is proposed. The proposed solution is evaluated through an empirical study on 63 blind people, leveraging an improved user experience in performing common activities on a smartphone.

Keywords-adaptive UI; blind people; smartphone; blind-friendly

I. INTRODUCTION

A large number of blind people are using state-of-the-art assistive technologies for performing their daily life activities [1][2][3]. Smartphone-based assistive technologies are an emerging trend for blind people due to inbuilt features such as accessibility, usability, and enhanced interactions [4,5]. Accessibility services such as talk-back, gesture controls, haptic feedback, screen magnifiers, large text, color contrast, inverted colors, screen brightness, shortcuts, and virtual assistants facilitate blind and visually impaired people in performing several operations on smartphones. However, existing smartphone-based interfaces are prone to several issues in delivering a unified, usable and adaptive solution to blind people. The navigational complexity of interface designs, the lack of consistency in buttons, icons, and screen layouts, the difficulty of identifying and selecting non-visual items on the screen, and traditional input mechanisms all contribute to increased cognitive overload [6]. In addition, every mobile application has its specific flow of interaction, placement of non-visual items, layout preferences, and distinct functionality. Notably, it is difficult to establish a balance between the accessibility and the usability of a mobile application. In most cases, mobile apps are either accessible but barely usable, or usable but barely accessible [7]. Nowadays, the available mobile apps are mostly inaccessible to blind people, because these apps either have limited usability or do not adhere to web/mobile accessibility guidelines [8]. The usability of smartphone-based user interfaces can be improved by using adaptive user interface paradigms. Adaptive user interfaces support context-awareness and can generate a new instance of the interface in response to changes in the environment, user preferences, and device usage [9]. This can help blind people personalize the smartphone-based user interface (UI) layouts, widgets, and UI controls of a particular application, irrespective of their technical ability, skill set, and device handling capabilities. Gaining a considerably improved blind-friendly interface design requires an extensive revision in order to meet the requirements and needs of blind people. This may require a technical framework supported by an adaptation mechanism to address the diverse user capabilities, needs, and contexts of use, so as to ensure a high degree of usability and acceptability [10].
This paper aims to devise a universal UI design for blind people that customizes the interface components of commonly available mobile applications into a blind-friendly, simplified UI. This will provide a simplified, semantically consistent, easy-to-use interface for operating commonly available mobile apps on the smartphone. In addition, blind people will have better control over interface customization and the re-organization of interface elements as per their requirements. The vital contribution of this paper is to improve the user experience of blind people. The upcoming section provides an overview of related work pertaining to accessibility-inclusive user interface design.

II. RELATED WORK

Through a series of studies, researchers have analyzed and identified recommendations for accessibility-inclusive UIs for blind people [11,12]. The emergence of smartphone-based UIs has opened new vistas for visually impaired and blind people. However, the cost of usability and accessibility, and the challenge of how to make this device more usable for blind people, emerges [13]. The pre-touchscreen era witnessed mobile devices that possessed physical controls for navigation and operational usage. However, existing touchscreen interfaces are subject to a number of issues due to the nonexistence of physical buttons and user interface controls, making them insufficient to drive these devices [14,15]. Some common usability issues are demonstrated in Table I. Each entry lists the issue and its references, a description, the affected HCI models, and the affected usability parameters:

- [12,23-27]: Placing non-visual items on the screen and locating and identifying a particular item of interest are key issues. Remembering user action status and following a pattern of activities is a challenge. Searching for and retrieving particular information is a difficult activity to perform as well. (Task, Dialog, Presentation, User; Semantic lost, Navigational complexity, Task adequacy)
- Keys with multiple functions [28]: The lack of physical keys on a soft keypad results in higher chances of wrong touches. Besides, many actions are associated with one key, which creates confusion for these people. Usually they are unaware of the type of functionality associated with a particular key. (Task, Presentation, User; Semantic lost, Task adequacy)
- Automated assistance [29]: Automated assistance tools receive information proactively without a user request. Extensive utilization of such assistance systems may burden blind people. (User, Platform; Semantic lost, Task adequacy, Cognitive overload)
- Haptic feedback [30]: The utilization of haptic feedback and gesture controls is an emerging issue for blind people; e.g., consistent and appropriate feedback at the right time is inadequate in existing interfaces. (Task, Dialog, Presentation, User; Task adequacy, Dimensional trade-off)
- User control over interface components (UI adaptation) [31]: Inadequate UI flexibility and limited control over UI personalization is a key issue. Besides, every mobile application should provide meaningful entry and exit paths/points, accommodate user requirements, and allow the user to customize interface layouts and manipulate non-visual objects directly. (Task, Dialog, Presentation, User; Semantic lost, Navigational complexity, Task adequacy, Dimensional trade-off)
- Device incompatibility [32]: The final user interface generated on different devices behaves differently. The final generated outcome does not offer interoperability with different operating systems and devices. Certain applications require pre-installed libraries and utilities to operate rationally. Cross-mobile and cross-platform support is a primary aspect lacking in the currently available interfaces.
- [31]: Existing interfaces have limited persistency and consistency, due to which it is difficult for blind people to remember every action on the screen. (Task, Presentation; Semantic lost, Navigational complexity, Task adequacy)
- Learnability and discoverability of the UIs [12]: Learnability and discoverability are key challenges in currently available applications. Discoverability is the time factor and ease with which the user can begin an effective interaction with the system. (Task, Presentation, User; Semantic lost, Navigational complexity, Task adequacy)
- Inadequate mapping of feedbacks [10]: Though the first generation of haptic feedback is available in the form of vibratory motors, it can still provide only a limited sensation when blind people operate smartphones. (Task, Dialog, Presentation, User; Semantic lost, Task adequacy, Dimensional trade-off)
- Exhaustive text-entry [12]: The typical keypad, inadequate labels, small UI elements and text-to-speech responses in text-entry reduce the efficiency of blind people. The error rate and the number of missed touches in using traditional keypads are usually high. (Task, Presentation; Semantic lost, Task adequacy)
- Screen orientation, size, resolution [17]: The usability of touchscreen interfaces is affected by screen attributes such as the size of the screen and changes of orientation. Small buttons and UI elements have an adverse effect on performance. Screen orientation also increases the difficulty these people face in learnability and discoverability. (Task, Dialog, Presentation, User; Semantic lost, Navigational complexity, Task adequacy, Dimensional trade-off, Device independence)
- User model fragmentation [13,17]: Every application preserves several models locally. Each application stores and retrieves model information from a local repository, ensuring the reuse of application models. However, heterogeneous application models may reflect only a partial view of user behavior and application usage in a particular scenario.

The inclusion of accessibility in performing daily tasks through different applications and systems is highlighted in [14,[16][17][18][19]].
Screen readers and built-in accessibility services have considerably improved the usability of device footprints for blind people [20]. The preliminary focus was the ability to interact with smartphones in performing common tasks such as reading a text message and identifying objects of interest and colors [4]. The advent of touchscreen technology has replaced physical controls, elements, and directional anchors, resulting in difficulties in several operations. However, tactile feedback, haptics, and multimodal interaction have provided a better basis for visual/auditory interactions [17,21]. Besides, the touchscreen offers a number of challenging opportunities: haptic feedback, gesture control systems [22], text-to-speech systems, and screen reading accessibility services (e.g., TalkBack for Android, VoiceOver for Apple) enable blind people to read out the contents of the screen and operate smartphone interfaces [23]. Blind people usually avoid contents that pose accessibility problems for them [24]. Even sighted people consume 66% of their time in editing and correcting text in an automatic speech recognizer output on a desktop system [25]. Besides the above reported issues, Table I depicts usability issues faced by blind people in performing various activities on smartphones. These problems are identified and analyzed in the specific context of the HCI model [26], including the task, domain, dialog, presentation, platform, and user models.

In summary, the usability of touchscreen UIs merits further investigation. This requires the revamping of existing UIs based on the needs and expectations of blind people. Many researchers have now emphasized the development of a user-adaptive paradigm for designing simple-to-use, accessible, and user-friendly interfaces based on the guidelines of HCI [14,[27][28][29]]. In addition, a few studies have proposed usable and accessibility-inclusive UIs. However, the results need further improvement. Researchers should consider improving the accessibility, usability, and technical and operational effectiveness of smartphone-based UIs for blind people. From the findings of the literature review in the areas of human-computer interaction, usability, accessibility and the diversified requirements of blind people, we propose a universal accessibility framework for smartphone UIs for blind people. The proposed framework is designed keeping in view the related work mentioned in Table I. The proposed BlindSense universal UI design is discussed in the next section.
III. BLIND-FRIENDLY UNIVERSAL USER INTERFACE DESIGN

The technical abilities and tasks involved in the design of a smartphone-based blind-friendly UI have been analyzed in the above section. We analyzed common mobile applications by capturing details of the nature of the app, its category, the total number of activities, the number of inputs, the number of outputs, the number of UI controls used in the application, the context of use, and the minimal feature set. These common applications include SMS, Call, Contacts, Email, Skype, WhatsApp, Facebook, Twitter, Calendar, Location, Clock, Reminders, Reading Books, Reading Documents, Identifying Products, Reading News, Weather, Instagram, and Chrome. However, these applications have been designed for sighted people; thus, a number of activities and sub-activities are either redundant or repetitive, have a complex navigational structure, or need a long route to follow. The minimal feature sets were extracted through manual usability heuristics. The information is reported in Table II. Thus, a minimal set of activities, inputs, outputs and contents for performing common applications was outlined prior to the design of our proposed architecture.

The proposed BlindSense is a simplified, consistent, usable, adaptive universal UI model based on user preferences, device logging, and context of use. The novel contribution is to customize/generate an optimal interface extracted from the existing common applications' user interface controls, layouts, user interfaces, and widgets. This automatic transformation relies on semantic web technologies to model and transform user interfaces, turning the complicated design of existing mobile applications into a blind-friendly, simplified UI. BlindSense is a pluggable, layer-based architecture promoting openness and flexibility in the technical design of the system. Designers or users can define their own screen layouts, text-entry plug-ins, adaptation rules, templates, themes, and modes/patterns of interaction. The proposed architecture is illustrated in Figure 1. The architecture details are provided below.

A. User Interface Layer

The UI layer serves as an interaction point between the smartphone and blind people. The BlindSense application presents a wizard to blind people to customize their UI. User inputs are captured through text-entry, gesture controls and voice commands for personalization and other operations. The application transforms the features extracted from the Common Element Set (CES) into a Minimal Feature Set (MFS) through a process of abstraction and adaptation. The CES describes features of the UI, layouts, themes, widgets, etc. The system automatically extracts the MFS from the user preferences, device logging history, the context of use and the environment. These feature sets are deployed at the activity or application level depending on the number of inputs/outputs and UI elements, for instance the number of elements and inputs/outputs of an activity or application as reported in Table II.

(Fig. 1. Universal user interface architecture)
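As a hypothetical sketch of the CES-to-MFS reduction described for the user interface layer, the fragment below keeps only the elements that are flagged as essential or that the usage log shows the user actually relies on. The class, field and threshold names are illustrative assumptions, not BlindSense APIs.

```python
# Hypothetical sketch of the CES -> MFS reduction idea: keep only the interface
# elements that the abstraction rules or the usage history mark as relevant.
from dataclasses import dataclass

@dataclass
class UIElement:
    name: str
    essential: bool      # flagged as part of the application's minimal feature set
    usage_count: int     # taken from the device logging history

def minimal_feature_set(common_elements, usage_threshold=3):
    """Reduce a common element set (CES) to a minimal feature set (MFS).

    An element is kept if it is marked essential or if the user actually
    uses it often enough according to the logging history.
    """
    return [e for e in common_elements
            if e.essential or e.usage_count >= usage_threshold]

# Example: an SMS-like activity with redundant controls removed
ces = [
    UIElement("compose_message", essential=True, usage_count=42),
    UIElement("send", essential=True, usage_count=40),
    UIElement("attach_sticker", essential=False, usage_count=0),
    UIElement("font_settings", essential=False, usage_count=1),
]
mfs = minimal_feature_set(ces)
print([e.name for e in mfs])   # ['compose_message', 'send']
```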
B. Transformation Layer

This layer ensures the delivery of a simplified and personalized UI representing the user, impairment, accessibility, device, UI component, and adaptation models. The adaptation knowledge base contains a set of personalization and adaptation rules. The input from the User Information Model (UIM) and the context model is processed at this layer, which results in the generation of the simplified UI. A user model may contain static (such as screen partitions) or dynamic (such as level of abstraction) information. The UIM consists of the following profiles: user capability, interest (interest level: high, low, and medium; interest category: computer, sports, entertainment, food, reading etc.), education, health, impairment profiles (visually impaired, blind, deaf-blind, and motor-impaired etc.), emergency, and social profiles. The adaptation manager consists of classes representing information related to several models for UI adaptation. It retrieves abstraction levels and adaptation rules related to a specific disability from the adaptation repository. The abstraction mechanism can be applied at the element, group-element, presentation, and application levels. For instance, an adaptation rule related to the sequence generation of actions and activities would be applied at the task and domain level. The final UI is generated using Android XML layouts. In case of changes in the user preferences, the adaptation components are updated with the latest information retrieved from the user profile ontology and a new instance of the UI is generated.

C. Context Layer

This layer captures and stores information pertaining to the device, environment, user, and context through the context extractor. The context model is composed of the user, platform, and environment models. The user model describes the needs and preferences, while the platform model provides information related to the device and platform, including screen resolution and size, screen divisions, button size, keypads, aspect ratio, etc. The environment model represents information specific to the location of the user's point of interaction, the level of ambient light, etc. However, selective context sensing is performed in our case. For example, light and noise sensing are not required all the time for continuous updating of the UI; this can be set up once, stored, and retrieved anytime. The smartphone sensors store data in the form of key-value pairs, nested structures, and a formal ontology. A user context model containing information about the context provider, context property, and context status is generated at the end. The context data extractor filters the context data relevant to UI adaptation.

D. Semantic Layer

Insight into the deeper aspects of UI adaptation involves the handling of model information, context-awareness, and their associated semantics. This layer encapsulates access to a comprehensive UI adaptation ontology used for user profiling and preferences, as well as adaptation, context, device and accessibility ontologies. It provides technology-independent access to metadata encoded in the ontology. Additional contents may also be associated with the activities and tasks related to UI modeling, e.g. multimedia captions, audio descriptions, and interpretation of several other patterns. The architecture is developed using a re-configurable modular approach to realize the inclusion of semantic web technologies.
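To make the interaction between the context model (stored as key-value pairs) and the adaptation rules more concrete, the following sketch shows one way such rules could map a user profile and context onto layout parameters. The keys, rule bodies and output fields are hypothetical; they are not the BlindSense rule syntax or ontology vocabulary.

```python
# Illustrative sketch (hypothetical names, not the BlindSense rule syntax):
# a key-value context model and a few adaptation rules that map the user
# profile and context onto concrete layout parameters for the final UI.
context = {
    "ambient_light": "low",        # from the environment model
    "screen_size_inch": 5.1,       # from the platform model
    "interaction_mode": "voice",   # from the user model / preferences
}

user_profile = {
    "impairment": "blind",
    "preferred_screen_divisions": 2,
}

def adapt_layout(profile, ctx):
    """Apply simple adaptation rules and return layout parameters for the final UI."""
    layout = {"screen_divisions": profile.get("preferred_screen_divisions", 4),
              "button_scale": 1.0,
              "speech_feedback": False}
    if profile.get("impairment") == "blind":
        layout["speech_feedback"] = True   # pair every visual element with speech output
        layout["button_scale"] = 1.5       # larger touch targets
    if ctx.get("interaction_mode") == "voice":
        layout["primary_input"] = "voice_commands"
    if ctx.get("ambient_light") == "low":
        layout["theme"] = "high_contrast"
    return layout

print(adapt_layout(user_profile, context))
```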
E. Storage Layer

The storage layer is responsible for managing several storage sources, including ontologies, the data store, etc. Information about user profiling, preferences, contextual data, adaptation rules, and layout details is stored in and retrieved from this layer once all required data has been articulated from the relevant models. BlindSense uses the semantic reasoning capabilities of the ontological model to present a final UI to blind people. The user may change his/her preferences related to layout, theme, and interaction type at runtime. The structural model of the universal UI is represented as a state transition diagram (STD) in Figure 2. All these states are stored in the system and are executed in a specific order. The diagram begins with START, where it waits for input. Once the user provides a particular input, other processes are initiated by switching several states to complete an activity or perform a particular action. In case of error, the system returns to the initial state, and the error is recorded in the system memory accordingly.

F. Perspective Workflow

BlindSense can be used as an accessibility service or as an individual application. When the accessibility service is enabled for the first time, the system loads specific installed applications of common use and extracts common element features. The user starts personalizing the layout by selecting a few preferences about screen divisions, mode of interaction, etc. Also, the device logging, context of use, and user profiling are automatically articulated. The rules used for the generation of the simplified UI are checked in the adaptation repository, where specific rules for adaptation are applied. In case of non-availability of specific rules for a given transformation, the default or baseline specification is used. The complete application simplification process is illustrated in Figure 3. The prototype is developed using the Android SDK. (Fig. 3. BlindSense proof-of-concept)

IV. AIMS AND HYPOTHESES

We studied the user experience by analyzing user satisfaction in performing several activities on the proposed universal UI design. Each participant demonstrated his/her perceived usefulness, ease of use, system usability scale, and user experience. To the best of our knowledge, a similar universal UI design for blind people has not been presented before. Thus, the aim was to investigate the effect on user experience of using common applications on a smartphone through the universal UI design, in order to gain a systematic understanding of user experience in overall operations. We aimed at formulating an assumption of which variables are the most central to the user's experience. The following hypotheses were made:

- H1: The perception of perceived usefulness in performing common activities through a universal UI for blind people, in terms of the success of solving tasks/activities on the smartphone, influences a positive user satisfaction.
- H2: The ease of use in personalizing a universal UI for blind people, in terms of task completion, personalization, and the number of accurate touches, will improve the user experience.
- H3: Consistency will lead to a more positive attitude towards an improved user experience in accessing non-visual items and skipping irrelevant items on a universal UI for blind people.
- H4: An improved system usability scale will lead to a more positive attitude towards the use of a universal UI for blind people.
• H5: Consistency in the interface elements will lead to a more positive attitude towards the ease of use in accessing and operating a universal UI for blind people on a smartphone.

In addition, we analyze whether a specific usability parameter was the most influential in each particular case. The key variables include user satisfaction, perceived usefulness, and ease of use. The system usability scale is predicted to have a positive or negative influence on blind people's user experience.

V. EVALUATION AND RESULTS
The evaluation of the proposed solution was conducted through an empirical study. The usability of the proposed UI design and the individual components of the architecture were evaluated using already established methods, metrics, and usability parameters related to HCI. We were interested in the user experience of blind people when performing a number of tasks associated with user interface customization and when operating smartphone applications with ease.

A. Participation
Sixty-three (4 female, 59 male) participants took part in this study. The median age of the participants was 39 years, within a range of 22-56 years. In the pre-application assessment, the participants' experience was rated on a four-item scale: beginner, intermediate, advanced, and expert. The participants' level of smartphone usage experience varied from beginner to advanced. Usability experts observed the navigational and orientation skills of the blind participants in performing common tasks/activities. The experts mainly judged the confidence and frustration level of the participants. Table III summarizes general information about the participants along with other indicators, i.e., information related to their background, age, gender, and smartphone usage experience. The participants reported their level of experience in the initial trials.
B. Procedure
The participants were introduced to the BlindSense framework, and a demonstration of the required steps was provided one by one. The study spanned eleven weeks and consisted of the following components: (1) pre-application usage assessment and collection of background data, (2) an introductory session with our universal UI framework and initial trials, (3) in-the-wild device usage, and (4) interviews and observations. A practice trial session on general tasks and the operational usage of several scenarios was held, in which participants practiced activities such as unlocking the phone, placing a call, and sending a message. The participants were then asked to perform 121 predefined tasks. The average time spent with each participant was about 66 minutes. The researchers directly observed the execution of the tasks. In addition, we acquired the services of nine professional facilitators who assisted the participants during the entire study. The same sequence of grouping and interviewing was continued until all participants had been covered in the same pattern. All participants were provided with a Samsung S6 and an HTC One smartphone running on Android. The TalkBack screen reading application, the data collection service, and the BlindSense application were pre-installed on the devices. For each task, we recorded the completion time and the degree of accuracy in performing common activities such as placing a call, sending messages, etc. The results section presents the responses collected through a structured questionnaire, interviews, and observations. The university ethics committee/IRB approved the consent procedure for this study. Written consent was obtained from the caretakers of the participants. The participants were informed about the study objective, the study procedure, potential risks, etc. The study checklist was verbally communicated to all blind participants, who gave their verbal approval, while the caretakers issued the written consent.
C. Analysis and Validation Procedures/Data Analysis
We ran a statistical correlation analysis of the observations to define the relationships among the UX attributes of the universal UI: attitude, intention to use, perceived usefulness, understandability and learnability, operability, ease of use, system usability scale, minimal memory load, consistency, and user satisfaction. The statistical software SPSS 21 with AMOS 21 was used for the analysis and structural modeling. The first step was to define a measurement model and test the relationships among several dependent and independent variables. The assessment of measurement model validity was conducted by checking goodness-of-fit indices. We used confirmatory factor analysis (CFA) with maximum likelihood estimation to verify the reliability, convergent validity, composite reliability, and average variance extracted of each construct. The measurement model had 60 variables for 10 latent variables. In order to confirm the fitness of the proposed model, the Chi-square, Chi-square/d.f., goodness-of-fit index (GFI), incremental fit index (IFI), normed fit index (NFI), comparative fit index (CFI), Tucker-Lewis index (TLI), parsimony goodness-of-fit index (PGFI), and root mean square error of approximation (RMSEA) were assessed. The measures mentioned above indicated that the estimated covariance matrices of the proposed measurement model and the observed model were satisfactory. Reliability was assessed through Cronbach's alpha. The CFA indicated a satisfactory overall fit of the measurement model: χ2/df = 1.577, RMSEA = 0.076, CFI = 0.727, NFI = 0.939, IFI = 0.949, TLI = 0.696, PGFI = 0.539. In addition, the measurement model was found to have strong internal reliability and convergent validity. The Cronbach's alpha values, item-total correlations, factor loadings, composite reliability, and average variance extracted from the analysis indicate a robust fit. Tables IV-VI show the confirmatory factor loadings of each item with their respective reliability scores. Factor loadings above 0.5 are generally considered acceptable, whereas the reported factor loadings exceeded 0.6. Similarly, a Cronbach's alpha of 0.70 is considered an acceptable reliability score, and in the reported data the scores are above 0.70. In addition, to verify the internal consistency of each latent variable, we also measured construct reliability. It is acceptable when the composite reliability is higher than 0.7 and the AVE is higher than 0.5. The reported scores are mostly above these acceptable thresholds. Figure 4 shows the diagram of the final structural model generated from the relationships of the latent variables. The results are depicted as standardized regression weights on the different paths. All paths were found significant at the level of p < 0.001. As depicted, perceived usefulness has an impact on user satisfaction with a high path weight (path coefficient = 0.22). Overall, the research model explained a satisfactory amount of variance in the user experience of operating adaptive user interfaces. A summary is presented in Table VII. With respect to the hypotheses: perceived usefulness was positively associated with user satisfaction (H1, β = 0.303, p = 0.016); ease of use (H2, β = 0.469, p < 0.001), consistency (H3, β = 0.287, p = 0.023), system usability scale (H4, β = 0.400, p < 0.001), and consistency with respect to ease of use (H5, β = 0.320, p = 0.011) had a positive effect on the user experience of blind people in using the adaptive UI. The significance of all hypotheses was below 0.05; thus, each hypothesis is accepted.
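To make the reliability and convergent-validity checks above concrete, the sketch below shows how Cronbach's alpha, composite reliability (CR), and average variance extracted (AVE) can be computed from item scores and standardized factor loadings. The item data and loadings are made-up placeholders, not the study's data; only the formulas, which are the standard ones referred to in the text, are taken as given.

```python
# Illustrative computation of the reliability/validity measures discussed above.
# The item scores and factor loadings below are made-up placeholders.
import numpy as np


def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix for one construct."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)


def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    errors = 1 - loadings ** 2          # assumes standardized loadings
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + errors.sum())


def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of the squared standardized loadings."""
    return float((loadings ** 2).mean())


# Toy example: 63 respondents x 6 items for one latent construct (1-5 Likert scores).
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(63, 6)).astype(float)
loadings = np.array([0.72, 0.68, 0.81, 0.64, 0.77, 0.70])

print(f"alpha = {cronbach_alpha(scores):.3f}")                  # threshold: > 0.70
print(f"CR    = {composite_reliability(loadings):.3f}")          # threshold: > 0.70
print(f"AVE   = {average_variance_extracted(loadings):.3f}")     # threshold: > 0.50
```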
VI. DISCUSSION
Understanding the need for developing an accessibility-inclusive UI for blind people, our research articulates usability, ease of use, consistency, usefulness, and accessibility for generating a simplified, consistent, and universal UI design for blind people. The study proposed, developed, and validated a blind-friendly universal UI design for operating common applications on a smartphone, resulting in an enriched user experience.

As hypothesized, the parameters used, i.e., ease of use, consistency, operability, perceived usefulness, minimal memory load, and system usability scale, were found to have a positive effect on user satisfaction and user experience. Ultimately, a consensus was reached on the acceptance of the universal user interface model. The users' attitude towards the use of the suggested application was reported as effective, pleasant, and enjoyable. For statistical validation, this study measured ease of use, consistency, operability, perceived usefulness, minimal memory load, system usability scale, and understandability and learnability (i.e., the fundamental determinants of user acceptance in any Technology Acceptance Model (TAM)) through a survey questionnaire. The responses obtained were satisfactory. Through a series of model evaluations and validations, the hypothesis that user satisfaction is positively affected by the adaptation of the universal UI design for blind people is accepted. The study also verified the relationship between the usability of the UI and user satisfaction. User satisfaction is an important factor in the design of smartphone-based UIs. In addition, the study results are consistent with earlier studies on the usability and accessibility of smartphone applications. The findings collectively indicate that various features of smartphone-based UIs and layouts, such as screen size, user controls, navigational complexity, user interaction, and feedback, convey positive psychological effects in a particular user context [30]. Methodologically, a potential threat to the investigation is undertaking this approach on visually dense interfaces such as game and entertainment applications. Besides, the potential of smartphone capabilities can be used for hedonic and utilitarian purposes [31]. As depicted in the results, some users found the universal interface design to be convenient and efficient for completing their tasks, while others perceived it as somewhat uncomfortable and annoying. Therefore, the inclusion of more visually complex tasks may be investigated further.

VII. CONCLUSION
A large number of smartphone applications do not comply with the mobile accessibility guidelines. These applications do not specifically meet the requirements of blind people. Thus, blind people face numerous challenges in accessing and operating smartphone interface components, such as finding a button, understanding layouts, interface navigation, etc.
Besides, a blind person has to learn every new application and its features, which hampers learnability and discoverability. They have to learn and apply their previous experience, and this may result in a varying user experience. The findings of this study illustrate that a simplified, semantically consistent, and context-sensitive universal UI design contributes to a satisfactory positive evaluation. The main contribution of this research study is an attempt to improve the user experience of blind people in operating smartphones through a universal interface design, using the adaptive UI paradigm for personalization. We have adopted measurement items from existing web/mobile usability instruments and revamped a number of parameters for this study. The proposed solution addressed the problems of simplicity, reduction, organization, and prioritization [32] by providing a semantically consistent, simplified, task-oriented, and context-sensitive UI design. During the study, the proposed intervention significantly reduced the users' cognitive overload. The consistency in the division of the smartphone screen enables blind people to memorize the flow of activities and actions with ease. Thus, there is a slim chance of getting lost in a given navigation workflow.

Our results illustrate that the proposed solution is more robust, easier to use, and more adaptable than other solutions operated through the accessibility services. Our future work will focus on extending this framework to visually complex/navigationally dense applications. Emotion-based UI design may also be investigated further. Moreover, the optimization of GUI layouts and elements will be considered, with a particular focus on gesture control systems and eye-tracking systems.

TABLE I. COMMON USABILITY ISSUES IN TOUCHSCREEN USER INTERFACES FOR BLIND PEOPLE
TABLE V. INTERNAL RELIABILITY AND CONVERGENT VALIDITY - PART I
TABLE VI. INTERNAL RELIABILITY AND CONVERGENT VALIDITY - PART II
UC: Unstandardized Coefficient, SC: Standardized Coefficient, SE: Standard Error, P: Significance
v3-fos-license
2016-05-12T22:15:10.714Z
2014-04-15T00:00:00.000
8386923
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://mmrjournal.biomedcentral.com/track/pdf/10.1186/2054-9369-1-3", "pdf_hash": "8fa03b445821bb989461916a10cfbaab57f3d110", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:443", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "sha1": "35b6c44f1f60524e97ccb04faa43dc4fdd5068b1", "year": 2014 }
pes2o/s2orc
Relationship between acute stress and sleep disorder in grass-root military personnel: mediating effect of social support Background Sleep disorder induced by acute stress has always been an important topic for study among the general population. However, the mediating effect of social support between acute stress and sleep disorder has rarely been reported before. Methods A total of 2,411 grass-root military personnel were randomly selected by cluster sampling, and administered the Chinese Military Personnel Sleep Disorder Scale, Military Acute Stress Scale and Social Support Rating Scale. Results The total score of acute stress scale was positively correlated with the total score and factor scores of sleep disorder scale (r = 0.209 ~ 0.465, P < 0.01); The total score of social support scale was positively correlated with the total score of acute stress scale and the total score and factor scores of sleep disorder scale (r = 0.356 ~ 0.537, P < 0.01). The analysis of mediating effects showed that lack of social support partially mediated between acute stress and the factors of sleep disorder. The analysis of structural equation model showed that acute stress not only had a direct effect on sleep disorder (the path coefficient was 0.29, P = 0.000), but also on lack of social support (the path coefficient was 0.39, P = 0.000); lack of social support had a direct effect on sleep disorder (the path coefficient was 0.48, P = 0.000). Conclusions Acute stress and lack of social support are two significant factors of sleep disorder in grass-root military personnel. Well-established social support could alleviate sleep disorder induced by acute stress. Lack of social support was a partial mediator between acute stress and sleep disorder. Background Stress has been a significant predisposing cause for various major fatal diseases. Military stress is defined as a special emotional state under extraordinary military circumstances, which might physiologically and psychologically exert negative influences upon individuals [1]. Acute stress could potentially give rise to anxiety, irritability, sleep disorders in particular, and other symptoms as acute emotional reaction, and might compromise individuals' functions in social, occupational and other significant fields [2]. Sleep quality has aroused extensive concerns, since it plays a significant role in the normality and quality of a wide range of psychological and physical functioning at the awakening time. Sleep disorder might severely impair life quality and downgrade working efficiency [3]. Cognitive stress theory has it that stress reactivity can't be defined as simple stimulation-induced reaction, but is determined by multiple mediating factors, such as social support, individual cognitive evaluation and others [4]. Based upon the study by Wenyu et al., social support of the grass-root military personnel could impair their sleep quality [5]. Generally speaking, social support, acute stress and sleep disorder are closely linked with each other. But there's barely any research combined these three factors of the grass-root military personnel. This study, taking grass-root military personnel as study subjects, investigated the relationship between these three factors by mediating effect analysis and pathway analysis, in an effort to provide references for improving sleep quality in grass-root military personnel. 
Measuring instruments Social Support Rating Scale [6]: there are altogether18 items, covering three dimensions, namely subjective support, objective support and support utility rate, plus a lying factor. It adopted three levels scoring, with never scoring 0, sometimes scoring 1 and always scoring 2. Higher scores indicated lower social support. All coefficients have been verified as follows: the correlation coefficient between factors ranged from 0.48 to 0.59 (P < 0.01), and the correlation coefficient between factors and the general scale ranged from 0.72 to 0.82 (P < 0.01); the testretest coefficient of the general scale and factors ranged from 0.62 to 0.80 (P < 0.01); Cronbach'α coefficients ranged from 0.62 to 0.87; split half coefficients ranged from 0.55 to 0.83. These verification results have demonstrated that this scale had excellent reliability and validity, meeting psychological assessment criteria. Chinese Military Personnel Sleep Disorder Scale [7]: there are altogether 29 items, covering five dimensions, namely daily functioning, insomnia, hypersomnia, motile abnormal sleep and immotile abnormal sleep. It adopted four levels scoring, with never scoring 1, occasionally scoring 2, often scoring 3 and always scoring 4. Higher scores indicted more serious sleep disorder. All coefficients were verified as follows: the correlation coefficient between factors ranged from 0.30 to 0.50 (P < 0.01), and the correlation coefficient between factors and the general scale ranged from 0.40 to 0.83 (P < 0.01); the test-retest coefficient of the general scale and factors ranged from 0.62 to 0.88 (P < 0.01); Cronbach'α coefficients ranged from 0.35 to 0.67; split half coefficients ranged from 0.59 to 0.85. These verification results have demonstrated that this scale had excellent reliability and validity, meeting the psychological assessment criteria. Military Acute Stress Scale [8]: there are altogether 42 items, covering nine dimensions, namely respiratory system, nervous system, cardiovascular system, skeletal system, digestive system, urogenital system, emotion, language and behavior, sleep, plus a lying factor. It adopted two levels scoring, with yes scoring 1 and no scoring 2. Higher scores indicted more serious stressrelated symptoms. All coefficients were verified as follows: the correlation coefficient between factors ranged from 0.28 to 0.57 (P < 0.01), and the correlation coefficient between factors and the general scale ranged from 0.70 to 0.85 (P < 0.01); the test-retest coefficient of the general scale and factors ranged from 0.38 to 0.91 (P < 0.01); Cronbach'α coefficients ranged from 0.61 to 0.93; split half coefficients ranged from 0.47 to 0.86. These verification results have demonstrated that this scale had excellent reliability and validity, meeting the psychological assessment criteria. Testing procedure All participants were divided into groups of about 30 to 50 individuals and were group-tested by automatic testing device. One research fellow made the leading remarks before the procedure, and three research fellows monitored the procedure, which lasted about half an hour. All participants were given the informed content before testing. This study was approved by the Human Research Medical Ethics Committee at No. 102 Hospital of PLA. Informed consents were obtained from all participants. Details regarding the study methods have been reported previously. 
Quality control
All participants were screened for histories of psychological diseases, organic diseases, and drug dependence. They were not requested to fill in their names, in order to dispel misgivings. All questionnaires that were filled in continuously, randomly, or arbitrarily, or whose lying score was higher than the average plus 1.96 standard error, were excluded. A total of 2,490 questionnaires were collected, among which 79 were excluded based upon the standards mentioned above, giving a questionnaire validity rate of 96.8%.

Statistical analysis
Pearson correlation analysis and stratified regression analysis were performed in SPSS version 17.0. The stratified regression analysis was carried out as follows. Based upon the study by Zhonglin et al. [9], regression analysis was carried out to verify the mediating effect of social support between acute stress and sleep disorder. To begin with, acute stress, social support, and sleep disorder were mean-centered (new variables were generated by subtracting the average from the original variables). In Step 1, the regression coefficient was verified by taking the sleep disorder total score as the dependent variable and the acute stress total score as the independent variable.

Results
Demographic variables of all the participants
Table 1 shows the demographic information of all the participants.

Correlation analysis of social support, sleep disorder and acute stress
Table 2 shows that the total score of the social support scale was positively correlated with the total score and factor scores of sleep disorder (P < 0.01). It also shows that the total score of the acute stress scale was positively correlated with social support (P < 0.01), and that the total score of the acute stress scale was positively correlated with the total score and factor scores of sleep disorder (P < 0.01).

Mediating effect analysis
Based upon the study by Zhonglin et al. [9], Table 3 shows that lack of social support partially mediated between acute stress and the factors of sleep disorder.

Discussion
The correlation analysis in this study demonstrated that acute stress and social support were positively correlated with sleep disorder and its factors, which might suggest that acute stress and lack of social support exert an impact upon sleep quality, in accordance with other studies. Spoormaker et al. concluded that sleep disorder constitutes a prominent issue in acute stress, and that more serious consequences can be brought about by acute stress events combined with sleep disorder [11,12]. Both Schoenfeld and Kobayashi investigated sleep quality in stressed individuals and found that acute stress would impair sleep quality and even engender sleep disorder [13,14]. Hall et al. found that acute stress reactivity was significantly related to the down-regulation of parasympathetic nerves during non-rapid eye movement (NREM) sleep and rapid eye movement (REM) sleep, and that the heart rate abnormalities induced by acute stress could also impair sleep quality [15]. Krakow et al. verified that treatment targeting sleep disorder could alleviate stress symptoms in over 50% of patients [16], and another study, by Brummett et al., showed that the social support system also has an indirect bearing on sleep quality [17].
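As an illustration of the stratified regression (mediation) procedure described in the statistical analysis section, the sketch below runs the three classic regression steps on simulated, mean-centered data. The data are placeholders, not the study's data, and statsmodels merely stands in for the SPSS 17.0 workflow actually used; only the structure of the test follows the procedure described above.

```python
# Illustrative three-step mediation test (stratified regression) on simulated data.
# Placeholders only; the study itself used SPSS 17.0.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 2411  # sample size reported in the study

# Simulated scores: acute stress (X), lack of social support (M), sleep disorder (Y).
stress = rng.normal(size=n)
support = 0.4 * stress + rng.normal(size=n)
sleep = 0.3 * stress + 0.5 * support + rng.normal(size=n)

# Mean-center all variables, as described in the procedure.
stress, support, sleep = (v - v.mean() for v in (stress, support, sleep))


def ols(y, predictors):
    return sm.OLS(y, sm.add_constant(np.column_stack(predictors))).fit()


step1 = ols(sleep, [stress])            # c: total effect of stress on sleep disorder
step2 = ols(support, [stress])          # a: effect of stress on the mediator
step3 = ols(sleep, [stress, support])   # c' and b: direct effect and mediator effect

c, a = step1.params[1], step2.params[1]
c_prime, b = step3.params[1], step3.params[2]
print(f"total effect c = {c:.3f}, a = {a:.3f}, b = {b:.3f}, direct effect c' = {c_prime:.3f}")
# Partial mediation is indicated when a, b, and c' are all significant and |c'| < |c|.
```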
Under military circumstances, rigorous and overloaded training might give rise to mental stress and over-fatigue and other stress responses; on the other hand, strained interpersonal relationship, family traumatic events and lack of social support could also induce stress responses [18]. All of these studies suggest that less acute stress and better social support system would greatly reduce sleep disorders and improve significantly sleep quality. However, another study by Hellhammer et al. proved that neuroendocrine response mechanism induced by acute stress played a positive role in improving sleep quality [19], which was inconsistent with our study. The reasons of the inconsistent might be that, the kinds of the participants were different, then the acute stress could be different either. The results of stratified regression analysis showed that social support partially mediated between acute stress and sleep disorder, which was further confirmed by structural equation model construction. Both results verified that acute stress could directly lead to sleep disorder, and also could indirectly lead to sleep disorder through the mediating effect of social support. This conclusion is consistent with other previous studies. Yu et al. found that social support mediated between stressful life events and psychological symptoms [20]. Another study by Zhu et al. verified that extensive social support system could serve as cushion for effects of stressful life events upon emotion, thus preserving mental health [21]. Multiple studies have suggested that social support, defined as perception of external support available, served as an important variable in stress responses, and played a significant role in alleviating stress response, preserving mental health and improving sleep quality [22][23][24]. Based upon this study, extensive social support could improve sleep quality by reducing acute stress in grass-root military personnel. Conclusions This study has demonstrated through mediating effect analysis and structural equation model construction that social support partially mediated between acute stress and sleep disorder, and well-established social support system and less acute stress could greatly improve sleep quality. This study is of great significance in providing references for improving sleep quality in grass-root military personnel and enhancing military combating capacity. Competing interests LZ led the study and has a fiduciary role in Prevention and Treatment Centre for Psychological Diseases of PLA in the PLA 102nd Hospital. GY, CC and LK are employees of Prevention and Treatment Centre for Psychological Diseases of PLA in the PLA 102nd Hospital, which might have an interest in the submitted work. All other authors (QZ, QlZ, XS and SZ) don't have any financial interests or non-financial competing interests that may be relevant to the submitted work. Authors' contributions LZ designed the study and obtained the funding . QZ participated in the design of the study and drafted the manuscript. QlZ performed the statistical analysis and helped to draft the manuscript. SZ collected data. XS, GY, CC and LK contributed to technical or material support. Each author had full access to all the data and take responsibility for the integrity of the data and the preciseness of the data analysis. All authors read and approved the final manuscript.
v3-fos-license
2024-04-15T05:19:24.762Z
2024-04-13T00:00:00.000
269135776
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": null, "oa_url": null, "pdf_hash": "a77495c97c5804460407a25ea3f9581b143e3384", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:444", "s2fieldsofstudy": [ "Medicine" ], "sha1": "a77495c97c5804460407a25ea3f9581b143e3384", "year": 2024 }
pes2o/s2orc
The preliminary analysis of lymphatic flow around the connective tissues surrounding SMA and SpA elucidates patients' oncological condition in borderline-resectable pancreatic cancer

Background
In pancreatic ductal adenocarcinoma (PDAC), invasion of the connective tissues surrounding major arteries is a crucial prognostic factor after radical resection. However, why connective tissue invasion is associated with poor prognosis is not well understood.

Materials and methods
From 2018 to 2020, 25 patients receiving radical surgery for PDAC in our institute were enrolled. The HyperEye Medical System (HEMS) was used to examine lymphatic flow from the connective tissues surrounding the SMA and SpA, and the lymph nodes in which ICG accumulated were examined.

Results
HEMS imaging revealed that ICG was transported down to the paraaortic area of the abdominal aorta along the SMA. In pancreatic head cancer, 9 of 14 paraaortic lymph nodes (64.3%) were ICG positive, a higher positivity rate than LN#15 (25.0%) or LN#18 (50.0%), indicating that lymphatic flow around the SMA leads directly to the paraaortic lymph nodes. Similarly, in pancreatic body and tail cancer, the percentage of ICG-positive LN#16a2 was very high, as was that of #8a, although that of #7 was only 42.9%.

Conclusions
Our preliminary results indicated that the lymphatic flow along the connective tissues surrounding major arteries could be helpful in understanding metastasis and improving prognosis in BR-A pancreatic cancer.

Supplementary Information
The online version contains supplementary material available at 10.1186/s12893-024-02398-z.

Introduction
Pancreatic ductal adenocarcinoma (PDAC) is the fourth-leading cause of cancer deaths in the United States, with a 5-year relative survival rate of 8% [1]. Surgical resection for localized disease is the only treatment option for a complete cure, but the prognosis after radical resection is still poor, and > 50% of patients develop tumor recurrence at distant or locoregional sites, with an estimated 5-year survival of only 20% [2]. One of the reasons for such poor prognosis after radical resection is the high incidence of invasion of extra-pancreatic tissue, including lymphatic vessels and nerve plexuses, leading to distant metastasis. In particular, invasion of the connective tissues surrounding major arteries, in which the peripancreatic nerve plexus and lymph vessels exist, is a crucial prognostic factor after radical resection [3-5]. This is why major arterial invasion, such as of the common hepatic artery (CHA) and superior mesenteric artery (SMA), is categorized as a borderline-resectable state that requires a multidisciplinary approach with preoperative therapy for radical resection.
We have also reported that the status of perineural invasion and that of nodal involvement are significant independent prognostic factors in patients with PDAC who are receiving preoperative chemoradiotherapy followed by radical surgery [6].Furthermore, we have already shown that perineural invasion is significantly associated with not only postoperative local recurrence but with distant metastasis, such as to the liver and nonlocoregional lymph nodes [7].However, why the connective tissues invasion including perineural invasion is associated with postoperative distant metastasis is not well understood.Previous studies have used fluorescence imaging to demonstrate that lymph from the pancreatic head flows into the connective tissues of SMA [8,9], but how lymphatic flow is running from the SMA connective tissues is still unclear.In this study, we aimed to examine the lymphatic flow from the connective tissues surrounding SMA by using indocyanine green (ICG) fluorescence imaging and clarify why the invasion of connective tissues surrounding SMA is one of risk factors of distant metastasis.Furthermore, we also examined the lymphatic flow from the connective tissues surrounding splenic artery (SpA) and tried to clarify the significance of SpA invasion in pancreatic body and tail cancer. Enrolled patients From 2018 to 2020, 25 patients who had undergone radical surgery for pancreatic ductal adenocarcinoma (PDAC) in our institute were enrolled in this study.Patient characteristics are shown in Table 1 and the details of each patient are described in supplementary Table 1.The average age was 62.4 ± 9.1 years old, and 13 patients were male.Twelve patients received pancreaticoduodenectomy for pancreatic head cancer, while 13 patients received distal pancreatectomy for pancreatic body and tail cancer.All patients received D2 lymph node dissection, that included wide sampling of paraaortic lymph nodes.Neoadjuvant chemotherapy and neoadjuvant chemoradiotherapy were performed in 4 and 18 patients respectively, while the other 3 patients received upfront surgery.Pathologically examination was determined to the UICC-TNM classification 8th edition and pathologically lymph node metastasis in the resected specimen was observed in 10 patients and pathologically perineural invasion was observed in four patients.Postoperative recurrence was observed in nine patients and primary recurrence site was as follows; local recurrence in two patients, liver recurrence in five patients, lymph node recurrence in two patients and lung recurrence in two patients. Surgical procedure In pancreatic head cancer, the superior mesenteric vein (SMV) and SMA were exposed at the inferior border of the pancreatic body just after laparotomy.After the SMA sheath was revealed, 0.3 mL of 0.5% ICG was injected into the connective tissues surrounding SMA at the level of the middle colic artery bifurcation (Fig. 1A).Next, we carefully performed Kocher's mobilization and a wide sampling of paraaortic lymph nodes (#16a2 and #16b1).After that we accomplished pancreaticoduodenectomy by a posterior or mesenteric approach, depending on the location and size of the tumor. In pancreatic body and tail cancer, we exposed the superior border of the pancreatic body along the SpA, and injected 0.3 mL of 0.5% ICG into the connective tissues surrounding SpA adjacent to the tumor (Fig. 1B).We then performed radical antegrade modular pancreatosplenectomy (RAMPS), including wide sampling of paraaortic lymph nodes (#16a2 and #16b1). 
Both in pancreatic head cancer and in pancreatic body cancer, we usually dissected the affected one-half side of the SMA nerve plexus. All patients received D2 lymph node dissection, but the #6, #8, #12, #13, and #17 lymph nodes in pancreatic head cancer and the #10, #11, and #18 lymph nodes in pancreatic body-tail cancer were resected en bloc with the pancreas, so we could not evaluate the ICG accumulation in these lymph nodes.

Lymphatic flow analysis
To examine the lymphatic flow from the connective tissues surrounding the SMA and SpA, we used the HyperEye Medical System (HEMS, Mizuho Medical Co. Ltd., Tokyo, Japan), which can visualize ICG-enhanced lymphoid structures and ICG that has accumulated in lymph nodes by detecting the near-infrared fluorescence signal emitted by ICG. In addition, after the regional lymph nodes were dissected, whether or not ICG had accumulated in lymph nodes near the SMA or SpA, as shown in Fig. 1, was investigated by HEMS, and the percentage of ICG-positive lymph nodes in each region was calculated. The photographs in Fig. 2 show an ICG injection being performed in a pancreatoduodenectomy (Fig. 2A) and a distal pancreatectomy (Fig. 2B). Figures 2C and 2D show the accumulation of ICG in dissected lymph nodes as detected by HEMS.

Statistical analysis and ethical issues
All data are expressed as mean ± standard deviation or median and range. Differences in continuous values were evaluated using the Student t-test or Mann-Whitney U test. Categorical data were compared using Fisher's exact probability test or Pearson's chi-squared test, as appropriate. All analyses were performed in IBM SPSS Statistics version 21.0 (IBM Japan Business Logistics, Tokyo, Japan), and P < 0.05 was considered significant. The statistics expert in our laboratory performed all the statistical analyses. The study protocol was approved by the Human Ethics Review Committee of Osaka International Cancer Institute (ethical approval number 1,710,059,193). Signed informed consent was obtained from each participant.

Results
Supplementary Fig. 3 is a moving image showing lymphatic flow, obtained by using HEMS after ICG injection in pancreaticoduodenectomy. ICG-bearing lymph is shown flowing down to the paraaortic area of the abdominal aorta along the connective tissues surrounding the SMA. Table 2 shows the percentage of ICG-positive lymph nodes against the total number of dissected lymph nodes in all patients. In pancreatic head cancer, all LN#14p lymph nodes, which are from the proximal area around the SMA, were ICG positive, indicating that ICG was transported properly within the lymphatic vessels and stored in the lymph nodes. Among the #16a1 lymph nodes, which were paraaortic lymph nodes near the origin of the SMA, 9 of 14 (64.3%) were ICG positive, higher than the percentage of positive LN#15 (25.0%) or LN#18 (50.0%) nodes, which were near the point of ICG injection, indicating that lymphatic flow around the SMA leads directly to the paraaortic lymph nodes. Furthermore, the ICG-positive percentages of LN#16b1R and #16b1L were 46.3% and 35.7%, respectively, also higher than that of #15, indicating that invasion of the SMA by the pancreatic cancer could be contributing to the threat of distant metastasis.
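The per-station positivity rates and the categorical comparisons mentioned in the statistical analysis section can be illustrated with a short script. The counts below are placeholders loosely modeled on the figures quoted in the text (e.g., 9/14 for #16a1); they are not the actual Table 2 data, and scipy is used here only as an example in place of the SPSS workflow that was used.

```python
# Illustrative computation of ICG-positivity rates per lymph node station and a
# Fisher's exact comparison between two stations. Counts are placeholders.
from scipy.stats import fisher_exact

# station -> (ICG-positive nodes, total dissected nodes)
stations = {
    "#14p":   (12, 12),
    "#16a1":  (9, 14),
    "#15":    (2, 8),
    "#18":    (4, 8),
    "#16b1R": (19, 41),
}

for name, (pos, total) in stations.items():
    print(f"{name}: {pos}/{total} ICG-positive ({100 * pos / total:.1f}%)")

# 2x2 comparison of positivity between #16a1 and #15 (positive vs. negative counts).
pos_a, tot_a = stations["#16a1"]
pos_b, tot_b = stations["#15"]
table = [[pos_a, tot_a - pos_a],
         [pos_b, tot_b - pos_b]]
odds_ratio, p_value = fisher_exact(table)
print(f"#16a1 vs #15: odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```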
Fig. 1 The ICG injection manner according to the location of pancreatic cancer. (A) In pancreatic head cancer, ICG was injected into the surface of the nerve plexus of the SMA at the level of the middle colic artery bifurcation. Then, the regional lymph nodes along the SMA, including paraaortic lymph nodes (#16a2 and #16b1), were dissected and examined. (B) In pancreatic body and tail cancer, ICG was injected into the surface of the SpA nerve plexus adjacent to the tumor. Then, the regional lymph nodes along the SMA, including paraaortic lymph nodes (#16a2 and #16b1), were dissected and examined.

Similarly, in pancreatic body and tail cancer, the percentages of ICG-positive #16a2 and #8a nodes were also very high, although that of #7 was only 42.9%, indicating that lymphatic flow into paraaortic lymph nodes was occurring also in the connective tissues surrounding the SpA. Table 3 indicates the percentage of patients with ICG-positive lymph nodes in each lymph node region. In patients with pancreatic head cancer, paraaortic lymph nodes (#16b1R) were detected in all patients, and ICG-positive nodes were detected in 10 patients (83.3%). Furthermore, in patients with pancreatic body and tail cancer, paraaortic lymph nodes (#16a2) were detected in 11 patients (84.6%), and all of these patients showed ICG positivity. These data indicate that pancreatic cancer can easily infiltrate paraaortic tissue along the connective tissues surrounding major arteries, leading to distant metastasis.

Table 2 The number of dissected lymph nodes in each region and the ICG-positive rate for patients with resectable and borderline-resectable pancreatic cancer

Discussion
In borderline-resectable pancreatic cancer, the surgery-first approach is not curative. Most cases result in tumor relapse because of both the high risk of positive margins and the high incidence of early distant recurrence, including non-regional lymph node metastasis. Thus, the National Comprehensive Cancer Network (NCCN) guideline recommends preoperative chemotherapy with or without radiotherapy. We have also reported the prognostic benefit of preoperative chemoradiotherapy [10]. However, especially in borderline-resectable cases with SMA abutment, the surgical outcome was unsatisfactory, mainly because of distant metastasis, although local control of the cancer was relatively good, probably due to the effect of radiation therapy [7]. Thus, to improve the prognosis of patients with pancreatic cancer with SMA abutment, it is important to understand why SMA invasion leads to distant metastasis.

We proved in this study that there are lymphatic vessels in the connective tissues surrounding the SMA. Previously, Xu et al., using normal autopsy specimens, revealed that lymphatics and capillaries are present in the mesopancreatic root, located between the uncinate process of the pancreas and the superior mesenteric vessels [11]. They also revealed that intra-mesopancreatic nerves, lymph nodes, lymphangions, and fascia fibers along the SMA were infiltrated by cancer cells in specimens of unresectable pancreatic cancer. Furthermore, Cheng et al.
described the invasion of lymphatic vessels along the SMA as activating tumor-induced lymphangiogenesis, resulting in the development of metastatic tumors [12]. Our results also show that tumor cells could easily move into the general circulation within a few minutes once they invade the connective tissues surrounding the SMA. This indicates that the clinical condition in borderline-resectable pancreatic cancer with arterial abutment (BR-A) is completely different from that in borderline-resectable pancreatic cancer with portal vein invasion (BR-PV), so we should consider a distinct treatment strategy for BR-A, separately from BR-PV. In surgical procedures, the mesenteric approach, one of six approaches to the superior mesenteric artery [13], was reported to be suitable for BR-A pancreatic cancer with respect to the early judgement of resectability and a sufficient peripancreatic margin around the SMA. However, Hirono et al. reported that the mesenteric approach did not provide significant prognostic advantages for patients with borderline-resectable pancreatic cancer, although it could yield prognostic benefits to patients with resectable pancreatic cancer in the form of lower local recurrence rates [14]. Taking into consideration our result that cancer cells can easily migrate into the systemic circulation when pancreatic cancer invades the connective tissues surrounding the SMA, it is not surgical technique but an effective multimodal approach, including powerful preoperative chemotherapy, that is essential for the prognostic improvement of BR-A pancreatic cancer patients.

In this study, we injected ICG along the connective tissues surrounding the SpA in patients with pancreatic body and tail cancer. The injected ICG was transported in the lymphatic vessels of the connective tissues surrounding the SpA into the paraaortic lymph nodes; this is very similar to the observed ICG transport in the connective tissues of the SMA in patients with pancreatic head cancer. Recent reports have described pancreatic body and tail cancer with SpA involvement showing poor prognosis after radical resection [15-17]. Similarly, we have reported that SpA involvement is a poor prognostic indicator after radical resection in patients with pancreatic body and tail cancer who receive preoperative chemoradiotherapy [18]. Interestingly, the incidence of distant recurrence was significantly high in patients with SpA involvement, and this was very similar to the results in patients with SMA abutment, potentially indicating that pancreatic body and tail cancer with SpA involvement should be treated as borderline resectable, like pancreatic head cancer with CHA and SMA invasion.

Of course, this study was only exploratory research, and the interpretation of the results is limited by the small sample size, so we have to elucidate the lymphatic flow in the connective tissues surrounding the SMA more definitively in a larger number of patients. However, our results could partially explain why BR-A pancreatic cancer shows a high incidence of distant metastasis after radical resection, and we believe that they could help to establish a more effective treatment strategy in BR-A pancreatic cancer.
Fig. 2 The actual image of ICG injection in a pancreatoduodenectomy (A) and a distal pancreatectomy (B). The accumulation of ICG in dissected paraaortic lymph nodes as detected by HEMS in a pancreaticoduodenectomy (C) and in a distal pancreatectomy (D)

Table 1 The characteristics of the enrolled pancreatic cancer patients
v3-fos-license
2016-06-17T07:46:45.014Z
2014-07-29T00:00:00.000
14209494
{ "extfieldsofstudy": [ "Chemistry", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmolb.2014.00009/pdf", "pdf_hash": "0a9239682a3ec58f76468b9798f36dd5f25e2004", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:445", "s2fieldsofstudy": [ "Biology" ], "sha1": "0a9239682a3ec58f76468b9798f36dd5f25e2004", "year": 2014 }
pes2o/s2orc
Differential effects of glycation on protein aggregation and amyloid formation Amyloids are a class of insoluble proteinaceous substances generally composed of linear un-branched fibrils that are formed from misfolded proteins. Conformational diseases such as Alzheimer's disease, transmissible spongiform encephalopathies, and familial amyloidosis are associated with the presence of amyloid aggregates in the affected tissues. The majority of the cases are sporadic, suggesting that several factors must contribute to the onset and progression of these disorders. Among them, in the past 10 years, non-enzymatic glycation of proteins has been reported to stimulate protein aggregation and amyloid deposition. In this review, we analyze the most recent advances in this field suggesting that the effects induced by glycation may not be generalized as strongly depending on the protein structure. Indeed, being a post-translational modification, glycation could differentially affects the aggregation process in promoting, accelerating and/or stabilizing on-pathway and off-pathway species. PROTEIN AGGREGATION AND AMYLOID FORMATION Neurodegenerative disorders, including Alzheimer's, Parkinson's, amyotrophic lateral sclerosis and prion diseases are debilitating and so far incurable disorders that demand intensive research. In these diseases, misfolding, aggregation, and precipitation of proteins seem to be directly related to neurotoxicity (Dobson, 2003;Chiti and Dobson, 2006). Specifically, the physiological alterations are associated with the formation of fibrillar aggregates, referred to as amyloid fibrils, that usually accumulate in the extracellular space of tissues or also as intracellular deposits Taylor et al., 2005). Protein molecular assembly is characterized by several events like conformational changes and intermolecular interactions which strongly affect each other. The hierarchy of all these mechanisms and their extent depends on several physical and chemical parameters such as temperature, pH, ionic strength, and addition of denaturants. Until very recently, it was thought that only a small number of polypeptide chains associated with clinical disorders were able to form amyloid fibrils. However, a number of recent studies have shown that proteins unrelated to diseases, under suitable conditions, can form aggregates in vitro with structural and cytotoxic properties that closely resemble those of the amyloid fibrils formed in diseased tissues (Litvinovich et al., 1998;Fandrich et al., 2001;Sirangelo et al., 2004Sirangelo et al., , 2009Iannuzzi et al., 2013a). These observations have led to the idea that the ability to form amyloid fibrils is a generic property of polypeptide chains irrespective of their amino acid sequence and caused by stable interactions involving primarily the common polypeptide backbone. Despite major differences in the sequences and three-dimensional structures of the peptides and proteins involved, the fibrillar forms of the aggregates share a common ultrastructure (Diaz-Avalos et al., 2003;Nelson et al., 2005;Fitzpatrick et al., 2013). They usually consist of a number (typically 2-6) of protofilaments, each about 2-5 nm in diameter, that are often twisted around each other to form super-coiled ropelike structures typically 7-13 nm in width or that laterally associate to form long ribbons that are 2-5 nm thick and up to 30 nm wide (Serpell et al., 2000). 
X-ray diffraction analysis has indicated that the characteristic structure, i.e., the β-cross motif, is formed by β-strands oriented perpendicular to the long axis of the fibril, and β-sheets propagating in the fibril direction (Sunde and Blake, 1997;Makin and Serpell, 2002;Maji et al., 2009). These findings suggest that a common molecular mechanism could underlie the aggregation process of the different proteins involved in misfolding diseases (Kopito, 2000;Dobson, 2001). Three major factors have been identified as important parameters in the conversion of a protein into aggregates; these are high hydrophobicity, high propensity to convert from α-helical to βsheet structure, and low net charge (Konno, 2001;Ciani et al., 2002;Tjernberg et al., 2002;Chiti et al., 2003;Tartaglia et al., 2008). Protein destabilization favors the formation of partially unfolded conformations that are highly prone to aggregation (Uversky and Fink, 2004). In most cases, protein destabilization is facilitated by amino acid mutations which also increase the structural flexibility of the peptide chain; however, other proteins are amyloidogenic even in the wild type form (Hurle et al., 1994;Goedert et al., 2000;Quintas et al., 2001;Niraula et al., 2002;Iannuzzi et al., 2007;Infusini et al., 2012Infusini et al., , 2013. It has been suggested that protein folding and protein aggregation, despite being distinct processes, are in competition each other and the environmental conditions dictate which one is favored for a given polypeptide chain (Tartaglia and Vendruscolo, 2010). On this basis, extensive studies have been carried out in vitro to investigate the nature of the transition between natively folded states and soluble aggregate-precursor states, and between the latter and mature amyloid fibrils and the factors affecting all of these (Wiseman et al., 2005). Recent data indicate that these dangerous aggregation-prone states, although similar to the native conformation, display altered surface charge distribution, alternative β-sheet topologies and increased solvent exposure of hydrophobic surfaces and of aggregation-prone regions of the sequence (De Simone et al., 2011). The propensity of normally folded proteins to form amyloid-like fibrils increases in conditions that allow the protein to break the major unfolding energy barrier, favoring partial unfolding of the native state. These include low pH, high temperature, or the presence of organic solvents (Guijarro et al., 1998;Villegas et al., 2000). However, increasing evidence is now accumulating that folded proteins also retain a significant tendency to aggregate with no need for unfolding as first obligatory step (Plakoutsi et al., 2004;Bemporad and Chiti, 2009). Protein aggregation begins with the appearance of aggregation nuclei, whose growth is considered the rate-limiting step of the process, which has many characteristics of a nucleation-dependent polymerization mechanism (Kelly, 1998) (Figures 1, 2). These species, generally indicated as protofibrils or soluble oligomeric intermediates, appear as globules of 2.5-5.0 nm in diameter or larger, with an intrinsic tendency to further assemble into pore-like annular and tubular structures (Lashuel et al., 2002;Poirier et al., 2002). Once a nucleus is formed, fibril growth is thought to proceed rapidly by further association of either monomers or oligomers with the nucleus (Cohen et al., 2012). 
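As a worked illustration of how the nucleation-dependent kinetics described above are typically quantified, the sketch below fits a sigmoidal growth curve to a synthetic aggregation time course (e.g., thioflavin-T fluorescence) to extract a lag time and an apparent growth rate. The data and parameter values are synthetic placeholders, not measurements from any of the studies cited here.

```python
# Illustrative quantification of nucleation-dependent aggregation kinetics:
# fit a sigmoid to a synthetic time course and derive the lag time.
# All numbers are synthetic placeholders, not data from the cited studies.
import numpy as np
from scipy.optimize import curve_fit


def sigmoid(t, y0, ymax, k, t_half):
    """Boltzmann-type sigmoid often used to describe amyloid growth curves."""
    return y0 + (ymax - y0) / (1 + np.exp(-k * (t - t_half)))


# Synthetic "ThT fluorescence" time course (hours) with a lag phase and noise.
t = np.linspace(0, 48, 97)
rng = np.random.default_rng(1)
signal = sigmoid(t, 0.05, 1.0, 0.5, 20.0) + rng.normal(0, 0.02, t.size)

popt, _ = curve_fit(sigmoid, t, signal, p0=[0.0, 1.0, 0.1, 24.0])
y0, ymax, k, t_half = popt

# Conventional definition: lag time = t_half - 2 / k
lag_time = t_half - 2.0 / k
print(f"t_half = {t_half:.1f} h, apparent growth rate k = {k:.2f} 1/h, lag time = {lag_time:.1f} h")
```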
While insoluble aggregates correlate with disease progression, there is increasing evidence that the initiating and most toxic events are caused by prefibrillar forms rather than mature fibrils. These results have led to the idea that the molecular basis of cell and tissue impairment may be related to the transient appearance of prefibrillar assemblies, under conditions where their intracellular levels increase as a consequence of dysfunctions in cellular clearance machineries (Stefani, 2012). The specific mechanism by which these species appear to mediate their toxic effects is not completely understood; probably toxicity is mediated by common structural features shared by prefibrillar precursors (Kayed et al., 2003; Bucciantini et al., 2004; Malmo et al., 2006; Cecchi and Stefani, 2013).

FIGURE 1 | Association of two or more non-native peptide/protein molecules forming highly ordered, fibrillar aggregates.
FIGURE 2 | Nucleation-dependent fibril formation process.

PROTEIN GLYCATION AND AMYLOIDOSIS
Although the aggregation process of amyloidogenic proteins has been widely studied in vitro and many physiological (environmental and genetic) factors involved have been identified, the molecular mechanisms underlying the formation of aggregates in vivo and in pathological conditions are still poorly understood. The majority of neurodegenerative diseases are sporadic, suggesting that other factors must contribute to the onset and progression of these disorders. Post-translational modifications are known to affect protein structure and function. Some of these modifications might affect proteins in detrimental ways and lead to their misfolding and accumulation. Reducing sugars play an important role in modifying proteins, forming advanced glycation end-products (AGEs) in a non-enzymatic process named glycation. This process is different from glycosylation; indeed, these two post-translational modifications affect the structure of the target protein in different ways. Glycosylation is a selective protein modification, driven by specific enzymes, that is generally associated with a gain of function (or stabilization) of the target protein. Non-enzymatic glycation is a non-selective modification and is generally associated with a loss of function of the target protein due to modifications of its native structure. While glycosylation is a well-controlled cellular mechanism, non-enzymatic glycation only depends on the exposure of free amino groups in the polypeptide chain, the concentration of the sugar, and oxidative conditions. Recently, much attention has been devoted to the role played by non-enzymatic glycation of proteins in stimulating amyloid aggregation and toxicity. Proteins in amyloid deposits are often found glycated, suggesting a direct correlation between protein glycation and amyloidosis (Miyata et al., 1993; Kikuchi et al., 2000; Munch et al., 2000; Dukic-Stefanovic et al., 2001; Shults, 2006). Glycation reactions are common to all cell types: glycated products slowly accumulate in vivo leading, besides the cellular modifications involved in the aging process, to several different protein dysfunctions (Lyons et al., 1991; Miyata et al., 1999; Gul et al., 2009). The process begins with a nucleophilic addition reaction between a free amino group of a protein and a carbonyl group of a reducing sugar, forming a reversible intermediate product (Schiff's base).
Side-chains of arginine and lysine residues, the N-terminus amino group of proteins, and thiol groups of cysteine residues, are the main targets of protein glycation. The process depends on several conditions, such as the concentration and reactivity of the glycation agent, the presence of catalytic factors (metals, buffer ions and oxygen), the physiological pH, temperature and the half-life of each protein. All reducing sugars can participate in glycation reactions and, between them, D-ribose is the most active and its intracellular level can be quite high. D-glucose is the least reactive and its intracellular concentration is negligible, while dicarbonyl compounds are far more reactive. The levels of D-ribose in the blood are estimated around 20 mg/L in healthy individuals while D-glucose 6-10 g/L. Once formed, the Schiff 's base can turn into a stable ketoamine by Amadori rearrangement (Figure 3). This reaction is reversible depending on the concentration of the reactants. The late-stage of the process is an irreversible cascade of reactions involving dehydration, hydrolysis, and other rearrangements leading to the formation of AGEs. AGEs products are considered to be a marker of several diseases, such as arteriosclerosis, renal failure, Alzheimer disease, or diabetes, although they normally increase in aging (Vlassara, 2005). Indeed, protein glycation has been considered an age related problem influencing mainly extracellular proteins, such as collagen and elastin, which are located outside the cells and provide strength and flexibility to the tissues. AGEs formation can interfere not only with the regular functioning of the proteins to which they are attached but also induce the formation of covalent crosslinks with close proteins. This process is gradual, so that crosslinks accumulate over the years on the longest-lived extracellular proteins, which do not get cleared very often; clear evidence of this is found in the extracellular collagen and elastin (Furber, 2010). The observation that proteins in amyloid deposits, such as β-amyloid, tau, prions and transthyretin, are often found glycated in patients suggests a direct correlation between protein glycation and amyloid formation. This is thought to be associated with an increased protein stability through the formation of cross-links that stabilize protein aggregates (Figure 4). Also, glycation affects the structure and the biological activity of proteins as well as their degradation process (Shaklai et al., 1984;Mendez et al., 2005) and, being an abnormal modification, it has been found to induce some proteins to misfold and, thus, promote protein aggregation (Vitek et al., 1994;Chellan and Nagaraj, 1999;Verzijl et al., 2002;Bouma et al., 2003). Moreover, once proteins become glycated at their exposed lysine residues, clearance by the ubiquitin-proteasome system would be impaired because ubiquitination of lysine residues, a modification that targets proteins to the proteasome for degradation, might be impeded. Thus, accumulation of proteins as aggregates or as depositions or inclusions in tissues might be favored after glycation. However, in addition to directly affecting protein structure and function, AGEs also exert cellular effects mediated by specific AGEs receptors (RAGE), as well as macrophage scavenger receptors, MSR type II, OST-48, 80K-H, galectin-3, and CD36 (Vlassara et al., 1995;Li et al., 1996;Ohgami et al., 2002;Stern et al., 2002). 
Indeed, glycation may be responsible, via RAGE, for an increase in oxidative stress and inflammation through the formation of reactive oxygen species and the activation of nuclear transcription factor signaling.
DIFFERENTIAL EFFECTS OF GLYCATION ON PROTEIN AGGREGATION
Several proteins, related and not related to misfolding diseases, have so far been examined to investigate the effect of glycation on their propensity to aggregate and form amyloid structures.
Aβ-PEPTIDE
Vitek et al. (1994) observed, for the first time, that plaque fractions of AD brains contained about three-fold more AGE adducts than preparations from healthy, age-matched controls. They showed that the in vivo half-life of β-amyloid is prolonged in AD, resulting in greater accumulation of AGE modifications which may, in turn, act to promote accumulation of additional amyloid. Moreover, AGE-modified Aβ peptide-nucleation seeds accelerated aggregation of soluble Aβ peptide compared to non-modified seed material (Vitek et al., 1994). Subsequently, Munch et al. (1997, 2000) reported that glycation promotes in vitro amyloid aggregation of the Aβ peptide, probably because of cross-linking through AGE formation. Further studies revealed that glycation is not only capable of enhancing the rate of formation of amyloid, oligomers and protofibrils but also of increasing the size of the aggregates (Chen et al., 2006). The fibrillar aggregates formed upon glycation were not cytotoxic; thus, glycation of the Aβ peptide seems to strongly reduce its toxicity (Fernandez-Busquets et al., 2010).
β2-MICROGLOBULIN
In the case of β2-microglobulin as well, glycation seems to promote amyloid aggregation. In particular, D-ribose interacts with human β2-microglobulin to generate AGEs that form aggregates in a time-dependent manner. Ribosylated β2-microglobulin molecules are highly oligomerized compared with the unglycated protein and have a granular morphology. Such ribosylated β2-microglobulin aggregates show significant cytotoxicity to both human SH-SY5Y neuroblastoma and human foreskin fibroblast FS2 cells and induce the formation of intracellular reactive oxygen species (Kong et al., 2011). By contrast, modification of β2-microglobulin with D-glucose was reported to inhibit fibril extension in vitro (Hashimoto et al., 1999).
INSULIN
A different effect has been observed for glycated insulin. This protein is intimately associated with glycaemia and is vulnerable to glycation by glucose and other highly reactive carbonyls, especially in diabetic conditions (Brange et al., 1997). In vitro experiments have shown that glucose is able to glycate bovine insulin at Lys29 in the C-terminal region of chain B and at the N-termini of chains A and B. Glucose produces glycated bovine insulin adducts with different structural features depending on the experimental conditions. In particular, under reducing conditions, glycation produces higher levels of insulin oligomerization and, therefore, accelerates amyloid formation. By contrast, under non-reducing conditions, glycation inhibits amyloid formation in a way proportional to the extent of glycation (Alavi et al., 2013). Probably, under these conditions, insulin adducts possess higher internal dynamics that prevent formation of the rigid cross-β core structure, thus reducing the ability to form fibrils. Methylglyoxal is able to glycate human insulin at a single site, i.e., Arg46 of the B-chain.
This modification induces the formation of native-like aggregates and reduces the ability to form fibrils by blocking the formation of the seeding nuclei. These aggregates are small, soluble, non-fibrillar, and retain a native-like structure. The lag phase of the nucleation-dependent polymerization process increased as a function of methylglyoxal concentration. In this case, glycation preserved the native conformation of insulin, blocking the α-helix to β-sheet transition and thus leading to reduced fibril formation. Again, the effects may be ascribed to higher dynamics in glycated insulin, leading to impairment in the formation of the rigid cross-β core structure. Taken together, these results showed that methylglyoxal-induced glycation reduces insulin fibril formation and promotes the population of oligomeric states (Oliveira et al., 2011).
CYTOCHROME C
Cytochrome c (Cyt c) was also used as a model protein to study the impact of glycation on protein structure, stability, and ability to form aggregates. Methylglyoxal has been shown to covalently modify Cyt c at a single arginine residue and to induce early conformational changes that lead to the formation of native-like aggregates without promoting amyloid formation. Oligomerization occurs due to localized protein structural changes, which induce a decrease in the conformational stability of the modified protein. Consequently, the aggregation process starts directly by monomer addition in a way that is thermodynamically and kinetically favored. Furthermore, partially unfolded species are formed, but they do not seem to be implicated in the aggregation process. Interestingly, the glycated Cyt c unfolded species are an off-pathway by-product and, for this reason, they do not promote the amyloidogenic aggregation pathway (Oliveira et al., 2013).
α-SYNUCLEIN
Glycation of α-synuclein is a factor involved in the aggregation of the protein in Parkinson's disease and in the formation of Lewy bodies (LB). Glycation was first reported to be present in the substantia nigra and locus coeruleus, at the periphery of LB (Vicente and Outeiro, 2010). The protein has 15 lysine residues, making it a target for glycation at multiple sites (Padmaraju et al., 2011). Lee and collaborators found that methylglyoxal induces oligomerization of α-synuclein and inhibits the formation of amyloid fibrils. Moreover, protein fibrillization was also significantly suppressed by the seeding of modified α-synuclein species (Lee et al., 2009). Similar results were obtained with D-ribose: ribosylation of α-synuclein promotes the formation of molten globule-like aggregates, which caused oxidative stress in cells and resulted in high cytotoxicity (Chen et al., 2010).
LYSOZYME
Hen egg white lysozyme (HEWL) has also been used to study the impact of glycation on protein structure and aggregation. HEWL is a structural homolog of human lysozyme, which is responsible for systemic amyloidosis, and for this reason HEWL is considered a very good model. HEWL undergoes glycation in vitro, and the potential glycation sites are considered to be the N-terminal α-amino group, the ε-amino groups of lysine residues, and the guanidino groups of arginine residues (Tagami et al., 2000). Glycation of HEWL has been tested over a prolonged period in the presence of D-glucose, D-fructose and D-ribose (Fazili and Naeem, 2013; Ghosh et al., 2013).
Glycation has been found to promote the formation of cross-linked oligomers in HEWL instead of amyloid aggregates and, among the tested sugars, D-ribose proved to be the most effective. Glycation of HEWL has been shown to promote at first an α-to-β transition; prolonged glycation then induced the formation of cross-linked, β-sheet-rich oligomers which are amorphous and globular in nature.
ALBUMIN
Human and bovine serum albumin (BSA) have also been shown to be efficiently glycated in vitro by D-ribose and, in this case, glycation has been shown to promote amyloid aggregation (Bouma et al., 2003; Sattarahmady et al., 2007). Although BSA is a highly soluble protein rich in helical structure, glycation promotes strong conformational changes affecting both secondary and tertiary structure. Indeed, a strong reduction of the helical content has been observed, followed by the formation of β-rich aggregates that rapidly evolve into amyloid fibrils. Amyloid-like aggregates of glycated BSA induce high cytotoxicity, triggering cell death by activation of cellular signaling cascades. Indeed, independent experiments have shown that aggregates of glycated BSA are able to induce ROS-mediated oxidative stress and apoptosis in both neurotypic SH-SY5Y and MCF-7 cells (Wei et al., 2009; Khan et al., 2013).
W7FW14F APOMYOGLOBIN
Recently, it has been shown that glycation of the amyloidogenic apomyoglobin mutant W7FW14F significantly accelerates amyloid fibril formation, providing evidence that glycation actively participates in the process by affecting the reaction kinetics (Iannuzzi et al., 2013b). Moreover, the effect of glycation on wild-type apomyoglobin has been examined, and preliminary results indicate that, for this protein, AGE formation does not trigger amyloid aggregation, thus suggesting that the presence of amyloidogenic sequences in a misfolded protein is crucial for predisposing the protein to amyloid aggregation (unpublished data). These data indicate that a synergy between a predisposing factor, i.e., aggregation propensity, and AGE-induced cross-link formation may be strongly relevant in directing the formation of amyloid structure. The differences observed in the protein models studied so far might be a consequence of the inherent properties of the native structure of each protein or of structural changes induced by AGE modifications as a result of different glycation agents. In most of the cases mentioned above, fibrillation enhancement is achieved by modifying amyloidogenic proteins with glycating sugars like glucose or fructose, while small and highly reactive carbonyls like methylglyoxal are apparently more prone to reduce fibril formation. This suggests that different glycation agents lead to specific structural constraints that have a major role in protein fibrillation kinetics. Moreover, some glycated proteins undergo oligomerization without promoting amyloid fibril formation, and this can be related to the aggregation behavior of some amyloidogenic proteins upon glycation. In fact, both insulin and α-synuclein, which are involved in amyloid diseases, show decreased amyloid fibril formation after glycation, and both significantly retain the native three-dimensional structure during the aggregation process. Overall, glycation of amyloidogenic proteins can lead to a shift from an amyloidogenic pathway to native-like aggregation through a process that is thermodynamically and kinetically favored.
CONCLUSIONS AND PERSPECTIVES
The considerations outlined above make the study of AGEs one of the most important areas of biomedical research today. Several questions remain to be answered: whether glycation of susceptible proteins is a triggering event or just a consequence of sugar reactivity toward low-turnover aggregated species, which are highly insoluble and protease-resistant, remains controversial. Several studies suggest that glycation may be an early event promoting or accelerating abnormal protein deposition, followed by increased protease resistance and insolubility. Regardless of the chronology of AGE formation, it is known that AGE accumulation is related to sustained inflammatory responses and oxidative stress, which are common features of many neurodegenerative disorders. Glycation may then be understood as a dynamic contributor to these multifactorial diseases by promoting, accelerating, or stabilizing pathological protein aggregation and by inducing responses leading to cell dysfunction, damage, and death. Thus, it will be important to further investigate the biochemical effects induced by the interaction of AGE-modified proteins with cells, such as the activation of oxidative stress signaling pathways and inflammatory responses.
Hemorrhagic shock from breast blunt trauma
Background
Seat belt use has been associated with decreased life-threatening thoracic injuries. However, there has been an increase in soft-tissue injuries such as breast trauma.
Case report
We describe a case of a young, healthy female who presented to a community hospital emergency department without any trauma designation following a motor vehicle accident. The patient was found to have hemorrhagic shock from an intramammary hemorrhage and was treated with blood products and a temporizing external abdominal binder in preparation for a transfer to a level 1 center, where she was successfully treated with angiographic embolization.
Objectives
The objective of this study is to report on a case of hemorrhagic shock from a breast hematoma as well as to review the literature on previous seat belt-associated breast trauma and its management in the emergency department.
Conclusion
Seat belt-associated breast trauma is uncommon in the emergency medicine literature. However, it can be associated with life-threatening intramammary bleeding. Emergency physicians should be aware of these injuries and their proper management.
Background
The use of seat belts has reduced the incidence of life-threatening chest trauma but has increased the incidence of soft-tissue and internal organ injuries [1,2]. One of the resulting injuries is a blunt injury to the female breast. The blunt breast trauma literature is scarce in emergency medicine [3]. The aim of this paper is to report on a case of hemorrhagic shock resulting from blunt breast trauma along with a review of the literature on the management of such an injury.
Case presentation
A 54-year-old female with a history of hypertension and abdominal laparoscopy presented to a small community hospital without any trauma designation after a motor vehicle accident. The restrained patient was driving at high speed on wet roads when she lost control of her car and hit the front end of her vehicle on an embankment, causing the car to roll over. The airbags deployed. The patient was able to self-extricate and was ambulatory on scene. The patient denied any loss of consciousness. On arrival at the emergency department, the patient was anxious and complaining of right breast pain and right ankle pain. Her initial vitals were as follows: temperature 36.7°C, heart rate 94 bpm, blood pressure 201/139 mmHg, respiratory rate of 20, and SaO2 of 100 % on room air. There were no signs of trauma on her head and neck area. There was a large contusion overlying the right breast with mild swelling when compared to the opposite breast. There was exquisite bilateral rib tenderness at multiple levels, and her right ankle was swollen and tender with intact pulses and sensation. The bedside focused assessment with sonography in trauma (FAST) exam was negative. She was given fentanyl 100 mcg IV twice for pain control and 1 L of normal saline IV, and she was taken to CT. CT of the head and cervical spine was unremarkable. CT angiography of her thorax demonstrated an 11.8 × 11.4 × 7.6 cm right breast hematoma with active extravasation. Upon return from CT, the patient was more diaphoretic, anxious, and hypotensive at 82/56 mmHg. She was immediately given a 2 L normal saline IV bolus, and the massive transfusion protocol was activated. Because the patient's level of pain was unchanged after fentanyl, she was given ketamine 150 mg IV. While sedated, her right breast was wrapped using an abdominal binder and elastic bandage as a temporizing compression measure.
The patient was accepted for transfer to a level 1 trauma facility. At the time of transfer, she had received two units of PRBCs, and her vitals improved to a heart rate of 108 bpm and a blood pressure of 156/113 mmHg. Platelets and FFP were not ready at the time of transfer. She was taken via ambulance because of weather conditions. Her labs were significant for an initial hemoglobin of 12.8 g/dL, a lactate of 2 mmol/L, a drug screen positive for opiates, and a negative ethanol level. Upon arrival at the trauma center, her vitals were as follows: temperature of 36.1°C, heart rate of 120 bpm, blood pressure of 130/80 mmHg, RR of 20, and an SaO2 of 100 % on a nonrebreather. CT angiography of her chest was repeated along with CT angiography of her abdomen and pelvis. The CT once again showed the breast hematoma (Figs. 1, 2 and 3), as well as left 4th-8th rib fractures, right 4th-7th rib fractures, and bilateral first rib fractures. Afterwards, she was taken to interventional radiology. Thoracic aortography and arteriography of the internal mammary and lateral thoracic branches were negative for persistent extravasation. She received a total of three units of PRBCs as well as platelets and FFP during her hospitalization. Her hospital course was remarkable for her large opiate requirement. She was discharged 3 days after admission. Her hemoglobin at discharge was 10.1 g/dL.
Discussion
Breast injury is an uncommon form of blunt chest trauma. In a review of 5305 women with blunt chest trauma, only 108 (2 %) presented with breast trauma [4]. The mechanisms of breast trauma from a seat belt include both shear and crush injuries that result from the shoulder restraint. Majeski proposed a classification of breast trauma associated with seat belt injuries. From grade 1 to 4, the injuries range from mild bruising and tenderness to avulsion of the breast from the chest wall with rupture of the blood vessels and active bleeding into the chest [5]. The complete classification can be seen in Table 1. The most serious injuries to the breast are mammary duct avulsion and vascular injury. The arterial supply to the breast comes from several sources: the internal mammary artery, with perforators through the chest wall, supplies the medial breast, and the lateral thoracic branch of the axillary artery provides blood flow to the lateral breast. The majority of breast trauma patients had associated injuries. Of those, the most common were long bone extremity fractures (47 %), rib fractures (15 %), solid organ injury (11 %), and pneumothoraces/hemothoraces (10 %), all of which required chest tube placement [1]. Tam Song et al. reviewed all seat belt-related injuries and found 13 patients who presented to the emergency department at the time of the motor vehicle accident [2]. Of the immediate presentations, 4 patients had minor injuries such as lacerations and breast implant-related injuries and were treated conservatively with outpatient plastic surgery follow-up. Nine patients required urgent attention and were found to have a rapidly expanding breast. Six of them deteriorated hemodynamically and were found to have arterial extravasation from the internal mammary artery, the lateral thoracic artery, and the accessory scapular branch of the axillary artery. Two pregnant patients had an enlarged breast due to accumulation of milk secondary to avulsed milk ducts. The last patient had an inflammatory air pocket in communication with an underlying pneumothorax, which resolved after chest tube placement.
There is currently no established standard for the management and treatment of blunt breast trauma. Patients should be assessed and treated like any major trauma patient following the ATLS guidelines. Sanders et al. proposed an algorithm based on their study [4]. Patients were classified as having either simple or complex breast trauma. Patients with simple breast trauma, defined as an abrasion, a small laceration, or pain over the affected breast, were managed conservatively. Hemodynamically stable patients with complex breast trauma, defined as a crush injury to the breast resulting in skin loss or an intramammary hematoma, underwent a CT scan of the chest. Patients with no active arterial extravasation were monitored and treated symptomatically. Patients with a blush on CT were taken to interventional radiology for angiography and embolization [4]. Because this case occurred at a community hospital without a trauma designation, both trauma services and timely interventional radiology were unavailable. The patient was successfully managed with blood product transfusion and a compressive band around the affected breast. The binder was used as a temporizing measure to provide external compression on the breast and to tamponade the bleed. The role of post-traumatic mammography is controversial, as it is generally unnecessary as long as clinical follow-up ensures resolution of any mass effect after recovery [1]. However, some authors advocate for a baseline mammogram at 3-6 months post-injury, with annual mammograms thereafter, to ensure complete resolution of any masses and to rule out any post-traumatic breast malignancy [6]. Typical post-traumatic mammographic findings involve fat necrosis in different stages of evolution that range from acute contusion to calcified oil cysts [2].
Conclusions
This is an interesting case of hemorrhagic shock following a seat belt injury to the breast. The patient presented to a small community emergency department with normal vital signs. However, concern for rapid deterioration soon arose after the discovery of arterial extravasation in the breast and resultant hypotension. This is the first case report that describes the application of an abdominal binder to an actively bleeding intramammary hematoma and, as such, should be of relevance to emergency physicians.
Consent
The patient has given her consent for the case report to be published.
Table 1 Breast injury classification
Grade 1: Mild crush injury consisting of bruising, ecchymosis, skin blistering, breast swelling, tenderness, and friction burns over the contact area.
Grade 2: Moderate crush injury consisting of intramammary hematoma, fat necrosis, skin avulsion or loss, skin laceration, or skin ulcer.
Grade 3: Severe crush injury consisting of subcutaneous partial or complete transection of the breast, resulting in a permanent diagonal furrow across the breast corresponding to the line of the seat belt that cleaved the breast tissue into two parts.
Grade 4: Avulsion breast injury consisting of subcutaneous avulsion of the breast from the chest wall with rupture of perforating branches of the intercostal vessels and active bleeding into the breast and the space between the breast and chest wall caused by the traumatic shearing force.
Rolling Texture of Cu–30%Zn Alloy Using Taylor Model Based on Twinning and Coplanar Slip: A modified Taylor model, hereafter referred to as the MTCS (Mechanical-Twinning-with-Coplanar-Slip) model, is proposed in the present work to predict weak texture components in the shear bands of brass-type fcc metals with a twin–matrix lamellar (TML) structure. The MTCS-model considers two boundary conditions (i.e., twinning does not occur in previously twinned areas and coplanar slip occurs in the TML region) to simulate the rolling texture of Cu–30%Zn. In the first approximation, texture simulation using the MTCS-model revealed brass-type textures, including Y {1 1 1} <1 1 2> and Z {1 1 1} <1 1 0> components, which correspond to the observed experimental textures. Single orientations of C (1 1 2)[1̄ 1̄ 1] and S' (1 2 3)[4̄ 1̄ 2] were applied to the MTCS-model to understand the evolution of the Y and Z components. For the Y orientation, the C orientation rotates toward T (5 5 2)[1 1 5] by twinning after 30% reduction and then toward Y (1 1 1)[1 1 2] by coplanar slip beyond 30% reduction. For the Z orientation, the S' orientation rotates toward T' (3 2 1)[2 1̄ 4̄] by twinning after 30% reduction and then toward Z (1 1 1)[1 0 1̄] by coplanar slip beyond 30% reduction.
Introduction
In industrial applications, the formability of metals plays a significant role and depends mainly on the crystallographic texture and microstructure [1]. Moderately strained metals and alloys, including copper, exhibit laminar microstructures called shear bands (SBs), a form of plastic instability. Duggan et al. [2] and Fargette et al. [3] investigated the formation mechanism of SBs in Cu-Zn alloys. In rolled metals, the SBs form as thin planar sheets that are parallel to the transverse direction (TD) and inclined at ~35° to the rolling direction (RD). Brass-type SBs and copper-type SBs have been found in low stacking fault energy (SFE) and intermediate to high SFE materials [2,4], respectively. Hatherly and Malin [5] defined low SFE as <20 mJ/m², intermediate SFE as 20-40 mJ/m², and high SFE as >40 mJ/m². In fcc metals with low SFE, the SBs exhibit the structure of twin-matrix lamellae (TML), composed of twin and matrix lamella layers. The TML structure was reported in observations of deformed Cu-30%Zn by Duggan et al. [2] and Fargette et al. [3]. Malin and Hatherly [6] reported the TML structure in pure copper. An abnormal slip system parallel to the twinning planes also occurs in the lamellar layer structure [2]. After large thickness reductions of α-brass, the SBs substantially change the texture from copper-type to brass-type because of the formation of the fine TML structure [2]. Wassermann et al. [7] reported that in cold-rolled α-brass, C {1 1 2} <1 1 1>-oriented grains exhibit the twin orientation T {5 5 2} <1 1 5> after twinning; consequently, the T-oriented grains rotate toward the B {1 1 0} <1 1 2> orientation by dislocation slip. Hirsch et al. [4] found that the T-oriented grains rotate to the transition orientation of {1 1 1} <1 1 2> (brass R) due to mechanical twinning (MT) after small reductions, and the brass R grains rotate to the G {1 1 0} <0 0 1> orientation through shear banding after large reductions. Paul et al.
[8,9] studied low-SFE fcc single crystals initially oriented with copper and revealed that the two coplanar slip systems adjusted by the initiation of shear banding play an important role in the formation of brass-type textures. Sevillano et al. [10] and van Houtte et al. [11,12] predicted copper-type texture using full constraint (FC)-and relaxed constraint (RC)-Taylor models, respectively. Leffers [13,14], Hirsch and Lücke [15] and van Houtte [16] estimated the brass-type texture using various Taylor models that consider MT. According to Chin's study [17], Kallend [18] and van Houtte introduced MT into FC- [16] and RC-Taylor models [11,12]. Hirsch et al. [15] quantitatively compared the rolling texture between FC and RC Taylor models. Leffers [19,20] used a modified Sachs model to predict brass-type texture. Kalidindi [21] proposed a crystal plasticity model considering deformation twinning and observed that twinning is difficult to occur in the twinned regions. In addition, Kalidindi [22] modelled with shear banding to predict the texture transition from Cu-type to brass-type. Lebensohn and Tomé [23,24] utilized the VPSC model to simulate brass-type texture including comprehensive relative activity of slip and twin systems. Toth et al. [25] used a Taylor version of the VPSC model considering dislocation slip and twinning to simulate the deformed texture of TWIP Steel with fcc structure. Chalapathi et al. [26] proposed a modified LAMEL model to simulate the rolling texture of an fcc steel. Among all models, the present work aimed to incorporate the experimental observations of TML [27][28][29][30][31] in the Taylor model while considering MT to predict the rolling texture of Cu-30%Zn. The modified Taylor model was compared with conventional Taylor models such as FC and RC Taylor models in terms of rolling texture. Materials and Methods The dimensions of the as-received Cu-30%Zn alloy were reduced to 60.0 × 20.0 × 20.0 mm 3 by using an abrasive cutting machine and annealed at 600 °C for 1 h to homogenize, and then cold-rolled up to 90% thickness reduction. A two-high non-reversing mill with roll diameter of 590 mm was set with rolling speed of 9 rpm to conduct the cold rolling experiment. Drops of lubricant oil were applied on the rolls to reduce to friction and heat during rolling. The alloy was rolled 10 times to reduce the thickness from 20.0 to 14.0 mm (30% reduction). Then, 1/3 length of the alloy was cut off and continued to roll 12 times to reduce the thickness from 14.0 to 8.0 mm (60% reduction). Finally, 1/2 length of the alloy was cut off and continued to roll 15 times to reduce the thickness from 8.0 to 2.0 mm (90% reduction). The middle regions of the rolled materials were selected and cut out in dimensions of 20.0 mm width and 20.0 mm length for further texture analysis. The specimens were prepared by grinding using #400, #800, #1500, #2500, and #4000 SiC papers. The texture of Cu-30%Zn alloy was examined on the surface parallel to the out-ofplane direction called ND direction by using Bruker(Germany) D8 ADVANCE in NCKU with CuKα radiation of λ = 1.5406 Å at 40 kV and 40 mA. Three incomplete pole figures of {1 1 1}, {2 0 0}, and {2 2 0} were recorded by varying the tilting angle of 0°-70° and the rotation angle of 0°-360° with a scanning step of 5°. Defocusing correction was then employed on the measurement of random powder Cu-30%Zn alloy. 
Orientation distribution function (ODF) and complete {1 1 1} pole figure were calculated using LaboSoft(Poland) LaboTex ver.3.0 software in NCKU. Modelling The rolling texture of Cu-30%Zn alloy was modelled using modified Taylor models in the Matlab software. FC-Taylor model, RC-Taylor model, RC-Taylor model considering MT called MT-model, and RC-Taylor model considering MT and coplanar slip called MTCS-model were constructed as follows. FC-Model According to the method of van Houtte et al. [12,32] for establishing the FC-Taylor model, a given displacement gradient eij in a grain is composed of a symmetric matrix εij (called the strain tensor) and an antisymmetric matrix ωij (called the rotation tensor). Here, the given strain tensor εij of 90% reduction (corresponding true strain, | . . |=2.3) is expressed in the macroscopic frame for plane strain condition as follows: An incremental strain of 0.01 was set for each step in the simulation. In the crystal frame, the symmetric matrix , which relates the shear on the slip system , is described for the slip systems of {1 1 1}<1 1 0> by: where, , and denote the direction of the Burgers vector, the normal to the slip plane, and the shear on the slip system , respectively, in the crystal frame. The symbol ⊗ denotes the dyadic product of two vectors. The antisymmetric matrix is expressed by: and where is the lattice rotation in the macroscopic frame. Five linear equations are needed to solve Equation (2) with 792 combinations for a given strain. According to the Taylor assumption [33], the minimum work corresponds to the minimum sum of the activated five absolute shears and is expressed as: The critical resolved shear stress is denoted as for all 12 slip systems, and the number of the activated slip systems is n s = 5. The lattice rotation in the macroscopic frame can be obtained using Equation (5). As a result, the new orientation matrix g* is expressed by: where is the initial orientation matrix before deformation. RC-Model After high reduction, the deformation texture shows discrepancies from that predicted by FC-Taylor model. Van Houtte [11] proposed the concept of partly constrained deformation of crystallites and labelled it as RC-model because of the observation that grains become flattened and elongated after high rolling reduction. In the RC-Taylor model, only four slip systems are activated. Hence, the relaxed constraint of shear strain can be calculated in Equation (1), where the X direction is the rolling direction and Z the normal direction, and the number of the activated slip system is = 4 in Equations (2) and (4). The given strain tensor is expressed in the macroscopic frame for plane strain condition as follows: where the shear strain is unconstrained. Equations (2)-(7) are the same in the case of the FC-Taylor model. MT-Model Following their observation of MT in low-SFE metals, van Houtte [16] and Chin et al. [17] proposed a modified Taylor model that assumes that the shear in the crystal frame is due to the MT in {1 1 1}<1 1 2> twin systems for FC-and RC-models and not {1 1 1}<1 1 0> slip systems. Following the concept of van Houtte [16], the present study employed an RC-model combining three slip systems and one twin system. This model is referred to as the MT-model and differs from the RC-model in that the latter considers four slip systems. The flowchart of the subroutine is shown in Figure 1. 
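To make the slip-system bookkeeping behind the FC- and RC-models described above more concrete, the following is a minimal Python sketch (the authors' implementation was in Matlab; the function names, variable layout, and numerical tolerances here are my own assumptions) of two ingredients of the FC-model: the 12 fcc {1 1 1}<1 1 0> slip systems with their symmetric Schmid tensors, and the brute-force search over the 792 five-system combinations that accommodates a prescribed plane-strain increment with minimum total shear (the minimum-work criterion with equal CRSS on all systems).

```python
import itertools
import numpy as np

def fcc_slip_systems():
    """Return the 12 fcc {111}<110> slip systems as (unit slip direction b, unit plane normal n)."""
    normals = [(1, 1, 1), (-1, 1, 1), (1, -1, 1), (1, 1, -1)]
    dirs = [v for v in itertools.product((-1, 0, 1), repeat=3)
            if sorted(map(abs, v)) == [0, 1, 1]]           # all +/- <110>-type vectors
    systems = []
    for n in normals:
        n = np.array(n, float)
        in_plane = [np.array(b, float) for b in dirs if abs(np.dot(n, b)) < 1e-9]
        kept = []                                           # keep one of each +/- pair (3 per plane)
        for b in in_plane:
            if not any(np.allclose(b, -k) for k in kept):
                kept.append(b)
        systems += [(b / np.sqrt(2.0), n / np.sqrt(3.0)) for b in kept]
    return systems

def taylor_fc_min_work(strain, systems):
    """Brute-force FC-Taylor selection: among the C(12,5) = 792 combinations of five slip
    systems, find the one that accommodates the prescribed traceless symmetric strain
    increment exactly while minimizing the sum of absolute shears (equal CRSS assumed)."""
    def to5(t):  # five independent components of a traceless symmetric tensor
        return np.array([t[0, 0], t[1, 1], t[0, 1], t[0, 2], t[1, 2]])

    schmid = [0.5 * (np.outer(b, n) + np.outer(n, b)) for b, n in systems]
    target = to5(strain)
    best = None
    for combo in itertools.combinations(range(len(systems)), 5):
        A = np.column_stack([to5(schmid[s]) for s in combo])
        if abs(np.linalg.det(A)) < 1e-8:
            continue                                        # linearly dependent set of systems, skip
        gammas = np.linalg.solve(A, target)
        work = np.abs(gammas).sum()                         # minimum-work (minimum total shear) criterion
        if best is None or work < best[0]:
            best = (work, combo, gammas)
    return best

# Example: one plane-strain rolling increment (x = RD, z = ND), d_eps = 0.01
d_eps = 0.01 * np.array([[1.0, 0.0, 0.0],
                         [0.0, 0.0, 0.0],
                         [0.0, 0.0, -1.0]])
work, combo, gammas = taylor_fc_min_work(d_eps, fcc_slip_systems())
print(work, combo, np.round(gammas, 4))
```

The RC-model and the twinning variants differ only in which strain components are enforced and in which shear systems enter the combination set, so the same bookkeeping carries over with a smaller number of active systems.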
In the MT-model, the strain tensor resulting from three slip systems and one twinning system is expressed in the crystal frame as follows: where the numbers of the activated slip and twin systems are = 3 and = 1, respectively. The former and latter parts of the symmetric matrix corresponding to the shear on the slip systems of {1 1 1}<1 1 0> in Equation (2) and the twin systems of {1 1 1}<1 1 2> are expressed as: where , , and denote the direction of the Burgers vector, the normal to the twin plane, and the shear on the twin system , respectively. The slip and twin systems used in the models are listed in Table 1. The rotational antisymmetric matrix in the crystal frame is expressed as: where is the lattice rotation in the macroscopic frame. Table 1. Twelve slip and twin systems of fcc metals used in the models. SS8 (1 1 1) Considering the contributions of slip and twinning, the minimum work of the MTmodel is expressed in terms of α = as follows: where the numbers of the activated slip and twinning systems are = 3 and = 1, respectively, and the critical resolved shear stresses for twinning and slip in all 12 twin and slip systems are denoted and , respectively. The CRSS values of the slip and twin systems are assumed to be identical, that is, α = 1. Thus, the new orientation of matrix * is expressed by: where is the initial orientation matrix prior to deformation. After twinning deformation, the orientation number of the fine TML structure increases twofold after each simulation step, which leads to increases in computation time. To address the problem of time-consuming computations, van Houtte assumed a simpli-fied method with only one orientation; here, either the matrix orientation or the twin orientation is selected as the new orientation of the matrix and twin area in the TML region. This orientation depends on the relation between a random number R ranging from 0 to 1 and the volume fraction of the twin region , which is expressed as: where the constant of the twinning shear of fcc metals is denoted = √ . If R is greater than , then the new orientation is determined by Equation (11). If R < , then the new orientation * is given by Equation (11). The twinned orientation obtained after twinning is given by Equation (13): where * is the new matrix orientation and Θ is a matrix that transforms the matrix orientation into a twin orientation, which is expressed as follows: where T denotes the transpose of the matrix . MTCS-Model Considering the coplanar slip in the TML region reported by Hirsch et al. [4,34], the current work presents another modified Taylor model, hereafter referred to as the MTCSmodel, that combines the MT-model with the concept of coplanar slip in the TML region. A major difference between the MTCS-and MT-models is the addition of two assumptions resulting in different textures in the former. The first assumption in the MTCS-model is that further twinning is forbidden in a priori twinned regions. The second assumption in this model is that the deformation of twinned grains contributes to two coplanar slip systems. The former assumption is based on the perspective that twinning cannot easily occur in previously twinned areas, as reported by Kalidindi [21]; in other words, secondary or further twinning is forbidden in previous twinning areas. The latter assumption is based on the coplanar slip observed in the TML region by Hirsch et al. [4]. The coplanar slip forms on the plane of activated twin systems. 
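Returning to the stochastic reorientation step of the MT-model described above, the following hedged Python sketch illustrates how a grain is switched to its twin orientation. The volume fraction twinned in a step is taken as the twin shear divided by the characteristic twinning shear of fcc metals (commonly taken as 1/√2; the exact constant in the extracted text is garbled), a random number R decides whether the grain keeps the matrix orientation or adopts the twin orientation, and the twin orientation is generated by the standard 180° rotation about the twin-plane normal, assumed here in place of the paper's Eq. (14), which is not printed in the extracted text. The composition order of the rotation with the orientation matrix depends on the orientation convention and is flagged as an assumption in the comments.

```python
import numpy as np

def twin_reorientation(n):
    """180-degree rotation about the unit twin-plane normal n.
    Standard fcc twin relation, assumed here since the paper's Eq. (14) is not reproduced."""
    n = np.asarray(n, float)
    n = n / np.linalg.norm(n)
    return 2.0 * np.outer(n, n) - np.eye(3)

def mt_twin_step(g, twin_plane_normal, gamma_twin, rng, twin_shear=1.0 / np.sqrt(2.0)):
    """One stochastic MT-model update for a single grain.
    g                 : 3x3 orientation matrix of the grain (already updated by the lattice rotation)
    twin_plane_normal : {111} normal of the activated twin system, crystal frame
    gamma_twin        : shear carried by the twin system in this strain step
    twin_shear        : characteristic twinning shear of fcc metals (assumed 1/sqrt(2))
    Returns the new orientation and a flag telling whether the grain switched to the twin."""
    f_tw = abs(gamma_twin) / twin_shear          # volume fraction twinned in this step
    if rng.random() < f_tw:                      # R < f_tw -> adopt the twin orientation
        # NOTE: left-multiplication assumes g maps sample -> crystal; adapt to your convention
        return twin_reorientation(twin_plane_normal) @ g, True
    return g, False                              # R >= f_tw -> keep the matrix orientation

# usage sketch: a cube-oriented grain and a small twin shear on (1 1 1)
rng = np.random.default_rng(0)
g_new, twinned = mt_twin_step(np.eye(3), (1, 1, 1), gamma_twin=0.02, rng=rng)
print(twinned)
```

Under the first MTCS assumption, a grain flagged as twinned would simply skip this draw in all later steps, so that secondary twinning never occurs and its further deformation is carried by coplanar and non-coplanar slip.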
The two other slip systems were selected from non-coplanar slip systems. Therefore, the key difference between the MT-and MTCS-models lies in changes in the twinned orientation. Following the procedures of the MT-model, we calculate rigid body rotation for the MTCS-model by taking the plastic strain and minimum work into account using Equations (10)-(12), as shown in Figure 2. After each deformation step, a new orientation is determined at random by selecting a number R between 0 and 1. Here the possibility of twinning is equal to the fraction of the twinning area . When ≤ , the new orientation is determined by twinning by using Equations (7) and (15); when > , the new orientation is calculated by applying Equation (7) because of the deformation of the slip, as shown in Figure 2. This procedure for orientation determination leads to indicating the twinned and non-twinned orientations. The two assumptions are then implemented in the MTSC-model. The first assumption is that secondary twinning, that is, further twinning in a previously twinned orientation, is excluded. For the non-twinned orientation indicated, rigid body rotation is calculated by considering the plastic strain and minimum work by using Equations (8)-(10). However, the twinned orientation indicated does not change according to Equation (13) but follows the right route in Figure 2 to avoid the formation of secondary twinning. This phenomenon corresponds to the assumption that the twinning orientation occurs only once. The second assumption is that the coplanar slip occurring in the TML region is implemented in the reorientation calculation of the twinned orientation. The activated systems in twinned orientations occur on twinning planes called coplanar slip systems at the first twinning. Thus, the 12 slip systems used in the models could be classified into coplanar and non-coplanar slip systems on the basis of the twinning planes. Therefore, two activated slip systems are selected from the coplanar slip systems, and another two slip systems are selected from the non-coplanar slip systems by using the RC-model. Thus, in addition to non-coplanar slip, coplanar slip can contribute to the plastic strain of twinned orientations via the relation: where and are the plastic strains resulting from the coplanar slip and noncoplanar slip, respectively. Considering the contribution of coplanar and non-coplanar slips, the optimization of the minimum work in the MTCS-model is expressed in terms of = as: where the numbers of the activated slip and twinning systems are =2 and =2, respectively, and the critical resolved shear stresses for the coplanar and non-coplanar slip systems are denoted and , respectively. The CRSS values of the slip and twin systems are assumed to be identical, that is, = 1. The antisymmetric matrix is expressed as: where is the lattice rotation in the macroscopic frame. Thus, the new orientation of matrix * is expressed as: * = ( − ) (20) where is the twinned orientation matrix before deformation resulting from the coplanar slip. For the four simulation models, the orientation number of grains is 5000, each strain step is 0.01, the total strain is 90% reduction, and the 5000 grains initially show random orientations. The simulated textures were analyzed, including ODF, complete {1 1 1} pole figures, and volume fraction by using LaboTex software. The Euler angles follow the definition of Bunge. Results and Discussion This section is divided by subheadings. 
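Since the methods close by noting that the Euler angles follow the Bunge definition, and the results that follow compare simulated orientations against ideal components through misorientation angles, a small hedged Python sketch of that conversion and comparison may be useful. The Euler-angle values in the example are the approximate ideal C and Y orientations quoted later in the text, and the matrix convention (sample-to-crystal) is an assumption rather than a statement of the authors' code.

```python
import numpy as np

def bunge_to_matrix(phi1, Phi, phi2):
    """Orientation matrix g from Bunge Euler angles in degrees (Z-X-Z convention).
    Here g is taken to transform sample-frame vectors into crystal-frame vectors."""
    p1, P, p2 = np.radians([phi1, Phi, phi2])
    c1, s1, c, s, c2, s2 = np.cos(p1), np.sin(p1), np.cos(P), np.sin(P), np.cos(p2), np.sin(p2)
    return np.array([
        [ c1 * c2 - s1 * s2 * c,  s1 * c2 + c1 * s2 * c, s2 * s],
        [-c1 * s2 - s1 * c2 * c, -s1 * s2 + c1 * c2 * c, c2 * s],
        [ s1 * s,                -c1 * s,                c     ]])

def cubic_misorientation_deg(g1, g2):
    """Minimum misorientation angle (degrees) between two orientations under cubic symmetry."""
    sym = []                                      # the 24 proper rotations of the cube
    for perm in ((0, 1, 2), (1, 2, 0), (2, 0, 1), (0, 2, 1), (2, 1, 0), (1, 0, 2)):
        for signs in np.ndindex(2, 2, 2):
            m = np.zeros((3, 3))
            for row, (col, sgn) in enumerate(zip(perm, signs)):
                m[row, col] = 1.0 if sgn == 0 else -1.0
            if np.isclose(np.linalg.det(m), 1.0):
                sym.append(m)
    dg = g1 @ g2.T
    cos_theta = [np.clip((np.trace(s @ dg) - 1.0) / 2.0, -1.0, 1.0) for s in sym]
    return np.degrees(np.arccos(max(cos_theta)))  # largest cosine gives the smallest angle

# Example: misorientation between the ideal C (90, 35.3, 45) and Y (270, 54.7, 45) orientations
g_C = bunge_to_matrix(90.0, 35.3, 45.0)
g_Y = bunge_to_matrix(270.0, 54.7, 45.0)
print(round(cubic_misorientation_deg(g_C, g_Y), 1))
```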
It provides a concise and precise description of the experimental results, their interpretation, as well as the experimental conclusions that can be drawn. Table 2 shows the preferred orientations simulated by the four models. The FC-and RC-models predict a β-fiber after cold rolling in Figure 4a (1 1 1)[2 11 9] The relative activities of the slip and twin systems were analyzed. Figure 5 shows the relative activity of the systems as a function of deformation strain for the slip or twin systems considered in the MT and MTCS models. The relative activities of slip and twin systems for each step of simulation are determined using Equations (21) and (22), respectively [28]. Effect of Twinning on the Rolling Texture In the case of the RC-model, only slip systems were activated, that is, the activity of slip has a constant value of 1. In addition, the activity of the 12 slip systems generally ranges from 8% to 9% throughout the simulation. In the case of the MT-model, the activity of the twin system calculated from the simulation results is larger than that of the slip system at < 0.2; by contrast, the activity of the slip system is greater than that of the twin system at > 0.2. These results indicate that the deformation mechanism of the MT-model is dominated by twinning at < 0.2. Under the condition of < 0.2, the deformation mechanism of the MT-model is dominated by the twinning of slip system TS1(1 1 1)[1 1 2 ]. The deformation mechanism of the MT-model is dominated by the slipping of slip systems SS4 (1 1 1 1 1)[1 0 1], which show similarly stable activities of approximately 50% throughout the simulation. The activity of twin systems is close to zero throughout the simulation (i.e., ~10 −18 ) because of the model's assumption of restricted secondary twinning (i.e., twinning may occur only once). Madhavan et al. [35] reported that the evolution of Cu-type rolling textures after up to 95% reduction may be completely attributed to slip. Overall, in the case of the MTCS-model, the activated shear fractions of the dominant slip systems of SS1(1 1 1)[0 1 1 ] and SS2(1 1 1)[1 0 1] are approximately 50% and 50%, respectively, of the slip and twin contributions; these values correspond to approximately 50% of the slip contribution. The activated shear fraction of the dominant twin system of TS11(1 1 1)[2 1 1] is 0% of the slip and twin contribution, which corresponds to 33.2% of the twin contribution. The volume fraction of major components was calculated to quantify the orientation components of the four models. Given their pure slip mechanism, the volume fractions of C and S orientations are 15.52% and 11.81% for FC-model and 13.07% and 12.56% for RCmodel, as shown in Figure 6a,b. In both cases of FC-and RC-models, the volume fraction of B and G components are relatively low because twinning mechanism was not considered. When considering partial slip and twinning for the MT-model, the volume fraction of C and S are reduced to 0.75% and 8.04%, while the volume fraction of brass-type components, B and G components, increase. Furthermore, taking the coplanar slip mechanism into account in the MTCS-model, the C and S orientations are stabilized due to the restriction of secondary twinning, which leads to higher volume fractions than those in MTmodel. The major volume fraction of 10.95% calculated from MTCS-model in Figure 6d reveals the S orientation, while in the case of MT-model in Figure 6c that of 11.36% is the B orientation. 
This observation suggests that the deformation mechanism of coplanar slip may lead to the orientation change from B to S orientation because the deformation mechanism of coplanar slip is considered in the MTCS-model. Furthermore, the volume fraction of T orientation is 0.37% for MT-model (in Figure 6c), and those of T, Y, and Z orientations are 1.64%, 2.34%, and 2.36%, respectively, for the MTCS-model (in Figure 6d). The results reveal that the difference between MT-and MTCS-models lies on the orientation prediction of Y and Z, where the volume fractions of both orientations are small. Wassermann et al. [7] observed that on cold-rolled α-brass, the twin orientation T {5 5 2} <1 1 5> is formed after twinning and consequently rotates to B {1 1 0} <1 1 2>. Hirsch et al. [4] observed that in the low SFE Cu-Zn alloys, texture transition occurs at the intermediate of high strains; a decrease in C orientation leads to an increase in T orientation. The experimental result indicates the onset of twinning by the decrease of C orientation and the increase of G. After 70% reduction, the G orientation is stable and the rest of T orientation shifts toward Y. Madhavan et al. [35] observed the texture evolution of coldrolled Ni-40%Co. At the early stage, the deformation is achieved by slip and MT. At higher reductions, high fraction of Cu-type shear bands was observed, which leads to final G orientation with high volume fraction. For FC-and RC-models, we only assume that the plastic deformation results from slipping on the slip systems. For MT-and MTCS-models, we consider that deformation occurs due to MT. This finding suggests that the deformation mode, either due to slip or twinning, changes the texture. The MTCS-model can predict the orientation components of Y and Z with the initial random orientations. With the use of a single crystal, the development and formation of Y and Z orientations were successfully estimated. Thus, the MTCS-model in combination with the initial orientations of C and S' was employed in the following sections. Formation of Y Orientation Hirsch et al. [4] reported that the T orientation is formed at low strain of <60% reduction due to the twin mechanism in Cu-30%Zn. At 70% reduction, this T orientation rotates oppositely to Y orientation due to coplanar slip instead of rotating toward G. Duggan et al. [2] observed that the T orientation and the matrix orientation of C rotate toward Y orientation. On this basis, the present work used the MTCS-model with the C orientation of (1 1 2)[1 1 1] to understand the development of Y orientation. Figures 7 and 8 show the simulated {1 1 1} and {2 0 0} pole figures of 30%, 60%, and 90% reductions with initial single C orientation using the MTCS-model. At 30% reduction, some of the initial C orientations either stay close to the C orientation of (90°,37.7°,45°) with 2.7° [1 0 1] misorientation or rotate to the orientation (270°,69°,45°) near T with 5.2° [1 0 1] misorientation due to twinning. The rotation angle and axis between simulated C and T is 60.0° [9 8 9] after 30% reduction. At 60% reduction, most of the orientations rotate close to Y (270°,63.6°,45°) with a misorientation of 8.9° from the Y orientation because of slip. According to Asbeck et al. [36] and Hirsch et al. [4], the orientation of (270°,74.2°,45°)rotates toward the Y orientation of (270°,54.7°,45°) instead of moving toward the G orientation of (270°,90°,45°). The simulation results obtained from the MTCS-model are in agreement with those reported by Asbeck et al. 
and Hirsch et al. After 90% reduction, the major orientations rotate toward the Y orientation of (270°,57.1°,45°) with a misorientation of 2.4°. This path of orientation change is in agreement with the study of Hirsch et al. [4]. In summary, the C orientation of (1 1 2)[1 1 1] rotates to the T orientation on (1 1 1) plane due to twinning at 30% reduction. After 60% reduction, the T orientation rotates toward Y orientation, which requires the coplanar slip systems of SS1 (1 1 1 Formation of Z Orientation The formation of Z orientation is attributed to the twinning of S' (133.1°,36.7°,26.6°), which is close to S(121.0, 36.7, 26.6). Hirsch et al. [4] reported that after twinning, the S' orientation leads to T' (313.1°,36.7°,26.6°), one of the symmetrically equivalent variants of S'. As a result, the TS' and S' orientations rotate toward Z orientation by coplanar slip. Thus, the MTCS-model with the S' orientation of (1 2 3)[4 1 2] was used to understand the development of Z orientation. Figures 10 and 11 The rotation angle and axis between simulated S' and T' is 60.0° [9 8 9] after 30% reduction. At 60% reduction, the S' orientation of (127.5°,38.7°,26.8°) is still observed, and the other orientation is close to T' (294.3°,68.7°,53.2°). The former has a misorientation of 5.3° away from the S' orientation, and the latter has a misorientation of 6.9° away from the T' orientation. With increasing reduction from 30% to 60%, the T' orientation rotates toward Z and decreases the misorientations from 18.7° to 16.3°. At 90% reduction, most of the orientations rotate close to Z (292.8°, 63.1°, 48.9°) with a misorientation of 10.4°, and the orientation of S' is still found. The increase in reduction from 60% to 90% decreases the misorientations between T' and Z. Hirsch et al. [4] observed that the peak shift of S orientation leads to a large φ angle and a small φ angle. In the present simulation of the MTCS-model, the φ angle increases from the initial 26.7° to 31.5°, and the φ angle decreases from the initial 133.1° to 116.9°. This trend is in agreement with the observation of Hirsch et al. The S' orientation of (123) [4 1 2] rotates to the T' orientation because of twinning on the twin plane of (111) at 30% reduction. At above 30% reduction, the T' orientation rotates toward the final Z orientation as explained by the coplanar slip of SS1(111)[ 011 ], SS2(111)[1 01], and SS3(111) [11 0] further gliding on the (111) plane. This change in orientation is shown in Figure 12. Therefore, the combination of MT and coplanar slip in the TML region can be successfully simulated by the Taylor model to reveal the formation of Y and Z orientations observed in the experiments. Conclusions Conventional Taylor models including FC-and RC-models considering pure slip mechanism simulate strong copper-type textures. Both FC-and RC-models display preferred orientations close to C, S and B. Among the components, the volume fraction of C orientations is 2.45% higher in FC-model. With consideration of MT mechanism, the condition of partial slip and twinning leads to partial brass-type textures. Significantly decreased volume fraction of C and S were determined with 12.32% and 4.52%. In the meantime, the increased volume fraction of B and G with 8.53% and 2.11% indicates the formation of brass-type texture. Considering MT and coplanar slip in the TML region in this study, a Taylor-based MTCS-model is proposed to simulate the rolling texture of Cu-30%Zn. 
Compared with the results of the MT-model, the volume fractions of the C and S orientations increased by 2.87% and 2.91%, respectively. Meanwhile, the volume fraction of the B orientation decreased by 5.31%, indicating the instability of the B orientation. In addition to the β-fibers, the simulated results of the MTCS-model display the experimentally observed texture components including the T, Y, and Z orientations, with corresponding volume fractions of 1.62%, 2.34%, and 2.36%, respectively. Furthermore, we can successfully predict the reorientation paths C-T-Y and S'-T'-Z by additionally considering twinning and then coplanar slip in the proposed MTCS-model. The Y and Z orientations, however, were not observed in the FC-, RC-, and MT-models, but were found in the MTCS-model. The evolution of single C and S' orientations further illustrates the texture transition from copper-type to brass-type. The texture transitions from C to Y and from S' to Z reveal the following. Considering the texture transition from C to Y, the C orientation of (1 1 2)[1 1 1] rotates toward T (2 2 1)[1 1 4] because of twinning after 30% reduction, after which the T orientation rotates toward Y (3 3 2)[1 1 3] and Y (8 8 7)[10 11 24] because of continued coplanar slip after reductions of 30% and 60%, respectively. In the case of the texture transition from S' to Z, the S' orientation of (1 2 3)[4 1 2] rotates toward T' (7 5 3)[11 4 19] by twinning after 30% reduction, after which the T' orientation rotates toward Z (4 3 2)[8 2 13] and Z (9 8 6)[2 0 3] because of continued coplanar slip after reductions of 30% and 60%, respectively.
Data Availability Statement: Data available on request due to restrictions. The data presented in this study may be available on request from the corresponding author. The data are not publicly available due to their large volume.
Evolutionary genomics of white spot syndrome virus White spot syndrome virus (WSSV) has been one of the most devastating pathogens affecting the global shrimp industry since its initial outbreaks in Asia in the early 1990s. In this study, we recovered 13 complete metagenome-assembled genomes (MAGs) of Japanese WSSV isolates and 30 draft WSSV MAGs recovered from publicly available sequencing data, to investigate the genomic evolution of WSSV. Phylogenetic analysis revealed two major phylotypes, designated phylotypes I and II. Bayesian divergence time estimates placed the divergence time of the two phylotypes between 1970 and the early 1980s, with an estimated substitution rate of 1.1 × 10–5 substitutions per site per year, implying the existence of pre-pandemic genetic diversity of WSSV in Asia. Based on this scenario, phylotype I was responsible for the 1990s pandemic and spread worldwide, whereas phylotype II was localized in Asia and infiltrated Australia. Two cross-phylotype recombinant lineages were identified, which demonstrate the role of genomic recombination in generating the genetic diversity of WSSV. These results provide important insights into the evolution of WSSV and may help uncover the ultimate origins of this devastating pathogen. Introduction Penaeid shrimp aquaculture has experienced rapid growth since the latter half of the twentieth century, but it has been constantly threatened by various infections by bacteria, fungi, parasites, and viruses (Lightner et al. 2012;Momoyama and Muroga 2005;Stentiford et al. 2012).One particularly lethal virus for penaeid shrimps is white spot syndrome virus (WSSV), which is a large, double-stranded DNA virus that infects a wide range of decapod crustaceans (H.-C.Wang et al. 2019).WSSV has a bacilliform, enveloped virion which contains more than 40 different structural proteins (H.-C.Wang et al. 2019) and replicates in the cell nuclei of tissues of ectodermal and mesodermal origins (Chou et al. 1995).The WSSV genome is a circular, double-stranded DNA ranging 280-315 kbp in length, encoding 150 to 180 predicted protein-coding genes (Li et al. 2017;Tsai et al. 2004;van Hulten et al. 2001;Yang et al. 2001). The earliest known outbreak of WSSV occurred in Fujian Province, China in July 1992 (Cai and Su 1993;Lo et al. 2005;Su et al. 1995), and it has since spread rapidly to other regions through human activities, affecting all shrimpfarming regions in the world (Lightner et al. 2012;Oakey and Smith 2018;Onihary et al. 2021;Tang et al. 2013).In Japan, the first recorded incidence of WSSV occurred in Hiroshima Prefecture in 1993 via the introduction of infected kuruma shrimp Penaeus japonicus seeds (Inouye et al. 1994;Momoyama et al. 1994;Nakano et al. 1994).WSSV rapidly spread to other regions in Japan, causing massive damage to shrimp production.By the end of the 1990s, WSSV established itself in wild Japanese crustacean populations (Maeda et al. 1998;Okamoto and Suzuki 1999;Fukuzumi and Chikushi 2003;Izumikawa 2013) and still continues to affect the kuruma shrimp industry. Understanding the origins and evolution of infectious diseases has various socioeconomic implications such as the development of biosecurity policies.Genomic information is essential for understanding the pathogenesis, epidemiology, and evolution of a virus.Dozens of WSSV genomes have been published from various parts of the world (Cruz-Flores et al. 2020;Han et al. 2017;Kooloth Valappil et al. 2021;Oakey and Smith 2018;Parrilla-Taylor et al. 2018;Rodriguez-Anaya et al. 
2016;van Hulten et al. 2001;Yang et al. 2001), but there have been few attempts to decipher the evolution of WSSV at the genome level.In this study, we aimed to dissect the genomic evolution of WSSV by leveraging existing genomic assemblies and newly generated WSSV genome assemblies through de novo sequencing and exploration of publicly available datasets. WSSV specimens We sequenced a total of 12 specimens collected from seven Prefectures across Japan (Table 1).JP01 was derived from diseased P. japonicus in Miyazaki Prefecture between 1995 and 2000 and had been maintained as an infectious virus in the Laboratory of Genome Science, Tokyo University of Marine Science and Technology, Tokyo.JP02 was derived from diseased P. japonicus in Yamaguchi Prefecture between 1995 and 2000 and had been maintained as an infectious virus at the National Fisheries University, Yamaguchi Prefecture.JP03 and JP04 were identified from the shotgun sequencing data of Metapenaeopsis lamellata and Trachysalambria curvirostris samples, respectively, which were accidentally found to contain WSSV sequences.Both samples had been fixed with ethanol and therefore infectious viruses could not be recovered.S14, E1, E2, and 1-4 were recovered from diagnostic samples submitted to the Okinawa Prefectural Government and tested positive for WSSV by PCR.Sample 79 was collected during an epidemiological survey conducted by the Deepsea Water Research Center, Kumejima Island, Okinawa Prefecture.P. japonicus samples infected with isolates 0722-1 and Miyako2021 were provided by commercial farmers in Okinawa Island and Miyako Island, respectively.Pc2020 was derived from a naturally infected red swamp crayfish Procambarus clarkii originating from Chiba Prefecture.All Japanese WSSV genomes were regarded as MAGs because the sample preparation procedures did not involve the purification of virions. DNA extraction For JP01 and JP02 (Fig. 1), genomic DNA was extracted from shrimp homogenate prepared as a viral inoculum.We first concentrated the homogenate using Amicon Ultracells (Merck) and extracted DNA by phenol-chloroform-isoamyl alcohol extraction.The DNA was further concentrated using Amicon Ultracells.For JP03 and JP04, genomic DNA was extracted from the muscle tissue of the ethanol-preserved specimens by phenol-chloroform-isoamyl alcohol extraction.We used a swimming leg from specimen 79 for DNA extraction by phenol-chloroform-isoamyl alcohol extraction.For Pc2020, we extracted genomic DNA from the gills of a dead crayfish using a MagAttract HMW DNA Kit (Qiagen). Multiple displacement amplification Before Oxford Nanopore Technologies (ONT) library preparation, the genomic DNA of JP01, JP02, 79, S14, E1, E2, and 1-4 was amplified by multiple displacement amplification (MDA).Amplification was performed using the REPLIg Mini Kit (150023, Qiagen), and the amplicons were purified using Agencourt AMPure XP beads (Beckman Coulter) and quantified using a Qubit dsDNA BR Assay kit (Thermo Fischer Scientific).Approximately 1.5 µg of the amplicon was then digested with T7 Endonuclease I (M0302, New England Biolabs), purified again using Agencourt AMPure XP beads and quantified using the Qubit dsDNA BR Assay kit.Approximately 750 ng of the purified DNA was used for ONT library preparation. 
Sequencing Paired-end libraries for Illumina sequencing were prepared using the Nextera XT DNA Library Preparation Kit (Illumina) and were sequenced using the MiSeq Reagent kit v2 (2 × 150 cycle) or MiSeq Reagent Kit v3 (2 × 300 cycle).ONT libraries were prepared using the Ligation Sequencing Kit (SQK-LSK109, Oxford Nanopore Technologies) and NEBNext Companion Module for Oxford Nanopore Technologies Ligation Sequencing (E7180, New England Biolabs).The enzymatic reaction steps were extended to twice the suggested duration.The libraries were sequenced on R9.4.1 flow cells on a MinION or a GridION platform.The fast5 files were base-called using Guppy v. 5.0.11 in super-accuracy mode. De novo assembly and assembly curation The ONT reads were filtered by length using SeqKit (Shen et al. 2016) and aligned to the CN01 genome (RefSeq Accession no.NC_003225.3)with Minimap2 (Li 2018, p. 2).The mapped reads were extracted and de novo assembled using Flye v. 2.9 (Kolmogorov et al. 2019) WSSV genome reconstruction from publicly available sequence data The Sequence Read Archive (SRA) entries that contained signatures of WSSV were identified using BigQuery (last accessed December 2022), and the corresponding entries were downloaded from the DDBJ/NCBI/ENA database.The downloaded entries included the genomes of WSSV and host crustaceans, shotgun metagenomes, and transcriptomes.The sampling dates were determined based on the corresponding BioSample metadata and literature.The reads were trimmed using Fastp v. 0.20.1, and the trimmed reads were mapped to the CN01 genome with Minimap2.The mapped reads were extracted and depth-normalized using BBnorm (Bushnell et al. 2017).The normalized reads were assembled using SPAdes v. 3.15.5 (Nurk et al. 2013), and the contigs were scaffolded by Chromosomer v. 0.1.4a(Tamazian et al. 2016).The scaffolds were polished using Pilon v. 1.24 (Walker et al. 2014) and were manually curated. Phylogenetic analysis and divergence time estimation Publicly available WSSV genomes were downloaded from the NCBI database (last accessed February 2023).Proteincoding genes in some of the publicly available WSSV genomes were highly fragmented.The presence of extensive frameshift mutations was likely due to the use of particular sequencing platforms (IonTorrent PGM or ONT), which have been known to struggle with homopolymers. The fragmentation of core genes such as the major capsid protein (wsv360) and DNA polymerase (wsv514) (Kawato et al. 2019) strongly suggested that the majority of mutations in these assemblies did not originate biologically, and therefore these assemblies were excluded from the analysis.We used Snippy (https:// github.com/ tseem ann/ snippy) to construct a core genome alignment of 61 WSSV genomes by mapping simulated short reads generated from each assembly to the CN01 genome (clean.full.aln in Online Resource 2, Online_Resource_2_whole_genome_alignments.zip).A maximum-likelihood phylogenetic tree was built (clean.full.aln.treefile in Online Resource 2, Online_Resource_2_ whole_genome_alignments.zip) using IQ-TREE 2.2.2.3 (Minh et al. 2020), which revealed two major phylotypes, I and II (Fig. 2).Possible recombinants (lineages 1601 and Qingdao; see "Identification of recombination breakpoints" Section) and genomes with unknown sampling dates were excluded from the downstream analyses. We then built a maximum likelihood phylogenetic tree with IQ-TREE v. 
2.2.2.3 from the recombinant-free, dated whole genome alignment containing 48 sequences (clean2.full.aln.treefileand clean2.full.aln in Online Resource 2, Online_Resource_2_whole_genome_alignments.zip) and inspected the tree with Tempest v. 1.5.3 (Rambaut et al. 2016).Rooting, based on heuristic residual mean squares, placed the root between phylotypes I and II (R 2 = 0.5056; function: heuristic residual mean squared).The root position was independently supported by the minimum variance method implemented in FastRoot v. 1.5 (Mai et al. 2017). Bayesian phylogenetic analysis was performed with BEAST v. 2.7.4 (Bouckaert et al. 2014).Preliminary analysis using 48 genomes placed WSSV-TH or WSSV-TW as ancestral strains, but this was considered implausible since it drastically deviated from the most likely root position found in the Tempest and FastRoot analyses.Tempest analysis found WSSV-TH and WSSV-TW had large rootto-tip genetic distance residuals, which indicated that the two genomes contained disproportionately large numbers of unique nucleotide variations relative to their sampling dates.We suspect that the observed large residuals were sequencing artifacts arising from the Sanger-based shotgun sequencing strategy.Therefore, we excluded from the dataset WSSV-TW, WSSV-TH, and six other sequences with the absolute values of root-to-tip genetic distance residuals larger than 1.30 × 10 -4 (WSSV-LS, POMZ4, POMZ1, Pc2020, PG1, WSSV-Peru). The final recombinant-free whole genome alignment contained 40 genomes (clean3.full.aln in Online Resource 2, Online_Resource_2_whole_genome_alignments.zip).The maximum-likelihood phylogenetic tree built with IQ-TREE v. 2.2.2.3 (clean3.full.aln.treefile in Online Resource 2, Online_Resource_2_whole_genome_alignments.zip) yielded a Tempest R 2 value of 0.8049 (function: heuristic residual mean squared).Bayesian divergence time estimation was performed using BEAST v. 2.7.4 (Bouckaert et al. 2014).To account for the ambiguity in the sampling dates of JP01A, JP01B, and JP02, we set a prior distribution of the three tips as a normal distribution (σ = 1.0) with the mean at 1998.Fifty million iterations were performed, which were sampled every 10,000 steps after a 10% burn-in.We used Tracer v. 1.7.2 (Rambaut et al. 2018) to monitor the progress of the run and to ensure that the effective sampling sizes of all parameters were larger than 200, except the posterior of run "coalescent constant, GTR," which was 175.3.Three population models (constant, exponential, and Bayesian skyline) and two substitution models (HKY and GTR) were used to assess the impact of model selection.A maximum clade credibility tree was generated for each run with TreeAnnotator (Drummond and Rambaut 2007), which was visualized with FigTree v. 1.4.4 (http:// tree.bio.ed.ac.uk/ softw are/ figtr ee/).We arbitrarily selected the estimate under the HKY model assuming a constant population size for presentation in Fig. 3, as all estimates converged on similar tree topologies and estimated divergence dates.BEAST XML files and resulting trees are available as Online Resource 3 (Online_Resource_3_BEAST_trees.zip). Identification of recombination breakpoints Possible recombinants (lineages 1601 and Qingdao) were detected by inspecting neighbor-net (Bryant and Moulton 2004) phylogenetic networks constructed from the whole genome alignment of 61 WSSV genomes using SplitsTree4 v. 4.19.0 (Huson 1998).We used RDP4 v. 4.101 (Martin et al. 
2015) to analyze the recombination breakpoints of the cross-phylotype lineages. Subsampling of WSSV genomes was necessary to complete RDP4 analysis. Recombination sites of lineage 1601 were identified by analyzing the following six genomes: phylotype I, CN01 and PC; phylotype II, CN03 and WSSV-AU; lineage 1601, 1601 and GCF7. Recombination sites of lineage Qingdao were identified by analyzing the following six genomes: phylotype I, CN01 and PC; phylotype II, CN03 and WSSV-AU; lineage Qingdao, Qingdao2019 and Qingdao2020. Recombination breakpoints were plotted against the CN01 genome, and the corresponding regions of the genome alignment were extracted with a custom script (https://github.com/satoshikawato/bio_small_scripts/blob/main/crop_alignment.py). Maximum-likelihood phylogenetic trees based on the subgenomic alignments were constructed with IQ-TREE 2.2.2.3. Subgenomic alignments and maximum-likelihood phylogenetic trees are available as Online Resource 4 (Online_Resource_4_subgenomic_alignments.zip).

Sequencing and assembly of Japanese WSSV genomes

We sequenced a total of 12 specimens from farmed and wild crustaceans collected in Japan (Table 1), resulting in 0.35 to 3.74 Gb of Illumina reads per specimen and 69 Mb to 14.9 Gb of ONT reads per specimen (Online Resource, Table S1). A total of 13 genomes were recovered, ranging in size from 288,190 bp (Miyako2021) to 311,562 bp (JP02), with the number of protein-coding genes ranging from 160 to 180 (Table 2).

JP01A (299,976 bp) and JP01B (293,923 bp) originate from diseased P. japonicus in Miyazaki Prefecture between 1995 and 2000. Haplotype phasing resolved two closely related genotypes present in the viral inoculum. Isolate JP02 (Fig. 1), originating from Yamaguchi Prefecture between 1995 and 2000, had a genome size of 311,562 bp, the largest complete WSSV genome sequenced to date and approaching the estimated genome size of TH-96-II (312 kbp) (Marks et al. 2005). The coding regions were overall very similar to those of CN01 (309,286 bp; NCBI RefSeq Accession no. NC_003225) (Li et al. 2017). The difference in genome size between JP02 and CN01 was mainly due to variations in repeat sequence lengths, including homologous repeats and variable number tandem repeats in ORFs.

Isolates JP03 and JP04, obtained from Mikawa Bay in Aichi Prefecture and Lake Hamana in Shizuoka Prefecture, respectively, shared (i) the translocation of the wsv486 gene (Online Resource, Fig. S1), (ii) a 3381-bp deletion around the ORF14/15 region (Online Resource, Fig. S2), and (iii) a 7057-bp deletion around the ORF23/24 region (Online Resource, Fig. S3). The translocation of wsv486 has previously been reported in CN03 (GenBank Accession no. KT995471.1) (Li et al. 2017). The similarity of WSSV genomes from Mikawa Bay and Lake Hamana suggests that the two isolates share a common origin, which is consistent with the fact that wild P. japonicus populations in the two prefectures share a common spawning ground, the Sea of Enshu (Suitoh et al. 2014). Isolate 79 was identified during an epidemiological survey of WSSV in P. japonicus, while Pc2020 was identified in a wild Procambarus clarkii from Chiba Prefecture. Collectively, the WSSV genomes derived from wild crustaceans provide further evidence that WSSV has become established in wild crustacean populations in Japan.
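The subgenomic extraction step described in the recombination-breakpoint methods above (cropping the whole-genome alignment down to CN01 coordinate ranges before tree building) is conceptually simple, and a minimal Python sketch of the idea is given below. This is an illustration only, not the published crop_alignment.py: the input file name is the alignment name from Online Resource 2, the coordinates correspond to the first recombination-free segment described later, and a plain FASTA alignment containing CN01 as one of its rows is assumed.

```python
# Minimal sketch: crop a whole-genome FASTA alignment to a reference
# coordinate range (1-based, inclusive) given on the ungapped reference.
# Assumes the reference sequence (here "CN01") is one of the aligned rows.

def read_fasta(path):
    seqs, name = {}, None
    with open(path) as fh:
        for line in fh:
            line = line.rstrip()
            if line.startswith(">"):
                name = line[1:].split()[0]
                seqs[name] = []
            elif name:
                seqs[name].append(line)
    return {n: "".join(parts) for n, parts in seqs.items()}

def ref_to_alignment_columns(ref_row, start, end):
    """Map ungapped reference positions (1-based) to alignment column indices."""
    cols, pos = [], 0
    for col, ch in enumerate(ref_row):
        if ch != "-":
            pos += 1
            if start <= pos <= end:
                cols.append(col)
            if pos > end:
                break
    return cols

def crop_alignment(aln, ref_name, start, end):
    cols = ref_to_alignment_columns(aln[ref_name], start, end)
    lo, hi = cols[0], cols[-1] + 1   # keep one contiguous alignment block
    return {name: row[lo:hi] for name, row in aln.items()}

if __name__ == "__main__":
    aln = read_fasta("clean.full.aln")                 # whole-genome alignment (assumed file name)
    segment = crop_alignment(aln, "CN01", 23315, 111734)
    with open("segment_23315_111734.fasta", "w") as out:
        for name, row in segment.items():
            out.write(f">{name}\n{row}\n")
```

Columns that are insertions relative to CN01 stay inside the sliced block, which is usually what one wants before handing the segment to IQ-TREE.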
Draft WSSV genomes recovered from publicly available sequencing data

A total of 30 WSSV draft genomes were recovered from publicly available sequencing datasets (Table 3). Nineteen MAGs were recovered from samples derived from China,

Two major phylotypes

The maximum-likelihood phylogenomic tree revealed two phylotypes (Fig. 2). Rooting between the two phylotypes was supported by heuristic residual mean squares, implemented in Tempest, and the minimum variance method implemented in FastRoot. We identified two cross-phylotype recombinant lineages (1601, shaded in orange in Fig. 2; Qingdao, light green), which will be discussed later (see "Emergence of recombinant strains").

Divergence time estimates

We next estimated the divergence time of the two phylotypes by Bayesian phylogenetic analysis. For the analysis, we removed from the dataset (i) sequences with unknown sampling dates, (ii) cross-lineage recombinants, and (iii) WSSV genomes that had exceptionally large or small root-to-tip divergences in the maximum-likelihood phylogenetic tree. The Bayesian phylogenetic analyses using 40 WSSV genomes yielded 95% confidence intervals for the divergence between phylotypes I and II occurring around 1973-1981 (median in Fig. 3: 1977), suggesting that the two phylotypes had diverged prior to the 1990s pandemic. The estimate was robust to the choice of priors, including population dynamics (coalescent constant, coalescent exponential, and Bayesian skyline) and substitution models (HKY and GTR).

The Bayesian phylogenetic analyses converged to median estimated mutation rates of 1.11 × 10⁻⁵ to 1.15 × 10⁻⁵ substitutions/site/year (Online Resource, Table S2). While this may be higher than estimated mutation rates of other dsDNA viruses (Brennan et al. 2022; Firth et al. 2010; Guellil et al. 2022; Morga et al. 2021), it can be reasonably derived from an experimentally determined mutation rate of a baculovirus (1 × 10⁻⁷ mutations/site/replication) (Boezen et al. 2022) assuming 100 replications/year, both of which are reasonable assumptions considering the biology of WSSV (a numerical restatement of this argument is sketched below). Although we should be cautious in interpreting this value, since estimated virus mutation rates vary substantially between short and long terms (Duchêne et al. 2014; Ghafari et al. 2021), we believe that our estimate of the WSSV mutation rate is reasonable for analyzing the genomic evolution of WSSV in the past few decades.

Phylotype I was responsible for the 1990s pandemic

Phylotype I contained all strains isolated during the 1990s pandemic that originated in Asia. All Japanese WSSV genomes belonged to phylotype I. The lack of resolution among the phylotype I members in the tree is consistent with the rapid spread of the virus, which did not allow enough time for WSSV genomes to accumulate mutations to enable tracking its geographic spread. Marks et al. (2004) proposed the existence of an ancestral strain with the largest genome, which subsequently shed redundant segments to become smaller genomes (Marks et al. 2004). Isolate TH-96-II, with an estimated genome size of 312 kbp, is believed to be a close representation of the ancestral WSSV genome, although the whole genome of this isolate has not been published (Marks et al. 2005). JP02 and CN01, both belonging to phylotype I, retained the intact ORF13/14 and ORF24/25 regions and are therefore likely to closely resemble TH-96-II (Online Resource, Figs. S2 and S3). The short branch lengths of JP02 and CN01 also suggest that they are close representations of the common ancestor(s) of phylotype I (Fig. 1).
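Returning briefly to the substitution-rate argument above: it amounts to one line of arithmetic, restated here purely as a sanity check. The 100 replications/year figure is the assumption stated in the text, and the ~300 kbp genome size is a round number taken from the range reported in the Introduction.

```python
# Back-of-the-envelope check of the estimated WSSV substitution rate.
per_replication_rate = 1e-7    # mutations/site/replication (baculovirus estimate, Boezen et al. 2022)
replications_per_year = 100    # assumed replication cycles per year

expected_rate = per_replication_rate * replications_per_year
print(f"expected rate    : {expected_rate:.2e} substitutions/site/year")   # 1.00e-05

for observed in (1.11e-5, 1.15e-5):                                        # Bayesian medians (Table S2)
    print(f"observed/expected: {observed / expected_rate:.2f}")

# On a ~300 kbp genome this corresponds to roughly 3 substitutions per genome per year,
# i.e. on the order of a hundred substitutions per lineage since a 1970s divergence.
genome_size = 300_000
print(f"substitutions/genome/year ~= {expected_rate * genome_size:.1f}")
```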
A small branch which forms a sister clade to all other phylotype I members was found (Figs. 2 and 3). The estimated divergence time between this clade and the other members, including CN01 and JP02, predates the 1990s, suggesting that the two lineages had diverged before the pandemic. This might represent another line of evidence that WSSV had pre-pandemic genetic diversity. Alternatively, it is possible that there have been small cross-phylotype recombination events that could not be detected by our analysis, but which contributed phylotype II-like phylogenetic signals to these sequences.

Taken together, these findings suggest that phylotype I was responsible for the 1990s pandemic and that JP02, CN01, and TH-96-II represent the ancestral genotype(s) of phylotype I that emerged in the pandemic epicenter in the 1990s in East Asia (Zwart et al. 2010).

Phylotype II is characterized by a 5949-bp deletion in the variable region ORF13/14 (relative to TH-96-II; Online Resource, Fig. S2). Based on this specific genomic deletion, it is likely that WSSV genotypes from Madagascar and Saudi Arabia belong to phylotype II (Onihary et al. 2021; Tang et al. 2013), although it is possible that other parts of their genomes have originated from other phylotype(s) due to recombination. ORF23/24 is more variable, with Indian isolates (CWG3, DBA1182, and WSSV-LS) sharing an 11,273-bp deletion (relative to CN01; Online Resource, Fig. S3), while Chinese (CN03 and CN04) and Bangladeshi (SS304) isolates share a 10,893-bp deletion (relative to CN01; Online Resource, Fig. S3). The ORF14/15 and ORF23/24 regions in WSSV-AU appear to have accumulated additional sequences.

Collectively, the distribution of phylotype II appears to be localized in Asia, and this phylotype appears to have a distinct origin from that of phylotype I. This suggests that phylotype II represents a WSSV lineage already present in Asia prior to the 1990s pandemic.

Emergence of recombinant strains

Virus genomes can recombine, sometimes leading to a more complex evolutionary history with reticulate rather than bifurcating branches (Brennan et al. 2022; Kolb et al. 2017). To investigate the possibility of genome recombination events in WSSV, we generated a neighbor-net phylogenetic network, which visualizes conflicting phylogenetic signals present in the whole genome alignment (Fig. 4a). By visually inspecting the phylogenetic network and comparing its topology with that of the maximum-likelihood phylogenetic tree in Fig. 2, we identified two possible cross-phylotype recombinant lineages, which we named 1601 and Qingdao.

Lineage 1601 consists of isolate 1601 (MH663976.1; referred to as "Procambarus clarkii virus" and "WSSV-Cc" by the authors of the original publication) (Ke et al. 2021) and four related MAGs (Laibin2019, GCF7, Jangsu2019, and Sichuan2020). This cluster was placed within phylotype I in the maximum-likelihood phylogenetic tree in Fig. 2, whereas it fell into phylotype II in the neighbor-net network. Lineage Qingdao is represented by MAGs Qingdao2019 and Qingdao2020, which were identified from RNA-seq data of P. japonicus sampled in Qingdao, Shandong, China, in 2019 and 2020. Qingdao2019 and Qingdao2020 form an early branching clade in phylotype II in the maximum-likelihood phylogenetic tree (Fig. 1). However, the phylogenetic network linked the Qingdao lineage to a phylotype I representative (Weifang2018) with a reticulation, suggesting a cross-phylotype recombination.
We hypothesized that these conflicting phylogenetic signals result from genomic recombination between the two phylotypes, leading to distinct parts of the genome originating from different phylotypes. To test this, we used RDP4 to define potential recombination breakpoints within the two recombinant lineages (Fig. 4b). The analysis revealed two recombination events in the 1601 lineage and one in the Qingdao lineage (Online Resource, Table S3).

The predicted recombination breakpoints tell us which parts of the genome originate from which phylotype, and we expected that a phylogenetic analysis of selected genome segments should corroborate this. We selected three regions in the genome that showed no signs of recombination in either recombinant lineage (coordinates 23,734, 111,858, and 205,825; Fig. 4b). The maximum-likelihood phylogenetic trees constructed from these regions supported the presence of two major phylotypes. As expected, lineages 1601 and Qingdao were classified under different phylotypes depending on the genome segment (Fig. 4c-e). These results demonstrate that cross-phylotype recombination events gave rise to the 1601 and Qingdao lineages.

In conclusion, these observations indicate that WSSV genomes have experienced cross-phylotype recombination events, resulting in the emergence of at least two chimeric strains.

Discussion

In this study, we aimed to understand the evolution of WSSV using whole genome sequences. Phylogenetic analysis indicated the presence of two major phylotypes. The overall topology of the phylogenetic tree is consistent with the explosive spread of phylotype I during the 1990s pandemic. The common ancestor of phylotype I, the probable pandemic strain, likely had the largest genome, over 310 kbp in size, and spread worldwide, and the genome shrank independently in various geographic regions (Zwart et al. 2010). Phylotype II, in contrast, seems to have a distinct origin in Asia and spread to Australia. A divergence time estimate pointed to the most recent common ancestor of the two phylotypes existing between 1970 and the early 1980s. This suggests that there was a preexisting diversity of WSSV genotypes in Asia before the 1990s pandemic.

WSSV classification based on partial genomic segments should be interpreted with caution, as it only reflects the origin of the given genomic segments, rather than the entire genome. The traditional molecular markers are useful in analyzing epidemiology within a country or a province, as clearly stated by the original authors (Dieu et al. 2004). Our results also indicate that WSSV genomes do recombine, further complicating the evolutionary history of the whole viral genome. In this regard, it may be difficult to classify WSSV isolates based on a handful of markers and discuss the origins of WSSV.
We identified two cross-phylotype recombinant lineages, 1601 and Qingdao.It is possible that there are recombinant lineages that have been missed out in our analysis, as suggested by the inconsistent placement of some isolates in the phylogenetic trees constructed from subgenomes.We have found that detecting recombination in WSSV genomes is a complex task which requires careful attention to various factors, such as the construction of a reliable whole genome alignment, selection of confidently non-recombinant reference sequences, and consideration of structural variations in accessory genes and repetitive sequences.We also explored the possibility that recombinant MAGs were artifacts resulting from mixed infection, but this was considered unlikely because highly similar sequences were identified from multiple datasets sampled at different localities and timepoints. The hidden genetic diversity of WSSV in Asia has been suggested by Oakey et al. (Oakey et al. 2019) and Zeng (Zeng 2021) who assessed the diversity of variable numbers of tandem repeats.It is possible that a thorough epidemiological survey in Asia could reveal a yet unknown genotypic diversity of WSSV.Direct sampling of WSSV in diverse geographic locations is difficult for various reasons.However, we may be able to discover ancestral WSSV strains through bioinformatic analyses (Kawasaki et al. 2021), as we have successfully recovered multiple WSSV genomes from publicly available sequence data, including datasets that do not necessarily target WSSV. Fig. 1 Fig. 1 Genome diagram of white spot syndrome virus JP02.Outer track: protein-coding genes and their transcriptional orientations (blue).Middle track: GC skew of 100 bp sliding windows with 10-bp increments (positive: emerald, negative: purple).Inner track: deviation of GC contents from the average, 100 bp sliding windows with 10-bp increments Fig. 2 Fig.2Maximum-likelihood phylogenetic tree of WSSV genomes.Japanese WSSV MAGs sequenced in this study are highlighted with a filled circle (•); WSSV MAGs recovered from publicly available high-throughput sequencing data are highlighted with a triangle (▲).Sampling years (if known) are indicated after an @ sign, followed by the NCBI accession numbers, if they exist.The sampling years for JP01A, JP01B, and JP02, which range from 1995 to 2000, due to uncertainty, were arbitrarily indicated as 1998.Numbers beside nodes indicate the SH-aLRT support (%)/ultrafast bootstrap support (%).Phylotypes I and II are shaded with light blue and pink, respectively.Cross-phylotype recombinant lineages 1601 and Qingdao are shaded with orange and light green, respectively Fig. 4 Fig. 4 Cross-phylotype recombinant WSSV lineages.a Neighbor-net phylogenetic network of 61 WSSV genomes using uncorrected distances ("Uncorrected_P") ignoring constant sites.Phylotypes I and II are shaded with light blue and pink, respectively.Cross-phylotype recombinant lineages 1601 and Qingdao are shaded with orange and light green, respectively.Lineage 1601 clusters with phylotype II despite its phylogenetic position within phylotype I in the maximum-likelihood phylogenetic tree in Fig. 
1. Lineage Qingdao is linked to Weifang2018 (phylotype I) with a reticulation, indicative of conflicting phylogenetic signals and suggesting their close relationship. b Coordinates of recombination breakpoints in lineages 1601 and Qingdao. The upper track shows the genome diagram of WSSV CN01. The middle and bottom tracks indicate the inferred origins of genome segments in lineages 1601 and Qingdao. The breakpoint coordinates are those of the CN01 genome. The gray boxes between the middle and bottom tracks denote the recombination-free segments used for the phylogenetic analyses in c to e. c Maximum-likelihood phylogenetic tree of the recombination-free segment corresponding to nucleotides 23,315-111,734 in the CN01 genome. Phylotypes I and II are shaded with light blue and pink, respectively. Cross-phylotype recombinant lineages 1601 and Qingdao are shaded with orange and light green, respectively. Numbers beside nodes indicate the SH-aLRT support (%)/ultrafast bootstrap support (%). d Maximum-likelihood phylogenetic tree of the recombination-free segment corresponding to nucleotides 111,735-190,858 in the CN01 genome. e Maximum-likelihood phylogenetic tree of the recombination-free segment corresponding to nucleotides 205,155-297,825 in the CN01 genome

Table 1 WSSV samples sequenced in this study

Canu was particularly useful for removing chimeric reads arising from MDA, which improved the assembly quality. The resulting assemblies were polished with POLCA v. 3.4.2 (Zimin and Salzberg 2020) using Illumina reads trimmed with Fastp v. 0.20.1 (Chen et al. 2018). JP01 was found to be a mixture of two closely related haplotypes, and therefore two genotypes were resolved using FreeBayes v. 1.3.6 (Garrison and Marth 2012) and WhatsHap v. 1.6 (Patterson et al. 2015). Haplotype-specific reads were reassembled using Canu v. 2.2 and polished with POLCA v. 3.4.2.

Table 2 Assembly statistics of Japanese WSSV MAGs
Whole Blood Gene Expression Profiling in Preclinical and Clinical Cattle Infected with Atypical Bovine Spongiform Encephalopathy Prion diseases, such as bovine spongiform encephalopathies (BSE), are transmissible neurodegenerative disorders affecting humans and a wide variety of mammals. Variant Creutzfeldt-Jakob disease (vCJD), a prion disease in humans, has been linked to exposure to BSE prions. This classical BSE (cBSE) is now rapidly disappearing as a result of appropriate measures to control animal feeding. Besides cBSE, two atypical forms (named H- and L-type BSE) have recently been described in Europe, Japan, and North America. Here we describe the first wide-spectrum microarray analysis in whole blood of atypical BSE-infected cattle. Transcriptome changes in infected animals were analyzed prior to and after the onset of clinical signs. The microarray analysis revealed gene expression changes in blood prior to the appearance of the clinical signs and during the progression of the disease. A set of 32 differentially expressed genes was found to be in common between clinical and preclinical stages and showed a very similar expression pattern in the two phases. A 22-gene signature showed an oscillating pattern of expression, being differentially expressed in the preclinical stage and then going back to control levels in the symptomatic phase. One gene, SEL1L3, was downregulated during the progression of the disease. Most of the studies performed up to date utilized various tissues, which are not suitable for a rapid analysis of infected animals and patients. Our findings suggest the intriguing possibility to take advantage of whole blood RNA transcriptional profiling for the preclinical identification of prion infection. Further, this study highlighted several pathways, such as immune response and metabolism that may play an important role in peripheral prion pathogenesis. Finally, the gene expression changes identified in the present study may be further investigated as a fingerprint for monitoring the progression of disease and for developing targeted therapeutic interventions. Introduction Transmissible spongiform encephalopathies (TSEs), or prion diseases, are a group of fatal neurodegenerative disorders, which affect humans and a wide variety of animals. They include Creutzfeldt-Jakob disease (CJD), Gerstmann-Sträussler-Scheinker syndrome (GSS) and fatal familial insomnia (FFI) in humans [1], scrapie in goats and sheep [2], chronic wasting disease (CWD) in cervids [3] and bovine spongiform encephalopathy (BSE) in cattle [4]. The etiological agent of TSEs is an abnormally folded isoform (PrP Sc ) of the cellular prion protein (PrP C ), which accumulates in the nervous and lymphoreticular systems during the progression of the disease [5]. PrP Sc accumulation, neuronal loss, spongiosis and astrogliosis are common hallmarks of prion diseases [6]. Despite the fact that the pathological features of these diseases are well characterized, the molecular mechanisms and the signaling pathways underlying TSEs are largely unknown. The appearance of BSE in the United Kingdom (UK) in 1986 [7] led to an increased interest in these diseases, especially because of its epidemic nature in the UK. Foodborne transmission of BSE prions to humans was observed in the 1990s with the appearance of a new variant form of CJD (vCJD) [8]. It has been shown experimentally that BSE prions have strain characteristics identical to those of prion isolates from human cases of vCJD [9]. 
So far, 229 cases of vCJD have been reported around the world [10]. In recent years, two atypical forms of BSE have been identified in several European countries [11], Japan [12,13], the United States [14] and Canada [15]. The two atypical BSE strains are denoted as H-type BSE and L-type BSE (also named bovine amyloidotic spongiform encephalopathy, BASE) [16,17]. The "H" and "L" identify the higher and lower electrophoretic mobility of the unglycosylated protease resistant PrP Sc fragment, respectively [18]. So far, both atypical subtypes have been identified only in cattle that were at least eight years old [19]. In view of that, it has been postulated that, unlike classical BSE (cBSE), cases of atypical BSE may have risen spontaneously, although transmission through feed or the environment cannot be ruled out. Indeed, histopathological as well as immunohistochemical analyses showed that atypical forms of BSE can be experimentally transmitted to mice [20][21][22] as well as to cattle. Moreover, they differ from cBSE and from each other in terms of clinical features [23][24][25] and biochemical properties [26][27][28]. Interestingly, some recent studies showed that H-and L-type BSE prions may acquire cBSE-like properties during propagation in animals expressing homologous bovine prion protein [29] or during inter-species transmission [17,30], respectively. These findings support the view that the epidemic BSE agent could have originated from atypical cattle prions. While cBSE cases are now rapidly disappearing as a result of appropriate measures to control animal feeding, more insight into atypical BSE would be necessary in order to carry out risk assessment and to adopt appropriate control measures. Given the infective nature of prions, the identification of specific molecular signatures may be helpful for the development of preclinical diagnostic tests in order to prevent horizontal transmission of the disease and potentially to develop targeted therapies in humans. Highthroughput genomic techniques, such as DNA microarrays and RNA-seq, are the most frequently used methodologies for the identification of differentially expressed genes [31]. Gene expression approaches were first applied for studying scrapie [32,33], while for BSE, and particularly for atypical BSE, they have appeared only recently in the literature [34][35][36]. Rodent models have been widely employed for large-scale studies of prion diseases [37,38]; however, it is of the utmost importance to extend these studies to the ruminant species naturally affected by these diseases. In particular, most analyses in cattle have been performed using central nervous system (CNS) tissues from infected animals. Such studies are certainly of relevance but are not particularly suitable for diagnostic purposes. Also, the large majority of these genomic studies have been focused on the cBSE infection, while very few data are available about the involvement of peripheral tissues in atypical BSE infected cattle [36]. Peripheral blood is a readily accessible source of biological information on disease status and it is a suitable tissue for prospective rapid diagnostic tests in animals and patients. The objective of the present study was to identify molecular patterns in whole blood of atypical BSE-infected cattle in both clinical and preclinical stages of the disease. Transcriptional changes were analyzed using microarray technology and data were validated by Reverse Transcriptase quantitative PCR (RT-qPCR). 
Materials and Methods All procedures involving animals were approved by the Home Office of the UK government according to the Animal (Scientific Procedures) Act 1986 and in conformity with the institutional guidelines of the Istituto Zooprofilattico Sperimentale del Piemonte Liguria e Valle d'Aosta, Turin, Italy (IZSPLV), that were in compliance with national (D.L. no. 116, G.U. suppl. 40, Feb. 18, 1992, Circular No.8, G.U., 14 July 1994) and international regulations (EEC Council Directive 86/609, OJ L 358, 1 Dec. 12,1987). All the experimental protocols proposed were reviewed and approved by the IZSPLV Animal Care and Use Committee (IACUC). Blood Samples Blood samples from 8 BSE-infected cattle (4 with H-type and 4 with L-type BSE) and 2 noninfected controls were provided by the Biological Archive Group at the Animal and Plant Health Agency, United Kingdom. All procedures involving animals were approved by the Home Office of the UK government according to the Animal (Scientific Procedures) Act 1986. The calves were born by crossing Aberdeen angus with females imported from Denmark (Danish Holstein, Danish milking red). The inoculation details have been reported previously [23]. Briefly, experimental cattle were intracerebrally inoculated with 1 ml of a 10% brain homogenate of either L-type or H-type BSE at 10-11 months of age [23]. All infected cattle used in this study were females. The negative controls were age and sex-matched with the infected group. For each animal, the blood sampling was performed at 2 different time points after inoculation, corresponding to the preclinical (6-months post infection) and the clinical (from 22 to 26 months post infection) stage of the disease. In this way we obtained 16 samples, 8 in the preclinical and 8 in the clinical stage. The estimated clinical onset after infection was based on the presence of changes in behavior, unexpected startle responses, and difficulty in rising [23]. Neurological examination and behavioral observations were conducted routinely until the culling of the animals. TSE infection was confirmed by post-mortem immunohistochemistry on brain sections of the animals [23]. Detailed information on the husbandry procedures and the pathological signs have been described in a previous study published by Konold et al. [23]. Finally, blood samples from 6 sex-matched Aberdeen angus from a different herd were added to the study and used as additional negative controls to obtain a sample size comparable to the one of the infected animals (8 samples). RNA Isolation 500 μL of fresh blood were stabilized in 1.3 mL RNAlater 1 Solution and immediately frozen at -20°C. Samples were sent in dry ice to IZSTO (Turin, Italy), where the RNA was isolated according to the RiboPure™-Blood Kit manufacturer's instructions (Ambion 1 ). DNase I treatment (Ambion 1 ) was included in the RNA extraction protocol to reduce DNA contamination. Purified RNA was eluted in 50 μL elution solution and the final concentration, as well as the absence of protein, was determined using a Thermo Scientific™ NanoDrop 2000 spectrophotometer. Since the RNA concentration was too low to proceed with the subsequent analysis, the RNA samples were concentrated using a Labconco CentriVap concentrator. The new concentration was assessed using a Thermo Scientific™ NanoDrop 2000 spectrophotometer and the integrity of the RNA was determined by capillary electrophoresis (Agilent 2100 Bioanalyzer, Agilent Technologies, Santa Clara, USA). 
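Spectrophotometric quantification of the eluted RNA relies on two standard relationships: an A260 of 1.0 corresponds to roughly 40 ng/µL of RNA, and an A260/A280 ratio close to 2.0 indicates little residual protein. The short sketch below only restates that arithmetic; the absorbance readings and the acceptance window are illustrative placeholders, not values from this study.

```python
def rna_concentration_ng_per_ul(a260, dilution_factor=1.0, extinction=40.0):
    """Approximate RNA concentration from absorbance at 260 nm.

    extinction: ~40 ng/uL per A260 unit for single-stranded RNA.
    """
    return a260 * extinction * dilution_factor

def purity_check(a260, a280):
    """An A260/A280 ratio around 2.0 suggests RNA largely free of protein."""
    ratio = a260 / a280
    return ratio, 1.8 <= ratio <= 2.1

if __name__ == "__main__":
    a260, a280 = 0.25, 0.13   # placeholder absorbance readings
    conc = rna_concentration_ng_per_ul(a260)
    ratio, acceptable = purity_check(a260, a280)
    print(f"concentration ~= {conc:.1f} ng/uL, A260/A280 = {ratio:.2f}, acceptable = {acceptable}")
```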
Microarray Hybridization, Statistical Analysis and Data Mining 24 RNA samples were used for the microarray analysis: 8 preclinical (P1, P2, P4, P5, P7, P8, EP9 and P10), 8 clinical (S1, S2, S3, S4, S7, S8, S9 and S10), and 8 control (c.2, c.3, c.P3, c.5, c. S5, C.P6, C.S6 and c.9) samples. 120 ng of each total RNA were used as template for the synthesis of biotin-labeled cRNA according to the standard one-cycle amplification and labeling protocol developed by Affymetrix (Santa Clara, CA). cRNA was then fragmented and hybridized on GeneChip 1 Bovine Genome Array, which contains over 24128 probe sets. The microarrays were washed, stained (Affymetrix fluidic station 450 DX) and scanned (Affymetrix scanner 3000 7G). Cell intensity values from the raw array data were computed using the Affymetrix GeneChip 1 Operating Software (GCOS). Microarray quality control and statistical analysis were performed in the software system R using the Bioconductor package OneChannelGUI [39][40][41]. The LIMMA algorithm was used to compute a linear model fit [42]. Data filtering and normalization was carried out using GC-Robust multi-array analysis (GCRMA) from imported Affymetrix data (.CEL) files. After the assessment and inspection of microarray quality controls (RNA degradation plot, RLE and NUSE plots) we identified one low quality control sample (cS5) and excluded it from the analysis (S1 Appendix). Gene probes with a p value 0.05 and fold-change 2 were considered to be differentially expressed. Differentially expressed probe sets were functionally classified using David Bioinformatics tool [43,44] on the Affymetrix bovine background. Heat maps were generated using the heatmap.2 function from the gplots library in the R statistical environment [45,46]. Probe set data were hierarchically clustered with complete linkage using the Euclidean metric. were carried out by denaturing at 95°C for 15s, annealing at 60°C for 1 min and extension at 55°C for 1 min for 45 cycles. Melt curve analysis and gel electrophoresis of amplification products were performed for each primer pair to confirm the production of a single PCR amplicon. The amplification was performed using a CFX96™ Real-Time PCR Detection System (Bio-Rad Laboratories, Inc.). All the RT-qPCR reactions were run in triplicate and included the following controls: no template (NTC) and minus-reverse transcriptase (RT-) negative controls. The normalization accuracy was improved by geometric averaging of multiple reference genes [48] and using two inter-run calibrators to reduce inter-run variation. We decided to use a normalization factor based on three reference genes (GAPDH, RPL12 and ACTB) since it has been shown in the literature that this is the minimal number required for a reliable normalization [49]. Stability of the selected reference genes was determined by calculating their geNorm M value (M) and the coefficient of variation (CV) on the normalized relative quantities [50]. M and CV values were then compared against empirically determined thresholds for acceptable stability (~1 and~0.5 for M and CV values respectively) [50] (S2 Appendix). The statistical analysis and the fold change calculation were carried out using qBasePlus 1.1 software [50]. 
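The multi-reference-gene normalization described above reduces to dividing each target gene's relative quantity by the geometric mean of the reference-gene quantities measured in the same sample. The sketch below illustrates that calculation; it is not the qBasePlus implementation, and the Cq values and the assumed 100% amplification efficiency are placeholders.

```python
from statistics import geometric_mean

# Toy Cq values (cycles) for two samples; the numbers are placeholders.
cq = {
    "infected": {"GAPDH": 18.2, "RPL12": 20.1, "ACTB": 17.5, "SEL1L3": 26.4},
    "control":  {"GAPDH": 18.0, "RPL12": 19.8, "ACTB": 17.3, "SEL1L3": 24.9},
}
references = ("GAPDH", "RPL12", "ACTB")
target = "SEL1L3"
E = 2.0  # assumed amplification efficiency (perfect doubling per cycle)

# Relative quantity of each gene in each sample, scaled to the lowest Cq for that gene.
genes = {g for sample in cq.values() for g in sample}
min_cq = {g: min(sample[g] for sample in cq.values()) for g in genes}
rq = {s: {g: E ** (min_cq[g] - sample[g]) for g in sample} for s, sample in cq.items()}

# Per-sample normalization factor = geometric mean of the reference-gene quantities.
nf = {s: geometric_mean([rq[s][g] for g in references]) for s in cq}

for s in cq:
    normalized = rq[s][target] / nf[s]
    print(f"{s}: normalized {target} quantity = {normalized:.3f}")
```

With these toy numbers the infected sample ends up at roughly 0.4 times the control level for the target, which is the kind of normalized fold change the validation experiments report.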
Identification of Differentially Expressed Genes (DEGs) in the Blood of Atypical BSE-Infected Cattle To investigate if gene expression alterations were present in blood from atypical BSE-infected cattle (clinical and preclinical), we performed microarray experiments using Affymetrix Gene-Chip 1 Bovine Genome Array. Since the goal of this project was to identify a common pattern of DEGs in atypical BSE infection, we defined the 4 H-and the 4 L-type inoculated-cattle as one single group of 8 animals named as atypical infected cattle. This approach allowed us to increase the sample size to improve the statistics and thus obtain more reliable results. The distribution of signal intensities, relative log expression (RLE) and normalized unscaled standard error (NUSE) plots were examined in order to avoid procedural failures and the presence of degraded RNA samples. After the assessment of microarray quality controls we identified one control sample (cS5) as an outlier and excluded it from the analysis (see S1 Appendix). Statistical analysis was performed on microarray results using the oneChannelGUI Bioconductor package [39]. The raw microarray data were deposited in the Gene Expression Omnibus (GEO) repository and assigned the accession number GSE69048. The data sets supporting the results of this article are available in the Gene Expression Omnibus (GEO) repository: http:// www.ncbi.nlm.nih.gov/geo/query/acc.cgi?token=mzgxuaagddwftcz&acc=GSE69048. Statistical comparison between the infected animals (clinical and preclinical) and the control group (IvsCtrl) revealed a total of 101 differentially regulated probe sets (p value lower than 0.05 and changes in expression higher than 2-fold) as shown in Table 1. Some of these probe sets encoded for the same gene. Gene annotation, performed using DAVID Bioinformatics Resources [43,44], identified a subset of 93 genes with known functions. The most relevant functional groups are reported in Table 2. To evaluate to what extent gene expression alterations in blood were related to the preclinical or clinical stage of the disease, two distinct statistical analyses were performed comparing each group (8 samples) with the control one: clinical versus control (CvsCtrl) and preclinical versus control (PvsCtrl). In the clinical stage, a total of 207 probe sets showed significant alteration in expression levels compared to the control group. Among these, 87 were up-regulated while 120 had a reduction in expression. Interestingly, a pronounced alteration in the gene expression profile was also found in the preclinical stage, with a total number of 113 differentially expressed probe sets (55 genes were up-regulated while 58 were down-regulated). Two heat maps representing the differentially expressed probe sets in preclinical and clinical groups are shown in Fig 1. The complete probe set lists with the relative p values and fold changes can be found in S1 and S2 Tables. A gene enrichment analysis was performed to identify the most enriched GO terms in the clinical and preclinical groups (Fig 2). DEGs specific of the clinical group were clustered in functional categories related to cytokine-cytokine receptor interaction, regulation of leukocyte activation, inflammatory response, autoimmune thyroid disease, chemokine activity, B cell proliferation and differentiation, regulation of apoptosis, kinase inhibitor activity, and membrane raft (Fig 2B). 
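The DEG selection rule used for all of the comparisons above (p value below 0.05 and at least a two-fold change) translates into a simple filter on a per-probe-set results table. The pandas sketch below illustrates the criterion on an invented four-row table; the column names are ours, not those of the actual LIMMA output.

```python
import numpy as np
import pandas as pd

# Toy differential-expression table; in the real analysis this would be the
# per-probe-set output of the linear-model fit (log2 fold change and p value).
results = pd.DataFrame({
    "probe_set": ["Bt.1", "Bt.2", "Bt.3", "Bt.4"],
    "log2_fc":   [1.4, -0.3, -2.1, 0.9],
    "p_value":   [0.01, 0.20, 0.03, 0.04],
})

P_CUTOFF = 0.05
FC_CUTOFF = 2.0                      # linear fold change
LOG2_FC_CUTOFF = np.log2(FC_CUTOFF)  # = 1.0 on the log2 scale

deg = results[(results["p_value"] <= P_CUTOFF) &
              (results["log2_fc"].abs() >= LOG2_FC_CUTOFF)]
print(deg)   # Bt.1 (up-regulated) and Bt.3 (down-regulated) pass the filter
```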
The preclinical stage was characterized by enrichment in gene clusters related to the chemokine signaling pathway, extracellular region, secreted protein, immune response, pyridoxal phosphate binding, transcription, myeloid-associated differentiation marker, B cell proliferation, extracellular matrix, RNA metabolic process, MHC class I, Laminin G and response to wounding (Fig 2A).

When comparing the differentially regulated probe sets identified in the preclinical and clinical groups, it was found that 35 differentially expressed probe sets (corresponding to 32 DEGs) were common between the two stages of disease (Fig 3), leaving 172 DEGs specific to clinical and 78 genes specific to preclinical animals. Remarkably, all of the 32 common DEGs displayed a very similar pattern of expression in the clinical and preclinical groups, as shown in Fig 3B. These genes are listed in bold in S1 and S2 Tables.

To further dissect gene expression alterations during the progression of the disease, we performed a statistical analysis to identify specific changes between the clinical and preclinical stages (CvsP). Indeed, we found 235 DEGs, which were significantly enriched in pathways related to immune response (regulation of B cell proliferation, leucocyte activation, ISG15-protein conjugation and chemokine signaling were among the most significant). The list of the most relevant enriched probe sets can be found in S3 Table.

We used a Venn diagram to compare the DEGs found in the PvsCtrl, CvsCtrl and CvsP analyses that were previously performed and then examined the expression levels of the common DEGs (Fig 4). The Venn diagram revealed the presence of one DEG in common between the PvsCtrl, CvsCtrl and CvsP comparisons, while 22 genes were differentially expressed in the PvsCtrl and CvsP but not in the CvsCtrl comparisons (Fig 4A). We found that these 22 DEGs had an opposite fold change sign in PvsCtrl and CvsP, thus indicating an oscillatory pattern of expression (see Fig 4B, 4C and Table 3). In particular, 9 out of 22 DEGs were up-regulated in the preclinical phase and then went back roughly to the expression level of the controls in the clinical stage (Fig 4B). The remaining 13 out of 22 DEGs were down-regulated in the preclinical phase and then went back almost to control levels in the clinical phase (Fig 4C). The only gene in common between the three comparisons, namely Sel-1 Suppressor Of Lin-12-Like 3 (SEL1L3), was down-regulated in all of them (PvsCtrl, CvsCtrl and CvsP).

Validation of Microarray Results by RT-qPCR

To confirm the microarray results, RT-qPCR analysis was performed using the SYBR Green assay. A normalization factor based on three reference genes (glyceraldehyde-3-phosphate dehydrogenase, GAPDH; ribosomal protein L12, RPL12; actin, beta, ACTB) was used for the analysis. The stability of the selected reference genes was determined by calculating their geNorm M value (M) and the coefficient of variation (CV) on the normalized relative quantities (S2 Appendix) [50]. Nine DEGs related to different functional categories were chosen for the validation: GNLY (granulysin), CD40L (CD40 ligand), PDK4 (pyruvate dehydrogenase lipoamide kinase isozyme 4), IDO1 (indoleamine 2,3-dioxygenase 1), HBA2 (hemoglobin, alpha 2), XIST (X-inactive specific transcript), GNB4 (guanine nucleotide binding protein beta polypeptide 4), BOLA (MHC class I heavy chain), and SEL1L3 (Table 4). These genes were selected on the basis of their fold changes, p values and relevance in the literature.
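The Venn-diagram comparison described above is plain set algebra on the three DEG lists (PvsCtrl, CvsCtrl and CvsP). The sketch below shows how the "common to both stages", "oscillatory" and "shared by all three comparisons" sets fall out of it; the toy gene lists are loosely modeled on the genes discussed in this study but are not the actual DEG tables.

```python
# Toy DEG lists for the three comparisons; gene membership is illustrative only.
p_vs_ctrl = {"SEL1L3", "GNLY", "CD40L", "HBA2", "GENE_A", "GENE_B"}
c_vs_ctrl = {"SEL1L3", "GNLY", "CD40L", "PDK4", "GENE_C"}
c_vs_p    = {"SEL1L3", "GENE_A", "GENE_B", "GENE_D"}

# Genes altered in both the preclinical and the clinical stage (vs controls).
common_both_stages = p_vs_ctrl & c_vs_ctrl

# "Oscillatory" signature: altered in PvsCtrl and CvsP, but back to control
# levels in the clinical stage, i.e. absent from CvsCtrl.
oscillatory = (p_vs_ctrl & c_vs_p) - c_vs_ctrl

# Genes altered in every comparison (SEL1L3 in the real data).
all_three = p_vs_ctrl & c_vs_ctrl & c_vs_p

print(sorted(common_both_stages))  # ['CD40L', 'GNLY', 'SEL1L3']
print(sorted(oscillatory))         # ['GENE_A', 'GENE_B']
print(sorted(all_three))           # ['SEL1L3']
```

On the real lists, the middle expression yields the 22-gene oscillatory signature and the last one yields SEL1L3.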
Moreover, because we performed several statistical analyses of the microarray data, we chose genes that appeared as differentially expressed with the highest frequency in different resulting datasets. Despite microarray p value for HBA2 was not significant, this gene was selected for the RT-qPCR validation since a previous work published by our group highlighted its involvement in prion pathogenesis [51]. The RT-qPCR analysis confirmed the microarray results for six out of nine genes selected (XIST, CD40L, GNLY, PDK4, HBA2 and SEL1L3), which are represented in Table 5 and Fig 5: CD40L, XIST, SEL1L3 and GNLY downregulation was confirmed in both preclinical and clinical groups, while HBA2 was significantly down-regulated only in preclinical animals. Significant PDK4 upregulation was found in the clinical stage, but not in the preclinical one. These results were in line with the microarray data (Table 5). Discussion Whole blood is the most suitable tissue for a prospective rapid diagnostic test since it minimizes sample handling artifacts and reduces sample variability due to fractionation. The present study revealed a substantial gene expression alteration in whole blood from atypical BSEinfected cattle, which could be investigated in future experiments and, if confirmed, could be exploited as a signature for the disease. One of the major caveats in using peripheral blood is that its cellular components may change dramatically during infections or inflammation. The animals used in the present study did not show any apparent side pathology, they were monitored daily by the husbandry staff and their blood was examined for serum aspartate aminotransferase (AST), creatine kinase (CK) and manganese [23]. Nonetheless, possible interference due to hidden pathologies or to inter-individual variations in hematocrit and white blood cell count may affect interpretation of expression data and should be considered as an important variable in future studies. In light of this, the present results should be read as a first exploration of whole blood transcriptomics during a prion infection. To our knowledge, this is the first microarray study of whole blood from BSE-infected cattle. Indeed, in a study published by Panelli et al. [36], fractionated white blood cells were analyzed to detect gene expression changes in L-type infected animals. Very few DEGs are common between the two studies. However, this discrepancy may be explained by different infection methods, microarray platforms, statistical analysis stringency and p value cut-off. Also, the white cells used in the study of Panelli et al. were isolated from 1 year post-infection animals, while in our study we used whole blood from preclinical and clinical infected cattle (around 6 months and 22-26 postinfection respectively). In the present study, 4 statistical comparisons were performed: infected (preclinical and clinical) versus control (IvsCtrl), preclinical versus control (PvsCtrl), clinical versus control (CvsCtrl) and clinical versus preclinical (CvsP) comparisons. Since our goal was to find a common pattern among all the atypical BSE-infected cattle, we defined them as a single group of infected (H-type and L-type) animals. Indeed, as published by Konold and colleagues [23], these animals shared a very similar phenotype in terms of behavioral and clinical signs. 
A similar anatomical distribution has also been reported for the atypical strains, with only slight differences in the overall intensities between H- and L-type [27]. Nonetheless, even if H- and L-type BSE are reported to share many similarities, they constitute two distinct BSE variants, which are characterized by a different electrophoretic mobility of the PrP Sc unglycosylated moiety after proteinase K (PK) digestion [16,18]. Statistical comparison between the 4 H-type and the 4 L-type infected animals was carried out in preliminary analyses (see S4 Table), but only a limited number of DEGs was found. Among them, only 15 had a p value lower than 0.01 and only 16 showed a fold change higher than 3, indicating that, at least in terms of number of DEGs, these two groups did not display large differences. For these reasons, we decided to focus our attention on finding a common gene pattern among all the atypical BSE-infected cattle and therefore we pooled the two groups. Due to the high inter-animal variability, which is expected for outbred animals, further studies in larger animal cohorts would be required to investigate in detail the strain-specific gene expression changes occurring during the progression of the disease. Still, the HvsL analysis can be used as a sort of internal control in this study.

Another aspect to be taken into account when reading the present results is that additional negative control cattle, aged from 12 to 37 months and derived from a different herd compared to Konold's study groups [23], were introduced in the analyses. The addition of these controls was useful to balance the samples from infected animals and allowed a preliminary exploration of the differentially expressed transcripts. However, age-related and environmental variability may have affected the data to some degree and needs to be considered for their correct interpretation. Despite some limitations, since several statistical analyses were performed (including the CvsP analysis, in which all the animals derived from the same herd), a cross-comparison of all of them, as we did with the Venn diagram, may be very useful in order to define a set of genes which could be a good starting point for further validation experiments in the future.

In the first statistical analysis we performed (IvsCtrl), we found that among 101 DEGs, 93 had known functions and were involved in several biological processes and molecular pathways, such as autoimmune thyroiditis, chemokine and cytokine activity, regulation of the secretion pathway, the immune system and antigen presentation [52]. Previous studies on CNS tissues from BSE-infected animals also showed the involvement of many of these pathways in prion pathogenesis [4,35,52]. This similarity between brain and blood may not be surprising, since it has been shown in the literature that blood transcriptome analyses identify genes that are relevant to the pathological processes occurring in the CNS [53]. Indeed, measuring disease-related gene expression in peripheral blood may be a useful proxy measure for gene expression in the CNS [53,54].

To characterize the gene expression profile in the preclinical and clinical stages, we performed the PvsCtrl and the CvsCtrl statistical comparisons. We found that 113 probe sets were differentially regulated in the preclinical stage of the disease, while 207 probe sets had an altered expression in the clinical phase.
Importantly, the present results indicated that, at least in blood, a consistent gene expression alteration is present from the early stages of the disease. This finding is in agreement with microarray analysis carried out by Tang et al., which revealed the highest degree of differential gene regulation in brains of cBSE-infected cattle at 21 months post infection, which is prior to the detection of infectivity [4]. Also, Tortosa and colleagues found a significant number of DEGs at early stages of the disease in the CNS from cBSEinfected transgenic mice [52]. Venn diagram analysis revealed that 32 DEGs were in common between the clinical and preclinical groups and, remarkably, they had a very similar pattern of expression in both stages of the disease. Since these genes are altered in both phases, it would be very interesting to confirm their differential expression in future experiments with additional negative controls, and eventually in blood from human patients. Based on GO enrichment analysis, we found that immunity and inflammation processes were strongly involved during the progression of the disease stages. Interestingly, we found that antigen processing and presentation via MHC (major histocompatibility complex) molecules and the autoimmune thyroiditis pathway were significantly altered in atypical BSE-challenged animals. The majority of MHC class I molecule coding-genes were down-regulated in infected cattle (three out of four probes) and, also, MHC class II molecule coding transcripts were found to be down-regulated during the progression of the clinical signs (four out of four probes were down-regulated in the CvsP comparison). The involvement of MHC transcripts in prion pathogenesis is supported by another microarray study published by Khaniya and colleagues in 2009 [55]. In line with the trend found by the microarray analysis, the RT-qPCR validation experiments indicated a downregulation for MHC class I heavy chain (BOLA), even though the results failed to reach the statistical significance (data not shown). Regarding the autoimmune thyroiditis pathway, it is well known in the literature that Hashimoto's encephalitis, together with the associated thyroiditis, is a differential diagnosis for CJD, since the two pathologies share a very similar clinical symptomatology [56]. As hypothesized previously by Prusiner and colleagues, the clinical and neuropathological similarities between CJD and Hashimoto's thyroiditis raise the possibility that protein misprocessing may underlie both neurodegenerative and autoimmune diseases [57]. Finally, a fourth statistical analysis was performed to identify any specific changes between the clinical and the preclinical stages of disease (CvsP). Indeed, we found that the last phases of the disease are accompanied by the overactivation of several genes involved in the immune defense response. In particular, the shift from the preclinical towards the clinical stage was characterized by the upregulation of genes involved in B cell proliferation and the ISG15 (IFNinduced 15-kDa protein) conjugation system. ISG15 is a ubiquitin-like molecule that is tightly regulated by specific innate immunity signaling pathways [58]. Interestingly, it has been shown in the literature that this protein is over-activated in the spinal cord of amyotrophic lateral sclerosis mice models [59] and it has been indicated as a general marker for both acute and chronic neuronal injuries [60]. 
To further analyze the data, we compared the list of DEGs found in PvsCtrl, CvsP and CvsCtrl and found 22 genes with an oscillatory pattern of expression, being differentially expressed in the preclinical stage and then going back roughly to the control level in the clinical stage. Interestingly, some of the oscillatory DEGs are involved in regulation of transcription, thus suggesting that the gene expression during atypical BSE infection is tightly regulated. Venn diagram analysis revealed that one gene, SEL1L3, was down-regulated in all the comparisons (PvsCtrl, CvsCtrl, CvsP). SEL1L3 codes for a transmembrane protein whose function is unknown. Interestingly, an important paralog of SEL1L3, SEL1L, is involved in the retrotranslocation of misfolded proteins from the lumen of the endoplasmic reticulum to the cytosol, where they are degraded by the proteasome in an ubiquitin-dependent manner [61]. Therefore, we could hypothesize that its down-regulation in prion infected animals would lead to a reduced degradation of PrP Sc , thus supporting the progression of the disease. We validated this gene by RT-qPCR, confirming its downregulation in both the preclinical and clinical stages of the disease. Further investigation on the function of SEL1L3L would be of great interest since this gene may play an important role in prion disease and maybe other neurodegenerative illnesses. Besides SEL1L3, five other genes were validated by RT-qPCR; here we will briefly discuss how these genes may be involved in prion pathogenesis and in host response to prion infection. GNLY and CD40L were found to be down-regulated in both preclinical and clinical stages. GNLY is a powerful antimicrobial protein contained within the granules of cytotoxic T lymphocyte and natural killer cells. This gene was found to be downregulated also in a microarray study performed on the medulla oblongata from sheep with preclinical natural scrapie [62]. Thus, it may be a good candidate as an early biomarker for atypical BSE but also for other prion diseases. CD40-CD40L interactions mediate a broad variety of immune and inflammatory responses and have been implicated in the pathogenesis of Alzheimer's disease (AD) [63,64]. Although the importance of CD40L in prion disease progression has not yet been clarified (66-68), its downregulation in blood during both preclinical and clinical stages of atypical BSE-infection suggests that prion infection has an impact on the host immune system response and that immune tolerance may be an active process induced by prions. Two other downregulated genes were validated by RT-qPCR, namely HBA2 and XIST. Concerning HBA2, we found a downregulation in preclinical atypical BSE-infected cattle. Haemoglobins are iron-containing proteins that transport oxygen in the blood of most vertebrates. Beside blood, HBA and HBB are also expressed in mesencephalic dopaminergic neurons and glial cells [65] and are down regulated in AD, PD and other neurodegenerative diseases [66]. Haemoglobin genes expression alteration during preclinical scrapie was also found in the spleen and CNS of infected animals [67,68], as well as in the brains of nonhuman primates infected with BSE [51]. These findings suggest an involvement of these genes in the host response to general neurodegenerative processes. Besides changes in transcript levels, it has been found that both HbA and HbB protein distribution is altered in mitochondrial fractions from PD degenerating brain [69]. 
Moreover, HbA is also expressed in endothelial cells, where it regulates nitric oxide signaling [70]. Even though a clear mechanism linking these molecules to neurodegeneration has not yet been described, taken together these findings strongly suggest a central role for haemoglobin in neurodegenerative processes. A marked downregulation of XIST expression was found in our infected animals. XIST is a gene located on the X chromosome that codes for a long non-coding RNA (lncRNA) involved in X-chromosome dosage control [71,72]. LncRNAs are emerging as useful biomarkers for neurodegenerative diseases such as AD [73] and other disease processes [74], and they can easily be detected in blood and urine from patients. In addition, we cannot exclude the possibility that the alteration in XIST expression may have some role in a gender-dependent response to prion infection [75]. RT-qPCR experiments confirmed the upregulation of PDK4 in clinically affected animals. PDK4 encodes a mitochondrial protein involved in glucose metabolism through inhibition of the pyruvate dehydrogenase complex, which leads to a reduction in pyruvate conversion to acetyl-CoA [76]. A key role has been suggested in the literature for acetyl-CoA supply in the survival of cholinergic neurons in the course of neurodegenerative diseases [77]. PDK4 overactivation can lead to a switch from glucose catabolism to fatty acid utilization [78], thus increasing the production of ketone bodies. Notably, these molecules are known to be able to cross the blood-brain barrier. We could speculate that in prion infection (or at least in atypical BSE infection) the concentration of ketone bodies would rise in blood as a consequence of PDK4 upregulation, and that these molecules act in the brain as neuroprotective agents [79,80]. This would be an attempt by the organism to prevent the neurodegeneration induced by prions.

Conclusions

In conclusion, the present study has led to the identification of several gene expression changes in whole blood from clinical and preclinical atypical BSE cattle which, upon further investigation and validation in blood from human patients, might represent a molecular fingerprint to characterize this disease. By comparing our results with other studies on various animal prion diseases, we observed that some of the most significantly altered DEGs we found in blood were also differentially expressed in brain tissue from BSE-infected cattle; this observation indicates that whole blood transcriptome analyses may serve as a proxy measure for the changes occurring in the CNS of infected animals. Furthermore, our study underlines the value of whole blood, without any additional manipulation, as a source tissue, since it is an easily accessible body fluid. In addition, the transcriptional regulation activated in atypical BSE infection is to some extent similar to that observed in the literature for cBSE, even though the clinical characteristics and biochemical properties are very different. Thus, this gene expression profile may be investigated in other BSE infections to identify a common molecular fingerprint. Overall, our study confirmed the differential expression of six genes (XIST, CD40L, GNLY, PDK4, HBA2 and SEL1L3), which may play several roles in atypical BSE pathogenesis and, possibly, in other prion infections. Indeed, they are involved in multiple pathways such as immune response, inflammation, and glucose catabolism.
Even though further studies are required to investigate the specific involvement of all the identified genes in prion diseases, our data indicate an important role for immune system regulation in the prion pathogenesis of atypical BSE, and maybe in BSE as well as in other prion diseases in general.

Supporting Information

S1 Appendix. Post-hybridization quality assessment. (A) Normalized unscaled standard error (NUSE), (B) relative log expression (RLE) and (C) raw signal intensity plots are used to check for technical problems and to spot outlier samples after GCRMA normalization. Box plots centered higher than normal (typically above 1.1 in the NUSE plot) and/or having a larger spread in the RLE plots represent arrays with quality problems. One outlier was easily identified by post-hybridization quality assessment (black arrow in panels A and B, control sample cS5).

Table. Functional classification of differentially expressed genes found in blood of clinical versus preclinical animals. The gene enrichment analysis was performed using the DAVID bioinformatics tool 6.7 (NIAID/NIH, USA). Only genes with a known GO annotation are represented in the list. DEGs that fell into more than one category are, for simplicity, presented under a single functional heading. FC and P values refer to the microarray analysis. (XLSX)
Regional variation in prescription drug spending: Evidence from regional migrants in Sweden

Abstract

There is substantial variation in drug spending across regions in Sweden, which can be justified if caused by differences in health need, but is an indication of inefficiencies if primarily caused by differences in place-specific supply-side factors. This paper aims to estimate the relative effect of individual demand-side factors and place-specific supply-side factors as drivers of geographical variation in drug spending in Sweden. We use individual-level register data on purchases of prescription drugs matched with demographic and socioeconomic data of a random sample of about 900,000 individuals over 2007-2016. The primary empirical approach is a two-way fixed effect model and an event study where we identify demand- and supply-side effects based on how regional and local migrants change drug spending when moving across regional and municipal borders. As an alternative approach in robustness checks, we also use a decomposition analysis. The results show that the place-specific supply-side effect accounts for only about 5%-10% of variation in drug spending and the remaining variation is due to individual demand-side effects. These results imply that health policies to reduce regional variation in drug spending would have limited impact if targeted at place-specific characteristics.

If a region has, for example, a high availability of health care providers, this could be considered a supply factor that can cause more spending. However, the high availability could result from a long-term high demand due to large health needs. A number of recent studies on regional variation have used regional migration data to try to separate demand and supply factors (Godøy & Huitfeldt, 2020; Molitor, 2018; Moura et al., 2019; Salm & Wübker, 2020; Song et al., 2010; Zeltzer et al., 2021). The general idea in this approach is that if place-specific institutional features (supply) are most important, we expect regional migrants to change health care utilization behavior when moving into a new region (Finkelstein et al., 2016). On the other hand, if individual characteristics and preferences (demand) cause the regional differences, we expect migrants' health care use to remain (fairly) constant after moving. Previous studies from various settings have estimated divergent place effects, with 9% (Salm & Wübker, 2020) to 60% (Finkelstein et al., 2016) of variation in health spending caused by supply factors. The ambiguity of results suggests that institutional settings are important for the size of the regional variation and the drivers behind the variation (Godøy & Huitfeldt, 2020; Salm & Wübker, 2020). It is not obvious at what level of aggregation regional variation should be analyzed (Zhang et al., 2012), and previous studies have used variation across provinces, hospital referral regions or postal codes (e.g., Godøy & Huitfeldt, 2020; Moura et al., 2019). Moreover, previous research has analyzed non-drug health care spending or total health care spending. Less is known about the distribution and causes of regional variation in drug spending (Zhang et al., 2010), even as drugs alone represent about 20% of health care spending in the average OECD country (OECD, 2017). In this paper, we run a two-way fixed effects model and an event study analysis with regional migration data from Sweden to study what explains regional variation in prescription drug spending.
We analyze the causes of regional variation on two levels of aggregation -across 21 regions (counties) and 290 municipalities. We exploit prescription drug spending as the outcome of regional variation, because there is no reason to assume that the geographical pattern of variation is the same for each component of health care spending. Zhang et al. (2010) showed a weak correlation (r = 0.10) between drug spending and non-drug health care spending across hospital referral regions in US Medicare, and pointed out that drugs can work as either a complement or a substitute for medical care. Moura et al. (2019) subcategorized total spending and found that place effects explained 28% of variation in prescription drug spending, and in contrast, around 21% of variation in primary care spending. It is important to understand the drivers of regional variation for each component of health spending, as the causes may differ by type of health care. The Swedish market for prescription drugs is a good case study considering that the institutional rules may reduce the impact of supply-side factors. In general, the single-payer national health service system is characterized by universal coverage, low cost-sharing, and salary-paid physicians with minor (or no) economic incentives to over-treat. Specific for the market of prescription drug is that physicians, hospitals, and primary health care centers have limited (if any) direct economic incentives to prescribe larger volumes or more expensive drugs than needed (in some cases even incentives to prescribe less when costs are carried by the clinic). Additionally, prescription drug prices are fixed nationally and for generic drugs, pharmacies are required to offer the cheapest generic alternative irrespective of which brand-name drug was prescribed (Granlund, 2010). In the rest of the paper, we show that the average regional drug spending per capita per year varies from −7% to +28% around the national mean. The documented regional variation is similar to variations in total health care spending in the Netherlands (Moura et al., 2019), but smaller than the reported regional variation in drug spending in the US (Zhang et al., 2010). The variation across municipalities is, as expected, larger, −28% to +103% around the national mean. Our results show that the place effect accounts for only about 5%-10% of the variation in drug spending in regions and municipalities, which is at the lower end of the scale compared with previous estimates. The remaining 90%-95% of the variation in drug spending is driven by an individual demand-side effect. Our study makes two main contributions to the literature on the causes of regional variation in health care spending. First, we provide evidence of an individual effect as the main driver of variation in drug spending. Our results emphasize the importance of institutional settings in general, but also the particular institutional rules by type of health care -indicating that place-effects play a limited role in a national system with few incentives to over-treat and with a generic substitution policy. Second, our results show that the level of aggregation (regions or municipalities) does not change the qualitative interpretation of our results, even though a lower level of aggregation reduce the uncertainty of our estimates. The paper is structured as follows: in Section 2 we describe the institutional setting and data, and in Section 3 we present the empirical method. 
We present the results in Section 4, robustness checks in Section 5 and conclude with a discussion in Section 6.

Institutional setting

The Swedish national health service offers universal coverage for all residents. The system is decentralized in 21 regions with responsibility to finance and provide health care. The regions subdivide into 290 municipalities with responsibility (among other things) for long-term care. The provision of health care is carried out by a mix of public and private providers, and all providers are reimbursed at the same rate through public funds (regional and municipal income taxation). Health care is subsidized at point of service with relatively small out-of-pocket prices for health services, identical across providers within the same region (private and public). The cost-sharing scheme for prescription drugs is identical for all regions and takes the form of a deductible with multiple thresholds, where the patient annually pays a maximum of €224 (€1 = 10.5 SEK, year 2019). The patient out-of-pocket price for prescription drugs is the same irrespective of whether the physician is employed in the public or private sector. The national Dental and Pharmaceutical Benefits Agency (TLV) regulates which drugs are included in the national pharmaceutical benefits scheme based on health need, disease severity, and cost-effectiveness (Svensson et al., 2015). If a drug is approved, it is sold at private pharmacies throughout the country at the fixed price agreed by the producer and TLV. In 2009, the pharmacy market was deregulated from a single state-owned pharmacy to allow for multiple private owners, which led to a 22% increase in the number of pharmacies (Anell et al., 2012; Swedish competition authority, 2010).

Sample

We base our analysis on a random sample of 1 million individuals of the Swedish population, followed over 10 years (2007-2016). After excluding children under 15, the sample consists of 929,711 individuals and about 8.2 million individual-year observations. The data set contains details on all purchases of prescribed drugs matched with demographic and socioeconomic background statistics at the individual level. The data have been collected from the National Board of Health and Welfare's register of prescribed drugs and population registers of Statistics Sweden and merged using individual identification numbers. Key variables for our analyses are total drug expenditures per year, that is, the sum of the cost for the payer (the region) and the patient's out-of-pocket costs, and the place of residence. We assess regional variation across the 21 Swedish regions and the 290 municipalities, motivated by the organizational structure and data availability. The regional ethics review board in Gothenburg approved the merging of registers and the analysis plan (#803-17).

Drug spending at the individual level

Drug spending per capita is highly right-skewed. In the entire sample, almost 30% of observations are zeros (Table 1). The mean drug spending per capita and year is €319 and the median is €44. The highest cost per patient and year is above €1.6 million. We base our identification strategy on individuals who move across region borders. We define regional migrants as individuals who move between regions once during the study period and where we can follow spending both before and after the move (i.e., moves that occurred between 2008 and 2015).
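As a rough illustration of how single movers can be flagged in a yearly individual-by-region panel (this is not the authors' actual register-processing code, and the column names id, year and region are assumptions):

```python
import pandas as pd

def flag_single_movers(panel: pd.DataFrame) -> pd.DataFrame:
    """Identify 'single movers' in a long panel with columns ['id', 'year', 'region'].

    A single mover changes region exactly once, so spending can be followed
    both before and after the move. Returns one row per individual with the
    number of moves and the first year observed in the destination region."""
    panel = panel.sort_values(["id", "year"])
    grp = panel.groupby("id")
    # True whenever the region differs from the previous year's region
    # (the first observation of each individual is excluded).
    moved = (panel["region"] != grp["region"].shift()) & (grp.cumcount() > 0)
    out = pd.DataFrame({
        "n_moves": moved.groupby(panel["id"]).sum(),
        "move_year": panel.loc[moved, ["id", "year"]].groupby("id")["year"].first(),
    })
    out["is_single_mover"] = out["n_moves"] == 1
    return out
```

Additional restrictions, such as requiring the move year to fall between 2008 and 2015 so that spending is observed both before and after the move, can be layered on top of the returned flags.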
The sample of movers consists of 53,620 regional migrants and 507,510 migrant-year observations (Table 1). The sample almost doubles when we include individuals who move across municipal borders: 102,943 municipal migrants and 977,359 migrant-year observations. The migrants differ from non-migrants; the most notable differences are that migrants are younger (mean age 36 vs. 50 years), have a higher education level, a higher proportion is unmarried, and the migrants have considerably lower drug spending (Table 2).

Drug spending on the regional level

Across the 21 Swedish regions (NUTS 3, Eurostat European Commission, 2018), average regional drug spending per capita per year varies from −7% to +28% around the national mean (Figure 1). The size of variation is similar to that of variations in total health care spending in the Netherlands' provinces (Moura et al., 2019). Expressing the variations in terms of the ratio of highest to lowest, the ratio of 1.38 (Table 3) is smaller than the variation in drug spending in US Medicare (ratio 1.6) (Zhang et al., 2010). Figure A1 in the Appendix shows that regional averages are stable over the study period (Spearman's correlations estimated between 0.72 and 0.96 over each pair of the 10 years). The variation across municipalities is, as expected, larger, varying between −28% and +103% around the national mean (Figure 2). Regional variation in drug spending is only weakly correlated to gross regional product (Spearman's correlation coefficient of 0.20 over the years 2007-2016, results available on request).

Table 3. Distribution of drug spending on the regional and municipal level (€).

EMPIRICAL APPROACH

Our main approach to differentiate between demand- and supply-side factors is a two-way fixed effect model with individual and year fixed effects, and an event study specification (Finkelstein et al., 2016). We use an alternative decomposition approach as a robustness check, described further in Section 5. The idea behind the main analysis is to see how drug spending changes when an individual moves to a different region (municipality) with different supply-side characteristics and spending levels. The analysis only includes individuals in the sample who moved between regions (municipalities) during the study period. The two-way fixed effect equation that we estimate is:

y_{it} = θ · D_i · post_{it} + x_{it}'β + Σ_r τ_r · s_{it}^r + γ_t + α_i + ε_{it}    (1)

where y_{it} is the log drug spending of individual i in year t. The main independent variable of interest is the difference in mean log drug spending between the region of origin and the region of destination:

D_i = ȳ_{d(i)} − ȳ_{o(i)}    (2)

for individual i who moves from region o(i) to region d(i). The regional mean log spending ȳ_j is the pooled mean over the 10 years for region j. Using the mean of log spending, D_i is approximately the percentage difference in spending between the two regions d(i) and o(i). With 21 regions, D_i can take up to 21 × 20 = 420 distinct values (with 290 municipalities, D_i can take up to 290 × 289 = 83,810 distinct values). Figures A2-A3 in the Appendix show that the distributions of the migrants' D_i-values for regions and municipalities are approximately symmetric, implying that moves from high- to low-consumption regions (municipalities) are as common as moves from low- to high-consumption regions (municipalities). This supports the assumption that moving to a different area is exogenous with respect to the consumption of prescription drugs. The binary indicator post_{it} takes the value 1 in years after the move and 0 otherwise.
The main parameter of interest is θ, which will reflect the (percentage) change in individual spending in the years after the move, given the difference in average log spending between origin and destination region (D_i). x_{it} is a vector of time-varying individual characteristics and β is a vector of parameters to be estimated. Included individual-level variables are: binary indicators for gender-specific 10-year age groups (women 30-39, men 40-49, etc.), individual disposable income, and family situation defined by marital status and number (and age) of children in the household. s_{it}^r is a vector of binary indicators for years since the move, where r is the number of years after the move, accounting for effects of migration that are unrelated to D_i (s_{it}^1 takes the value 1 for year one after the move and 0 otherwise, s_{it}^2 takes the value 1 for year two after the move and 0 otherwise, etc.), with τ_r the corresponding coefficients. Additionally, γ_t is year fixed effects, α_i is individual fixed effects, and ε_{it} is an error term that represents unobserved individual characteristics. θ is the share of variation attributed to a place effect (Salm & Wübker, 2020). If θ = 1, the difference in region average spending completely predicts individual spending changes at the time of a move, adjusted for changes in included individual-level covariates, and variations are driven by regional "supply" characteristics. If θ = 0, the difference in region average spending does not affect individual spending, assuming that individual "demand" characteristics cause the variations. We expect θ to have a value between 0 and 1, such that the place effect explains regional variation in part (θ), and the individual effect explains the remaining part (1 − θ). To draw causal conclusions about θ, the following exogeneity assumption must hold:

E[ε_{it} | D_i · post_{it}, x_{it}, s_{it}, γ_t] = 0    (3)

The assumption requires that the explanatory variables, such as D_i · post_{it}, are unrelated to unobserved individual-level time-varying characteristics (ε_{it}). The expression makes no assumption about α_i, which implies that a potential association between D_i and time-invariant unobserved factors, such as stable patient preferences, does not violate the causal interpretation of θ (Salm & Wübker, 2020). Something that may violate the exogeneity assumption is if unobserved time-varying characteristics are systematically correlated to D_i · post_{it} or to other included covariates. That could potentially arise if individuals experiencing a negative health shock tend to move to regions with higher drug spending, if the effect of D_i is nonlinear or asymmetric (e.g., different impact on drug spending depending on if moving to a high- or low-consumption region) or if θ varies over time (Salm & Wübker, 2020). To assess potential spending trends in years before and after the move, we estimate year-specific θ_r's in the following event study regression:

y_{it} = Σ_r (τ_r + θ_r · D_i) · s_{it}^r + x_{it}'β + γ_t + α_i + ε_{it}    (4)

where D_i is interacted with binary indicators for each year before and after the move, allowing the effect of D_i to differ each year. We set the coefficient for the year before the move (r = −1) to zero. Concerning our relatively small sample of regional migrants, we restrict the binary indicators to years −5 ≤ r ≤ 5 around the move, but include all available years around the move (−8 ≤ r ≤ 8) for the analysis of municipal migrants. The estimated model in Equation (4) tests whether there are systematic changes in log drug spending pre- and post-move, while the main Equation (1) assumes that the pre-trend is flat.
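A stylized sketch of the estimation idea behind Equation (1) is given below: it constructs D_i from pooled regional means of log spending, applies a two-way within transformation (individual and year fixed effects) and recovers the θ point estimate by least squares. This is not the authors' code; column names are illustrative, and the covariates x_{it}, the years-since-move indicators and clustered standard errors are omitted for brevity.

```python
import numpy as np
import pandas as pd

def estimate_theta(panel: pd.DataFrame) -> float:
    """Point estimate of the place effect theta in a stripped-down version of
    Equation (1). Expected columns (illustrative names): 'id', 'year',
    'region', 'spend', 'origin', 'destination', 'post'."""
    df = panel.copy()
    df["log_spend"] = np.log(df["spend"] + 1)

    # D_i: destination-region mean log spending minus origin-region mean,
    # where the regional means are pooled over all years.
    region_mean = df.groupby("region")["log_spend"].mean()
    D = (region_mean.reindex(df["destination"]).to_numpy()
         - region_mean.reindex(df["origin"]).to_numpy())
    df["D_post"] = D * df["post"].to_numpy()

    # Two-way within transformation: alternately remove individual and year
    # means until convergence (absorbs alpha_i and gamma_t).
    z = df[["log_spend", "D_post"]].astype(float)
    for _ in range(100):
        previous = z.to_numpy().copy()
        z = z - z.groupby(df["id"]).transform("mean")
        z = z - z.groupby(df["year"]).transform("mean")
        if np.abs(z.to_numpy() - previous).max() < 1e-10:
            break

    # OLS of demeaned outcome on demeaned D*post gives the theta estimate.
    x, y = z["D_post"].to_numpy(), z["log_spend"].to_numpy()
    return float(x @ y / (x @ x))
```

Year-specific θ_r's for the event study in Equation (4) would be obtained analogously by interacting D_i with indicators for each year relative to the move instead of the single post indicator.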
We run the two-way fixed effects regression with ln(y + 1) as the main outcome, with and without independent variables for individual characteristics. Due to the many zeros in drug spending, we assess the robustness of our results and run the model with various forms of the dependent variable, namely ln(y + 2), ln(y + 10) and ln(y), as well as the inverse hyperbolic sine transformation (arcsinh), ln(y + √(y² + 1)) (Bellemare & Wichman, 2020). We run this set of analyses first based on variation and migration across regions and second based on variation and migration across municipalities. To further assess the importance of skewness in drug spending, as a small share of individuals have very high drug costs, we create a 95-percentile trimmed sample where we exclude the top 5% of observations. A thorough description of the trimmed sample can be found in the Supporting Information.

RESULTS

The place effect θ̂ is estimated at 0.05 in the analysis of regions, with a confidence interval of −0.11 to 0.21 (Model 2 in Table 4). Results are similar irrespective of excluding or including the individual-level independent variables in the model specification (Model 1 vs. Model 2). In alternative models, θ̂ is estimated between −0.03 and 0.06. Running the analyses of variation across municipalities, we estimate a place effect of 0.10 (CI 0.04; 0.16) in the preferred specification (Model 2 in Table 5), and θ̂'s between 0.03 and 0.11 in alternative specifications. We emphasize that a place effect of zero would indicate that the differences in regional (municipal) average spending do not affect individual spending after moving to a new region (municipality). We interpret 1 − θ̂ as the individual effect; hence, we find an individual effect of around 0.90-0.95 on the region and municipal level.

Table 4. Results of the two-way fixed effect regressions - variation across regions.

The results on the region and municipal level are similar, with narrower confidence intervals in the analysis on the municipal level (due to the larger sample of migrants). With an upper limit of the confidence interval of 0.21 in the preferred specification of the region analysis (Model 2 in Table 4) and 0.16 in the municipal-level analysis (Model 2 in Table 5), the place-specific supply-side effect is likely substantially less important than the individual demand-side effect for variation in drug spending. Running the analysis on the sub-period 2010-2016, that is, after the deregulation of the pharmacy monopoly, the place effect is similar to the main results (Tables A1-A2 in the Appendix). The variation across regions is estimated between −0.07 and −0.04, and for variation across municipalities the place effect is estimated between 0.01 and 0.10. Running the analysis with a trimmed sample and variation across regions yields results similar to the main analyses, with estimates of θ̂ ranging from −0.06 to 0.09 with wide confidence intervals overlapping zero (see Table S5 in the Supporting Information). Estimating spending trends with year-specific θ̂_r's in the event study specification, evidence of a pre-trend would suggest an over-estimation of the place effect in the main Equation (1). However, we do not find evidence of pre-trends in the years before the move (Figures 3 and 4). Altogether, we find limited evidence for a positive place effect after the move in the region-level analysis, as only one of the post-move year-specific θ̂_r's has a positive point estimate (five years after the move).
There is high uncertainty in the analysis of regions, with relatively wide confidence intervals overlapping zero. In the event study analysis on the municipal level, however, we find evidence of a positive place effect that is seen not immediately but about three years after the move. Overall, the event study results confirm that the place effect is limited or small, and that the individual effect is driving variation in drug spending.

Decomposition analysis

As a robustness check, we use a three-way fixed effects model with individual, region, and year fixed effects in a decomposition analysis (Finkelstein et al., 2016). The decomposition analysis includes migrants and non-migrants, but the identification relies on the migrants; without individuals moving across regions, it would be impossible to separate individual fixed effects from region fixed effects. We assume that drug spending is a product of observed individual characteristics, unobserved region and individual characteristics, and time effects. The decomposition analysis equation is specified as:

y_{ijt} = η_j + x_{ijt}'β + γ_t + α_i + ε_{ijt}    (5)

where y_{ijt} is the log drug spending of individual i in region j in year t, η_j is region fixed effects, ε_{ijt} is an error term that represents time-varying individual characteristics, and the rest is defined as above. The estimated region fixed effects' coefficients from this regression form the basis for the decomposition. When estimating the region fixed effects η_j, one region has to serve as the reference case and the coefficients are only relevant in comparison to each other, which implies that we choose two regions or two groups of regions for comparison in the decomposition. We define regions above median drug spending as group A (high-consumption regions) and regions below median drug spending as group B (low-consumption regions). In an alternative decomposition, we consider the comparison between regions in the top quartile as group A and regions in the bottom quartile as group B. The difference in average drug spending between groups A and B is decomposed into one part attributed to place and one part attributed to individuals, as seen in the equation:

ȳ_A − ȳ_B = (η̄_A − η̄_B) + (ᾱ_A − ᾱ_B)    (6)

where η̄_A and η̄_B are the mean place effects in groups A and B, respectively, and ᾱ_A and ᾱ_B are the mean individual effects in the respective groups (the ᾱ's are, however, not estimated in the regression model). Rearranging Equation (6), the share of drug spending variation attributed to place is estimated as:

Ŝ_place = (η̂_A − η̂_B) / (ȳ_A − ȳ_B)    (7)

where η̂_A and η̂_B are the means of the estimated region fixed effects' coefficients in groups A and B, respectively, and ȳ_A and ȳ_B are the (unweighted) means of the actual drug spending in the same groups. From this it follows that the share of regional variation attributed to individuals is estimated as Ŝ_individual = 1 − Ŝ_place. Confidence intervals for the place share and the individual share are estimated by bootstrapping the sample with 250 bootstrap replicates. We use clustered bootstrap sampling on the individual level, so for each individual drawn, the whole cluster of yearly observations is used. We use the 2.5th and 97.5th percentiles to form 95% confidence intervals.

Robustness checks results

In the decomposition analysis comparing regions above and below the median, we find a share of 0.07 of regional variation attributed to place-specific supply factors, with a bootstrapped confidence interval of −0.10 to 0.23 (Model 2 in Table 6). Figure A4 in the Appendix shows the distribution of the bootstrapped place and individual shares.
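The decomposition in Equations (6)-(7) and the clustered bootstrap can be summarized in a few lines, assuming the region fixed effects have already been estimated; again, the names are placeholders rather than the authors' implementation.

```python
import numpy as np
import pandas as pd

def place_share(region_fe: pd.Series, region_mean_spend: pd.Series) -> float:
    """Equation (7): share of the group A vs. group B spending gap attributed
    to place. Group A = regions above median spending, group B = regions
    below. Both inputs are indexed by region."""
    a = region_mean_spend >= region_mean_spend.median()
    gap_fe = region_fe[a].mean() - region_fe[~a].mean()
    gap_spend = region_mean_spend[a].mean() - region_mean_spend[~a].mean()
    return gap_fe / gap_spend

def bootstrap_ci(panel: pd.DataFrame, estimate_fn, n_rep=250, seed=1):
    """Clustered (individual-level) bootstrap percentile interval: draw
    individuals with replacement, keep each drawn individual's full set of
    yearly observations, and re-run the estimation each time. In a full
    implementation, duplicated individuals would be given new ids before
    re-estimating models with individual fixed effects."""
    rng = np.random.default_rng(seed)
    ids = panel["id"].unique()
    draws = []
    for _ in range(n_rep):
        sampled = rng.choice(ids, size=len(ids), replace=True)
        boot = panel.set_index("id").loc[sampled].reset_index()
        draws.append(estimate_fn(boot))
    return np.percentile(draws, [2.5, 97.5])
```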
Comparing the top and bottom quartiles of regions, the point estimate of the place effect is 0.11 (lower panel of Table 6). We note that the results deviate slightly in the simpler model without independent variables (Model 1), estimating a place effect of 0.28 but with a wider confidence interval. The results from the decomposition are in line with our main results and show that individual-level characteristics outweigh place-specific characteristics as the main drivers of regional variation. The results from alternative model specifications are similar to the main results (Table A3 in the Appendix).

Table 6. Results from the decomposition analysis. Note: The effect shares are estimated in fixed-effects regressions. For each model, the decomposition is estimated comparing regions above/below median spending and regions above p75/below p25 of spending. Confidence intervals are estimated by bootstrapping with 250 repetitions drawn at the individual level and composed of the 2.5 and 97.5 percentiles of the bootstrap estimates. In Model 1, the regression is run without independent variables of individual characteristics. In Model 2, independent variables include indicators for age-gender group, individual income, marital status, and the number of children in the household.

DISCUSSION

In this paper, we have estimated the relative effect of individuals and place on the variation in drug spending. In our main analysis, we estimate that the place effect accounts for about 5%-10% of the variation in drug spending. The results indicate that most of the variation in drug spending is caused by individual-level demand factors, both concerning regional and municipal level variation. Robustness checks using a decomposition analysis support our results. There is only one study, to our knowledge, that has estimated the relative effect of individuals and place in regional drug spending variation using a similar approach: Moura et al. (2019) estimated a place effect of 28% for variation in prescription drug spending, indicating a more prominent supply-side effect compared to our results. A larger place effect was also found for regional variation in US Medicare health service spending and in Norwegian hospital spending, at about 50% (Finkelstein et al., 2016; Godøy & Huitfeldt, 2020). Our point estimates are closer in magnitude to variation in outpatient services in Germany, where Salm and Wübker (2020) estimated a place effect of about 10%-20%. As noted by other authors, current evidence strongly suggests that the causes of regional variation differ depending on institutional setting, but also by type of care. The relatively small place effect found in German outpatient care, in line with our results, was interpreted as a result of high restrictions on physicians combined with many available choices for patients (Salm & Wübker, 2020). The larger place effect found in Norwegian hospital spending was argued to be reasonable given the context of the demographic, geographic, and environmental conditions in Norway, with low population density and long travel times (Godøy & Huitfeldt, 2020). The Swedish setting has several aspects similar to the German and the Norwegian, such as low cost-sharing and regulations on physicians' treatment alternatives. But relative to Germany, patients in Sweden have fewer options for choosing provider (varying by type of care and where in the country the patient lives).
In the setting of prescription drugs, one of the main features that likely affect costs of drugs, is that physicians have limited, or no, economic incentive to "over-prescribe". Together with fixed prices on national level and pharmacies obligation to offer the cheapest generic when available (Granlund, 2010), these regulations likely limit both the size of regional variation and the scope of place-specific supply factors as drivers of variation. We extend the analysis compared to previous papers by assessing geographical variation on region and municipal level, and find similar results on both levels of aggregation. Using a lower level of aggregation, implies more variation in the main independent variable in two aspects: First, expected variation is larger across municipalities than regions (the magnitude of ), and second, the number of potential values of is multiplied manifold. Additionally, the larger number of migrants across municipalities reduces the uncertainty of our results. Finding a relatively small place effect both on regional and municipal level indicates that neither region-specific nor municipal-specific supply-side conditions affect drug spending to a major extent. The main empirical approach used in this study, differs from the decomposition assessed in the robustness checks, even though both methods aim to estimate the share of regional variation driven by a place effect. In the decomposition analysis, using both migrants and non-migrants, the place effect is measured by how much of average spending is captured in the estimated region fixed effects. In the two-way fixed effect model, this is done by estimating how much average regional spending affects individual level spending at the time of a move, based on regional migrants only. All available pair combinations of a region of origin and destination region are considered in the main analysis and the estimated place effect will rely more heavily on regions with more frequent migrations. In the decomposition analysis, on the other hand, the choice of regions to compare becomes crucial as each region is given the same weight in the exercise regardless of the region's population size or the number of migrants. This raises the question of what regions are most relevant to base the analysis of regional variation on, regions with more frequent migration, or the most extreme regions in the top and bottom that account for the major part of variations, or defined by some other measures. One of the major determinants of prescription drug use is medical need or, in other words, individual health. A limitation in our analysis is that we do not adjust drug spending for individual health, for example, with a comorbidity index. However, assuming individual health status remains fairly constant over the time period, the included individual fixed effects will account for time-constant comorbidities. In this study we assess regional variation in total drug spending and find limited scope of a place effect, however, the effects on total drug spending may hide heterogeneity with respect to specific drugs or ATC groups. Studies on implementation of drugs in Sweden have shown a larger variation across regions for example, about a 4-fold variation in drugs for heart failure and for MI-prevention (Fu et al., 2020;Johannesen et al., 2020), and it remains to be investigated whether the place-effect has a more prominent role for certain types of drugs. 
There are some potential violations of the exogeneity assumptions in the main analysis that could limit the causal interpretation of the results (Salm & Wübker, 2020). One concern would be if patients react to a negative health event by moving to a region with higher spending. However, we consider it unlikely that people because of bad health would move to regions with higher drug spending since knowledge and information about drug spending is likely incomplete. Further, if patients are aware of physician preferences and generosity regarding prescriptions, a plausible reaction would perhaps be to see a different physician in the home municipality and region than to move. Another violation of the exogeneity assumption would be if varied over time, for example, if a policy reform shifted the relative effect of individual and place. In 2012, the annual cost-sharing maximum was raised, changing the economic incentives for patients. The change was uniform across all regions. The increase in the number of pharmacies following the deregulation of the pharmacy market in 2009 also seem to have had a limited effect on our results, as seen in the point estimates from analyses of the sub-period 2010-2016. A third potential violation is if the effect of on drug spending is nonlinear and varies depending on moving to a high-or low-spending region. The results from the decomposition analysis suggest that the relative effect of individual and of place differs depending on what regions are being compared. Recent discussions on the use of two-way fixed effects models of panel data with variation in treatment timing imply that we should be cautious in terms of interpreting the results as an average place effect (Callaway & Sant'Anna, 2021;Goodman-Bacon, 2021;Sun & Abraham, 2021). The two-way fixed effects model estimates a weighted average of the treatment effect considering all possible pairs of treated and untreated units at different time points (Goodman-Bacon, 2021). Researchers have suggested different ways to deal with heterogeneity in the treatment effect over time and across groups in a binary treatment setting (Callaway & Sant'Anna, 2021;Sun & Abraham, 2021) and for continuous treatments (de Chaisemartin et al., 2022). Running an additional event study analysis with an alternative estimator robust to heterogeneous treatment effects (de Chaisemartin & D'Haultfoeuille, 2020;de Chaisemartin et al., 2022) yield results similar to our main analyses and does not change the overall conclusion of our results ( Figure A5 in the Appendix). Even though we cannot rule out bias in interpreting our main results as an average treatment effect, this additional analysis indicate that the likelihood for a bias of a relevant magnitude is in this case small. In conclusion, our findings show that individual-level demand-side characteristics are the main drivers of regional and municipal variation in prescription drug spending in Sweden. The results imply that health policy with the aim to reduce regional variation would have limited impact if targeted at place-specific supply-side characteristics. Future research should study the causes of regional variation concerning the different sub-components of health care spending or utilization, and which individual-level demand-side factors, such as health need or socioeconomic status, are most the important drivers of variation. 
ACKNOWLEDGMENTS

We thank conference participants at the 2019 Australian Health Economics Doctoral Workshop and at the 2021 Essen-Gothenburg workshop in Health Economics, and seminar participants at the School of Public Health and Community Medicine at the University of Gothenburg. This study was funded by the Swedish Research Council (ref. 2018-02708).

CONFLICT OF INTEREST

None.

DATA AVAILABILITY STATEMENT

APPENDIX

Figure A1. Development over time of average regional drug spending. The lines represent the regional mean spending of prescribed drugs per capita per year for the 21 regions. The (weighted) national mean over 2007-2016 was €319 (€1 = 10.5 SEK, 2019). Spearman's correlations estimated between 0.72 and 0.96 over each pair of the ten years.

Figure A2. Distribution of D_i for migrants across regions. The distribution of the migrants' D_i-values is fairly symmetrical around 0, indicating that moving from a high-consumption region to a low-consumption region is as common as vice versa. This supports the assumption that moving to a different region is exogenous with respect to the consumption of prescribed drugs.

Figure A3. Distribution of D_i for migrants across municipalities. The distribution of the migrants' D_i-values is symmetrical around 0, indicating that moving from a high-consumption municipality to a low-consumption municipality is as common as vice versa. This supports the assumption that moving to a different municipality is exogenous with respect to the consumption of prescribed drugs.

Note: The effect shares are estimated in fixed-effects regressions using the full sample of both migrants and non-migrants. For each model, the decomposition is estimated comparing regions above/below median spending and regions above p75/below p25 of spending. Confidence intervals are estimated by bootstrapping with 250 repetitions drawn at the individual level and composed of the 2.5 and 97.5 percentiles of the bootstrap estimates. Each model is run including independent variables (age-gender group, individual income, marital status, and the number of children in the household) and year fixed effects, region fixed effects, and indicators for the number of years since the move.
The Snake Venom Rhodocytin from Calloselasma rhodostoma—A Clinically Important Toxin and a Useful Experimental Tool for Studies of C-Type Lectin-Like Receptor 2 (CLEC-2) The snake venom, rhodocytin, from the Malayan viper, Calloselasma rhodostoma, and the endogenous podoplanin are identified as ligands for the C-type lectin-like receptor 2 (CLEC-2). The snakebites caused by Calloselasma rhodostoma cause a local reaction with swelling, bleeding and eventually necrosis, together with a systemic effect on blood coagulation with distant bleedings that can occur in many different organs. This clinical picture suggests that toxins in the venom have effects on endothelial cells and vessel permeability, extravasation and, possibly, activation of immunocompetent cells, as well as effects on platelets and the coagulation cascade. Based on the available biological studies, it seems likely that ligation of CLEC-2 contributes to local extravasation, inflammation and, possibly, local necrosis, due to microthrombi and ischemia, whereas other toxins may be more important for the distant hemorrhagic complications. However, the venom contains several toxins and both local, as well as distant, symptoms are probably complex reactions that cannot be explained by the effects of rhodocytin and CLEC-2 alone. The in vivo reactions to rhodocytin are thus examples of toxin-induced crosstalk between coagulation (platelets), endothelium and inflammation (immunocompetent cells). Very few studies have addressed this crosstalk as a part of the pathogenesis behind local and systemic reactions to Calloselasma rhodostoma bites. The author suggests that detailed biological studies based on an up-to-date methodology of local and systemic reactions to Calloselasma rhodostoma bites should be used as a hypothesis-generating basis for future functional studies of the CLEC-2 receptor. It will not be possible to study the effects of purified toxins in humans, but the development of animal models (e.g., cutaneous injections of rhodocytin to mimic snakebites) would supplement studies in humans. Introduction The Malayan pit viper, Calloselasma rhodostoma, is found in South-East Asia. It attains an average length of 70-80 cm and rarely more than 1 meter. Calloselasma rhodostoma produces potent snake venom containing a large number of toxins that target proteins in the vasculature and the coagulation system [1]. One of these toxins is rhodocytin, which is a ligand for the human C-type lectin-like receptor 2 (CLEC-2); this receptor belongs to the group of C-type lectin receptors (CLRs) that form a superfamily of proteins containing conserved C-type lectin binding domains [2]. CLEC-2 is highly expressed on platelets and megakaryocytes and at lower levels on several other myeloid cells; its activation thereby triggers an intracellular signaling pathway resulting in platelet activation, as well as initiation of immune responses [3,4]. The Clinical Presentation of Calloselasma rhodostoma Bites Calloselasma rhodostoma is a major cause of snakebite morbidity in Thailand, Cambodia, Laos, Northwest Malaysia and Java [5,6]. As described in a recent review, relatively few studies of the clinical presentation of snake bites have been published, and accurate statistics of the incidence, morbidity and mortality of snakebites throughout the world are not available [7]. However, the effects of Calloselasma rhodostoma bites have been described in previous studies [5,6], and a detailed presentation of the clinical characteristics is given in Table 1 [6]. 
Generally, the clinical presentation correlates with the severity of envenoming [8], and the symptoms will be more severe in individuals with low body weight or comorbidity, if the bite is located on the face or trunk, with exercise after the bite, and if the snake clings to the victim for a longer time [7]. A minority of the patients had no symptoms (48 out of 250 patients). Local symptoms were most common (178 out of the 250 patients) [6]. Local swelling and pain usually start from minutes to several hours after the bite [8]. Skin discoloring, blistering, bleeding and necrosis may also occur. Systemic or distant hemorrhagic effects (i.e., hemoptysis, hematemesis, macroscopic hematuria) were only seen in a minority of 37 patients and were probably due to more severe envenoming, whereas other signs from distant organs were rare [6,9]. Hypotension and shock are only seen in a small minority of patients; the mortality is therefore generally low and mainly caused by severe hemorrhages or secondary bacterial infections [5,6].

Table 1 [6]. The results are presented as the fraction of patients and a description of the symptoms/signs.
Local effects of snakebite in all 250 patients:
- 48/250: No local swelling, bleeding or other local reaction.
- 24/250: Negligible reaction with a maximum extent of swelling of <1 cm difference in circumference between the bitten and healthy extremity.
- 57/250: Mild local swelling, eventually together with local bleeding or blistering, but without necrosis; 1-4 cm difference in circumference between the bitten and the healthy extremity.
- 94/250: Moderate local reaction with swelling corresponding to a more than 4 cm difference in the circumference between the affected and the healthy extremity; no necrosis.
- 27/250: …

The common reactions at the local site suggest that local inflammation with extravasation is a part of the reaction to the venom. It is not known how the venom causes distant hemorrhages and whether this is due to an effect on the coagulation factor system or circulating platelets. As will be described below, rhodocytin is a venom component that activates platelets; it is not known whether this toxin contributes to the bleeding tendency, because effects of envenoming on peripheral blood platelet counts or in vivo platelet functions have not been investigated. One hypothesis could be that rhodocytin activates platelets and, thereby, causes platelet consumption, followed by thrombocytopenia and bleeding. An alternative hypothesis could be bleeding caused by a general alteration of platelet function due to rhodocytin. Most of the bleeding complications in the previous study were from the skin or mucous membranes, and this is consistent with a qualitative or quantitative platelet defect [10]. However, since Calloselasma rhodostoma venom contains various toxins other than rhodocytin, the clinical symptoms cannot be explained solely on the basis of the effects of rhodocytin/CLEC-2, but are probably also caused by contributions from other toxins to these complex reactions.
The surface of rhodocytin has a negatively charged cleft that provides a suitable docking surface for the predominately positively charged surface region of CLEC-2, and the flexibility of these molecules seems to strengthen the interaction [15]. Expression and Function of the CLEC-2 Receptor in Normal Cells The CLEC-2 gene is located in the human natural killer gene complex (NKC) on chromosome 12 [2,6]. This complex contains several families of type II transmembrane C-type lectin-like receptors, and CLEC-2 is part of the Dectin-1 cluster, which also includes MICL, CLEC12B, CLEC9A, CLEC-1, Dectin-1 and LOX-1. These receptors recognize a diverse range of structurally unrelated ligands, including molecular patterns in both endogenous and exogenous ligands, as well as receptor-specific ligands (e.g., podoplanin) [16], and the receptors initiate a variety of immunoregulatory, inflammatory and homoeostatic reactions [2,16]. Podoplanin is a mucin-type sialoglycoprotein that is exposed in a variety of cell types, including brain, heart, kidney, lungs, osteoblasts and lymphoid tissue, and it has recently been identified as an endogenous ligand for CLEC-2 [2,16]. By reverse transcriptase mRNA or Northern blot analyses, the CLEC-2 transmembrane receptor transcripts have been identified in bone marrow, circulating myeloid cells (monocytes, dendritic cells and granulocytes), liver and natural killer (NK) cells [17,18]. A systematic analysis of protein expression has revealed that CLEC-2 protein is detected in platelets, megakaryocytes, liver sinusoidal endothelial cells and liver Kupffer cells [19,20]. Studies in mice indicate that CLEC-2 is highly expressed in platelets and megakaryocytes, whereas the levels in other cell types are lower ( Table 2). As described above, the CLRs are a superfamily of proteins, including a wide range of molecules, some of which are true lectins that bind to carbohydrate ligands. However, many members of this superfamily only share a basic structural scaffold with the true-sugar binding lectins; this is also true for CLEC-2 [15,21]. The structure of CLEC-2 has been resolved, and the basic scaffold is a conserved C-type lectin fold that is held together by disulfide bonds and hydrophobic interactions [15]. A variant of the standard alpha helix loop extends across the surface of the molecule and contains a stretch of 3-10 helices [15]. The cytosolic tail of CLEC-2 contains a novel sequence, YxxL, known as the hemi-immunoreceptor tyrosine-based activatory motif (hemITAM) [21]. This hemITAM is phosphorylated by Src family kinases upon binding of rhodocytin to the extracellular domain [22,23], and this phosphorylation promotes binding of the tyrosine kinase Syk or Zap-70, by their tandem Src-homology 2 (SH2) domains. CLEC-2 is activated as a dimer [2], and ligation causes activation leading to further downstream signaling. Table 2. Important biological effects of C-type lectin-like receptor 2 (CLEC-2) ligation: direct effects on CLEC-2-expressing cells and indirect effects mediated through podoplanin expression by the target cells. ITAM, immunoreceptor tyrosine-based activation motif; NK, natural killer cells. 
Cell — Expression and functional effects of CLEC-2 ligation/activation

Direct effects on CLEC-2-expressing cells:
- Platelets and megakaryocytes [18,19]: (i) CLEC-2 ligation induces intracellular tyrosine-phosphorylation signaling cascades mediated by Src, Syk, Vav, SLP-76 and PLCγ family members; (ii) there is also an increase in intracellular calcium levels; and (iii) finally, induction of platelet activation. Thus, Syk is a downstream mediator in platelets, neutrophils, monocytes, dendritic cells and endothelial cells (see below).
- Neutrophils [20,21,24]: Murine studies indicate that CLEC-2 activation initiates intracellular signaling through Syk and also affects signaling initiated by Toll-like receptors (TLRs); this TLR effect is then similar to the effects seen in monocytes. CLEC-2 ligation triggers phagocytosis, and this is probably initiated via the ITAM-like motif of its cytoplasmic tail. Similar to monocytes, CLEC-2 ligation in neutrophils seems to initiate production and release of IL-6, IL-10 and TNF-α.
- Monocytes [20,24]: CLEC-2-initiated Syk-coupled signaling is able to modulate TLR-initiated signaling, and proinflammatory responses are thereby altered. Production and release of IL-6, IL-10 and TNF-α is induced.
- Dendritic cells [25]: Intracellular signaling initiated by CLEC-2 ligation in dendritic cells involves many of the same mediators as in platelets: CLEC-2 ligation triggers cell migration via downregulation of RhoA activity and myosin light-chain phosphorylation. Formation of F-actin-rich protrusions is triggered by Vav signaling and Rac1 activation. This signaling cascade finally results in rearrangement of the actin cytoskeleton, and dendritic cell migration is thereby promoted.
- NK cells [17]: Reverse transcriptase-PCR and Northern blot analysis indicate that CLEC-2 is expressed in NK cells, but the functional effects of CLEC-2 ligation have not been examined.
- Liver sinusoidal endothelial cells, liver Kupffer cells [20]: CLEC-2 is expressed on liver sinusoidal endothelial cells and Kupffer cells in both mice and humans, but the functional effects of CLEC-2 ligation on these cells have not been studied.

Indirect effects mediated through podoplanin expression by the target cells:
- [26-28]: Interaction between CLEC-2 in platelets and podoplanin in lymph endothelial cells is necessary for the embryonic separation of lymph and blood vessels; Syk- and SLP-76-deficient mice have blood/lymphatic misconnections. These effects are probably caused by reduced signaling in platelets rather than a direct effect via endothelium-expressed CLEC-2.
- Cancer cells and development of metastases [29-32]: Podoplanin is expressed in several malignancies and seems to be important for cancer cell migration and metastasis. The likely mechanism is cancer-induced platelet activation with the release of soluble mediators that affect endothelial cells and/or cancer cell migration with the development of metastases.

… [3], although the details of these interactions are not fully understood.

The Possible Role of CLEC-2 in Cancer Development

CLEC-2 has a possible role in tumor growth and metastasis [29-32]. Tumor cell-induced platelet activation seems to be mediated through the release of soluble mediators (adenosine phosphate, thromboxane), and this effect can be further strengthened by the activation of serine proteases (thrombin, capsin B) generated by the procoagulant activity of some tumor cells through (i) the release of matrix metalloproteases from cancer cells and platelets, and (ii) exposure of subendothelial collagen fibers due to tissue degradation.
It is also proposed that platelets stabilize vessel growth (especially through vascular endothelial growth factor (VEGF) release) and, thereby, facilitate development of metastases through a proangiogenic effect [29]. Finally, there are also data supporting that CLEC-2 ligand expression in tumor endothelial cells increases adhesion of malignant cells to the vessel wall, followed by extravasation to new metastatic sites; this seems to involve signaling through small GTPases that regulate the actin cytoskeleton. The crosstalk between platelets and tumor cells causes platelet activation and changes in tumor cell morphology, as well as cell surface molecule expression. This possible role of CLEC-2 in carcinogenesis suggests that CLEC-2 may become a therapeutic target in future cancer therapy [29,30]. Biological Studies of Local and Systemic Effects after Calloselasma Rhodostoma Bites-A Hypothesis-Generating Basis for Future Studies of CLEC-2 Biology? The Calloselasma rhodostoma is a major cause of snakebite morbidity in Southeast Asia. The clinical presentation has been known for a long time, and the local and systemic effects after the bites have been described in previous clinical studies. The swelling and local pain suggest that extravasation and local inflammation is important in the pathogenesis, an observation suggesting that venom toxins have effects on endothelial cells and vessel permeability, as well as local recruitment of immunocompetent cells. One possible explanation for the development of necrosis could be platelet activation with microthrombi and ischemia. The available studies suggest that ligation of the CLEC-2 receptor contributes to all these effects; (i) CLEC-2 ligation seems to affect endothelial cells and vessel formation during embryogenesis, as well as during cancer metastasation [26][27][28]31,32], and direct or indirect effects on the endothelium may contribute to local swelling and extravasation; (ii) activation of immunocompetent cells may be an important local proinflammatory effect; and (iii) platelet activation may cause microthrombi, ischemia and necrosis. Even though the snake venom contains a large number of toxins that target several proteins and other toxins than rhodocytin may be more important for the general effects on the coagulation system and distant bleeding complications, the authors hypothesis is that rhodocytin has an important role in the development of clinical symptoms after Calloselasma rhodostoma bites. CLEC-2 is a member of a protein superfamily containing conserved C-type lectin-like domains and having diverse functions. CLEC-2 is located on platelets, as well as immunocompetent cells, and receptor ligation leads to intracellular signaling and finally platelet activation. The intracellular signaling downstream to CLEC-2 shows similarities between platelets and immunocompetent cells. Even though the available studies are still few, the present knowledge about the CLEC-2 receptor has contributed to our understanding of hemostasis and links platelet biology to other fields in medicine, such as immunity and cancer. Two ligands for CLEC-2 have now been identified: the exogenous and soluble rhodocytin and the endogenous and membrane-bound podoplanin [25,32]. Even though the endogenous ligand has been identified, rhodocytin should still be regarded as a useful experimental tool. 
Even though different ligands bind to the same receptor, they will not necessarily have similar receptor-binding features and thereby induce the same downstream signaling effects. This is true for the Angiopoietin-Tie-2 system, where the two ligands, Angiopoietin-1 and -2, bind to the same Tie-2 receptor but may have different functional effects depending on the biological context [33-35]. Further studies are required to clarify whether this is also the case for rhodocytin and podoplanin. Additional questions that need to be answered by future experimental studies are (i) whether downstream signaling differs between soluble and membrane-bound CLEC-2 ligands and (ii) whether podoplanin exists in biologically active soluble forms, similar to several cytokine receptors and adhesion molecules [36-38]. The availability of two different ligands may then become important in further studies of ligand-initiated intracellular signaling downstream of CLEC-2.

Conclusions

Snake venom has already made its way from clinical medicine to experimental studies of CLEC-2 biology. Most studies of patients with Calloselasma rhodostoma bites are relatively old; they are mainly descriptive clinical characterizations without additional biological studies [5-7]. The author suggests that detailed biological studies could be performed in patients with Calloselasma rhodostoma bites, and that additional studies in experimental animal models (e.g., local cutaneous injections of rhodocytin) could be used to verify the observations made in humans. Our available knowledge suggests that rhodocytin is important, especially for the local reactions to these snakebites, and it possibly also contributes to systemic or distant effects. Such biology could then be used as a hypothesis-generating basis for future functional studies of CLEC-2. The CLEC-2 receptor seems to represent an important link between coagulation, inflammation, immunity and carcinogenesis, and detailed biological studies are therefore important to clarify whether this receptor or its downstream signaling cascade could be considered as a possible therapeutic target in clinical medicine. The downstream signaling from CLEC-2 shows similarities between different cells (i.e., platelets, granulocytes and monocytes; see Table 2), and targeting of CLEC-2 or CLEC-2-induced signaling may therefore represent a possibility to target different proinflammatory cells or different biological processes through a single molecular target.

Conflict of Interest

The author declares no conflict of interest.
v3-fos-license
2023-02-14T02:16:13.202Z
2023-02-13T00:00:00.000
256826791
{ "extfieldsofstudy": [ "Medicine", "Physics" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/s41467-023-43107-3.pdf", "pdf_hash": "5d70f4dc4ad009683a8130e7f34c4bf792a0f59a", "pdf_src": "ArXiv", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:457", "s2fieldsofstudy": [ "Physics" ], "sha1": "5d70f4dc4ad009683a8130e7f34c4bf792a0f59a", "year": 2023 }
pes2o/s2orc
Soliton confinement in a quantum circuit

Confinement of topological excitations into particle-like states - typically associated with theories of elementary particles - is known to occur in condensed matter systems, arising as domain-wall confinement in quantum spin chains. However, investigation of confinement in the condensed matter setting has rarely ventured beyond lattice spin systems. Here we analyze the confinement of sine-Gordon solitons into mesonic bound states in a perturbed quantum sine-Gordon model. The latter describes the scaling limit of a one-dimensional, quantum electronic circuit (QEC) array, constructed using experimentally-demonstrated QEC elements. The scaling limit is reached faster for the QEC array compared to spin chains, allowing investigation of the strong-coupling regime of this model. We compute the string tension of confinement of sine-Gordon solitons and the changes in the low-lying energy spectrum. These results, obtained using the density matrix renormalization group method, could be verified in a quench experiment using state-of-the-art QEC technologies.

Confinement and asymptotic freedom are paradigmatic examples of non-perturbative effects in strongly interacting quantum field theories (QFTs) [1]. While typically associated with theories of elementary particles [2,3], confinement of excitations into particle-like states occurs in a wide range of condensed matter systems. In the latter setting, the "hadrons" are formed due to confinement of domain walls in quantum spin chains [4]. They have been detected using neutron scattering experiments in coupled spin-1/2 chains [5] and in a one-dimensional Ising ferromagnet [6]. Furthermore, signatures of confinement have been observed in numerical investigations of quenches in quantum Ising spin chains [7,8] as well as in noisy quantum simulators [9,10]. Despite its ubiquitousness, in the condensed matter setting, quantitative investigation of confinement has rarely ventured beyond lattice spin systems. In this work, we show that confinement of topological excitations can arise in a one-dimensional, superconducting, quantum electronic circuit (QEC) array. The QEC array is constructed using experimentally-demonstrated quantum circuit elements: Josephson junctions, capacitors and 0 − π qubits [11-17]. The proposed QEC array departs from the established paradigm of probing confinement in condensed matter systems and starts with lattice quantum rotors. These lattice regularizations are particularly suitable for simulating a large class of strongly-interacting bosonic QFTs [18] due to rapid convergence to the scaling limit. While this was numerically observed in the semi-classical regime of the sine-Gordon (sG) model [19], here we show that QECs are suitable for regularizing a strongly-interacting, non-integrable bosonic QFT.
With a specific choice of interactions that arise naturally in QEC systems due to tunneling of Cooper pairs and pairs of Cooper pairs, we verify that the long-wavelength properties of the QEC array are described by a perturbed sG (psG) model [20-24]. The corresponding Euclidean action is

S_psG = ∫ d²x [ (1/16π)(∂_ν φ)² + V(φ) ],   (1)

where V(φ) = −2µ cos(βφ) − 2λ cos(βφ/2) and λ, µ, β are parameters [25]. Due to the presence of the perturbation ∝ λ, the solitons and the antisolitons of the sG model experience a confining potential that grows linearly with their separation. This leads to the formation of mesonic excitations, analogous to the confinement phenomena occurring in the Ising model with a longitudinal field [26-31]. In the psG case, the free Ising domain-walls are replaced by interacting sG solitons. While predicted using semi-classical and perturbative analysis [22-24], quantitative investigations of confinement, direct evidence of the psG mesons and an experimentally-feasible proposal to realize this model have remained elusive so far. This is accomplished in the present work.

Each unit cell of the one-dimensional QEC array [gray rectangle in Fig. 1] contains: i) a Josephson junction on the horizontal link with junction energy (capacitance) E_J (C_J), ii) a parallel circuit of an ordinary Josephson junction [junction energy (capacitance) E_J1 (C_1)] and a 0 − π qubit [11-14] on the vertical link. The 0 − π qubit is realized using two Josephson junctions [junction energies (capacitances) E′_J (C′_J)], together with two inductors with inductances L [Fig. 1(b)]. In the limit (L/C′_J)^{1/2} ≫ ℏ/(2e)², this circuit configuration realizes a cos(2ϕ) Josephson junction [14]. In the limit C_J ≫ C_eff, where C_eff = C_1 + C_2, the QEC array is described by the Hamiltonian

H = Σ_{k=1}^{L} [ E_c n_k² + ϵE_c n_k n_{k+1} − E_J cos(ϕ_{k+1} − ϕ_k) − 2E_c n_g n_k − E_J1 cos(ϕ_k) − E_J2 cos(2ϕ_k) ],   (2)

where E_c = (2e)²/2C_eff, n_g is the dimensionless gate charge at each node, and we have chosen periodic boundary conditions. Here, n_k is the excess number of Cooper pairs [32] on each superconducting island and ϕ_k is the superconducting phase at each node, satisfying [n_j, e^{±iϕ_k}] = ±ℏδ_jk e^{±iϕ_k}, with ℏ set to 1 in the computations. We approximate the exponentially-decaying, long-range interaction due to the capacitance C_J [33] with a nearest-neighbor interaction [34] of the form ϵn_k n_{k+1}, where the constant ϵ is < 1 [35]. The third and fourth terms in Eq. (2) arise due to the coherent tunneling of Cooper-pairs between nearest-neighboring islands and due to a gate-voltage at each node. The last two cosine potentials of Eq. (2) respectively arise from tunneling of Cooper-pairs and pairs of Cooper-pairs through the Josephson junction and the 0 − π qubit on the vertical link.

For E_J2 = E_J1 = 0, H corresponds to a variation of the Hamiltonian of the Bose-Hubbard model [36,37] and conserves the total number of Cooper-pairs. As E_J/E_c is increased from 0, the QEC array transitions from an insulating to a superconducting phase. We focus on the superconducting phase obtained by increasing E_J/E_c at constant density [38,39]. In the latter phase, the long-wavelength properties of the array are described by the free, compactified boson QFT [33,34], characterized by the algebraic decay of the two-point correlation function of the lattice vertex operator e^{iϕ_k}, with a power-law exponent set by the Luttinger parameter K. This algebraic dependence is verified in Fig.
2(a) by computing the corresponding correlation function using the density matrix renormalization group (DMRG) technique [40]. For the parameters in this work, the Luttinger parameter varies between 0 ≤ K ≤ 2 [34,41]. We further compute the dimensionless "Fermi/plasmon velocity", u, in the QEC array by analyzing the scaling of the ground-state energy of the array with system size (Ref. [25], Sec. III) [Fig. 2(c)].

For E_J2 ≠ 0, E_J1 = 0, keeping E_J > E_c, the QEC array realizes the sG model [19]. Now, the lattice model has a conserved Z_2 symmetry, associated with the parity operator for the number of Cooper-pairs, P = Π_k e^{iπn_k}. This symmetry leads to a two-fold degenerate ground state for this realization of the sG model. This is in contrast to the usual continuum formulation of the latter, where the ground state is one of the infinitely many vacua. The two degenerate states correspond to ϕ_k = 0 and ϕ_k = π, k = 1, ..., L, with the sG solitons and antisolitons interpolating between them. The sG coupling, β, is given by β² = K/2 ∈ (0, 1) (Ref. [25], Sec. I).

We verify the sG limit of the QEC array as follows. First, we compute the scaling of the lattice operator e^{iϕ_k}, which, in the continuum limit, corresponds to the vertex operator e^{iβφ/2}. The scaling with the coupling E_J2/E_c [Fig. 2(b)] yields the value of the sG coupling β² [Fig. 2(c)]. These values are compared with those expected from the free-boson computations. The discrepancy between the obtained values of β² for the sG and the free boson computations as β² → 1 arises due to the Kosterlitz-Thouless phase-transition. We also compute the connected, two-point correlation function ⟨e^{iϕ_j} e^{−iϕ_k}⟩ − ⟨e^{iϕ_j}⟩². When normalized by ⟨e^{iϕ_j}⟩², the latter is given by a universal function, computable using analytical techniques. We compare the DMRG results with analytical predictions. We chose two representative values of β² to demonstrate the robustness of our results in both the attractive and repulsive regimes. The quantity Mu, where M is the soliton mass, is obtained numerically by computing the correlation length of the lattice model using the infinite DMRG technique. The short (long) distance behavior of the normalized, connected correlation function was computed using conformal perturbation theory (form-factors [42,43] computed by including up to two-particle contributions) (Ref. [25], Sec. IB). The results are shown as pink (lime) solid curves labeled CPT (FF_2p) in Fig. 2(d).

The soliton-creating operators for the sG model [44,45] are defined on the lattice as O^q_s(k) = e^{2isϕ_k} Π_{j<k} e^{−iqπn_j}, where q and s are the topological charge and the Lorentz spin of the excitations. The current QEC incarnation of the sG model gives access to solitons with s ∈ {0, 1/2, 1} and q = ±1. For definiteness, we consider s = 0. Fig. 3(a) (empty markers) shows the energy cost, T, of separating a soliton-antisoliton pair, after they are created by application of O^q_s at two different locations, for different values of β². For the sG model, as expected, T = 0 for all values of the separation d. The corresponding phase-profile can be inferred by computing ⟨e^{iϕ_k}⟩ for different lattice sites, after normalizing with respect to the ground-state results [Fig. 3(b)].

The situation changes dramatically for the psG model, realized by making E_J1 ≠ 0 in Eq.
(2), while choosing the rest of the parameters as for the sG model. Due to the perturbing potential ∼ cos(ϕ_k), the sG solitons and antisolitons experience a strong confining potential energy, qualitatively similar to that experienced by the free Ising domain walls under a longitudinal field [26-29]. We compute the energy cost of separation T for the psG model as in the sG case [Fig. 3(a), filled markers]. The energy cost grows proportionally to the distance of separation: T = σd, where σ is the string tension. The latter is numerically obtained by fitting to this linear dependence and is shown as a function of β in Fig. 3(c). To leading order, σ = 2⟨e^{iϕ_k}⟩E_J1/E_c, where the expectation value ⟨e^{iϕ_k}⟩ is computed for the ground state of H with E_J1 = 0. The discrepancy between the leading-order prediction and the numerical results for β² ≈ 0.736 is due to the proximity to the Kosterlitz-Thouless point. The decrease of the string tension with increasing β² can be viewed as a consequence of the increasing repulsion between the sG solitons and antisolitons with increasing β².

FIG. 3. a) DMRG results for the string tension for different choices of β², chosen by fixing E_J/E_c [Fig. 2(c)], for L = 64. The results are shown for E_J2/E_c = 0.1 for both the sG and psG models, while for the latter, E_J1/E_c = 0.1. Similar results were obtained for other choices. For the sG model (empty markers), after creating the soliton-antisoliton pair, there is no associated energy cost of separation. However, for the psG model (filled markers), due to the perturbing cosine potential ∝ E_J1 [Eq. (2)], the soliton and the antisoliton experience a confining force. This leads to an energy cost (T/E_c) growing linearly with separation d. b) The corresponding phase-profile computed by creating a soliton-antisoliton pair and separating them by 12 lattice sites. c) The corresponding string tension, σ = T/d (empty circles), obtained from a linear fit of the data in a). The corresponding leading-order analytical predictions for σ are denoted by crosses. The discrepancy between the predicted and obtained string tension for β² ≈ 0.736 occurs due to the proximity to the Kosterlitz-Thouless point (β² = 1).

The spectrum of the psG model contains the newly-formed mesons and the charge-neutral sG breathers. The latter occur only for β² < 1/2, with their masses acquiring corrections due to the perturbing potential. Fig. 4 shows DMRG results for the mass of the lightest particle as a function of the dimensionless parameter η. For small η, the psG mesons are heavier (with masses > 2M) than the breathers (with masses < 2M). We compute the mass of the lightest sG breather (psG meson) for β² < (>) 1/2 from a computation of the correlation lengths using the infinite DMRG technique. For η ≪ 1, the correction to the lightest breather mass can be expanded in powers of η. We show a comparison of the obtained ratio m_b/M, m_b being the lightest sG breather mass for η = 0, with the analytical predictions in the left inset. For a comparison of our numerical data with perturbative computation [23], see Sec. IIB of Ref.
[25]. For β² > 1/2, the spectrum contains only the psG mesons. The dependence of the lowest psG meson mass is shown in Fig. 4 (right). For η ≪ 1, a non-interacting two-particle (NI-2p) computation (Ref. [25], Sec. IIC) predicts (m_mes − 2M)/M ∼ η^α, where α = 2/3. Comparison of the numerical results with the NI-2p computation is shown in the right inset. A more complete computation using the Bethe-Salpeter equation for the psG model is beyond the scope of this work.

To summarize, we have numerically demonstrated the confinement of sG solitons into mesonic bound states in a QEC array. We computed the associated string tension and the scaling properties of the mass of the lightest particle. In contrast to quantum spin-chains, which have been the de facto standard for lattice simulation of strongly-interacting QFTs, this work demonstrates the robustness and versatility of QECs to achieve this goal. Given that the primitive circuit elements of the proposed scheme have already been demonstrated, it is conceivable that predictions for additional physical properties of the psG model could be obtained using analog quantum simulation [46] in an experimental realization. For instance, a quench experiment would be able to capture signatures of the excitations with energy higher than what could be reliably probed using DMRG. Consider the case when the junction energies of the blue Josephson junctions, E_J1, in Fig. 1 are tunable. This can be accomplished by replacing the corresponding junctions by a SQUID loop with a magnetic flux threading the latter [47]. After preparing the system in the ground state of H with E_J1 = 0, the coupling E_J1 is turned on by tuning the magnetic flux. Signatures of the confinement of the sG solitons can be obtained by probing the spectrum and the current-current correlation functions [48]. Given the progress in the fabrication and investigation of large QEC arrays [49-51], we are optimistic about experimental vindication of our work.

The proposed QEC provides a starting point for the realization of a large number of one-dimensional QFTs. First, replacing the blue Josephson junction on the vertical link in Fig. 1 by a linear inductor gives rise to the renowned massive Schwinger model. Second, tuning a magnetic flux between the Josephson junction and the 0 − π qubit in each cell changes the perturbing potential in Eq. (2) from cos(ϕ_k) to sin(ϕ_k). For certain values of E_J1/E_J2, this induces a renormalization group flow from the gapped perturbed sine-Gordon model to a quantum critical point of the Ising universality class [23,24,52]. Third, QECs provide a robust avenue to realize sG models with a-fold degenerate minima, where a ∈ Z (Ref. [25], Sec. IV). The corresponding cos(aϕ) circuit element can be constructed by recursively using the cos ϕ and cos 2ϕ circuit elements. Perturbations of these sG models lead not only to the soliton confinement and false-vacuum decays [53,54] present in the a = 2 case, but also to all unitary minimal conformal field theory models [52,55]. Controlled realization of the latter multicritical Ising models opens the door to numerical and experimental investigation of a wide range of impurity problems that have so far been elusive.
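The two fits quoted above, the linear growth T = σd behind Fig. 3(c) and the power-law scaling (m_mes − 2M)/M ∼ η^α behind Fig. 4, can be illustrated with a short post-processing sketch. This is not the authors' code: the input arrays below are hypothetical placeholders standing in for DMRG output, and only the fitting step is shown.

# Sketch of the two fits described in the text; placeholder data (Python/NumPy).
import numpy as np

# (i) string tension from T = sigma * d (least-squares fit through the origin)
d = np.array([2, 4, 6, 8, 10, 12], dtype=float)          # separations in lattice sites
T = np.array([0.21, 0.42, 0.62, 0.83, 1.04, 1.25])       # energy cost in units of E_c (placeholder)
sigma = np.dot(d, T) / np.dot(d, d)
print(f"string tension sigma = {sigma:.4f} E_c per site")

# (ii) meson-mass scaling exponent from (m_mes - 2M)/M ~ eta^alpha (log-log fit)
eta = np.array([0.01, 0.02, 0.05, 0.10])                  # dimensionless perturbation strength (placeholder)
m_mes_over_M = np.array([2.09, 2.15, 2.27, 2.43])         # lightest meson mass in units of M (placeholder)
gap = m_mes_over_M - 2.0
alpha, _ = np.polyfit(np.log(eta), np.log(gap), 1)
print(f"fitted exponent alpha = {alpha:.3f} (NI-2p prediction: 2/3)")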
FIG. 1. Each unit cell (gray rectangle) of the QEC array contains a Josephson junction (green cross) on the horizontal link. The vertical link of the same contains a parallel circuit of an ordinary Josephson junction (blue cross) and a cos(2ϕ) Josephson junction (purple crosses). The latter is formed by two Josephson junctions, two capacitors and two inductors (bottom right panel) [14]. The variation of the classical potential, V_cl, [Eq. (1)] as E_J1/E_J2 increases from 0 in steps of 1/4 is shown in the top right panel. For nonzero E_J1/E_J2, the solitons (green wavepacket) and antisolitons (maroon wavepacket), interpolating between the potential minima at ϕ = 0 and ϕ = π, experience a confining potential (yellow string in top left panel), leading to the formation of mesonic bound states.

FIG. 2. DMRG results and comparison with analytical predictions. a) Verification of the power-law decay of the correlation functions of the lattice vertex operators for the free boson model, obtained for E_J1 = E_J2 = 0 keeping E_J/E_c finite. The Luttinger parameters (K = 2β²) obtained from the slopes are plotted as pluses in c). b) Scaling of the vertex operator expectation value with E_J2/E_c for the sG model. The values of the sG coupling obtained from this scaling are plotted as diamonds in c). The discrepancy between the sG result and the free-boson prediction as β² → 1 occurs due to corrections to scaling arising from the Kosterlitz-Thouless phase-transition occurring at β² = 1. The (dimensionless) Fermi/plasmon velocity, u, was obtained from the Casimir energy computation of the free theory [25]. The free-fermion point of the sG model is indicated by the dotted magenta line. d) Comparison of the normalized, connected two-point correlation function of the vertex operator e^{iϕ_j} ∼ e^{iβφ/2} computed using DMRG and analytical computations in the repulsive (β² ≈ 0.63) and the attractive (β² ≈ 0.4, inset) regimes of the sG model. The ratio 1/Mu, M being the soliton mass, was obtained by computing the correlation length from the infinite DMRG computation.

FIG. 4. DMRG results for the mass of the lightest particle of the psG model for β² < 1/2 (left) and β² > 1/2 (right), as a function of the dimensionless quantity η. Here, M (m_b) is the mass of the soliton (lightest breather) of the unperturbed sG model. The diamonds and triangles correspond to different choices of E_J2/E_c. For small η, the lightest particle is the lightest sG breather (psG meson) for β² < (>) 1/2. Using a linear fit [25] of the numerical data for η ≪ 1, we obtain the ratio m_b/M (comparison with the analytical prediction in the left inset). The scaling of the psG meson mass is given by (m_mes − 2M)/M ∼ η^α for η ≪ 1. The inset in the right panel shows the comparison of the α obtained using DMRG (circles) and that obtained using the non-interacting two-particle (NI-2p) approximation (dotted line).
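Fig. 2(a) extracts the Luttinger parameter from the slope of the vertex-operator correlator on a log-log scale. A minimal sketch of that step is given below; the correlator values are placeholders for DMRG data, and the conversion from the fitted decay exponent to K depends on the paper's normalization conventions (K = 2β²), so the proportionality factor is left as an explicit assumption.

# Power-law fit behind Fig. 2(a); placeholder correlator data.
import numpy as np

r = np.array([2, 4, 8, 16, 32, 64], dtype=float)        # separations |j - k|
C = np.array([0.71, 0.55, 0.43, 0.33, 0.26, 0.20])       # <e^{i phi_j} e^{-i phi_k}> (placeholder)

slope, intercept = np.polyfit(np.log(r), np.log(C), 1)   # C(r) ~ A * r**slope, slope < 0
print(f"fitted decay exponent: {abs(slope):.3f}")
# Converting |slope| to the Luttinger parameter K requires the normalization used in
# the paper (assumed here: exponent proportional to K); the constant can be fixed
# against a known limit such as the free-fermion point beta^2 = 1/2.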
v3-fos-license
2021-12-11T05:09:05.738Z
2021-11-06T00:00:00.000
240316637
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.18053/jctres.07.202106.004", "pdf_hash": "3e5085d6ced4ee1d02d90a08efe510458746b027", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:458", "s2fieldsofstudy": [ "Medicine" ], "sha1": "3e5085d6ced4ee1d02d90a08efe510458746b027", "year": 2021 }
pes2o/s2orc
Online public interest in common malignancies and cancer screening during the COVID-19 pandemic in the United States Background and Aim: The COVID-19 pandemic was declared a national emergency in the United States in March 2020. The Centers for Medicare and Medicaid Services subsequently released recommendations that health-care facilities temporarily delay elective surgeries and non-essential medical procedures. Disruptions to medical care significantly impacted cancer patients, with cancer screenings halted and nonurgent cancer surgeries postponed as health-care facilities shifted resources toward the COVID-19 pandemic. Although it has been reported that cancer screening rates decreased dramatically in the United States in 2020, it is unclear whether this trend was driven by factors related to public interest in cancer and/or cancer screening as opposed to other factors such as clinical backlogs, pandemic-related policies, and/or resource limitations. The purpose of this study was to use the Google Trends tool to evaluate public interest in six common malignancies and four common cancer screening methods during the COVID-19 pandemic. Methods: We used the Google Trends tool to quantify public interest in six different malignancies (Breast Cancer, Colon Cancer, Lung Cancer, Prostate Cancer, Thyroid Cancer, and Cervical Cancer) and four cancer screening methods (Pap Smear, Lung Cancer Screening, Mammogram, and Colonoscopy) in the United States during the COVID-19 pandemic. Welch’s t-tests were used to compare monthly search volumes during the COVID-19 pandemic (2020) to the 4 years before the pandemic (2016 – 2019) for all ten search terms included in our study. We used Benjamini-Hochberg to adjust raw p values to account for multiple statistical comparisons. The level of statistical significance was defined by choosing a false discovery rate of 0.05. Results: Our results indicate significantly reduced interest in all malignancies studied at the beginning of the COVID-19 pandemic. Public interest in [‘Breast Cancer’], [‘Colon Cancer’], [‘Lung Cancer’], [‘Thyroid Cancer’], and [‘Cervical Cancer’] significantly decreased in the months of March, April, May, and June 2020 when compared with public interest in 2016-2019. Public interest in cancer screening methods such as [‘Pap Smear’], [‘Lung Cancer Screening’], [‘Mammogram’], and [‘Colonoscopy’] significantly deceased in the months of April and May compared to 2016 – 2019 values. However, decreased public interest in cancer screening methods was temporary, with Google search volumes returning to pre-pandemic levels in June 2020 – December 2020. Conclusion: There was significantly reduced public interest in both common malignancies and cancer screening methods at the beginning of the COVID-19 pandemic in the United States. However, after an initial decline, public interest as indicated by Google search volumes quickly returned to pre-pandemic levels in the second half of the calendar year 2020. In addition, trends in public interest in cancer screening as indicated by Google search volumes aligned with cancer screening uptake rates in the United States during the study period. This finding suggests that Google Trends may serve as an effective tool in gauging the public’s interest in cancer and/or cancer screenings in the United States, which makes it a valuable resource that can be used to inform decisions aimed at improving cancer screening rates in the future. 
Relevance for Patients: The Google Trends tool can be used to measure public interest in various malignancies and their associated screening methods. Google Trends data may be used to inform measures aimed at improving cancer screening uptake.

Introduction

The COVID-19 pandemic has fundamentally changed the healthcare landscape for both healthcare professionals and patients. On March 13, 2020, the President of the United States declared COVID-19 to be a national emergency [1]. The Centers for Medicare and Medicaid Services (CMS) subsequently released recommendations that health-care facilities "[delay] all elective surgeries, non-essential medical, surgical, and dental procedures" [2]. The recommendations issued by CMS temporarily brought health-care systems to a halt as resources were shifted to focus on the COVID-19 pandemic [3]. Disruptions in healthcare services significantly impacted cancer patients, as routine cancer screening procedures were advised against and nonurgent cancer surgeries were delayed [4,5]. More than 1 year after the initial declaration of COVID-19 as a national emergency, concerns have been raised regarding the impact of the pandemic on cancer care, especially with regard to delays in diagnosis and treatment that may have occurred as a result of the pandemic [5]. For example, a recent study examining cancer screening rates in the United States revealed substantial decreases in cancer screenings, visits, and surgeries in 2020 when compared with metrics observed in previous years [6]. It is unclear to what extent public apprehension regarding cancer screening during the COVID-19 pandemic was a factor in driving reduced cancer screening rates in 2020 when compared with other factors such as clinical backlogs, pandemic-related policies, and/or pandemic-related resource limitations [7-9]. As such, we sought to quantify public interest in various malignancies and cancer screening methods during the COVID-19 pandemic.

The internet is one way to track public interest in cancer-related topics [10]. When searching for cancer information online, the primary search engine that patients use is Google, which accounts for more than 90% of all internet searches [11]. Google Trends is a free, open-source tool that allows customizable analysis of search term volumes entered into the Google search engine. The Google Trends tool has been utilized previously to measure public interest in a wide range of health topics, from influenza outbreaks to osteoarthritis treatments to plastic surgery procedures [12-17]. In addition, the Google Trends tool has been used extensively to measure public interest in various oncological topics such as the effectiveness of cancer awareness months [18,19]. The purpose of this study was to use the Google Trends tool to evaluate public interest in six common malignancies and four common cancer screening methods during the COVID-19 pandemic. We hypothesized that there would be a reduction in public interest in common malignancies and cancer screening methods in the months after the onset of the pandemic in the United States. A sustained reduction in public awareness of cancer and cancer screening as a result of the pandemic has important clinical implications.

Methods

In this cross-sectional retrospective study, we used the Google Trends tool to investigate the impact of the COVID-19 pandemic on public interest in various malignancies and cancer screening methods in the United States.
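As a concrete illustration of the workflow described in the subsections that follow (weekly RSV series exported from the Google Trends interface, aggregated into monthly values for the 2020 versus 2016-2019 comparison), a minimal sketch is shown below. The file name and column layout are assumptions made for illustration, not the authors' actual files.

# Minimal sketch (assumed CSV layout): weekly RSV from the Google Trends export is
# aggregated into monthly means so each month of 2020 can be compared with 2016-2019.
# Assumed columns: "Week" (date) and one column per search term, e.g. "Breast Cancer".
import pandas as pd

df = pd.read_csv("breast_cancer_us_2016_2020.csv", parse_dates=["Week"])  # hypothetical file
df["year"] = df["Week"].dt.year
df["month"] = df["Week"].dt.month

monthly = (df.groupby(["year", "month"])["Breast Cancer"]
             .mean()
             .rename("monthly_mean_rsv")
             .reset_index())

# Keep March-December only, mirroring the exclusion of January and February in the study.
monthly = monthly[monthly["month"].between(3, 12)]
baseline = monthly[monthly["year"].between(2016, 2019)]   # pre-pandemic pool
pandemic = monthly[monthly["year"] == 2020]
print(baseline.head(), pandemic.head(), sep="\n")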
Google trends output

Google Trends analyses can be customized by search term, time period, and geographic location. After a search term is entered into the Google Trends tool and temporal and geographic constraints are specified, Google Trends generates visuals and outputs that reflect the volume of a given search term relative to its peak popularity within the defined time period, which is assigned a value of 100. The data are presented as relative search volume (RSV), which is computed as the ratio between searches for a given topic and the total number of Google queries. An RSV value of 100 indicates the largest ratio between searches for a specific topic and the total number of Google queries, while an RSV of 0 indicates that, at the specified time point, the proportion of queries for the search term was <1% of its peak RSV (RSV 100) [20]. The Google Trends tool uses RSV rather than the absolute count of Google searches to allow for ease of comparison of search volumes in states, cities, and countries with varying population densities. In the current study, searches were filtered to the United States and to the period from 2016 through 2020.

Search term selection

We investigated trends in public interest for six common malignancies and four common cancer screening methods. The malignancy search terms were ['Breast Cancer'], ['Colon Cancer'], ['Lung Cancer'], ['Prostate Cancer'], ['Thyroid Cancer'], and ['Cervical Cancer']; the cancer screening search terms were ['Pap Smear'], ['Lung Cancer Screening'], ['Mammogram'], and ['Colonoscopy']. The six aforementioned malignancies were chosen due to their high relative frequency and/or available screening methods. The specific search terms for the four cancer screening methods were selected as a result of their demonstrated popularity using the "Related Queries" feature of the Google Trends tool.

Statistical analysis

The Google Trends tool provided RSV on a weekly basis from 2016 to 2020 for all ten search terms (six malignancies and four cancer screening methods) included in this study. To account for monthly fluctuations in RSV during cancer awareness months, we compared monthly search volume data during the COVID-19 pandemic (March - December, 2020) to the same months in the prior 4 years (March - December, 2016 - 2019) for each of the ten search terms included in our study. The months of January and February were excluded from analysis due to the onset and declaration of the COVID-19 pandemic as a global health emergency in March 2020 [1]. Welch's t-tests were used to compare monthly search volumes for 10 months (March - December) for all ten search terms included in our study, for a total of 100 t-tests. We used the Benjamini-Hochberg (BH) procedure to adjust raw p values to account for multiple statistical comparisons [21]. The adjusted BH p values were obtained using SPSS Version 26.0.0.1. Our significance level was determined by controlling the expected proportion of false discoveries at 0.05.

Results

The monthly RSV values for the six malignancies studied are displayed in Figure 1. We compared the mean monthly RSV during the pandemic (March - December, 2020) to RSV prior to the pandemic (mean pooled values including data from 2016 to 2019), and the results are displayed in Tables 1-6. Raw p values and adjusted BH p values due to multiple comparisons are both listed. With regard to public interest in ['Breast Cancer'], a statistically significant reduction in RSV, adjusted for multiple comparisons, at a significance cutoff of p < 0.05 was observed in 2020 in the months of March, April, May, June, and August. The largest drop in public interest (9.4% decrease in mean RSV compared with 2016 - 2019) was observed in April (Table 1). Results for ['Colon Cancer'] are presented in Table 2.
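The 100 Welch's t-tests and the BH adjustment described under Statistical analysis, and applied throughout these Results, can be sketched as follows. The study itself used SPSS 26; the scipy/statsmodels calls below are an illustrative equivalent, and the weekly RSV arrays are placeholders.

# Illustrative equivalent of the study's statistics (the authors used SPSS 26):
# Welch's t-test per term/month, then Benjamini-Hochberg FDR adjustment across all tests.
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

# Weekly RSV values falling in a given month: 2016-2019 pooled vs. 2020 (placeholder numbers).
monthly_rsv = {
    ("Breast Cancer", "April"): ([78, 81, 80, 79, 82, 80, 77, 83, 81, 79, 80, 78, 82, 80, 79, 81],
                                 [70, 72, 71, 73]),
    ("Colonoscopy", "April"):   ([66, 64, 67, 65, 63, 66, 68, 65, 64, 66, 67, 65, 66, 64, 65, 67],
                                 [34, 36, 35, 33]),
    # ... one entry per term/month pair (100 in total in the study)
}

keys, raw_p = list(monthly_rsv), []
for key in keys:
    pre_pandemic, pandemic = monthly_rsv[key]
    _, p = ttest_ind(pre_pandemic, pandemic, equal_var=False)   # equal_var=False -> Welch's t-test
    raw_p.append(p)

reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method="fdr_bh")
for key, p, padj, sig in zip(keys, raw_p, p_adj, reject):
    print(key, f"raw p = {p:.4f}", f"BH-adjusted p = {padj:.4f}", "significant" if sig else "n.s.")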
For ['Lung Cancer'], a significant reduction in RSV in 2020 was observed in the months of March, April, May, June, September, and November. The largest drop in public interest (6.6% decrease in mean RSV compared with 2016-2019) was observed in April (Table 3). For ['Prostate Cancer'], a significant reduction in RSV in 2020 was observed in the months of April and May. The largest drop in public interest (20.4% decrease in mean RSV compared with 2016 - 2019) was observed in April. For ['Thyroid Cancer'], a significant reduction in RSV in 2020 was observed in the months of March, April, May, June, and September. The largest drop in public interest (35.6% decrease in mean RSV compared with 2016 - 2019) was observed in April (Table 5).

The monthly RSV values for the four cancer screening methods studied are displayed in Figure 2. Comparisons of monthly RSV in the 4 years before the pandemic (2016 - 2019) to those observed during the COVID-19 pandemic (2020) are shown in Tables 7-10. For ['Pap Smear'], a statistically significant reduction in RSV, adjusted for multiple comparisons, at a significance cutoff of p < 0.05 was observed in 2020 in the months of April and May. The largest drop in public interest (28.5% decrease in mean RSV compared with 2016 - 2019) was observed in April (Table 7). For ['Lung Cancer Screening'], a significant reduction in RSV in 2020 was observed in the months of March, April and May. The largest drop in public interest (25.5% decrease in mean RSV compared with 2016 - 2019) was observed in April (Table 8). Results for ['Mammogram'] are presented in Table 9. For ['Colonoscopy'], a significant reduction in RSV in 2020 was observed in the months of April and May. The largest drop in public interest (47.0% decrease in mean RSV compared with 2016 - 2019) was observed in April (Table 10).

Discussion

The purpose of this study was to examine the impact of the COVID-19 pandemic on public interest in various malignancies and cancer screening methods in the United States. Our results indicate significantly reduced interest in all six cancers studied for many of the months included in our study. Public interest in various malignancies as indicated by online search activity has previously been correlated with incidence and mortality rates of some cancers [22,23]. In addition, the Google Trends tool has been shown to be a valuable tool for analyzing public interest in cancer screening in several different countries [24,25]. As such, the results of our study have clinical implications, with the potential to inform patient counseling. Our findings align with the conclusions of a recently published study examining the effect of the COVID-19 pandemic on public interest in various cancers in Canada. In both the United States and Canada, public interest in many of the most common malignancies decreased significantly during the first few months of the pandemic before normalizing toward the end of the calendar year 2020 [26]. In addition to analyzing public interest in many common cancers, we also examined trends in public interest in various cancer screening methods. While public interest in all search terms associated with cancer screening decreased significantly in the first few months of the pandemic, search volumes largely normalized to pre-pandemic levels by the end of the calendar year 2020. The results of our study align with a recently published article that examined breast, colorectal, and prostate cancer screening rates in the United States. Chen et al.
[4] reported a sharp decrease in cancer screening uptake in the United States in the months of March through May, which is the same trend that we observed with regard to public interest in various cancer screening methods using the Google Trends tool during the same time period. The results of our study have implications for policy makers and nonprofit organizations tasked with improving cancer screening rates in the United States. Near the beginning of the pandemic, nonemergent medical services such as cancer screenings drastically decreased [4]. In addition to the frequency of nonemergency medical services declining, there was also a sharp reduction in emergency department visits, with many people reporting that they felt uncomfortable visiting the emergency department because of a fear of becoming infected with COVID-19 [27]. This led to many patients waiting to seek treatment for life-threatening conditions, which led to adverse health consequences [28,29]. As a result, many hospitals launched initiatives aimed at easing patients' fears about the virus and encouraged them to continue their preventative care regimens, such as cancer screenings, that are vital to their overall health [27,30]. The messaging regarding the importance of attending to cancer screenings was well received, as supported by our findings of increased public interest in common malignancies and cancer screening methods throughout the rest of the 2020 calendar year. However, despite revived public interest in cancer screening, reduced screening totals for the remainder of calendar year 2020 when compared with previous years suggest that resources focused on improving cancer screening rates should be dedicated to other factors, such as clinical backlogs, pandemic-related legislation, and health-care center resource limitations, that appear to be decreasing screening uptake [4,7-9,31]. At this time, properly dedicating resources to these areas of focus, rather than focusing on messaging aimed at easing concerns of the public, could help to improve cancer screening rates in the future. The results of our study also provide insight into how public interest patterns in various malignancies and cancer screening methods vary due to media coverage. Previously published reports indicate that public awareness campaigns and celebrity diagnoses or deaths significantly impact the public's interest in various malignancies [18,32]. We observed similar trends in our study. When comparing public interest in cancers or cancer screening methods in 2020 to public interest in 2016-2019, there were limited circumstances (6 months out of the 100 months studied) in which the 2020 search volumes were greater than those observed in 2016-2019. However, in each of the six circumstances, greater public interest in 2020 was driven by a media-generating event such as a celebrity diagnosis/death or a cancer awareness month. For example, in late August of 2020, renowned actor Chadwick Boseman died of colon cancer. There was a subsequent increase in public interest in both ['Colon Cancer'] and ['Colonoscopy'] in September and/or October after the news was shared by media outlets around the world (Tables 2 and 10). Health-care organizations tasked with improving cancer screening rates and outreach initiatives should utilize the tremendous publicity generated by a celebrity cancer diagnosis or death to raise awareness and to provide important information about cancer screening to the public, which may help to decrease related mortality rates.
One long-lasting effect of the COVID-19 pandemic may be a shift in the way in which people expect to undergo their cancer screenings. The convenience of the in-home cancer screening test has become even more valuable to patients who are hesitant to leave their homes during the pandemic, and the race is underway to develop the most accurate in-home screening methods. Stool-based screening tests that patients perform at home in an effort to detect colon cancer gained popularity over the pandemic and are expected to continue to serve as an efficient alternative to the colonoscopy in the upcoming years [33,34]. In addition, self-sampling kits for cervical cancer screening are currently under evaluation for approval by the United States Food and Drug Administration [35]. It is possible that the convenience of an in-home screening method may improve screening rates, particularly among people who feel uncomfortable with the screening methods that are used in a traditional healthcare setting for breast, cervical, and colorectal cancer screening [36-38]. In addition, increased uptake of in-home screening methods could help to avoid a steep decline in cancer screening that would likely occur in the event of a future pandemic. As such, it would be beneficial to use the Google Trends tool to monitor the public's interest with regard to in-home cancer screening tests in the future. The findings of our study suggest that Google Trends can serve as an effective tool in gauging the public's interest in cancer screenings in the United States. The patterns observed with regard to public interest in various cancer screenings such as ['Pap Smear'], ['Lung Cancer Screening'], ['Mammogram'], and ['Colonoscopy'] largely mimic those seen in actual healthcare screening uptake in the United States: a sharp decline at the beginning of the pandemic followed by a gradual increase throughout the remainder of the calendar year 2020 [39]. In addition to the long-term benefits of a free, open-source tool that can track public interest in cancer screening, the data provided by the Google Trends tool may prove critical should virus variants that are resistant to modern vaccines emerge and extend the duration of the COVID-19 pandemic. The rise of the "delta variant" of the virus has coincided with an increase in breakthrough infections and the potential of a return to more restrictive measures in the United States [40]. In the event of restrictions being implemented in the future similar to those that were implemented at the beginning of the COVID-19 pandemic, the Google Trends tool can be utilized by healthcare systems to provide real-time, up-to-date data regarding public interest in cancer screenings, which can contribute to proper resource allocation to meet the screening demands of a given geographical area. There are several limitations to this study. First, causal inferences cannot be drawn due to the observational nature of the study design. In addition, Google Trends does not provide extensive information about the demographics of the users whose data are reflected in this study. As such, it is unclear if the users are a representative sample of the United States population. Next, the Google Trends tool only captures information about searches that are entered into the Google search engine. People may seek information about various malignancies and cancer screening methods using other search engines, and these data would not be reflected in this study.
However, more than 90% of search engine inquiries worldwide are executed using Google as compared to an alternative search engine, which supports the notion that the data included in our study is representative of all queries in the United States [11]. Finally, due to the de-identified nature of the Google users whose data are reflected in this study, we cannot directly link public interest in various cancer screening methods with actual cancer screening uptake. However, the trends in public interest in cancer screening methods observed in our study mimic the trends of cancer screening uptake observed in recently published studies, which suggests that the Google Trends tool may be an effective gauge for public interest in cancer screening [4,39]. Conclusions Google search trends indicate decreased public interest in many common malignancies and cancer screening methods in the United States in the early months of the COVID-19 pandemic, with a gradual return to pre-pandemic levels towards the end of the 2020 calendar year. Furthermore, trends in public interest in several cancer screening methods as indicated by Google search volumes aligned with cancer screening uptake in the United States, suggesting that the Google Trends tool may be a valuable information source that can be used to guide decisions aimed at improving cancer screening rates in the future. Finally, we observed the strong impact that the media can have on driving public interest in various malignancies and cancer screening methods. Our findings may be useful to organizations attempting to educate the public regarding various malignancies with the primary goal of improving cancer screening uptake in the United States during the COVID-19 pandemic and into the future.
v3-fos-license
2024-05-04T15:12:50.071Z
2024-04-30T00:00:00.000
269536228
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2076-2607/12/5/897/pdf?version=1714451445", "pdf_hash": "5f7ef5dbec33bb0bfca830d5507e70bb450c5180", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:459", "s2fieldsofstudy": [ "Biology", "Medicine", "Environmental Science" ], "sha1": "8531beb95ad846e177d76dd660e3b92abd5bd376", "year": 2024 }
pes2o/s2orc
The Potential Roles of Host Cell miRNAs in Fine-Tuning Bovine Coronavirus (BCoV) Molecular Pathogenesis, Tissue Tropism, and Immune Regulation Bovine coronavirus (BCoV) infection causes significant economic loss to the dairy and beef industries worldwide. BCoV exhibits dual tropism, infecting the respiratory and enteric tracts of cattle. The enteric BCoV isolates could also induce respiratory manifestations under certain circumstances. However, the mechanism of this dual tropism of BCoV infection has not yet been studied well. MicroRNAs (miRNAs) are small non-coding RNAs that regulate gene expression and play a dual role in virus infection, mediating virus or modulating host immune regulatory genes through complex virus–host cell interactions. However, their role in BCoV infection remains unclear. This study aims to identify bovine miRNAs crucial for regulating virus–host interaction, influencing tissue tropism, and explore their potential as biomarkers and therapeutic agents against BCoV. We downloaded 18 full-length BCoV genomes (10 enteric and eight respiratory) from GenBank. We applied several bioinformatic tools to study the host miRNAs targeting various regions in the viral genome. We used the criteria of differential targeting between the enteric/respiratory isolates to identify some critical miRNAs as biological markers for BCoV infection. Using various online bioinformatic tools, we also searched for host miRNA target genes involved in BCoV infection, immune evasion, and regulation. Our results show that four bovine miRNAs (miR-2375, miR-193a-3p, miR-12059, and miR-494) potentially target the BCoV spike protein at multiple sites. These miRNAs also regulate the host immune suppressor pathways, which negatively impacts BCoV replication. Furthermore, we found that bta-(miR-2338, miR-6535, miR-2392, and miR-12054) also target the BCoV genome at certain regions but are involved in regulating host immune signal transduction pathways, i.e., type I interferon (IFN) and retinoic acid-inducible gene I (RIG-I) pathways. Moreover, both miR-2338 and miR-2392 also target host transcriptional factors RORA, YY1, and HLF, which are potential diagnostic markers for BCoV infection. Therefore, miR-2338, miR-6535, miR-2392, and miR-12054 have the potential to fine-tune BCoV tropism and immune evasion and enhance viral pathogenesis. Our results indicate that host miRNAs play essential roles in the BCoV tissue tropism, pathogenesis, and immune regulation. Four bovine miRNAs (miR-2375, bta-miR-193a-3p, bta-miR-12059, and bta-miR-494) target BCoV-S glycoprotein and are potentially involved in several immune suppression pathways during the viral infection. These miRNA candidates could serve as good genetic markers for BCoV infection. However, further studies are urgently needed to validate these identified miRNAs and their target genes in the context of BCoV infection and dual tropism and as genetic markers. 
BCoV-S is the most crucial protein that shapes viral infection, pathogenesis, and immune regulation.The BCoV-S protein binds to multiple host receptors and is cleaved during the initial phase of viral infection by host cell proteases, enabling viral entry and disease progression [16][17][18].The BCoV-S protein consists of the S1 (receptor-binding) subunit and the S2 (membrane-fusion) subunit [19].The S1 subunit has two domains: an N-terminal domain (NTD) and a C-terminal domain (CTD) [20].Both BCoV and the Human Coronavirus (HCoV)-OC43 recognize a sugar moiety called 5-N-acetyl-9-O-acetylneuraminic acid (Neu5,9Ac2) on the host cell surface of glycolipids or glycoproteins that play an essential role in virus-host receptor recognition [21,22].The S1-NTD region of the BCoV spike protein is an essential receptor-binding domain that recognizes sugar moieties [23].The S1/S2 furin cleavage site is usually cleaved by furin at the S1/S2 boundary to detach S1 from the S2 domain and control virus fusion and entry [24].Receptor recognition and attachment are crucial and indispensable steps in viral infection and tropism [25].Several related SARS-CoV-2 studies and other coronaviruses have demonstrated that blocking spike protein at certain critical regions can inhibit coronavirus infection [26].However, research regarding BCoV within this context remains scarce. MicroRNAs (miRNAs) are small, non-coding RNAs that target the 3 ′ UTR of the target mRNAs and inhibit their translation [27,28].Several studies have shown that host miRNAmediated RNA interference (RNAi) is essential in virus-host interactions [29][30][31].Recent studies reported that, during viral infections, miRNAs emerged as a critical regulator either by suppressing or enhancing the expression of the targeted host gene(s) or the viral RNAs [32].For example, miR-485 targets host RIG-I with a low abundance of H5N1 influenza virus while targeting the PB1 subunit of the viral genome (essential for viral replication) with an increased amount of H5N1, leading to marked inhibition of the virus replication [33].Similarly, host miRNA targets the 3 ′ UTR, receptor binding domain, and the structural and nonstructural proteins of SARS-CoV-2 and inhibits viral replication [34].For example, the up-regulation of miR-2392 enhanced SARS-CoV-2 replication in the host [35], while miR-150-5p inhibited SARS-CoV-2 replication in the target cells in vitro [36].In addition to their role in modulating the host genome, miRNAs regulate the viral/host tissue tropism upon infection.A recent study reported that pseudorabies virus (PRV) upregulates some key host miRNAs to evade the innate immune response and establish its tropism in the kidneys, spleen, lungs, and other tissues [37].Therefore, miRNAs hold promise as valuable biomarkers and potential therapeutic agents against viral diseases of various species of animals and humans. 
Understanding the complex interaction between viruses and their host transcription machinery is of fundamental importance.As one of its possible interactions with host cells, the SARS-CoV-2 virus induces specific mechanisms to inhibit the production of the host nuclear proteins, transcription factors (TFs), and host mRNAs.This nucleic acid degradation could be achieved through the interaction between host miRNAs and some important viral proteins [38,39].The TFs are the host cytoplasmic proteins that bind to the promoter region of the DNA-binding domain of target genes and influence transcription mechanisms.Both host miRNAs and TFs interact during viral infection and affect cell functions like proliferation, differentiation, and apoptosis [40].Several studies have explored the interplay between host miRNAs and TFs during SARS-CoV-2 infection [41], but their role in BCoV remains to be investigated. Coronavirus (CoV) infections, in general, trigger the host cells to activate several immunogenic cytokines, such as IFNs or inducible cytokines known as suppressors of the cytokine signaling (SOCS) pathway [42].SOCS1 and SOCS3 are well-known regulators of SOCS pathways, essential in neutralizing interferons [43,44].Several studies reported that CoV infection activates SOCS and suppresses the host immune regulation [44,45].Sultani et al. (2023) found that miR-155 is upregulated during SARS-CoV-2 infection and targets SOCS1 and inhibits the host Th17/Treg pathway [46]. On the other hand, type I IFN plays a vital role in activating host innate immune responses after external pathogenic recognition.CoVs produce a double-stranded RNA (ds-RNA) during viral replication, which is recognized by host pattern recognition receptors (PRRs) as a pathogen-associated molecular pattern (PAMP) [47].The PAMPs are recognized by cytosolic retinoic acid-inducible gene I (RIG-I) [48][49][50] and promote type I IFN transcription [51,52].After infection, viruses promote tissue tropism either directly, causing pathogenicity through their genomes, or indirectly, stimulating host miRNAs to escape the host immune mechanism. In the current study, we used several in silico prediction tools to identify some host miRNAs that potentially target and influence the host immune regulatory genes and immune-suppressing genes in the context of BCoV infection.Furthermore, we explored the potential roles of some host miRNAs in binding to various regions across the BCoV genome, inhibiting viral replication, and thus fine-tuning its tissue tropism and pathogenesis.We also identified some potential miRNAs that could serve as biological and genetic markers for BCoV infection and potentially distinguish between the enteric and respiratory isolates of BCoV.The outcomes of this study will enrich our knowledge about bovine miRNA and BCoV host interaction. 
BCoV Genome Sequences

We downloaded 18 full-length genome sequences of bovine coronavirus (BCoV) (10 representing the enteric isolates and eight representing the respiratory isolates) from the National Center for Biotechnology Information (NCBI) for our downstream analysis (Table 1). The BCoV isolates were selected from diverse global regions, considering both geographical location and the year of isolation. The mature bovine miRNA sequences were retrieved from miRBase Release 22.1 (http://www.mirbase.org/, accessed on 30 January 2019) [53]. One thousand sixty-four mature bovine miRNAs were retrieved from the online miRNA database (miRBase database) (https://www.mirbase.org/summary.shtml?orgbta). The multiple sequence alignment of the 18 full-length genome sequences of BCoV was conducted with SnapGene 6.0.2 (http://www.snapgene.com) using the MUSCLE alignment method between the various BCoV genome (enteric and respiratory) isolates.

Identification of Bovine miRNAs Targeting Host Genes and miRNAs Targeting Host Transcription Factors in Specific Tissues

Prediction of host genes targeted by bovine miRNAs was performed via miRWalk and TargetScan 8.0 [54,55]. miRWalk is an online bioinformatic tool that produces an integrated network of relationships between miRNAs, genes and pathways and between miRNAs, genes and disorders. The miRWalk prediction presenting the network interaction of miRNAs and genes was obtained from miRWalk. The RumimiR online bioinformatic tool was used to identify miRNAs based on animal status, tissue origins, and experimental conditions reported in publications [56]. The figures illustrating the network interaction of miRNAs and their potential target genes, transcription factors, and tissues were produced with Cytoscape version 3.10.0 [57].

Host miRNAs Potentially Targeting BCoV Genome Sequences at Various Locations

We used a combination of different bioinformatic tools to identify potential bovine miRNAs targeting BCoV. We used the genome sequence of the Mebus isolate (Accession Number: U00735) as a reference strain for the enteric BCoV isolates [58]. To identify potential miRNAs binding to the Mebus isolate of BCoV, we used the online software RNA22 v2 (https://cm.jefferson.edu/rna22/). We selected the binding miRNAs based on the binding energy and on 6-8 bp (6mer-8mer) binding at the seed region of the miRNAs. The selection criteria for the potential miRNA candidates were based on several parameters: the minimum free energy of the predicted miRNA-target duplex and the complementarity between the miRNA seed region (nucleotides 2-8 of the candidate miRNA) and the binding site in the viral or host gene, requiring a minimum of six complementary nucleotides between the miRNA candidate and the BCoV target gene.

Functional Enrichment Analysis of Genes Targeted by the Relevant miRNA Candidates

To identify the biological significance of the selected bovine miRNAs and their impacts on the differentially expressed genes (DEGs), Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analyses in the context of BCoV infection were performed via miRWalk (https://cm.jefferson.edu/rna22/) [54]. The pathway enrichment category plots for KEGG and GO were plotted using https://www.bioinformatics.com.cn/en, a free online platform for data analysis and visualization.
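The seed-complementarity criterion described above (nucleotides 2-8 of the miRNA, with a minimum of six complementary bases against the viral sequence) can be illustrated with a short script. This is a simplified stand-in for RNA22, which additionally scores folding free energy, and the sequences shown are placeholders rather than actual bta-miR or BCoV sequences.

# Simplified seed-match scan (illustration only; RNA22 also evaluates duplex free energy).
# A miRNA pairs with its target by Watson-Crick base pairing, so the target site matches
# the reverse complement of the miRNA seed (taken here as nucleotides 2-8, a 7mer).
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    return "".join(COMPLEMENT[b] for b in reversed(rna))

def seed_sites(mirna: str, target_rna: str, seed_start: int = 1, seed_end: int = 8):
    """Return 0-based positions in target_rna that match the miRNA seed (nt 2-8)."""
    seed = mirna[seed_start:seed_end]          # nucleotides 2-8 of the mature miRNA
    site = reverse_complement(seed)            # sequence expected on the target strand
    hits, pos = [], target_rna.find(site)
    while pos != -1:
        hits.append(pos)
        pos = target_rna.find(site, pos + 1)
    return hits

# Placeholder sequences (not real bta-miR-193a-3p / BCoV-S sequences):
mirna = "AACUGGCCUACAAAGUCCCAGU"
target = "AUGCUUAUAGGCCAGUUCCAAGGACUUUGUAGGCCAGUU"
print(seed_sites(mirna, target))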
Bovine Host Cellular miRNAs Targeting the BCoV-S Protein

Our data show that 154 bovine miRNAs have potential target sites at several locations across the BCoV-S protein (Figure 1). To identify the most promising miRNA candidates, we selected those that potentially bind to the most sensitive sites of the BCoV-S protein (i.e., the S1-NTD sugar-binding sites, the S1/S2 furin cleavage site, the GxCx motif, and the conserved S2 monomers of the BCoV spike protein), which are involved in virus attachment to the host cell and in downstream replication and pathogenesis (Table 2). S1-NTD is a crucial receptor-binding site of the BCoV-S protein in the host cell [23]. The S1/S2 furin cleavage site promotes virus entry and enhances replication. We found that bta-miR-193a-3p and bta-miR-494 potentially act synergistically on the S1-NTD sugar-binding sites of the BCoV-S protein, while bta-miR-2375 potentially targets the S1/S2 furin cleavage site and bta-miR-12059 has a potential target site within the conserved S2 monomer region (Figures 2 and 3 and Table 3).

bta-miR-494 has three other potential binding sites in the BCoV genome, at ORF1a, ORF1b, and the S1 region, and bta-miR-2375 potentially binds to two other sites, at ORF1a and the spike gene's S1 region (Supplementary File S1). Targeting by these bovine miRNAs could interfere with spike protein functions and thereby inhibit BCoV replication.
Differential Targeting of Some Bovine Host Cell miRNA Candidates to the Genomes of BCoV (Enteric/Respiratory) Isolates

Multiple sequence analysis (MSA) was performed on the 18 full-length BCoV genome sequences from enteric and respiratory isolates to map the homology and targeting of the chosen miRNA candidates. The results showed that bta-miR-193a-3p potentially binds to the region including the tyrosine-162 residue of the S1-NTD sugar-binding site, and this binding site was highly conserved among all of the selected BCoV isolates (Figure 3A). The binding of bta-miR-494 to another residue of the S1-NTD sugar-binding site was likewise conserved among all of the tested BCoV isolates (Figure 3B). Furthermore, the bta-miR-2375 binding site at the S1/S2 furin cleavage site was conserved among all 18 BCoV isolates (Figure 4A). Similarly, bta-miR-12059 potentially binds to the conserved S2 nonamer region of the BCoV-S glycoprotein, which was conserved in all analyzed sequences except for the canine respiratory coronavirus isolate (Figure 4B).

Potential Impacts of Host Cell miRNAs on Viral Replication through Targeting Some Host Signal Transduction Pathways in the Context of BCoV Infection

The above results showed that miR-2375, miR-193a-3p, miR-12059, and miR-494 target the BCoV-S protein. Consequently, we aimed to elucidate the role of these miRNAs in host signal transduction pathways. The KEGG gene enrichment analysis was categorized into groups based on pathways involved in environmental information processing, cellular processing, organismal systems, and human diseases (Figure 5). The largest numbers of differentially expressed genes (DEGs) were observed in the MAPK and PI3K-AKT pathways, followed by cytokine-cytokine receptor interaction and the Wnt and JAK/STAT signaling pathways (Figure 5). Furthermore, these miRNAs also target genes involved in host receptor signaling pathways, including the chemokine, T cell, B cell, and RIG-I receptor signaling pathways (Figure 5).

The GO enrichment analysis showed that bovine miR-193a-3p, miR-494, miR-2375, and miR-12059 target genes involved in pathways that negatively regulate different biological processes (Figure 6). These four bovine miRNA candidates potentially target genes that regulate cell proliferation and inflammatory responses. Furthermore, the GO analysis also showed that many targeted genes map to the Golgi apparatus and the whole-membrane cellular component, which may influence BCoV-S attachment to the cell membrane of target cells (Figure 6).
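An enrichment call of the kind summarized in Figures 5 and 6 is, at its core, an over-representation test. The sketch below illustrates the idea with a one-sided hypergeometric test; all gene counts are invented placeholders, and the statistics actually produced by miRWalk/KEGG may be computed differently.

```python
# Illustrative over-representation test for one pathway (not the miRWalk pipeline).
# Counts below are hypothetical placeholders.
from scipy.stats import hypergeom

background_genes = 20000   # total annotated genes considered
pathway_genes = 300        # genes annotated to the pathway (e.g., MAPK signaling)
target_genes = 500         # predicted targets of the miRNA set
overlap = 25               # predicted targets that fall in the pathway

# P(X >= overlap) when target_genes are drawn at random from the background
p_value = hypergeom.sf(overlap - 1, background_genes, pathway_genes, target_genes)
print(f"Over-representation p-value: {p_value:.3g}")
```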
BCoV Infection Induces Differential Display of the Host Cell miRNAs to Fine-Tune the Viral Tropism and Viral Replication

We used an online tool to identify potential bovine miRNAs targeting host immune regulatory genes (Supplementary File S2). Twelve miRNAs were selected among these candidates based on their putative involvement in various host immune response pathways. Our findings suggest that bta-miR-2338 had the maximum number of binding sites (10 sites) (Figures 8 and 9). Additionally, three miRNAs (bta-miR-6535, bta-miR-2392, and bta-miR-12054) each target nine host genes (Figures 8 and 9). Furthermore, three miRNAs (bta-miR-12053, bta-miR-2360, and bta-miR-2407) each can potentially target eight genes.

To support these findings, we expanded our search and used another online server, TargetScan 8.0, to confirm our predictions. The results indicated that both miR-2375 and miR-494 bind only to some immune-suppressor genes. It is worth mentioning here that bovine miR-12059 was not included in the TargetScan 8.0 database. Bovine miR-2375 binds to six immune-suppressing genes, particularly SOCS2, SOCS4, SOCS5, and SOCS7 (Figure 7B). Meanwhile, bovine miR-494 could potentially target only SOCS5 and SOCS6 (Figure 7B).
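The target counts reported above (for example, 10 genes for bta-miR-2338) correspond to the degree of each miRNA node in a miRNA-gene network of the kind drawn in Cytoscape. The sketch below rebuilds such a network with the networkx library and ranks miRNAs by their number of predicted targets; the gene identifiers are placeholders, since the individual gene names appear only in Figure 8, and this is not the workflow the authors used.

```python
# Toy miRNA-gene network illustrating how miRNAs can be ranked by target count.
# Gene identifiers are placeholders, not the genes reported in Figure 8.
import networkx as nx

edges = {
    "bta-miR-2338":  [f"gene_{i}" for i in range(1, 11)],   # 10 hypothetical targets
    "bta-miR-6535":  [f"gene_{i}" for i in range(1, 10)],   # 9 hypothetical targets
    "bta-miR-2392":  [f"gene_{i}" for i in range(2, 11)],   # 9 hypothetical targets
    "bta-miR-12054": [f"gene_{i}" for i in range(1, 10)],   # 9 hypothetical targets
}

G = nx.Graph()
for mirna, genes in edges.items():
    G.add_edges_from((mirna, g) for g in genes)

# Rank miRNAs by how many genes they are predicted to target (node degree).
ranking = sorted(((m, G.degree(m)) for m in edges), key=lambda x: x[1], reverse=True)
for mirna, degree in ranking:
    print(f"{mirna}: {degree} predicted target genes")
```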
Bovine miRNAs Regulating Some Host Transcription Factors That Control the Expression of Some Key Host Genes

To understand the miRNA-transcription factor (TF) co-regulation of host genes that might potentially influence BCoV replication, we generated a miRNA-TF network. We conducted online target prediction for these 12 miRNAs against TFs already reported to have strong potential as diagnostic biomarkers for BCoV infection [59]. Our results showed that seven of these 12 candidate miRNAs (miR-10171-3p, miR-7865, miR-2407, miR-6535, miR-2392, miR-12054, and miR-2451) target the retinoic acid receptor-related orphan receptor alpha (RORα) transcription factor (Figure 11). RORα is highly expressed in different cells and tissues, including macrophages, suggesting potential roles in immune responses [60,61]. Furthermore, miR-2338 targets the Yin Yang 1 (YY1) transcription factor, which is highly expressed in mammalian tissues, as well as the oncogenic fusion transcription factor TCF3-HLF (Figure 11).

The GO enrichment analysis showed that these bovine miRNAs target many genes in the host cell cytoplasm and nucleus (Figure 13). Moreover, these miRNAs also target genes involved in the cell cycle, protein binding, DNA binding, and RNA binding (Figure 13). These results indicate that these bovine miRNAs, especially miR-6535, miR-2392, miR-12054, and miR-2338, could potentially promote BCoV replication and fine-tune the viral tissue tropism and pathogenesis.
Discussion

BCoV causes several clinical syndromes in affected animals and shows a dual tropism for the respiratory and digestive tracts. The enteric and respiratory isolates of BCoV show a high degree of similarity at the genomic level [62], and because of this similarity there is no genetic marker that distinguishes the enteric from the respiratory isolates of BCoV. Thus, other host-related factors appear to fine-tune this dual tissue tropism. MicroRNAs are believed to regulate host cell genes that are essential during the replication of many viruses, including SARS-CoV-2. Some host miRNAs have been implicated in the diagnosis and prognosis of the outcomes of SARS-CoV-2-infected patients [34]. The main goals of the current study were to examine the potential roles of host cell miRNAs in BCoV replication, tissue tropism, and immune regulation. Searching for genetic miRNA markers for BCoV infection in general, and for markers that distinguish between the enteric and respiratory isolates of BCoV, were important contributions of this study.

Host miRNA Targeting BCoV Spike Protein at Various Locations

As in most other coronaviruses, the spike protein of BCoV plays an essential role in attachment to host cells, molecular pathogenesis, and immune evasion. Recent studies reported that host miRNAs can influence viral replication and alter host gene expression profiles during SARS-CoV-2 replication [63]. Furthermore, 67 human miRNAs have been reported to target and affect the normal function of the SARS-CoV-2 spike protein [63,64].
The BCoV-S1 subunit consists of two independent domains, the N-terminal domain (NTD) and the C domain or receptor-binding domain (RBD), which act as the viral receptor-binding domains [20]. The BCoV and human HCoV-OC43 S1-NTD domains bind to sugar moieties and act as viral lectins [21,22]. Both BCoV and HCoV-OC43 also express a hemagglutinin-esterase (HE) that serves as a receptor-destroying enzyme and promotes viral detachment from sugars on infected cells, which is consistent with the presence of a viral lectin in their spike proteins [12]. Peng et al. reported four critical sugar-binding sites in the BCoV S1-NTD domain: Tyr-162, Glu-182, Trp-184, and His-185 [23]. The furin-binding and sugar-binding sites thus serve as sensitive spike protein regions for virus invasion of the host, and binding of host miRNAs to these motifs may therefore prevent attachment and invasion of viral RNA into the host cell. We found that bta-miR-193a-3p binds to Tyr-162 and bta-miR-494 binds to the His-185 residue of the S1-NTD sugar-binding site. Furthermore, bta-miR-2375 binds to the S1/S2 furin cleavage site, and bta-miR-12059 binds to the conserved S2 nonamer site.

Impacts of Host miRNA Targeting Various BCoV Genome Regions Related to the Virus's Infectivity and Pathogenesis

Besides binding to the viral genome, miRNAs that bind to the mRNA of one or more host target genes may alter the expression of that mRNA, and of its encoded protein, at the post-transcriptional level. This alteration in mRNA or protein levels may affect a single gene or an entire signal transduction pathway in the context of BCoV infection. One effective way to understand the dual tissue tropism of BCoV infection and pathogenesis is to examine the host cell signal transduction pathways engaged during viral infection. First, we found that miR-193a-3p, miR-494, miR-2375, and miR-12059 target the BCoV-S protein and could influence the virus's infectivity. Here, we wanted to explore the influence of these four miRNAs on host signal transduction pathways related to BCoV infection. Most of the cellular signaling and apoptosis pathways enriched by miR-193a-3p, miR-494, miR-2375, and miR-12059 have recently been reported as COVID-19-associated pathways [65,66]. Therefore, we conducted KEGG and GO pathway enrichment analyses for the selected miRNA candidates (miR-193a-3p, miR-494, miR-2375, and miR-12059). We found that the KEGG pathways affected by bovine miR-193a-3p, miR-494, miR-2375, and miR-12059 are primarily enriched in the mitogen-activated protein kinase (MAPK) and phosphatidylinositol 3-kinase/AKT (PI3K-Akt) signaling pathways (Figure 5). Consistently, Tao et al. (2020) showed that SARS-CoV, like other respiratory viruses, hijacks MAPK-p38 activity and promotes viral replication [67].
The PI3K-Akt pathway has been linked to various aspects of virus entry into cells and the development of immune responses. It can significantly influence viral cell invasion, growth, migration, and proliferation, and it can promote angiogenesis while inhibiting apoptosis. SARS-CoV-2 endocytosis occurs through a clathrin-mediated pathway modulated by PI3K/AKT signaling [65]. A recent study found that cancer pathways ranked first among the disease pathways related to COVID-19, with a total of 98 genes commonly expressed in cancer and COVID-19 disease pathways. It was also shown that patients with the human influenza virus type A infection pattern shared 57 common genes with SARS-CoV-2-infected patients [68]. Similarly, our study indicated that most human disease pathways, including influenza virus type A, are influenced by the four bovine miRNA candidates mentioned above (miR-193a-3p, miR-494, miR-2375, and miR-12059).

Another study showed that viral miRNAs could target host genes and establish favorable host cell conditions for virus replication [69]. In this context, several studies found viral miRNAs predicted to target and influence host immune regulatory pathways, such as T cell-mediated immunity, cytokine responses, biological adhesion, and autophagy, as well as other regulatory signaling pathways like WNT, MAPK, and TGF-beta signaling [63,70-73]. In comparison, this study showed that DEGs mediated by bovine miR-193a-3p, miR-494, miR-2375, and miR-12059 were also enriched in host immune regulatory pathways, such as T cell-mediated immunity, cytokine and chemokine signaling, and apoptosis, as well as other regulatory signaling pathways like MAPK and WNT, in both the KEGG and GO analyses.

Impacts of Host miRNA Targeting Various Immune Regulatory Genes in the Context of BCoV Infection

Although the complete mechanism of miRNA action upon viral infection is not fully understood, it involves interactions with the virus and host cells and modulates virus replication or host biological pathways. The virus most likely hijacks host miRNAs to alter host genes, create a suitable environment for its replication, and prevent the host's antiviral immune response [80]. To identify the bovine miRNA candidates promoting BCoV pathogenesis and tissue tropism, we selected genes involved in the immune response against some coronaviruses, including BCoV. A recent study demonstrated that the BCoV nucleocapsid protein alters type I interferon production by inhibiting MDA5, MAVS, TBK1, and IRF3 in the RLR pathway [81]. Another study used an artificial intelligence bioinformatic approach to show that PI3K/AKT, MAPK, and TLRs are the three most significant pathways involved in COVID-19 infection [68]. At least eight of the viral proteins expressed by SARS-CoV or SARS-CoV-2 have been proven to inhibit the type I IFN response [82,83]. To understand the molecular mechanism and the involvement of some host miRNAs in regulating these pathways during BCoV replication, we analyzed the complete set of bovine miRNAs against host immune regulatory pathways. The results showed that bta-miR-2338 targeted 10 critical genes involved in the host immune response (Figure 8A). In addition, bta-miR-6535, bta-miR-2392, and bta-miR-12054 each could potentially target nine genes involved in the host immune response (Figure 8B-D). Remarkably, all four of these miRNAs (bta-miR-2338, bta-miR-6535, bta-miR-2392, and bta-miR-12054) also target CD4 T helper cell receptors (Figure 9). The KEGG gene enrichment analysis performed for these miRNAs supported the above statement and showed the shared involvement of
these miRNAs in influencing receptor signaling pathways of T cells, B cells, toll-like receptors (TLRs), and the RIG-I receptor (Figure 12). All of this evidence strongly suggests that bta-miR-2338, bta-miR-6535, bta-miR-2392, and bta-miR-12054 could potentially enhance the propagation/replication of BCoV.

Beyond their roles in viral propagation and host immune response activation, miRNAs can also mediate tissue tropism upon virus infection. In rats, pseudorabies virus (PRV) infection was associated with several host miRNAs differentially expressed in the lungs and spleen, regulating the respiratory and immune systems and promoting tissue tropism [37]. In this study, the GO functional annotation showed that these bovine miRNAs (bta-miR-2338, bta-miR-6535, bta-miR-2392, and bta-miR-12054) primarily targeted the host cell cycle, protein binding, DNA binding, RNA binding, the cytosol, the cytoplasm, and the nucleus (Figure 13). Furthermore, an online prediction tool also indicated a high level of expression of bta-miR-2338 in different bovine tissues, including the small intestine, heart tissues, and kidney cells (Figure 10). These observations imply that over-expression of bta-miR-2338 upon BCoV infection can positively regulate viral pathogenesis and tissue tropism in the host intestine and kidney.

Potential Role of miRNAs as Biological Markers for BCoV Infection

Several in silico studies have been conducted to predict host miRNA interactions with different viruses, particularly in the case of SARS-CoV-2 infection in humans [84,85]. Although miRNAs target individual genes, advanced research has demonstrated that miRNAs can modulate complete signal transduction pathways. Understanding the full range of miRNA functions and their roles in the pathogenesis and control of some diseases has sparked widespread interest in their use to regulate immune pathways, offering a promising therapeutic option for many emerging diseases. The current study aimed to identify potential bovine miRNAs that can be used as diagnostic, therapeutic, and genetic biomarkers for BCoV infection in cattle.

This study highlights the significance of bovine miR-193a-3p, miR-494, miR-2375, and miR-12059 as potential therapeutic agents against BCoV infection. Our findings demonstrate that these miRNAs have potential roles in fine-tuning BCoV replication. On the one hand, miR-193a-3p, miR-494, miR-2375, and miR-12059 target the viral genome and may inhibit its replication. On the other hand, they inhibit the host immune suppressor pathway. Bovine miR-494 shares a homologous seed region with human miR-494-3p. A recent study proposed that hsa-miR-494-3p was significantly altered in the studied COVID-19 patients [36]. In addition, human miR-494 has been reported as a novel therapeutic marker for human breast cancer [86,87]. These findings suggest that bovine miR-494 could be a potential therapeutic marker against BCoV. Furthermore, by using different bioinformatic prediction tools, we believe four bovine miRNAs (bta-miR-2338, bta-miR-6535, bta-miR-2392, and bta-miR-12054) could influence host immune mechanisms by targeting several immune regulatory genes and transcription factors. In this regard, McDonald et al.
[35] found that miR-2392 was detected in the blood and urine of COVID-19-positive patients but was absent in COVID-19-naive patients. Furthermore, they designed miRNA-based antiviral therapeutics targeting miR-2392 that significantly reduced SARS-CoV-2 in hamsters and may inhibit COVID-19 in humans [35]. Altogether, these findings highlight the potential of miR-2392 as a biomarker for the diagnosis or prognosis of both BCoV and COVID-19.

miRNA expression has also been reported to change during the progression of important infectious diseases of cattle, such as bovine viral diarrhea (BVD) and foot-and-mouth disease (FMD). Evidence shows that miR-423-5p and miR-151-3p exhibit differential expression patterns across different time points post-BVD infection, indicating their potential significance in BVD infection [88]. Another study showed that miR-17-5p, miR-31, and miR-1281 are potential biomarkers for foot-and-mouth disease virus (FMDV), offering insights into both acute infection and viral persistence [89]. Apart from viral infections, several studies have shown the importance of miRNAs as biomarkers in bacterial infections affecting cattle. Wang et al. [90] showed that miR-199a plays an important role in Mycobacterium bovis infection by downregulating host IFN-β expression. Similarly, Iannaccone et al. [91] identified miR-146a as a prognostic biomarker for bovine tuberculosis infection.

In conclusion, host cell miRNAs may act as critical mediators in the differential tropism of BCoV infection (enteric/respiratory). These miRNAs represent a promising avenue, especially as diagnostic and genetic markers for BCoV. They may also pave the way for developing novel miRNA-based vaccines against BCoV in the future. Further studies are needed to explore the roles of miRNAs in the molecular pathogenesis and immune response/evasion of BCoV.

Limitations of the Current Study

The main limitations of the current study were (1) the lack of data on bovine miRNAs in the public domain, (2) the limited number of studies on the roles of miRNAs in the field of BCoV, and (3) the limited number of online tools and algorithms that can help in predicting the functions of bovine miRNAs and their target genes.

Figure 3. Multiple sequence analysis of the BCoV-S1-NTD sugar-binding site: (A) Bovine miR-193a-3p binding to the S1-NTD sugar-binding site tyrosine-162 residue (TAT) of the spike gene of BCoV. (B) Bovine miR-494 binding to the S1-NTD sugar-binding site histidine-185 residue (TGG) of the spike gene of BCoV. The green box shows the exact S1-NTD sugar-binding site. The red box shows the candidate miRNAs' seed region with complementary binding sites in the BCoV-S glycoprotein.
Figure 4. Multiple sequence analysis of the BCoV-S1/S2 furin cleavage site and the conserved S2 nonamer site: (A) bta-miR-2375 binding to the S1/S2 furin cleavage site of the BCoV-S glycoprotein; (B) bta-miR-12059 binding to the conserved S2 nonamer site of the spike gene of BCoV. The green boxes show the S1/S2 furin cleavage site and the conserved S2 nonamer site. The red boxes show the miRNAs' seed region binding sites within the BCoV-S gene.

Figure 5. KEGG pathway enrichment analysis for some selected key host cell miRNAs (miR-193a-3p, miR-494, miR-2375, and miR-12059): The number on the peak of each bar shows the number of genes in specific pathways targeted and differentially expressed by these miRNAs. The column on the left shows the target pathways. The column on the right indicates KEGG pathways assigned to different categories. The miRWalk analysis and the pathway enrichment category plots were produced with https://www.bioinformatics.com.cn/en (accessed on 31 January 2021), a free online platform for data analysis and visualization.

Figure 6. GO pathway enrichment analysis for some selected host cell miRNA candidates (miR-193a-3p, miR-494, miR-2375, and miR-12059). The number on the peak of each bar shows the number of genes in specific pathways targeted and differentially expressed by these miRNA candidates. The miRWalk analysis and the GO term BP, CC, and MF three-in-one bar plots were produced with https://www.bioinformatics.com.cn/en, a free online platform for data analysis and visualization.

Figure 8. List of some selected bovine miRNAs and their potential targeting of some key host immune regulatory genes: (A) bta-miR-2338 targeted genes; (B) bta-miR-6535 targeted genes; (C) bta-miR-2392 targeted genes; and (D) bta-miR-12054 targeted genes. The miRNA-gene binding predictions were performed with miRWalk software.

Figure 9. Network analysis of predicted bovine miRNAs targeting host immune regulatory genes: The binding of bovine miRNAs to their potential target genes was predicted with miRWalk. The figure showing the network interaction was developed in Cytoscape.

Figure 10. Gene atlas showing the expression profiles of some selected bovine miRNAs in various host tissues: The tissue origin of the bovine candidates was identified using the RumimiR database. The figure showing the network interaction of miRNAs with bovine tissues was produced using Cytoscape.
Figure 11. Network analysis of some selected bovine miRNAs targeting some important host transcription factors: The targeting of the selected bovine miRNAs to their target genes was predicted with miRWalk. The figure showing the network interaction was produced using Cytoscape.

Figure 12. KEGG pathway enrichment analysis of some selected bovine miRNAs potentially targeting some immune regulatory genes. The number displayed on the peak of each bar shows the number of genes involved in specific pathways targeted by these miRNAs. The column on the left displays the name of each pathway. The column on the right indicates KEGG pathways assigned to different categories. The miRWalk analysis and pathway enrichment category plot were produced using https://www.bioinformatics.com.cn/en, a free online platform for data analysis and visualization.

Figure 13. GO pathway enrichment analysis of bovine miRNAs targeting the immune regulatory genes. The counts represent the gene numbers (X-axis). The analysis was performed with miRWalk, and the GO term Biological Process (BP), Cellular Components (CC), and Molecular Function (MF) enriched horizontal bars were plotted with https://www.bioinformatics.com.cn/en, a free online platform for data analysis and visualization.

Table 1. List of BCoV genome isolates and their demographic data retrieved from GenBank.

Table 2. Mapping the bovine miRNAs targeting some critical domains in the BCoV-S glycoprotein.

Table 3. Hybridization of some host cell miRNAs potentially targeting some critical domains of the BCoV-S glycoprotein.
Water transport to the core–mantle boundary

Abstract
Water is transported to Earth's interior in lithospheric slabs at subduction zones. Shallow dehydration fuels hydrous island arc magmatism but some water is transported deeper in cool slab mantle. Further dehydration at ∼700 km may limit deeper transport but hydrated phases in slab crust have considerable capacity for transporting water to the core-mantle boundary. Quantifying how much remains the challenge.

Water can have remarkable effects when exposed to rocks at high pressures and temperatures. It can form new minerals with unique properties and often profoundly affects the physical, transport and rheological properties of nominally anhydrous mantle minerals. It has the ability to drastically reduce the melting point of mantle rocks to produce inviscid and reactive melts, often with extreme chemical flavors, and these melts can alter surrounding mantle with potential long-term geochemical consequences. At the base of the mantle, water can react with core iron to produce a super-oxidized and hydrated phase, FeO2Hx, with the potential to profoundly alter the mantle and even the surface and atmosphere redox state, but only if enough water can reach such depths [1].

Current estimates for bulk mantle water content based on the average H2O/Ce ratio of oceanic basalts from melt inclusions and the most un-degassed basalts, coupled with mass balance constraints for Ce, indicate a fraction under one ocean mass [2], a robust estimate as long as the basalts sampled at the surface tap all mantle reservoirs. The mantle likely contains some primordial water but given that the post-accretion Earth was very hot, water has low solubility and readily degasses from magma at low pressures, and its solubility in crystallizing liquidus minerals is also very low, the mantle just after accretion may have been relatively dry. Thus, it is plausible that most or even all of the water in the current mantle is 'recycled', added primarily by subduction of hydrated lithospheric plates. If transport of water to the core-mantle boundary is an important geological process with planet-scale implications, then surface water incorporated into subducting slabs and transported to the core-mantle boundary may be a requirement.

Water is added to the basaltic oceanic crust and peridotitic mantle in lithospheric plates (hereafter, slab crust and slab mantle, respectively) at mid-ocean ridges, at transform faults, and in bending faults formed at the outer rise prior to subduction [3]. Estimates vary but about 1 × 10^12 kg of water is currently subducted each year into the mantle [4], and at this rate roughly 2-3 ocean masses could have been added to the mantle since subduction began. However, much of this water is returned to the surface through hydrous magmatism at convergent margins, which itself is a response to slab dehydration in an initial, and large, release of water. Meta-basalt and metasediments comprising the slab crust lose their water very efficiently beneath the volcanic front because most slab crust geotherms cross mineral dehydration or melting reactions at depths of less than 150 km, and even if some water remains stored in minerals like lawsonite in cooler slabs, nearly complete dehydration is expected by ∼300 km [5].

Figure 1. (a) … [6,10,12]. Slab geotherms are after those in [4]. Cold slabs may transport as much as 5 wt% water past 'choke point 1' in locally hydrated regions of the slab mantle, whereas slab mantle is dehydrated in warmer slabs. Colder slab mantle that can transport water into the transition zone will undergo dehydration at 'choke point 2'. How much water can be transported deeper into the mantle and potentially to the core depends on the dynamics of fluid/melt segregation in this region. (b) Schematic showing dehydration in the slab mantle at choke point 2. Migration of fluids within slab mantle will result in water dissolving into bridgmanite and other nominally anhydrous phases with a bulk storage capacity of ∼0.1 wt%, potentially accommodating much or all of the released water. Migration of fluids out of the slab into ambient mantle would also hydrate bridgmanite and other phases and result in net fluid loss from the slab. Conversely, migration of hydrous fluids into the crust could result in extensive hydration of meta-basalt, with water accommodated first in nominally anhydrous phases like bridgmanite, Ca-perovskite and NAL phase, but especially in dense SiO2 phases (stishovite and CaCl2-type) that can host at least 3 wt% water (∼0.6 wt% in bulk crust).

Peridotitic slab mantle may have much greater potential to deliver water deeper into the interior. As shown in Fig. 1a, an initial pulse of dehydration of slab mantle occurs at depths less than ∼200 km in warmer slabs, controlled primarily by breakdown of chlorite and antigorite when slab geotherms cross a deep 'trough', sometimes referred to as a 'choke point', along the dehydration curve (Fig. 1a) [6]. But the slab mantle in cooler subduction zones can skirt beneath the dehydration reactions, and antigorite can transform directly to the hydrated alphabet silicate phases (Phases A, E, superhydrous B, D), delivering perhaps as much as 5 wt% water in locally hydrated regions (e.g. deep faults and fractures in the lithosphere) to transition zone depths [6]. Estimates based on mineral phase relations in the slab crust and the slab mantle coupled with subduction zone thermal models suggest that as much as 30% of subducted water may have been transported past the sub-volcanic dehydration front and into the deeper mantle [4], although this depends on the depth and extent of deep hydration of the slab mantle, which is poorly constrained. Coincidentally, this also amounts to about one ocean mass if water subduction rates have been roughly constant since subduction began, a figure tantalizingly close to the estimated mantle water content based on geochemical arguments [2]. But what is the likely fate of water in the slab mantle in the transition zone and beyond?

Lithospheric slabs are expected to slow down and deform in the transition zone due to the interplay among the many factors affecting buoyancy and plate rheology, potentially trapping slabs before they descend into the lower mantle [7]. If colder, water-bearing slabs heat up by as little as a few hundred degrees in the transition zone, hydrous phases in the slab mantle will break down to wadsleyite- or ringwoodite-bearing assemblages plus a hydrous fluid (Fig. 1a). Wadsleyite and ringwoodite can themselves accommodate significant amounts of water and so hydrated portions of the slab mantle would retain ∼1 wt% water. A hydrous ringwoodite inclusion in a sublithospheric diamond with ∼1.5 wt% H2O may provide direct evidence for this process [8].
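The water-budget figures quoted above can be checked with a rough calculation. The short sketch below assumes an ocean mass of about 1.4 × 10^21 kg and roughly 3 billion years of subduction at the present-day flux of ~1 × 10^12 kg/yr; both are order-of-magnitude assumptions used only for illustration, not values stated in the article.

```python
# Back-of-envelope check on the subducted-water budget discussed in the text.
# Assumed values (not from the article): ocean mass and duration of subduction.
OCEAN_MASS_KG = 1.4e21              # approximate mass of Earth's oceans
SUBDUCTION_RATE_KG_PER_YR = 1e12    # present-day water subduction flux [4]
DURATION_YR = 3e9                   # assumed ~3 Gyr of plate subduction

total_subducted = SUBDUCTION_RATE_KG_PER_YR * DURATION_YR
print(f"Total subducted water: {total_subducted / OCEAN_MASS_KG:.1f} ocean masses")

# If ~30% survives past the sub-volcanic dehydration front, as estimated in [4]:
retained = 0.3 * total_subducted
print(f"Water retained past the arc: {retained / OCEAN_MASS_KG:.1f} ocean masses")
```

With these assumptions the totals come out near two ocean masses subducted and a bit under one ocean mass retained, consistent with the figures quoted in the text.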
But whether or not slabs heat up in the transition zone, as they penetrate into the lower mantle, phase D, superhydrous phase B or ringwoodite in the slab mantle will dehydrate at ∼700-800 km due to another deep trough, or second 'choke point', transforming into an assemblage of nominally anhydrous minerals dominated by bridgmanite (∼75 wt%) with a relatively much lower bulk water storage capacity (<∼0.1 wt%) [9] (Fig. 1a). Water released from the slab mantle should lead to melting at the top of the lower mantle [10], and indeed, low shear-wave velocity anomalies at ∼700-800 km below North America may be capturing such dehydration melting in real time [11].

The fate of the hydrous fluids/melts released from the slab in the deep transition zone and shallow lower mantle determines how much water slabs can carry deeper into the lower mantle. Presumably water is released from regions of the slab mantle where it was originally deposited, like the fractures and faults that formed in the slab near the surface [3]. If hydrous melts can migrate into surrounding water-undersaturated peridotite within the slab, then water should dissolve into bridgmanite and coexisting nominally anhydrous phases (Ca-perovskite and ferropericlase) until they are saturated (Fig. 1b). And because bridgmanite (water capacity ∼0.1 wt%) dominates the phase assemblage, the slab mantle can potentially accommodate much or all of the released water depending on details of how the hydrous fluids migrate, react and disperse. If released water is simply re-dissolved into the slab mantle in this way then it could be transported deeper into the mantle mainly in bridgmanite, possibly to the core-mantle boundary. Water solubility in bridgmanite throughout the mantle pressure-temperature range is not known, so whether water would partially exsolve as the slab moves deeper, stabilizing a melt or another hydrous phase, or remains stable in bridgmanite as a dispersed, minor component, remains to be discovered.

Another possibility is that the hydrous fluids/melts produced at the second choke point in the slab mantle at ∼700 km migrate out of the slab mantle, perhaps along the pre-existing fractures and faults where bridgmanite-rich mantle should already be saturated, and into either oceanic crust or ambient mantle (Fig. 1b). If the hydrous melts move into ambient mantle, water would be consumed by water-undersaturated bridgmanite, leading to net loss of water from the slab to the upper part of the lower mantle, perhaps severely diminishing the slab's capacity to transport water to the deeper mantle and core. But what if the water released from slab mantle migrates into the subducting, previously dehydrated, slab crust?

Although slab crust is expected to be largely dehydrated in the upper mantle, changes in its mineralogy at higher pressures give it the potential to host and carry significant quantities of water to the core-mantle boundary. Studies have identified a number of hydrous phases with CaCl2-type structures, including δ-AlOOH, ε-FeOOH and MgSiO2(OH)2 (phase H), that can potentially stabilize in the slab crust in the transition zone or lower mantle. Indeed, these phases likely form extensive solid solutions such that an iron-bearing, alumina-rich δ-H solid solution should stabilize at ∼50 GPa in the slab crust [12], but only after the nominally anhydrous phases in the crust (aluminous bridgmanite, stishovite, Ca-perovskite and NAL phase) saturate in water.
Once formed, the δ-H solid solution in the slab crust may remain stable all the way to the core-mantle boundary if the slab temperature remains well below the mantle geotherm; otherwise, a hydrous melt may form instead [12] (Fig. 1a). But phase δ-H solid solution and the other potential hydrated oxide phases, intriguing as they are as potential hosts for water, may not be the likely primary host for water in slab crust. Recent studies suggest a new potential host for water: stishovite and post-stishovite dense SiO2 phases [13,14]. SiO2 minerals make up about a fifth of the slab crust by weight in the transition zone and lower mantle [15], and recent experiments indicate that the dense SiO2 phases, stishovite (rutile structure, very similar to the CaCl2 structure) and CaCl2-type SiO2, structures that are akin to phase H and other hydrated oxides, can host at least 3 wt% water, which is much more than previously considered. More importantly, these dense SiO2 phases apparently remain stable and hydrated even at temperatures as high as the lower mantle geotherm, unlike other hydrous phases [13,14]. And as a major mineral in the slab crust, SiO2 phases would have to saturate with water first before other hydrous phases, like δ-H solid solution, would stabilize. If the hydrous melts released from the slab mantle in the transition zone or lower mantle migrate into the slab crust, the water would dissolve into the undersaturated dense SiO2 phase (Fig. 1b). Thus, hydrated dense SiO2 phases are possibly the best candidate hosts for water transport in slab crust all the way to the core-mantle boundary due to their high water storage capacity, high modal abundance and high pressure-temperature stability.

Once a slab makes it to the core-mantle boundary region, water held in the slab crust or the slab mantle may be released due to the high geothermal gradient. Heating of slabs at the core-mantle boundary, where temperatures may exceed 3000°C, may ultimately dehydrate SiO2 phases in the slab crust or bridgmanite (or δ-H) in the slab mantle, with released water initiating melting in the mantle and/or reaction with the core to form hydrated iron metal and super oxides, phases that may potentially explain ultra-low seismic velocities in this region [1,10]. How much water can be released in this region from subducted lithosphere remains a question that is hard to quantify and depends on dynamic processes of dehydration and rehydration in the shallower mantle, specifically at the two 'choke points' in the slab mantle, processes that are as yet poorly understood. What is clear is that subducting slabs have the capacity to carry surface water all the way to the core in a number of phases, and possibly in a phase that has previously seemed quite unlikely, dense SiO2.
Fungal–Bacterial Co-Infections and Super-Infections among Hospitalized COVID-19 Patients: A Systematic Review

This study systematically reviewed fungal–bacterial co-infections and super-infections among hospitalized COVID-19 patients. A PRISMA systematic search was conducted. In September 2022, the Medline, PubMed, Google Scholar, PsychINFO, Wiley Online Library, NATURE, and CINAHL databases were searched for all relevant articles published in English. All articles that exclusively reported the presence of fungal–bacterial co-infections and super-infections among hospitalized COVID-19 patients were included. Seven databases produced 6937 articles as a result of the literature search. Twenty-four articles met the inclusion criteria and were included in the final analysis. The total number of samples across the studies was 10,834, with a total of 1243 (11.5%) patients admitted to the intensive care unit (ICU). Of these patients, 535 underwent mechanical ventilation (4.9%), 2386 (22.0%) were male, and 597 (5.5%) died. Furthermore, hospitalized COVID-19 patients have a somewhat high rate (23.5%) of fungal–bacterial co-infections and super-infections. Moreover, for SARS-CoV-2 patients who have a chest X-ray that suggests a bacterial infection, who require immediate ICU admission, or who have a seriously immunocompromised condition, empiric antibiotic therapy should be taken into consideration. Additionally, the prevalence of co-infections and super-infections among hospitalized COVID-19 patients may have an impact on diagnosis and treatment. It is crucial to check for fungal and bacterial co-infections and super-infections in COVID-19 patients.

Introduction

The respiratory illness COVID-19, which is the cause of the present COVID-19 pandemic, is caused by the coronavirus known as SARS-CoV-2 [1]. On 30 January 2020 and 11 March 2020, respectively, the World Health Organization (WHO) declared the outbreak a public health emergency of international concern and a pandemic [2]. As of 7 August 2022, 581.8 million confirmed cases of COVID-19 and 6.4 million deaths had been reported globally [3]. According to a meta-analysis, 17% of SARS-CoV-2 infections are asymptomatic, and asymptomatic individuals are 42% less likely to transmit the virus [4]. In addition, there is uncertainty about reinfection and long-term immunity [5]. Although reports have suggested that reinfection is happening with varying intensity, it is unknown how frequently it occurs [6]. Recent research implies that immunizations may not provide lifelong protection against the virus and that herd immunity may not be able to eradicate the virus if reinfection is a common occurrence [7]. On the other hand, it is uncertain how common co-infections and super-infections are among humans worldwide [8]. A co-infection is an infection that occurs at the same time as the first infection, whereas a super-infection is an infection that occurs after a previous infection, particularly one caused by microorganisms that are resistant to previously employed antibiotics [9,10]. Co-infections and super-infections with SARS-CoV-2 are usually caused by community-acquired bacteria, such as Streptococcus pneumoniae, Haemophilus influenzae, or S. aureus, or by hospital-acquired, multidrug-resistant bacteria and fungi [10]. Furthermore, bacterial and fungal co-infections and super-infections represent important complications of viral diseases and may be associated with worse outcomes [11].
In contrast to the well-documented phenomenon of co-infections and super-infections with bacterial, viral, and other pathogens in influenza, SARS, MERS, and other respiratory viral illnesses, information on bacterial and fungal co-infections and super-infections in SARS-CoV-2 patients is scarce and still developing [12]. Patients who have SARS-CoV-2 are at significant risk of contracting a nosocomial co-infection and may need to stay in the hospital for a long time, either in regular wards or the intensive care unit [13]. SARS-CoV-2 patients may also experience severe pneumonia, necessitating hospitalization, intubation, and transfer to the intensive care unit [14]. The likelihood of developing multidrug resistance increases when these co-infections are treated with empiric, broad-spectrum antibiotic therapy [13]. The global proportion of microorganisms that are resistant to multiple drugs has also been reported [15]. Although the primary cause of death in this cohort is respiratory failure brought on by SARS-CoV-2 infection, multiple observations have shown that hospitalized SARS-CoV-2 patients may also be more vulnerable to co-infections and super-infections [16]. Additionally, co-infections and super-infections, particularly in countries with limited resources, are thought to play a role in the relatively high incidence of severe infection and mortality in SARS-CoV-2. These factors, along with a lack of natural immunity and viral replication in the lower respiratory tract, are thought to contribute to severe lung injury and acute respiratory distress syndrome [17]. Therefore, the aim of the current study was to systematically review fungal-bacterial co-infections and super-infections among hospitalized COVID-19 patients.

Protocol Registration

The protocol for this systematic review was submitted and registered with the International Prospective Register of Systematic Reviews (PROSPERO) (Code no. CRD42022368456). In addition, this systematic review followed the guiding principles of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [18].

Eligibility Criteria

Articles that fulfilled the following eligibility criteria were included in this systematic review:
• Participants: patients of any age with a confirmed positive COVID-19 test who developed fungal-bacterial co-infections and super-infections during the hospital stay.
• Exposure: severe acute respiratory syndrome coronavirus 2.
• Outcome: fungal-bacterial co-infections and super-infections.

Search Strategy

Systematic searches were conducted on the Medline, PubMed, Google Scholar, PsychINFO, Wiley Online Library, NATURE, and CINAHL databases between 1 January 2020 and 20 September 2022. The search was restricted to English-language publications, with no limitation on region. A collection of search terms was produced using truncations, Medical Subject Headings (MeSH), and Boolean operators (Table 1). The asterisk (*) represents any group of characters, including no character.

Data Extraction

The Mendeley reference management program was used to compile all articles from automated database searches. After deleting duplicates, screening was performed to ensure the studies met the qualification requirements. Articles were screened in three stages based on title, abstract, and full text. To facilitate the comparison and synthesis of studies, key information pertinent to the study's focus was methodically gathered and collated.
Table 2 lists the first author's name, publication date, country of origin, the total number of SARS-CoV-2-positive patients who underwent co-pathogen testing, the total number of co-infected patients admitted to the intensive care unit (ICU), the total number of participants on mechanical ventilation, the total number of deaths, the total number of bacterial and fungal co-infections and super-infections, the types of organisms, and the total number of antimicrobials used. The study design, sample size, participant age, and key findings of each included article were extracted and are reported in Table S1.

Assessment of Quality

The National Institutes of Health Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies, which addresses the design, selection bias, data collection, confounders, blinding, and attrition, was used to evaluate the quality of the eligible papers. Overall grades of 'good', 'fair', or 'poor' were provided for each article [41].

Data Analysis

We were unable to perform a meta-analysis since the outcome measures varied amongst the articles [42]. Rather, a narrative synthesis was carried out. This made it possible to take into account confounding, mediating, and moderating variables, which are frequently neglected in meta-analyses. Each study was introduced before being compared, analyzed, and then synthesized.

Results

The seven databases used for the literature search produced 6937 articles. After 1470 duplicates were eliminated, 3847 unique articles were disqualified from the screening on the basis of the title. After screening the remaining 1620 articles by abstract, 1337 were eliminated, leaving 283 articles. Two hundred fifty-nine articles were found to be ineligible after reviewing the entire contents, with not meeting the participant criteria being the most frequent cause. More information on the exclusion criteria is provided in the PRISMA flowchart [18] (Figure 1). In the end, 24 articles met the criteria for this systematic review.

Description of Articles

The reviewed articles are summarized in Table 2.
Twelve articles were published in 2020, six in 2021, and six in 2022. In addition, six studies were conducted in China, four in the United Kingdom (U.K.), three in the United States (U.S.), three in Italy, two in Iran, and one study each was conducted in Pakistan, Egypt, Scotland, Saudi Arabia, Palestine, and Spain. The participants' ages ranged from 2 to 99 years. Three articles did not mention the age of the participants [6,21,31]. All articles included hospitalized patients. The total number of samples across the studies was 10,834, with a total of 1243 (11.5%) participants being admitted to the ICU; 535 (4.9%) underwent mechanical ventilation, and 597 (5.5%) died. Laboratory techniques for co-pathogen detection within the articles included respiratory samples with RT-PCR tests; serologic tests (antibodies); RT-PCR tests with respiratory and/or blood cultures; respiratory and/or blood cultures alone; some articles tested both serology and RT-PCR, and others did not specify their testing methods. Additionally, data regarding the names of bacterial and fungal species were not mentioned in the included articles. Moreover, Wang et al. [19], Chen et al. [23], and Wang et al. [30] reported that 8, 15, and 5 antifungal agents were used among the study participants, respectively, but they did not specify the type of antifungal agents. All articles included retrospective cohorts in their design, except Ramadan et al.'s article, which used a prospective cohort [6]. The retrospective cohort design of 23 articles is susceptible to three common sources of bias: information, confounding, and interaction biases. In addition, a prospective cohort design could be affected by loss to follow-up [43]. The majority of articles were single-center ones, and only four studies were multi-center ones.

Discussion
Co-infections and super-infections are typically caused by bacteria that have been acquired in the community or in a hospital. In addition, co-infections and super-infections may become more likely as a result of multidrug-resistant bacteria or fungi [10]. Furthermore, fungal-bacterial co-infections and super-infections are considered to be important complications of viral diseases and may be associated with worse outcomes [11]. This systematic review was conducted to examine the presence of fungal-bacterial co-infections and super-infections among hospitalized patients with SARS-CoV-2. In this review, all articles that exclusively reported the presence of fungal-bacterial co-infections and super-infections among hospitalized SARS-CoV-2 patients were included. Twenty-four articles met the inclusion criteria and were included in the final analysis. The included studies were conducted in China, the U.K., the U.S., Italy, Iran, Pakistan, Egypt, Scotland, Saudi Arabia, Palestine, and Spain. The total sample size of all the studies was 10,834.

Prevalence and Outcome
The main results of this review show that hospitalized SARS-CoV-2 patients have a somewhat high rate (23.5%) of fungal-bacterial co-infections and super-infections. However, the included articles often lack uniformity in both the reports and examinations of fungal-bacterial co-infections and super-infections, which may have under- or overestimated the rates of co-infections and super-infections.
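As a quick consistency check, the pooled percentages quoted above can be reproduced directly from the raw counts reported across the 24 included studies; the short sketch below assumes nothing beyond those counts.

```python
# Minimal sketch reproducing the pooled percentages quoted in the text from the
# raw counts reported across the 24 included studies.

total_patients = 10834
counts = {
    "ICU admission": 1243,
    "mechanical ventilation": 535,
    "death": 597,
}

for outcome, n in counts.items():
    pct = 100.0 * n / total_patients
    print(f"{outcome}: {n}/{total_patients} = {pct:.1f}%")
# ICU admission: 1243/10834 = 11.5%
# mechanical ventilation: 535/10834 = 4.9%
# death: 597/10834 = 5.5%
```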
According to previous research, patients with severe viral infections frequently develop fungal infections caused by Aspergillus, Candida, Cryptococcus neoformans, Pneumocystis, or other fungal species, which is associated with apparent increases in morbidity and mortality [40]. Additionally, due to severely damaged alveoli and a reduction in leukocyte counts, SARS-CoV-2 patients are susceptible to fungal infections at the later stages of the illness [43]. Comorbidities, immune-modulating therapies, widespread use of over-the-counter antibiotics, and pathological aberrations of the immune system and epithelial barriers caused by SARS-CoV-2 are likely to play a role as well [44]. A comprehensive review and meta-analysis was undertaken by Chinese researchers on 2780 confirmed SARS-CoV-2 patients from nine relevant investigations. They found that Asian patients were more likely than patients in studies from the U.K. to have a fungal co-infection, and from 0.12% to 0.15% of cases tested positive for a fungal infection following fungal culturing at admission [45]. Therefore, after the confirmation of SARS-CoV-2 infection, several researchers have proposed that patients should undergo routine screening for bacterial and fungal infections [46,47].

Antimicrobial Drugs
The findings of the current review show that the most commonly used antimicrobials in the included articles were CEP, CLR, AZM, CAR, TGC, LZD, F.Q., β-lactamase inhibitors, MET, VAN, TZP, DOX, LVX, CIP, CRO, FEP, and MEM, which are highly recommended for use in patients with community-acquired bacterial pneumonia requiring hospital admission, and also in urinary tract infections, abdominal infections, and infections caused by other susceptible bacteria. In order to decrease the inappropriate use of empiric antibiotics, Hughes also investigated a method based on the stratification of individuals who are at risk of developing a bacterial co-infection or super-infection within the first 72 h of admission [48]. The outcomes of earlier research also showed the importance of beta-lactamase inhibitors in preventing the development of antibiotic resistance by blocking serine beta-lactamases, the enzymes that deactivate the beta-lactam ring, the chemical structure shared by all beta-lactam antimicrobials [49]. However, the analysis of collected specimens from hospitalized SARS-CoV-2 patients revealed bacterial co-infection throughout the pandemic [37]. Therefore, empirical antibiotic therapy is a common component of SARS-CoV-2 treatment procedures to address potential organisms. When bacterial co-infections and super-infections are identified, especially if the patient is hospitalized in the intensive care unit, broad-spectrum antibiotic regimens are given [50]. There are currently no data available to compare SARS-CoV-2 patients who received antibacterial medicines with those who did not, which would allow the effectiveness of the treatment to be assessed. Additionally, previous research has shown that repeated antibiotic doses have a minimal effect on the disease's progression or the fatality rates of patients [48]. Additionally, the use of antibiotics is limited to cases of known or suspected bacterial infections. The current review also showed that empiric antibiotic therapy should be taken into account in SARS-CoV-2 patients who have a chest X-ray that indicates a bacterial infection and who need emergency ICU admission, or those who have a highly immunocompromised condition.
Future studies are recommended, focusing on the specific usage of antimicrobial drugs, the differences between developing and developed countries, and the influence of other variables, including socioeconomic factors, on patient outcomes.

Detection Techniques
RT-PCR tests with respiratory and/or blood cultures, serological tests (antibodies), serological and RT-PCR tests combined, and undisclosed testing methods were among the laboratory techniques for co-pathogen detection in the articles in the current review. Additionally, in the current systematic review, the majority (19 articles) used RT-PCR tests, only one article used both serology and RT-PCR tests, and the remaining four articles did not specify their testing methods. Incidences of co-infections and super-infections may have been underestimated or inflated as a result of the lack of standardization in laboratory techniques for detecting fungal-bacterial co-pathogens in the articles. Clinicians should maintain a high level of vigilance for these co-infections and super-infections, especially in critically ill patients, given the current diagnostic challenges and uncertainties relating to the risks associated with fungal-bacterial co-infections and super-infections in hospitalized SARS-CoV-2 patients. Additionally, the French High Council for Public Health advised clinicians to focus on fungal infections in SARS-CoV-2 patients, especially in cases of severe illness [51]. According to the results of the current study, 1243 (11.5%) of the patients with fungal-bacterial co-infections and super-infections were admitted to the intensive care unit (ICU), of whom 535 (4.9%) required mechanical ventilation, and 597 (5.5%) died. Bardi et al. (2021) [33] investigated nosocomial infections associated with COVID-19 in the intensive care unit. They reported that 91 confirmed nosocomial infections occurred in 57 patients while they were receiving ICU care. A large variety of Gram-positive (55%) and Gram-negative (30%) bacteria, as well as fungi (15%), were responsible for the majority of these infections, which included pneumonia (23%), tracheobronchitis (10%), and urinary tract infections (8%); the next most frequent were primary (31%) and catheter-related (25%) bloodstream infections. The authors also found that ICU stays of more than a week substantially increase the risk of co-infections and super-infections. In fact, patients who have co-infections and super-infections are more likely to need ICU care and mechanical ventilation [52]; the likelihood of developing co-infections and super-infections also rises as ICU stays lengthen. A secondary infection linked to a hospital setting is also more likely to occur when mechanical ventilation is required [53]. Additionally, super- and co-infections worsen patients' prognoses and raise the fatality rate. As a result, these infections should be actively sought at admission, and also in every case where the radiologic and clinical status has worsened despite intervention with high-dose corticosteroids. This systematic review also found that high rates of co-infections and super-infections were common among severely ill patients due to age and other predisposing factors, including COPD, asthma, diabetes, high cholesterol, and high blood pressure. More research is needed in the future to determine the underlying processes of fungal-bacterial co-infections and super-infections among hospitalized SARS-CoV-2 patients.
Strengths and Limitations
The main strengths of this review are that it is the first to systematically examine fungal-bacterial co-infections and super-infections among hospitalized patients with SARS-CoV-2, and its large sample size. However, our study has a number of limitations. The included articles often did not uniformly report or undertake examinations to detect fungal-bacterial co-infections and super-infections, which may have resulted in under- or overestimated rates of co-infections and super-infections. In addition, the retrospective cohort design of the included articles reduced the control over multiple confounders and data collection, which could increase the potential for information, confounding, and interaction biases. Another drawback was that the current systematic review was conducted by a single author.

Conclusions
Hospitalized SARS-CoV-2 patients have a somewhat high rate (23.5%) of fungal-bacterial co-infections and super-infections. In addition, the prevalence of co-infections and super-infections among hospitalized SARS-CoV-2 patients may have an impact on diagnosis and treatment. Moreover, for future diagnostics and treatment options, it is crucial to check for fungal and bacterial co-infections and super-infections in SARS-CoV-2 patients using fungal and bacterial culture assays. Additionally, the use of antibiotics should be moderated and based on the findings of sensitivity and culture tests. Finally, it is advised to use infection control measures to avoid nosocomial infections.

Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/jof9060598/s1, Table S1: The key findings of the included articles.
Funding: This research received no external funding.
Institutional Review Board Statement: The protocol of the study was registered at The International Prospective Register of Systematic Reviews (PROSPERO, registration No. CRD42022368456).
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Estimation of Boreal Forest Growing Stock Volume in Russia from Sentinel-2 MSI and Land Cover Classification: Growing stock volume (GSV) is a fundamental parameter of forests, closely related to the above-ground biomass and hence to carbon storage. Estimation of GSV at regional to global scales depends on the use of satellite remote sensing data, although accuracies are generally lower over the sparse boreal forest. This is especially true of boreal forest in Russia, for which knowledge of GSV is currently poor despite its global importance. Here we develop a new empirical method in which the primary remote sensing data source is a single summer Sentinel-2 MSI image, augmented by land-cover classification based on the same MSI image trained using MODIS-derived data. In our work the method is calibrated and validated using an extensive set of field measurements from two contrasting regions of the Russian arctic. Results show that GSV can be estimated with an RMS uncertainty of approximately 35–55%, comparable to other spaceborne estimates of low-GSV forest areas, with 70% spatial correspondence between our GSV maps and existing products derived from MODIS data. Our empirical approach requires somewhat laborious data collection when used for upscaling from field data, but could also be used to downscale global data.

Introduction
Growing stock volume (GSV), defined as the total volume of all living tree stems (excluding branches, including bark) in an area of interest or unit area such as a hectare [1], is an essential structural parameter describing a forest. Its use in assessing commercial forestry is well established [2]. It is also of direct ecological and climatological significance, being closely related to the concept of above-ground biomass (AGB) and hence carbon storage [3]. Remote sensing methods have a long history of development for estimation of GSV [1,[4][5][6][7][8][9][10], and a number of approaches have evolved since the 1990s, exploiting spaceborne visible and near-infrared (VNIR) imagery, radar, and more recently the incorporation of airborne measurements from airborne laser scanners and UAV (unmanned aerial vehicle, commonly referred to as a 'drone') observations. The simplest approaches are based on multispectral analysis of freely available VNIR imagery having a spatial resolution of the order of 10 m or coarser [11][12][13][14][15][16][17]. Useful enrichment of the available feature space has been demonstrated using multitemporal datasets [18][19][20][21], incorporating texture measures [14,22] and field-derived or satellite-derived three-dimensional information [23][24][25][26][27][28][29][30]. Other approaches are based on the use of ultra-high-resolution VNIR imagery (usually not free of cost) [31,32], radar imagery [1,[33][34][35][36][37][38][39][40][41][42][43][44][45][46], or combinations of VNIR and radar imagery [47][48][49][50][51][52][53]. We should also note approaches based on the direct use of spaceborne laser profiling [54] and those that explicitly incorporate a landscape characterisation, derived from satellite data, into a VNIR [55] or radar [56] analysis. Finally, Zharko et al. [57] have demonstrated the utility of winter VNIR imagery in sparsely forested areas subject to snow cover, where the optical contrast between snow and vegetation can be exploited. Figure 1 attempts to give a simple overview of the current situation regarding remote sensing estimation of GSV.
It has been compiled from quantitative data abstracted from many publications [13,14,16,[21][22][23]25,27,32,33,37,40,47,48,50,[56][57][58][59][60]. As Figure 1 shows, typical accuracies for spaceborne methods are approximately 20 to 40% RMSE, becoming somewhat poorer at lower values of GSV.

Figure 1. Horizontal axis shows the mean GSV of the plot or plots included in the studies, and the vertical axis shows the estimated error in the calculated GSV expressed as a percentage of the mean. Methods are roughly classified as airborne (using data from ALS (Airborne Laser Scanning) or UAVs), optical (satellite data from Sentinel-2 and Landsat), ultra-high resolution satellite data (with spatial resolution of 1 m or finer), radar (satellite radar data), or 'multiple', using more than one data type. 'Present' summarises the performance of the method developed in the present work. The dashed line is an empirical fit to the data except for the 'airborne' class, and has the formula RMSE = 436 (GSV)^(−1/2), where RMSE is in % and GSV in m³ ha⁻¹.

The work presented in this paper is focused specifically on estimation of GSV in the Russian boreal forest. The boreal forest generally, and the Russian part in particular, is poorly inventoried [1,61] and yet its importance in our understanding of the global climate system is high [62]. Globally, the boreal forest accounts for 31% of the area, 20% of the GSV, and 13% of the above-ground biomass of all forest, whereas Russia, which is dominated by boreal forest, accounts for 20% of the global forest area [63]. However, a recent study based on remote sensing data has shown that the growing stock of Russian forests is 39% higher than the value of official statistics in the State Forest Register [61]. Multispectral VNIR remote sensing-based GSV estimation for boreal forest is particularly challenging because of the low canopy coverage, combined with the fact that the field layer is often composed of dwarf shrubs that are spectrally not very different from the forest canopy [64]. Although airborne methods have undoubted potential to improve our ability to estimate GSV for boreal forests, it will often be impractical to obtain airborne data, especially over large areas. There is thus an incentive to develop simple optimised estimation algorithms that exploit freely and frequently available satellite data. In the present work, we develop one such algorithm that uses an empirical estimation function combining both Sentinel-2 MSI imagery and a land-cover classification derived from this imagery and trained using MODIS (Moderate Resolution Imaging Spectroradiometer) data. The novel feature of this new method is thus the inclusion of land-cover as a potentially informative source of information on GSV, in addition to the multispectral bands of the MSI imagery itself. The explicit aim of this approach is to minimize its dependence on multiple data sources that may be difficult to acquire on a routine basis. The algorithm is locally tuned, and developed for two contrasting areas of the Russian boreal forest. Although we do not assert that these represent the complete range of types of boreal forest in Russia, their contrasting natures allow the method's potential for generalisation to be assessed to some extent.

Materials and Methods
Field data were collected as part of the project "Multiplatform remote sensing of the impact of climate change on northern forests of Russia", a Russian-UK collaboration that ran from 2018 to 2021.
The broad aim of this project was to develop a systematic understanding of the distribution of biomass in the Russian boreal forest, its changes during the first 20 years of the present century, and the climatic influences on it. Fieldwork took place in July-August 2018 in and around the Khibiny mountains in north-western Russia, and in July-August 2019 in the vicinity of Yakutsk, Sakha Republic, in north-eastern Russia (Figure 2). The boreal forest in the Khibiny region is dominated by pine, spruce, and birch species, and lies within the area of Russian forest classed as 'accessible' [65]; whereas around Yakutsk it is dominated by larch, pine, and birch species and is classed as 'hard-to-reach'. Logistical support for the Khibiny fieldwork was provided by the Khibiny Educational and Scientific Station (67°38′ N, 33°44′ E) [66].

Figure 2. Location of study areas within the Russian boreal forest. Background map shows dominant forest types and is simplified from [67] using original data from [68]. Map prepared by the authors.

Within each of the two main study areas, a number of sample plots of 20 m × 20 m area were established, geolocated using non-differential GPS. Plots were accessed by road vehicle and on foot, and were selected to span, as far as practicable, the range of forest type and condition characteristic of the areas. The plots were judged by eye to be homogeneous in tree density. Where the number density of trees was particularly high, smaller plots were occasionally chosen. Data suitable for estimating the GSV per plot were collected by measuring all stem diameters d (diameter at breast height, DBH, at 1.3 m) and tree heights h, together with tree genera, in each 20 m × 20 m plot. Trees were counted and measured only if their heights exceeded 2 m. (A height threshold was quicker to implement in the field than the equivalent DBH threshold of approximately 3 cm. The data collection protocol in this study also complied with previous studies of forests near the treeline, where a tree was defined as woody vegetation over 2 m tall. The resulting data are more complete in the number of stems per site, which is important in comparing to previous data.) Diameters were measured using a Haglöfs Mantax 40 cm tree caliper or by passing a flexible measuring tape around the stem, and heights were measured using an optical clinometer (Suunto PM5, Silva Clino Master CM and similar) with a measuring tape to determine the distance from the observer to the base of the tree. The accuracy of both methods was estimated to be 5%. In total, 1858 trees were measured across a total of 33 field plots (total area sampled was 8675 m²). Stem volumes V of individual trees were calculated from their geometrical measurements making use of the collection of allometric formulae compiled by Zianis et al. [69]. Although this compilation is explicitly for European trees, we propose that the shapes of individual trees in Siberia will not differ substantially from those of similar species in Europe. We justify this assumption on the basis of, first, our own informal observations and, second, the fact that the published formulae do not suggest large variations in stem volume between different tree species of the same genus, height, and DBH. Specifically, we proceeded as follows. For a given tree genus (e.g., Betula), all the allometric formulae contained within the compilation of Zianis et al.
[69] for trees of the same genus were applied using the measured tree dimensions, and the median value of the calculated stem volumes was adopted as the value for that particular tree. The formulae were transcribed into the programming language GNU Octave [70] in order to facilitate the huge number of calculations required by this method. Some obvious errors in the published formulae (wrong units specified for the measurement of tree stem volume, height, or DBH, which produced errors of more than one order of magnitude) were corrected. By comparing the within-species and between-species variations of GSV within each genus, we estimate the uncertainty arising from this approach as 0.007 m³. On this basis, we estimate the uncertainty arising from the application of the Zianis et al. formulae to be of the order of 10%, so including the uncertainty in the measurements themselves, we estimate the fractional uncertainty in a calculated value of V to be 25%. By extrapolating the height-volume and diameter-volume relationships established from our measurements, we estimate that the proportion of total tree GSV that we did not sample was less than 0.1%. Four Sentinel-2 Multispectral Imager (MSI) images were used in this study, consisting of a summer image and a winter image for each of the two main study areas (Table 1). Selection criteria were that the images should provide good coverage of all the field plots, should have no identifiable cloud or smoke cover over the field plots and be generally as free of cloud as possible, and that they should correspond to the year in which the corresponding field data were collected. In practice, application of these criteria required that three of the four images were supplied at level 1C (top-of-atmosphere reflectance) whereas one was available at level 2A (atmospherically corrected). The Sakha summer image exhibited several areas of cloud and smoke that were not adequately removed using the supplied cloud mask layer, and these were masked out manually. Following these steps, the images were clipped to rectangular areas large enough to include all the training plots. The resulting areas after masking (shown in Figure 3) were 7795 km² (Khibiny) and 22,042 km² (Sakha). The rather large difference in area (factor 2.8) was mainly a consequence of the spatial distribution of the field plots, controlled by the practical difficulties of access in the study areas. Land-cover classifications were produced for the two study areas from the available MSI images using the Semi-Automatic Classification plugin (version 6.4.5: [71]) for QGIS version 3.14 [72]. Training data for these classifications were produced by generating classified point objects from the MODIS-based Russian land cover map [73,74] available through the VEGA system of the Space Research Institute of the Russian Academy of Sciences [75]. The 'Point object' tool in the VEGA system was used to generate the classified point objects. VEGA superimposes a grid over the MODIS land-cover map for a chosen extent, and if a node of this grid falls into a pixel that is surrounded by pixels of the same land-cover class, then the grid node becomes a point object of this land-cover class. The points are then filtered to provide a similar number of points for each land-cover class.
Square buffers of 30 m × 30 m were applied to these point objects, and spectral signatures for each of the VEGA-defined land-cover classes were calculated over the eight available 10-m resolution bands (i.e., bands 2, 3, 4, and 8 of the summer and winter images) using the pixels which intersected with the square buffers. Some manual intervention was required to remove obviously incorrect point training data. In all cases, these were observed to be a consequence of the mismatch in spatial resolution between the MSI images and the MODIS data used to generate the training points (for example, a point classified as needleleaf forest that actually lay within a small lake). In total, 7578 classified points in 13 land-cover classes, and 6524 classified points in 10 land-cover classes were generated for Khibiny and Sakha, respectively. The two combined 8-band Sentinel-2 summer and winter images, one for Khibiny and one for Sakha, were then classified using the relevant spectral signatures and the maximum likelihood algorithm. The sets of image classes were then reduced to include only those occurring within 3 × 3 pixel neighbourhoods of the centre locations of the field plots (8 and 5 remaining classes in Khibiny and Sakha, respectively). These image classes were further generalised, as described in Section 3, to four and three in number, respectively, as our method of estimating GSV depends on limiting the number of potential explanatory variables. Modelling of GSV per unit area, G, was based on individual summer Sentinel-2 MSI band values and land-cover classifications. To better accommodate the dynamic range in the calibration data, and to ensure that the model could not generate non-positive values of G, the empirical model was defined such that the natural logarithm of G, ln G, was a linear function of the variables. The generic model had the form

ln G = Σ(i = 1 to n) ai Bi + Σ(j = 1 to m) bj Cj    (1)

where Bi is the pixel value in band i of the Sentinel-2 MSI image, and Cj is the number of pixels (out of a maximum of nine) in the 3 × 3 neighbourhood of the pixel to be modelled that are assigned to merged land-cover class j. Because the number of available MSI bands was 4 (bands 2, 3, 4, and 8), the value of n could have been anything from 0 to 4; and as the number of generalised land-cover classes was four (Khibiny) and three (Sakha), the value of m could have been anything from 0 to these values. Thus the total number of parameters in the model defined by Equation (1) could be as many as 7 or 8. We chose, however, to limit the actual number (i.e., the sum n + m) to three in each case. The choice of MSI bands i to include in the model, and the number and merging of land-cover classes j, were determined experimentally using the field-based estimates of G as training data. The values of the coefficients ai and bj were determined through linear least-squares regression analysis, and performance was assessed using leave-one-out error analysis. Separate modelling exercises were performed for the two study areas, and the optimal models (i.e., those that resulted in the smallest RMSE errors in ln G) were applied to the entire MSI image area. Non-forest areas, as defined by the land-cover classifications, were masked out, as were water bodies, identified by applying a threshold of 0.3 to calculated values of the normalised difference water index (NDWI: [76]). Small water bodies are particularly abundant in the Sakha study area. A 10-m buffer was applied to all detected water bodies to remove marginal pixels.
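For readers who wish to reproduce the estimator, the following is a minimal sketch of the fitting and leave-one-out procedure described above for Equation (1). It is not the authors' code (the original formulae were implemented in GNU Octave and QGIS); the synthetic data, variable names, and the absence of an intercept term are assumptions based on the description in the text.

```python
# Minimal sketch of the empirical estimator in Equation (1): ln G is modelled as a
# linear combination of selected MSI band values (Bi) and 3x3 land-cover class
# counts (Cj), fitted by ordinary least squares and evaluated with leave-one-out
# error analysis. Synthetic data and the lack of an intercept are assumptions.

import numpy as np

def fit_ln_gsv(X, ln_g):
    """Least-squares coefficients for ln G = sum_i a_i B_i + sum_j b_j C_j."""
    coeffs, *_ = np.linalg.lstsq(X, ln_g, rcond=None)
    return coeffs

def loo_rmse(X, ln_g):
    """Leave-one-out RMSE of the fitted ln G values."""
    n = len(ln_g)
    errors = []
    for k in range(n):
        keep = np.arange(n) != k           # drop plot k, refit on the rest
        coeffs = fit_ln_gsv(X[keep], ln_g[keep])
        errors.append(ln_g[k] - X[k] @ coeffs)
    return float(np.sqrt(np.mean(np.square(errors))))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy example: 20 plots, 1 MSI band (e.g. band 3) and 2 generalised class counts
    X = np.column_stack([
        rng.uniform(300, 1500, 20),        # band value (digital numbers)
        rng.integers(0, 10, 20),           # class count, 'low vegetation' (0-9 pixels)
        rng.integers(0, 10, 20),           # class count, 'needleleaf forest' (0-9 pixels)
    ]).astype(float)
    ln_g = np.log(rng.uniform(20, 250, 20))  # field GSV in m3/ha, log-transformed
    print("coefficients:", fit_ln_gsv(X, ln_g))
    print("leave-one-out RMSE in ln G:", loo_rmse(X, ln_g))
```

In practice the candidate subsets of bands and merged classes (with n + m limited to three) would be enumerated and the combination yielding the smallest leave-one-out RMSE in ln G retained.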
A graphical summary of the processing chain by which the GSV G was estimated is given in Figure 4.

Figure 4. Summary of the processing chain: the land-cover classification is generalised as shown in Table 4, then quantified to generate a multi-band image in which each 'band' represents the number of pixels within a 3 × 3 neighbourhood corresponding to a particular class in the generalised land-cover classification (GLCC). The GSV estimator function is produced using Equation (1), with the summer MSI image and the 3 × 3 class counts image as inputs, trained using field estimates of GSV. Finally, the GSV estimates for the whole study area are produced using the estimator function, class counts, and MSI summer image.

Tables 2 and 3 summarise the field plots (locations shown in Figure 3) in the Khibiny and Sakha study areas, including an indication of the composition of each plot as a percentage of GSV represented by each of the main tree genera (i.e., the last four columns of Table 4). These compositions were used to guide a process of generalisation of the land cover maps to maximise the correspondence (by minimising the Cramér V-statistic on contingency tables: [77]) between the land cover and the composition. These generalisations are shown in Table 4 and the resulting generalised maps are shown in Figure 5. The names allocated to these generalised classes are arbitrary and have no ecological significance, and in particular the use of the same generalised names between the two study areas does not imply any ecological equivalence between them. We emphasise that no inferences are drawn from the names of these classes. The optimum model for the Khibiny area employed a single band (band 3: green) of Sentinel-2 MSI data, together with the 'low vegetation' and 'needleleaf forest' generalised classes. The optimum model for the Sakha area employed two MSI bands (2: blue and 3: green), together with a single generalised land-cover class, 'Needleleaf forest'. Coefficients of the optimum models are shown in Table 5, together with their accuracy estimates. Figures 6 and 7 show the results of applying these models to the entire area represented by the MSI images. The logarithmic values generated by applying Equation (1) were transformed to linear values by exponentiation. Mean, standard deviation, and median GSV estimated for forest areas in the Khibiny area were 102, 34, and 98 m³ ha⁻¹. The corresponding values for the Sakha area were 118, 91, and 99 m³ ha⁻¹. These values are comparable to the mean value of 72 m³ ha⁻¹ deduced for boreal forest globally [63]. The pseudocolour scales of Figures 6 and 7 are different, corresponding to the different distributions of estimated GSV in the two study areas.

Table 5. Parameters and coefficients of the optimum GSV models defined by Equation (1), together with the r² coefficient of the fit to the field data and an estimate of the uncertainty ∆ ln G in fitting the natural logarithm of GSV from leave-one-out estimation.

Discussion
The premise of this study is that the inclusion of a land-cover classification, suitably converted into quantitative data, can provide useful ancillary input to an empirical model to estimate forest GSV from summer Sentinel-2 MSI imagery. This has proved to be the case, at least in the two study areas investigated, as the optimum models in both cases selected at least one of the land-cover classes as input. Some experimentation was performed to include winter imagery but this did not materially improve the performance of the method. Inspection of the GSV model output showed some anomalously high GSV values, especially in areas that may be partly in shadow.
This evidently points to the empirical, non-physical basis of the algorithm and suggests that the incorporation of topographic data would have scope for improving its performance. However, we note that coverage of both study areas in the ArcticDEM product (https://www.pgc.umn.edu/data/arcticdem/, accessed 01 November 2021) is at present incomplete, and that either this or the ASTER GDEM (which does offer complete coverage) would require filtering for artefacts, thus increasing the complexity of the algorithm. Some small shadow-affected areas were removed by the median filtering noted earlier, and some further improvement was made by truncating the predicted GSV values at an upper bound of 500 m³ ha⁻¹, to limit the extent to which values were extrapolated beyond the range of the calibration data. This removed 1.7% of the pixels in the Sakha image. Far fewer anomalies (approximately 0.001%) were noted in the Khibiny image, for which the distribution of estimated GSV values was narrower than for the Sakha image (e.g., a standard deviation of 34 m³ ha⁻¹ compared to 91 m³ ha⁻¹), so no GSV truncation was deemed necessary there. In contrast, the spatial correspondence between the modelled GSV and ultra-high-resolution satellite imagery (Figure 8) and a large-scale MODIS-based GSV product (Figure 9) is evidently good at both small and large spatial scales. Visual comparison (Figures 8 and 9) is convincing. We also quantify the correspondence by constructing 2 × 2 contingency tables between above- and below-median GSV values as classified using our method and using the MODIS GSV product. These show accuracies (proportion of pixels agreeing whether the GSV is above or below the median value) of 70.5% for the Khibiny study area and 68.0% for the Sakha area. The relationship between our estimated GSV values and those derived in the MODIS-based product is shown in Figure 10, demonstrating a monotonic (if not linear) correspondence. We recall that the present algorithm was not calibrated to the MODIS product, but only against field data. These observations lend confidence in the present method. The accuracy of this method is summarised in Table 5, whose results are interpreted as implying that the RMS error in GSV estimation is approximately 35% for the Sakha study site and approximately 55% for the Khibiny site. These values are included in Figure 1, where they imply that the method is not obviously inferior to other approaches to GSV estimation for sparse forests based on spaceborne optical or multispectral data. However, we also note that up to approximately 25% uncertainty in GSV may be contributed by the allometric estimation, so the algorithm's performance may be considerably better than these values imply. We thus propose that it is worth developing this approach. Its principal disadvantage is that it is constructed on an empirical rather than a physical relationship. This is compensated for by the fact that it is derived from a large number of field measurements, which are labour-intensive to acquire, although spaceborne laser altimetry from GLAS and ICESat-2 could offer some scope for acquiring GSV estimates for calibration [78]. Additionally, the strong correspondence noted in Figure 9 suggests that its most useful application may be as a downscaling tool from large-scale GSV estimates, where its requirement for just two Sentinel-2 images (or similar) would be a major advantage.
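As a point of comparison, the empirical accuracy trend from Figure 1 can be evaluated at the mean estimated GSV of each study area. The sketch below simply applies the dashed-line formula quoted in the Figure 1 caption to the mean GSV values reported in the Results.

```python
# Minimal sketch evaluating the empirical accuracy trend from Figure 1,
# RMSE(%) = 436 * GSV^(-1/2), at the mean estimated GSV of each study area,
# for comparison with the ~55% (Khibiny) and ~35% (Sakha) errors reported above.

def expected_rmse_percent(gsv_m3_per_ha: float) -> float:
    """Empirical fit to previously published GSV accuracies (Figure 1)."""
    return 436.0 * gsv_m3_per_ha ** -0.5

for area, mean_gsv in [("Khibiny", 102.0), ("Sakha", 118.0)]:
    print(f"{area}: mean GSV {mean_gsv:.0f} m3/ha -> expected RMSE ~{expected_rmse_percent(mean_gsv):.0f}%")
# Khibiny: ~43%, Sakha: ~40%
```

The Khibiny result (approximately 55% against an expected approximately 43%) and the Sakha result (approximately 35% against approximately 40%) therefore sit on either side of the general trend for spaceborne methods.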
Obvious future developments would be to attempt to derive the calibration data themselves from potentially less time-consuming data collection methods, such as UAV surveys, or from published databases of field measurements over a wider range of locations. Forest presence data could be obtained at higher spatial resolution from Landsat-derived products [79,80]. Conclusions We have developed a simple, empirically-based algorithm for spatial extrapolation of GSV based on one summer and one winter Sentinel-2 MSI image, a large-scale Russian land-cover classification, and field-plot scale GSV data used for parameter selection and calibration of the algorithm. It has been applied to two contrasting regions of the Russian boreal forest and produces convincing patterns of spatial variation as well as mean GSV values consistent with what is expected for boreal forest in general. Over the limited range of situations to which it has been applied, it appears that its accuracy is comparable to, and perhaps better than, other local or regional-scale methods used to estimate GSV on the basis of satellite imagery. The essence of the method is to optionally include a simple description of land-cover, which is converted into a set of quantitative variables, along with the values (reflectances, radiances, or digital numbers) from the available bands of the MSI images. This approach is relatively undemanding of data availability. As it has been implemented in the present work, it is trained using field data that are laborious to acquire; but as a downscaling method for large-scale GSV products such as those generated from MODIS imagery this requirement for field data would not be necessary. We also gratefully acknowledge financial support and much encouragement from the UK Science and Innovation Network through the British Embassy in Moscow. The EU Transnational Access Interact scheme provided financial and logistical support for access to and use of facilities at the Khibiny and Spasskaya Pad field stations. Data Availability Statement: All data used in this research are either included in the manuscript or publicly available.
SYNERGISTIC INFLUENCE OF FLAME RETARDANT ADDITIVES AND CITRIC ACID ON THE FUNCTIONAL PROPERTIES OF RICE HUSK/WOOD BLENDED PARTICLEBOARDS

The selected functional properties of rice husk/wood blended particleboards, which include thermal analysis, limiting oxygen index, morphological analysis, and mechanical properties, have been investigated. Rice husk/wood particleboards were produced with a one-step hot-press casting technique using citric acid to improve the compatibility in the particleboards, with calcium oxide and aluminum oxide as flame retardants. The results showed improvement in the mechanical properties, flame retardancy, and thermal stability with the addition of flame retardants to the particleboards. The aluminum oxide synergy with citric acid in rice husk/wood particleboards gave the best flame retardancy.

INTRODUCTION
The increasing human population, together with the modern emphasis on sustainable development, will demand more construction materials. The rise in the manufacture of construction materials has also led to increasing greenhouse gas emissions and depletion of natural resources. Agro-industrial lignocellulosic wastes have been used as an economical, social, and environmental alternative material for particleboard production owing to their increasing abundance among agricultural wastes (de Lima Mesquita et al. 2018). Furthermore, it is possible to combine wood with other lignocellulosic materials to manufacture environmentally friendly products without reducing their quality. According to de Melo et al. (2014), rice husk (Oryza sativa) has high potential for particleboard utilization among most agricultural by-products. It has been one of the major waste products from the agricultural industry of the producing countries. Due to the high quantities of silica, cellulose, and lignin in rice husk, coupled with other useful reinforcement properties, rice husk is used in the construction industry (Temitope et al. 2015, Battegazzore et al. 2018). The presence of a high concentration of amorphous silica determines the pozzolanic effect, which gives rice husk cementitious properties. Also, the use of rice husk ash, which contains high amorphous silica, eventually improves the strength and durability of concrete (Sulaiman et al. 2018). Rubberwood (Hevea brasiliensis) trees were initially planted for their latex. When the trees reach an age of over 25 years, the old mature trees are felled for replanting as latex production decreases (Jamil et al. 2013). Hence, the rubberwood logs are used for the manufacture of a variety of products, such as particleboards, medium density fiberboards (MDF), plywoods, laminated veneer lumbers (LVL), and so on, due to the favorable properties of this medium-density hardwood and its natural light color (Loh et al. 2010). They also stated that, due to the unattractive price of rubber and the conversion of rubber plantations to oil palm plantations, the rubber plantation area was decreasing and the supply of rubberwood was subsequently reduced. Thus, this study focused on the incorporation of rice husk and rubberwood particles into particleboard production. Presently, urea- and phenol-based formaldehyde resins are commercially used as particleboard binders in manufacturing (Sulaiman et al. 2016, Cèsar et al. 2017). However, formaldehyde-based resins are not environmentally friendly and also pose health risks because they contain harmful chemical substances (Suraya et al. 2018).
Previous work by Umemura et al. (2012), monitored with the aid of FTIR spectra, showed that there was a reaction between citric acid and wood bark, giving rise to the formation of ester linkages between the carbonyl groups of citric acid and the hydroxyl groups of the wood. Kusumah et al. (2016) reported the characterization of particleboards manufactured from the combination of sorghum bagasse and citric acid. Awareness of building fire resistance has increased to ensure the safety of occupants while providing sufficient time for firefighters to extinguish the fire and minimize property loss (Umemura et al. 2012). Flame retardants can be classified into two categories: additive flame retardants and reactive flame retardants. The use of mineral flame retardants in particleboards has resulted in the separation of lignocellulosic particles and has also reduced the thermal conductivity of the particleboards, which subsequently led to flame retardancy (Cèsar et al. 2017, Zhao et al. 2017). Mineral fillers used to decrease the flammability of substances are carbonates or hydroxides. According to Hull et al. (2011), aluminum hydroxide decomposes endothermically to form aluminum oxide (Al₂O₃) and releases water vapor that dilutes the radicals in the flame, whereas the alumina residue builds up to form a shielding layer over the burning polymer. To the best of the authors' knowledge, there is no report on the properties of particleboards made of citric acid-hybridized rice husk/wood particles reinforced with aluminum oxide (Al₂O₃) and calcium oxide (CaO) as flame retardant additives. The main objective of this study was therefore to assess and evaluate the synergistic effects of citric acid and flame retardant additives on the functional properties of particleboards made from hybridized rice husk/wood particles. The produced particleboards were evaluated for changes in functional groups and morphological characteristics, limiting oxygen index and thermal properties, as well as physical and mechanical properties, so that the targeted minimum required standard for particleboards could be met.

MATERIALS AND METHODS
The rice husk used for this study was from Kilang Beras Bernas Paya Keladi, Kepala Batas Pulau Pinang, Malaysia, while the wood particles consisted of rubberwood obtained from Hevea Board Berhad, Negeri Sembilan, Malaysia. The rice husk and wood particles were used at a ratio of 50:50. The adhesive used was citric acid obtained commercially from R&M Chemicals, while the flame retardant additives used in this research were aluminum oxide (Al₂O₃) and calcium oxide (CaO), which were purchased from Bendosen Laboratory Chemicals.

Preparation of raw materials and production of particleboards
Both the rice husk and wood particles were ground to reduce the size to less than 10 mm with the aid of a Riken grinder. An oven-dried weight of raw materials equivalent to the targeted particleboard density of 800 kg/m³ was prepared. 10 % (w/w) Al₂O₃ and CaO powder were each added into the particleboard production formulations in order to impart flame retardant properties, and mixed together with a citric acid solution. Three formulations, as shown in Table 1, were used. The particleboards were prepared according to Hashim et al. (2011) and Kusumah et al. (2016) with slight modification.
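To illustrate how the raw-material charge for a single board follows from the formulation above, the sketch below works through the arithmetic. The target density (800 kg/m³), thickness (10 mm), 50:50 rice husk:wood ratio, and the 10 % additive and 20 % citric acid levels are taken from the text; the mold area and the assumption that the percentages are expressed on the oven-dried particle weight are hypothetical, since neither is stated explicitly.

```python
# Illustrative sketch only. The mold area and the basis of the percentages (here
# taken as fractions of the oven-dried particle weight) are assumptions, since
# they are not stated explicitly in the text; the target density, thickness,
# 50:50 ratio, and 10% / 20% levels are taken from the formulation above.

TARGET_DENSITY = 800.0   # kg/m3
THICKNESS = 0.010        # m (10 mm, set by the steel bars)
MOLD_AREA = 0.30 * 0.30  # m2 -- hypothetical mold size

board_mass = TARGET_DENSITY * MOLD_AREA * THICKNESS  # oven-dried board mass, kg

citric_acid_frac = 0.20  # 20% CA formulation
additive_frac = 0.10     # 10% Al2O3 (or CaO)

particle_mass = board_mass / (1 + citric_acid_frac + additive_frac)
print(f"total board mass: {board_mass * 1000:.0f} g")
print(f"rice husk:        {particle_mass * 1000 * 0.5:.0f} g")
print(f"wood particles:   {particle_mass * 1000 * 0.5:.0f} g")
print(f"citric acid:      {particle_mass * 1000 * citric_acid_frac:.0f} g")
print(f"Al2O3 additive:   {particle_mass * 1000 * additive_frac:.0f} g")
```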
The casting mold was designed to prepare particleboards in six replicates for each formulation shown in Table 1, maintaining a uniform density of 800 kg/m³ in a wooden mold. The cast was initially pre-pressed in a cold press, followed by hot pressing at 5 MPa pressure for 20 min. The thickness of the particleboards was controlled by 10 mm steel bars placed at the sides before the cast was subjected to the hot press machine.

Characterization of particleboard properties
The possible changes in the morphology of each particleboard were monitored with a Thermo Scientific scanning electron microscope (SEM). The samples were prepared as reported by Hashim et al. (2011) and examined on an FEI Quanta FEG 650 SEM. An IRPrestige-21 FT-IR spectrophotometer from Shimadzu was used to monitor the various functional groups existing in each type of particleboard sample. The particleboards were ground and pressed into pellets with potassium bromide (KBr) and scanned over the wavenumber range 500 cm⁻¹-4000 cm⁻¹ under nitrogen. The spectra were compared and analyzed. The limiting oxygen index (LOI) test was carried out according to standard ASTM D2863-08 (2008) on specimens with dimensions of 8 cm × 1 cm × 0,5 cm. Thermal analysis of all produced particleboards was run using a Mettler Toledo TGA/SDTA851e thermogravimetric analyzer (Mettler Toledo Corp, Switzerland). About 10 mg of each sample was put in an aluminum pan and heated under a nitrogen atmosphere from a starting temperature of 30 °C to 930 °C at a heating rate of 20 °C/min. All the particleboards were prepared according to the Japanese Industrial Standard JIS A5908-03 (2003) for evaluation of physical and mechanical properties. The mechanical tests of modulus of rupture (MOR) and internal bond (IB) strength were executed with the aid of a universal Instron testing machine (Model UTM-5582) operating with a load cell capacity of 10³ kg. The physical tests, which included density, water absorption (WA), and thickness swelling (TS) after the samples had been immersed in water for 24 h, were evaluated on ten specimens from each panel, weighed to an accuracy of 0,01 g. The flame retardant properties shown by the Al₂O₃ and CaO additives were evaluated against the control sample and the 10 % and 20 % citric acid particleboards. The data generated were analyzed for significant differences using the Tukey test. Particleboards without the addition of citric acid and additives were used as the control sample.

Functional groups assessment
The Fourier transform infrared (FTIR) spectra shown in Figure 1 reveal the transmittance of the particleboard samples over the range of 500 cm⁻¹-4000 cm⁻¹. The FTIR spectra were used to monitor the presence and disappearance of functional groups in the particleboards before and after the addition of different percentages of citric acid and the flame retardant additives (Al₂O₃ or CaO). The FTIR showed general similarities in the spectra of both the control and the reinforced particleboards. The spectra show a significant broad absorption band appearing in the region of 3414 cm⁻¹-3426 cm⁻¹, indicating OH and/or NH stretching for all the samples, as also reported by Hashim et al. (2011). Meanwhile, peaks around 2300 cm⁻¹-2400 cm⁻¹ indicated the presence of carbon dioxide (CO₂), which resulted from the measuring conditions.
The absorption band around 1738 cm⁻¹ was attributed to C=O stretching resulting from the carbonyl group or from the ester group formed on citric acid addition (Giridhar et al. 2017, Umemura et al. 2013); the intensity of this band increased with increasing citric acid addition in the particleboards. The particleboards bonded with 20 % CA possessed lower transmittance than those without citric acid. This result is supported by previous works showing that the bonding mechanism was the formation of ester linkages between carbonyl groups from citric acid and hydroxyl groups from wood and rice husk particles (Seo et al. 2016). As a result, citric acid acts as a cross-linking agent by reacting with the hydroxyl groups of the natural fibers to develop ester linkages, which contribute to the good physical properties of the particleboards. This would reduce the hygroscopicity of the lignocellulosic material and result in good dimensional stability of the final product (Vukusic et al. 2006). This was strongly supported by the improvement in the physical and mechanical properties of the particleboards produced. From the figure, the spectra of particleboards with the addition of 10 % Al₂O₃ showed an absorption peak around 667 cm⁻¹, which was associated with the stretching of Al-O-Al groups that are part of the aluminium oxide network (Orellana et al. 2014). The Al-OH group band expected around 1420 cm⁻¹ was not significantly observed in Figure 1. Furthermore, the particleboards with the addition of 10 % CaO showed spectra with a sharp peak at 3644 cm⁻¹ due to the O-H stretching vibration of the monomeric form of Ca(OH)₂, formed when CaO hydrated with the moisture present in the particles during the board manufacturing process (Bakovic et al. 2006). The spectra of particleboards with the addition of 10 % CaO also showed a peak at 511 cm⁻¹ corresponding to the Ca-O symmetric vibration (Galvan-Ruiz et al. 2007). In this study, a mixture of rice husk and wood particles was used in particleboard production. Silica is highly abundant in rice husk, where the entire outer layer of the rice husk surface is almost covered by silica (Jamil et al. 2013). From the figure, intense absorption peaks in the region of 1038 cm⁻¹-1111 cm⁻¹ were observed. The peaks at 1100 cm⁻¹ and 480 cm⁻¹ are due to the stretching vibrations and flexion of the Si-O-Si bonds (Orellana et al. 2014).

Evaluation of the microstructure of samples
The distribution of rice husk, wood particles, and additives such as Al₂O₃ and CaO in the particleboards was observed and is shown in Figure 2. The rice husk has a cylindrical, rough, and hollow structure, which might become a barrier during adhesive application (de Melo et al. 2014), whereas the wood particles showed a well-compacted cross-sectional arrangement. The morphology of particleboards made from a mixture of rice husk and wood particles without citric acid clearly exhibited particles loosely packed against one another, and some voids can be observed, as shown in Figure 2a. From Figure 2b, Al₂O₃ was detected in the particleboards in granulated form, without melting during particleboard manufacture, as its melting point is very high. The added 10 % Al₂O₃ can be expected to be randomly distributed in the particleboard mixture. The Al₂O₃ additive filled the void spaces between the mixture of rice husk and wood particles.
The compactness of the particleboard was improved with the addition of citric acid as a binder and the Al₂O₃ additive. The particleboard sample with 20 % CA and 10 % CaO (Figure 2c) showed no obvious granulated CaO additive on the cross-sectional surface. The small, fine CaO powder is seen to be homogeneously dispersed on the cross-sectional surface of the particleboard sample. However, these particles have a great tendency to form agglomerates, which could affect the final performance of the particleboards, especially the mechanical properties, as shown in the SEM micrograph.

Limiting oxygen index (LOI)
The LOI values of all particleboards are tabulated in Table 2. In this study, aluminum oxide (Al₂O₃) and calcium oxide (CaO) were added to the particleboards as flame retardant additives. LOI was used to measure the minimum oxygen concentration, in a mixture of oxygen and nitrogen, required to support flaming combustion of a sample; there is therefore a positive correlation between flame retardancy and the amount of oxygen required for burning. The results indicated that the addition of the additives to the binderless particleboards made the particleboards less flammable. The incorporation of flame retardants into the binderless particleboards raised the LOI value, supporting evidence that the additives enhanced the flame-retardant properties of the particleboards. Particleboards incorporated with 20 % CA + 10 % Al₂O₃ exhibited the highest LOI at 58 %, followed by 20 % CA + 10 % CaO at 48 %. Overall, the addition of both additives, CaO and Al₂O₃, and also citric acid into the binderless particleboard increased the flame retardancy. According to Jia et al. (2015), materials having LOI values greater than 26 % will show self-extinguishing behavior and are considered to be good flame retardants. In this study, the optimum condition for CaO to act as a flame retardant, with 20 % CA, met the requirements for physical and mechanical properties according to Type 8 of the JIS A 5908-03 (2003) standard. Although aluminum-based additives have been widely investigated and used as flame retardant additives, the addition of citric acid enhanced the performance of Al₂O₃ as a flame retardant in this study. According to Hull et al. (2011), loadings of 5-20 % of inert fillers acting as flame retardants have an inconsequential influence on the LOI, as over 80 % inert filler loading is required for an effective flame retardant.

Thermal degradation through thermogravimetric analysis (TGA)
The TGA device was employed to monitor the thermal degradation reactions of the particleboards and also to determine the mass loss or gain due to decomposition or loss of volatile matter. Figure 3 shows the weight loss (TG) curves and the derivative thermogravimetric (DTG) curves of all types of particleboards produced. For all the particleboard samples, weight loss was observed at temperatures up to 100 °C due to moisture loss at the initial stage. The major degradation of lignocellulose fibers began at temperatures above 250 °C and ended at temperatures below 500 °C (Elbasuney 2017). The decomposition of cellulose and hemicellulose occurred in the temperature range of 190 °C-360 °C (Saari et al. 2020), while the decomposition of lignin occurred between 180 °C and 500 °C (Poletto 2017). From Figure 3, the TG curves for particleboards with the addition of 10 % CaO showed further weight loss at a temperature around 700 °C.
The first weight loss, between 264 °C and 538 °C, was attributed to the decomposition of the organic matter, while the next weight loss, between 538 °C and 721 °C, was due to the calcium oxide (CaO)-related decomposition releasing carbon dioxide into the atmosphere (Abdul …). Moreover, the CaO added as a flame retardant in this study might re-carbonate at around 600 °C, but this reaction is very slow. The particleboards with the addition of 10 % Al2O3 showed no obvious difference in their DTG curves compared with the other particleboards. Particleboards with 10 % CA and 10 % Al2O3 showed the lowest percentage weight loss as the temperature increased up to 930 °C. This was attributed to Al2O3 forming a protective layer over the fibers present in the particleboard. Figure 3 shows more clearly that the particleboard with the addition of 10 % Al2O3 remained thermally stable to the highest temperature compared with the others. With the addition of 10 % CA + 10 % Al2O3 and 20 % CA + 10 % Al2O3, the thermal stability of the particleboards was further enhanced. This is supported by the limiting oxygen index (LOI) evaluation, in which the LOI increased for the 20 % CA + 10 % Al2O3 particleboard. Dimensional stability of the particleboards Results for the physical properties of the prepared particleboards are shown in Table 3, which presents the average density and the water absorption (WA) and thickness swelling (TS) percentages of the particleboards at the various formulations. The TS of the different panels varied from 6,83 % to 42,90 % and the WA ranged from 34,40 % to 92,01 %. The WA and TS values decreased dramatically when the citric acid content of the panels was increased from 0 % to 10 % and 20 %. The hygroscopicity of the lignocellulosic particles causes swelling when the particleboards are immersed in water; the particles spring back by releasing the compressive forces built in during board manufacturing (Nagieb et al. 2011). According to Umemura et al. (2012), citric acid acts as a water-resistant adhesive. Thus, with an increasing percentage of citric acid (CA) used to bond the particles, the TS and WA of the particleboards were reduced accordingly, clearly showing that the addition of citric acid improved the dimensional stability of the particleboards (Widyorini et al. 2014). ± Values in parentheses represent the standard deviation; different lower-case superscript letters indicate statistically significant differences, while the same letters indicate no significant difference (p < 0,05). With the average particleboard density maintained at 800 kg/m3, relatively high WA and TS were observed in the control panels compared with the particleboards reinforced with flame-retardant additives, and both values dropped further as the additive content increased. While the control particleboards had the highest WA and TS values, of 92,01 % and 42,90 % respectively, the particleboard with additives (20 % CA + 10 % Al2O3) recorded the lowest WA and TS, of 34,40 % and 6,90 % respectively. This is due to the improved compatibility of the fiber particles, binder and additives. The additives are expected to inhibit water penetration into the particleboards, increasingly so at higher additive contents, resulting in low WA and TS values. The high TS and WA values indicate that there are relatively more empty spaces in the binderless particleboards.
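For reference, water absorption and thickness swelling are conventionally reported as percentage changes relative to the dry specimen. The short sketch below applies those standard definitions to made-up example measurements (the numbers are not data from this study).

```python
def water_absorption(dry_mass_g: float, wet_mass_g: float) -> float:
    """WA (%): mass gained after immersion relative to the dry mass."""
    return (wet_mass_g - dry_mass_g) / dry_mass_g * 100.0

def thickness_swelling(dry_thickness_mm: float, wet_thickness_mm: float) -> float:
    """TS (%): thickness gained after immersion relative to the dry thickness."""
    return (wet_thickness_mm - dry_thickness_mm) / dry_thickness_mm * 100.0

# Made-up example: a 10 mm thick specimen weighing 50 g before immersion.
wa = water_absorption(dry_mass_g=50.0, wet_mass_g=67.2)
ts = thickness_swelling(dry_thickness_mm=10.0, wet_thickness_mm=10.7)
print(f"WA = {wa:.1f} %, TS = {ts:.1f} %")
print("Meets JIS A5908 Type 8 TS limit (< 12 %):", ts < 12.0)
```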
The requirement set out in the JIS A5908-03 (2003) Type 8 standard for particleboard TS is less than 12 %. In this study, all particleboards with 10 % and 20 % CA (except 10 % CA + 10 % CaO) were capable of meeting the TS standard, showing that the addition of CA increases the dimensional stability of the particleboards. Mechanical properties Table 4 summarizes the mechanical properties, namely the internal bond (IB) strength and the average modulus of rupture (MOR), of the rice husk/wood-based particleboards with and without additives. Despite the consistent density of the particleboards, the presence of the compatibilizing agent and the inclusion of the flame-retardant additives had a positive impact on the MOR and IB of the prepared particleboards. The results revealed the lowest values, 6,23 MPa MOR and 0,91 MPa IB strength, for the binderless particleboards, which are nevertheless higher than the values obtained for kenaf core binderless particleboards (Xu et al. 2004). However, they are lower than the values obtained for oil palm trunk (OPT) binderless particleboards with polyhydroxyalkanoate addition (Baskaran et al. 2012) and for oil palm trunk binderless particleboards with sucrose and fructose addition (Lamaming et al. 2013). The highest values of MOR and IB strength were obtained for the rice husk/wood particleboards formulated with 20 % citric acid + 10 % Al2O3. This observation is ascribed to the role of citric acid in ensuring good compatibility between the particles, the binder and the added flame retardant, as shown in the SEM image (Figure 2). These reinforced particleboards exhibit better mechanical performance than the binderless particleboards, with both MOR and IB strength significantly increased. In addition to improving flame retardancy, flame-retardant additives have been reported to improve the mechanical properties of reinforced materials. From the results in Table 4, the particleboards reinforced with the flame-retardant substances (Al2O3 + CaO) fulfilled the minimum MOR requirement of 8,0 MPa, and all particleboards passed the 0.15 MPa IB requirement for Type 8 of JIS A5908-03 (2003). ± Values in parentheses represent the standard deviation; different lower-case superscript letters indicate statistically significant differences, while the same letters indicate no significant difference (p < 0,05). CONCLUSIONS Awareness of ecological values and interest in renewable materials have created a demand for agro-based building products. The present study revealed the potential of rice husk/rubberwood particles in the building and furniture industry as a partial or full replacement for wood particleboards. In this research, particleboards developed from a mixture of rice husk/wood particles with the addition of citric acid and aluminum oxide (Al2O3) or calcium oxide (CaO) were investigated. Based on the results obtained, citric acid was found to be a potential natural adhesive to replace synthetic adhesives, as it improved the functional properties of the particleboards made from rice husk/wood particle mixtures. The particleboards were produced at a target density of 800 kg/m3 in order to meet the JIS A5908-03 (2003) standard. The LOI evaluation shows that both CaO and Al2O3 exhibited good performance with increasing citric acid content.
The particleboard with 20 % CA and 10 % Al2O3 possessed better flame-resistant properties and was also more thermally stable. Overall, particleboards from the mixture of rice husk and wood particles with 20 % CA and either 10 % Al2O3 (LOI of 58 %) or 10 % CaO (LOI of 48 %) exhibited strong flame resistance. The addition of the flame-retardant substances (Al2O3 + CaO) increased particleboard strength and met the minimum physical and mechanical property specifications in compliance with Type 8 of the JIS A5908-03 (2003) Standard. The results clearly suggest that, with minor modification, the application of rice husk/rubberwood particles in building construction is feasible and effective.
Taxonomic identification using virtual palaeontology and geometric morphometrics: a case study of Jurassic nerineoidean gastropods

Abstract: Taxonomic identification of fossils is fundamental to a wide range of geological and biological disciplines. Many fossil groups are identified based on expert judgement, which requires extensive experience and is not always available for the specific taxonomic group at hand. Nerineoideans, a group of extinct gastropods that formed a major component of Mesozoic shallow marine environments, have distinctive internal spiral folds that form the basis for their classification at the genus level. However, their identification is often inconsistent because it is based on a set of selected characters reliant upon individual interpretation. This study shows a non-destructive and quantitative method for their identification using micro-CT and geometric morphometrics. We examined and micro-CT-scanned nerineoidean specimens from five main families that dominated Europe, Arabia and Africa during the Middle-Late Jurassic. Optimal longitudinal slices were selected from the tomographic reconstructions or from images of polished cross-sections compiled from fossil collections, published work and online databases. Internal whorl outlines were represented by 30 evenly distributed sliding semilandmarks and shape variations were studied using the Procrustes-based geometric morphometrics method. Multivariate analysis shows that Ceritellidae and Ptygmatididae are distinct families, whereas Nerinellidae, Eunerineidae and Nerineidae fall within the same shape variance and cannot be distinguished based on internal whorl outlines.
The suggested method can be applied to images from various sources as well as to poorly preserved specimens. Our case study demonstrates the importance of quantitatively re-evaluating taxonomy in the fossil record, promoting the future utility of large datasets. T H E accurate identification of fossils lies at the heart of palaeontological biodiversity exploration and forms the foundation of systematic taxonomy, biostratigraphy, palaeoecology, evolutionary research and global change studies among other disciplines. By necessity, fossil specimens are identified based on their morphology and accurate and precise identification requires extensive expertise in the specific taxonomic group at hand. With the classical approach to morphological identification, taxa are frequently identified based on a set of selected characters that not infrequently remain inadequately described and documented. Therefore, their value in identification is dependent on individual interpretation. In addition, many taxa were described long ago and have remained poorly described and delimited; therefore identifications may be difficult to verify, resulting in possible misinterpretations and inaccuracies (MacLeod et al. 2010). These discrepancies have become increasingly apparent in recent years, following the rise in palaeontological 'Big Data' studies (Allmon et al. 2018). The digital availability of morphological information is rapidly increasing through the digitization of specimens held in museum collections, literature (e.g. Biodiversity Heritage Library), and researcher's morphological data sets (increasingly including 3D and micro-CT imaging). However, the research value of such morphological data is dependent upon the accuracy of its metadata, including taxonomic identifications. Our growing ability to combine data from many sources has highlighted the great importance of ensuring taxonomic accuracy and consistency. Both specimen-based (e.g. Global Biodiversity Information Facility) and literature-based taxonomic databases (e.g. Paleobiology Database) are faced with a growing need to re-evaluate the consistency of the taxonomic identifications they serve, to ensure their research value into the future (Karim et al. 2016;Nelson & Ellis 2018). Thus, finding new ways to analyse complex shapes in order to recognize the underlying taxonomic signal will help practical identification of fossil taxa and should facilitate and strengthen the many palaeontological studies that are reliant upon uniform identifications (Carvalho et al. 2007;Ram ırez et al. 2007;Faulwetter et al. 2013). In addition, it is likely to reveal new insights into the morphology and hence taxonomy and classification of those taxa that remain contentious. A notable example of the need to re-evaluate the robustness of taxonomy and systematics in the fossil record is exhibited by the extinct gastropod superfamily Nerineoidea Zittel, 1873. Gastropods are a dominant component of sedimentary rocks and have a rich and extensive fossil record (Kidwell & Bosence 1991, Bieler 1992). An example of the practical importance of consistent and accurate taxonomic identification of fossil gastropods is emphasized by their use as sensitive indicators of seafloor properties and a useful tool for palaeoenvironmental reconstruction (Kidwell & Flessa 1995;Dietl et al. 2015). 
Presence of common Nerineoidea characterized many shallow shelf carbonate environments from the Early Jurassic (Hettangian) to the Late Cretaceous (Maastrichtian) (Dietrich 1925;Cox 1949) across Europe, Africa, Asia and North and South America (Cox 1965;Pchelintsev 1965;Wieczorek 1979;Sohl 1987;Vaughan 1988;Sirna 1995). These large and conspicuous gastropods are considered to be key faunal elements of Mesozoic carbonate ramps and platforms and were especially common in the tropical Tethys Sea (Sohl 1987;Kollmann 1992). As one of the most abundant macrofossil components of many Mesozoic shallow marine environments, nerineoideans have been extensively used for biostratigraphy and palaeoecology (Cox 1965;Wieczorek 1979Wieczorek , 1998Sirna & Mastroianni 1993;Barker 1994). Their palaeoecological and biostratigraphical importance is reflected in their abundant occurrence in fossil collections and their numerous citations in the scientific literature (Vaughan 1988;Kollmann 2014). The expansion of carbonate platforms in the Middle Jurassic promoted nerineoidean diversification and led to them being among the most common and distinctive gastropods found in Jurassic strata worldwide (Sohl 1987;Barker 1990Barker , 1994Wieczorek 1998). They are known to form mass accumulations that can reach a density of hundreds of specimens per square metre. These nerineoidean-rich fossil beds can extend laterally for kilometres and may be useful stratigraphic marker beds (e.g. Reuchenette Formation, Switzerland) (Wieczorek 1979;Waite et al. 2008). Nevertheless, the group's taxonomy is challenging and its systematic description and phylogenetic position debated (Tracey et al. 1993;Sirna 1995). Nerineoideans are notable for possessing prominent internal spiral lamellae (folds) that extend from the internal walls of the shell into the shell cavity ( Fig. 1A) (Cossmann 1898;Cox 1960;Bandel 1993;Barker 1994). Nerineoidean taxa show a very wide diversity in the number, morphology and strength of their folds. The folds vary in occurrence across taxa from those with no, or very few, folds to those with five extremely convoluted folds with many lobes (e.g. Kouyoumontzakis 1989, fig. 3, pl. 1). The number and position of folds may change markedly through the ontogeny of an individual, however it remains fairly constant within a species across comparable stages of growth (e.g. Wieczorek 1979, Barker 1990). These features have made their description essential in species identification and they are a cornerstone for the taxonomy and classification of the superfamily (Sirna 1995). Each whorl interior may be characterized by the presence of folds in four positions as seen in whorl crosssections: on the columella, parietal, pallial and basal walls (Fig. 1B;Cox 1960;Vaughan 1988). The positional identification of folds has provided the basis for a simple descriptive notation or 'fold formula' (Barker 1990). This has been extended by others (e.g. Wieczorek 1979) into a complex notational system describing up to three orders of folds in the most complexly folded taxa together with more minor swellings or flexures. It should be stressed that, to our knowledge, these descriptive systems have not been explicitly regarded as reflecting biological homology of fold structures among nerineoidean taxa. 
Folds are incrementally secreted over the internal walls of the shell during ontogenetic development and can reduce the internal volume of the shell by 50%, but the shell remains fold-free in the very earliest whorls and at least part of the last whorl (Barker 1990;Waite et al. 2008). Historically, a range of interpretations have been offered for the function of nerineoid folds and these are reviewed by Kollmann (2014). The most recent revision by Kollmann (2014) redefined the superfamily Nerineoidea, and recognized seven families using a set of morphological criteria in which the relative importance of different character complexes was stressed at different hierarchical levels within the classification. For example, at family level taxonomic assignment was based predominantly on the shape of the base of the last whorl and the siphonal apparatus (Kollmann 2014), even though these features are rarely preserved. At the generic level, Kollmann (2014) identified and classified taxa based on the interpretation of internal folds within the whorls and the external whorl outline, both of which are commonly preserved. Kollmann (2014, p. 352) noted that for higher level classification, 'generally, variations in the shape and size of internal plaits [herein folds], which frequently have been used to distinguish species, are not suitable criteria. Changes with ontogeny, variability and even different cutting planes or differences in preservation obscure actual plait size and shape.' Despite what Kollmann affirmed, he reported some characters of internal folds in his diagnoses of the families. Kollmann's (2014) classification should be regarded primarily as a practical (= artificial) system; clearly a phylogenetic classification of the Nerineoidea still remains some way distant. The internal morphology of the whorls is classically observed in polished cross-sections of specimens, and these often complex morphologies are generally described in detail. Nonetheless, the inconsistency between different researchers on taxonomic importance at different hierarchical levels of these descriptions has helped create taxonomic controversy (Sirna 1995). In spite of these limitations, the morphology of the internal whorl crosssections still remains the foundation for establishing the generic position of nerineoidean species (Waite et al. 2008; Kollmann 2014). Virtual palaeontological methods combined with shape analysis using geometric morphometric (GM) methods are increasingly contributing to our understanding of taxonomy, evolution and phylogeny across a wide range of organisms ( The use of tomographic methods, such as micro-CT scans, can reveal complex internal structures of fossilized organisms that have been inaccessible so far (Molineux et al. 2007;Sutton 2008;Faulwetter et al. 2013). This has enabled researchers to analyse structures using GM methods such as landmarks and semilandmarks without causing any damage to the specimen (MacLeod 2007;Sutton et al. 2014Sutton et al. , 2016. Thus, the distinctive internal structure of nerineoideans is ideally suited for developing a quantitative approach based on virtual palaeontology and GM methods. Moreover, a tomographic and quantitative approach to characterizing these features may provide a uniform measure that can help understand systematic relationships of nerineoidean taxa and how those are best represented in a taxonomic classification (e.g. Delpey 1940;Vaughan 1988;Sirna 1995;Kollmann 2014). 
In the current study, we initiate the development of a new reliable quantitative taxonomic identification method for nerineoideans using semilandmark-based GM. MATERIAL AND METHOD A total of 58 specimens were examined from a wide range of Middle and Upper Jurassic palaeolatitudes, including localities from Europe, Asia (Arabia) and Africa in order to capture a wide range of cross-sectional shell morphologies from across the Tethys shelf (Fig. 2). The ecologically most abundant five of the seven known nerineoidean families (sensu Kollmann 2014) in the Tethys Sea were included in the analysis: Ceritellidae Wenz, 1940;Ptygmatididae Pchelintsev, 1960;Nerinellidae Pchelintsev, 1960;Eunerineidae Kollmann, 2014;and Nerineidae Zittel, 1873. Synonyms of species and genera were updated to establish a taxonomic baseline for further analysis. An updated systematic classification was based on Kollmann (2014) Of the specimens, 28 were scanned at the Natural History Museum (London, UK) using a micro-CT (X-Tek HMX ST225 cone beam system, Nikon Metrology; voltage of 180-200 kV, current/flux of 180-200 lA; 3142 projections were collected at 0.11°angular intervals/slice increment over a 360°rotation with a voxel size of 0.018 to 0.09 mm). The rest were analysed using images of polished cross-sections compiled from various sources (Table 1; Leshno Afriat et al. 2020, appendix S1). Capturing internal whorl shape Internal whorl shape from micro-CT scans. The workflow for shape capturing of the internal whorl outline from CT images is illustrated in Figure 3A-D. Specimens that were suspected of having density differences between the shell and the whorl interiors were targeted for CT-scanning. Three-dimensional digital models of nerineoideans were built using the Amira 6.4 software package (Mercury Computer Systems, Chelmsford, MA). In addition to 13 individually scanned specimens, nerineoidean specimens were manually segmented from a mass accumulation sample from the Upper Jurassic of Tanzania (NHMUK PI G 46024; Leshno Afriat et al. 2020, appendix S1; Fig. 4). An optimal 2D slice showing the internal whorl outlines was selected for each individual 3D model using Amira 6.4 (Figs 3C, 4C). To capture ontogenetic variance in internal whorl outline, three successive longitudinal whorl cross-sections showing the internal folds were analysed for each specimen. Whorls were captured from the central part of the spire where the internal fold structure was most visible and complex. The apex and adapical part of the penultimate whorl, which generally remain fold-free (Barker 1990;Waite et al. 2008), were avoided. In cases where consecutive internal whorl outlines were unclear or incomplete, any available whorls were captured, with a maximum of six whorls per specimen. Thirty evenly spaced sliding semilandmarks were placed on the internal shape outline of each whorl, using Rhinoceros 3D v.1.5 (Robert McNeel & Associates, Seattle, USA). The sliding semilandmarks were distributed clockwise from a geometrically equivalent anchor point positioned at the most abaxial point in the curve between the pallial and basal folds (see Fig. 3D, F). The whorl cross-sections were numbered according to their respective location in the gastropod spire (W1 to W6; Fig. 3C, E). The outlines of internal whorls W4, W5, and W6 were mirrored prior to landmark spacing. A shifting semilandmark GM method (Zelditch et al. 2004;Bardua et al. 
2019) employing a single anchor (start) point was used to prevent unwarranted assumptions concerning the homology of individual folds, as would be required using a landmark-based method. This outline method maximized the geometric correspondence across the semilandmark set. Internal whorl shape from polished cross-sections. Images of polished cross-sections were obtained from published plates and online images from the Global Biodiversity Information Facility (GBIF 2019; see Leshno Afriat et al. 2020, appendix S1, for a list of specimen's sources). In addition, specimens from the palaeontological collections of the Geological Survey of Israel and the Natural History Museum, London (UK) were photographed (using X-T10 Fujifilm, and a Canon EOS 600D, respectively) and their internal whorl shape captured in an identical way (Fig. 3E, F). Data analysis Statistical analyses were conducted in R (version 3.6.1) using the package geomorph (v. 3. Analysis of similarity (ANOSIM) was used to test for morphological variation in internal whorl shape through ontogeny between and within specimens of each of the studied families. Intra-and inter-observer variations of the method were examined on five specimens by two independent researchers. Each of the researchers identified an optimal CT-slice, and then one of the researchers (YLA) repeated the distribution of landmarks from the equivalent starting point three times. Principle component analysis (PCA) and ANOSIM were carried out to examine the repeatability of the results (Leshno Afriat et al. 2020, appendix S2). The validity of our method of combining cross-sections from CT-slices and images of polished specimens was established using two specimens of Eunerinea? sp. (GSI 3161, GSI 3162; Leshno Afriat et al. 2020, appendix S1) that were CT-scanned, polished and their cross-sections photographed. PCA showed our method to be highly reliable and repeatable (Leshno Afriat et al. 2020, appendix S2). Canonical variates analysis (CVA) was performed for corrected estimation rates of assignment of specimens to family level. Permutational multivariate analysis of variance (PERMANOVA) was carried out on the entire shape space to examine significant differences between a priori groups. PCA was used to examine the shape variance of internal whorl outlines in different genera for each of the five studied families. RESULTS Internal whorl outline significantly differentiates three groups of nerineoideans, two comprising single families, independent of data source (Table 1; Fig. 5). No significant difference is found between consecutive internal whorls of ontogenetic development, represented by the relative position of the whorl in the gastropod spire (W1, W2, W3 etc.) (R = À0.06, p = 0.92). Ceritellidae and Ptygmatididae are significantly separated from each other and from the other groups, whereas the variance of the sampled specimens belonging to the families Nerinellidae, Eunerineidae and Nerineidae strongly overlaps (Fig. 5, Table 2). Accordingly, we combined the specimens belonging to Nerinellidae, Eunerineidae and Nerineidae into one group for further analysis. CVA demonstrates moderate to high correct estimation rates for each group (i.e. Ceritellidae, Ptygmatididae and Nerinellidae + Eunerineidae + Nerineidae) ranging from 75.0% to 90.4% (Table 3). Variance in internal whorl shape separates F I G . 3 . Workflow for shape extraction from micro-CT scans (A-D) and images of polished cross-sections (E-F). 
A, specimen selected for micro-CT scanning; NHMUK PI MG 1562 (4); B, 3D model reconstructed from CT stacks; the model is shown with a corner cut exposing the internal morphology; C, an optimal 2D slice is selected from the CT model and the outlines of three consecutive longitudinal internal whorl cross-sections are extracted; D, 30 evenly spaced semilandmarks are scattered clockwise from an equivalent starting point (in red); whorls W4 to W6 are mirrored prior to semilandmark spacing. E, polished surface of a Jurassic nerineoidean specimen; outlines of three consecutive longitudinal internal whorl cross-sections are extracted for each specimen; F, 30 evenly spaced semilandmarks are scattered from an equivalent starting point. Scale bars represent 1 cm. F I G . 4 . Workflow for shape extraction from a complex fossil bed: A, planar and lateral view of a mass accumulation of Nerineoidea from the Kimmeridgian (Upper Jurassic) of Tanzania (NHMUK PI G 46024); B, 3D model in planar and lateral view of manually segmented fossil specimens (carbonate matrix in grey, segmented specimens in colour); C, enlarged single specimen in blue. For each segmented specimen an optimal 2D slice is selected and longitudinal internal whorl cross-sections are extracted as illustrated in Figure 3C. Scale bars represent 1 cm. species of the same family along the first two PC axes for Ceritellidae and Ptygmatididae (Fig. 6A, B). For example, in the Ceritellidae (Fig. 6A), internal whorl outlines of Fibuloptyxis bucillyensis Fischer, 1960 and Fibuloptyxis elegans convexa Fischer, 1960 plot more closely to each other than to specimens of Cossmannea, regardless of the number of studied internal whorl outlines, the relative position in the spire and the different methods of obtaining cross-sections (Table 1). The first two PCs explain 88% of the total shape variance for the Ceritellidae (Fig. 6A), which are generally characterized by having a simple outline with strong pallial or parietal folds (Kollmann 2014). The number of folds increases along PC1 (75%), while the columellar fold becomes more pronounced along PC2 (13%) (Fig. 6A). In the Ptygmatididae, which are defined by highly convoluted whorls with two columellar folds and complex parietal and pallial folds (Kollmann 2014), the first two PCs explain 61% of the total shape variance (Fig. 6B). Internal whorl shape becomes more complex along PC1 (44%) and with more lobes along PC2 (17%). In the PCA of the combined Nerinellidae + Eunerineidae + Nerineidae group, the first two PC axes explain 67% of total variance. The high resemblance in fold morphology prevents discrimination between whorls of sampled specimens belonging to these three families: the characteristics of the inner whorl outlines are indistinguishable. These families show varying degrees of development of the columellar folds and more prominent pallial and parietal folds (Fig. 6C). DISCUSSION This study demonstrates a reliable and quantitative method of investigation of the internal morphological features of spire whorls that can be applied to the dominant nerineoidean families of the Tethys Sea during the Middle and Late Jurassic. Quantitative shape analysis significantly differentiates between specimens of the Ceritellidae, Ptygmatididae and Nerinellidae + Eunerineidae + Nerineidae families, based on their internal whorl outlines ( Fig. 5; Tables 2, 3). Within our dataset, this separation extends to species level of the Ceritellidae and Ptygmatididae families (Fig. 6A, B). 
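To make the outline-capture step described in the Material and Method section concrete, the sketch below shows one way to resample a digitised whorl outline into 30 evenly spaced semilandmarks running clockwise from a fixed anchor point. The original workflow used Rhinoceros 3D for this step; the Python version here is purely illustrative, and the input outline is a stand-in shape rather than a real specimen.

```python
import numpy as np

def resample_outline(points: np.ndarray, n_semilandmarks: int = 30) -> np.ndarray:
    """Resample an ordered 2D outline (starting at the anchor point) into
    n points spaced evenly along its arc length."""
    # Close the curve so spacing is measured around the full outline.
    closed = np.vstack([points, points[:1]])
    seglen = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seglen)])
    targets = np.linspace(0.0, arc[-1], n_semilandmarks, endpoint=False)
    x = np.interp(targets, arc, closed[:, 0])
    y = np.interp(targets, arc, closed[:, 1])
    return np.column_stack([x, y])

# Hypothetical digitised outline, ordered clockwise from the anchor point
# (here a simple ellipse standing in for an internal whorl cross-section).
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
outline = np.column_stack([np.cos(theta), 0.6 * np.sin(theta)])
semilandmarks = resample_outline(outline, 30)
print(semilandmarks.shape)  # (30, 2)
```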
Furthermore, our method is independent of sample size and applicable to different data sources (Tables 1, 2). Additionally, our method suggests that there may be no clear geometrical difference in interior whorl morphology between the sampled specimens of Nerinellidae, Eunerineidae and Nerineidae (Figs 5, 6; Table 2) that is usable for their taxonomic differentiation. These families are known for their complicated systematic history, which has been the focus of strong debate (Vaughan 1988;Sirna 1995). Zittel (1873) hierarchical levels based on various features, such as overall shell shape, the presence of a heterostrophic protochonch, ornamentation and the presence or absence of an umbilicus (summarized in Vaughan 1988). In a later revision by Pchelintsev (1965), which massively inflated the number of higher level nerineoid taxa, these genera were elevated to the family level (viz Nerineidae and Nerinellidae) based on the presence of an anterior siphonal notch and general shell shape. The genus Eunerinea was first proposed by Cox (1949) as a subgenus for forms previously attributed to Nerinea by Cossmann (1896). A more recent revision by Kollmann (2014) redefined the Nerineoidea and, based on their apertures, established the Eunerineidae (n. fam.) that includes the Nerineidae Zittel, 1873 sensu Pchelintsev (1965) and also the Diptyxidae Bouchet & Rocroi, 2005, which was based on the mistaken relationships of Diptyxis (now shown to belong to the Ceritellidae). Whereas internal whorl outline is widely agreed as the basis for classification and identification of nerinoidean taxa at species level (e.g. Vaughan 1988; Sirna 1995; Kollmann 2014), classification at higher taxonomic levels has been established using various external shell features. This has resulted in the establishment of a wide range of classifications across different taxonomic levels among different authors (see summary in Sirna 1995). Our method demonstrates, in a quantitative way, that it is not possible using our sampled specimens to differentiate the families Nerinellidae, Eunerineidae and Nerinidae based on their internal structure alone (Figs 5, 6C). Kollmann's familial diagnoses suggest a potential similarity of internal whorl morphology amongst members of these families; most of the diagnostic differences between them relate to external morphological features and size. Thus, the systematic position and further subdivision of genera from these families remains problematic, and further examination is needed to validate their taxonomic assignment. We envisage that future refinement of the classification of the Nerineoidea will be best accomplished by combining the information obtained from ontogenetic and GM studies of interior whorl morphologies with the external morphological characters currently used. In this way we can obviate the inherent artificiality of a classification built using different characters at different hierarchical levels. Micro-CT has been used to help in reconstructing the anatomy of recent and fossil molluscs, including buccal masses of ammonoids (e.g. Tanabe et al. 2013;Kruta et al. 2014) and extant cephalopods (e.g. Kerbl et al. 2013). Previous tomographic reconstructions of the internal structure of mollusc shells have focused on 3D parameters, such as chamber volume, to test buoyancy properties (Lemanis et al. 2015) and to study evolution and development (Tajika et al. 2015(Tajika et al. , 2018Lemanis et al. 2016). However, imaging of fossils has been limited to relatively few specimens (e.g. 
Lemanis et al. 2015Lemanis et al. , 2016Tajika et al. 2015) and scanning was restricted to just those exceptionally preserved fossils that exhibited a clear contrast between the shell and the surrounding matrix. We also found micro-CT scanning useful in reconstructing the internal structure of fossils without damaging them through physical preparation. However, our method of using slices of cross-sections from the scanned 3D models made it possible to analyse specimens with various degrees of preservation, specifically poorly preserved ones. Furthermore, by incorporating the large amount of available cross-section images from fossil collections, publications and online databases we significantly increased our sample size. We demonstrate, for the first time to our knowledge, that micro-CT scanning is also useful for establishing the internal morphology of specimens embedded in complex calcareous mass-accumulation beds (Fig. 4) (but see Lukeneder et al. (2014) for laser scanning of surface morphology). Mass accumulations of shells, or fossil concentrations, have been extensively studied for their usefulness in sedimentology and stratigraphy as well as for the wealth of palaeobiological data they hold (Kidwell et al. 1986). The taphonomic analysis of fossil concentrations has been correlated with multiple palaeoenvironmental parameters, including palaeohydraulics and sedimentary deposition, facies analysis and marker bed correlations (F€ ursich 1978;Kidwell et al. 1986) as well as sequence stratigraphy (F€ ursich & Pandey 2003) and petrophysical properties (Chinelatto et al. 2020). Mass accumulations of nerineoideans are considered important recorders of water energy and nutrient availability, and quantitative analysis of their abundance is used for environmental reconstruction in many regions (Dauwalder & Remane 1979;Wieczorek 1979;Waite et al. 2008). CT enabled us to isolate individual specimens from the surrounding matrix without using destructive and irreproducible mechanical or chemical methods (summarized in Sutton 2008). By combining virtual palaeontology and GM, we manually segmented nerineoidean specimens without harming the original sample and accurately characterized the internal structure of the nerineoidean specimens. Our new approach proposes a new opportunity of quantitative examination of mass accumulations of nerineoideans. It has the potential to uncover underexploited data on the intraspecific variation of fossil concentrations and to increase their value in palaeoenvironmental reconstructions and biostratigraphy. The efficiency of GM in distinguishing between species using a single data source has been frequently shown for both Recent and fossil gastropods (e.g. Guralnik & Kurpius 2001;Carvajal-Rodr ıguez et al. 2006;Monnet et al. 2009;Smith & Hendricks 2013;Abdelhady 2016;Jackson & Claybourn 2018). However, we have developed a GM protocol that can be applied to multiple data sources: CT images, polished cross-sections and mass accumulations, all analysed using the same methods to determine the internal structure of the shell in 2D. This allows easy comparison of data across disparate sources including whole shells, sectioned shells and the frequent illustrations of cross-sections in published work. We envisage that the use of this method will help to enable the re-evaluation of the taxonomy and classification of the Nerineoidea using an easily replicable method. 
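The multivariate analyses reported above were run in R with the geomorph package. Purely as an illustration of the same idea, the following Python sketch aligns a set of semilandmark configurations with a pairwise Procrustes superimposition and then ordinates them with PCA. The input shapes are randomly generated, and scipy's pairwise procrustes is used here as a simplification of a full generalised Procrustes analysis, which iterates towards a mean reference shape.

```python
import numpy as np
from scipy.spatial import procrustes
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical data set: 20 whorl outlines, each captured as 30 (x, y) semilandmarks.
shapes = rng.normal(size=(20, 30, 2)) * 0.05 + np.linspace(0, 1, 30)[None, :, None]

# Align every configuration to the first one (a simplification of generalised
# Procrustes alignment); procrustes() removes translation, scale and rotation.
reference = shapes[0]
aligned = np.array([procrustes(reference, s)[1] for s in shapes])

# Flatten the aligned coordinates and ordinate the shape space with PCA.
flat = aligned.reshape(len(aligned), -1)
pca = PCA(n_components=2).fit(flat)
scores = pca.transform(flat)
print("Variance explained by PC1 and PC2:", pca.explained_variance_ratio_)
```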
Previous work by Jackson & Claybourn (2018) emphasized the importance of incorporating classic qualitative criteria to create F I G . 6 . Principal component analysis (PCA) of internal whorl outlines and characteristic outlines of taxa within Nerineiodean families: A, Ceritellidae; B, Ptygmatididae; C, Nerinellidae + Eunerineidae + Nerineidae. a robust framework for taxonomic identification of Cambrian helcionelloid molluscs. They combined qualitative systematic descriptors with geometric morphometrics analysis to refine subtle intra-and interspecific variations in shape. Our quantitative method of investigation of the characters of fossil gastropods might be seen as a step towards the development of a large-scale automated taxonomic identification system (MacLeod 2007;Hsiang et al. 2018). CONCLUSIONS We suggest a new reliable quantitative method to taxonomically identify nerineoideans based on their internal shell structures. We show that using micro-CT scanning and GM can provide a powerful and easily employed tool for identifying nerineoidean gastropods, one of the most abundant, diverse and widespread gastropod superfamilies of the Mesozoic. Our method is non-destructive and thus reduces the inherent difficulty associated with the existing classical morphological identifications of Nerineoidea. Moreover, it is applicable to a range of poorly preserved specimens, cross-sections, and to mass accumulations of specimens in fossil beds. The current study found that the Ceritellidae and Ptygmatididae families show distinctive internal whorl outlines, whereas the internal whorl outline of the Nerinellidae, Eunerineidae and Nerineidae families overlap, thus demonstrating the need for further taxonomic re-evaluation of the group. Our suggestion for applying virtual palaeontology techniques to the identification of nerineoidean fossils could promote the revolution of their classification and understanding of their phylogeny. In turn, the distinct identification of nerineoideans can be applied to improve the characterization of macrobenthic community structure of Mesozoic carbonate platforms. The era of digitization enables us to ask large-scale questions based on the sampled fossil record. Using our virtual palaeontology approach on the current curated databases, we can progress towards a unified interpretation of taxonomic data, promoting the use of large datasets in future work.
Reaching the lower stratosphere : validating an extended vertical grid for COSMO This study presents an extended vertical grid for the regional atmospheric model COSMO (COnsortium for Small-scale MOdeling) reaching up to 33 km. The extended setup has been used to stably simulate 11 months in a domain covering central and northern Europe. Temperature and relative humidity have been validated using radiosonde data in polar and temperate latitudes, focussing on the polar and mid-latitude stratosphere over Europe. Temperature values are reproduced very well by the model. Relative humidity could only be met in the mean over the whole time period after excluding data from Russian stations, which showed significantly higher values. A sensitivity study shows the stability of the model against different forcing intervals and damping layer heights. Introduction The upper troposphere and lowermost stratosphere is a place of sharp gradients in many constituents of air and of the physical parameters used to describe its state.Temperature and ozone are textbook examples, but methane, water and many more species also show a strong gradient.At the same time, being the boundary to the lower atmosphere, this is an area where small-scale fluctuations can have a strong influence on the stratosphere and its composition (Zahn et al., 2014). In order to simulate this highly vulnerable and influential layer directly, a model with high vertical and horizontal resolution is needed.Global models usually are too coarsely resolved and cannot model the small-scale processes.In extending the vertical layering of the regional model COSMO (COnsortium for Small-scale MOdeling) to 33 km, we present here a model that can fill the gap.As we planned to apply the extended setup to simulations covering polar spring and the associated ozone loss with the coupled chemistry model COSMO-ART (COSMO-Aerosols and Reactive Trace gases) (Vogel et al., 2009), we focus here on polar latitudes, but always refer to temperate regions also. After an introduction to the model and an exact definition of the extended vertical grid in Sect.2, the measurement data are introduced in Sect.3. COSMO is shown to be able to run stably with the extended layering.Using radiosonde data and regridded data from meteorological reanalyses, it is shown that the model is able to reproduce temperatures very well (Sect.4.2) while relative humidity is more difficult (Sect.4.3) and only its mean value could be reproduced.Two runs with different boundary conditions were performed to test the influence on the model result. Additionally, three more runs were done in order to test the stability of the model against an increased boundary forcing interval set to 12 and 24 h instead of 6 h and against increasing the thickness of the damping layer by setting its lower end down to 22 km instead of 28 km.Section 5 presents the results of this sensitivity study, showing that the model will still run stably. The model: vertical grid, boundary data and domain This section gives a short introduction to COSMO and explains the changes made to the standard vertical grid as well as the boundary data used and the specified domain. 
Introduction to the model COSMO is a regional atmospheric model that has been developed by a consortium led by the German weather service DWD (Deutscher Wetterdienst). DWD uses the model for its regional numerical weather forecasts of Europe and Germany with resolutions of 7 km and 2.8 km respectively (Baldauf et al., 2011b). Many extensions have been developed for the model, for example COSMO-ART, which includes chemistry and aerosols (Vogel et al., 2009). For this study, the model was set up to run in forecast mode, simulating several months in the form of a hindcast using reanalysis data as boundary forcing. The standard setup of COSMO used for the forecast of central Europe (DWD domain COSMO-DE) reaches a height of 22.0 km (Baldauf et al., 2011a). This is the vertical grid referred to as the standard vertical setup or grid in this study, while noting that the vertical grid used to simulate a larger European domain (COSMO-EU), which reaches up to 23.6 km (Schulz and Schättler, 2009), is used just as frequently by DWD. The model has also been used to study greater heights in tropical latitudes in the AMMA (African Monsoon Multidisciplinary Analyses) project (Gantner and Kalthoff, 2010), reaching 28.0 km, and a tropical setup reaching up to 30.0 km has also been developed (Krähenmann et al., 2013). With the extended vertical grid presented in this study, it becomes possible to simulate the lowermost stratosphere in polar latitudes. This validation study opens the door to new applications of COSMO. The extended vertical grid The standard vertical grid of the COSMO model reaches up to 22.0 km in 50 layers. The vertical structure is visible in Fig. 1; exact values are given in Table A1. In the standard setup, the damping layer in the top layers begins at 11 357 m. The vertical layering of the new grid introduced in this study is also given in Fig. 1 and Table A1. It is focused on the lower stratosphere, with the highest of the 60 layers at 33 km and the damping layer beginning at 28 km (rdheight = 28 000.0 in the namelist). The top layer of the extended grid is about 10 km above that of the standard grid, and the distance between the layers is slightly smaller at all heights above the lowest kilometer, as is also visible in Fig. 1. In order to test the sensitivity of the model to the size of the damping layer, an additional model run was done, for which the lower boundary of the damping layer was set to 22 km (rdheight = 22 000.0 in the namelist), which is just the top of the standard grid. The damping layer then spans one-third of the model layers. The analyses used as boundary data In order to examine the influence of different boundary data on the model results, the model was run twice, using ERA-Interim and NCEP (National Center for Environmental Prediction) reanalysis data for starting and boundary values. The vertical layering of the two reanalyses is displayed in Fig. 2. In order to better evaluate the model, the reanalysis data were also interpolated to the vertical grid used for the output of the model.
The reanalysis project of the National Center for Environmental Prediction provides data starting on 1 January 1948, giving global fields every 6 h (00:00, 06:00, 12:00 and 18:00 UTC) at a resolution of T62, which corresponds to 1.875° (192 points on a latitude) (Kalnay et al., 1996). The upper boundary is at 2.7 hPa, approximately 42 km in the US standard atmosphere (Sissenwine et al., 1962). So the new vertical grid reaching up to 33 km is still within the vertical limits of the NCEP reanalysis data. ERA-Interim is the reanalysis project of the European Centre for Medium-Range Weather Forecasts (ECMWF) (Dee et al., 2011). The data were used in this study at a resolution of T255 (corresponding to 0.7°, 512 points on a latitude) and up to 0.1 hPa. So both the vertical and horizontal resolution are higher than those of the NCEP reanalysis. ERA-Interim is available for the same timestamps as the NCEP reanalysis. In the standard setup, the reanalysis data were used at a 6-hourly interval (hincbound = 6.0 in the namelist) to force the model. The sensitivity of the model to this interval of boundary forcing was tested by performing two additional model runs using the ERA-Interim reanalysis data and using it as forcing every 12 and 24 h (hincbound = 12.0 or hincbound = 24.0 respectively). The model domain The model domain used in this study is shown in Fig. 3. It covers most of Europe with a focus on the polar latitudes, stretching from northern Africa in the south and covering Svalbard, east of Greenland at 74° N, in the north. The resolution was set to 0.2°. The COSMO model is operationally used by DWD to produce regional weather forecasts for central Europe, but not in Northern Hemisphere polar latitudes (Baldauf et al., 2011a). So the domain chosen here can be used to assess the performance of the model in polar latitudes, since a direct comparison to an area of regular use is possible. The namelist parameters required to reproduce the model domain are given in Table A2. The first time step simulated by the model runs used in this study is 1 October 2010, 00:00 UTC, and the last output is for 1 September 2011, 00:00 UTC. The cold temperatures that can be expected in the polar stratosphere, especially in winter, and the warming in spring both lie well within the simulated period. Output was produced on an hourly basis, and the model time step was set to 60 s using the namelist parameter dt = 60.0. It could be shown that the model runs stably in this setup by validating the whole time period with radiosonde data. The timespan of 11 months is due to the time limit applied to the calculation. The model was run with a time limit of 2 days, reaching a total number of 8076 output hours. The last output then turns out to be on 2 September 2011, at 11:00 UTC, but the authors decided to perform this study for the exact 11 months, as given above. Measurements This study validates the output of the COSMO model using the temperature (T) and relative humidity (RH) recorded by radiosondes of stations within the model domain. T and RH are regularly observed values and are here considered basic physical parameters whose distribution well represents the physical state of the model. The measurement data used in this study were taken from the ESRL (Earth System Research Laboratory) radiosonde database provided by NOAA (National Oceanic and Atmospheric Administration) (Schwartz and Govett, 1992). The location of the 24 stations is given in Fig.
3, exact values and the names being given in Table B1.This choice includes all polar stations in the domain and the same number of temperate stations with good data coverage. All stations typically release one radiosonde every 12 h, at 00:00 and 12:00 UTC, so 671 ascents can be expected from each station during the period of 335 simulated days.The actual number of ascents for each station is also given in Table B1.All stations except Ny-Ålesund, which has a little more than one ascent per day, come close to or exceed this number, the average being at 673 ascents.Model and regridded reanalysis data were only considered at times when there was an ascent at the specific station, so approximately every 12 h. In order to compare sonde and model data, the grid point closest to each station was used to compare the simulation with measurements.Since the resolution is only 0.2 • , the error made by this simple identification is small.The latitude and longitude of the closest grid point can also be found in Table B1.An interpolation to the exact location was not considered necessary as the radiosondes drift with the wind, an effect not accountable, since the exact geographic location of each measurement taken by the sonde is not available.This is also the reason why no interpolation in the vertical was done. In each ascent, the value closest to each model output layer at even kilometers was identified with the height of that layer, the maximum difference allowed having been set to 500 m.Since there are typically more than 20 measurements taken in an ascent, the error was much smaller than this value, reaching only 156.0 m on average, with a standard deviation of 126.3 m. The data were used as downloaded from the server, only excluding values in RH > 100 %.It was found that all stations in Russia give much higher humidity values than the other stations, which is the reason why the humidity data of all Russian stations were excluded from the investigation.This will be further discussed in Sect.4.3.1. Results This sections presents the results of the model validation study.Two questions are to be answered: is the model able to simulate the polar latitudes and the stratospheric heights?And what is the influence of the boundary data on these results?Following the questions, the answers will also have to be twofold. After presenting the output grid, the results in temperature are presented.Those of relative humidity are described in the following section.The latter is preceded by the explanation of why it seemed reasonable to exclude the data of Russian stations when examining relative humidity. The output grid In order to compare the model results to the measurements, model output on a vertical grid of whole kilometers from 8 to 33 km was used.The values given out above 27 km are already within the damping layer and the results can no longer be considered to come genuinely from the model, so measurements were only compared up to 27 km.As noted above, the boundary data were also interpolated onto the output grid, using the same program that is used to prepare the boundary data for running the model, called INT2LM (Schättler, 2013).COSMO uses terrain following coordinates.Above a certain value specified in the namelist, the layers become smooth and are no longer terrain following.This height has to be higher than the highest mountain tops in the domain and in this case was set to vcflat = 7000.0,given in the namelist in meters.This is the reason why all analyses done in this study only start at 8 km. 
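As a minimal sketch of the matching procedure described above (illustrative only; the station sounding levels and values are invented), the following pairs each output level with the closest radiosonde measurement, rejects matches farther than 500 m from the whole-kilometre level, and discards levels above 27 km, which already lie in the damping layer.

```python
import numpy as np

output_levels_m = np.arange(8000, 34000, 1000)   # output grid: 8-33 km at 1 km spacing
max_diff_m = 500.0                               # maximum allowed height difference
top_valid_m = 27000.0                            # levels above 27 km are in the damping layer

def match_ascent(sonde_heights_m, sonde_values):
    """Assign each valid output level the closest sounding value within 500 m."""
    matched = {}
    for level in output_levels_m:
        if level > top_valid_m:
            continue
        idx = np.argmin(np.abs(sonde_heights_m - level))
        if abs(sonde_heights_m[idx] - level) <= max_diff_m:
            matched[int(level)] = sonde_values[idx]
    return matched

# Invented example ascent: heights (m) and temperatures (K) reported by one sonde.
heights = np.array([7900.0, 9050.0, 11200.0, 14020.0, 20100.0, 26300.0])
temps = np.array([235.0, 225.0, 215.0, 210.0, 218.0, 224.0])
print(match_ascent(heights, temps))
```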
Temperature To begin the discussion, a look at Fig. 4 exemplifies the basis of this study.It shows all the soundings of the station Jan Mayen during the time considered here.The warming at the end of the polar winter can be plainly seen.Most striking are the many white areas in the image, showing the lack of measurement data.The bottom figure shows the corresponding result of the model run with boundary data by ERA-Interim.The image is filled, but the data were only used for the following analysis if measurements were also available at the timestamp. Figure 5 gives exemplary time series of Jan Mayen and Madrid at 26 km height, approximately 2.5 km above the model top of the standard vertical COSMO grid for both model runs.When comparing the two figures, temperature values reflect the different latitude: winter temperatures above Jan Mayen are much colder than above Madrid, the warming in spring much more pronounced.The good correspondence of model and measurement not only shows that the two model runs and also the boundary data are very similar, but also that the model performance does not change during the whole simulated period.There is no greater offset in the end than in the beginning. To compare the data in a more quantitative manner, Fig. 6 shows the mean ascent at Jan Mayen for both model runs.The boundary data are also included in the image.All three soundings lay on top of each other.The minimum temperature in the lowermost stratosphere is well reproduced.In order to compare to a temperate station, Fig. 6 also gives the mean ascent of the station in Madrid.The minimum is more pronounced, but also reproduced by the model.There is no difference visible between the model run forced by ERA-Interim and that forced by NCEP reanalysis data. In order to further compare the performance of COSMO, Fig. 7 shows the scatterplots of all measured against mod- eled temperature values with color coded height intervals for all polar stations.The variability in higher altitudes is lower, which is why the scatter is reduced with height.Both model runs with different boundary data simulate temperature very well, reaching about r 2 = 0.98.The results of the model in temperate latitudes was just as good and the correlation does not reach higher values when using the regridded boundary data (not shown). When reducing the data to values of descriptive statistics, all stations can be easily compared.Figure 8 shows the mean of T model − T meas and T bound − T meas for all levels and for stratospheric levels with z ≥ 11 km.The stratospheric layers are also those layers added when using the extended instead of the standard vertical grid.In both cases, the values are well reproduced by the model.When considering all layers, the mean values of the boundary data are lower than those of B1 for a list of the stations corresponding to the numbers.measurement, the model output actually being closer to the measurement.When considering the new stratospheric layers, the model performance is just as good as it is when considering all layers.The boundary data are now closer to measurements than for all levels.Overall, COSMO is able to reproduce measurements in temperate as well as polar latitudes in all heights, the mean difference never exceeding 0.5 K. The spatial distribution for the run forced by ERA-Interim is shown in Fig. 
9, the figure being very similar when looking at the results of the run using the NCEP reanalysis as boundary data. It now becomes clear that the slight outliers of stations 7, 16 and 21, also visible in Fig. 8, are all close to the eastern border of the model domain. By looking at the stations used to examine the problem of the Russian humidity data, however, it could be shown that this effect is not visible when considering more eastern stations. It is not due to the relative location of the three stations within the model domain but more likely to the measurement data.
Figure 9. Mean difference of model values and measurements of temperature for each station over all levels when using ERA-Interim as forcing data. The picture is similar when using NCEP reanalysis data.
Another aspect when comparing the model output to measurements and regridded reanalysis data is the variability of the model in between those times when measurements or reanalysis data are available. Model output was saved every hour, while measurement or reanalysis data are available at most every 6 h, as explained in Sects. 2.3 and 3. In order to assess this variability, Fig. 10 shows a shorter time series of only 10 days for the three data sets, including all existing model and reanalysis data. It becomes obvious that the model shows an internal variability that is not present in the less frequent measurement or reanalysis data. The greater variability is linked to physical processes that happen on short timescales of only hours or less. These cannot be captured by regridding the reanalysis data to a finer grid.
Excluding the Russian humidity data
When examining the relative humidity of the 24 stations chosen for the validation of the model, it became apparent that the model could not reproduce the relative humidity data of any station within Russia (or of Gomel, the only station in Belarus with data during the modeled period, as became clear when examining more stations).
As there was no apparent reason for this offset and only seven stations lay within Russia in the original set (five polar and two temperate), this issue needed further investigation. The data of all available 23 Russian stations well within the model domain and Gomel in Belarus (see Table B2) were compared with 24 other stations in the eastern part of the domain but not in Russia or Belarus (see Table B3). The result is best illustrated by the mean over all RH values of all ascents in each group. Figure 11 shows the result for the Russian stations and the 24 stations outside of Russia that had been chosen. While the model reproduces the values of the stations outside of Russia, the measurement values of the stations within Russia are very different from the model values but also from the regridded analysis and from the measurements of the stations outside of Russia.
In addition to the mean, the station Kaliningrad (no. 8), surrounded by the non-Russian stations Leba (no. 11), Kaunas (no. 12), Visby (no. 13) and Tallinn (no. 16), also allows for a spatial investigation. While the results of Kaliningrad are similar to the mean of the Russian stations, the mean ascents of the surrounding stations are all similar to the mean of the non-Russian stations. These two findings are in line with Balagurov et al. (2006) and Moradi et al.
(2013). The authors of these studies come to the conclusion that the measurement technique used in Russian radiosondes gives values for relative humidity that are significantly too high at low pressure. Altogether, this led to the decision to exclude the Russian stations from the further investigation of the performance of COSMO with respect to relative humidity.
The mean values of the ascents of temperate and polar stations for both model runs are given in Fig. 12. The low stratospheric values are well reproduced by the model for polar and temperate stations and both runs, while the tropospheric offset is larger. At heights below 13 km, the model is too humid on average, the values being approximately 10 % too high. The mean of the tropospheric values seems to be better reproduced for polar stations when using the NCEP reanalysis. The bias between measurements and model data is also present in the forcing reanalysis data, which are drier than the measurements on average. The model reduces this bias and produces a wetter atmosphere than that of the reanalyses. The bias is thus a combination of model physics, boundary data and possibly also measurement problems. Overall, the model results fit the measurements better than the reanalysis data do.
However, when looking at the scatterplot of the polar stations, given in Fig. 13, it becomes clear that the model is only able to reproduce a mean value that is similar to the measurements. There is no notable correlation at any height. The variability in the measurements is simply too high to be reproduced by the model. This is also visible in the figures showing the mean ascents. The standard deviation of the model and the regridded analysis is much smaller than that of the measurements in the stratospheric layers. Figure 14 shows the time series of relative humidity at 10 and 21 km height. At 21 km height, the values are very low most of the time. While the small-scale variations in the troposphere are not reproduced by the model, the stratospheric variability is well captured.
Figure 15 shows the spatial distribution of the mean of RH_meas − RH_model over all layers. The Russian stations have been excluded, but two other stations also show an offset compared to the other stations: Tórshavnar (no. 11) and Scoresbysund (no. 23). The modeled values are higher than the measurements, with a difference of about 4 % RH. This again is probably not an effect of the model but more likely of the measurements, since the surrounding stations do not show similar effects. The value fits the range of 2-6 % dry bias reported by Wang et al. (2013) for radiosondes of type Vaisala RS92, but the type of sonde is not known for any of the stations in this study.
Relative humidity is, on the one hand, very variable, so that it becomes hard to model exactly; on the other hand, it seems not an easy parameter to measure, as the problems first found in the Russian data, which are apparently also present in the data of other stations, show.
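The group comparison that motivated the exclusion of the Russian humidity data reduces to averaging RH profiles over station groups. The following Python sketch illustrates this; station names, the data layout and the numbers are assumptions for demonstration only, not the study's actual data handling.

```python
# Illustrative sketch: mean RH ascent profile of one station group set against a
# reference group, for ascents already matched onto the common output levels.
import numpy as np

def group_mean_profile(ascents):
    """ascents: list of 1-D arrays (RH per output level, NaN where unmatched).
    Returns the level-wise mean and standard deviation over all ascents."""
    stack = np.vstack(ascents)
    return np.nanmean(stack, axis=0), np.nanstd(stack, axis=0)

def compare_groups(ascents_by_station, group_a, group_b):
    """Mean profiles for two station groups, e.g. Russian vs. non-Russian stations."""
    prof_a, _ = group_mean_profile([a for s in group_a for a in ascents_by_station[s]])
    prof_b, _ = group_mean_profile([a for s in group_b for a in ascents_by_station[s]])
    return prof_a, prof_b, prof_a - prof_b   # the offset between the groups

# Synthetic usage: two stations per group, three output levels.
data = {
    "Kaliningrad": [np.array([80., 45., 20.]), np.array([85., 50., 25.])],
    "Moscow":      [np.array([78., 48., 22.])],
    "Leba":        [np.array([65., 30., 8.])],
    "Visby":       [np.array([60., 28., 9.]), np.array([62., 32., 7.])],
}
a, b, offset = compare_groups(data, ["Kaliningrad", "Moscow"], ["Leba", "Visby"])
print(offset)
```

A systematic positive offset of one group against both the other group and the model, as in the text, points to the measurements rather than the simulation.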
As for temperature, a closer look at a shorter time period in the form of a time series can give information on the internal variability of relative humidity in the model. Figure 16 shows the time series of relative humidity at Scoresbysund for 10 days at the end of January 2011. The model shows a large variability on short timescales that is not present in the other data sets. The coarsely time-resolved measurements cannot be used to judge the fluctuations happening in the model on short timescales. It becomes understandable that relative humidity in particular is difficult to compare to radiosonde data, as the variability of this field is so large that the model cannot be expected to reproduce the exact values that were measured at a specific site.
Boundary forcing interval
This section describes the results of the two model runs that were performed with larger boundary forcing intervals of 12 h (called int12 in plots) and 24 h (int24), relative to the other runs with 6-hourly forcing (called int6). Both of these runs ran stably, and the setups were used to simulate the same time period as the run with 6-hourly forcing.
In order to compare the three runs, Table C1 gives the correlation coefficients of model and measured temperature and relative humidity (excluding the Russian stations) for all three runs, listed separately for polar and temperate stations. The correlation is slightly weaker for both variables with the increased boundary forcing interval, the coefficient becoming smaller as the interval increases. This is expected, as the forcing interval determines how strongly the model is influenced by the boundary values that represent a realistic meteorology. But the decrease is not very strong, and measured temperature can still be seen as very well reproduced even by the run that uses only one boundary input field per day.
In addition to comparing each run with measurement data, the runs can be directly compared with one another. For this, the 6-hourly time series data that were prepared at each station provide a good database. The difference between the model runs does not increase with simulation time (not shown). The mean difference for the separate stations and the mean over all stations at each height are presented in Fig. 17. At all heights and for both variables, the run with 24-hourly forcing shows a larger difference from the original run than the run with 12-hourly forcing.
Extending the damping layer
In a second test, the sensitivity of the model to the extent of the damping layer was investigated with an additional model run. For this run, the lower end of the damping layer was set to 22 km (called rdh22 in plots), 6 km lower than in the original run (rdh28). It then extends over one-third of the total model height of 33 km.
Another test run had been planned for which the model height was increased to 42 km, leaving the damping layer as is. This setup ran only for a few days before numerical instabilities led to the breakdown of the model. The reasons for these instabilities were not investigated further, but this also showcases that it is not a trivial task to find a vertical grid with which the model runs stably.
The setup with rdheight = 22.0, on the other hand, ran stably for the time period considered in this study. Table C2 lists the correlation coefficients of model against measurement data for temperate and polar stations, including all layers up to 21 km. The differences are only marginal, and the runs can be considered to reproduce the measurements equally well.
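The run comparisons used here reduce to two simple operations, shown in the following Python sketch (assumed array shapes and synthetic numbers; not the authors' evaluation code): a correlation of each run against the measurements, as in Tables C1 and C2, and a level-wise mean difference between a sensitivity run and the 6-hourly reference run, as in Figs. 17 and 18.

```python
# Illustrative sketch of the two run-comparison measures described in the text.
import numpy as np

def correlation_with_measurements(run_values, meas_values):
    """Pearson correlation over all matched (station, time, level) samples."""
    run_values, meas_values = np.asarray(run_values), np.asarray(meas_values)
    mask = ~np.isnan(run_values) & ~np.isnan(meas_values)
    return float(np.corrcoef(run_values[mask], meas_values[mask])[0, 1])

def run_difference_profile(run_a, run_b):
    """run_a, run_b: arrays shaped (time, level) on the common 6-hourly output times.
    Returns the mean difference per level, i.e. one profile."""
    return np.nanmean(np.asarray(run_a) - np.asarray(run_b), axis=0)

# Synthetic usage: 4 common output times, 3 levels; int24 differs from the
# reference run by a fixed offset per level.
ref = np.array([[220., 215., 218.],
                [221., 214., 219.],
                [219., 216., 217.],
                [220., 215., 218.]])
int24 = ref + np.array([0.5, -0.3, 0.8])
print(run_difference_profile(int24, ref))
# Here the reference run simply stands in for "measurements" to exercise the function.
print(correlation_with_measurements(int24, ref))
```

The same two functions, applied once per station and once to the pooled data set, would reproduce the kind of per-station profiles and overall coefficients discussed above.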
In order to assess the difference between the model runs, the 6-hourly data generated for each station are again used to calculate a profile of the difference between the two model runs for each station and for the whole data set. The result of the analysis is shown in Fig. 18. The shapes of the curves are similar to those of Fig. 17, where the boundary input interval was varied. The overall difference is small and similar in magnitude to the difference found when doubling the boundary forcing interval to 12 h. Just where the damping layer starts to be active, a kink is visible in the temperature difference profile, showing the necessity of stopping the evaluation of the model below the damping layer height when comparing measurements and the model.
Summary and conclusions
This study presents a new, extended vertical grid for the regional model COSMO. The extended grid reaches up to 33 km, almost 10 km above the model top of the standard vertical setup used for the forecast of central Europe by the DWD in the domain COSMO-DE. By reducing the thickness of the damping layer to 5 km, the height up to which the model can be considered free running reaches 28 km, compared to 11 km in the standard setup. This is already well within the lowermost stratosphere.
The extended vertical grid is planned to be used for simulations covering the polar spring and the associated ozone loss, which is why it was tested using a domain covering central and northern Europe. To assess the influence of different boundary conditions, two model runs were compared with measurements, using ERA-Interim or NCEP reanalysis as boundary conditions for the model. Both model runs covered the same period, from 1 October 2010 to 1 September 2011. The model simulated this period stably. Additionally, three more runs using ERA-Interim as boundary forcing were done, two with increased boundary forcing intervals of 12 and 24 h and one with an extended damping layer reaching down to 22 km.
The output was compared with measurements of temperature and relative humidity from all 12 polar radiosonde stations in the domain and as many in temperate latitudes.
The measurements of temperature are well reproduced by the model for all stations and heights. This is true not only for the mean, but also for the comparison of single ascents. The error at heights above 11 km is even smaller than that obtained when considering all layers, probably because the variability is not as high as when the tropospheric values are included. The mean error made by the model is smaller than 0.5 K for all stations. The boundary data, which were regridded to the output grid, reach similar values.
When comparing relative humidity values, it was found that the Russian stations (and Gomel in Belarus) had systematically reported higher values. This finding was strengthened by comparing all 23 Russian stations in the domain and Gomel to 24 stations not in Russia but in the eastern part of the domain, and by considering model and boundary data. After excluding the Russian stations from the analysis of relative humidity, it became apparent that the model is not capable of reproducing the exact values of each measurement, and neither is the regridded boundary data. But it does reproduce the low stratospheric values and fits the measurements well when taking a mean over the whole time period. In the tropospheric layers, the model values are more humid than the measurements.
The sensitivity study using longer boundary forcing intervals shows how the model reacts to this factor. The difference from the measurements increases as the interval increases, as does the difference from the original model run. The stability of the model when using the extended vertical layering does not depend on short boundary forcing intervals. The results of the run with an extended damping layer reaching down to 22 km do not differ much from the original setup. The height of the damping layer does influence the results of the model, but the differences reach only about 1 K in the case of temperature, for example.
The vertical grid for COSMO presented in this study seems a good alternative to the standard vertical layering of the COSMO-DE domain when focusing on the upper troposphere and lower stratosphere in polar latitudes. It has been shown to run stably, simulating almost a year. By comparing with data from synoptic radiosondes and regridded reanalysis data, it could be shown that the model is able to reproduce measurements of temperature well and to produce reasonable values of relative humidity. The enlarged time series show a small-scale variability in the model that is not present in the measurements and cannot be expected from regridding the boundary data. The stability against varying the boundary forcing interval and the extent of the damping layer was shown with three additional model runs. Using this extended vertical grid expands the possible applications of COSMO into the stratosphere. With its high resolution it could be used to study cross-tropopause transport or to simulate the chemistry of the lower stratosphere in polar latitudes when also including COSMO-ART.
Figure 1. The vertical grids of the COSMO model considered in this study. The damping layers are also given as shaded areas.
Figure 2. The vertical structure of the NCEP and ERA-Interim reanalyses used as boundary conditions.
Figure 4. Temperature values of all soundings of the station Jan Mayen, station no. 20. Measurements are displayed on the top, the image below shows the corresponding model values. Note that this is not a time series plot. The dates along the abscissa hold true only for the location they indicate and do not define the exact time in between. Dates only increase from left to right, but they are not evenly spaced in time.
Figure 6. Mean temperature values at each height for the station on Jan Mayen, station no. 20, on the top, and for Madrid, station no. 1, on the bottom, showing results of the run forced by ERA-Interim (left) and NCEP (right). The horizontal lines give the 1σ standard deviation.
Figure 7. Scatterplot of modeled against measured temperature for polar stations when forcing the model with ERA-Interim (top) and NCEP reanalysis data (bottom). The data were color coded by height to visually inspect the variability in each height section. The statistics in the upper left-hand corner refer to the whole data set.
Figure 8. Mean difference in temperature over all heights (top) and heights with z ≥ 11 km (bottom) for each station. The dashed line corresponds in color to the full line and is always half the standard deviation of the difference above and below the mean value. See Table B1 for a list of the stations corresponding to the numbers.
Figure 10. Time series of measured and modeled temperature as well as the regridded boundary data, 10 (top) and 23 km (bottom) above Scoresbysund, station no. 19, over 10 days at the end of January 2011. All data points available in each data set are included.
Figure 11. Mean relative humidity values of Gomel (BY) and the 23 Russian stations (top), and of 24 stations outside of Russia but in the eastern part of the domain (bottom). The horizontal lines give the 1σ standard deviation.
Figure 12. Mean values of relative humidity for polar (top) and temperate (bottom) stations for the model run forced by ERA-Interim (left) and NCEP (right) reanalysis data. Russian stations were excluded from this analysis, as described in the text. The horizontal lines give the 1σ standard deviation.
Figure 13. Scatterplots of modeled against measured relative humidity for the runs forced by ERA-Interim (top) and NCEP (bottom). The data were color coded by height to visually inspect the variability in each height section. The statistics in the upper left-hand corner refer to the whole data set.
Figure 14. Time series of relative humidity at 10 km (top) and 21 km (bottom) height above Jan Mayen for the model forced by ERA-Interim data.
Figure 16. Time series of relative humidity at 10 (top) and 23 km (bottom) height above Scoresbysund for the model, the forcing ERA-Interim reanalysis and the measurement data at the end of January 2011.
Figure 17. Difference between the model runs with 12- and 24-hourly forcing and the original run with 6-hourly forcing for T (top) and RH (bottom). Shown is one profile for each station and the mean of all stations.
Figure 18. Difference between the model run with the lowest extent of the damping layer at 28 km and the standard with rdheight = 22 for T (top) and RH (bottom). Shown is one profile for each station and the mean of all stations.
Figure 3. The model domain and the radiosonde stations used in this study. The domain is displayed as gray shading; the radiosonde stations are numbered from south to north, the numbers also referring to Table B1. Russian stations are marked in red.
Figure 15. Mean difference of measurements and model values of relative humidity for each station when using ERA-Interim as forcing data. The picture is similar when using NCEP reanalysis data.
Table A1. Heights of the layers of the standard and the extended COSMO grid, specified in meters.
Table A2. Namelist parameters of the preprocessor int2lm needed to reproduce the model domain.
v3-fos-license
2017-09-12T18:37:46.317Z
2014-02-28T00:00:00.000
46303678
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=43404", "pdf_hash": "bb9b1bfc4e560764a7242e4fd6eedf28c19f0be0", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:468", "s2fieldsofstudy": [ "Medicine" ], "sha1": "bb9b1bfc4e560764a7242e4fd6eedf28c19f0be0", "year": 2014 }
pes2o/s2orc
A New Approach to Reducing Mortality from Dengue
In 2009, based on a multicenter study conducted in Asia and Latin America and subsidized by the Dengue Control (DENCO) Research Program, the World Health Organization (WHO) proposed a new classification for dengue cases. The purpose of the present study was to evaluate the applicability of the new classification relative to its previous version [1]. The evaluation, conducted in Campo Grande county, Mato Grosso do Sul state, Brazil, drew on secondary data from referral healthcare centers that assist high-severity dengue patients. A total of 156 medical records of patients with a laboratory diagnosis of dengue were investigated. The records covered two epidemic periods: the summer of 2006-2007 and the summer of 2009-2010. The results showed that 64.6% of cases classified as dengue fever under the 1997 criteria presented manifestations of severity, warranting their reclassification as dengue with warning signs (49) or severe dengue (15) under the 2009 revised criteria. Bleeding, persistent vomiting, and severe, continuous abdominal pain were the most prevalent warning signs, indicative of risk of progression to severe disease. The revised classification proved less complex than the current version, facilitating the identification of cases and the clinical management of patients.
Introduction
In Brazil, dengue fever, the arbovirosis most widely distributed among humans, is a disease of mandatory reporting and constitutes a serious public health issue. To date, no vaccine has been developed against the disease [2]. Vector control is deficient, and the spread of infection has remained unchecked. Proper classification of cases and their subsequent management have become a global challenge, since timely, accurate diagnosis is crucial to reducing mortality rates. The current classification of dengue cases, implemented in 1974 and revised in 1997, comprises three levels of severity: dengue fever (DF), dengue hemorrhagic fever (DHF), and dengue shock syndrome (DSS) [1]. Classification of the more severe cases depends on laboratory results, a practice that has been questioned in recent years, as it can lead to severe cases being categorized as DF for lack of criteria that take warning signs into account. In 2009, based on a multicenter study conducted in Asia and Latin America, subsidized by the Dengue Control (DENCO) Research Program, the World Health Organization (WHO) proposed a new classification of dengue cases into two categories: dengue fever (which includes cases with and without warning signs) and severe dengue [3]. Warning signs are key to reducing mortality, provided the cases are properly classified and managed [4]. The purpose of the present study was to evaluate the applicability of the new classification relative to its previous version.
Materials and Methods
The evaluation, conducted in Campo Grande, the capital city of Mato Grosso do Sul state, drew on secondary data culled from referral healthcare centers that assist dengue fever patients. A total of 156 medical records of patients with a laboratory diagnosis of dengue were investigated. The records covered two epidemic periods: the summer of 2006-2007, with dengue virus type 2 (DENV-2) as the infective agent, and the summer of 2009-2010, with DENV-1 and 2.
The study was approved by the Universidade Federal de Mato Grosso do Sul (UFMS) Research Ethics Committee (permit 2174, issued 19 October 2011). Data were collected from November 2011 to January 2012 using a modified version of a WHO form for surveying medical records, originally employed in 18 countries as part of a study on the usefulness and applicability of the WHO revised classification [5].
The form employed in the present study contained nine sections addressing the healthcare center, demography, signs and symptoms, clinical findings, images, laboratory findings, discharge diagnosis as per the source document, discharge diagnosis established by the reviewer, and laboratory validation.
Discharge diagnosis, established by the attending physician, was based on the 1997 criteria; post-discharge diagnosis by the reviewer was based on the revised 2009 classification [1].
DF is generally benign, self-limited, and rarely fatal, beginning with high fever of sudden onset, accompanied by headache, prostration, myalgia, arthralgia, retro-orbital pain, and maculopapular rash, with or without itching. Nausea, vomiting, and diarrhea can occur on the 2nd and 3rd day [1].
DHF is characterized by four simultaneous features: fever (lasting for up to one week); spontaneous bleeding (usually petechiae) or a positive tourniquet test; thrombocytopenia (<100,000/mm³); and plasma leakage, characterized by a 20% increase in hematocrit, or a 20% decrease after the critical stage, or the occurrence of pleural effusion, pericardial effusion, or ascites [6].
In DSS, all the defining criteria for DHF may be present in association with circulatory failure, manifested by signs of shock: hypotension; a weak, fast pulse; convergent blood pressure (differential arterial pressure <20 mmHg); cold extremities and cyanosis; and prolonged capillary refill time, possibly in conjunction with oliguria and mental confusion [1][7].
The revised classification (WHO, 2009) considers the following clinical presentations (a schematic of this decision logic is given in the code sketch further below):
Dengue without warning signs: non-severe cases presenting with the general clinical manifestations of the disease, including fever and pain. These can be concomitant with gastrointestinal manifestations such as nausea, mild vomiting, and diarrhea. Warning signs are absent.
Dengue with warning signs: cases associated with increased vascular permeability, presenting with abdominal pain or tenderness; persistent vomiting; fluid accumulation; mucosal bleeding; lethargy or irritability; liver enlargement (≥2 cm); and an increase in hematocrit concurrent with a rapid decrease in platelet count. Cases of dengue with warning signs can be further categorized as severe dengue when one or more of the following features are present: hypovolemic shock with plasma leakage, fluid accumulation with respiratory distress, severe bleeding, and severe organ involvement. These cases require emergency treatment or intensive care [4].
Results
Table 1 shows the classification of cases according to both systems.
The results showed that 64.6% of cases originally categorized as DF exhibited manifestations of severity, warranting their reclassification as dengue with warning signs (49) or severe disease (15) under the 2009 revised criteria. These patients were treated at lower-complexity healthcare services, but should have been referred to services of greater complexity, given the presence of warning signs, indicative of potentially fatal outcomes.
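As referenced above, the 2009 decision logic can be schematized as a simple rule cascade. The following Python sketch is only an illustration of the published criteria as summarized in this paper; the field names are invented, and real case classification requires clinical judgement and laboratory context.

```python
# Illustrative sketch of the 2009 revised WHO decision logic (field names assumed).
from dataclasses import dataclass, field
from typing import List

WARNING_SIGNS = {
    "abdominal_pain", "persistent_vomiting", "fluid_accumulation",
    "mucosal_bleeding", "lethargy_or_irritability", "liver_enlargement",
    "hct_rise_with_platelet_drop",
}
SEVERE_FEATURES = {
    "hypovolemic_shock", "respiratory_distress_with_fluid",
    "severe_bleeding", "severe_organ_involvement",
}

@dataclass
class DengueCase:
    findings: List[str] = field(default_factory=list)

def classify_2009(case: DengueCase) -> str:
    """Return one of the three 2009 categories for a laboratory-confirmed case."""
    findings = set(case.findings)
    if findings & SEVERE_FEATURES:
        return "severe dengue"
    if findings & WARNING_SIGNS:
        return "dengue with warning signs"
    return "dengue without warning signs"

# Usage: a case with persistent vomiting and mucosal bleeding.
print(classify_2009(DengueCase(["persistent_vomiting", "mucosal_bleeding"])))
```

The point of the cascade is that severity is decided from bedside findings first, which is what allows earlier referral than a laboratory-dependent scheme.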
Table 2 lists the occurrence of clinically relevant warning signs indicative of risk of progression to severe disease in the sample investigated. The presence of warning signs in 80 cases originally classified as DF means that only 22.4% were correctly categorized and properly treated in the ambulatory setting. Misclassification and consequent mismanagement (treatment limited to the primary-care level) were found in 17% of cases presenting with bleeding, 13% with persistent vomiting, and 8% with severe abdominal pain. Potentially lethal aggravation of these cases could be prevented if treatment were based on accurate diagnosis that takes warning signs into account.
The evaluation of diagnostic validity based on disease severity revealed a sensitivity of 0.4336, specificity of 0.8139, positive predictive value of 0.8596, and negative predictive value of 0.3535 for the current classification relative to the revised criteria. Reproducibility assessment revealed moderate agreement between the current and revised classifications (κ = 0.5385, p = 0.0021).
Discussion
Because the traditional WHO classification requires four simultaneous criteria to classify cases as DHF, the absence of one or more criteria may lead to misclassification of severe cases, an issue reported by several investigators.
The traditional classification has been challenged on the grounds that its complexity can lead to erroneous diagnosis by emphasizing the occurrence of hemorrhage rather than assigning greater weight to plasma leakage [8].
A retrospective study of severe dengue cases conducted in the city of Rio de Janeiro found a trend toward belated hospitalization of these patients, combined with too early discharge in some cases, demonstrating that healthcare workers can fail to identify warning signs that should warrant longer hospitalization. Ensuring timely, effective clinical management of dengue patients remains a challenge, despite the need to promptly identify cases that can progress to severe disease or death [9].
A study applying the revised classification to pediatric patients in Indonesia concluded that this set of criteria can better detect severe cases [10]. Another study comparing the traditional and revised classifications concluded that a high percentage of dengue cases presenting with circulatory failure was not correctly identified by the current WHO criteria. These findings show that the current categorization is less accurate in identifying severe disease [11].
In a study conducted in 18 countries [5], cases regarded as DF were re-evaluated in the light of the revised classification, revealing a majority of patients presenting warning signs (51.9%) and some with severe disease (5.7%), conditions that require entirely different clinical management; similar results were found in the present investigation. The study concluded that the revised classification is more sensitive in identifying severe dengue and is simpler to apply in the clinical setting, with a high potential to facilitate the clinical management of dengue cases. Alexander et al. (2011), in a multicenter study conducted in four Latin American and Southeast Asian countries, demonstrated that abdominal pain, mucosal bleeding, and decreased platelet counts are associated with a significantly higher risk of severe illness.
The presence of abdominal pain, a frequent complaint preceding the onset of shock, has been unequivocally correlated with an increased risk of progression to severe disease [3][12].
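The validity and agreement statistics reported above (sensitivity, specificity, predictive values and Cohen's kappa) can all be derived from a 2x2 table of severe versus non-severe categorization under the two classifications. The Python sketch below shows the arithmetic; the counts are placeholders, not the study data.

```python
# Illustrative sketch: diagnostic validity and Cohen's kappa from a 2x2 table.
def diagnostic_validity(tp, fp, fn, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sens, spec, ppv, npv

def cohens_kappa(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    p_observed = (tp + tn) / n                      # raw agreement
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)       # chance agreement on "severe"
    p_no = ((fn + tn) / n) * ((fp + tn) / n)        # chance agreement on "non-severe"
    p_expected = p_yes + p_no
    return (p_observed - p_expected) / (1 - p_expected)

# Placeholder counts only (rows: current WHO severe yes/no, columns: revised severe yes/no):
tp, fp, fn, tn = 49, 8, 64, 35
print(diagnostic_validity(tp, fp, fn, tn))
print(cohens_kappa(tp, fp, fn, tn))
```

With the study's actual cross-tabulation, the same two functions would return the figures quoted in the text (sensitivity 0.4336, specificity 0.8139, kappa 0.5385).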
An investigation conducted in India, similar in design to the present study, compared both WHO classifications.The traditional classification was found to categorize the majority of cases (75%) as DF.One case remained uncategorized.In contrast, the revised system revealed a predominance (82.1%) of severe dengue cases, in addition to severe manifestations in cases classified as DF.The revised classification was found to be highly sensitive in identifying severe cases, and simple to apply [13].Based on clinical severity, it takes clinical signs and symptoms as a gold standard, while the current classification is primarily based on laboratory criteria to achieve diagnosis and establish treatment. Early identification, monitoring of severe forms of the disease, and implementation of appropriate intravenous therapy are measures known to reduce mortality rates [12]. In the present study, the revised WHO (2009) classification was found to be less complex than the set of criteria current employed, facilitating the identification of dengue cases and the clinical management of patients [1]. Table 1 . Reclassification of dengue cases according to the revised WHO (2009) criteria, distributed by discharge diagnosis based on current WHO (1997) categories (n = 156). Table 2 . Comparison of current (WHO, 1997) and revised (WHO, 2009) classifications of dengue cases, considering the presence of warning signs.
v3-fos-license
2018-04-03T02:53:22.746Z
2017-08-01T00:00:00.000
13797136
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2072-6643/9/8/889/pdf", "pdf_hash": "777d4567f7ea8ae5d3fd2b12254481c8fb50edc6", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:469", "s2fieldsofstudy": [ "Medicine", "Biology" ], "sha1": "777d4567f7ea8ae5d3fd2b12254481c8fb50edc6", "year": 2017 }
pes2o/s2orc
Attenuation of Multiple Organ Damage by Continuous Low-Dose Solvent-Free Infusions of Resveratrol after Severe Hemorrhagic Shock in Rats Therapeutic effects of continuous intravenous infusions of solvent-free low doses of resveratrol on organ injury and systemic consequences resulting from severe hemorrhagic shock in rats were studied. Hemorrhagic shock was induced by withdrawing arterial blood until a mean arterial blood pressure (MAP) of 25–30 mmHg was reached. Following a shock phase of 60 min, rats were resuscitated with the withdrawn blood plus lactated Ringer’s. Resveratrol (20 or 60 μg/kg × h) was continuously infused intravenously starting with the resuscitation phase (30 min) and continued until the end of the experiment (total treatment time 180 min). Animals of the shock control group received 0.9% NaCl solution. After the observation phase (150 min), rats were sacrificed. Resveratrol significantly stabilized the MAP and peripheral oxygen saturation after hemorrhagic shock, decreased the macroscopic injury of the small intestine, significantly attenuated the shock-induced increase in tissue myeloperoxidase activity in the small intestine, liver, kidney and lung, and diminished tissue hemorrhages (particularly in the small intestine and liver) as well as the rate of hemolysis. Already very low doses of resveratrol, continuously infused during resuscitation after severe hemorrhagic shock, can significantly improve impaired systemic parameters and attenuate multiple organ damage in rats. Introduction Traumatic hemorrhage, i.e., the rapid and hemodynamically significant loss of intravascular volume, creates great morbidity in the injured [1,2] and leads to the most frequent cause of preventable deaths after severe traumatic injury [3][4][5][6][7]. In the acute phase of hemorrhage, the main priority is to stop the bleeding as quickly as possible to prevent hemodynamic instability, tissue ischemia by decreased tissue perfusion and consecutively impaired tissue oxygenation, inflammation, and thus organ dysfunction and eventually death [1,3]. Of those patients surviving the pre-clinical phase, about 20% suffer from either multi-organ failure or sepsis [3,6,8], resulting in increased morbidity and lethality. An important therapeutic step is to restore the circulating volume. However, the optimal resuscitation strategy as well as the composition of the fluid is still a matter of controversial discussions [3]. In the present study, a model was used that is close to the current practice for treatment after severe trauma and blood loss in humans according to the Trauma Register of the Deutsche Gesellschaft für Unfallchirurgie [9][10][11]. Based on this model, we studied the therapeutic effects of resveratrol (trans-3,4 ,5-trihydroxystilbene), a naturally occurring plant antibiotic (phytoalexine). A fast-growing number of animal and human studies currently focus on the effect of the polyphenol, but the mechanism of action is still unknown, though likely pleiotropic. Among many effects, life extension, attenuation Induction of Hemorrhagic Shock and Resuscitation Regime Thirty minutes after inserting the femoral catheters hemorrhagic shock was induced by removing 2 mL blood every 3 min through the femoral artery catheter using a 2-mL syringe (Terumo, Leuven, Belgium) prefilled with 0.2 mL ACD-A solution [10,11]. Bleeding was continued until the MAP dropped to 25 to 30 mmHg; this typically took about 20 min. 
During the following 10 min, the MAP was carefully adjusted by sampling of smaller blood volumes (0.5 to 1 mL). The blood was stored in sterile plastic conical tubes at 37 • C. For the next 60 min, the MAP remained between 25 and 30 mmHg, typically without the need of any further intervention during the shock phase. In some individual cases, small amounts (0.1 to 0.5-mL aliquots) of 0.9% NaCl solution had to be administered or additional small blood samples (0.1 to 0.5-mL aliquots) to be withdrawn, to keep the MAP within the desired range. After the shock phase, animals were resuscitated within 30 min by transfusion of the withdrawn blood plus LR (equal to twice the volume of the blood loss; 37 • C) into the jugular vein using a syringe pump (Perfusor-Secura FT; B Braun, Melsungen, Germany). Resveratrol solutions were prepared as described previously [29]. The polyphenol (1.2 mg) was freshly dissolved in 100 mL of sterile 0.9% NaCl solution. An aliquot of the resveratrol solution was diluted with two volumes of sterile 0.9% NaCl solution to obtain a second (lower concentrated) resveratrol solution; the pH of both resveratrol solutions was adjusted to pH 7.35 with NaOH. Afterwards, resveratrol solutions were filtered through bacteria-tight filters (Minisart ® 0.2 µm; Sartorius, Göttingen, Germany). Animals of the Shock-resveratrol groups received an infusion of 20 or 60 µg/kg × h (cumulative dose: 60 or 180 µg/kg) at 5 mL/kg × h into the femoral vein starting with the resuscitation phase and continued until the end of the experimental time (total treatment time 180 min). A shock control group received only 0.9% NaCl solution without resveratrol; sham group rats, undergoing all procedures except that hemorrhagic shock was induced, received either pure 0.9% NaCl solution (5 mL/kg × h) or resveratrol (60 µg/kg × h). To compensate for fluid loss over surgical areas and the respiratory epithelium, 0.9% NaCl solution (5 mL/kg × h, 37 • C) was infused through the femoral vein catheter until the end of the experiment, if not replaced by the resveratrol solution in corresponding groups [10,11]. Biomonitoring Systolic, diastolic and mean arterial blood pressure (MAP) were recorded continuously via the femoral artery catheter that was connected to a pressure transducer, and displayed on a monitor. Ringer's solution was delivered at 3 mL/h to keep the catheter functional. Heart rates were determined from systolic blood pressure spikes. The core body temperature of the rats was continuously monitored using a rectal sensor. Cooling of the animals was prevented by means of an underlying heated operating table and by covering the animal with aluminum foil. The oxygen saturation was recorded continuously using a pulse oximeter (OxiCliq A; Nellcor, Boulder, CO, USA) placed at the left hind limb. The breathing rate was determined based on ventilation movements in 10-min intervals. 
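The infusion scheme described above implies a small amount of dose arithmetic, sketched in the following Python example. This is not part of the study protocol; it only restates the numbers given in the text (a 12 microgram/mL stock from 1.2 mg in 100 mL, the stock diluted with two volumes of saline for the lower dose, 5 mL/kg x h infusion rate, 180 min treatment time), and the function names are assumptions.

```python
# Illustrative sketch of the resveratrol infusion arithmetic implied by the protocol text.
def infusion_plan(body_weight_kg, dose_ug_per_kg_h, rate_ml_per_kg_h=5.0, duration_h=3.0):
    """Return pump rate (mL/h), required drug concentration (ug/mL) and cumulative dose (ug/kg)."""
    pump_rate_ml_h = rate_ml_per_kg_h * body_weight_kg
    concentration_ug_ml = dose_ug_per_kg_h / rate_ml_per_kg_h
    cumulative_dose_ug_per_kg = dose_ug_per_kg_h * duration_h
    return pump_rate_ml_h, concentration_ug_ml, cumulative_dose_ug_per_kg

# A hypothetical 0.3-kg rat on the higher dose: 1.5 mL/h of the 12 ug/mL stock, 180 ug/kg in total.
print(infusion_plan(0.3, 60.0))
# The lower dose requires 4 ug/mL, i.e. the stock diluted with two volumes of saline, 60 ug/kg in total.
print(infusion_plan(0.3, 20.0))
```

This makes explicit why a single stock plus one dilution is sufficient to deliver both dose levels at the same volumetric infusion rate.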
Figure 1. Timetable of the experimental procedures. Blood samples (0.7 mL) for blood gas analysis (BGA) and the assessment of markers of organ injury and function were taken from the femoral artery immediately after its insertion (1, T = 0 min; start of biomonitoring), before shock induction (2, T = 50 min), after the end of shock induction (3, T = 80 min; target-MAP 25-30 mmHg reached), immediately before the beginning of resuscitation (4, T = 140 min; start of autologous blood and resveratrol application), at the end of resuscitation (5, T = 170 min), and 30 (6, T = 200 min), 90 (7, T = 260 min) and 150 min (8, T = 320 min; resection of organs, death of rat) thereafter.
For each blood sampling, animals were substituted with a 0.7-mL bolus of 0.9% NaCl solution via the femoral artery (with the additional effect to keep the catheter functional). Arterial blood pH, oxygen and carbon dioxide partial pressures (pO2, pCO2), oxygen saturation (sO2), base excess (BE), hemoglobin (Hb) concentration, hematocrit (Hct), electrolytes (Na+, K+, Ca2+, Cl−) and metabolic parameters (lactate, glucose) were assessed with a blood gas analyzer (ABL 715; Radiometer, Copenhagen, Denmark). Blood plasma was obtained by centrifugation (3000× g for 15 min at 25 °C) and stored at 4 °C until its use.
The plasma activity of lactate dehydrogenase (LDH) as a general marker for cell injury, aspartate aminotransferase (AST) and alanine aminotransferase (ALT) as markers for liver cell injury, creatine kinase (CK) as a marker for muscle cell injury and the plasma Figure 1. Timetable of the experimental procedures. Blood samples (0.7 mL) for blood gas analysis (BGA) and the assessment of markers of organ injury and function were taken from the femoral artery immediately after its insertion (1, T = 0 min; start of biomonitoring), before shock induction (2, T = 50 min), after the end of shock induction (3, T = 80 min; target-MAP 25-30 mmHg reached), immediately before the beginning of resuscitation (4, T = 140 min; start of autologous blood and resveratrol application), at the end of resuscitation (5, T = 170 min), and 30 (6, T = 200 min), 90 (7, T = 260 min) and 150 min (8, T = 320 min; resection of organs, death of rat) thereafter. For each blood sampling, animals were substituted with a 0.7-mL bolus of 0.9% NaCl solution via the femoral artery (with the additional effect to keep the catheter functional). Arterial blood pH, oxygen and carbon dioxide partial pressures (pO 2 , pCO 2 ), oxygen saturation (sO 2 ), base excess (BE), hemoglobin (Hb) concentration, hematocrit (Hct), electrolytes (Na + , K + , Ca 2+, Cl − ) and metabolic parameters (lactate, glucose) were assessed with a blood gas analyzer (ABL 715; Radiometer, Copenhagen, Denmark). Blood plasma was obtained by centrifugation (3000× g for 15 min at 25 • C) and stored at 4 • C until its use. The plasma activity of lactate dehydrogenase (LDH) as a general marker for cell injury, aspartate aminotransferase (AST) and alanine aminotransferase (ALT) as markers for liver cell injury, creatine kinase (CK) as a marker for muscle cell injury and the plasma creatinine concentration as a parameter of renal function were determined with a fully automated clinical chemistry analyzer (Vitalab Selectra E; VWR International, Darmstadt, Germany). The isoenzyme MB of CK (CK-MB), Troponin I and Troponin T values were determined from frozen blood samples. Citrate-blood for measurements of the prothrombin time and international normalized ratio (INR), as well as EDTA-blood for platelet counts were sampled at the end of the experiments (T = 320 min) using 3-mL citrate-and EDTA-monovettes, respectively (Sarstedt, Nümbrecht, Germany); parameters were determined at the central laboratory of the University Hospital Essen based on clinical standards. Macroscopic Scoring of the Injury to the Small Intestine After its resection, the small intestine was immediately cut into ten pieces of equal length (9.5-10.5 cm, termed "10-cm segments" below) and rapidly transferred to Petri dishes containing cold (4 • C) buffer (140 mM NaCl, 20 mM HEPES, pH 7.4). The 10-cm segments were cut open along the mesenteric border and spanned with their luminal sides up on styrofoam plates that had been submersed in the buffer. A macroscopic damage score was used to rank the severity of injury by gross observations [29]. The portion of the area (in %) of the different macroscopic damage scores (0, 1, 3, or 9) was considered, a mean value given for each 10-cm segment, and the values of all 10-cm segments were averaged to evaluate the entire small intestine injury. 
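The area-weighted macroscopic score just described is a simple calculation, illustrated by the following Python sketch. The weighting of the damage grades (0, 1, 3, 9) and the averaging over ten 10-cm segments follow the text; the data structure and the example numbers are assumptions.

```python
# Illustrative sketch of the macroscopic small-intestine injury score described above.
def segment_score(area_fractions):
    """area_fractions: dict mapping damage grade (0, 1, 3, 9) to the fraction of the
    segment area showing that grade (fractions should sum to 1).
    Returns the area-weighted mean grade of the segment."""
    total = sum(area_fractions.values())
    return sum(grade * frac for grade, frac in area_fractions.items()) / total

def intestine_score(segments):
    """Average the per-segment scores over all 10-cm segments of one animal."""
    return sum(segment_score(s) for s in segments) / len(segments)

# Usage: two example segments (the real evaluation uses ten per animal).
segments = [
    {0: 0.70, 1: 0.20, 3: 0.10, 9: 0.00},   # mostly intact tissue
    {0: 0.40, 1: 0.30, 3: 0.20, 9: 0.10},   # patchy injury with some severe areas
]
print(intestine_score(segments))
```

Because the grades are weighted by the affected area, a small but severely damaged patch raises the score more than a large mildly affected one, which matches the intent of the grading scheme.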
Tissue Processing for Assays Based on Tissue Homogenates After the determination of the macroscopic score, each 10-cm segment of the small intestine was cut in the middle and the resulting 20 specimens were dissected with scissors in safe-lock tubes (2.0 mL; Eppendorf, Hamburg, Germany) containing 1 mL cold (4 • C) homogenization buffer on ice (140 mM NaCl, 20 mM HEPES, 1 tablet protease inhibitor mixture/50 mL, pH 7.4). Of the lung, liver and kidney, small pieces were transferred into safe-lock tubes with homogenization buffer as well. The tubes (with steel grinding balls) were placed into a mixer mill (model MM200; Retsch, Haan, Germany) and the specimens homogenized (15 min, 30 oscillations/s). The total volume of the small intestine homogenate was documented for each animal, and homogenates of all organs were weighted and then centrifuged (16,000× g for 15 min, 4 • C). The resulting supernatants were placed on ice and immediately used for the assays below. Determination of Tissue and Free Plasma Hb The tissue Hb content of the small intestine, lung, liver and kidney was determined from the absorption of the Hb Soret band within the homogenate supernatants and served as a marker for tissue hemorrhages. The absorption maximum between 400 and 420 nm was determined in 1 mL of the homogenate supernatant, with homogenization buffer serving as a blank. Values were corrected for unspecific absorption/turbidity at 475 nm. The Hb content of the organs was calculated in duplicate based on the molar extinction coefficient of Hb at its Soret band maximum (ε = 131,000 M −1 cm −1 ) and expressed in µmol Hb/kg body weight (small intestine) and in µmol/L pro g tissue weight (other organs). Free plasma Hb, a measure of hemolysis, was also determined from the absorption of the Hb Soret band, with 0.9% NaCl solution serving as a blank [31,32]. Plasma (100 µL) from the final blood sample at the end of the experiment was diluted with 900 µL of 0.9% NaCl solution and the Hb concentration in µmol/L was determined as outlined above. Determination of Tissue Myeloperoxidase (MPO) Activity As a measure of neutrophils, the activity of myeloperoxidase (MPO) within the organ homogenate supernatant was determined from the H 2 O 2 -dependent oxidation of O-dianisidine [33]. Briefly, the reaction buffer was freshly prepared (315 µM O-dianisidine and 147 µM H 2 O 2 in 50 mM KH 2 PO 4 /K 2 HPO 4 buffer, pH 6.0, 25 • C) and MPO activity determined in duplicate from the colored product formation at 460 nm using a clinical chemistry analyzer (Vitalab Selectra E; VWR International, Darmstadt, Germany). Activities in the small intestine tissue were expressed in U/kg body weight and those in the other organs as U/L pro g tissue weight. Statistics Experiments were performed with eight animals per experimental group. Data are expressed as mean values ± SEM. Comparisons among multiple groups were performed using one-way analysis of variance (ANOVA) either for nonrecurring or for repeated measures followed by Fisher (LSD) post-hoc analysis. A p-value < 0.05 was considered significant. Effects of Resveratrol on Blood Pressure and other Systemic Parameters In both sham groups, Sham-NaCl and Sham-R60, the MAP remained stable around 100 mmHg during the whole experimental period ( Figure 2). Compared to sham rats receiving pure saline, rats infused with resveratrol (60 µg/kg × h) maintained a slightly higher MAP about one hour after application was started. 
In rats of the shock groups, the MAP was decreased to 25-30 mmHg (Shock-NaCl: 27.5 ± 0.4 mmHg; Shock-R20: 28.0 ± 0.5 mmHg; Shock-R60: 28.3 ± 0.4 mmHg) during shock induction (T = 50-80 min) and remained within this range (Shock-NaCl: 26.2 ± 0.2 mmHg; Shock-R20: 26.4 ± 0.2 mmHg; Shock-R60: 25.7 ± 0.3 mmHg) during the shock phase (T = 80-140 min). Upon resuscitation (T = 140-170 min) with the withdrawn blood plus LR, the MAP in the Shock-NaCl group recovered to 89.0 ± 2.9 mmHg (T = 170 min) and slowly decreased during the following observation period (T = 170-320 min). Animals infused with resveratrol (20 or 60 µg/kg × h), starting with the resuscitation phase, maintained a significantly higher MAP during the post-resuscitation period without significant differences between the two groups.
Figure 2. Effect of continuous intravenous infusions of solvent-free low-dose resveratrol on mean arterial blood pressure (MAP) subsequent to severe hemorrhagic shock in rats.
Rats underwent severe hemorrhage (30 min shock induction, I; 60 min shock phase, S), then were resuscitated (within 30 min, R) with the withdrawn blood and lactated Ringer's equal to twice the volume of the withdrawn blood in the absence (Shock-NaCl) or presence of resveratrol (20 or 60 µg/kg × h until the end of the experiment; Shock-R20, Shock-R60) and observed for a further 150 min. Sham animals received either pure 0.9 % NaCl solution (Sham-NaCl) or the higher resveratrol dose (Sham-R60). Mean arterial blood pressure (MAP) was measured in the right femoral artery every 10 minutes. Shown are mean values ± SEM (n = 8 animals per group). SEM values not visible are hidden by the symbols. * p < 0.05 (vs. Shock-NaCl). # p < 0.05 (vs. Sham-NaCl; entire period from the resuscitation). The arrow indicates the start of the resveratrol infusions. The heart rate in the Sham-NaCl and Sham-R60 group remained stable at around 340 beats per minute (bpm) throughout the whole experiment, without a significant difference between both groups. In all shock animals, heart rate decreased significantly to about 230 bpm (Shock-NaCl: 233 ± 8 bpm; Shock-R20: 220 ± 7 bpm; Shock-R60: 242 ± 5 bpm) at the end of the shock induction (T = 80 min) and recovered continuously during the shock period and the resuscitation phase. In the Shock-NaCl group, the heart rate was slightly but significantly higher at the end of the experimental time compared to Sham-NaCl animals (Shock-NaCl: 386 ± 5 bpm; Sham-NaCl: 342 ± 8 bpm; Sham-R60: 354 ± 7 bpm). Compared to shock group animals receiving pure saline, the infusion of resveratrol (20 or 60 µg/kg × h) did not significantly affect the heart rate. The breathing rates of rats from both sham groups were fairly stable between 50 and 60 breaths per min and did not differ during the experiment. During the shock phase, the breathing rate in all shock groups increased continuously and compared to sham group rats, was significantly higher from T = 210 min (40 min after the resuscitation phase) until the end of the experiment. There was no significant difference regarding the course of breathing rates among rats of the Shock-NaCl, Shock-R20 and Shock-R60 groups. The rectal temperature of the Sham-NaCl and the Sham-R60 group rats slightly increased by 0.5 • C but remained otherwise stable around 37 • C throughout the whole experiment without significant differences between both groups. During shock induction and the following shock phase, the rectal temperature of all shock group animals significantly decreased by almost 1 • C (T = 140 min: Shock-NaCl: 36.1 ± 0.1 • C; Shock-R20: 36.2 ± 0.1 • C; Shock-R60: 36.2 ± 0.2 • C), but regained the values of both sham groups around 50 minutes after resuscitation. There were no significant differences in rectal temperature among all groups at the end of the experiment. The peripheral oxygen saturation, as measured via pulse oximetry at the rats left hind limb, was around 99% in both sham groups and did not differ during the entire experiment. In all shock groups, peripheral oxygen saturation was not undetectable after shock induction. During the resuscitation phase, values (≥97%) reappeared, but from 50 min after the resuscitation period (T = 220 min) until the end of the experiment, peripheral oxygen saturation was again not detectable in 67% of the animals of the Shock-NaCl group. 
In animals of the Shock-R20 and Shock-R60 groups, peripheral oxygen saturation, however, was detectable in 80.7% and 69.3%, respectively, of the animals during the same period. Effect of Resveratrol on Parameters of the Acid-Base Status and on pO 2 Arterial blood pH and BE in rats of the two sham groups hardly changed during the experiment and were in the physiological range at the end of the observation time without a significant difference among both groups ( Table 1). The pCO 2 continuously dropped during the experiment from 58.1 ± 1.8 mmHg (Sham-NaCl) and 52.9 ± 0.6 mmHg (Sham-R60) to 43.8 ± 2.4 mmHg (Sham-NaCl) and 42.3 ± 2.7 mmHg (Sham-R60) without a significant difference between both groups. Similarly, the pO 2 constantly dropped during the first half of the experimental period but increased again in the second half and at the end of the experiment was not significantly different compared to the initial values. In the Shock-NaCl group, the pH dropped down to 7.15 ± 0.01 during shock induction and the shock phase, remained at that level upon resuscitation and slightly increased during the following observation period, thereby reaching a final value that was not significantly different from that of the Sham-NaCl group ( Table 1). The pCO 2 significantly dropped upon both shock induction and the shock phase, rapidly increased during the resuscitation phase, but decreased again in the following observation phase; pCO 2 was significantly lower in the final blood sample compared to sham group rats. The pO 2 in the Shock-NaCl group increased during the experiment but was not different from animals of both sham groups at the end of the experiment. The BE rapidly dropped significantly upon shock induction and reached its lowest value at the end of the shock phase (−14.3 ± 0.9 mmol/L), but then increased to −9.4 mmol/L at the end of the observation phase, indicating (in line with the altered pH, pCO 2 , breathing rate and pO 2 ) a metabolic acidosis that was insufficiently respiratory compensated. Compared to the values of the Shock-NaCl group, both resveratrol infusions had no significant effect on the alterations in pH, pCO 2 , BE and pO 2 ( Table 1). In contrast to its effects on pulse oximetry (see above), resveratrol did not alter central arterial oxygen saturation as measured via blood gas analysis (Table 1). Table 1. Effect of continuous intravenous infusions of solvent-free low-dose resveratrol on parameters of blood gas analysis and coagulation subsequent to severe hemorrhagic shock in rats. Rats underwent severe hemorrhage (30 min shock induction, 60 min shock phase), then were resuscitated within 30 min with the withdrawn blood and lactated Ringer's equal to twice the volume of the withdrawn blood (30 min) in the absence (Shock-NaCl) or presence of resveratrol (20 or 60 µg/kg × h until the end of the experiment; Shock-R20, Shock-R60) and observed for a further 150 min. Sham animals received either pure 0.9% NaCl solution (Sham-NaCl) or the higher resveratrol dose (Sham-R60). Parameters of blood gas analysis (pH; oxygen and carbon dioxide partial pressure, pO 2 , pCO 2 ; base excess, BE; hemoglobin, Hb; hematocrit, Hct; oxygen saturation, sO 2 ), electrolytes (K + , Na + , Ca 2+ , Cl − ) and of coagulation (prothrombin time; international normalized ratio, INR; platelets count) were determined from final arterial blood samples as obtained immediately before rats were sacrificed. Shown are mean values ± SEM (n = 8 animals per group). * p < 0.05 (vs. Shock-NaCl). 
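The group comparisons reported here and in Table 1 rest on the statistics named earlier: a one-way ANOVA over the experimental groups followed by Fisher's LSD post-hoc tests, with p < 0.05 taken as significant and n = 8 animals per group. The following Python sketch is not the authors' analysis code; it assumes SciPy is available and uses placeholder values to show the structure of such an analysis.

```python
# Illustrative sketch: one-way ANOVA plus Fisher's LSD post-hoc comparisons.
import numpy as np
from scipy import stats

def fisher_lsd(groups, labels):
    """Pairwise Fisher LSD tests using the pooled within-group mean square error."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    df_within = n_total - k
    mse = sum((len(g) - 1) * np.var(g, ddof=1) for g in groups) / df_within
    results = []
    for i in range(k):
        for j in range(i + 1, k):
            diff = np.mean(groups[i]) - np.mean(groups[j])
            se = np.sqrt(mse * (1 / len(groups[i]) + 1 / len(groups[j])))
            t = diff / se
            p = 2 * stats.t.sf(abs(t), df_within)
            results.append((labels[i], labels[j], diff, p))
    return results

# Placeholder data: n = 8 animals per group, values loosely patterned on MAP in mmHg.
rng = np.random.default_rng(0)
sham = rng.normal(100, 5, 8)
shock = rng.normal(75, 8, 8)
shock_r60 = rng.normal(88, 7, 8)
groups, labels = [sham, shock, shock_r60], ["Sham-NaCl", "Shock-NaCl", "Shock-R60"]
print(stats.f_oneway(*groups))                 # overall ANOVA
for a, b, diff, p in fisher_lsd(groups, labels):
    print(f"{a} vs {b}: diff = {diff:.1f}, p = {p:.4f}")
```

Applied to each parameter and time point in turn, this is the kind of procedure that yields the significance markers (* and #) used in the tables and figures.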
Effect of Resveratrol on Blood Hemoglobin and Hematocrit

In the Sham-NaCl and the Sham-R60 group, basal values of blood Hb concentration and Hct, as determined from the first arterial blood sample, were around 13.5 g/dL and 41%, respectively. The values slightly decreased toward the end of the experiment, without a difference between the two groups, due to the withdrawal of blood samples (Table 1). Arterial oxygen saturation was stable within the physiological range (≥97%) at all times, without differences between the two groups. In rats of the Shock-NaCl group, the Hb and Hct significantly dropped during shock induction and remained at these values (8.5 ± 0.2 g/dL; 26.5 ± 0.6%) until the end of the shock period. During the resuscitation and the following observation period, the Hb and Hct significantly increased again, reaching values at the end of the experiment that were similar to the values before shock induction and slightly higher than those in the sham groups (Table 1). In rats infused with 20 or 60 µg resveratrol/kg × h, this course of both the Hb and Hct remained unaffected by the polyphenol (Table 1).

Effect of Resveratrol on Plasma Electrolyte Concentrations

In both sham groups, plasma concentrations of potassium (K+), sodium (Na+), calcium (Ca2+) and chloride (Cl−) remained in the physiological range during the experiment and did not differ significantly (Table 1). Hemorrhagic shock resulted in a significant increase in K+ and Cl−, which remained elevated until the end of the experiment (Table 1). Calcium concentrations were significantly decreased after resuscitation but regained pre-shock values at the end of the experimental period, without a significant difference compared to both sham groups. The blood Na+ concentration was not significantly affected by shock or resuscitation. Neither the course of these shock-induced alterations in plasma electrolytes nor the final values were affected by any resveratrol dose (Table 1).

Effect of Resveratrol on Prothrombin Time, INR and Thrombocyte Count

Prothrombin time, INR and platelet count, as determined at the end of the experiments, were not significantly altered by resveratrol in sham animals (Table 1). In rats of the Shock-NaCl group, prothrombin time and the number of platelets were significantly decreased, and the INR increased. In shock animals receiving resveratrol, these changes were attenuated, but significantly so only for the thrombocyte count and INR in the Shock-R20 group (Table 1).

Effect of Resveratrol on Tissue Parameters of Organ Injury

In animals of both the Sham-NaCl and the Sham-R60 group, the macroscopic injury score of the small intestine was close to zero, and the MPO activity as well as the tissue Hb content of the small intestine, liver, lung and kidney were low (Figure 3). There were no significant differences in any of these parameters between animals of the two groups. Hemorrhagic shock and resuscitation resulted in a significant increase in the macroscopic score of the small intestine as well as in the tissue MPO activity and Hb content of all organs from the Shock-NaCl group animals, indicating invasion of neutrophils and tissue hemorrhage. In rats of the Shock-R20 and Shock-R60 groups, most of these shock-induced alterations were significantly diminished in the small intestine, liver and lung (Figure 3A1-A3,B1,B2,C1,C2). In the kidney, shock-induced alterations in tissue MPO activity and Hb concentration were not significantly diminished by any resveratrol dose (Figure 3).
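Throughout these results, groups are summarized as mean ± SEM (n = 8) with significance flagged against the Shock-NaCl group. The sketch below shows one plausible way such a summary and pairwise comparison could be computed; the MPO values are invented placeholders, and the use of Welch's t-test is an assumption, since the exact test applied in the study is not stated in this excerpt.

```python
import numpy as np
from scipy import stats

# Hypothetical tissue MPO activities (arbitrary units), n = 8 animals per group.
# These numbers are placeholders and are not data from the study.
groups = {
    "Shock-NaCl": np.array([5.1, 4.8, 5.6, 5.9, 4.7, 5.3, 6.0, 5.5]),
    "Shock-R20":  np.array([3.2, 2.9, 3.8, 3.5, 3.1, 3.6, 2.8, 3.4]),
    "Shock-R60":  np.array([3.0, 3.3, 2.7, 3.1, 3.5, 2.9, 3.2, 3.0]),
}

reference = groups["Shock-NaCl"]
for name, values in groups.items():
    mean = values.mean()
    sem = values.std(ddof=1) / np.sqrt(len(values))  # SEM = SD / sqrt(n)
    if name == "Shock-NaCl":
        print(f"{name}: {mean:.2f} ± {sem:.2f}")
        continue
    # Pairwise comparison against the Shock-NaCl group (Welch's t-test assumed).
    t_stat, p_value = stats.ttest_ind(values, reference, equal_var=False)
    flag = "*" if p_value < 0.05 else ""
    print(f"{name}: {mean:.2f} ± {sem:.2f}, p = {p_value:.4f} {flag}")
```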
Effect of Resveratrol on Plasma Parameters of Organ Injury

In sham group rats, plasma activities of AST, ALT (Figure 4), LDH and CK (Figure 5), as well as creatinine concentration, did not change significantly throughout the experiment and were not affected by resveratrol. In the Shock-NaCl group, all these parameters increased strongly and significantly during the resuscitation period and thereafter until the end of the experiment (final plasma value for creatinine: 1.38 ± 0.11 mg/dL). These alterations in AST and ALT activity were decreased, but only significantly so for AST activity, by both resveratrol doses (Figure 4). The shock-induced increases in LDH and CK activity (Figure 5) as well as the creatinine concentration were not significantly affected by the polyphenol. In sham group animals, resveratrol administration did not have any effect on CK-MB as determined from final plasma samples at the end of the experiment (Figure 6). Shock resulted in a significant increase in CK-MB, which was significantly diminished by resveratrol without a significant difference between both doses (Figure 6).

Figure 3. Effect of continuous intravenous infusions of solvent-free low-dose resveratrol on tissue parameters of organ injury subsequent to severe hemorrhagic shock in rats. Rats underwent severe hemorrhage (30 min shock induction, 60 min shock phase), then were resuscitated within 30 min with the withdrawn blood and lactated Ringer's equal to twice the volume of the withdrawn blood in the absence (Shock-NaCl) or presence of resveratrol (20 or 60 µg/kg × h until the end of the experiment; Shock-R20, Shock-R60) and observed for a further 150 min. Sham animals received either pure 0.9% NaCl solution (Sham-NaCl) or the higher resveratrol dose (Sham-R60). The macroscopic injury score of the small intestine (A1), as well as tissue myeloperoxidase (MPO) activity and hemoglobin (Hb) content of the small intestine (A2,A3), the left lobe of the liver (B1,B2), the left lung lobe (C1,C2) and the left kidney (D1,D2) were determined instantly after the animals were sacrificed at the end of the experiment. Shown are mean values ± SEM (n = 8 animals per group). * p < 0.05 (vs. Shock-NaCl).

The concentration of free plasma Hb, measured to determine the rate of hemolysis in the final blood sample, was low and not different in both sham groups (Figure 7). Hemorrhagic shock induced a more than 4-fold increase in free plasma Hb (Shock-NaCl group), which was significantly diminished by both resveratrol doses with a nearly identical efficacy (Figure 7).
Figure 6. Effect of continuous intravenous infusions of solvent-free low-dose resveratrol on the plasma concentration of the muscle-brain type creatine kinase (CK-MB) subsequent to severe hemorrhagic shock in rats. Rats underwent severe hemorrhage (30 min shock induction, 60 min shock phase), then were resuscitated within 30 min with the withdrawn blood and lactated Ringer's equal to twice the volume of the withdrawn blood in the absence (Shock-NaCl) or presence of resveratrol (20 or 60 µg/kg × h until the end of the experiment; Shock-R20, Shock-R60) and observed for a further 150 min. Sham animals received either pure 0.9% NaCl solution (Sham-NaCl) or the higher resveratrol dose (Sham-R60). The concentration of isoenzyme MB of creatine kinase (CK-MB) was determined in plasma from final blood samples obtained immediately before rats were sacrificed. Shown are mean values ± SEM (n = 8 animals per group). * p < 0.05 (vs. Shock-NaCl).
Figure 7. Effect of continuous intravenous infusions of solvent-free low-dose resveratrol on the free plasma hemoglobin concentration (hemolysis) subsequent to severe hemorrhagic shock in rats. Rats underwent severe hemorrhage (30 min shock induction, 60 min shock phase), then were resuscitated within 30 min with the withdrawn blood and lactated Ringer's equal to twice the volume of the withdrawn blood in the absence (Shock-NaCl) or presence of resveratrol (20 or 60 µg/kg × h until the end of the experiment; Shock-R20, Shock-R60) and observed for a further 150 min. Sham animals received either pure 0.9% NaCl solution (Sham-NaCl) or the higher resveratrol dose (Sham-R60). The free hemoglobin (Hb) concentration was determined in plasma from final blood samples as obtained immediately before rats were sacrificed. Shown are mean values ± SEM (n = 8 animals per group). * p < 0.05 (vs. Shock-NaCl).

Effect of Resveratrol on Plasma Glucose and Lactate Concentrations

Plasma glucose and lactate concentrations in both sham groups remained fairly constant throughout the experiment and were independent of the presence of resveratrol (Figure 8). In the Shock-NaCl group, glucose concentration strongly increased during shock induction, then dropped in the further course of the experiment, and finally reached a significantly lower value than in sham group animals (Figure 8A). Also, the plasma lactate concentration significantly increased upon induction of hemorrhagic shock, reaching its highest value at the end of the shock phase before it declined to a concentration only slightly but still significantly higher than the one observed in sham group rats (Figure 8B). No dose of resveratrol resulted in a significant effect on the course of shock- and resuscitation-induced alterations in glucose and lactate concentrations.
Figure 8. Effect of continuous intravenous infusions of solvent-free low-dose resveratrol on blood glucose and lactate concentrations subsequent to severe hemorrhagic shock in rats. Rats underwent severe hemorrhage (30 min shock induction, I; 60 min shock phase, S), then were resuscitated (R, within 30 min) with the withdrawn blood and LR equal to twice the volume of the withdrawn blood (30 min) in the absence (Shock-NaCl) or presence of resveratrol (20 or 60 µg/kg × h until the end of the experiment; Shock-R20, Shock-R60) and observed for a further 150 min. Sham animals received either pure 0.9% NaCl solution (Sham-NaCl) or the higher resveratrol dose (Sham-R60). Blood glucose (A) and lactate (B) concentrations were determined in plasma from arterial blood samples as obtained at the time points indicated. Shown are mean values ± SEM (n = 8 animals per group). * p < 0.05 (vs. Shock-NaCl). The arrow indicates the start of the resveratrol infusions.

Discussion

Our results indicate that very low doses of resveratrol, continuously infused during resuscitation after severe hemorrhagic shock, can significantly improve impaired systemic parameters and attenuate multiple organ damage in an exacerbated rat model that is close to the clinical situation. To dissolve more than 3 mg resveratrol in 100 mL aqueous solution, solvent vehicles are needed. So far, in nearly all animal studies examining the efficacy of resveratrol in preventing detrimental effects of hemorrhagic shock, dimethyl sulfoxide (DMSO) was added as a solubilizer [13-17,19-21,23-26,34,35]. DMSO itself has been reported to have anti-inflammatory, anti-thrombotic, anti-proliferative as well as free radical scavenging properties in different disorders, shown in various animal and human studies [27,36,37]. Of course, these vehicle actions can interfere with effects originating from the applied polyphenol.
In the present study, resveratrol was continuously infused over 30 min following hemorrhagic shock, instead of a single bolus administration as performed in all the corresponding studies. The lower resveratrol concentration (1.2 mg resveratrol in 100 mL 0.9% NaCl) allowed us to avoid co-administration of the biologically active solvent DMSO.

Putative Mechanisms of Protection: Systemic Parameters

The drop in systemic blood pressure during hemorrhagic shock is registered by baroreceptors located mainly in the carotid bulb and results in an activation of the sympathetic nervous system to maintain the perfusion pressure of the most vital organs. Pre- and post-capillary constriction in organs mostly expressing alpha receptors, i.e., in so-called shock organs (kidney, muscle, intestine, liver), follows in favor of the perfusion of brain and heart, which mostly express beta receptors. In our experiments, shock was induced by reducing the mean arterial blood pressure to 25-30 mmHg, which usually leads to a centralization of the remaining circulating blood by increasing the total peripheral resistance (TPR) through sympathomimetic effects to maintain the cardiac output (Q). In this context, the cardio- and vascular-protective potential of resveratrol is known to stabilize the cardiac output [20,23-25,38]. In line with this, in the present study, the MAP after resuscitation was stabilized in animals from the Shock-R20 and Shock-R60 groups (Figure 2). Hemorrhagic shock causes mitochondrial damage and thus decreases ATP synthesis. Decreased ATP levels result in activation of ATP-sensitive potassium channels (KATP), eventually causing hyperpolarization and thus inhibition of L-type calcium channels. This leads to a reduced influx of Ca2+ and attenuates contraction of the ASMC. The consequence is hypotension, persisting even after reperfusion. Resveratrol improves mitochondrial function and preserves contractility of ASMC [20,38]. Thus, both mechanisms, the stabilization of the cardiac output and the preserved contractility of the ASMC, may be responsible for the higher MAP of those animals receiving resveratrol.

Putative Mechanisms of Organ Protection

Hemorrhagic shock caused a significant increase in the macroscopic score of the small intestine as well as in tissue MPO activity and tissue Hb content of all organs from the Shock-NaCl group animals (Figure 3), indicating invasion of neutrophils and tissue hemorrhage. Myeloperoxidase (MPO) is a mammalian peroxidase that is found in neutrophils and, to a lesser extent, in monocytes. Its primary function is to kill microorganisms by forming highly reactive chloride-derived oxidants. The MPO-H2O2-chloride system produces hypochlorous acid (HClO) from hydrogen peroxide (H2O2) and the chloride anion (Cl−) during the respiratory burst. Unfortunately, release of this oxidant to the outside of the cell can damage normal tissue and aggravate injury in organs [39]. Comparing the effect of resveratrol on distinct shock organs, i.e., small intestine, liver, kidney and lung following hemorrhagic shock, the small intestine and liver were found to be protected best; both MPO and tissue Hb were significantly lower in the presence of both polyphenol doses (Figure 3). In previous studies, we already demonstrated that infusions of solvent-free low doses of resveratrol protect the small intestine against ischemia/reperfusion injury in a model of severe intestinal injury in rats [29].
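For clarity, the reaction behind the MPO-H2O2-chloride system described above can be written out explicitly; this is standard biochemistry background rather than a result of the study:

$$\mathrm{H_2O_2 + Cl^- + H^+ \;\xrightarrow{\;MPO\;}\; HOCl + H_2O}$$

The hypochlorous acid formed in this way is the chloride-derived oxidant whose release outside the phagocyte is thought to damage normal tissue and aggravate organ injury.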
Trauma-hemorrhage is known to increase the expression of pro-inflammatory mediators, such as the early mediator interleukin 6 (IL-6), which increases the expression of other cytokines, chemokines and adhesion molecules. Yu et al. found an increased activity of the cytokine-induced neutrophil chemoattractants 1 and 3 (CINC-1 and CINC-3) after hemorrhagic shock, which are important chemotactic factors for neutrophils, and of the intercellular adhesion molecule 1 (ICAM-1), known to be an important adhesion molecule for neutrophils to leave the bloodstream and invade tissues [13]. CINC-1, CINC-3 and ICAM-1, as well as the cytokine tumor necrosis factor alpha (TNF-α), were significantly decreased by resveratrol (30 mg/kg), probably via an estrogen receptor-dependent up-regulation of heme oxygenase-1 (HO-1) [13,15,21]. Our results indicate that even low doses of the polyphenol (0.06 or 0.18 mg/kg), continuously infused intravenously, diminish liver and intestinal injury by attenuating neutrophil invasion and thus tissue hemorrhage, as indicated by the decreased tissue MPO activity and tissue Hb concentrations, the macroscopic score of the intestine (Figure 3) as well as markers of hepatic injury (Figure 4).

Resveratrol has already been shown to reduce acute lung injury in different models [40-42]. In our present study, resveratrol attenuated shock-induced lung damage (Figure 3C), likely via a pathway similar to the one protecting the liver and small intestine [34], though the injury of these organs was much more evident. The daily resveratrol dose without any observable adverse effects in rats has been reported to be as high as 300 mg/kg, and the kidney was found to be the major target of organ toxicity in animals treated with 3 g/kg [43]. Hemorrhagic shock results in reduced organ perfusion, including the kidney, and eventually leads to acute renal injury, which is first due to the low blood volume, low blood pressure and thus low perfusion pressure, and is exacerbated by intrarenal damage, e.g., due to inflammatory processes. Resveratrol has been shown to have salutary effects on the kidney following trauma hemorrhage at higher concentrations [17,19]. In the present study, the MPO activity was non-significantly diminished by the higher resveratrol dose (Figure 3D1), and the tissue Hb content was not affected by any resveratrol dose (Figure 3D2). MPO, a highly cationic protein, is known to bind to the negatively charged glomerular basement membrane through ionic bonds [39]. Shock and reperfusion prime inflammatory cells for increased responsiveness. The oxidative stress released by inflammatory cells, including H2O2, rises and, together with the now-bound MPO enzyme, may additionally damage the kidney.

Resveratrol is reported to enhance the expression and activity of the endothelial nitric oxide synthase (eNOS) [44-46]. Stimulated eNOS activity increases endothelial nitric oxide (NO) by oxidation of the amino acid L-arginine to citrulline and NO. An increased NO concentration results in relaxation of smooth muscle cells, reduction of TPR and thus improved organ perfusion. In our experiments, the polyphenol protected the heart, as indicated by a significant reduction of the MB fraction of CK (Figure 6), and hence contributed to a better cardiac output with further stabilization of the MAP. Price et al. have described dose-dependent effects of resveratrol on mitochondrial function via a PGC-1α-dependent pathway [47].
Recent studies indicate that the activation of SIRT1 by low resveratrol doses plays an important role in this pathway, explaining pleiotropic effects of resveratrol on different organs in hypoxia, ischemia and reperfusion [48]. Our data support the idea that very low doses of resveratrol ameliorate mitochondrial function via a SIRT1-dependent pathway; however, other pathways outlined above and below may contribute to the protective effects observed.

Role of Acid-Base and Metabolic State in Organ Protection

The shock-induced alterations in pH, pCO2 and BE, in line with the increased breathing rate in shock group animals, indicated a respiratory-compensated metabolic acidosis that was not affected by any resveratrol dose. The fairly equal blood electrolyte, glucose and lactate concentrations in shock group animals treated or untreated with resveratrol likewise cannot explain the protective effects of the polyphenol on impaired systemic parameters and parameters of organ injury. The blood Hb concentration and consequently the Hct in shock group animals significantly increased starting with the resuscitation phase, reaching values at the end of the experiment that were significantly higher than those in Sham-NaCl group rats. These alterations increase blood viscosity, thus decrease blood flow velocity, and are likely to be the result of a fluid shift into the interstitial space causing edema. Blood Hb and Hct remained slightly but not significantly lower in rats infused with 60 µg resveratrol/kg × h (Table 1). Tissue edema can additionally impair the outcome by, e.g., increasing the diffusion barrier of the lung.

Coagulation

Prothrombin time and INR are measures of the extrinsic pathway of coagulation. Tissue factor (TF) is a membrane-bound protein and, together with factor VII, the key trigger of the extrinsic pathway of coagulation. TF resides on the surface of sub-endothelial cells that are usually not in contact with the circulating blood [49]. Injury or inflammatory mediators, which expose TF to the circulating blood, result in aggregation of thrombocytes and potentially vascular occlusion [50]. Activation of Sirtuin 1 (SIRT1) impairs TF protein expression and activity by decreasing NFκB/p65 activation [51]. Inflammatory mediators such as tumor necrosis factor alpha (TNF-α) are known to increase TF expression and activity in monocytes, macrophages, endothelial cells and vascular smooth muscle cells [52]. Resveratrol is assumed to activate SIRT1 [45] and eventually mitochondrial superoxide dismutase (MnSOD) [53] and thus has an important role in antioxidant defense. SIRT1 also deacetylates eNOS, which diminishes TF activity even further, as outlined above. As resveratrol is known to increase eNOS activity [45,46] and to decrease inflammatory mediators such as TNF-α, it may diminish coagulation, thus improving perfusion, which eventually protects from organ injury. In our experiments, shock resulted in a significantly decreased prothrombin time, an increased INR and a decreased platelet count in animals from the Shock-NaCl group, indicating a worsened coagulation state with an increased risk of bleeding (Table 1). In line with the considerations outlined above, resveratrol significantly attenuated these changes in INR and platelet count in the Shock-R20 group. Calcium concentrations were significantly decreased after resuscitation due to the citrate contained in ACD solution A, which was used to achieve anticoagulation of the withdrawn blood.
One advantage of citrate as an anticoagulant is the lack of systemic anticoagulation, which is known to result from anticoagulants like heparin, with the risk of further hemorrhage especially under circumstances with an impaired coagulation state [54]. The considerably increased free plasma Hb concentration (4-fold), indicating hemolysis after hemorrhagic shock, was significantly diminished by both resveratrol doses (Figure 7), an effect that has already been reported for erythrocytes in vitro [55] and in rats following severe acute pancreatitis [56]. Hemolysis may additionally facilitate platelet clumping in the microcirculation and thus promote organ dysfunction. Reduced hemolysis can further improve the oxygen supply of impaired organs and particularly attenuate organ injury by free Hb.

Taken together, animals from both resveratrol groups, Shock-R20 and Shock-R60, maintained a higher MAP as well as a more often detectable peripheral oxygen saturation. This strongly suggests that an increased cardiac output (Q), an improved perfusion of the peripheral vascular system by a better relaxation through the eNOS system, an increased mitochondrial function in ASMCs, decreased inflammatory effects as well as diminished thrombotic events in different organs may explain the protective effects of resveratrol following hemorrhagic shock. Considering the low doses of resveratrol administered, our data suggest that the combination of these salutary effects of the polyphenol more likely explains its protective effects following severe shock than antioxidant mechanisms such as radical scavenging. Resveratrol is widely used as a supplement and is not regulated by the Food and Drug Administration (FDA). Other polyphenols like polydatin also show promising effects in trauma hemorrhage [18,20].

Conclusions

We are the first to show the salutary effects of continuous intravenous infusions of solvent-free low-dose resveratrol on different organs and systemic parameters following severe hemorrhagic shock. We hypothesize that the concentrations of resveratrol used in our experiments were too low, and the application period too short, to explain the considerable protective effects of resveratrol by its known radical scavenging capabilities. We rather hypothesize that the effects of resveratrol after hemorrhagic trauma result from its anti-inflammatory potential as well as positive effects on MAP and organ perfusion.
Isomalto oligosaccharide sulfate inhibits tumor growth and metastasis of hepatocellular carcinoma in nude mice

Background: Hepatocellular carcinoma (HCC) usually has a dismal prognosis because of its limited response to current pharmacotherapy and high metastatic rate. Sulfated oligosaccharide has been confirmed as having potent antitumor activities against solid tumors. Here, we explored the preclinical effects and molecular mechanisms of isomalto oligosaccharide sulfate (IMOS), another novel sulfated oligosaccharide, in HCC cell lines and a xenograft model.

Methods: The effects of IMOS on HCC proliferation, apoptosis, adhesion, migration, and invasiveness in vitro were assessed by cell counting, flow cytometry, adhesion, wound healing, and transwell assays, respectively. The roles of IMOS on HCC growth and metastasis in xenograft models were evaluated by tumor volumes and fluorescent signals. Total and phosphorylated protein levels of AKT, ERK, and JNK as well as total levels of c-MET were detected by Western blotting. IMOS-regulated genes were screened by quantitative reverse-transcription PCR (qRT-PCR) array in HCCLM3-red fluorescent protein (RFP) xenograft tissues and then confirmed by qRT-PCR in HepG2 and Hep3B cells.

Results: IMOS markedly inhibited cell proliferation and induced cell apoptosis of HCCLM3, HepG2, and Bel-7402 cells and also significantly suppressed cell adhesion, migration, and invasion of HCCLM3 in vitro. At doses of 60 and 90 mg/kg/d, IMOS displayed robust inhibitory effects on HCC growth and metastasis without obvious side effects in vivo. The levels of pERK, tERK, and pJNK as well as c-MET were significantly down-regulated after treatment with 16 mg/mL IMOS. No obvious changes were found in the levels of pAkt, tAkt, and tJNK. Ten differentially expressed genes were screened from HCCLM3-RFP xenograft tissues after treatment with IMOS at a dose of 90 mg/kg/d. Similar gene expression profiles were confirmed in HepG2 and Hep3B cells after treatment with 16 mg/mL IMOS.

Conclusions: IMOS is a potential anti-HCC candidate through inhibition of ERK and JNK signaling independent of p53 and is worth studying further in patients with HCC, especially at advanced stages.

Background

Hepatocellular carcinoma (HCC) is the sixth most common cancer and the third leading cause of cancer-related death globally [1]. As indicated in statistics, the disease is diagnosed in 30% to 40% of all patients at early stages and about 20% of all patients are amenable to curative therapies, such as resection, liver transplantation, and radiofrequency ablation [2,3]. Five-year survival rates of up to 60% to 70% have been achieved in well-selected patients [2]. However, HCC at advanced stages usually carries a dismal prognosis because of liver dysfunction, lack of effective treatment options, and a high metastatic rate [4,5]. Therefore, it is urgent to explore new therapeutic options for patients with advanced HCC.

Two distinctive differences in molecular structure exist between isomalto oligosaccharide sulfate (IMOS) and PI-88. IMOS is composed of four sulfated isomaltose molecules with a molecular weight <1500 Da, whereas PI-88 is composed of five sulfated mannose molecules with a molecular weight of 2100 to 2585 Da. Such alterations in structure may affect its toxicity and antitumor effects. In this report, we present our preliminary evidence of the effects of IMOS on experimental HCC growth and metastasis.

IMOS

IMOS, with a patent (patent no.
ZL2005 1 0002141.8) granted by the State Food and Drug Administration of China, is designed and successfully synthesized de novo by Herbon Polysaccharide Bio-tech. Figure 1 shows the chemical structure of IMOS (Figure 1. Chemical structure of IMOS; R: SO3Na or H). IMOS was dissolved in Dulbecco modified Eagle medium (DMEM) containing 10% fetal bovine serum (FBS; Gibco BRL, Grand Island, NY, USA), sterilized with a 0.22-μm filter (Millipore, Billerica, MA, USA), and stored at a concentration of 320 mg/mL for in vitro assays. In a similar way, IMOS was dissolved in saline under sterile conditions at a concentration of 600 mg/mL for in vivo assays.

Cell proliferation assay

Cell proliferation was assessed by the method described previously [17]. In brief, HCCLM3, HepG2, and Bel-7402 cells were seeded into 96-well plates at 2 × 10³ cells/well. Twenty-four hours later, cells were exposed to IMOS at doses ranging from 0 to 64 mg/mL. On days 1, 2, 3, 4, and 5, cells were digested with pancreatic enzymes including ethylenediaminetetraacetic acid (EDTA) and washed with phosphate-buffered saline (PBS). Cell numbers were then counted by the Countess™ automated cell counter (Life Technologies, CA).

Cell cycle and apoptosis assays

Cell cycle and apoptosis were detected using the Annexin V-FITC Apoptosis Detection Kit™ according to the manufacturer's instructions (BD Pharmingen, San Diego, CA). Briefly, HCCLM3, HepG2, and Bel-7402 cells were plated into 6-well plates at 4 × 10⁵ cells/well. After treatment with IMOS at 0, 2, 4, 16, 32, or 64 mg/mL for 24 hours, the cells were fixed with ethanol and stained with annexin V for the early apoptosis assay by a fluorescence-activated cell sorter (FACS) Calibur cytometer (BD Biosciences, San Jose, CA, USA). In the same way, 72 hours after IMOS treatment, the cells were stained with propidium iodide (PI) for late apoptosis and cell cycle assays.

Cell adhesion assay

The 96-well flat-bottom plates were precoated with 50 μL/well of 1:8 PBS-diluted Matrigel at 4°C overnight. After removing all coating solutions, the plates were blocked with 150 μL of 1% bovine serum albumin for 1 hour at 37°C. Then, HCCLM3 cells were treated with 0, 16, 32, or 64 mg/mL IMOS for 4 hours, seeded into Matrigel-coated wells at 5 × 10⁴ cells/well, and incubated for 2 hours at 37°C in 5% CO2. After extensive washing, cells were fixed with 100 μL/well of 4% formaldehyde for 20 minutes and stained with a hematoxylin solution for 10 minutes. The average numbers of adherent cells in four quadrants were counted by inverted microscope.

Wound healing assay

Cell migration was analyzed by a wound healing assay. When cells grew to 90% confluency, a scratch wound in the monolayer was made using a pipette tip. After washing away all detached cells with PBS, the remaining cells were treated with 0, 16, 32, or 64 mg/mL IMOS, and then the distances of the wounds were measured by microscope at 0, 24, and 48 hours after treatment. Cell motility was evaluated using the following formula: cell motility = (distance at 24 or 48 hours − distance at 0 hours)/distance at 0 hours.
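As an illustration of the wound-healing readout just described, the sketch below applies the stated motility formula to hypothetical wound-distance measurements; the numbers, and the interpretation of "distance" as the residual wound gap, are assumptions rather than data from the study.

```python
# Hypothetical wound-healing measurements (µm); not data from the study.
# motility = (distance_t - distance_0) / distance_0, as defined in the methods.
def cell_motility(distance_0h: float, distance_t: float) -> float:
    """Relative change of the measured wound distance at time t versus 0 h."""
    return (distance_t - distance_0h) / distance_0h

wounds = {
    "0 mg/mL":  {"0h": 500.0, "24h": 210.0, "48h": 60.0},
    "64 mg/mL": {"0h": 500.0, "24h": 430.0, "48h": 390.0},
}

for dose, d in wounds.items():
    m24 = cell_motility(d["0h"], d["24h"])
    m48 = cell_motility(d["0h"], d["48h"])
    # With the wound-gap interpretation, values closer to -1 mean faster closure,
    # i.e. more motile cells; values near 0 mean the wound barely closed.
    print(f"{dose}: motility 24 h = {m24:.2f}, 48 h = {m48:.2f}")
```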
Invasion assay

Cell invasion was analyzed by a Transwell™ Permeable Supports system (Corning, Inc., Corning, NY, USA) according to the manufacturer's instructions. HCCLM3 cells were pretreated with 0, 16, 32, or 64 mg/mL IMOS for 48 hours, and then seeded into the Matrigel-coated upper insert at 8 × 10⁴ cells/well in 24-well plates in medium supplemented with 1% serum. Medium containing 10% serum was added to the well as a chemoattractant. Following a culture of 48 hours, non-invading cells were removed from the upper surface by wiping with a cotton swab. The membrane was fixed with 4% formaldehyde for 15 minutes at room temperature. The invading cells were stained with Giemsa (Sigma, Munich, Germany) for 25 minutes, and their numbers in 10 fields of each triplicate filter were analyzed by inverted microscope.

Protein levels detected by Western blotting

Total and phosphorylated protein levels of AKT, ERK, and JNK as well as total protein of c-MET in HepG2 and Hep3B cells were evaluated by Western blotting. About 20 μg protein was extracted from sham-treated and 16 mg/mL IMOS-treated cells, separated by 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE), transferred onto polyvinylidene fluoride membranes, and then reacted with primary rabbit antibodies against total and phosphorylated AKT, ERK, and JNK (1:500; Bioworld Tech, Minneapolis, MN, USA), c-MET (1:1000, Epitomics, Burlingame, CA, USA) and glyceraldehyde-3-phosphate dehydrogenase (GAPDH). After being extensively washed with PBS containing 0.1% Triton X-100, the membranes were incubated with alkaline phosphatase-conjugated goat anti-rabbit antibody for 30 minutes at room temperature. The bands were visualized using 1-step™ NBT/BCIP reagents (Thermo Fisher Scientific, Rockford, IL, USA) and detected by the Alpha Imager (Alpha Innotech, San Leandro, CA, USA).

Tolerable dose assay of IMOS in vivo

A dose-escalation strategy was used in male athymic BALB/c mice (Institute of Materia, CAS, Shanghai, China) to determine the maximum tolerable dose of IMOS. Eighty mice, aged 4 weeks, were divided into groups of 10 mice apiece, with each mouse intraperitoneally injected with IMOS at a dose of 0, 30, 60, 90, 180, 360, 480, or 600 mg/kg/d, respectively. Mouse survival was monitored every day. We planned to halt the dose escalation if mice died. Serum was collected for assays of hepatorenal function. Plasma was collected for determining thrombocyte counts. Heart, liver, and kidney tissues were subjected to hematoxylin and eosin staining for pathologic examinations. The maximum nonlethal dose was determined for the following in vivo therapeutic study. All procedures were approved by the Animal Care and Use Committee of Shanghai, China.

Antitumor growth and metastasis assays in vivo

Antitumor activities of IMOS in vivo were assessed against HCCLM3-RFP xenografts. Three 4-week-old male athymic BALB/c mice were injected subcutaneously with 1 × 10⁷/0.2 mL of HCCLM3-RFP cells in the right upper flank region to establish subcutaneous xenograft models. Four weeks later, the tumors that had grown to 1 cm in diameter were removed, cut into 1-mm³ pieces, and implanted into livers of another 24 mice to establish orthotopic xenograft models as described previously [18]. Then, all mice were randomly divided into four groups of six mice each and intraperitoneally injected with IMOS at 0, 30, 60, or 90 mg/kg/d once daily for 30 consecutive days. Fluorescent images of the in situ tumor were taken once a week with the mouse anesthetized with 50 mg/kg sodium pentobarbital. On day 30, all mice were sacrificed and tumor volume was calculated using the formula V (mm³) = width² (mm²) × length (mm)/2. Metastatic foci in lungs and mesenteries were counted by fluorescent stereomicroscope (stereomicroscope: Leica MZ6; illumination: Leica L5 FL; C-mount: 0.63/1.25; CCD: DFC 300FX). Fluorescence area (AOI, pixel) was quantified by Image-Pro Plus 6.0 (Media Cybernetics, Silver Spring, MD, USA) as described previously [17].
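The tumor volume formula above lends itself to a short numerical illustration. The caliper measurements below are invented placeholders chosen only to roughly reproduce the mean volumes reported later in the results; they are not measurements from the study.

```python
# Hypothetical caliper measurements (mm); placeholders, not data from the study.
def tumor_volume(width_mm: float, length_mm: float) -> float:
    """V (mm^3) = width^2 (mm^2) x length (mm) / 2, as stated in the methods."""
    return (width_mm ** 2) * length_mm / 2.0

measurements = {
    "sham-treated":    (14.0, 19.5),   # (width, length)
    "90 mg/kg/d IMOS": (10.0, 16.0),
}

volumes = {group: tumor_volume(w, l) for group, (w, l) in measurements.items()}
for group, v in volumes.items():
    print(f"{group}: {v:.0f} mm^3 ({v / 1000:.2f} cm^3)")

# Percent inhibition relative to the sham-treated control (an assumed summary metric).
inhibition = 100 * (1 - volumes["90 mg/kg/d IMOS"] / volumes["sham-treated"])
print(f"volume inhibition: {inhibition:.1f}%")
```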
IMOS-regulated genes detected by quantitative reverse-transcription polymerase chain reaction (qRT-PCR) and qRT-PCR array

Tumor tissues from mice treated with IMOS at a dose of 0 or 90 mg/kg/d were enrolled for differentially expressed gene analysis by RT Profiler PCR Arrays (SABioscience, PAHS-027A, Frederick, MD, USA), performed by Kangchen Bio-tech (Shanghai, China). The mRNA levels of differentially expressed genes in HepG2 and Hep3B cells after 16-mg/mL IMOS treatment were confirmed by qRT-PCR. Total RNA of cells was extracted using the RNeasy MinElute Cleanup Kit (Qiagen, Valencia, CA, USA). Then, 1.5 μg RNA was reverse transcribed into first-strand cDNA using SuperScript™ III Reverse Transcriptase (Invitrogen, NY, USA). Primer sequences and amplification conditions are listed in Additional file 1. The reactions were performed on a DNA Engine Opticon system (MJ Research, Reno, NV, USA) using SYBR® Green PCR Master Mix (Applied Biosystems). Following each cycle, SYBR green fluorescence was monitored and the melting curve was analyzed to ensure that a single PCR product was obtained. Afterward, the size and specificity of amplicons were confirmed by 2.5% agarose gel electrophoresis. All reactions were repeated in three separate runs and evaluated with the Opticon Monitor software (Version 1.02). GAPDH was used to normalize the samples. RNase-free water (Qiagen) was included as a negative control in RNA extraction and in each run.

Statistical analysis

Statistical analysis was performed with SPSS 15.0 for Windows (SPSS, Chicago, IL, USA). Quantitative variables were expressed as means ± SD and analyzed by ANOVA. Results were considered statistically significant at P < 0.05.

Inhibitory effects of IMOS on HCC proliferation

To explore the effects of IMOS on HCC proliferation, HCCLM3, HepG2, and Bel-7402 cells were treated with IMOS at doses ranging from 0 to 64 mg/mL. IMOS dramatically decreased cell numbers of all tested cell lines in a dose-dependent manner, especially when exposed to 16- and 64-mg/mL doses of IMOS (Figure 2A-C). The inhibitory ratio of IMOS on cell proliferation was significantly increased from 23.3% ± 1.9% to 99.06% ± 4.6% in HCCLM3 cells, from 24.4% ± 9.5% to 98.5% ± 9.8% in HepG2 cells, and from 27.8% ± 1.2% to 91.4% ± 1.6% in Bel-7402 cells during a 5-day treatment (Figure 2D). The data suggest IMOS has robust suppressive effects on HCC proliferation.

Cell cycle arrest and apoptosis induced by IMOS

To address whether the proliferation inhibition by IMOS was attributable to cell cycle arrest, cell cycle phases of HCCLM3, HepG2, and Bel-7402 cells were next analyzed by flow cytometry. As expected, cell numbers at the S and G2/M phases were significantly decreased, whereas cell numbers at the G0/G1 phase were markedly increased after IMOS treatment in a dose-dependent manner (Figure 3A-C). To further investigate whether cell apoptosis was also involved in the proliferation inhibition caused by IMOS, early and late apoptotic cells were monitored by annexin V and PI staining, respectively. The percentage of early apoptotic cells increased significantly after ≥4 mg/mL IMOS treatment (Figure 3D-F). The percentage of late apoptotic cells was statistically higher in HCCLM3, HepG2, and Bel-7402 cells after ≥16 mg/mL IMOS treatment (Figure 3G-I).
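The gene-expression comparisons reported later rest on the GAPDH-normalized qRT-PCR described in the methods above. The paper does not state its exact quantification algorithm, so the sketch below, with made-up Ct values, only assumes the conventional 2^-ΔΔCt method as one way the ">2-fold" calls could be derived.

```python
# Hypothetical Ct values; the 2^-ddCt method is assumed, not stated in the paper.
def fold_change(ct_gene_treated, ct_gapdh_treated, ct_gene_control, ct_gapdh_control):
    """Relative expression (treated vs. control) normalized to GAPDH via 2^-ddCt."""
    d_ct_treated = ct_gene_treated - ct_gapdh_treated
    d_ct_control = ct_gene_control - ct_gapdh_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Example: a gene whose Ct rises by ~1.5 cycles after IMOS (placeholder numbers).
fc = fold_change(ct_gene_treated=26.5, ct_gapdh_treated=18.0,
                 ct_gene_control=25.0, ct_gapdh_control=18.0)
print(f"fold change = {fc:.2f}")  # ~0.35, i.e. more than 2-fold down-regulated
if fc < 0.5:
    print("down-regulated >2-fold")
elif fc > 2:
    print("up-regulated >2-fold")
else:
    print("within 2-fold of control")
```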
Adhesion, migration, and invasiveness of HCCLM3 inhibited by IMOS in vitro

To detect the antitumor activities of IMOS on HCCLM3, cell adhesion, wound healing, and transwell assays were performed after treatment with IMOS at doses of 0, 16, 32, and 64 mg/mL, respectively. HCCLM3 adhesion was markedly inhibited by IMOS at doses of 16, 32, and 64 mg/mL as compared with sham treatment (Figure 4A and 4B). HCCLM3 migration was significantly suppressed in a dose- and time-dependent manner (Figure 4C and 4D). In addition, the numbers of transmembrane cells after IMOS treatment with doses of 0, 16, 32, and 64 mg/mL were 174.67 ± 5.69, 84.33 ± 18.15, 69 ± 18.52, and 17 ± 5.57, respectively (Figure 4E and 4F). These data demonstrate that IMOS is a potent inhibitor of cell adhesion, migration, and invasiveness of HCCLM3.

Maximum tolerable dose of IMOS in vivo

To determine the maximum tolerable dose for the in vivo therapeutic study, IMOS at an initial dose of 30 mg/kg/d was injected intraperitoneally into mice once daily in an escalating-dose schedule. No mouse death was observed during a 30-day treatment with IMOS given at 30, 60, and 90 mg/kg/d. On day 30, the numbers of thrombocytes in mice treated with IMOS at 0, 30, 60, and 90 mg/kg/d were 1164 ± 183, 1089 ± 210, 1170 ± 224, and 1049 ± 258 × 10⁹/L, respectively. No obvious thrombocytopenia was found after IMOS treatment. No significant abnormalities were found in these mice as evaluated by body weights (Additional file 2), hepatorenal function (Additional file 3), and pathologic examinations of heart, liver, and kidney tissues (Additional file 4). However, mice began to die at day 6 at an IMOS dose of 180 mg/kg/d. Therefore, IMOS at a dose of ≤90 mg/kg/d was well tolerated by athymic BALB/c mice.

HCC growth and metastasis suppressed by IMOS in xenograft models

To determine its effects on HCC progression, IMOS was intraperitoneally given to HCCLM3 xenograft mice at doses of 30, 60 and 90 mg/kg/d. In accord with the results in vitro, IMOS inhibited tumor growth and metastasis of the HCCLM3 xenograft in a dose-dependent pattern (Figure 5A-E). Tumor growth was observed to be dramatically suppressed by IMOS at 90 mg/kg/d after a 3-week treatment (Figure 5A). On day 30, tumor volumes in 30, 60 and 90 mg/kg/d IMOS-treated mice were 1.24 ± 0.28 cm³, 1.01 ± 0.22 cm³, and 0.8 ± 0.1 cm³, respectively, much smaller than the volume in sham-treated mice (1.91 ± 0.27 cm³, P < 0.001; Figure 5B). Furthermore, metastatic foci in lung and mesentery in mice treated with IMOS doses of 30, 60, and 90 mg/kg/d were 3327 ± 137 and 1547 ± 56, 1335 ± 115 and 72 ± 15, and 1120 ± 105 and 60 ± 11, respectively, which were also statistically smaller than those seen in sham-treated mice (3506 ± 125 and 1764 ± 78; Figure 5C-E). The results suggest that IMOS has potent suppressive activities not only on HCC growth but also on HCC metastasis.

Signal pathways and gene expression regulated by IMOS

To understand the underlying mechanisms, total and phosphorylated protein levels of AKT, ERK, and JNK as well as c-MET were analyzed in sham-treated and 16-mg/mL IMOS-treated HepG2 and Hep3B cells. The levels of pERK, tERK, and pJNK were significantly down-regulated in both cell lines, whereas the level of c-MET was markedly down-regulated only in HepG2 cells. No obvious changes were found in the protein levels of pAkt, tAkt, and tJNK (Figure 6A). Furthermore, 10 differentially expressed genes were found in 90-mg/kg/d IMOS-treated xenograft tissues as compared with sham-treated tissues.
Among them, Bcl-2, BIRC/survivin, PCNA, CDK1/CDK2, and PRC1 were down-regulated more than 2-fold, whereas BAI-1, TP73, and MDM2 were up-regulated more than 2-fold (Figure 6B). Except for RPRM/Reprimo and interferon β (IFN-β), similar expression profiles were confirmed in IMOS-treated HepG2 and Hep3B cells as compared with the HCCLM3 xenograft (Figure 6C and 6D).

Discussion

HCC usually has a limited response to current pharmacotherapy. It has been reported that MHCC97L and HepG2 cells surviving oxaliplatin treatment show enhanced migration and invasion in vitro and increased metastasis to the lung when reinoculated into nude mice [19]. Similar results were observed in our previous study, in which IFN-α consistently suppressed HCC growth but also promoted tumor metastasis capacity [20]. Therefore, it is urgent to investigate new agents with robust inhibitory effects on both HCC growth and metastasis. Fortunately, small molecular agents of sulfated oligosaccharides were confirmed to have potent antitumor activities against primary tumor growth and metastasis [11,12]. Therefore, IMOS was assumed to have similar activities against HCC progression. As expected, IMOS dramatically inhibited cell proliferation and induced cell cycle arrest and apoptosis in the three tested HCC cell lines. Furthermore, suppressive effects on cell adhesion, motility, and invasiveness of HCCLM3 in vitro as well as on tumor growth and metastasis of the HCCLM3 xenograft in vivo were clearly achieved by IMOS treatment in a dose-dependent manner. These findings suggest that IMOS is a possible novel compound to be used against progression of HCC.

According to previous studies, sulfated oligosaccharides were thought to suppress tumor angiogenesis, growth, and metastasis primarily by their competitive inhibition of the cleavage of the heparan sulfate-growth factor complex, thus reducing the release of growth factors, such as vascular endothelial growth factor and basic fibroblast growth factor, from the microenvironmental matrix [21]. However, many direct actions of IMOS on HCC cells in vitro cannot be satisfactorily explained by its inhibition of heparanase activities. Therefore, we focused our mechanism studies on cell proliferation and apoptosis regulation in this study. Numerous studies have shown that the Akt, ERK, and JNK signaling pathways have important roles in cancerous cell proliferation, cell cycle, and apoptosis regulation in a p53-dependent or -independent manner [22-24]. Dysfunctions of those pathways are common events in tumorigenesis and progression in many types of cancers, including HCC [25,26]. Therefore, HepG2, an HCC cell line with wild-type p53, and Hep3B, a cell line with mutant p53, were used for further study to elucidate the mechanism of IMOS. Our preliminary results from Western blots revealed that the ERK and JNK but not Akt signaling pathways were significantly inhibited by IMOS in both cell lines, whereas c-MET, a well-known p53 transcriptional target, was only down-regulated in HepG2 [27]. All those results indicated that ERK/JNK signaling pathways were involved in IMOS-mediated p53 activity in HepG2, but not in Hep3B. Ten differentially expressed genes were screened from HCCLM3-RFP xenografts. Eight of them were confirmed in IMOS-treated HepG2 and Hep3B cells. Because similar profiles of most differentially expressed genes were found in both HepG2 and Hep3B cells, IMOS may inhibit cell proliferation and induce cell apoptosis of HCC in a p53-independent manner.
Among them, BAI-1 and TP73 were significantly up-regulated, whereas Bcl-2, BIRC5/survivin, PCNA, and CDK1/CDK2 were markedly down-regulated. Our findings are consistent with previous observations that Bcl-2, BIRC5/survivin, and BAI-1 are mediators of cell cycle arrest and apoptosis regulation [28-30]. Ectopic expression of PCNA was able to suppress cell apoptosis [31]; thus, the decreased expression of PCNA in HCC after IMOS treatment was probably able to induce cell apoptosis. Furthermore, since P73 is an apoptosis-inducing gene [32], its enhanced expression in IMOS-treated tissue may also promote HCC apoptosis.

Conclusions

IMOS, a novel sulfated oligosaccharide, at doses of ≤90 mg/kg/d exhibited potent antitumor effects on experimental HCC growth and metastasis. It should be a promising anti-HCC agent and is worth further study in patients with HCC, especially disease at advanced stages.
The Confluence of Stereotactic Ablative Radiotherapy and Tumor Immunology

Stereotactic radiation approaches are gaining more popularity for the treatment of intracranial as well as extracranial tumors in organs such as the liver and lung. Technology, rather than biology, is driving the rapid adoption of stereotactic body radiation therapy (SBRT), also known as stereotactic ablative radiotherapy (SABR), in the clinic due to advances in precise positioning and targeting. Dramatic improvements in tumor control have been demonstrated; however, our knowledge of normal tissue biology response mechanisms to large fraction sizes is lacking. Herein, we will discuss how SABR can induce cellular expression of MHC I, adhesion molecules, costimulatory molecules, heat shock proteins, inflammatory mediators, immunomodulatory cytokines, and death receptors to enhance antitumor immune responses.

Introduction

Stereotactic radiosurgery (SRS) was originally developed for the treatment of intracranial tumors and has demonstrated clinical effectiveness in treating a variety of benign and malignant conditions. Its extracranial counterpart, stereotactic body radiation therapy (SBRT), also known as stereotactic ablative radiotherapy (SABR), has more recently shown efficacy for the treatment of tumors in organs such as the liver and lung. The potential for using SABR is likely greater than for SRS given the larger volume of potential indications outside the central nervous system. Technology, rather than biology, is driving the rapid adoption of SABR in the clinic due to advances in precise positioning, motion control, dosimetry, and precise targeting with image guidance. Dramatic improvements in tumor control have been demonstrated in several studies because these technologies allow very potent doses to be delivered. However, our knowledge of normal tissue biology response mechanisms to large fraction sizes is relatively lacking compared to conventional fractionation.

Radiobiologic Considerations

A fundamental issue in SABR is whether classical radiobiologic modeling with the linear-quadratic (LQ) model is a valid method to assess the biologically effective dose at the high doses typically encountered in radiosurgery. This point was debated in back-to-back papers in Seminars in Radiation Oncology [1,2], where Brenner argued that LQ formalism was appropriate whilst Kirkpatrick and colleagues suggested it was inappropriate. Brenner's argument is based on the robustness of the LQ model to predict fractionation and dose-rate effects in experimental models in vitro and in vivo at doses up to 10 Gy. This conclusion is based on the premise that cell killing is the dominant process mediating the radiotherapeutic response for both early and late effects, including vascular effects. Brenner argued that, to date, there is no evidence of problems when LQ has been applied in the clinic. However, this was the crux of Kirkpatrick and colleagues' argument. They noted multiple studies demonstrating that the administration of a single high dose of radiation in vivo had a much greater effect than predicted by the LQ model; they cited several examples, including Leith et al. [3], who calculated that the dose to obtain a high probability of tumor control for brain lesions would be at least 25 to 35 Gy using the LQ model, which was much higher than the observed clinically effective radiosurgical dose, which was in the range of 15-20 Gy.
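For readers less familiar with the formalism being debated, the linear-quadratic model referred to above is summarized below; these are standard radiobiology relations rather than equations taken from the cited papers. The surviving fraction after a single dose $d$ is

$$SF(d) = e^{-(\alpha d + \beta d^{2})},$$

and for $n$ fractions of size $d$ the biologically effective dose is

$$\mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right).$$

The dispute is essentially over whether the quadratic $\beta d^{2}$ term, fitted at conventional 1.8-2 Gy fractions, still describes cell killing when $d$ reaches the 15-25 Gy used in radiosurgery; alternative formulations such as the universal survival curve discussed below keep the LQ form at low doses but switch to a terminally linear survival curve above a transition dose.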
Kirkpatrick maintained that there was a disconnect between in vitro cell survival data and observed clinical data, which suggests that there is more than one mechanism of radiation damage and that these operate differentially at low and high doses. In addition, Kirkpatrick argues that the LQ model does not effectively address the potential existence of radioresistant cancer stem cells, which may require a threshold dose to be crossed before their death is triggered. Several authors have proposed alternate models to the LQ. In all cases, they argue that the LQ model was intended as a low-dose mathematical representation of the data constituting the survival curve [4,5]. As most survival curves demonstrate a curvilinear "shoulder" followed by a linear portion on a linear-log scale, the LQ model's tendency to bend continuously at high doses overpredicts cell killing at high dose per fraction from a purely mathematical perspective. In the case of the universal survival curve of Park et al. [5], the strength of the LQ in the low-dose realm is exploited but abandoned for the linear multitarget model in the high-dose realm. Thus, the in vitro survival curve has goodness of fit in all clinically significant ranges, including the ablative range characteristic of SABR. Admittedly, none of the proposed mathematical models properly accounts for in vivo effects, including vascular and immune contributions to cell death.

The Role of Tumor Stroma

As stated above, the accepted rationale for radiotherapy (RT) is based on causing lethal DNA damage to tumor cells and the tumor-associated stroma. There is unequivocal evidence, presented by Fuks and colleagues, that the tumor stroma plays an important role in the response to high dose per fraction radiation treatment. They demonstrated that vascular endothelial cell apoptosis is rapidly activated above 10 Gy per fraction [6], and that the ceramide pathway orchestrated by acid sphingomyelinase (ASMase) operates as a rheostat that regulates the balance between endothelial survival and death and thus tumor response [7]. These studies relied heavily on mice that had ASMase knocked out in all tissues; the authors have countered the argument that the defective immune system that is known to occur in ASMase −/− mice [8] influenced their observations [6]. Damage to vascular/stromal elements in tumors has also been observed around 2 weeks after radiation exposure that was less dependent on the size of dose per fraction [9]. Pathological observations show profound changes in vasculature after radiosurgery, and studies on arteriovenous malformations [10], where obliteration of abnormal vasculature occurs months after irradiation, show that this is rarely seen below single doses of 12 Gy, climbing steeply with increasing doses above this threshold.

In terms of the infiltrating immune cell component of the tumor stroma, conventional RT has traditionally been viewed as immunosuppressive [11], but the systemic effects of both cancer and local radiotherapy of cancer on the immune system are clearly more complex than this. Although lymphocyte radiosensitivity is well recognized, the effects of different doses and delivery methods on systemic and locoregional naive, effector, or regulatory T cell or other immunologically relevant populations are still the subject of debate [12,13]. Several authors have investigated the potential immunomodulatory effects of localized RT on tumors, resulting in conflicting reports as to whether these responses promote or interfere with tumor reduction [14-16]. This dualism is something that is to be expected and is inherent in a system that has to promote both destruction of pathogens and tissue healing while regulating anti-self reactivity. It is also possible that the more positive effects seen in colorectal cancer, where the immune score was significantly associated with differences in disease-free, disease-specific, and overall survival [17], are in part a reflection of additional microbial challenges that may not be present in other sites.
Several authors have investigated the potential immunomodulatory effects of localized RT on tumors, with conflicting reports as to whether these responses promote or interfere with tumor reduction [14][15][16]. This dualism is to be expected and is inherent in a system that has to promote both destruction of pathogens and tissue healing while regulating anti-self reactivity. It is also possible that the more positive effects seen in colorectal cancer, where the immune score was significantly associated with differences in disease-free, disease-specific, and overall survival [17], are in part a reflection of additional microbial challenges that may not be present at other sites. It has been shown that radiation can trigger signals that stimulate Toll-like receptor 4 on antigen-presenting dendritic cells (DCs) [18]; Liao et al. have shown that irradiation of DCs can enhance presentation of antigenic peptides by the exogenous pathway and acts as a maturation signal, while inhibiting endogenous antigen processing [21]; and Merrick et al. showed a decrease in IL-12 production that has a negative effect on presentation [15]. Several reports have shown increased expression of MHC class I and coaccessory molecules after irradiation of both tumor and host cells; Chakraborty et al. [19] reported a direct effect of radiation on tumors, modifying the phenotype of tumor cells to render them more susceptible to vaccine-mediated T-cell killing; and others have shown that radiation-induced changes in the tumor immune microenvironment promote greater infiltration of immune effector cells [22] (Figure 1).

Mechanisms of Radiation-Driven Tumor Immunology

The early report of Stone [23] that the immune system can dramatically alter the dose required to obtain local tumor control has been updated by Lee and colleagues, who showed that CD8+ T cells could be responsible for the therapeutic effects of ablative radiation [24]. The delivery of an ablative radiation dose of 15-25 Gy was found to cause a significant increase in T-cell priming in draining lymphoid tissue, leading to reduction or eradication of the primary tumor or distant metastasis in a CD8+ T-cell-dependent fashion in an animal model. While conventional 2 Gy doses seem inferior at generating such responses, fractionated regimens using larger doses per fraction may be better than single doses [25]. The possibility that a certain dose per fraction is optimal for stimulating radiation adjuvanticity is relevant both to the mechanism of radiation-induced immune stimulation and to clinical practice. Conventional RT has already been shown to enhance tumor-specific T-cell responses [26], but such responses are likely of little clinical relevance on their own and can surely be improved upon by optimizing dose delivery and integrating RT with modern immunotherapeutic strategies. Radiation not only kills tumor cells, releasing tumor antigens, but also liberates molecules collectively called damage-associated molecular patterns (DAMPs), which exert various immunomodulatory effects including induction of cytokine and chemokine expression and release of inflammatory mediators [27][28][29][30] (Figure 1). Although proinflammatory cytokines generally are produced at higher doses than are conventionally used in RT, there may be an accumulating effect [31].
Radiation also increases the permeability of the local vasculature, either directly or through cytokine production, leading to recruitment of circulating leukocytes, including antigen-presenting cells and effector T cells, into surrounding tissues [32][33][34]. Thus, a radiation-induced proinflammatory microenvironment within irradiated tumors could provide DCs with the maturation-inducing stimuli critical for eliciting effective antigen presentation. The obverse is that radiation can stimulate production of suppressor myeloid cells [35] and Treg cells [36] in a dose-dependent manner; these populations presumably act to dampen and contain tissue damage but can be highly immunosuppressive. Thus, to "unmask" the more positive effects of radiation killing on immunity, it may be necessary to target and impair these natural defenses. Advances in understanding the mechanisms that regulate the development of antitumor immunity, as well as improved knowledge of the complex effects of radiation on tissues [37], have revived interest in combining radiation and immune-based therapies to achieve better local and systemic tumor control [28][29][30][31]. Since William Coley started treating patients with bacterial toxins at the end of the 19th century, there have been waves of enthusiasm for immunotherapy in the treatment of cancer. The introduction of cytokines, in particular interleukin-2 (IL-2), for cancer treatment was a major clinical effort that had modest success. Until recently, however, these efforts were hampered by a lack of molecular definition of tumor antigens, of a means of delivering them effectively, and of a sensitive and reliable way to measure responses. This situation changed with the molecular cloning of human tumor-associated antigens that could be recognized by T cells, the ability to culture powerful antigen-presenting cells (APCs) in the form of dendritic cells (DCs), and the ability to assess immune responses to specific tumor epitopes using tetramer and ELISPOT assays [38]. These advances, allied with the development of genetically modified mouse models, have led to a deeper understanding of the interactions between cancer and the host immune system [39]. Indeed, the available experimental evidence supports the hypothesis that, by the time tumors become clinically apparent, their immunogenicity has been modified by the selective pressure of the immune system, resulting in tumors that are characteristically poorly immunogenic, able to escape immune detection, and/or able to actively inhibit immune effectors [39]. Furthermore, it is clear that, although T cells become tolerant to many self-antigens in the thymus, which depletes the pool that might react to cancer, tolerance to many self-components is also actively maintained in the periphery by several mechanisms. For example, immature DCs presenting self-antigens to T cells are tolerogenic, and peripheral tolerance is maintained by Treg subsets that can be natural or induced. Suppressor macrophages form a final barrier to immune function and can result in immune shutdown [40]. Peripheral tolerance can be broken by "maturation" of DCs at local sites, allowing transient immune responses to invading pathogens; this leads to the belief that, were it not for these regulatory mechanisms, T cells could respond better to "self" antigens on tumors, something for which there is now considerable evidence [41].
The recognition that the host can break a state of tolerance that has developed to its own tumor offers many potentially effective immunotherapeutic strategies, some of which are currently being tested in clinical trials. The "danger" model of immunity suggests that pathogen-associated molecular patterns (PAMPs) and DAMPs engender an inflammatory milieu that promotes the development of antigen-specific immunity through DC maturation, which allows internalization of apoptotic and necrotic cellular debris and presentation of processed antigen to T cells. Administration of radiation may therefore be considered to create an inflammatory setting via DC maturation, induction of apoptosis and necrosis, and changes in cell surface and secretory molecules. As with many other challenges, radiation upregulates expression of immunomodulatory surface molecules (MHC, costimulatory molecules, adhesion molecules, death receptors, heat shock proteins) and secretory molecules (cytokines, inflammatory mediators) in tumor, stromal, and vascular endothelial cells. Important amongst these may be the upregulation of TNF family members, which could promote cell killing not only through TNF already present in the microenvironment but also through radiation-induced TNF.

Can Radiobiologic Models Be Adapted to Account for Other Modes of Tumor Response at High Dose Per Fraction?

The evidence therefore suggests that there are several potential immunologic mechanisms for cell killing in the high-dose range. The LQ model has long been considered to overestimate radiation cell killing at these doses as a consequence of the model's prediction of a continuous downward bend (the βd² term) in the survival curve. While in vivo data are sparse, the dose-response may be linear above 12 Gy [42], and two-component or other models have been described that may better predict the response at doses per fraction above 5-7 Gy. For example, Park et al. [5] described the effects of radiation in the ablative dose range using a universal survival curve (USC) model, which combines the LQ and multitarget models using a transition dose to separate the two fitting components. Using the LQ model, the potency of the doses used in the Indiana University phase II trial of SABR for medically inoperable NSCLC (20 Gy × 3) was estimated to be 1.7 times greater than the biological effectiveness of a similar Japanese trial delivering 12 Gy × 4. However, when the USC model was used, the potency of the Indiana University regimen was only 1.34 times that of the Japanese regimen [5]. Other models include the generalized LQ (gLQ) model, which accounts for the reduced conversion of sublethal to lethal injury during hypofractionated ablative-dose radiation, so that the actual effect of the radiation is lower than that estimated by the LQ model [43]. Modeling may never fully describe the complexity of the biological processes involved in the response to high dose per fraction radiation, but it might facilitate the design of optimal radiosurgery treatment plans once sufficient clinical data have been obtained. From a radiobiological perspective, what is clear is that several processes differ between high and low dose per fraction, including the ability of cells to progress through the cell cycle, the likelihood (and perhaps mechanism) of cell death, vascular effects, proinflammatory effects, and immune effects.
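To make the arithmetic behind this comparison explicit, the short sketch below computes biologically effective dose (BED) under the LQ model and under a USC-style model with a linear high-dose tail. The parameter values (α/β = 10 Gy; α, D0, Dq, and the transition dose chosen to approximate the NSCLC values used by Park et al. [5]) are assumptions introduced for illustration, not values stated in this article.

```python
# Illustrative comparison of LQ vs USC biologically effective dose (BED)
# for the two SABR regimens discussed above. Parameter values are
# assumptions (alpha/beta = 10 Gy; alpha, D0, Dq, and the transition dose
# approximate published NSCLC values), not data taken from this article.

def bed_lq(n, d, alpha_beta=10.0):
    """Standard LQ biologically effective dose: BED = n*d*(1 + d/(alpha/beta))."""
    return n * d * (1.0 + d / alpha_beta)

def bed_usc(n, d, alpha=0.33, alpha_beta=10.0, d0=1.25, dq=1.8, d_t=6.2):
    """USC-style BED: LQ below the assumed transition dose d_t,
    single-hit multitarget (linear tail) above it."""
    if d <= d_t:
        return bed_lq(n, d, alpha_beta)
    return (n * d - n * dq) / (alpha * d0)

regimens = {"Indiana (20 Gy x 3)": (3, 20.0), "Japanese (12 Gy x 4)": (4, 12.0)}

for name, (n, d) in regimens.items():
    print(f"{name}: BED_LQ = {bed_lq(n, d):.0f}, BED_USC = {bed_usc(n, d):.0f}")

lq_ratio = bed_lq(3, 20.0) / bed_lq(4, 12.0)     # ~1.70
usc_ratio = bed_usc(3, 20.0) / bed_usc(4, 12.0)  # ~1.34
print(f"LQ potency ratio ~{lq_ratio:.2f}, USC potency ratio ~{usc_ratio:.2f}")
```

With these assumed parameters, the script reproduces potency ratios of roughly 1.70 (LQ) and 1.34 (USC) for the 20 Gy × 3 versus 12 Gy × 4 regimens, matching the figures quoted above.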
Local Radiation Enhancement of Systemic Immunity

It is clear from what has been said that localized cancer has systemic immune effects, as does RT. It is also clear that the outcome of cancer and cancer therapy depends heavily upon the nature of the immune cells that are generated, in particular with respect to metastasis and overall survival. It seems likely that unexpected discrepancies in the relative efficacies of different anticancer regimens, and divergence or convergence between regional and distant failures, could be due to such systemic influences, for example, of local tumor control on the incidence of distant metastasis. Future studies aimed at assessing the predictive value of systemic responses in the response of cancer to different dose schedules of RT are likely to be very informative, and strategies that target innate as well as cancer- and radiation-induced systemic regulatory mechanisms hold great promise. These strategies, together with DC-based and other forms of antitumor vaccination, can greatly modify the total radiation dose required to achieve local control as well as influence distant disease, and RT should adapt to integrate optimally with such approaches. While most chemotherapy regimens are thought to compromise the immune system, they can also have immunomodulatory effects that require study.

Conclusions

Searching PubMed for references that contain "SBRT" or "SABR" and "biology" reveals very few hits, emphasizing that this is an area of modern radiotherapy where detailed understanding of the biology needs to catch up with the clinic [44]. Small-animal platforms have now been developed to simulate realistic SABR delivery in experimental animals [45], and other recent developments in image-guided small-animal irradiators could also be adapted to simulate SABR [46]. A wealth of knowledge already exists in the radiobiology archive from the '60s, '70s, and '80s, in which large doses per fraction were often used for ease of experimental design; this archive needs to be revisited. In the meantime, combination immunotherapy and radiation approaches are being translated into the clinic [47]; for example, intratumoral injection of autologous, unmanipulated dendritic cells coordinated with irradiation has been investigated as a therapy for sarcoma [48]. At present, SABR represents an exciting, effective, yet empirically designed radiation therapy. Increasing our knowledge of the underlying biology associated with modern high-dose delivery will only serve to improve the therapeutic benefit of this modality. In addition, we believe that SABR could be optimized for use with immunotherapeutic approaches so as to better generate tumor antigen-specific cellular immunity.
IP7-SPX Domain Interaction Controls Fungal Virulence by Stabilizing Phosphate Signaling Machinery

Invasive fungal diseases pose a serious threat to human health globally, with >1.5 million deaths occurring annually, 180,000 of which are attributable to the AIDS-related pathogen, Cryptococcus neoformans. Here, we demonstrate that interaction of the inositol pyrophosphate, IP7, with the CDK inhibitor protein, Pho81, is instrumental in promoting fungal virulence. IP7-Pho81 interaction stabilizes Pho81 association with other CDK complex components to promote PHO pathway activation and phosphate acquisition. Our data demonstrating that blocking IP7-Pho81 interaction or preventing Pho81 production leads to a dramatic loss in fungal virulence, coupled with the absence of a Pho81 homologue in humans, highlight Pho81 function as a potential target for the development of urgently needed antifungal drugs.

Cryptococcus neoformans causes fatal meningitis worldwide, especially in immunosuppressed individuals, and is responsible for more than 220,000 infections and 180,000 deaths annually (1). Infection is initiated in the lungs and can spread via the blood to the brain to cause meningitis that is fatal without treatment. All fungi, including C. neoformans, use signaling pathways to respond and adapt to host stress and hence to promote their pathogenicity (2). The inositol polyphosphate synthesis pathway, which produces the inositol pyrophosphate 5-PP-IP5 (IP7) (3)(4)(5)(6)(7)(8), and the phosphate sensing and acquisition (PHO) pathway (3,5) are essential for fungal growth in the lung and spread of infection to the brain. However, whether the virulence impairment caused by loss of 5-PP-IP5 is due to defects in phosphate homeostasis remains to be addressed. As an organism with a haploid genome, C. neoformans served as a useful model to pioneer the characterization of the inositol polyphosphate synthesis pathway in a human fungal pathogen (3)(4)(5)(6)(7)(8). Using an inositol polyphosphate kinase (IPK) gene deletion approach to block IP production at different sites, it was shown that the inositol pyrophosphate 5-PP-IP5 is produced by the sequential phosphorylation of inositol trisphosphate (IP3) by the IPKs Arg1, Ipk1, and Kcs1 and that 5-PP-IP5 is the direct product of Kcs1 (Fig. 1A). In comparison to the other IP products in the pathway, loss of 5-PP-IP5 had the most negative impact on virulence in a mouse model (4). 5-PP-IP5 is the main IP7 isomer in eukaryotic cells and consists of a myo-inositol backbone with five covalently attached phosphates and one di(pyro)phosphate at position 5. 5-PP-IP5 is further phosphorylated at position 1 by Asp1 to produce 1,5-PP2-IP4 (IP8) (9,10). Loss of IP8 had minimal impact on cellular function and virulence (4). The role of 5-PP-IP5 in other human fungal pathogens has not been determined, presumably due to the inability to create viable IPK deletion mutants. However, the creation of a heterozygous ARG1/IPK2 deletion mutant in Candida albicans demonstrated important roles for IPK products in cellular function (11). Although 5-PP-IP5 plays a critical role in fungal virulence, it is unclear how it functions at the molecular level. In nonpathogenic fungi, plants, and mammalian cells, inositol pyrophosphates, which are highly negatively charged, form electrostatic interactions with the positively charged binding pocket of SPX domains found in components of the phosphate homeostasis machinery (12)(13)(14)(15)(16)(17)(18)(19)(20).
The term SPX is derived from the proteins in which the domain was first discovered (Syg1, Pho81, and Xpr1). SPX domains are small (135 to 380 residues long). They are either located at the N termini of proteins or occur as independent, single-domain proteins. The interaction of inositol polyphosphates with SPX domains has been shown to modulate phosphate sensing, transport and storage (16,21). In fungi, phosphate homeostasis is regulated by the PHO pathway. The mechanism of PHO pathway regulation in the model yeast, Saccharomyces cerevisiae, and in C. neoformans is mostly conserved, except for the absence of a transcriptional coregulator in C. neoformans, which coincides with an expanded number of gene targets (22,23). In both organisms, phosphate deprivation is sensed by a core regulatory CDK complex comprised of the kinase Pho85, the cyclin Pho80, and the CDK inhibitor (CKI) Pho81, which initiates a transcriptional response aimed at restoring cellular phosphate levels (3,5,24,25). When phosphate is abundant, Pho85 is active and phosphorylates the transcription factor Pho4, thus facilitating its export from the nucleus. When phosphate is scarce, Pho81 inhibits Pho85, preventing Pho4 phosphorylation and its export from the nucleus. This leads to the induction of genes involved in the acquisition of phosphate and potentially other nutrients in the case of C. neoformans (22,26,27). Blocking transcriptional activation of the PHO genes in C. neoformans and C. albicans by deleting the Pho4-encoding gene attenuated virulence in a mouse infection model (3,28). In S. cerevisiae, activation of the PHO pathway requires the Vip1-derived IP 7 isomer, 1-PP-IP 5 (29). In this study, we investigate the role of Kcs1-derived 5-PP-IP 5 in PHO pathway activation in the fungal pathogen C. neoformans and provide evidence of additional evolutionary divergence in PHO pathway regulation in fungi. We also show that the critical roles of 5-PP-IP 5 and Pho81 in virulence are conveyed primarily via 5-PP-IP 5 interaction with the SPX domain of Pho81 and provide novel mechanistic insight into how inositol pyrophosphates regulate PHO pathway activation. RESULTS Kcs1-derived 5-PP-IP 5 is required for PHO pathway activation in C. neoformans. The inositol polyphosphate biosynthetic pathway in C. neoformans is represented in Fig. 1A. 5-PP-IP 5 , derived from Kcs1, is the major IP 7 isomer in fungi. Kcs1 activity is also necessary for the subsequent generation of 1,5-PP 2 -IP 4 (IP 8 ) by Asp1. To determine whether these inositol pyrophosphates play a role in phosphate homeostasis in C. neoformans, growth of the kcs1⌬ and pho4⌬ strains was compared in the absence of free phosphate. The results in Fig. 1B demonstrate that growth of both strains is similarly attenuated in either phosphate-free medium (MM-KCl) or in medium where all phosphate is covalently bound to glycerol (␤-glycerol-phosphate). Next, we investigated whether delayed growth of the kcs1⌬ mutant in the absence of phosphate correlates with an inability to upregulate genes involved in phosphate acquisition (PHO genes). PHO genes in C. 
neoformans encode three acid phosphatases, including secreted Aph1, which is a biochemical reporter for PHO pathway activation (5,25); three high-affinity phosphate transporters (Pho84, Pho840, and Pho89) (24); Vtc4 (a component of the Vacuolar Transport Chaperone complex involved in synthesizing inorganic polyphosphate as a phosphate store) (12,24); two proteins involved in lipid remodeling and phosphate conservation (betaine lipid synthase [Bta1] and glycerophosphodiesterase [Gde2]) (30,31); and the CDKI, Pho81. Expression of these genes is upregulated in the wild type (WT) following phosphate starvation and is controlled by the transcription factor Pho4 (3,4,24,25). Similar to the pho4Δ mutant (3), the PHO genes remained suppressed in the kcs1Δ mutant relative to the WT (Fig. 1C), indicating that 5-PP-IP5 (the product of Kcs1) and/or its derivative 1,5-PP2-IP4 (produced by Asp1) are essential for PHO pathway activation and that the precursors of 5-PP-IP5 (IP3, IP4, IP5, and IP6) play little or no role in the PHO pathway activation. In a previous study, we showed that the cryptococcal ipk1Δ mutant accumulates significant quantities of another inositol pyrophosphate, 5-PP-IP4. The ipk1Δ mutant is deficient in the native Kcs1 substrate IP6. Consequently, Kcs1 phosphorylates IP5 at the 5 position to form 5-PP-IP4. Using the ipk1Δ mutant, we investigated whether 5-PP-IP4, which has a similar structure to 5-PP-IP5, can also promote PHO pathway activation. Production of extracellular acid phosphatase was used as a reporter to quantify PHO pathway activation in phosphate-starved WT and mutant cells. The results in Fig. 1D demonstrate that, despite its structural similarity to 5-PP-IP5 and high abundance in the ipk1Δ mutant strain, 5-PP-IP4 cannot substitute for the native Kcs1 products in activating the PHO pathway, even though it alleviated some of the kcs1Δ-specific phenotypic defects (7).

FIG 1 Legend (Continued): ...indicated in red. In C. neoformans, phospholipase C1 (PLC1)-derived IP3 is sequentially phosphorylated to IP4-5 and IP6 by Arg1 and Ipk1, respectively. Kcs1 generates PP-IP4 and 5-PP-IP5/IP7 from IP5 and IP6, respectively. However, PP-IP4 is only detected in the ipk1Δ mutant. Asp1-derived 1,5-PP2-IP4, but not 1-PP-IP5, has been detected in C. neoformans. (B) 5-PP-IP5 is required for optimal growth in the absence of phosphate. Overnight YPD cultures were serially diluted (10^6 to 10^1 cells per 5 µl) and spotted onto YPD agar. Plates were incubated at 30 and 37°C for 2 days before being photographed. Growth of the 5-PP-IP5-deficient C. neoformans mutant strain (kcs1Δ) is attenuated to a similar extent as the PHO pathway activation-defective mutant strain (pho4Δ). (C) Expression of phosphate-responsive genes regulated by Pho4 is compared by qPCR following growth in the presence and absence of phosphate (calculated using the −ΔΔCT method and ACT1 as the housekeeping gene). The expression in each strain is normalized to the WT +Pi. (D) 5-PP-IP4 cannot substitute for 5-PP-IP5 in promoting PHO pathway induction since the ipk1Δ mutant strain, which accumulates 5-PP-IP4, fails to activate the PHO pathway in response to phosphate deprivation. APase activity refers to the extent of p-nitrophenyl phosphate hydrolysis by extracellular APases quantified spectrophotometrically at 420 nm (see Materials and Methods for a detailed description). The results are expressed as fold change relative to WT +Pi. (E) 5-PP-IP5 has opposing roles in PHO pathway activation in C. neoformans (Cn) and S. cerevisiae (Sc). PHO pathway activation during phosphate deprivation is compared in WT Cn and Sc and their congenic 5-PP-IP5-deficient strains (arg1Δ/ipk2Δ and kcs1Δ). APase activity was measured as in panel D and normalized to the APase activity of the corresponding WT strains. (F and G) Asp1-derived 1-PP-IP5 and 1,5-PP2-IP4 are dispensable for PHO pathway activation and growth of C. neoformans during phosphate deprivation. A drop dilution test was performed as described previously (see panel B). In panel G, PHO pathway activation was assessed using the APase activity assay and normalized to WT at 0.5 h. All bar graphs represent the means ± the standard deviations of three biological replicates.

In contrast to what we observed in C. neoformans (Fig. 1C and D), previous reports in S. cerevisiae suggest that PHO gene expression is constitutively active in the kcs1Δ mutant (32). To investigate this further, we assessed PHO pathway activation in WT C. neoformans and S. cerevisiae and their congenic 5-PP-IP5-deficient mutant strains (Cnarg1Δ/Scarg82Δ and kcs1Δ) in parallel. The results in Fig. 1E confirm that the absence of Kcs1-derived inositol pyrophosphates does elicit opposite effects on PHO pathway activation in the two yeast species. Hyperactivation of the PHO pathway in the Sckcs1Δ mutant is consistent with that observed by Auesukaree et al. (32).

Asp1/Vip1-derived inositol pyrophosphates are dispensable for PHO pathway activation in C. neoformans and S. cerevisiae. Asp1 (C. neoformans) and its ortholog Vip1 (S. cerevisiae) phosphorylate 5-PP-IP5 to produce 1,5-PP2-IP4 (4). Vip1 also phosphorylates IP6 to produce an alternate isomer of IP7, 1-PP-IP5. Although we have never detected 1-PP-IP5 in WT C. neoformans or in the kcs1Δ mutant (4), we considered the possibility that Asp1 produces small quantities of 1-PP-IP5 in C. neoformans. To investigate the involvement of 1-PP-IP5 and 1,5-PP2-IP4 in PHO pathway activation in C. neoformans, we employed the ASP1 deletion mutant (asp1Δ). First, we assessed growth of asp1Δ on minimal medium (MM) without phosphate and in the presence of β-glycerol-phosphate as the only source of phosphate. Under both conditions, the growth of asp1Δ and WT strains was similar (Fig. 1F). This contrasted with the compromised growth observed for the kcs1Δ mutant. Next, we quantified PHO pathway activation in WT, kcs1Δ, and asp1Δ strains using the acid phosphatase reporter assay. Cultures were shifted from phosphate-replete to phosphate-deficient medium and production of secreted acid phosphatase was measured for up to 24 h. Similar to the results shown in Fig. 1C to E, acid phosphatase activity was almost abolished in the kcs1Δ mutant over the experimental time course (Fig. 1G). In contrast, acid phosphatase activity in WT and asp1Δ strains had increased ~100-fold by 5.5 h of phosphate deprivation and plateaued out to 24 h. Thus, Kcs1-derived 5-PP-IP5, but neither 1-PP-IP5 nor 1,5-PP2-IP4, promotes PHO pathway activation in C. neoformans. Vip1-derived IP7 was implicated in PHO pathway activation in S. cerevisiae (29). However, we found phosphate deprivation-induced PHO pathway activation to be comparable in the S. cerevisiae WT and vip1Δ mutant (see Fig. S1 in the supplemental material). Overall, the results in Fig. 1 show that, in contrast to S.
cerevisiae, Kcs1-derived 5-PP-IP 5 is the main IPK pathway product involved in PHO pathway activation in C. neoformans and suggest that the PHO pathway has become rewired in C. neoformans. 5-PP-IP 5 acts upstream of CDK Pho85 to promote PHO pathway activation. During phosphate deprivation, the CKI Pho81 blocks Pho85 kinase activity and hence phosphorylation of the transcription factor Pho4. Pho4 is subsequently retained in the nucleus to induce expression of PHO genes. In humans, yeast, and plants, inositol pyrophosphates interact with the SPX domain of proteins, including the SPX domain of Pho81 in S. cerevisiae (12-14, 16-18, 20, 33, 34). Like ScPho81, Pho81 in C. neoformans also has an SPX domain. We therefore hypothesized that 5-PP-IP 5 interacts with cryptococcal Pho81 to modulate PHO pathway activation. As a first step to testing this hypothesis, we used the CDK inhibitor Purvalanol A to bypass Pho81 inhibition (3,35) and assess whether the PHO pathway can be reactivated in the absence of 5-PP-IP 5 . The results show that even when phosphate is present, Purvalanol A derepresses the PHO pathway in the WT and 5-PP-IP 5 -deficient mutants, including the kcs1⌬ mutant, but not in the pho4⌬ control strain, in which PHO pathway activation is blocked downstream of Pho85 (see Fig. S2A and B). Furthermore, we observed a progressive derepression of the PHO pathway up to 50 M Purvalanol A in WT and kcs1⌬ strains irrespective of phosphate status with the effect plateauing at 50 M (see Fig. S2C). These data suggest that 5-PP-IP 5 functions upstream of Pho85 to inhibit Pho85 kinase activity and promote PHO pathway activation. Key IP7-binding residues in SPX domains are conserved in Pho81 homologs from numerous virulent fungi. Pho81 homologs from numerous fungal species, including C. neoformans and others known to infect humans, contain an N-terminal SPX domain with a lysine surface cluster putatively involved in binding inositol pyrophosphates ( Fig. 2A). The SPX domain is followed by an ankyrin repeat domain and a glycerophosphodiester phosphodiesterase domain. The GDE domain in cryptococcal (Cn) Pho81 does not contain critical catalytic residues involved in phospholipid hydrolysis and hence is most likely enzymatically inactive. Alignment of the CnPho81 SPX domain with SPX domains from other fungal proteins, including ScVtc2 for which a role for the basic surface cluster in inositol polyphosphate binding has been validated by site-directed mutagenesis (16), demonstrated the conservation of key lysine residues in CnPho81 (Fig. 2B). We adopted the strategy used by Wild et al. (16) to alter K 221,224,228 in the cryptococcal Pho81 SPX domain to alanine, creating the Pho81SPX AAA strain to assess the contribution of 5-PP-IP 5 -Pho81 interaction to Pho81 function. The Pho81SPX control strain was taken through the same procedure as Pho81SPX AAA and is therefore genetically identical except for the AAA mutation. As a control, we also deleted the entire PHO81 gene (pho81Δ) ( Table 1; see also Table S1, Fig. S3, and Fig. S4). 5-PP-IP 5 binding to the Pho81 SPX domain promotes PHO pathway activation. To investigate the role of 5-PP-IP 5 -Pho81 interaction in phosphate homeostasis, growth of the Pho81SPX AAA strain was compared to that of the WT and Pho81SPX control strains in the presence and absence of phosphate ( Fig. 3A and B). The pho81Δ strain, its reconstituted strain pho81ΔϩPHO81, and the pho4⌬ and kcs1⌬ strains were included as controls. 
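Throughout these comparisons, PHO pathway activation is read out with the extracellular acid phosphatase (APase) reporter assay already used for Fig. 1D to G and used again below (Fig. 3C): hydrolysis of p-nitrophenyl phosphate is quantified at 420 nm and expressed as fold change relative to a reference condition. The short sketch below illustrates one way such raw readings could be normalized; the per-OD normalization, the assay time, and the numerical values are illustrative assumptions rather than details taken from Materials and Methods.

```python
# Minimal sketch of APase reporter normalization (illustrative only).
# Assumptions: A420 readings are blank-corrected, and activity is normalized
# to culture density (OD600) and assay time before computing fold change
# relative to the WT +Pi reference; the article does not specify these choices.

def apase_activity(a420, od600, minutes):
    """Relative APase activity: p-nitrophenol formed (A420) per OD600 per minute."""
    return a420 / (od600 * minutes)

def fold_change(sample, reference):
    """Fold change of a sample activity over the reference (e.g., WT +Pi)."""
    return sample / reference

# Hypothetical readings (A420, OD600, assay time in minutes)
wt_plus_pi    = apase_activity(0.05, 0.8, 30)   # reference condition
wt_minus_pi   = apase_activity(4.80, 0.7, 30)
kcs1_minus_pi = apase_activity(0.07, 0.5, 30)

print(f"WT -Pi:    {fold_change(wt_minus_pi, wt_plus_pi):.0f}-fold")
print(f"kcs1Δ -Pi: {fold_change(kcs1_minus_pi, wt_plus_pi):.1f}-fold")
```

With these invented numbers, the wild type shows a roughly 100-fold induction on starvation, in line with the scale of induction described for Fig. 1G, while the kcs1Δ culture barely changes.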
All strains had a similar growth rate in the presence of phosphate (Fig. 3A). In contrast, the growth rate of the Pho81SPXAAA and pho81Δ mutant strains was reduced relative to that of the WT and Pho81SPX control strains in phosphate-deficient medium (Fig. 3B). As expected, growth of kcs1Δ and pho4Δ was also reduced in phosphate-deficient medium (Fig. 3B). Next, the role of 5-PP-IP5-Pho81 interaction in PHO pathway activation was assessed using an acid phosphatase reporter assay (Fig. 3C). Similar to the growth assays, PHO pathway activation was abrogated during phosphate deprivation in the Pho81SPXAAA, pho81Δ, kcs1Δ, and pho4Δ mutant strains relative to the WT and Pho81SPX control strains. 5-PP-IP5 levels have been reported to decline in S. cerevisiae in response to phosphate deprivation (16,36). We now demonstrate that the same occurs in C. neoformans, with a decline of approximately 50% observed (Fig. 3D). Despite this decline, 5-PP-IP5 levels are sufficient to promote PHO pathway activation in the WT and Pho81SPX control strains. We also confirmed that Pho81 associates with 5-PP-IP5 via K221,224,228 in the SPX domain by performing affinity capture experiments using a 5-PP-IP5-conjugated resin. To enable Pho81 detection by Western blotting, we added a green fluorescent protein (GFP) tag at the C terminus of WT and mutant Pho81 (see Fig. S3) and confirmed that tag addition did not affect functionality (see Fig. S5). The Pho81-GFP-expressing strains were cultured in phosphate (Pi)-deficient and Pi-replete medium. Cell lysates were incubated with chemically synthesized affinity capture resins, presenting either a stable nonhydrolyzable version of 5-PP-IP5 (5-PCP-IP5) (37) or Pi (as a control), to pull down Pho81SPX-GFP and Pho81SPXAAA-GFP. The extent of binding of native and mutant Pho81 proteins (molecular mass, 170.2 kDa) to each resin was compared by anti-GFP Western blotting (Fig. 4). Levels of Pho81SPX and Pho81SPXAAA were more comparable in Pi-grown than in Pi-starved cells. Given that the protein concentration was similar in all lysates, the increase in Pho81SPX relative to Pho81SPXAAA in Pi-starved cells is attributable to PHO81 being a phosphate-responsive gene and the PHO pathway being functional only in the Pho81SPX strain (Fig. 1B). Hence, Pho81-mediated inhibition of Pho85 drives its own induction during Pi starvation. The affinity capture results demonstrate that, under both growth conditions, native Pho81 binds to the 5-PCP-IP5 resin but not to the Pi resin. In contrast, the mutated variant does not bind to either resin but appears in the flowthrough. Thus, native Pho81SPX protein, but not its Pho81SPXAAA variant, binds 5-PP-IP5.

FIG 3 The lysine surface cluster in the Pho81 SPX domain is required for PHO pathway activation. The Pho81SPXAAA and pho81Δ strains grow at a rate similar to that of the WT, Pho81SPX, and pho81Δ+PHO81 strains in the presence (A), but not in the absence (B), of phosphate. In the absence of phosphate, the growth of the Pho81SPXAAA strain is reduced to a level similar to that observed for the pho81Δ, kcs1Δ, and pho4Δ mutant strains. The strains were cultured for 7, 24, and 31 h in MM-KCl, and growth at each time point was assessed by measuring the optical density (550 nm) of the culture using a spectrophotometer. (C) The strains were cultured in MM-KCl, and PHO pathway activation was assessed at the indicated times using the APase activity assay. APase activity refers to the extent of p-nitrophenyl phosphate hydrolysis by extracellular APases quantified spectrophotometrically at 420 nm. In panels A, B, and C, the results represent the means ± the standard deviations of three biological replicates. (D) Comparison of the level of 3H-inositol-labeled 5-PP-IP5 (IP7) in the WT strain by anion-exchange HPLC following growth in Pi+ or Pi− medium. The metabolic profile of the kcs1Δ strain following growth in YPD medium is provided to indicate the position of IP7.

In S. cerevisiae, Pho81 forms a stable complex with Pho85-Pho80 independently of phosphate status but only inhibits the CDK during phosphate deprivation (38). Interaction of Pho81 with Pho85-Pho80 is primarily via Pho80 (38,39). To determine whether the association of CDK components in C. neoformans is phosphate dependent, WT strains expressing either Pho81-GFP (see Fig. S3) or Pho85-mCherry were cultured in Pi-depleted and Pi-replete media. GFP-Trap and an anti-mCherry antibody were used to immunoprecipitate Pho81 and Pho85, respectively, together with any associated proteins, from cell lysates. CDK components were separated by SDS-PAGE and identified by one-dimensional liquid chromatography-mass spectrometry (1D-LC-MS). In both sets of immunoprecipitations, Pho81, Pho85, Pho80, and a second cyclin, glycogen storage control protein (CNAG_05524), were consistently detected in the CDK complex regardless of phosphate availability (Table 2). A BLAST search against the S. cerevisiae genome database using the glycogen storage control protein sequence as a query revealed that this cyclin is most similar to cyclins Pcl6 and Pcl7, which, among other cyclins, are most closely related to Pho80 (see Fig. S6A). Thus, we renamed this cyclin CnPcl6/7. Of all the genes encoding CDK complex components, PHO81 was the most phosphate responsive (~14-fold induction) (Fig. 1C; see also Fig. S6B), followed by PHO80 and PCL6/7 (~4- to ~5-fold induction) (see Fig. S6B). A small increase in PHO85 gene expression (~1.8-fold) was observed but was not statistically significant. Although PCL6/7 is phosphate responsive, it is dispensable for PHO pathway activation, as assessed using a pcl6/7Δ mutant (see Fig. S6B). Given its similarity to cyclin homologues involved in glycogen storage in S. cerevisiae, we investigated whether cryptococcal Pcl6/7 also has a role in glycogen storage. The results in Fig. S8 demonstrate reduced glucose induction of the glycogen metabolic genes GSY2/CNAG_04621 and GLC3/CNAG_00393 in the pcl6/7Δ, Pho81SPXAAA, and pho81Δ strains relative to the WT. These results suggest that, in addition to activating the PHO pathway by interacting with Pho80-Pho85, 5-PP-IP5-Pho81 modulates glycogen storage by interacting with Pcl6/7-Pho85.

5-PP-IP5-Pho81 interaction stabilizes the CDK complex of the PHO pathway. To investigate whether 5-PP-IP5 interaction with Pho81 affects Pho81 association with the CDK complex, Pho81SPX-GFP and Pho81SPXAAA-GFP were immunoprecipitated from cells cultured in the presence and absence of Pi. Pho81-associated Pho85 was then quantified by Western blotting (Fig. 5A). Cdc2 in cell lysates was used as an indicator of sample protein concentration prior to immunoprecipitation. Under Pi-depleted conditions, Pho81SPX and Pho85 abundance increased to a similar extent (~2-fold) compared to their levels in cultures supplied with Pi (Fig. 5A, compare lanes 1 and 3), consistent with increased CDK complex formation.
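The fold inductions cited here and below (e.g., ~14-fold for PHO81; see also Fig. 1C and Fig. S6B) are relative qPCR values obtained with the ΔΔCT method using ACT1 as the housekeeping gene. As a minimal sketch of that calculation, relative expression is computed as 2^−ΔΔCT; the Ct values in the example below are invented purely for illustration and do not correspond to measured data.

```python
# Minimal sketch of the delta-delta-Ct (2^-ddCt) calculation used for the
# qPCR fold changes cited in the text. Ct values below are hypothetical.

def ddct_fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative expression of a target gene (e.g., PHO81) normalized to a
    housekeeping gene (e.g., ACT1) and to a control condition (e.g., WT +Pi)."""
    dct_sample = ct_target_sample - ct_ref_sample      # normalize to ACT1 in the sample
    dct_control = ct_target_control - ct_ref_control   # normalize to ACT1 in the control
    ddct = dct_sample - dct_control
    return 2 ** (-ddct)

# Hypothetical Ct values: PHO81 and ACT1 in phosphate-starved vs phosphate-replete WT
fold = ddct_fold_change(ct_target_sample=22.0, ct_ref_sample=18.0,    # -Pi culture
                        ct_target_control=25.8, ct_ref_control=18.0)  # +Pi culture
print(f"PHO81 induction: ~{fold:.1f}-fold")  # ~13.9-fold with these made-up values
```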
Increased Pho85 and Pho81 abundance following Pi deprivation correlated with increased PHO85 and PHO81 gene expression (see Fig. S6B: ~1.8-fold for PHO85 and ~14-fold for PHO81). However, the increase in PHO81 expression far exceeded the increase in Pho81 protein in the immunoprecipitates, consistent with translation of only a proportion of PHO81 transcripts and/or rapid degradation of excess free Pho81. Quantification of Pho85 association with native and mutant Pho81 in the absence of PHO pathway activation (Pi+ culture) demonstrated weaker Pho85 binding to mutant Pho81 (Fig. 5A, compare lanes 1 and 2), suggesting that 5-PP-IP5 is required for stabilizing the CDK complex. Interestingly, we observed that the abundance of Pho81SPXAAA declined during Pi deprivation/PHO pathway activation (Fig. 5A, compare lanes 2 and 4), rendering comparison of Pho85 association with WT and mutant Pho81 during Pi deprivation unfeasible. Using qPCR, we ruled out reduction of PHO81SPXAAA gene expression under inducing conditions as a possible explanation (see Fig. S8). Rather, the detection of cleaved GFP in the mutant sample (Fig. 5A, lane 4) was indicative of Pho81 degradation during Pi deprivation. The reduced stability of mutant Pho81 under these conditions coincides with lower levels of IP7 (Fig. 3D). To further investigate the impact of 5-PP-IP5 interaction with Pho81 on CDK association, we tagged Pho81SPX with GFP in the kcs1Δ mutant background and repeated the immunoprecipitations on Pi+ cultures. Pho81SPX protein was not detected in kcs1Δ lysates (total protein) or immunoprecipitations (GFP-Trap IP) (Fig. 5B), or in intact 5-PP-IP5-deficient cells by fluorescence microscopy (Fig. 5C). Once again, qPCR ruled out reduced PHO81 gene expression as a possible explanation (Fig. 1C; see also Fig. S8, using GFP strains and growth conditions identical to those in Fig. 5B). Hence, the results are consistent with degradation of Pho81, but not Pho85, in a 5-PP-IP5-deficient environment. From the results in Fig. 4 and 5, we propose a model (Fig. 6) in which Pho81 stability and association with Pho85-Pho80 and Pho85-Pcl6/7 depend on its ability to bind 5-PP-IP5 and in which 5-PP-IP5-Pho81 interaction promotes PHO pathway activation and glycogen biosynthesis.

TABLE 2 footnote (a): Anti-GFP-Pho81 or anti-mCherry-Pho81 immunoprecipitations were separated by SDS-PAGE, and the associated CDK components were identified by 1D-LC-MS. All CDK components (Pho81, Pho85, and Pho80) and an additional cyclin (Pcl6) were detected consistently in both sets of immunoprecipitations prepared from cells grown in phosphate-replete and phosphate-depleted medium. Control immunoprecipitations were also performed on the WT (no GFP or mCherry), and the absence of all CDK components was confirmed. The PEP score (PEP) is based on the probability of identification: scores above 3 are equivalent to a q value of <0.002. "% Cov" is the percent coverage of the open reading frame the observed peptides match, while the number of peptide spectral matches (PSMs) is proportional to protein abundance. All PSMs were filtered to ensure a <1% false discovery rate.

FIG 5 legend (fragment): ...(lanes 1 and 3) and Pho81SPXAAA-GFP (AAA, lanes 2 and 4) from lysates following cell growth in Pi+ and Pi− medium. Immunoprecipitates and total cell lysates (control) were resolved by SDS-PAGE. Immunoprecipitated Pho81-GFP was detected by anti-GFP Western blotting. Anti-CDK antibody, which detects the PSTAIR motif, was used to detect Pho85 in the immunoprecipitates and cell lysates, as well as Cdc2 in the cell lysates, as indicated. The blot is representative of three biological replicates in which, on average, Pho85/Pho81SPXAAA association was 2.7-fold weaker than Pho85/Pho81SPX association in Pi+ cultures. (B) GFP-Trap was used to immunoprecipitate Pho81SPX-GFP from WT and kcs1Δ lysates following cell growth in Pi+ medium. Immunoprecipitates and total cell lysates (control) were resolved by SDS-PAGE. Pho81-GFP, Pho85, and Cdc2 were detected by Western blotting as in panel A. (C) Pho81SPX-GFP is not detected by fluorescence microscopy (DeltaVision) in an IP7-deficient (kcs1Δ) background following cell growth in Pi+ medium (using the same conditions as in panel B). Autofluorescence of the cell walls is detected in all samples due to the prolonged exposure essential for observing Pho81-GFP.

5-PP-IP5-Pho81 interaction is critical for fungal virulence and dissemination. To determine the impact of 5-PP-IP5-Pho81 interaction on cryptococcal virulence, we investigated whether the PHO pathway activation-defective Pho81SPXAAA and pho81Δ mutant strains retained key virulence traits characteristic of C. neoformans (e.g., the ability to grow at 37°C and produce capsule and melanin). We found that all phenotypes were identical to those of the Pho81SPX, WT, and pho81Δ+PHO81 strains (results not shown). Despite the availability of significant levels of free phosphate in most environments within the mammalian host, the alkaline pH of host blood and tissues mimics phosphate starvation, leading to activation of the fungal PHO pathway (3,40,41). Consistent with this, the PHO pathway activation-defective cryptococcal strain pho4Δ exhibits reduced growth at alkaline (including host) pH, even when phosphate is available (3). We therefore compared growth of the PHO pathway activation-defective Pho81SPXAAA strain and the Pho81SPX control strain at acidic and basic pH and included the WT, pho4Δ, kcs1Δ, pho81Δ, and pho81Δ+PHO81 strains as additional controls (Fig. 7). At pH 5.4 and pH 6.8, none of the pairwise growth differences relative to the parent strain were statistically significant except for the WT versus the kcs1Δ strain. The reduced growth of kcs1Δ is expected, since this mutant grows more slowly than the WT under nonstress conditions (YPD medium) due to Kcs1 having a pleiotropic role in cellular function (4). In contrast, at pH 7.4 and pH 8 (Pi+), growth of the pho4Δ, pho81Δ, kcs1Δ, and Pho81SPXAAA strains was reduced relative to the WT, Pho81SPX, and pho81Δ+PHO81 strains (Fig. 7), consistent with the alkaline pH environment mimicking phosphate deprivation (3,40).

FIG 6 legend (fragment): In panel A, 5-PP-IP5-bound Pho81 inhibits Pho85 during phosphate deprivation, preventing phosphorylation of Pho4 and triggering PHO pathway activation to promote pathogenicity. In contrast, 5-PP-IP5 binding-defective Pho81 cannot form a stable complex with Pho80-Pho85, and Pho85 remains active, phosphorylating Pho4 to prevent PHO pathway activation. In panel B, 5-PP-IP5-bound Pho81 may also regulate Pcl6-Pho85 to fine-tune glycogen metabolism. In both panels A and B, 5-PP-IP5 binding-defective Pho81 is unstable and becomes degraded.

Next, we assessed what effect blocking 5-PP-IP5-Pho81 interaction had on fungal virulence in a mouse inhalation model, which mimics the natural route of infection in humans. All mice infected with the Pho81SPX control strain succumbed to infection, with the median survival time being 23 days (Fig. 8A).
In contrast, no mice infected with the Pho81SPX AAA mutant became ill, and by 60 days postinfection their average weight had increased by 20 Ϯ 5.5% relative to their average preinfection weight. Organ burdens determined in Pho81SPX-infected mice at time-of-death and in Pho81SPX AAAinfected mice at 60 days postinfection show almost no infection in the lungs and brain of Pho81SPX AAA -infected mice by 60 days postinfection ( Fig. 8B and C). This is consistent with the inability of this strain to establish a lung infection and disseminate to the brain. We also investigated the effect of deleting the PHO81 gene on fungal virulence and included the pho81ΔϩPHO81 strain as a control (Fig. 8). For the survival analysis, the pho81⌬ mutant strain behaved similarly to the Pho81SPX AAA strain, with no pho81⌬-infected mice succumbing to infection over the 60-day time course (Fig. 8D). Furthermore, the pho81⌬-infected mice had gained a similar amount of weight by 60 days postinfection as the Pho81SPX AAA -infected mice. As expected, pho81ΔϩPHO81-infected mice had a similar median survival time to that of WTinfected mice. Organ burdens were also determined in WT-and pho81ΔϩPHO81infected mice at time-of-death and in pho81⌬-infected mice at 60 days postinfection. Similar to what was observed for the 5-PP-IP 5 -binding defective strain, the lung and brain burdens were reduced substantially in pho81⌬-infected mice relative to both WT-and pho81ΔϩPHO81-infected mice (Fig. 8E and F), consistent with the inability of this strain to establish a lung infection and disseminate to the brain. DISCUSSION Our work has shown that the inositol polyphosphate biosynthesis pathway in C. neoformans intersects with the PHO pathway signaling machinery via Kcs1-derived 5-PP-IP 5 rather than via Asp1/Vip1-derived 1-PP-IP 5 , providing evidence of evolutionary rewiring with respect to inositol pyrophosphate regulation of the PHO pathway. We also show that 5-PP-IP 5 exerts much of its effect on virulence by promoting PHO pathway activation via its interaction with the SPX domain of Pho81. Using crystallographic, biochemical, and genetic analysis, Wild et al. demonstrated that recombinant SPX domains from yeast, filamentous fungal, plant, and human proteins bind 5-PP-IP 5 , IP 6 , and IP 8 with high affinity but not IP 3 /IP 4 /IP 5 or free orthophosphate. These researchers also identified conserved lysine residues responsible for PP-IP binding. Substituting these lysine residues with alanine did not impact secondary or tertiary structure of SPX domains but did abrogate PP-IP binding (16). By adopting the same approach and incorporating the same alteration into the SPX domain of the full-length protein, we now extend these findings to Pho81 in C. neoformans, demonstrating that mutation of the conserved lysine residues prevents Pho81 from binding to 5-PP-IP 5 . From our investigation of CDK component association by 1D-LC-MS and Western blotting, we propose a model where Pho81 association with Pho85-Pho80 depends on 5-PP-IP 5 interaction with the Pho81 SPX domain and where 5-PP-IP 5 -Pho81 interaction promotes PHO pathway activation (Fig. 6). 5-PP-IP 5 therefore has a bridging role by promoting the association of CDK complex components, irrespective of phosphate status. Although phosphate deprivation coincided with a decline in 5-PP-IP 5 levels, more CDK complex formation was observed (Fig. 
5, lane 3), suggesting that the levels of 5-PP-IP5 under these conditions were sufficient to promote increased CDK complex formation. Interestingly, we found that mutant Pho81 became unstable during Pi deprivation (Fig. 5A, lane 4). This could be attributable to 5-PP-IP5 stabilizing Pho81 itself, in addition to stabilizing the association of Pho81 with the cyclin-dependent kinase complex. The reason why mutant Pho81 instability was not as obvious in the presence of Pi (Fig. 5A, lane 2) could be residual binding of 5-PP-IP5 and higher 5-PP-IP5 availability. In support of this, we were unable to detect WT Pho81 in a 5-PP-IP5-deficient background.

FIG 8 legend (fragment): ...and their health was monitored for up to 60 days. Infection burdens in the lung (B and E) and brain (C and F) were determined at time of death (Pho81SPX-, WT-, and pho81Δ+PHO81-infected mice) and at 60 days postinfection (Pho81SPXAAA- and pho81Δ-infected mice). Lungs and brains were homogenized, serially diluted, and plated onto agar plates. Plates were incubated at 30°C for 2 days. Colony counts were adjusted to reflect CFU per gram of tissue. The difference in survival (log-rank test) and organ burden (Mann-Whitney U test/two-paired t test) between the Pho81SPX- and Pho81SPXAAA-infected groups is statistically significant (i.e., P ≤ 0.0021 in all cases). No difference in survival or organ burden was observed between the WT and pho81Δ+PHO81 infection groups. However, the reductions in survival and organ burden observed for the pho81Δ-infected mice, relative to the two control strains, were statistically significant (i.e., P ≤ 0.003 in all cases).

Our model in Fig. 6 also supports a role for 5-PP-IP5-Pho81 interaction in stabilizing the association of Pho81 with Pcl6/7-Pho85 to fine-tune glycogen metabolism. Although PCL6/7 is a phosphate-responsive gene, we showed that it is dispensable for PHO pathway activation. In S. cerevisiae, Pho85 interacts with 10 cyclins, including Pho80, Pcl6, and Pcl7, to regulate the PHO pathway, cell cycle, polarity, and glycogen metabolism (42)(43)(44)(45). In addition to Pho80 and Pcl6/7, C. neoformans has five other cyclins. However, since we did not detect their association with Pho81, they are unlikely to direct phosphate-dependent activity of Pho85. In support of our data showing that 5-PP-IP5 functions as an intermolecular stabilizer, there are other examples where IP and PP-IP interactions with SPX and non-SPX domains stabilize multiprotein complexes. In the model plant Arabidopsis thaliana, 1,5-PP-IP5 (IP8) facilitates interaction of SPX1 with the PHR1 transcriptional regulator of the phosphate starvation response when phosphate is present (17). This response is triggered by a drop in the abundance of IP8 upon phosphate deprivation. In mammalian cells, IP4 stabilizes the histone deacetylase HDAC3-SMRT corepressor complex via non-SPX domain interactions to regulate gene expression. In this context, IP4 acts as "intermolecular glue" by wedging into a positively charged pocket formed at the interface between the two proteins (46)(47)(48). Wild et al. (16) proposed that inositol polyphosphates communicate cytosolic phosphate levels to SPX domains to regulate phosphate uptake, transport, and storage in fungi, plants, and animals. However, our findings indicate that, although 5-PP-IP5 interaction with the Pho81 SPX domain is essential for PHO pathway activation in C. neoformans, PHO pathway activation is not triggered by 5-PP-IP5 but rather by additional signaling component(s).
The following evidence supports this conclusion: several reports, including this study, show that the intracellular concentration of inositol pyrophosphates, including 5-PP-IP 5 , decreases during phosphate starvation (16,17,36). The decreased abundance of 5-PP-IP 5 is unlikely to trigger PHO pathway activation as the pathway is constitutively repressed in the 5-PP-IP 5 -deficient kcs1Δ mutant. Furthermore, Pho81-Pho85-Pho80/5-PP-IP 5 complexes are present even when phosphate is available, and their abundance increases upon phosphate deprivation. It is likely that 5-PP-IP 5 molecules wedged inside the complexes are partially protected from degradation and therefore have a slower turnover than free 5-PP-IP 5 . Taken together, our data suggest that preformed CKI-CDK/5-PP-IP 5 complexes await signals other than fluctuating 5-PP-IP 5 levels to trigger a phosphate starvation response. Crystallographic, biochemical, and genetic analysis are required to map regions in cryptococcal Pho80 that interact with Pho81 and potentially with 5-PP-IP 5 . In S. cerevisiae, two sites on Pho80 involved in binding Pho4 and Pho81 were identified that are markedly distant to each other and the active site (45). These regions will serve as a guide to map the corresponding regions in cryptococcal Pho80. Pho81 in S. cerevisiae was also shown to inhibit Pho80-Pho85 via a novel 80-residue motif adjacent to the ankyrin repeats (called the minimal domain [MD]). This MD was shown to be necessary and sufficient for Pho81 function as a Pho85 inhibitor. This is in contrast to mammalian CKIs, which exert their regulatory function via ankyrin repeats. Domain mapping and structural studies will allow assessment of whether an MD exists in cryptococcal Pho81 to provide a second point of contact between 5-PP-IP 5 -Pho81 and cyclins. SPX domains have been reported to undergo a conformational change upon ligand binding (16). Structural comparison of 5-PP-IP 5 -bound and free cryptococcal Pho81 may therefore shed light on whether 5-PP-IP 5 binding induces a conformational change in Pho81. Complementary data can be obtained by creating Pho81 deletion variants to map regions required for binding 5-PP-IP 5 and cyclins. This information will promote understanding of how conformational changes triggered by 5-PP-IP 5 binding affect Pho81 association with Pho80-Pho85 to bring about CDK inhibition and PHO pathway activation. It will also address why the outcome of 5-PP-IP 5 -SPX domain interaction leads to different responses in different yeast species and provide insight into the physiological relevance of specific IP species in PHO pathway function. We previously demonstrated that deletion of the cryptococcal gene encoding the transcription factor, Pho4, led to constitutive repression of the PHO pathway regardless of phosphate status, reduced growth at alkaline pH, a condition that mimics phosphate starvation and hypovirulence in a mouse inhalation model. The loss of virulence in the pho4Δ mutant was characterized by a higher median survival time of pho4Δ-infected mice relative to WT-infected mice, reduced lung colonization, and the almost complete prevention of fungal dissemination to the host brain (3). In this study, we found that growth of the Pho81-SPX AAA strain was also inhibited at alkaline pH. 
However, Pho81-SPX AAA virulence was reduced even more substantially: in contrast to infection with the pho4Δ mutant where only 50% of the mice succumbed to infection, all mice infected with the Pho81-SPX AAA strain survived and infection burdens in lung and brain were drastically reduced. The infection kinetics and organ burdens observed for Pho81-SPX AAA -infected mice were similar to those observed for pho81Δ-infected mice, suggesting that Pho81 promotes invasive fungal disease predominantly via its association with PP-IP 5 . A potential explanation for why the Pho81 mutants are more attenuated in virulence than the pho4⌬ mutant is that 5-PP-IP 5 -bound Pho81 regulates more than one CDK complex (see model in Fig. 6). 5-PP-IP 5 may therefore regulate cellular functions other than phosphate homeostasis, namely, glycogen metabolism. Alternatively, Pho81 may have PP-IP 5 -dependent cellular function involving interactions with proteins other than CDK components. In summary, we provide additional evidence of evolutionary divergence in PHO pathway regulation in a fungal pathogen of medical significance by demonstrating that interaction of the IP 7 isomer 5-PP-IP 5 , not 1-PP-IP 5, with the Pho81 SPX domain is essential for PHO pathway activation. The critical roles of 5-PP-IP 5 and Pho81 in fungal virulence are conveyed primarily via the interaction of 5-PP-IP 5 with the Pho81 SPX domain. Finally, we demonstrate that 5-PP-IP 5 functions as intermolecular "glue" to stabilize Pho81 association with Pho85/Pho80, providing novel mechanistic insight into how inositol pyrophosphates regulate the PHO pathway. Since Pho81 has no homologue in mammalian cells, disrupting fungal Pho81 function is a potential antifungal strategy. MATERIALS AND METHODS Fungal strains and growth conditions. Wild-type C. neoformans var. grubii strain H99 (serotype A, MAT␣) and S. cerevisiae WT strain BY4741 were used in this study. All mutant and fluorescent strains created or procured in this study are listed in Table 1 and details of their construction are provided in Materials and Methods and in the supplemental material. Routinely, fungal strains were grown in YPD (1% yeast extract, 2% peptone, and 2% dextrose). Phosphate-deficient minimal medium MM-KCl (29 mM KCl, 15 mM glucose, 10 mM MgSO 4 ·7H 2 O, 13 mM glycine, 3.0 M thiamine) was used to induce acid phosphatase activity. KCl was substituted with 29 mM ␤-glycerol phosphate for drop dilution assay media or 29 mM KH 2 PO 4 for MM-KH 2 PO 4 . The latter was used as a control medium in which acid phosphatase activity was suppressed. In some of the experiments, the cells were grown in phosphatedepleted (low-phosphate) YPD (LP-YPD) to induce PHO pathway activation. LP-YPD was prepared as follows: 5 g yeast extract, 10 g peptone, and 1.23 g MgSO 4 were dissolved in 475 ml of water with prolonged stirring (at least 15 min). Then, 4 ml of concentrated NH 4 OH was added dropwise, while the medium was vigorously stirred. The salts were allowed to precipitate for at least 30 min at room temperature. The medium was filtered through a 45-m filter, supplemented with 10 g dextrose, and adjusted to pH ϳ6.5 with concentrated HCl. The resulting medium was filter sterilized. Mice. The Australian Resource Centre (Western Australia) provided mice (C57BL/6) for the virulence experiments. The mice weighed between 20 and 22 g (6 to 8 weeks old), and the sex was female. Maintenance and care conditions were as follows. 
Access to food (autoclavable rat and mouse chow supplied by Specialty Feeds) and water was unrestricted, and the light-dark cycle was 12 h. Before experiments, the acclimatization period for the animals was 1 week. Animal experiments were performed in accordance with protocol 4254.03.16, approved by the Western Sydney Local Health District animal ethics committee. Virulence studies in mice. Female C57BL/6 mice (10 per infection group) were anesthetized by inhalation of 3% isoflurane in oxygen and infected with 2 ϫ 10 5 fungal cells via the nasal passages as described previously (4). Mice were monitored daily and euthanized by CO 2 asphyxiation when they had lost 20% of their preinfection weight or prior if showing debilitating symptoms of infection, i.e., loss of appetite, moribund appearance, or labored breathing. Median survival differences were estimated using a Kaplan-Meier method. Posteuthanasia, the lungs and brain were removed, weighed, and mechanically disrupted in 2 ml of sterile PBS using a BeadBug (Benchmark Scientific). Organ homogenates were serially diluted and plated onto Sabouraud dextrose agar plates. Plates were incubated at 30°C for 2 days. Colony counts were performed and adjusted to reflect the total number of CFU per gram of tissue. Strain creation. (i) Pho81 SPX mutant with or without GFP tag. Lysine residues in the Pho81 SPX domain putatively involved in binding 5-PP-IP 5 (K 221,223,228 ) were identified by sequence alignment. The Pho81 SPX mutant strain (Pho81SPX AAA ) and its control strain (Pho81SPX) were then created in a multistep process (see Fig. S3). First, the SPX domain of PHO81 was deleted in the WT H99 strain. Second, genomic DNA encoding the 5= end of PHO81, including the SPX domain, was amplified to generate native and mutated versions. In the mutated version, codons encoding lysine 221, 223, and 228 were exchanged for those encoding alanine by overlap PCR. Native (NAT) and mutant (AAA) fragments were then fused to the GDE2 promoter (GDE2p) and a dominant resistance marker by overlap PCR and used to reconstitute the spx⌬ genotype by homologous recombination. The GDE2 promoter (GDE2p) was used to replace the native GDE1 promoter of Pho81, because Pho81 shares its promoter with the adjacent gene, CNAG_02542. GDE2p was a suitable choice because PHO81/GDE1 and GDE2 are induced to a similar extent by Pho4 during phosphate deprivation (3). In a third step, WT and mutant PHO81 were tagged with GFP at the C terminus. The KUTAP vector containing GFP optimized for fluorescence in C. neoformans was a gift from Peter Williamson (NIAID, NIH, Bethesda, MD). Each step, including the dominant resistance markers used, is described in more detail below and is summarized in Fig. S3A. Step 1: deletion of the PHO81 SPX domain. To delete the PHO81 SPX domain (see Fig. S3A, step 1), the SPX deletion construct was created by overlap PCR, joining the 5= flank, the hygromycin resistance cassette with the ACT1 promoter and GAL7 terminator (Hyg r ), and the 3= flank. The 5= flank, consisting of 977 bp upstream of the PHO81 gene, was PCR amplified from genomic DNA using the primers PHO81_ots_s and (HygB)PHO81-5=a. The 3= flank, consisting of 1,275 bp downstream of the SPX domain, was PCR amplified using the primers (HygB)PHO81-3=s and PHO81_ots_3=a. Hyg r was PCR amplified with the primers Neo-s and HygB_a (49). 
The three fragments were fused together using the primers PHO81_5=s and PHO81_3=flank-a, and the resulting 4,955-bp product was used to delete the SPX domain from PHO81 in the H99 WT strain, using biolistic transformation (50). Hygromycin B-resistant (Hyg r ) colonies were screened by PCR amplification across the SPX external recombination junctions using the primers indicated in Table S1 in the supplemental material. A successful transformant was used in step 2 to create the Pho81SPX and Pho81SPX AAA strains. Step 2: reconstitution of spx⌬ with SPX (Pho81SPX) or SPX AAA (Pho81SPX AAA ). For the reconstitution of spx⌬ with SPX (Pho81SPX) or SPX AAA (Pho81SPX AAA ) (see Fig. S3A, step 2), the following three fragments were fused together by overlap PCR: (i) the neomycin resistance cassette with ACT1 promoter and TRP1 terminator (Neo r ), (ii) the GDE2p to drive expression of PHO81, and (iii) the PHO81 gene sequence consisting of the 1,070-bp SPX domain (native or AAA) and 1,275 bp downstream of SPX. Neo r was PCR amplified from pJAF1 using the primer pair Neo-s and Neo-a. H99-derived GDE2p was PCR amplified using the primer pair (NEO)GDE2p-s and (SPX)GDE2p-a. The 2,345-bp native PHO81 SPX fragment (SPX Nat ) was PCR amplified using the primer pair SPX-start-s and PHO81_ots_3=a. The mutant PHO81 SPX fragment (SPX AAA ) was created by PCR amplifying the 1,070-bp SPX domain and 1,275 bp downstream of SPX using the primer pairs SPX-start-s/Pho81-AAA-a and Pho81-AAA-s/PHO81_ots_3=a, which introduced the mutation at the adjoining ends. The two fragments were then fused together by overlap PCR, using the primer pairs SPX-start-s and PHO81_ots_3=a, to introduce the A 221,223,228 mutations in the overlapping region. A third PCR was then used to fuse Neo r -GDE2p-SPX Nat or Neo r -GDE2p-SPX AAA using the primer pair Neo-s and PHO81_3=flank-a. The final products were introduced into the ⌬spx strain created in step 1, resulting in strains Pho81SPX Nat and Pho81SPX AAA . Geneticin-resistant, hygromycinsensitive transformants were screened by PCR amplification across the NeoR-GDE2p-SPX recombination junctions (see Fig. S3B) using the primers indicated in Table S1. Step 3: GFP-tagging Pho81SPX and Pho81SPX AAA . For GFP-tagging Pho81SPX and Pho81SPX AAA (see Fig. S1B, step 3), a construct consisting of (i) the 5= flank, encoding 865 bp of the 3= end of the PHO81 gene without the stop codon; (ii) GFP fused to the nourseothricin resistance cassette (Nat r ); and (iii) the 3= flank, encoding 866 bp downstream of the PHO81 gene, was created by overlap PCR. The 5= flank was PCR amplified from H99 genomic DNA using the primer pair Pho81-ots-s and Pho81-3f-a (GFP). Using the primers GFP-start-s and Neo-a, GFP-Nat r (3,116 bp) was PCR amplified from the pCR21 vector (Invitrogen) into which GFP-Nat r had previously been cloned. The 3= flank was PCR amplified from genomic DNA using the primer pair Pho81-3f-s_(NEO) and Pho81-ots-a. These three overlapping fragments were fused by a final overlap PCR using the primer pair Pho81-5f-s and Pho81-3f-a. The final product was introduced into strains Pho81SPX and Pho81SPX AAA , creating GDE2p-Pho81-GFP and GDE2p-Pho81 AAA -GFP, respectively, using biolistic transformation. Nourseothricin-resistant transformants were screened by PCR amplifying regions across recombination junctions (see Fig. S3B) using the primers described in Table S1. (ii) PHO81 deletion and rescue. 
A PHO81 gene deletion construct was created by joining the 5= flank (963 bp of genomic DNA upstream of the PHO81 coding sequence), the hygromycin B resistance (Hyg r ) cassette (with the ACT1 promoter and GAL7 terminator), and the 3= flank (1,424 bp of genomic DNA downstream of the PHO81 coding sequence). The three fragments were fused by overlap PCR using the primer pair PHO81_5=s and PHO81_3=a. This deletion construct was used to transform the H99 WT strain using biolistics (50), creating ⌬pho81:HYGB. Hygromycin-resistant colonies were screened by PCR amplification across the 5= and 3= recombination junctions using the primers indicated in Fig. S4 and Table S1 to confirm that homologous recombination had occurred at the correct site. To create the PHO81 reconstituted strain (⌬pho81ϩPHO81), the genomic PHO81 locus, which comprised the coding region and 5,727 bp upstream and 345 bp downstream of the coding region, was PCR amplified from genomic DNA prepared from the H99 WT strain using the primer pair PHO81_5=s and (NEO)PHO81-Rec-5=a. The neomycin resistance (Neo r ) cassette (with the ACT1 promoter and TRP1 terminator) was PCR amplified from pJAF (51) using the primer pair Neo-s and Neo-a. The two fragments were fused by overlap PCR using the primer pair PHO81-Rec-5=s and Neo-a, and the resulting gene fusion was used to transform the ⌬pho81 mutant using biolistics as described above. Neomycin-resistant transformants were screened for their ability to secrete acid phosphatase (Aph1) using the colorimetric pNPP reporter assay described previously (3). This phenotype was lost following deletion of PHO81 in the WT strain. Transformants that tested positive for secreted acid phosphatase activity were tested further for the presence of the PHO81 gene by PCR amplifying an internal region of the PHO81 locus from genomic DNA using the primers indicated in Fig. S4 and Table S1. (iii) PHO85-mCherry strain. To create a WT C. neoformans strain expressing PHO85 as an mCherry fusion protein, a construct consisting of (i) the 5= flank, 1,141 bp of 3= end of the PHO85 gene without the stop codon; (ii) mCherry; (iii) the hygromycin resistance cassette (Hyg r ) with the ACT1 promoter and GAL7 terminator; and (iv) the 3= flank, 988 bp downstream of the PHO85 gene, was created by overlap PCR. The 5= flank was PCR amplified from H99 WT genomic DNA using the primer pair PHO85-int-s1 and (mCherry)PHO85-a. The mCherry was PCR amplified from pNEO-mCherry vector using the primer pair (PHO85)mCherry-s and (ActP)-mCherry-a. Hyg r was generated using the primer pair Neo-s and HygB-a. The 3= flank was PCR amplified from H99 WT genomic DNA using the primer pair (Gal7t)PHO85-3=flank-s and PHO85-3=flank-a1. These four fragments were fused by overlap PCR using the primer pair PHO85int-s3 and PHO85-3=flank-a3, and the final product was then used to transform H99 WT using biolistic transformation (50). Hygromycin-resistant colonies were screened by PCR amplification across the recombination junctions using primers listed in Table S1. (iv) Pho81SPX-GFP in a WT and kcs1⌬ background. A DNA construct consisting of (i) the 5= flank, encoding 865 bp of the 3= end of the PHO81 coding region minus the stop codon, (ii) GFP fused to the nourseothricin resistance cassette (Nat r ), and (iii) the 3= flank, encoding 866 bp downstream of the PHO81 coding region, was amplified by PCR from genomic DNA prepared from the Pho81SPX-GFP strain used in Fig. 5 using the primer pair Pho81-5f-s and Pho81-3f-a. 
The final 4,559-bp product was introduced into the kcs1Δ:NEO strain (4) using biolistic transformation. Nourseothricin-resistant transformants were screened by PCR amplification of regions across the recombination junctions using the primers described in Table S1. Assessing PHO pathway activation. (i) Acid phosphatase reporter assay. Extracellular acid phosphatase (APase) activity associated with the APH1 gene product was measured as previously described (5). Briefly, YPD overnight cultures were centrifuged, and the pellets were washed twice with water and resuspended in PHO pathway-inducing and noninducing medium (see above) at an optical density at 600 nm (OD 600 ) of 1. The cultures were incubated at 30°C for 3 h or as otherwise indicated. After incubation, 200 µl of each culture was centrifuged, and the pellets were resuspended in 400 µl of APase reaction mixture (50 mM sodium acetate [pH 5.2], 2.5 mM p-nitrophenyl phosphate [pNPP]). Reactions were performed at 37°C for 5 to 15 min, the time determined to be within the linear range of APase activity (5). Reactions were stopped by adding 800 µl of 1 M Na 2 CO 3 . APase-mediated hydrolysis of pNPP was quantified spectrophotometrically at 420 nm. Any growth difference among strains was corrected for by measuring the OD 600 prior to performing the assay, and the APase activity was calculated as OD 420 /OD 600 . In some cases, the APase activity was normalized to that of the WT and expressed as a fold change. In the experiment where PHO pathway activation in the absence of phosphate was measured over a 2-day time course, 10 to 300 µl of culture was used for the APase activity assay, with smaller amounts needed at longer induction times to prevent reaction saturation due to increased culture growth. All assays were performed in biological triplicate. (ii) Quantitative PCR. RNA extraction, cDNA synthesis, and qPCR of PHO genes in C. neoformans strains were performed as described previously (3). The sequences of primers used for qPCR are listed in Table S1. Fractionating 3 H-labeled inositol polyphosphates. [ 3 H]inositol labeling of fungal cells was performed as previously described (52), with modifications (4). Overnight fungal cultures grown in YPD were diluted to an OD 600 of 0.05 in fresh YPD containing 10 mCi/ml [ 3 H]myo-inositol (Perkin-Elmer) and incubated until an OD 600 of >6 was reached. The cells were pelleted, washed, and resuspended in MM with or without phosphate (as indicated). After an additional 2 h of incubation, fungal cells were pelleted, washed, and snap-frozen in liquid nitrogen. To extract inositol polyphosphates, the cells were resuspended in extraction buffer (1 M HClO 4 , 3 mM EDTA, 0.1 mg/ml IP 6 ) and homogenized with glass beads using a bead beater. Debris was pelleted, and the supernatants were neutralized (1 M K 2 CO 3 , 3 mM EDTA) and stored at 4°C. The radiolabeled inositol polyphosphates were fractionated by anion-exchange high-pressure liquid chromatography (HPLC). Creation of a 5-PP-IP 5 affinity capture resin. Resin-bound 5PCP-IP 5 , a diphosphoinositol polyphosphate analog containing a nonhydrolyzable bisphosphonate group in the 5-position, was synthesized as described in detail by Wu et al. (37). This bisphosphonate analog closely resembles the natural molecule, both structurally and biochemically, while exhibiting increased stability toward hydrolysis in a cell lysate (53). SUPPLEMENTAL MATERIAL Supplemental material is available online only.
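The APase readout described above is a simple ratio calculation: pNPP hydrolysis measured at 420 nm is normalized to culture density at 600 nm and, where useful, expressed as a fold change over the wild type. The short sketch below illustrates that arithmetic; the function names and readings are illustrative only and are not taken from the original study.

```python
def apase_activity(od420: float, od600: float) -> float:
    """Normalize pNPP hydrolysis (OD420) to culture density (OD600)."""
    if od600 <= 0:
        raise ValueError("OD600 must be positive")
    return od420 / od600


def fold_change_vs_wt(od420, od600, wt_od420, wt_od600):
    """Express a strain's normalized APase activity relative to the wild-type control."""
    return apase_activity(od420, od600) / apase_activity(wt_od420, wt_od600)


# Hypothetical readings under a PHO pathway-inducing (low-phosphate) condition
print(fold_change_vs_wt(od420=0.85, od600=1.20, wt_od420=0.30, wt_od600=1.10))
```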
Preparation, characterisation, and controlled release of sex pheromone-loaded MPEG-PCL diblock copolymer micelles for Spodoptera litura (Lepidoptera: Noctuidae) Sex pheromones are important for agricultural pest control. The main sex pheromone components of Spodoptera litura are (Z,E)-9,11- and (Z,E)-9,12-tetradecadienyl acetate (Z9,E11-14:Ac; Z9,E12-14:Ac). In this study, we investigated the optimal conditions for encapsulation of S. litura sex pheromones in micelles via the self-assembly method using monomethoxy poly(ethylene glycol)-poly(ε-caprolactone) (MPEG-PCL) as a biodegradable wall-forming material with low toxicity. In an L 9 (3 4 ) orthogonal experiment, 3 amphiphilic block copolymers with different hydrophilicity-to-hydrophobicity ratios were examined. Optimal encapsulation conditions included stirring of MPEG5000-PCL2000 at 1000 rpm at 30°C with a 2.5:1 wall-forming-to-core material mass ratio. S. litura sex pheromone-loaded MPEG5000-PCL2000 micelles presented a homogeneous spherical morphology with an apparent core-shell structure. The release kinetics of optimized MPEG5000-PCL2000 micelles was best explained by a first-order model. Encapsulated Z9,E11-14:Ac and Z9,E12-14:Ac were released slowly, not suddenly. Methyl oleate (MO) was used as an agent to control micellar release performance. When MO content equalled block content, micelle half-life could be prolonged, thereby controlling the release speed. Overall, our results showed MPEG-PCL as a promising controlled-release substrate for sex pheromones. Introduction Spodoptera litura Fabricius (Lepidoptera: Noctuidae), a type of polyphagous pest with an aggressive eating pattern, has a wide range of hosts, encompassing approximately 200 kinds of plants [1], including various vegetables, fruits, tobacco, cotton, corn, tea, and other cash crops [2]. However, due to the overuse of chemical agents to prevent and control this pest … Critical micelle concentration The critical micelle concentration (CMC) of MPEG-PCL was measured by a UV spectrophotometer (TU-1810, Beijing Purkinje General Instrument Co., Ltd.). Three kinds of MPEG-PCL, with different proportions of hydrophobic and hydrophilic components, were dissolved in deionized water to obtain a stock solution of concentration 1.000 g/L. The absorption maxima of the different concentrations of MPEG-PCL (0.001, 0.005, 0.008, 0.01, 0.04, 0.05, 0.1, 0.2, 0.35, 0.5, 0.7, and 1 g/L) were recorded and plotted as lgA versus lgC (A: absorbance, C: concentration); the critical micelle concentration of each block copolymer corresponded to the concentration at which the first derivative of the curve reached zero [37]. Orthogonal experimental design An L 9 (3 4 ) orthogonal table (Table 1) was adopted for this test. The investigated factors included wall-forming materials (W), mass ratio of sex pheromone to wall-forming materials (W/S ratios), reaction temperature (T), and stirring speed (S); encapsulation efficiency of the micelles (EE) was used as the assessment index. The optimised formulation was prepared in triplicate. Determination of entrapment efficiency Briefly, 0.5 mL of the sex pheromone-loaded micelle solution was fully mixed with 0.5 mL ultrapure water. The solution was extracted with 1 mL n-hexane and completely disrupted using an ultrasound sonicator (Scientz-IID, Ningbo Scientz Biotechnology Co., China) on ice. The encapsulated sex pheromone was dissolved in the hexane after 30 min.
The concentration of sex pheromone was determined by gas chromatography (GC, Agilent 7890B, Agilent Technologies, Santa Clara, CA, USA). For GC, a capillary column (HP-5, 30 m × 0.32 mm × 0.25 μm) with a flame ionisation detector and a splitless injector, with nitrogen as the carrier gas, was used. GC conditions were as follows: the column temperature set at 80˚C (held for 5 min), raised to 210˚C at 10˚C/min, and held at 210˚C for 15 min. A standard curve was generated according to the concentration of sex pheromones and peak area; quantity of each component in the sex pheromone was determined from the standard curve. The standard curve regression equations of Z9,E11-14:Ac and Z9,E12-14:Ac were y = 16806x + 50.5 (R 2 = 0.9997) and y = 18672x − 3.4706 (R 2 = 0.9999). The sex pheromone entrapment efficiency was calculated using Eq 1: Characterisation of micelles Particle morphology. The micelle morphology was observed by transmission electron microscope (TEM, HT 7700, Hitachi, Tokyo, Japan), and the speeding voltage during the test was 80 kV. Samples were prepared by dropping the micelle solution on a carbon-coated copper net, followed by air drying, and dyeing with 0.2 wt% phosphotungstic acid. Determination of particle size. The particle size and its distribution were analysed using a Malvern nanometre particle size analyser (MNPSA) (Zetasizer Nano S90, Malvern Instruments Ltd., Malvern, UK). Stability of micelles. Micelles were stored at 2, 4, and 8˚C in the dark. In order to evaluate the physical stability of nanoparticles during this storage period, particle size distribution was monitored at time intervals of 0, 15, and 30 days, using the method described in the section "Determination of particle size". Release performance Sex pheromone release. To evaluate sex pheromone release, the micelles were transferred to a centrifuge tube and placed in an artificial climate chamber (MGC-450HP2, Shanghai Yiheng Co., China) with controlled temperature in the range of 35 ± 3˚C, light: dark cycle of 12 h:12 h, and relative humidity of 75 ± 5% for a period of 28 days. The samples were taken out of the artificial climate chamber at regular time intervals for sex pheromone examination by GC. To evaluate the release of sex pheromones from micelles prepared under optimal conditions, the samples were examined every day during the first 14 days, and every 7 days during the subsequent 14 days. To evaluate the release from micelles containing the controlled-release agent, the samples were examined every 3 days over a period of 15 days. Three samples were used in each experiment. Sex pheromone release was expressed as percentage of accumulated release, since this enabled the evaluation of performance of different micelles. Accumulated release was calculated using Eq 2: where W 0 is the sex pheromone content at the initial time and W t is the sex pheromone content at each recorded time. Sex pheromone release kinetics for optimized micelles. For a better understanding of the efficacy of sex pheromones, their release kinetics were studied. Selection of a suitable kinetic model for fitting the sex pheromone release data helped determine the release characteristics. There are a number of kinetic models that describe the overall release of sex pheromone from the vehicle. 
The most common mathematical models used are: zero-order model (Eq 3), first-order model (Eq 4), Higuchi model (Eq 5), Korsmeyer-Peppasmodel (Eq 6), and Hixson-Crowell model (Eq 7) [39][40][41][42][43][44][45]: where C t -amount of drug released in time t, C0-the initial amount of drug, K0-zero-order kinetic constant, K1-first-order kinetic constant, K H -Higuchi kinetic constant, K KP -Korsmeyer-Peppas release constant, KHC-Hixson-Crowell release constant, n-diffusional release exponent, t-time. Half-life calculations. Depletion of pheromone components from the micelle formulations was characterised by the first-order kinetic model: lnC t = lnC 0 +K 1 Át. Half-lives (t 1/2 ) for compounds were determined from the exponential equation, substituting calculated values of C 0 and K 1 , and setting (C t /C 0 ) to 0.5 [46]. Statistical analysis. Statistical analysis was done with SPSS 17.0 software package (Chicago, IL, USA). One-way analysis of variance (ANOVA) for independent samples followed by Duncan's multiple range tests were performed to evaluate the quantitative results. Data were obtained from triplicate samples and, expressed as mean ± standard error (SE); values of P 0.05 and P 0.01 were considered statistically significant and extremely significant, respectively. Optimisation of MPEG-PCL micelle formation The results of the L 9 (3 4 ) orthogonal experiments using MPEG-PCL nanoparticles are shown in Tables 2 and 3 Conversely, it was much easier to draw a more intuitive conclusion from the results by range analysis of the orthogonal experiment. However, the calculation processes were extensive and could not evaluate the errors; thus, it was necessary to carry out variance analysis of the orthogonal experiment results. It can be seen from the variance analysis tables (Tables 4 and 5) that except for the W/S ratios, all the other factors (including W, S, and T) had significant effects on the experimental results. The order of factors affecting the encapsulation efficiency of Z9,E11-14:Ac and Z9,E12-14:Ac, obtained from variance analysis, was the same as that from the range analysis. Encapsulation efficiency of the micelles was controlled by the length of hydrophobic or hydrophilic chain (wall-forming materials), W/S ratio, T, and S. Based on the two analyses, it was concluded that the order of effect of W and T on the encapsulation efficiency was different. Since the mass ratio of Z9,E11-14:Ac was much larger than that of Z9,E12-14:Ac, factor S was regarded as the most important factor affecting the encapsulation efficiency followed by T, W, and W/S ratios. S likely played an important role in the formation of micelles, since the sex pheromone should be well mixed in the process of micelle formation, and a certain speed would be required when water is added to the solution to conjugate the hydrophilic ends of the amphiphilic block copolymer. The influence of W was determined by the length of hydrophobic and hydrophilic chains, whereas T likely influenced micellar assembly and speed of sex pheromone volatility to lessen the encapsulation efficiency. However, the influence of W/S ratios on encapsulation efficiency was relatively small. The optimal conditions for S. litura sex pheromone encapsulation with MPEG-PCL, determined from the above results, involved stirring MPEG 5000 -PCL 2000 at a speed of 1000 rpm at 30˚C with a 2.5:1 mass ratio of wall-forming to core materials. Based on these conditions, three parallel experiments with MPEG-PCL micelles were subsequently conducted ( Table 6). 
The results consistently showed that entrapment efficiency was the highest among the combinations used in the orthogonal experiments, which verified the utility and feasibility of the conditions. Characterisation of microcapsules For fresh MPEG 5000 -PCL 2000 nanoparticles, prepared according to the optimised formulation and preparation conditions, the particle size was 374 ± 5.13 nm by MNPSA (Fig 1). The formation of micellar nanostructures was confirmed by TEM. The MPEG 5000 -PCL 2000 nanoparticles showed a homogeneous spherical morphology, with average diameter of 300 nm, presenting an apparent core-shell structure (Fig 2). The size of the MPEG 5000 -PCL 2000 nanoparticles, measured by TEM, was smaller compared to that from MNPSA measurements, since the former was related to the collapsed nanoparticles after water evaporation, whereas the latter represented their hydrodynamic diameter [47]. After preparation, the micelles were dispersed in aqueous medium. Therefore, stability of their sizes was of great importance, both as a measure of particle structure integrity and as an indicator of the possible inter-particular associations (aggregation). At sub-zero temperatures, the solution solidified and the micellar structure lost its integrity. For this purpose, we chose 2, 4, and 8˚C as the storage temperatures, at which the particle size was monitored in the dark over a period of 30 days. The variation of micellar size as a function of storage time is shown in Table 7. All the micelles increased slightly in size, throughout the measurement period, at different temperatures. This observation could not be an indicator of aggregation, which usually leads to a several-fold increase in size; instead, copolymer swelling and/or hydration may be responsible for this event [48]. Since the variation of micellar size was less when stored at 2˚C, we chose to store the micelles at 2˚C in the dark for the best storage conditions. Sex pheromone release kinetics in optimized micelles The sex pheromone release results of MPEG 5000 -PCL 2000 micelles were used in various mathematical models to evaluate the kinetics and mechanism of release from the micelles. Based on the correlation coefficient (R) value in various models, the one that fit best with the release data was selected; the one with a high 'R' value was considered as the best fit. The release constant was calculated from the slopes of the appropriate models, and the regression coefficient (R 2 ) was determined (Table 8). Release performance of optimized micelles The plot of accumulated release from sex pheromone-loaded MPEG 5000 -PCL 2000 micelles indicated that Z9,E11-14:Ac could be released from micelles faster than Z9,E12-14:Ac, in a sustained manner. The two components had a high release rate in the first 3 days, which was attributed to the fact that nanoparticles usually contain sex pheromone not only at the inner core but also on their surface. After this initial loss, sex pheromone release approximated firstorder release rates more closely [49]. Accordingly, following the first burst release period, sex pheromone was released slowly, independent of the initial sex pheromone concentration in the micelles. As shown in Fig 3, from day 4 to 14, the release rate tended to slow down and remained constant. After 14 days, the release rate decreased further and tended to be stable, although the release rate of Z9,E11-14:Ac was less than that of Z9,E12-14:Ac. 
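To make the model-selection step concrete, the sketch below fits the first-order form quoted above in the half-life calculations, ln C_t = ln C_0 + K_1·t, to a depletion time series by linear regression and then sets C_t/C_0 = 0.5 to obtain t_1/2. The time points and amounts are hypothetical; only the first-order equation and the half-life definition come from the text. Analogous fits of the zero-order, Higuchi, Korsmeyer-Peppas, and Hixson-Crowell forms can be compared through their correlation coefficients, which is how the best-fitting model was chosen above.

```python
import numpy as np

def fit_first_order(t, c_remaining):
    """Least-squares fit of ln(C_t) = ln(C_0) + K1 * t; returns (C0, K1)."""
    t = np.asarray(t, dtype=float)
    ln_c = np.log(np.asarray(c_remaining, dtype=float))
    k1, ln_c0 = np.polyfit(t, ln_c, 1)          # slope = K1, intercept = ln(C0)
    return np.exp(ln_c0), k1

def half_life(k1):
    """Time at which C_t / C_0 = 0.5 under first-order depletion (K1 < 0)."""
    return np.log(0.5) / k1

# Hypothetical depletion data: days elapsed vs. pheromone remaining in the micelles (arbitrary units)
days      = [0, 1, 2, 3, 5, 7, 10, 14]
remaining = [100, 88, 78, 69, 54, 42, 29, 18]

c0, k1 = fit_first_order(days, remaining)
print(f"C0 = {c0:.1f}, K1 = {k1:.3f} per day, t1/2 = {half_life(k1):.1f} days")
```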
According to the first-order kinetic model, the half-life of Z9,E11-14:Ac and Z9,E12-14:Ac in the micelle was 5.6 and 7.0 days, respectively. The half-life difference of 1.4 days may have been due to the different proportions of sex pheromone components in the micelle. Based on the results of this study, we found that Z9,E11-14:Ac and Z9,E12-14:Ac were released slowly from MPEG 5000 -PCL 2000 micelles, and that no sudden release occurred throughout the process, thereby indicating that diblock copolymer micelles were suitable for use as a controlled substrate. Our studies indicated that although MPEG-PCL diblock copolymer micelles did not maintain a constant release rate, they met the first-order kinetic model requirements, with adynamic rapid-to-slow release, lasting for almost a month. Other release carriers, such as PVC, have demonstrated equal or better release duration for that pheromone [46]. However, the micelles in this study were in aqueous solution and hence environment friendly; they were physically and chemically stable, non-toxic, and biodegradable. Table 9 shows that while the differences among the tested concentrations of MO were not significant, when the mass of wall-forming materials equalled that of MO (10 mg/mL), the halflife of Z9,E11-14:Ac and Z9,E12-14:Ac in the micelle increased by 3.7 and 4.2 days, respectively, compared to that of the control. With the increased content of controlled-release agent, the efficiency of controlled release declined, potentially due to the organic liquid which may have affected micelle formation and inhibited the encapsulation efficiency, thereby impacting the release rate. Fig 4 shows that MO, as a controlled-release agent, could retard the overall release rate of micelles, especially in the first 3 days without burst release. Compared to that of the control, release rate of the two components was slower over the first 6 days. From day 7 to 15, the release rates increased relative to that during the first 6 days. Thus, addition of appropriate quantities of MO into the micelle could prolong the half-life and control the release performance. Conclusions The optimal preparation conditions of S. litura sex pheromone-amphiphilic block copolymer micelles were shown to involve stirring MPEG 5000 -PCL 2000 at a speed of 1000 rpm at 30˚C with a 2.5:1 mass ratio of wall-forming to core materials. The nanoparticles presented a homogeneous spherical morphology with an apparent core-shell structure, and were free from the inter-micellar adhesion phenomena. The release kinetics of optimized MPEG 5000 -PCL 2000 micelles was best explained by first-order model. Since the release from micelles was slow, without a sudden-release phenomenon, the amphiphilic copolymer was considered suitable for use as a controlled substrate. When the mass of added MO equalled that of wall-forming materials, the half-life could be prolonged, thereby allowing control of the release rate. These results indicated that the diblock copolymer could be a suitable controlled-release substrate, and the micelles could have potential use in the control applications of mass trapping and mating disruption in the field.
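Looping back to the entrapment-efficiency determination described in the Methods: the GC peak area is converted to concentration by inverting the reported calibration line (e.g., y = 16806x + 50.5 for Z9,E11-14:Ac), and the efficiency compares the encapsulated amount with the total amount of core material added. Because Eq 1 itself did not survive extraction, the percentage formula in the sketch below is the conventional definition and should be read as an assumption; the peak area, extract volume, and loading are likewise hypothetical.

```python
def concentration_from_peak_area(peak_area: float, slope: float, intercept: float) -> float:
    """Invert a linear GC calibration, y = slope * x + intercept, to recover the concentration x."""
    return (peak_area - intercept) / slope

def entrapment_efficiency(encapsulated_amount: float, total_amount: float) -> float:
    """Conventional EE%: encapsulated pheromone relative to total pheromone added (assumed form of Eq 1)."""
    return 100.0 * encapsulated_amount / total_amount

# Calibration reported for Z9,E11-14:Ac: peak area y = 16806 * concentration + 50.5
conc = concentration_from_peak_area(peak_area=3412.0, slope=16806.0, intercept=50.5)
encapsulated = conc * 1.0      # hypothetical 1 mL hexane extract
total_added = 0.40             # hypothetical loading, in the same units as the encapsulated amount
print(f"EE = {entrapment_efficiency(encapsulated, total_added):.1f} %")
```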
A Recurrent Neural Network for Nonlinear Fractional Programming This paper presents a novel recurrent time continuous neural network model which performs nonlinear fractional optimization subject to interval constraints on each of the optimization variables. The network is proved to be complete in the sense that the set of optima of the objective function to be minimized with interval constraints coincides with the set of equilibria of the neural network. It is also shown that the network is primal and globally convergent in the sense that its trajectory cannot escape from the feasible region and will converge to an exact optimal solution for any initial point being chosen in the feasible interval region. Simulation results are given to demonstrate further the global convergence and good performance of the proposing neural network for nonlinear fractional programming problems with interval constraints. Introduction Compared with the well-known applications of nonlinear programming to various branches of human activity, especially to economics, the applications of fractional programming are less known until now.Of course, the linearity of a problem makes it easier to tackle and hence contributes its wide recognition.However, it is certain that not all real-life economic problems can be described by linear models and hence are not likely applications of linear programming.Fractional programming is a nonlinear programming method that has known increasing exposure recently and its importance in solving concrete problems is steadily increasing.Moreover, it is known that the nonlinear optimization models describe practical problems much better than the linear optimization models do. The fractional programming problems are particularly useful in the solution of economic problems in which various activities use certain resources in various proportions, while the objective is to optimize a certain indicator, usually the most favorable return-onallocation ratio subject to the constraint imposed on the availability of goods.The detailed descriptions of these models can be found in Charnes et al. 1 , Patkar 2 , and Mjelde 3 .Besides the economic applications, it was found that the fractional programming problems also appeared in other domains, such as physics, information theory, game theory, and others.Nonlinear fractional programming problems are, of course, the dominant ones for their much widely applications, see Stancu-Minasian 4 in details. As it is known, conventional algorithms are time consuming in solving optimization problems with large-scale variables and so new parallel and distributed algorithms are more competent then.Artificial neural networks RNNs governed by a system of differential equations can be implemented physically by designated hardware with integrated circuits and an optimization process with different specific purposes could be conducted in a truly parallel way.An overview and paradigm descriptions of various neural network models for tackling a great deal of optimization problems can be found in the book by Cichocki and Unbehauen 5 .Unlike most numerical algorithms, neural network approach can handle, as described in Hopfield's seminal work 6, 7 , optimization process in real-time on line and hence to be the top choice. Neural network models for optimization problems have been investigated intensively since the pioneer work of Wang et al., see 8-14 .Wang et al. 
proposed several different neural network models for solving convex programming problems 8 and linear programming 9, 10 , which were proved to be globally convergent to the problems' exact solutions. Kennedy and Chua 11 developed a neural network model for solving nonlinear programming problems in which a penalty parameter needed to be tuned during the optimization process, and hence only approximate solutions were generated. Xia and Wang 12 gave a general neural network design methodology that brought many gradient-based network models for solving convex programming problems under one framework with global convergence. Neural networks for quadratic optimization and nonlinear optimization with interval constraints were developed by Bouzerdoum and Pattison 13 and Liang and Wang 14 , respectively. All these neural networks can be classified into the following three types: (1) the gradient-based models 8-10 and their extension 12 ; (2) the penalty-function-based model 11 ; and (3) the projection-based models 13, 14 . Among them, the first was proved to have global convergence 12 and the third quasi-convergence 8-10 , only when the optimization problems are convex programming problems. The second could only be demonstrated to have local convergence 11 and, more unfortunately, it might fail to find exact solutions; see 15 for a numerical example. Because of this, the penalty-function-based model has few applications in practice. As is known, nonlinear fractional programming does not belong to the class of convex optimization problems 4 , and how to construct a good-performance neural network model for this optimization problem has since become a challenge. Motivated by this, a promising recurrent continuous-time neural network model is proposed in the present paper. The proposed RNN model has the following two most important features. (1) The model is complete in the sense that the set of optima of the nonlinear fractional program with interval constraints coincides with the set of equilibria of the RNN model. (2) The RNN model is invariant with respect to the problem's feasible set and has the global convergence property in the sense that all trajectories of the network converge to the exact solution set for any initial point starting in the feasible interval region. These two properties demonstrate that the proposed network model is quite suitable for solving nonlinear fractional programming problems with interval constraints. The remainder of the paper is organized as follows. Section 2 formulates the optimization problem and Section 3 describes the construction of the proposed RNN model. The completeness property and global convergence of the proposed model are discussed in Sections 4 and 5, respectively. Section 6 gives some typical application areas of fractional programming. Illustrative examples with computational results are reported in Section 7 to demonstrate further the good performance of the proposed RNN model in solving interval-constrained nonlinear fractional programming problems. Finally, Section 8 concludes with a summary of the main results of the paper.
Problem Formulation The study of the nonlinear fractional programming with interval constraints is motivated by the study of the following linear fractional interval programming: where: Accordingly, constraints a ≤ Ax ≤ b can always be transformed into a ≤ x ≤ b by change of variable method without changing the programming's format, see 13 for quadratic programming and 17 for linear fractional interval programming.So, it is necessary to pay our attention on problem 2.1 only.As the existing studies on problem 2.1 , see 16-19 , focused on the classical method which is time consuming in optimization computational aspects, it is sure that the neural network method should be the top choice to meet the real-time computation requirement.To reach this goal, the present paper is to construct a RNN model that is available both for solving nonlinear fractional interval programming and for linear fractional interval programming problem 2.1 as well. Consider the following more general nonlinear fractional programming problem: where g x , h x are continuously differentiable function defined on an open convex set O ⊆ R n which contains the problem's feasible set W {x | a ≤ x ≤ b} and x, a, b the same as in problem 2.1 , see the previous v -vi .Similarly, we suppose the objective function's dominator g x always keeps a constant sign, say g x > 0. As the most fractional programming problems arising in real-life world associate a kind of generalized convex properties, we suppose the objective function F x to be pseudoconvex over O.There are several sufficient conditions for the function F x g x /h x being pseudoconvex, two of which, see 20 , are 1 g is convex and g ≥ 0, while h concave and h > 0; 2 g is convex and g ≤ 0, while h is convex and h > 0. It is easy to see that the problem 2.1 is a special case of problem 2.3 . We are going to state the neural network model which can be employed for solving problem 2.3 and so for problem 2.1 as well.Details are described in the coming section. The Neural Network Model Consider the following single-layered recurrent neural network whose state variable x is described by the differential equation: where ∇ is the gradient operator and f W : R n → W is the projection operator defined by For the interval constrained feasible set W, the operator f W can be expressed explicitly as 3.3 The activation function to one node of the neural network model 3.1 is the typical piecewise linear f W i x i which is visibly illustrated in Figure 1. To make a clear description of the proposed neural network model, we reformulate the compact matrix form 3.1 as the following component ones: When the RNN model 3.1 is employed to solve optimization problem 2.3 , the initial state is required to be mapped into the feasible interval region W.That is, for any x 0 x 0 1 , x 0 2 , . . ., x 0 n ∈ R n , the corresponding neural trajectory initial point should be chosen as x 0 f W x 0 , or in the component form, Accordingly, the architecture of the proposed neural network model 3.4 is composed of n integrators, n processors for F x , 2n piece-wise linear activation functions, and 2n summers.Let the equilibrium state of the RNN model 3.1 be Ω e which is defined by the following equation: 3.5 The relationship between the minimizer set Ω * of problem 2.3 and the equilibrium set Ω e is explored in the following section.It is guaranteed that the two sets coincide exactly and, this case is the most available expected one in the neural network model designs. 
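The display equations for (3.1)-(3.5) were lost in extraction, but the fragments quoted later (the equilibrium condition f_W(x − ∇F(x)) = x in the proof of Theorem 5.6 and the variation-of-constants formula in Lemma 5.1) identify the dynamics as dx/dt = −x + f_W(x − ∇F(x)), with f_W the componentwise clamp onto [a, b]. The sketch below, written in Python rather than the authors' MATLAB, encodes that reading of the model together with the required feasible initialization x(0) = f_W(x0); treat it as an inferred reconstruction rather than the paper's own listing.

```python
import numpy as np

def project_onto_box(x, a, b):
    """Piecewise-linear activation f_W: clamp each component x_i into [a_i, b_i]."""
    return np.minimum(np.maximum(x, a), b)

def rnn_rhs(x, grad_F, a, b):
    """Right-hand side of the network, read as dx/dt = -x + f_W(x - grad F(x))."""
    return -x + project_onto_box(x - grad_F(x), a, b)

def feasible_initial_state(x0, a, b):
    """Map an arbitrary starting point into the feasible interval region W."""
    return project_onto_box(np.asarray(x0, dtype=float), a, b)
```

An equilibrium is any state with rnn_rhs(x, ...) = 0, i.e., f_W(x − ∇F(x)) = x, which is exactly the condition shown in Section 4 to characterize the minimizers.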
Complete Property As proposed for binary-valued neural network model in 21 , a neural network is said to be regular or normal if the set of minimizers of an energy function is a subset or superset of the set of the stable states of the neural network, respectively.If the two sets are the same, the neural network is said to be complete.The regular property implies the neural network's reliability and normal effectiveness, respectively, for the optimization process.Complete property means both reliability and effectiveness and it is the top choice in the neural network designing. Here, for the continuous-time RNN model 3.1 , we say the model to be regular, normal, and complete respectively if three cases of Ω * ⊆ Ω e , Ω e ⊆ Ω * , and Ω * Ω e occur, respectively.The complete property of the neural network 3.1 is stated in the following theorem. Stability Analysis First, it can be shown that the RNN model 3.1 has a solution trajectory which is global in the sense that its existence interval can be extended to ∞ on the right hand for any initial point in W. The continuity of the right hand of 3.1 means, by Peano's local existence theorem, see 24 , that there exists a solution x t; x 0 for t ∈ 0, t max with any initial point x 0 ∈ W, here t max is the maximal right hand point of the existence interval.The following lemma states that this t max to be ∞. Lemma 5.1.The solution x t; x 0 of RNN model 3.1 with any initial point x 0; x 0 x 0 ∈ W is bounded and so, it can be extended to ∞. Proof.It is easy to check that the solution x t x t; x 0 for t ∈ 0, t max with initial condition x 0; x 0 x 0 is given by x t e −t x 0 e −t t 0 e s f W x s − ∇F x s ds. 5.1 Obviously, mapping f W is bounded, that is f W ≤ K for some positive number K > 0, where • is the Euclidean 2 norm.It follows from 5.1 that ≤ max x 0 , K . 5.2 Thus, solution x t is bounded and so, by the extension theorem for ODEs, see 24 , it can be concluded that t max ∞ which completes the proof of this lemma.Now, we are going to show another vital dynamical property which says the set W is positive invariant with respect to the RNN model 3.1 .That is, any solution x t starting from a point in W, for example, x 0 ∈ W, it will stay in W for all time t elapsing.Additionally, we can also prove that any solution starting from outside of W will either enter into the set W in finite time elapsing and hence stay in it for ever or approach it eventually.Theorem 5.2.For the neural dynamical system 3.1 , the following two dynamical properties hold: Proof.Method to prove this theorem can be found in 14 and for the purpose of completeness and readability, here we give the whole proof as follows again. Suppose that, for i 1, . . ., n, W i {x i ∈ R | a i ≤ x i ≤ b i } and x 0 i x i 0; x 0 ∈ W i .We first prove that for all i 1, 2, . . ., n, the ith component x i t x i t; x 0 belongs to W i , that is, x i t ∈ W i for all t ≥ 0. Let Noting that x i t ∈ W for t ∈ 0, t * i and assumption 5.4 implies x i t * i a i , so, by 5.6 , we get This is in contradiction with the assumption 5.4 .So, t * i ∞, that is x i t ∈ W i for all t ≥ 0. This means W is positive invariant and hence a is guaranteed. 
Second, for some i, suppose x 0 i x i 0; x 0 / ∈ W i .If there is a t * i > 0 such that x t * i ∈ W i , then, according to a , x i t will stay in W i for all t ≥ t * i .That is x i t will enter into W i .Conversely, for all t ≥ 0, suppose x i t / ∈ W i .Without loss of generality, we assume that x i t < a i .It can be guaranteed by a contradiction that sup{x i t | t ≥ 0} a i .If it is not so, note that x i t < a i , then sup{x i t | t ≥ 0} m < a i .It can be followed by 3.4 that Integrating 5.8 gives us which is a contradiction because of x i t < a i .Thus, we obtain sup{x i t | t ≥ 0} a i .This and the previous argument show that, for x 0 / ∈ W, either x t enters into W in finite time and hence stays in it for ever or ρ t dist x t , W → 0, as t → ∞. We can now explore the global convergence of the neural network model 3.1 .To proceed, we need an inequality result about the projection operator f W and the definition of convergence for a neural network.Definition 5.3.Let x t be a solution of system ẋ F x .The system is said to be globally convergent to a set X with respect to set W if every solution x t starting at W satisfies ρ x t , X −→ 0, as t −→ ∞, 5.10 here ρ x t , X inf y∈X x − y and x 0 x 0 ∈ W. Definition 5.4.The neural network 3.1 is said to be globally convergent to a set X with respect to set W if the corresponding dynamical system is so. Theorem 5.6.The neural network 3.1 is globally convergent to the solution set Ω * with respect to set W. Proof.From Lemma 5.5, we know that 5.12 Let v x − ∇F x and u x, then 5.14 Define an energy function F x , then, differentiating this function along the solution x t of 3.1 gives us 5.15 According to 5.14 , it follows that Mathematical Problems in Engineering 11 It means the energy of F x is decreasing along any trajectory of 3.1 .By Lemma 5.1, we know the solution x t is bounded.So, F x is a Liapunov function to system 3.1 .Therefore, by LaSalle's invariant principle 25 , it follows that all trajectories of 3.1 starting at W will converge to the largest invariant subset Σ of set E like However, it can be guaranteed from 5.16 that dF/dt 0 only if f W x − ∇F x − x 0, which means that x must be an equilibrium of 3.1 or, x ∈ Ω.Thus, Ω is the convergent set for all trajectories of neural network 3.1 starting at W. Noting that Theorem 4.1 tells us that Ω * Ω and hence, Theorem 5.6 is proved to be true then. Up to now, we have demonstrated that the proposed neural network 3.1 is a promising neural network model both in implementable construction sense and in theoretic convergence sense for solving nonlinear fractional programming problems and linear fractional programming problems with bound constraints.Certainly, it is also important to simulate the network's effectiveness by numerical experiment to test its performance in practice.In next section, we will focus our attention on handling illustrative examples to reach this goal. Typical Application Problems This section contributes to some typical problems from various branches of human activity, especially in economics and engineering, that can be formulated as fractional programming. We choose three problems from information theory, optical processing of information and macroeconomic planning to identify the various applications of fractional programming. Information Theory For calculating maximum transmission rate in an information channel Meister and Oettli 26 , Aggarwal and Sharma 27 employed the fractional programming described briefly as follows. 
Consider a constant and discrete transmission channel with m input symbols and n output symbols, characterized by a transition matrix P p ij , i 1, . . ., m, p ij ≥ 0, i p ij 1, where p ij represents the probability of getting the symbol i at the output subject to the constraint that the input symbol was j.The probability distribution function of the inputs is denoted by x x j , and obviously, x j ≥ 0, j x j 1. Define the transmission rate of the channel as: T x i j x j p ij log p ij / k x k p ik j t j x j . 6.1 The relative capacity of the channel is defined by the maximum of T x , and we get the following fractional programming problem: With the notations: Optical Processing of Information In some physics problems, fractional programming can also be applied.In spectral filters for the detection of quadratic law for infrared radiation, the problem of maximizing the signalto-noise ration appears.This means to maximize the filter function φ x a x 2 x Bx β 6.5 on the domain S {x ∈ R n , 0 ≤ x i ≤ 1, i 1, . . ., n} in which a and β are strict positive vector, and constant, respectively, B is a symmetric and positive definite matrix, a x represents the input signal, and x Bx β represents the variance in the background signal.The domain of the feasible solutions S illustrates the fact that the filter cannot transmit more than 100% and less than 0% of the total energy.The optical filtering problems are very important in today's information technology, especially in coherent light applications, and optically based computers have already been built. Macroeconomic Planning One of the most significant applications of fractional programming is that of dynamic modeling of macroeconomic planning using the input-output method.Let Y t be the national income created in year t.Obviously, Y t i Y i t .If we denote by C ik t the consumption, in branch k, of goods of type i that were created in branch i and by I ik the part of the national income created in branch i and allocated to investment in branch k, then the following repartition equation applies to the national income created in branch i: Mathematical Problems in Engineering 13 The increase of the national income in branch k is function of the investment made in this branch where I k i I ik .In these conditions, the macroeconomic planning leads to maximize the increase rate of the national income: Illustrative Examples We give some computational examples as simulation experiment to show the proposed network's good performance. Example 7.1.Consider the following linear fractional programming: This problem has an exact solution x * 0, 0 T with the optimal value F x * 1/3.The gradient of F x can be expressed as and pay attention to 7.2 , we get 7.4 The dynamical systems are given by 7.5 Various combinations of 7.5 formulate the proposed neural network model 3.1 to this problem.Conducted on MATLAB 7.0., by ODE 23 solver, the simulation results are carried out and the transient behaviors of the neural trajectories x 1 , x 2 starting at x 0 0.4, 1 T , which is in the feasible region W, are shown in Figure 3.It can be seen visibly from the figure that the proposed neural network converges to the exact solution very soon.Also, according to b of Theorem 5.2, the solution may be searched from outside of the feasible region.Figure 4 shows this by presenting how the solution of this problem is located by the proposed neural trajectories from the initial point x 0 0.5, 3 T which is not in W. 
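Since the objective of Example 7.1 did not survive extraction, a reader can still reproduce the qualitative behavior reported in Figures 3 and 4 with any pseudoconvex fractional objective on a box. The sketch below integrates the network (as reconstructed earlier) by forward Euler for one such hypothetical objective, g(x)/h(x) with g convex and positive and h affine and positive on the box, and checks that F decreases monotonically along the trajectory, as the Lyapunov argument of Theorem 5.6 predicts. The objective, box, step size, and starting point are illustrative choices, not the paper's.

```python
import numpy as np

a = np.array([0.0, 0.0]); b = np.array([2.0, 2.0])            # interval constraints a <= x <= b

def g(x): return (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2 + 1.0   # convex and positive
def h(x): return x[0] + x[1] + 5.0                             # affine (hence concave) and positive on the box
def F(x): return g(x) / h(x)                                   # pseudoconvex fractional objective

def grad_F(x):
    gg = np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 1.0)])    # gradient of g
    gh = np.array([1.0, 1.0])                                  # gradient of h
    return (gg * h(x) - g(x) * gh) / h(x) ** 2                 # quotient rule

def project(x): return np.minimum(np.maximum(x, a), b)         # f_W

x = project(np.array([1.8, 0.2]))                              # feasible initial state x(0) = f_W(x0)
dt, values = 0.01, []
for _ in range(3000):                                          # forward Euler for dx/dt = -x + f_W(x - grad F(x))
    values.append(F(x))
    x = x + dt * (-x + project(x - grad_F(x)))

print("final state:", np.round(x, 3))
print("objective decreased monotonically:",
      all(v2 <= v1 + 1e-12 for v1, v2 in zip(values, values[1:])))
```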
Example 7.2.Consider the following nonlinear fractional programming: 7.6 This problem has an exact solution x * 1, 1 T with the optimal value F x * 1/13.The gradient of F x can be expressed as The dynamical systems are given by Figure 6 presents a trajectory from outside of W, here 4, 3 , it can be seen clearly from this that the solution of this problem is searched by the proposed neural trajectory soon. Conclusions In this paper, we have proposed a neural network model for solving nonlinear fractional programming problems with interval constraints.The network is governed by a system of differential equations with a projection method. The stability of the proposed neural network has been demonstrated to have global convergence with respect to the problem's feasible set.As it is known, the existing neural network models with penalty function method for solving nonlinear programming problems may fail to find the exact solution of the problems.The new model has overcome this stability defect appearing in all penalty-function-based models.Certainly, the network presented here can perform well in the sense of real-time computation which, in the time elapsing sense, is also superior to the classical algorithms.Finally, numerical simulation results demonstrate further that the new model can act both effectively and reliably on the purpose of locating the involved problem's solutions. Figure 1 : Figure 1: The activation function f i x i of the neural network model 3.1 . Figure 2 : Figure 2: Functional block diagram of the neural network model 3.1 . 2 Figure 3 : Figure 3: Transient behaviors of neural trajectories x 1 , x 2 from the inside of W. 2 Figure 4 : Figure 4: Transient behaviors of neural trajectories x 1 , x 2 from the outside of W. on MATLAB 7.0., by ODE 23 solver, the transient behaviors of the neural trajectories x 1 , x 2 from inside of the feasible region W, here x 0 2, 3 , are depicted in Figure5which shows the rapid convergence of the proposed neural network. 2 tFigure 6 : Figure 6: Transient behaviors of neural trajectories x 1 , x 2 from the outside of W. Charnse et al. gave a different method which transformed the fractional interval problem into an equivalent problem like 2.1 by using the generalized inverse of A, and the explicit solutions were followed then.Also, B ühler 19 transformed the problem into another equivalent one of the same format, to which Mathematical Problems in Engineering he associated a linear parametric program used to obtain solution for the original interval programming problem. Theorem 4.1.The RNN model 3.1 is complete, that is, Ω * Ω e .Proof.See 20, Lemma 11.4.1 .Let F x : R n → R be a differentiable pseudoconvex function on an open set Y ⊆ R n , and W ⊆ Y any given nonempty and convex set.Then x * is an optimal solution to the problem of minimizing F x subject to x ∈ W if and only if x − x * T ∇F x * ≥ 0 for all x ∈ W. I ik t 6.8 subject to the constraints C k t ≥ max C k , C k 0 , where C k t i C ik t , I k 0 ≤ I k t ≤ I k max , and C k represents minimum consumption attributed to branch k whereas I k max is the maximum level of investments for branch k. Transient behaviors of neural trajectories x 1 , x 2 from the inside of W.
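Returning to the information-theory application of Section 6: the garbled inline formula for the transmission rate appears to be the mutual information of the channel divided by the average symbol duration, T(x) = Σ_i Σ_j x_j p_ij log(p_ij / Σ_k x_k p_ik) / Σ_j t_j x_j, the classical fractional objective attributed to Meister and Oettli. The evaluation sketch below uses that reading; the reconstructed formula, the example channel matrix, and the symbol durations are all assumptions made for illustration.

```python
import numpy as np

def transmission_rate(x, P, t):
    """Assumed reading of T(x): mutual information (nats) per average symbol duration.

    P[i, j] = probability of output i given input j (columns sum to 1);
    x[j]    = input probability distribution; t[j] = duration (cost) of input symbol j.
    """
    q = P @ x                                     # output distribution q_i = sum_k x_k p_ik
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(P > 0, P / q[:, None], 1.0)
    info = np.sum(x[None, :] * P * np.log(ratio))
    return info / np.dot(t, x)

# Hypothetical binary-input, binary-output channel
P = np.array([[0.9, 0.2],
              [0.1, 0.8]])
x = np.array([0.5, 0.5])
t = np.array([1.0, 2.0])
print(transmission_rate(x, P, t))
```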
Missouri-Iowa Science Cooperative (Science Co-Op): Rural Missouri-Iowa Science Cooperative (Science Co-Op): Rural Schools-Urban Universities Collaborative Project. Schools-Urban Universities Collaborative Project. There is a dearth of studies in science education that are both comprehensive and focused on rural schools. Thus, this brief is in the form of a research report on the impact of an externally funded, five-year professional development project. The project involved approximately 1500 teachers on the student achievement of approximately 20,000 K-6 students in 36 small, rural Midwest school districts. Larry G. Enochs, Research Introduction Pressure on schools to address waning student interest and poor achievement in science, technology, engineering, and mathematics (STEM) has continued unabated since the publication of A Nation at Risk (1983), Science for All Americans and Benchmarks for Science Literacy (American Association for the Advancement of Science [AAAS], 1989[AAAS], , 1993)), and the National Science Education Standards (NSES, National Research Council [NRC], 1996).The TIMSS report (International Association for the Evaluation of Educational Achievement, 2000) and the Program for International Student Assessment (PISA, Organization for Economic Co-operation and Development [OECD], 2006) results substantiated concerns that US students are falling behind students in other industrialized countries.These mounting concerns ultimately led in part to the passage of the No Child Left Behind Act of 2001 (NCLB, 2002), which now requires the annual assessment of students' performance in language arts, mathematics, and science. The current call for reform in science education led to significant funding from the National Science Foundation (NSF) for "systemic change" projects at the state, urban, and local levels.These initiatives were focused primarily on (1) high-quality professional development (PD) of teachers' content and pedagogical content knowledge and (2) the availability and utilization of high-quality instructional resources, assuming that these would lead to (3) improved inquiry-based teaching practices translating into (4) improved student performance.Many projects focused on urban and suburban systems.However, the Science Co-op Project focused on under-represented, underserved, rural, isolated school districts and elementary and middle school science programs.This project assumed that success would be based as much on good engineering in designing solutions that addressed the available resources and local constraints as much as on good science.The project title reflects a basic metaphor for the design and problem solution-farm cooperatives-a historical approach used in rural America to face the economic and political demands placed on small farmers.This brief report provides insights into the design and results of the four factors in the model-PD, resources, classroom practices, and student achievement (see Shymansky, Annetta, Everett, & Yore, 2008, for a more detailed report). 
Context
Systemic change requires serious consideration of the system and subsystems involved. In the case of the Science Co-op Project, this meant two state education agencies (Iowa Department of Education and Missouri Department of Elementary and Secondary Education), 36 school districts (25 in Iowa and 11 in Missouri), about 1,500 teachers, and approximately 20,000 students spread over 40,000 square miles. The enormity and complexity of the project are partially reflected in these numbers and further complicated by the fact that Iowa does not have an official statewide science curriculum and assessment program while Missouri has both. Historically, Iowa ranks amongst the leaders in the USA for literacy and science achievement while Missouri ranks below average in both.

The target school districts were small and geographically isolated, and many faced significant economic pressures leading to unexpectedly high attrition among school administrators and teachers. Furthermore, this project focused on consolidated school districts that are ferociously independent. These differences not only encouraged diversity and autonomy at the school district, school, and classroom levels but also contributed to the challenges of effecting systemic reform. Science Co-op attempted to address these concerns with a design that incorporated a cascading leadership model that gradually moved leadership from a project-centered team to a local leadership team of advocates, coaches, and administrators in each school district across the five years of the project. Local PD activities were supplemented by regional facilitators in face-to-face meetings and regional electronic workshops and presentations via interactive television (ITV). The instructional changes involved moving toward a constructivist-oriented, learning cycles teaching approach, utilizing NSF-funded curriculum materials (FOSS, STC, Insights, combinations of modular and textbook programs, local units, etc.) and the development of local curricular supplements, resource people, and assessment strategies.

The consistent features across all subsystems in the Science Co-op Project were the NSES, children's misconceptions, and elements of constructivist-oriented inquiry teaching (learning cycle). All teachers were required to develop teaching resource binders (TRBs) for all science units in their grade-level teaching assignment that adapted the resources to local conditions and their students. The TRBs contained connections between the unit's objectives, state benchmarks or NSES content, inquiry, and social context standards and adaptations of available curriculum resources and programs.

Data Collection and Analysis
Formative and summative evaluations were applied to the professional development experiences, resources, teacher perceptions, classroom observations, and student performance. Some evaluation data were collected annually, while others were collected biannually. Experienced test constructors developed the questionnaires, tests, and protocols used in the project, and observers were certified by common training and calibration workshops on an annual basis (Horizon Research Inc.
[HRI]; see horizonresearch.com/LSC/ for instruments and complete description of projects). The quality, validity, and reliability of these data varied within reasonable limits (see Shymansky et al., 2008, for a complete report). Since instruction and learning effectiveness identified in small rural districts could be associated with a specific teacher, all analyses (descriptive statistics, analysis of variance, t-tests) were restricted to the project level and were based on random samples of PD activities, questionnaires, tests, telephone interviews, and classroom visits.

Results
Random samples of PD activities (5 to 8 per year) were observed across the project's term using HRI scales for the individual categories; capsule ratings indicated that these activities were judged to be high quality, rated from accomplished, effective PD to exemplary PD. Random samples of 10 teachers interviewed each year confirmed these claims. By the project's end, 583 (46%) of the 1,269-member target, "steady-state" teacher population had received more than 129 hours of professional development (compared to 13% for all LSCs). There was a teacher turnover rate of 25%, a principal turnover rate of 56%, and a superintendent turnover rate of 67% over the five years. All districts and schools achieved the project's objective of 14 inquiry-based units in K-6, with very few not having 2 in each grade level. Surveys of 300 teachers randomly selected by HRI at the start (2000) and then in the final year (2005) of the project suggest that teachers on the whole were teaching more lessons per week (3.3 vs. 3.0) but on fewer topics annually (4.9 vs. 3.9) for more minutes per week (120 vs. 114) during the school year. These results are consistent with the less-coverage-of-topics/more-depth-of-consideration theme promoted in the NSES (1996).

The quality of classroom practice was tracked by observing random samples of 16 teachers identified by HRI on a biannual basis. These data indicated improvement in all categories (5-point scale: not reflective to extremely reflective of best practices) and in the capsule rating (8-point scale: ineffective instruction/passive learning to exemplary instruction) of the HRI Classroom Observation Protocol (Table 1). ANOVA and pair-wise t-tests of the means for the three years revealed significant main effects and differences between the successive ratings, with the greatest differences occurring between the 2000 (baseline) and 2005 (post-project) ratings.

Students' science performance was judged by their perceptions of science instruction and their content test scores. Grade 3 and Grade 6 student responses to 5-point scale (strongly disagree to strongly agree) items on two forms of the Student Perceptions Of Classroom Climate (SPOCC) were positive or slightly more positive at the end of the project than at the start of the project. The "use of my ideas," "family interest," and "attitude toward science" subscales, areas of major focus in the interactive-constructivist learning cycle and the adaptation strategy used in the project, were significantly higher for Grade 6 girls-a point at which girls (and even many boys) often lose interest in science.
The cut-off scores on the Missouri Assessment Program (MAP-Science) and the Iowa Test of Basic Skills (ITBS-Science) were used to evaluate the science achievement of students in Missouri and Iowa Science Co-op schools, respectively. The figures reported here are the percentages of students classified, against these externally set cut-off scores, as having achieved a proficient or advanced level of understanding of the tested standards. The MAP and ITBS data indicate that the percentage of Grade 3 and Grade 7 students achieving proficient or advanced performance levels in 2005 exceeded the 2000 cohort by 21% and 10%, respectively, in Missouri and by 9% and 3%, respectively, in Iowa.

Closing Remarks
The Science Co-op's successes are not only in the results reported here, but are also found in its impact on (a) science instruction and learning for future students in these rural school districts and (b) the procedural solutions to providing PD to isolated teachers and accessing resources to implement the NSES teaching and program opportunity standards for all children. The legacy of passionate, well-educated advocates and ongoing leadership for science education (105 teachers achieved master's degrees in science education during the project) is highly valued and much needed in rural America. The hybrid delivery system for PD, consisting of IT applications, community-university partnerships, and cascading leadership, was implemented using existing technologies and proven models, and its practical value has been established. The co-op solutions to resources in financially challenged districts-where teachers set up sharing and delivery systems for neighboring districts and rental systems involving a state retired teachers association and area education agencies-were examples of rural ingenuity. Furthermore, the same collaborative spirit was found in how regional clusters of districts networked and shared teachers and local resource people from rural industries and government agencies to enhance many PD activities. We celebrate these schools' and teachers' successes and believe they can be replicated in other rural systems and subsystems.

Table 1. Means and standard deviations for classroom observations (N = 16 per year).
v3-fos-license
2017-07-12T17:39:47.318Z
2015-05-01T00:00:00.000
12783565
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://ejnmmiphys.springeropen.com/track/pdf/10.1186/s40658-016-0171-2", "pdf_hash": "273214810472f1694598d8c05a6adf6cc99c4994", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:483", "s2fieldsofstudy": [ "Engineering", "Medicine" ], "sha1": "273214810472f1694598d8c05a6adf6cc99c4994", "year": 2017 }
pes2o/s2orc
Quantitative myocardial blood flow imaging with integrated time-of-flight PET-MR Background The use of integrated PET-MR offers new opportunities for comprehensive assessment of cardiac morphology and function. However, little is known on the quantitative accuracy of cardiac PET imaging with integrated time-of-flight PET-MR. The aim of the present work was to validate the GE Signa PET-MR scanner for quantitative cardiac PET perfusion imaging. Eleven patients (nine male; mean age 59 years; range 46–74 years) with known or suspected coronary artery disease underwent 15O-water PET scans at rest and during adenosine-induced hyperaemia on a GE Discovery ST PET-CT and a GE Signa PET-MR scanner. PET-MR images were reconstructed using settings recommended by the manufacturer, including time-of-flight (TOF). Data were analysed semi-automatically using Cardiac VUer software, resulting in both parametric myocardial blood flow (MBF) images and segment-based MBF values. Correlation and agreement between PET-CT-based and PET-MR-based MBF values for all three coronary artery territories were assessed using regression analysis and intra-class correlation coefficients (ICC). In addition to the cardiac PET-MR reconstruction protocol as recommended by the manufacturer, comparisons were made using a PET-CT resolution-matched reconstruction protocol both without and with TOF to assess the effect of time-of-flight and reconstruction parameters on quantitative MBF values. Results Stress MBF data from one patient was excluded due to movement during the PET-CT scanning. Mean MBF values at rest and stress were (0.92 ± 0.12) and (2.74 ± 1.37) mL/g/min for PET-CT and (0.90 ± 0.23) and (2.65 ± 1.15) mL/g/min for PET-MR (p = 0.33 and p = 0.74). ICC between PET-CT-based and PET-MR-based regional MBF was 0.98. Image quality was improved with PET-MR as compared to PET-CT. ICC between PET-MR-based regional MBF with and without TOF and using different filter and reconstruction settings was 1.00. Conclusions PET-MR-based MBF values correlated well with PET-CT-based MBF values and the parametric PET-MR images were excellent. TOF and reconstruction settings had little impact on MBF values. Background Several imaging modalities are being used in assessment of myocardial perfusion and in detection of coronary artery disease (CAD). Unlike many other modalities and techniques, PET has the ability to measure myocardial blood flow (MBF) in absolute terms. The added value of quantitative MBF over qualitative myocardial perfusion imaging has been shown in several studies [1][2][3][4][5]. 15 O-water PET is considered to be the gold standard for non-invasive quantitative measurements of MBF [6,7]. The short half-life of 15 O (2 min) allows for measurement of both rest and hyperaemic MBF in less than half an hour. However, 15 O-water is freely diffusible and is not, like other PET or SPECT perfusion tracers, trapped in the myocardium, thus making assessment of the left ventricular volumes and function technically very challenging. On the other hand, cardiac MRI has become the gold standard in assessment of myocardial volume, myocardial mass and ventricular function and is also used for tissue characterisation and vascular flow measurements. Recently, integrated PET-MRI systems have become available, which allow for MBF measurements with PET and cardiac MRI simultaneously. 
Cardiac PET-MRI can give improved functional and morphological information (size, regional and global cardiac function, ejection fraction, stroke volume, intravascular flow measurements, tissue characterisation, etc.) compared to PET-CT or PET alone. Myocardial perfusion can be quantified with MRI, but it is technically demanding and although the coronary flow reserve (CFR) seems comparable between PET and MR, the absolute MBF values from PET and MR are only weakly correlated [8]. CFR is commonly used in the diagnosis of CAD; however, several studies have shown that absolute MBF at stress is superior to flow reserve in the detection of haemodynamically significant CAD [3,[9][10][11]. Combining MBF quantified with PET and functional and morphological information obtained with MRI is promising and will allow for a more comprehensive assessment in cardiac disease in a single patient visit. In addition, radiation doses can be reduced significantly because no CT is needed for attenuation correction of PET data. However, dynamic scans with short-lived tracers such as 15 O-water are among the biggest challenges to PET systems, because of the combination of very high count rates immediately after injection when all of the injected radioactivity is inside the field of view (FOV) of the scanner, and very low count rates at the end of the scan because of the near homogeneous distribution in the body and the passing of three radioactive half-lives. In addition, the larger axial FOV and smaller detector ring diameter compared to PET-CT result in a higher sensitivity, and hence higher count rates which presents a challenge for count rate linearity. These also result in a larger fraction of scattered radiation, which is further amplified by the presence of coils inside the FOV. Furthermore, attenuation correction based on MRI (MRAC) is still challenging [12,13], and little is known on the quantitative accuracy of cardiac perfusion PET imaging with a PET-MR scanner. Hence, the performance of the PET systems in the new PET-MR scanners in relation to the measurement of MBF needs to be validated. The aim of the present study was to validate a silicon photomultiplier (SiPM)based time-of-flight (TOF) capable PET-MR scanner for quantitative cardiac PET imaging using 15 O-water, by comparison to routine clinical PET-CT data. In addition to the cardiac PET-MR reconstruction protocol as recommended by the manufacturer, comparisons were made using a PET-CT resolution-matched reconstruction protocol both without and with TOF to assess the effect of time-offlight and reconstruction parameters on quantitative MBF values. Scanners PET-CT scans were acquired on a Discovery ST PET-CT scanner (GE Healthcare, Waukesha). This scanner is equipped with 24 rings of 6 × 6 × 30 mm BGO detectors grouped in blocks of 6 × 6 crystals coupled to a single position-sensitive photomultiplier tube (PMT). The scanner produces 47 image slices with a slice thickness of 3.27 mm. The transaxial and axial FOV of the scanner are 70 and 15.7 cm, respectively. The system sensitivity according to the National Electrical Manufacturers Association NU-2 2007 standard is 9.1 cps/kBq [14]. PET-MR scans were acquired on a Signa PET-MR scanner (GE Healthcare, Waukesha). This scanner is equipped with 45 rings of 3.95 × 5.3 × 25 mm LYSO detectors grouped in blocks of 4 × 3 crystals coupled to 3 × 2 silicon photomultipliers (SiPM) each. SiPM gains are individually adjusted based on continuous temperature measurements to provide constant scanner sensitivity. 
The transaxial and axial PET FOV of the scanner are 60 and 25 cm, respectively. System sensitivity is 23 cps/kBq, and the scanner is capable of TOF-PET with a time resolution of circa 370 ps (manufacturer's specifications and authors' NEMA measurements). Phantom study In order to establish which PET-MR reconstruction protocol resulted in images that were most comparable to our clinical routine PET-CT reconstructions, a NEMA image quality phantom with six fillable spheres (diameter 10, 13, 17, 22, 28 and 37 mm) was scanned in both scanners. The background of the phantom was filled with 20 MBq 18 F and the spheres with a 10 times higher radioactivity concentration than the background, and the phantom was scanned on the Discovery PET-CT and the Signa PET-MR for 15 min each. PET-CT images were reconstructed using our clinical routine reconstruction parameters: ordered subsets expectation maximisation (OSEM) with 2 iterations, 21 subsets, and a 4.3 mm Gaussian post-filter. PET-CT attenuation correction was based on a low-dose CT scan. PET-MR images were reconstructed using OSEM with various numbers of iterations and subsets, different post filters, as well as without and with the use of TOF information. PET-MR attenuation correction was based on a built-in CT-based attenuation template of the phantom. One-centimeter diameter spherical volumes of interest (VOI) were automatically drawn over the centre of each sphere, and recovery for each sphere was calculated by dividing the measured radioactivity concentration to the known true radioactivity concentration. The PET-MR reconstruction method that best matched the PET-CT images was determined as the method with the smallest sum of squared residuals between PET-CT and PET-MR recovery coefficients for the three smallest spheres. Subjects Eleven patients (nine male; mean age 59 years; range 46-74 years) participated in this prospective study. The patients had known or suspected CAD with intermediate pretest probability of obstructive coronary disease (20-84% clinical pre-test probability) according to ESC Guidelines [15], and were referred for a 15 O-water PET-CT study for evaluation of MBF. Written informed consent was obtained from all subjects, and the study was performed with permission from the local Radiation Ethics Committee and the Regional Board of Medical Ethics in Uppsala and in accordance with the declaration of Helsinki. Scan procedure The subjects underwent 15 O-water PET scans at rest and during adenosine-induced hyperaemia on both a GE Discovery ST PET-CT and a GE Signa PET-MR scanner on the same day (nine subjects) or within 4 days (two subjects). The radiation dose from the clinical PET-CT scan was approximately 1.8 mSv and the radiation dose from the PET-MR scan was approximately 0.8 mSv. PET-CT: A 6-min dynamic PET perfusion scan during rest was started simultaneously with the administration of 400 MBq of 15 O-water. After a 20-30-min delay to allow for decay of the remaining activity following the first injection, an identical PET scan was performed during adenosine-induced hyperaemia. Adenosine infusion 140 μg × kg −1 × min −1 was started 2 min prior to the stress scan and continued during the 6-min scan time. To correct for photon attenuation, a single low-dose respiration-averaged CT scan during normal breathing was acquired before the resting PET scan (140 kV, 10 mAs, rotation time 1 s, pitch 0.562). 
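The reconstruction-matching criterion described in the phantom study above can be sketched as follows; all recovery values and reconstruction labels in this snippet are hypothetical placeholders rather than measured data.

```python
# Illustrative sketch only: hypothetical recovery coefficients, not measured data.
# Recovery = measured activity concentration / true activity concentration,
# evaluated in 1 cm spherical VOIs; matching uses the three smallest spheres (10, 13, 17 mm).

petct_reference = {10: 0.35, 13: 0.55, 17: 0.72}   # assumed PET-CT recovery values

petmr_candidates = {                                # assumed PET-MR reconstructions
    "OSEM 2i/28s, 6 mm filter, no TOF": {10: 0.36, 13: 0.54, 17: 0.73},
    "OSEM 3i/28s, 8 mm filter, no TOF": {10: 0.31, 13: 0.50, 17: 0.69},
    "OSEM 2i/28s, 6 mm filter, TOF":    {10: 0.41, 13: 0.60, 17: 0.78},
}

def sum_squared_residuals(candidate, reference):
    """Sum of squared differences in recovery over the reference spheres."""
    return sum((candidate[d] - reference[d]) ** 2 for d in reference)

# Select the reconstruction whose recovery best matches the PET-CT reference.
best_match = min(petmr_candidates,
                 key=lambda name: sum_squared_residuals(petmr_candidates[name], petct_reference))
print("Best-matching PET-MR reconstruction:", best_match)
```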
PET-CT images were reconstructed using OSEM (2 iterations, 21 subsets), applying all appropriate corrections such as for random coincidences, dead time, normalisation, and scatter, using a transaxial FOV of 50 cm and a 128 × 128 image matrix. PET-MR: A 6-min dynamic PET perfusion scan during rest was started simultaneously with the administration of 400 MBq of 15 O-water. After a 20-30-min delay following the first injection, an identical PET scan was performed during adenosine-induced hyperaemia as described above. Functional MR-imaging was obtained between the rest and stress PET-scans with a FIESTA (true FISP) cine sequence covering the left ventricular myocardium from apex to base in 8-mm-thick short-axis slices with 2.0 mm gap. To correct for photon attenuation, a two-point Dixon sequence during breath-hold was acquired during the resting PET scan and during the hyperaemic PET scan. This sequence enables segmentation of fat and water tissue, lungs and air, which form the basis for creation of the MR-based attenuation map. The arms, which are not included in the MR images, are added to the attenuation map from non-attenuation corrected TOF-PET data [16]. PET-MR images were reconstructed using OSEM into 128 × 128 pixel images and a FOV of 53.4 cm, using the cardiac protocol as recommended by the manufacturer (from here on referred to as std). To assess the effect of TOF and reconstruction settings on MBF values, PET-MR data were also reconstructed using the PET-CT resolution-matched protocol based on the phantom study, both without and with TOF. Reconstruction parameters are summarised in Table 1. All appropriate corrections such as for random coincidences, dead time, and normalisation were applied in all reconstructions. Data analysis The PET data was analysed semi-automatically using Cardiac VUer software, resulting in both parametric MBF images and segment-based MBF values for the entire left ventricle and for three regions corresponding to the coronary artery territories [17]. Coronary flow reserve (CFR) was defined as stress perfusion divided by rest perfusion and was calculated for each segment. The calculation of MBF was based on a onetissue compartment model with an input function from arterial cluster analysis comprising left atrial and ventricular cavities and ascending aorta and with correction for spillover from left and right ventricular cavities into the myocardium: Here, C PET (t) is the radioactivity concentration as measured in a voxel or region by PET, PTF is the perfusable tissue fraction, V T is the distribution volume of water, here fixed to 0.91 mL/g. C A (t) and C RV (t) are the radioactivity concentrations in arterial blood and in the right ventricular cavity, respectively, and V LV and V RV are the left-and right-ventricular spillover fractions. Parametric images were computed using a basis-function implementation of this model [17], whereas regional values were calculated using non-linear regression of Eq. 1. For PET-MR data, the cluster analysis and parametric image construction in Cardiac VUer were only performed for the standard clinical reconstruction protocol. For assessment of the regional MBF values for the other reconstruction methods, the blood vessel and regional myocardial VOIs resulting from the standard clinical analysis were projected onto the resolution-matched images both without and with TOF. Parametric MBF images from PET-CT and from PET-MR were compared visually. 
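For reference, the one-tissue compartment model described above can be written out as follows; this is a sketch of the standard 15O-water formulation with the spillover terms defined in the text, and the exact operational equation implemented in Cardiac VUer may differ in detail.

```latex
% Sketch of the single-tissue compartment model for 15O-water (Eq. 1), using the
% terms defined in the text; \otimes denotes convolution with a monoexponential
% washout kernel. CFR follows as the ratio of stress to rest MBF.
C_{\mathrm{PET}}(t) = \mathrm{PTF}\cdot \mathrm{MBF}\cdot C_{A}(t)\otimes e^{-\frac{\mathrm{MBF}}{V_{T}}\,t}
                      + V_{LV}\,C_{A}(t) + V_{RV}\,C_{RV}(t),
\qquad \mathrm{CFR} = \frac{\mathrm{MBF}_{\mathrm{stress}}}{\mathrm{MBF}_{\mathrm{rest}}}
```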
To verify the count rate linearity of PET-MR during the first pass of the radioactivity through the PET FOV, the area under the time-activity curves from the arterial input functions during the first minute of the scans was compared for PET-CT and PET-MR. For this comparison, the arterial time-activity curves were normalised to their mean radioactivity concentrations during the last 4 min of the scan to account for possible small differences in amount of injected 15 O-water. The analysis of functional MR images was performed on a GE AW workstation using commercially available software (CardiacVX). The endocardial contour was semiautomatically traced and manually adjusted when needed. The ejection fraction was calculated with the software using Simpson's rule. Statistical analyses Continuous variables are presented as mean values ± standard deviation (SD). Comparison of the hemodynamic data, the global MBF and CFR values and the area under the time-activity curves from the input functions was performed by a Wilcoxon signedrank test. Correlation and agreement between PET-CT and PET-MR-based regional MBF and CFR values were assessed using Deming regression and Bland-Altman analysis and intra-class correlation coefficients. A two-sided p value of less than 0.05 was considered significant. Statistical analyses were performed using SPSS (version 21.0). Results Recovery coefficients of the PET-CT images reconstructed using our clinical routine protocol, as well as for a number of PET-MR reconstructions, are given in Fig. 1. Based on this data, a PET-MR reconstruction protocol without time of flight, using 2 iterations, 28 subsets and a 6 mm post-filter, resulted in recovery coefficients that were most similar to those for PET-CT. Magnetic resonance imaging showed normal global systolic function in all subjects, with a mean ejection fraction (EF) of 65%; range 57-72%. MBF data from one patient was excluded because of movement during the PET-CT scan. Systolic blood pressure, heart rate and rate pressure product (RPP) were comparable between the PET-CT and the PET-MR scans as shown in Table 2. An example of typical time-activity curves of the arterial input function at rest and at stress in PET-CT and in PET-MR is shown in Fig. 2. The mean area under the curves during the first 1 min for all patients (±SD) was 49.5 ± 9.6 kBq/ml × min for PET-CT and 48.0 ± 8.3 kBq/ml × min for PET-MR (p = 0.12). Global mean (±SD) MBF values at rest and stress were 0.92 ± 0.12 and 2.74 ± 1.37 mL/g/min for PET-CT and 0.90 ± 0.23 and 2.65 ± 1.15 mL/g/min for PET-MR, respectively (p = 0.33 and p = 0.74). Global mean (±SD) CFR values were 2.97 ± 1.31 for PET-CT and 3.05 ± 1.23 for PET-MR (p = 0.65). The relations between PET-MR-based and PET-CT-based regional MBF and CFR are shown in Fig. 3. Intra-class correlation coefficients (ICC) between PET-CT and PET-MR regional MBF and CFR were 0.98 and 0.89, respectively. The agreement between PET-MR-based and PET-CT-based regional MBF at rest, at rest corrected for rate-pressure-product (RPP) and at stress is shown in Fig. 4. Intra-class correlation coefficients (ICC) between PET-CT-based and PET-MRbased regional MBF at rest, corrected rest and stress were 0.76, 0.93 and 0.96, respectively. The image quality of parametric MBF images, as shown in Fig. 5, was excellent for PET-MR and in most cases, superior to the PET-CT images, based on visual assessment. 
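To make the agreement statistics described in the Statistical analyses subsection concrete, a small sketch of the paired comparison (Wilcoxon signed-rank test) and the Bland-Altman limits of agreement is given below; the MBF values are hypothetical placeholders, not study data.

```python
import numpy as np
from scipy import stats

# Hypothetical paired regional stress MBF values in mL/g/min (illustration only).
mbf_petct = np.array([2.1, 3.4, 1.8, 2.9, 2.5, 4.0])
mbf_petmr = np.array([2.0, 3.3, 1.9, 2.8, 2.4, 3.8])

# Wilcoxon signed-rank test for paired measurements.
statistic, p_value = stats.wilcoxon(mbf_petct, mbf_petmr)

# Bland-Altman bias and 95% limits of agreement (mean difference +/- 1.96 SD).
differences = mbf_petmr - mbf_petct
bias = differences.mean()
sd = differences.std(ddof=1)
limits = (bias - 1.96 * sd, bias + 1.96 * sd)

print(f"Wilcoxon p = {p_value:.3f}")
print(f"Bland-Altman bias = {bias:.2f} mL/g/min, "
      f"limits of agreement = {limits[0]:.2f} to {limits[1]:.2f} mL/g/min")
```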
The agreement between resolution-matched PET-MR-based regional MBF with and without TOF (FX and HD) and with standard reconstruction (PET-MR std) is shown in Fig. 6. The ICC was 1.00 both between the PET-MR MBF with and without TOF and between PET-MR MBF std and FX-reconstruction.

Discussion
The present study assessed the quantitative accuracy of cardiac perfusion measurements with 15O-water in the Signa PET-MR scanner. A high correlation and agreement between PET-MR-based and PET-CT-based MBF was found. This enables the application of previously established cut-off values for MBF with 15O-water PET-CT also in PET-MR studies [10]. In 15O-water cardiac scans, the count rates are very high in early time frames, which presents a challenge for count rate linearity of the PET scanner and reliable arterial input function definition, which is essential for the calculation of MBF. We recently performed a NEMA count rate linearity test of the PET-MR scanner as part of the scanner's acceptance procedure, starting at a total amount of radioactivity of 950 MBq. This corresponds to approximately 40 kBq/ml, or 340 MBq, in the field of view of the scanner, which is similar to the maximum amount encountered during the 15O-water scans if all the activity were within the field of view during the first pass of the tracer. The measured radioactivity concentration did not deviate more than 5% from the true radioactivity concentration at any time during this test, so we are confident that the scanner behaves linearly during the scans and that the arterial input function is recovered well. This was further verified by comparing the area under the arterial input function during the first 1 min for PET-CT and PET-MR scans, which did not differ significantly.

Fig. 2 Time-activity curves of the arterial input function derived from cluster analysis comprising the left atrial and ventricular cavities and ascending aorta at rest and at stress in PET-CT and in PET-MR in a typical patient

The small differences between the MBF measurements in the PET-CT and in the PET-MR can likely be attributed to physiologic variations of myocardial blood flow and lie well within the variability of repeated measurements of 15O-water myocardial perfusion at rest and during adenosine hyperaemia as reported by Kaufman et al. [18]. The repeatability coefficients for MBF (calculated as 1.96 × SD of the differences) they reported were 0.17, 0.28 and 0.90 for global rest, global corrected rest and global adenosine stress, respectively, and the repeatability coefficients for regional MBF were 0.20-0.46 at rest and 0.41-0.59 at adenosine stress. The repeatability coefficients for MBF measured in the PET-CT and in the PET-MR in our study, as shown in Table 3, are comparable to those reported by Kaufman et al.

Although the agreement between PET-CT-based and PET-MR-based MBF was high, the small differences in MBF values could still result in different clinical decisions for the PET-CT-based and the PET-MR-based studies. Using the previously established cut-off value of 2.3 mL/g/min to decide between normal and pathological stress MBF [10], on a subject-based level, five subjects had pathological MBF (at least one segment with MBF <2.3 mL/g/min) in the PET-CT study and all of these subjects also had pathological MBF in the PET-MR-based analysis.
Five subjects had normal MBF in all segments in the PET-CT study; four of these subjects also had normal MBF in all segments in the PET-MR study, whereas one subject had reduced MBF in all segments (global MBF was 2.3 mL/g/min with PET-CT and 1.8 mL/g/min with PET-MR). This subject was one of the two subjects that underwent the PET-MR scan on a different day than the PET-CT scan (3 days later). On a segment-based level, 13 out of 30 segments had pathologically reduced MBF in the PET-CT study; 11 of these segments were also pathological with the PET-MR-based analysis, whereas 2 were normal. Seventeen segments had normal MBF in the PET-CT study, and 13 of these segments were also normal with the PET-MR-based analysis, whereas 4 segments were pathological. Altogether, in 24 out of 30 segments the PET-CT-based and PET-MR-based decisions of normal or reduced MBF agreed, and in 6 segments they did not agree. Four of these six segments that did not agree were in the two patients that underwent the PET-CT and PET-MR studies on different days. Patients were requested not to alter any medications between the PET-CT and the PET-MR scans and to refrain from caffeine during the 24 h before both PET scans, but failure to comply with this, or other physiologic reasons, rather than differences in the scanners, may have influenced the results, and we feel confident in trusting the clinical decisions based on the MBF values using the GE Signa PET-MR scanner. Considering the similar ICC values of the present PET-CT versus PET-MR comparison and the variability study by Kaufman et al., it is likely that similar differences in clinical diagnoses would have occurred if PET-CT scans had been repeated. When using a fixed cut-off value for pathological MBF, there is always a probability that patients with MBF close to this cut-off value will be diagnosed differently based on different scans, even with the relatively high reproducibility of MBF measurements.

Fig. 4 Correlation (a, c and e) and Bland-Altman plots (b, d and f) of PET-MR-based regional MBF std (clinical protocol) versus PET-CT-based regional MBF at rest (a, b), regional MBF at rest corrected for RPP (c, d) and regional MBF at adenosine stress (e, f)

CFR is commonly used in the diagnosis of CAD, although several studies have shown that absolute MBF at stress is superior to flow reserve [3,9-11]. In our study, MBF values showed better agreement between PET-CT and PET-MR than CFR values, as shown in Fig. 3. As shown in Fig. 6a, the relation between the resolution-matched PET-MR data and the PET-CT data was virtually identical to the relation between the clinical standard PET-MR data and the PET-CT data depicted in Fig. 3. Indeed, as Fig. 6b, c shows, the standard cardiac PET-MR reconstruction applying 3 iterations, 28 subsets and an 8 mm post-filter produced nearly identical MBF values to the PET-CT resolution-matched reconstruction with 2 iterations, 28 subsets and a 6 mm filter, both without (HD) and with (FX) TOF. Although this may seem counterintuitive, it is probably due to the fact that for 15O-water, MBF is based on the clearance rate, i.e., the exponential term in Eq. 1, of the tracer instead of the amplitude of the myocardial time-activity curve (TAC). Additional filtering does affect this amplitude, but not the shape of the myocardial TAC, and hence does not affect the clearance rate. This means that for 15O-water, additional filtering does not decrease MBF, whereas it would for other flow tracers.
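The insensitivity of 15O-water MBF to post-filtering can be made explicit with the model sketched earlier; this is a simplified argument assuming the filter approximately rescales the myocardial signal by a factor k without changing the shape of its time-activity curve.

```latex
% Simplified argument: scaling the myocardial TAC by k changes only the fitted
% amplitude terms (k is absorbed by PTF and the spillover fractions), while the
% clearance rate MBF / V_T recovered from the washout shape, and hence MBF, is unchanged.
k\,C_{\mathrm{PET}}(t) = (k\,\mathrm{PTF})\cdot \mathrm{MBF}\cdot C_{A}(t)\otimes e^{-\frac{\mathrm{MBF}}{V_{T}}\,t}
                         + k\,V_{LV}\,C_{A}(t) + k\,V_{RV}\,C_{RV}(t)
```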
In the PET-MR, the MRAC is still a matter of concern; the attenuation map does not differentiate between the soft tissue and bone, which can result in underestimation of the PET signal [19,20]. A recent study showed comparable relative myocardial FDG uptake in PET-MR and PET-CT images [16] but little is known on the impact of attenuation on the quantitative accuracy of cardiac perfusion in the PET-MR. For PET-CT, it has been shown that MBF can be measured accurately with 15 O-water without correcting for attenuation [21]. Errors in attenuation correction affect the amplitudes of the time-activity curves but not their shapes, and hence not the measured clearance rates. Indeed, in our study, a high agreement was found between PET-MR-based and PET-CT-based MBF, suggesting that the potential errors in MRAC have little impact on the 15 O-water MBF values. This result cannot readily be extrapolated to other tracer used for measuring MBF such as 82 Rb and 13 N-ammonia, since for those tracers MBF is determined from the uptake instead of the clearance of the tracer. TOF-PET imaging is an emerging imaging technology both for PET-CT and PET-MR. In a recent study, Mehranian et al. assessed the impact of TOF image reconstruction on PET quantification errors induced by MR-based attenuation correction in 18 F-FDG and 18 F-choline whole body PET-MR scans [22]. They showed that TOF substantially reduced artefacts and significantly improved the quantitative accuracy. In recent cardiac PET-CT studies with 13 N-ammonia and 82 Rubidium, TOF reconstruction also improved image quality and increased MBF [23,24]. In our study, we did not find any significant impact of TOF and filter and reconstruction settings on the quantitative accuracy of cardiac perfusion measurements with 15 O-water in the PET-MR. However, the parametric PET-MR MBF images with TOF were excellent and in most cases, the image quality was visually superior to the PET-CT images. We did not evaluate the effect of TOF, filter and reconstruction settings on parametric image quality, as well as in PET-CT, [24,25] TOF is expected to improve image quality and to make it possible to find smaller perfusion defects, which should be evaluated in a further study. The results of this study are depending on the specific technology of the Signa SiPM PET-MRI scanner and reconstruction methods used. Quantitative accuracy of MBF values obtained using other PET-MR scanners and tracers should be validated in a similar manner. However, with 15 O-water offering the largest challenge to PET scans in terms of count rate variations possibly with the exception of 82 Rb, we expect that the results in the present work in terms of PET performance are also valid for dynamic myocardial imaging with other tracers. Conclusions Cardiac perfusion measurements with 15 O-water can be performed accurately with the fully integrated Signa PET-MR scanner. MR-based attenuation correction, TOF and reconstruction settings have little impact on the quantitative MBF values. Cardiac PET-MRI allows for quantitative assessment of MBF combined with the superior functional and morphological information from MRI.
v3-fos-license
2016-05-12T22:15:10.714Z
2007-01-24T00:00:00.000
6520941
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0000175&type=printable", "pdf_hash": "ad9fe3a2e755fb3fcca5cdffd4384ee037f93c03", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:484", "s2fieldsofstudy": [ "Biology" ], "sha1": "0376923a454455d6df184bcf3904e2ba43b68438", "year": 2007 }
pes2o/s2orc
Evx2-Hoxd13 Intergenic Region Restricts Enhancer Association to Hoxd13 Promoter

Expression of Hox genes is tightly regulated in spatial and temporal domains. Evx2 is located next to Hoxd13 within 8 kb on the opposite DNA strand. Early in development, the pattern of Hoxd13 expression resembles that of Evx2 in limb and genital buds. After 10 dpc, however, Evx2 begins to be expressed in CNS as well. We analyzed the region responsible for these differences using ES cell techniques, and found that the intergenic region between Evx2 and Hoxd13 behaves as a boundary element that functions differentially in space and time, specifically in the development of limbs, genital bud, and brain. This boundary element comprises a large sequence spanning several kilobases that can be divided into at least two units: a constitutive boundary element, which blocks transcription regulatory influences from the chromosomal environment, and a regulatory element, which controls the function of the constitutive boundary element in time and space.

INTRODUCTION
Protein-DNA interactions on DNA cis-regulatory elements and cross-talk between these complexes underlie precise transcription regulation. Since genes are embedded in huge DNA molecules containing abundant putative cis-regulatory elements such as enhancer and promoter sequences, the interactions of cis-regulatory sequences are extremely complicated. To achieve proper transcription regulation, selection of these interactions is inevitable. Some DNA regions are believed to play roles in controlling the traffic of interactions among cis-regulatory elements.

Boundary elements divide a chromosome into independent units for transcription regulation. Mainly through genetic studies of Drosophila melanogaster, several candidate boundary elements (also known as insulator sequences or insulators) have been isolated [1-5]. Many of these elements act as components for homeotic gene regulation [1,2,4,6,7]. Enhancer activity must be well organized to achieve the proper transcription regulation of clustered homeotic genes needed for the development of proper anteroposterior segmental identity within an organism's body. Mutations within such boundary elements alter gene expression profiles by leading to misuse of these enhancer sequences and cause morphological shifts in segmental identities [2,8].

A similar phenomenon has been observed in mammalian orthologous homeotic complex genes [9,10]. Hox genes are responsible for the anterior-posterior identity of the mammalian body, as in the case of Drosophila. Misregulation of Hox genes causes morphological alterations [11-13] and can even be detrimental [14,15]. As with Drosophila, enhancer-promoter interactions in mammals also require precise organization for Hox gene expression regulation [9,10]. An enhancer that drives Hoxd11 in the cecum cannot associate with the promoter of Hoxd13, which is about 10 kb away from Hoxd11 [9,10]. Intergenic deletion of Hoxd12-Hoxd13, however, causes Hoxd13 to be expressed in the cecum in an expression pattern resembling that of Hoxd11 [9,10]. Taken together, these results indicate that the Hoxd12-Hoxd13 intergenic sequence functions as an insulator which prevents enhancer access to the promoter.
Here, we report another candidate insulator sequence in the HoxD complex-the Evx2-Hoxd13 intergenic region.We demonstrated that this fragment possesses position-effect protection as well as insulator activity, two activities that are required for a sequence to be considered as a boundary sequence.Further-more, we found that this boundary sequence functions in a tissuespecific manner, and that the regulation of tissue specificity can be separated from the boundary activity. Differential expression profile of Evx2 and Hoxd13 Hoxd13 is located within 8 kb of the Evx2 gene, which is encoded on the opposite strand of DNA from other HoxD genes (Figure 1A).Examination of 11 days post-coitus (dpc) embryos revealed that the expression profile of Evx2 is distinct from that of its neighbor, Hoxd13 (Figure 1B) [16].In younger embryos, however, the expression profiles of Evx2 and Hoxd13 are almost identical.Expression of Evx2 in central nervous system (CNS) begins at 10 dpc, being especially prominent in the isthmus (Figure 1B).Hoxd13, however, is never expressed in anterior structures throughout embryonic and fetal development (Figure 1B).We previously created mice harboring a series of Hoxd9/lacZ marker transgene insertions into the region surrounding Evx2 (Figure S1A) [14].When the transgene was positioned immediately downstream of the Evx2 poly(A) + signal (relI), lacZ expression mirrored Evx2 expression; however, when the transgene was positioned half-way between the Evx2 and Hoxd13 initiation codons (relO), lacZ expression resembled Hoxd13 (i.e., lacked CNS expression in 11 dpc embryos) (Figure S1B) [14].Enhancer activities driving the expression of these genes in CNS, digits, and genital bud are located about 250 kb downstream from Evx2 [17].Taken together, these results suggest that an enhancer blocker element (insulator) exists in the vicinity of the Evx2 promoter, and that this insulator prevents enhancer(s) from interacting with Hoxd13 in the CNS (Figure 1C) [14,18]. Molecular dissection of Evx2-Hoxd13 insulator activity Using homologous recombination in ES cells, we translocated candidate boundary sequences to the region immediately downstream of Evx2 along with the Hoxd9/lacZ reporter transgene (Figure 1A; Figure 2A, B).The resulting ES cells were injected into blastocysts to create transgenic mouse embryos.Chimeras and progeny from the chimeras were then stained for b-galactosidase activity.If the candidate boundary sequence has insulator activity, then lacZ expression pattern should resemble that of Hoxd13; otherwise, lacZ expression pattern should resemble that of Evx2 (i.e., in the CNS). From our previous observations [14], we predicted that the boundary sequence is located between the Evx2 promoter and the NsiI site, half-way between Evx2 and Hoxd13 (i.e., the relO transgene insertion site) [14] (Figure S1).Indeed, when the entire candidate sequence and reporter transgene were inserted into the relO transgene insertion site (i.e., the 5 kb fragment between XhoI site in exon1 of Evx2 and NsiI site), lacZ-staining pattern in embryos was similar to that of Hoxd13, indicating that the XhoI-NsiI (XNs) fragment blocked enhancer activity in CNS (Figure 2C).This observation guided our experimental design. 
To assess the insulator activity in more detail, we divided the XNs fragment into three overlapping segments of about 2 kb each-XhoI-BamHI (XB), EcoRI-BglII (EBg), and BamHI-BamHI (BB)-starting from the region adjacent to the Evx2 initiation codon (Figure 2B).We then prepared transgenic mice harboring each fragment along with the Hoxd9/lacZ reporter transgene.Transgenic animals harboring either EBg or BB displayed an Evx2-like expression pattern (Figure 2D, E), indicating that these two fragments failed to block interaction between the CNS enhancer and the Hoxd9/lacZ promoter (Figure 2).On the other hand, transgenic animals harboring XB displayed no transgene expression in the CNS, indicating that this fragment has insulator activity (Figure 2F; Figure S2). To assess the insulator activity of the XB fragment further, we divided the 2.5 kb fragment into three pieces (Figure 2B) and found that none of these shorter fragments showed insulator activity by blocking enhancer interactions with the reporter transgene (Figure 2G, H).This indicates that a particular length of DNA containing multiple protein binding sites dispersed within the 2.5 kb XB fragment is required for proper insulator function and that insulator activity is a consequence of complex DNAprotein interactions (Figure 2).Bell and colleagues reported that the binding sequence of the transcription factor CTCF is necessary and sufficient for enhancer insulation activity of HS4 of the chicken b-globin locus [19].However, unlike in the case of the bglobin insulator, we were unable to isolate a small DNA element that functioned as an enhancer blocker in our study system.In addition, an in silico search using TESS (http://www.cbil.upenn.edu/tess/) failed to identify any candidate association sites for CTCF. Neighboring sequence regulates Evx2-Hoxd13 insulator activity Comparison of the lacZ expression patterns of XNs-and XBtargeted transgenic mice revealed distinct differences (Figure 3A, B).As shown in Figure 2, both fragments blocked the expression of the lacZ reporter gene in the CNS.In XNs mice, we observed lacZ expression in the limbs and genital bud (i.e., resembling Hoxd13 expression), whereas in XB mice, lacZ expression was absent in the limbs and genital bud (Figure 3B).These results suggest that (1) the XB fragment is a constitutive insulator, and (2) the specificity of the blocker activity is determined by a sequence in the BamHI-NsiI site (BNs) outside of the core blocker sequence, XhoI-BamHI (Figure S3).The sequence in the BNs fragment may counteract or cancel the blocking activity of the XB fragment in limbs and genital bud, therefore, posterior HoxD genes are expressed in the limbs and genital bud (Figure 3C).Thus, the blocking sequence in the Evx2-Hoxd13 system can be divided into two unitsa constitutive blocker and a blocker regulator. 
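The in silico motif screen mentioned above can be illustrated with a toy exact-match scan; the sequence and motif below are placeholders (CTCF is named for its CCCTC-binding activity), and a real search such as TESS would use position-weight matrices, allow degenerate positions, and scan both strands.

```python
# Toy illustration of a consensus scan (placeholder sequence and motif).
# A real analysis (e.g. TESS) uses position-weight matrices rather than exact matching.
def find_motif(sequence, motif):
    """Return 0-based start positions of exact motif matches on the given strand."""
    hits = []
    start = sequence.find(motif)
    while start != -1:
        hits.append(start)
        start = sequence.find(motif, start + 1)
    return hits

candidate_region = "ATGCCCTCAGGTTACCCTCAAGC"      # placeholder, not the XB fragment
core_motif = "CCCTC"                              # CTCF is named for CCCTC repeats
print(find_motif(candidate_region, core_motif))   # prints [3, 14]
```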
Methylation status of Evx2-Hoxd13 insulator sequence To gain insight into the molecular mechanisms of the insulator activity of the XB fragment, we next analyzed chromatin structures for the methylation status of DNA residues, a key means of regulating gene transcription.Indeed, methylation is thought to contribute to gene silencing, as evidenced by the high frequency of methylated residues often observed in DNA surrounding silenced promoters [20].It is probable that the methylation status of residues within functional insulator differs among organs or differs depending on a gene's transcription status, as observed in the case of the H19 imprinting system [21,22].In our system, since insulation takes place only in hindbrain and not in limbs, we examined and compared the methylation status of DNA samples from the hindbrain and forelimb of 11 dpc embryos.The samples spanned over 1,700 bp of DNA fragment within the insulator fragment, as assessed by the bisulfite method [23]. Hindbrain DNA samples contained methylated cytosine residues at relatively low to moderate frequencies (Figure 4).By contrast, forelimb DNA samples contained essentially no methylated residues (Figure 4), which is consistent with the absence of insulator activity in limbs. Lethality of targeted transgenic mice and Hoxd13 expression To further investigate the enhancer blocking activity of the XB fragment, we examined the expression patterns of Hoxd13 and Evx2 in targeted transgenic mice harboring the XB fragment.Our examination of several litters of 11.5 dpc embryos identified no homozygotic (XB/XB) targeted transgenic mice.With further analysis, we could not find the homozygous allele, even among 7.5 dpc embryos (Table 1).We did find, however, the BB/BB allele among 8.5 and 11.5 dpc embryos from one of the control lines harboring the BB targeted transgene (Table 1).Since lethality could not be segregated after more than 5 generations of outbreeding, we concluded that the lethal phenotype is closely linked to the presence of an additional copy of the XB fragment next to the Evx2 gene. To investigate the possible misregulation of Hoxd13 or Evx2 transcription resulting from insertion of the XB-containing transgene, we assessed Hoxd13 and Evx2 expression in 7 dpc embryos from the XB transgenic line by real-time PCR (Figure 5A).Three separate samples of cDNA prepared from three litters of internally bred wild-type mouse embryos and six samples of cDNA from six litters of internally bred BB mouse embryos were used as controls.We compared expression data from these controls to the Hoxd13 expression measured from eight cDNA samples prepared from 8 litters of embryos resulting from the breeding of XB heterozygote mice.While litters resulting from the breeding of wild-type and BB mice did not express Hoxd13, litters resulting from the breeding of XB mice expressed Hoxd13, although expression levels varied among the XB litters (Figure 5A).These results suggest that mice of the XB transgenic line expressed Hoxd13 prematurely, before the 7-dpc stage, and the lethality observed in these mice was due to this premature expression. 
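The quantification behind the real-time PCR comparison is not spelled out beyond normalisation to b-actin; one common approach is the comparative Ct (ddCt) method, sketched below with hypothetical Ct values that are not taken from the study.

```python
# Illustrative ddCt sketch with hypothetical Ct values; the study reports only that
# Hoxd13 signal was controlled against b-actin, not the exact quantification scheme.
def relative_expression(ct_target, ct_reference, ct_target_calibrator, ct_reference_calibrator):
    """Fold change of the target gene vs a calibrator sample, normalised to a reference gene."""
    delta_ct_sample = ct_target - ct_reference
    delta_ct_calibrator = ct_target_calibrator - ct_reference_calibrator
    return 2.0 ** -(delta_ct_sample - delta_ct_calibrator)

# Hypothetical example: Hoxd13 and b-actin Ct values in an XB litter vs a calibrator litter.
fold_change = relative_expression(ct_target=31.0, ct_reference=18.0,
                                  ct_target_calibrator=35.0, ct_reference_calibrator=18.2)
print(f"Relative Hoxd13 expression: {fold_change:.1f}-fold")
```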
Previously, we proposed that a repressive region outside of the HoxD complex is responsible for the early repression of genes in the HoxD complex, preventing the premature expression of Hoxd genes before the 7-dpc stage [14].The presence of an extra copy of the XB fragment, which acts as a constitutive boundary element when the regulatory sequence (BB fragment) is absent and when it is inserted between the repressive region and HoxD complex, may interfere with the repression from the repressive region thereby causing Hoxd genes-in this case Evx2 and Hoxd13 genes-to be expressed prematurely (Figure 5B).Taken together, these findings suggest that the XB fragment possesses protection activity against repression as well as enhancer blocker activity. XB protects against repression from chromosomal environment Exogenous genes introduced into a genome often become repressed over time by the chromosomal environment surrounding their insertion sites.Our observations indicated that insertion of the extra copy of XB fragment protects the neighboring promoters, Evx2 or Hoxd13 in this case, against repression (Figure 5), we examined whether this protection is also operative when the XB fragment is positioned randomly in the genome. We made two constructs-one containing the fluorescent protein Venus (Construct I), and the other containing the Venus marker gene bounded on both sides by the XB fragment (Construct II) (Figure 6A; for XB, see Figure 2, 3).Both of these constructs also harbored a neomycin-resistance marker to facilitate isolation of stable transformants.These constructs were introduced into NIH3T3 cells, which were then cultured in G418-containing media for one week to select neomycin-resistant colonies of randomly integrated transformed cell lines (Figure 6B). Twelve stably transformed colonies containing each construct were selected.Most of the clones contained few copies of the transgene (1-5 copies), as estimated by genomic Southern hybridization.After isolation, colonies were maintained in media lacking neomycin and maintenance of Venus fluorescence was assessed by fluorescence-activated cell sorting (FACS) (Figure 6B).After 1 month of culturing in media lacking neomycin, most clones continued to express the Venus marker, except for one clone harboring Construct II in which the transgene may have become disrupted at the beginning of the experiment (Figure 6C).6C).These results strongly suggest that the XB fragment conveys protection regardless of its position in the genome. DISCUSSION Mammalian Hox genes form a tightly packed gene cluster within the genome [24].Their genomic structure is believed to be highly correlated to the transcription mechanisms underlying their collinear expression [25].Recent studies have suggested that the initiation of Hox expression requires the DNA domains outside of the gene cluster, domains that regulate the collinear expression of Hox genes during development [14,26,27]. 
On the other hand, the tight packing structure of Hox genes makes it difficult to alter the expression profiles of these genes and to explain how each gene is influenced differently by a common regulator.Therefore, distinguishing the expression patterns of each Hox gene may require an enhancer insulator or boundary element system similar to that proposed in Drosophila melanogaster [2].We have previously presented several lines of evidence that such regulatory elements exist in the mammalian Hox complex [9,18,28].In the present study, we determined the functional sequence of one boundary element of the mouse HoxD complex and analyzed the mechanisms of its function. Tissue specificity of enhancer-blocker function The expression profiles of Evx2 and Hoxd13 genes have many aspects in common, such as initiation timing and expression in the future digit domain.However, Evx2 is strongly expressed in the CNS, while Hoxd13 is not.Interestingly, both of the enhancers that drive these genes in digits and brain are located and oriented similarly in relation to the HoxD complex, i.e., about 250 kb beyond Evx2 and Hoxd13 [17]. Several lines of evidence indicate that these differences derive from the ability of enhancers to differentially associate with different promoters.In the present study, we demonstrated that the functional region underlying insulator activity in our system comprises two units-a 2.5 kb fragment having constitutive blocker activity and a functional sequence regulating tissue specificity of the insulator activity.The detailed mechanistic basis of the interaction between these two functional sequences remains to be clarified.However, our series of targeted transgene experi- ments clearly indicated that the blocker regulator interferes with blocker (insulator) sequence function in digits and genitalia, thus supporting our hypothesis that chromatin makeup affects tissuespecific dynamics.In addition, methylation assays revealed that, in limbs or body parts in which the insulator sequence does not function, a hypomethylated form of the insulator prevails.The regulator sequence may play a crucial role in regulating the methylation of the insulator sequence, and hence in regulating insulator function. Comparison with b-globin system The HS4 fragment from the chicken b-globin locus is a well-known insulator sequence identified in vertebrates.Extensive examination of HS4 showed that this sequence has two separable activitiesinsulator activity and position-effect protection activity-each of which arises from two short and distinct DNA elements [29]. 
Unlike in the case of chick HS4, we were unable to dissect the functional insulator fragment apart from the 2.5 kb fragment, and thus were unable to isolate from the Evx2-Hoxd13 intergenic fragment the short sequence responsible for insulator activity or position-effect protection activity.In the chick b-globin locus, a CTCF-binding sequence is required and sufficient to block enhancer-promoter interaction [19], and another sequence is required for position-effect protection [29].The H19/Igf2 imprinting region also has CTCF-binding sites that function as insulator sequences in the imprinting system [21,22].In our system, however, we were unable to identify a CTCF-binding sequence by in silico screening or by chromatin immunoprecipitation assays (ChIP) using an anti-CTCF antibody (Kondo, unpublished data).Together, these findings suggest that the mechanisms underlying boundary activity in Evx2-HoxD are different from those underlying boundary activity in the b-globin and H19 systems [21,22]. Functioning mode of the chromatin boundary element We observed transgene expression in the CNS, limbs, and genitalia of transgenic mice harboring constructs in which promoters were inserted into the genomic region between Evx2 and CNS and/or limb enhancers (GCR) [14,17,28].Genes located beyond Hoxd13, however, showed no expression in CNS.Thus, boundary elements probably define the range of a chromosomal domain in which transcription factors can search for a promoter. Enhancer titration or competition can be a possible mechanism of the Evx2-Hoxd13 insulator [18,28].It is generally believed that some spatially close promoters compete for enhancers [30].Indeed, the XB fragment may contain the Evx2 promoter.However, the XB fragment-the enhancer blocker (insulator)did not show promoter activity when we randomly inserted it into the genome as a transgene (Kondo, unpublished data).Additionally, titration or competition appears to occur specifically against the neural enhancer not the limb enhancer, even though Evx2 promoter activity is prominent in both of these tissues.Specificity depended on the presence of an additional sequence (BNs fragment) next to the XB fragment, suggesting that even if the XB fragment has promoter activity when it is inserted near Evx2, promoter activity is not a decisive factor dictating whether the sequence will function as a blocker. In addition, enhancer titration cannot explain the position-effect protection activity we observed in our study system.Although these findings may not directly exclude the possibility, we believe that the enhancer insulation occurring in the Evx2-Hoxd13 system is not driven entirely by promoter titration. 
Differential expression of Evx2 and Hoxd13 in the CNS may also be due to unidirectional repression activity. In this scheme, the XB fragment recruits repression from only one side (the side adjacent to the BamHI restriction site) of the flanking sequences. The BNs fragment, on the other hand, disturbs this repression activity in the limbs and genital bud, and as a result, posterior Hoxd genes are repressed in the CNS. The unidirectional repression scenario, however, cannot explain why Evx2 and Hoxd13 were prematurely expressed in XB-targeted transgenic mice. Additionally, this scenario cannot account for our transfection assay results showing that the genes adjacent to the BamHI side of the XB fragment maintained active transcription and that the XB fragment showed protection activity against repression from the genomic environment where the transgenes were inserted. These phenomena contradict the unidirectional repression hypothesis.

Boundary elements are candidate sequences that divide chromosomal DNA into units, regulating promoter-enhancer interactions and the extent of chromatin remodeling [8,31]. Our findings prompt a revision of this traditional definition of chromatin boundary sequences, one including protection activity against negative chromatin spreading in addition to enhancer-blocker activity. We showed that the sequence identified in this series of experiments is a proper boundary sequence, since it had position-effect protection activity as well as insulator activity. According to Corces and colleagues, boundary sequences are important for the three-dimensional assembly of chromosomal DNA [31]; these sequences, therefore, appear to represent a constitutive activity that forms the structural basis of genomic DNA.

Although we could not detect CTCF association with this boundary region, identifying the factors that do associate with it remains an important step toward revealing the mechanisms of boundary function. Previous studies on Drosophila boundary elements raised several candidate associating factors, such as BEAF-32 [32], Su(Hw), Mod(mdg4) [33], Pleiohomeotic (Pho) and Trithorax-like (Trl = GAGA factor) [34]. Unfortunately, mammalian homologues of most of these factors remain elusive. However, the potential association of Pho and Trl suggests the involvement of Polycomb group (PcG) and trithorax group (trxG) genes in boundary function in Drosophila melanogaster. In vertebrate systems, PcG or trxG factors may also be involved in boundary function through histone modifications.

In the present study, we demonstrated that the Evx2-HoxD boundary element operates in tissue- and time-specific modes. This boundary element is composed of two functional fragments-a constitutive boundary element and a boundary regulator fragment-the latter of which gives tissue specificity to the constitutive boundary element. The constitutive boundary element can protect promoters from the repressive influence of chromosomal environments when it is inserted along with promoters. This finding also indicates that constitutive boundary elements can be useful tools for stably expressing exogenous promoters in eukaryotic cells.
Targeted transgene

A construct containing the Hoxd9/lacZ indicator transgene, which was inserted into a HindIII site just downstream of the Evx2 gene, and a test fragment (from the 5′-flanking region of the transgene) was introduced into R1 ES cells by electroporation. ES cells were selected using neomycin resistance, and homologous recombinants were isolated by genomic Southern hybridization as previously described [14]. Chimeras were created with these homologous recombinant ES cells to establish targeted transgenic animals.

Cell culture

NIH3T3 cells were cultured in Dulbecco's Modified Eagle's Medium supplemented with 10% fetal bovine serum. Constructs containing either a Venus expression marker gene or a Venus marker flanked by candidate boundary sequences (XB fragment) were introduced into NIH3T3 cells by electroporation. Twelve colonies from each pool of transformants containing the two constructs were randomly isolated and used for further testing. We continued to culture these transformants, assessing their fluorescence by FACS 1, 6, and 12 months after starting the culture.

Gene expression study

β-galactosidase staining and in situ hybridization were performed according to established protocols. The Hoxd13 probe was described by Dollé et al. [35]. The Evx2 probe was derived from an XbaI-BamHI genomic fragment corresponding to the 3′ UTR of the Evx2 gene. We also assessed Hoxd13 gene expression by real-time PCR using a Corbett RG-3000 with the Invitrogen Platinum SYBR Green PCR kit. We isolated mRNA from 7 dpc embryos from three different pregnant wild-type mice (wild-type matings), six different pregnant BB/2 mice (BB-targeted transgenic heterozygotic matings), and eight different pregnant XB/2 mice (XB-targeted transgenic matings). Each cDNA sample was derived from mRNA pooled from each litter, which was composed of several embryos (e.g., three cDNA samples from wild-type mice; six cDNA samples from BB/2 mice; eight cDNA samples from XB mice). To control for the amount of cDNA, we carried out PCR with β-actin.

Figure 1. Genomic structure and expression patterns of Evx2-Hoxd13. (A) The Evx2 and Hoxd13 genes are located within 8 kb of each other. (B) Expression patterns of Evx2 (left panel) and Hoxd13 (right panel) in 11 dpc embryos. Evx2 is expressed in the CNS, as well as in the limbs and genital buds, regions that also express Hoxd13. (C) Scheme illustrating the hypothesis that Evx2 and Hoxd13 display segregated expression patterns. Enhancers located 3′ from Evx2 have differential access to the Evx2 promoter and the Hoxd13 promoter. The intergenic region of Evx2-Hoxd13 prevents enhancer access in the CNS but not in limbs. doi:10.1371/journal.pone.0000175.g001
Figure 2. Analysis of the insulator fragment using targeted transgene experiments. (A) Scheme of the experimental design. The Hoxd9/lacZ transgene, along with an insulator candidate fragment, was inserted into the region just downstream of Evx2 by a gene-targeting technique using ES cells. The resulting ES cells were injected into blastocysts to establish transgenic mice. (B) The XhoI-NsiI (XNs) fragment (red) was separated into three fragments-BamHI-BamHI (BB), EcoRI-BglII (EBg), and XhoI-BamHI (XB)-each of which was translocated together with the Hoxd9/lacZ reporter transgene. (C-H) LacZ-stained 11 dpc embryos. XNs (C) and XB (F) blocked lacZ gene expression in brain, while BB (D) and EBg (E) failed to do so. Based on these results, we divided the XB fragment into two fragments-XhoI-EcoRI (XE) and EcoRI-EcoRI (EE)-which we used to make targeted transgenic mice having a similar configuration to that shown in panel (A). Embryos harboring XE (G) and EE (H) displayed lacZ gene expression in brain but did not display expression patterns indicative of insulation activity. doi:10.1371/journal.pone.0000175.g002

Figure 3. Insulation activity is spatially dependent. (A) Design of two targeted transgenic mice. XNs and XB as in Figure 2A. (B) Expression pattern of XNs- and XB-transgenic mouse embryos. XNs 11 dpc embryos expressed lacZ in the limbs and genital bud, whereas XB embryos did not. Limbs are shown in the boxed areas and genital buds are shown in the middle panels. (C) Scheme illustrating the regulation underlying enhancer-promoter interaction. Within the CNS, the XB fragment prevents interaction between the enhancer and the Hoxd13 promoter, while the BB fragment blocks the insulator activity of the XB fragment within the limbs and genital bud. doi:10.1371/journal.pone.0000175.g003

After 6 months of culturing, the Venus fluorescence expressed by clones harboring Construct I was dramatically different from that of clones harboring Construct II (Figure 6C). Seven of 12 Construct I clones lost fluorescence almost completely, while only three of 12 Construct II clones did so.

Figure 4. Methylation profile of the insulator fragment. (A) Hindbrain and forelimbs were dissected from 11 dpc embryos to obtain the DNA used for the methylation assays. (B) Map of the tested fragment. The red line represents the insulator (XB fragment) and the blue line represents the regulator (BB fragment). Bisulfite-treated genomic DNA was subjected to PCR using three sets of primers (indicated by arrows; see Experimental Procedures). Primer pairs for one PCR reaction have matching arrow colors. (C) Ten clones from each PCR product were sequenced. Their methylation status is shown here. White circles represent methylated cytosine residues, while black circles represent non-methylated cytosine residues. doi:10.1371/journal.pone.0000175.g004
Figure 5. Premature expression of Hoxd13 observed in XB-targeted transgenic mice. (A) Hoxd13 expression in 7 dpc embryos was specifically upregulated. Sample RNA levels were normalized according to β-actin mRNA levels. W1-W3, samples from three litters arising from wild-type mice; B1-B6, samples from six litters arising from internally bred BB-transgenic mice; X1-X8, samples from eight litters arising from internally bred XB-transgenic mice. (B) Hypothetical scheme for premature expression of these genes in XB-transgenic mice. The region downstream of Evx2 recruits repression over the Hox complex before the 7 dpc embryonic stage in preparation for Hox expression in wild-type and BB-targeted transgenic mice. The XB fragment disrupts this repression, preventing the repressor region downstream of Evx2 from recruiting repression into the HoxD complex. doi:10.1371/journal.pone.0000175.g005

Figure S1. LacZ expression of targeted transgenic mice described previously [1]. (A) Positions of the targeted transgene. The Hoxd9/lacZ marker transgene was inserted half-way between Evx2 and Hoxd13 by using the ES cell technique to produce relO mice. The Hoxd9/lacZ transgene is immediately downstream of Evx2 in relI mice. The resulting ES cells were injected into blastocysts to establish transgenic mice. (B) In relI embryos, the lacZ-staining pattern in the isthmus resembles the expression pattern of Evx2. (C) The lacZ-staining pattern indicates that relO mice do not express the transgene in brain, which is consistent with our Hoxd13 in situ hybridization results. Found at: doi:10.1371/journal.pone.0000175.s001 (0.02 MB PDF)

Figure S2. Sequence of the boundary fragment. (A) Physical map of the region including the boundary sequence. The boundary fragment is indicated by a red line. The numbers in parentheses indicate the distance from the XhoI site in the first exon of Evx2. (B) Sequence of the XB boundary fragment. The initiation codon of the Evx2 gene is indicated in blue font. Found at: doi:10.1371/journal.pone.0000175.s002 (0.02 MB PDF)

Figure S3. Sequence of the boundary regulator fragment. (A) Physical map of the region including the boundary regulator sequence. The boundary regulator fragment is indicated by a dark blue line. The numbers in parentheses indicate the distance from the XhoI site in the first exon of Evx2. (B) Sequence of the BNs boundary regulator fragment. Found at: doi:10.1371/journal.pone.0000175.s003 (0.02 MB PDF)

Text S1. Supporting Information legends and references. Found at: doi:10.1371/journal.pone.0000175.s004 (0.03 MB DOC)
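The Gene expression study above normalizes the Hoxd13 signal to β-actin to control for cDNA input, and Figure 5A reports the resulting relative RNA levels. The paper does not state which quantification scheme was applied, so the short sketch below only illustrates one common option, the comparative Ct (2^-ΔΔCt) method; the method choice, the Ct values, and the sample labels are all assumptions made for illustration, not data from the study.

# Illustrative sketch only: relative quantification of Hoxd13 by the
# comparative Ct (2^-ddCt) method, normalized to beta-actin. The method
# choice and every Ct value below are hypothetical.
import statistics

ct = {
    # sample: (Ct_Hoxd13, Ct_beta_actin) -- invented values
    "W1": (31.2, 18.0), "W2": (31.5, 18.2), "W3": (31.0, 17.9),  # wild-type litters
    "X1": (29.4, 18.1), "X2": (29.1, 17.8), "X3": (29.6, 18.0),  # XB-targeted litters
}

def delta_ct(sample):
    """Ct(Hoxd13) - Ct(beta-actin) for one pooled-litter cDNA sample."""
    target, reference = ct[sample]
    return target - reference

# Calibrator: mean delta-Ct of the wild-type litters.
calibrator = statistics.mean(delta_ct(s) for s in ("W1", "W2", "W3"))

for sample in ("X1", "X2", "X3"):
    ddct = delta_ct(sample) - calibrator
    fold_change = 2 ** (-ddct)  # expression relative to wild type
    print(f"{sample}: Hoxd13 fold change vs wild type = {fold_change:.2f}")

Under this assumption, a fold change above 1 for the XB-targeted samples would correspond to the premature upregulation reported in Figure 5A.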
Is 'Opt-Out HIV Testing' a real option among pregnant women in rural districts in Kenya? Background An 'opt-out' policy of routine HIV counseling and testing (HCT) is being implemented across sub-Saharan Africa to expand prevention of mother-to-child transmission (PMTCT). Although the underlying assumption is that pregnant women in rural Africa are able to voluntarily consent to HIV testing, little is known about the reality and whether 'opt-out' HCT leads to higher completion rates of PMTCT. Factors associated with consent to HIV testing under the 'opt-out' approach were investigated through a large cross-sectional study in Kenya. Methods Observations during HIV pre-test information sessions were followed by a cross-sectional survey of 900 pregnant women in three public district hospitals carrying out PMTCT in the Busia district. Women on their first antenatal care (ANC) visit during the current pregnancy were interviewed after giving blood for HIV testing but before learning their test results. Descriptive statistics and multivariate regression analysis were performed. Results Of the 900 women participating, 97% tested for HIV. Lack of testing kits was the only reason for women not being tested, i.e. nobody declined HIV testing. Despite the fact that 96% had more than four earlier pregnancies and 37% had been tested for HIV at ANC previously, only 17% of the women surveyed knew that testing was optional. Only 20% of those surveyed felt they could make an informed decision to decline HIV testing. Making an informed decision to decline HIV testing was associated with knowing that testing was optional (OR = 5.44, 95%CI 3.44-8.59), not having a stable relationship with the child's father (OR = 1.76, 95%CI 1.02-3.03), and not having discussed HIV testing with a partner before the ANC visit (OR = 2.64 95%CI 1.79-3.86). Conclusion High coverage of HIV testing appears to be achieved at the cost of pregnant women not understanding that testing is optional. Good quality HIV pre-test information is central to ensure that pregnant women understand and accept the reasons for testing and will thus come back to collect their test results, an important prerequisite for completing PMTCT for those who test HIV-positive. Background The World Health Organization (WHO) and the joint United Nations program on HIV/AIDS (UNAIDS) revised the guidelines for HIV testing in 2007 [1]. The current guidelines were designed to increase coverage of testing and identify patients in need of antiretroviral therapy (ART). In the former 'opt in' HIV strategy, the initiative to be tested was with the individual, not with the health care services, and individual pre-test counseling followed by informed consent was required before testing. In some areas, people were even required to sign a separate informed consent form, which detailed the risks and benefits of being tested [2]. With the new 'opt-out' strategy, individuals have to actively opt out or decline the HIV test after a pre-test information session, often carried out in a group, while post-test counseling is still carried out on an individual basis for all clients. The implications of provider-initiated HIV testing greatly affect women in sub-Saharan Africa (SSA) where they account for nearly 60% of those infected with HIV and where 75% of those living with HIV are between 15-24 years [3]. Women have more contact with the health services e.g. 
during pregnancy [4], and are thus more likely to undergo HIV screening [5], but it has been observed that consent may be compromised in SSA, which negatively affects women's autonomy and possibly also completion of PMTCT [6,7]. The shift from 'opt-in'/client-initiated to 'opt-out'/provider-initiated HIV testing has generated a debate on how best to increase the uptake of HIV testing while, at the same time, protecting individual rights to voluntary consent for HIV testing [1]. Proponents of 'opt-out' testing assert that the provider-initiated consent process is crucial to achieve high coverage of HIV testing and prevention of mother-to-child transmission (PMTCT) while still protecting autonomy [8]. It also helps to 'streamline' HIV into normal care, thereby decreasing stigma [8,9]. Those who question the 'streamlined' consent process express doubt about whether informed consent can be ensured in the context of routinely offered HIV testing under conditions of scarce human resources [10,11]. Power differences in the provider-client relationship are also identified as a problem, since it is uncertain whether clients who normally have a lower social status will feel able to opt out of testing against the recommendation of their providers [6]. Others are concerned about the client's ability to provide voluntary consent and about the extent to which any choice will be presented, given that providers are encouraged to motivate clients to test and could be coercive [6]. Women in particular are often also unable to make decisions independently, due to gender inequality and lack of knowledge [3,12]. Finally, and most important from a public health perspective, there is concern that pregnant women who fail to make an informed choice about HIV testing are less likely to come back for their test results, an obvious prerequisite for identifying and enrolling HIV-infected women in the PMTCT program, thus undermining the quality and effectiveness of this important intervention [5,13]. A study from Botswana showed that pregnant women felt compelled to test when it was routinely offered, and some instead exerted their decision-making power by not returning to collect their test results [13]. Kenya introduced routine rapid 'opt-out' HIV testing at antenatal care (ANC) in 2007 [14]. Approximately 76 000 pregnant women are living with HIV in Kenya, ranking it sixth among the ten African countries that contribute 67% of the global burden of MTCT [15]. Up to 40% of all pregnant women enrolled in ANC programs in Busia district in western Kenya are estimated not to come back for their test results and will thus never be enrolled into PMTCT (personal communication). Pregnant women and their infants in these two rural districts are considered to be highly vulnerable to MTCT due to the high HIV prevalence (9%) and high fertility rate (7.1), compared to the national averages of 7% and 5.1 respectively [14]. This study aims to identify factors associated with consent to HIV testing under the 'opt-out' strategy in this area of rural Kenya.

Study area

This study was performed in Busia district, located in western Kenya. This rural district has five administrative divisions with a population estimated at 415 000. The study catchment area has a population of 202 348 living in 312 villages, with 50 000 women of reproductive age and 38 000 children less than five years of age. Surveillance studies at ANC show HIV infection rates close to 10% [14].
Agriculture, fishing and small-scale commercial undertakings are the main economic activities in the district, where the average household generates approximately $84 per month. The majority ethnic group is Luhya, with a few Luo speakers. There are 22 health facilities in the study area, which are private, mission-run or government-owned. About 90% of these facilities offer free rapid HIV testing services, except for a few dispensaries that refer patients to health centers or district hospitals for testing. The study was carried out at three public district hospitals collaborating with non-governmental organizations (NGOs) on PMTCT and ART. In all three hospitals, PMTCT and ART have been provided free of charge since 2005 to all women in need, in line with the WHO treatment guidelines from 2007. According to the new 'opt-out' guidelines implemented in 2007, all pregnant women should participate in an HIV pre-test information group session, followed by rapid 'opt-out' HIV testing and individual post-test counseling at their first ANC visit. For pregnant women who test positive for HIV, a CD4 cell count is done to determine whether ART should be initiated or whether a single dose of nevirapine during labor is enough (short-course combination treatment during pregnancy and breastfeeding has not yet been implemented in Busia district as of the end of 2010). HIV-infected women should be individually counseled regarding hospital delivery, safe infant feeding and contraceptive use.

Study design, sampling and participants

The study included twelve sit-in observations of counseling sessions for pregnant women and a large cross-sectional survey among pregnant women. The observations were performed by the first author, who is of Kenyan origin and fluent in the local languages spoken in the area, during two randomly selected weekdays and with four visits at each facility. For the cross-sectional assessment, 900 women who were on their first visit to ANC for the current pregnancy were recruited consecutively between August and December 2008. All women in the three hospitals received the same information during the routine pre-test information sessions, which followed the Kenyan guidelines on PMTCT. A midwife informed them about the study in the ANC reception area during a session on general hygiene. Those willing to participate met the midwife, gave informed consent and were enrolled into the study. No woman among those approached declined to participate, and no participant had been informed of her HIV test results before the interview. The sample included all pregnant women who were tested in the three hospitals within the timeframe.

Data collection

Notes were taken during the observations about the setting of the pre-test counseling session, the content of the session and how the information about HIV testing was given. The interviewer-administered structured questionnaire contained both closed and open-ended questions in Kiswahili or Luhyia. Data were collected on socio-demographic characteristics, relationship factors, awareness and knowledge about MTCT and PMTCT, and experiences of the group counseling session and the HIV testing. The Kenya Medical Research Institute (KEMRI) and the regional ethics board of Karolinska Institute approved this study.

Data analysis

The observations were compared and evaluated against the Kenyan pre-test guidelines for the 'opt-out' approach after each observed session. Cross-sectional data were analyzed using SPSS-PASW, version 18 (SPSS, Inc., Chicago, IL).
Descriptive statistics were used to summarize all variables of interest in the study population. The outcome variable "making an informed decision to decline HIV testing" was derived from the question 'If you could choose to HIV test or not, would you decline? (Yes/No)'. Independent variables used to model the outcome included: type of union - 'What is your marital status? (Married/Unmarried)', duration of current sexual relationship - 'How long have you been in the current relationship? (Not in a relationship, ≤4 years and >4 years)', stable relationship with the child's father - 'Do you live together with the child's father? (Yes/No)', knowing HIV testing is optional - 'Do you know that you can choose to HIV test or not? (Yes/No)', tested for HIV - 'Have you tested for HIV at this visit? (Yes/No)', discussing HIV testing with the partner before the ANC visit - 'Have you discussed HIV testing with your partner before this ANC visit? (Yes/No)', and knowing testing is performed at ANC before the visit - 'Did you know that HIV testing is done at ANC before coming today? (Yes/No)'. The estimated prevalence of "making an informed decision to decline HIV testing" was reported as a percentage. It is important to note that the outcome variable "making an informed decision to decline HIV testing" implies making a decision about whether to consent to HIV testing or not after awareness, i.e., the actual decision of women who knew that HIV testing was optional, as well as the perceived decision among those who did not know that testing was optional, had they known that this was the case. The association between "making an informed decision to decline HIV testing" and each categorical independent variable was first assessed using the Chi-square test or, when the numbers in the contingency tables were too small, Fisher's exact test. Independent variables associated with the outcome at p < 0.20 at the bivariate level were entered into multiple logistic regression models, with the exception of age, education and occupation level, which were included regardless of p-value in order to adjust for potential residual confounding linked to the main independent variables. Both backward and forward logistic regression (Wald test) were performed and gave almost identical results. P-values < 0.05 (two-sided tests) were considered significant in the final model. Odds ratios (ORs) and their 95% confidence intervals (CIs) were computed. The final multivariate model was tested for goodness of fit with the Hosmer-Lemeshow test.
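As a concrete illustration of the modelling step just described: the study itself used SPSS, but the same kind of analysis (a binary outcome regressed on categorical predictors, reported as adjusted odds ratios with 95% confidence intervals) can be sketched as follows. The variable names and the randomly generated data frame below are hypothetical stand-ins, not the study data.

# Illustrative sketch only. The study used SPSS (PASW 18); this statsmodels
# version simply shows how a model of the described form can be fitted.
# Variable names and values are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 900
df = pd.DataFrame({
    # 1 = would make an informed decision to decline HIV testing
    "would_decline": rng.integers(0, 2, n),
    "knew_optional": rng.integers(0, 2, n),        # knew testing was optional
    "stable_relationship": rng.integers(0, 2, n),  # lives with child's father
    "discussed_with_partner": rng.integers(0, 2, n),
    "age": rng.integers(15, 45, n),
    "educ_years": rng.integers(0, 13, n),
    "employed": rng.integers(0, 2, n),
})

model = smf.logit(
    "would_decline ~ knew_optional + stable_relationship + "
    "discussed_with_partner + age + educ_years + employed",
    data=df,
).fit(disp=False)

# Adjusted odds ratios and 95% confidence intervals.
or_table = pd.concat([np.exp(model.params), np.exp(model.conf_int())], axis=1)
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table.round(2))

Exponentiating the fitted coefficients and their confidence bounds yields adjusted ORs with 95% CIs of the kind reported for the final multivariate model below.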
Results

Observations during the pre-test information session at group level

The setting of the pre-test counseling session

Pre-test counseling sessions were provided to groups of 10-15 pregnant women in a separate space. There were between three and four information sessions per day at each facility. The sessions normally took 45-50 minutes and were mainly performed in the national language, Kiswahili, and translated simultaneously into the local dialect of Luhyia. Female midwives greeted the audience and introduced themselves when starting the session. The pregnant women were told that they could ask questions during the session in case they wanted to know more, but no woman asked any question or sought clarification at any of the sessions observed. The pregnant women nodded unanimously when the midwife sought to stress the benefits of HIV testing, as shown below.

Midwife: Do you mothers agree that it is important to test for HIV and protect the unborn child?
Women: (nodding) yes (in a group).

The content of the pre-test counseling session

The information included a description of HIV and AIDS, modes of HIV transmission from a pregnant woman to her child during and after pregnancy, the importance of HIV testing for a diagnosis, secondary prevention of HIV transmission to uninfected male partners, and the PMTCT program (single-dose nevirapine tablets for the mother and syrup for the infant during a six-week period after delivery; skilled hospital delivery; and options of exclusive breastfeeding or formula feeding).

Information provided about HIV testing

The women were given information about the importance of HIV testing and of learning about their HIV status, and also the status of their partner. Women were not required, but encouraged, to bring their partners in to be HIV tested as well. The importance of having an uninfected baby was emphasized, as well as the fact that testing was important in the first trimester of pregnancy. In all the sessions the midwives' undertone was motivational, and the message was that testing and knowing one's HIV status was the best decision a mother could make for her unborn child. No information was provided stating that it was an individual and voluntary choice of the woman to decline or accept HIV testing. The midwives referred to the women as 'mothers' and emphasized that it was their responsibility to take the HIV test to protect the baby and have a healthy and virus-free child. When asked by the main author about reasons for not requiring women to bring their partners for HIV testing, the midwives said that men who really loved their women normally accompanied them to ANC to test of their own free will and did not need to be asked to come.

Cross-sectional survey of 900 pregnant women

Table 1 shows socio-demographic characteristics and HIV testing information for the 900 women enrolled in the study. The median age was 20 years (inter-quartile range 5). The majority of the women (96%) had already had more than four pregnancies including the current one, although 73% were in a relationship of less than four years. About 90% had a stable relationship with the child's father. Eighty percent were in a formal union. About 85% had eight or fewer years of formal education and 18% were employed. Slightly over one-third (37%) had previously been tested for HIV at ANC using the 'opt-out' approach. Lack of testing kits was the only reason for women not to be tested, i.e., no woman declined HIV testing and nearly all were tested for HIV (97%). About 73% knew that HIV testing was done at ANC before coming there, and 69% had discussed the test with their partner before the visit. Following the pre-test counseling session, 90% (N = 810) claimed they had understood the information, but only 17% had grasped that HIV testing was optional; 95% were aware of MTCT and 91% had understood that preventing transmission was possible. The reasons given by the 10% (N = 90) of women who reported not understanding most of the pre-test information included: the counselor speaking too fast (N = 45), using complicated terms (N = 27) and using difficult language (N = 18). Only 20% (N = 180) of the women said they would make an informed decision to decline HIV testing.
After adjusting for all potential confounding factors listed in Table 1, only three factors remained independently associated with an increased likelihood of making an informed decision to decline HIV testing in the final multivariate model: knowing that testing was optional, not having a stable relationship with the child's father and not having discussed HIV testing with a partner before the ANC visit. Knowing that testing was optional was the strongest predictor for women saying that they would make an informed decision to decline HIV testing (OR = 5.44, 95% CI 3.44-8.59). Women not in a stable relationship with the child's father were more likely to perceive that they would make an informed decision to decline HIV testing (OR = 1.76, 95% CI 1.02-3.03). Not having discussed HIV testing with a partner before the ANC visit also doubled the likelihood for women saying that they would make an informed decision to decline HIV testing (OR = 2.63, 95% CI 1.79-3.86). Age, occupation and education level were not statistically significant factors but kept in the final model to adjust for residual confounding often associated with these fundamental variables ( Table 2). The number of pregnancies both as a categorical and a continuous variable was not significantly associated with the outcome, probably because only one third had been HIV tested before (OR = 1.96, 95% CI 0.63-6.09). Discussion None of the 900 pregnant women included in this study declined HIV testing under the routine 'opt-out' approach. A majority (83%) had not understood that HIV testing was optional and only one in five stated that they would have been able to make an informed decision to decline HIV testing. This is a fundamental shortcoming of unclear pre-test information, which undermines the assumption of voluntary consent. Thus with the current approach, high coverage of HIV testing at ANC may be achieved at the cost of women not understanding that testing is optional and at the risk of low uptake and completion of PMTCT which is a major problem not only in this area, where between 30%-40% of all pregnant women enrolled in ANC programs are estimated to not come back for their test results (personal communication by David Wamalwa project manager for Busia Child Survival Project), but also documented in other parts of SSA [16,17]. The midwives did provide correct information regarding the importance of HIV testing in the first trimester of pregnancy, but the great majority of women (83%) never understood that it was optional. By saying that testing was 'the best decision a mother could make for her unborn child' the midwives clearly revealed their expectations and left little room for the women to act otherwise. This finding is consistent with another study performed in Kenya showing that women accept HIV testing so as to avoid being perceived as not accepting the message of the midwives [7]. Our findings showed that it was difficult for the providers to remain neutral when informing about routine HIV testing. During the observed counseling sessions the midwives referred to the women as 'mothers' thereby highlighting the importance of the baby. The reason given during the sessions for having the test was the need to protect the child, while nothing was said about HIV testing being optional. From a public health perspective it is important that the women understand and accept the reasons for testing, since this increases enrolment in and adherence to PMTCT. 
The high number of women counseled simultaneously under the opt-out approach made meaningful interaction difficult and for the midwives it was consequently easiest to provide information using a top-down approach. This approach could be justified as it reduced waiting time for the many women visiting ANC, in a situation where pre-test counseling is the first step before receiving other services of ANC. However, the current set-up makes most women believe that HIV testing is a prerequisite for obtaining other ANC services. To avoid this, the information needs to not only discuss the benefits of the testing, but also its implications and the importance of the post-test counseling. Our findings showed that 83% of the women perceived testing as a mandatory part of ANC services, not as a service independent of antenatal care. This finding could be attributed to unclear delivery of pre-test information and is consistent with observations that poor counseling prevents pregnant women from making informed decisions about HIV testing. Only 20% of the women felt they would have been able to make an informed decision to decline HIV testing. This is a remarkably low proportion, given that more than a third also had been tested for HIV at ANC before. Although none among the 900 women declined the test, a majority seemed to have accepted it because they felt obliged to. An explanation for the misconception could be the power difference between the midwives and the pregnant women. Midwives are trusted and have high social status among pregnant women. In a recent qualitative study exploring reasons for adherence to PMTCT in the same setting we found that HIV-infected pregnant women trusted the midwives to keep their HIV diagnosis secret from the mother-in-law at least during pregnancy appointments (data not yet published). Our findings showed that a great majority of the women had started childbearing at an early age and that 85% of the women had eight years or less of schooling. Possibly many women accepted to have the HIV test because they perceived the midwife to be more knowledgeable and to know best. They seemed not to understand the importance of their own active involvement in accepting or declining the HIV testing and the consequences of having the test. This is consistent with observations that patients in SSA accept to follow recommendations from health providers without fully understanding the consequences of their action as was observed in, for example, Botswana [13]. The implications for HIV testing could be that pregnant women accept to be HIV tested but fail to return for the test results as they realize that they are unprepared for the consequences. Failure to return for results or drop out from PMTCT has been documented from SSA [16,17]. Unfortunately, we were not able to follow the women through the PMTCT process to assess the completion rate, but as mentioned above, a high proportion of pregnant women in this area are reported to never pick up their test results. The rapid test results are available within a quarter of an hour, and so failure to come for them strongly indicates that many women were not ready to face the consequences of a positive test result. They exerted their decision-making power in a more socially acceptable way by dropping out directly after testing. For improved access to and completion of PMTCT pregnant women need to understand the process of testing and voluntarily and consciously consent to HIV testing [18]. 
In a qualitative study in the Kibera slum in Nairobi exploring the reasons for becoming pregnant among women on ART, we found that women planned to become pregnant to strengthen their sexual relationships and possibly formalize them [19]. In this study the women who did not discuss with their partners felt more able to decline testing, as did those in unstable relationships. These women live in a more insecure situation and lack support to handle the test results, while those in stable relationships and those who have discussed the testing know that they will have support irrespective of the test-results. Couple testing is often promoted as a means to increase male partner support. However, the state of the relationship between a woman and her partner influences the decision-making of the woman in relation to testing. Perceived negative consequences of an HIV diagnosis, such as partner abandonment, isolation and loss of financial support, may be an important reason for women to test alone, to decline picking-up their test results or to avoid HIV testing altogether. This study shows the importance of having a secure relationship and a supportive environment before the testing. A recent study in rural Uganda showed that pregnant women often feel heavily burdened by partner disclosure and couple testing recommendations in relationships where they feel disempowered and dependant on their male partner [20]. It becomes necessary to understand individual women's sexual relationships and dependencies on men in order to improve acceptance of HIV testing and also enrolment and completion of PMTCT. The likelihood of selection bias was low since ANC attendance is high in Kenya, about 90% visit ANC at least once, and one can assume that our participants represent of a majority of pregnant women in this area. The hospitals included in this study are NGO-affiliated and one can assume that the quality of care is similar across the PMTCT programs. Conclusion High coverage of HIV testing appears to be achieved at the cost of pregnant women's lack of knowledge that testing is optional. Good quality HIV pre-test counseling is central for making pregnant women understand and accept the reasons for testing and encourage consent to HIV testing, an important prerequisite for the consequent completion of the PMTCT program by those who are HIV infected. While provider-initiated HIV testing is necessary to increase the number of women who access PMTCT and ART, caution must be taken to actively involve the woman during the consent process, to respect their autonomy and improve the enrollment and completion of PMTCT. Intensive community campaigns are warranted to raise awareness of the HIV testing being performed at ANC and the reasons why it is being carried out, to sensitize the community and make them better prepared to make informed decisions. Health authorities could collaborate with NGOs to disseminate information, improve education and increase communication at household level in rural areas to supplement human and material resources shortages. More work is needed to understand how best to develop testing policies that both protect the voluntary consent process and expand testing to increase the implementation of functioning PMTCT-programs in areas with high HIV prevalence in SSA.
Online adaptive radiotherapy and dose delivery accuracy: A retrospective analysis Abstract Purpose With online adaptive radiotherapy (ART), patient‐specific quality assurance (PSQA) testing cannot be performed prior to delivery of the adapted treatment plan. Consequently, the dose delivery accuracy of adapted plans (i.e., the ability of the system to interpret and deliver the treatment as planned) are not initially verified. We investigated the variation in dose delivery accuracy of ART on the MRIdian 0.35 T MR‐linac (Viewray Inc., Oakwood, USA) between initial plans and their respective adapted plans, by analyzing PSQA results. Methods We considered the two main digestive localizations treated with ART (liver and pancreas). A total of 124 PSQA results acquired with the ArcCHECK (Sun Nuclear Corporation, Melbourne, USA) multidetector system were analyzed. PSQA result variations between the initial plans and their respective adapted plans were statistically investigated and compared with the variation in MU number. Results For the liver, limited deterioration in PSQA results was observed, and was within the limits of clinical tolerance (Initial = 98.2%, Adapted = 98.2%, p = 0.4503). For pancreas plans, only a few significant deteriorations extending beyond the limits of clinical tolerance were observed and were due to specific, complex anatomical configurations (Initial = 97.3%, Adapted = 96.5%, p = 0.0721). In parallel, we observed an influence of the increase in MU number on the PSQA results. Conclusion We show that the dose delivery accuracy of adapted plans, in terms of PSQA results, is preserved in ART processes on the 0.35 T MR‐linac. Respecting good practices, and minimizing the increase in MU number can help to preserve the accuracy of delivery of adapted plans as compared to their respective initial plans. setup scans. 1,2 Adaptive radiotherapy (ART), which considers daily modifications in organs at risk (OAR) and target volumes, makes it possible to ensure that dose constraints are respected, while achieving optimal target volume coverage. 3 In our institution, the MRIdian has been in use since June 2019, for ART oriented toward stereotactic body radiation therapy (SBRT) of abdominal tumors, mostly of the liver and pancreas. 4 During IMRT, the respect of good clinical practices requires the performance of patient specific quality assurance (PSQA) measurements for each treatment plan, in order to assess and validate its dose delivery accuracy. 5 The accuracy of dose delivery can be defined as the ability of a system to interpret and deliver a treatment plan as it was generated in the treatment planning system (TPS). During online ART processes, it is not possible to include the PSQA step, because of the presence of the patient on the treatment table, and also due to time pressure. [6][7][8] Consequently, the accuracy of delivery of the adapted plans cannot be verified before treatment. Several studies have focused on the evaluation of ART processes using end-to-end anthropomorphic phantoms. [9][10][11][12][13] Recently, Elter et al. 9 evaluated the process in a realistic way using a deformable anthropomorphic pelvic phantom, with 3D dose distribution assessment through gel detectors. These studies have provided interesting evaluations of the ART process in specific cases, but did not evaluate overall adapted plan delivery accuracy. 
In this context, we sought to investigate the impact of the ART process on the delivery accuracy of adapted treatment plans, in a retrospective evaluation of 3Dγ pass rates of PSQA performed after delivery of adapted plans. To this end, over 100 PSQA results from adapted plans for liver and pancreas treatments performed with the ArcCHECK (Sun Nuclear Corporation, Melbourne, USA) were analyzed and compared to the PSQA of the initial treatment plans. Our primary objective was to verify whether the 3Dγ pass rate of adapted plans is maintained with the specific practices applied during the ART process. Indeed, the manufacturer (Viewray Inc.) recommends paying particular attention to the variation in monitor unit (MU) number during the ART process, because of its influence on the irradiation time, and possibly on the plan delivery accuracy. We therefore studied the variation in MU number between the initial and adapted plans, and compared it to the PSQA results. To the best of our knowledge, no study to date has investigated the PSQA dilemma during the ART process using appropriate statistical testing. For our institution, this study was a key step towards validating our clinical and dosimetric practices, in order to justify the withdrawal of adapted plan PSQA from our ART workflow.

General description

The MRIdian ART process is a succession of well-defined steps that must be strictly followed in order to ensure accurate delivery of treatment (Figure 1). At each fraction, a daily MRI scan of the patient is initially performed and registered to the primary planning image, considering the gross tumor volume (GTV) or the clinical target volume (CTV). In this study, each treatment considered was planned and delivered in a specific and reproducible breath-hold position defined at the simulation step. In addition to careful patient set-up with the appropriate positioning and immobilization devices, strict control of correct breath-holding during the daily MR acquisition is required, to reduce excessive body variations and to limit variations in the internal structures as compared to the primary planning image. Then, the initial planning contours and electron density map are adapted to the daily MR image, respectively by a radiation oncologist and a physicist. The approach applied is based on that described by Bohoudi et al., 14 consisting of limited checks and correction of daily changes within a distance of 3 cm from the planning target volume (PTV). This method assumes that this region includes the highest dose gradients, with the possible hot spot variations significantly affecting OAR doses. According to the prescribed dose, sensitive OAR initially defined by the radiation oncologist are re-considered daily in the ART workflow. An optimized PTV is generated daily with the help of cropping rules to spare OAR according to their position with regard to the CTV. Then, the original treatment plan is recalculated using the adapted contours of the day. By comparing the daily dose distribution to the initially planned dose distribution, the radiation oncologist can choose to treat with the initial plan (i.e., no adaptation) or to adapt the treatment plan. The adapted plan is obtained by performing TPS re-optimization, taking into account the daily optimized PTV as the target volume. In case of adaptation, the only QA available before irradiation is a secondary Monte Carlo dose calculation (for TPS calculation verification) performed with gamma index analysis.
Immediately before delivering the adapted plan, the settings of the gating process are entered, and the feasibility is verified as follows: (i) delineation of the tracked volume, (ii) definition of the gating limits, adjusting the "beam on" window, and (iii) preview of the gating process on a live sagittal-slice MR image.

FIGURE 1 MRIdian fraction workflow: systematic steps (blue) and additional steps for the ART workflow (yellow).

2.1.2 Online adaptation: optimization practices and MU considerations

Re-optimization can be performed at three levels of complexity: first, simple segment-weight optimization; second, fluence re-optimization based on the original set of planning parameters; or third, full optimization based on modified and adapted objectives. 14-18 The irradiation conditions will logically be changed, whatever the optimization choice, and in particular the MU number. In our institution, the initial plan settings are kept during re-optimization in order to limit sources of variation as much as possible, in particular the beam number with associated angles and the maximal number of IMRT step-and-shoot segments, which are fixed. A specific and unique maximal number of segments is set for each initial plan of each patient. Online plan adaptation needs to be performed as quickly as possible because of the presence of the patient in the treatment position. 14 For each step, the therapist, radiation oncologist and medical physicist have to be well trained to optimize their operating time. In this context, the re-optimization step should not increase the delivery time, which is classically high (often around 10 min) on the MRIdian linac, with a global ART fraction duration that can exceed an hour. 19 This issue is of paramount importance, especially for abdominal treatments, such as those considered in this study, which are delivered in the breath-hold position. An increase in the delivery time will increase the number of apneas required to deliver the entire treatment, with the risk of tiring the patient. In this situation, repeating and reproducing the right breath-hold position could be complicated for the patient, to the detriment of target volume coverage. 20

Description of the treatment plans

For this study, we included the treatment plans of 30 patients treated for abdominal tumors, namely 15 patients treated for liver cancer and 15 patients treated for pancreatic cancer. For each patient, SBRT was prescribed and delivered over a maximum of 10 days. The prescribed dose is adjusted by the radiation oncologist considering the proximity of certain OARs. This aspect is one of the main differences between these two cancer localizations, as the number of highly radiosensitive OARs is lower around the liver than around the pancreas, which is surrounded by digestive structures (duodenum, bowel, or stomach). Consequently, the level of dose prescription is often higher for liver tumors (40, 45, or 50 Gy in five fractions of 8, 9, or 10 Gy) compared to pancreatic tumors (30, 35, or 40 Gy delivered in five fractions of 6, 7, or 8 Gy). Thus, a total of 30 initial treatment plans plus their adapted fractions were considered in this study. Among the 150 treatment fractions, a total of 124 were adapted (82%), mainly due to OAR modifications and dose constraint failures. In the large majority of cases, re-optimization was done with the original planning parameters.
For liver plans, the mean and standard deviation of the beam number were 15 ± 3 (median: 16, range [9-19]) and of the segment number were 53 ± 9 (median: 52, range [37-76]). For pancreas plans, the mean and standard deviation of the beam number were 16 ± 2 (median: 15, range [14-19]) and of the segment number were 57 ± 6 (median: 59, range [49-68]).

Assessment of dose delivery accuracy

The dosimetric plan quality as described by Moore et al. 21 was not evaluated in this study. We assumed that the clinical evaluation and validation of adapted plans was optimal, and equivalent between each adapted plan and its corresponding initial plan. Only the variation in the accuracy of dose delivery was considered, defined here as a variation in PSQA 3Dγ analysis results. To this end, in addition to the 30 PSQA measurements from the initial treatment plans, we also calculated and analyzed PSQA measurements for the 124 adapted fractions after treatment delivery. To do this, the ArcCHECK cylindrically shaped QA device was used. It is made of PMMA, with an outer diameter of 26.6 cm and an inner cavity diameter of 15.1 cm. The device includes 1386 diode detectors of a size of 0.8 × 0.8 mm², helically arranged at a physical depth of 2.9 cm. An MRI-compatible device was used for this study, previously validated on the MRIdian system. 22 The ArcCHECK software system, called SNC Patient (version 8.4), enables comparison between the measured and planned dose, with global or local gamma index analysis. Considering the bore design of the MRIdian system and its limited diameter (70 cm), the ArcCHECK can be lateralized in order to center the device on the significant isodose and optimize the consistency of the PSQA. This process has been validated in a previous study. 23 The same positioning of the ArcCHECK was systematically used for the adapted plan checks as for the initial plan. Considering that this study used SBRT, gamma index pass rates were analyzed with dose difference and distance-to-agreement (DTA) criteria of respectively 2% and 2 mm, with a 10% dose threshold. Firstly, global normalization was considered in the analysis for its superior clinical relevance. 5 Nevertheless, local normalization was also analyzed because of its sensitivity to the high dose gradients often observed in SBRT treatments. Consequently, both forms of analysis are of value in characterizing the dose delivery accuracy. The action limits (ALim) and tolerance limits (TLim) were defined according to the procedure described by Miften et al., 5 offering a process view including all sources of variation. The TLim is the minimum value that keeps the process unchanged. The ALim is the minimum acceptable performance value and is lower than the TLim. If a result is lower than the TLim but still above the ALim, the physicist has to determine whether or not action should be taken. 5 For ease of use, the TLim clinically applied is the calculated TLim rounded up to the nearest multiple of five, and thus more restrictive. All the values are summarized in Table 1.

TABLE 1 Gamma index analysis limits.

Consequently, the clinical TLim of the gamma index pass rate was set, for local and global dose difference analysis, at respectively 85.0% and 95.0%.
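To make the 2%/2 mm criterion concrete, the following sketch computes a global-normalization gamma pass rate for a simple one-dimensional dose profile. It is an illustration only: the clinical analysis is performed in 3D by the vendor software (SNC Patient), and the profiles, grid spacing and dose values below are invented for the example.

import numpy as np

def gamma_pass_rate(ref, meas, spacing_mm, dd=0.02, dta_mm=2.0, cutoff=0.10):
    """Simplified 1D gamma pass rate with global normalization.

    ref, meas  : planned and measured dose sampled on the same regular grid
    spacing_mm : grid spacing in mm
    dd         : dose-difference criterion as a fraction of the global maximum
    dta_mm     : distance-to-agreement criterion in mm
    cutoff     : points below this fraction of the maximum dose are ignored
    """
    ref = np.asarray(ref, dtype=float)
    meas = np.asarray(meas, dtype=float)
    x = np.arange(ref.size) * spacing_mm
    dose_norm = dd * ref.max()             # global (not local) normalization
    evaluated = ref >= cutoff * ref.max()  # 10% low-dose threshold

    gammas = []
    for i in np.where(evaluated)[0]:
        # gamma at measurement point i: search over all reference points
        dist_term = ((x - x[i]) / dta_mm) ** 2
        dose_term = ((ref - meas[i]) / dose_norm) ** 2
        gammas.append(np.sqrt(np.min(dist_term + dose_term)))
    gammas = np.asarray(gammas)
    return 100.0 * np.mean(gammas <= 1.0)

# Invented example: a measured profile shifted by 1 mm and scaled by 2%.
xs = np.arange(0, 100, 1.0)  # 1 mm grid
planned = 10.0 * np.exp(-((xs - 50.0) / 15.0) ** 2)
measured = 10.2 * np.exp(-((xs - 51.0) / 15.0) ** 2)
print(f"Global 2%/2 mm pass rate: {gamma_pass_rate(planned, measured, 1.0):.1f}%")

A local-normalization variant would replace dose_norm with a per-point value proportional to the local reference dose, which is what makes the local analysis more sensitive in the high-gradient regions typical of SBRT.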
Influence of MU number

During initial user training, the manufacturer (Viewray Inc.) usually recommends containing and limiting the MU increase to +20% of the initial plan's MU number during the ART process. As the dose rate is constant on the MRIdian (600 MU/min), 1 the MU number is a relevant parameter to characterize the total delivery time, considering that the number of segments is unchanged. Variations in plan complexity cannot be completely characterized using only MU number variation. Nevertheless, absent other plan complexity indices, it could be of direct help for users if a relation is established between treatment PSQA result variation and MU number variation. In this context, the distribution of MU number variation was plotted for each adapted plan (Figure 2) and then investigated to analyze our practices and their possible impact on PSQA 3Dγ analysis results. For each adapted plan, the difference in PSQA pass rates between the initial plan and the adapted plan was calculated and plotted against the relative variation in MU number (Figures 3 and 4).

2.5 Statistical analysis

PSQA pass rates are described as mean values with standard deviation (SD). Medians and ranges were also calculated. The non-parametric Wilcoxon signed rank test was used to assess the difference between the mean values of initial versus adapted PSQA. Correlations between PSQA pass rate variations and relative MU variations were tested using Pearson's correlation coefficient. All tests were two-sided, and p-values < 0.05 were considered statistically significant. All analyses were performed using SAS version 9.4 (SAS Institute Inc., Cary, NC, USA).

Global approach

In a first approach, PSQA results were considered for each localization (liver and pancreas). Table 2 presents the means and medians of the local and global initial and adapted gamma index pass rates for all patients. For liver and pancreas cancer patients, there was no significant difference in the global mean value, whereas there was a significant difference in the local mean value. Consequently, no significant impairment of PSQA results was observed based on the global analysis. Nevertheless, based on local analysis, there was a significant degradation in 3Dγ results between the initial and the adapted PSQA. The degradation was greater for pancreas plans than for liver plans (−0.7% vs. −1.5%). The mean values were within the TLim (85.0% for local pass rate and 95.0% for global pass rate) for each group.

FIGURE 2 Distribution of the relative MU difference between adapted plans and their respective initial plan. Liver plans are represented on the left side and pancreas plans on the right side. The red lines represent the +20% limit suggested by the manufacturer.

FIGURE 3 PSQA result variation for liver patients according to the MU variation between the initial and each adapted plan.

FIGURE 4 PSQA result variation for pancreas patients according to the MU variation between the initial and each adapted plan.

TABLE 2 Global and local initial and adapted PSQA ArcCHECK gamma pass rates for the liver and pancreas.

Table 3 lists the number of PSQA results among the adapted plans that did not satisfy the clinical TLim. For liver plans, only three global PSQA results of the adapted plans fell outside the clinical TLim, with a limited decrease in the pass rate: 94.8% for two plans and 94.6% for one. All the local analysis PSQA results satisfied the clinical TLim of 85.0%. Consequently, the PSQA results of all the adapted liver plans were clinically acceptable and validated.

Detailed analysis

For pancreas plans, 14 global PSQA results from the adapted plans failed to satisfy the clinical TLim, accounting for around 22% of all fractions.
For local analysis PSQA results, the clinical TLim was not satisfied for three adapted plans, whereas the calculated TLim was satisfied. Unlike liver plans, the PSQA results of several adapted plans for the pancreas were impacted, with 14 failed plans according to the global analysis. Nevertheless, 10 of these 14 plans were in agreement with the calculated TLim (92.8%). Finally, four plans did not satisfy the clinical TLim for the global analysis. Nevertheless, for two of these four plans, the impairment to PSQA results was limited, and the global pass rates (respectively 90.6% and 92.2%) were above the ALim (90.2%). For the two most substantially deteriorated adapted plans, global pass rates outside the ALim were observed, at respectively 87.5% and 89.6%, accounting for 1.6% of all adapted plans included in this study. Influence of variations in MU on PSQA results MU number variation was analyzed because of its relation to the time pressure of the ART process, and because of the possible impact of an increase in MU number on the complexity and dose delivery accuracy of the plan. First, we calculated the variation in MU between the initial and the adapted plan, for both localizations. The distribution of the differences is shown in Figure 2. For most plans, the increase in MU number was contained, and within the maximum limit of +20% recommended by the manufacturer. For some plans, an increase of the MU number beyond the +20% limit, and beyond +40% in a small number of cases, was observed for both types of tumor. Considering these distributions and the high number of substantial variations in MU number, we investigated the influence on PSQA results. To this end, the variation in PSQA results as a function of the variation in MU number for each adapted plan was calculated and is plotted in Figures 3 and 4, respectively, for liver and pancreas global and local analysis. Pearson's correlation coefficients for each group are detailed in Table 4. For liver global analysis, there was a significant inverse correlation between PSQA result variation and MU variation, whereas there was no statistically significant correlation between liver local analysis PSQA results and MU variation. For the pancreas, for both global and local analysis, there was a statistically significant inverse correlation between PSQA result variation and MU variation (Table 4). Overall, with the exception of liver local analysis PSQA results, we noted a significant deterioration in PSQA results with increasing MU number. These results were confirmed by linear regression, showing a linear trend (Figures 3 and 4). DISCUSSION PSQA results of the adapted plans mostly showed limited degradation of dose delivery accuracy. In particular, for liver plans, the level of PSQA results remained within the limits of clinical tolerance in all cases. For pancreas adapted plans, some significant deteriorations in PSQA results were observed, but corresponded to complex configurations. Indeed, the four most impacted plans were from patients treated for synchronous double lesions situated in close proximity to one another (distance <5 cm). 4 These synchronous treatments are complex, in particular because of the limitation of the dose contribution between both plans.
Consequently, according to the accuracy of the repositioning of the patient and the modification of the OAR position or the distance between the two lesions, re-optimization during the adaptive process could increase the plan complexity in terms of MU number or segment shapes. This likely explains why deteriorations in PSQA pass rates were mainly observed in these patients. Intuitively, the more OARs that are associated with the ART process, the harder it is to plan the treatment. For most liver plans, only one or two OARs were included in the ART process, whereas up to five OARs had to be taken into consideration for pancreas plans. This anatomical configuration logically influences the accuracy of dose delivery in ART. Nevertheless, severe degradations beyond the ALim were only observed for two pancreas plans. For one of these, the increase in MU number was >50%, and was attributed to a sudden change of the optimization parameters by the physicist. When we reviewed this plan, we deemed that the change was not imperative and should have been avoided in order to remain in compliance with good practices. Moreover, this change occurred at the beginning of our experience with the ART process, which explains why we did not find a better alternative to avoid this MU deviation. Considering the good target coverage and the compliance with the OAR dose constraints, the plan was delivered nonetheless. The change in dose delivery accuracy was only pointed out after delivery, by the PSQA measurements. With the experience we have gained since, we would not deliver this plan today. Because of the increased delivery time and the reduced dose delivery accuracy, it is important to follow good practices by carefully and gradually changing the optimization settings. The variation in MU should be systematically evaluated prior to plan delivery. Indeed, in addition to tiring the patient, we observed clear evidence of a tendency for the dose delivery accuracy of the plan to decrease with increasing MU number. Although it is in line with good practices to keep the beam configuration and optimization parameters unchanged, the MU number may increase significantly. Indeed, as illustrated in Figure 5, changes in the shape or position of OARs can affect target volume shrinkage, and consequently, the complexity of the target volume shape. Furthermore, a reduction in the distance between the OAR and the target volume can require a steeper dose gradient to satisfy dose constraints. These configurations may lead to more complex planning, with a resultant significant increase in MU. In light of the results of this study, our institution decided to stop performing adapted fraction PSQA after delivery. Even if an initial plan may potentially never proceed to treatment delivery, it still has to be verified, to check the consistency of our dosimetric practices and in order to use it as a predictive index for the dose delivery accuracy of adapted plans. The present study was performed on only two cancer sites, but the decision to stop PSQA was made for all adapted localizations, on the assumption that the pancreas is the most complicated and sensitive localization and we follow the same practices for each site. To the best of our knowledge, most MRIdian users do not perform PSQA measurements for their adapted plans, because it is very time consuming. For those who perform post-delivery PSQA measurements, our investigation describes a means to discontinue PSQA measurements without changing the quality assurance process.
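As a small illustration of the recommendation above that the MU variation be evaluated before delivering an adapted plan, the helper below flags plans whose relative MU increase exceeds the +20% guideline suggested during initial user training. The function name, the return strings and the example MU values are hypothetical.

```python
def mu_variation_check(mu_initial: float, mu_adapted: float,
                       soft_limit: float = 0.20) -> str:
    """Flag adapted plans whose MU increase exceeds the suggested limit.

    soft_limit is the +20% increase suggested by the manufacturer;
    exceeding it does not forbid delivery, but should trigger a review
    of the optimization settings before proceeding.
    """
    rel_change = (mu_adapted - mu_initial) / mu_initial
    if rel_change <= soft_limit:
        return f"OK: MU change {rel_change:+.1%}"
    return (f"REVIEW: MU change {rel_change:+.1%} exceeds "
            f"{soft_limit:+.0%} - check optimization parameters")

print(mu_variation_check(mu_initial=2400, mu_adapted=3100))
```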
FIGURE 5 Variation of OAR shape (duodenum in blue) between initial and adapted fraction, with the influence on the optimized PTV shape (in yellow). The original PTV is in red.
It should be noted, however, that this work was validated only for two localizations, and within the specific conditions of planning, adapting, and measuring implemented in our study. Further validation is required, especially for users working in different conditions, before stopping adapted plan PSQA measurements. One limitation of this study is that the dose delivery accuracy was assessed solely on the basis of PSQA measurements obtained with the ArcCHECK, which itself presents several performance limitations that may have impacted our results. 24 To be more comprehensive, the delivery accuracy and complexity of a treatment plan could be defined in terms of other aspects, including the multileaf collimator (MLC) field shape, or using modulation indices. 25 In the context of step-and-shoot IMRT, the investigation of the size of the MLC segments weighted by the number of MU might be of value for assessing the impact on dose delivery accuracy. In this regard, Lamb et al. 18 established online automatic plan consistency checks for Viewray's system based on the evaluation of two parameters as safety checks: first, the ratio of the "bixel-minutes," defined as the sum of beam-on time multiplied by segment area (a measure of integral dose); and second, the target volume ratio. This solution makes it possible to monitor the quality of adapted fractions without being too time consuming. More recently, Rippke et al. similarly developed an automatic tool for process-based per-fraction QA for online ART on the MRIdian Linac based on plan analysis. 26 This type of dosimetric checking has also been investigated on the Unity (Elekta AB, Stockholm, Sweden) MR linac system. 27 In terms of perspectives for future studies, it would be of interest to compare the results of these different tools with the results of PSQA measurement, with a view to investigating a possible correlation. Three additional QA processes may also be warranted to monitor the quality and delivery of treatment plans and, more specifically, on-table adapted treatment plans. These processes are: independent dose calculation, in vivo dosimetry and logfile QA. These three QA processes could be automated and applied in the workflow without significantly increasing the workload. Indeed, before delivery, independent dose calculation should make it possible to verify whether dose calculation and MU are in agreement. This function is already commercially available on the Viewray and Elekta MR linac systems. 28 After delivery, logfile analysis could inform the user about whether the machine has performed the treatment as planned, based on machine parameters. This solution is built-in for the Viewray system. On Elekta MR linac systems, logfiles can be read out and recorded for dose reconstruction. 29 Also after delivery, in vivo dosimetry could help to check whether the treatment went as planned, by taking into account the presence of the patient. Currently, no commercial solution is available for this on the Viewray system. On the Elekta system, the presence of an MV imaging system should make it possible, 30 but it is still not commercially available. More generally, the entire ART process is a highly interdisciplinary workflow with the patient in the treatment position, and consequently, is highly time sensitive.
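A rough sketch of a "bixel-minutes"-style consistency check, in the spirit of the Lamb et al. approach described above, is given below. The data structure, the 0.8-1.2 flagging band mentioned in the comment and the example numbers are assumptions made for illustration, not values taken from that work.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Segment:
    beam_on_time_min: float   # beam-on time of the segment (minutes)
    area_cm2: float           # MLC aperture area of the segment (cm^2)

def bixel_minutes(segments: List[Segment]) -> float:
    """Sum of beam-on time x segment area, a surrogate for integral dose."""
    return sum(s.beam_on_time_min * s.area_cm2 for s in segments)

def consistency_ratio(initial: List[Segment], adapted: List[Segment]) -> float:
    """Ratio of adapted to initial bixel-minutes; values far from 1.0
    suggest the adapted plan has drifted from the initial plan intent."""
    return bixel_minutes(adapted) / bixel_minutes(initial)

# Hypothetical two-segment plans
initial = [Segment(0.30, 45.0), Segment(0.25, 38.0)]
adapted = [Segment(0.36, 44.0), Segment(0.31, 36.0)]
ratio = consistency_ratio(initial, adapted)
print(f"bixel-minutes ratio = {ratio:.2f}")   # flag if, e.g., outside 0.8-1.2
```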
Workflows and processes need to be standardized and analyzed to identify any specific risks introduced by the ART process. [31][32][33] Prospective approaches such as failure modes and effects analysis (FMEA) could be applied to quantify risks and associated failures. They may lead to the definition of appropriate process-based QA strategies and tools that could be implemented to reduce risk and avoid critical failures. [31][32][33] Conserving the quality and dose delivery accuracy of adapted plans is necessary, and implies the successful achievement of each successive step in the ART process. CONCLUSION In this study, post-fraction PSQA results from adapted plans for the treatment of liver and pancreas cancer, using the MRIdian 0.35 T MR-linac system, were investigated and demonstrate that the dose delivery accuracy of adapted plans is conserved, with limited deterioration of the PSQA results. In the vast majority of cases, pass rates were within tolerance limits. The only degradation in PSQA that was outside the tolerance limits was observed for highly specific and complex treatment plans. We show that minimizing the increase in MU number between initial and adapted plans is key to maintaining the accuracy of dose delivery. In our clinical routine practice, based on the present results, we have decided to stop adapted PSQA measurement. On condition that good practices are adhered to during the ART workflow, we can assume that the results of initial PSQA measurements are comparable to those of adapted plans. Overall, the ART process is a multidisciplinary and complex process that needs to be globally analyzed to limit risks and minimize quality deviation. AUTHOR CONTRIBUTIONS Igor Bessieres: Conception and design of the work; acquisition, analysis and interpretation of data for the work; drafting the work; final approval of the version to be published. Olivier Lorenzo: Conception and design of the work; analysis of data for the work; revising the work critically for important intellectual content; final approval of the version to be published. Aurélie Bertaut: Statistical analysis and interpretation of data for the work; revising the work critically for important intellectual content. Aurélie Petitfils: Analysis and interpretation of data for the work; revising the work critically for important intellectual content. Léone Aubignac: Analysis and interpretation of data for the work; revising the work critically for important intellectual content. Julien Boudet: Conception and design of the work; analysis of data for the work; revising the work critically for important intellectual content; final approval of the version to be published. ACKNOWLEDGMENTS The authors have nothing to report. CONFLICT OF INTEREST STATEMENT The authors declare no conflicts of interest.
What makes for effective feedback: staff and student perspectives
Abstract Since the early 2010s the literature has shifted to view feedback as a process that students do where they make sense of information about work they have done, and use it to improve the quality of their subsequent work. In this view, effective feedback needs to demonstrate effects. However, it is unclear if educators and students share this understanding of feedback. This paper reports a qualitative investigation of what educators and students think the purpose of feedback is, and what they think makes feedback effective. We administered a survey on feedback that was completed by 406 staff and 4514 students from two Australian universities. Inductive thematic analysis was conducted on data from a sample of 323 staff with assessment responsibilities and 400 students. Staff and students largely thought the purpose of feedback was improvement. With respect to what makes feedback effective, staff mostly discussed feedback design matters like timing, modalities and connected tasks. In contrast, students mostly wrote that high-quality feedback comments make feedback effective – especially comments that are usable, detailed, considerate of affect and personalised to the student's own work. This study may assist researchers, educators and academic developers in refocusing their efforts in improving feedback.
KEYWORDS Assessment feedback; purpose of feedback; effective feedback
Introduction Feedback can be one of the most powerful influences on student learning (Hattie and Timperley 2007). The research literature contains numerous studies on feedback approaches that are regarded as effective, and these have been synthesised in several review studies (Hattie and Timperley 2007;Shute 2008). The evidence base on feedback has also been used to develop several conceptual models that have been influential in our understandings of how feedback should be done (Carless et al. 2011;Boud and Molloy 2013). Substantial advice is thus available on how feedback might be made more effective. In contrast to the effective feedback possibilities expounded in the literature, students generally report in surveys that feedback is done poorly in higher education, compared with other aspects of their studies (Carroll 2014;Higher Education Funding Council for England 2014;Bell and Brooks forthcoming). However, a substantial problem with student satisfaction surveys (particularly the UK's National Student Survey and its Australian equivalent) is that they are based on an outmoded understanding of feedback. Questions tend to ask if students are happy with the volume or quality of comments they receive from their educators (Winstone and Pitt 2017); this contrasts with a strong shift in the literature over the past decade towards understandings of feedback as a process that leads to further learning (Sadler 2010;Carless et al. 2011;Molloy and Boud 2013).
Where Shute's (2008) review of feedback focused on 'information communicated to the learner' , and Hattie and Timperley (2007) focused on 'information provided by an agent' , more recent understandings reposition feedback within its conceptual roots in biology or engineering, as a process leading to improved work . This shift in how feedback is understood places emphasis on many more features of feedback than just the provision of 'hopefully useful' comments from educators to students. The conceptualisations of feedback currently prominent in the literature consider the entire feedback process, driven by the student rather than the educator, involving a multitude of players, and necessarily involving the student making use of information to effect change. The literature has thus moved forward in how it understands feedback -but it is not clear if those involved in feedback have been brought along with it. Prior to this shift in the early 2010s, there had been a range of studies about staff and student perceptions of feedback. Some focused only on what students thought. For example, Poulos and Mahony (2008) found that health students at an Australian university were interested not just in modality and timeliness, but also the credibility of the feedback source. In studies that considered both staff and student perspectives, there were generally discrepancies between student and educator perceptions of feedback practices. For example, in one Hong Kong study, educators tended to report a much more positive picture than students when both were asked similar questions about feedback (Carless 2006). An Australian study found a similar mismatch between educators' espoused feedback theories and practices and their actual feedback behaviours, with actual practices often falling short of individuals' ideals (Orrell 2006). The general message appears to be one of inconsistencies between understandings of actual practices, educator perspectives and student perspectives (Li and De Luca 2014). However, since the shift in researcher and expert understandings of feedback in the early 2010s, there has been a dearth of studies on what staff and students experience as effective feedback. In particular, there has been a lack of studies that include both staff and students from a range of institutional and disciplinary contexts. The typical post-2010 study on perceptions of feedback focuses on a single discipline at a single institution, with a convenience sample of less than 200 student research participants and no staff participants (e.g. Dowden et al. 2013;Robinson, Pope, and Holyoak 2013;Bayerlein 2014;Pitt and Norton 2017). While there have been a handful of studies that also include a small number of staff or several disciplines (e.g. Orsmond and Merry 2011;Sanchez and Dunworth 2015;Mulliner and Tucker 2017), these have still largely been single-institution studies with cohorts skewed toward particular genders and concentrated in limited discipline groups. Given Sadler's (2010) caution about the need to be careful when generalising in feedback research, there is a need for studies that are more inclusive and comprehensive in terms of disciplines, institutions, year levels, gender and other characteristics. The lack of broader studies on staff and student perceptions of feedback is problematic, because we do not know to what extent staff and students have been brought along with the changing understandings of feedback occurring in the literature. 
In assessment more broadly, educators are the people who design what students are expected to do, and their opinions about what is effective may be more influential than research evidence about what occurs (Bearman et al. 2017). Similarly, in a process-oriented conceptualisation of feedback, students are the main actors , and their understandings of what feedback is for and what makes it effective are necessary to implement sophisticated designs. This paper addresses a gap in our understanding around what educators and students think feedback is for, and what they think makes for effective feedback, through qualitative analysis of a purposive sample from a large-scale feedback survey. In particular, it addresses the following research questions: RQ1: What do staff and students think is the purpose of feedback? RQ2: What do staff and students think makes for effective feedback? Method We administered a large-scale survey about feedback to staff and students at two Australian universities in 2016-2017. The survey instrument is available at www.feedbackforlearning.org/wp-content/ uploads/Feedback_for_Learning_Survey.pdf and is free to use under a Creative Commons ShareAlike 4.0 International Licence. Valid responses were received from 4514 students and 406 staff. The survey was primarily quantitative, but it also included a small number of open response items. This paper reports on our qualitative analysis of a subset of the open response data where staff with assessment responsibilities (i.e. educators) and students discussed what they thought feedback was for, what they thought constituted effective feedback, and gave examples of effective feedback they had experienced. The survey data were collected as part of the first phase of a research project funded by the Australian government Office for Learning and Teaching and undertaken by two Australian universities. The project had the broad focus of seeking to understand the feedback experiences and practices of coursework students and university staff (both academic and professional). Approval was received from the Human Research Ethics Committees of both universities prior to all data collection. Participation in the survey was voluntary, and participants were offered the opportunity to go into a prize draw for a small incentive. We acknowledge that both the opt-in nature of the study and the incentive may affect the representativeness of the participants recruited. Sample Resource constraints dictated that we needed to sample the data in order to conduct in-depth qualitative analysis. We therefore opted for a sample of 200 student responses from each institution (total N = 400) that was representative of the characteristics of the overall populations in terms of gender, international/domestic enrolment, online/on-campus enrolment, and faculty. As the entire data-set of educators was of comparable size (n = 323 participants) we opted to use all data from teaching staff rather than a sample. Analysis For this paper, analysis focused on a subset of two open-response questions. Students and staff with assessment responsibilities were asked to: (a) state what they saw as the purpose of feedback; and (b) state why they considered a recent, self-selected instance of feedback had been effective. We conducted a thematic analysis of the data similar to the process described by Braun and Clarke (2006). 
As with any thematic analysis, a series of choices were made during the research design that shaped the themes developed and the outcomes of the analysis. Ontologically, the study is broadly based in realism (Maxwell 2012); we think the participants experience feedback as a real thing, and it is possible for educators and students to have very different experiences of the same feedback reality. We undertook an inductive analysis, with the acknowledgement that as feedback researchers we bring a set of domain theory to the topic, and that we actively construct themes rather than have them passively 'emerge' from the data (Varpio et al. 2017). We developed 'semantic' themes (Braun and Clarke 2006), because we were more interested in what our participants explicitly wrote than we were in identifying latent meanings; responses to openended qualitative survey questions often tend to be too 'thin' to support deeper forms of analysis (LaDonna, Taylor, and Lingard 2018). Our coding framework was developed through an iterative process of reading subsets of the data, sharing notes and testing preliminary codes. This process involved four researchers (MH, PD, MP, TR) going through five major iterations. Once a preliminary framework was developed through this process it was shared as a codebook with another researcher not involved in its development (PM), who then coded data from 50 participants before making minor amendments to the codes in consultation with two members of the research team (PD, MH). The final framework was then applied to the entire sample. Results and discussion This section is structured around the two core research questions for this study. Data relating to each are presented in summary form, and analysis of each key theme is then reported and discussed. While we report on the prevalence of themes within the sample, we are cautious in making any claim of generalisability to the broader population. These themes are illustrative and establish a valuable foundation for critically reflecting on our cultures of feedback and for generating future lines of inquiry. RQ1: What do staff and students think is the purpose of feedback? Participants indicated four main purposes of feedback: justifying grades; identifying strengths and weaknesses of work; improvement; and affective purposes. The prevalence of each of these purposes is presented in Table 1, along with subthemes where we observed them. It should be noted that data presented in tables may add up to more than 100% because individual participants' data may have been coded in multiple themes. Is feedback about improvement, or justifying a grade? The vast majority of responses expressed that feedback is about improvement, with 90% of students and 89% of staff mentioning some sort of improvement as a purpose of feedback. For staff, an improvement purpose of feedback was ten times as prevalent as a grade justification purpose; for students, improvement was more than twenty times as prevalent as justification. Improvement was regarded by some participants as an 'obvious' purpose of feedback, which is perhaps unsurprising given the prevalence of this theme. It is heartening to see such a high prevalence for improvement, as it is the fundamental element of popular feedback ideas such as those proposed by Carless et al. (2011), Sadler (2010 and Boud and Molloy (2013). However, when students and staff wrote about improvement, there was a marked difference in what they regarded as the object of improvement, as outlined in Table 2. 
The most common response by participants was that the purpose of feedback was improvement but they did not state an object of the improvement. Those participants who referred to unspecified improvement may have implicitly assumed some default focus of improvement or an overall, general improvement. Where participants did specify a particular focus for improvement, it was usually improvements to students' work, improvements in understanding, or improvements in learning or study strategies. Building on work by Boud and Molloy (2013), Carless (2015) defined feedback as 'a dialogic process in which learners make sense of information from varied sources and use it to enhance the quality of their work or learning strategies. ' (p. 192). The prevalence of improvements to both work and learning strategies in the data suggests that some educators and students share our own understanding of the purpose of feedback. It also suggests that some aspects of Hattie and Timperley's (2007) recommendation that feedback should focus on improving self-regulation may be represented in the understandings of educators and students. The assessment and feedback literature has recently increased its focus on the development of students' understandings of quality and their ability to make decisions about quality work, known as 'evaluative judgement' (Tai et al. forthcoming). A small set of participants wrote about improvements related to evaluative judgement, such as improved self-evaluation or improved understanding of standards. However, evaluative judgement as an overarching capability was not a substantive presence in the data-set. Educators and students were predominantly focused on improvements to work, and an improvement in the ability to produce work, not on an improvement in the ability of students to evaluate work. Pointing out strengths and weaknesses The identification of strengths and weaknesses in student work was sometimes reported by participants without mention of the use of that information for improvement. This perhaps corresponds to older, information-centric understandings of the purpose of feedback, such that feedback is about telling students what is good and bad about their work, but not about telling students how to improve it or students using the information for improvement. A typical expression of this theme was made by one educator, who said the purpose of feedback was 'to allow the student to see where their strengths and weaknesses for that task lie. ' Feedback to motivate and make students feel good A small number of staff, and a smaller number of students, mentioned affective purposes for feedback. For these participants, one of the purposes of feedback was to motivate students to do better work, to acknowledge student effort, to encourage students, or to make them feel good about their work. One student noted that although it is not the primary purpose of feedback, if affective purposes are not attended to the results can be hurtful: [The purpose of feedback is] primarily to improve in future, but also to compliment and motivate and provide positive reinforcement to the effort and time put in. It's absolutely shattering when the assessor does nothing but nit-pick and criticise with no positivity at all. This was a somewhat common way of considering affect as a purpose of feedback: as a secondary but essential purpose. RQ2: What do staff and students think makes for effective feedback? 
Staff and students diverged more on what makes for effective feedback than they did for the purpose of feedback. When staff and students were describing what made a feedback experience effective, the prominent themes were the content of the comments, aspects of the feedback design and the source of the feedback information. The prevalence of each of these top-level themes is shown in Table 3. Feedback design As discussed earlier, the feedback literature has moved from a focus on providing better information to students (e.g. feedback comments on student work) to also consider designing the tasks and activities in which students engage (e.g. requiring students to use feedback comments from their first assignment in their second assignment). A slight majority of educators mentioned design as what made feedback in their classes effective; however, relatively few students mentioned design at all. The specific features are summarised in Table 4. Compared with students, a higher proportion of staff thought design was what supported the self-selected effective feedback instance they discussed. This was true for design in general, and for almost all specific features of feedback design. One possible explanation for this is that educators may be more aware of design than students; feedback design can take educators significant time and consideration, whereas students may not notice the design and instead focus on the products of the design (e.g. comments). Students are often regarded as wanting more timely feedback (Li and De Luca 2014), and it is common for institutions to require feedback comments be provided to students within a particular timeframe. Prompt turnaround of feedback was mentioned by a very small number of students, and 10% of educators. However, we would argue that prompt turnaround of feedback is actually a second-order concern; the most important concern for timeliness is that feedback information is available to students in time for them to undertake the next task. Timeliness in the form of feedback information being available when the student needs it was not mentioned by students, and was mentioned by a very small number of staff. We would regard the availability of feedback comments in time to do subsequent work as a fundamental requirement for feedback to occur at all, and the relative scarcity of this theme suggests this view is either not held by many students and educators, or it is perhaps so taken-for-granted that it was not considered worth mentioning. Another potentially taken for granted feature of feedback is that it needs to be iterative or connected; students need to have tasks structured in such a way that they can demonstrate their improvement from one task to the next . Few educators and fewer students mentioned this as a feature of effective feedback. Where students mentioned this theme, they said feedback was made effective either by repeated attempts at the same task, repeated attempts at similar tasks, tasks split into pieces and interspersed with feedback, or in-class feedback followed by feedback on an improved submission. However, despite being mentioned by relatively few students, several of those students mentioned this as the only feature that made their specific instance of feedback effective. Similarly, for many of the staff who mentioned iterative or connected tasks as a feature of effective feedback in their classes, this was the only theme found in their data. 
Some specific feedback design features have gained popularity in feedback literature over recent years; in particular peer feedback, the use of exemplars and feedback moderation. The near absence of peer feedback is potentially unsurprising, as although we are aware of some use of peer feedback within our contexts, we are also aware of resistance to these approaches from students and educators (Liu and Carless 2006;Tai et al. 2016;Adachi, Tai, and Dawson 2018). The absence of exemplar approaches may be explained by exemplars not being viewed by educators and students as part of feedback processes; although exemplars are compatible with models like Feedback Mark 2 (Boud and Molloy 2013), they may not fit within everyday educator or student definitions of feedback. We suspect the lack of comments about feedback moderation (the review of educator feedback comments by other educators, as described in Broadbent, Panadero, and Boud 2018) may be due to this process being largely nonvisible to students, or it being a relatively niche practice. The design elements most mentioned by students related to modalities -that is, the forms in which feedback information was provided. These comments tended to relate to the perceived affordances of particular modalities: rubrics were noted as 'accurate' or 'detailed'; digital recordings were 'easy to understand' or more voluminous; face-to-face feedback was personalised and thorough. The lack of comments from students around automated sources (e.g. formative multiple-choice quizzes) is perhaps surprising. However, this does not imply these sources are ineffective; it merely implies they were not a part of the most effective recent feedback experience for these students (or, potentially, are not considered to be feedback as such). Within the conceptualisation of feedback adopted in this paper, comments are 'dangling data' (per Sadler 1989, p. 121) until they are actively used by students. A small number of educators and a very small number of students directly referred to students taking an active role as what had made feedback effective. For example, one educator said that this aspect of design was what made feedback effective: Students were given very detailed comments on an essay draft. They were required to produce a final draft, and a reflective piece that explained/justified their response to comments on their draft. While it may be increasingly accepted that feedback needs to be enacted in order to complete the feedback loop and thus qualify as feedback, the relative scarcity of designs that made students the actors in feedback was somewhat surprising. In addition, although data in this theme reflected students as active, it did not usually describe agentic, student-driven practices such as feedback-seeking. Feedback comments By far the most common top-level theme identified in student responses was that high-quality 'feedback information' is part of effective feedback. This aligns with findings from Li and De Luca's (2014) systematic review that noted some of the features of feedback most desired by students were that it was 'personal, explicable, criteria-referenced, objective, and applicable to further improvement' (p. 390). However, we were surprised to see that the quality of feedback information was mentioned by proportionally fewer educators. The specific features of feedback comments noted by educators and students are summarised in Table 5. The most prominent theme in the data about comments is that students found usable comments effective. 
Given the conceptualisation of feedback adopted in this paper, such a statement may sound fundamental or even tautological. However, given its prevalence, it is worth emphasising that the most common active ingredient in effective feedback from the student perspective was communicating what needs to be improved. While this was usually expressed in terms of improvements to the students' work or understanding, some students also mentioned that feedback which focused on improvements to learning strategy was also effective. The next most prevalent theme for students in terms of the content of comments was that feedback needs to be detailed, specific or thorough. For many students who mentioned detail, it was the sole feature that made their instance of feedback effective. A related, less common theme was that feedback needed to be clear, focused, precise or direct. Some students mentioned that their feedback experience was made effective by being personalised or individualised. These descriptions proved difficult to analyse as they often used a term like 'personalised' without explanation of what the term meant. In exploratory focus groups conducted to clarify some responses from this survey, we asked students (n = 28) what the term 'personalised' meant to them, and we received the consistent response that students thought feedback was personalised when they felt the assessor had actually read their work and was making comments specifically about it - as opposed to receiving generic feedback information about the cohort's work. Based on this, we consider the individualised and personalised themes to be perhaps inseparable; however, we report them as distinct codes here because that was what we saw in the survey data as standalone. In contrast, there was also a small set of students and staff who found generic feedback comments (the opposite of personalised feedback) effective. A small number of students indicated that their recent effective feedback experience was made effective thanks to broadly affective features of the comments made about their work: the comments were nice, positive or constructive, or supportive, encouraging or motivating. For a substantial minority of students who discussed either of these themes it was the only theme mentioned; however, for most students these themes were mentioned alongside other features.
Table 5. Qualities of comments provided that support effective feedback by educators and students.
Based on Li and De Luca's (2014) review, as well as recent work on evaluative judgement (Tai et al. forthcoming), we expected that staff, and to a lesser extent students, would value comments that made reference to standards or criteria. Student participants in other qualitative studies (e.g. Poulos and Mahony 2008) often mentioned criteria as an effective reference point for feedback. To our surprise, very few staff or students in our study mentioned these features, and even when we re-checked the rubric modality subtheme from feedback design, there was little explicit mention of standards or criteria. Sophisticated feedback models like sustainable feedback (Carless et al. 2011) or Feedback Mark 2 (Boud and Molloy 2013) are dependent in some part on feedback that makes explicit reference to standards, in order to develop student understanding of those standards; it is concerning that standards were not a feature of effective feedback experiences for many staff and students.
Most students mentioned comments that identified how to improve, but few mentioned the reference point for those improvements. Conclusion Returning to the research questions for this study, we have found that, broadly speaking, staff and students think the purpose of feedback is improvement. From the staff perspective, feedback was made effective primarily through design concerns like timing, modalities and connected tasks. From the student perspective, feedback was made effective through high-quality comments which were usable, sufficiently detailed, attended to affect and appeared to be about the student's own work. However, we were also very interested to find other, sometimes seemingly incongruous experiences of effective feedback. For example, while many students had effective feedback experiences involving feedback information tailored to them, some others appreciated generic comments. Here, the experiences of staff and students seem to concur with what Sadler (2010) observed from four review studies on feedback: At the risk of glossing over the complexities of what is known about feedback, the general picture is that the relationship between its form, timing and effectiveness is complex and variable, with no magic formulas. (Sadler 2010, p. 536) The staff focus on feedback designs and the student focus on feedback comments may reflect the elements of feedback processes that are most readily noticeable to staff and students. In improving feedback, it may be helpful for students to attend to and reflect on the design elements that support their learning. Better understandings of feedback designs are an element of the 'assessment literacy' feedback recipience process that find to be supportive of more agentic engagement in feedback. In addition, student demand for better feedback designs -rather than just better feedback comments -may support educators who wish to change how they do feedback. Staff and students are sometimes stereotyped as holding regressive views of feedback, and we were heartened to find that this was not the case with most of our sample. Our participants held what we would regard as relatively positive and sophisticated views of feedback, especially with respect to its purpose. However, we caution that this could be a result of selection bias, with educators and students who think more about feedback potentially more likely to complete a survey on the topic. However, despite this sophistication there are still several frontier topics in feedback that have not featured in recent effective feedback experiences for our participants. Evaluative judgement, peer feedback, exemplars and feedback moderation are concepts or practices that are regarded as holding merit by researchers (Liu and Carless 2006;Carless and Chan 2016;Broadbent, Panadero, and Boud 2018;Tai et al. forthcoming), but either weren't noticed, experienced or prioritised as most effective by our participants. This study may assist researchers and academic developers in refocusing their efforts. For example, interventions to convince this sample of educators and students that feedback is about improvement rather than justifying a grade may seem patronising. However, interventions to shift students from a focus on the quality of comments towards what they do with those comments may be well received, as, although there was a strong focus on the quality of comments, the most commonly identified feature of effective feedback comments was that they were usable. 
Affective and relational matters did not feature strongly in this study; however, we know from other research that these matters are crucial and impact different bodies of students differently (Telio, Ajjawi, and Regehr 2015;Ryan and Henderson forthcoming). It may be worthwhile to target development around affective matters with these educators and students. This study, as with many others in the field of higher education, asked individuals to report what they think is effective, rather than measuring if particular approaches are successful (Tight 2013). While we agree with the need in general to move towards understanding effects of feedback, we think this needs to be done with reference to a framework of what individuals think feedback is for, and what they experience as effective. In addition, if feedback designs prove as difficult to change as assessment designs, the opinions colleagues express about 'what works' may be much more powerful in influencing feedback practice than published empirical research (Bearman et al. 2017). The analysis in this study was informed by a modern understanding of what feedback is: a process, designed by educators, undertaken by learners, which is necessarily about improvement. If the field of feedback research is to properly adopt this sort of conceptualisation of feedback, there may be a need to re-examine some of the fundamental findings and assumptions of the field within this framework. Although we took an inductive approach, we were unable to ignore key conceptual arguments from recent years; other inductive analyses may similarly yield new insights on the staff and student experience of feedback if conducted within this frame. From a practical perspective, there is also a great need to move institutional and national surveys toward more modern and sophisticated conceptualisations of feedback (Winstone and Pitt 2017). Despite our modern framing, however, we found that some old-fashioned ideas remain prevalent. For example, students had an overwhelming focus on the content of comments as what had made feedback effective, which at first glance appears to run counter to models like Feedback Mark 2 (Boud and Molloy 2013). However, balanced against this, the most common feature of comments that made them effective from the student perspective was that they were usable. This study has demonstrated that educators and students may hold more sophisticated views of feedback than they are sometimes credited with. But despite the differences between staff and students around what makes for effective feedback, the starkest differences were between the participants and what we the authors, as feedback researchers, regard as the purpose of feedback and what makes it effective. Students and staff continue to believe the purpose of feedback is largely to 'provide' comments with (often vague) notions that it leads to improvement. However, such beliefs overemphasise the idea that we have a clear sense of what quality input (provision of information) looks like. Instead, we argue that feedback should be judged by looking at what students do with information about their work, and how this results in demonstrable improvements to their work and learning strategies. In other words, effective feedback needs to demonstrate an effect. In doing so we can best judge, and adapt accordingly, the entire feedback system, including the form of comments. Disclosure statement No potential conflict of interest was reported by the authors. 
Funding This work was supported by the Australian Government Department of Education and Training [grant number ID16-5366]. Notes on contributors Phillip Dawson is an Associate Professor and the Associate Director of Deakin University's Centre for Research in Assessment and Digital Learning (CRADLE). He holds a PhD in Higher Education, and a first-class honours degree in Computer Science. Phill's current research interests include digital threats to academic integrity, academics' assessment design thinking, feedback and learning analytics, while his methodological expertise covers research synthesis, digital research methods and case study research. He also has a research background in mentoring, peer learning and higher education pedagogy. Michael Henderson is an Associate Professor and the Director of Graduate Studies in the Faculty of Education at Monash University. He researches and teaches on the topics of educational technology and instructional design, including ethics of social media use and assessment feedback designs. Michael leads the OLT funded project Feedback for Learning and is a lead editor for AJET. Paige Mahoney is an Associate Research Fellow at Deakin University's Centre for Research in Assessment and Digital Learning (CRADLE). She holds a first-class honours degree in Professional and Creative Writing and History. Paige's previous research has explored the complex intersections between history and fiction, gender and memory and regional and national identities. Michael Phillips is a Senior Lecturer in the Faculty of Education at Monash University. Michael's research explores the complexity of engaging educators in higher education and schools in professional learning. In addition to his work on teacher's knowledge, he has developed a national profile in multi-modal assessment feedback. Tracii Ryan is a Research Fellow in the Faculty of Education at Monash University. Tracii has research expertise relating to the motivations, outcomes and individual differences associated with internet use. Tracii also has several years of experience working across a range of research projects within the higher education context, and her most recent work focuses on assessment and feedback.
Detection and characterization of local nonlinearity in a clamped-clamped beam with state space realization
The state space representation of linear and nonlinear systems is widely used in the literature for system characterization, system identification, and model-based control synthesis. In the case of systems with local nonlinearity, the realization of a state space model in which the states are physically interpretable, as is often required, has been a challenging task. This requirement becomes more emphasized, especially in the scope of industrial high-precision systems. Consequently, the class of black-box system identification approaches becomes less attractive. In the scope of this paper, we are interested in modeling systems with dominant local nonlinearity where employing linear models can only cover a limited range of system dynamics. More specifically, geometric nonlinearities which are present in the (bolted) joints of structural interfaces are analyzed. The main goal is to provide systematic modeling of such systems without using a sparse nonlinear representation. Such a low-order nonlinear model can improve the simplicity of analyzing the nonlinear system and can be used for structural vibration and noise control. In order not to neglect the sophisticated linear modeling technique, the linear model is proposed to be extended by means of smooth nonlinear terms. The systematic approach contains the modeling of the linear counterpart followed by the localization and characterization steps in the well-known three-step paradigm. For the characterization step, this work relies on the acceleration surface method (ASM). The experimental setup under study, as a benchmark, is a set of two beams of different lengths and thicknesses connected by a screw that is excited by a mechanical shaker. The axes are oriented in the transverse direction of the beams, while the boundaries are realized as imperfect clamped-clamped boundary conditions at the two ends in the model. Consequently, by selecting the excitation amplitude, we can control the dominant dynamics at lower excitation amplitudes and invoke the local nonlinearity at higher amplitudes. For higher excitation levels using sine sweep signals, the phase space information of the shaker and accelerometer sensors is used to detect the local nonlinearities along the clamped-clamped beam. The detected and characterized nonlinearities are incorporated into the linear system as a systematic approach for modeling such a structurally nonlinear system.
INTRODUCTION Structural vibration and noise control are important fields of engineering that focus on minimizing unwanted vibrations and noise in structures. In various industries, including aerospace, automotive, construction, and manufacturing, where excessive vibrations and noise can negatively impact performance, safety, and human comfort, structural vibration and noise control have become more essential. One of the key concepts related to structural vibration and noise control is the active control system. It utilizes sensors, actuators, and control algorithms to counteract vibrations or to reduce noise actively. These systems continuously monitor the vibration or noise levels and generate appropriate signals to minimize their effects [1]. The advantages of designing control strategies based on a thorough understanding of the system dynamics, the flexibility to optimize control algorithms before implementation, and the potential for adaptive control strategies that can adjust to changing conditions have made the development and implementation of mathematical models within the controller design framework an important research topic. This so-called model-based control engineering has led to more efficient and effective minimization of vibrations and noise in various structures [2]. The requirements for the mathematical models used in controller design have grown, and numerous modeling techniques are now available [3]. In this paper, the focus is on developing a low-order model that captures the dominant linear and locally nonlinear dynamics and can be employed to reduce modeling uncertainties in model-based control development. Such a nonlinear model is computationally cheap and, in a closed-loop scheme, can provide enhanced robustness against unwanted disturbances. Compared to high-order models, low-order models have fewer state variables and parameters. This results in simpler mathematical equations and reduces the computational burden associated with analyzing and simulating the model. It allows for faster computations, making it feasible to implement real-time control algorithms. However, the parameter identification of the mathematical model can be challenging and requires a large amount of data. Non-parametric system identification presents a formidable challenge due to its dependence on the meticulous design of experiments [4], ensuring the excitation of all nominal system dynamics. In the frequency domain, the applicable frequency range may extend, for instance, from 0 to 1000 Hz, as seen in active vibration control scenarios. Consequently, the acquisition of time-domain signals necessitates high sampling rates over extended durations to encompass both low- and high-frequency dynamics effectively. Moreover, in the context of Multiple Input Multiple Output (MIMO) systems, the intricacies of experiment design become considerably more demanding due to the intricate interactions among individual inputs. This complexity often results in the accumulation of vast datasets within non-parametric nonlinear MIMO systems, posing a significant challenge. In contrast, the utilization of low-order nonlinear parametric models offers an elegant and efficient means of data representation. Such models not only provide a more graceful explanation of the data but also facilitate their application in tasks such as model-based control synthesis.
In this contribution, a low-order model of the clamped-clamped beam is developed to capture its linear and nonlinear characteristics. To make use of sophisticated linear modeling techniques, the linear model is extended through smooth nonlinear terms. The selection of the appropriate nonlinear model order depends on a trade-off between the model accuracy and the computational complexity of the local nonlinearity, which in the context of this paper is realized by a screw connection. Applying the acceleration surface method (ASM) offers insight into this trade-off, allowing one to choose the proper model order for describing the nonlinear behavior. The determination of the nonlinear coefficients is carried out with the conditioned reverse path (CRP) method. This approach builds on the principles of the reverse path (RP) methodology while removing the necessity of exciting each individual response location. A prerequisite for the CRP method is that the excitation of the system is applied at a significant distance from the localized nonlinearity, a condition met by the experimental arrangement involving a beam clamped at both ends. Employing the CRP method enables the computation of the nonlinear coefficients corresponding to the nonlinear function acquired through the ASM. Additionally, the CRP method facilitates the computation of the Frequency Response Function (FRF).

EXPERIMENTAL PROCEDURE The development of a low-order model is illustrated on a clamped-clamped beam, presented in Figure 1.

LINEAR LOW-ORDER MODELING USING GENETIC ALGORITHM The aim is the state space representation of the nonlinear system. In this section, the focus is first on the linear state-space representation based on the mass, stiffness, and damping matrices. To obtain initial values of the system parameters restricted to the space of the sensors, a substructure is generated using Abaqus Finite Element Analysis (FEA). For this purpose, preparations such as the geometric modeling of the clamped-clamped beam, assignment of the material parameters, and definition of the boundary conditions are performed. Choosing the retained nodes, which are the locations of the five sensors, and allowing their transversal orientation on the beam, the substructure procedure is performed to obtain reduced-order mass and stiffness matrices. After calculating the Rayleigh damping from the mass and stiffness matrices, the FRF is computed from the state space model in conjunction with these system parameters.
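To make this step concrete, the sketch below assembles a second-order structural model (M, K, D) into state-space form and evaluates its FRF on a frequency grid. The matrices, Rayleigh coefficients, and frequency range are illustrative placeholders rather than the identified parameters of the benchmark beam.

```python
import numpy as np

def state_space_from_mkd(M, K, D):
    """Build continuous-time state-space matrices from M, K, D.

    States are [x; x_dot]; the input is a force at every DOF and the
    output is the acceleration at every DOF (as measured by the sensors).
    """
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K,        -Minv @ D]])
    B = np.vstack([np.zeros((n, n)), Minv])
    # Acceleration output: x_ddot = -Minv K x - Minv D x_dot + Minv f
    C = np.hstack([-Minv @ K, -Minv @ D])
    Dmat = Minv
    return A, B, C, Dmat

def frf(A, B, C, D, freqs_hz):
    """Frequency response H(jw) = C (jwI - A)^-1 B + D on a frequency grid."""
    n_states = A.shape[0]
    H = []
    for f in freqs_hz:
        s = 1j * 2 * np.pi * f
        H.append(C @ np.linalg.solve(s * np.eye(n_states) - A, B) + D)
    return np.array(H)   # shape: (n_freq, n_out, n_in)

# Placeholder 5-DOF matrices standing in for the FEA substructure result
M = np.eye(5)
K = 1e6 * (2 * np.eye(5) - np.eye(5, k=1) - np.eye(5, k=-1))
alpha, beta = 1.0, 1e-5                 # assumed Rayleigh coefficients
Dmp = alpha * M + beta * K              # Rayleigh damping D = alpha*M + beta*K

A, B, C, Dmat = state_space_from_mkd(M, K, Dmp)
H = frf(A, B, C, Dmat, np.linspace(1, 200, 400))
```

The acceleration output equation follows directly from x_ddot = M^-1 (f - K x - D x_dot), which is why the feed-through term M^-1 appears in the output matrices.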
To generate the validation data, an experimental modal analysis is performed. In the framework of linear modeling, the excitation signal has to be chosen in such a way that the nonlinearity of the system remains unaffected. However, experimental studies have shown that nonlinear motions are detected even at small excitations. For this reason, the concept of the best linear approximation (BLA) developed by Pintelon and Schoukens [5] is used for calculating the FRF. This method quantifies the level of nonlinear distortion in the FRF and analyzes the impact of process and measurement noise on the BLA. One requirement on the excitation signal is the application of a designed periodic excitation such as the multisine excitation

u(t) = \frac{1}{\sqrt{N}} \sum_{k=-F}^{F} U_k \, e^{\, j 2\pi k f_{\max} t / F}.    (1)

This equation calculates the time-domain representation of the signal u(t) by summing contributions from the frequency components U_k obtained in the Fourier domain, evaluated at different time instants using complex exponentials. The complex exponential term is the kernel of the summation and represents sinusoidal functions at discrete frequencies: j denotes the imaginary unit \sqrt{-1}, the factor 2\pi ensures that the frequency is specified in radians per second, f_{\max} is the maximum frequency component of interest in the signal, N is the total number of samples in the signal, k is the frequency index currently being summed over, and t is the time variable at which the signal is evaluated. The summation runs over a finite range of the index k, from -F to F, where F is a positive integer. The normalization factor at the beginning of the equation ensures that the amplitude of the output signal is preserved when converting from the frequency domain to the time domain.

In Figure 2, the FRF is shown together with the noise variance and the total variance; the latter are the dominating errors in this benchmark. Two nonlinear distortion curves are plotted. The first shows the actual level of the distortions as they are present at the output of the system. To estimate the model, however, the data are averaged over the five realizations. Although averaging does not eliminate the presence of the nonlinear distortions, their impact on the variance of the estimated FRF is reduced, and it is the latter variance that should be used during the identification, since the averaged data are used in the identification step [6].
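As an illustration of such a designed periodic excitation, the following sketch generates one period of a random-phase multisine with the inverse FFT. The frequency band, sampling rate, and flat unit-amplitude spectrum are assumptions made for the example, not the settings of the actual measurement campaign.

```python
import numpy as np

def random_phase_multisine(n_samples, fs, f_min, f_max, rng=None):
    """Generate one period of a random-phase multisine via the inverse FFT.

    All Fourier lines between f_min and f_max receive unit amplitude and a
    uniformly distributed random phase, so every frequency in the band is
    excited while the crest factor stays moderate.
    """
    rng = np.random.default_rng(rng)
    df = fs / n_samples                       # frequency resolution
    k_min = max(1, int(np.ceil(f_min / df)))
    k_max = int(np.floor(f_max / df))
    U = np.zeros(n_samples, dtype=complex)
    phases = rng.uniform(0.0, 2.0 * np.pi, k_max - k_min + 1)
    U[k_min:k_max + 1] = np.exp(1j * phases)  # unit amplitude, random phase
    # Inverse real FFT of the half-spectrum yields a real time-domain signal
    u = np.fft.irfft(U[:n_samples // 2 + 1], n=n_samples)
    return u / np.std(u)                      # normalise the RMS value

# Example: five periods of a multisine exciting 1-1000 Hz at fs = 4096 Hz
fs, N = 4096, 4096
u_one_period = random_phase_multisine(N, fs, 1.0, 1000.0, rng=0)
u = np.tile(u_one_period, 5)
```

Repeating the period several times and discarding the first repetitions lets transients die out before the FRF and its variance are estimated over the different realizations.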
To mitigate the disparity between the measured and calculated FRFs, a genetic algorithm was employed within the Matlab software environment. This approach facilitated the optimization of the system parameters (mass, stiffness, and damping matrices, denoted as M, K, and D, respectively) to minimize the divergence between the FRFs. The experimental data in Figure 1, consisting of one input and five corresponding outputs, were first imported into the software. This input-output dataset was then utilized to compute the five linear FRFs, which served as the objective functions for minimization within the genetic algorithm. Subsequently, a series of arrays was defined encompassing potential values for the mass, stiffness, and damping matrices. Initial values were assigned to these matrices, acquired through the Finite Element Method. Utilizing these matrices, a state space model was formulated in the subsequent step, and from this model the five FRFs were derived in the final stage. Employing the defined objective functions, the deviation was assessed via the root mean square approach. If the deviation was less than 30%, the calculated mass, stiffness, and damping matrix values were deemed satisfactory. If the deviation exceeded this threshold, a genetic algorithm optimization process was initiated, in which steps 1 through 5 were reiterated. Noteworthy parameters governing the genetic algorithm include a crossover fraction of 0.8, a population size of 800, a stall generation count of 900, and a maximum number of generations set at 1000. The outcomes of the genetic algorithm optimization are visually depicted in Figure 3. It demonstrates that the FRF derived from the state-space model closely approximates the FRF established through experimental modal analysis (the optimization procedure itself is summarized in Figure 4). Given the intended application of this low-order model within an active real-time control system, the disparity in accelerations within the time domain was also examined, particularly within the time interval spanning from 75.5 to 80 s, as depicted in Figure 5. Collectively, the level of concurrence observed amounts to approximately 76%.
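A compact sketch of this optimization loop is given below. It reuses the state_space_from_mkd and frf helpers from the earlier state-space sketch, substitutes SciPy's differential evolution for the Matlab genetic algorithm, and, as a simplifying assumption, optimizes scale factors on M and K plus the two Rayleigh coefficients instead of every matrix entry; the synthetic "measured" FRF merely stands in for the experimental data.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Reuses state_space_from_mkd() and frf() from the state-space sketch above.
# M0 and K0 are placeholders for the FEA starting matrices.
M0 = np.eye(5)
K0 = 1e6 * (2 * np.eye(5) - np.eye(5, k=1) - np.eye(5, k=-1))
freqs_hz = np.linspace(1, 200, 200)

def model_frf(p):
    """FRFs of the parametrized model; p = [scale_M, scale_K, alpha, beta]."""
    scale_M, scale_K, alpha, beta = p
    M, K = scale_M * M0, scale_K * K0
    Dmp = alpha * M + beta * K                       # Rayleigh damping
    A, B, C, Dmat = state_space_from_mkd(M, K, Dmp)
    return frf(A, B, C, Dmat, freqs_hz)[:, :, 0]     # single shaker input

H_meas = model_frf([1.2, 0.8, 2.0, 2e-5])            # synthetic "measurement"

def objective(p):
    """Root-mean-square deviation between measured and modelled FRF magnitudes."""
    err = np.abs(model_frf(p)) - np.abs(H_meas)
    return np.sqrt(np.mean(err ** 2))

bounds = [(0.5, 2.0), (0.5, 2.0), (0.0, 10.0), (0.0, 1e-3)]
result = differential_evolution(objective, bounds, popsize=40, maxiter=100, seed=0)
scale_M, scale_K, alpha, beta = result.x
```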
F I G U R E 3 FRF based on the optimized system parameters closely approximates the FRF based on the experimental modal analysis. FRF, Frequency Response Function.

F I G U R E 4 The genetic algorithm for optimizing the mass, stiffness, and damping matrices.

F I G U R E 5 The accelerations measured along the beam (blue line) are compared with those from the simulation model (red dashed line). The quantitative assessment of agreement between these two datasets yields an approximate concurrence of 76%.

ACCELERATION SURFACE METHOD The nonlinear modeling consists of the characterization step of the local nonlinearity and the estimation step for the nonlinear coefficients. In this section, the focus is on the characterization of the local nonlinearity around the screw connection. Therefore, the ASM [7] is suggested, enabling a qualitative evaluation of the nonlinearity through visualization of the acceleration over the surface spanned by the relative displacement and velocity. The foundation of the ASM is Newton's second law of motion. In the vicinity of the nonlinearity, Newton's second law of motion can be formulated as

\sum_{j=1}^{n} m_{ij} \ddot{x}_j(t) + f_i(x(t), \dot{x}(t)) = p_i(t), \qquad i = 1, \ldots, n,    (2)

where n describes the total number of degrees of freedom (DOFs) in the system, m_{ij} are the mass matrix coefficients, x, \dot{x}, \ddot{x} describe the displacement, velocity, and acceleration vectors, f is the restoring force vector, and p is the external force vector. Neglecting all the inertia and restoring force contributions that have no direct influence on the nonlinear component and focusing on the local nonlinearity between two locations k and l, the equation simplifies to

m_{kk} \ddot{x}_k(t) + f_k\big(x_k(t) - x_l(t), \dot{x}_k(t) - \dot{x}_l(t)\big) \approx p_k(t).    (3)

If no external force is applied to DOFs k and l and the mass coefficient m_{kk} is treated as an unknown constant, a simple transformation results in

-\ddot{x}_k(t) \approx \frac{1}{m_{kk}} f_k\big(x_k(t) - x_l(t), \dot{x}_k(t) - \dot{x}_l(t)\big).    (4)

The nonlinear order can be determined graphically by evaluating (4) near the local nonlinearity, plotting the accelerations against their corresponding relative displacements and velocities. A three-dimensional figure is obtained whose axes describe the acceleration, relative velocity, and relative displacement. To allow interpretation of this figure, a cross-section is generated along the velocity axis, and the corresponding accelerations and displacements are plotted for which the relative velocity is small compared with a threshold value (typically a few percent of the maximum velocity values). For the clamped-clamped beam, the accelerations and relative displacements within this velocity threshold are plotted on a graph. For this purpose, measured data are generated with excitation signals of 3.5, 6, 14, and 17 N in a frequency range of 22 to 26 Hz, which includes the first resonance frequency.

In the next step, the point cloud is approximated by polynomial functions of various orders. The squared error between the point cloud and each polynomial function of a different order is calculated. The best fit between the point cloud and the polynomial function is selected, and, depending on the chosen order of the polynomial function, the linear equation is extended accordingly. In the experiment on the clamped-clamped beam, the best fit of the point clouds is achieved by second- and third-order polynomial functions, as presented in Figure 6.

F I G U R E 6 The ASM is applied for different excitation levels (3.5, 6, 14, and 17 N) to investigate the smooth nonlinear behavior of the screw connection. The point cloud is based on displacement and acceleration data that pertain to within 1% of the maximum velocity (0.0448, 0.0786, 0.1441 and 0.2367 m/s). ASM, acceleration surface method.
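The following sketch shows how such a stiffness cross-section can be extracted and fitted in practice. The variable names, the 1% velocity threshold, and the set of candidate polynomial orders are illustrative assumptions, and the displacement and velocity signals are presumed to have been derived from the measured accelerations beforehand.

```python
import numpy as np

def asm_stiffness_slice(acc_k, disp_k, disp_l, vel_k, vel_l, vel_frac=0.01):
    """Acceleration surface method: stiffness cross-section and polynomial fit.

    Keeps only samples whose relative velocity is within vel_frac of its
    maximum (near-zero velocity), so that -acc_k plotted against the relative
    displacement reveals the elastic part of the local restoring force.
    """
    rel_disp = disp_k - disp_l
    rel_vel = vel_k - vel_l
    mask = np.abs(rel_vel) <= vel_frac * np.max(np.abs(rel_vel))
    x, y = rel_disp[mask], -acc_k[mask]

    fits = {}
    for order in (1, 2, 3, 4):
        coeffs = np.polyfit(x, y, order)
        resid = y - np.polyval(coeffs, x)
        fits[order] = (coeffs, np.mean(resid ** 2))   # squared error per order
    best_order = min(fits, key=lambda o: fits[o][1])
    return best_order, fits

# Hypothetical usage with signals measured around the screw connection:
# best, fits = asm_stiffness_slice(acc5, disp5, disp4, vel5, vel4)
```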
CRP METHOD The CRP method [8] extends the application of the RP algorithm [9] by relaxing the condition of exciting every measurement point. Both methods are suitable for calculating the FRF of the underlying linear system and for estimating the nonlinear coefficients. Concerning the clamped-clamped beam, the smooth nonlinearity of the screw connection can be described with second- and third-order polynomial functions according to the ASM, and the CRP method is applied to determine the nonlinear coefficients. For the sake of completeness, the nonlinear equation of motion is presented:

M \ddot{x}(t) + D \dot{x}(t) + K x(t) + \sum_{j=1}^{n} A_j \, y_j(t) = f(t).    (5)

The equation of motion (5) is transformed into the frequency domain with the help of the Fourier transform. The requirement is the existence of the Fourier transform for each term of the equation of motion:

B(\omega) X(\omega) + \sum_{j=1}^{n} A_j Y_j(\omega) = F(\omega),    (6)

where B(\omega) = -\omega^2 M + j\omega D + K is the linear dynamic stiffness matrix, X(\omega) is the Fourier transform of the generalized displacement vector, and F(\omega) is the Fourier transform of the generalized force vector. The nonlinear term of the equation of motion is expressed as the sum of n components, each of which depends on a nonlinear function vector y_j through a coefficient matrix A_j. In the framework of the RP method, every response location has to be excited; otherwise, the method is not applicable. The CRP method relaxes this condition and estimates the nonlinear coefficients if the excitation is far away from the local nonlinearity. The CRP method uses the relationships among the power spectral densities of the displacements, nonlinear vectors, and forces, together with the cross-spectral density matrix G_{ij} between the i-th and j-th nonlinear vectors, to construct a hierarchy of uncorrelated response components in the frequency domain. The conditioned power spectral density matrices G_{ij(-1:r)} [10] between two nonlinear vectors i and j can thus be calculated by a recursive algorithm starting with r = 1:

G_{ij(-1:r)} = G_{ij(-1:r-1)} - G_{ir(-1:r-1)} \, G_{rr(-1:r-1)}^{-1} \, G_{rj(-1:r-1)},    (7)

where r < i, j, and the subscript (-1:r-1) denotes the PSD of the part uncorrelated with the spectra of the nonlinear vectors from the first through the (r-1)-th. Conditioned PSD matrices involving the excitation vector and the response vector are obtained in a similar manner, substituting the subscripts i and/or j with the subscripts X and F in the equation. The dynamic compliance matrix of the system can finally be computed by means of the H_{c1} or H_{c2} estimation procedures, for example

H_{c2}(\omega) = G_{FX(-1:n)}^{-1}(\omega) \, G_{XX(-1:n)}(\omega).    (8)

The coefficient matrices A_j can then be estimated from the conditioned spectra in an analogous manner [8]. Finally, the underlying linear model (ULM) based on the classical H_1 function is compared with the recovered conditioned FRFs using the CRP method. The magnitude and the phase are shown in Figure 7 for output 5. As expected, the classical estimation of the FRFs based on H_1 is distorted under the effect of the invoked nonlinearity. However, the conditioned spectral analysis based on the CRP recovers the ULM correctly.

F I G U R E 7 Amplitude and phase of the conditioned and unconditioned FRFs. FRF, Frequency Response Function.

CONCLUSION In this contribution, a nonlinear low-order model in state-space representation is successfully developed, representing the dynamics of the clamped-clamped beam in the space of the sensors. The presented technique is a superposition of the linear and the nonlinear model. A linear low-order model is generated using the substructure generation procedure of the Abaqus FEA software.

F I G U R E 1 Schematic configuration of the measurement setup (left), picture of the clamped-clamped beam (right). The experimental setup studied consists of two beams of different lengths and thicknesses connected by three screws. The boundary conditions at the respective ends of the beam are realized as imperfect clamped boundary conditions. For the low-order modeling, different force excitations p(t) are generated for the electrodynamical shaker, exciting the clamped-clamped beam. The resulting accelerations a_1(t), a_2(t), a_3(t), a_4(t), and a_5(t) are recorded at five locations along the clamped-clamped beam. The acquired signals are the basis for the subsequent approaches in this contribution.

F I G U R E 2 The FRF computed from the I/O signal (black line) is compared with the FRF of the initial state space model (violet dashed line) based on the FEA substructure generation. FRF, Frequency Response Function; I/O, Input/Output.
A scoping review of the toxicity and health impact of IQOS This work aims to summarize the current evidence on the toxicity and health impact of IQOS, taking into consideration the data source. On 1 June 2022, we searched PubMed, Web of Science, and Scopus databases using the terms: ‘heated tobacco product’, ‘heat-not-burn’, ‘IQOS’, and ‘tobacco heating system’. The search was time-restricted to update a previous search conducted on 8 November 2021, on IQOS data from 2010–2021. The data source [independent, Philip Morris International (PMI), or other manufacturers] was retrieved from relevant sections of each publication. Publications were categorized into two general categories: 1) Toxicity assessments included in vitro, in vivo, and systems toxicology studies; and 2) The impact on human health included clinical studies assessing biomarkers of exposure and biomarkers of health effects. Generally, independent studies used classical in vitro and in vivo approaches, but PMI studies combined these with modeling of gene expression (i.e. systems toxicology). Toxicity assessment and health impact studies covered pulmonary, cardiovascular, and other systemic toxicity. PMI studies overall showed reduced toxicity and health risks of IQOS compared to cigarettes, but independent data did not always conform with this conclusion. This review highlights some discrepancies in IQOS risk assessment regarding methods, depth, and breadth of data collection, as well as conclusions based on the data source. INTRODUCTION Smoking cigarettes remains at alarmingly high rates worldwide (1.18 billion regular smokers) and is responsible for the annual death of 7 million casualties 1 .Efforts to curb this epidemic continue growing, including tobacco control policies, information campaigns, cessation care, and harm reduction approaches.During the last two decades, many nicotine and tobacco products have been introduced with reduced exposure and risk claims 2,3 .These alternative products with harm reduction potential include oral nicotine pouches, electronic cigarettes (ECs), and heated tobacco products (HTPs).An HTP that has gained global attention and rapid market expansion is IQOS, a product by Philip Morris International (PMI) 4 .IQOS was introduced into test markets in Japan and Italy in 2014, and within six years, its sales have expanded to over 60 countries 5 .IQOS relies on heating reconstituted tobacco at a temperature well below the temperatures measured in combustible cigarettes 4 .Recently, PMI secured a 'modified exposure' order from the US FDA based on a comprehensive modified risk tobacco product (MRTP) application 6 .However, the FDA found that PMI's current data do not demonstrate that IQOS, as used by consumers, will significantly decrease the risk of tobacco-induced diseases for individuals or harm to the population 7 . 
Nevertheless, several independent reports criticized the PMI data presented to the FDA 8 . For example, one report criticized the population health impact model used by PMI to justify that IQOS would benefit individual and public health, and argued that this model excludes morbidity and underestimates mortality related to IQOS use in the population 9 . Also, independent researchers examined PMI data and found that claims of reduced exposure and risk are unsupported by the data [10][11][12] . Moreover, some independent researchers have encouraged policymakers to consider independent evidence before authorizing the marketing of IQOS and similar products that may harm public health 11,13 . Also, some health professional societies recommended that the toxicity of newly introduced tobacco products like IQOS should not be compared to combustible cigarettes but to no-tobacco-product-use situations, i.e. focusing on the absolute, not relative, toxicity 14 .

In this article, we conduct a literature review to assess the data on IQOS toxicity and health impact published by PMI-sponsored research (affiliated authors or funded studies) and independent research. Data from in vitro, in vivo, and systems toxicology studies were extracted to assess IQOS toxicity. The systems toxicology approach integrates multi-level biological data to comprehensively understand systemic molecular and functional changes, using an omics-based method and computational modeling to extrapolate classical toxicology findings to risk assessment 15,16 . In addition, clinical studies were assessed for biomarkers of exposure and health effects of IQOS. This review aims to compare the cumulative evidence on the toxicity and health effects of IQOS from all data sources, including independent and tobacco industry-sponsored research, while highlighting the methodological differences and conclusions among the studies listed.

We previously reported a systematic review on IQOS conducted on 8 November 2021, on Web of Science, PubMed, and Scopus using the terms 'heated tobacco product', 'heat-not-burn', 'IQOS', and 'tobacco heating system' 17 ; that review included a total of 341 reports focused on the chemical analysis of IQOS emissions. For the current scoping review, we looked at articles that assessed IQOS toxicity and health effects from our previous search. We also included more recent articles using the same search terms and methodology (up to 1 June 2022). Only reports written in English were included. A publication was excluded if it did not report IQOS-specific data, reported data unrelated to the topic (toxicity and health impact), or if the study was retracted or did not report original data. Figure 1 summarizes the selection process.
We extracted information on the data source [independent, PMI, or other heated tobacco product (HTP) manufacturers] from each publication's author affiliation, conflict of interest, and/or study funding sections.Publications were categorized into two types of assessments: 1) toxicity, and 2) impact on human health.Toxicity assessments included in vitro, in vivo, and systems toxicology studies.The impact on the human health category included clinical studies assessing biomarkers of exposure and biomarkers of health effects (Figure 1). DEVELOPMENTS Figure 2 shows the categorization of publications based on their topic, study design, and exposure/ health effects, showing the distribution based on the data source.Only publications that reported original data were included (n=103) (Supplementary file Table S1).PMI data are presented first in each section below, followed by independent and competing manufacturers' data (Figure 2). Toxicity assessment Sixty-five toxicity assessment studies were classified based on their study designs (i.e. in vitro, in vivo, and systems toxicology studies).Then, they were subcategorized by research focus (i.e.pulmonary, cardiovascular, and other systemic toxicity). In vitro studies Pulmonary toxicity PMI reported a combined 3D lung and liver tissue on a chip study showing that IQOS did not affect cytochrome P450 activity in both tissues 18 .Two other studies showed that after one week of exposure, total particulate matter (TPM) from IQOS had 20 times less effect on mitochondrial function in human bronchial epithelial cells compared to cigarette smoke (CS) exposure 19 .At prolonged exposure of 12 weeks, markers of cellular adaptation were observed 20 . Several independent studies assessed pulmonary toxicity using in vitro methods.In a study of primary rat alveolar epithelial cells, IQOS exposure induced oxidative stress at 6 h.The authors concluded that this may lead to oxidative stress-related diseases like chronic obstructive pulmonary disease (COPD) and idiopathic pulmonary fibrosis (IPF) in humans 21 .Another study using an air-liquid interface (ALI) to assess the cytotoxic effects on human bronchial epithelial cells, showed that IQOS exposure induced higher cytotoxicity (reduced metabolic activity) than e-cigarettes or air controls but lower than combustible cigarettes 22 .While a study found that IQOS was less cytotoxic than CS to human lung epithelial cell line (A549) (90-95% estimated reduction in cytotoxicity), both products yielded reduced levels of glutathione (antioxidant) and increased carbonylation of proteins (markers of chronic lung diseases) 23 .A study of human bronchial epithelial cells (Beas-2B) and primary human airway smooth muscle cells found cytotoxicity to both cell types by IQOS, similar to CS and e-cigarettes 24,25 .A comprehensive study assessed the cytotoxic impact of IQOS gas phase, particle phase, and whole smoke emissions in comparison to Marlboro Red cigarettes on different types of human pulmonary cells [A549 and BEAS-2B cell lines, normal human bronchial epithelial cell (NHBE) cultures from different donors, normal human lung fibroblasts (NHLF), and human embryonic stem cells).The study reported that IQOS smoke (gas phase, particulate phase, or whole smoke] affected critical cellular functions and was equally cytotoxic to CS for several cell types, especially at high levels of exposure.This study showed that less cleaning of IQOS devices increased cytotoxicity 26 . 
Cardiovascular toxicity A study by PMI researchers showed that IQOS exposure had 18 times fewer inhibitory effects than CS on chemotaxis and trans-endothelial migration of human coronary arterial endothelial cells as a marker of cardiovascular health 27 .However, an independent study of the cytotoxicity of IQOS smoke on human vascular endothelial cells compared to cigarettes and other HTPs showed induced mitochondrial activity.IQOS decreased nitric oxide (NO) production, similar to other HTPs (e.g.Glo), but with lower effects than CS 28 .Similarly, IQOS and e-cigarette exposure were less cytotoxic than CS, less impacted endothelial wound healing of lab-simulated tissue injury, and reduced cellular stress response and inflammatory processes 29 . Other systemic toxicity A PMI study found that IQOS does not inhibit monoamine oxidase, which are enzymes suggested to be involved in smoking addiction, due to the reduced emission of possible monoamine oxidase inhibitors like acetaldehyde and 2-naphthylamine 30 .Another PMI study on human premolars showed that IQOS had minimal effects on teeth discoloration 31 .A study from a competing HTP manufacturer utilized a metabolomics assay to compare the developmental toxicity of IQOS, CS, and e-cigarettes with and without nicotine on human pluripotent stem cells.The data showed that IQOS crossed the developmental toxicity threshold at five times higher concentration than CS, unlike e-cigarettes that did not cross the threshold at maximum tested concentrations 32 . Eight independent studies assessed the systemic toxicity of IQOS exposure or its effects on organs other than the pulmonary and cardiovascular systems.A study compared the effects caused by exposure to IQOS and CS on T lymphocytes' oxidative balance and inflammatory parameters.While IQOS had smaller effects on T cell responses than CS exposure, IQOS smoke and CS impaired T cell proliferation, leading to cell death and decreased interleukin-2 (IL-2) secretion 33 .The effect of CS, IQOS, and e-cigarette aerosol extracts on the viability and differentiation of pre-adipocytes to beige adipocytes as a probe of the development of metabolic disorders was assessed, and only CS yielded detrimental effects 34 .In a study on the viability and function of human osteoprogenitors and mesenchymal cells, IQOS had significantly less toxicity in bone cells than CS 35 .However, another group reported a conflict finding, showing that IQOS exposure impairs preosteoblast cell viability and osteoblastic differentiation to a comparable extent as CS exposure 36 .A study found induced cell death and activated ferroptosis in a concentration and timedependent manner in human corneal epithelial cell lines by exposure to IQOS or CS 37 .Another study found that IQOS can affect orbitopathy differently than CS 38 .The effect of IQOS exposure on teeth discoloration showed less impact than CS on artificial teeth color 39 , and IQOS was not cytotoxic on human keratinocytes and gingival fibroblasts (in the mouth gum) 40 . 
In vivo studies Pulmonary toxicity A PMI study on chronic exposure of A/J Mice for 18 months to IQOS smoke showed that IQOS significantly reduced toxicity and carcinogenicity on red blood cell profile, liver function, lung inflammation, emphysematous, and histopathological changes compared to CS in respiratory tract organs 41 .Another 8-month exposure study showed that IQOS exposure caused hypermethylation of gene regulatory regions (i.e.promoters and enhancers) in both lung and liver tissues extracted from exposed mice (3 h/ day, five days/week, for eight months), but the impact was smaller when compared to CS 42 . In contrast, an independent study of the acute response of mice to IQOS exposure (1-2 days) showed a significant increase in oxidative stress and total lung glutathione, similar to the response after CS exposure 43 .Another study showed that compared to air-exposed controls, IQOS-exposed mice (1-4 days) had significantly decreased concentrations of reduced glutathione and increased percentage of oxidized glutathione in lung tissues, both markers of oxidative stress 44 .However, another study of mice exposure to IQOS emissions for 6 h/day for seven days did not find evidence of oxidative stress, measured by ROS, but found increased several proinflammatory mediators, including IL-1β and IL-6.This study showed that compared to e-cigarettes and CS, IQOS exposure was associated with lower lung injury 45 .A longer exposure study (5 h/day for two weeks) showed that both IQOS and CS exposure induced epithelial cell damage [higher levels of albumin in bronchioalveolar lavage (BAL)] compared to unexposed mice, yet a lower extent for IQOS.Although the accumulation of neutrophils, macrophages, and T cells in the lungs was lower in IQOS-exposed than in CS-exposed mice, the levels of proinflammatory cytokines and chemokines were similar in both groups 46 . More independent data were reported on IQOS exposure compared to CS.A 1-month exposure study investigated the impact of IQOS on rat ultrastructural lung airways and found that IQOS exposure led to a severe remodeling of smaller and larger airways, increased tissue ROS, and promoted oxidative DNA damage; all factors are considered to increase lung cancer risk 47 .A recent study of mice exposed to IQOS aerosol for six months observed increased markers for pulmonary emphysema similar to those in CS exposure, indicating that IQOS is not completely safe.The authors found elevated levels of neutrophils and lymphocytes in the BAL fluid and upregulated genes involved in apoptosis-related pathways in IQOSexposed mice 48 .A study that assessed the impact of long-term IQOS exposure (24 weeks) showed that IQOS exposure resulted in significantly reduced weight and lung function, higher inflammation, and higher oxidative stress compared to controls, and equivalent to CS exposure impact.The authors concluded that long-term exposure to IQOS could be detrimental to pulmonary health 49 . 
Cardiovascular toxicity Data from PMI on in vivo cardiovascular toxicity will be discussed in the systems toxicology section.Only two independent studies could be listed under in vivo cardiovascular toxicity.A study to determine the impact of IQOS exposure on vascular endothelial function in rats showed that exposure to emissions from a single IQOS Heatstick exerted similar impairment in arterial flow-mediated dilation as CS 50 .Another study found that all tobacco products, including IQOS and e-cigarettes, impair flow-mediated dilation in rats after a single exposure session 51 . Other systemic toxicity A meta-analysis of four in vivo studies conducted by PMI researchers assessed the impact of IQOS on the activity of the cytochrome P450 1 A2 (CYP1A2) enzyme responsible for the metabolism of harmful xenobiotics like amines.The results showed that switching the animals to IQOS caused the same effect as cessation of exposure to CS in terms of downregulating CYP1A2 activity to normal levels.The same observation was confirmed in four clinical studies (see below) 52 .Another PMI study showed that IQOS and CS have minimal impact on the intestinal microbiome in mice after six months of exposure 53 . An independent study found higher expressions of metallothionein (scavengers of ROS and metals and associated with immune diseases and cancers) in the cells of the lungs and liver from mice exposed to CS but not to IQOS smoke 33 .A report examining PMI data on Sprague Dawley rats exposed to IQOS smoke or CS for 90 days observed increased markers of acute hepatotoxicity, including liver weight and alanine aminotransferase, in the IQOS-exposed group 54 .While a study showed aggravated arthritis symptoms in CS exposure only, IQOS and CS exposures affected lymphoid tissue cellularity and proliferation of splenocytes in mice during arthritis development 55 .Another study found that IQOS exposure impairs bone fracture healing to a similar extent when compared to CS-exposed mice 36 .A study of the impact of prenatal exposure to IQOS on testicular function showed more delayed sexual maturation and impaired spermatogenesis in male offspring compared to those in CS-exposed mice 56 . 
Systems toxicology No independent studies using systems toxicology were reported.PMI studies that used systems toxicology based on in vitro experiments will be summarized first.A PMI 3-day IQOS exposure study on human gingival epithelial organotypic cultures showed minor histopathological alterations, minimal cytotoxicity, and limited proinflammatory mediator alterations.The subsequent multi-omics analysis showed that IQOS induced about 79% lower biological impact when compared to CS in terms of alterations of genes related to oxidative stress, xenobiotic metabolism, and inflammation 57 .Another study on human organotypic oral epithelial cultures showed that IQOS, compared to CS, yielded less cytotoxicity (significant after 48 h post-exposure), secretion of proinflammatory mediators, and gene expression perturbations related to apoptosis, necroptosis, senescence, xenobiotic metabolism, and oxidative stress 58 .A study of 3D organotypic nasal epithelial culture showed that the impact of IQOS was substantially lower than CS in terms of cytotoxicity, tissue morphology, proinflammatory mediators, ciliary function, transcriptome perturbations, and miRNA expression profiles 59 .Regarding target organ effects, IQOS emitted much lower levels of harmful and potentially harmful constituents (HPHCs), induced lower cytotoxicity on normal primary human bronchial epithelial cells, and exerted lower overall biological impact (3 to 15 times lower than CS) as induced from systems toxicology analysis 60 .A longterm exposure study of IQOS (12 weeks) reported 20 times less toxicity on human bronchial epithelial cells regarding oxidative stress, DNA damage, and epithelial-to-mesenchymal transition (a marker of carcinogenesis) 61 .Similarly, IQOS elicited lower toxicity in all aspects than CS on lung epithelial cells and induced only 7.6% of the CS computationally estimated perturbation of gene expression 62 .Thus, a systems toxicology meta-analysis concluded that IQOS has reduced and more transient effects than CS on buccal, nasal, and bronchial epithelial cells regarding xenobiotic metabolism, oxidative stress, and inflammatory responses 63 .A study on small airway organotypic cells revealed that IQOS exposure induced lower cytotoxicity, lower secretion of proinflammatory mediators, and fewer transient perturbations in gene expression than CS exposure 64 .A recent study assessed 24-hour exposure of young and aged human aortic smooth muscle cells to IQOS and CS and showed no significant effect of IQOS on both cell groups in terms of cell proliferation, functional and molecular endpoints, and gene expression 65 .Another study assessing vascular pathomechanisms indicated a 10 to 20-fold lower effect of IQOS compared to CS on the adhesion of monocytic cells on human coronary arterial endothelial cells (a surrogate of atherogenesis) 66 . 
An in vivo study showed low to absent effects of IQOS exposure on the inflammatory and oxidative stress response, immune response, and lipid and protein surfactant alterations in the lungs of mice after six months of exposure 67 .Another study showed that longer chronic exposure (18 months) to IQOS indicated lower toxic effects than CS on respiratory tract histology, lung inflammation, emphysematous changes, oxidative stress responses, and xenobiotic metabolism 68 .A 90-day nose-only inhalation exposure showed that IQOS had less impact than CS on body weight, hyperplasia and squamous metaplasia in the upper airway, lung inflammation, and overall biological impact (assessed by transcriptomic analysis).However, similar toxic effects between IQOS and CS were found on leukocyte counts in blood, cholesterol, glucose, liver-related enzyme activity, and weights for various organs and glands.The latter observation was attributed to the animals' nicotine intake and experimental stress 69 .A similar study found the same reduction in toxicity when menthol-flavored IQOS was compared to mentholated reference cigarettes 70 .Follow-up systems toxicology studies showed that after 90 days of exposure, IQOS exposure, unlike CS exposure, did not lead to global miRNA downregulation while upregulating inflammation-related miRNA 71 and menthol IQOS has minimal effect on lung proteomes and lipidomes 72 .Another study showed that ceasing mice's exposure to CS, switching mice to IQOS after two months, or IQOS exposure for eight months, showed a similar reduced impact on lung lipids and lipid-related proteins, including surfactant lipids and proteins 73 . IQOS's impact on the cardiovascular system was also assessed.A study showed that mice exposure to IQOS emissions yielded no significant effect on cholesterol and low-density lipoprotein but increased high-density lipoprotein compared to controls but at a much lower impact than CS, and led to reduced development of atherosclerotic plaques.IQOS exposure also impacted lung volume and function less, inflammation and inflammatory cell infiltration in lung tissues, and less lung injury and emphysematous changes.These reduced effects were also reflected in the absence of IQOS-induced heart, lung, and thoracic aorta gene perturbations 74,75 .A follow-up study showed that IQOS exposure did not affect heart weight, left ventricular structure, atherosclerosis progression, heart function, and gene expression related to atherosclerosis and cardiovascular diseases 76 .Another study showed that eight months of mice exposure to IQOS did not induce atherosclerotic progression (aortic plaque formation), altered lipid profiles, upper airway epithelial hyperplasia and metaplasia, lung inflammation, and progressive emphysematous changes as CS exposure did.Lung morphometry and transcriptomics modeling corroborated the experimental results 77 .PMI researchers also used systems toxicology to evaluate the hepatotoxicity of 8-month IQOS exposure in mice.They showed that IQOS, unlike CS, did not induce alterations in lipid metabolism, xenobiotic metabolism, and iron homeostasis that could be linked to oxidative stress and liver function impairment 78 . 
A study by a competing HTP manufacturer assessed the transcriptomic perturbations in 3D nasal airway cells acutely exposed to IQOS emissions compared to Glo and CS.The data showed altered expression levels of genes after exposure to IQOS and Glo (115 genes and 2 genes, respectively) compared to thousands of perturbations with CS exposure (2809 genes).In a separate analysis of cytokines, they did not find inflammation effects 79 . Health impact Biomarkers of exposure A PMI randomized controlled study in confinement showed significant reductions in biomarkers of exposure to HPHCs by 47% to 96% in smokers who were switched to IQOS for five days with equivalent nicotine uptake from IQOS compared to participants' brands of cigarettes 80 .Similar studies for menthol IQOS in Japan and the US showed 50-94% reductions in biomarkers of exposure to HPHCs 81,82 .Other studies switching smokers to IQOS resulted in significant reductions in biomarkers of exposure to TSNAs (about 56%), carbon monoxide (about 77%), benzene (about 94%), 1,3-butadiene (about 92%), and acrolein (about 58%) [83][84][85] .However, a multicenter ambulatory trial for 26 weeks in the US, reported more modest reductions (16-49%) in biomarkers of HPHCs, which were attributed to the study design.This study showed even fewer reductions (about 10%) among dual users of IQOS and CS 86 .In terms of nicotine delivery from IQOS, a randomized crossover study showed that the nicotine delivery rate was similar between IQOS and CS with lower plasma nicotine peak after IQOS use (70% of CS peak) 87 , and another study reported a similar pharmacokinetic profile of nicotine from IQOS and CS with similar user satisfaction 88 .Estimation of lifetime cancer and non-cancer risks from 8 HTPs (including IQOS) compared to 273 cigarette brands showed that cancer risk decreased by more than one order of magnitude and a significantly higher margin of exposure (MOE) for non-cancer risks 89 . An independent study showed that IQOS use, like e-cigarettes, led to lower level of end tidal carbon monoxide (eCO) compared to CS among current smokers.However, the authors expressed concern about the longer term effect of eCO increase from baseline after IQOS and e-cigarette use 90 .Another study showed a small but reliable increase in eCO after an IQOS use session 91 .A third study showed no increase in eCO post-IQOS use sessions 92 .A chronic study showed that smokers who switched to IQOS for six months had significantly lower eCO, within the range of non-smokers 93 . Independent research assessed the MOE to toxic emissions from IQOS compared to CS.It showed higher individual MOEs for all compounds in IQOS emissions (less risk) and 23 times higher combined MOE for all toxic compounds (excluding nicotine) than CS 94 .Also, a study estimated the carcinogenic potency of secondhand smoke from IQOS to be three orders of magnitude lower than cigarettes 95 , and another study showed that IQOS does not impair indoor air quality and does not lead to acute health risks for bystanders 96 . A report from a competing manufacturer on a randomized controlled trial, Glo and IQOS reduced urinary biomarkers of exposure (i.e.tobacco-specific nitrosamines, carbonyls, VOCs, and PAHs) by 20-90% in Japanese smokers who switched to these products for five days in confinement 97 . 
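As a purely numerical illustration of the margin-of-exposure logic behind the comparisons described above, the sketch below divides a toxicological reference dose by an estimated daily intake for each product; all values are hypothetical placeholders and are not taken from the cited studies.

```python
# Margin of exposure (MOE) = toxicological reference point / estimated exposure.
# Larger MOE values indicate lower risk. The numbers below are illustrative
# placeholders, not measurements from the studies discussed in this review.
reference_dose_mg_per_kg_day = 0.5          # hypothetical reference point for one toxicant

daily_intake_mg_per_kg_day = {
    "cigarette": 0.02,                      # hypothetical intake estimate
    "IQOS": 0.002,                          # hypothetical intake estimate
}

for product, intake in daily_intake_mg_per_kg_day.items():
    moe = reference_dose_mg_per_kg_day / intake
    print(f"{product}: MOE = {moe:.0f}")
```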
Biomarkers of health effects PMI researchers reported a controlled clinical study that applied systems pharmacology and showed that exposure-response gene signature in blood was similarly reduced in smoking cessation or switching to IQOS groups compared to continued smoking 98 .Moreover, a meta-analysis of four randomized confinement clinical studies corroborated the same result 99 .The multicenter trial discussed in the biomarkers of exposure section showed statistically significant improvement in high-density lipoprotein cholesterol in serum, white blood cell count in blood, carboxyhemoglobin, forced expiratory volume in one second (FEV1), and total NNAL after switching to IQOS for 6 months in smokers 86 .Another study found that the use of menthol IQOS for 5 days by smokers reduced biomarkers of oxidative stress, platelet activation, white blood cell count, and endothelial function, and better lipid metabolism and lung function 100 .A similar study in the US yielded the same reduction in biomarkers of potential harm 101 . An independent study evaluated the acute impact of IQOS use on pulmonary function in smokers and non-smokers, showing a significant decrease in measures of airway function (flow, volume, and diffusion capacity) and oxygen saturation and almost a significant increase in eCO and airway resistance 102 .Another study showed that exclusive use of IQOS has minimal effect on mucociliary clearance function, as reflected by saccharin test transit time 103 . Also, an independent study showed that IQOS or CS exposure by current smokers led to acute arterial stiffness, as reflected by higher brachial and systolic blood pressure 104 .A crossover study of smokers showed that the use of IQOS, e-cigarettes, or CS was associated with acute oxidative stress, platelet function, flow-mediated dilation, and blood pressure, with CS being the most detrimental among the three products 105 .Another study showed that IQOS use similar to CS impaired systolic and diastolic myocardial function among current IQOS users, but unlike CS, had no adverse effect on blood pressure 106 .In contrast, a study showed that IQOS use, like CS and e-cigarette use, increased blood pressure and arterial stiffness, and eCO was elevated for all products for up to 60 min 107 .A study assessed the acute (after a use session) and chronic (after one month of being switched to IQOS use) impact of IQOS and CS on endothelial function, arterial stiffness, myocardial deformation, oxidative stress, and platelet activation among smokers.The data showed that IQOS did not have an acute detrimental effect on markers of vascular function, oxidative stress, and platelet activation, and the results were corroborated by the improvement in endothelial function in the chronic phase of the study.This improvement was attributed to reduced CO exposure or reduced nicotine intake 108 .A study showed that HTP (mainly IQOS) use led to abnormal DNA methylation and gene expression profiles, yet to a lower extent than CS 109 . 
A few case studies of hospitalization upon using IQOS were also reported.A 20-year-old man developed acute eosinophilic pneumonia after doubling daily IQOS consumption (from 20 to 40 sticks) 110 .Another case study reported the same observation for a 16-year-old youth who started using IQOS 2 weeks before hospitalization 111 .Similarly, a subacute lung injury of a 56-year-old man using IQOS for 2.5 years was reported 112 .In contrast, a study focusing on health benefits for IQOS users with a history of pulmonary diseases showed that in a small cohort of smokers with COPD who switched to IQOS for three years, there was a substantial decrease in COPD exacerbations and improvements in respiratory symptoms and exercise tolerance 113 . In summary, we compared data from independent research, PMI, and other HTP manufacturers regarding the toxicity and health effects of IQOS. The body systems most studied in independent and PMI research are the pulmonary and cardiovascular systems, yet scattered literature exists on other systems.The use of systems toxicology to generate toxicity data on IQOS was unique to PMI studies, and several initiatives were taken to validate the utility of this approach (Figure 3) 114 .For instance, PMI conducted a crowd-sourcing validation of their systems toxicology approach to assess IQOS toxicity, in which experts recruited through a third party performed modeling on data collected from mice and humans, concluding nearly no harmful effect of IQOS 115 .Moreover, they conducted a peer review to assess the validity of the data generated and the robustness of their systems toxicology approach used in the IQOS MRTP application to the FDA 116,117 .However, no independent research has been conducted on IQOS toxicity using systems toxicology, which is critically needed to provide checks and balances. Figure 3 summarizes the data comparing the toxicity and health impact of IQOS to controls (exposed to air), cigarette smoking, and smoking cessation models, including in vitro, in vivo, and human perspectives.This comparison is focused on the general conclusion of the data reports and does not include a detailed assessment of the methodologies used.Except for one PMI study that showed IQOS exposure has beneficial effects, independent and PMI studies reported harmful or no different effects of IQOS compared to control.All PMI and other HTP manufacturers' studies reported beneficial effects of IQOS compared to CS.However, the independent evidence was mixed, reporting beneficial, harmful, or similar effects of IQOS compared to CS. PMI and other manufacturers' data showed an equivalent reduction in toxicity when smokers (or animal models) were switched to IQOS compared to cessation, while some independent research showed harmful effects.It should be noted that our previous systematic review on IQOS content and emissions concluded that industry-supported and independent research agreed on IQOS efficient nicotine delivery and reduced emissions of most cigarette smoking toxicants.Yet, they diverged on increased emissions of chemicals and toxicants in the FDA's HPHC list or beyond 17 (Figure 3). 
Due to the wide scope of this review, including in vitro, in vivo, and human studies, the vast literature data were more suitably summarized as a scoping review rather than a systematic review for space constraints.Additionally, the quality of summarized studies and the used methodologies were not evaluated as part of this review of the literature on IQOS toxicity.Nonetheless, this scoping review highlights the general trends in the data on IQOS toxicity and health effects from industry-related and independent researchers.This review aims to emphasize the need for additional independent data on IQOS toxicity and health effects to provide checks and balances, ultimately benefiting all stakeholders, including the product manufacturers.While an updated search could enhance the review, it will not impact its main conclusions. CONCLUSION The ever-growing tobacco product landscape complicates tobacco control, especially as stakeholders, including regulatory authorities, independent scientists, and the tobacco industry, tend to compare the toxicity and health effects of new tobacco products to cigarettes, focusing on the relative rather than the absolute risk of these new products.IQOS is a new tobacco product with extensive data generated by its manufacturers and independent researchers, although to a less extent by the latter.Our comparison of the data from both sources showed that they may not always converge on the reduced risk potential of IQOS compared to cigarettes.There is a need for more data on IQOS, especially on the health effects of long-term use among switching smokers, dual users, as well as novice exclusive users. Figure 1 . Figure 1.A flow chart diagram of the scoping review about the toxicity and health impact of IQOS with data from 2010-2021 Figure 2 . Figure 2. Categorization of publications based on the topic, study design, and exposure/health effects from independent research (IND), PMI, and other HTP manufacturers (Other) of the scoping review about the toxicity and health impact of IQOS, 2010-2021
Telomere attrition rates are associated with weather conditions and predict productive lifespan in dairy cattle Telomere length is predictive of adult health and survival across vertebrate species. However, we currently do not know whether such associations result from among-individual differences in telomere length determined genetically or by early-life environmental conditions, or from differences in the rate of telomere attrition over the course of life that might be affected by environmental conditions. Here, we measured relative leukocyte telomere length (RLTL) multiple times across the entire lifespan of dairy cattle in a research population that is closely monitored for health and milk production and where individuals are predominantly culled in response to health issues. Animals varied in their change in RLTL between subsequent measurements and RLTL shortened more during early life and following hotter summers which are known to cause heat stress in dairy cows. The average amount of telomere attrition calculated over multiple repeat samples of individuals predicted a shorter productive lifespan, suggesting a link between telomere loss and health. TL attrition was a better predictor of when an animal was culled than their average TL or the previously for this population reported significant TL at the age of 1 year. Our present results support the hypothesis that TL is a flexible trait that is affected by environmental factors and that telomere attrition is linked to animal health and survival traits. Change in telomere length may represent a useful biomarker in animal welfare studies. Telomeres are repetitive DNA sequences that cap the ends of eukaryote linear chromosomes 1,2 . They shorten with the number of cell divisions in vitro as well as in response to oxidative stress and critically short telomeres trigger a DNA damage response that leads to replicative senescence or apoptosis [3][4][5] . In the last decade or so, measures of average telomere length (TL) taken from blood samples have emerged as an exciting biomarker of health across disciplines including biomedicine, epidemiology, ecology and evolutionary biology [6][7][8] . Considerable among-and within-individual variation in TL has been observed, with a general pattern of rapid telomere attrition during early life and a plateau or slower decline thereafter 9,10 . Both genetic and environmental factors, particularly those associated with physiological stress, predict TL in humans and other vertebrates [11][12][13][14][15][16] . TL has also been repeatedly associated with health outcomes and subsequent survival in a variety of species, particularly humans and birds 7,17 and experimentally elongated TL in mice was associated with a survival advantage 18 . However, a major outstanding question remains to what degree associations between TL and health arise from constitutive differences in TL among individuals set by genes or early life conditions 19 , or from the pattern of within-individual change in TL across individuals' lives which may arise in response to environmental stressors 16 . Estimates of the individual consistency of TL over time in both the human and avian literature vary considerably among studies. Some studies report very high intra-individual correlations, repeatability measures or heritability measures [19][20][21][22][23] which indicate that the rank order in TL among individuals may remain relatively Results RLTL profiles and change measurements. 
We used blood samples collected between 2008 and 2014 to measure longitudinal RLTL by monoplex qPCR in 1,325 samples from 305 female individuals. On average, 4.3 (range: 2-8) telomere measurements were made of each individual, including the first measurement within 15 days of birth and a variable number of subsequent measures (Figure S1C). RLTL measures were adjusted for qPCR plate and row to account for known sources of measurement error [48][49][50] and both RLTL change between subsequent measurements (Figure S1D) as well as RLTL residuals (Figure S2A) were approximately normally distributed. The mean of all RLTL change measurements was statistically significantly smaller than zero (P < 0.001) indicating that telomere shortening was more frequent than lengthening. Animals varied in the amount and direction of RLTL change across consecutive measurements, with a relatively even proportion of individuals increasing (43.2%) and decreasing (56.8%) in RLTL over time. Figure 1a and Figure S1D visualise that at young ages RLTL shortens on average, but at older ages RLTL change centres around zero. Consecutive RLTL measurements made on the same individual were overall moderately positively and significantly correlated (r = 0.38, 95% CI: 0.32-0.43, P < 0.001; Fig. 1b), supporting our previously reported moderate and significant individual repeatability of RLTL 50 . In contrast, RLTL change within the individual was not repeatable (repeatability as variance due to the animal divided by the total variance = 0.00) and the repeatability of absolute RLTL change was small (0.049). Within individuals we observed no constant telomere attrition, maintenance or elongation, but more complex dynamics with short term changes in both directions (Fig. 1c,d). To illustrate this more clearly, example individual RLTL dynamics are shown for all cows with at least seven RLTL measurements in our dataset in Fig. 1e. This complexity means that it is impossible to compare lifetime telomere change dynamics by simply comparing slopes. Therefore, we calculated and examined three average metrics of RLTL dynamics over every individual's lifetime: Firstly, the average of all their RLTL measures ("mean RLTL") which investigates if animals with on average longer RLTL have a survival advantage. Secondly, the average of all their RLTL change measures ("mean RLTL change") and thirdly, the average of all their absolute RLTL change measures ("mean absolute RLTL change"). Mean RLTL change averages across the differences between all subsequent RLTL measures, and short-term positive and negative changes can cancel each other out, leaving mostly long-term overall changes to investigate that also consider the overall direction of change. Mean absolute RLTL change, on the other hand, averages across absolute differences between all subsequent RLTL measurements and is used to investigate the hypothesis that RLTL change regardless of direction (meaning the amplitude of change) may be associated with a negative health outcome. Figure S3 offers additional visual explanation. We were also interested in investigating if early life telomere dynamics were a predictor of productive lifespan (Figure S2D), similarly to what has been observed in bird studies before 20,39 , because early life predictive measures are of particular interest to the dairy industry.
Early life telomere dynamics differ from later telomere dynamics in dairy cattle in that there is more consistent and obvious telomere shortening observed during that time. We focussed on RLTL change within the first year of life by only considering two RLTL measurements per animal: The first was taken shortly after birth and the second at the approximate age of one year ( Figure S4A), but because calves are born throughout the year and the second sample is usually taken during an annual sampling in spring, there is some variation in sampling interval ( Figure S4b). We calculated RLTL change between those two measurements and observed that most animals (76%) experienced shortening of RLTL within their first year of life ( Fig. 1f,g, Figure S4D-F). Sampling interval does not correlate with change in RLTL (r = 0.008, 95% CI = − 0.107-0.123, P = 0.891; Figure S5). Factors associated with change in RLTL. We next ran a series of statistical model analyses to test whether known individual, genetic and environmental variables could explain variation in RLTL change. Only sampling year was statistically significant in the initial full model; age in years, genetic group, feed group, birth year, the time difference between sample dates in days, and the occurrence of a health event within two weeks of sampling were not significant (Table S1 & Table S2). Genetic group, feed group, and the time interval between sampling were kept in the model to capture the structure of the experiment, but all other non-significant effects were backwards eliminated. In the reduced model age in years was statistically significant with older animals showing less telomere depletion (0.026 ± 0.006, P < 0.001; Fig. 1a, Table S3 & Table S4). This model was used to calculate the repeatability of telomere change and absolute telomere change as the variance due to the animal divided by the total variance. We hypothesised that milk production ( Figure S2C) may affect change in telomere length, but found no statistically significant relationship between average lifetime milk productivity and change in RLTL, when tested in a subset of animals that had milk productivity measurements available (253 animals with 918 RLTL change measurements, Table S5 & Table S6). We have previously shown that, on average across this population, RLTL declined over the first year of life but showed no systematic change with age thereafter 49,50 . Consistent with this, we found that average RLTL change across consecutive measurements was only significantly negative (indicating a tendency for attrition over time) when the first measurement was made close to birth and the follow up measurement at the age of around 1 year (Fig. 1a, − 0.115 ± 0.01, P < 0.001; Table S7). We observed an association between sample year and change in RLTL (F = 3.84, df = 4, P = 0.004, Table S4) and hypothesised that this might be at least partially due to different weather conditions. We therefore used weather data ( Figure S6) from a Met Office station close to the farm to test if maximum temperature, minimum temperature, average sun hours per day, total rainfall (mm), and total air frost days in the summer and winter quarters correlated with change in RLTL. Maximum temperature over the summer quarter was statistically significantly and negatively correlated with change in RLTL (− 0.012 ± 0.004, P = 0.001, Fig. 2) meaning that we observed more RLTL attrition in hotter summers (Table S8 & Table S9). 
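The mixed-model structure described here could be sketched with the 'lme4' package roughly as follows. All variable names are illustrative assumptions rather than the authors' own, and the backwards-elimination step is only indicated in a comment.

```r
library(lme4)

# Full model: RLTL change between consecutive samples as response,
# animal identity as random intercept (variable names are placeholders).
m_full <- lmer(rltl_change ~ age_years + genetic_line + feed_group +
                 birth_year + factor(sample_year) + interval_days +
                 health_event + (1 | animal_id),
               data = d)

# Non-significant fixed effects would then be backwards eliminated, keeping
# genetic line, feed group and sampling interval to reflect the study design.

# Repeatability of RLTL change as animal variance / total variance
vc <- as.data.frame(VarCorr(m_full))
repeatability <- vc$vcov[vc$grp == "animal_id"] / sum(vc$vcov)
```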
When sample year was included in the same model with maximum summer temperature it became non-significant, while maximum summer temperature remained statistically significant (Table S10 & Table S11), indicating that summer temperature www.nature.com/scientificreports/ may be the reason for observed yearly variation in RLTL change. The total number of sun hours averaged across the summer quarter (as another marker for a hot summer) was also negatively correlated with change in RLTL (− 0.001 ± 0.000, P = 0.021, Table S12 & Table S13) although the effect size was smaller. Rain during summer may contribute to cool down animals and therefore alleviate RLTL attrition, but in our study population where half of the animals are housed continuously it has a marginal effect (0.000 ± 0.000, P = 0.041; Table S14 & Table S15). Interestingly, maximum winter temperature was also negatively correlated with change in RLTL (− 0.014 ± 0.005, P = 0.009; Table S16 & Table S17), however, when fitted together with maximum summer temperature, it became non-significant (Table S18 & Table S19). After having observed a summer temperature effect on telomere length dynamics, we were interested to find out if similar effects influenced early life telomere length dynamics. We investigated if the amount of early-life RLTL attrition varied with sample year while accounting for the sampling interval in a linear model but did not find any indication for a statistically significant relationship (N = 291, P = 0.666). Therefore, we saw no justification for testing the effect of weather variables on early life RLTL attrition. We next were interested to find out if there were factors that could predict lifetime RLTL change in the complete dataset and considered the total number of specific disease events as internal stressors. More specifically we looked first at the effect of the number of mastitis and lameness events and then at the number of accumulated mastitis and lameness events together (Table S7) on mean RLTL, mean RLTL change and mean absolute RLTL change, but found no statistically significant relationships (Table S20). RLTL change and productive lifespan. Of all dead cows (N = 244) the vast majority (N = 241) had survived to their first lactation, but there was considerable variation in productive lifespan beyond this point (Figure S2D). We wanted to find out if the three measures of life-long change in RLTL (mean RLTL, mean RLTL change and mean absolute RLTL change) could predict productive lifespan and tested them first separately, then together in the same Cox proportional hazard model. Both mean RLTL change (− 5.209 ± 0.845, P < 0.001; Table S21) and mean absolute RLTL change (2.939 ± 0.970, P = 0.002; Table S21) were significantly associated with productive lifespan while mean RLTL was not (coefficient = 0.341, SE = 0.591, P = 0.564, Table S21). When all three measures of lifetime RLTL dynamics were included in the same model only mean RLTL change remained significant (− 4.758 ± 1.018, P < 0.001; Table S21). This implies that the relationship between productive lifespan and mean absolute RLTL change was largely due to covariance with mean RLTL change. Thus, individuals that experienced greater telomere attrition over their lifetimes had a shorter productive lifespan and direction of RLTL change (rather than simply absolute magnitude) was an important aspect of this relationship. 
To visualise the association between RLTL change measurements and productive lifespan using Kaplan-Meier plots, continuous RLTL measures were transformed to a discrete scale by grouping them into tertiles (Fig. 3). Cox proportional hazard models based on these tertile groupings of RLTL measures showed similar results to those reported above (mean RLTL: 0.014 ± 0.079, P = 0.858; mean RLTL change: − 0.257 ± 0.087, P = 0.003, mean absolute RLTL change: 0.179 ± 0.082, P = 0.029, Table S22). The relationship between mean RLTL change and productive lifespan was robust to the inclusion of milk production (a physiological stressor, which is positively associated with productive lifespan as cows with a low milk yield are more likely to be culled at a younger age) in the model (Table S23). If RLTL declines mostly within the first year of life (as Fig. 1a indicates), the association between change in RLTL and productive lifespan may be driven by the fact that this initial decline contributes relatively more to estimates of mean RLTL change in shorter-lived individuals than in animals that have more follow up samples with more moderate change measures available. We therefore repeated the analysis excluding the early life RLTL change measurements and found that mean RLTL change still predicted productive lifespan (N = 253, coefficient = − 5.056, SE = 1.315, P < 0.001, Table S24). The relationship between mean RLTL change and productive lifespan also remained statistically significant, when animals with fewer than 3 samples (which may be more affected by outlier measurements) were excluded from the analysis (− 3.47 ± 1.34, N = 213, Wald test = 6.7 on 1 df, P = 0.01). In our previous studies we thoroughly tested RLTL at different ages as a predictor of productive lifespan and found that while RLTL at the age of one year correlated with survival, RLTL at other ages (including at birth) did not 50 . In the present study we found that mean RLTL change was a better predictor of productive lifespan than RLTL at the age of one year when tested in the same model (Table S25). Most reasons for culling in our herd were disease-related, but some reasons included accidents and herd management procedures and for some animals the reason for culling remained unknown ( Figure S8). Even when animals without a recorded disease-related reason for culling were excluded from the analysis, mean RLTL change still predicted productive lifespan (Table S26). Similarly, we used a Cox proportional hazard model to test whether early life RLTL change between two samples, one taken shortly after birth and the next at an approximate age of one year, predicted productive lifespan and found that greater early life RLTL attrition was associated with a shorter productive lifespan (− 1.141 ± 0.391, N = 291, P = 0.004, Table S21). When we repeated the analysis using the discrete measure of RLTL change tertiles for visualisation purposes we obtained similar results (− 0.225 ± 0.082, N = 291, P = 0.006, Fig. 4, Table S22). In parallel to the analysis of the whole dataset, the relationship between early life change in RLTL remained statistically significant, when animals without a recorded health-related reason for culling were excluded (Table S26). Discussion Our study animals varied considerably in the magnitude and direction of RLTL change across consecutive sampling points. 
This is in accordance with other longitudinal studies that have reported a wide variation in TL change, alongside observations that a large proportion of individuals actually exhibit telomere lengthening over time [26][27][28][29][30][31][32][33][34][35] . Previous work has suggested that rapid changes observed in telomere length, and particularly www.nature.com/scientificreports/ apparent telomere lengthening may be due to measurement error affecting mostly qPCR results 51 . However, other studies using simulated data have also shown that telomere lengthening might be biological and not solely due to measurement error 52,53 . For the present study we carefully optimised the qPCR protocol to ensure reproducible results that are also robust to extracting DNA repeatedly from the sample using different DNA extraction techniques 48 . Our qPCR measurements are repeatable: the proportion of total variance due to sample variance is 80%, consecutive measures correlate well (Fig. 1b) and baseline measurements do not correlate strongly with future mean rate of RLTL changing rate ( Figure S9) which has previously been used as a marker for a small measurement error 51,54 . This makes us confident that our measurements overall capture biological variation. We show that, despite TL being moderately consistent across the lifetimes of individuals, considerable withinindividual variation exists and the pattern of change in TL over an individual's life is highly dynamic. Short-term environmental fluctuations impacting TL dynamics could be responsible and may impact individuals in different ways. A recent meta-analysis has shown that different kinds of stressors are associated with telomere loss in non-human vertebrates 16 . We aimed to understand factors influencing telomere change in our study system and found that age was associated with change in RLTL in the following way: young animals on average shortened their TL, but older animals did not show a systematic relationship of change in RLTL with age. This is in accordance with our previous cross-sectional observations in this study population 49,50 . We further found that sample year was associated with change in RLTL and hypothesised that the yearly effect may be partially explained by weather variables after similar observation have been made for other species: In bats stressful weather conditions during a critical time of the year was associated with more telomere attrition 42 . Dairy cattle are metabolically incredibly active and therefore easily experience heat stress in warm and humid climates 46,47 . Dairy cows actively seek shade when temperatures are above their comfort range 47 , which is a behaviour frequently observed on our research farm in Dumfries during the summer months. Indeed, we found using data from a weather station located close to the farm that during hot summers animals experienced more RLTL attrition. Our results indicate that organismal stress is associated with more telomere depletion and thus provide first evidence that change in telomere length may indeed be useful as a biomarker for animal welfare in farm animals as suggested before 55 . In the specific case of heat stress, it is likely that more easily accessible measures such as milk productivity 56 will be more helpful on commercial farms. Our observation that weather correlates with telomere dynamics supports previous findings that TL is affected by environmental conditions 12,16,42 . 
We could not find evidence in our study that the number of fertility and mastitis events (investigated separately and together) correlated with life-long RLTL change measures. A reason for this may be the crude categorisation of those disease events and adding severity scores in future analyses may influence the result. Individuals with a greater propensity to lose TL over time in our study had shorter productive lives, implying changes in TL reflect important environmental or physiological variation linked to health. We have previously shown that there is a genetic correlation between RLTL at birth and productive lifespan indicating that genes for long telomeres and genes for an improved productive lifespan may be in linkage disequilibrium and inherited together 57 or pleiotropic genes causing long telomeres also improve survival chances. Our data support the contention that within-individual directional change over time in TL is more important than among-individual differences in predicting overall health. While our results that early life attrition in TL correlates with lifespan is in accordance with several bird studies that reported similar results 20,39 , the present study is to our knowledge first demonstration that lifetime variation in telomere attrition rather than variation in constitutive individual differences in average TL predict health outcomes and lifespan in any vertebrate. While there is mounting evidence that TL predicts mortality, health and life history in humans as well as birds and non-human mammals 7,17,[23][24][25][57][58][59][60][61][62] , very few studies have been able to accumulate long-term longitudinal data capable of differentiating the role of among-and within-individual variation in TL to such relationships. There was no relationship between productive lifespan and an individual's average RLTL in the present study. Future studies will show how well our results generalise to other systems as telomere biology is variable amongst species. Cattle telomere biology seems to be similar to other ruminants, horses, zebras, tapirs, some whales and primates including humans in that they have relatively short telomeres and a tight regulation of telomerase expression 63 . If our results extend to some of those other systems and contexts, they have important implications for the utility of TL as a biomarker of health and fitness, lending support to the idea that change in TL is an indirect marker reflecting past physiological insults and stress rather than an indicator of constitutive or genetically-based robustness to life's challenges. Our data also highlight the importance of collecting longitudinal telomere measurements, by showing that in some species it is within-individual change over time in TL that carries the important biological signal. Materials and methods We aimed to follow ARRIVE guidelines 64 throughout this manuscript and provide the ARRIVE essential information in Supplementary File 3. Animal population and data collection. We used samples and data collected as part of the long-term study of Holstein Friesian dairy cattle kept at the SRUC Crichton Royal Research Farm in Dumfries, Scotland 45 . This herd, consisting of around 200 milking cows plus their calves and replacement heifers, has been regularly monitored since 1973 for a broad range of measurements, such as body weight, feed intake, signs of disease (health events), milk yield, productive lifespan and reasons for culling 45 . 
One half of the milking cows belong to a genetic line that has been selected for high milk protein and fat yield (S), while the other half is deliberately maintained on a UK average productivity level (C). Calves and heifers of both genetic lines are kept together. After first calving all cows are randomly allocated to a high forage (HF) or low forage (LF) diet. The LF diet is energy richer than the HF diet and whilst the LF cows are housed continuously, the HF cows graze over the sum- www.nature.com/scientificreports/ mer months. All cows are milked three times daily and milk yield is recorded. In the present study, these measurements were used to calculate an average milk production in kg per cow including all started lactations and it is referred to this as "average lifetime milk production" ( Figure S2C). Every day cows leave the milking parlour over a pressure plate which detects signs of lameness. Behaviour and health events are documented after visual detection by farm workers (Figure S7). At the end of the animal's life its productive lifespan ( Figure S2D) and a reason for culling are recorded ( Figure S8). Productive lifespan is the time from birth to culling in days and is a proxy for the health span of the animal, because all animals that remain healthy enough to generate profit for the farmer remain in the herd. The most frequent reasons for culling were reproductive problems, mastitis, lameness which are typically the most frequent cull reasons on a commercial dairy farm ( Figure S8). Further information about the animal population can be found in Supplementary File 2. Blood sampling. We collected 1,325 whole blood samples from 305 female individuals in the years 2008 to 2014. Routine blood sampling takes place initially shortly after birth (within 15 days of birth) and then annually in spring ( Figure S1A,B). If possible, an additional sample is taken shortly before an animal is culled. Because of this sampling routine and because calves are being born all year round, age at sampling and sampling intervals vary for animals ( Figure S2B, Figure S3B, Figure S4). Weather data included maximum temperature, minimum temperature, days of air frost, total rain in mm and total sun hours for each month ( Figure S6). Data was reduced to the years of interest between 2006 and 2015 and summarised to maximise its relevance considering the sampling interval on the farm to quarterly statistics in the following way: Routine blood sampling was performed in March (Figure S1 B) and therefore the calendar year was divided into quarters and then allocated to a "sample year" which ran from April in the previous year to end of March of the year when the blood sample was taken. This ensured that sampling periods for weather and telomere data were synchronised. DNA extraction and RLTL measurement. DNA from whole blood samples was extracted with the DNeasy Blood and Tissue spin column kit (QIAGEN) and telomere length was measured by qPCR as previously described [48][49][50]57 . The repeatability of the assay (see Supplementary File 2 for how repeatability was calculated) was 80% and therefore delivers interpretable results 65 . A full description of our DNA extraction and qPCR protocols including quality control steps can also be found in Supplementary File 2. Statistical analysis. All statistical analyses were performed in R studio 66 with R 4.0.2. 67 . 
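A sketch of the quarterly aggregation and April-to-March "sample year" alignment described above is shown below. The `weather` data frame and its column names are assumptions for illustration.

```r
# 'weather' is assumed to hold one row per calendar month with columns
# year, month, tmax (monthly maximum temperature) and rain_mm.
weather$quarter <- (weather$month - 1) %/% 3 + 1          # calendar quarters
# Months April-December feed into the sampling that takes place the
# following March, so they are allocated to the next sample year.
weather$sample_year <- ifelse(weather$month >= 4, weather$year + 1, weather$year)

q_tmax <- aggregate(tmax    ~ sample_year + quarter, data = weather, FUN = max)
q_rain <- aggregate(rain_mm ~ sample_year + quarter, data = weather, FUN = sum)
# These quarterly summaries can then be joined to RLTL change records by
# sample year, e.g. using the summer quarter's maximum temperature as a covariate.
```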
Mixed-effects models were implemented using the 'lme4' library 68, while Cox proportional hazard models were implemented using the library 'survival' 69 and figures were generated with the library 'ggplot2' 70. All statistical packages used and a full description of the analysis including code can be found on GitHub (https://github.com/LASeeker/TelomereChangeInDairyCattle). Accounting for known sources of measurement error. We have shown before that our RLTL data are significantly affected by qPCR plate and qPCR row [48][49][50]. To account for those known sources of measurement error, we used the residuals of a linear model that corrected all RLTL measurements for qPCR plate and row, by fitting plate and row as fixed factors in the model. These residual RLTL measures were used in all subsequent calculations and models of telomere dynamics. RLTL profiles and change measurements. We calculated 1,020 RLTL change measurements for 305 female animals as the difference between two subsequent adjusted RLTL measurements within an individual (RLTL change = RLTL(t) − RLTL(t−1)). We used those longitudinal RLTL change measurements as response variables to investigate the impact of various effects such as age, health events and weather conditions on telomere change (see below). We calculated the following three measures of lifetime RLTL change: the animal's mean RLTL over all measurements, the mean RLTL change and the mean absolute RLTL change. While mean RLTL change captures the direction and magnitude of RLTL changes, mean absolute RLTL change describes just the magnitude of change without considering its direction, because we were interested in investigating whether more change in either direction may be correlated with adverse effects. Figure S3 visualises the reasoning behind calculating these three measures of lifetime telomere length dynamics, which are, unsurprisingly, moderately correlated with one another (r ranged from −0.53 to 0.30, Figure S10). We were also interested in analysing early life RLTL dynamics and their association with productive lifespan. Therefore, we calculated change in RLTL within the first year of life as the difference between one measurement taken shortly after birth and the next taken at around one year of age (Figure S4A). Factors associated with change in RLTL. To investigate which factors correlate with the direction and amount of RLTL change, a linear mixed model was fitted with RLTL change between two consecutive measurements as the response variable and animal identity as a random effect. The following factors were included as fixed effects: genetic line, feed group and birth year of the animal, age at sampling (at time t), sample year, and the occurrence of a health event within two weeks before or after sampling (at time t). The time difference between consecutive samplings in days was fitted as a covariate. Non-significant fixed effects (P > 0.05) were backwards eliminated from the model. Age at sampling was modelled as a covariate (age in years). We hypothesised that the high metabolic demand of milk production may impact change in telomere length and therefore repeated the above model for a subset of 918 RLTL measurements from 253 animals with a known average lifetime milk production, which was fitted as an additional covariate. Average lifetime milk productivity was re-scaled by dividing it by 1,000 to bring it to a scale comparable with the other parameters in the model.
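As a minimal sketch of the plate/row adjustment described above (the data frame and column names `rltl_raw`, `qpcr_plate` and `qpcr_row` are placeholders, not the authors' code), the residual RLTL used in all downstream analyses could be obtained as follows:

```r
# Hypothetical column names; one row per qPCR measurement is assumed.
adj_model <- lm(rltl_raw ~ factor(qpcr_plate) + factor(qpcr_row), data = d)

# The residuals become the adjusted RLTL carried into the change
# calculations, mixed models and Cox models.
d$rltl <- resid(adj_model)
```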
We hypothesised that yearly variation in RLTL may be due to yearly variation in weather variables ( Figure S6) and re-ran the model above for the whole dataset while including quarterly weather variables as covariates as a replacement for sample year. We restricted weather observations to the summer and winter quarter to capture the most extreme seasons. Following variables were tested: maximum temperature, minimum temperature, total number of air frost days, totals sun hours averaged across the quarter, average rain in mm. To better understand variables that correlate with early life RLTL, we tested if the amount of early life RLTL attrition (RLTL at 1 year -RLTL at birth) varied with sample year while accounting for the sampling interval in a linear model. After finding factors that influenced RLTL change, we thought that the accumulated number of specific health events that are associated with inflammation and pain may correlate with lifetime RLTL dynamics (mean RLTL, mean RLTL change and mean absolute RLTL change) and tested our hypothesis using the number of mastitis or lameness events in a linear model; these analyses were run first separately by condition and then collectively by summing all events per animal. Association between lifetime RLTL measures and productive lifespan. We used Cox proportional hazard models of productive lifespan that also included mean milk production as covariate and the three measures of lifetime RLTL dynamics (mean RLTL, mean RLTL change and mean absolute RLTL change) as explanatory variables first separately and then together in the same model to test their association with productive lifespan first individually, then while accounting for the effect of the other two measures. For visualisation purposes we converted the continuous measures of lifetime telomere measures to a discrete scale by using tertiles and repeated the Cox proportional hazard models with those and visualised the relationship in Kaplan-Meier plots. To ensure observed associations between RLTL change and productive lifespan were not simply due to more rapid RLTL attrition early in life, the initial Cox proportional hazard models (that included RLTL measures on a continuous scale) were repeated first while all measurements that were taken shortly after birth were excluded and then while all animals with fewer than three RLTL measurements were excluded. Additionally, we wanted to better understand if telomere change or telomere length is the better predictor for productive lifespan. We therefore tested if the previously reported effect of RLTL at a specific age (one year) on productive lifespan 50 remained statistically significant when tested in a Cox proportional hazard model together with milk productivity and mean RLTL change. Lastly, we considered that most, but not all of our animals were culled for health-related reasons ( Figure S8) and repeated the Cox proportional hazard models of productive lifespan with lifetime RLTL dynamics measures as predictors for a subset of animals that had a recorded health-related reason for culling. We were interested to find out if telomere attrition within the first year of life was another predictor of productive lifespan and therefore tested it in a Cox proportional hazard model. 
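The Cox proportional hazard structure described in this paragraph might be sketched as below. All variable names are placeholders; `lifetime` is assumed to hold one row per cow with productive lifespan in days, a culling indicator, the three lifetime RLTL summaries and milk production re-scaled as above.

```r
library(survival)

# Each lifetime RLTL summary tested separately, here mean RLTL change.
cox_single <- coxph(Surv(prod_lifespan_days, culled) ~ mean_rltl_change +
                      milk_prod_scaled,
                    data = lifetime)

# All three lifetime RLTL summaries in one model, accounting for each other.
cox_joint <- coxph(Surv(prod_lifespan_days, culled) ~ mean_rltl +
                     mean_rltl_change + mean_abs_rltl_change +
                     milk_prod_scaled,
                   data = lifetime)
summary(cox_joint)
```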
We transformed the continuous measure of early life RLTL change to a discrete scale by calculating tertiles and repeated the Cox proportional hazard models using the tertiles as explanatory variables in an effort to visualise the relationship between RLTL change and productive lifespan using Kaplan-Meier plots. Finally, we repeated the Cox proportional hazard analysis of early life RLTL change for those animals that had a recorded disease-related reason for culling. See Figure S11 for a visual description of all Cox-proportional hazard models used in this study.
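A hedged sketch of the tertile conversion and Kaplan-Meier visualisation follows; the names are assumed, and a base-graphics plot stands in for the authors' ggplot2 figures.

```r
library(survival)

# Convert the continuous change measure to tertiles for plotting.
lifetime$change_tertile <- cut(lifetime$mean_rltl_change,
                               breaks = quantile(lifetime$mean_rltl_change,
                                                 probs = c(0, 1/3, 2/3, 1),
                                                 na.rm = TRUE),
                               labels = c("low", "mid", "high"),
                               include.lowest = TRUE)

# Kaplan-Meier curves of productive lifespan by RLTL change tertile.
km <- survfit(Surv(prod_lifespan_days, culled) ~ change_tertile,
              data = lifetime)
plot(km, col = 1:3, xlab = "Productive lifespan (days)",
     ylab = "Proportion remaining in herd")
legend("topright", legend = levels(lifetime$change_tertile),
       col = 1:3, lty = 1)
```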
High Resolution Population Distribution Maps for Southeast Asia in 2010 and 2015 Spatially accurate, contemporary data on human population distributions are vitally important to many applied and theoretical researchers. The Southeast Asia region has undergone rapid urbanization and population growth over the past decade, yet existing spatial population distribution datasets covering the region are based principally on population count data from censuses circa 2000, with often insufficient spatial resolution or input data to map settlements precisely. Here we outline approaches to construct a database of GIS-linked circa 2010 census data and methods used to construct fine-scale (∼100 meters spatial resolution) population distribution datasets for each country in the Southeast Asia region. Landsat-derived settlement maps and land cover information were combined with ancillary datasets on infrastructure to model population distributions for 2010 and 2015. These products were compared with those from two other methods used to construct commonly used global population datasets. Results indicate mapping accuracies are consistently higher when incorporating land cover and settlement information into the AsiaPop modelling process. Using existing data, it is possible to produce detailed, contemporary and easily updatable population distribution datasets for Southeast Asia. The 2010 and 2015 datasets produced are freely available as a product of the AsiaPop Project and can be downloaded from: www.asiapop.org. Introduction The global human population is projected to increase from 7 billion to over 9 billion between 2011 and 2050, with much of this growth concentrated in low income countries [1]. The greatest concentration in growth is set to occur in urban areas, disproportionately impacting Asia where half of the population is expected to be living in urban areas by 2020 [1]. The effects of such rapid demographic growth are well documented, influencing the economies, environment and health of nations [2]. To measure the impact of this population growth there is a need for accurate, spatially-explicit, high resolution maps that correctly identify population distributions through time. While high-income countries often have extensive mapping resources and expertise at their disposal to create accurate and regularly-updated spatial population databases, across the lower income regions of the world relevant data are often either lacking or are of poor quality [3]. Since the 1990s there has been increasing interest in creating spatially-explicit, large-area gridded population distribution datasets [4,5,6] to support applications such as disease burden estimation, epidemiological modelling, climate change and human health adaptive strategies, disaster management, accessibility modelling, transport and city planning, poverty mapping and environmental impact assessment [5,6,7,8,9,10]. Current global gridded population datasets that are freely available include the Gridded Population of the World (GPW) database, versions 2 and 3 [11,12] and the Global Rural Urban Mapping Project (GRUMP) [13]. In addition, the Land-Scan Global Population database is updated annually, but has some access restrictions [14,15], and the United Nation Environment Programme (UNEP) has compiled gridded datasets for Latin America, Africa, and Asia [16,17,18], while the AfriPop project provides freely-available gridded population data for Africa [6,10,19]. 
These datasets vary in their modelling techniques and the types of input data used for their construction [20]. Briefly, GPW employs an areal weighting technique that assumes uniformity in population distribution within each administrative unit [5]. The GRUMP dataset builds on the GPW approach, but incorporates satellite night-light derived urban-rural designations in the spatial reallocation of population for each census block [5]. LandScan, UNEP and AfriPop all use dasymetric modelling approaches, utilizing ancillary data, such as land cover, to refine and weight population densities. The LandScan method uses coefficient weights derived from a combination of land cover, transport network and topographic data to re-distribute census data in a gridded format [14], while the AfriPop Project [21] relies principally on land cover, climate zone and detailed settlement information for deriving census data redistribution weights within administrative units [10,20]. Each dataset suffers from limitations for the Southeast Asia region, however, stemming from the input data, mapping process or data availability. While efforts have been focussed in the past on obtaining the most detailed and recent input census data for GPW and GRUMP construction, each remains based upon the circa 2000 round of censuses [5,13], and are thus increasingly outdated. Similarly, the UNEP datasets are based on even older and less detailed input census data. Moreover, the mapping approaches used for the GPW and UNEP datasets have been shown to be generally less precise than that undertaken for GRUMP, Land-Scan and AfriPop [6,10,22]. While an improvement, GRUMP datasets utilise satellite night time light-derived urban extents that have been shown to overestimate actual urban extents for large cities, while missing smaller settlements [23,24,25]. Finally, LandScan does not release information on the input demographic and ancillary spatial datasets, nor does it provide details on modelling methods, making assessments of its accuracy, reproducibility and judgements on its suitability difficult or impossible. In this study we follow a similar approach utilised by the AfriPop Project [26]. We apply a model based on measured relationships between land cover and population density [20] to redistribute administrative unit populations to grid cells in Southeast Asia. We define the region using the official designation of the Association of Southeast Asian Nations (ASEAN) and include Timór-Leste for spatial contiguity ( Table 1).The approach includes separation of urban and rural settlement extents and the integration of remotely sensed land cover data [10,27]. It is important to identify urban from rural areas as the difference in population densities necessitates different land cover weights for distributing population across the landscape. In addition, demographic characteristics and urban versus rural growth rates make it important to treat urban areas different from settlement extents in the redistribution of population. Final products are compared to derived datasets of other global population datasets to assess the accuracy of the different mapping techniques. Population Count Data Population count data were obtained for each country listed in Table 1, principally derived from national population and housing censuses, matched to GIS administrative boundaries for the latest round of censuses, and at as fine an administrative unit level as publicly available. 
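To make the distinction between the approaches concrete, the toy example below redistributes one administrative unit's population under simple areal weighting (GPW-style) versus land-cover-based dasymetric weighting. The per-class densities are invented for illustration and are not the weights used in this study.

```r
# Toy illustration only: 1,000 people in one administrative unit of 10 grid cells.
unit_pop   <- 1000
land_cover <- c("urban", "urban", "cropland", "cropland", "cropland",
                "forest", "forest", "forest", "water", "water")

# GPW-style areal weighting: every cell receives the same share.
areal <- rep(unit_pop / length(land_cover), length(land_cover))

# Dasymetric weighting: cells weighted by illustrative per-class densities.
class_weight <- c(urban = 50, cropland = 5, forest = 1, water = 0)
w          <- class_weight[land_cover]
dasymetric <- unit_pop * w / sum(w)

rbind(areal, dasymetric)
```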
Where the census data are over a decade old, official population estimates were used. Table 1 details the features of the demographic data used. Land Cover Data Fine-scale, satellite imagery-derived land cover datasets were used to reallocate contemporary census-based spatial population count data. Land cover classes were based principally on the MDA GeoCover Land Cover Thematic Mapper (TM) database, a product that provides a consistent global mapping of 13 land cover classes derived from circa 2005, 30 meter spatial resolution Landsat TM spectral reflectance data [28]. The GeoCover imagery classes were reformatted to be consistent with the GlobCover designations used for AfriPop [20], reclassifying and resampling the data to 8.33610 24 degrees spatial resolution (approximately 100 meters at the equator). For areas that were classified as cloud, shadow, or ''No Data'' we filled the data using the nearest neighbour algorithm to create a complete, void-filled land cover dataset for each country. Additional country-specific datasets were used where available to refine the mapping of settlements and land cover. For Cambodia, land cover was refined using detailed water bodies and built area extent datasets from the Ministry of Land Management, Urban Planning, and Construction. For the Philippines and Myanmar, land cover was refined using detailed built area datasets from the Pacific Disaster Center, Global Hazards Information Network [29]. Building and residential data classes from OpenStreetMap (OSM) (http://download-int. geofabrik.de/osm/asia/) [30,31], an open source product that provides free world-wide geographic datasets, were used to refine urban and rural settlement extents for all countries where it was available. Lastly, the GeoCover data does not differentiate large high density urban areas from smaller rural settlements so, for all countries, we applied a conditional statement that used the urban designations set by the GRUMP urban extents dataset to identify which built areas were 'urban' while all other built areas were classified as rural [13]. The inclusion of these additional steps to refine the original GeoCover land cover dataset provide a final product with the most updated ancillary information available on settlement and built landscape features included in the land cover input layer for modelling human population distribution at regional to continental scales. The final land cover datasets were comprised of nine land cover types. Analyses were conducted principally in ArcGIS 10.0 [32] and ERDAS Imagine 2011 [33]. Population Distribution Modelling To model population distributions for the Southeast Asia region, we adopted the methodology used in the construction of the AfriPop datasets [6,7,10,20] and is detailed in Linard, et al. [6] (Text S1). Modifications to the process for Southeast Asia mapping included a change in the input land cover data by using the GeoCover dataset. We also adjusted land cover specific weightings to re-allocate population densities based on Asian climates and countries. An updated Köppen-Geiger classification was used, broken down into seven main climate zones [34]. Equatorial (Zone A) and Arid (Zone B) climates were separated into sub-zones based on precipitation, creating two categories for each zone (Table S1 in Text_S1). As outlined in Linard, et al. [6], different sets of population densities were calculated on a pixel-by-pixel basis within each administrative unit based on the association between land cover and different climate zones. 
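The reclassification-and-resampling step could be sketched today with the 'terra' package as below. This is a modern stand-in for the ArcGIS/ERDAS workflow actually used in the study, and the file name, class codes and class mapping are invented for illustration.

```r
library(terra)

# Hypothetical file; ~30 m Landsat-derived GeoCover classes.
geocover <- rast("geocover_landcover.tif")

# Target grid at 8.33 x 10^-4 degrees (~100 m at the equator).
template <- rast(extent = ext(geocover), resolution = 8.33e-4,
                 crs = "EPSG:4326")

# Two-column matrix: original class code -> modelling class code (invented codes).
rcl <- cbind(c(1, 2, 3, 4),
             c(10, 10, 20, 30))
lc  <- classify(geocover, rcl)

# Nearest-neighbour resampling keeps the classes categorical.
lc_100m <- resample(lc, template, method = "near")
```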
For Southeast Asia, the more detailed census data for Cambodia and Vietnam provided the input for generating per-land cover class population densities (Figure 1). Data from both countries were used to create per-climate zone average land cover-specific population density weightings, which were then applied to redistribute rural populations within administrative units for all countries in Southeast Asia. The total population size at the national level was projected to 2010 and 2015 based on rural and urban growth rates estimated by the UN [1] using the following equation: P2010 (P2015) = Pd × (1 + r)^t, where P2010 (P2015) is the required 2010 (2015) population, Pd is the population at the year of the input population data, t is the number of years between the input data and 2010 (2015), and r is the urban or rural average growth rate taken from the UN World Urbanization Prospects Database, 2011 version (UNPD) [1]. We chose to use the more commonly used and publicly available estimate values from the UNPD over alternative options [35] for consistency and standardization with previous mapping efforts in Africa [6,10,19]. To assign respective growth rates, urban and rural areas were separated using the GRUMP urban extent dataset [5] by recoding any units as "rural units" if they did not spatially coincide with the GRUMP urban extent. Two versions of the datasets were produced, one with the total population adjusted to match UN national estimates [1], and the other left unadjusted.

Accuracy Assessment. Since spatially detailed census data for Cambodia and Vietnam were available to facilitate modelling, these countries were also used in assessing the accuracy of the model. We aggregated the small administrative units to a coarser administrative unit level by summing the smaller units (Text S1, Figure S1). We then used these coarser units and population sums to generate gridded population maps and compared sums of those gridded estimates with numbers from the original, fine-scale administrative unit populations. We compared the modelling method described above (referred to here as AsiaPop) to the methods used by two widely-used global population datasets, the Gridded Population of the World (GPW version 3) and the Global Rural Urban Mapping Project (GRUMP version 1). The original datasets are both available from the Center for International Earth Science Information Network (CIESIN) at Columbia University. Since we were interested in comparing the accuracies of the modelling processes, not that of the final derived products, we replicated the methodologies used in constructing GPW and GRUMP to ensure that identical input data were used for a fair comparison. GPW and GRUMP comparisons were chosen because these datasets have transparent, easily reproducible methods that are well documented [5,11,13,36]. We did not include the UNEP methods and datasets in the comparison due to the relatively old age of the data, nor did we include LandScan due to a lack of published information on data sources and the modelling approach used [19]. Comparisons were conducted by aggregating the finest available census population counts (Admin Level 3) to the next level coarser (Admin Level 2). We then used those counts to produce gridded population distributions at 8.33 × 10^-4 degrees spatial resolution using each of the three methods and compared the observed population totals at the finer administrative level with the summed estimates from the output gridded datasets at the coarser level.
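As an illustration only (the numbers below are invented), the projection formula translates directly into a one-line R function:

```r
# P = Pd * (1 + r)^t, with r the UNPD urban or rural growth rate.
project_pop <- function(p_d, r, t) p_d * (1 + r)^t

# e.g. a unit with 10,000 people in a 2008 census and 2.5% annual urban growth:
project_pop(10000, r = 0.025, t = 2)   # projected to 2010
project_pop(10000, r = 0.025, t = 7)   # projected to 2015
```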
Statistical analyses for observed and estimated population counts included measures of squared error and the Kruskal-Wallis test. The Kruskal-Wallis test is a non-parametric alternative to a one-way ANOVA test [37] and was necessary due to the non-normal distribution of counts. Post-hoc results for the Kruskal-Wallis test were employed in pairwise comparisons [38] and were done using the pgirmess [39] package in R 2.15.1 [40].

Results. Population datasets for 2010 and 2015, non-adjusted and adjusted to UN national total estimates [1], were generated at 8.33 × 10^-4 degrees spatial resolution and projected to a geographic coordinate system and WGS 84 datum for the ten Southeast Asian nations. Figure 2 shows the projected 2015 population distribution for Southeast Asia, displaying number of people per grid cell (8.33 × 10^-4 degrees). Areas highlighted are some of the largest cities in the region and their surroundings.

Accuracy Assessments. Population distribution datasets were constructed using AsiaPop, GRUMP v1, and GPW v3 methodologies using Level 2 census data for Cambodia (census year: 2008) and Vietnam (census year: 1999). Figure 3 shows the modelled outputs, focused on two major urban centres, Phnom Penh, Cambodia and Ha Noi, Vietnam. Visual comparison of the three datasets highlights the underlying approach used for each one. GPW v3 evenly distributes the population across each individual administrative unit while GRUMP concentrates the population into a few major urban areas and then uses areal weighting to redistribute the remainder of the population [13]. AsiaPop also concentrates the population in settlements (defined using Landsat-derived land cover) but, in addition, weights population outside settlements based on different land cover types. The absolute error of the different population maps for Cambodia is shown in Figure 4. The AsiaPop method, in general, produces more accurate results, with many more administrative units showing low error values compared to the GRUMP and GPW methodologies. To compare the different population distribution modelling approaches we calculated root mean square error (RMSE), the percentage of RMSE, and the mean absolute error (MAE) (Table 2). For both Cambodia and Vietnam, the AsiaPop RMSE and MAE measures were lower than those for GRUMP v1 or GPW v3 datasets. The AsiaPop approach also produced the lowest difference between the RMSE and MAE values, suggesting that the variance in individual errors was less for this method than either the GRUMP v1 or GPW v3 methods. Figure 5 shows the relationship between estimated and observed population counts for Cambodia. Each point represents an estimated and actual population count for a level 3 administrative unit. The relationship between the predicted gridded estimates and the observed population totals is substantially more linear for the AsiaPop method than either the GRUMP v1 or GPW v3 method. The AsiaPop model also shows the highest correlation between estimated and observed values at 0.83 compared to the GRUMP v1 (0.62) and GPW v3 (0.53) methods. The fourth plot is a "bean" plot [41], which shows the variation of mean absolute error for each population model. Each independent sample (an administrative unit) is represented by the spread of the short, horizontal bars above and below the median (dark horizontal line) and the distribution for each model is shown with vertical histograms.
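A sketch of the accuracy comparison follows. The observed and estimated count vectors are placeholders, and while the paper compares observed and estimated population counts across methods, the Kruskal-Wallis test below is applied to absolute errors purely for illustration.

```r
# obs, est_asiapop, est_grump, est_gpw are assumed vectors of admin-level-3
# observed counts and the corresponding sums of each method's gridded estimates.
rmse <- function(obs, est) sqrt(mean((est - obs)^2))
mae  <- function(obs, est) mean(abs(est - obs))

rmse(obs, est_asiapop)
100 * rmse(obs, est_asiapop) / mean(obs)   # percentage RMSE
mae(obs, est_asiapop)

# Kruskal-Wallis test across methods, with the pgirmess post-hoc comparison.
err <- data.frame(
  abs_error = c(abs(est_asiapop - obs), abs(est_grump - obs), abs(est_gpw - obs)),
  method    = rep(c("AsiaPop", "GRUMP v1", "GPW v3"), each = length(obs))
)
kruskal.test(abs_error ~ method, data = err)
library(pgirmess)
kruskalmc(err$abs_error, err$method)
```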
Results for Vietnam do not show as strong a relationship (see Text S1, Figure S2) between estimated and observed population counts, although AsiaPop still shows the highest correlation at 62.2% versus 43.4% and 44.7% for GRUMP v1 and GPW v3 respectively. Results from the Kruskal-Wallis test indicate significant differences between population estimates (p-values ,0.0001) for comparisons of the AsiaPop dataset to the datasets produced using the GRUMP v1 and GPW v3 methods for both countries. There was no statistically significant difference between mean ranks of GRUMP v1 and GPW v3 population estimates for either the Cambodia or Vietnam datasets. Discussion The need for spatially-explicit, large-scale mapping of human population distribution continues to grow, especially given the increasing demand and use of digital, open-access, global-scale datasets [19]. In this study, we present the approaches used to construct a more accurate and detailed population distribution dataset for Southeast Asia, a region that has seen a population increase of greater than 30% over the past 20 years [1,2] influencing economic well-being, environment and health issues, and land use transformations [2,42]. Comparison with alternative population distribution modelling approaches suggests that the AsiaPop dataset more accurately characterizes population distribution in the region than other existing global datasets (Figures 5 and Text S1, Figure S2). The Southeast Asia gridded population dataset presented here takes advantage of the growing collection of open source spatial data of relevance to population distributions (e.g. OpenStreetMap [31]), combining them with remotely-sensed settlement and land cover data to more accurately map human population distributions at a finer spatial scale than ever before [10,13,20]. As shown in previous studies, the use of fine-scale spatial units for census input data can reduce the level of error in the modelling process [20,22,43]. Additionally, weighting the distribution of a population by different land cover types, especially through incorporating detailed datasets on settlement and built areas, provides a more accurate representation of patterns of population density [7,10]. While it remains difficult to validate large-scale population distribution datasets, given that no independent sources exist at a global scale [19], the accuracy of population maps can be assessed if there exists a reference dataset at a finer spatial resolution than maps generated [10,15,44,45]. Using different administrative levels, we have shown here that the AsiaPop method was the most accurate modelling method for the redistribution of population counts compared to existing replicable approaches. The lower RMSE (Table 2) of the AsiaPop method indicates a better overall fit of the model. The smaller difference between RMSE and MAE values for the AsiaPop method suggests this approach also has less variability in errors. The improvements in accuracies over existing mapping methodologies shown here are promising, but sources of error and uncertainty remain in the outputs that should be acknowledged in data usage. Input data error is an important source of potential uncertainty and differs across census datasets, especially in Southeast Asia where it is difficult to provide firm estimates on the number of people who practice swidden cultivation [46]. 
Further, the variety in ages and administrative unit levels of the input data used across the region (Table 1) means that mapping accuracies are likely different from country to country, where, for example, using Admin level 2 estimates from 2002 likely produces a substantially more uncertain output 2010 population distribution dataset than for Indonesia, where admin level 4 census data from 2010 is used. Moreover, grouping population numbers into different sized administrative units contributes to the modifiable areal unit problem, a source of error prevalent in analyses that use population parameters [47]. Datasets that vary in their spatial resolution can influence the reliability of statistical population estimates, with smaller units generating less reliable estimates but larger spatial units masking relevant geographic variation [48]. Another source of uncertainty stems from the fact that the urban and rural growth rates used here for temporal population projection are only national-scale and thus mask any sub-national variations occurring. Lastly, the use of broad climate zone categories to calculate land cover weightings, as was undertaken here, may not be as accurate a methodology as using multiple finescale spatial datasets on demographic, land use, topographic and infrastructure variables known to correlate with population distributions [6]. Future work will aim to exploit the wealth of spatial datasets on infrastructure, settlement locations, internally displaced populations and land use that are becoming increasingly available, especially for resource poor countries. We will employ a more sophisticated, regression tree-based approach to further improve the accuracy of output population distribution datasets and enable rapid updates. Moreover, the lack of large-area sub-national datasets on age and sex structures of populations is proving detrimental to many areas of research [49], and the derivation of specific age and sex group large area population distribution datasets built from census and household survey data will be a priority. The mapping here of settlements at a single point in time that are then used as inputs to 2010 and 2015 datasets likely does not produce the realistic patterns of change that occur through urban growth, thus, novel model-based approaches to simulating growth in urban extents are being developed to provide more realistic inputs to projected population mapping. Finally, geostatistical interpolation approaches developed elsewhere [50] are being adapted to exploit the increasing availability of geo-located household survey data [49] to produce statistically robust gridded datasets representing a range of demographic and health metrics. Given the speed with which population growth and urbanisation are occurring across much of Southeast Asia, and the impacts these are having on the economies, environments and the health of nations, this study outlines a timely and relevant approach for providing national level population distribution data. Additionally, the Southeast Asian region has a range in spatially-detailed census aggregations providing a good basis for further testing and validation of the dasymetric modelling approach that relies on relationships between land cover and population density to redistribute population distribution in a spatially-explicit manner. 
The approach was designed with an operational application in mind, using simple and semi-automated methods to produce easily updatable maps as new censuses and ancillary datasets become available. Population datasets for 2010 and 2015 are freely available as a product of the AsiaPop Project and can be downloaded from the project website: www.asiapop.org. Climate zones follow the updated Köppen-Geiger classification (Kottek et al., 2006), with sub-regions broken down for Zones A and B; the criteria for sub-zones are based on precipitation and temperature minimums, annual totals and thresholds aggregated into a gridded dataset [1].
A study of the indian taxation system on cryptocurrency Cryptocurrencies are digital tokens that allow people to make payments directly to each other through an online system. Since it can be used to buy and sell items and has the ability to store value and increase in value, cryptocurrency is drawing the attention of a lot of investors. As to the nature of cryptocurrency, there are different sets of opinions as to whether it is a currency or a Commodity or Security. The Finance Act 2022 was the first law to recognize Virtual Digital Assets (VDAs) in India and introduced crypto taxes. Accordingly, the income generated from investment in cryptocurrency is subject to tax. The present paper studies the emerging cryptocurrency market in India and the provisions of prevailing income tax law in India related to the taxation of cryptocurrency. Introduction Tax is a compulsory financial charge or some type of levy imposed upon a taxpayer by any government organization in order to fund government spending.Tax revenue serves as a prime source to fund public expenditures.The swift development in ICT has enabled the Govt. to identify new avenues to collect taxes such as the taxation of Cryptocurrency.Taking an inchoate step, the government for the first time has officially termed digital assets including crypto assets under "Virtual Digital Assets". The most commonly known type of crypto asset is a cryptocurrency which is a digital currency and acts as the medium of exchange for exchange of products or services like fiat currency.Although the cryptocurrency market in India is presently unregulated, any profit/loss on transactions involving cryptocurrency typically gets Year 2024, covered under Income Tax Act, 1961.This paper aims to explore cryptocurrency and the prevailing tax regulations concerning the income from these assets. Meaning of cryptocurrency Cryptocurrency is a decentralized and digital currency that uses encryption algorithms to verify transactions.It is a peer-to-peer system that operates as an alternative form to send and receive payments without any third-party intervention i.e. without relying on banks to verify transactions.These currencies run on blockchain technology which is based on a distributed public ledger where the record of all transactions is updated and held by currency holders.Cryptocurrency is attracting a lot of investors as it can be used to buy and sell things and has potential to store and grow in value.There are many different cryptocurrencies available in the market like Bitcoin, Ethereum, Litecoin and Ripple etc. Nature of cryptocurrency As to the nature of cryptocurrency, there are different sets of opinions as to whether it is a currency or a Commodity or Security.A major segment of financial experts believes that all the major characteristics of a currency like mode of exchange, unit of account and store of value are satisfied when it comes to cryptocurrency which supports the idea that it should be treated as currency whereas the other set of experts believe that since cryptocurrency is bought and sold for money, it should be considered as commodity.There is also one more conviction that cryptocurrency should be treated as a security as is a tradeable commodity that can be bought and sold over Crypto-Exchanges. Objectives of the study • To study the emerging cryptocurrency market in India.• To study the provisions of prevailing income tax law in India related to cryptocurrency. 
Emerging cryptocurrency market in India The cryptocurrency market has grown in size and popularity among investors, facilitating financial activities such as buying, selling, and trading in India and around the world. According to the United Nations Conference on Trade and Development Report 2021, 7.3% of Indians owned cryptocurrency in 2021. Bitcoin, along with other cryptocurrencies, has been operating in the Indian market for a long while. It was in 2012 that small-scale Bitcoin transactions were reported in India for the first time. By 2013, Bitcoin started gaining popularity among the masses and a few businesses began to accept Bitcoin payments along with the Indian currency. With the arrival of BtexIndia, Unocoin, and Coinsecure, cryptocurrency exchanges began to spring up within the country. Later, a few more exchanges such as Zebpay, Koinex, and Bitcoin-India made up the list. The increase in demand for cryptocurrency in India attracted the attention of the Government and the Reserve Bank of India. The Reserve Bank of India can either regulate or prohibit anything that may pose a threat to or have an impact on the financial system of the country. The Reserve Bank has repeatedly, through its public notices of December 24, 2013, February 01, 2017 and December 05, 2017, cautioned users, holders and traders of cryptocurrency regarding the various risks associated with dealing in such virtual currencies. In April 2018, through a circular issued by the RBI, it was declared that all RBI-regulated bodies, such as banks, were prohibited from having any business relationship with entities dealing in cryptocurrency. Furthermore, those who already had any business connection with such entities were asked to end the relationship within three months. Later, the Supreme Court of India overturned the decision by the Reserve Bank of India prohibiting banks from dealing with cryptocurrency. The Court found that virtual currencies had not caused any visible damage to banks regulated by the RBI. The Government of India is yet to bring any legislation concerning the regulation of cryptocurrency in the Indian market. However, a draft bill for banning cryptocurrency has been in the works. According to the draft "Banning of Cryptocurrency and Regulation of Official Digital Currency Bill 2019", if any person holds, sells, transfers, disposes of, issues, or deals in cryptocurrencies, he shall be liable to imprisonment which may extend to 10 years. This bill further makes holding any cryptocurrency a non-bailable offense. Tax on cryptocurrency The Indian Income Tax Act has always sought to tax income received, and hence the levy of taxes on cryptocurrency cannot be ruled out. A taxation mechanism for cryptocurrency and crypto assets was introduced by the Finance Act 2022. Section 2(47A) has been introduced in the Income-tax Act, 1961 to define Virtual Digital Assets. VDA means: a. any information or code or number or token (not being Indian currency or foreign currency), generated through cryptographic means or otherwise, by whatever name called, providing a digital representation of value exchanged with or without consideration, with the promise or representation of having inherent value, or functions as a store of value or a unit of account including its use in any financial transaction or investment, but not limited to investment scheme; and can be transferred, stored or traded electronically; b. a non-fungible token or any other token of similar nature, by whatever name called; c. 
any other digital asset, as the Central Government may, by notification in the Official Gazette, specify: Provided that the Central Government may, by notification in the Official Gazette, exclude any digital asset from the definition of virtual digital asset subject to such conditions as may be specified therein. Explanation: For the purposes of this clause, a. "non-fungible token" means such digital asset as the Central Government may, by notification in the Official Gazette, specify; b. the expressions "currency", "foreign currency" and "Indian currency" shall have the same meanings as respectively assigned to them in clauses (h), (m) and (q) of section 2 of the Foreign Exchange Management Act, 1999 (42 of 1999). Applicable tax rate on cryptocurrency As per the newly introduced Section 115BBH of the Act (applicable from FY 2022-23), income on the transfer of a VDA will attract a tax rate of 30%. The cost of acquisition shall be allowed as a deduction; however, no incidental expenses such as brokerage shall be allowed as a deduction. Set-off & carry forward of losses Loss on the sale of a VDA can be set off against the income from other VDAs. For example, a loss on the transfer of Dogecoin can be adjusted against the income from Bitcoin. Carry forward of loss is not allowed and therefore set-off of loss is possible only in the current year itself. If the total income of an assessee comprises income solely from VDAs, then the benefit of the basic exemption limit of Rs. 2,50,000/3,00,000/5,00,000 is not allowed. Head under which income from cryptocurrency is taxable The income generated from investment in cryptocurrency is subject to tax under any of the following three heads: • Income from Capital Gains: In view of Section 2(14) of the Income-tax Act 1961, cryptocurrencies could be deemed to be capital assets if purchased for investment by taxpayers. Therefore, any gain arising on the transfer of a cryptocurrency shall be taxable as capital gains. Thus, if cryptocurrency is not traded frequently but held as an investment/asset, then the gains arising at the time of sale will be taxed under the head Income from Capital Gains based on the period of holding. a. If the period of holding is up to 1 year, Short Term Capital Gain (STCG) is calculated. b. If the period of holding is more than 3 years, Long Term Capital Gain (LTCG) is calculated. • Profits & Gains from Business & Profession: If cryptocurrency is traded frequently, then the gains from the sale will be taxed under the head Profits and Gains from Business and Profession. This also applies when cryptocurrency is used to buy goods or services, or is accepted as payment for goods or services. • Income from Other Sources: In the case of crypto mining, the value of the cryptocurrency at the time it was mined counts as income from other sources. Experts believe that currency generated through mining will indeed be considered under the head of income from other sources. 
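For illustration, the flat-rate computation described in the "Applicable tax rate on cryptocurrency" subsection above can be sketched in Python; the figures, the function name and the omission of surcharge and cess are illustrative assumptions, not part of the Act:

def tax_on_vda_gain(sale_consideration, cost_of_acquisition, rate=0.30):
    # Section 115BBH as described above: only the cost of acquisition is
    # deductible; incidental expenses such as brokerage are not. A loss
    # yields no tax here (its set-off treatment is discussed above).
    gain = sale_consideration - cost_of_acquisition
    return rate * gain if gain > 0 else 0.0

# Example: a token bought for Rs. 1,00,000 and sold for Rs. 1,50,000
# attracts tax of 0.30 * 50,000 = Rs. 15,000.
print(tax_on_vda_gain(150_000, 100_000))  # 15000.0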
Conclusion Cryptocurrency is a social, cultural and technological advancement that goes far beyond financial innovation. The countries that accept the cryptocurrency network strengthen their economies in terms of innovation, investment, employment and taxes. The levy of tax on crypto income is a significant move by the Indian Government. Some crypto finance experts believe that the tax clarity will encourage more people to invest in the cryptocurrency market, which would accelerate the growth of the hitherto unsupervised cryptocurrency industry. Also, a well-regulated crypto ecosystem will foster an environment that is conducive to innovation. On the other hand, another set of experts believes that the levy of a flat 30 percent tax on income from crypto transactions and TDS of 1% on each crypto transfer will significantly reduce the daily turnover of crypto transactions on Indian exchanges. Many users will now shift to international exchanges in the hope of escaping this heavy TDS on trading transactions. Many start-ups will also either move out of India or consider crypto-friendly countries in which to operate. In the end, we can say that although the rapid expansion of financial technology has resulted in tremendous growth of FinTech products, there is still a gap in the regulation of the virtual currency business in India. The Indian Government should consider setting up a regulatory body to regulate the cryptocurrency market so as to provide a safe and protected platform for investors and to curb illegal activities, while increasing revenue for the Government. In the absence of a regulatory framework, investors are exposed to avoidable frauds and there is also uncertainty for new ventures trying to enter this market. It also poses the threat of unconstrained movement of money in the economy, giving a boost to illegal activities such as money laundering and the funding of terrorism. Note on TDS: the liability for deduction lies with the buyer, being a resident person. A TAN is not required by the buyer of a VDA as the provisions of section 203A are not applicable. The rate of TDS is 1%, except in the following cases: the transaction value does not exceed Rs. 50,000 during the FY; or, for any person not covered in the above two categories, the transaction value does not exceed Rs. 10,000 during the FY. References: https://taxguru.in/income-tax/taxation-crypto-assets-nutshell.html; Cryptocurrency Taxation Exclusive Guide for 7 Countries (regtechtimes.com); Bitcoin gains currency in India - The Hindu. The interpretation of powers vested in the RBI was given by the Apex Court in the case of Internet and Mobile Association of India v. RBI (2020 SCC Online SC 275).
v3-fos-license
2019-12-11T14:01:54.039Z
2019-12-09T00:00:00.000
209165232
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://www.bmj.com/content/bmj/367/bmj.l6322.full.pdf", "pdf_hash": "5859afc3c05d46106f61a79f677e9518bfb11d77", "pdf_src": "Highwire", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:495", "s2fieldsofstudy": [ "Medicine" ], "sha1": "2c989fd4a2c2965b3c36f8505ca321b0c673e23e", "year": 2019 }
pes2o/s2orc
Political events and mood among young physicians: a prospective cohort study. OBJECTIVE To study the effects of recent political events on mood among young physicians. DESIGN Prospective cohort study. SETTING United States medical centres. PARTICIPANTS 2345 medical interns provided longitudinal mood data as part of the Intern Health Study between 2016 and 2018. MAIN OUTCOME MEASURES Mean mood score during the week following influential political and non-political events as compared with mean mood during the preceding four week control period. RESULTS We identified nine political events and eight non-political events for analysis. With the start of internship duties in July, the mean decline in mood for interns was -0.30 (95% confidence interval -0.33 to -0.27, t=-17.45, P<0.001). The decline in mood was of similar magnitude following the 2016 presidential election (mean mood change -0.32, 95% confidence interval -0.45 to -0.19, t=-4.73, P<0.001) and subsequent inauguration (mean mood change -0.25, 95% confidence interval -0.37 to -0.12, t=-3.93, P<0.001). Further, compared with men, women reported greater mood declines after both the 2016 election (mean gender difference 0.31, 95% confidence interval 0.05 to 0.58, t=2.33, P=0.02) and the inauguration (mean gender difference 0.25, 95% confidence interval 0.01 to 0.49, t=2.05, P=0.04). Overall, there were statistically significant changes in mood following 66.7% (6/9) of political events assessed. In contrast, none of the non-political events included in the analysis were statistically significantly associated with a change in mood. CONCLUSIONS Macro level factors such as politics may be correlated with the mood of young doctors. This finding signals the need for further evaluation of the consequences of increasing entanglement between politics and medicine moving forward for young physicians and their patients. Introduction Over the past decade, growing and much needed attention has been paid to high rates of depression experienced by training physicians. Several systemic factors, including heavy workloads, medical errors, and sleep deprivation have been implicated as factors compromising the wellbeing of young doctors. 1-3 Less studied is the impact of exogenous factors such as dramatic societal events-including politics-on the mental health of training physicians. On one hand, the busy day-to-day life of training physicians may make them impervious to such factors. Alternatively, high baseline levels of stress at work may lead to less resilience and large swings in emotions during turbulent events. In the current era, the 2016 US presidential election stands out as a singular political event. Although doctors have traditionally sought to keep politics and medicine separate, changing demographics in medicine and growing debate around issues such as healthcare reform and women's reproductive health have made intersections between medicine and politics increasingly unavoidable. [4][5][6][7] Beliefs about politicised health issues can influence physicians' treatment decisions, and increasing levels of political engagement among physicians may have both personal and public health consequences. 8 Further investigation of the extent to which the current generation of young physicians may be affected by politics could be useful to better understand implications for physician wellbeing and patient care. 
Using long term data on mood from the Intern Health Study, we sought to examine the effect of political events in the contemporary era on young physicians. 9 We used Google Trends, a tool increasingly employed in health research for gauging population behaviour, to identify periods of peak national awareness of key societal events related to politics. 10 In the wake of the 2016 presidential election, we hypothesised that interns would experience a greater change in mood following political events compared with other major events that were non-political. Participants The Intern Health Study is a prospective cohort study assessing stress and depression during the first year of residency training in the US. 1 In total, 615, 537, and 2129 incoming interns were enrolled in the daily mood arm of the study during the 2016-17, 2017-18, and 2018-19 academic years, respectively, of which 2345 were included in the current analysis. Participants represented 12 specialties at more than 300 residency institutions across the US (Northeast: 25 Data collection To understand the effects of politics on the mental health of young physicians, we assessed how the most salient societal events that occurred during our study period changed the daily mood of interns. We stratified these by political and non-political events. Before the start of the internship, subjects completed an initial survey where they provided demographic information, including gender. Throughout the intern year, subjects responded daily to the following validated onequestion measure of mood valence via the Intern Health iPhone app: "On a scale of 1-10 how was your mood today?" 11 12 Subjects were prompted through an app notification to submit a mood score daily at 8 pm. We identified political and non-political events that had the greatest impacts since the 2016 presidential election based on a History Channel summary of notable 2017 and 2018 events. 13 14 Events categorised as "Politics" were selected as the political events in our analysis. However, for the purposes of this study we included only domestic events in the United States. In addition to the 2016 presidential election, we identified eight political events for inclusion in the analysis (box 1). We considered all other events, categorised as either "Culture" or "Health, Science, and Environment," for inclusion as non-political events (box 2). A few events listed under "non-political" could be considered political in nature (eg, women's march on Washington, National Football League anthem protests); we excluded these after independent and consensus assessment by two of the authors, before analyses were performed. For each event, we queried Google Trends (accessed July 23, 2019) to determine the date of peak public interest (value of 100) within the US. We determined search terms by author consensus based on keywords used in the History Channel event summary, and in some cases we used multiple search terms (supplementary file, table 1). We followed the Checklist for Documentation of Google Trends. 10 statistical analysis We used paired t-tests to compare the mean mood for the week following an event (as defined by peak interest on Google Trends) with the mean mood during the four weeks preceding the event. For events associated with a statistically significant mood change, we first determined the percentage change in mood for men and women and then used a two-sample t-test to determine whether there was a statistically significant gender difference in mood change. 
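As an illustration of the event-window comparison just described, the following sketch assumes a tidy table of daily mood scores with columns intern_id, date and mood; these names, and the use of Python with pandas and SciPy rather than the SAS software actually used in the study, are assumptions for illustration only:

import pandas as pd
from scipy import stats

def event_mood_change(moods, event_date):
    # Mean mood in the week after the event versus the four preceding weeks,
    # paired within interns, as in the analysis described above.
    event = pd.Timestamp(event_date)
    moods = moods.assign(date=pd.to_datetime(moods["date"]))
    before = moods[(moods["date"] >= event - pd.Timedelta(days=28)) & (moods["date"] < event)]
    after = moods[(moods["date"] >= event) & (moods["date"] < event + pd.Timedelta(days=7))]
    # One pair of window means per intern; keep interns observed in both windows.
    paired = (before.groupby("intern_id")["mood"].mean().rename("before").to_frame()
              .join(after.groupby("intern_id")["mood"].mean().rename("after"), how="inner"))
    t, p = stats.ttest_rel(paired["after"], paired["before"])
    return (paired["after"] - paired["before"]).mean(), t, p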
We also conducted a sensitivity analysis where we modeled the change in mood score with the event while including the baseline mood score before the event as a covariate. In addition, to explore for geographic variability in our results, we performed a series of one-way analyses of variance to assess for mood change differences in response to events between the four US census regions. Finally, to globally assess whether there was a systematic difference between political and non-political events on their effects on mood, we ran a general linear regression with the absolute value of mood change score for each of the 17 events as the outcome with the political/non-political nature of each included as a covariate. All analyses were performed using SAS version 9.4. P values less than 0.05 were considered statistically significant. results In addition to the 2016 presidential election, we identified eight political events and eight non-political events to study (table 2). Of the enrolled interns, 71.5% (2345/3281) entered a daily mood score during at least one included event period and four weeks preceding that event and were included in the analysis (table 1 gives participant information). Responders were slightly older than non-responders (27.6 years versus 27.3 years; P=0.001) but the groups were not statistically significantly different with respect to gender or change in depression rates with internship. Overall, responding interns reported notable changes in mood following six of the nine political events. The largest decline in mood was observed after the 2016 presidential election (mean mood change −0.32, 95% confidence interval −0.45 to −0.19, t=−4.73, P<0.001), with statistically significant declines in mood also following the January 2017 inauguration (mean mood change −0.25, 95% confidence interval −0.37 to −0.12, t=-3.93, P=0.001), the ban on travel from Muslim majority countries (mean mood change −0.21, 95% confidence interval −0.34 to −0.07, t=−3.07, P=0.002), and Supreme Court confirmation hearings in September 2018 (mean mood change −0.06, 95% confidence interval −0.12 to −0.01, t=−2.35, P=0.02) (table 2). We identified statistically significant increases in mood following the signing of a US presidential executive order designed to keep migrant families together at the US Mexico border (mean mood change 0.16, 95% confidence interval 0.01 to 0.30, t=2.10, P=0.04) and the failure to pass a federal spending bill that included funding for a border wall (mean mood change 0.17, 95% confidence interval 0.11 to 0.23, t=5.28, P<0.001). As a reference and to place these changes in context, the change in mood score associated with the start of internship duties in July was −0.30 (95% confidence interval −0.33 to −0.27, t=−17.45, P<0.001) for our overall sample. Among those subjects who developed depression during internship, the change in mood score was −0.81 (95% confidence interval −0.88 to −0.75, t=−23.81, P<0.001). These findings suggest some of the changes reported above were comparable to declines in mood seen during the start of internship but less than the declines seen in those who developed depression. Not all political events were associated with statistically significant changes in mood score. 
No difference in mood was observed with the failure to repeal the Affordable Care Act in the US Senate (mean mood change −0.07, 95% confidence interval −0.15 to 0.01, t=−1.67, P=0.10), the deployment of troops to the Mexico border to meet a large migrant caravan (mean mood change −0.03, 95% confidence interval −0.09 to 0.03, t=−1.04, P=0.30), or the 2018 midterm elections (mean mood change −0.03, 95% confidence interval −0.08 to 0.03, t=−0.95, P=0.34). In contrast to the political events, none of the non-political events included in the analysis were statistically significantly associated with a change in mood. In a global analysis across all 17 events, we found that the absolute value of mood change after political events was statistically significantly greater than after non-political events (mean mood change difference 0.09, 95% confidence interval 0.16 to 0.005, F=5.09, P=0.04). In our sensitivity analyses, we confirmed the same six political events were statistically significantly associated with a change in mean mood score after the event when including the baseline mood before the event as a covariate. In contrast, there were no statistically significant time effects for the three remaining political events or any of the non-political events (findings not shown but available from authors). Some gender differences existed in our findings. Women experienced a greater decline in mood with the US presidential election compared with men (mean gender difference 0.31, 95% confidence interval 0.05 to 0.58, t=2.33, P=0.02), and a greater decline in mood in response to the inauguration (mean gender difference 0.25, 95% confidence interval 0.01 to 0.49, t=2.05, P=0.04). In contrast, men experienced a greater mood increase when the Senate failed to pass federal funding to build a border wall (mean gender difference 0.13, 95% confidence interval 0.01 to 0.26, t=2.08, P=0.04). Across the four geographic regions of the US, we noted no statistically significant difference in mood score change for 15 of 17 events. For the 2016 US presidential election (F=3.5, P=0.02) and January 2017 inauguration (F=3.2, P=0.03), the South region had smaller declines in mood compared to the Northeast, Midwest, and West. discussion Our findings describe the susceptibility of young US physicians' moods to major political events during arguably one of the hardest periods of their work lives: intern year. Although numerous factors related to their daily work schedule have been described extensively, the impact of exogenous factors such as those examined here has not been previously reported. We found that the decline in mood with the 2016 US presidential election was greater than the decline with the start of internship-a transition associated with a considerable increase in stress and a fivefold increase in depression. 1 15 This suggests that, even with the high demands and time constraints of internship, young US physicians were engaged with broader sociopolitical events. By comparison, we found that non-political events did not meaningfully affect mood in these young physicians in aggregate. The directionality of these findings is consistent with evidence that young voters and voters with postgraduate education tend to identify as liberal leaning, and supports previous work showing a strong left shift in political affiliation among physicians over the past 25 years. 
6 16 With Republican campaign pledges to repeal the Affordable Care Act and restrict women's access to reproductive health services domestically and abroad, these young physicians may have been especially concerned about the healthcare consequences of a Republican presidency. We also found that women were particularly affected by the election results. Following the presidential election and subsequent inauguration, women experienced mood declines that were more than double that of their male counterparts. This finding suggests that the political discourse surrounding issues of gender and sexism throughout the presidential campaign may have disproportionately affected women. The gender difference may have also reflected a greater disappointment among women interns that the US did not elect its first female president. Female interns in our sample may have thus experienced the election outcome on both political and personal levels. Political events continued to correlate with interns' mood after the January 2017 presidential inauguration. Events with outcomes that aligned with conservative political ideologies, such as the Muslim travel ban and Brett Kavanaugh's confirmation to the US Supreme Court, were associated with a mood decrease. In contrast, events with outcomes in line with liberal political ideologies were followed by a mood increase, including the signing of a US presidential executive order to keep migrant families together at the US-Mexico border and following the Senate's failure to pass funding for a border wall. These findings further support existing evidence that young physicians may increasingly identify as liberal, particularly around factors such as gender, ethnicity, and nationality. 6 For most political and non-political events, there was no statistically significant difference in mood change across the four primary US geographic regions. The exceptions to this trend were the presidential election and inauguration, with interns in the South experiencing smaller declines than interns in other regions. A higher proportion of the general population voted Republican in the 2016 presidential election in the South (51.8%) than in the Northeast (40.5%), Midwest (49.2%), or West (38.0%), suggesting that geographic differences in response to the highly partisan election and inauguration events among interns may reflect regional variation in political affiliation. 17 implications What possible mechanisms could underlie our overall findings? In a time defined by the 24 hour news cycle and instantaneous social media updates, exposure to political news is not only unavoidable, but constant. Acute media exposure to severe violence or disasters, such as the September 11 attacks on the World Trade Center, has been shown to negatively affect mental and physical health and even result in symptoms akin to post-traumatic stress disorder (PTSD). 18 Our findings suggest that, in recent years, repeated long term exposure to emotionally arousing news can also have psychological implications. While not as severe as PTSD, these emotional ups and downs may still add to the mental burden of young US physicians, who are already under high levels of stress and at increased risk for mental health issues. [1][2][3] Previous research indicates that events like national elections can be experienced as stressful life events with psychological and biological consequences. 
[19][20][21] The 2016 US presidential election has been linked to increases in psychological distress, and short term mood changes following the election associated with more sustained physiological stress responses among young adults. 22 23 Studies have also shown an increase in psychological concerns and preterm births among Latina women following the 2016 election. 24 25 Along with our data, this suggests that large scale political events can influence factors relevant to mental and physical health, particularly for those with specific concerns about how the events may affect their lives. For physicians, however, this may extend beyond the personal implications. Shifts in mood among specific groups following sociopolitical events could also have professional consequences as physicians regularly interact with diverse populations. 26 With our finding that political events are associated with changes in mood among young physicians in the US, future studies should examine whether similar dynamics are playing out for young physicians in other countries. In the UK, for example, there may also be emotional consequences for physicians increasingly concerned about the ramifications of Brexit for themselves and their patients. 27 Leading medical organisations have emphasised the need for separation between medicine and politics throughout much of the 20th century. 28 29 Data from the present study suggests that maintaining this separation may be a challenge for the current generation of young physicians who appear to experience mood variations with major sociopolitical events. As residency is a period already characterised by high stress and risk for depression, emotional instability surrounding politics could have personal health implications. At the same time, as physicians' treatment decisions can be influenced by feelings about politics, this could also lead to consequences for patient care. limitations Our study has several limitations. Because our sample consisted of first year intern physicians, results may not be generalisable to all doctors or to other young, politically liberal populations. While we focused on the objectively most salient political and non-political events during the study period, other individual or societal level events affecting mood may have occurred during our study periods and confounded our results. Further, we assessed the effects of events on mood, rather than psychiatric diagnoses, such as major depressive disorder. In addition, because of limited power, we did not examine demographic differences beyond gender. Future investigation of the role of other characteristics, including race, ethnicity, national origin, immigration status, sexual orientation, religion, and political affiliation would be beneficial. Finally, we focused on the US only. Similar dramatic societal events have occurred in other countries, and it is unclear how such exogenous factors affected their physician workforces. conclusion In this investigation of the contemporary effects of political events on the emotional state of young physicians using long term mood data from the Intern Health Study, we observed a statistically significant reduction in mood for the 2016 presidential election and most political events that followed. These findings suggest that in the current era, macro-level factors such as politics may affect the mood of young doctors, with some events leading to declines in mood that matched the drop in mood seen with the start of internship. 
These findings signal that politics and medicine may interact in strong ways in the current era of medicine and that we should carefully consider their implications for young physicians and their patients. Patient and public involvement No patients were involved in setting the research question or the outcome measures for this study, nor were they involved in developing plans for recruitment, design, or implementation. No patients were asked to advise on interpretation or writing up of results. The results will be disseminated to participants through electronic newsletter, the study website, press release, and social media. We thank the physicians who took part in the study. We also thank Dr John Ayanian for his valuable insights. Contributors: This study was designed by EF and SS. EF managed data collection. ZZ conducted the statistical analyses. EF, BN, and SS were responsible for interpreting the data. EF and SS wrote the initial manuscript draft, and BN provided critical revisions. SS obtained funding. All authors approved the final manuscript and had full access to the data. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted. EF and SS are the guarantors. Funding: Data collection for this study was funded by the National Institute of Mental Health (R01MH101459) and the American Foundation for Suicide Prevention (LSRG-0-059-16). The sponsors had no involvement in the study design, collection, analysis, or interpretation of data or writing of the manuscript. The content of this study is solely the responsibility of the authors and does not necessarily represent the official views of the sponsors. Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf and declare: no support from any organisation for the submitted work; BN reports grants from the American Heart Association, Apple, Inc., and Toyota, compensation as Editor-in-Chief of Circulation: Cardiovascular Quality and Outcomes, a journal of the American Heart Association, and possession of ownership shares of AngioInsight, Inc. outside the submitted work. The authors report no other relationships or activities that could appear to have influenced the submitted work. Ethical approval: This study was approved by the University of Michigan Medical School Institutional Review Board (HUM00033029). Data sharing: Dataset available from the corresponding author at [email protected]. Transparency: The manuscript's guarantors (EF, SS) affirm that this manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study have been explained. This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. See: http://creativecommons.org/licenses/by/4.0/.
v3-fos-license
2019-08-17T11:53:33.178Z
2020-02-08T00:00:00.000
241103408
{ "extfieldsofstudy": [ "Chemistry" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://doi.org/10.32388/3e07zy", "pdf_hash": "da5a5c95ec25bf0645aee66a40697d062d61931f", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:497", "s2fieldsofstudy": [ "Medicine" ], "sha1": "8167067b62626a3cef635007055f772cf1e2f58c", "year": 2020 }
pes2o/s2orc
Carnitine palmitoyltransferase I deficiency Signs and symptoms of CPT I deficiency often appear during early childhood. Affected individuals usually have low blood glucose (hypoglycemia) and a low level of ketones, which are produced during the breakdown of fats and used for energy. Together these signs are called hypoketotic hypoglycemia. People with CPT I deficiency can also have an enlarged liver (hepatomegaly), liver malfunction, and elevated levels of carnitine in the blood. Carnitine, a natural substance acquired mostly through the diet, is used by cells to process fats and produce energy. Individuals with CPT I deficiency are at risk for nervous system damage, liver failure, seizures, coma, and sudden death. Problems related to CPT I deficiency can be triggered by periods of fasting or by illnesses such as viral infections. This disorder is sometimes mistaken for Reye syndrome, a severe disorder that may develop in children while they appear to be recovering from viral infections such as chicken pox or flu. Most cases of Reye syndrome are associated with the use of aspirin during these viral infections. Frequency CPT I deficiency is a rare disorder; fewer than 50 affected individuals have been identified. This disorder may be more common in the Hutterite and Inuit populations. Causes Mutations in the CPT1A gene cause CPT I deficiency. This gene provides instructions for making an enzyme called carnitine palmitoyltransferase 1A, which is found in the liver. Carnitine palmitoyltransferase 1A is essential for fatty acid oxidation, which is the multistep process that breaks down (metabolizes) fats and converts them into energy. Fatty acid oxidation takes place within mitochondria, which are the energy-producing centers in cells. A group of fats called long-chain fatty acids cannot enter mitochondria unless they are attached to carnitine. Carnitine palmitoyltransferase 1A connects carnitine to long-chain fatty acids so they can enter mitochondria and be used to produce energy. During periods of fasting, long-chain fatty acids are an important energy source for the liver and other tissues. Mutations in the CPT1A gene severely reduce or eliminate the activity of carnitine palmitoyltransferase 1A. Without enough of this enzyme, carnitine is not attached to long-chain fatty acids. As a result, these fatty acids cannot enter mitochondria and be converted into energy. Reduced energy production can lead to some of the features of CPT I deficiency, such as hypoketotic hypoglycemia. Fatty acids may also build up in cells and damage the liver, heart, and brain. This abnormal buildup causes the other signs and symptoms of the disorder. Inheritance This condition is inherited in an autosomal recessive pattern, which means both copies of the gene in each cell have mutations. The parents of an individual with an autosomal recessive condition each carry one copy of the mutated gene, but they typically do not show signs and symptoms of the condition.
v3-fos-license
2020-10-30T05:08:09.532Z
2020-12-04T00:00:00.000
227329761
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1097/md.0000000000023352", "pdf_hash": "4f2748d29222122bcbe4617b3c850c7be360fca8", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:498", "s2fieldsofstudy": [ "Medicine" ], "sha1": "b314848d8412a7cd2211328795a9499557952eb8", "year": 2020 }
pes2o/s2orc
Acupuncture for opioid-induced constipation Abstract Background: Opioid-induced constipation (OIC) is one of the most common complications of analgesic therapy for cancer pain patients who suffer moderate to severe pain. Acupuncture, as an effective treatment for constipation, has been widely applied, but its efficacy has not been assessed systematically. Thus, the purpose of this study is to provide a protocol to explore the efficacy and safety of acupuncture for OIC. Methods: Randomized Controlled Trials (RCTs) of acupuncture treatment for OIC in 4 Chinese electronic databases (China National Knowledge Infrastructure, Chinese Biological and Medical Database, China Scientific Journal Database, Wan-Fang Data) and 3 English electronic databases (PubMed, Embase, Cochrane Library) will be searched from their inception to September 30, 2020. RevMan 5.3 software and Stata 14.0 software will be used for meta-analysis, and EndNote X9.2 and the Cochrane Risk of Bias Tool will be used for literature screening and quality assessment. Results: This study will present an assessment of the efficacy and safety of acupuncture treatment for OIC patients by summarizing high-quality clinical evidence. Conclusion: The conclusion of our systematic review and meta-analysis may provide evidence of whether acupuncture treatment is beneficial to patients with OIC. INPLASY registration number: INPLASY2020100026. Introduction Opioids are powerful analgesics used for the treatment of acute and chronic pain. [1] WHO also proposed a three-step analgesic ladder for alleviating moderate-to-severe cancer pain by using opioids. [2] Though the WHO three-step ladder has been recognized and used widely around the world for many years, side effects are widespread, and among the most troublesome are those linked to opioid-induced bowel dysfunction, which particularly includes opioid-induced constipation (OIC). [3,4] OIC has a negative influence on work productivity and quality of life and increases national health expenditures. [5] The Rome IV standard defines OIC as new or worsened symptoms of constipation that appear at the initiation, change, or increase of opioid therapy, with further clinical features such as a feeling of incomplete emptying and fewer than 3 spontaneous bowel movements per week. [6] OIC is the most common and bothersome problem for patients on chronic opioid therapy, affecting 60% to 90% of cancer patients taking opioids. [7,8] It has been reported that there were about 215 million prescriptions for opioids in the United States in 2019. [9] OIC occurs primarily through μ-opioid receptor activation in the gut, which reduces rectal sensation, decreases peristalsis and increases colonic fluid absorption. This results in harder stools. [10] The National Comprehensive Cancer Network (NCCN) guidelines state that the prevention and treatment of adverse reactions are an important part of the analgesic therapy plan. Once opioids are used, prescription laxatives should be used to treat OIC. [11] However, laxatives do not target the underlying cause, opioid binding to the μ-receptors in the enteric system, and as such are not very effective at managing OIC. [12,13] Accordingly, it is essential to find an alternative treatment. Acupuncture is highly valued in traditional Chinese medicine, has a history of more than 2500 years, and may be a useful non-drug therapy option for OIC. 
[14][15][16] However, the value of acupuncture as an adjunctive therapy remains in doubt in mainstream oncology. Therefore, it is necessary to evaluate the efficacy and safety of acupuncture in treating OIC through a systematic review and meta-analysis, with the intention of offering a reliable basis for clinical practice. Design and registration of the review This study has been registered on INPLASY with the registration number INPLASY2020100026, and the protocol follows the Cochrane Handbook for Systematic Reviews and the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocols (PRISMA-P) statement guidelines. 2.2. Inclusion criteria 2.2.1. Type of studies. All randomized controlled trials (RCTs) of acupuncture therapy for OIC will be included in the study, while animal experiments, cluster RCTs, reviews, and case reports will be excluded. 2.2.2. Types of participants. The study will include patients who were clinically diagnosed with OIC. There is no restriction on age, gender, or nationality. The diagnostic criteria are based on the Rome III criteria. [17] 2.2.3. Types of intervention. The patients in the intervention group receive acupuncture and related treatments, regardless of needle material, acupoint selection, duration of treatment, or acupuncture manipulation, while the patients in the control group are treated with drugs, placebo, sham acupuncture, or other conventional therapy. 2.3. Types of outcome measures 2.3.1. Primary outcomes. The primary efficacy outcome measures will be as follows: changes in the Bowel Function Index (BFI) score or Cleveland Constipation Score (CCS). 2.3.2. Secondary outcomes. The secondary outcome measures will include the Patient Assessment of Constipation Quality of Life (PAC-QOL) questionnaire and adverse effects linked to the interventions. Exclusion criteria The following will be excluded: duplicate literature; incomplete data; inappropriate design. Search strategy We will search the PubMed, Embase, Cochrane Library, CNKI, WF, VIP, and CBM databases from their inception to September 2020, with the language restricted to Chinese or English. The details of the search strategy for PubMed are shown in Table 1. 2.6. Data collection and analysis 2.6.1. Study selection. Two of the researchers (PY and YCX) will independently screen the records by reading all titles and abstracts. Inconsistent screening results will be settled through discussion between the above 2 authors. If their discussion still cannot reach agreement, another author (JRH) will make the final decision on eligible study selection. We will adopt EndNote X9.2 software to conduct a preliminary elimination of duplicate literature; then, according to the inclusion and exclusion criteria, a preliminary screening will be performed by reading the titles, abstracts, and keywords of the literature. Finally, we will review the full text to determine the eligible literature based on details in the articles. The selection procedure of studies is summarized in the following PRISMA flow diagram (Fig. 1). 2.6.2. Data extraction. All information will be extracted by 2 independent authors (QLM and RHM) according to a predetermined criteria form. Disagreement will be resolved by consulting a third author (YCW). The extracted data will include the following: first author, publication date, country, sample size, gender, mean age, details of interventions, treatment courses, follow-up, outcomes, and adverse events. 
If the information in the papers is unclear, we will contact the author by email. 2.6.3. Risk of bias assessment. The risk of bias of the included RCTs will be evaluated using the risk of bias assessment tool of the Cochrane Handbook, version 5.1.0, which includes 7 items, as follows: random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, selective reporting, and other bias. This evaluation will be conducted by 2 independent reviewers (JM and PY), and each item will be judged as low, unclear, or high risk of bias. 2.6.4. Data synthesis and analysis. RevMan 5.3 software will be used for data synthesis and analysis. When the outcome data are binary, the relative risk (RR) will be selected as the effect measure; when the outcome data are continuous, the mean difference (MD) or standardized mean difference (SMD) will be used, both reported with 95% confidence intervals (CI). 2.6.5. Assessment of heterogeneity. Heterogeneity will be tested with the χ2 test, and the I2 statistic will be used to quantify it. If P > .1 and I2 < 50%, the fixed-effects model will be used. If P ≤ .1 and I2 ≥ 50%, the random-effects model will be used. 2.6.6. Analysis of subgroups. If significant heterogeneity is detected between a group of studies, subgroup analysis will be performed based on acupuncture type, country, treatment course, and control group intervention. 2.6.7. Sensitivity analysis. If the heterogeneity is significant, we will conduct a sensitivity analysis by eliminating each included study one by one and by changing the effect measure, in order to evaluate the robustness and quality of the conclusions. 2.6.8. Assessment of reporting biases. If more than 10 studies are included, we will first draw a funnel plot in RevMan 5.3 to assess publication bias; if the funnel plot is asymmetric, Egger's and Begg's tests will then be carried out in Stata 14.0 to explore potential publication bias. 2.6.9. Ethics and dissemination. This meta-analysis and systematic review protocol does not require ethical approval because it does not contain individual patient data. We will publish this study in peer-reviewed journals and conference presentations to provide evidence of the efficacy and safety of acupuncture treatment for OIC. Discussion Constipation is the most common and poorly tolerated long-term adverse reaction to opioids; it seriously affects patient quality of life and cannot currently be treated effectively. [18] OIC is caused by the action of opioids on receptors in the gastrointestinal tract, a mechanism that differs from that of idiopathic constipation. A study has shown that lifestyle changes and over-the-counter drugs are first-line treatments. [19] In practice, however, these measures are not always satisfactory, and effective alternative therapies are lacking in some cases. In recent years, animal studies have shown that acupuncture can improve gastrointestinal motility and the expression of 5-HT by modulating nerve stimulation. [20][21][22] However, this efficacy has not been recognized in clinical guidelines or by medical organizations, and there is as yet no systematic review of acupuncture for OIC investigating its clinical efficacy and safety. 
We therefore conduct this study to provide an evidence-based foundation and to help clinicians make decisions in practice. To the best of our knowledge, this will be the first systematic review and meta-analysis of acupuncture treatment for OIC, and it will assess whether acupuncture achieves better clinical outcomes than non-acupuncture therapy. On the other hand, this study has some limitations, including the quality of the included literature, the inconsistency of acupuncture types, the methodology of the studies, and the language restriction, which may lead to high heterogeneity.
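For reference, the heterogeneity assessment and model-selection rule set out in the data synthesis section above can be sketched as follows; the study effects and standard errors are made-up inputs, and the inverse-variance pooling shown here is only an illustration of how Cochran's Q and I2 relate to the fixed/random-effects decision, not the RevMan/Stata workflow itself:

import numpy as np
from scipy import stats

def heterogeneity(effects, standard_errors):
    # Inverse-variance fixed-effect pool, Cochran's Q, its p-value, and I^2.
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(standard_errors, dtype=float) ** 2
    pooled = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - pooled) ** 2)
    dof = len(effects) - 1
    p = stats.chi2.sf(q, dof)
    i2 = 100.0 * max(0.0, (q - dof) / q) if q > 0 else 0.0
    return pooled, q, p, i2

# Protocol rule: fixed-effects model if P > .1 and I^2 < 50%, otherwise random-effects.
pooled, q, p, i2 = heterogeneity([0.4, 0.6, 0.3], [0.15, 0.20, 0.10])
model = "fixed" if (p > 0.1 and i2 < 50) else "random"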
v3-fos-license
2020-06-04T09:06:11.407Z
2020-08-17T00:00:00.000
219882171
{ "extfieldsofstudy": [ "Materials Science" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2020/me/d0me00036a", "pdf_hash": "dc5ae3d687f1e34feeb1d9d8da4d6e404951d1f6", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:500", "s2fieldsofstudy": [ "Materials Science", "Chemistry" ], "sha1": "9d61f65d6f5098c46bf8ed02a7aa5c1d476feb30", "year": 2020 }
pes2o/s2orc
Tailoring morphology of hierarchical catalysts for tuning pore diffusion behaviour: a rational guideline exploiting bench-top pulsed-field gradient (PFG) nuclear magnetic resonance (NMR) The aim of this work is to develop and quantify the tuning of transport properties in porous catalytic materials by tailoring their textural properties. Introduction The development of catalytic materials with suitable textural properties is essential to achieving high activity, 1 selectivity 2 and reusability 3 in catalytic processes. One example of where control of the textural properties of the catalyst is required can be seen in hydrotreating desulfurization processes, that is, the removal of sulphur from natural gas or refined petroleum products. 4 For example, in upstream hydrotreating of diesel, when alumina is used as a catalyst, a high surface area and relatively small pores (ca. 4 nm) are desired for high activity. 5 For analogous downstream processes, such as heavy vacuum gas oil or resid hydrotreating, larger pores (ca. 7-13 nm) 6,7 are required to allow diffusion and access of larger molecules to the catalyst active sites. Conversely, certain highly selective processes such as the partial epoxidation of ethylene to ethylene oxide 8,9 and the selective oxidation or selective hydrogenation of acetylene 10 require catalysts with large pores and low surface areas. This acts to limit the residence time of substrate on the catalyst, thereby boosting selectivity to the desired products by limiting further reactions on the catalyst surface. 11,12 The importance of controlling the textural properties of a catalyst can also be seen in the design of hierarchical zeolites. In such materials, an additional pore system is introduced to the zeolite crystal, aiming at alleviating mass transport limitations imposed by the zeolite micropores. 13 Many studies have investigated the tuning of textural properties of both catalysts and catalyst supports, including carbon materials, [14][15][16] zeolites, 17,18 silicas, 19,20 titanias 21 and aluminas, [22][23][24] which are all commonly used as supports for catalytic materials. Mass transport and textural properties are widely known to be highly inter-related; in particular, tailoring pore structures in order to modify diffusion properties is an aspect of high relevance in the field of heterogeneous catalysis. For example, the importance of such a relationship has been reported to play a role in zeolite catalysis during acetalization reaction. In this process, the observed increase in catalytic activity of hierarchically structured Y zeolites with introduced mesoporosity was attributed to the enhanced diffusion of guest molecules within the pore matrix relative to the purely microporous parent Y zeolite. 25 Such a conclusion was solely based on observations on catalytic conversion measurements as no attempt to quantify diffusion rate inside the pore matrix was made. It is indeed the case that whilst the relationship between textural properties and mass transport is often discussed on qualitative basis, relatively little work has been carried out to systematically investigate the effect of tailoring textural properties of porous catalytic particles on intra-particle diffusion. Previous work has detailed the relationship between pore structure and the overall diffusivity of molecules throughout the respective porous space and how this can then, in turn, affect the catalytic activity of reactions using such catalytic materials. 
[26][27][28] Therefore, it is clear that a rational overview aimed towards the design of materials with tailored textural properties, and therefore molecular diffusivity through their porous network, is desirable. A powerful tool to probe diffusion inside porous materials is the pulsed-field gradient (PFG) NMR technique. [29][30][31][32][33][34][35] Amongst notable work done in the area of zeolites, Kortunov et al. 36 have investigated the effect of introducing mesopores in microporous zeolites, showing that if the introduced pores form as isolated cavities, little or no increase in diffusion coefficient is observed. Conversely, the formation of an interconnected pore network is expected to lead to significant changes in diffusion coefficients. Indeed, in a recent work on hierarchical macroporous-mesoporous silica (SBA-15) sulfonic acid catalysts with interconnected macropores of tuneable diameter, it was shown that pore size and connectivity are not mutually exclusive and that enhanced mass transport can be achieved through tailoring the macropore size to the reactant size. 37 Whilst this previous work suggests that the introduction of macropores to mesoporous structures enhances mass transport by diffusion, a systematic study looking at the effect of catalyst manufacturing procedures for tailoring textural properties, by introducing macropores, and tuning diffusion properties, has not yet been reported. However, a comprehensive analysis of this would lead to a more rational and guided design of pore structures with certain desirable transport properties. In this work, we carried out a comprehensive and systematic study on how tailoring textural properties of industrial catalytic materials, through various preparation procedures, affects mass transport by diffusion. In particular, we assess the effect of operating conditions in the preparation of hierarchical alumina carriers on the final textural properties of the materials. Low-field, bench-top PFG NMR experiments with different guest probe molecules are then used to quantify how the final textural properties of the materials affect mass transport by diffusion. The advantage of using bench-top NMR instruments is significant as such instruments are much more affordable, compact and easier to operate compared to more traditional high-field instruments. This broadens significantly the application of the methodology both in academia and industry, including catalyst research and development. Materials and chemicals Methanol and ethanol were supplied by Alfa Aesar; n-octane was supplied by Merck. All chemicals were used as received. Deionised water was obtained from a laboratory water purification system. Carrier preparation The alumina carriers were supplied by Haldor Topsøe. For their preparation, pseudoboehmite is peptized with nitric acid in water to form a uniform paste that is later extruded and calcined. Upon calcination pseudoboehmite undergoes thermal transformation according to the following chain of reactions: AlOOH (pseudoboehmite) → γ-Al 2 O 3 → δ-Al 2 O 3 → θ-Al 2 O 3 → α-Al 2 O 3 . The transformation to γ-Al 2 O 3 occurs around 450°C and is followed by further phase changes to the δ-Al 2 O 3 form at ca. 900°C and θ-Al 2 O 3 at ca. 1000°C. Until this point the transformation is isomorphous, i.e., the crystal size and textural properties are affected to a relatively small extent. At ca. 1200°C a further transformation to α-Al 2 O 3 occurs, which is accompanied by a rapid sintering and decrease in surface area and porosity. 
A total of 8 different samples were investigated, denoted Al 2 O 3 (1)-Al 2 O 3 (8). Carriers were prepared by mixing boehmite powder with water in the presence of nitric acid to obtain 600 g of uniform paste. The amount of nitric acid was set to 10 mmol per 1 mol of alumina on a calcined basis for all samples. The amount of water, expressed as the water-to-boehmite ratio (g/g), varied between samples and was set to 1 for Al 2 O 3 (…) (Table 1). Subsequently, the resulting paste was extruded, dried and calcined at various temperatures and conditions (Table 1). The structural properties of all carriers were studied by Haldor Topsøe using mercury (Hg)-intrusion porosimetry. Samples were dried at 250°C prior to analysis. Hg-intrusion measurements were performed on an Autopore IV instrument from Micromeritics. X-ray diffraction (XRD) measurements The materials were analyzed by X-ray diffraction using a Panalytical XPert Pro instrument system in Bragg-Brentano geometry working in reflectance mode using CuK α radiation (λ = 1.541 Å). The instrument is equipped with a monochromator, Soller, divergence and anti-scatter slits, with a scan range of 5-70 degrees. Rietveld analysis was carried out using the Topas software. PFG NMR diffusion measurements The samples for PFG NMR measurements were prepared by soaking the porous solid under investigation in the liquid of choice for over 24 h prior to the measurements in order to ensure full saturation of the intra-particle pore space; different guest molecules (n-octane, water, methanol or ethanol) were used for the study. The liquid-saturated solid samples were then dried on a pre-soaked filter paper to remove any excess liquid from the external surface and transferred to 5 mm NMR tubes. To ensure a saturated atmosphere in the NMR tube, hence minimising errors due to evaporation of volatile liquids, a small amount of the respective pure liquid was absorbed onto filter paper, which was then placed under the cap of the NMR tube. The tube sample was finally placed into the magnet and left for approximately 15 min before starting the measurements, in order to achieve thermal equilibrium. NMR experiments were performed in a Magritek SpinSolve benchtop NMR spectrometer operating at a 1 H frequency of 43 MHz. The PFG NMR experiments were carried out using a diffusion probe capable of producing magnetic field gradient pulses up to 163 mT m −1 . Diffusion measurements were performed using the pulsed-field gradient stimulated echo (PGSTE) sequence. 38 The sequence is made by combining a series of radiofrequency (RF) pulses with magnetic field gradients (g), according to Fig. 1. The NMR signal attenuation of a PFG NMR experiment as a function of the gradient strength, E(g), is related to the experimental variables and the diffusion coefficient (D) by: 39 E(g)/E 0 = exp[−γ H 2 g 2 δ 2 (Δ − δ/3)D] (1) where E 0 is the NMR signal in the absence of gradient, γ H is the gyromagnetic ratio of the nuclei being studied (i.e. 1 H in this case), g is the strength of the gradient pulse of duration δ, and Δ is the observation time (i.e., the time interval between the leading edges of the gradient pulses). The term b = γ H 2 g 2 δ 2 (Δ − δ/3) is often referred to as the b-factor. Eqn (1) assumes a Gaussian distribution of the diffusing spins and it generally applies to free diffusion, such as the case of bulk liquids.
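As an illustration of how a diffusion coefficient is obtained from such measurements, eqn (1) can be fitted to the measured attenuation as a function of gradient strength; the Python sketch below uses SciPy with synthetic placeholder data, and the particular δ value and the true D used to generate the data are illustrative assumptions only:

import numpy as np
from scipy.optimize import curve_fit

GAMMA_H = 2.675e8      # 1H gyromagnetic ratio / rad s^-1 T^-1
DELTA_OBS = 50e-3      # observation time Δ / s (illustrative)
DELTA_G = 8e-3         # gradient pulse duration δ / s (illustrative)

def b_factor(g):
    # b = γ_H^2 g^2 δ^2 (Δ − δ/3) for the PGSTE experiment
    return (GAMMA_H * g * DELTA_G) ** 2 * (DELTA_OBS - DELTA_G / 3.0)

def attenuation(g, d):
    # Eqn (1): E(g)/E(0) = exp(−b·D)
    return np.exp(-b_factor(g) * d)

# Placeholder data: sixteen linearly spaced gradient amplitudes (up to 163 mT/m)
# and synthetic normalised echo intensities generated with D = 1e-9 m^2/s.
g_vals = np.linspace(0.0, 0.163, 16)   # T m^-1
e_vals = attenuation(g_vals, 1.0e-9)

(d_fit,), _ = curve_fit(attenuation, g_vals, e_vals, p0=[1.0e-9])
print(f"D = {d_fit:.2e} m^2/s")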
However, this equation can also be applied for diffusion in porous materials with a quasi-homogeneous behaviour, that is, with a macroscopically homogeneous pore structure, 40 which shows a linear behaviour of the PFG log plot of the signal attenuation. a Calcined in the furnace on the net (1 cm layer). b Calcined in the furnace in a closed container (5 cm layer). The measurements were performed by fixing Δ = 50 ms and using values of δ = 4-12 ms depending on the sample. The magnitude of g was varied linearly with sixteen spaced increments. In order to achieve full signal attenuation, maximum values of g of up to 163 mT m −1 were necessary. All the measurements were performed at atmospheric pressure and 25°C. The diffusion coefficients D were calculated by fitting eqn (1) to the experimental data. Effect of alumina preparation conditions upon the carrier textural properties The alumina samples were prepared as detailed in the experimental section using varying preparation conditions and their impact upon the porous structure of the final alumina sample was determined. The parameters varied in the preparation of the alumina samples can be seen in Table 1. The alumina carriers prepared according to the conditions reported in Table 1 were characterized by mercury porosimetry. The mean pore diameter (d pore ), percentage of macropores of varying size within the carrier studied and surface area-to-volume ratio (S/V) were determined and are listed in Table 2. From the mercury porosimetry analysis, it can be seen that the samples studied contain a wide variety of average pore sizes, degrees of macroporosity and surface area-to-volume ratios. From the results obtained, it is clear to see that there are 2 main factors which significantly influence the pore structure of the alumina produced: the mixing time, t mix , and the calcination temperature. It can be seen that those samples with a significantly longer t mix contain no macropores within their pore structures, for example, Al 2 O 3 (4) and Al 2 O 3 (5) are both calcined at 700°C but mixed for differing lengths of time. Al 2 O 3 (4) is mixed for 25 minutes and only 2% of macropores with a mean pore diameter >50 nm whereas Al 2 O 3 (5) is mixed for only 6 minutes resulting in a final alumina carrier containing 25% macropores with a mean pore diameter >50 nm. In general, the average pore size increases as macroporosity increases and in turn, these factors reduce the overall surface area-to-volume ratio, as shown in Fig. 2. It must be considered that the surface area-to-volume ratio (S/V) of a porous particle is complex in nature and is dependent upon many factors including the porosity, shape, size and roughness of a specific particle in addition to the pore size distribution. However, the results of the mercury porosimetry analysis show that, for the samples being studied, S/V is significantly influenced by both the presence of macroporosity and the average pore size. In particular, both a low average pore size and degree of macroporosity is essential for preparing high surface area-to-volume ratio alumina carriers. The calcination temperature is also seen to significantly affect the final pore structure of the alumina carriers. Al 2 O 3 (5)-Al 2 O 3 (8) were all prepared by mixing for a relatively short mixing time of 6 minutes resulting in the formation of macropores. Each of these alumina carriers containing macroporosity were then subject to calcination at varying temperatures. 
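As a quick numerical check of the statement above that gradients of up to 163 mT m −1 were needed to achieve full signal attenuation, the snippet below evaluates the maximum b-factor available with the quoted pulse parameters and the resulting attenuation for a liquid with a diffusivity of the order of bulk n-octane. The diffusivity used here is only representative; the measured values are those reported in Table 3.

```python
import math

gamma_H = 2.675e8   # 1H gyromagnetic ratio, rad s^-1 T^-1
g_max   = 0.163     # maximum gradient strength, T m^-1
delta   = 12e-3     # longest gradient pulse duration used, s
Delta   = 50e-3     # observation time, s
D       = 2.4e-9    # representative diffusivity (order of bulk n-octane), m^2 s^-1

b_max = (gamma_H * g_max * delta) ** 2 * (Delta - delta / 3.0)
print(f"b_max = {b_max:.2e} s m^-2")                   # ~1.3e10 s m^-2
print(f"E/E0 at g_max = {math.exp(-b_max * D):.1e}")   # ~1e-13, essentially complete attenuation
```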
The effect of varying calcination temperature upon the final macropore content within the aluminas mixed for 6 minutes only can be seen in Fig. 3. As the calcination temperature is increased, the alumina particles will begin to sinter, forming bigger alumina particles and eventually inducing phase transformations causing the collapse of small pores and resulting in the formation of large mesopores and macropores. 41 At temperatures below 900°C, phase transformations do not occur and the carriers are composed of γ-Al 2 O 3 only. At 900°C , γ-Al 2 O 3 particles begin to sinter and transition partly to θ-Al 2 O 3 and above 900°C, the carriers are composed solely of θ-Al 2 O 3 and α-Al 2 O 3 . The phase transformations that occur result in changes to the textural properties of the resultant carriers, specifically d pore and the percentage of macropores within the samples increases whilst the surface area decreases. As such, the samples prepared using a calcination temperature less than 900°C (Al 2 O 3 (1)-Al 2 O 3 (5)) will be composed of γ-Al 2 O 3 only. This is confirmed from the XRD analysis of Al 2 O 3 (5) (Fig. 4a) showing peaks characteristic of γ-Al 2 O 3 . 42,43 As the preparation calcination temperature is increased from 800°C to 900°C (Al 2 O 3 (5) to Al 2 O 3 (6), Fig. 4a) PFG NMR studies: effect of pore network connectivity on selfdiffusion We now turn our attention to how the textural properties of the final carrier, determined by the different operating conditions in the preparation methods, influence mass transport within the pore structure. One important parameter to assess is the tortuosity, which is a structural property of the porous matrix defining the pore connectivity; knowledge of this parameter is important as values of tortuosity are highly desirable as input parameters for modelling and molecular simulations of mass transport within porous materials. 46 The tortuosity is, in theory, a function of the pore structure only and can be calculated using PFG NMR. Taking the ratio of the free bulk liquid diffusivity, D 0 , to the effective diffusivity of the liquid within the porous material, D eff , gives a dimensionless "PFG interaction parameter", ξ. 47 This relation is shown in eqn (2): This ratio has commonly been inaccurately referred to as the tortuosity, τ, of a porous material. 48 PFG NMR allows the calculation of the tortuosity of a porous medium defined in eqn (3): The important distinction between the parameters defined in eqn (2) and (3) is that for eqn (3), D eff represents the effective self-diffusivity of a weakly-interacting molecule only. Clearly then, the selection of an appropriate guest molecule for PFG NMR experiments is essential to determine the actual tortuosity of a porous medium. Liquid alkanes have been shown to be the most suitable guest molecules for determining tortuosity by PFG NMR experiments 49 due to their distinct lack of chemical functionalities which can interact with the porous medium or indeed, with any other molecules present within the porous medium under study. Effectively, the use of liquid alkanes ensures that the tortuosity calculated is dependant solely on the pore connectivity and is unaffected by any other interactions that could otherwise alter the self-diffusivity of the guest molecule. Previous work by D'Agostino et al. 49 has proven that a reliable estimate of tortuosity is therefore given by: To probe the tortuosity of the samples studied here we have used n-octane. 
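Since eqn (2) and (3) amount to a simple ratio of the bulk to the confined diffusivity, the calculation can be written in a few lines. The sketch below uses the n-octane values reported for Al 2 O 3 (8) and for the bulk liquid in Table 3; for a strongly interacting probe such as water or an alcohol, the same ratio would give the PFG interaction parameter ξ rather than the true tortuosity.

```python
# Tortuosity from PFG NMR diffusivities: tau = D0 / D_eff (eqn (2)/(3)),
# valid as a tortuosity only for a weakly interacting probe such as n-octane.
D0_bulk = 24.14e-10   # bulk n-octane self-diffusivity at 25 C, m^2 s^-1 (Table 3)
D_eff   = 17.00e-10   # n-octane self-diffusivity inside Al2O3(8), m^2 s^-1 (Table 3)

tau = D0_bulk / D_eff
print(f"tau = {tau:.2f}")   # ~1.42, matching the value listed for Al2O3(8)
```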
Previous studies have demonstrated that short chain liquid alkanes, namely n-octane, n-decane and cyclohexane, give reliable values of τ regardless of molecular dimension size. 40 A typical PFG NMR decay plot using n-octane imbibed within an alumina pellet used in this study can be seen in Fig. 5. The log attenuation plots of n-octane imbibed within pellets of the alumina carriers under study can be seen in Fig. 6. The experimental data were fitted using eqn (1), giving a straight line when plotted on a logarithmic scale, and the diffusion coefficients were determined by taking the negative value of the respective slopes. No evident curvature is seen in the log attenuation plots, indicating that the behaviour is quasi-homogeneous. 50 This behaviour is usually observed for porous materials with a macroscopically homogeneous pore structure when the root mean squared displacement (RMSD) of the diffusing species is much larger than the average pore size of the sample, i.e., the probe molecule collides with the pore walls many times. As a result, D eff is representative of the liquid confined within the porous medium, reduced by the tortuosity factor relative to the free bulk liquid. 50 The RMSD of molecules diffusing along the gradient direction for a given observation time, t, is defined by RMSD = (2Dt) 1/2 . The smallest RMSD for the measurements with n-octane (see Table S1 in ESI†) was probed when investigating the tortuosity of Al 2 O 3 (1) and this was equal to 9.7 μm, much larger than the largest average pore size of the samples studied (201.6 nm), confirming our hypothesis. The quasi-homogeneous diffusion behaviour reported for our samples has important implications in terms of the industrial scale-up of carrier preparation, as it indicates that the preparation method reported here gives carrier particles with a uniform pore structure. The numerical values of the self-diffusion coefficients obtained from the PFG NMR data depicted in Fig. 6 are shown in Table 3. As the tortuosity is a measure of the pore connectivity and is therefore a function of the pore structure of the porous materials under study, it seems appropriate to evaluate this parameter as a function of the average pore size of the alumina carriers; although tortuosity, which measures pore connectivity, and pore size are in theory independent of each other, previous work has reported that these two parameters can be inter-related 51 and that larger pores tend to enhance pore network connectivity, hence enhancing the rate of diffusion of the probe molecule within the porous structure.
Fig. 5 A typical PFG NMR decay plot of n-octane imbibed within Al 2 O 3 (1) obtained using the PGSTE pulse sequence. Self-diffusion coefficients were obtained by fitting the log attenuations to eqn (1). Data collected at atmospheric pressure and 25°C.
Fig. 6 Log attenuation plots of n-octane imbibed within Al 2 O 3 (1)-Al 2 O 3 (8). Solid lines are fits to eqn (1). Data collected at atmospheric pressure and 25°C.
Table 3 (fragment): Al 2 O 3 (8): D = 17.00 ± 0.51 × 10 −10 m 2 s −1 , τ = 1.42 ± 0.04; bulk n-octane: D = 24.14 ± 0.72 × 10 −10 m 2 s −1 .
Fig. 7 reports the values of self-diffusivity of n-octane and corresponding tortuosity values for the alumina samples studied here as a function of the average pore diameter. The values of tortuosity reported in this work were found to be of a similar value to those reported in the literature for the same guest molecules within similar porous materials. 52,53 Both the self-diffusivity and tortuosity values show a strong dependence upon the average pore diameter at low values of pore size. The self-diffusivity of n-octane increases drastically upon increasing the pore size from 8.0 nm to 17.0 nm.
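The quasi-homogeneous criterion discussed above can be checked directly. Assuming the one-dimensional Einstein relation along the gradient direction, RMSD = (2DΔ) 1/2 (the form that reproduces the 9.7 μm quoted above for Al 2 O 3 (1)), the displacement probed in these experiments is several orders of magnitude larger than the mean pore sizes:

```python
import math

D_slowest  = 9.41e-10   # slowest n-octane self-diffusivity, Al2O3(1), m^2 s^-1
Delta      = 50e-3      # observation time, s
d_pore_max = 201.6e-9   # largest mean pore diameter among the samples, m

rmsd = math.sqrt(2.0 * D_slowest * Delta)          # 1D RMSD along the gradient direction
print(f"RMSD = {rmsd * 1e6:.1f} um")               # ~9.7 um
print(f"RMSD / d_pore = {rmsd / d_pore_max:.0f}")  # ~48, so pore walls are sampled many times
```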
At average pore sizes higher than 17.0 nm the self-diffusivity no longer increases with increasing pore size and becomes constant, within error, at roughly 1.65 × 10 −9 m 2 s −1 . Based on these results, it is reasonable to group the samples studied in this work into two groups: those with small (up to 17.0 nm) average pore sizes and those with relatively 'large' pore sizes, that is, pore sizes greater than 17.0 nm. These groupings are indicated in Fig. 7 by the regions I and II, representing the small and relatively large pore size samples, respectively. These results can be explained as a molecular confinement effect due to the small size of the pores. At the low pore size of 8.0 nm, the n-octane molecules are highly confined and will be subject to many collisions with the pore walls. As the pore size is increased, some of this restriction is lifted; hence, molecules are relatively freer to move and collide with the pore walls less often, therefore exhibiting a larger RMSD and hence faster diffusion. When the pore size is large (d pore > 17.0 nm), the level of confinement is further decreased, which further increases the RMSD and hence the average self-diffusivity. As the tortuosity is effectively proportional to the inverse of the self-diffusivity, the same trend is seen but with the tortuosity decreasing as pore size increases, as would be expected. In summary, the results reported in Fig. 7 clearly demonstrate that larger pores ensure a better pore network connectivity, hence a lower tortuosity.
Fig. 7 The pore size dependence of (a) the self-diffusivity of n-octane imbibed within the pores of the alumina carriers and (b) the calculated tortuosity values.
Due to the pore size dependence of both the diffusivity and tortuosity of the samples studied in this work, it is important to consider also the contribution of the relative proportions of macropores present within the samples in determining the average pore size. Fig. 8 reports the values of self-diffusivity as a function of the macropore percentage. It can be seen that, in general, a greater percentage of macropores present within a sample aids mass transport and decreases tortuosity, hence improving pore network connectivity, possibly by providing wider, less restricted pathways through which molecules can diffuse. Wider pores will inevitably result in fewer collisions with the pore walls and therefore molecules will diffuse faster within the porous structure. The samples possessing no macropores with a radius greater than 50 nm show differing values of diffusivity and tortuosity. As there are few to no macropores present in these samples, the value of diffusivity measured is influenced solely by the average pore sizes. Indeed, this is evidenced as, of the samples containing no macropores, the slowest self-diffusivity value of 9.41 × 10 −10 m 2 s −1 is observed in Al 2 O 3 (1) with an average pore size of 8.0 nm. Al 2 O 3 (2)-Al 2 O 3 (4) also possess no macropores but show faster self-diffusivity values owing to their larger pore sizes. Therefore, it is reasonable to conclude that, for samples containing a similar macropore content, the diffusivity is determined by the restrictions imposed upon the guest molecules due to the average size of the pores. When the carriers contain between 0-27% of macropores with a radius greater than 50 nm, the diffusivity increases (and the tortuosity decreases) as the contribution of the macropore diffusivity to the overall mass transport processes becomes significant. Carrier samples containing more than 27% of macropores with a radius greater than 50 nm show no significant increase in diffusivity, indicating that at 27% macroporosity the main transport route is through the macropores. When the percentage of macropores with a radius greater than 200 nm is considered, a similar trend is observed. Therefore, it is reasonable to conclude that, for the samples studied here, a macropore radius of 50 nm is sufficient to alleviate any restriction upon guest molecules, allowing n-octane to diffuse faster with fewer geometrical restrictions imposed by the porous network. Despite the obvious interlinked relationship between the percentage of large macropores and the average pore diameter seen within the samples, it is important to consider how both factors impact the mass transport of guest molecules throughout the entire porous network.
PFG NMR studies: effect of pore surface chemistry on self-diffusion
The previously discussed 'PFG interaction parameter' ξ can be used to determine the effect of the surface chemistry and subsequent surface interactions upon the self-diffusivity of guest molecules imbibed within the pores of a given porous material. 47,49 The PFG interaction parameters determined using water, methanol and ethanol are shown in Table 4 and the variation in these values with increasing pore size can be seen in Fig. 9. The log attenuation plots of water, methanol and ethanol imbibed within pellets of the alumina carriers under study can be seen in Fig. S1-S3 in ESI.† Hydrogen bonding between molecules with appropriate functional groups and surface hydroxyl groups on a catalyst surface is thought to be a significant interaction in adsorption 54 and can contribute to solvent effects 55,56 resulting in changes in catalytic activity. The PFG interaction parameter, ξ, generally decreases with increasing pore size for all three molecules imbibed within the alumina carriers. This effect can be attributed to changes in structural properties; in particular, it is due to the reduced restriction of the guest molecules within the carrier pores resulting in fewer collisions with the pore walls and therefore a faster diffusivity. However, there are significant differences between the values of ξ for water, methanol and ethanol diffusing within the small pore size samples; conversely, the medium to large pore size samples (d pore > 17.0 nm) show very similar values of ξ (between 1.3-1.6) regardless of the probe molecule used. Within the structure of the small pore size samples there is a relatively high concentration of guest molecules close to the pore wall and thereby the diffusivity will be significantly affected by interactions of the guest molecules with the pore surface, meaning that those molecules interacting more strongly with the surface will diffuse more slowly. This is consistent with recently reported results on alcohol diffusion in mesoporous silica. 57 In the larger pore size samples, there is a much higher concentration of bulk liquid diffusing in the pore volume as opposed to that at the surface, and therefore surface interactions will have a lower impact on the diffusivity.
Intriguingly, water confined within the pores of Al 2 O 3 (1)-Al 2 O 3 (7) show lower values of interaction parameter than the respective tortuosity values obtained using n-octane indicating an 'enhanced' diffusivity of water relative to n-octane confined within the same network of pores, a property that has been previously detailed for various polyols confined within TiO 2 , SiO 2 and γ-Al 2 O 3 carriers. 49,58 This property is attributed to the disruption of the extensive hydrogen bonding networks between polyol molecules by the porous medium and recent work has confirmed this as well as demonstrating the importance of pore saturation to measuring accurate values of diffusivity in similar systems. 58 However, further discussion of this phenomenon is beyond the scope of this work. Conclusions In this paper, alumina carriers with differing textural properties were prepared under different operating conditions. Among the conditions varied during the preparation, two main parameters, specifically, mixing time and calcination temperature, were changed. Mercury porosimetry analysis confirmed that longer mixing times resulted in smaller average pore sizes and a lower percentage of macropores present in the aluminas produced. Higher calcination temperatures were found to trigger phase transitions of the alumina thereby resulting in the alumina samples produced to have larger average pore sizes and to contain a higher percentage of macropores. It is clear that the textural properties of the alumina carriers can be easily controlled and effectively tailored to form structures with optimal pore characteristics required for specific applications by simply varying the preparation conditions. A comprehensive set of PFG NMR studies using n-octane to probe the effect of textural properties of the prepared carriers confirmed that, up to an average pore size of 17.0 nm, the diffusivity of n-octane increases rapidly with the average pore size. When alumina samples with larger pore sizes were analysed, diffusivity was only slightly higher, reaching a plateau with increasing pore size suggesting that a pore size greater than 17.0 nm is sufficient to alleviate the major restrictions on the probe molecules motion. When alumina samples containing no macroporosity were analysed, the probe molecule diffusivity was determined only by the average pore size of the sample. However, probe molecule diffusivity was found to increase as the percentage of macropores within the sample increased. When the samples contained approximately 27% macroporosity or above, the probe molecule diffusivity remained constant suggesting that 27% macroporosity is sufficient to alleviate any mass transport limitations due to geometrical restriction of the probe molecule within the pore structure of the carriers. In order to study the effect of the pore surface chemistry on diffusion, PFG NMR studies using water, methanol and ethanol were conducted. The results revealed that, for the samples with low average pore size and low macropore content, surface interactions between the probe molecules and the pore surface are significant in determining the diffusivity through the pore structure; conversely, for samples with much larger pore size and macropore content, surface interactions have little effect on determining the diffusive motion of guest molecules. 
In summary, the study reported here highlights in a very comprehensive and quantitative manner the role of pore size and macropore content on mass transport by diffusion in macroporous-mesoporous catalytic materials. The reported methodology and results, obtained with a bench-top NMR instrument, which is recently increasing accessibility to NMR techniques for the wider scientific academic and industrial communities, may serve as a guideline for tailoring textural properties of porous materials through adopting suitable operating conditions in the preparation procedure, which can lead to pore structures with tuned diffusion properties. We believe that this work will provide a useful tool for those working in the area of catalyst preparation, physical chemistry of porous materials and their applications. Conflicts of interest There are no conflicts to declare.
Evaluating Darwin’s Book on the Descent of Man This is a review of A Most Interesting Problem: What Darwin’s Descent of Man Got Right and Wrong about Human Evolution, edited by Jeremy DeSilva. The book has ten chapters, the first seven evaluating and updating the seven chapters of The Descent of Man, the eighth outlining Darwin’s theory of sexual selection, the ninth criticizing Darwin’s view of how sexual selection shaped human racial divergence, and the tenth summarizing hominin diversification. social complexity and technological development, but in innate mental capacity. He showed that larger, more technological societies were replacing tribal ones. She thinks that these views served to justify "Social Darwinism's" promotion of unbridled, unregulated competition among individuals and societies as right and good. Hers is a very fair introduction to the book's contents. In the first chapter, the biological anthropologist Alice Roberts lays out Darwin's comparative approach to demonstrating the community of ancestry of human beings with other animals from the similarity of amphibian, reptile, bird and mammal skeletons, the anatomical similarities of all vertebrates, the greater similarity of embryos relative to adults, and human vestiges of structures useful in other animals. Although some of Darwin's human "vestiges" have turned out to be adaptive, the soundness of Darwin's approach has been abundantly vindicated by the basic biochemical similarity of all life (Monod 1972, pp. 102ff ). Next, a neurobiologist, Suzana Herculano-Houzel, considers what we know about the mental capacities of human beings and other mammals. She does not start as Darwin did by comparing behaviors enabled by human mental capacities-love, memory, attention, curiosity, imitation, reason (Darwin 1871, p. i.105), imagination, tool use and aesthetic sense (Darwin 1871, pp. i. 46, 51 and 63)-with those of other mammals, although she gives a brief summary of these comparisons on p. 59. Lorenz (1978, see especially chapter 7 on the roots of conceptual thought) had already put Darwin's approach to good use. Instead, she starts from the basic similarity of all vertebrate brains. For lack of space, she gives less detail on the neural basis of different mental capacities than does Changeux (2008), only a brief summary of the functions of various parts of the human brain (pp. 51-53). Instead, she shows that in all vertebrates, neurons are arranged in loops, whereby a sensory neuron's signal can lead to the stimulation of a motor neuron, whose effects in turn stimulate the sensory neuron (pp. 49-50). Moreover, the nervous system exhibits spontaneous activity: it is not simply a stimulus-response (input-output) system like a computer. Finally, the brain has associative connections that allow varying degrees of complexity and flexibility in the behavior of different animals (pp. 50-53). She expects species with more cortical neurons to have greater mental capacity, which in turn allows more flexible and intelligent behavior (pp. 58-59). In primates, neuron size is constant, whereas in other mammals larger brains have larger neurons. Thus a human brain's cortex is half the size of an elephant's, but has three times as many neurons (pp. 57-58). Human brains are big primate brains with more cortical neurons allowing more intelligent behavior. Big brains consume abundant energy: only the ability to cook food, making it more nutritious and digestible, allowed human beings to evolve them (p. 57). 
Maximum life span, and age at sexual maturity, correlate more closely with number of cortical neurons than with body size or metabolic rate. This circumstance allows more intelligent animals greater opportunity for cultural transmission (pp. 60-61). This chapter is uncommonly full of new good ideas: Darwin would have loved it. In chapter 3 the evolutionary anthropologist Brian Hare discusses Darwin's chapter on the origins of morality. He recognizes (pp. 80-81) that showing how the amoral process of natural selection can favor the evolution of morality is an astounding achievement. As Darwin did in his chapter on mental capacity, Hare here discusses the extent to which human beings share different rudiments of morality with bonobos and dogs-sympathy (female bonobos willingly share a feast of fruit with another, especially a stranger), reasoning (a male bonobo chased by angry females turns about, "cries wolf, " making false alarm calls that create confusion amidst which he escapes), regret (making the wrong choice in what lever to press for food), and learning by imitation. Like Darwin, he finds that apes lack morality. He discusses the love of a dog for its master, and various animals' self-control (restraint of immediate desire in order to gain more later), which last he sees only in larger-brained animals. I find his examples less telling, on the whole, than those of Darwin or de Waal (2016. More significantly, Hare explains how selection among foxes for friendliness with human beings yielded animals that were not only friendly but had floppy ears, curly tails, shorter faces, smaller teeth and multicolored fur, just as happened when dogs evolved from wolves (pp. 74-75). Moreover, friendly foxes, like dogs but unlike wolves or chimpanzees, understand human signals such as pointing. This was a shocking illustration of genetic correlation. Hare, unlike Darwin, knows evidence of the effectiveness of selection for human bonding, which works by increasing levels of the hormone oxytocin. On the other hand, Darwin (pp. i. 75,93) understood that what drove selection for social life and social instincts was interdependence among members of a social group, whose members must cooperate to bring down big prey or defend the group against competitors. Plato (Republic, Book I, 352c) realized long ago that members of a gang of thieves must treat each other justly and fairly if the gang is to survive, let alone function. Hare, however, never mentions the role of interdependence in the evolution of morality. As interdependence is not discussed in any other chapter, this book lacks a crucial element for discerning just what Darwin got right and wrong about human evolution. In chapter 4, the paleontologist Yohannes Haile-Selassie endorses (p. 83) Darwin's argument that human beings, like other animals, are subject to natural selection. He then summarizes fossil evidence for the origin and diversification of hominins after their divergence from the chimpanzee-gorilla lineage. Early hominins (6.5 + Ma), Sahelanthropus, were facultatively bipedal and had shorter, blunter canines than chimps which, as Prum (2017, pp. 296-297) reasonably concluded, reflected reduced male dominance. Their brains, however, were chimp-sized, and they used no tools. The same was true for other hominins before 4.2 Ma. Australopithecus, which first appeared 4.2 Ma, were more bipedal, had larger molars and yet smaller canines, chimp-sized brains, and no tools. 
Australopithecus diversified during its 2 + million-year tenure, and one species gave rise to the larger-brained genus Homo, perhaps 2.8 Ma, although the most convincing fossils come nearer 2 Ma. Homo erectus appeared 2 Ma, shortly after the smaller Homo habilis, which used tools. Homo erectus, which had larger brains but smaller molars than H. habilis, probably depended on cooked food and needed fire (Wrangham 2009), fashioned stone hand-axes and was probably using fire to cook food by 1 Ma (p. 100). Homo erectus spread to the Caucasus 1.8 Ma, and onward to China and Java, giving rise among others to Neanderthals and Denisovans. Our species, Homo sapiens, with yet larger brains, appeared between 300,000 and 160,000 years ago. Both H. heidelbergensis, ancestral to H. sapiens, and Neanderthals were probably talking by 350,000 years ago (Jolly 1999, pp. 380-381). H. sapiens also spread through Eurasia, first interbreeding with, then replacing the Neanderthals and Denisovans. This series of fossils, unknown in Darwin's time, justified his claim that human beings evolved from smaller-brained primates and disproved his hypothesis that the hands "released" for other uses by bipedalism were immediately used for tool-making. Haile-Selassie's story of what these fossils reveal about the course of human evolution is useful and well told. The first part of Darwin's chapter 5, on the mental and moral faculties of "primeval" human beings, is central to his theory of how morality evolved. The bioarchaeologist Kristina Killgrove's analysis of this chapter, whose focus was the second part on civilization, misses this theory's importance. I therefore review the theory and its current status before turning to her discussion. Darwin (1871, pp. i. 71-72, 159-161) considered intelligence necessary for morality, assumed that both varied heritably, and that since some tribes were always replacing others, tribes whose members were more intelligent, courageous and loyal to each other would win intertribal combats. Morality was strictly intratribal and favored the tribe, not the individual or species, suggesting that morality spread because it was crucial to tribal survival (Darwin 1971, pp. i. 93ff, 162). He saw that within-group selection favors selfishness, but that the tendency to help others from which one received help (as in chimps: de Waal 1997) and especially, helping those reputed for helpfulness, courage, and loyalty to other tribe members would counter this selection. Moreover, tribe members, being intelligent, would see that harmonious cooperation was essential for the tribe and its members to survive (Darwin 1871, p. i. 165). This argument is still considered valid, although communal punishment of non-cooperators also played a crucial role in the evolution of morality (Boehm 1997(Boehm , 2012. Morality, however, is spread not only by intertribal conflict but by the need to cooperate to bring down big game or to pool knowledge on where to find food during severe drought (Boehm 2012). Differentially helping those of good repute stabilizes cooperation (Fehr andFischbach 2003, Fehr 2004;Panchanathan and Boyd 2004), as does punishing noncooperators (Fehr and Gächter 2002). Killgrove (pp. 109-115) gives a fair if sometimes uncomprehending account of Darwin's arguments, mentioning without comment his theory of how morality (which she calls altruism) evolved. Darwin misleadingly illustrated tribe replacing tribe by civilized nations replacing tribes (Darwin 1871, p. i. 
160): but disease and vastly superior weaponry, factors irrelevant alike to morality and normal intertribal warfare, drove the victories of the conquistadors. She discusses Darwin's uncertainty whether the (unquestioned) duty to protect the weak and unfortunate leads to society's moral and intellectual decline. She did not mention Darwin's (1871, p. i. 169) acute remark that inequality is needed for civilization to develop and for science and technology to advance. In turn, Darwin failed to grasp that the extreme inequality civilization often imposes often leads to the cultural degradation of the underclass thus created. Her tone changes, and she begins to stray from Darwin's chapter, on p. 115. She taxes Darwin with "patriarchal language" (correct usage at that time, and not necessarily exclusionary: in the King James version of the Bible Genesis 1:27 reads "So God created man... Male and female created he them"); his conflation of morality and religion (not evident in chapter 5); and his endorsement of colonialism (Darwin 1871, p. i. 179) (a flaw not crucial to his main contributions). On p. 116 she taxes Darwin with assuming that intelligence can be measured (Herculano-Houzel suggests a crude measure in chapter 2, pp. 58-59), and justly inveighs at length against the IQ concept, a twentieth century obsession irrelevant to Darwin's chapter 5. Finally, returning to the end of Darwin's chapter, she attacks the idea of progress in civilization. I find this attack odd: there is clearly a trend (which she recognizes: p. 122) during the last 11,000 years for civilizations with progressively larger scales of interdependence and diversity of occupations (Vermeij and Leigh 2012). The danger lies in associating morality with this progress, as Darwin did. In chapter 6 the anthropologist John Hawks discusses how Darwin fit Homo into the classification-the phylogeny, for Darwin (1871, p. i. 188) believed classification should be genealogical (p. 126)-of other primates. Darwin (1871, pp. i. 189-191) argued that phylogeny was best inferred from useless or vestigial characteristics (pp. 139-141). Based on anatomical studies of Huxley, Owen and Mivart, Darwin (1871, p. i. 197) grouped Homo with the apes, but he had Homo diverge from the apes before other apes, including gibbons, diverged from each other (p. 127). Elsewhere (p. 133) Darwin (1871, p. i. 199) considered Homo most closely related to gorillas and chimpanzees, so he tentatively suggested that Homo first evolved in Africa. In Darwin's time, some assigned Homo its own kingdom, whereas Darwin (1871, pp. i. 186, 195) thought that in a phylogenetic classification, Homo should rank as a family or subfamily. Now hominins, including Australopithecus, are ranked as a tribe; hominins plus chimpanzees and gorillas as the subfamily Homininae, and Homininae plus orangutans as the family Hominidae (p. 135). DNA has since shown Homo more closely related to chimps than Darwin thought (p. 137). Chapter 6 was perhaps Darwin's most "Copernican" moment: Copernicus ranked the earth as just another planet, Darwin ranked Homo as just another animal. In chapter 7 the anthropologist Agustín Fuentes evaluates the Descent of Man's last chapter on "the races of man. " Here Darwin concludes that Homo is a single monophyletic species and ranks the races as subspecies (pp. 147-149) which diverged thanks to long isolation on their respective continents (pp. 150-151). Darwin could not explain physical differences among the races without invoking sexual selection (p. 152). 
Despite his experiences with educated Fuegians and an African black he came to know (Darwin 1871, p. i. 232), he inferred (I think from contrasts in level of civilization) major racial differences in mental and moral capacity (pp. 148, 152). Darwin (1871, i. pp. 236-238) inferred competitive replacement of races from extinctions of tribes, languages and cultures. He inferred from the replacement or subordination of indigenous races by European colonists that the more civilized also won intertribal contests (p. 160). Darwin did not criticize this process, although it was cruel and immoral by his own standard (Darwin 1871 pp. i. 168-169), and might even have approved of it, an attitude Fuentes justly criticizes. Fuentes next summarizes current understanding. Human beings form a single species and subspecies, and races cannot be clearly and consistently distinguished. Human beings are identical over > 99% of their genome, and existing genetic variation is distributed widely and irregularly. Fuentes ends by asking whether Darwin was a racist. Judging by his beliefs, he must have been, although his remark that the "high cultures" of Mexico and Peru developed indigenously (Darwin 1871, p. i. 183) suggests limits to his racism. Darwin's friendship with Fuegians and an African black do not suggest racist behavior. Sadly, Darwin trusted published false "facts" and his wrong inferences therefrom over his personal experience. Fuentes here criticizes Darwin fairly and honorably, with no trace of meanness. In chapter 8, Michael Ryan, who was the first to demonstrate sexual selection by female choice in a wild population (Ryan 1980) and who shared in the first demonstration of conflict between natural and sexual selection (Ryan et al. 1982), summarizes how sexual selection works and how it differs from natural selection. Darwin (1871, pp. i. 256-257) distinguished between natural selection, which adapts a population to its environment, and sexual selection, which is driven by who mates with an individual that will soon mate anyway. Sexual selection arises when more of one sex (usually male) than the other are ready to mate, so members of the former sex compete for mates (pp. 167-169). Darwin (1859, pp. 88-89) noted that this competition can take two forms: combat for matings, or competition to attract mates. Ryan notes (p. 171) that sometimes males that cannot obtain mates by combat or attraction do so by stealth (Warner et al. 1975;Emlen 1997). Wallace opposed the idea of sexual selection by female choice (p. 172). Now, however, no one denies sexual selection by female choice: they argue over why they choose as they do (pp. 174-176). The most likely alternatives are choosing mates with which they will have the most, the most fit, or the best caredfor offspring, or choosing mates by criteria evolved in other contexts, such as choosing food, ease of detection, or previously unrevealed aesthetic preferences. then discusses the neural bases of perceptive abilities, aesthetic preferences, and responsiveness to these perceptions. His chapter is sound biology, well presented, delivered on the basis of considerable thought and experience. How well it helps one understand mate choice in human beings, the reader must decide. In chapter 9, concerning Darwin's two chapters on the role of sexual selection in human sexual dimorphism and racial divergence, the anthropologist Holly Dunsworth opens (p. 
183) with the most intemperate of this volume's attacks on Darwin: "This [chapter] is Darwin's begetting every caveman-inspired nugget of dating advice, every best-selling author's stance on innate gender roles, and every entertainer's sexist appeal to science. " Invective is neither science nor coherent argument. The incoherent anachronism of Darwin dispensing dating advice inspired by barbaric cavemen arouses outright laughter. Fortunately, the tone quickly improves. She provides (pp. 185-191) sound evidence for modern views on topics such as how natural (not sexual) selection accounts for latitudinal gradients in skin color, the paler skin of women, and why our ancestors lost their hair > 1 Ma. She makes no effort, however, to understand what led Darwin into error, To learn what misled Darwin, two quotations are helpful. Darwin (1871, p. ii. 385) remarked that "False facts are highly injurious to the progress of science, for they often long endure, " and J. B. S. Haldane (1932, p. 143) remarked that " [Darwin] was commonly right when he thought for himself, but often wrong when he took the prevailing views of his time... for granted. " Darwin based his ideas about human sexual differences on the prevailing view that in primitive tribes, men had to fight to obtain and keep wives. If Boehm (2012) is right that we all descend from egalitarian tribes, the primary form of human social organization from 45,000 or even 200,000 to 15,000 years ago (Boehm 2012, pp. 35, 67, 160) Darwin's conclusions collapse: these egalitarian bands often punished aggressive appropriation of women by death. This collapse unhorses Darwin's claim that males are more intelligent because they had to be clever in winning a wife, leaving no reason to believe the odd and unreasonable doctrine of superior male intelligence (pp. 193-194). Then she asserts the danger of striving for objectivity (pp. 194-195), which is indeed as impossible to attain (Nagel 1986) as perfection. Yet we must strive for both: seeking objectivity brings us out of our self-centeredness to focus on what we study. Next, she rehashes old disputes. She seems to view competition vs cooperation as an either/or (pp. 195-196), whereas evolution is an intricate interplay of competition and cooperation, the competitive process of natural selection often favoring complex social cooperation within species and mutualism among species (Jolly 1999, p. 4, Leigh andZiegler 2019). She taxes Darwin with over-focus on competition, although he ends his chapter 3 with how civilization might extend morality (the basis of cooperation) beyond the tribe to the nation, and how morality and sympathy might be made universal (Darwin 1871, pp. 100-104). Finally, what role should science play in moral decision-making? "What is" is a very poor predictor of "what ought to be. " All the studies of gender roles in chimpanzees, bonobos and Neolithic agriculturalists cannot override the Golden Rule, whose importance to morality Darwin (1871, p. i. 106) so emphasized, that would forbid sexual (or racial) discrimination. I see little evidence that Darwin thought otherwise, but The Descent of Man is not a moralizing book. Science needs women: I think women like Alison Jolly (1966Jolly ( , 1985Jolly ( , 1999 and Jane Goodall (1986) greatly improved primatology. The idea, however, hinted on p. 
201 that hard (rigorous) science, as opposed to many of its practitioners, is hostile to women vanishes before the mathematician Emmy Noether, whose appointment to the Göttingen faculty was originally blocked by humanists despite the immense prestige of her advocate David Hilbert (Weyl 2012, p. 54) and Maryam Marzakhan, an Iranian mathematician who won the Fields medal in 2014. I have never heard female biologists complain about scientific rigor. Does this complaint reflect tension between biological and the more humanistic cultural anthropologists? The science journalist Ann Gibbons opens chapter 10 by recounting a tour of Darwin's home, Down House and its grounds, by a few dozen archaeologists and anthropologists of the European Society of Human Evolution. This is a prelude to an imaginary dinner at Down House where the group's experts on paleontology and DNA phylogeny tell Darwin, one by one, what they and others have learned since 1871 about human evolution (pp. 207, 216). This is a rather moving retelling of chapter 4's story, even if Darwin doesn't get to say a word. The retelling brings out a few new twists. He is told that bonobos, not known in his time, are the modern ape most like ancestral hominins (p. 210); that, starting 3 or 4 Ma, there were always several coexisting species of hominins until < 50,000 years ago; that > 5 Ma, early hominins slept in tree nests like chimpanzees to escape predators, ate fruit and seeds, and walked bipedally when on the ground; and that another species of Homo, H. neanderthalensis which later coexisted with our species, or a close relative of Neanderthals, appeared in Spain 430,000 years ago (p. 219). Darwin is also told how miniature species of Homo in the Philippines' Luzon Island and on Flores, that descended from H. erectus, confirmed his view that relatives of large animals on small islands were smaller. Darwin learns how techniques of extracting DNA from fossils revealed another species of Homo, the Denisovans, and showed that Homo sapiens interbred with Denisovans and Neanderthals before a new wave of H. sapiens erupting from Africa 70,000 years ago replaced all other surviving species of hominin (p. 220). Finally, Darwin learns that any pair of chimpanzee populations differ genetically far more than do any two human "races" (p. 221): there is no need to try to delineate human subspecies. The whole dinner is pleasant, cheerful, and a delight for Darwin. Gibbons has provided an uncommonly apt and reconciliatory coda to this turbulent volume. For me, this book was quite a shock. I had expected passionate debate on the relative merits of Darwin's view of how morality and intelligence evolved from social instincts of group-living animals whose survival depended on cooperation, and the ideas of E. O. Wilson's (1975) Sociobiology, based on selfish genes, innate selfishness of human beings, and cooperation only among kin. Instead, amidst a wealth of enlightening information on human ancestry, the evolution of intelligence and morality, the rise and fall of hominin diversity, and the remarkable genetic homogeneity of modern humanity, I found impassioned accusations of Darwin's racism, sexism and reinforcement for "Social Darwinism. " One obvious lesson is that The Descent of Man is a complex book. Like the Bible, one can find, and emphasize, what one chooses to look for. Thus Darwin's (1871, pp. i. 168-169) assertion of our moral obligation to help the destitute and the disabled jumps to my eye; Darwin's (1871, p. ii. 
403) assertion that humanity must remain subject to a severe struggle for existence if natural selection is to improve it, jumps to another's. Darwin's (1871, p. i. 101) vision of the attainment of universal sympathy for people of all nations and races, and to other animals (spiced with criticism of the ancient Romans for their inhumanity) jumps to my eye, his apparent acceptance (Darwin 1871, p. i. 160) of the replacement of indigenous tribes by technologically better-equipped colonists jumps to another's. The neurobiologist Changeux (2008, p. 66) sees in Darwin's morality an ethic totally contradictory to Spencer's "Social Darwinism" and its advocacy of unbridled competition, and notes that Spencer invented the idea of "Social Darwinism" in 1850, nine years before Darwin's Origin of Species. The moral philosopher Mary Midgley (2014) sees in Darwin's socially oriented morality an antidote to the rampant individualism of modern Western society, one facet of which is Spencer's "Social Darwinism. " Midgley (2002) notes that in the 1880's Spencer was the best-selling philosopher in the US. On the other hand, Fuentes (p. 161) remarked of Darwin's chapter on the races of man that "To this day racist and nationalist/separatist ideologues use Darwin's words... as basis for their erroneous and intentionally hurtful and hateful positions and actions. " The second lesson implicate in this edited volume is that one must separate Darwinian wheat from Darwinian chaff (as Fuentes valiantly tried to do for the unpromising second half of Darwin's chapter, "On the Races of Man"), and not let the chaff bury the wheat. Darwin (1871) contains two parts, of which the second, Selection in Relation to Sex, is finally attaining the influence it deserves (Fisher 1930, pp. 129-141;West-Eberhard 1983, Prum 2017, Ryan 2018. Judging by this edited volume, Darwin's theory of the evolution of morality is still struggling for understanding, despite its demonstration that the blind mechanism of natural selection can bring forth as purposeful and immaterial a property as morality. This theory consists of two parts. First, morality is enabled by intelligence, for intelligence allows remembering past actions and assessing their consequences, and reason judges between conflicting aims and desires, favoring the most enduringly satisfying act, the one best for the group (Darwin 1871, pp. 88-91). Jolly's (1966) argument that the evolution of intelligence was prompted by social life fits perfectly with Darwin's theory. Darwin (1871, pp. i. 161-166) proposed that selection among groups, reinforced by sexual preference for those reputed for courage, loyalty and cooperativeness, was what favored morality. Nowadays, one would say that ensuring the cooperation needed to bring down big game influenced group selection more than did intertribal conflict, and that communal punishment of non-cooperators was a necessary ingredient that Darwin omitted (Boehm 2012). D. S. Wilson (2019) tried to complete this "Darwinian revolution" by showing how to fulfil Darwin's (1871, pp. i. 100-101) hope that "As man advances in civilization, and small tribes are united into larger communities, the simplest reason would tell each individual that he ought to extend his social instincts and sympathies to members of the same nation. This point being reached, there is only an artificial barrier to prevent his sympathies from extending to all races and nations. " Darwin's moral vision is one that can inspire constructive social action. 
In short, A Most Interesting Problem is a stimulating book, but to derive full benefit from it, one must read the first five chapters and the concluding chapter of Darwin (1871).
Abbreviations Ma: Million years ago.
First principle study of scandium-based novel ternary half Heusler ScXGe (X = Mn and Fe) alloys: insight into the spin-polarized structural, electronic, and magnetic properties The structural, electronic, and magnetic properties of novel half-Heusler alloys ScXGe (X = Mn, Fe) are investigated using the first principle full potential linearized augmented plane wave approach based on density functional theory (DFT). To attain the desired outcomes, we employed the exchange–correlation frameworks, specifically the local density approximation in combination with Perdew, Burke, and Ernzerhof's generalized gradient approximation plus the Hubbard U parameter method (GGA + U) to highlight the strong exchange–correlation interaction in these alloys. The structural parameter optimizations, whether ferromagnetic (FM) or nonmagnetic (NM), reveal that all ScXGe (where X = Mn, Fe) Heusler alloys attain their lowest ground state energy during FM optimization. The examination of the electronic properties of these alloys reveals their metallic character in both the spin-up and spin-down channels. The projected densities of states indicate that bonding is achieved through the hybridization of p–d and d–d states in all of the compounds. The investigation of the magnetic properties in ScXGe (where X = Mn, Fe) compounds indicates pronounced stability in their ferromagnetic state. Notably, the Curie temperatures for ScXGe (X = Mn, Fe) are determined to be 2177.02 K and 1656.09 K, respectively. The observation of metallic behavior and the strong ferromagnetic characteristics in ScXGe (X = Mn, Fe) half-Heusler alloys underscores their potential significance in the realm of spintronic devices. Consequently, our study serves as a robust foundation for subsequent experimental validation. Introduction Heusler alloys, originally conceived by Friedrich Heusler in 1903, 1 have recently garnered signicant attention within the scientic community due to their promising potential in the realm of spintronics and smart materials. 2Among these alloys, Half-Heusler (HH) semiconductors stand out, characterized by having either eight (08) or eighteen (18) valence electrons and band gaps spanning from 0 to 4 eV.Remarkably, this category encompasses around 250 ternary compounds.Recent research reports have also revealed a multitude of physical phenomena associated with these Heusler alloys, including ferroelectricity, ferromagnetism, and ferroelasticity, attributed largely to their multifunctional properties.As a result, these alloys are continually drawing signicant interest in a wide range of elds, including spintronics, 3,4 optoelectronics (such as sensors, magnetoresistors, photovoltaic detectors, and light-emitting diodes), thermoelectronics, 5,6 shape memory applications, 2,7 piezoelectric semiconductors, 3,8 topological insulators, 4,9 and superconductivity. 10,113][14] Moreover, the pursuit of achieving fully spinpolarized currents has generated considerable interest in these materials. 15Heusler alloys possess another remarkable feature, stemming from their utilization of cost-effective raw materials and their ability to withstand chemical and mechanical stresses at high temperatures and densities.In the realm of thermoelectric applications, Heusler alloys have been subject to extensive investigation. 
16Therefore, it is of paramount importance to delve into the magnetic, electronic, and structural properties of Heusler alloys.Such exploration not only promises to enhance the efficiency of thermoelectric devices but also offers insights into various underlying physical phenomena associated with their multifunctional properties. Heusler alloys exhibit a diverse range of crystal structures and are renowned for their distinctive classication.The majority of these crystals adopt a closely packed cubic structure, with four equidistant points within the FCC lattice forming the basis for the unit cell's diagonal. 17Heusler alloys are a class of intermetallic compounds that can be categorized into two main groups: ternary and quaternary.Among ternary intermetallic compounds, the principal families consist of full Heusler alloys (X 2 YZ) and half or semi-Heusler alloys (XYZ).Their crystal structures are denoted as L21 and C1b, respectively.Half Heusler alloys are particularly noteworthy for their costeffectiveness, lightweight nature, and eco-friendly attributes.The typical composition of half Heusler alloys is described as XYZ, where X and Y represent metals of the transition group, with Y being less electronegative, possibly belonging to the alkaline earth metal or rare earth metal group, and Z representing an s-p or main group element. Extensive attention has been paid to the study of Heusler alloys from both an applied and basic standpoint.The search for novel magnetic materials with high spin polarization is becoming more and more important in order to improve the functionality of spintronic devices, such as spin lters and spin valves.Half-metallic Heusler compounds, which have 100% spin-polarized charge carriers at the Fermi level, are among the most promising prospects for reaching this high spin polarization.Continuously emerging properties and potential applications contribute to the evolving landscape of research in this eld.A recent noteworthy development is the anticipation of half-metallic ferromagnetism.Remarkably, the features of many Heusler compounds can be easily predicted based on their valence electron count.In today's technological landscape, magnetic materials play a crucial role, nding applications in diverse areas such as data storage, energy conversion, and contactless sensing. 18However, the process of developing new high-performance magnets is both time-consuming and oen unpredictable, with only a limited number of magnets gaining widespread adoption in mainstream applications. Previous research by Karna et al. 
Previous research by Karna et al. [19] showed that structural, magnetic, thermodynamic, and charge-transport measurements reveal anisotropic metallic behavior in non-centrosymmetric hexagonal ScFeGe, which is characterized by a weak itinerant incommensurate helimagnetic state below T_N = 36 K. Their neutron diffraction experiments found a temperature- and field-independent helical wave vector k = (0 0 0.193) with a magnetic moment of 0.53 Bohr magnetons per formula unit, confined primarily to the ab plane. In this study we have examined the structural, electronic, and magnetic characteristics of ScXGe (where X = Mn, Fe). A thorough first-principles investigation has not yet been documented, despite the few experimental studies of the structural, thermodynamic, magnetic, and charge-transport features of ScFeGe. Furthermore, no theoretical or experimental research has examined the structural, electronic, and magnetic properties of ScMnGe using DFT calculations, motivating us to carry out this investigation and contribute to closing the knowledge gap. Our study also elucidates the interesting effects of substituting an Fe atom for the Mn atom on the structural properties of these compounds.

Computational details

The present calculations of the structural, electronic, and magnetic properties of the scandium-based half-Heusler alloys ScXGe (X = Mn, Fe) were performed using the full-potential linearized augmented plane wave plus local orbitals (FP-LAPW + lo) method based on density functional theory (DFT), as implemented in the Wien2k code. The ScXGe (X = Mn, Fe) half-Heusler compounds exist in a hexagonal structure with space group No. 189 (P-62m). As shown in Fig. 1, the Sc atom is located in the unit cell at position (0.25, 0.25, 0.25), while the Mn/Fe and Ge atoms are located at positions (0.5, 0.5, 0.5) and (0, 0, 0), respectively. For the treatment of the exchange-correlation potential of ScXGe (X = Mn, Fe) we employed the LSDA, WC-GGA, and PBE-GGA functionals. Moreover, the GGA + U potential, where U is the Hubbard parameter, was also added for the treatment of the d states of Mn and Fe to better understand the electronic nature and magnetic structure of ScXGe (X = Mn, Fe). The U parameters within the GGA + U framework were fine-tuned following the methodology outlined in our previous research [20] and were adopted in the range from 7 to 7.9 eV by the method introduced in ref. 21. In these calculations we used R_MT × K_max = 7, which specifies the matrix size for convergence; here K_max is the plane-wave cutoff, and the muffin-tin radii (R_MT) were chosen so that there is no charge leakage from the spherical regions while the interstitial space is minimized. The parameters G_max and l_max (the angular momentum cutoff) were taken as 12 and 6, respectively. We used 1000 k-points inside the irreducible first Brillouin zone to ensure excellent convergence of the total energy. For the separation of valence and core states, the cutoff energy was chosen to be −7.0 Ry.
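As a rough illustration of how the basis-set parameter R_MT × K_max quoted above translates into a plane-wave cutoff, the sketch below evaluates K_max from hypothetical muffin-tin radii; the radii are assumptions chosen for illustration only, not the values used in the present calculations.

```python
# Minimal sketch: relate the LAPW basis parameter RMT*Kmax to the plane-wave cutoff.
# The muffin-tin radii below are hypothetical placeholders, not the values used in the paper.

RKMAX = 7.0  # RMT * Kmax quoted in the computational details

# Hypothetical muffin-tin radii in bohr (assumed for illustration)
rmt = {"Sc": 2.3, "Mn": 2.2, "Fe": 2.2, "Ge": 2.1}

r_min = min(rmt.values())
k_max = RKMAX / r_min      # plane-wave cutoff in bohr^-1
e_cut_ry = k_max ** 2      # kinetic-energy cutoff in Ry (E = k^2 in Rydberg atomic units)

print(f"Smallest RMT: {r_min} bohr")
print(f"Kmax = {k_max:.2f} bohr^-1, basis cutoff ~ {e_cut_ry:.1f} Ry")
```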
Stability

The ScXGe (X = Mn, Fe) compounds crystallize as half-Heusler alloys with the chemical formula XYZ [22]. Four interpenetrating FCC sublattices make up the unit cell; they are positioned at (0, 0, 0) and (0.5, 0.5, 0.5) for X, (0.25, 0.25, 0.25) for Y, and (0.75, 0.75, 0.75) for Z. In half-Heusler compounds one atomic position is unoccupied; in the present work the position (0.75, 0.75, 0.75) is taken to be empty. To calculate the ground-state structural parameters of ScXGe (X = Mn, Fe), the total energy versus unit-cell volume was first computed by means of volume optimization with the Birch-Murnaghan equation of state for both the nonmagnetic (NM) and spin-polarized ferromagnetic (FM) cases (a numerical sketch of such a fit is given at the end of this section). Afterward, structural properties such as the lattice parameters a (Å) and c (Å), the bulk modulus B (GPa), the pressure derivative of the bulk modulus (B_p), the ground-state energy (E_o), and the ground-state volume (V_o) of these ternary compounds were extracted from the same Birch-Murnaghan fits. During each cell optimization the energy passes through a minimum as the unit-cell volume is varied; the term 'ground-state energy' refers to this minimum energy, and the volume corresponding to it is referred to as the optimized volume [24-28]. The optimization plots and the structural parameters of the type-I, type-II, and type-III structures are presented in Fig. 2 and 3 and in Table 2, respectively. From the ground-state energies listed in Table 2, one can state that type I is the most stable structure for both compounds. Furthermore, we also optimized the NM and FM configurations of the type-I structure; interestingly, both compounds have lower energies in the FM state than in the NM state, as shown in Fig. 3 [24,29-31]. Moreover, it was observed that replacing the Mn atom with Fe in the ScXGe (X = Mn, Fe) alloys decreases the lattice constants to some extent, which might be attributed to the atomic number of Mn (25) being lower than that of Fe (26). The value of the bulk modulus of the ScFeGe alloy is 32.681 GPa, which is lower than that of ScMnGe (714.048 GPa). The smaller bulk modulus of ScFeGe indicates a relatively lower resistance to external forces, suggesting that the rigidity of ScFeGe is less than that of ScMnGe. Our results indicate that the minimum energy is achieved solely at the ground-state volume, and that all these compounds exhibit their lowest ground-state energy in the FM optimization. This suggests that the stability of these compounds is enhanced in the type-I FM state. It is worth noting that no prior experimental or theoretical work on the structural properties of these ScXGe (X = Mn, Fe) Heusler compounds exists, making it impossible to directly compare our findings with other existing data.
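A minimal sketch of the Birch-Murnaghan fit underlying the volume optimization described above is given below; the E(V) data points and starting guesses are synthetic placeholders, not the computed values for ScXGe.

```python
# Minimal sketch of a third-order Birch-Murnaghan equation-of-state fit, as used for the
# volume optimization described above. The E(V) points are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(V, E0, V0, B0, Bp):
    """Third-order Birch-Murnaghan E(V); B0 comes out in the energy/volume units of the inputs."""
    eta = (V0 / V) ** (2.0 / 3.0)
    return E0 + 9.0 * V0 * B0 / 16.0 * (
        (eta - 1.0) ** 3 * Bp + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta)
    )

# Synthetic E(V) data (Ry, bohr^3) standing in for the spin-polarized SCF total energies
np.random.seed(0)
V = np.linspace(300.0, 380.0, 9)
E = birch_murnaghan(V, -16082.0, 340.0, 0.005, 4.5) + np.random.normal(0, 1e-5, V.size)

popt, _ = curve_fit(birch_murnaghan, V, E, p0=[E.min(), V.mean(), 0.01, 4.0])
E0, V0, B0, Bp = popt
print(f"E0 = {E0:.4f} Ry, V0 = {V0:.2f} bohr^3, "
      f"B = {B0 * 14710.5:.1f} GPa, B' = {Bp:.2f}")  # 1 Ry/bohr^3 ~ 14710.5 GPa
```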
Electronic properties

When considering the electronic properties, the computation of band structures plays a pivotal role in gaining insight into the nature of the ScXGe (X = Mn, Fe) Heusler compounds. In this context, the evaluation of the electronic characteristics using methods such as LSDA, WC-GGA, PBE-GGA, and GGA + U is of paramount importance.

Table 1: Atomic occupancies (Wyckoff positions X, Y, Z) for the ScXGe (X = Mn, Fe) half-Heusler alloys.

In our analysis, the type-I spin-polarized ferromagnetic band structures are particularly significant in distinguishing whether a material behaves as an insulator, a semiconductor, or a conductor. Specifically, a substantial energy gap between the conduction and valence bands (CB and VB) signifies that electrons cannot reach the Fermi level. When this energy gap is on the order of 1 electron volt (eV), a few electrons can cross the Fermi level and move into the CB, resulting in a limited ability to conduct current. In contrast, conductors feature valence electrons that easily cross the Fermi level to reach the CB, essentially leading to an overlap between the valence and conduction bands. The unique characteristics of materials stem from their distinctive band structures, which consequently give rise to unconventional electrical properties when these materials are combined. Against this background, we investigated the electronic properties of the ScXGe (X = Mn, Fe) Heusler compounds using self-consistent field (SCF) calculations. Fig. 4-11 present the band structures of the majority- and minority-spin states of the ScXGe (X = Mn, Fe) Heusler alloys; specifically, Fig. 4, 6, 8, and 10 correspond to ScMnGe, while Fig. 5, 7, 9, and 11 correspond to ScFeGe. Our type-I spin-polarized ferromagnetic calculations reveal that both the majority spin-up and the minority spin-down states exhibit metallic behavior in the ScXGe (X = Mn, Fe) alloys. As noted above, the U parameters in the GGA + U were adopted in the range from 7 to 7.9 eV following ref. 21; for the band structures and densities of states we adopted U = 7 eV. In these materials, valence electrons within the bands traverse the Fermi level, indicating the absence of a band gap between the valence and conduction bands and thus classifying both compounds as metals.

To gain a deeper understanding of the electronic properties, we analyzed the total density of states (TDOS) and partial density of states (PDOS) of the ScXGe (X = Mn, Fe) Heusler alloys, as presented in Fig. 12 and 13. In these figures the central line marks the Fermi level, with the valence band to its left and the conduction band to its right, and finite states cross this line in both spin channels. Consequently, both half-Heusler alloys, ScXGe (X = Mn, Fe), exhibit metallic characteristics.
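The degree of spin polarization of the carriers at the Fermi level, the figure of merit mentioned in the introduction for spintronic applications, can be quantified from the spin-resolved DOS. The sketch below evaluates the usual measure P = (N_up - N_dn)/(N_up + N_dn) at E_F for hypothetical DOS values; since both spin channels of ScXGe are metallic, |P| < 1, in contrast to a half-metal where P = 1. The numbers used are placeholders, not computed values.

```python
# Minimal sketch: spin polarization at the Fermi level from spin-resolved DOS values.
# The DOS numbers are hypothetical placeholders; a half-metal would give P = 1.

def spin_polarization(dos_up: float, dos_down: float) -> float:
    """P = (N_up - N_dn) / (N_up + N_dn) evaluated at the Fermi level."""
    return (dos_up - dos_down) / (dos_up + dos_down)

# Both spin channels metallic (finite DOS at E_F), as found for ScXGe -> |P| < 1
print(spin_polarization(dos_up=2.4, dos_down=1.1))   # partial polarization
# A hypothetical half-metal: one channel gapped at E_F -> P = 1
print(spin_polarization(dos_up=2.4, dos_down=0.0))
```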
Magnetic properties

The magnetic properties of ScXGe (where X = Mn, Fe) were investigated through spin-polarized type-I ferromagnetic calculations. It is important to emphasize that these investigations were conducted at absolute zero, and they reveal a pronounced impact on the total magnetic moment of both ScMnGe and ScFeGe. To explain the origin of the magnetic properties of these compounds, refer to the PDOS plots in Fig. 12 and 13. The Mn/Fe-d, Sc-d, and Ge-p states shown in Fig. 12 and 13 for the two compounds are responsible for the shifting of the density of states from the valence band to the conduction band, which makes the materials metallic. The PDOS of ScMnGe and ScFeGe differ considerably between the spin-up and spin-down channels. This dissimilarity creates an energy shift which, in turn, is responsible for the net magnetic moment; the variation in the positions and amplitudes of the peaks at and around the Fermi level produces this energy shift, giving rise to the ferromagnetic behavior and the net magnetic moment. In addition, the strong Coulomb repulsion between the Mn/Fe-d and Ge-p electrons involved in the p-d hybridization generates crystal fields [32] in these compounds, which split the degenerate Mn/Fe-d states into two non-degenerate sets, as shown in Fig. 14. The non-symmetric nature of the DOS in the two spin channels indicates that ScMnGe and ScFeGe are ferromagnetic in nature. The total magnetic moment (M_T) per unit cell, along with the interstitial magnetic moment and the atomic magnetic moments of the individual atoms, was calculated using the LSDA, WC-GGA, PBE-GGA, and GGA + U approximations, and the results are summarized in Table 3. Negative/positive values of the individual and interstitial moments indicate that they are antiparallel/parallel to the magnetic moments of the X atoms and therefore reduce/enhance the net magnetic moment. A notable aspect is the inclusion of the U parameters, which describe the Coulomb interactions within the d states and are applied to all three elements. Interestingly, when GGA + U is employed with U values ranging from 0.52 to 0.59 Ry (corresponding approximately to the 7-7.9 eV range quoted above) in lieu of LSDA, WC-GGA, and PBE-GGA, there is a notable increase in the magnetic moments of the ScXGe (X = Mn, Fe) compounds. Specifically, the data indicate that the Mn atoms contribute more significantly to the overall magnetic moment of ScXGe (X = Mn, Fe) than the Fe atoms. The larger magnetic moments observed in these compounds strongly suggest robust ferromagnetic behavior, signifying their potential application in spintronic devices.

Formation energy. Having established the magnetically stable state, we assessed the thermodynamic stability through the formation energy E_f [33-36], defined as

E_f = E_tot − Σ_i x_i μ_i,

where E_tot is the calculated total energy per unit cell, μ_i is the atomic chemical potential of element i, and x_i is the number of atoms of element i in the unit cell. Table 4 summarizes the computed energies of the constituent atoms as well as the formation energies of both compounds. As can be seen from Table 4, both alloys have a negative E_f, indicating that they should be readily prepared in experiment.
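A minimal numerical sketch of the formation-energy bookkeeping defined above is given below; the total energy and chemical potentials are placeholder values, not the computed ones listed in Table 4, and the per-formula-unit composition is assumed for illustration.

```python
# Minimal sketch of the formation-energy definition E_f = E_tot - sum_i x_i * mu_i.
# All numbers below are hypothetical placeholders, not the values in Table 4.

def formation_energy(e_total: float, composition: dict, mu: dict) -> float:
    """E_f from the total energy and the constituent chemical potentials."""
    return e_total - sum(n_atoms * mu[element] for element, n_atoms in composition.items())

# Hypothetical example for a ScMnGe-like formula unit (energies in Ry)
e_tot = -5360.70
composition = {"Sc": 1, "Mn": 1, "Ge": 1}            # atoms per formula unit (assumed)
mu = {"Sc": -1530.5, "Mn": -2317.1, "Ge": -1512.4}    # placeholder bulk reference energies

print(f"E_f = {formation_energy(e_tot, composition, mu):.2f} Ry per formula unit")
```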
Cohesive energy. The cohesive energy (E_coh), pivotal in forecasting structural stability at the ground state, quantifies the magnitude of the bonding force among the atoms in a material. For these compounds it is obtained from [24-28,37]

E_coh = (Σ_N E_iso − E_tot) / N,

where E_tot is the equilibrium total energy per formula unit of ScXGe (X = Mn and Fe), Σ_N E_iso is the sum of the energies of the isolated Sc, Mn/Fe, and Ge atoms, and N is the number of atoms in the unit cell. The cohesive energy is the amount of energy required to break a crystal down into isolated atoms, and it is a measure of both the bond strengths and the mobility of the atoms within the crystal. As indicated in Table 4, when Mn is replaced by Fe the cohesive energy increases, which means that the chemical bonding in the Mn-based compound is weaker than in the Fe-based compound.

Curie temperature. The Curie temperature reflects the strength of the interaction among the magnetic atoms: a higher Curie temperature signifies stronger coupling, whereas a lower one indicates weaker interaction. The Curie temperatures (T_c) of the ScXGe (X = Mn and Fe) compounds were computed using the methods reported in ref. 38. In the first method, T_c is evaluated from the mean-field expression

k_B T_c = (2/3) ΔE,   (3)

with the energy difference, related to the exchange interactions J_ij [28], defined as

ΔE = (E_AFM − E_FM) / 2,   (4)

where E_FM is the total energy of the ferromagnetic state (parallel spins), E_AFM is that of the antiferromagnetic state (antiparallel adjacent spins), and k_B is the Boltzmann constant. The denominator 2 results from taking the overall energy difference between the FM and AFM configurations. Eqn (3) is also called the Heisenberg method [41]. The second method uses the empirical linear relation

T_c = 23 + 181 M_tot.   (5)

The computed Curie temperatures of both compounds are listed in Table 3. The T_c values of both compounds lie well above room temperature, suggesting the possibility of their application in spintronic and magneto-electronic devices. The T_c data obtained for the two materials show a linear correlation between T_c and the magnetic moment: when the Mn atom is replaced with Fe, the total magnetic moment and hence the Curie temperature decrease (see Table 3 for details). The larger Curie temperature of the Mn-based compound indicates stronger ferromagnetism in the Mn-based compound than in the Fe-based one.
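To make the two Curie-temperature estimates quoted above concrete, the sketch below evaluates the mean-field form of eqn (3)-(4) from an FM/AFM energy difference and the empirical relation of eqn (5); the energy difference and the magnetic moment used are illustrative placeholders, not the computed values for ScXGe.

```python
# Minimal sketch of the two Curie-temperature estimates described above.
# The FM/AFM energy difference and the magnetic moment are illustrative placeholders.

K_B_RY = 6.3336e-6  # Boltzmann constant in Ry/K

def tc_mean_field(e_afm: float, e_fm: float) -> float:
    """Mean-field Heisenberg estimate: k_B*T_c = (2/3)*dE with dE = (E_AFM - E_FM)/2."""
    d_e = (e_afm - e_fm) / 2.0
    return 2.0 * d_e / (3.0 * K_B_RY)

def tc_empirical(m_tot: float) -> float:
    """Empirical linear relation for Heusler alloys: T_c = 23 + 181*M_tot (M_tot in mu_B)."""
    return 23.0 + 181.0 * m_tot

# Placeholder energies (Ry) and moment (mu_B), for illustration only
print(f"Mean-field T_c ~ {tc_mean_field(e_afm=-16082.069, e_fm=-16082.110):.0f} K")
print(f"Empirical  T_c ~ {tc_empirical(m_tot=3.0):.0f} K")
```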
It is important to note that the structural, electronic, and magnetic properties of the scandium-based half-Heusler alloys ScXGe (X = Fe) were analyzed here using the FP-LAPW + lo method within DFT at T = 0 K; owing to limited computational resources, temperature-dependent studies could not be conducted. The calculated properties align generally with previous DFT results but slightly underestimate the experimental data, likely because lattice dynamics are absent at T = 0 K, whereas the experiments were conducted at room temperature. Karna et al. [19] thoroughly examined the temperature-dependent properties of the non-centrosymmetric hexagonal ScFeGe system, including its structure, magnetism, thermodynamics, and charge transport. They prepared polycrystalline samples by arc melting and analyzed them using techniques such as PXRD, EPMA, an MPMS SQUID magnetometer, NPD, and XANES spectroscopy; theoretical data from DFT calculations were compared with experimental results obtained at 2 K. Magnetic ordering was observed at T_N = 36 K (the Néel temperature), along with a metamagnetic transition at 6.7 T for H within the hexagonal ab plane but not along the c axis. Neutron diffraction revealed an incommensurate helimagnetically ordered state below T_N with a wave vector k = (0 0 0.193) and a magnetic moment of m_S = 0.53 μ_B/Fe aligned within the ab plane. Unusual magnetoresistance (MR) behavior was observed below T_N, including positive MR below T_N at fields H < H_MM for H perpendicular to the c axis.

Conclusion

In summary, we have explored the structural, electronic, and magnetic properties of the ScXGe (X = Mn, Fe) compounds using first-principles calculations based on density functional theory (DFT). Structural parameter optimizations carried out with both spin-polarized and non-spin-polarized methods show that all ScXGe (X = Mn, Fe) Heusler alloys have their lowest ground-state energy in the spin-polarized optimization; for both of the studied alloys, the ferromagnetic state is the more energetically stable. The compounds are found to be metallic in nature, and the TDOS and PDOS outcomes are consistent with the band-structure results. Further, the p-d hybridization in each compound is also confirmed by the PDOS. The magnetic properties indicate a significant enhancement of the magnetic moments of the ScXGe (X = Mn, Fe) alloys when GGA + U is utilized in place of LSDA, WC-GGA, and PBE-GGA. In particular, the Mn atoms play a more substantial role in contributing to the overall magnetic moment than the Fe atoms. The negative formation energies (−16082.11 for ScMnGe and −16541.29 for ScFeGe) underscore the thermodynamic stability of these compounds and the presence of strong atom-to-atom bonds. Moreover, substituting Mn with Fe in the ScXGe (X = Mn and Fe) alloys increases the cohesive energy, indicating that the chemical bonding in the Mn-based compound is weaker than in the Fe-based compound. The Curie temperatures of ScXGe (X = Mn, Fe) are notably different, with values of 2177.02 K and 1656.09 K, respectively; the higher Curie temperature suggests that the Mn-based compound exhibits stronger ferromagnetism than its Fe-based counterpart. The intriguing structural, electronic, and magnetic characteristics of the ScXGe (X = Mn, Fe) half-Heusler alloys explored in this work highlight their potential importance in the field of spintronic devices. As a result, our research provides a strong basis for future experimental verification.

Fig. 1: Type I (a and d), type II (b and e), and type III (c and f) crystal structures of the ScXGe (X = Mn, Fe) alloys, respectively.
Fig. 4: Electronic band structure of the ScMnGe alloy using the LSDA approximation for the spin-up and spin-down configurations.
Fig. 5: Electronic band structure of the ScFeGe alloy using the LSDA approximation for the spin-up and spin-down configurations.
Fig. 6: Electronic band structure of the ScMnGe alloy using the WC-GGA approximation for the spin-up and spin-down configurations.
Fig. 7: Electronic band structure of the ScFeGe alloy using the WC-GGA approximation for the spin-up and spin-down configurations.
Fig. 8: Electronic band structure of the ScMnGe alloy calculated using the PBE-GGA approximation.
Fig. 10: Electronic band structure of the ScMnGe alloy obtained with the GGA + U approximation.
Fig. 11: Electronic band structure of the ScFeGe alloy obtained with the GGA + U approximation.
Fig. 14: Total and partial DOS of the non-degenerate d-e_g and d-t_2g states of (a) ScMnGe and (b) ScFeGe Heusler alloys.
Table 2: Calculated structural and total-energy parameters of the ScXGe (X = Mn, Fe) half-Heusler alloys, namely the lattice constants a (Å) and c (Å), the c/a ratio, the bulk modulus B (GPa), the pressure derivative of the bulk modulus (B_p), the ground-state energy (E_o), and the ground-state volume (V_o), along with available experimental results for comparison.
Table 4: Overview of the formation energy E_f, the total energy E_tot, and the individual atom energies of the ScXGe (X = Mn, Fe) alloys, expressed in Ry.
Findings of a Cross-Sectional Survey on Knowledge, Attitudes, and Practices about COVID-19 in Uganda: Implications for Public Health Prevention and Control Measures

Background. The coronavirus disease (COVID-19) morbidity is rising in Uganda. However, data are limited about people's knowledge, attitudes, and practices.

Objective. To determine knowledge about COVID-19, attitudes towards presidential directives and Ministry of Health (MoH) guidelines, and adherence to practicing public health preventive measures (KAP) in Uganda.

Methods. This cross-sectional survey was conducted between April 28 and May 19, 2020. Data were collected using online social media platforms, websites, and popular media outlets. We descriptively summarized data and categorized KAP scores as knowledgeable about COVID-19, positive attitude towards presidential directives and MoH guidelines, and adherent to public health preventive measures, respectively. We tested sex differences in KAP using tests of significance and established independently associated factors using modified Poisson regression analysis, reported as adjusted prevalence risk ratios (aPR) with 95% confidence intervals (CI).

Results. We studied 362 participants with the following sociodemographic characteristics: 86 (23.8%) aged 25-29 years, 212 (58.6%) males, 270 (74.6%) with tertiary or university levels of education, and 268 (74.0%) urban residents. Of the 362 participants, 264 (93.9%) were knowledgeable about COVID-19 (94.1% of males and 93.8% of females), 51.3% had positive attitudes towards presidential directives and MoH guidelines (51.0% of males and 51.8% of females), and 175 (48.3%) were adherent to practicing public health preventive measures (42.9% of males and 56.0% of females). Compared to males, our data show that females were more adherent to practicing public health preventive measures (aPR, 1.23; 95% CI, 1.01-1.53), more knowledgeable about COVID-19 (aPR, 1.01; 95% CI, 0.95-1.07), and had more positive attitudes towards directives and guidelines (aPR, 1.01; 95% CI, 0.82-1.25).

Conclusions. This study shows that public health prevention efforts should be directed to closing the identified gaps in KAP among Ugandans in order to halt the spread of COVID-19 in Uganda as well as the East African region.

Introduction

Currently, the world is experiencing the novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic, commonly known as COVID-19, which was first reported by the World Health Organization (WHO) on December 31, 2019, as a viral pneumonia outbreak of unknown aetiology in the Hubei Province of China [1]. To date, at least eight million people have been infected by COVID-19 and over 400,000 have died, with most countries in Europe worst hit by the pandemic [2]. COVID-19 is rapidly spreading across Africa, and current data indicate that 54 countries are affected [3], with close to 200,000 people infected and deaths exceeding 4,000 [2]. Uganda, a country in East Africa, reported its first case of COVID-19 on March 21, 2020, from international travel, and since then the number of new infections has risen to over 700 as of June 18, 2020, largely driven by truck drivers from its neighboring countries [4]. Coronavirus is transmitted from person to person through droplets of saliva or discharge from the nose when an infected person coughs or sneezes [5-7]. Infected persons present with mild to moderate symptoms but are able to recover even without treatment [7].
The common symptoms include fever, tiredness, dry cough, shortness of breath, body aches and pains, and sore throat, and very few people present with diarrhoea, nausea, and a running nose [7,8]. Factors like old age and comorbidities, namely cardiovascular diseases, diabetes mellitus, chronic respiratory diseases, and cancer, are associated with poor prognosis [7]. Without effective treatment or a vaccine [9], the world is left with a single option: strict adherence to public health preventive measures, namely regular handwashing using soap and water or alcohol-based hand rub, social distancing (maintaining a distance of at least two meters), not touching the face, covering the nose and mouth with a tissue when coughing or sneezing, staying at home if feeling unwell, wearing face masks, and promptly seeking medical care when one has suggestive symptoms [7,9,10]. These measures have been popularized and supported by the WHO, governments, and Ministries of Health globally. To that effect, guidance, policies, and presidential directives have been issued. In Uganda, several communication channels are used to reach the population with preventive messages about COVID-19, including presidential directives. This is aimed at improving people's knowledge about COVID-19, changing their attitudes towards adopting public health preventive measures, and improving their adherence to practicing public health preventive measures. However, data are limited regarding people's knowledge about COVID-19, attitudes towards presidential directives and Ministry of Health (MoH) guidelines, and practices of public health preventive measures (KAP). Second, anecdotal observations indicate that males are less adherent to practicing public health preventive measures about COVID-19 compared to females, suggesting a possible deficiency in knowledge about COVID-19 and perhaps negative attitudes towards presidential directives and MoH guidelines. However, evidence to support this observation is nonexistent. We therefore conducted a national study to primarily assess knowledge about COVID-19, attitudes towards presidential directives and MoH guidelines, and adherence to practicing public health preventive measures (KAP) among Ugandans aged ≥ 18 years. Our secondary objective was to determine whether there are differences in KAP between males and females, and we hypothesized that there is a difference in KAP according to sex. Our findings will inform the design of effective public health preventive measures so as to halt the spread of COVID-19.

Methods and Materials

Study Setting. Uganda is made up of 134 districts and 6,937 health facilities, of which 3,133 are government owned [11]. The distribution of health facilities is as follows: five national referral hospitals, 14 regional referral hospitals, 169 general hospitals, 194 health center (HC) IVs (county-level health facilities), and the rest are HC IIIs (subcounty-level health facilities) and HC IIs (parish-level health facilities). There are also five super-specialized hospitals and two specialized institutes, the Uganda Heart Institute and the Uganda Cancer Institute [12].

Study Design and Population. We conducted a cross-sectional study, and the findings are reported in accordance with the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines [13,14]. The study population consisted of Ugandans aged ≥ 18 years with access to online platforms such as WhatsApp, Facebook, Twitter, and Instagram, among others. Using Yamane's formula with a 2.5% sampling error and the number of people aged ≥ 18 years with access to internet services, we determined that 1,598 participants would be needed (a worked example of this calculation is given below). We approached 419 potential participants, of whom 86.4% (n = 362) accepted to participate in the survey. Although online surveys are generally associated with low response rates, several factors contributed to the small sample size: the daily payment of the over-the-top tax (OTT, a social media tax on online services) required to access social media platforms, restricted access to mobile data during the lockdown in the survey period, the closure of internet shops, and difficulties in accessing internet services in remote settings.
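The sample-size target quoted above follows Yamane's formula n = N / (1 + N·e²). The sketch below reproduces that arithmetic; the population figure used is a placeholder, since the exact number of internet-connected adults used by the authors is not stated.

```python
# Minimal sketch of the Yamane sample-size calculation described above.
# The population size N is a placeholder; the paper does not state the exact figure used.
import math

def yamane_sample_size(population: int, margin_of_error: float) -> int:
    """Yamane (1967): n = N / (1 + N * e^2), rounded up."""
    n = population / (1.0 + population * margin_of_error ** 2)
    return math.ceil(n)

# With a 2.5% sampling error and a large connected adult population,
# the formula approaches 1/e^2 = 1,600; e.g. N = 1,000,000 gives ~1,598.
print(yamane_sample_size(population=1_000_000, margin_of_error=0.025))
```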
Data Collection and Measurements. Because of approximately three months of lockdown and restricted movements to minimize the spread of COVID-19, a community-based national survey was not logistically feasible. Data were therefore collected between April 28 and May 19, 2020, using KoboToolbox, an online tool through which the questionnaire was administered. We utilized WhatsApp groups, Facebook, Twitter, Instagram, websites, and the official accounts (Facebook, WhatsApp, and Twitter) of popular local media outlets, which are the online platforms available in the country, for data collection and to achieve maximum coverage. We sent repeated reminders every 2 days to improve the participant response rate. The questionnaire contained questions about KAP and was developed using the Uganda MoH and World Health Organization (WHO) guidelines on prevention of COVID-19. To arouse participants' interest in taking the survey, we developed a one-page recruitment poster with a link to the online questionnaire that contained brief information about the study objectives, importance, and ethical considerations. Once the poster form was filled in and the individual was eligible, access to the questionnaire was automated.

We collected data on participants' age, measured in years and later categorized into age groups (18-24, 25-29, 30-34, 35-39, 40-44, and ≥45 years); sex, measured as male or female; marital status, measured as never married/single or married/staying with partner; level of education, measured as none/never received formal education, primary, secondary, or tertiary/university education; current occupation, measured as unemployed, self-employed, or formal employment; religion, measured as Anglican, Catholic, Muslim, Pentecostal, or other (Seventh-day Adventist (SDA), Jehovah's Witness, Orthodox, Hindu, Humanist, and Bahai); district of residence, covering the 134 districts of Uganda; and participant residence, measured as urban/peri-urban or rural. We categorized the districts of residence into five regions: central, eastern, Kampala, northern, and western. We assessed knowledge using 16 Likert-scale items coded 1-3 to denote true, false, and do not know; the 16 questions covered the signs and symptoms of COVID-19, transmission, mode of spread, prevention, and treatment. For attitude scores, we used five questions, each scored on a scale of 1-5, with the lowest score being strongly disagree and the highest strongly agree. The five questions focused on the WHO and Uganda MoH guidelines for the prevention of COVID-19, the presidential directives, the effectiveness of the preventive measures in use, an assessment of the success of the preventive measures, and the adequacy of different communication channels.
We used 10 questions measured on a binary scale (yes or no) to assess practices of public health preventive measures. The questions focused on the frequency of handwashing, staying at home for at least five hours, wearing of gloves and masks whenever leaving home, avoidance of handshaking and crowded places, promptness in seeking treatment in the event of signs and symptoms suggestive of COVID-19, and notification of relevant authorities such as the local council system, police, and health authorities.

Quality Control Measures. To determine the appropriateness, logical flow, and consistency of the questions in the questionnaire, we conducted an online pretest in the neighboring country, Kenya. The respondents provided comments on the logical flow, understandability, and relevance of the questions, which we used to revise and develop the final questionnaire. During data collection, we integrated quality control measures to ensure participation of eligible individuals only. We also used a unique password to protect filled questionnaires and restrict data access to the data analyst. We used VeraCrypt, an open-source encryption software, to share data among the research team for the purposes of validation. At the data analysis stage, we checked the data for consistency, cleaned them, and transformed variables.

Statistical Analysis. We descriptively summarized categorical data using frequencies and percentages and numerical data using means with standard deviations or medians with interquartile ranges (IQR). For KAP studies, Bloom recommends the following cutoff points: (1) 80-100% for high knowledge, positive attitude, and good practice; (2) 60-79% for moderate knowledge, neutral attitude, and fair practice; and (3) 59% or less for low knowledge, negative attitude, and poor practice [15,16]. In this KAP study, we used a cutoff of 75%, a modification of Bloom's cutoff point. Accordingly, we considered participants with scores ≥ 75% as knowledgeable about COVID-19, having a positive attitude towards presidential directives and MoH guidelines, and adherent to public health preventive measures. Conversely, participants with scores < 75% were considered non-knowledgeable about COVID-19, having negative attitudes towards presidential directives and MoH guidelines, and non-adherent to public health preventive measures. We tested median differences in KAP scores with respect to sex using the two-sample Wilcoxon test at bivariate analysis. Furthermore, we assessed differences in proportions of KAP using the chi-square test for large cell counts (typically ≥5) and Fisher's exact test for smaller cell counts (typically <5). Variables with two-sided probability values of less than 5% (p < 0.05) at bivariate analysis, together with those deemed biologically plausible for differences in KAP between males and females, namely level of education, residence, and employment status, were considered for multivariable analysis. We did not use binary logistic regression analysis because the outcomes were common, and the use of odds ratios (OR) would overestimate the degree of association. Accordingly, prevalence risk ratios (PRs) were computed using a modified Poisson regression with robust standard errors to control for mild violations of the model assumptions [17-19]. We reported each PR with the corresponding 95% confidence interval (CI). This analysis was performed in Stata version 15 [20] (an illustrative sketch of this modelling approach is given below).
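A modified Poisson regression with robust (sandwich) standard errors, as described above, can also be sketched outside Stata. The snippet below shows the general pattern using Python's statsmodels on a toy data frame; the variable names (adherent, female, tertiary, urban) and the simulated data are hypothetical placeholders standing in for the study variables.

```python
# Minimal sketch of a modified Poisson regression with robust standard errors,
# the approach described above for estimating prevalence ratios (PRs).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 362
df = pd.DataFrame({
    "adherent": rng.integers(0, 2, n),   # binary outcome (1 = adherent), placeholder data
    "female":   rng.integers(0, 2, n),   # 1 = female, 0 = male
    "tertiary": rng.integers(0, 2, n),   # education indicator
    "urban":    rng.integers(0, 2, n),   # residence indicator
})

model = smf.glm("adherent ~ female + tertiary + urban",
                data=df, family=sm.families.Poisson())
res = model.fit(cov_type="HC0")          # robust (sandwich) standard errors

prs = np.exp(res.params)                 # prevalence ratios
ci = np.exp(res.conf_int())              # 95% CIs on the PR scale
print(pd.concat([prs.rename("PR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```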
Ethical Considerations. We obtained informed consent using an informed consent form (ICF) attached to the online questionnaire. We ensured that the questionnaire was inaccessible without filling in the ICF, which described the potential benefits, risks, and rationale for the survey. The ICF stated that participation was voluntary and that withdrawal was allowed at any stage. Ethical review and approval were obtained from The AIDS Support Organization Research Ethics Committee (TASO-REC), reference number TASOREC/032/2020-UG-REC-009.

Results

Sociodemographic Characteristics of Respondents. The mean age of the 362 participants was 33.5 ± 10.4 years, with a median age of 31 years (range: 18-75). 212 (58.6%) participants were male, another 212 (58.6%) were married or staying with a partner at the time of the survey, 270 (74.6%) had attained tertiary or university levels of education, 268 (74.0%) were urban or peri-urban residents, and 102 (28.2%) were residents of Kampala district. Furthermore, almost half of the participants (48.9%) had formal employment, and 43.9% were of the Catholic religion (Table 1).

Distribution of KAP Levels by Sociodemographic Characteristics. We found no differences in the proportion knowledgeable about COVID-19 with respect to sociodemographic characteristics. Overall, only half of the respondents (51.3%) had positive attitudes towards the presidential directives and MoH guidelines. Most participants with positive attitudes towards the presidential directives and MoH guidelines were aged ≥ 45 years (62.5%), female (51.8%), married or staying with a partner (52.3%), had reached at least primary level of education (73.3%), were rural (58.4%) and Eastern (66.1%) residents, respectively, self-employed (57.4%), and Catholic (56.9%), as shown in Table 2. We observed statistically significant differences in the proportion with positive attitudes towards presidential directives and MoH guidelines with respect to level of education (p = 0.019).

Analysis of Sex Differences in KAP between Males and Females. In the unadjusted analysis (Table 4), compared to males, females were more likely to have positive attitudes towards presidential directives and MoH guidelines (PR, 1.01; 95% CI, 0.82-1.25) and were more adherent to practicing the recommended public health preventive measures (PR, 1.30; 95% CI, 1.06-1.61). In the adjusted analysis, females remained more adherent to practicing public health preventive measures than males (aPR, 1.23; 95% CI, 1.01-1.53), while the sex differences in knowledge about COVID-19 (aPR, 1.01; 95% CI, 0.95-1.07) and in attitudes towards presidential directives and MoH guidelines (aPR, 1.01; 95% CI, 0.82-1.25) were not significant.

Discussion

This is the first national cross-sectional survey to assess the knowledge of Ugandans about COVID-19, their attitudes towards presidential directives and MoH guidelines, and their adherence to practicing public health preventive measures. Our data show that approximately 94% of Ugandans are knowledgeable about COVID-19, almost 50% had positive attitudes towards presidential directives and MoH guidelines, and fewer than one in every two are adherent to practicing the recommended public health preventive measures. Most participants knew the main symptoms of COVID-19, the mode of transmission, the high-risk groups, that there is no effective treatment or vaccine against COVID-19, the importance of supportive treatment, and the public health preventive measures, namely handwashing, personal respiratory hygiene, wearing of masks in public, and isolation. Our findings are in agreement with studies conducted in Iran, Tanzania, Paraguay, Malaysia, and China that all show high knowledge scores regarding COVID-19 [21-23]. Nonetheless, our findings differ from studies conducted in Bangladesh and Malaysia that show an overall low knowledge score [24,25].
Although we are not certain about the implementation of public health preventive approaches in those countries with low knowledge scores, in Uganda the dissemination of public health messages trickles down to the lowest administrative level. This might explain the high knowledge scores observed in the study. We found no difference in knowledge scores by sociodemographic characteristics, namely sex, age, education, and residence, among others, contrary to findings in Bangladesh [25], Tanzania [21], and Iran [22], possibly due to the constant presidential directives and the widespread dissemination of MoH guidelines. The main sources of information about COVID-19 included television, social media, radio, and short message service (SMS) texts, perhaps because most of the participants were urban residents, literate, and formally employed. COVID-19 information could have easily reached the participants via online media, television, radio, and MoH-directed text messages.

Our study shows that participants have trust in the presidential directives and MoH guidelines, with most reporting that the directives and guidelines are adequate and necessary to halt the spread of COVID-19. Besides, participants are confident that the country is on course to win the battle against COVID-19, which is similar to earlier findings in China [26]. This trust and confidence might have resulted from the numerous immediate steps, such as the ban on international and regional travel, curfews, and lockdown, among others, that were implemented in the country to combat the spread of COVID-19 following the confirmation of the first case on March 21, 2020. However, the overall attitude towards the presidential directives and MoH guidelines is low, and this might translate into compromised adherence to practicing public health preventive measures, which might pose a public health threat of community transmission of COVID-19. Our findings differ from studies conducted in China, Tanzania, Paraguay, and Iran that found high attitude scores [21,23]. In our study, the attitude scores differ with respect to level of education, which is similar to findings in Iran [22].

Our data show low adherence to public health preventive measures, which is not surprising because this study found low scores on attitude towards presidential directives and MoH guidelines. Although our overall finding is contrary to earlier results in Iran [22], the findings on specific practices were in agreement with several studies elsewhere [21,23,24]. Furthermore, our data show that adherence to the recommended public health preventive measures varies significantly with respect to sex, level of education, age, region, and employment status, which is consistent with a study in Iran [22]. In particular, our study shows that females are more adherent to public health preventive measures than males, suggesting that sex-specific measures might be useful in promoting adherence to public health preventive measures and, consequently, in combatting the spread of COVID-19 in the country.

Study Strengths and Limitations.
This is the first study in Uganda to determine knowledge about COVID-19, attitudes towards presidential directives and MoH guidelines, and adherence to practicing public health preventive measures.

Despite these strengths, there are numerous limitations that should be considered in the interpretation of the results. Our sample size was relatively small compared with what we had desired, despite repeated posting of the online survey questionnaire. This was not surprising because, during the data collection period, the government of Uganda had instituted curfews, restricted movement to critical workers (particularly security personnel and healthcare providers), and closed all shops selling nonfood items where potential participants would have bought airtime to obtain the mobile data needed to fill in the questionnaire. Second, since payment of OTT is compulsory in the country, this might have been a deterrent to accessing the social media platforms that were our main data collection channel; moreover, we did not provide financial support to facilitate access to these platforms. Therefore, our responses are limited to those who were able to pay to access them. However, the data on KAP were uniformly distributed, suggesting that the present sample size is sufficient for statistical inference. Since this was a cross-sectional study, our findings demonstrate association without any temporal relationship. Our study had more residents from urban areas and the Kampala region and more people with relatively high levels of education; the findings might therefore not reflect KAP scores in rural areas and among illiterate subgroups of the population. We attempted to minimize this limitation through adjusted analysis, but even so we acknowledge that residual confounding is possible. Also, since practice of public health preventive measures was a self-reported outcome, social desirability bias remains a possibility. Lastly, we did not study several factors that may contribute to differences in KAP between males and females, such as the sources of, access to, and frequency and intensity of exposure to health information, as well as cultural differences, among others. We recommend that prospective studies consider these factors.

Conclusions and Recommendations. Our data show a high proportion of knowledge about COVID-19 but relatively low positive attitudes towards presidential directives and MoH guidelines as well as low adherence to practicing public health preventive measures. We observed sex differences in practicing the recommended public health preventive measures, with females being more adherent than males, and no sex differences in knowledge about COVID-19 or attitudes towards presidential directives and MoH guidelines. We conclude that the current public health preventive efforts should be directed towards closing the identified gaps in KAP. This will help to halt the spread of COVID-19 in Uganda and the East African region.

Data Availability

The data used/analyzed in this study are available on reasonable request from the corresponding author.

Conflicts of Interest

The authors declare that they do not have any competing interests.

Median (IQR) KAP scores, overall and by sex (p-values from the two-sample Wilcoxon test):
Knowledge about COVID-19: overall 13 (12-14), males 13 (12-14), females 13 (12-14), p = 0.258.
Attitudes towards presidential directives and MoH guidelines: overall 22 (19-24), males 22 (19-24), females 22 (19-24), p = 0.583.
Practices of public health preventive measures: overall 7 (7-8), males 7 (7-8), females 8 (7-9), p = 0.095.