Papers by Edwin K P Chong
arXiv (Cornell University), Oct 6, 2017
Under a Bayesian framework, we formulate the fully sequential sampling and selection decision in statistical ranking and selection as a stochastic control problem and derive the associated Bellman equation. Using value function approximation, we derive an approximately optimal allocation policy. We show that this policy is computationally efficient and possesses both one-step-ahead and asymptotic optimality for independent normal sampling distributions. Moreover, the proposed allocation policy generalizes easily within the approximate dynamic programming paradigm.
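The abstract refers to a Bellman equation for the sequential allocation problem. As a sketch only, under assumed notation not taken from the paper (posterior state S_t, allocation decision a among k alternatives, terminal posterior means), such an equation takes the form:

```latex
V_T(S_T) = \max_{i \in \{1,\dots,k\}} \mu_i^{(T)}, \qquad
V_t(S_t) = \max_{a \in \{1,\dots,k\}} \mathbb{E}\left[ V_{t+1}(S_{t+1}) \mid S_t,\, a \right].
```

Value function approximation then replaces V_{t+1} with a tractable surrogate, yielding the allocation policy.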
arXiv (Cornell University), Jun 9, 2022
Graph signal processing is a framework for handling graph-structured data. Its fundamental concept is the graph shift operator, which gives rise to the graph Fourier transform. While the graph Fourier transform is a centralized procedure, distributed graph signal processing algorithms are needed to address challenges such as scalability and privacy. In this paper, we develop a theory of distributed graph signal processing based on the classical notion of message passing, generalizing the definition of a message to permit more abstract mathematical objects. The framework provides an alternative point of view that avoids the iterative nature of existing approaches to distributed graph signal processing. Moreover, it facilitates investigating theoretical questions such as the solubility of distributed problems.
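As a minimal sketch of the centralized graph Fourier transform that the abstract builds on (the graph, shift operator, and signal below are illustrative, not from the paper):

```python
import numpy as np

# Adjacency matrix of a small undirected 4-node cycle graph (hypothetical).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

# One common choice of graph shift operator: the Laplacian L = D - A.
L = np.diag(A.sum(axis=1)) - A

# Eigendecomposition of the symmetric shift operator gives the GFT basis U.
eigvals, U = np.linalg.eigh(L)

x = np.array([1.0, 2.0, 0.5, -1.0])  # a graph signal: one value per vertex
x_hat = U.T @ x                       # graph Fourier transform
x_rec = U @ x_hat                     # inverse transform recovers the signal
assert np.allclose(x, x_rec)
```

This global eigendecomposition is the centralized step that motivates the paper's distributed message-passing alternative.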
We study the problem of sensor scheduling for multisensor multitarget tracking: determining which sensors to activate over time to trade off tracking error against sensor usage costs. Formulating this problem as a partially observable Markov decision process (POMDP) gives rise to a non-myopic sensor-scheduling scheme. Our method combines sequential multisensor joint probabilistic data association (MS-JPDA) and particle filtering for belief-state estimation, and uses a simulation-based Q-value approximation method for lookahead. The example of focus in this paper involves activating multiple sensors simultaneously to track multiple targets, illustrating the effectiveness of our approach.
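A minimal sketch of simulation-based Q-value lookahead of the kind the abstract describes; `simulate_step` and `value_estimate` below are hypothetical stand-ins for the paper's MS-JPDA/particle-filter machinery, not its actual interface:

```python
def q_value(belief, action, simulate_step, value_estimate, n_rollouts=100):
    """Monte Carlo estimate of Q(belief, action) via one-step lookahead."""
    total = 0.0
    for _ in range(n_rollouts):
        reward, next_belief = simulate_step(belief, action)  # sample a transition
        total += reward + value_estimate(next_belief)        # immediate + future
    return total / n_rollouts

def schedule_sensors(belief, candidate_actions, simulate_step, value_estimate):
    """Activate the sensor subset with the largest estimated Q-value."""
    return max(candidate_actions,
               key=lambda a: q_value(belief, a, simulate_step, value_estimate))
```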

Energies
Operators of mobile platforms that employ hydraulic actuation, such as excavators, seek more efficient power transfer from source to load. Pump-controlled architectures achieve greater efficiency than valve-controlled architectures but exhibit poor tracking performance. We present a system-design optimization technique that ensures compliance with design requirements and minimizes peak input power, which correlates inversely with efficiency. We use the technique to size a valve-controlled, hydraulically actuated stabilized mount on a mobile platform. Our optimization framework accounts for the disturbance spectrum, a stabilization performance measure, the system dynamics, and the control system design. The technique features automated requirement derivation in the form of parameter estimation, which supports design decisions under constraints. Our results show that one of four inequality constraints is active. This constraint represents a common design rule and results...
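As an illustrative sketch of this kind of constrained sizing problem (the objective and constraint below are toy placeholders, not the paper's hydraulic models):

```python
from scipy.optimize import minimize

def peak_input_power(x):
    """Toy surrogate for peak input power as a function of component sizes."""
    pump_size, valve_size = x
    return pump_size**2 + 0.5 * valve_size**2

# One inequality constraint standing in for a derived requirement
# (e.g., a minimum bandwidth or stabilization-performance bound).
constraints = [{"type": "ineq", "fun": lambda x: x[0] * x[1] - 1.0}]

result = minimize(peak_input_power, x0=[1.0, 1.0],
                  constraints=constraints, bounds=[(0.1, 10.0)] * 2)
print(result.x, result.fun)  # sized design and its minimized peak power
```

At the optimum the constraint is active (x[0] * x[1] = 1), mirroring the paper's finding that one inequality constraint binds.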

IEEE Control Systems Letters
Linear minimum mean square error (LMMSE) estimation is often ill-conditioned, suggesting that unconstrained minimization of the mean square error is an inadequate principle for filter design. To address this, we first develop a unifying framework for studying constrained LMMSE estimation problems. Using this framework, we expose an important structural property of constrained LMMSE filters: they generally involve an inherent preconditioning step, which parameterizes all such filters by their preconditioners alone. Moreover, each filter is invariant to invertible linear transformations of its preconditioner. We then clarify that merely constraining the rank of the filter does not suitably address the problem of ill-conditioning. Instead, we adopt a constraint that explicitly requires solutions to be well-conditioned in a certain specific sense. We introduce two well-conditioned filters and show that they converge to the unconstrained LMMSE filter as their truncated-power loss goes to zero, at the same rate as the low-rank Wiener filter. We also show extensions to the case of weighted trace and determinant of the error covariance as objective functions. Finally, we present quantitative results with historical VIX data demonstrating that our two well-conditioned filters perform stably while the standard LMMSE filter deteriorates with increasing condition number.
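A minimal numerical sketch (synthetic data) of the ill-conditioning issue, with a simple eigenvalue-truncation remedy; the paper's specific well-conditioned filters and constraint differ:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
# Build a nearly singular observation covariance R_yy with eigenvalues
# spanning 12 decades, so the unconstrained filter is ill-conditioned.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.logspace(0, -12, n)
R_yy = Q @ np.diag(eigs) @ Q.T
R_xy = rng.standard_normal((1, n))       # cross-covariance of signal and data

W = R_xy @ np.linalg.inv(R_yy)           # unconstrained LMMSE filter
print("cond(R_yy) =", np.linalg.cond(R_yy))

# One crude remedy: drop directions with tiny eigenvalues (a low-rank
# pseudo-inverse); the paper instead constrains conditioning explicitly.
keep = eigs > 1e-6 * eigs[0]
W_trunc = R_xy @ Q[:, keep] @ np.diag(1.0 / eigs[keep]) @ Q[:, keep].T
```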

Information and Inference: A Journal of the IMA, Jun 18, 2020
We study the problem of inferring network topology from information cascades, in which the amount of time taken for information to diffuse across an edge in the network follows an unknown distribution. Unlike previous studies, which assume knowledge of these distributions, we require only that diffusion along different edges in the network be independent, together with limited moment information (e.g., the means). We introduce the concept of a separating vertex set for a graph: a set of vertices such that, for any two given distinct vertices of the graph, the set contains a vertex whose distances to them differ. We show that a necessary condition for reconstructing a tree perfectly using distance information between pairs of vertices is given by the size of an observed separating vertex set. We then propose an algorithm to recover the tree structure using infection times, whose differences have means corresponding to the distances between vertices. To improve the accuracy of our algorithm, we propose the concept of redundant vertices, which allows us to perform averaging to better estimate the distance between two vertices. Though the theory is developed mainly for tree networks, we demonstrate how the algorithm can be extended heuristically to general graphs. Simulations using synthetic and real networks, and experiments using real-world data, suggest that our proposed algorithm performs better than some current state-of-the-art network reconstruction methods.
The purpose of this article is to examine the greedy adaptive measurement policy in the context of a linear Gaussian measurement model with an optimization criterion based on information gain. In the special case of sequential scalar measurements, we provide sufficient conditions under which the greedy policy actually is optimal in the sense of maximizing the net information gain. In the general setting, we also discuss cases where the greedy policy is not optimal. Index Terms: entropy, information gain, compressive sensing, compressed sensing, greedy policy, optimal policy.
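A minimal sketch of the greedy information-gain policy for sequential scalar measurements in a linear Gaussian model; the candidate measurement vectors and noise level below are illustrative, not from the paper:

```python
import numpy as np

def greedy_measurements(Sigma, candidates, noise_var, n_steps):
    """Greedily pick measurement vectors a maximizing the information gain
    0.5 * log(1 + a^T Sigma a / noise_var) of observing y = a^T x + noise."""
    chosen = []
    for _ in range(n_steps):
        gains = [0.5 * np.log(1 + a @ Sigma @ a / noise_var) for a in candidates]
        a = candidates[int(np.argmax(gains))]
        chosen.append(a)
        Sa = Sigma @ a                    # rank-one Kalman covariance update
        Sigma = Sigma - np.outer(Sa, Sa) / (a @ Sa + noise_var)
    return chosen, Sigma

rng = np.random.default_rng(1)
candidates = [rng.standard_normal(5) for _ in range(20)]
chosen, posterior_cov = greedy_measurements(np.eye(5), candidates, 0.1, n_steps=3)
```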

We study the distributed detection problem in the context of a balanced binary relay tree, where the leaves of the tree correspond to N identical and independent sensors generating binary messages. The root of the tree is a fusion center making an overall decision. Every other node is a relay node that aggregates the messages received from its child nodes into a new message and sends it up toward the fusion center. We derive upper and lower bounds for the total error probability P_N as explicit functions of N in the case where nodes and links fail with certain probabilities. These bounds characterize the asymptotic decay rate of the total error probability as N goes to infinity. Naturally, this decay rate is not larger than that in the non-failure case, which is √N. However, we derive an explicit necessary and sufficient condition on the decay rate of the local failure probabilities p_k (a combination of the node and link failure probabilities at each level) such that the decay rate of the total er...

We present a neutrosophic set-based model for a time-dependent decision-support system (DSS) with multi-attribute criteria decision-making. Such a DSS includes multiple conflicting objectives, with strategies spanning several discrete time periods. In this paper, we utilize the concept of neutrosophic sets and some of its operations to develop a computational model that captures decision trees with various imprecise preferences for a time-dependent DSS. Given a time-dependent DSS with N objectives spanning discrete time periods t ranging from t_0 to t_n, we use a set of m attributes, denoted by variables a_1, ..., a_m, where each variable a_kt (k = 1, ..., m), for each t ∈ [t_0, t_n], is described by a triplet variable x_k(τ_kt, i_kt, f_kt), where the terms τ_kt, i_kt, and f_kt represent the degrees of truthfulness membership, indeterminacy membership, and falsity membership for attribute a_kt at time t, respectively. We then define a set of m time-dependent vectors of impreci...
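A minimal sketch of the triplet representation described above; the class and field names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class NeutrosophicValue:
    truth: float          # tau_kt: degree of truthfulness membership
    indeterminacy: float  # i_kt: degree of indeterminacy membership
    falsity: float        # f_kt: degree of falsity membership

# One attribute's imprecise description at a given time (toy values).
a1_at_t0 = NeutrosophicValue(truth=0.7, indeterminacy=0.2, falsity=0.1)
```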

2018 IEEE 14th International Conference on Control and Automation (ICCA), 2018
Robotic swarms comprise simple individual robots but can collectively accomplish complex tasks through frequent interactions with one another and with the environment. One pertinent objective for swarms is mapping unknown, potentially hazardous environments. We show that, even without communication or localization, the emergent behavior of a swarm observed at one area can be used to infer the presence of obstacles in an unknown environment. The main body of this work focuses on how partial differential equation (PDE) models of emergent swarm behavior can be derived by applying continuum limits to approximate discrete-time rules for individual robots in continuous time. We illustrate our approach by demonstrating how obstacles can be located by comparing swarm observations to a base library of PDE models. As supported in this work, the PDE models accurately capture identifying characteristics of the emergent behavior and are solved in a few seconds, allowing for fast feature ident...
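The paper's PDE models are not reproduced here; as a generic illustration of the continuum-limit idea, unbiased random-walk motion rules for individual robots limit to a diffusion equation for the swarm density, which a finite-difference scheme solves in well under a second:

```python
import numpy as np

# Solve rho_t = D * rho_xx on [0, 1] with reflecting (no-flux) walls,
# the continuum limit of unbiased random-walk rules. Parameters are toy
# values chosen to satisfy the stability condition D*dt/dx**2 <= 0.5.
D, nx, dt, steps = 0.1, 101, 1e-4, 2000
dx = 1.0 / (nx - 1)
rho = np.zeros(nx)
rho[nx // 2] = 1.0 / dx                  # swarm initially concentrated mid-domain

for _ in range(steps):
    rho[1:-1] += dt * D * (rho[2:] - 2 * rho[1:-1] + rho[:-2]) / dx**2
    rho[0], rho[-1] = rho[1], rho[-2]    # reflecting boundary conditions
```

An obstacle would enter such a model as a modified boundary or coefficient, which is what makes comparison against a library of PDE solutions informative.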
2018 ASEE Annual Conference & Exposition Proceedings
He received a BS degree in electrical engineering and a BS degree in physics in 2011, as well as an MS in electrical engineering in 2017, from Colorado State University. His current areas of interest are statistical signal processing and engineering education.

IEEE Open Journal of Engineering in Medicine and Biology, 2020
The purpose of this article is to introduce a new strategy to identify areas with high human density and mobility, which are at risk for spreading COVID-19. Crowded regions with actively moving people (called at-risk regions) are susceptible to spreading the disease, especially if they contain asymptomatic infected people together with healthy people. Methods: Our scheme identifies at-risk regions using existing cellular network functionalities, handover and cell (re)selection, which are used to maintain seamless coverage for mobile end-user equipment (UE). The frequency of handover and cell (re)selection events is highly reflective of the density of mobile people in the area because virtually everyone carries UEs. Results: These measurements, which are accumulated over many UEs, allow us to identify the at-risk regions without compromising the privacy and anonymity of individuals. Conclusions: The inferred at-risk regions can then be subjected to further monitoring and risk mitigation.
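A minimal sketch of the counting idea (event records and the risk threshold are hypothetical, not a real network API):

```python
from collections import Counter

def at_risk_cells(events, threshold):
    """events: iterable of (cell_id, event_type) tuples for handover and
    cell (re)selection events; returns cells exceeding the risk threshold."""
    counts = Counter(cell_id for cell_id, _ in events)
    return {cell for cell, n in counts.items() if n > threshold}

events = [("cell_7", "handover"), ("cell_7", "reselection"),
          ("cell_2", "handover")]
print(at_risk_cells(events, threshold=1))   # {'cell_7'}
```

Because only aggregate event counts per cell are used, no individual UE needs to be tracked.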
IEEE Transactions on Automatic Control, 2018
Under a Bayesian framework, we formulate the fully sequential sampling and selection decision in statistical ranking and selection as a stochastic control problem and derive the associated Bellman equation. Using value function approximation, we derive an approximately optimal allocation policy. We show that this policy is computationally efficient and possesses both one-step-ahead and asymptotic optimality for independent normal sampling distributions. Moreover, the proposed allocation policy generalizes easily within the approximate dynamic programming paradigm.

The Journal of pharmacology and experimental therapeutics, 2018
The orphan nuclear receptor Nurr1 (also called nuclear receptor-4A2) regulates inflammatory gene expression in glial cells, as well as genes associated with homeostatic and trophic function in dopaminergic neurons. Despite these known functions of Nurr1, an endogenous ligand has not been discovered. We postulated that activation of Nurr1 would suppress the activation of glia and thereby protect against loss of dopamine (DA) neurons after subacute lesioning with 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP). Our previous studies have shown that a synthetic Nurr1 ligand, 1,1-bis(3'-indolyl)-1-(p-chlorophenyl)methane (C-DIM12), suppresses inflammatory gene expression in primary astrocytes and induces a dopaminergic phenotype in neurons. Pharmacokinetic analysis of C-DIM12 in mice by liquid chromatography-mass spectrometry demonstrated that approximately three times more compound concentrated in the brain than in plasma. Mice treated with four doses of MPTP + probenecid ove...

Systems, 2017
The usefulness of packageability as one of the 'ilities' for systems engineering was investigated. Packageability was found to play an important role in a multitude of systems and was examined in several ways. First, a brief analysis showed that at least two criteria must be met for something to be considered an ility: the ility often manifests itself after the system is deployed, and it must not simply be a persistent physical characteristic. It was shown that packageability meets both requirements. Second, six different systems were examined, revealing nine general ways packageability is used. These provide a way for system engineers to recognize packageability as a non-functional system property. The usefulness of packageability as a top-level non-functional system property is shown, as well as for subsystems and components. A working definition of packageability is then proposed. Finally, a detailed treatment of packageability is presented for radar systems with transmit-receive modules. Packageability was shown to be a useful ility category that can add value for stakeholders and that captures real system features not captured by other ilities. This work demonstrates that packageability should be considered an ility by systems engineers.
Appendices
Foundations and Applications of Sensor Management
The introduction of the partial information decomposition generated a flurry of proposals for defining an intersection information that quantifies how much of "the same information" two or more random variables specify about a target random variable. As yet, none is wholly satisfactory. A palatable measure of intersection information would provide a principled way to quantify slippery concepts such as synergy. Here, we introduce an intersection information measure based on the Gács-Körner common random variable that is the first to satisfy the coveted target monotonicity property. Our measure is imperfect, too, and we suggest directions for improvement.
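For context, the Gács-Körner common random variable underlying the measure has a standard characterization; as a sketch for two sources X and Y (the paper's full intersection measure is not reproduced here):

```latex
C_{GK}(X;Y) \;=\; \max_{V:\; V = f(X) = g(Y)} H(V),
```

the entropy of the maximal random variable computable separately from X alone and from Y alone.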

IEEE Transactions on Education
This article presents quantitative support that the changes implemented as part of Colorado State University's (CSU's) Revolutionizing Engineering Departments (RED) grant produce statistically significant positive change, established through a series of nonparametric analysis techniques. Additionally, the set of nonparametric analysis techniques provides a novel approach to quantitatively analyzing student data after significant pedagogical changes are made to an undergraduate curriculum. Background: As part of the grant, a series of significant pedagogical changes were made to the electrical and computer engineering (ECE) undergraduate curriculum. A large portion of these changes relates to knowledge-integration techniques, which are used to highlight the intricate relationships among the three topics of electronics, signals and systems, and electromagnetics. This article presents an analysis of the outcomes that are in part due to these changes. Intended Outcomes: As a result of the grant and the associated curriculum changes, it was anticipated that the cumulative in-major grade point average for third-year students would increase. It was also anticipated that in-major grades across courses would become more positively correlated. The analysis techniques that were used provide novel examples of applications to student data. Application Design: The implemented changes described in this article directly follow from the goals of the National Science Foundation's RED program. Findings: Three nonparametric analysis techniques are applied to a collection of data from ECE undergraduates collected over 20 years. It is shown that the intertopical correlations between courses increase immediately following the implementation of the intervention discussed in this article, and statistically significant evidence is presented supporting that the distribution of grades has positively changed following the intervention.
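The three techniques are not named in this excerpt; as one standard nonparametric option for the correlation claim, a Spearman rank correlation between per-student grades in two courses (toy data below) tests for monotone association:

```python
from scipy.stats import spearmanr

electronics = [3.3, 2.7, 4.0, 3.0, 2.3, 3.7]   # hypothetical in-major grades
signals     = [3.0, 2.3, 3.7, 3.3, 2.0, 4.0]

rho, p_value = spearmanr(electronics, signals)
print(f"rho = {rho:.2f}, p = {p_value:.3f}")
```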