Papers by Venkat Venkatasubramanian
Computer Aided Chemical Engineering, 2005
Process safety analysis is necessary for analyzing and assessing in detail the inherent hazards in chemical processes. We have developed a tool (called PHASuite) to assist experts in conducting process safety analysis. PHA is knowledge intensive, and the analysis capacity and quality of PHASuite depend exclusively on the quality of the domain knowledge. It is, however, impossible and impractical to encode all kinds of knowledge into the knowledge base during the development phase of PHASuite. Thus, the major aim of this work is to address these important practical learning needs. A learning-from-experience strategy using case-based reasoning methodologies and a learning-from-data strategy using Bayesian learning are investigated.
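A minimal sketch of the case-based, learning-from-experience idea is given below (the class, fields, and similarity measure are illustrative assumptions, not PHASuite's actual implementation): previously analyzed scenarios are stored as cases and retrieved by similarity to a new scenario so that their results can be adapted and reused.

    # Illustrative case-based retrieval; names and weights are assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class Case:
        equipment: str
        deviation: str                      # e.g. "high temperature"
        material_tags: set = field(default_factory=set)
        consequence: str = ""

    def similarity(query: Case, case: Case) -> float:
        # Simple weighted feature match; a real system would use richer metrics.
        score = 0.4 * (query.equipment == case.equipment)
        score += 0.4 * (query.deviation == case.deviation)
        union = len(query.material_tags | case.material_tags) or 1
        score += 0.2 * len(query.material_tags & case.material_tags) / union
        return score

    def retrieve(query: Case, case_base: list, k: int = 3) -> list:
        # Return the k most similar past analyses for adaptation and reuse.
        return sorted(case_base, key=lambda c: similarity(query, c), reverse=True)[:k]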
Ontology-Assisted Knowledge Acquisition and Management in Pharmaceutical Engineering

The widening inequality in income distribution in recent years, and the associated excessive pay packages of CEOs in the U.S. and elsewhere, are of growing concern among policy makers as well as the common person. However, there seems to be no satisfactory answer, in conventional economic theories and models, to the fundamental question of what kind of pay distribution we ought to see, at least under ideal conditions, in a free market environment, and whether this distribution is fair. We propose a game-theoretic framework that addresses these questions and show that the lognormal distribution is the fairest inequality of pay in an organization comprising homogeneous agents, achieved at equilibrium under ideal free market conditions. We also show that for a population of two different classes of agents, the final distribution is a combination of two different lognormal distributions, where one of them, corresponding to the top 3-5% of the population, can be misidentified as a Pareto distribution. Our theory also shows the deep and direct connection between potential game theory and statistical mechanics through entropy, which is a measure of fairness in a distribution. This leads us to propose the fair market hypothesis: that the self-organizing dynamics of the ideal free market, i.e., Adam Smith's "invisible hand", not only promotes efficiency but also maximizes fairness under the given constraints.
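As a rough sketch of the entropy argument (the constraints below are an assumed, generic formulation, not necessarily the paper's exact one): if the pay distribution f(S) must satisfy normalization together with a fixed mean and variance of the logarithm of pay, then maximizing the entropy

    H[f] = -\int_0^\infty f(S)\,\ln f(S)\, dS
    \quad\text{s.t.}\quad \int_0^\infty f(S)\, dS = 1,\;\; \mathbb{E}[\ln S] = \mu,\;\; \mathrm{Var}[\ln S] = \sigma^2

gives, at the stationary point of the Lagrangian,

    f^{*}(S) = \frac{1}{S\,\sigma\sqrt{2\pi}}\, \exp\!\left(-\frac{(\ln S - \mu)^2}{2\sigma^2}\right),

i.e., the lognormal distribution cited above as the fairest pay distribution at equilibrium.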

In this final part, we discuss fault diagnosis methods that are based on historic process knowledge. We also compare and evaluate the various methodologies reviewed in this series in terms of the set of desirable characteristics we proposed in Part I. This comparative study reveals the relative strengths and weaknesses of the different approaches. One realizes that no single method has all the desirable features one would like a diagnostic system to possess. It is our view that some of these methods can complement one another, resulting in better diagnostic systems. Integrating these complementary features is one way to develop hybrid systems that could overcome the limitations of individual solution strategies. The important role of fault diagnosis in the broader context of process operations is also outlined. We also discuss the technical challenges in research and development that need to be addressed for the successful design and implementation of practical intelligent supervisory control systems for the process industries.
2011 4th International Symposium on Resilient Control Systems, 2011

Lecture Notes in Computer Science, 2010
Topology is a fundamental part of a network that governs connectivity between nodes, the amount of data flow, and the efficiency of data flow between nodes. In traditional networks, due to physical limitations, topology remains static for the course of the network operation. Ubiquitous data networks (UDNs), in contrast, are more adaptive and can be reconfigured to change their topology. This flexibility in controlling their topology makes them very appealing and an attractive medium for supporting "anywhere, any place" communication. However, it raises the problem of designing a dynamic topology. The dynamic topology design problem is of particular interest to application service providers who need to provide cost-effective data services on a ubiquitous network. In this paper we describe algorithms that decide when and how the topology should be reconfigured in response to a change in the data communication requirements of the network. In particular, we describe and compare a greedy algorithm, which is often used for topology reconfiguration, with a non-greedy algorithm based on metrical task systems. Experiments show that the algorithm based on metrical task systems has performance comparable to the greedy algorithm at a much lower reconfiguration cost.
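The following toy sketch contrasts the two policies (it is not the paper's algorithm; the cost model and the hysteresis rule are illustrative assumptions): a greedy policy reconfigures whenever some other topology is cheaper to operate, while a policy in the spirit of metrical task systems reconfigures only after the accumulated excess operating cost of staying put exceeds the reconfiguration cost.

    # Illustrative comparison; costs_over_time[t][k] = operating cost of topology k at step t.
    def greedy(costs_over_time, switch_cost):
        total, current = 0.0, 0
        for costs in costs_over_time:
            best = min(range(len(costs)), key=lambda k: costs[k])
            if best != current:                      # switch as soon as something looks cheaper
                total += switch_cost
                current = best
            total += costs[current]
        return total

    def hysteresis(costs_over_time, switch_cost):
        total, current, excess = 0.0, 0, 0.0
        for costs in costs_over_time:
            best = min(range(len(costs)), key=lambda k: costs[k])
            excess += costs[current] - costs[best]   # cost of not having switched so far
            if excess > switch_cost:                 # switch only once it has paid for itself
                total += switch_cost
                current, excess = best, 0.0
            total += costs[current]
        return total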

Integrating Unsupervised and Supervised Learning in Neural Networks for Fault Diagnosis
Batch Processing Systems Engineering, 1996
Recently, there has been considerable interest in the use of neural networks for fault diagnosis applications. To overcome the main limitations of the neural networks approach, improvements are sought mainly in two respects: (a) a better understanding of the nature of decision boundaries and (b) determining the network structure without the usual arbitrary trial-and-error schemes. In this perspective, we have compared different neural network paradigms and developed an appropriate integrated approach. A feedforward network with ellipsoidal units has been shown to be superior to other architectures. Two different types of learning strategies are compared for training neural networks: unsupervised and supervised learning. Their relative merits and demerits are discussed, and a combination has been proposed to develop a network that meets our diagnosis requirements. The unsupervised learning component serves to identify the features and establish the network structure. Supervised learning serves to fine-tune the resulting network. We present results from a reactor-distillation column case study to demonstrate the structure of the measurement pattern distribution and the suitability of the ellipsoidal units approach. By considering the transient behavior in the diagnosis framework, we point out that the problem of fault diagnosis can be treated on the same footing for both batch and continuous processes.
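A minimal sketch of what an ellipsoidal hidden unit looks like (the exact functional form and parameter names are assumptions for illustration): the unit responds strongly inside a bounded, axis-aligned ellipsoid around a center, so each unit encloses a region of the measurement space rather than splitting it with a hyperplane.

    import numpy as np

    def ellipsoidal_unit(x, center, semi_axes):
        """Activation near 1 inside the ellipsoid, decaying toward 0 outside."""
        d2 = np.sum(((x - center) / semi_axes) ** 2)   # normalized squared distance
        return np.exp(-d2)

    # An unsupervised pass (e.g. clustering the measurement patterns) could place the
    # centers and size the semi-axes; supervised training then fine-tunes them.
    x = np.array([0.9, 1.1])
    print(ellipsoidal_unit(x, center=np.array([1.0, 1.0]), semi_axes=np.array([0.2, 0.3])))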
Excipient interaction prediction: application of the Purdue Ontology for Pharmaceutical Engineering (POPE)
Computer Aided Chemical Engineering, 2008
Ullmann's Encyclopedia of Industrial Chemistry, 2013

The effects of vehicle-to-grid systems on wind power integration
Wind Energy, 2011
Renewable energy portfolio standards have created a large increase in the amount of renewable electricity production, and one technology that has benefited greatly from these standards is wind power. The uncertainty inherent in wind electricity production dictates that additional amounts of conventional generation resources be kept in reserve, should wind electricity output suddenly dip. The introduction of plug-in hybrid electric vehicles into the transportation fleet presents a possible solution to this problem through the concept of vehicle-to-grid power. The ability of vehicle-to-grid power systems to help solve the variability and uncertainty issues in systems with large amounts of wind power capacity is examined through a multiparadigm simulation model. The problem is examined from the perspectives of three different stakeholders: policy makers, the electricity system operator, and plug-in hybrid electric vehicle owners. Additionally, a preliminary economic analysis of the technology is performed, and a comparison is made with generation technologies that perform similar functions.
Journal of Pharmaceutical Innovation, 2010
The multiple steps in pharmaceutical product development generate a large amount of diverse information in various formats, which hinders efficient decision-making. A major component of the solution is a common information model for the domain. Ontologies were found to meet this need, as described in Part I of this two-part paper. In Part II, we describe two applications of the Purdue Ontology for Pharmaceutical Engineering. The first application deals with the prediction of degradation reactions through incorporation of the molecular structure and environmental information captured in the ontologies. The second application analyzes experiments to identify differences in experimental implementation.

Journal of Pharmaceutical Innovation, 2010
Roller compaction is the major dry granulation process and is attractive for heat- or moisture-sensitive pharmaceutical products. Currently, the product quality of roller compaction is analyzed off-line in the quality control lab. In this work, we demonstrate how online process control can be applied to roller compaction using the simulator built in Part I of this paper. Different control strategies are discussed: multi-loop proportional-integral-derivative (PID) control, linear model predictive control (MPC), and nonlinear MPC. The MPC strategy provides a systematic approach to designing the multivariable control system. The simulation results show that linear MPC can serve as a high-performance control strategy for roller compaction, offering a favorable trade-off between control performance and computational complexity. Such enhanced process control facilitates the FDA's process analytical technology initiative.
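For reference, the generic linear MPC problem of the kind mentioned above (the model, weights, and horizon are placeholders; the roller-compaction model from Part I is not reproduced here) is solved at each sampling instant:

    \min_{u_0,\dots,u_{N-1}} \; \sum_{k=0}^{N-1} \left( \lVert y_k - y_k^{\mathrm{ref}} \rVert_Q^2 + \lVert \Delta u_k \rVert_R^2 \right)
    \quad\text{s.t.}\quad x_{k+1} = A x_k + B u_k, \quad y_k = C x_k, \quad u_{\min} \le u_k \le u_{\max},

where only the first move u_0 is applied before the problem is re-solved at the next instant (receding horizon). Nonlinear MPC replaces the linear model (A, B, C) with a nonlinear one, improving accuracy at higher computational cost.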

Entropy, 2010
The excessive compensation packages of CEOs of U.S. corporations in recent years have brought to the foreground the issue of fairness in economics. The conventional wisdom is that the free market for labor, which determines the pay packages, cares only about efficiency and not fairness. We present an alternative theory that shows that an ideal free market environment also promotes fairness, as an emergent property resulting from the self-organizing market dynamics. Even though an individual employee may care only about his or her salary and no one else's, the collective actions of all the employees, combined with the profit-maximizing actions of all the companies, in a free market environment under budgetary constraints, lead towards a more fair allocation of wages, guided by Adam Smith's invisible hand of self-organization. By exploring deep connections with statistical thermodynamics, we show that entropy, which is maximized at equilibrium, is the appropriate measure of fairness in a free market environment, yielding the lognormal distribution of salaries as the fairest inequality of pay in an organization under ideal conditions.

The development of pharmaceutical products and processes involves laboratory-scale, pilot-plant-scale and commercial-scale manufacturing. Through these steps, data on synthesis routes, material properties, processing steps, and scale-up parameters are collected and used. Recent advances in process analytical technologies have resulted in a large volume of heterogeneous data, which requires a systematic model of the associated information for optimal use. In addition, software tools are used to support decision-making and process modeling during process development. However, each tool creates and uses information in a specific form, making connections among the tools difficult and allowing little interaction with experimental data. In this work, an integrated information management system based on structured information is presented. This system is built to explicitly specify important concepts such as physical properties and experiments using ontologies. Based on the developed ontologies, an information access and repository system is developed to allow convenient access to the repository for both users and software tools. The infrastructure, designed to be accessible from the web, allows simple search and browsing as well as complex queries, which can only be answered efficiently through the use of the semantics in the information. This structure is extended to consider integration with software tools through the definition of an application-agnostic medium. Such an integrated information infrastructure provides a systematic approach to managing, sharing and reusing information, and is expected to result in a considerable reduction in pharmaceutical process development time and better quality assurance.
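As a minimal illustration of the triple-store idea behind such an ontology-based repository (using the open-source rdflib library rather than the system described here; the namespace and property names are made up for the example), facts about an experiment can be stored as subject-predicate-object triples and answered with a semantic query instead of parsing tool-specific files:

    from rdflib import Graph, Namespace, Literal

    EX = Namespace("http://example.org/pharma#")
    g = Graph()
    g.add((EX.Experiment42, EX.usesMaterial, EX.Lactose))          # illustrative facts
    g.add((EX.Experiment42, EX.hasTemperatureC, Literal(25.0)))

    # "Which experiments use lactose?" expressed over the semantics, not a file format.
    query = """
        SELECT ?exp WHERE {
            ?exp <http://example.org/pharma#usesMaterial> <http://example.org/pharma#Lactose> .
        }
    """
    for row in g.query(query):
        print(row.exp)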

Computers & Chemical Engineering, 2000
Process Fault Diagnosis (PFD) involves interpreting the current status of the plant given sensor readings and process knowledge. Early diagnosis of process faults, while the plant is still operating in a controllable region, can help avoid event progression and reduce the amount of productivity loss during an abnormal event. PFD forms the first step in Abnormal Situation Management (ASM), which aims at the timely detection, diagnosis and correction of abnormal conditions. However, the problem of PFD is made considerably difficult by the scale and complexity of modern plants. We briefly outline the various challenges in the area of PFD and review the existing methods to tackle them. We argue that a hybrid, blackboard-based framework utilizing collective problem solving is the most promising approach. The efforts of the ASM consortium in pursuing the implementation of state-of-the-art technologies at plant sites are also described.

Computers & Chemical Engineering, 1997
Hazard and Operability analysis (HAZOP) is a popular method for performing hazards analysis of chemical plants. It is labour- and knowledge-intensive and could benefit from automation. Recently, a knowledge-based framework for automating HAZOP analysis, called HAZOPExpert, was proposed. Dimitriadis et al. (1995) proposed a quantitative, model-based approach for hazard evaluation. This approach used a dynamic model of the plant and bounds on the process disturbances (including failure modes) and parameters to identify possible unsafe situations. The qualitative analysis performed by HAZOPExpert is thorough and computationally efficient; however, in some situations it suffers from ambiguity. The quantitative analysis has the capability to perform an exact analysis without ambiguities, but a complete quantitative analysis can be computationally prohibitive. In this paper, we present an integrated qualitative-quantitative approach for hazard identification and evaluation which overcomes the shortcomings of both qualitative and quantitative methods. In the integrated framework, the broad details of a particular hazardous scenario are extracted by inexpensive qualitative analyses. A detailed quantitative analysis is then performed, if needed, and only on those parts of the plant identified by the qualitative analysis as contributing to the hazard. This framework is illustrated using an industrial case study.
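The overall workflow can be summarized by the following sketch (purely illustrative; the screening and evaluation functions stand in for the qualitative and quantitative analyses and are not the paper's implementation): a cheap qualitative screen runs over every section of the plant, and the expensive dynamic-model evaluation is invoked only on the sections that the screen flags.

    def integrated_hazop(sections, qualitative_screen, quantitative_check):
        # Cheap pass over everything, costly pass only where it is needed.
        flagged = [s for s in sections if qualitative_screen(s)]
        return {s: quantitative_check(s) for s in flagged}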

Computers & Chemical Engineering, 1992
Colour is the first attribute subject to consumer perception in determining food quality and, in many cases, it is the only possible means of qualifying a product at purchase. For this reason, the description of colour by analytical methods is fundamental in food processing control. Computer vision systems acquire RGB data, which are device-dependent and sensitive to different lighting. Therefore, they are not directly useful for colour evaluation that mimics human vision. In contrast, traditional colorimeters, which adopt CIELab coordinates, work in a human-oriented colour space where the Euclidean distance between two different colours (∆E) is well related to the difference perceived by human sight. Nevertheless, vision systems have many advantages, such as the capability of acquiring larger areas of the food surface and the ease of implementation in automated plants at low cost. Neural networks, trained on a set of selected colour samples, can approximate the RGB-to-L*a*b* relationship to characterise the colour of food samples under test. The aim of this paper is to present a rapid method based on neural networks for the calibration of a CCD (charge-coupled device) camera colour acquisition system to obtain reliable L*a*b* information. Preliminary results concerning the influence of the composition of the training set and the camera settings (aperture and time of exposure) on the reliability and accuracy of the colour measurement system are also discussed.
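A minimal sketch of the calibration idea (not the paper's network, data, or settings; scikit-learn's MLPRegressor is used here purely for illustration): fit a small neural network that maps device-dependent RGB values of reference samples to their measured CIELab coordinates, and judge accuracy with the Euclidean colour difference ∆E.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rgb_ref = np.random.rand(200, 3)            # placeholder reference RGB samples
    lab_ref = np.random.rand(200, 3) * 100.0    # placeholder measured L*a*b* values

    model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000)
    model.fit(rgb_ref, lab_ref)                 # learn the RGB -> L*a*b* mapping

    def delta_e(lab1, lab2):
        # CIE76 colour difference: Euclidean distance in L*a*b* space
        return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

    lab_pred = model.predict(rgb_ref[:1])[0]
    print(delta_e(lab_pred, lab_ref[0]))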
Representing bounded fault classes using neural networks with ellipsoidal activation functions
Computers & Chemical Engineering, 1993

Computers & Chemical Engineering, 1997
Real-time process fault diagnosis deals with the timely detection and diagnosis of abnormal process conditions. Industrial statistics estimate the economic impact of abnormal situations to be about 20 billion dollars per year in the U.S. petrochemical industry alone. Thus, it is an important part of the safe and optimal operation of chemical plants. In this paper, a promising alternative approach is proposed: a hybrid, blackboard-based framework called DKit. The motivation for developing a hybrid framework lies in the fact that no single diagnostic method satisfies all the requirements of complex, industrial-scale diagnostic problems. A hybrid framework in which different diagnostic methods perform collective problem solving shows considerable promise. The current version of DKit, implemented in G2, combines causal model-based diagnosis with statistical classifiers and syntactic pattern recognition. The salient features of this system and its performance on a simulated Amoco FCCU are presented.

Computers & Chemical Engineering, 1997
Abnormal Situation Management (ASM) has received considerable attention from industry and academia recently. The first step towards better ASM is the timely detection and diagnosis of the abnormal situation. Most of the existing methods for fault diagnosis assume that only a single fault occurs at any given time. However, multiple faults do occur in processes, albeit less frequently than single faults. When multiple faults occur, existing methods lead either to incorrect diagnosis or to a complete lack of diagnosis. Multiple fault diagnosis (MFD) is a difficult problem because the number of combinations grows exponentially with the number of faults. In this paper, a signed directed graph (SDG) based algorithm for MFD is developed. The computational complexity is handled efficiently by assuming that the probability of occurrence of a multiple-fault scenario decreases with an increasing number of faults involved. SDG-based diagnosis, like any other qualitative method, has poor resolution. This poor resolution is overcome by using a knowledge base consisting of knowledge about process constraints, maintenance schedules, etc. The proposed algorithm is implemented in Gensym's expert system shell, G2. The application of the algorithm is illustrated using an industrial-scale simulation of a standard FCCU called TRAINER.
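The following toy example illustrates the flavour of SDG-based multiple fault diagnosis (the graph, the sign-propagation rule, and the enumeration order are assumptions for illustration, not the paper's algorithm): edges carry a sign, a candidate fault set is retained only if propagating its effects reproduces the signs of the observed deviations, and smaller fault sets are tried before larger ones, reflecting the assumption that scenarios with more simultaneous faults are less likely.

    from itertools import combinations

    edges = {("F1", "T"): +1, ("F2", "T"): -1, ("T", "P"): +1}   # illustrative SDG

    def predicted_signs(fault_set):
        signs = {f: +1 for f in fault_set}        # assume each fault deviates "high"
        changed = True
        while changed:
            changed = False
            for (src, dst), sgn in edges.items():
                if src in signs and dst not in signs:
                    signs[dst] = signs[src] * sgn
                    changed = True
        return signs

    def diagnose(observed, candidate_faults, max_size=2):
        for size in range(1, max_size + 1):       # smaller fault sets first
            hits = [set(c) for c in combinations(candidate_faults, size)
                    if all(predicted_signs(set(c)).get(v) == s for v, s in observed.items())]
            if hits:
                return hits
        return []

    print(diagnose({"T": +1, "P": +1}, ["F1", "F2"]))   # -> [{'F1'}]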