A Protocol of Systems Evaluation
2006, Systems Evaluation and …
Abstract
Paper appears in: Cabrera, D. (Ed.). (2006). Systems evaluation and evaluation systems whitepaper series. Ithaca, NY: Cornell University DSpace Open Access Repository. National Science Foundation Systems Evaluation Grant No. EREC-0535492.
Related papers
Routledge Handbook of Systems Thinking, 2021
The challenges we face in today's world are complex and multifaceted, and the interventions and policies to act on these challenges are equally complex. Program evaluation is a means by which we can know if, how, and under what conditions our interventions are making a difference. In her seminal contribution to the field, Carol Weiss [1] explained that the purpose of program evaluation is "to measure the effects of a program against the goals it set out to accomplish as a means of contributing to subsequent decision making about the program and improving future programming." The Centers for Disease Control [2] define program evaluation as "the systematic collection of information about the activities, characteristics, and outcomes of programs to make judgments about the program, improve program effectiveness, and/or inform decisions about future program development." Further, "…evaluation should be practical and feasible and conducted within the confines of resources, time, and context. Moreover, it should serve a useful purpose, be conducted in an ethical manner, and produce accurate findings. Evaluation findings should be used both to make decisions about program implementation and to improve program effectiveness." At its core, evaluation is about learning, by using feedback to understand and improve program effectiveness.
Proceedings of the 2004 ACM conference on Computer supported cooperative work - CSCW '04, 2004
This paper introduces an evaluation method that provides the capability of comparing results of like-structured evaluations that occur over time and with changing toolsets or environmental conditions. This makes the framework ideal for comparing collaboration tools. The framework helps to structure evaluations by mapping system goals to evaluation objectives, metrics, and measures. The uppermost levels of the framework are conceptual in nature, while the bottom level is implementation-specific, i.e., evaluation-specific. Careful attention during construction of the conceptual elements for an evaluation template allows for its reuse in a series of like-structured evaluations and comparison of those results.
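The layered mapping this abstract describes (system goals to evaluation objectives, to metrics, to evaluation-specific measures) can be illustrated with a minimal sketch. The class names, the example goal, and the sample values below are illustrative assumptions, not drawn from the paper itself; the point is only to show how a reusable conceptual template supports comparison of like-structured evaluations.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Conceptual (reusable) layers of a hypothetical evaluation template.
@dataclass
class Metric:
    name: str                      # what will be measured, stated abstractly

@dataclass
class Objective:
    statement: str
    metrics: List[Metric] = field(default_factory=list)

@dataclass
class Goal:
    statement: str
    objectives: List[Objective] = field(default_factory=list)

# Implementation-specific (evaluation-specific) bottom layer.
@dataclass
class Evaluation:
    template_goal: Goal
    toolset: str
    measures: Dict[str, float] = field(default_factory=dict)  # metric name -> observed value

# Hypothetical template, reused across two evaluations so results stay comparable.
goal = Goal(
    "Support distributed teams in reaching decisions",
    [Objective("Reduce coordination overhead",
               [Metric("minutes to consensus"), Metric("messages exchanged")])],
)

eval_a = Evaluation(goal, toolset="Tool A",
                    measures={"minutes to consensus": 42.0, "messages exchanged": 120})
eval_b = Evaluation(goal, toolset="Tool B",
                    measures={"minutes to consensus": 35.0, "messages exchanged": 97})

# Because both evaluations share the same conceptual template,
# their measures can be compared metric by metric across toolsets or over time.
for metric in goal.objectives[0].metrics:
    print(metric.name, eval_a.measures[metric.name], eval_b.measures[metric.name])
```

In this sketch only the Evaluation layer changes between runs; the goal, objective, and metric layers stay fixed, which is what makes like-structured comparisons possible.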
ACM Computing Surveys, 1999
The Evaluation Working Group (EWG) in the Defense Advanced Research Projects Agency (DARPA) Intelligent Collaboration and Visualization (IC&V) program has developed a methodology for evaluating collaborative systems. This methodology consists of a framework for classification of CSCW (Computer Supported Cooperative Work) systems, metrics and measures related to the various components in the framework, and a scenario-based evaluation approach. This paper describes the components of this methodology. Two case studies of evaluations based on this methodology are also described.
Computer, 1995
In this research, a proposed algorithm for determining system effectiveness and its three parameters is introduced. It was coded in the Visual Basic programming language. The proposed algorithm showed outstanding performance in solving simple and complex problems. A comparison between the proposed algorithm and well-known available algorithms was undertaken; the proposed algorithm demonstrated superior performance and accuracy in solving over fifty randomly generated case studies.
2005
Improper evaluation of systems papers may result in the loss, or delayed publication, of potentially important research. This paper posits that systems research papers may be evaluated in one or more of the three dimensions of science, engineering and art. Examples of these dimensions are provided, and methods for evaluating papers based on these dimensions are suggested. In the dimension of science, papers can be judged by how well they actually follow the scientific method, and by the inclusion of proofs or statistical measures of the significance of results. In engineering, the applicability and utility of the research in solving real world problems is the main metric. Finally, we argue that art be considered as a paper category evaluated based on elegance, simplicity, and beauty.
2001
Openness is one quality which modern CBSs strive to possess. This work on the evaluation of CBSs was developed in the context of research on measuring open systems, and that research forms the basis of this paper.
New Directions for Evaluation, 2021
For the last several decades, and recently amidst the COVID-19 pandemic, many in the global evaluation communities have called for shifts from linear, reductionist ways of thinking and working to approaches that embrace systems and complexity. In this introductory chapter, we orient readers to key systems and complexity traditions and terms and how these have been put to use in the evaluation field. Doing so provides a foundation from which to engage with the subsequent chapters. We close this chapter with highlights from the case examples featured in this issue. This is an open access article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License, which permits use and distribution in any medium, provided the original work is properly cited, the use is non-commercial and no modifications or adaptations are made.
1976
This publication is the second report by the Survey Project on the structure and content of a proposed Series of monographs and a Handbook to survey the state-of-the-art of applied systems analysis. In the first report (RR-76-16, Systems Analysis: An Outline for the State-of-the-Art Survey Publications, July 1976), we presented a revised outline and current guidelines for the Survey Project publication program; in the present document, the sequel, we discuss the response to a questionnaire, distributed widely throughout the systems analyst community, upon which our revised outline is based. This report should be of interest to the questionnaire respondents, and to a wider audience as well, in that it reflects what some 160 analysts and others associated with systems analysis think about systems analysis, what they consider to be vital and important in this area, and what they think to be peripheral or of minor relevance.
2010
A growing number of military capabilities are achieved through a system of systems approach, and this trend is likely to continue in the foreseeable future. Systems of systems differ from traditional systems in ways that require tailoring of systems engineering processes to successfully deliver their capabilities. This paper describes the distinct characteristics of systems of systems that impact their test and evaluation, discusses their unique challenges, and suggests strategies for managing them. The recommendations are drawn from the experiences of active system of systems engineering practitioners.
