
Defining Systems to Evaluate System Efficiency and Effectiveness

Evaluation Journal of Australasia

https://doi.org/10.1177/1035719X1701700302

Abstract

This paper focuses on the application of systems thinking, systems theory, and systems evaluation theory (SET) to evaluating modern-day systems, specifically cardiac care systems in the United States. It proposes a structured approach to defining system boundaries, components, and relationships as essential steps in evaluating system efficiency. The findings illustrate the substantial upfront effort required to define a system, but also the long-term benefits, including actionable evaluation recommendations and improved system performance.

FAQs


What are the three core principles of systems thinking in evaluations?

The paper identifies boundaries, components, and relationships as the three core principles of systems thinking, which can enhance program evaluation quality by providing a comprehensive context.

How can systems evaluation theory (SET) improve evaluation processes?

SET enhances evaluation by providing a structured, three-step framework that facilitates thorough system definitions, ultimately leading to better assessments of efficiency and effectiveness.

What impact does leadership involvement have in system evaluations?

Engaging leadership from the start of evaluations significantly increases buy-in and can accelerate implementation of recommendations, with the study noting a 68% implementation rate within 30 days.

When should boundaries of a system be re-evaluated during assessment?

The paper emphasizes re-evaluating system boundaries early in the process, as expanding the boundaries can bring vital components, such as bystander involvement, into scope, affecting overall effectiveness.

What role do standard operating procedures play in system evaluations?

Standard operating procedures offer critical insight into the expected interactions and operations of subsystems, enabling evaluators to formulate assessments of adherence and efficiency outcomes.
