Objective: We sought to develop several parsimonious machine learning models to predict resource utilization and clinical outcomes following cardiac operations using only preoperative factors. Methods: All patients undergoing coronary artery bypass grafting and/or valve operations were identified in the 2015-2021 University of California Cardiac Surgery Consortium repository. The primary endpoint of the study was length of stay (LOS). Secondary endpoints included 30-day mortality, acute kidney injury, reoperation, postoperative blood transfusion, and duration of intensive care unit admission (ICU LOS). Linear regression, gradient boosted machine, random forest, and extreme gradient boosting predictive models were developed. The coefficient of determination and the area under the receiver operating characteristic curve (AUC) were used to compare models. Important predictors of increased resource use were identified using SHapley Additive exPlanations (SHAP) summary plots. Results: Compared with all other modeling strategies, gradient boosted machines demonstrated the greatest performance in the prediction of LOS (coefficient of determination, 0.42), ICU LOS (coefficient of determination, 0.23), and 30-day mortality (AUC, 0.69). Advancing age, reduced hematocrit, and multiple-valve procedures were associated with increased LOS and ICU LOS. Furthermore, the gradient boosted machine model best predicted acute kidney injury (AUC, 0.76), whereas random forest exhibited the greatest discrimination in the prediction of postoperative transfusion (AUC, 0.73). We observed no difference in performance between modeling strategies for reoperation (AUC, 0.80). Conclusions: Our findings affirm the utility of machine learning in the estimation of resource use and clinical outcomes following cardiac operations. We identified several risk factors associated with increased resource use, which may be used to guide case scheduling in times of limited hospital capacity. (JTCVS Open 2022;11:214-28)
[Graphical abstract: observed LOS (days) versus predicted LOS (days), by model (LR, GBM).] Compared with traditional linear regression, machine learning yielded superior performance in the prediction of length of stay, mortality, acute kidney injury, and reoperation following cardiac operations.
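A minimal sketch of the modeling workflow the abstract describes, assuming scikit-learn's GradientBoostingRegressor as the gradient boosted machine and the shap package for the summary plots; the feature names and synthetic data below are illustrative stand-ins for the consortium's preoperative variables, not the study dataset:

```python
# Fit a gradient boosted machine on preoperative features to predict
# length of stay, score it with R^2, and inspect the most influential
# predictors with a SHAP summary plot.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Illustrative preoperative features named after risk factors in the abstract.
X = pd.DataFrame({
    "age": rng.integers(30, 90, n),
    "hematocrit": rng.normal(38, 5, n),
    "multiple_valve": rng.integers(0, 2, n),
})
# Synthetic LOS, loosely tied to the reported risk factors.
y = (3 + 0.05 * X["age"] - 0.1 * X["hematocrit"]
     + 2.0 * X["multiple_valve"] + rng.normal(0, 2, n)).clip(1)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("coefficient of determination:", r2_score(y_test, model.predict(X_test)))

# SHAP summary plot: per-feature attributions across the test set.
explainer = shap.TreeExplainer(model)
shap.summary_plot(explainer.shap_values(X_test), X_test)
```

The same pattern extends to the binary endpoints (mortality, acute kidney injury, transfusion, reoperation) by swapping in a classifier and scoring with AUC instead of R^2.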
Data corruption is the most common consequence of file-system bugs. When such corruption occurs, offline check and recovery tools must be used, but they are error-prone and cause significant downtime. Previously we showed that a runtime checker for the Ext3 file system can verify that metadata updates are consistent, helping detect corruption in metadata blocks at transaction commit time. However, corruption can still occur when a bug in the file system's transactional mechanism loses, misdirects, or corrupts writes. We show that a runtime checker must enforce the atomicity and durability properties of the file system on every write, in addition to checking transactions at commit time, to provide the strong guarantee that every block write will maintain file system consistency. We identify the invariants that need to be enforced on journaling and shadow paging file systems to preserve the integrity of committed transactions. We also describe the key properties that make it feasible ...
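As a rough illustration of the per-write invariant checking the abstract argues for, consider a simplified journaling model in which metadata may reach its home location only after the transaction that produced it has committed. All names below are hypothetical; this is a sketch of the idea, not the paper's checker:

```python
# Hypothetical per-write runtime checks under a simplified journaling
# model: journal writes must match what the running transaction
# declared, and in-place writes must replay committed journal content.
class WriteChecker:
    def __init__(self):
        self.running = {}    # block -> content declared by the open transaction
        self.committed = {}  # block -> content made durable by a commit record

    def declare(self, block, content):
        """The file system declares an update as part of the open transaction."""
        self.running[block] = content

    def journal_write(self, block, content):
        """Check a write headed for the journal area."""
        if self.running.get(block) != content:
            raise AssertionError(f"lost or corrupted journal write: block {block}")

    def commit(self):
        """Commit record observed: the running transaction becomes durable."""
        self.committed.update(self.running)
        self.running.clear()

    def inplace_write(self, block, content):
        """Check a checkpoint write to the block's home location."""
        # Anything not backed by committed journal content is a misdirected
        # or premature write that would violate atomicity.
        if self.committed.get(block) != content:
            raise AssertionError(f"unsafe in-place write: block {block}")
```

A write that bypasses the journal, or that lands in place before its commit record, trips the corresponding assertion the moment it occurs, rather than surfacing later as corruption found by an offline check.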