
Making Logic Learnable With Neural Networks

2020, ArXiv

Abstract

While neural networks are good at learning unspecified functions from training samples, they cannot be directly implemented in hardware and are often neither interpretable nor formally verifiable. Logic circuits, on the other hand, are implementable, verifiable, and interpretable, but cannot learn from training data in a generalizable way. We propose a novel logic learning pipeline that combines the advantages of neural networks and logic circuits. Our pipeline first trains a neural network on a classification task, then translates the trained network first to random forests and then to AND-Inverter logic. We show that our pipeline maintains greater accuracy than naive translations to logic, and minimizes the logic so that it is more interpretable and has lower hardware cost. We demonstrate the utility of our pipeline on a network trained on biomedical data. This approach could be applied to patient care to provide risk stratification and guide clinical decision-making.
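The three stages of the pipeline can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the scikit-learn models, the synthetic dataset, and the direct path-to-conjunction readout are all assumptions standing in for the paper's actual training, distillation, and AND-Inverter translation steps.

```python
# Illustrative sketch of the abstract's pipeline (assumed tooling, not the paper's code).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import _tree

# Stage 1: train a neural network on a binary classification task.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
nn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                   random_state=0).fit(X, y)

# Stage 2: distill the network into a random forest by fitting the
# forest to the network's *predictions* rather than the raw labels.
rf = RandomForestClassifier(n_estimators=10, max_depth=4, random_state=0)
rf.fit(X, nn.predict(X))

# Stage 3: read each tree out as AND/NOT logic -- every root-to-leaf
# path that predicts class 1 becomes a conjunction of threshold
# literals; the OR of these conjunctions is the tree's vote, which a
# tool such as ABC could then minimize as an AND-Inverter graph.
def tree_to_clauses(tree):
    t = tree.tree_
    clauses = []

    def walk(node, path):
        if t.feature[node] == _tree.TREE_UNDEFINED:  # leaf node
            if np.argmax(t.value[node]) == 1:
                clauses.append(" AND ".join(path) or "TRUE")
            return
        lit = f"(x{t.feature[node]} <= {t.threshold[node]:.3f})"
        walk(t.children_left[node], path + [lit])
        walk(t.children_right[node], path + [f"NOT {lit}"])

    walk(0, [])
    return clauses

clauses = tree_to_clauses(rf.estimators_[0])
print(f"{len(clauses)} class-1 paths in the first tree")
```

The key design point the abstract hinges on is Stage 2: fitting the forest to the network's outputs (distillation) rather than to the raw labels is what lets the final logic inherit the network's learned generalization instead of merely refitting the data.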
