Computational Film Analysis with R

2022

https://doi.org/10.5281/ZENODO.7074521

Abstract

Computational Film Analysis with R is a first course in the application of computational methods to questions of form and style in the cinema using the R programming language. It is aimed at those with no previous experience of computational film analysis, and no prior knowledge of statistics, data science, or programming in R is required or assumed. Each chapter discusses the underlying methodological concepts in depth and demonstrates how to implement in R a range of approaches for analysing the sound, colour, editing, cinematography, and scripts of motion pictures, so that after reading this book you will be able to design and execute your own computational film analysis research projects.
