Adjunct Proceedings of the 31st ACM Conference on User Modeling, Adaptation and Personalization
This paper describes the use of Citizen Curation to explore ways in which cross-modal experiences can be used and created by museum visitors. Citizen Curation can be defined as individuals and groups from outside the museum profession engaging in curatorial activities to communicate their own ideas and stories. Previous work has explored how Citizen Curation can be used to broaden the range of voices reflected in the museum, thereby widening its appeal and relevance to new audiences. Recent research suggests that cross-modal experiences, combining visual art with music, can enhance the cultural experience as the visitor simultaneously draws on both what they see and hear. Citizen Curation provides a potential method through which visitors can create and share cross-modal experiences for each other, combining visual art and music. In this paper, we introduce the Deep Viewpoints web application that has previously been used for the Citizen Curation of looking at visual art. We then describe how the application was extended to support two further contexts: (i) a musicologist curating experiences that link music to visual art in a museum collection, and (ii) visitors to a museum exhibition experiencing and creating cross-modal experiences. Finally, we reflect on different ways in which technology could be used to support cross-modal museum experiences.
Data integration is the dominant use case for RDF Knowledge Graphs. However, Web resources come in formats with weak semantics (for example, CSV and JSON), or in formats specific to a given application (for example, BibTeX, HTML, and Markdown). To address this problem, Knowledge Graph Construction (KGC) is gaining momentum due to its focus on supporting users in transforming data into RDF. However, using existing KGC frameworks results in complex data processing pipelines, which mix structural and semantic mappings and whose development and maintenance constitute a significant bottleneck for KG engineers. Such frameworks force users to rely on different tools, sometimes based on heterogeneous languages, for inspecting sources, designing mappings, and generating triples, thus making the process unnecessarily complicated. We argue that it is possible and desirable to equip KG engineers with the ability to interact with Web data formats by relying on their expertise in RDF and the well-estab...
Bias in Artificial Intelligence (AI) is a critical and timely issue due to its sociological, economic, and legal impact, as decisions made by biased algorithms could lead to unfair treatment of specific individuals or groups. Multiple surveys have emerged to provide a multidisciplinary view of bias or to review bias in specific areas such as social sciences, business research, criminal justice, or data mining. Given the ability of Semantic Web (SW) technologies to support multiple AI systems, we review the extent to which semantics can be a “tool” to address bias in different algorithmic scenarios. We provide an in-depth categorisation and analysis of bias assessment, representation, and mitigation approaches that use SW technologies. We discuss their potential in dealing with issues such as representing disparities of specific demographics or reducing data drifts, sparsity, and missing values. We find research works on AI bias that apply semantics mainly in information retrieval, re...
This release fixes a Log4J security vulnerability (#96). Other changes from the latest release: https://github.com/basilapi/basil/compare/v0.8.0...v0.8.1
Proceedings of the 1st International Workshop on Semantic Applications for Audio and Music, 2018
The Linked Data paradigm has been used to publish a large number of musical datasets and ontologies on the Semantic Web, such as MusicBrainz, AcousticBrainz, and the Music Ontology. Recently, the MIDI Linked Data Cloud has been added to these datasets, representing more than 300,000 pieces in MIDI format as Linked Data and opening up the possibility of linking fine-grained symbolic music representations to existing music metadata databases. Although the dataset makes MIDI resources available in Web data standard formats such as RDF and SPARQL, the important issue of finding meaningful links between these MIDI resources and relevant contextual metadata in other datasets remains. A fundamental barrier to the provision and generation of such links is the difficulty users have in adding new MIDI performance data and metadata to the platform. In this paper, we propose the Semantic Web MIDI Tape, a set of tools and an associated interface for interacting with the MIDI Linked Data Cloud by enabling users to record, enrich, and retrieve MIDI performance data and related metadata in native Web data standards. The goal of such interactions is to find meaningful links between published MIDI resources and their relevant contextual metadata. We evaluate the Semantic Web MIDI Tape in various use cases involving user-contributed content, MIDI similarity querying, and entity recognition methods, and discuss their potential for finding links between MIDI resources and metadata.
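Recording a performance event and its user-contributed metadata ultimately comes down to minting RDF triples. The following is a minimal sketch in Python with rdflib; the midi: and performance namespaces and the property names (channel, pitch, velocity, tick, performer, title) are hypothetical and may differ from the vocabulary actually used by the Semantic Web MIDI Tape.

```python
from rdflib import Graph, Namespace, Literal, RDF
from rdflib.namespace import XSD

# Hypothetical namespaces, for illustration only.
MIDI = Namespace("http://example.org/midi#")
EX = Namespace("http://example.org/performance/")

g = Graph()
g.bind("midi", MIDI)

# Describe a single recorded note-on event as RDF.
event = EX["take1/track0/event42"]
g.add((event, RDF.type, MIDI.NoteOnEvent))
g.add((event, MIDI.channel, Literal(0, datatype=XSD.integer)))
g.add((event, MIDI.pitch, Literal(60, datatype=XSD.integer)))    # middle C
g.add((event, MIDI.velocity, Literal(96, datatype=XSD.integer)))
g.add((event, MIDI.tick, Literal(480, datatype=XSD.integer)))

# Attach user-contributed contextual metadata to the whole take,
# so it can later be linked to external music metadata datasets.
take = EX["take1"]
g.add((take, MIDI.hasEvent, event))
g.add((take, MIDI.performer, Literal("Anonymous contributor")))
g.add((take, MIDI.title, Literal("Improvisation in C")))

print(g.serialize(format="turtle"))
```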
Themed Evidence: Reading Experiences - Gold Standard
Exploring digital resources in search of pieces of evidence relevant to a certain research theme is a difficult and important task for humanities research. The concept of evidence is a particularly difficult one, as it relates to a fact being reported in a text, which is relevant to a certain subject of enquiry. The UK Reading Experience Database Project (UK RED) is an open-access dataset containing records that document the history of reading in Britain from 1450 to 1945. Evidence of reading presented in UK RED is drawn from published and unpublished sources as diverse as diaries, commonplace books, memoirs, sociological surveys, and criminal court and prison records. See http://www.open.ac.uk/Arts/reading/UK. This dataset is a gold standard for supporting the evaluation of competing methods for detecting themed evidence of "reading experience".
The Licence Picker Web app is an ontology-driven web application. The Licence Picker Ontology (LiPiO) has been designed to support data providers in choosing the right policy under which to publish their data. In order to evaluate this ontology, we applied it here in a service for licence selection.
This software implements Contento, an approach to building Semantic Web ontologies from sample linked data. Contento is a data-driven ontology construction kit based on Formal Concept Analysis (FCA). We show the exploration and analysis functionalities of Contento, as well as the method used to generate, annotate, and prune concept hierarchies. Moreover, we describe a procedure for going from sample data - extracted from SPARQL endpoints - to a new OWL ontology.
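To illustrate the FCA step at the core of this kind of data-driven construction, the sketch below computes the formal concepts of a tiny object-attribute context in plain Python. The example context, the naive enumeration strategy, and all names are illustrative and not taken from Contento itself.

```python
from itertools import combinations

# Toy context: RDF types (objects) described by the properties observed
# on their instances (attributes). Purely illustrative data.
context = {
    "Person":   {"name", "knows", "homepage"},
    "Artist":   {"name", "knows", "performed"},
    "Document": {"name", "homepage"},
}

objects = list(context)
attributes = set().union(*context.values())

def intent(objs):
    """Attributes shared by every object in objs."""
    return set(attributes) if not objs else set.intersection(*(context[o] for o in objs))

def extent(attrs):
    """Objects that have every attribute in attrs."""
    return {o for o in objects if attrs <= context[o]}

# Enumerate formal concepts: (extent, intent) pairs closed under both maps.
concepts = set()
for r in range(len(objects) + 1):
    for objs in combinations(objects, r):
        a = intent(set(objs))
        o = extent(a)
        concepts.add((frozenset(o), frozenset(a)))

for o, a in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(o), "->", sorted(a))
```

The lattice induced by set inclusion on the extents is the kind of concept hierarchy that a tool in this spirit would then let the user annotate and prune into ontology classes.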
Web systems make use of a number of datasets with different scopes, related to each other and to external systems, covering aspects such as acquisition, persistence, versioning, delivery, processing, distribution, and partitioning. Many of these operations involve multiple datasets and target the creation of new ones from the sources. Datanode is a framework for describing networks of data objects to support deep analysis of the dependencies they have in Web systems. The ontology: http://purl.org/datanode/0.5/ns/
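Below is a minimal sketch of how such a network of data objects could be described with rdflib, using the published Datanode namespace. The specific class and property names used here (Datanode, hasDerivation, hasCopy) are assumptions for illustration and should be checked against the vocabulary itself.

```python
from rdflib import Graph, Namespace, RDF

# Namespace as published; term names below are illustrative guesses.
DN = Namespace("http://purl.org/datanode/0.5/ns/")
EX = Namespace("http://example.org/data/")

g = Graph()
g.bind("dn", DN)

# A small network of data objects in a Web system: a source dump,
# a cleaned derivation of it, and a cached copy used for delivery.
g.add((EX.rawDump, RDF.type, DN.Datanode))
g.add((EX.cleanedDataset, RDF.type, DN.Datanode))
g.add((EX.deliveryCache, RDF.type, DN.Datanode))

g.add((EX.rawDump, DN.hasDerivation, EX.cleanedDataset))
g.add((EX.cleanedDataset, DN.hasCopy, EX.deliveryCache))

print(g.serialize(format="turtle"))
```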
This project includes scripts used in the development of Datanode, an ontology for describing networks of data objects to support deep analysis of the dependencies they have in Web systems. See also the Technical Report: Describing semantic web applications through relations between data nodes, http://oro.open.ac.uk
Over recent decades, the natural sciences have moved from formulating hypotheses through the observation of phenomena to generating them automatically through the analysis of large cross-disciplinary datasets, collected and maintained within large collaborative projects. Recently, it was suggested that musicology should embrace the same paradigm shift and move to a more collaborative and data-oriented culture. In this paper, we describe the MIDI Linked Data Cloud, an RDF graph of 10 billion MIDI statements linked to contextual metadata. We show examples of its potential application in digital libraries for musicology, and we argue that the use of Linked Data for integrating symbolic music notations and contextual metadata constitutes a technical foundation for Web-scale musicology projects.
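As an illustration of the kind of corpus-level question such a graph enables, the sketch below counts events per piece over a SPARQL endpoint. The endpoint URL, prefixes, and property names are placeholders, not the actual MIDI Linked Data Cloud vocabulary.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Placeholder endpoint; substitute the real MIDI Linked Data endpoint.
ENDPOINT = "http://example.org/midi-ld/sparql"

query = """
PREFIX midi: <http://example.org/midi#>
SELECT ?piece (COUNT(?event) AS ?events)
WHERE {
  ?piece a midi:Piece ;
         midi:hasTrack/midi:hasEvent ?event .
}
GROUP BY ?piece
ORDER BY DESC(?events)
LIMIT 10
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(query)
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

for row in results["results"]["bindings"]:
    print(row["piece"]["value"], row["events"]["value"])
```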
In this paper, we investigate the use of a mobile, autonomous agent to update knowledge bases containing statements that lose validity with time. This constitutes a key issue in terms of knowledge acquisition and representation, because dynamic data need to be constantly re-evaluated to allow reasoning. We focus on the way to represent the time-validity of statements in a knowledge base, and on the use of a mobile agent to update time-invalid statements while planning for “information freshness” as the main objective. We propose to use Semantic Web standards, namely the RDF model and the SPARQL query language, to represent the time-validity of information and decide how long this will be considered valid. Using such a representation, a plan is created for the agent to update the knowledge, focusing mostly on guaranteeing the time-validity of the information collected. To show the feasibility of our approach and discuss its limitations, we test its implementation on scenarios in the wor...
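One possible way to encode time-validity with plain RDF and SPARQL is sketched below: each observation is reified and given an illustrative ex:validUntil timestamp, and a query selects the statements whose validity has expired and which the agent should therefore re-collect. The vocabulary is an assumption, not necessarily the one used in the paper.

```python
from datetime import datetime, timezone
from rdflib import Graph, Namespace, Literal, RDF
from rdflib.namespace import XSD

# Illustrative namespace; the paper's actual vocabulary may differ.
EX = Namespace("http://example.org/dka#")

g = Graph()
g.bind("ex", EX)

# Reify one observation and attach a validity deadline to it.
stmt = EX.obs1
g.add((stmt, RDF.type, RDF.Statement))
g.add((stmt, RDF.subject, EX.meetingRoom1))
g.add((stmt, RDF.predicate, EX.temperature))
g.add((stmt, RDF.object, Literal(21.5)))
g.add((stmt, EX.validUntil,
       Literal("2017-03-01T10:00:00Z", datatype=XSD.dateTime)))

# Statements whose validity has expired are candidates for the agent's
# next data-collection plan.
now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
expired = g.query(f"""
    PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
    PREFIX ex: <http://example.org/dka#>
    SELECT ?stmt ?s ?p WHERE {{
        ?stmt a rdf:Statement ;
              rdf:subject ?s ;
              rdf:predicate ?p ;
              ex:validUntil ?t .
        FILTER (?t < "{now}"^^xsd:dateTime)
    }}
""")

for row in expired:
    print("stale:", row.s, row.p)
```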
Knowledge Engineering and Knowledge Management, 2017
DKA-robo: dynamically updating time-invalid knowledge bases using robots
The Semantic Web research community has understood since its beginning how crucial it is to equip practitioners with methods to transform non-RDF resources into RDF. Proposals focus on either engineering content transformations or accessing non-RDF resources with SPARQL. Existing solutions require users to learn specific mapping languages (e.g. RML), to know how to query and manipulate a variety of source formats (e.g. XPath, JSON-Path), or to combine multiple languages (e.g. SPARQL Generate). In this paper, we explore an alternative solution and contribute a general-purpose meta-model for converting non-RDF resources into RDF: Facade-X. Our approach can be implemented by overriding the SERVICE operator and does not require extending the SPARQL syntax. We compare our approach with the state-of-the-art methods RML and SPARQL Generate and show that our solution has lower learning demands and cognitive complexity, is cheaper to implement and maintain, while having comparable extensibilit...
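The sketch below shows what querying a CSV file through an overridden SERVICE clause might look like from a client's perspective. The local endpoint URL, the x-sparql-anything: IRI scheme, the option names in the service IRI, and the xyz: prefix follow the general Facade-X idea but are illustrative and may differ from the authors' implementation.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical local endpoint exposing a Facade-X-style SERVICE override.
sparql = SPARQLWrapper("http://localhost:3000/sparql")
sparql.setQuery("""
PREFIX xyz: <http://sparql.xyz/facade-x/data/>
SELECT ?title ?year
WHERE {
  # The non-RDF source is addressed directly inside a SERVICE clause;
  # no mapping language and no SPARQL syntax extension are needed.
  SERVICE <x-sparql-anything:location=books.csv,csv.headers=true> {
    ?row xyz:title ?title ;
         xyz:year  ?year .
  }
}
""")
sparql.setReturnFormat(JSON)

for b in sparql.query().convert()["results"]["bindings"]:
    print(b["title"]["value"], b["year"]["value"])
```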