Academia.edu

Interpretable Feature

9 papers
1 follower
About this topic
Interpretable features refer to the attributes or variables in a model that can be easily understood and analyzed by humans, allowing for insights into the model's decision-making process. They enhance transparency and trust in machine learning models by providing clear explanations of how input data influences predictions.
Decision trees are widely recognized for their interpretability and computational efficiency. However, the choice of impurity function (typically entropy or Gini impurity) can significantly influence model performance, especially in... more
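Both impurity functions mentioned in the abstract can be computed directly from the class proportions at a tree node; a minimal sketch in Python (the label list is made up purely for illustration):

```python
import math
from collections import Counter

def gini(labels):
    """Gini impurity: 1 - sum of p_k^2 over class proportions p_k."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def entropy(labels):
    """Shannon entropy: -sum of p_k * log2(p_k) over class proportions p_k."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

labels = ["a", "a", "a", "b"]
print(gini(labels))     # 0.375
print(entropy(labels))  # ~0.811
```

A split criterion then picks the feature threshold that most reduces the chosen impurity, which is where the two functions can lead to different trees.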
Low-dimensional representations, or embeddings, of a graph's nodes facilitate several practical data science and data engineering tasks. As such embeddings rely, explicitly or implicitly, on a similarity measure among nodes, they require... more
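As a concrete illustration of the similarity measure such embeddings induce, the sketch below probes a few toy node vectors with cosine similarity; the vectors are invented for illustration and not produced by any particular embedding method:

```python
import numpy as np

# Toy embeddings for four nodes (one row per node); values are made up.
emb = np.array([
    [1.0, 0.0],
    [0.9, 0.1],
    [0.0, 1.0],
    [0.1, 0.9],
])

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Nodes 0 and 1 land close together, nodes 0 and 2 far apart.
print(cosine_sim(emb[0], emb[1]) > cosine_sim(emb[0], emb[2]))  # True
```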
Typical graph embeddings may not capture type-specific bipartite graph features that arise in such areas as recommender systems, data visualization, and drug discovery. Machine learning methods utilized in these applications would be... more
Theoretical background How can syntactic structures vary from one language to another, or from one stage to another in the history of a single language? The strongest version of the cartographic approach to syntax says, in effect, that... more
Representation learning is one of the foundations of Deep Learning and allowed important improvements on several Machine Learning tasks, such as Neural Machine Translation, Question Answering and Speech Recognition. Recent works have... more
Heterogeneous information networks (HINs) are becoming popular across multiple applications in forms of complex large-scale networked data such as social networks, bibliographic networks, biological networks, etc. Recently, information... more
Layer-wise relevance propagation (LRP) heatmaps aim to provide a graphical explanation of a classifier's decisions. This could be of great benefit to scientists for trusting complex black-box models and getting insights from their data.... more
Graphs, such as social networks, word co-occurrence networks, and communication networks, occur naturally in various real-world applications. Analyzing these networks yields insight into the structure of society, language, and different... more
Link prediction in a scale-free network has become relevant for problems in social network analysis, recommender systems, and bioinformatics. In recently proposed approaches, the sampling of nodes of a network... more
In the real world, our DNA is unique, but many people share names. This phenomenon often causes erroneous aggregation of documents of multiple persons who are namesakes of one another. Such mistakes deteriorate the performance of document... more
Network embedding aims to learn vector representations of vertices that preserve both network structures and properties. However, most existing embedding methods fail to scale to large networks. A few frameworks have been proposed by... more
Heterogeneous information network embedding aims to embed heterogeneous information networks (HINs) into low-dimensional spaces, in which each vertex is represented as a low-dimensional vector, and both global and local network structures... more
This work proposes a novel deep neural network (DNN) architecture, Implicit Segmentation Neural Network (ISNet), to solve the task of image segmentation followed by classification. It substitutes the common pipeline of two DNNs with a... more
Network embedding that encodes structural information of graphs into a low-dimensional vector space has been proven to be essential for network analysis applications, including node classification and community detection. Although recent... more
Sampling a network is an important prerequisite for unsupervised network embedding. Further, random walk has widely been used for sampling in previous studies. Since random walk based sampling tends to traverse adjacent neighbors, it may... more
The potential for machine learning systems to amplify social inequities and unfairness is receiving increasing popular and academic attention. Much recent work has focused on developing algorithmic tools to assess and mitigate such... more
Graph representation learning aims to represent the structural and semantic information of graph objects as dense real-valued vectors in a low-dimensional space via machine learning. It is widely used in node classification, link prediction,... more
This paper investigates how the workings of a Convolutional Neural Network (CNN) can be explained through visualization in the context of machine perception for autonomous vehicles. We visualize what types of features are extracted in different... more
We analyze the phenomenon of Determiner Spreading based on the finding that it has a predicative interpretation. We offer an analysis of this phenomenon in terms of predicative Phrase structures... more
Determiner Spreading (DS) occurs in adjectivally modified nominal phrases comprising more than one definite article, a phenomenon that has received considerable attention and has been extensively described in Greek. This paper discusses... more
Multi-graph clustering aims to improve clustering accuracy by leveraging information from different domains, which has been shown to be extremely effective for achieving better clustering results than single graph based clustering... more
(pre-final version, January 2022, accepted to Syntax) On the basis of original data from Moksha Mordvin (Finno-Ugric), I argue that some languages have nominal concord even though modifiers of the noun generally do not show inflection.... more
Graph embedding has become a key component of many data mining and analysis systems. Current graph embedding approaches either sample a large number of node pairs from a graph to learn node embeddings via stochastic optimization or... more
Heads can be spelled out higher than their merge-in position. The operation that the transformationalist generative literature uses to model this is called head movement. Government and Binding posited that the operation in question... more
Recently, a technique called Layer-wise Relevance Propagation (LRP) was shown to deliver insightful explanations in the form of input space relevances for understanding feed-forward neural network classification decisions. In the present... more
Layer-wise relevance propagation (LRP) is a recently proposed technique for explaining predictions of complex non-linear classifiers in terms of input variables. In this paper, we apply LRP for the first time to natural language... more
The structure of demonstrative expressions in Modern Greek serves as our looking glass into the broader structure of the nominal domain. Concentrating on the different positions of demonstratives and the obligatory co-occurrence of the... more
Deep neural networks (DNNs) have demonstrated impressive performance in complex machine learning tasks such as image classification or speech recognition. However, due to their multilayer nonlinear structure, they are not transparent,... more
Deep Neural Networks (DNNs) have demonstrated impressive performance in complex machine learning tasks such as image classification or speech recognition. However, due to their multi-layer nonlinear structure, they are not transparent,... more
Nonlinear methods such as Deep Neural Networks (DNNs) are the gold standard for various challenging machine learning problems such as image recognition. Although these methods perform impressively well, they have a significant... more
We summarize the main concepts behind a recently proposed method for explaining neural network predictions called deep Taylor decomposition. For conciseness, we only present the case of simple neural networks of ReLU neurons organized in... more
The Layer-wise Relevance Propagation (LRP) algorithm explains a classifier's prediction specific to a given data point by attributing relevance scores to important components of the input by using the topology of the learned model itself.... more
We state some key properties of the recently proposed Layer-wise Relevance Propagation (LRP) method, that make it particularly suitable for model analysis and validation. We also review the capabilities and advantages of the LRP method on... more
Fisher vector (FV) classifiers and Deep Neural Networks (DNNs) are popular and successful algorithms for solving image classification problems. However, both are generally considered 'black box' predictors as the non-linear... more
Layer-wise relevance propagation is a framework that allows one to decompose the prediction of a deep neural network computed over a sample, e.g. an image, down to relevance scores for the single input dimensions of the sample, such as... more
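This per-dimension decomposition can be sketched with the LRP-epsilon rule applied layer by layer; the network below is a toy two-layer ReLU model with random weights, used purely for illustration of the redistribution step:

```python
import numpy as np

def lrp_eps_linear(a, W, b, R_out, eps=1e-6):
    """LRP-epsilon rule for one linear layer z = W @ a + b:
    redistribute output relevance R_out to the inputs in proportion
    to each input's contribution a_j * W[k, j] to each output z_k."""
    z = W @ a + b
    z = z + eps * np.sign(z)          # epsilon stabilizer
    s = R_out / z                     # per-output scaling factors
    return a * (W.T @ s)              # per-input relevance scores

# Tiny two-layer ReLU net with made-up weights (illustration only).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)

x = rng.normal(size=4)
h = np.maximum(0.0, W1 @ x + b1)      # forward pass, hidden layer
y = W2 @ h + b2                       # network output

R_h = lrp_eps_linear(h, W2, b2, y)    # start from R = output score
R_x = lrp_eps_linear(x, W1, b1, R_h)  # propagate down to the input

# Up to the epsilon stabilizer, total relevance is conserved layer to layer.
print(R_x.sum(), y.sum())
```

The two printed sums agree up to the stabilizer term, which is the conservation property that makes the input scores interpretable as a decomposition of the prediction.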
We present a comparison of the perception of input images for classification between state-of-the-art deep convolutional neural networks and high-performing Fisher Vector predictors. Layer-wise Relevance Propagation (LRP) is a method to... more
Understanding and interpreting classification decisions of automated image classification systems is of high value in many applications, as it allows one to verify the reasoning of the system and provides additional information to the human... more
by Mandar Dixit and 1 more
With the help of a convolutional neural network (CNN) trained to recognize objects, a scene image is represented as a bag of semantics (BoS). This involves classifying image patches using the network and considering the class posterior... more