The Practical Lexical Function model (PLF) is a recently proposed compositional distributional semantic model which provides an elegant account of composition, striking a balance between expressiveness and robustness and performing at the state of the art. In this paper, we identify an inconsistency in PLF between the objective function at training and the prediction at testing which leads to an overcounting of the predicate's contribution to the meaning of the phrase. We investigate two possible solutions, of which one (the exclusion of the simple lexical vector at test time) improves performance significantly on two out of the three composition datasets.
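The two test-time composition choices contrasted in the abstract can be sketched in a few lines of numpy. This is a minimal toy illustration, not the paper's actual model: the dimensionality, the random vectors, and the variable names are all assumptions made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # toy dimensionality; real distributional models use hundreds of dims

# Toy lexical vectors for a predicate (e.g. a verb) and its argument noun.
verb_vec = rng.standard_normal(d)
noun_vec = rng.standard_normal(d)
# In PLF the predicate additionally carries a matrix per argument slot.
verb_mat = rng.standard_normal((d, d))

# Variant 1: phrase = predicate's lexical vector + matrix-transformed argument.
phrase_with_lex = verb_vec + verb_mat @ noun_vec

# Variant 2 (the fix that helped on two of three datasets): exclude the
# predicate's simple lexical vector at test time.
phrase_without_lex = verb_mat @ noun_vec
```

The difference between the two variants is exactly one extra copy of the predicate's lexical vector, which is the overcounting the paper identifies.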
Modeling covert event retrieval in logical metonymy: probabilistic and distributional accounts
Logical metonymies (The student finished the beer) represent a challenge to compositionality since they involve semantic content not overtly realized in the sentence (covert events → drinking the beer). We present a contrastive study of two classes of computational models for logical metonymy in German, namely a probabilistic and a distributional, similarity-based model. These are built using the SDeWaC corpus and evaluated against a dataset from a self-paced reading and a probe recognition study for their sensitivity to thematic fit effects, via their accuracy in predicting the correct covert event in a metonymical context. The similarity-based models allow for better coverage while maintaining the accuracy of the probabilistic models.
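The prediction task in a similarity-based model of this kind can be sketched as picking the candidate covert event whose vector best fits the context. The vectors and candidate names below are toy assumptions for illustration only; the actual models are built from corpus-derived vectors, not these hand-picked numbers.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy context vector for a metonymical context like "The student finished the beer".
context = np.array([0.9, 0.1, 0.2])
# Toy vectors for candidate covert events.
candidates = {
    "drink": np.array([0.8, 0.2, 0.1]),
    "brew":  np.array([0.1, 0.9, 0.3]),
}
# Predict the covert event with the highest similarity to the context.
best = max(candidates, key=lambda e: cosine(context, candidates[e]))
```

With these toy numbers the model retrieves "drink" as the covert event, mirroring the intuition behind thematic fit: the event that best fits the context wins.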
... Table 1; capital letters designate sets and small letters elements of sets). For a lemma l like lamb, we want to know how well a meta alternation (such as ANIMAL-FOOD) explains a pair of its senses (such as the animal and food senses of lamb). This is formalized through ...
Modeling covert event retrieval in logical metonymy: probabilistic and distributional accounts
Alessandra Zarcone, Jason Utt. Institut für Maschinelle Sprachverarbeitung, Universität Stuttgart. {zarconaa,uttjn}@ims.uni-stuttgart.de
Ontology-based distinction between polysemy and homonymy
We consider the problem of distinguishing polysemous from homonymous nouns. This distinction is often taken for granted, but is seldom operationalized in the shape of an empirical model. We present a first step towards such a model, based on WordNet augmented with ontological classes provided by CoreLex. This model provides a polysemy index for each noun which (a) accurately distinguishes between polysemy and homonymy; (b) supports the analysis that polysemy can be grounded in the frequency of the meaning shifts shown by nouns; and (c) improves a regression model that predicts when the "one-sense-per-discourse" hypothesis fails.
This paper presents a graph-theoretic approach to the identification of yet-unknown word translations. The proposed algorithm is based on the recursive SimRank algorithm and relies on the intuition that two words are similar if they establish similar grammatical relationships with similar other words. We also present a formulation of SimRank in matrix form and extensions for edge weights, edge labels and multiple graphs.
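The core recursion can be illustrated with plain SimRank in its matrix form on a tiny toy graph. This is the standard algorithm only, not the paper's extensions for edge weights, edge labels, or multiple graphs, and the graph below is an invented example.

```python
import numpy as np

def simrank(adj, C=0.8, iters=10):
    """Iterative matrix-form SimRank on a small directed graph.

    adj[i, j] = 1 if there is an edge i -> j. Two nodes are similar if
    their in-neighbors are similar, decayed by the constant C.
    """
    n = adj.shape[0]
    # Column-normalize so each node averages over its in-neighbors.
    in_deg = adj.sum(axis=0)
    W = np.divide(adj, in_deg, out=np.zeros_like(adj, dtype=float),
                  where=in_deg > 0)
    S = np.eye(n)
    for _ in range(iters):
        S = C * W.T @ S @ W
        np.fill_diagonal(S, 1.0)  # every node is maximally similar to itself
    return S

# Nodes 2 and 3 are pointed to by the same nodes (0 and 1), so they
# establish the same relationships and should come out as similar.
A = np.array([[0, 0, 1, 1],
              [0, 0, 1, 1],
              [0, 0, 0, 0],
              [0, 0, 0, 0]], dtype=float)
S = simrank(A)
```

Nodes 0 and 1 share no in-neighbors and stay dissimilar, while nodes 2 and 3 receive a positive score, matching the intuition that similarity propagates from similar neighborhoods.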