Papers by Carolina Gallardo
This paper describes a new approach for describing contents through the use of interlinguas in order to facilitate the extraction of specific pieces of information. The authors highlight the different dimensions of a document and how these dimensions determine the extent to which their contents can be found in a scalable information-finding process. A specific interlingua, UNL, is described. The approach is illustrated both with detailed examples of the model and with actual applications, including descriptions of some running projects based on the interlingual representation of contents.

CLIR is an acronym covering a great variety of techniques, systems and technologies that address information retrieval (normally from texts) in multilingual environments. Many of these systems are based on a double architecture composed of strongly language-dependent information extraction systems together with classical machine translation systems. In the early 1990s, machine translation systems fell from grace due to the failure of large machine translation projects in Europe, Japan and the USA. For this reason, some approaches, particularly those based on linguistic knowledge representation, were undeservedly forgotten, above all the so-called "interlinguas". Recently, the re-emergence of these models under the generic name of "ontologies" has come to support most knowledge representation initiatives, even in a language-independent way. However, consistency problems are not yet well solved. UNL, initially conceived as a content representation and multilingual generation system, can also be applied to CLIR. This paper aims to show how to create and apply domain-specific ontologies using the UNL apparatus, particularly the UNL language, as a way of ensuring a consistent representation mechanism.
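The core idea — retrieving documents in any language by matching against a shared, language-independent representation rather than surface words — can be sketched as follows. This is a minimal illustration, not the paper's system: the toy lexicons, concept identifiers, and matching function are all invented for the example (a real system would derive them from UNL dictionaries and full UNL graphs).

```python
# Toy lexicons mapping surface words in each language to shared,
# interlingua-style concept identifiers (hypothetical entries in a
# UNL-like "universal word" notation).
LEXICONS = {
    "en": {"house": "house(icl>building)", "dog": "dog(icl>animal)"},
    "es": {"casa": "house(icl>building)", "perro": "dog(icl>animal)"},
}

def to_concepts(text, lang):
    """Map a text to the set of interlingual concepts it mentions."""
    lex = LEXICONS[lang]
    return {lex[w] for w in text.lower().split() if w in lex}

def retrieve(query, qlang, docs):
    """Return ids of docs (id, lang, text) sharing a concept with the query."""
    qc = to_concepts(query, qlang)
    return [d_id for d_id, lang, text in docs if qc & to_concepts(text, lang)]

docs = [("d1", "en", "a dog in the garden"), ("d2", "es", "una casa grande")]
# An English query retrieves a Spanish document: both sides were mapped
# to the same concept, so no query translation step is needed.
print(retrieve("house", "en", docs))  # ['d2']
```

The design point is that translation happens once, at indexing and lexicon-building time, instead of per query per language pair.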
Efficient document search and description has changed radically with the widespread availability of electronic documents through the Internet. Nowadays, efficient information search systems need to go beyond HTML-annotated documents. Complex information extraction tasks require enriching text with semantic annotations that allow deeper and more detailed content analysis. For that purpose, new labels or annotations need to be defined. In this paper we propose using UNL, an interlingua defined by the United Nations University, as a language-neutral standard content representation on the Internet. The use of UNL would open documents to a new dimension of semantic analysis, thus overcoming the limitations of current text-based analysis techniques.
Stochastic modeling of alginate production by Azotobacter vinelandii
... First, we adopt a stochastic approach to propose a simple model to analyze and describe some features of the experimental results on the production of alginate by Azotobacter vinelandii obtained by Trujillo-Roldán [8,9]. More specifically, we predict the time behavior of the ...
Electric Field Lines
International Journal of Modern Physics C, 1991
We present the computer program called LINES, which is able to calculate and visualize the electric field lines due to seven different discrete configurations of electric point charges. We also show two examples of the graphic screens generated by LINES.
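The computation underlying such a program can be sketched briefly. This is not the original LINES code (whose source is not shown here), only an illustration of the standard method: the field of point charges by superposition, and a field line traced with small steps along the field direction.

```python
import math

def field(charges, x, y):
    """E = sum_i k*q_i*(r - r_i)/|r - r_i|^3 at (x, y); k = 1 (arbitrary units).
    `charges` is a list of (q, cx, cy) point charges."""
    ex = ey = 0.0
    for q, cx, cy in charges:
        dx, dy = x - cx, y - cy
        r3 = (dx * dx + dy * dy) ** 1.5
        ex += q * dx / r3
        ey += q * dy / r3
    return ex, ey

def trace_line(charges, x, y, step=0.01, n=1000):
    """Follow the field direction from (x, y) with Euler steps of fixed length;
    returns the sampled points of the field line."""
    pts = [(x, y)]
    for _ in range(n):
        ex, ey = field(charges, x, y)
        norm = math.hypot(ex, ey)
        if norm == 0:
            break  # equilibrium point: the line cannot continue
        x, y = x + step * ex / norm, y + step * ey / norm
        pts.append((x, y))
    return pts

# A dipole: +1 at (-1, 0) and -1 at (+1, 0). At the midpoint the field
# points from the positive toward the negative charge, i.e. along +x.
dipole = [(+1.0, -1.0, 0.0), (-1.0, +1.0, 0.0)]
print(field(dipole, 0.0, 0.0))  # (2.0, 0.0)
```

Normalizing the step to the field direction (rather than its magnitude) keeps the trace stable near charges, where the field diverges.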
Automatic Construction of an Interlingual Dictionary for Multilingual Lexicography
An efficient use of the web will imply the ability to find not only documents but also specific pieces of information according to a user's query. Right now, this last possibility is not tackled by current information extraction or question answering systems, since it requires both a deeper semantic understanding of queries and contents and deductive capabilities. In this paper, the authors propose the use of Interlinguas as a plausible approach to searching for and extracting specific pieces of information from a document, given the semantic nature of Interlinguas and their support for deduction. More concretely, the authors describe the UNL Interlingua from the representational point of view and illustrate its deductive capabilities by means of an example.
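The kind of deduction attributed to an interlingua here can be sketched with a toy example. UNL encodes content as labeled binary relations between concepts (relation labels such as `agt` and `obj`, and an `icl` is-a hierarchy, are real UNL notions); the tiny knowledge base and the inference procedure below are illustrative assumptions, not the paper's example.

```python
# A fragment of an icl> (is-a) hierarchy: each concept maps to its parent.
ICL = {"dog": "animal", "animal": "living_thing"}

def is_a(concept, ancestor):
    """True if `concept` equals `ancestor` or lies below it in the hierarchy."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = ICL.get(concept)
    return False

# Document content as UNL-style relations for "a dog bit a man":
# agt(bite, dog) = agent of the event, obj(bite, man) = object of the event.
doc = [("agt", "bite", "dog"), ("obj", "bite", "man")]

def answer(query_rel, query_event, query_arg, relations):
    """Match a relation, letting the argument generalize along the hierarchy."""
    return any(rel == query_rel and ev == query_event and is_a(arg, query_arg)
               for rel, ev, arg in relations)

# "Did an animal bite something?" succeeds although the text says "dog":
# the deductive step dog icl> animal bridges query and content.
print(answer("agt", "bite", "animal", doc))  # True
```

This is the advantage over keyword matching: the query term never appears in the document, yet the answer is found by inference over the representation.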
E-democracy is a cognitive democracy oriented to the extraction and democratization of the knowledge related to the scientific resolution of public decision-making problems associated with the governance of society. This paper presents a qualitative approach based on text mining tools to identify voting tendencies from the analysis of the messages and comments elicited from citizens through a collaborative tool (a forum). The proposed methodology has been applied to a case study developed with students of the Faculty of Economics at the University of Zaragoza, related to the potential location in the region of Aragón (Spain) of the largest leisure project in Europe (Gran Scala).
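A first text-mining step for such an analysis can be sketched as follows. This is only an illustrative toy, not the paper's actual methodology: the keyword lists, scoring rule, and forum messages are invented for the example.

```python
from collections import defaultdict

# Hypothetical stance lexicons for the leisure-project debate.
PRO = {"jobs", "investment", "growth"}
CONTRA = {"environment", "gambling", "waste"}

def stance(message):
    """Score one message: positive leans 'for', negative leans 'against'."""
    words = set(message.lower().split())
    return len(words & PRO) - len(words & CONTRA)

def tendencies(messages):
    """Aggregate per-message scores into a per-author voting tendency."""
    totals = defaultdict(int)
    for author, text in messages:
        totals[author] += stance(text)
    return {a: ("for" if s > 0 else "against" if s < 0 else "neutral")
            for a, s in totals.items()}

forum = [("ana", "the project brings jobs and investment"),
         ("luis", "gambling will damage the environment")]
print(tendencies(forum))  # {'ana': 'for', 'luis': 'against'}
```

Real systems would replace the keyword lists with learned or curated lexicons and handle negation, but the aggregation from messages to per-author tendencies follows the same shape.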
Natural language generation has received less attention within the field of natural language processing than natural language understanding. One possible reason for this could be the lack of standardization of the inputs to generation systems, which makes systematic planning of the development of generation systems difficult. The authors propose the use of UNL (Universal Networking Language) as a possible standard for the normalization of inputs to generation processes.

We present here a description of the UNL initiative based on the Universal Networking Language (UNL). This language was conceived to support multilingual communication on the Internet across linguistic barriers. The initiative was launched by the Institute of Advanced Studies of the United Nations University in 1996, with an initial consortium formed to support 15 languages. Eight years later, this initial consortium has changed, many components and resources have been developed, and the UNL language itself has evolved to support different types of applications, from multilingual generation to "knowledge repositories" and cross-lingual information retrieval applications. We describe the main features of the UNL language, comparing it with some similar interlingua approaches. We also describe some organizational and managerial aspects of the UNL initiative according to criteria of quality and maturity, emphasizing that the initiative is open to any interested group or researcher.
This paper describes a new approach to the development of systems that require natural language parsing or generation. The method is based on the use of descriptive grammars (in particular, the descriptive grammar of Spanish) as the source for linguistic knowledge extraction. This knowledge source allows the use of classical knowledge-engineering methodologies for the extraction of rules that represent partial or complete aspects of the language, without the need to appeal to linguistic theories or experts. This method opens a new range of possibilities for the development of reliable applications that require parsing or language generation, such as dialog systems, information extraction, or semantic web applications.
Information can be expressed in many ways according to the different capacities of humans to perceive it. Current systems deal with multimedia, multiformat and multiplatform requirements, but another "multi" is still pending to guarantee global access to information: multilinguality. Different languages imply different replications of a system according to the language in question. No solution appears to bridge the gap between the human representation (natural language) and a system-oriented representation. The United Nations University defined in 1997 a language intended to support effective multilingualism on the Internet. In this paper, we describe this language and its possible applications beyond multilingual services, as a possible future standard for different language-independent applications.
Knowledge extraction methods have not evolved efficiently towards new methods to automate the process of building multilingual ontologies as the main representation of structured knowledge. The need for ontologies that support massive contents suffers from this bottleneck. Whereas texts are written in natural language, ontologies are expressed in formal languages. This paper describes a new approach based on an interlingual meaning representation between natural and formal languages, in which linguistic patterns are identified. The advantages of this method for reaching a significant level of automation in ontology building, along with associated multilingual aspects, are described. The process is described in detail in a case study.
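The pattern-matching idea — interlingual relations extracted from text mapped by fixed patterns onto formal ontology axioms — can be sketched in a few lines. The pattern set, the triples, and the OWL-flavored output syntax below are illustrative assumptions, not the paper's actual rules (the `icl` is-a and `pof` part-of labels are real UNL relation names).

```python
# Hypothetical mapping from interlingual relation labels to axiom templates.
# "X is a kind of Y" surfaces as icl(X, Y); "X is part of Y" as pof(X, Y).
PATTERNS = {
    "icl": "SubClassOf({x} {y})",
    "pof": "SubClassOf({x} ObjectSomeValuesFrom(partOf {y}))",
}

def to_axioms(triples):
    """Translate matched interlingual relations into formal ontology axioms;
    relations with no matching pattern are simply skipped."""
    return [PATTERNS[rel].format(x=x, y=y)
            for rel, x, y in triples if rel in PATTERNS]

# Toy triples as a text-analysis step might produce them.
triples = [("icl", "alginate", "polysaccharide"),
           ("agt", "produce", "bacterium")]  # no pattern for agt: skipped
print(to_axioms(triples))  # ['SubClassOf(alginate polysaccharide)']
```

Because the patterns operate on language-independent relations rather than on surface text, the same rule set serves every source language — which is the multilingual advantage the abstract points to.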

Information extraction systems have been dealt with at length from the viewpoint of users posing definite questions whose expected answer is to be found in a document collection. This has been tackled by means of systems that analyse the user query and try to use the grammatical features of each language to find a possible answer. This approach has failed to work for users of other languages or for documents in different languages, save for a few languages for which the query can be machine translated into the target language or all languages. Where there are more languages, however, this approach is impracticable for information extraction in a reasonable time. The massively multilingual approach (more than 6 languages) necessarily involves a language-independent representation of the contents, that is, using an interlingua. This paper reports a promising early trial of a method that launches a query in any language against a language-independent representation of the document set using the general-purpose UNL interlingua and receives a precise response.
In this paper, the authors advocate the use of an interlingua for representing the knowledge contained in text documents. The advocated interlingua, UNL, was designed by the United Nations University to support a language-independent textual representation that overcomes linguistic barriers on the Internet. This paper describes the main features of UNL and presents the application of this interlingua as a document knowledge representation language. The approach is described through the applications developed in two international projects: HEREIN-II (IST-2000-29355) and AgroExplorer.

E-cognocracy [1-5] is a new democratic system that tries to adapt democracy to the needs and challenges of the Knowledge Society. It is a cognitive democracy oriented to the extraction and democratization of the knowledge related to the scientific resolution of public decision-making problems associated with the governance of society. It is based [3,6] on the evolutionism of living systems and can be understood as the government of knowledge and wisdom by means of information and communication technologies (ICT). E-cognocracy combines representative and participative democracy by aggregating the preferences of the political parties with those of citizens and by generating knowledge from the joint discussion of the arguments that support their positions. This paper presents a qualitative approach based on text mining tools to identify these arguments from the analysis of the messages and comments elicited from political parties and citizens through a collaborative tool (a forum). The proposed methodology has been applied to a case study developed with students of the Faculty of Economics at the University of Zaragoza, related to the potential location in the region of Aragón (Spain) of the largest leisure project in Europe (Gran Scala).
PRIOR-WK&E: Social Software for Policy Making in the Knowledge Society
This paper presents a social software application called PRIOR-WK&E. It has been developed by the Zaragoza Multicriteria Decision Making Group (GDMZ) with the aim of responding to the challenges of policy making in the Knowledge Society. Three specific modules have been added to PRIOR, the collaborative tool used by the research group (GDMZ) for the multicriteria selection of a discrete set of alternatives. The first module (W), which deals with multiactor decision making through the Web, and the second (K), which concerns the extraction and diffusion of knowledge related to the scientific resolution of the problem, were explained in [1]. The new application strengthens security and includes a third module (E) that evaluates the effectiveness of public administrations' policy making.