Dataset Viewer (auto-converted to Parquet)
Columns: text (string, lengths 181 to 1.96k) and triplet (list, lengths 0 to 31)
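Each row below pairs an abstract (text) with a list of [head, relation, tail] triplets. A minimal sketch of reading the Parquet conversion and inspecting one row follows; the file name "train.parquet" is an assumption, not part of the dataset card, so substitute the actual path from the Parquet listing.

```python
# Minimal sketch: load the auto-converted Parquet split and inspect a row.
# "train.parquet" is a placeholder path, not the dataset's documented file name.
import pandas as pd

df = pd.read_parquet("train.parquet")        # columns: "text", "triplet"
print(df.shape)                              # number of rows and columns

row = df.iloc[0]
print(row["text"][:200])                     # abstract text, 181-1.96k characters
for head, relation, tail in row["triplet"]:  # 0-31 triplets per row
    print(f"{head} --{relation}--> {tail}")
```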
English is shown to be trans-context-free on the basis of coordinations of the respectively type that involve strictly syntactic cross-serial agreement. The agreement in question involves number in nouns and reflexive pronouns and is syntactic rather than semantic in nature because grammatical number in English, like grammatical gender in languages such as French, is partly arbitrary. The formal proof, which makes crucial use of the Interchange Lemma of Ogden et al., is so constructed as to be valid even if English is presumed to contain grammatical sentences in which respectively operates across a pair of coordinate phrases one of whose members has fewer conjuncts than the other ; it thus goes through whatever the facts may be regarding constructions with unequal numbers of conjuncts in the scope of respectively, whereas other arguments have foundered on this problem.
[ [ "nouns", "CONJUNCTION", "reflexive pronouns" ], [ "grammatical gender", "FEATURE-OF", "languages" ], [ "French", "HYPONYM-OF", "languages" ] ]
In this paper, a novel method to learn the intrinsic object structure for robust visual tracking is proposed. The basic assumption is that the parameterized object state lies on a low dimensional manifold and can be learned from training data. Based on this assumption, firstly we derived the dimensionality reduction and density estimation algorithm for unsupervised learning of object intrinsic representation, the obtained non-rigid part of object state reduces even to 2 dimensions. Secondly the dynamical model is derived and trained based on this intrinsic representation. Thirdly the learned intrinsic object structure is integrated into a particle-filter style tracker. We will show that this intrinsic object representation has some interesting properties and based on which the newly derived dynamical model makes particle-filter style tracker more robust and reliable. Experiments show that the learned tracker performs much better than existing trackers on the tracking of complex non-rigid motions such as fish twisting with self-occlusion and large inter-frame lip motion. The proposed method also has the potential to solve other type of tracking problems.
[ [ "method", "USED-FOR", "intrinsic object structure" ], [ "intrinsic object structure", "USED-FOR", "robust visual tracking" ], [ "low dimensional manifold", "FEATURE-OF", "parameterized object state" ], [ "dimensionality reduction and density estimation algorithm", "USED-FOR", "unsupervised learning of object intrinsic representation" ], [ "intrinsic representation", "USED-FOR", "dynamical model" ], [ "intrinsic object structure", "PART-OF", "particle-filter style tracker" ], [ "dynamical model", "USED-FOR", "particle-filter style tracker" ], [ "tracker", "COMPARE", "trackers" ], [ "tracker", "USED-FOR", "tracking of complex non-rigid motions" ], [ "trackers", "USED-FOR", "tracking of complex non-rigid motions" ], [ "fish twisting", "HYPONYM-OF", "complex non-rigid motions" ], [ "self-occlusion", "FEATURE-OF", "fish twisting" ], [ "self-occlusion", "CONJUNCTION", "inter-frame lip motion" ], [ "inter-frame lip motion", "FEATURE-OF", "fish twisting" ], [ "method", "USED-FOR", "tracking problems" ] ]
In this paper, we present a digital signal processor -LRB- DSP -RRB- implementation of real-time statistical voice conversion -LRB- VC -RRB- for silent speech enhancement and electrolaryngeal speech enhancement. As a silent speech interface, we focus on non-audible murmur -LRB- NAM -RRB-, which can be used in situations where audible speech is not acceptable. Electrolaryngeal speech is one of the typical types of alaryngeal speech produced by an alternative speaking method for laryngectomees. However, the sound quality of NAM and electrolaryngeal speech suffers from lack of naturalness. VC has proven to be one of the promising approaches to address this problem, and it has been successfully implemented on devices with sufficient computational resources. An implementation on devices that are highly portable but have limited computational resources would greatly contribute to its practical use. In this paper we further implement real-time VC on a DSP. To implement the two speech enhancement systems based on real-time VC, one from NAM to a whispered voice and the other from electrolaryngeal speech to a natural voice, we propose several methods for reducing computational cost while preserving conversion accuracy. We conduct experimental evaluations and show that real-time VC is capable of running on a DSP with little degradation.
[ [ "digital signal processor -LRB- DSP -RRB- implementation", "USED-FOR", "real-time statistical voice conversion -LRB- VC -RRB-" ], [ "real-time statistical voice conversion -LRB- VC -RRB-", "USED-FOR", "silent speech enhancement" ], [ "real-time statistical voice conversion -LRB- VC -RRB-", "USED-FOR", "electrolaryngeal speech enhancement" ], [ "silent speech enhancement", "CONJUNCTION", "electrolaryngeal speech enhancement" ], [ "Electrolaryngeal speech", "HYPONYM-OF", "alaryngeal speech" ], [ "speaking method", "USED-FOR", "alaryngeal speech" ], [ "speaking method", "USED-FOR", "laryngectomees" ], [ "sound quality", "EVALUATE-FOR", "NAM and electrolaryngeal speech" ], [ "devices", "USED-FOR", "it" ], [ "sufficient computational resources", "FEATURE-OF", "devices" ], [ "limited computational resources", "FEATURE-OF", "devices" ], [ "DSP", "USED-FOR", "real-time VC" ], [ "real-time VC", "USED-FOR", "speech enhancement systems" ], [ "one", "HYPONYM-OF", "speech enhancement systems" ], [ "one", "CONJUNCTION", "other" ], [ "other", "HYPONYM-OF", "speech enhancement systems" ], [ "computational cost", "EVALUATE-FOR", "methods" ], [ "computational cost", "CONJUNCTION", "conversion accuracy" ], [ "conversion accuracy", "EVALUATE-FOR", "methods" ], [ "DSP", "USED-FOR", "real-time VC" ] ]
We propose a method that automatically generates paraphrase sets from seed sentences to be used as reference sets in objective machine translation evaluation measures like BLEU and NIST. We measured the quality of the paraphrases produced in an experiment, i.e., -LRB- i -RRB- their grammaticality : at least 99 % correct sentences ; -LRB- ii -RRB- their equivalence in meaning : at least 96 % correct paraphrases either by meaning equivalence or entailment ; and, -LRB- iii -RRB- the amount of internal lexical and syntactical variation in a set of paraphrases : slightly superior to that of hand-produced sets. The paraphrase sets produced by this method thus seem adequate as reference sets to be used for MT evaluation.
[ [ "method", "USED-FOR", "paraphrase" ], [ "paraphrase", "USED-FOR", "machine translation evaluation measures" ], [ "BLEU", "HYPONYM-OF", "machine translation evaluation measures" ], [ "BLEU", "CONJUNCTION", "NIST" ], [ "NIST", "HYPONYM-OF", "machine translation evaluation measures" ], [ "equivalence in meaning", "CONJUNCTION", "grammaticality" ], [ "meaning equivalence", "USED-FOR", "paraphrases" ], [ "meaning equivalence", "CONJUNCTION", "entailment" ], [ "entailment", "USED-FOR", "paraphrases" ], [ "internal lexical and syntactical variation", "CONJUNCTION", "equivalence in meaning" ], [ "paraphrases", "COMPARE", "hand-produced sets" ], [ "method", "USED-FOR", "paraphrase" ] ]
Graph unification remains the most expensive part of unification-based grammar parsing. We focus on one speed-up element in the design of unification algorithms : avoidance of copying of unmodified subgraphs. We propose a method of attaining such a design through a method of structure-sharing which avoids log -LRB- d -RRB- overheads often associated with structure-sharing of graphs without any use of costly dependency pointers. The proposed scheme eliminates redundant copying while maintaining the quasi-destructive scheme 's ability to avoid over copying and early copying combined with its ability to handle cyclic structures without algorithmic additions.
[ [ "Graph unification", "PART-OF", "unification-based grammar parsing" ], [ "speed-up element", "PART-OF", "unification algorithms" ], [ "structure-sharing", "USED-FOR", "method" ], [ "scheme", "USED-FOR", "cyclic structures" ], [ "quasi-destructive scheme 's ability", "FEATURE-OF", "scheme" ], [ "over copying", "CONJUNCTION", "early copying" ] ]
We describe a novel technique and implemented system for constructing a subcategorization dictionary from textual corpora. Each dictionary entry encodes the relative frequency of occurrence of a comprehensive set of subcategorization classes for English. An initial experiment, on a sample of 14 verbs which exhibit multiple complementation patterns, demonstrates that the technique achieves accuracy comparable to previous approaches, which are all limited to a highly restricted set of subcategorization classes. We also demonstrate that a subcategorization dictionary built with the system improves the accuracy of a parser by an appreciable amount
[ [ "system", "USED-FOR", "subcategorization dictionary" ], [ "textual corpora", "USED-FOR", "system" ], [ "system", "USED-FOR", "subcategorization dictionary" ], [ "accuracy", "EVALUATE-FOR", "parser" ], [ "parser", "EVALUATE-FOR", "subcategorization dictionary" ] ]
The introduction of prior knowledge has greatly enhanced numerous purely low-level driven image processing algorithms. In this work, we focus on the problem of non-rigid image registration. A number of powerful registration criteria have been developed in the last decade, most prominently the criterion of maximum mutual information. Although this criterion provides for good registration results in many applications, it remains a purely low-level criterion. As a consequence, registration results will deteriorate once this low-level information is corrupted, due to noise, partial occlusions or missing image structure. In this paper, we will develop a Bayesian framework that allows to impose statistically learned prior knowledge about the joint intensity distribution into image registration methods. The prior is given by a kernel density estimate on the space of joint intensity distributions computed from a representative set of pre-registered image pairs. This nonparametric prior accurately models previously learned intensity relations between various image modalities and slice locations. Experimental results demonstrate that the resulting registration process is more robust to missing low-level information as it favors intensity correspondences statistically consistent with the learned intensity distributions.
[ [ "maximum mutual information", "HYPONYM-OF", "registration criteria" ], [ "low-level criterion", "FEATURE-OF", "it" ], [ "Bayesian framework", "USED-FOR", "image registration methods" ], [ "statistically learned prior knowledge", "USED-FOR", "image registration methods" ], [ "joint intensity distribution", "FEATURE-OF", "statistically learned prior knowledge" ], [ "kernel density estimate", "USED-FOR", "prior" ], [ "kernel density estimate", "USED-FOR", "joint intensity distributions" ], [ "pre-registered image pairs", "USED-FOR", "joint intensity distributions" ], [ "registration process", "USED-FOR", "missing low-level information" ], [ "it", "USED-FOR", "intensity correspondences" ] ]
There is an increased interest in the efficient creation of city models, be it virtual or as-built. We present a method for synthesizing complex, photo-realistic facade images, from a single example. After parsing the example image into its semantic components, a tiling for it is generated. Novel tilings can then be created, yielding facade textures with different dimensions or with occluded parts inpainted. A genetic algorithm guides the novel facades as well as inpainted parts to be consistent with the example, both in terms of their overall structure and their detailed textures. Promising results for multiple standard datasets -- in particular for the different building styles they contain -- demonstrate the potential of the method.
[ [ "method", "USED-FOR", "synthesizing complex , photo-realistic facade images" ], [ "tiling", "USED-FOR", "semantic components" ], [ "occluded parts inpainted", "FEATURE-OF", "facade textures" ], [ "genetic algorithm", "USED-FOR", "facades" ], [ "genetic algorithm", "USED-FOR", "inpainted parts" ], [ "multiple standard datasets", "EVALUATE-FOR", "method" ] ]
We introduce a new interactive corpus exploration tool called InfoMagnets. InfoMagnets aims at making exploratory corpus analysis accessible to researchers who are not experts in text mining. As evidence of its usefulness and usability, it has been used successfully in a research context to uncover relationships between language and behavioral patterns in two distinct domains : tutorial dialogue -LRB- Kumar et al., submitted -RRB- and on-line communities -LRB- Arguello et al., 2006 -RRB-. As an educational tool, it has been used as part of a unit on protocol analysis in an Educational Research Methods course.
[ [ "InfoMagnets", "HYPONYM-OF", "interactive corpus exploration tool" ], [ "InfoMagnets", "USED-FOR", "exploratory corpus analysis" ], [ "it", "USED-FOR", "domains" ], [ "tutorial dialogue", "HYPONYM-OF", "domains" ], [ "tutorial dialogue", "CONJUNCTION", "on-line communities" ], [ "on-line communities", "HYPONYM-OF", "domains" ], [ "educational tool", "USED-FOR", "protocol analysis" ] ]
Sources of training data suitable for language modeling of conversational speech are limited. In this paper, we show how training data can be supplemented with text from the web filtered to match the style and/or topic of the target recognition task, but also that it is possible to get bigger performance gains from the data by using class-dependent interpolation of N-grams.
[ [ "conversational speech", "USED-FOR", "language modeling" ], [ "class-dependent interpolation of N-grams", "USED-FOR", "recognition task" ] ]
We present a method for detecting 3D objects using multi-modalities. While it is generic, we demonstrate it on the combination of an image and a dense depth map which give complementary object information. It works in real-time, under heavy clutter, does not require a time consuming training stage, and can handle untextured objects. It is based on an efficient representation of templates that capture the different modalities, and we show in many experiments on commodity hardware that our approach significantly outperforms state-of-the-art methods on single modalities.
[ [ "method", "USED-FOR", "detecting 3D objects" ], [ "multi-modalities", "USED-FOR", "method" ], [ "it", "USED-FOR", "it" ], [ "image", "USED-FOR", "it" ], [ "image", "CONJUNCTION", "dense depth map" ], [ "dense depth map", "USED-FOR", "it" ], [ "complementary object information", "FEATURE-OF", "dense depth map" ], [ "templates", "USED-FOR", "modalities" ], [ "approach", "COMPARE", "state-of-the-art methods" ], [ "approach", "USED-FOR", "single modalities" ], [ "state-of-the-art methods", "USED-FOR", "single modalities" ] ]
The compact description of a video sequence through a single image map and a dominant motion has applications in several domains, including video browsing and retrieval, compression, mosaicing, and visual summarization. Building such a representation requires the capability to register all the frames with respect to the dominant object in the scene, a task which has been, in the past, addressed through temporally localized motion estimates. In this paper, we show how the lack of temporal consistency associated with such estimates can undermine the validity of the dominant motion assumption, leading to oscillation between different scene interpretations and poor registration. To avoid this oscillation, we augment the motion model with a generic temporal constraint which increases the robustness against competing interpretations, leading to more meaningful content summarization.
[ [ "compact description of a video sequence", "USED-FOR", "domains" ], [ "image map", "USED-FOR", "compact description of a video sequence" ], [ "image map", "CONJUNCTION", "dominant motion" ], [ "dominant motion", "USED-FOR", "compact description of a video sequence" ], [ "video browsing and retrieval", "HYPONYM-OF", "domains" ], [ "video browsing and retrieval", "CONJUNCTION", "compression" ], [ "compression", "HYPONYM-OF", "domains" ], [ "compression", "CONJUNCTION", "mosaicing" ], [ "mosaicing", "HYPONYM-OF", "domains" ], [ "mosaicing", "CONJUNCTION", "visual summarization" ], [ "localized motion estimates", "USED-FOR", "task" ], [ "generic temporal constraint", "USED-FOR", "motion model" ], [ "generic temporal constraint", "USED-FOR", "content summarization" ], [ "robustness", "EVALUATE-FOR", "generic temporal constraint" ] ]
In cross-domain learning, there is a more challenging problem that the domain divergence involves more than one dominant factors, e.g., different viewpoints, various resolutions and changing illuminations. Fortunately, an intermediate domain could often be found to build a bridge across them to facilitate the learning problem. In this paper, we propose a Coupled Marginalized Denoising Auto-encoders framework to address the cross-domain problem. Specifically, we design two marginalized denoising auto-encoders, one for the target and the other for source as well as the intermediate one. To better couple the two denoising auto-encoders learning, we incorporate a feature mapping, which tends to transfer knowledge between the intermediate domain and the target one. Furthermore, the maximum margin criterion, e.g., intra-class compactness and inter-class penalty, on the output layer is imposed to seek more discriminative features across different domains. Extensive experiments on two tasks have demonstrated the superiority of our method over the state-of-the-art methods.
[ [ "dominant factors", "PART-OF", "domain divergence" ], [ "viewpoints", "HYPONYM-OF", "dominant factors" ], [ "viewpoints", "CONJUNCTION", "resolutions" ], [ "resolutions", "HYPONYM-OF", "dominant factors" ], [ "resolutions", "CONJUNCTION", "illuminations" ], [ "intermediate domain", "USED-FOR", "learning problem" ], [ "Coupled Marginalized Denoising Auto-encoders framework", "USED-FOR", "cross-domain problem" ], [ "one", "HYPONYM-OF", "marginalized denoising auto-encoders" ], [ "one", "CONJUNCTION", "other" ], [ "other", "HYPONYM-OF", "marginalized denoising auto-encoders" ], [ "feature mapping", "PART-OF", "denoising auto-encoders learning" ], [ "feature mapping", "USED-FOR", "intermediate domain" ], [ "intra-class com-pactness", "HYPONYM-OF", "maximum margin criterion" ], [ "intra-class com-pactness", "CONJUNCTION", "inter-class penalty" ], [ "inter-class penalty", "HYPONYM-OF", "maximum margin criterion" ], [ "tasks", "EVALUATE-FOR", "method" ], [ "method", "COMPARE", "state-of-the-art methods" ] ]
In this paper, we aim to automatically render aging faces in a personalized way. Basically, a set of age-group specific dictionaries are learned, where the dictionary bases corresponding to the same index yet from different dictionaries form a particular aging process pattern cross different age groups, and a linear combination of these patterns expresses a particular personalized aging process. Moreover, two factors are taken into consideration in the dictionary learning process. First, beyond the aging dictionaries, each subject may have extra personalized facial characteristics, e.g. mole, which are invariant in the aging process. Second, it is challenging or even impossible to collect faces of all age groups for a particular subject, yet much easier and more practical to get face pairs from neighboring age groups. Thus a personality-aware coupled reconstruction loss is utilized to learn the dictionaries based on face pairs from neighboring age groups. Extensive experiments well demonstrate the advantages of our proposed solution over other state-of-the-arts in term of personalized aging progression, as well as the performance gain for cross-age face verification by synthesizing aging faces.
[ [ "dictionary bases", "PART-OF", "age-group specific dictionaries" ], [ "linear combination", "USED-FOR", "personalized aging process" ], [ "patterns", "USED-FOR", "linear combination" ], [ "mole", "HYPONYM-OF", "personalized facial characteristics" ], [ "personality-aware coupled reconstruction loss", "USED-FOR", "dictionaries" ], [ "solution", "COMPARE", "state-of-the-arts" ], [ "solution", "USED-FOR", "personalized aging progression" ], [ "state-of-the-arts", "USED-FOR", "personalized aging progression" ], [ "synthesizing aging faces", "USED-FOR", "cross-age face verification" ] ]
We propose a draft scheme of the model formalizing the structure of communicative context in dialogue interaction. The relationships between the interacting partners are considered as system of three automata representing the partners of the dialogue and environment.
[ [ "model", "USED-FOR", "structure of communicative context" ], [ "dialogue interaction", "FEATURE-OF", "structure of communicative context" ] ]
Visitors who browse the web from wireless PDAs, cell phones, and pagers are frequently stymied by web interfaces optimized for desktop PCs. Simply replacing graphics with text and reformatting tables does not solve the problem, because deep link structures can still require minutes to traverse. In this paper we develop an algorithm, MINPATH, that automatically improves wireless web navigation by suggesting useful shortcut links in real time. MINPATH finds shortcuts by using a learned model of web visitor behavior to estimate the savings of shortcut links, and suggests only the few best links. We explore a variety of predictive models, including Naïve Bayes mixture models and mixtures of Markov models, and report empirical evidence that MINPATH finds useful shortcuts that save substantial navigational effort.
[ [ "web interfaces", "USED-FOR", "desktop PCs" ], [ "algorithm", "USED-FOR", "wireless web navigation" ], [ "algorithm", "USED-FOR", "wireless web navigation" ], [ "model", "USED-FOR", "MINPATH" ], [ "model", "USED-FOR", "web visitor behavior" ], [ "model", "USED-FOR", "savings of shortcut links" ], [ "Na ¨ ıve Bayes mixture models", "HYPONYM-OF", "predictive models" ], [ "Na ¨ ıve Bayes mixture models", "CONJUNCTION", "mixtures of Markov models" ], [ "mixtures of Markov models", "HYPONYM-OF", "predictive models" ] ]
This paper describes a particular approach to parsing that utilizes recent advances in unification-based parsing and in classification-based knowledge representation. As unification-based grammatical frameworks are extended to handle richer descriptions of linguistic information, they begin to share many of the properties that have been developed in KL-ONE-like knowledge representation systems. This commonality suggests that some of the classification-based representation techniques can be applied to unification-based linguistic descriptions. This merging supports the integration of semantic and syntactic information into the same system, simultaneously subject to the same types of processes, in an efficient manner. The result is expected to be more efficient parsing due to the increased organization of knowledge. The use of a KL-ONE style representation for parsing and semantic interpretation was first explored in the PSI-KLONE system -LSB- 2 -RSB-, in which parsing is characterized as an inference process called incremental description refinement.
[ [ "approach", "USED-FOR", "parsing" ], [ "unification-based parsing", "USED-FOR", "approach" ], [ "classification-based knowledge representation", "USED-FOR", "approach" ], [ "classification-based knowledge representation", "CONJUNCTION", "unification-based parsing" ], [ "unification-based grammatical frameworks", "USED-FOR", "linguistic information" ], [ "KL-ONE-like knowledge representation systems", "USED-FOR", "they" ], [ "classification-based representation techniques", "USED-FOR", "unification-based linguistic descriptions" ], [ "semantic and syntactic information", "USED-FOR", "system" ], [ "KL-ONE style representation", "USED-FOR", "parsing" ], [ "KL-ONE style representation", "USED-FOR", "semantic interpretation" ], [ "parsing", "CONJUNCTION", "semantic interpretation" ], [ "PSI-KLONE system", "USED-FOR", "KL-ONE style representation" ], [ "incremental description refinement", "USED-FOR", "parsing" ], [ "incremental description refinement", "HYPONYM-OF", "inference process" ] ]
In this paper we discuss a proposed user knowledge modeling architecture for the ICICLE system, a language tutoring application for deaf learners of written English. The model will represent the language proficiency of the user and is designed to be referenced during both writing analysis and feedback production. We motivate our model design by citing relevant research on second language and cognitive skill acquisition, and briefly discuss preliminary empirical evidence supporting the design. We conclude by showing how our design can provide a rich and robust information base to a language assessment / correction application by modeling user proficiency at a high level of granularity and specificity.
[ [ "user knowledge modeling architecture", "USED-FOR", "ICICLE system" ], [ "ICICLE system", "HYPONYM-OF", "language tutoring application" ], [ "language tutoring application", "USED-FOR", "deaf learners" ], [ "written English", "USED-FOR", "language tutoring application" ], [ "model", "USED-FOR", "writing analysis" ], [ "model", "USED-FOR", "feedback production" ], [ "writing analysis", "CONJUNCTION", "feedback production" ], [ "second language and cognitive skill acquisition", "USED-FOR", "model design" ], [ "design", "USED-FOR", "language assessment / correction application" ], [ "design", "USED-FOR", "user proficiency" ], [ "granularity", "EVALUATE-FOR", "user proficiency" ], [ "granularity", "CONJUNCTION", "specificity" ], [ "specificity", "EVALUATE-FOR", "user proficiency" ] ]
Constraint propagation is one of the key techniques in constraint programming, and a large body of work has built up around it. Special-purpose constraint propagation algorithms frequently make implicit use of short supports -- by examining a subset of the variables, they can infer support -LRB- a justification that a variable-value pair still forms part of a solution to the constraint -RRB- for all other variables and values and save substantial work. Recently short supports have been used in general purpose propagators, and -LRB- when the constraint is amenable to short supports -RRB- speed ups of more than three orders of magnitude have been demonstrated. In this paper we present SHORTSTR2, a development of the Simple Tabular Reduction algorithm STR2 +. We show that SHORTSTR2 is complementary to the existing algorithms SHORTGAC and HAGGISGAC that exploit short supports, while being much simpler. When a constraint is amenable to short supports, the short support set can be exponentially smaller than the full-length support set. Therefore SHORTSTR2 can efficiently propagate many constraints that STR2 + can not even load into memory. We also show that SHORTSTR2 can be combined with a simple algorithm to identify short supports from full-length supports, to provide a superior drop-in replacement for STR2 +.
[ [ "Constraint propagation", "PART-OF", "constraint programming" ], [ "Simple Tabular Reduction algorithm STR2 +", "USED-FOR", "SHORTSTR2" ], [ "SHORTSTR2", "COMPARE", "SHORTGAC" ], [ "SHORTSTR2", "COMPARE", "HAGGISGAC" ], [ "SHORTGAC", "CONJUNCTION", "HAGGISGAC" ], [ "short support set", "COMPARE", "full-length support set" ], [ "SHORTSTR2", "USED-FOR", "short supports" ], [ "SHORTSTR2", "USED-FOR", "drop-in replacement" ], [ "algorithm", "CONJUNCTION", "SHORTSTR2" ], [ "algorithm", "USED-FOR", "short supports" ], [ "full-length supports", "USED-FOR", "SHORTSTR2" ], [ "full-length supports", "USED-FOR", "SHORTSTR2" ], [ "full-length supports", "USED-FOR", "algorithm" ], [ "full-length supports", "USED-FOR", "algorithm" ], [ "drop-in replacement", "USED-FOR", "STR2 +" ] ]
We propose a detection method for orthographic variants caused by transliteration in a large corpus. The method employs two similarities. One is string similarity based on edit distance. The other is contextual similarity by a vector space model. Experimental results show that the method performed a 0.889 F-measure in an open test.
[ [ "detection method", "USED-FOR", "orthographic variants" ], [ "similarities", "USED-FOR", "method" ], [ "edit distance", "USED-FOR", "string similarity" ], [ "vector space model", "USED-FOR", "contextual similarity" ], [ "F-measure", "EVALUATE-FOR", "method" ] ]
Uncertainty handling plays an important role during shape tracking. We have recently shown that the fusion of measurement information with system dynamics and shape priors greatly improves the tracking performance for very noisy images such as ultrasound sequences -LSB- 22 -RSB-. Nevertheless, this approach required user initialization of the tracking process. This paper solves the automatic initialization problem by performing boosted shape detection as a generic measurement process and integrating it in our tracking framework. We show how to propagate the local detection uncertainties of multiple shape candidates during shape alignment, fusion with the predicted shape prior, and fusion with subspace constraints. As a result, we treat all sources of information in a unified way and derive the posterior shape model as the shape with the maximum likelihood. Our framework is applied for the automatic tracking of endocardium in ultrasound sequences of the human heart. Reliable detection and robust tracking results are achieved when compared to existing approaches and inter-expert variations.
[ [ "Uncertainty handling", "USED-FOR", "shape tracking" ], [ "fusion of measurement information with system dynamics and shape priors", "USED-FOR", "tracking" ], [ "tracking", "USED-FOR", "noisy images" ], [ "ultrasound sequences", "HYPONYM-OF", "noisy images" ], [ "user initialization", "USED-FOR", "approach" ], [ "user initialization", "USED-FOR", "tracking process" ], [ "boosted shape detection", "USED-FOR", "automatic initial-ization problem" ], [ "generic measurement process", "USED-FOR", "boosted shape detection" ], [ "it", "PART-OF", "tracking framework" ], [ "maximum likelihood", "USED-FOR", "posterior shape model" ], [ "framework", "USED-FOR", "automatic tracking of endocardium" ], [ "endocardium", "PART-OF", "ultrasound sequences of the human heart" ], [ "detection", "CONJUNCTION", "tracking" ], [ "approaches", "CONJUNCTION", "inter-expert variations" ] ]
We present a syntax-based constraint for word alignment, known as the cohesion constraint. It requires disjoint English phrases to be mapped to non-overlapping intervals in the French sentence. We evaluate the utility of this constraint in two different algorithms. The results show that it can provide a significant improvement in alignment quality.
[ [ "syntax-based constraint", "USED-FOR", "word alignment" ], [ "cohesion constraint", "HYPONYM-OF", "syntax-based constraint" ], [ "English phrases", "USED-FOR", "It" ], [ "algorithms", "EVALUATE-FOR", "constraint" ], [ "alignment quality", "EVALUATE-FOR", "it" ] ]
This paper considers the problem of automatic assessment of local coherence. We present a novel entity-based representation of discourse which is inspired by Centering Theory and can be computed automatically from raw text. We view coherence assessment as a ranking learning problem and show that the proposed discourse representation supports the effective learning of a ranking function. Our experiments demonstrate that the induced model achieves significantly higher accuracy than a state-of-the-art coherence model.
[ [ "Centering Theory", "USED-FOR", "entity-based representation of discourse" ], [ "raw text", "USED-FOR", "entity-based representation of discourse" ], [ "ranking learning problem", "USED-FOR", "coherence assessment" ], [ "discourse representation", "USED-FOR", "ranking function" ], [ "induced model", "COMPARE", "coherence model" ], [ "accuracy", "EVALUATE-FOR", "induced model" ], [ "accuracy", "EVALUATE-FOR", "coherence model" ] ]
This paper introduces a robust interactive method for speech understanding. The generalized LR parsing is enhanced in this approach. Parsing proceeds from left to right correcting minor errors. When a very noisy portion is detected, the parser skips that portion using a fake non-terminal symbol. The unidentified portion is resolved by re-utterance of that portion which is parsed very efficiently by using the parse record of the first utterance. The user does not have to speak the whole sentence again. This method is also capable of handling unknown words, which is important in practical systems. Detected unknown words can be incrementally incorporated into the dictionary after the interaction with the user. A pilot system has shown great effectiveness of this approach.
[ [ "robust interactive method", "USED-FOR", "speech understanding" ], [ "approach", "USED-FOR", "generalized LR parsing" ], [ "non-terminal symbol", "USED-FOR", "parser" ], [ "method", "USED-FOR", "unknown words" ] ]
This paper shows that it is very often possible to identify the source language of medium-length speeches in the EUROPARL corpus on the basis of frequency counts of word n-grams -LRB- 87.2 % -96.7 % accuracy depending on classification method -RRB-. The paper also examines in detail which positive markers are most powerful and identifies a number of linguistic aspects as well as culture - and domain-related ones.
[ [ "medium-length speeches", "PART-OF", "EUROPARL corpus" ], [ "accuracy", "EVALUATE-FOR", "classification method" ] ]
We investigated whether automatic phonetic transcriptions -LRB- APTs -RRB- can replace manually verified phonetic transcriptions -LRB- MPTs -RRB- in a large corpus-based study on pronunciation variation. To this end, we compared the performance of both transcription types in a classification experiment aimed at establishing the direct influence of a particular situational setting on pronunciation variation. We trained classifiers on the speech processes extracted from the alignments of an APT and an MPT with a canonical transcription. We tested whether the classifiers were equally good at verifying whether unknown transcriptions represent read speech or telephone dialogues, and whether the same speech processes were identified to distinguish between transcriptions of the two situational settings. Our results not only show that similar distinguishing speech processes were identified ; our APT-based classifier yielded better classification accuracy than the MPT-based classifier whilst using fewer classification features.
[ [ "automatic phonetic transcriptions -LRB- APTs -RRB-", "COMPARE", "manually verified phonetic transcriptions" ], [ "automatic phonetic transcriptions -LRB- APTs -RRB-", "USED-FOR", "pronunciation variation" ], [ "manually verified phonetic transcriptions", "USED-FOR", "pronunciation variation" ], [ "speech processes", "USED-FOR", "classifiers" ], [ "alignments", "USED-FOR", "speech processes" ], [ "alignments", "USED-FOR", "APT" ], [ "alignments", "USED-FOR", "MPT" ], [ "APT", "CONJUNCTION", "MPT" ], [ "canonical transcription", "USED-FOR", "alignments" ], [ "classifiers", "USED-FOR", "unknown transcriptions" ], [ "unknown transcriptions", "USED-FOR", "read speech" ], [ "unknown transcriptions", "USED-FOR", "telephone dialogues" ], [ "read speech", "CONJUNCTION", "telephone dialogues" ], [ "APT-based classifier", "COMPARE", "MPT-based classifier" ], [ "classification accuracy", "EVALUATE-FOR", "APT-based classifier" ], [ "classification accuracy", "EVALUATE-FOR", "MPT-based classifier" ], [ "classification features", "USED-FOR", "APT-based classifier" ], [ "classification features", "USED-FOR", "MPT-based classifier" ] ]
Machine reading is a relatively new field that features computer programs designed to read flowing text and extract fact assertions expressed by the narrative content. This task involves two core technologies : natural language processing -LRB- NLP -RRB- and information extraction -LRB- IE -RRB-. In this paper we describe a machine reading system that we have developed within a cognitive architecture. We show how we have integrated into the framework several levels of knowledge for a particular domain, ideas from cognitive semantics and construction grammar, plus tools from prior NLP and IE research. The result is a system that is capable of reading and interpreting complex and fairly idiosyncratic texts in the family history domain. We describe the architecture and performance of the system. After presenting the results from several evaluations that we have carried out, we summarize possible future directions.
[ [ "computer programs", "USED-FOR", "flowing text" ], [ "computer programs", "USED-FOR", "fact assertions" ], [ "fact assertions", "FEATURE-OF", "narrative content" ], [ "natural language processing -LRB- NLP -RRB-", "PART-OF", "task" ], [ "information extraction -LRB- IE -RRB-", "PART-OF", "task" ], [ "cognitive architecture", "FEATURE-OF", "machine reading system" ], [ "cognitive semantics", "CONJUNCTION", "construction grammar" ], [ "prior NLP", "CONJUNCTION", "IE research" ], [ "system", "USED-FOR", "idiosyncratic texts" ], [ "family history domain", "FEATURE-OF", "idiosyncratic texts" ] ]
Nonstationary chaotic behavior is not an oxymoron. We present two methods for capturing nonstationary chaos, then present a few examples including biological signals, ocean waves and traffic flow. The issue is of practical interest because it is often useful to capture when nonstationary events take place and it is desirable to know over what periods a signal is stationary.
[ [ "methods", "USED-FOR", "nonstationary chaos" ], [ "biological signals", "HYPONYM-OF", "examples" ], [ "biological signals", "CONJUNCTION", "ocean waves" ], [ "ocean waves", "HYPONYM-OF", "examples" ], [ "ocean waves", "CONJUNCTION", "traffic flow" ], [ "traffic flow", "HYPONYM-OF", "examples" ] ]
This paper presents a formal analysis for a large class of words called alternative markers, which includes other -LRB- than -RRB-, such -LRB- as -RRB-, and besides. These words appear frequently enough in dialog to warrant serious attention, yet present natural language search engines perform poorly on queries containing them. I show that the performance of a search engine can be improved dramatically by incorporating an approximation of the formal analysis that is compatible with the search engine 's operational semantics. The value of this approach is that as the operational semantics of natural language applications improve, even larger improvements are possible.
[ [ "formal analysis", "USED-FOR", "alternative markers" ], [ "words", "PART-OF", "dialog" ], [ "approximation of the formal analysis", "PART-OF", "search engine" ], [ "operational semantics", "PART-OF", "search engine" ], [ "operational semantics", "PART-OF", "natural language applications" ] ]
In this paper, we address the problem of combining several language models -LRB- LMs -RRB-. We find that simple interpolation methods, like log-linear and linear interpolation, improve the performance but fall short of the performance of an oracle. The oracle knows the reference word string and selects the word string with the best performance -LRB- typically, word or semantic error rate -RRB- from a list of word strings, where each word string has been obtained by using a different LM. Actually, the oracle acts like a dynamic combiner with hard decisions using the reference. We provide experimental results that clearly show the need for a dynamic language model combination to improve the performance further. We suggest a method that mimics the behavior of the oracle using a neural network or a decision tree. The method amounts to tagging LMs with confidence measures and picking the best hypothesis corresponding to the LM with the best confidence.
[ [ "log-linear and linear interpolation", "HYPONYM-OF", "interpolation methods" ], [ "hard decisions", "FEATURE-OF", "dynamic combiner" ], [ "neural network", "USED-FOR", "method" ], [ "decision tree", "USED-FOR", "method" ], [ "decision tree", "CONJUNCTION", "neural network" ], [ "method", "USED-FOR", "LMs" ], [ "confidence measures", "USED-FOR", "method" ] ]
We describe a new method for the representation of NLP structures within reranking approaches. We make use of a conditional log-linear model, with hidden variables representing the assignment of lexical items to word clusters or word senses. The model learns to automatically make these assignments based on a discriminative training criterion. Training and decoding with the model requires summing over an exponential number of hidden-variable assignments : the required summations can be computed efficiently and exactly using dynamic programming. As a case study, we apply the model to parse reranking. The model gives an F-measure improvement of ~ 1.25 % beyond the base parser, and an ~ 0.25 % improvement beyond Collins -LRB- 2000 -RRB- reranker. Although our experiments are focused on parsing, the techniques described generalize naturally to NLP structures other than parse trees.
[ [ "method", "USED-FOR", "NLP structures" ], [ "reranking approaches", "FEATURE-OF", "NLP structures" ], [ "hidden variables", "USED-FOR", "conditional log-linear model" ], [ "word clusters", "CONJUNCTION", "word senses" ], [ "discriminative training criterion", "USED-FOR", "model" ], [ "dynamic programming", "USED-FOR", "summations" ], [ "model", "USED-FOR", "parse reranking" ], [ "model", "COMPARE", "base parser" ], [ "F-measure", "EVALUATE-FOR", "model" ], [ "base parser", "COMPARE", "Collins -LRB- 2000 -RRB- reranker" ], [ "techniques", "USED-FOR", "parsing" ], [ "techniques", "USED-FOR", "NLP structures" ], [ "techniques", "USED-FOR", "parse trees" ], [ "parse trees", "CONJUNCTION", "NLP structures" ] ]
This paper presents an algorithm for learning the time-varying shape of a non-rigid 3D object from uncalibrated 2D tracking data. We model shape motion as a rigid component -LRB- rotation and translation -RRB- combined with a non-rigid deformation. Reconstruction is ill-posed if arbitrary deformations are allowed. We constrain the problem by assuming that the object shape at each time instant is drawn from a Gaussian distribution. Based on this assumption, the algorithm simultaneously estimates 3D shape and motion for each time frame, learns the parameters of the Gaussian, and robustly fills-in missing data points. We then extend the algorithm to model temporal smoothness in object shape, thus allowing it to handle severe cases of missing data.
[ [ "algorithm", "USED-FOR", "learning the time-varying shape of a non-rigid 3D object" ], [ "Gaussian distribution", "USED-FOR", "object shape" ], [ "algorithm", "USED-FOR", "3D shape and motion" ], [ "algorithm", "USED-FOR", "temporal smoothness in object shape" ], [ "it", "USED-FOR", "missing data" ] ]
Automatic summarization and information extraction are two important Internet services. MUC and SUMMAC play their appropriate roles in the next generation Internet. This paper focuses on the automatic summarization and proposes two different models to extract sentences for summary generation under two tasks initiated by SUMMAC-1. For categorization task, positive feature vectors and negative feature vectors are used cooperatively to construct generic, indicative summaries. For adhoc task, a text model based on relationship between nouns and verbs is used to filter out irrelevant discourse segment, to rank relevant sentences, and to generate the user-directed summaries. The result shows that the NormF of the best summary and that of the fixed summary for adhoc tasks are 0.456 and 0.447. The NormF of the best summary and that of the fixed summary for categorization task are 0.4090 and 0.4023. Our system outperforms the average system in categorization task but does a common job in adhoc task.
[ [ "Automatic summarization", "CONJUNCTION", "information extraction" ], [ "MUC", "CONJUNCTION", "SUMMAC" ], [ "models", "USED-FOR", "summary generation" ], [ "models", "USED-FOR", "tasks" ], [ "tasks", "PART-OF", "SUMMAC-1" ], [ "positive feature vectors", "USED-FOR", "categorization task" ], [ "positive feature vectors", "CONJUNCTION", "negative feature vectors" ], [ "positive feature vectors", "USED-FOR", "generic , indicative summaries" ], [ "negative feature vectors", "USED-FOR", "categorization task" ], [ "negative feature vectors", "USED-FOR", "generic , indicative summaries" ], [ "text model", "USED-FOR", "adhoc task" ], [ "text model", "USED-FOR", "discourse segment" ], [ "text model", "USED-FOR", "user-directed summaries" ], [ "NormF", "EVALUATE-FOR", "adhoc tasks" ], [ "NormF", "EVALUATE-FOR", "categorization task" ], [ "system", "COMPARE", "system" ], [ "categorization task", "EVALUATE-FOR", "system" ], [ "categorization task", "EVALUATE-FOR", "system" ], [ "adhoc task", "EVALUATE-FOR", "system" ], [ "adhoc task", "EVALUATE-FOR", "categorization task" ] ]
In real-world action recognition problems, low-level features can not adequately characterize the rich spatial-temporal structures in action videos. In this work, we encode actions based on attributes that describes actions as high-level concepts e.g., jump forward or motion in the air. We base our analysis on two types of action attributes. One type of action attributes is generated by humans. The second type is data-driven attributes, which are learned from data using dictionary learning methods. Attribute-based representation may exhibit high variance due to noisy and redundant attributes. We propose a discriminative and compact attribute-based representation by selecting a subset of discriminative attributes from a large attribute set. Three attribute selection criteria are proposed and formulated as a submodular optimization problem. A greedy optimization algorithm is presented and guaranteed to be at least -LRB- 1-1 / e -RRB- - approximation to the optimum. Experimental results on the Olympic Sports and UCF101 datasets demonstrate that the proposed attribute-based representation can significantly boost the performance of action recognition algorithms and outperform most recently proposed recognition approaches.
[ [ "rich spatial-temporal structures", "FEATURE-OF", "action videos" ], [ "dictionary learning methods", "USED-FOR", "data-driven attributes" ], [ "discriminative attributes", "USED-FOR", "discriminative and compact attribute-based representation" ], [ "submodular optimization problem", "USED-FOR", "attribute selection criteria" ], [ "Olympic Sports and UCF101 datasets", "EVALUATE-FOR", "attribute-based representation" ], [ "attribute-based representation", "USED-FOR", "action recognition algorithms" ], [ "action recognition algorithms", "COMPARE", "recognition approaches" ] ]
This paper reports a completed stage of ongoing research at the University of York. Landsbergen 's advocacy of analytical inverses for compositional syntax rules encourages the application of Definite Clause Grammar techniques to the construction of a parser returning Montague analysis trees. A parser MDCC is presented which implements an augmented Friedman - Warren algorithm permitting post referencing and interfaces with a language of intensional logic translator LILT so as to display the derivational history of corresponding reduced IL formulae. Some familiarity with Montague 's PTQ and the basic DCG mechanism is assumed.
[ [ "analytical inverses", "USED-FOR", "compositional syntax rules" ], [ "analytical inverses", "USED-FOR", "Definite Clause Grammar techniques" ], [ "Definite Clause Grammar techniques", "USED-FOR", "parser returning Montague analysis trees" ], [ "augmented Friedman - Warren algorithm", "USED-FOR", "parser MDCC" ], [ "post referencing", "FEATURE-OF", "augmented Friedman - Warren algorithm" ], [ "derivational history", "USED-FOR", "intenslonal logic translator LILT" ], [ "reduced IL formulae", "FEATURE-OF", "derivational history" ], [ "Montague 's PTQ", "CONJUNCTION", "basic DCG mechanism" ] ]
Despite their success, convolutional neural networks are computationally expensive because they must examine all image locations. Stochastic attention-based models have been shown to improve computational efficiency at test time, but they remain difficult to train because of intractable posterior inference and high variance in the stochastic gradient estimates. Borrowing techniques from the literature on training deep generative models, we present the Wake-Sleep Recurrent Attention Model, a method for training stochastic attention networks which improves posterior inference and which reduces the variability in the stochastic gradients. We show that our method can greatly speed up the training time for stochastic attention networks in the domains of image classification and caption generation.
[ [ "computational efficiency", "EVALUATE-FOR", "Stochastic attention-based models" ], [ "intractable posterior inference", "CONJUNCTION", "stochastic gradient estimates" ], [ "Borrowing techniques", "USED-FOR", "deep generative models" ], [ "method", "USED-FOR", "stochastic attention networks" ], [ "stochastic attention networks", "USED-FOR", "posterior inference" ], [ "training time", "EVALUATE-FOR", "method" ], [ "training time", "FEATURE-OF", "stochastic attention networks" ], [ "image classification", "EVALUATE-FOR", "method" ], [ "image classification", "CONJUNCTION", "caption generation" ], [ "caption generation", "EVALUATE-FOR", "method" ] ]
A new exemplar-based framework unifying image completion, texture synthesis and image inpainting is presented in this work. Contrary to existing greedy techniques, these tasks are posed in the form of a discrete global optimization problem with a well defined objective function. For solving this problem a novel optimization scheme, called Priority-BP, is proposed which carries two very important extensions over standard belief propagation -LRB- BP -RRB- : '' priority-based message scheduling '' and '' dynamic label pruning ''. These two extensions work in cooperation to deal with the intolerable computational cost of BP caused by the huge number of existing labels. Moreover, both extensions are generic and can therefore be applied to any MRF energy function as well. The effectiveness of our method is demonstrated on a wide variety of image completion examples.
[ [ "exemplar-based framework", "USED-FOR", "image completion" ], [ "exemplar-based framework", "USED-FOR", "texture synthesis" ], [ "exemplar-based framework", "USED-FOR", "image inpainting" ], [ "image completion", "CONJUNCTION", "texture synthesis" ], [ "texture synthesis", "CONJUNCTION", "image inpainting" ], [ "greedy techniques", "COMPARE", "tasks" ], [ "discrete global optimization problem", "FEATURE-OF", "tasks" ], [ "well defined objective function", "FEATURE-OF", "discrete global optimization problem" ], [ "optimization scheme", "USED-FOR", "problem" ], [ "Priority-BP", "HYPONYM-OF", "optimization scheme" ], [ "extensions", "PART-OF", "optimization scheme" ], [ "belief propagation -LRB- BP -RRB-", "USED-FOR", "extensions" ], [ "priority-based message scheduling", "HYPONYM-OF", "extensions" ], [ "priority-based message scheduling", "CONJUNCTION", "dynamic label pruning" ], [ "dynamic label pruning", "HYPONYM-OF", "extensions" ], [ "extensions", "USED-FOR", "intolerable computational cost of BP" ], [ "extensions", "USED-FOR", "MRF energy function" ], [ "image completion examples", "USED-FOR", "method" ] ]
In this paper, we compare the relative effects of segment order, segmentation and segment contiguity on the retrieval performance of a translation memory system. We take a selection of both bag-of-words and segment order-sensitive string comparison methods, and run each over both character - and word-segmented data, in combination with a range of local segment contiguity models -LRB- in the form of N-grams -RRB-. Over two distinct datasets, we find that indexing according to simple character bigrams produces a retrieval accuracy superior to any of the tested word N-gram models. Further, in their optimum configuration, bag-of-words methods are shown to be equivalent to segment order-sensitive methods in terms of retrieval accuracy, but much faster. We also provide evidence that our findings are scalable.
[ [ "segment order", "CONJUNCTION", "segmentation" ], [ "segment order", "USED-FOR", "translation memory system" ], [ "segmentation", "CONJUNCTION", "segment contiguity" ], [ "segmentation", "USED-FOR", "translation memory system" ], [ "segment contiguity", "USED-FOR", "translation memory system" ], [ "retrieval", "EVALUATE-FOR", "translation memory system" ], [ "character - and word-segmented data", "USED-FOR", "bag-of-words and segment order-sensitive string comparison methods" ], [ "local segment contiguity models", "CONJUNCTION", "bag-of-words and segment order-sensitive string comparison methods" ], [ "N-grams", "FEATURE-OF", "local segment contiguity models" ], [ "character bigrams", "USED-FOR", "indexing" ], [ "character bigrams", "COMPARE", "word N-gram models" ], [ "retrieval accuracy", "EVALUATE-FOR", "character bigrams" ], [ "retrieval accuracy", "EVALUATE-FOR", "word N-gram models" ], [ "bag-of-words methods", "COMPARE", "segment order-sensitive methods" ], [ "retrieval accuracy", "EVALUATE-FOR", "bag-of-words methods" ], [ "retrieval accuracy", "EVALUATE-FOR", "segment order-sensitive methods" ] ]
In this paper we show how two standard outputs from information extraction -LRB- IE -RRB- systems - named entity annotations and scenario templates - can be used to enhance access to text collections via a standard text browser. We describe how this information is used in a prototype system designed to support information workers ' access to a pharmaceutical news archive as part of their industry watch function. We also report results of a preliminary, qualitative user evaluation of the system, which while broadly positive indicates further work needs to be done on the interface to make users aware of the increased potential of IE-enhanced text browsers.
[ [ "outputs", "USED-FOR", "text collections" ], [ "named entity annotations", "HYPONYM-OF", "outputs" ], [ "named entity annotations", "CONJUNCTION", "scenario templates" ], [ "scenario templates", "HYPONYM-OF", "outputs" ], [ "text browser", "USED-FOR", "text collections" ], [ "prototype system", "USED-FOR", "pharmaceutical news archive" ], [ "qualitative user evaluation", "EVALUATE-FOR", "system" ] ]
We present a new model-based bundle adjustment algorithm to recover the 3D model of a scene/object from a sequence of images with unknown motions. Instead of representing scene/object by a collection of isolated 3D features -LRB- usually points -RRB-, our algorithm uses a surface controlled by a small set of parameters. Compared with previous model-based approaches, our approach has the following advantages. First, instead of using the model space as a regularizer, we directly use it as our search space, thus resulting in a more elegant formulation with fewer unknowns and fewer equations. Second, our algorithm automatically associates tracked points with their correct locations on the surfaces, thereby eliminating the need for a prior 2D-to-3D association. Third, regarding face modeling, we use a very small set of face metrics -LRB- meaningful deformations -RRB- to parameterize the face geometry, resulting in a smaller search space and a better posed system. Experiments with both synthetic and real data show that this new algorithm is faster, more accurate and more stable than existing ones.
[ [ "model-based bundle adjustment algorithm", "USED-FOR", "3D model" ], [ "images", "USED-FOR", "3D model" ], [ "unknown motions", "PART-OF", "images" ], [ "surface", "USED-FOR", "algorithm" ], [ "model-based approaches", "COMPARE", "approach" ], [ "model space", "USED-FOR", "regular-izer" ], [ "regular-izer", "COMPARE", "search space" ], [ "it", "USED-FOR", "search space" ], [ "face metrics", "USED-FOR", "face modeling" ], [ "face metrics", "USED-FOR", "face geometry" ], [ "face metrics", "USED-FOR", "search space" ], [ "face metrics", "USED-FOR", "posed system" ], [ "synthetic and real data", "EVALUATE-FOR", "algorithm" ], [ "synthetic and real data", "EVALUATE-FOR", "ones" ], [ "algorithm", "COMPARE", "ones" ] ]
This paper presents an approach to the unsupervised learning of parts of speech which uses both morphological and syntactic information. While the model is more complex than those which have been employed for unsupervised learning of POS tags in English, which use only syntactic information, the variety of languages in the world requires that we consider morphology as well. In many languages, morphology provides better clues to a word 's category than word order. We present the computational model for POS learning, and present results for applying it to Bulgarian, a Slavic language with relatively free word order and rich morphology.
[ [ "approach", "USED-FOR", "unsupervised learning of parts of speech" ], [ "morphological and syntactic information", "USED-FOR", "approach" ], [ "model", "COMPARE", "those" ], [ "those", "USED-FOR", "unsupervised learning of POS tags in English" ], [ "syntactic information", "USED-FOR", "those" ], [ "morphology", "COMPARE", "word order" ], [ "computational model", "USED-FOR", "POS learning" ], [ "Bulgarian", "USED-FOR", "it" ], [ "Bulgarian", "HYPONYM-OF", "Slavic language" ], [ "free word order", "FEATURE-OF", "Bulgarian" ], [ "free word order", "CONJUNCTION", "rich morphology" ], [ "rich morphology", "FEATURE-OF", "Bulgarian" ] ]
Words in Chinese text are not naturally separated by delimiters, which poses a challenge to standard machine translation -LRB- MT -RRB- systems. In MT, the widely used approach is to apply a Chinese word segmenter trained from manually annotated data, using a fixed lexicon. Such word segmentation is not necessarily optimal for translation. We propose a Bayesian semi-supervised Chinese word segmentation model which uses both monolingual and bilingual information to derive a segmentation suitable for MT. Experiments show that our method improves a state-of-the-art MT system in a small and a large data environment.
[ [ "Chinese word segmenter", "USED-FOR", "MT" ], [ "manually annotated data", "USED-FOR", "Chinese word segmenter" ], [ "word segmentation", "USED-FOR", "translation" ], [ "Bayesian semi-supervised Chinese word segmentation model", "USED-FOR", "segmentation" ], [ "monolingual and bilingual information", "USED-FOR", "Bayesian semi-supervised Chinese word segmentation model" ], [ "segmentation", "USED-FOR", "MT" ], [ "method", "COMPARE", "MT system" ] ]
In this paper we compare two competing approaches to part-of-speech tagging, statistical and constraint-based disambiguation, using French as our test language. We imposed a time limit on our experiment : the amount of time spent on the design of our constraint system was about the same as the time we used to train and test the easy-to-implement statistical model. We describe the two systems and compare the results. The accuracy of the statistical method is reasonably good, comparable to taggers for English. But the constraint-based tagger seems to be superior even with the limited time we allowed ourselves for rule development.
[ [ "approaches", "USED-FOR", "part-of-speech tagging" ], [ "French", "USED-FOR", "approaches" ], [ "constraint system", "COMPARE", "statistical model" ], [ "accuracy", "EVALUATE-FOR", "statistical method" ], [ "accuracy", "EVALUATE-FOR", "taggers" ], [ "statistical method", "COMPARE", "taggers" ], [ "taggers", "USED-FOR", "English" ] ]
Structured-light methods actively generate geometric correspondence data between projectors and cameras in order to facilitate robust 3D reconstruction. In this paper, we present Photogeometric Structured Light whereby a standard structured light method is extended to include photometric methods. Photometric processing serves the double purpose of increasing the amount of recovered surface detail and of enabling the structured-light setup to be robustly self-calibrated. Further, our framework uses a photogeometric optimization that supports the simultaneous use of multiple cameras and projectors and yields a single and accurate multi-view 3D model which best complies with photometric and geometric data.
[ [ "Structured-light methods", "USED-FOR", "geometric correspondence data" ], [ "geometric correspondence data", "USED-FOR", "robust 3D reconstruction" ], [ "structured light method", "PART-OF", "Photogeometric Structured Light" ], [ "photometric methods", "PART-OF", "Photogeometric Structured Light" ], [ "Photometric processing", "USED-FOR", "recovered surface detail" ], [ "Photometric processing", "USED-FOR", "structured-light setup" ], [ "photogeometric optimization", "USED-FOR", "framework" ], [ "photometric and geometric data", "USED-FOR", "multi-view 3D model" ] ]
In this paper, a discrimination and robustness oriented adaptive learning procedure is proposed to deal with the task of syntactic ambiguity resolution. Owing to the problem of insufficient training data and approximation error introduced by the language model, traditional statistical approaches, which resolve ambiguities by indirectly and implicitly using maximum likelihood method, fail to achieve high performance in real applications. The proposed method remedies these problems by adjusting the parameters to maximize the accuracy rate directly. To make the proposed algorithm robust, the possible variations between the training corpus and the real tasks are also taken into consideration by enlarging the separation margin between the correct candidate and its competing members. Significant improvement has been observed in the test. The accuracy rate of syntactic disambiguation is raised from 46.0 % to 60.62 % by using this novel approach.
[ [ "adaptive learning procedure", "USED-FOR", "syntactic ambiguity resolution" ], [ "insufficient training data", "CONJUNCTION", "approximation error" ], [ "statistical approaches", "USED-FOR", "ambiguities" ], [ "maximum likelihood method", "USED-FOR", "statistical approaches" ], [ "accuracy rate", "EVALUATE-FOR", "syntactic disambiguation" ], [ "syntactic disambiguation", "EVALUATE-FOR", "approach" ] ]
This paper presents a new approach to statistical sentence generation in which alternative phrases are represented as packed sets of trees, or forests, and then ranked statistically to choose the best one. This representation offers advantages in compactness and in the ability to represent syntactic information. It also facilitates more efficient statistical ranking than a previous approach to statistical generation. An efficient ranking algorithm is described, together with experimental results showing significant improvements over simple enumeration or a lattice-based approach.
[ [ "approach", "USED-FOR", "statistical sentence generation" ], [ "It", "USED-FOR", "statistical ranking" ], [ "It", "COMPARE", "approach" ], [ "approach", "USED-FOR", "statistical generation" ], [ "ranking algorithm", "COMPARE", "enumeration" ], [ "ranking algorithm", "COMPARE", "lattice-based approach" ], [ "enumeration", "CONJUNCTION", "lattice-based approach" ] ]
This article deals with the interpretation of conceptual operations underlying the communicative use of natural language -LRB- NL -RRB- within the Structured Inheritance Network -LRB- SI-Nets -RRB- paradigm. The operations are reduced to functions of a formal language, thus changing the level of abstraction of the operations to be performed on SI-Nets. In this sense, operations on SI-Nets are not merely isomorphic to single epistemological objects, but can be viewed as a simulation of processes on a different level, that pertaining to the conceptual system of NL. For this purpose, we have designed a version of KL-ONE which represents the epistemological level, while the new experimental language, KL-Conc, represents the conceptual level. KL-Conc would seem to be a more natural and intuitive way of interacting with SI-Nets.
[ [ "natural language -LRB- NL -RRB-", "USED-FOR", "Structured Inheritance Network -LRB- SI-Nets -RRB- paradigm" ], [ "operations", "USED-FOR", "SI-Nets" ], [ "operations", "USED-FOR", "SI-Nets" ], [ "NL", "USED-FOR", "conceptual system" ], [ "KL-ONE", "COMPARE", "KL-Conc" ], [ "epistemological level", "FEATURE-OF", "KL-ONE" ], [ "conceptual level", "FEATURE-OF", "KL-Conc" ] ]
We present an algorithm for calibrated camera relative pose estimation from lines. Given three lines with two of the lines parallel and orthogonal to the third we can compute the relative rotation between two images. We can also compute the relative translation from two intersection points. We also present a framework in which such lines can be detected. We evaluate the performance of the algorithm using synthetic and real data. The intended use of the algorithm is with robust hypothesize-and-test frameworks such as RANSAC. Our approach is suitable for urban and indoor environments where most lines are either parallel or orthogonal to each other.
[ [ "algorithm", "USED-FOR", "calibrated camera relative pose estimation" ], [ "synthetic and real data", "USED-FOR", "algorithm" ], [ "algorithm", "CONJUNCTION", "hypothesize-and-test frameworks" ], [ "RANSAC", "HYPONYM-OF", "hypothesize-and-test frameworks" ], [ "approach", "USED-FOR", "urban and indoor environments" ] ]
In this paper, we present a fully automated extraction system, named IntEx, to identify gene and protein interactions in biomedical text. Our approach is based on first splitting complex sentences into simple clausal structures made up of syntactic roles. Then, tagging biological entities with the help of biomedical and linguistic ontologies. Finally, extracting complete interactions by analyzing the matching contents of syntactic roles and their linguistically significant combinations. Our extraction system handles complex sentences and extracts multiple and nested interactions specified in a sentence. Experimental evaluations with two other state of the art extraction systems indicate that the IntEx system achieves better performance without the labor intensive pattern engineering requirement.
[ [ "fully automated extraction system", "USED-FOR", "gene and protein interactions" ], [ "IntEx", "HYPONYM-OF", "fully automated extraction system" ], [ "biomedical text", "USED-FOR", "gene and protein interactions" ], [ "biomedical and linguistic ontologies", "USED-FOR", "biological entities" ], [ "extraction system", "USED-FOR", "multiple and nested interactions" ], [ "IntEx system", "COMPARE", "extraction systems" ] ]
This paper introduces a method for computational analysis of move structures in abstracts of research articles. In our approach, sentences in a given abstract are analyzed and labeled with a specific move in light of various rhetorical functions. The method involves automatically gathering a large number of abstracts from the Web and building a language model of abstract moves. We also present a prototype concordancer, CARE, which exploits the move-tagged abstracts for digital learning. This system provides a promising approach to Web-based computer-assisted academic writing.
[ [ "method", "USED-FOR", "computational analysis of move structures" ], [ "abstracts of research articles", "USED-FOR", "computational analysis of move structures" ], [ "Web", "USED-FOR", "abstracts" ], [ "abstract moves", "USED-FOR", "language model" ], [ "CARE", "HYPONYM-OF", "prototype concordancer" ], [ "CARE", "USED-FOR", "move-tagged abstracts" ], [ "move-tagged abstracts", "USED-FOR", "digital learning" ], [ "system", "USED-FOR", "approach" ], [ "approach", "USED-FOR", "Web-based computer-assisted academic writing" ] ]
This work presents a real-time system for multiple object tracking in dynamic scenes. A unique characteristic of the system is its ability to cope with long-duration and complete occlusion without a prior knowledge about the shape or motion of objects. The system produces good segment and tracking results at a frame rate of 15-20 fps for image size of 320x240, as demonstrated by extensive experiments performed using video sequences under different conditions indoor and outdoor with long-duration and complete occlusions in changing background.
[ [ "real-time system", "USED-FOR", "multiple object tracking in dynamic scenes" ], [ "system", "USED-FOR", "long-duration and complete occlusion" ], [ "prior knowledge", "FEATURE-OF", "shape" ], [ "prior knowledge", "FEATURE-OF", "motion of objects" ], [ "shape", "CONJUNCTION", "motion of objects" ], [ "tracking", "EVALUATE-FOR", "system" ] ]
We propose a method of organizing reading materials for vocabulary learning. It enables us to select a concise set of reading texts -LRB- from a target corpus -RRB- that contains all the target vocabulary to be learned. We used a specialized vocabulary for an English certification test as the target vocabulary and used English Wikipedia, a free-content encyclopedia, as the target corpus. The organized reading materials would enable learners not only to study the target vocabulary efficiently but also to gain a variety of knowledge through reading. The reading materials are available on our web site.
[ [ "method", "USED-FOR", "organizing reading materials" ], [ "organizing reading materials", "USED-FOR", "vocabulary learning" ], [ "English Wikipedia", "HYPONYM-OF", "free-content encyclopedia" ] ]
A novel bootstrapping approach to Named Entity -LRB- NE -RRB- tagging using concept-based seeds and successive learners is presented. This approach only requires a few common noun or pronoun seeds that correspond to the concept for the targeted NE, e.g. he/she/man / woman for PERSON NE. The bootstrapping procedure is implemented as training two successive learners. First, decision list is used to learn the parsing-based NE rules. Then, a Hidden Markov Model is trained on a corpus automatically tagged by the first learner. The resulting NE system approaches supervised NE performance for some NE types.
[ [ "bootstrapping approach", "USED-FOR", "Named Entity -LRB- NE -RRB- tagging" ], [ "concept-based seeds", "USED-FOR", "bootstrapping approach" ], [ "concept-based seeds", "CONJUNCTION", "successive learners" ], [ "successive learners", "USED-FOR", "bootstrapping approach" ], [ "PERSON NE", "HYPONYM-OF", "NE" ], [ "successive learners", "USED-FOR", "bootstrapping procedure" ], [ "decision list", "USED-FOR", "parsing-based NE rules" ], [ "NE system", "USED-FOR", "supervised NE" ] ]
We present the first known empirical test of an increasingly common speculative claim, by evaluating a representative Chinese-to-English SMT model directly on word sense disambiguation performance, using standard WSD evaluation methodology and datasets from the Senseval-3 Chinese lexical sample task. Much effort has been put in designing and evaluating dedicated word sense disambiguation -LRB- WSD -RRB- models, in particular with the Senseval series of workshops. At the same time, the recent improvements in the BLEU scores of statistical machine translation -LRB- SMT -RRB- suggests that SMT models are good at predicting the right translation of the words in source language sentences. Surprisingly however, the WSD accuracy of SMT models has never been evaluated and compared with that of the dedicated WSD models. We present controlled experiments showing the WSD accuracy of current typical SMT models to be significantly lower than that of all the dedicated WSD models considered. This tends to support the view that despite recent speculative claims to the contrary, current SMT models do have limitations in comparison with dedicated WSD models, and that SMT should benefit from the better predictions made by the WSD models.
[ [ "word sense disambiguation", "EVALUATE-FOR", "Chinese-to-English SMT model" ], [ "WSD evaluation methodology", "EVALUATE-FOR", "Chinese-to-English SMT model" ], [ "Senseval-3 Chinese lexical sample task", "EVALUATE-FOR", "Chinese-to-English SMT model" ], [ "Senseval series of workshops", "EVALUATE-FOR", "dedicated word sense disambiguation -LRB- WSD -RRB- models" ], [ "BLEU scores", "EVALUATE-FOR", "statistical machine translation -LRB- SMT -RRB-" ], [ "SMT models", "USED-FOR", "translation" ], [ "WSD accuracy", "EVALUATE-FOR", "SMT models" ], [ "that", "COMPARE", "WSD accuracy" ], [ "WSD accuracy", "EVALUATE-FOR", "SMT models" ], [ "that", "COMPARE", "WSD accuracy" ], [ "SMT models", "COMPARE", "dedicated WSD models" ], [ "WSD models", "USED-FOR", "SMT" ] ]
In this paper we present a novel, customizable : IE paradigm that takes advantage of predicate-argument structures. We also introduce a new way of automatically identifying predicate argument structures, which is central to our IE paradigm. It is based on : -LRB- 1 -RRB- an extended set of features ; and -LRB- 2 -RRB- inductive decision tree learning. The experimental results prove our claim that accurate predicate-argument structures enable high quality IE results.
[ [ "predicate-argument structures", "USED-FOR", "IE paradigm" ], [ "features", "USED-FOR", "It" ], [ "features", "CONJUNCTION", "inductive decision tree learning" ], [ "inductive decision tree learning", "USED-FOR", "It" ], [ "predicate-argument structures", "USED-FOR", "IE" ] ]
In this paper we present a statistical profile of the Named Entity task, a specific information extraction task for which corpora in several languages are available. Using the results of the statistical analysis, we propose an algorithm for lower bound estimation for Named Entity corpora and discuss the significance of the cross-lingual comparisons provided by the analysis.
[ [ "statistical profile", "USED-FOR", "Named Entity task" ], [ "Named Entity task", "HYPONYM-OF", "information extraction task" ], [ "statistical analysis", "USED-FOR", "algorithm" ], [ "algorithm", "USED-FOR", "lower bound estimation" ], [ "lower bound estimation", "USED-FOR", "Named Entity corpora" ] ]
We attack an inexplicably under-explored language genre of spoken language -- lyrics in music -- via completely unsuper-vised induction of an SMT-style stochastic transduction grammar for hip hop lyrics, yielding a fully-automatically learned challenge-response system that produces rhyming lyrics given an input. Unlike previous efforts, we choose the domain of hip hop lyrics, which is particularly unstructured and noisy. A novel feature of our approach is that it is completely unsupervised and requires no a priori linguistic or phonetic knowledge. In spite of the level of difficulty of the challenge, the model nevertheless produces fluent output as judged by human evaluators, and performs significantly better than widely used phrase-based SMT models upon the same task.
[ [ "lyrics in music", "HYPONYM-OF", "under-explored language genre of spoken language" ], [ "unsuper-vised induction", "USED-FOR", "under-explored language genre of spoken language" ], [ "unsuper-vised induction", "USED-FOR", "SMT-style stochastic transduction grammar" ], [ "unsuper-vised induction", "USED-FOR", "fully-automatically learned challenge-response system" ], [ "hip hop lyrics", "FEATURE-OF", "SMT-style stochastic transduction grammar" ], [ "fully-automatically learned challenge-response system", "USED-FOR", "rhyming lyrics" ], [ "model", "COMPARE", "phrase-based SMT models" ], [ "task", "EVALUATE-FOR", "model" ], [ "task", "EVALUATE-FOR", "phrase-based SMT models" ] ]
In this paper, we investigate the problem of automatically predicting segment boundaries in spoken multiparty dialogue. We extend prior work in two ways. We first apply approaches that have been proposed for predicting top-level topic shifts to the problem of identifying subtopic boundaries. We then explore the impact on performance of using ASR output as opposed to human transcription. Examination of the effect of features shows that predicting top-level and predicting subtopic boundaries are two distinct tasks : -LRB- 1 -RRB- for predicting subtopic boundaries, the lexical cohesion-based approach alone can achieve competitive results, -LRB- 2 -RRB- for predicting top-level boundaries, the machine learning approach that combines lexical-cohesion and conversational features performs best, and -LRB- 3 -RRB- conversational cues, such as cue phrases and overlapping speech, are better indicators for the top-level prediction task. We also find that the transcription errors inevitable in ASR output have a negative impact on models that combine lexical-cohesion and conversational features, but do not change the general preference of approach for the two tasks.
[ [ "spoken multiparty dialogue", "USED-FOR", "predicting segment boundaries" ], [ "approaches", "USED-FOR", "predicting top-level topic shifts" ], [ "approaches", "USED-FOR", "identifying subtopic boundaries" ], [ "predicting top-level topic shifts", "FEATURE-OF", "identifying subtopic boundaries" ], [ "ASR output", "COMPARE", "human transcription" ], [ "predicting subtopic boundaries", "PART-OF", "predicting top-level and predicting subtopic boundaries" ], [ "lexical cohesion-based approach", "USED-FOR", "predicting subtopic boundaries" ], [ "predicting top-level boundaries", "PART-OF", "predicting top-level and predicting subtopic boundaries" ], [ "machine learning approach", "USED-FOR", "predicting top-level boundaries" ], [ "lexical-cohesion and conversational features", "CONJUNCTION", "machine learning approach" ], [ "cue phrases", "HYPONYM-OF", "conversational cues" ], [ "overlapping speech", "HYPONYM-OF", "conversational cues" ], [ "overlapping speech", "CONJUNCTION", "cue phrases" ], [ "indicators", "USED-FOR", "top-level prediction task" ], [ "transcription errors", "FEATURE-OF", "ASR output" ], [ "models", "CONJUNCTION", "lexical-cohesion and conversational features" ] ]
We describe a simple unsupervised technique for learning morphology by identifying hubs in an automaton. For our purposes, a hub is a node in a graph with in-degree greater than one and out-degree greater than one. We create a word-trie, transform it into a minimal DFA, then identify hubs. Those hubs mark the boundary between root and suffix, achieving similar performance to more complex mixtures of techniques.
[ [ "unsupervised technique", "USED-FOR", "morphology" ], [ "hubs", "USED-FOR", "unsupervised technique" ], [ "hubs", "PART-OF", "automaton" ], [ "hub", "HYPONYM-OF", "node" ], [ "node", "PART-OF", "graph" ], [ "word-trie", "USED-FOR", "hubs" ], [ "minimal DFA", "USED-FOR", "hubs" ] ]
In Bayesian machine learning, conjugate priors are popular, mostly due to mathematical convenience. In this paper, we show that there are deeper reasons for choosing a conjugate prior. Specifically, we formulate the conjugate prior in the form of Bregman divergence and show that it is the inherent geometry of conjugate priors that makes them appropriate and intuitive. This geometric interpretation allows one to view the hyperparameters of conjugate priors as the effective sample points, thus providing additional intuition. We use this geometric understanding of conjugate priors to derive the hyperparameters and expression of the prior used to couple the generative and discriminative components of a hybrid model for semi-supervised learning.
[ [ "conjugate priors", "PART-OF", "Bayesian machine learning" ], [ "Bregman divergence", "FEATURE-OF", "conjugate prior" ], [ "geometric understanding of conjugate priors", "USED-FOR", "hyperparameters" ], [ "prior", "USED-FOR", "generative and discriminative components" ], [ "generative and discriminative components", "PART-OF", "hybrid model" ], [ "hybrid model", "USED-FOR", "semi-supervised learning" ] ]
This paper defines a generative probabilistic model of parse trees, which we call PCFG-LA. This model is an extension of PCFG in which non-terminal symbols are augmented with latent variables. Finegrained CFG rules are automatically induced from a parsed corpus by training a PCFG-LA model using an EM-algorithm. Because exact parsing with a PCFG-LA is NP-hard, several approximations are described and empirically compared. In experiments using the Penn WSJ corpus, our automatically trained model gave a performance of 86.6 % -LRB- F1, sentences < 40 words -RRB-, which is comparable to that of an unlexicalized PCFG parser created using extensive manual feature selection.
[ [ "PCFG-LA", "HYPONYM-OF", "generative probabilistic model of parse trees" ], [ "PCFG", "USED-FOR", "model" ], [ "non-terminal symbols", "PART-OF", "PCFG" ], [ "latent variables", "USED-FOR", "non-terminal symbols" ], [ "parsed corpus", "USED-FOR", "Finegrained CFG rules" ], [ "EM-algorithm", "USED-FOR", "PCFG-LA model" ], [ "PCFG-LA", "USED-FOR", "exact parsing" ], [ "Penn WSJ corpus", "EVALUATE-FOR", "model" ], [ "Penn WSJ corpus", "EVALUATE-FOR", "unlexicalized PCFG parser" ], [ "model", "COMPARE", "unlexicalized PCFG parser" ], [ "F1", "EVALUATE-FOR", "model" ], [ "F1", "EVALUATE-FOR", "unlexicalized PCFG parser" ], [ "manual feature selection", "USED-FOR", "unlexicalized PCFG parser" ] ]
This paper reports on two contributions to large vocabulary continuous speech recognition. First, we present a new paradigm for speaker-independent -LRB- SI -RRB- training of hidden Markov models -LRB- HMM -RRB-, which uses a large amount of speech from a few speakers instead of the traditional practice of using a little speech from many speakers. In addition, combination of the training speakers is done by averaging the statistics of independently trained models rather than the usual pooling of all the speech data from many speakers prior to training. With only 12 training speakers for SI recognition, we achieved a 7.5 % word error rate on a standard grammar and test set from the DARPA Resource Management corpus. This performance is comparable to our best condition for this test suite, using 109 training speakers. Second, we show a significant improvement for speaker adaptation -LRB- SA -RRB- using the new SI corpus and a small amount of speech from the new -LRB- target -RRB- speaker. A probabilistic spectral mapping is estimated independently for each training -LRB- reference -RRB- speaker and the target speaker. Each reference model is transformed to the space of the target speaker and combined by averaging. Using only 40 utterances from the target speaker for adaptation, the error rate dropped to 4.1 % -- a 45 % reduction in error compared to the SI result.
[ [ "speech", "USED-FOR", "speaker-independent -LRB- SI -RRB- training of hidden Markov models -LRB- HMM -RRB-" ], [ "averaging the statistics of independently trained models", "COMPARE", "pooling of all the speech data" ], [ "word error rate", "EVALUATE-FOR", "SI recognition" ], [ "DARPA Resource Management corpus", "EVALUATE-FOR", "SI recognition" ], [ "SI corpus", "EVALUATE-FOR", "speaker adaptation -LRB- SA -RRB-" ], [ "error rate", "EVALUATE-FOR", "adaptation" ] ]
In this paper, we show how to construct a transfer dictionary automatically. Dictionary construction, one of the most difficult tasks in developing a machine translation system, is expensive. To avoid this problem, we investigate how we build a dictionary using existing linguistic resources. Our algorithm can be applied to any language pairs, but for the present we focus on building a Korean-to-Japanese dictionary using English as a pivot. We attempt three ways of automatic construction to corroborate the effect of the directionality of dictionaries. First, we introduce `` one-time look up '' method using a Korean-to-English and a Japanese-to-English dictionary. Second, we show a method using `` overlapping constraint '' with a Korean-to-English dictionary and an English-to-Japanese dictionary. Third, we consider another alternative method rarely used for building a dictionary : an English-to-Korean dictionary and English-to-Japanese dictionary. We found that the first method is the most effective and the best result can be obtained from combining the three methods.
[ [ "Dictionary construction", "PART-OF", "machine translation system" ], [ "linguistic resources", "USED-FOR", "dictionary" ], [ "English", "USED-FOR", "Korean-to-Japanese dictionary" ], [ "automatic construction", "EVALUATE-FOR", "directionality of dictionaries" ], [ "Korean-to-English and a Japanese-to-English dictionary", "USED-FOR", "`` one-time look up '' method" ], [ "`` overlapping constraint ''", "USED-FOR", "method" ], [ "Korean-to-English dictionary", "USED-FOR", "method" ], [ "Korean-to-English dictionary", "CONJUNCTION", "English-to-Japanese dictionary" ], [ "English-to-Japanese dictionary", "USED-FOR", "method" ], [ "method", "USED-FOR", "dictionary" ], [ "English-to-Korean dictionary", "HYPONYM-OF", "dictionary" ], [ "English-to-Korean dictionary", "CONJUNCTION", "English-to-Japanese dictionary" ], [ "English-to-Japanese dictionary", "HYPONYM-OF", "dictionary" ] ]
An empirical comparison of CFG filtering techniques for LTAG and HPSG is presented. We demonstrate that an approximation of HPSG produces a more effective CFG filter than that of LTAG. We also investigate the reason for that difference.
[ [ "CFG filtering techniques", "USED-FOR", "LTAG" ], [ "CFG filtering techniques", "USED-FOR", "HPSG" ], [ "LTAG", "COMPARE", "HPSG" ], [ "approximation of HPSG", "USED-FOR", "CFG filter" ], [ "CFG filter", "COMPARE", "that" ], [ "that", "USED-FOR", "LTAG" ] ]
Syntax-based statistical machine translation -LRB- MT -RRB- aims at applying statistical models to structured data. In this paper, we present a syntax-based statistical machine translation system based on a probabilistic synchronous dependency insertion grammar. Synchronous dependency insertion grammars are a version of synchronous grammars defined on dependency trees. We first introduce our approach to inducing such a grammar from parallel corpora. Second, we describe the graphical model for the machine translation task, which can also be viewed as a stochastic tree-to-tree transducer. We introduce a polynomial time decoding algorithm for the model. We evaluate the outputs of our MT system using the NIST and Bleu automatic MT evaluation software. The result shows that our system outperforms the baseline system based on the IBM models in both translation speed and quality.
[ [ "statistical models", "USED-FOR", "Syntax-based statistical machine translation -LRB- MT -RRB-" ], [ "structured data", "USED-FOR", "statistical models" ], [ "probabilistic synchronous dependency insertion grammar", "USED-FOR", "syntax-based statistical machine translation system" ], [ "Synchronous dependency insertion grammars", "HYPONYM-OF", "synchronous grammars" ], [ "dependency trees", "FEATURE-OF", "Synchronous dependency insertion grammars" ], [ "approach", "USED-FOR", "grammar" ], [ "parallel corpora", "USED-FOR", "grammar" ], [ "graphical model", "USED-FOR", "machine translation task" ], [ "stochastic tree-to-tree transducer", "USED-FOR", "graphical model" ], [ "polynomial time decoding algorithm", "USED-FOR", "model" ], [ "NIST and Bleu automatic MT evaluation software", "USED-FOR", "MT system" ], [ "system", "COMPARE", "baseline system" ], [ "IBM models", "USED-FOR", "baseline system" ], [ "translation speed and quality", "EVALUATE-FOR", "system" ], [ "translation speed and quality", "EVALUATE-FOR", "baseline system" ] ]
We propose a framework to derive the distance between concepts from distributional measures of word co-occurrences. We use the categories in a published thesaurus as coarse-grained concepts, allowing all possible distance values to be stored in a concept-concept matrix roughly.01 % the size of that created by existing measures. We show that the newly proposed concept-distance measures outperform traditional distributional word-distance measures in the tasks of -LRB- 1 -RRB- ranking word pairs in order of semantic distance, and -LRB- 2 -RRB- correcting real-word spelling errors. In the latter task, of all the WordNet-based measures, only that proposed by Jiang and Conrath outperforms the best distributional concept-distance measures.
[ [ "distributional measures of word co-occurrences", "USED-FOR", "framework" ], [ "concept-distance measures", "USED-FOR", "tasks" ], [ "distributional word-distance measures", "COMPARE", "concept-distance measures" ], [ "distributional word-distance measures", "USED-FOR", "tasks" ], [ "ranking word pairs in order of semantic distance", "HYPONYM-OF", "tasks" ], [ "correcting real-word spelling errors", "HYPONYM-OF", "tasks" ], [ "correcting real-word spelling errors", "CONJUNCTION", "ranking word pairs in order of semantic distance" ], [ "task", "EVALUATE-FOR", "WordNet-based measures" ], [ "task", "EVALUATE-FOR", "distributional concept-distance measures" ], [ "distributional concept-distance measures", "COMPARE", "WordNet-based measures" ] ]
We focus on the problem of building large repositories of lexical conceptual structure -LRB- LCS -RRB- representations for verbs in multiple languages. One of the main results of this work is the definition of a relation between broad semantic classes and LCS meaning components. Our acquisition program - LEXICALL - takes, as input, the result of previous work on verb classification and thematic grid tagging, and outputs LCS representations for different languages. These representations have been ported into English, Arabic and Spanish lexicons, each containing approximately 9000 verbs. We are currently using these lexicons in an operational foreign language tutoring and machine translation.
[ [ "broad semantic classes", "CONJUNCTION", "LCS meaning components" ], [ "acquisition program - LEXICALL -", "USED-FOR", "LCS representations" ], [ "verb classification", "USED-FOR", "acquisition program - LEXICALL -" ], [ "verb classification", "CONJUNCTION", "thematic grid tagging" ], [ "thematic grid tagging", "USED-FOR", "acquisition program - LEXICALL -" ], [ "representations", "USED-FOR", "English , Arabic and Spanish lexicons" ], [ "lexicons", "USED-FOR", "operational foreign language tutoring" ], [ "lexicons", "USED-FOR", "machine translation" ], [ "operational foreign language tutoring", "CONJUNCTION", "machine translation" ] ]
The theoretical study of the range concatenation grammar -LSB- RCG -RSB- formalism has revealed many attractive properties which may be used in NLP. In particular, range concatenation languages -LSB- RCL -RSB- can be parsed in polynomial time and many classical grammatical formalisms can be translated into equivalent RCGs without increasing their worst-case parsing time complexity. For example, after translation into an equivalent RCG, any tree adjoining grammar can be parsed in O -LRB- n6 -RRB- time. In this paper, we study a parsing technique whose purpose is to improve the practical efficiency of RCL parsers. The non-deterministic parsing choices of the main parser for a language L are directed by a guide which uses the shared derivation forest output by a prior RCL parser for a suitable superset of L. The results of a practical evaluation of this method on a wide coverage English grammar are given.
[ [ "range concatenation grammar -LSB- RCG -RSB- formalism", "USED-FOR", "NLP" ], [ "polynomial time", "FEATURE-OF", "range concatenation languages -LSB- RCL -RSB-" ], [ "worst-case parsing time complexity", "EVALUATE-FOR", "grammatical formalisms" ], [ "O -LRB- n6 -RRB- time", "FEATURE-OF", "tree adjoining grammar" ], [ "parsing technique", "USED-FOR", "RCL parsers" ], [ "main parser", "USED-FOR", "language L" ], [ "RCL parser", "USED-FOR", "shared derivation forest" ], [ "wide coverage English grammar", "EVALUATE-FOR", "method" ] ]
In this paper we introduce Ant-Q, a family of algorithms which present many similarities with Q-learning -LRB- Watkins, 1989 -RRB-, and which we apply to the solution of symmetric and asym-metric instances of the traveling salesman problem -LRB- TSP -RRB-. Ant-Q algorithms were inspired by work on the ant system -LRB- AS -RRB-, a distributed algorithm for combinatorial optimization based on the metaphor of ant colonies which was recently proposed in -LRB- Dorigo, 1992 ; Dorigo, Maniezzo and Colorni, 1996 -RRB-. We show that AS is a particular instance of the Ant-Q family, and that there are instances of this family which perform better than AS. We experimentally investigate the functioning of Ant-Q and we show that the results obtained by Ant-Q on symmetric TSP 's are competitive with those obtained by other heuristic approaches based on neural networks or local search. Finally, we apply Ant-Q to some difficult asymmetric TSP 's obtaining very good results : Ant-Q was able to find solutions of a quality which usually can be found only by very specialized algorithms.
[ [ "Ant-Q", "USED-FOR", "symmetric and asym-metric instances of the traveling salesman problem -LRB- TSP -RRB-" ], [ "ant system -LRB- AS -RRB-", "USED-FOR", "Ant-Q algorithms" ], [ "ant system -LRB- AS -RRB-", "HYPONYM-OF", "distributed algorithm" ], [ "distributed algorithm", "USED-FOR", "combinatorial optimization" ], [ "AS", "HYPONYM-OF", "Ant-Q family" ], [ "instances", "PART-OF", "family" ], [ "instances", "COMPARE", "AS" ], [ "Ant-Q", "USED-FOR", "symmetric TSP" ], [ "Ant-Q", "COMPARE", "heuristic approaches" ], [ "neural networks", "USED-FOR", "heuristic approaches" ], [ "neural networks", "CONJUNCTION", "local search" ], [ "local search", "USED-FOR", "heuristic approaches" ], [ "Ant-Q", "USED-FOR", "asymmetric TSP" ] ]
In this paper, we develop a geometric framework for linear or nonlinear discriminant subspace learning and classification. In our framework, the structures of classes are conceptualized as a semi-Riemannian manifold which is considered as a submanifold embedded in an ambient semi-Riemannian space. The class structures of original samples can be characterized and deformed by local metrics of the semi-Riemannian space. Semi-Riemannian metrics are uniquely determined by the smoothing of discrete functions and the nullity of the semi-Riemannian space. Based on the geometrization of class structures, optimizing class structures in the feature space is equivalent to maximizing the quadratic quantities of metric tensors in the semi-Riemannian space. Thus supervised discriminant subspace learning reduces to unsupervised semi-Riemannian mani-fold learning. Based on the proposed framework, a novel algorithm, dubbed as Semi-Riemannian Discriminant Analysis -LRB- SRDA -RRB-, is presented for subspace-based classification. The performance of SRDA is tested on face recognition -LRB- singular case -RRB- and handwritten capital letter classification -LRB- nonsingular case -RRB- against existing algorithms. The experimental results show that SRDA works well on recognition and classification, implying that semi-Riemannian geometry is a promising new tool for pattern recognition and machine learning.
[ [ "geometric framework", "USED-FOR", "linear or nonlinear discriminant subspace learning and classification" ], [ "semi-Riemannian manifold", "USED-FOR", "structures of classes" ], [ "submanifold", "PART-OF", "ambient semi-Riemannian space" ], [ "local metrics of the semi-Riemannian space", "USED-FOR", "class structures" ], [ "smoothing of discrete functions", "USED-FOR", "Semi-Riemannian metrics" ], [ "smoothing of discrete functions", "CONJUNCTION", "nullity of the semi-Riemannian space" ], [ "nullity of the semi-Riemannian space", "USED-FOR", "Semi-Riemannian metrics" ], [ "feature space", "FEATURE-OF", "class structures" ], [ "semi-Riemannian space", "FEATURE-OF", "quadratic quantities of metric tensors" ], [ "framework", "USED-FOR", "algorithm" ], [ "algorithm", "USED-FOR", "subspace-based classification" ], [ "SRDA", "COMPARE", "algorithms" ], [ "face recognition -LRB- singular case", "EVALUATE-FOR", "SRDA" ], [ "face recognition -LRB- singular case", "CONJUNCTION", "handwritten capital letter classification -LRB- nonsingular case -RRB-" ], [ "face recognition -LRB- singular case", "EVALUATE-FOR", "algorithms" ], [ "handwritten capital letter classification -LRB- nonsingular case -RRB-", "EVALUATE-FOR", "SRDA" ], [ "handwritten capital letter classification -LRB- nonsingular case -RRB-", "EVALUATE-FOR", "algorithms" ], [ "SRDA", "USED-FOR", "recognition" ], [ "SRDA", "USED-FOR", "classification" ], [ "recognition", "CONJUNCTION", "classification" ], [ "semi-Riemannian geometry", "USED-FOR", "pattern recognition" ], [ "semi-Riemannian geometry", "USED-FOR", "machine learning" ], [ "pattern recognition", "CONJUNCTION", "machine learning" ] ]
A deterministic parser is under development which represents a departure from traditional deterministic parsers in that it combines both symbolic and connectionist components. The connectionist component is trained either from patterns derived from the rules of a deterministic grammar. The development and evolution of such a hybrid architecture has lead to a parser which is superior to any known deterministic parser. Experiments are described and powerful training techniques are demonstrated that permit decision-making by the connectionist component in the parsing process. This approach has permitted some simplifications to the rules of other deterministic parsers, including the elimination of rule packets and priorities. Furthermore, parsing is performed more robustly and with more tolerance for error. Data are presented which show how a connectionist -LRB- neural -RRB- network trained with linguistic rules can parse both expected -LRB- grammatical -RRB- sentences as well as some novel -LRB- ungrammatical or lexically ambiguous -RRB- sentences.
[ [ "deterministic parser", "COMPARE", "deterministic parsers" ], [ "symbolic and connectionist components", "PART-OF", "it" ], [ "patterns", "USED-FOR", "connectionist component" ], [ "rules of a deterministic grammar", "USED-FOR", "patterns" ], [ "hybrid architecture", "USED-FOR", "parser" ], [ "parser", "COMPARE", "deterministic parser" ], [ "training techniques", "USED-FOR", "decision-making" ], [ "connectionist component", "USED-FOR", "decision-making" ], [ "connectionist component", "PART-OF", "parsing process" ], [ "connectionist -LRB- neural -RRB- network", "USED-FOR", "expected -LRB- grammatical -RRB- sentences" ], [ "connectionist -LRB- neural -RRB- network", "USED-FOR", "-LRB- ungrammatical or lexically ambiguous -RRB- sentences" ], [ "linguistic rules", "USED-FOR", "connectionist -LRB- neural -RRB- network" ], [ "expected -LRB- grammatical -RRB- sentences", "CONJUNCTION", "-LRB- ungrammatical or lexically ambiguous -RRB- sentences" ] ]
Robust natural language interpretation requires strong semantic domain models, fail-soft recovery heuristics, and very flexible control structures. Although single-strategy parsers have met with a measure of success, a multi-strategy approach is shown to provide a much higher degree of flexibility, redundancy, and ability to bring task-specific domain knowledge -LRB- in addition to general linguistic knowledge -RRB- to bear on both grammatical and ungrammatical input. A parsing algorithm is presented that integrates several different parsing strategies, with case-frame instantiation dominating. Each of these parsing strategies exploits different types of knowledge ; and their combination provides a strong framework in which to process conjunctions, fragmentary input, and ungrammatical structures, as well as less exotic, grammatically correct input. Several specific heuristics for handling ungrammatical input are presented within this multi-strategy framework.
[ [ "semantic domain models", "USED-FOR", "natural language interpretation" ], [ "semantic domain models", "CONJUNCTION", "fail-soft recovery heuristics" ], [ "fail-soft recovery heuristics", "USED-FOR", "natural language interpretation" ], [ "fail-soft recovery heuristics", "CONJUNCTION", "control structures" ], [ "control structures", "USED-FOR", "natural language interpretation" ], [ "single-strategy parsers", "COMPARE", "multi-strategy approach" ], [ "task-specific domain knowledge", "CONJUNCTION", "general linguistic knowledge" ], [ "parsing strategies", "PART-OF", "parsing algorithm" ], [ "case-frame instantiation", "HYPONYM-OF", "parsing strategies" ], [ "parsing strategies", "USED-FOR", "conjunctions" ], [ "parsing strategies", "USED-FOR", "fragmentary input" ], [ "parsing strategies", "USED-FOR", "ungrammatical structures" ], [ "parsing strategies", "USED-FOR", "exotic , grammatically correct input" ], [ "conjunctions", "CONJUNCTION", "fragmentary input" ], [ "fragmentary input", "CONJUNCTION", "ungrammatical structures" ], [ "ungrammatical structures", "CONJUNCTION", "exotic , grammatically correct input" ], [ "specific heuristics", "USED-FOR", "ungrammatical input" ], [ "specific heuristics", "PART-OF", "multi-strategy framework" ] ]
Recently, Stacked Auto-Encoders -LRB- SAE -RRB- have been successfully used for learning imbalanced datasets. In this paper, for the first time, we propose to use a Neural Network classifier furnished by an SAE structure for detecting the errors made by a strong Automatic Speech Recognition -LRB- ASR -RRB- system. Error detection on an automatic transcription provided by a '' strong '' ASR system, i.e. exhibiting a small word error rate, is difficult due to the limited number of '' positive '' examples -LRB- i.e. words erroneously recognized -RRB- available for training a binary classi-fier. In this paper we investigate and compare different types of classifiers for automatically detecting ASR errors, including the one based on a stacked auto-encoder architecture. We show the effectiveness of the latter by measuring and comparing performance on the automatic transcriptions of an English corpus collected from TED talks. Performance of each investigated classifier is evaluated both via receiving operating curve and via a measure, called mean absolute error, related to the quality in predicting the corresponding word error rate. The results demonstrates that the classifier based on SAE detects the ASR errors better than the other classification methods.
[ [ "Stacked Auto-Encoders -LRB- SAE -RRB-", "USED-FOR", "learning imbalanced datasets" ], [ "Neural Network classifier", "USED-FOR", "Automatic Speech Recognition -LRB- ASR -RRB- system" ], [ "SAE structure", "USED-FOR", "Neural Network classifier" ], [ "Error detection", "USED-FOR", "automatic transcription" ], [ "classifiers", "USED-FOR", "automatically detecting ASR errors" ], [ "one", "HYPONYM-OF", "classifiers" ], [ "stacked auto-encoder architecture", "USED-FOR", "one" ], [ "English corpus", "FEATURE-OF", "automatic transcriptions" ], [ "TED talks", "USED-FOR", "English corpus" ], [ "receiving operating curve", "EVALUATE-FOR", "classifier" ], [ "receiving operating curve", "CONJUNCTION", "measure" ], [ "measure", "EVALUATE-FOR", "classifier" ], [ "classifier", "USED-FOR", "ASR errors" ], [ "classifier", "COMPARE", "classification methods" ], [ "SAE", "USED-FOR", "classifier" ], [ "classification methods", "USED-FOR", "ASR errors" ] ]
Within the EU Network of Excellence PASCAL, a challenge was organized to design a statistical machine learning algorithm that segments words into the smallest meaning-bearing units of language, morphemes. Ideally, these are basic vocabulary units suitable for different tasks, such as speech and text understanding, machine translation, information retrieval, and statistical language modeling. Twelve research groups participated in the challenge and had submitted segmentation results obtained by their algorithms. In this paper, we evaluate the application of these segmen-tation algorithms to large vocabulary speech recognition using statistical n-gram language models based on the proposed word segments instead of entire words. Experiments were done for two ag-glutinative and morphologically rich languages : Finnish and Turk-ish. We also investigate combining various segmentations to improve the performance of the recognizer.
[ [ "statistical machine learning algorithm", "USED-FOR", "smallest meaning-bearing units of language" ], [ "morphemes", "HYPONYM-OF", "smallest meaning-bearing units of language" ], [ "these", "USED-FOR", "tasks" ], [ "speech and text understanding", "HYPONYM-OF", "tasks" ], [ "speech and text understanding", "CONJUNCTION", "machine translation" ], [ "machine translation", "HYPONYM-OF", "tasks" ], [ "machine translation", "CONJUNCTION", "information retrieval" ], [ "information retrieval", "HYPONYM-OF", "tasks" ], [ "information retrieval", "CONJUNCTION", "statistical language modeling" ], [ "statistical language modeling", "HYPONYM-OF", "tasks" ], [ "segmen-tation algorithms", "USED-FOR", "large vocabulary speech recognition" ], [ "statistical n-gram language models", "EVALUATE-FOR", "segmen-tation algorithms" ], [ "Finnish", "HYPONYM-OF", "ag-glutinative and morphologically rich languages" ], [ "Finnish", "CONJUNCTION", "Turk-ish" ], [ "Turk-ish", "HYPONYM-OF", "ag-glutinative and morphologically rich languages" ] ]
This paper describes a recently collected spoken language corpus for the ATIS -LRB- Air Travel Information System -RRB- domain. This data collection effort has been co-ordinated by MADCOW -LRB- Multi-site ATIS Data COllection Working group -RRB-. We summarize the motivation for this effort, the goals, the implementation of a multi-site data collection paradigm, and the accomplishments of MADCOW in monitoring the collection and distribution of 12,000 utterances of spontaneous speech from five sites for use in a multi-site common evaluation of speech, natural language and spoken language.
[ [ "spoken language corpus", "USED-FOR", "ATIS -LRB- Air Travel Information System -RRB- domain" ], [ "spontaneous speech", "EVALUATE-FOR", "multi-site common evaluation of speech , natural language and spoken language" ] ]
This paper proposes the Hierarchical Directed Acyclic Graph -LRB- HDAG -RRB- Kernel for structured natural language data. The HDAG Kernel directly accepts several levels of both chunks and their relations, and then efficiently computes the weighed sum of the number of common attribute sequences of the HDAGs. We applied the proposed method to question classification and sentence alignment tasks to evaluate its performance as a similarity measure and a kernel function. The results of the experiments demonstrate that the HDAG Kernel is superior to other kernel functions and baseline methods.
[ [ "Hierarchical Directed Acyclic Graph -LRB- HDAG -RRB- Kernel", "USED-FOR", "structured natural language data" ], [ "method", "USED-FOR", "question classification and sentence alignment tasks" ], [ "similarity measure", "EVALUATE-FOR", "method" ], [ "similarity measure", "CONJUNCTION", "kernel function" ], [ "kernel function", "EVALUATE-FOR", "method" ], [ "HDAG Kernel", "COMPARE", "kernel functions" ], [ "HDAG Kernel", "COMPARE", "baseline methods" ], [ "kernel functions", "CONJUNCTION", "baseline methods" ] ]
We propose a solution to the challenge of the CoNLL 2008 shared task that uses a generative history-based latent variable model to predict the most likely derivation of a synchronous dependency parser for both syntactic and semantic dependencies. The submitted model yields 79.1 % macro-average F1 performance, for the joint task, 86.9 % syntactic dependencies LAS and 71.0 % semantic dependencies F1. A larger model trained after the deadline achieves 80.5 % macro-average F1, 87.6 % syntactic dependencies LAS, and 73.1 % semantic dependencies F1.
[ [ "generative history-based latent variable model", "USED-FOR", "CoNLL 2008 shared task" ], [ "generative history-based latent variable model", "USED-FOR", "synchronous dependency parser" ], [ "synchronous dependency parser", "USED-FOR", "syntactic and semantic dependencies" ], [ "macro-average F1 performance", "EVALUATE-FOR", "model" ], [ "macro-average F1 performance", "EVALUATE-FOR", "task" ], [ "syntactic dependencies LAS", "EVALUATE-FOR", "task" ], [ "syntactic dependencies LAS", "CONJUNCTION", "semantic dependencies F1" ], [ "semantic dependencies F1", "EVALUATE-FOR", "task" ], [ "macro-average F1", "EVALUATE-FOR", "model" ], [ "macro-average F1", "CONJUNCTION", "syntactic dependencies LAS" ], [ "syntactic dependencies LAS", "EVALUATE-FOR", "model" ], [ "syntactic dependencies LAS", "CONJUNCTION", "semantic dependencies F1" ], [ "semantic dependencies F1", "EVALUATE-FOR", "model" ] ]
This paper describes a new, large scale discourse-level annotation project - the Penn Discourse TreeBank -LRB- PDTB -RRB-. We present an approach to annotating a level of discourse structure that is based on identifying discourse connectives and their arguments. The PDTB is being built directly on top of the Penn TreeBank and Propbank, thus supporting the extraction of useful syntactic and semantic features and providing a richer substrate for the development and evaluation of practical algorithms. We provide a detailed preliminary analysis of inter-annotator agreement - both the level of agreement and the types of inter-annotator variation.
[ [ "approach", "USED-FOR", "discourse structure" ], [ "discourse connectives", "USED-FOR", "approach" ], [ "PDTB", "USED-FOR", "extraction of useful syntactic and semantic features" ], [ "PDTB", "EVALUATE-FOR", "practical algorithms" ], [ "Penn TreeBank", "USED-FOR", "PDTB" ], [ "Penn TreeBank", "CONJUNCTION", "Propbank" ], [ "Propbank", "USED-FOR", "PDTB" ], [ "level of agreement", "FEATURE-OF", "inter-annotator agreement" ], [ "level of agreement", "CONJUNCTION", "inter-annotator variation" ], [ "inter-annotator variation", "FEATURE-OF", "inter-annotator agreement" ] ]
Currently, N-gram models are the most common and widely used models for statistical language modeling. In this paper, we investigated an alternative way to build language models, i.e., using artificial neural networks to learn the language model. Our experiment result shows that the neural network can learn a language model that has performance even better than standard statistical methods.
[ [ "N-gram models", "USED-FOR", "statistical language modeling" ], [ "artificial neural networks", "USED-FOR", "language model" ], [ "neural network", "USED-FOR", "language model" ], [ "neural network", "COMPARE", "statistical methods" ] ]
Automated facial expression recognition has received increased attention over the past two decades. Existing works in the field usually do not encode either the temporal evolution or the intensity of the observed facial displays. They also fail to jointly model multidimensional -LRB- multi-class -RRB- continuous facial behaviour data ; binary classifiers-one for each target basic-emotion class-are used instead. In this paper, intrinsic topology of multidimensional continuous facial affect data is first modeled by an ordinal man-ifold. This topology is then incorporated into the Hidden Conditional Ordinal Random Field -LRB- H-CORF -RRB- framework for dynamic ordinal regression by constraining H-CORF parameters to lie on the ordinal manifold. The resulting model attains simultaneous dynamic recognition and intensity estimation of facial expressions of multiple emotions. To the best of our knowledge, the proposed method is the first one to achieve this on both deliberate as well as spontaneous facial affect data.
[ [ "intensity of the observed facial displays", "CONJUNCTION", "temporal evolution" ], [ "ordinal man-ifold", "USED-FOR", "intrinsic topology of multidimensional continuous facial" ], [ "topology", "PART-OF", "Hidden Conditional Ordinal Random Field -LRB- H-CORF -RRB- framework" ], [ "Hidden Conditional Ordinal Random Field -LRB- H-CORF -RRB- framework", "USED-FOR", "dynamic ordinal regression" ], [ "model", "USED-FOR", "simultaneous dynamic recognition" ], [ "model", "USED-FOR", "intensity estimation of facial expressions" ], [ "simultaneous dynamic recognition", "CONJUNCTION", "intensity estimation of facial expressions" ], [ "spontaneous facial affect data", "EVALUATE-FOR", "the proposed method" ] ]
Recent advances in linear classification have shown that for applications such as document classification, the training can be extremely efficient. However, most of the existing training methods are designed by assuming that data can be stored in the computer memory. These methods can not be easily applied to data larger than the memory capacity due to the random access to the disk. We propose and analyze a block minimization framework for data larger than the memory size. At each step a block of data is loaded from the disk and handled by certain learning methods. We investigate two implementations of the proposed framework for primal and dual SVMs, respectively. As data can not fit in memory, many design considerations are very different from those for traditional algorithms. Experiments using data sets 20 times larger than the memory demonstrate the effectiveness of the proposed method.
[ [ "document classification", "HYPONYM-OF", "applications" ], [ "data", "COMPARE", "memory capacity" ], [ "block minimization framework", "USED-FOR", "data" ], [ "data", "COMPARE", "memory size" ], [ "framework", "USED-FOR", "primal and dual SVMs" ] ]
Instances of a word drawn from different domains may have different sense priors -LRB- the proportions of the different senses of a word -RRB-. This in turn affects the accuracy of word sense disambiguation -LRB- WSD -RRB- systems trained and applied on different domains. This paper presents a method to estimate the sense priors of words drawn from a new domain, and highlights the importance of using well calibrated probabilities when performing these estimations. By using well calibrated probabilities, we are able to estimate the sense priors effectively to achieve significant improvements in WSD accuracy.
[ [ "accuracy", "EVALUATE-FOR", "word sense disambiguation -LRB- WSD -RRB- systems" ], [ "method", "USED-FOR", "sense priors of words" ], [ "new domain", "FEATURE-OF", "sense priors of words" ], [ "well calibrated probabilities", "USED-FOR", "estimations" ], [ "well calibrated probabilities", "USED-FOR", "sense priors" ] ]
In this paper we deal with a recently developed large Czech MWE database containing at the moment 160 000 MWEs -LRB- treated as lexical units -RRB-. It was compiled from various resources such as encyclopedias and dictionaries, public databases of proper names and toponyms, collocations obtained from Czech WordNet, lists of botanical and zoological terms and others. We describe the structure of the database and give basic types of MWEs according to domains they belong to. We compare the built MWEs database with the corpus data from Czech National Corpus -LRB- approx. 100 mil. tokens -RRB- and present results of this comparison in the paper. These MWEs have not been obtained from the corpus since their frequencies in it are rather low. To obtain a more complete list of MWEs we propose and use a technique exploiting the Word Sketch Engine, which allows us to work with statistical parameters such as frequency of MWEs and their components as well as with the salience for the whole MWEs. We also discuss exploitation of the database for working out a more adequate tagging and lemmatization. The final goal is to be able to recognize MWEs in corpus text and lemmatize them as complete lexical units, i. e. to make tagging and lemmatization more adequate.
[ [ "encyclopedias", "USED-FOR", "It" ], [ "encyclopedias", "CONJUNCTION", "dictionaries" ], [ "dictionaries", "USED-FOR", "It" ], [ "public databases of proper names and toponyms", "USED-FOR", "It" ], [ "collocations", "USED-FOR", "It" ], [ "lists of botanical and zoological terms", "USED-FOR", "It" ], [ "lists of botanical and zoological terms", "CONJUNCTION", "collocations" ], [ "Czech National Corpus", "USED-FOR", "MWEs database" ], [ "technique", "USED-FOR", "Word Sketch Engine" ], [ "statistical parameters", "FEATURE-OF", "Word Sketch Engine" ], [ "database", "USED-FOR", "tagging" ], [ "database", "USED-FOR", "lemmatization" ], [ "tagging", "CONJUNCTION", "lemmatization" ], [ "MWEs", "USED-FOR", "tagging" ], [ "MWEs", "USED-FOR", "lemmatization" ], [ "tagging", "CONJUNCTION", "lemmatization" ] ]
We describe the ongoing construction of a large, semantically annotated corpus resource as reliable basis for the large-scale acquisition of word-semantic information, e.g. the construction of domain-independent lexica. The backbone of the annotation are semantic roles in the frame semantics paradigm. We report experiences and evaluate the annotated data from the first project stage. On this basis, we discuss the problems of vagueness and ambiguity in semantic annotation.
[ [ "semantically annotated corpus", "USED-FOR", "large-scale acquisition of word-semantic information" ], [ "construction of domain-independent lexica", "HYPONYM-OF", "large-scale acquisition of word-semantic information" ], [ "semantic roles", "PART-OF", "frame semantics paradigm" ], [ "vagueness", "CONJUNCTION", "ambiguity" ], [ "vagueness", "FEATURE-OF", "semantic annotation" ], [ "ambiguity", "FEATURE-OF", "semantic annotation" ] ]
Statistical machine translation -LRB- SMT -RRB- is currently one of the hot spots in natural language processing. Over the last few years dramatic improvements have been made, and a number of comparative evaluations have shown, that SMT gives competitive results to rule-based translation systems, requiring significantly less development time. This is particularly important when building translation systems for new language pairs or new domains. This workshop is intended to give an introduction to statistical machine translation with a focus on practical considerations. Participants should be able, after attending this workshop, to set out building an SMT system themselves and achieving good baseline results in a short time. The tutorial will cover the basics of SMT : Theory will be put into practice. STTK, a statistical machine translation tool kit, will be introduced and used to build a working translation system. STTK has been developed by the presenter and co-workers over a number of years and is currently used as the basis of CMU 's SMT system. It has also successfully been coupled with rule-based and example based machine translation modules to build a multi engine machine translation system. The source code of the tool kit will be made available.
[ [ "Statistical machine translation -LRB- SMT -RRB-", "HYPONYM-OF", "natural language processing" ], [ "SMT", "COMPARE", "rule-based translation systems" ], [ "translation systems", "USED-FOR", "new language pairs" ], [ "translation systems", "USED-FOR", "new domains" ], [ "new language pairs", "CONJUNCTION", "new domains" ], [ "STTK", "HYPONYM-OF", "statistical machine translation tool kit" ], [ "STTK", "USED-FOR", "translation system" ], [ "STTK", "USED-FOR", "SMT system" ], [ "It", "CONJUNCTION", "rule-based and example based machine translation modules" ], [ "It", "USED-FOR", "multi engine machine translation system" ], [ "rule-based and example based machine translation modules", "USED-FOR", "multi engine machine translation system" ] ]
This paper presents an unsupervised learning approach to building a non-English -LRB- Arabic -RRB- stemmer. The stemming model is based on statistical machine translation and it uses an English stemmer and a small -LRB- 10K sentences -RRB- parallel corpus as its sole training resources. No parallel text is needed after the training phase. Monolingual, unannotated text can be used to further improve the stemmer by allowing it to adapt to a desired domain or genre. Examples and results will be given for Arabic, but the approach is applicable to any language that needs affix removal. Our resource-frugal approach results in 87.5 % agreement with a state of the art, proprietary Arabic stemmer built using rules, affix lists, and human annotated text, in addition to an unsupervised component. Task-based evaluation using Arabic information retrieval indicates an improvement of 22-38 % in average precision over unstemmed text, and 96 % of the performance of the proprietary stemmer above.
[ [ "unsupervised learning approach", "USED-FOR", "non-English -LRB- Arabic -RRB- stemmer" ], [ "statistical machine translation", "USED-FOR", "stemming model" ], [ "English stemmer", "USED-FOR", "it" ], [ "parallel corpus", "USED-FOR", "it" ], [ "Monolingual , unannotated text", "USED-FOR", "stemmer" ], [ "resource-frugal approach", "COMPARE", "Arabic stemmer" ], [ "agreement", "EVALUATE-FOR", "resource-frugal approach" ], [ "agreement", "EVALUATE-FOR", "Arabic stemmer" ], [ "rules", "USED-FOR", "Arabic stemmer" ], [ "rules", "CONJUNCTION", "affix lists" ], [ "affix lists", "USED-FOR", "Arabic stemmer" ], [ "affix lists", "CONJUNCTION", "human annotated text" ], [ "human annotated text", "USED-FOR", "Arabic stemmer" ], [ "human annotated text", "CONJUNCTION", "unsupervised component" ], [ "unsupervised component", "USED-FOR", "Arabic stemmer" ], [ "Arabic information retrieval", "USED-FOR", "Task-based evaluation" ], [ "average precision", "EVALUATE-FOR", "Task-based evaluation" ], [ "average precision", "EVALUATE-FOR", "unstemmed text" ] ]
The paper assesses the capability of an HMM-based TTS system to produce German speech. The results are discussed in qualitative terms, and compared over three different choices of context features. In addition, the system is adapted to a small set of football announcements, in an exploratory attempt to synthesise expressive speech. We conclude that the HMMs are able to produce highly intelligible neutral German speech, with a stable quality, and that the expressivity is partially captured in spite of the small size of the football dataset.
[ [ "HMM-based TTS system", "USED-FOR", "German speech" ], [ "system", "USED-FOR", "football announcements" ], [ "system", "USED-FOR", "expressive speech" ], [ "HMMs", "USED-FOR", "intelligible neutral German speech" ] ]
We give an analysis of ellipsis resolution in terms of a straightforward discourse copying algorithm that correctly predicts a wide range of phenomena. The treatment does not suffer from problems inherent in identity-of-relations analyses. Furthermore, in contrast to the approach of Dalrymple et al. -LSB- 1991 -RSB-, the treatment directly encodes the intuitive distinction between full NPs and the referential elements that corefer with them through what we term role linking. The correct predictions for several problematic examples of ellipsis naturally result. Finally, the analysis extends directly to other discourse copying phenomena.
[ [ "full NPs", "CONJUNCTION", "referential elements" ], [ "analysis", "USED-FOR", "discourse copying phenomena" ] ]
How to obtain hierarchical relations -LRB- e.g. superordinate - hyponym relation, synonym relation -RRB- is one of the most important problems for thesaurus construction. A pilot system for extracting these relations automatically from an ordinary Japanese language dictionary -LRB- Shinmeikai Kokugojiten, published by Sansei-do, in machine readable form -RRB- is given. The features of the definition sentences in the dictionary, the mechanical extraction of the hierarchical relations and the estimation of the results are discussed.
[ [ "hierarchical relations", "PART-OF", "thesaurus construction" ], [ "superordinate - hyponym relation", "HYPONYM-OF", "hierarchical relations" ], [ "superordinate - hyponym relation", "CONJUNCTION", "synonym relation" ], [ "synonym relation", "HYPONYM-OF", "hierarchical relations" ], [ "Japanese language dictionary", "USED-FOR", "relations" ], [ "definition sentences", "USED-FOR", "features" ], [ "definition sentences", "PART-OF", "dictionary" ] ]
Face images of non-frontal views under poor illumination with low resolution reduce dramatically face recognition accuracy. This is evident most compellingly by the very low recognition rate of all existing face recognition systems when applied to live CCTV camera input. In this paper, we present a Bayesian framework to perform multi-modal -LRB- such as variations in viewpoint and illumination -RRB- face image super-resolution for recognition in tensor space. Given a single modal low-resolution face image, we benefit from the multiple factor interactions of training tensor, and super-resolve its high-resolution reconstructions across different modalities for face recognition. Instead of performing pixel-domain super-resolution and recognition independently as two separate sequential processes, we integrate the tasks of super-resolution and recognition by directly computing a maximum likelihood identity parameter vector in high-resolution tensor space for recognition. We show results from multi-modal super-resolution and face recognition experiments across different imaging modalities, using low-resolution images as testing inputs and demonstrate improved recognition rates over standard tensorface and eigenface representations.
[ [ "recognition rate", "EVALUATE-FOR", "face recognition systems" ], [ "live CCTV camera input", "USED-FOR", "face recognition systems" ], [ "Bayesian framework", "USED-FOR", "face image super-resolution" ], [ "viewpoint", "CONJUNCTION", "illumination" ], [ "face image super-resolution", "USED-FOR", "recognition" ], [ "tensor space", "FEATURE-OF", "recognition" ], [ "single modal low-resolution face image", "USED-FOR", "high-resolution reconstructions" ], [ "multiple factor interactions of training tensor", "USED-FOR", "high-resolution reconstructions" ], [ "high-resolution reconstructions", "USED-FOR", "face recognition" ], [ "modalities", "FEATURE-OF", "high-resolution reconstructions" ], [ "super-resolution", "HYPONYM-OF", "pixel-domain super-resolution and recognition" ], [ "super-resolution", "CONJUNCTION", "recognition" ], [ "recognition", "HYPONYM-OF", "pixel-domain super-resolution and recognition" ], [ "maximum likelihood identity parameter vector", "USED-FOR", "super-resolution" ], [ "maximum likelihood identity parameter vector", "USED-FOR", "recognition" ], [ "maximum likelihood identity parameter vector", "USED-FOR", "recognition" ], [ "high-resolution tensor space", "FEATURE-OF", "maximum likelihood identity parameter vector" ], [ "low-resolution images", "USED-FOR", "multi-modal super-resolution and face recognition" ], [ "recognition rates", "EVALUATE-FOR", "multi-modal super-resolution and face recognition" ], [ "recognition rates", "EVALUATE-FOR", "tensorface and eigenface representations" ] ]
In this paper, we describe a phrase-based unigram model for statistical machine translation that uses a much simpler set of model parameters than similar phrase-based models. The units of translation are blocks - pairs of phrases. During decoding, we use a block unigram model and a word-based trigram language model. During training, the blocks are learned from source interval projections using an underlying word alignment. We show experimental results on block selection criteria based on unigram counts and phrase length.
[ [ "phrase-based unigram model", "USED-FOR", "statistical machine translation" ], [ "phrase-based unigram model", "COMPARE", "phrase-based models" ], [ "model parameters", "USED-FOR", "phrase-based unigram model" ], [ "block unigram model", "USED-FOR", "decoding" ], [ "word-based trigram language model", "USED-FOR", "decoding" ], [ "word-based trigram language model", "CONJUNCTION", "block unigram model" ], [ "source interval projections", "USED-FOR", "blocks" ], [ "word alignment", "USED-FOR", "source interval projections" ], [ "unigram counts", "USED-FOR", "block selection criteria" ], [ "unigram counts", "CONJUNCTION", "phrase length" ], [ "phrase length", "USED-FOR", "block selection criteria" ] ]
This paper develops a new approach for extremely fast detection in domains where the distribution of positive and negative examples is highly skewed -LRB- e.g. face detection or database retrieval -RRB-. In such domains a cascade of simple classifiers each trained to achieve high detection rates and modest false positive rates can yield a final detector with many desirable features : including high detection rates, very low false positive rates, and fast performance. Achieving extremely high detection rates, rather than low error, is not a task typically addressed by machine learning algorithms. We propose a new variant of AdaBoost as a mechanism for training the simple classifiers used in the cascade. Experimental results in the domain of face detection show the training algorithm yields significant improvements in performance over conventional AdaBoost. The final face detection system can process 15 frames per second, achieves over 90 % detection, and a false positive rate of 1 in a 1,000,000.
[ [ "approach", "USED-FOR", "fast detection" ], [ "face detection", "CONJUNCTION", "database retrieval" ], [ "cascade of simple classifiers", "USED-FOR", "detector" ], [ "detection rates", "EVALUATE-FOR", "classifiers" ], [ "detection rates", "CONJUNCTION", "modest false positive rates" ], [ "modest false positive rates", "EVALUATE-FOR", "classifiers" ], [ "features", "FEATURE-OF", "detector" ], [ "detection rates", "COMPARE", "low error" ], [ "AdaBoost", "USED-FOR", "classifiers" ], [ "classifiers", "USED-FOR", "cascade" ], [ "training algorithm", "USED-FOR", "face detection" ], [ "AdaBoost", "USED-FOR", "face detection" ], [ "AdaBoost", "COMPARE", "training algorithm" ], [ "detection", "CONJUNCTION", "false positive rate" ] ]
This paper proposes a method for learning joint embeddings of images and text using a two-branch neural network with multiple layers of linear projections followed by nonlinearities. The network is trained using a large-margin objective that combines cross-view ranking constraints with within-view neighborhood structure preservation constraints inspired by metric learning literature. Extensive experiments show that our approach gains significant improvements in accuracy for image-to-text and text-to-image retrieval. Our method achieves new state-of-the-art results on the Flickr30K and MSCOCO image-sentence datasets and shows promise on the new task of phrase localization on the Flickr30K Entities dataset.
[ [ "method", "USED-FOR", "joint embed-dings of images and text" ], [ "two-branch neural network", "USED-FOR", "method" ], [ "multiple layers of linear projections", "PART-OF", "two-branch neural network" ], [ "multiple layers of linear projections", "CONJUNCTION", "nonlinearities" ], [ "nonlinearities", "PART-OF", "two-branch neural network" ], [ "large-margin objective", "USED-FOR", "network" ], [ "cross-view ranking constraints", "FEATURE-OF", "large-margin objective" ], [ "cross-view ranking constraints", "CONJUNCTION", "within-view neighborhood structure preservation constraints" ], [ "within-view neighborhood structure preservation constraints", "FEATURE-OF", "large-margin objective" ], [ "accuracy", "EVALUATE-FOR", "approach" ], [ "image-to-text and text-to-image retrieval", "EVALUATE-FOR", "approach" ], [ "Flickr30K and MSCOCO image-sentence datasets", "EVALUATE-FOR", "method" ], [ "phrase lo-calization", "EVALUATE-FOR", "method" ], [ "Flickr30K Entities dataset", "USED-FOR", "phrase lo-calization" ] ]
State-of-the-art Question Answering -LRB- QA -RRB- systems are very sensitive to variations in the phrasing of an information need. Finding the preferred language for such a need is a valuable task. We investigate that claim by adopting a simple MT-based paraphrasing technique and evaluating QA system performance on paraphrased questions. We found a potential increase of 35 % in MRR with respect to the original question.
[ [ "MT-based paraphrasing technique", "USED-FOR", "QA system" ], [ "paraphrased questions", "EVALUATE-FOR", "QA system" ] ]
The TAP-XL Automated Analyst 's Assistant is an application designed to help an English - speaking analyst write a topical report, culling information from a large inflow of multilingual, multimedia data. It gives users the ability to spend their time finding more data relevant to their task, and gives them translingual reach into other languages by leveraging human language technology.
[ [ "multilingual , multimedia data", "USED-FOR", "TAP-XL Automated Analyst 's Assistant" ], [ "human language technology", "USED-FOR", "It" ] ]
This paper discusses the application of Unification Categorial Grammar -LRB- UCG -RRB- to the framework of Isomorphic Grammars for Machine Translation pioneered by Landsbergen. The Isomorphic Grammars approach to MT involves developing the grammars of the Source and Target languages in parallel, in order to ensure that SL and TL expressions which stand in the translation relation have isomorphic derivations. The principle advantage of this approach is that knowledge concerning translation equivalence of expressions may be directly exploited, obviating the need for answers to semantic questions that we do not yet have. Semantic and other information may still be incorporated, but as constraints on the translation relation, not as levels of textual representation. After introducing this approach to MT system design, and the basics of monolingual UCG, we will show how the two can be integrated, and present an example from an implemented bi-directional English-Spanish fragment. Finally we will present some outstanding problems with the approach.
[ [ "Unification Categorial Grammar -LRB- UCG -RRB-", "USED-FOR", "Isomorphic Grammars" ], [ "Isomorphic Grammars", "USED-FOR", "Machine Translation" ], [ "Isomorphic Grammars approach", "USED-FOR", "MT" ], [ "approach", "USED-FOR", "MT system design" ], [ "approach", "USED-FOR", "monolingual UCG" ], [ "MT system design", "CONJUNCTION", "monolingual UCG" ], [ "MT system design", "HYPONYM-OF", "two" ], [ "monolingual UCG", "HYPONYM-OF", "two" ] ]
In the security domain a key problem is identifying rare behaviours of interest. Training examples for these behaviours may or may not exist, and if they do exist there will be few examples, quite probably one. We present a novel weakly supervised algorithm that can detect behaviours that either have never before been seen or for which there are few examples. Global context is modelled, allowing the detection of abnormal behaviours that in isolation appear normal. Pragmatic aspects are considered, such that no parameter tuning is required and real time performance is achieved.
[ [ "identifying rare behaviours of interest", "PART-OF", "security domain" ], [ "Training examples", "USED-FOR", "behaviours" ], [ "weakly supervised algorithm", "USED-FOR", "behaviours" ], [ "Global context", "USED-FOR", "detection of abnormal behaviours" ] ]
We have developed a computational model of the process of describing the layout of an apartment or house, a much-studied discourse task first characterized linguistically by Linde -LRB- 1974 -RRB-. The model is embodied in a program, APT, that can reproduce segments of actual tape-recorded descriptions, using organizational and discourse strategies derived through analysis of our corpus.
[ [ "computational model", "USED-FOR", "discourse task" ], [ "model", "PART-OF", "program" ], [ "organizational and discourse strategies", "USED-FOR", "APT" ] ]
This paper proposes a practical approach employing n-gram models and error-correction rules for Thai key prediction and Thai-English language identification. The paper also proposes rule-reduction algorithm applying mutual information to reduce the error-correction rules. Our algorithm reported more than 99 % accuracy in both language identification and key prediction.
[ [ "approach", "USED-FOR", "Thai key prediction" ], [ "approach", "USED-FOR", "Thai-English language identification" ], [ "n-gram models", "USED-FOR", "approach" ], [ "n-gram models", "CONJUNCTION", "error-correction rules" ], [ "error-correction rules", "USED-FOR", "approach" ], [ "Thai key prediction", "CONJUNCTION", "Thai-English language identification" ], [ "mutual information", "USED-FOR", "rule-reduction algorithm" ], [ "mutual information", "USED-FOR", "error-correction rules" ], [ "algorithm", "USED-FOR", "language identification" ], [ "algorithm", "USED-FOR", "key prediction" ], [ "accuracy", "EVALUATE-FOR", "algorithm" ] ]
This paper concerns the discourse understanding process in spoken dialogue systems. This process enables the system to understand user utterances based on the context of a dialogue. Since multiple candidates for the understanding result can be obtained for a user utterance due to the ambiguity of speech understanding, it is not appropriate to decide on a single understanding result after each user utterance. By holding multiple candidates for understanding results and resolving the ambiguity as the dialogue progresses, the discourse understanding accuracy can be improved. This paper proposes a method for resolving this ambiguity based on statistical information obtained from dialogue corpora. Unlike conventional methods that use hand-crafted rules, the proposed method enables easy design of the discourse understanding process. Experiment results have shown that a system that exploits the proposed method performs sufficiently and that holding multiple candidates for understanding results is effective.
[ [ "discourse understanding process", "USED-FOR", "spoken dialogue systems" ], [ "system", "USED-FOR", "user utterances" ], [ "method", "USED-FOR", "ambiguity" ], [ "statistical information", "USED-FOR", "method" ], [ "dialogue corpora", "USED-FOR", "statistical information" ], [ "hand-crafted rules", "USED-FOR", "methods" ], [ "system", "USED-FOR", "method" ] ]
End of preview.