In linguistics, aggregation is a subtask of natural language generation that involves merging syntactic constituents (such as sentences and phrases) into a single structure. Aggregation can also be performed at a conceptual level.
Examples
A simple example of syntactic aggregation is merging the two sentences John went to the shop and John bought an apple into the single sentence John went to the shop and bought an apple.
Syntactic aggregation can be considerably more complex than this. For example, one constituent can be embedded in the other: John went to the shop and The shop was closed can be aggregated into the sentence John went to the shop, which was closed.
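The first example can be reproduced with the SimpleNLG library described in the Software section below. The following sketch assumes SimpleNLG v4 with its default English lexicon (imports from the simplenlg packages are omitted), and the exact surface form of the output may vary slightly between library versions.
// Standard SimpleNLG setup: default English lexicon, factory and realiser
Lexicon lexicon = Lexicon.getDefaultLexicon();
NLGFactory nlgFactory = new NLGFactory(lexicon);
Realiser realiser = new Realiser(lexicon);
// Build "John went to the shop"
SPhraseSpec s1 = nlgFactory.createClause("John", "go");
s1.addComplement("to the shop");
s1.setFeature(Feature.TENSE, Tense.PAST);
// Build "John bought an apple"
SPhraseSpec s2 = nlgFactory.createClause("John", "buy", "an apple");
s2.setFeature(Feature.TENSE, Tense.PAST);
// The rule detects the shared subject and merges the clauses into a single
// sentence, e.g. "John went to the shop and bought an apple."
NLGElement merged = new ClauseCoordinationRule().apply(s1, s2);
System.out.println(realiser.realiseSentence(merged));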
From a pragmatic perspective, aggregating sentences together often suggests to the reader that these sentences are related to each other. If this is not the case, the reader may be confused. For example, someone who reads John went to the shop and bought an apple may infer that the apple was bought in the shop; if this is not the case, then these sentences should not be aggregated.
Algorithms and issues
Aggregation algorithms must do two things:
- Decide when two constituents should be aggregated
- Decide how two constituents should be aggregated, and create the aggregated structure
The first issue, deciding when to aggregate, is poorly understood. Aggregation decisions certainly depend on the semantic relations between the constituents, as mentioned above; they also depend on the genre (e.g., bureaucratic texts tend to be more aggregated than instruction manuals). They probably should also depend on rhetorical and discourse structure.[1] The literacy level of the reader is also probably important (poor readers need shorter sentences).[2] However, there is currently no integrated model that brings all of these factors together into a single algorithm.
With regard to the second issue, there have been some studies of different types of aggregation and how they should be carried out. Harbusch and Kempen describe several syntactic aggregation strategies; in their terminology, John went to the shop and bought an apple is an example of forward conjunction reduction.[3] Much less is known about conceptual aggregation. Di Eugenio et al. show how conceptual aggregation can be done in an intelligent tutoring system, and demonstrate that performing such aggregation makes the system more effective (and that conceptual aggregation has a bigger impact than syntactic aggregation).[4]
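The two steps listed above can be illustrated with a deliberately simplified sketch in plain Java. All names here (Clause, shouldAggregate, aggregate) are hypothetical and are not taken from any published system: the "when" decision is reduced to a shared-subject check, and the "how" step performs a crude forward conjunction reduction by dropping the repeated subject. Real generators base the decision on the richer factors discussed above and build proper syntactic structures rather than strings.
class ToyAggregator {
    // Hypothetical minimal clause representation: a subject plus a predicate string.
    record Clause(String subject, String predicate) {}

    // Step 1: decide WHEN two constituents should be aggregated.
    // Here the only criterion is a shared subject; a real system would also
    // weigh semantic relatedness, genre, discourse structure and reader skill.
    static boolean shouldAggregate(Clause a, Clause b) {
        return a.subject().equals(b.subject());
    }

    // Step 2: decide HOW to aggregate and create the aggregated structure.
    // This drops the repeated subject and conjoins the two predicates
    // (forward conjunction reduction).
    static String aggregate(Clause a, Clause b) {
        return a.subject() + " " + a.predicate() + " and " + b.predicate();
    }

    public static void main(String[] args) {
        Clause c1 = new Clause("John", "went to the shop");
        Clause c2 = new Clause("John", "bought an apple");
        if (shouldAggregate(c1, c2)) {
            // Prints: John went to the shop and bought an apple
            System.out.println(aggregate(c1, c2));
        }
    }
}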
Software
Unfortunately, there is not much software available for performing aggregation. However, the SimpleNLG system[5] does include limited support for basic aggregation. For example, the following code causes SimpleNLG to print out The man is hungry and buys an apple.
// Standard SimpleNLG setup: default English lexicon, factory and realiser
Lexicon lexicon = Lexicon.getDefaultLexicon();
NLGFactory nlgFactory = new NLGFactory(lexicon);
Realiser realiser = new Realiser(lexicon);
SPhraseSpec s1 = nlgFactory.createClause("the man", "be", "hungry");
SPhraseSpec s2 = nlgFactory.createClause("the man", "buy", "an apple");
// The rule detects the shared subject and coordinates the two clauses
NLGElement result = new ClauseCoordinationRule().apply(s1, s2);
System.out.println(realiser.realiseSentence(result));
References
- ↑ D Scott and C de Souza (1990). Getting the Message Across in RST-based Text Generation. In Dale et al. (eds), Current Research in Natural Language Generation. Academic Press.
- ↑ S Williams and E Reiter (2008). Generating basic skills reports for low-skilled readers. Natural Language Engineering 14:495–535.
- ↑ K Harbusch and G Kempen (2009). Generating clausal coordinate ellipsis multilingually: A uniform approach based on postediting. In Proceedings of ENLG-2009, 28:105–144.
- ↑ B Di Eugenio, D Fossati and D Yu (2005). Aggregation improves learning: experiments in natural language generation for intelligent tutoring systems. In Proceedings of ACL-2005, pp. 50–57.
- ↑ A Gatt and E Reiter (2009). SimpleNLG: A realisation engine for practical applications. In Proceedings of ENLG-2009.