Direct Compositionality: Is there any reason why not?
Pauline Jacobson

Dept. of (cognitive and) Linguistic Science(s)



Brown University

pauline_jacobson@brown.edu
University of Michigan Workshop on Linguistics and Philosophy. November, 2004

draft version, Sept. 9; bibliography to be added


0. Goals
The hypothesis of Direct Compositionality - familiar (in one form) from the work of Montague (e.g., Montague, 1974) and more recently from considerable work within Categorial Grammar - is that the syntax and semantics work "in tandem". The syntax is a system of rules (call them "principles" if one prefers) which prove the well-formedness of linguistic expressions, while the semantics works simultaneously to provide a model-theoretic interpretation for each expression as it is proved well-formed in the syntax. (I will use the usual metaphor that the syntax "builds" expressions.) Under this conception, there is no intermediate representation such as LF mediating between the surface syntax of linguistic expressions and their model-theoretic interpretation, and consequently (and more importantly) no need for an additional set of rules providing a mapping between expressions and their LFs.
This paper is a plea for this view of the organization of the grammar. I will argue (a) that direct compositionality is the simplest conception of the organization of the grammar amongst competing alternatives, (b) that it should therefore be abandoned only in the face of strong evidence to the contrary, and (c) that there has never been convincing evidence to the contrary. The paper divides into two parts. The first part overlaps largely with my paper "The (dis)organization of the grammar: 25 years", which appeared in Linguistics and Philosophy 25.5-6 (the anniversary issue). There I will sort out two (or three) versions of the hypothesis of direct compositionality - essentially, the differences between these versions are differences in theories of syntax: the weaker versions allow a richer set of operations in the syntax than do the stronger versions. But I will argue that even what I call Weak Direct Compositionality is much simpler and more explanatory than a view of the grammar in which the syntax first "runs" and then sends syntactic representations to the semantics for compositional interpretation. No arguments for this kind of view over a Weak Direct Compositional theory have, to my knowledge, ever been put forth. To the extent that one finds articulated arguments in the literature against direct compositionality, these are at best arguments only against what I call Strong Direct Compositionality - not arguments against direct compositionality per se. (I say "at best" because I think that most of these have been answered, especially in much of the Categorial Grammar (including Type Logical Grammar) and related literature. But there do remain some potential problems for Strong Direct Compositionality, and so the question remains open as to just how rich a set of operations is available in the syntax.) It seems, then, that the discussion should not be about "LF based" approaches vs. direct compositionality; rather, rational discussion should be about just how strong a version of direct compositionality is possible.
The second part of this paper turns to a group of phenomena which, I argue, are not only compatible with direct compositionality, but with strong direct compositionality. I use this as a case study because these phenomena - and their interactions - have been used to argue for a variety of devices incompatible with strong direct compositionality. The phenomena involve the interaction of extraction, pronoun binding, and "Condition C" effects. The apparent challenges that these pose are that (a) they are often accounted for in such a way as to require reference to an abstract level like LF which is the input to the model-theoretic interpretation, and (b) they seem to require reference to representational properties (things like c-command). But under a strong direct compositional view, things like trees are just convenient representations for the linguist of how the syntax worked to prove something well-formed and how the semantics worked to put the meanings together - they are not representations that the grammar gets to "see", and so the grammar could not possibly contain principles stated on these representations. From much of the work in Generalized Phrase Structure Grammar in the 80's and continued work within Categorial Grammar, we have learned that when we seem to find a generalization about representations, this is often an artefact - a consequence of how the rules worked, not a consequence of an actual principle in grammar that refers to representations. I should stress one point at the outset: trying to derive facts from the way the combinatory rules (the syntax and the semantics) work, as opposed to trying to derive facts from principles on representations, is not just a trade-off to be determined by what is fashionable. Any theory needs combinatory principles in the syntax and in the semantics - to prove well-formed strings and assign them a meaning. Thus if we can get all the work to come from there (without, of course, unduly complicating these principles) then we have less machinery than a theory which also needs the rules (principles) to keep track of representations, so as to allow these to be the input to further statements.
1. Direct Compositionality
1.1. Four Conceptions of the syntax-semantics “interface”
Thus let me outline here four theories of the syntax-semantics “interface”. The first three embrace direct compositionality in some form - and I want to make these all explicit to show that any of these three is preferable to the “modern” conception of things whereby surface structures are mapped into a separate level which is then compositionally interpreted. (I realize that things are a bit different from this picture under “Minimalism”, but the key point remains the same - the syntax works separately to ultimately build LFs which the semantics then interprets.) Since the possibility of having quantified NPs in object position, and the further possibility of scope ambiguities as in (1) has always played a fundamental role in the discussion of the syntax/semantics interaction (see, for example, any textbook discussion of “LF”), I will illustrate each of these theories with a brief consideration of its treatment of scope:
(1) Some man read every book.
A. Strong Direct Compositionality. Under this view, there is a set of syntactic rules which "build" (that is, prove the well-formedness of) the set of sentences (or other expressions) in the language - generally "building" (i.e., specifying the well-formedness of) larger expressions in terms of the well-formedness of smaller subexpressions. Assume here that each such rule is a context-free phrase structure rule (or highly generalized rule schema). Note that if this is the only form of syntactic rule in the grammar, then the grammar need keep no track of structure: the rules "building" complex expressions merely concatenate strings. Thus under this conception, a tree is just a convenient representation of how the grammar worked to prove a string well-formed; it is not something that the grammar can "see", nor ever would need to. This will become crucial in much of the discussion in Sec. 2: I wish to argue that there is no need for constraints in the grammar which are stated on global chunks of representation, and that thinking of various phenomena in that way often just gets in the way of providing an adequate account. (By a "global" chunk of representation I mean a chunk which is larger than a local tree - any statement concerning a local tree can obviously be recast as part of a rule.) Coupled with each syntactic rule is a semantic rule specifying how the meaning of the larger expression is derived from the meaning of the smaller expressions.
The above terminology may be misleading in one respect: strong direct compositionality is not committed to the view that the grammar consists of a list of phrase structure rules each of which is idiosyncratically associated with a semantic rule. Such a view has often come under fire for containing “construction specific” semantic rules - a state which complicates the grammar because it adds to the amount of idiosyncratic information which must simply be listed. But the issue of construction-specific rules is completely independent of the other issues that I am concerned with here, and nothing in the strong direct compositional view requires construction-specific rules. One can maintain that the actual semantic operations are predictable from each syntactic rule. In fact, the earliest discussions of “type driven” interpretation (e.g., Klein and Sag, 1983) were framed within a strong direct compositional view: their use of the term meant that the semantic operation associated with each syntactic (phrase structure) rule schema was predictable from the syntax of the rule combined with the semantic types of the expressions referred to in the rule. Moreover, a strong direct compositional theory can also assume that the syntax itself consists of just a few very general rule schemata (each of which is associated with a general semantic rule schema). Indeed, this is exactly the view taken in various versions of Categorial Grammar; thus the rejection of Strong Direct Compositionality surely cannot be motivated by a rejection of “construction specific” semantic rules.
There is a further issue of relevance here. Much research within the tradition of Strong Direct compositionality - or within one of its slightly weaker versions to be discussed below - has also advocated the existence of unary rules. These are rules which map single linguistic expressions into new ones, and in so doing change the meaning and/or category of an expression (generally without changing the phonology). Such rules have generally gone under the rubric of “type-shift” (and/or category-changing) operations. There is disagreement in the literature as to just how many such operations there are and how generally they should be stated. This in turn ties in with the question of how much “construction-specific” information the grammar contains – or – to put this in less sloganistic terms: just how complex the grammar is. (Note too that any theory allowing for silent operators is not very different from one allowing type-shift rules: a type-shift operation can always be recast as a silent lexical item which applies to the expression with which it combines.) Thus, while one can argue about just how many and how predictable are the individual rules, this too is a separate issue.
There are some very appealing properties of Strong Direct Compositionality. One is the fact that in building strings, the syntax need keep no track of structure, since all combinatory operations simply concatenate strings, and all unary rules have no effect on the internal structure. We can think of each linguistic expression as a triple of <[phonology]; syntactic category; meaning>, where the rules take one or more such triples as input and give back a triple as output.
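To make the "triple" picture concrete, here is a minimal illustrative sketch in Haskell; it is not part of any actual fragment, and the names (Expr, Cat, Sem, combineSV) and the toy denotations are invented purely for illustration. An expression is a phonology-category-meaning triple, and a combinatory rule takes triples to triples, concatenating the phonologies and applying a semantic operation; no record of structure is kept anywhere.

   -- Hypothetical sketch: expressions as <phonology; category; meaning> triples.
   type Entity = String

   data Cat = NP | VP | S deriving (Show, Eq)

   data Sem = E Entity               -- an individual (type e)
            | ET (Entity -> Bool)    -- a one-place predicate (type <e,t>)
            | T Bool                 -- a truth value (type t)

   data Expr = Expr { phon :: String, cat :: Cat, sem :: Sem }

   -- One combinatory rule: S -> NP VP, paired with functional application.
   -- Crucially, it only concatenates strings; no tree is built or stored.
   combineSV :: Expr -> Expr -> Maybe Expr
   combineSV np vp =
     case (cat np, cat vp, sem np, sem vp) of
       (NP, VP, E x, ET f) ->
         Just (Expr (phon np ++ " " ++ phon vp) S (T (f x)))
       _ -> Nothing

   john :: Expr
   john = Expr "John" NP (E "john")

   walks :: Expr
   walks = Expr "walks" VP (ET (== "john"))   -- toy denotation

   -- combineSV john walks  ==>  Just the triple <"John walks"; S; T True>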
How are quantifier scope ambiguities handled under strong direct compositionality? As with most things, there is more than one proposal in the literature. The most influential proposal during the late 70's and 80's was probably that of Cooper (1975). But there are other proposals for quantifier scopes within strong direct compositionality; in general these involve some type-shift rule or rules. One well-known proposal is developed in Hendriks (1993) and is a generalization of ideas in Partee and Rooth (1983). Here a transitive verb like read is listed in the lexicon with a meaning of type <e,<e,t>>, but there is a generalized type-shift rule allowing any e-argument position to lift to an <<e,t>,t> argument position. If the subject position lifts first and then the object position lifts, the result is the wide-scope reading on the object. Further generalizations of this can be used to allow for wide scope of embedded quantifiers; other proposals for wide scope of embedded material make use of a combination of type-shift rules and function composition. (For another kind of proposal, see Barker, 2001.)
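For readers who want to see the scope point worked through, here is a hedged Haskell sketch over a toy model. Rather than implementing Hendriks' argument-raising type-shifts themselves, it simply writes out the two combination orders they make available for a verb of type <e,<e,t>>, which is enough to show how the two scopings arise; all names and the little model are invented for the illustration.

   -- Toy model: two men and two books; m1 read b1, m2 read b2.
   type E  = String
   type T  = Bool
   type GQ = (E -> T) -> T          -- type <<e,t>,t>

   men, books :: [E]
   men   = ["m1", "m2"]
   books = ["b1", "b2"]

   readRel :: E -> E -> T           -- readRel y x  =  "x read y" (object argument first)
   readRel y x = (x, y) `elem` [("m1","b1"), ("m2","b2")]

   someMan, everyBook :: GQ
   someMan p   = any p men
   everyBook p = all p books

   -- Subject takes wide scope over the object.
   subjWide :: GQ -> GQ -> T
   subjWide subj obj = subj (\x -> obj (\y -> readRel y x))

   -- Object takes wide scope over the subject.
   objWide :: GQ -> GQ -> T
   objWide subj obj = obj (\y -> subj (\x -> readRel y x))

   -- subjWide someMan everyBook == False  (no single man read every book here)
   -- objWide  someMan everyBook == True   (each book was read by some man or other)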
B. Weak(er) Direct Compositionality. The above picture has often been weakened by the adoption of two related revisions: (a) the combinatory syntactic rules are not all equivalent to context-free phrase structure rules but may perform some other operations, and (b) the syntactic rules do not build only completely unstructured strings but may build objects with more "structural" information. The mildest weakening of A is to be found, perhaps, in those proposals that add only Wrap operations in addition to concatenation operations (for the original Wrap proposal, see Bach, 1979, 1980). Here the combinatory syntactic operations allow two strings not only to be concatenated but also allow one to be infixed into the other. As such, the input to the combinatory operations cannot be just unstructured strings; these strings need to contain at least enough additional information so as to define the infixation point. This has been formalized in a variety of ways; I will not pursue this here, although it is worth noting that Montague's Quantifying In rule can be recast as an infixation operation, and so Weak Direct Compositional systems with infixation operations are one way to account for quantifier scopes. I actually do think that Wrap operations are needed - we know from much discussion in the 80's that natural languages cannot be described purely with context-free phrase structure rules - but the addition of only Wrap operations seems quite reasonable. Although a theory with Wrap is not really what I am calling "Strong(est) Direct Compositionality", I will continue to use the term Strong Direct Compositionality to encompass a theory in which the only weakening is the addition of infixation operations.
But one can imagine other kinds of weakenings - including one in which certain transformation-like operations can be performed in the "building" of syntactic structures. (Much of the work in classical Montague grammar allowed substitution and deletion operations, though not much else.) Of course, just what can and can't occur also interacts with the question of how much structure is kept track of in the building of syntactic objects. At the extreme end, one can imagine that the output of each syntactic operation is a full-blown tree rather than a string (see, e.g., Partee, 1976); hence linguistic expressions are now richer, and can be thought of as triples of the form <[phonological representation]; syntactic structure - i.e., a full tree; meaning>.
Montague's "Quantifying-In" rule - and conceivable variants of it - is compatible with various versions of Weak Direct Compositionality (as noted above, it can be formulated just in terms of infixation, for example; and in Montague's work it was stated as a substitution operation). Since my concern is not with Montague's particular proposal but with this general picture of the architecture of the grammar, I will recast the Quantifying-In rule(s) so as to provide a maximal basis for comparison with other theories. Thus assume (contra Montague, 1974) that a transitive verb like read has a lexical meaning of type <e,<e,t>>. Assume further the usual theory of variables, and assume (along the lines of Montague's treatment) that we build syntactic representations with indexed pronouns, each of which corresponds to the like-indexed variable in the semantics. We can have expressions like he1 reads he2 whose meaning - relative to some assignment function g - will be [[reads]]g([[x2]]g)([[x1]]g). In addition, we will let the syntax keep track of the indices on the unbound pronouns (this is not really necessary, but will facilitate the exposition). More concretely, assume that every node label is enriched with an IND feature, whose value is a set of indices, and that - unless a rule specifies otherwise - the IND value on the category which is the output of a combinatory rule is the union of the IND values of the expressions in the input. Thus the category of the expression given above is S[IND: {1,2}]. We can thus accomplish Quantifying-In by the following two rules, the first of which is a type-shift rule:
(2) Let α be an expression of the form <[α]; S[IND: X, where i ∈ X]; [[α]]>. Then there is an expression of the form <[α]; λ[IND: X - i]; that function which, relative to an assignment g, assigns to each individual a in D the value [[α]]g[a/xi]> (this of course is just the semantics of λ-abstraction).
(3) Let α be an expression of the form <[x hei y]; λ; [[α]]> and let β be an expression of the form <[β]; DP; [[β]]>. Then there is an expression of the form <[x β y]; S; [[β]]g([[α]]g)>.
(Notice that, unlike in Montague's treatment, I have broken the Quantifying-In rule down into two steps; Montague collapsed these into a single but more complex rule.) One can also build Weak Crossover into this picture: Montague's rule itself required that if there is more than one pronoun with the same index, the substitution could apply only to the leftmost one. Should one want to take the more usual modern view that the appropriate restriction is stated in terms of c-command rather than linear order (cf. Reinhart, 1983), the rule can be restricted so that hei in the above SD must c-command all other occurrences of the same indexed pronoun (such a restriction, of course, commits one to the view that the inputs to these rules are as rich as trees).
Incidentally, here too I have stated the rules in a construction-specific way, but here too this is not a necessary requirement of the program. To be sure, most work within “classical Montague Grammar” of the 70’s made heavy use of construction-specific information, but there is no reason why the program seeking to minimize or eliminate this is incompatible with the general architecture of Weak Direct Compositionality (if there is such a reason, it has never to my knowledge been demonstrated).
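To see how the two-step Quantifying-In of (2)-(3) actually computes the readings, here is a small Haskell sketch over a toy model; assignments are modeled as functions from indices to individuals, and the names (abstractOver, quantifyIn) and the toy facts are my own, purely illustrative.

   -- Toy model: m1 read b1, m2 read b2.
   type E          = String
   type Assignment = Int -> E
   type GQ         = (E -> Bool) -> Bool

   men, books :: [E]
   men   = ["m1", "m2"]
   books = ["b1", "b2"]

   readRel :: E -> E -> Bool        -- readRel y x = "x read y"
   readRel y x = (x, y) `elem` [("m1","b1"), ("m2","b2")]

   -- "he1 reads he2": an open sentence meaning, relative to an assignment.
   openS :: Assignment -> Bool
   openS g = readRel (g 2) (g 1)

   -- Rule (2): abstraction over the index i (the semantics of lambda-abstraction).
   abstractOver :: Int -> (Assignment -> Bool) -> Assignment -> E -> Bool
   abstractOver i s g a = s (\j -> if j == i then a else g j)

   -- Rule (3): the quantified DP applies to the abstracted meaning.
   quantifyIn :: Int -> GQ -> (Assignment -> Bool) -> Assignment -> Bool
   quantifyIn i dp s g = dp (abstractOver i s g)

   someMan, everyBook :: GQ
   someMan p   = any p men
   everyBook p = all p books

   -- Quantifying the object (index 2) in first and then the subject (index 1)
   -- gives the subject wide scope; the opposite order gives the object wide scope.
   subjWide, objWide :: Assignment -> Bool
   subjWide = quantifyIn 1 someMan   (quantifyIn 2 everyBook openS)
   objWide  = quantifyIn 2 everyBook (quantifyIn 1 someMan   openS)

   -- With the toy facts above: subjWide g == False and objWide g == True, for any g.

The point of the sketch is simply that relative scope falls out of the order in which the DPs are quantified in; nothing about representations needs to be consulted.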
C. Generative Semantics-like Direct Compositionality. By this I mean something like the model proposed in Generative Semantics (see, e.g., Bach, 1968; McCawley, 1970; Lakoff, 1971), supplemented with apparatus to supply a model-theoretic interpretation to the Logical Forms. Thus, Generative Semantics assumed that deep structure was the same as Logical Form - which means that a series of phrase structure rules and/or rule schemata serve to define a well-formed structure. This was supposed to "represent" the semantics, and in fact much work within Generative Semantics didn't worry about supplying an actual interpretation to these structures. (For that matter, this is equally true of some of the initial work within the "modern" surface structure to LF view; see, e.g., Chomsky (1976).) But it is easy enough to embed the general idea into a more sophisticated theory of semantics with a model-theoretic component. Simply have the "building" and interpretation of the Logical Forms be as in the Strong Direct Compositional approach: each local tree is specified as well-formed by the syntactic rules and - in tandem - is provided a model-theoretic interpretation by the semantic part of the rules. A key difference between this and Strong Direct Compositionality is that this view contains an additional set of transformational rules which map the Logical Forms to surface structures. A concomitant difference is that the rules "building" syntactic structures must keep track of whatever structure is used as the input to the transformational rules; presumably, then, these rules are building trees rather than strings. Again, though, the base rules can be seen as mappings from triples to triples.
The treatment of quantifier scopes within this general view is well-known. First, unlike the Quantifying-In rule above, we have an actual level of representation at which quantified NPs are in the tree, but are in a raised position rather than being in their ultimate surface positions. The difference between their deep and surface positions is handled by a quantifier lowering rule. If we take the lexical meaning of a transitive verb like read to be of type <e,<e,t>> then the appearance of a quantified NP in object position will always be the result of Quantifier Lowering. Scope ambiguities are handled in the obvious way: since each local tree is interpreted as it is "built" by the phrase structure rules, the obvious formulation of the semantic rules will assign different scopes according to the initial height of the quantified NPs.
Again it is worth spelling this out explicitly. Let the rules build deep structure expressions such as he1 read he2, as in the Weak Direct Compositional approach shown above. Assume further that this is assigned the meaning and category as above. Further, assume the following two phrase-structure-rule/semantic-rule pairs; these mirror the rules in (2)-(3):
(4) λ[IND: X - i] ---> S[IND: X, where i ∈ X]; [[λ]]g is that function from individuals to propositions which assigns to each individual a in D the value [[S]]g[a/xi]
(5) S ---> DP λ; [[S]]g = [[DP]]g([[λ]]g)
Finally, this is supplemented by one transformation, as follows:
(6) [S DP [λ A hei B]] ===> [S A DP B]
(Again, one can restrict the rule so that the occurrence of hei which is analyzed as meeting the SD of the rule is the leftmost occurrence of hei (see, e.g., Jacobson, 1977), or one can equally well restrict the rule to apply to the highest such occurrence, in order to account for Weak Crossover.)
D. Surface to LF. Which brings us to the increasingly popular view. This is that there is a level of LF which receives a model-theoretic interpretation, and which is derived from surface structures by some set of rules. There are actually two possible versions of this. One is that the surface structures are given directly by the compositional syntactic rules (i.e., the phrase structure rules or their equivalents) and these are then mapped into LFs. The second, and more standard view, is that the surface structures themselves are the end-product of a mapping from underlying structures. In terms of the treatment of quantifier scopes this makes little difference; it does make a difference when we turn to wh questions. The actual proposals cast within D generally do presuppose the existence of transformational operations in the syntax - this is because many of the arguments for D rely on similarities between the “overt” transformational operations (mapping from deep to surface structures) and “covert” operations (mapping from surface to LF).
The treatment here of scope ambiguities is also well-known; it is essentially the same as that given above under C, except that the direction is reversed. We first start out with a series of rules which ultimately define a well-formed surface structure at which the quantified material is in situ. Then there are rules mapping this into a structure like the Generative Semantics deep structure, and then the compositional semantic rules will work from the bottom up to interpret this.
Thus assume again that the lexical meaning of read is of type <e,<e,t>>, and assume a set of phrase structure rules which allow DPs like every book to appear in characteristic "argument" positions (even though they are not of the right type to semantically combine with, e.g., a transitive verb). (No syntactic transformational operations of relevance apply in this case.) We thus build a structure like some man read every book in the syntax, but initially with no interpretation. The interpretive part can be accomplished by a combination of one transformation-like rule - Quantifier Raising (May, 1977), which is essentially the inverse of the Quantifier Lowering rule in (6) - and two rules interpreting the relevant structures. The relevant rules are given in (7) - (9). Incidentally, in the formulation that I give here in (7), QR is not the exact inverse of (6) - this is simply because I formulated it to be more or less in keeping with the treatment given in Heim and Kratzer (1998). But one could formulate the rules to be exact inverses. Either way, the rules are formulated so as to keep track of the index on the variable which is λ-abstracted over in the interpretation of the category labeled λ. This can be done by having it be the index which differs between the mother's index feature set and the daughter's feature set (which is how it is encoded above via the rule in (4)), or by actually putting that index as a terminal sister of S. (The difference is part and parcel of the question mentioned above about the use of type-shift operations vs. silent operators; it is difficult to find any empirical consequence to the choice of one or the other.)
(7) [S A DPi B] ===> [S DP [λi [S A ti B]]]
The lowest S will be interpreted by the kind of interpretation rules that anyone needs; note that ti here is interpreted as xi. The additional rules of interpretation are pretty much just the inverse of what we have already seen:
(8) [[ [λi S] ]]g is that function which assigns to each individual a in D the value [[S]]g[a/xi]
(9) [[ [S DP λi] ]]g = [[DP]]g([[λi]]g)
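For comparison with the Quantifying-In sketch above, here is an equally hedged Haskell sketch of the D-style picture: LFs are now genuine representations (a small datatype invented for the illustration, with traces, lambda nodes, and raised DPs), and rules (8)-(9) become clauses of a bottom-up interpretation function defined over those representations. For simplicity the verb's meaning is written directly into the clause node.

   type E          = String
   type Assignment = Int -> E
   type GQ         = (E -> Bool) -> Bool

   -- A toy LF datatype for QR-derived structures.
   data LF = Trace Int                       -- t_i, interpreted as the variable x_i
           | Clause (E -> E -> Bool) LF LF   -- [S subj V obj], with a Curried verb meaning
           | Lam Int LF                      -- the lambda_i node created by rule (7)
           | QP GQ LF                        -- a raised DP adjoined to a Lam node

   -- Bottom-up interpretation; the QP-over-Lam case mirrors rules (8) and (9).
   evalT :: LF -> Assignment -> Bool
   evalT (Clause r s o) g      = r (evalE s g) (evalE o g)
   evalT (QP q (Lam i body)) g = q (\a -> evalT body (\j -> if j == i then a else g j))
   evalT _ _                   = error "not a truth-value-denoting LF"

   evalE :: LF -> Assignment -> E
   evalE (Trace i) g = g i
   evalE _ _         = error "not an entity-denoting LF"

   readRel :: E -> E -> Bool                 -- readRel x y = "x read y"
   readRel x y = (x, y) `elem` [("m1","b1"), ("m2","b2")]

   someMan, everyBook :: GQ
   someMan p   = any p ["m1", "m2"]
   everyBook p = all p ["b1", "b2"]

   -- [some man [lam1 [every book [lam2 [t1 read t2]]]]]: subject wide scope.
   lfSubjWide :: LF
   lfSubjWide = QP someMan (Lam 1 (QP everyBook (Lam 2 (Clause readRel (Trace 1) (Trace 2)))))

   -- [every book [lam2 [some man [lam1 [t1 read t2]]]]]: object wide scope.
   lfObjWide :: LF
   lfObjWide = QP everyBook (Lam 2 (QP someMan (Lam 1 (Clause readRel (Trace 1) (Trace 2)))))

   -- With the toy facts: evalT lfSubjWide g == False, evalT lfObjWide g == True.

The two sketches compute exactly the same two readings; what differs is only whether the grammar ever has to "see" the LF objects as representations.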
1.2. Why each view is weaker than the one before it
To fully evaluate the advantages or disadvantages of each of these views of the organization of the grammar one would need actual proposals for much larger fragments than are given here. But all other things being equal, it seems obvious that each position in this list represents a more complex view of the organization of the grammar than does the position above it.
In particular, I want to focus on the fact that there is a major cut between A-C on the one hand and D on the other. A, B, and C all have in common the fact that the syntax “builds” in conjunction with the model-theoretic interpretation, and this is discarded in D. But there are certain complications which arise the minute one moves away from the “running in tandem” of the syntax and semantics.
The first objection to the divorcing of the syntax and the semantics might be purely subjective (although I don't really think it is). This is that there is a clear elegance to a system in which the grammar builds (i.e., proves as well-formed) syntactic objects in parallel with assigning them an interpretation, an elegance which is lost if the grammar contains two entirely separate systems, one of which must "run" first because the other (the semantics) works on the output of the first (the syntax). But elegance aside, there are two other questionable aspects to divorcing the two combinatory systems. The first is that under the conception in D there is no explanation as to why these systems work on such similar objects, and the second (related) problem is that D requires a duplication of information not required in A-C.
We turn to the first problem. Regardless of the question of whether there are transformations in addition to phrase-structure rules (or rule schemata), just about all theories agree on something like phrase-structure operations each of which specifies the well-formedness (at some level) of a local tree. (As noted above, under strong Direct Compositionality, trees are not actually anything the grammar ever needs or gets to see, so under this view a tree is really a metaphor. Under Weak Compositionality this is also not quite the appropriate terminology. If the syntax allows operations like Wrap, then a “tree” may also be not the right formal object to represent what the syntax does. However, I think that this terminology will do no harm here.) Thus at the end of the day, all theories contain rules which can be seen as proofs of the well-formedness of some kind (or level) of structure, where these “proofs” work bottom up in that they specify the well-formedness of larger expressions (at some level of representation) in terms of the well-formedness of smaller ones. (“Bottom up” is of course also a metaphor here, and one that is not common in discussions of syntax. But once one thinks of the syntactic system as supplying a proof of the well-formedness of some expression then this metaphor is appropriate - larger expressions are well-formed if they are composed in certain ways from smaller well-formed expressions.) Moreover, semantic theories generally agree that the semantics also works on local trees to give the meaning of the mother in terms of the meaning of the daughters. And, like the syntax, it also must work “bottom up” - supplying the meaning of larger expressions from the meanings of the expressions which compose them. Given this, it would seem to be extremely surprising to find that the two systems don’t work in tandem. If the semantics is divorced from the syntactic combinatory rules, then why should it be the case that it too works on local trees? Why not find rules taking large chunks of trees and providing an interpretation for these?1
Let me push this point a bit: consider the discussion of how the semantics works in Heim and Kratzer (1998). On p. 29 they make explicit the following assumption - one which is pretty much accepted in most modern work in formal semantics: "Locality: Semantic interpretation rules are local: the denotation of any non-terminal node is computed from the denotation of its daughter nodes." Now if mothers and daughters are merely what rules refer to, and if the semantics works in tandem with the syntax, then this of course follows without further ado. But in the program in which trees are first computed and then sent to the semantics, this principle needs to be stipulated - and as such there is no explanation for why it exists. If one believes that the input to semantic interpretation is trees which are computed by the syntax, why should we all be so proud of the fact that we can Curry (a.k.a. Schönfinkelize) the meaning of a transitive verb? Why not state the meaning of Mitka licked Kolya on a chunk of representation that includes both subject and object? The fact that the syntax and the semantics work on similar objects is a complete mystery under the view that they are divorced from each other.
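The Currying point can be made in a few lines; the following toy Haskell fragment (invented names, toy denotation) just contrasts a meaning stated over the subject-object pair all at once with its Curried counterpart, which is what lets the verb combine with its object in one local tree and the result combine with the subject in the next.

   type E = String

   -- The "flat" statement: a meaning for a whole [S NP [V NP]] chunk at once.
   lickedPair :: (E, E) -> Bool
   lickedPair (subj, obj) = (subj, obj) == ("Mitka", "Kolya")

   -- The Curried (Schonfinkelized) statement: the verb combines with its object
   -- first, yielding a one-place predicate that then combines with the subject.
   licked :: E -> E -> Bool
   licked obj subj = lickedPair (subj, obj)

   -- [[licked Kolya]]        = licked "Kolya"          :: E -> Bool
   -- [[Mitka licked Kolya]]  = licked "Kolya" "Mitka"  :: Bool   (True)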
Second, there is a clear cost to the move of divorcing the syntactic combinatory rules from the semantic rules. The point is easiest to illustrate by a comparison of theory C to theory D, since these are otherwise most alike. Both theories contain an additional set of rules effecting a mapping between surface structure and LF; they disagree on the direction of the mapping. Crucially, in theory D, the semantic combinatory rules cannot be stated in tandem with the phrase structure rules (or their equivalents), and this means that the syntactic side of things must be stated twice: once as output of the syntax and once as input to the semantics. As illustration, take a case where there happens to be no rule of interest mapping between surface structure and LF (the two are thus the same). So consider the syntactic composition and semantic interpretation of a very simple case like John walks. Here is a “fragment” in C to do this, and a “fragment” in D:
(10) C. S ---> NP VP; [[S]]g = [[VP]]g([[NP]]g)

D. syntactic rule: S ---> NP VP
   semantic rule: [[ [S NP VP] ]]g = [[VP]]g([[NP]]g)
Reference to the local tree [S NP VP] is required twice in D but only once in C. The same point can be made by a comparison of the C-fragment given in (4)-(6) to the D-fragment given in (7)-(9). In the D-fragment, the two rules (8) and (9) repeat large parts of the output of the transformational rule (7) which creates the appropriate structure to serve as their input.

One might think that this objection disappears once one moves away from "construction-specific" statements of the semantic rules. But actually, it doesn't: restating the semantic rules as more general schemata certainly ameliorates the situation, but it does not entirely eliminate the problem. Regardless of how general one makes these, one still needs semantic combinatory statements which provide interpretations for classes of local trees. Hence the semantic rules still need to refer to a set of local trees - even if in highly generalized form - both in the input to the semantics and in the output of the syntax. The fewer the rules, the less duplication there will be, but there will still be some, and this remains suspicious if there is an alternative theory which avoids it. And, in fact, any theory which states the rules together can avoid this. There is one important caveat here: a theory with direct compositionality and/or with deep compositionality will find itself in the same "pickle" if it contains a number of specific syntactic rules and general semantic rules stated separately. Such, for example, was the case in some versions of early "type-driven" GPSG, where the syntax contained various phrase structure rule schemata combined with a general principle as follows: for any rule A ---> B C, if the semantic type of B is a function from C-type things to A-type things, then the associated semantic rule will be functional application. Such a theory also contains duplication. (Note, though, that even this kind of theory is not vulnerable to the other criticism of D, which was that it is an accident that the output of the syntax is the same general type of object as the input to the semantics. Even if the syntactic rules are stated in a particular format and the semantic rules are stated separately as generalizations over the syntactic rules, there is no mystery as to why the two involve the same kind of objects.) But while there are versions of A-C which suffer from the problem of duplicating information, there are also versions (e.g., some versions of Categorial Grammar) where the syntactic schemata can be stated in forms as general as the semantic combinatory schemata, avoiding this.2
The only way to demonstrate that D avoids unwanted duplication would be to demonstrate that the syntactic rules and the semantic rules should actually be stated in terms of very different objects or very different kinds of objects. This, for example, would be the case if the semantic rules interpreted chunks of non-local trees. Or, this would be the case if the semantic rules looked only at linear strings and not at syntactic structures. But, as mentioned above, no theory seems to maintain this, and no one (to my knowledge) has found evidence that we need rules of this type. The claim that the rules operate on different objects could also be substantiated if one could show that the semantic rules took as their input a much larger or more general set of local trees than the syntactic rules give as output. If one could really make such a case, then divorcing the output of the syntax from the input to the semantics would be exactly the right move, but there have been few (if any) real arguments to this effect. (I've heard people argue for divorcing the syntax and the semantics on the grounds that we can interpret sentences which the syntax doesn't allow, and a version of this argument is found in Heim and Kratzer (1998). But when one thinks it through, this argues only for having certain aspects of the semantics predictable from the syntax. If the semantics can interpret some (ungrammatical) sentence, then it has to be the case that one can "imagine" some way that the syntax tripped up and allowed it to be proven well-formed (or, in the representational view of semantics, some way in which the syntax tripped up and assigned it a representation). But if one can imagine just how the syntax "goofed" to assign a sentence a representation in a non-direct compositional view of things, then one can equally well imagine - in a direct compositional view - the syntax "goofing" to predict that something is well-formed. Either way, one needs a way to have a predictable semantics to go along with the goofy syntax. But this doesn't bear on the question of direct compositionality. See Jacobson, 2002 for a more detailed discussion of this point.)
1.3. Direct compositionality (in the broad sense): is there any reason why not?
I have tried to show above that - all other things being equal - Direct compositionality (in any of its versions) is simpler, more elegant, and more explanatory than a non-direct compositional view. Are there any real arguments for D over its competitors? In Jacobson (2002 - L&P 25) I tried to show in some detail that the answer is no: arguments which have been put forth in the literature for D at best argue only against Strong Direct compositionality. Unfortunately, much of the modern discussion has pitted these two positions against each other - completely ignoring any of the intermediate positions such as the Weak Direct compositionality of classical Montague grammar.
Space precludes a detailed discussion of this here, but let me briefly mention a few points of relevance. One of the classical "textbook" arguments for the existence of LF is the need to provide a treatment for wide scope quantification in general. But we have seen above that any of the Direct Compositional theories can do just that. The next sort of "textbook" argument that one finds over and over concerns the commonly accepted lore that both wide scope quantification and wh movement are subject to island constraints - and so both must involve movement. Now in the first place, it's not clear to me that this common bit of wisdom is really true. Granted, wide scope quantification is generally difficult out of islands, but it is also difficult to get wide scope readings in a variety of embedded contexts. In that respect, wide scope quantification seems to actually obey much stronger constraints than the constraints on wh movement. For example, recent researchers seem to agree that no book cannot scope over the matrix subject in (11a), while wh movement is of course possible:
(11) a. Some man believes no book is lousy.

b. What does some man believe is lousy?


Of course merely pointing out that QR is subject to additional constraints does not in itself invalidate the claim that it is also subject to island constraints, but it is perfectly possible that the island effects will simply follow from the more general constraints at work here. On the other side of the coin, it has often been observed that scoping out of, e.g., relative clauses is quite difficult but not crashingly bad the way corresponding wh movement cases can be. Thus the correlation between wh movement constraints and quantifier scopes is sufficiently imperfect as to make one suspect that something else is going on here.
But for the sake of argument, suppose that the conventional wisdom is true. Does this necessitate acceptance of D over B or C? Certainly not. Generative semanticists, in fact, very loudly proclaimed this very observation as an argument for a movement rule of Quantifier Lowering. Their battlecry in the late 60’s and early 70’s was that since quantifier scopes obeyed the same constraints as movement, it must be a syntactic movement rule. For a clear statement of just this (appearing in an accessible and “mainstream” journal), see Postal (1974), especially p. 383.
This way of putting it of course assumes the classical Generative Semantics view - according to which wh constructions do indeed involve movement. Can the constraints on wh constructions and on quantifier scopes also be collapsed in Weak Direct Compositionality? The answer is of course yes; relevant discussion of just this point is given in Rodman (1976) (Rodman's discussion focuses on relative clauses rather than questions, but the extension to questions is obvious). Of course, to fully demonstrate this we need to give a Weak Direct Compositional account of wh questions; space precludes doing this here (see Karttunen, 1979 for one such account). But with such an account in hand, it is perfectly easy to state the constraints so as to collapse both wh constructions and constructions with quantifiers.
To make an argument for the approach in D as opposed to these alternatives, one would need to show two things. The first is that island effects are most simply accounted for in terms of constraints on movement. (This is still compatible with the Generative Semantics type solution, but would rule out the non-movement accounts of wh constructions taken in Karttunen.) Actually, there was considerable discussion of just this question in the syntactic literature in the mid and late 1970’s, but it was inconclusive. See, for example, Bresnan and Grimshaw (1978), who show that even a Subjacency-inspired approach can be recast so as to constrain deletion as well as movement. Second, one would need to show that the constraint will apply in the right ways only in raising and not in lowering situations (or only if both wide scope quantification and wh movement have to involve movement in the same direction). Such a demonstration would mean that quantified NPs (like wh NPs) must move up the tree. This in turn means that scopes must be given by QR, and that in turn entails a model like D. But I know of no demonstration of this.
There are, of course, other kinds of arguments which have been given for D (and in some cases specifically also against Strong Direct Compositionality), but the logic of these arguments often goes as follows. It is assumed that there is some constraint in the grammar which refers to global chunks of representation. We then see that the relevant representation is not found on the surface. Hence we posit an LF with the relevant representational properties. But - as discussed above - under strong direct compositionality (and in fact under various weaker versions too) we would not expect to find constraints of this type in the first place, suggesting that the original analysis was incorrect. So, in the remainder of this paper I want to turn to some phenomena of just this type: phenomena which are assumed to require representational statements and which in turn seem to require abstract levels like LF at which the appropriate properties are found. What I will argue with this case study is that thinking in representational terms only gets in the way of formulating the compositional semantics.
