Prof. Fakhar Alam

Dept. of English

Govt. College Civil Lines Multan

    Paper 6        Linguistics


    Semantics



    Semantics is concerned with the study of meaning and is related to both philosophy and logic. Semiotics is the study of communication systems in general. Sign language is a common means of communication among those who are deaf and can, if learned from childhood, approach natural language in terms of scope and flexibility.

    • There are four recognisable types of meaning: lexical meaning, grammatical meaning, sentence meaning and utterance meaning, which correspond to the areas of derivational morphology, inflectional morphology, syntax and pragmatics respectively.

    • External meaning relationships involve sense (relationships between words) and denotation (the relationship of a word to what it signifies).

    • There are various internal meaning relationships, such as synonymy (sameness of meaning), antonymy (oppositeness of meaning) and hyponymy (hierarchical ordering of meaning).

    • Different models for semantic analysis are available: prototype theory, where a central concept is taken as typical and less central ones are peripheral, and componential analysis which seeks to break words down into their component semantic parts.

    Explanation:

    Semantics is the study of the meaning of linguistic expressions. The language can be a natural language, such as English or Navajo, or an artificial language, like a computer programming language. Meaning in natural languages is mainly studied by linguists. In fact, semantics is one of the main branches of contemporary linguistics. Theoretical computer scientists and logicians think about artificial languages. In some areas of computer science, these divisions are crossed. In machine translation, for instance, computer scientists may want to relate natural language texts to abstract representations of their meanings; to do this, they have to design artificial languages for representing meanings.

    There are strong connections to philosophy. In the earlier part of the twentieth century, much work in semantics was done by philosophers, and some important work is still done by philosophers.

    Anyone who speaks a language has a truly amazing capacity to reason about the meanings of texts. Take, for instance, the sentence

    (S) I can't untie that knot with one hand.

    Even though you have probably never seen this sentence, you can easily see things like the following:

    1. The sentence is about the abilities of whoever spoke or wrote it. (Call this person the speaker.)
    2. It's also about a knot, maybe one that the speaker is pointing at.
    3. The sentence denies that the speaker has a certain ability. (This is the contribution of the word ‘can't'.)
    4. Untying is a way of making something not tied.
    5. The sentence doesn't mean that the knot has one hand; it has to do with how many hands are used to do the untying.

    The meaning of a sentence is not just an unordered heap of the meanings of its words. If that were true, then ‘Cowboys ride horses’ and ‘Horses ride cowboys’ would mean the same thing. So we need to think about arrangements of meanings.

    Here is an arrangement that seems to bring out the relationships of the meanings in sentence (S).

    Not [ I [ Able [ [ [Make [Not [Tied]]] [That knot ] ] [With One Hand] ] ] ]

    The unit [Make [Not [Tied]]] here corresponds to the act of untying; it contains a subunit corresponding to the state of being untied. Larger units correspond to the act of untying-that-knot and to the act of untying-that-knot-with-one-hand. Then this act combines with Able to make a larger unit, corresponding to the state of being-able-to-untie-that-knot-with-one-hand. This unit combines with I to make the thought that I have this state -- that is, the thought that I-am-able-to-untie-that-knot-with-one-hand. Finally, this combines with Not and we get the denial of that thought.
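
    To make the bracketing concrete, here is a minimal sketch in Python (the labels and the tuple encoding are purely illustrative, not part of the lecture's framework) that builds the same arrangement as nested units, from the innermost outward:

        # Illustrative only: each unit of meaning is represented as a nested tuple.
        untie = ("Make", ("Not", "Tied"))                    # the act of untying
        untie_that_knot = (untie, "That knot")               # untying that knot
        with_one_hand = (untie_that_knot, "With One Hand")   # ...with one hand
        able = ("Able", with_one_hand)                       # being able to do it
        i_am_able = ("I", able)                              # I have that ability
        sentence_s = ("Not", i_am_able)                      # denial of that thought

        print(sentence_s)   # the whole arrangement for sentence (S)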

    The idea that meaningful units combine systematically to form larger meaningful units, and that understanding a sentence is a matter of working out these combinations, has probably been the most important theme in contemporary semantics.

    Linguists who study semantics look for general rules that bring out the relationship between form (the observed arrangement of words in sentences) and meaning. This is interesting and challenging, because these relationships are so complex.

    A semantic rule for English might say that a simple sentence involving the word ‘can't’ always corresponds to a meaning arrangement like

    Not [ Able ... ],

    but never to one like

    Able [ Not ... ].

    For instance, ‘I can't dance’ means that I'm unable to dance; it doesn't mean that I'm able not to dance.

    To assign meanings to the sentences of a language, you need to know what they are. It is the job of another area of linguistics, called syntax, to answer this question, by providing rules that show how sentences and other expressions are built up out of smaller parts, and eventually out of words. The meaning of a sentence depends not only on the words it contains, but on its syntactic makeup: the sentence

    (S2) That can hurt you,

    for instance, is ambiguous -- it has two distinct meanings. These correspond to two distinct syntactic structures. In one structure ‘That’ is the subject and ‘can’ is an auxiliary verb (meaning “able”), and in the other ‘That can’ is the subject and ‘can’ is a noun (indicating a sort of container).

    Because the meaning of a sentence depends so closely on its syntactic structure, linguists have given a lot of thought to the relations between syntactic structure and meaning; in fact, evidence about ambiguity is one way of testing ideas about syntactic structure.

    You would expect an expert in semantics to know a lot about what meanings are. But linguists haven't directly answered this question very successfully. This may seem like bad news for semantics, but it is actually not that uncommon for the basic concepts of a successful science to remain problematic: a physicist will probably have trouble telling you what time is. The nature of meaning, and the nature of time, are foundational questions that are debated by philosophers.

    We can simplify the problem a little by saying that, whatever meanings are, we are interested in literal meaning. Often, much more than the meaning of a sentence is conveyed when someone uses it. Suppose that Carol says ‘I have to study’ in answer to ‘Can you go to the movies tonight?’. She means that she has to study that night, and that this is a reason why she can't go to the movies. But the sentence she used literally means only that she has to study. Nonliteral meanings are studied in pragmatics, an area of linguistics that deals with discourse and contextual effects.

    But what is a literal meaning? There are four sorts of answers: (1) you can dodge the question, or (2) appeal to usage, or (3) appeal to psychology, or (4) treat meanings as real objects.

    (1) The first idea would involve trying to reconstruct semantics so that it can be done without actually referring to meanings. It turns out to be hard to do this -- at least, if you want a theory that does what linguistic semanticists would like a theory to do. But the idea was popular earlier in the twentieth century, especially in the 1940s and 1950s, and has been revived several times since then, because many philosophers would prefer to do without meanings if at all possible. But these attempts tend to ignore the linguistic requirements, and for various technical reasons have not been very successful.

    (2) When an English speaker says ‘It's raining’ and a French speaker says ‘Il pleut’ you can say that there is a common pattern of usage here. But no one really knows how to characterize what the two utterances have in common without somehow invoking a common meaning. (In this case, the meaning that it's raining.) So this idea doesn't seem to really explain what meanings are.

    (3) Here, you would try to explain meanings as ideas. This is an old idea, and is still popular; nowadays, it takes the form of developing an artificial language that is supposed to capture the "inner cognitive representations" of an ideal thinking and speaking agent. The problem with this approach is that the methods of contemporary psychology don't provide much help in telling us in general what these inner representations are like. This idea doesn't seem yet to lead to a methodology that can produce a workable semantic theory.

    (4) If you say that the meaning of ‘Mars’ is a certain planet, at least you have a meaning relation that you can come to grips with. There is the word ‘Mars’ on the one hand, and on the other hand there is this big ball of matter circling around the sun. This clarity is good, but it is hard to see how you could cover all of language this way. It doesn't help us very much in saying what sentences mean, for instance. And what about the other meaning of ‘Mars’? Do we have to believe in the Roman god to say that ‘Mars’ is meaningful? And what about ‘the largest number’?

    The approach that most semanticists endorse is a combination of (1) and (4). Using techniques similar to those used by mathematicians, you can build up a complex universe of abstract objects that can serve as meanings (or denotations) of various sorts of linguistic expressions. Since sentences can be either true or false, the meanings of sentences usually involve the two truth values true and false. You can make up artificial languages for talking about these objects; some semanticists claim that these languages can be used to capture inner cognitive representations. If so, this would also incorporate elements of (3), the psychological approach to meanings. Finally, by restricting your attention to selected parts of natural language, you can often avoid hard questions about what meanings in general are. This is why this approach to some extent dodges the general question of what meanings are. The hope would be, however, that as more linguistic constructions are covered, better and more adequate representations of meaning would emerge.

    Though "truth values" may seem artificial as components of meaning, they are very handy in talking about the meaning of things like negation; the semantic rule for negative sentences says that their meanings are like that of the corresponding positive sentences, except that the truth value is switched, false for true and true for false. ‘It isn't raining’ is true if ‘It is raining’ is false, and false if ‘It is raining’ is true.

    Truth values also provide a connection to validity and to valid reasoning. (It is valid to infer a sentence S2 from S1 in case S1 couldn't possibly be true when S2 is false.) This interest in valid reasoning provides a strong connection to work in the semantics of artificial languages, since these languages are usually designed with some reasoning task in mind. Logical languages are designed to model theoretical reasoning such as mathematical proofs, while computer languages are intended to model a variety of general and special purpose reasoning tasks. Validity is useful in working with proofs because it gives us a criterion for correctness. It is useful in much the same way with computer programs, where it can sometimes be used to either prove a program correct, or (if the proof fails) to discover flaws in programs.
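
    The definition of validity can also be checked mechanically for a toy case. The sketch below (an illustration, not a general theorem prover) verifies that inferring 'A' from 'A and B' is valid: there is no assignment of truth values on which the premise is true and the conclusion false.

        from itertools import product

        premise    = lambda a, b: a and b     # S1: 'A and B'
        conclusion = lambda a, b: a           # S2: 'A'

        # Valid iff there is no case where the premise is true and the conclusion is false.
        valid = all(not (premise(a, b) and not conclusion(a, b))
                    for a, b in product([True, False], repeat=2))
        print(valid)   # True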

    These ideas (which really come from logic) have proved to be very powerful in providing a theory of how the meanings of natural-language sentences depend on the meanings of the words they contain and their syntactic structure. Over the last forty years or so, there has been a lot of progress in working this out, not only for English, but for a wide variety of languages. This is made much easier by the fact that human languages are very similar in the kinds of rules that are needed for projecting meanings from words to sentences; they mainly differ in their words, and in the details of their syntactic rules.

    Recently, there has been more interest in lexical semantics -- that is, in the semantics of words. Lexical semantics is not so much a matter of trying to write an "ideal dictionary". (Dictionaries contain a lot of useful information, but don't really provide a theory of meaning or good representations of meanings.) Rather, lexical semantics is concerned with systematic relations in the meanings of words, and in recurring patterns among different meanings of the same word. It is no accident, for instance, that you can say ‘Sam ate a grape’ and ‘Sam ate’, the former saying what Sam ate and the latter merely saying that Sam ate something. This same pattern occurs with many verbs.

    Logic is a help in lexical semantics, but lexical semantics is full of cases in which meanings depend subtly on context, and there are exceptions to many generalizations. (To undermine something is to mine under it; but to understand something is not to stand under it.) So logic doesn't carry us as far here as it seems to carry us in the semantics of sentences.

    Natural-language semantics is important in trying to make computers better able to deal directly with human languages. In one typical application, there is a program people need to use. Running the program requires using an artificial language (usually, a special-purpose command language or query language) that tells the computer how to do some useful reasoning or question-answering task. But it is frustrating and time-consuming to teach this language to everyone who may want to interact with the program. So it is often worthwhile to write a second program, a natural language interface, that mediates between simple commands in a human language and the artificial language that the computer understands. Here, there is certainly no confusion about what a meaning is; the meanings you want to attach to natural language commands are the corresponding expressions of the programming language that the machine understands. Many computer scientists believe that natural language semantics is useful in designing programs of this sort.

    But it is only part of the picture. It turns out that most English sentences are ambiguous to a depressing extent. (If a sentence has just five words, and each of these words has four meanings, this alone gives potentially 1,024 possible combined meanings.) Generally, only a few of these potential meanings will be at all plausible. People are very good at focusing on these plausible meanings, without being swamped by the unintended meanings. But this takes common sense, and at present we do not have a very good idea of how to get computers to imitate this sort of common sense. Researchers in the area of computer science known as Artificial Intelligence are working on that.

    Meanwhile, in building natural-language interfaces, you can exploit the fact that a specific application (like retrieving answers from a database) constrains the things that a user is likely to say. Using this, and other clever techniques, it is possible to build special purpose natural-language interfaces that perform remarkably well, even though we are still a long way from figuring out how to get computers to do general-purpose natural-language understanding.
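
    As a rough sketch of such a special-purpose interface, the toy function below maps one invented English command pattern onto an SQL-like query; the pattern, the table and column names, and the query language itself are all made up for the example, and a real interface would need far broader coverage and disambiguation:

        # Hypothetical illustration of a tiny natural-language interface.
        def to_query(command):
            words = command.rstrip("?.").lower().split()
            # Covers only commands of the form: "list <table> in <department>"
            if len(words) == 4 and words[0] == "list" and words[2] == "in":
                return f"SELECT * FROM {words[1]} WHERE dept = '{words[3]}'"
            return None   # anything else is outside this tiny grammar

        print(to_query("List employees in sales"))
        # SELECT * FROM employees WHERE dept = 'sales'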

    Semantics probably won't help you find out the meaning of a word you don't understand, though it does have a lot to say about the patterns of meaningfulness that you find in words. It certainly can't help you understand the meaning of one of Shakespeare's sonnets, since poetic meaning is so different from literal meaning. But as we learn more about semantics, we are finding out a lot about how the world's languages match forms to meanings. And in doing that, we are learning a lot about ourselves and how we think, as well as acquiring knowledge that is useful in many different fields and applications.

    ----------------

    Semantics vs. Pragmatics

    Semantics can be defined as "the study of the meaning of morphemes, words, phrases and sentences."

    You will sometimes see definitions of semantics like "the analysis of meaning." To see why this is too broad, consider the following. Kim, returning home after a long day, discovers that the new puppy has crapped on the rug, and says "Oh, lovely."

    We don't normally take this to mean that Kim believes that dog feces has pleasing or attractive qualities, or is delightful. Someone who doesn't know English will search the dictionary in vain for what Kim means by saying "lovely":

    lovely (ADJECTIVE): [love-li-er, love-li-est].
    
        1. Full of love; loving.
        2. Inspiring love or affection.
        3. Having pleasing or attractive qualities.
        4. Enjoyable; delightful.

    Obviously this is because Kim is being ironic, in the sense of "using words to convey the opposite of their literal meaning". Kim might have said "great," or "wonderful," or "beautiful", or "how exquisite", and none of the dictionary entries for these words will help us understand that Kim means to express disgust and annoyance. That's because a word's meaning is one thing, and Kim's meaning -- what Kim means by using the word -- is something else.

    There are lots of other ways besides irony to use words to mean something different from what you get by putting their dictionary entries together. Yogi Berra was famous for this: "If you can't imitate him, don't copy him" and "You can observe a lot just by watching", and dozens of others.

    In fact, even when we mean what we literally say, we often -- maybe always -- mean something more as well. The study of "speaker meaning" -- the meaning of language in its context of use -- is called pragmatics, and will be the subject of the next lecture.

    Philosophers have argued about "the meaning of meaning," and especially about whether this distinction between what words mean and what people mean is fundamentally sound, or is just a convenient way of talking. Most linguists find the distinction useful, and we will follow general practice in maintaining it. However, as we will see, it is not always easy to draw the line.

    Word meaning and processes for extending it

    Word meanings are somewhat like game trails. Some can easily be mapped because they are used enough that a clear path has been worn. Unused trails may become overgrown and disappear. And one is always free to strike out across virgin territory; if enough other animals follow, a new trail is gradually created.

    Since word meanings are not useful unless they are shared, how does this creation of new meanings work? There are a variety of common processes by which existing conventional word meanings are creatively extended or modified. When one of these processes is applied commonly enough in a particular case, a new convention is created; a new "path" is worn.

    Metaphor

    Consider the difference in meaning between "He's a leech" and "He's a louse." Both leeches and lice are parasites that suck blood through the skin of their host, and we -- being among their hosts -- dislike them for it. Both words have developed extended meanings in application to humans who are portrayed as like a leech or like a louse -- but the extensions are quite different.

    According to the American Heritage Dictionary, a leech is "one who preys on or clings to another", whereas a louse is "a mean or despicable person." These extended meanings have an element of arbitrariness. Most of us regard leeches as "despicable," and lice certainly "prey on" and "cling to" their hosts. Nevertheless, a human "leech" must be needy or exploitative, whereas a human "louse" is just an object of distaste.

    Therefore it's appropriate for the dictionary to include these extended meanings as part of the meaning of the word. All the same, we can see that these words originally acquired their extended meanings by the completely general process of metaphor. A metaphor is "a figure of speech in which a term is transferred from the object it ordinarily designates to an object it may designate only by implicit comparison or analogy." For instance, if we speak of "the evening of her life", we're making an analogy between the time span of a day and the time span of a life, and naming part of life by reference to a part of the day.

    In calling someone a leech, we're making an implicit analogy between interpersonal relationships and a particular kind of parasite/host relationship.

    This kind of naming -- and thinking -- by analogy is ubiquitous. Sometimes the metaphoric relationship is a completely new one, and then the process is arguably part of pragmatics -- the way speakers use language to express themselves. However, these metaphors often become fossilized or frozen, and new word senses are created. Consider what it means to call someone a chicken, or a goose, or a cow, or a dog, or a cat, or a crab, or a bitch. For many common animal names, English has a conventionalized metaphor for application to humans. Some more exotic animals also have conventional use as epithets ("you baboon!", "what a hyena!"). No such commonplace metaphors exist for some common or barnyard animals ("what a duck she is"?), or for most rarer or more exotic animals, such as wildebeest or emus. Therefore, these are available for more creative use. The infamous 'water buffalo incident' of a few years ago was apparently a case in which what began as a fossilized metaphor from a language other than English was interpreted as a much more offensive novel usage.

    Sometimes the metaphoric sense is retained and the original meaning disappears, as in the case of muscle, which comes from Latin musculus "small mouse".

    Metonymy and synecdoche

    Metonymy is "a figure of speech in which an attribute or commonly associated feature is used to name or designate something."

    Synecdoche is "a figure of speech by which a more inclusive term is used for a less inclusive one, or vice versa."

    Like metaphors, many examples of metonymy and synecdoche become fossilized: gumshoe, hand (as in "all hands on deck"), "the law" referring to a policeman. However, the processes can be applied in a creative way: "the amputation in room 23".

    It often requires some creativity to figure out what level of specificity, or what associated object or attribute, is designated by a particular expression. "I bought the Inquirer" (a copy of the newspaper); "Knight-Ridder bought the Inquirer" (the newspaper-publishing company); "The Inquirer endorsed Rendell" (the newspaper's editorial staff); etc. "Lee is parked on 33rd St." (i.e. Lee's car, perhaps said at a point when Lee in person is far away from 33rd St.).

    Connotation/denotation

    The word "sea" denotes a large body of water, but its connotative meaning includes the sense of overwhelming space, danger, instability; whereas "earth" connotes safety, fertility and stability. Of many potential connotations, the particular ones evoked depend upon the context in which words are used. Specific kinds of language (such as archaisms) also have special connotations, carrying a sense of the context in which those words are usually found.

    Over time, connotation can become denotation. Thus trivial subjects were originally the subjects in the trivium, consisting of grammar, rhetoric and logic. These were the first subjects taught to younger students; therefore the connotation arises that the trivium is relatively easy, since it is taught to mere kiddies; therefore something easy is trivial.

    Other terminology in lexical semantics

    In discussing semantics, linguists sometimes use the term lexeme (as opposed to word), so that word can be retained for the inflected variants. Thus one can say that the words walk, walks, walked, and walking are different forms of the same lexeme.

    There are several kinds of sense relations among lexemes. First is the opposition between syntagmatic relations (the way lexemes are related in sentences) and paradigmatic relations (the way words can substitute for each other in the same sentence context).

    Important paradigmatic relations include:

    1. synonymy - "sameness of meaning" (pavement is a synonym of sidewalk)
    2. hyponymy - "inclusion of meaning" (cat is a hyponym of animal)
    3. antonymy - "oppositeness of meaning" (big is an antonym of small)
    4. incompatibility - "mutual exclusiveness within the same superordinate category" (e.g. red and green)

    We also need to distinguish homonymy from polysemy: two words are homonyms if they are (accidentally) pronounced the same (e.g. "too" and "two"); a single word is polysemous if it has several meanings (e.g. "louse" the bug and "louse" the despicable person).

    Lexical Semantics vs. Compositional Semantics

    In the syntax lectures, we used the example of a desk calculator, where the semantics of complex expressions can be calculated recursively from the semantics of simpler ones. In the world of the desk calculator, all meanings are numbers, and the process of recursive combination is defined in terms of the operations on numbers such as addition, multiplication, etc.

    The same problem of compositional semantics arises in the case of natural language meaning. How do we determine the meaning of complex phrases from the meanings of simpler ones?

    There have been many systematic efforts to address this problem, going back to the work of Frege and Russell around the turn of the 20th century. Many aspects of the problem have been solved. Here is a simple sketch of one approach. Suppose we take the meaning of "red" to be associated with the set of red things, and the meaning of "cow" to be associated with the set of things that are cows. Then the meaning of "red cow" is the intersection of the first set (the set of red things) with the second set (the set of things that are cows). Proceeding along these lines, we can reconstruct in terms of set theory an account of the meaning of predicates ("eat"), quantifiers ("all"), and so forth, and eventually give a set-theoretic account of "all cows eat grass" analogous to the account we might give for "((3 + 4) * 6)".
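
    A minimal sketch of this composition, with tiny invented sets standing in for the denotations of "red" and "cow":

        # Invented toy denotations: the set of red things and the set of cows.
        red_things = {"barn", "fire truck", "bessie"}
        cows       = {"bessie", "daisy"}

        # Compositional rule for adjective + noun: intersect the two sets.
        red_cows = red_things & cows
        print(red_cows)   # {'bessie'}: the things that are both red and cows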

    This sort of analysis -- which can become very complex and sophisticated -- does not tell us anything about the meanings of the words involved, but only about how to calculate the denotation of complex expressions from the denotation of simple ones. The denotation of the primitive elements -- the lexemes -- is simply stipulated (as in "the set of all red things").

    Since this account of meaning expresses denotations in terms of sets of things in the world -- known as "extensions" -- it is called "extensional".

    Sense and Reference

    One trouble with this line of inquiry was raised more than 100 years ago by Frege. There is a difference between the reference (or extension) of a concept -- what it corresponds to in the world -- and the sense (or intension) of a concept -- what we know about its meaning, whether or not we know anything about its extension, and indeed whether or not it has an extension.

    We know something about the meaning of the word "dog" that is not captured by making a big pile of all the dogs in the world. There were other dogs in the past, there will be other dogs in the future, there are dogs in fiction, etc. One technique that has been used to generalize "extensional" accounts of meaning is known as possible worlds semantics. In this approach, we imagine that there are indefinitely many possible worlds in addition to the actual one, and now a concept -- such as dog -- is no longer just a set, but rather is a function from worlds to sets. This function says, "Give me a possible world, and I'll give you the set of dogs in that world."
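
    A minimal sketch of the intension of "dog" under this approach, with a couple of invented possible worlds:

        # Invented possible worlds, each listing the dogs that exist in it.
        worlds = {
            "actual world":    {"fido", "rex"},
            "fictional world": {"lassie", "fido"},
        }

        def dog(world):
            # The intension of 'dog': given a world, return the set of dogs in it.
            return worlds[world]

        print(dog("actual world"))      # {'fido', 'rex'}
        print(dog("fictional world"))   # {'lassie', 'fido'}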

    Like many mathematical constructs, this is not a very practical arrangement, but it permits interesting and general mathematics to continue to be used in modeling natural language meaning in a wider variety of cases, including counter-factual sentences ("If you had paid me yesterday, I would not be broke today").

    --------------
