
How do we interpret signs?

So, we have a sign (which exists in the physical world, although its physical nature is not so important to us here), there is a meaning, an idea (a generalized image in mental space that forms in our mind when we understand the sign), and there is a referent - a real (or imagined) object in the physical (actual or hypothetical) world that the sign, when presented to us, tells us something about.

To make a reference (that is, to interpret a sign) essentially means to understand what is being talked about - which object or phenomenon in the world - when a simple or compound sign is presented to us.

If communication happens through language, we need to interpret linguistic expressions. Simply put, this means assembling words into sentences and placing them in context so that we can imagine a piece (or set of pieces) of the world in which such a statement could be true.

For example, when you are told "there is a computer on the table," you imagine this picture. You can draw it in your imagination, or mentally hear it (if, say, you are told that the computer fell off the table) - that is, you can imagine it as information for other senses too, not only as a visual image. Knowing about gravity and having some life experience, you will have no difficulty understanding what is meant.

The general principles of interpreting signs are studied in structural semantics. Two such principles are of interest to us:

The principle of compositionality is a simple and almost always valid principle that matches our intuitive sense of how language works. It states: the meaning of a linguistic expression is composed of the meanings of the smaller linguistic expressions it contains.

This is indeed the case: when we say "the laptop fell off the table," we summon from memory the concepts (ideas) of "laptop" and "table," and these define our understanding of the phrase. The concept of "fell" also means something to us.

It can be said that compositional interpretation goes from semantics (the relation between signs and objects) to pragmatics (understanding the practical meaning of the situation).
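
To make this concrete, here is a minimal sketch in Python of a purely compositional interpreter: the meaning of a phrase is built only by combining the meanings of its words. The lexicon and its feature sets are invented for illustration; real lexical semantics is of course far richer.

```python
# A toy compositional semantics: the "meaning" of a phrase is assembled
# only from the meanings of its parts. Lexicon entries are invented
# for illustration.
LEXICON = {
    "laptop": {"kind": "object", "features": {"electronic", "portable"}},
    "table":  {"kind": "object", "features": {"furniture", "flat-top"}},
    "fell":   {"kind": "event",  "features": {"downward-motion", "gravity"}},
}

def interpret(phrase: str) -> dict:
    """Compose a phrase meaning as the combination of word meanings."""
    meaning = {"objects": [], "events": []}
    for word in phrase.lower().split():
        entry = LEXICON.get(word)
        if entry is None:
            continue  # function words ("the", "off") are ignored in this toy
        key = "objects" if entry["kind"] == "object" else "events"
        meaning[key].append((word, entry["features"]))
    return meaning

print(interpret("The laptop fell off the table"))
# {'objects': [('laptop', ...), ('table', ...)], 'events': [('fell', ...)]}
```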

However, the principle of compositionality is not enough on its own: many words in a language have more than one meaning.

A classic example of this in Russian is the phrase "a guy in the club picked up a model," where "model" can mean either a fashion model or a scale model.

In such cases, you have to guess from the context (the speakers' identities, the situation, the preceding dialogue, and other indirect cues) which specific meaning the ambiguous word carries, and then interpret the phrase compositionally.

For example, the phrase can be continued with "it remains to install the engine in it," and then the context sharply narrows the possible interpretations: the "model" is clearly a scale model.

Therefore, the principle of compositionality is complemented by the principle of contextuality: a word is interpreted on the basis of its context. This principle is very useful - it implies a kind of step outward, emphasizing that in order to interpret a linguistic expression, you look at what surrounds it and make assumptions about the situation before arriving at a final interpretation. It can be said that contextual interpretation goes from pragmatics to semantics.
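
A crude way to picture contextual interpretation in code is to pick, before composing the phrase, the sense of an ambiguous word that best matches its surroundings. In this Python sketch the sense inventory and the cue words are invented for illustration:

```python
# Toy contextual disambiguation: choose the sense of an ambiguous word
# whose cue words overlap most with the rest of the phrase.
# Senses and cue words are invented for illustration.
SENSES = {
    "model": {
        "fashion model": {"club", "photo", "runway", "agency"},
        "scale model":   {"engine", "glue", "kit", "assemble"},
    }
}

def disambiguate(word: str, context: set[str]) -> str:
    """Return the sense of `word` with the largest cue overlap with `context`."""
    senses = SENSES[word]
    return max(senses, key=lambda sense: len(senses[sense] & context))

context = set("it remains to install the engine in it".split())
print(disambiguate("model", context))  # -> 'scale model'
```

Real systems rely on far richer signals than word overlap, but the direction is the same: from the surrounding situation to the meaning of the word.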

This principle fits well with how our brain works. When we think carefully, we often reason like this: "let's look at and describe how everything looks as a whole, where the gap in this picture is, and what would ideally fill it." When we start thinking about what should fill an unclear gap in an otherwise clear picture (phrase), we can make predictions - based on the context, for example - and on the whole guess successfully. This works better than analyzing the phrase (situation) compositionally alone, trying to work out the meaning of each piece. You have most likely encountered this phenomenon when studying foreign languages: it is often possible to guess from the context what an unfamiliar word means.

Interestingly, the most modern methods of training large transformer language models work exactly this way: the model learns to predict hidden fragments of text from billions of examples.
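
You can try this yourself with the Hugging Face transformers library (assuming it is installed and the pretrained bert-base-uncased model can be downloaded): given a masked word, the model predicts what should fill the gap from the surrounding context.

```python
# Masked-language-model prediction: the model guesses a hidden word
# from its context, much like filling a gap in a clear picture.
# Assumes `pip install transformers torch` and network access to
# download the pretrained weights.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for prediction in fill_mask("The laptop fell off the [MASK]."):
    print(f"{prediction['token_str']:>8}  score={prediction['score']:.3f}")
# Plausible completions: "table", "desk", "shelf", ...
```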

By combining the principles of compositionality and contextuality, we can understand, at least approximately, how a thinking agent interprets the linguistic expressions that reach it.