Why create explanations?

  • Compactly fit a piece of the world in your mind.

We want to see not just objects and their relationships, but the larger-scale connections they form: to understand, at least within some part of reality, how its details are connected, why they are connected that way, and what follows from that --- to grasp some general principle. Knowing this principle, we automatically know everything about all similar things and similar pieces of the world. From this comes the feeling that we can solve whole classes of tasks without extra effort. People usually value this feeling and strive to repeat it once they have experienced it.

We have already encountered this a little when talking about levels of abstraction in creating an ontology --- how to orient the ontology toward its use: are we interested in the level of a single execution, of rules, or of a universal principle? Those same meta-model objects belong to the meta-model of creating explanations.

  • Notice that an explanation that worked before no longer fits.

We have discovered a lack of understanding, an area of ignorance, or simply a strong or systematic surprise at some part of reality.

  • Obtain a predictive model.

So that from our understanding of how a piece of the world works, we can make good predictions. Many predictions --- phenomena are always multifaceted, and predictions can be made in very different directions, just as causes can lie on different "facets." This is not a linear sequence but a grid. With such a model, we can predict well what will happen in our piece of the world and in all similar pieces.

Note: we don't want to predict just one thing; we want to build the model so that we can make many good, accurate predictions in different directions.

  • Briefly convey changes in your understanding of a piece of the world to another person.

Take a previously understood law or principle and explain it to the interlocutor in a short time. Explain how it works. Explain it so that they draw their own conclusions and new thoughts appear in their mind.

Example:

I have a camera in my hands; in front of me is a person who will operate it. I explain how the camera works: there is a little elf inside, drawing everything it sees; when the person presses the button, the elf throws out the picture it drew at that moment through a slot.

And if my interlocutor has no questions about my sanity, then they immediately understand that:

  • Since there is a living elf inside, it needs to be fed (the elf has been assigned to the class "living creature," and creatures in this class have the property of requiring food).
  • Paints and paper need to be provided (the practice of drawing is done using paints and paper).
  • Perhaps it is necessary to ask the elf if it needs anything (the elf was also classified as a "communicative intelligent agent," which means you can talk to it).

All this arises from the fact that I am explaining to them: here is a camera --- inside it is an elf --- the elf draws pictures --- the button makes it throw out the picture it just drew. The rest was built from general considerations and illustrates for us the idea that all representations in a person's mind are connected. People build models based on their worldview and inevitably incorporate new inputs into that context.
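The inference chain in the elf example can be sketched as a toy ontology: assigning an object to a class lets the listener read off the properties of that class without being told them explicitly. This is only an illustrative sketch --- the class names and properties below are invented for the example:

```python
# Toy ontology: each class carries properties, and an object inherits
# the properties of every class it is assigned to.
# All names here are illustrative, taken from the elf example.

CLASS_PROPERTIES = {
    "living creature": ["needs food"],
    "drawing practitioner": ["needs paints and paper"],
    "communicative intelligent agent": ["can be talked to"],
}

def infer_properties(assigned_classes):
    """Collect everything that follows from the class assignments alone."""
    props = []
    for cls in assigned_classes:
        props.extend(CLASS_PROPERTIES.get(cls, []))
    return props

# The explanation "there is an elf inside the camera" assigns the elf
# to these classes; the consequences are derived, not stated.
elf_classes = [
    "living creature",
    "drawing practitioner",
    "communicative intelligent agent",
]
print(infer_properties(elf_classes))
# → ['needs food', 'needs paints and paper', 'can be talked to']
```

The point of the sketch is that the explanation only supplies the class assignments; every bullet in the list above falls out of the classes the listener already knows.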

Another example:

I have a camera in my hands; in front of me is a person who will operate it. I ask them three times a day to bring monkey blood to the camera.

Immediately, there are a lot of questions: why, for whom, why specifically monkey blood, and what about the camera? These questions arise because I gave no explanation.

I want to give the person the understanding of a piece of the world that I have, so they can figure out for themselves what follows from it and what is needed from them. Of course, if my piece of the world is large and complex (like the internal mechanics of a large company, for example), then I will need to explain it from different levels of abstraction, from different sides, and in different perspectives. Even so, giving an explanation is easier and more compact than giving all of its consequences --- an explanation always has multiple consequences, and it is not always obvious which ones are worth discussing and which are unlikely to be needed.