Errors in setting conceptual attention
If you have problems with setting conceptual attention, it is worth checking whether you are making any of the mistakes below; they are often made without even being noticed.
Mistake #1: Not building models because "decisions need to be made quickly." In other words, relying on the learned automatisms of S1 even when you are not happy with the consequences of that reliance. This mistake is often made because of the basic settings of our brain: as mentioned many times before, it is structured to help us survive in a hostile environment, not to build successful systems. Evolution is slowly changing us; the emergence of S2, and the brain changes that scientists observe today in people who work with large volumes of information, are evidence of that. However, S1 is much older, has been shaped by evolution far longer, and remains the "basic" system that governs human behavior. Therefore, S1's commands take precedence.
It can be said that, when choosing between "speed" and "accuracy," our consciousness, and the brain in general, more often chooses speed. However, like any compromise, this choice has consequences: decision-making models become rougher and more approximate. And the faster the decision is made, the rougher the model produced "on output."
When new inputs arrive, there is a great temptation to make a quick decision (in seconds or even fractions of a second). When you need to escape predators, this setting helps you survive; but in business, there are no decisions that must be made in seconds. In the life of a modern person, most decisions - those concerning relationships with others, career, place of residence, and so on - do not require that speed either. Accuracy turns out to be somewhat more important: all else being equal, it makes sense to take a little more time to consider decisions, and at least take into account key interests and key risks. For example, sometimes the decision to launch a new business line is made "just because there is good demand in the market." But this does not take into account the characteristics of the company itself: how many resources need to be invested to satisfy this demand profitably? Which projects will not be implemented if this new line is launched - that is, what is the cost of the best alternative? Won't the company become defocused, ending up with too many customer segments, none of which it serves well enough?
The same applies to an individual's decisions. If a manager understands that his peers (representatives of adjacent departments) are not doing their job well, causing management problems for the manager's team, he needs to somehow resolve this contradiction. At the same time, the manager may feel irritated about what is happening, not understanding why others need to be told at all that they should work well. Consequently, the first impulse will be to go for open conflict - or, if not to fight openly, then to start the conversation from the position of an "accuser." Sometimes such a conversational style brings results, but often it makes the interlocutors want to defend themselves. In "defense" mode, even rational arguments that would easily be accepted in a calm state can be "rejected" (because it is no longer a conversation but a struggle). Achieving the manager's initial goal - agreeing on the parameters of the work products that the neighboring teams should provide - can be extremely difficult in this case. So if a manager understands that there is a problem, it is advisable to exhale, let the acute irritation pass, and choose the negotiation style rationally.
How to fix the mistake: ask yourself whether it makes sense to make the decision in seconds, "on emotion," or whether it is worth taking time to reflect. You can establish the rule that "a decision must be substantiated (a decision-making model must be provided)."
Mistake #2: Undermodeling, or not considering important objects of attention. For example, not taking role interests into account in communication. Perhaps the team of peers doing subpar work for colleagues is overloaded with projects and is essentially a bottleneck; however, there is no work-queue management, and tasks are taken on a "whoever yells louder gets served first" basis. Perhaps there are no regulations or checklists describing how to perform routine work. Under such conditions, the peers cannot help but make mistakes. Therefore, scolding them may be pleasant but not very helpful in solving the problem: important circumstances are not being taken into account.
We constantly encounter such undermodeling, in part because it is difficult to build a quality model right away, especially if mastery is lacking. For example, when students are just starting to study modeling, it is difficult for them to complete homework: something is always overlooked, and completed assignments have to be redone. In the "Systems Thinking" course, you will encounter the rule that "there is always one more role than you have identified." That is, when modeling a project, you will initially include too few roles in the list, and you will keep identifying new roles whose interests were not considered but which now influence the project. This will happen especially often in your first attempts at modeling your first project. This is normal.
How to fix the mistake: refine the model iteratively after receiving feedback.
Mistake #3: Overmodeling. You can also overdo modeling: sit and keep filing away at the model, never letting it be tested "in combat," that is, through actions in the physical world according to the model. Often this happens because we are afraid that there will be errors in the model (there definitely will be) and that it will have to be changed (it definitely will), possibly with a revision of personal worldviews. This often happens when something is done for the first time - for example, when you are just starting to learn the engineer's role and are afraid of making a mistake. Or you get into a "dead zone," a motivational rut in learning (covered in detail in the "Attention Retention" section). As a result, "analysis paralysis" sets in and decision-making is delayed.
How to fix the mistake: adopt the rule that "50-80% of the information is enough to create a quality model and make a decision." To get the missing 20% (or more), you may need to put in as much effort as you did to obtain the first 50-80%, or even more; the costs of overmodeling do not pay off. If there are doubts about the decision made, or the model seems inadequate, you can conduct an audit. If the audit does not reveal any problems, then instead of worrying, start gathering statistics: do something many, many times. For example, write 100 notes using thinking by writing, or 100 posts, or 100 reports; make 100 cold calls; hold 100 negotiations. Such "combat testing" will give you the missing information much faster and more accurately than continuing to sit and model.
Mistake #4: Focusing attention on the wrong things. For example, a person entering the kitchen should take on the role of a cook but instead slips into the role of a lighting designer and starts figuring out which light bulbs to buy to better illuminate the food preparation area. Half an hour later, the person realizes with surprise that it is time for lunch, but there is no lunch in sight: the required role was never performed.
How to fix the mistake: restore the focus of attention and keep it in mind. Help yourself and others take on the necessary roles by designing a path for the attention cameras: quite literally "throwing" the necessary objects under the cameras - for example, the groceries and a cutting board with a knife.
Mistake #5: Not grounding the model in the physical world - not testing it in combat, or not testing it enough times. For example, to check the quality of a product hypothesis, it is desirable to test it many times: first try to refute it using the information you already have about buyer behavior and market trends; then test it on a small sample; then on a larger one, and so on. If you cannot refute the hypothesis, you have found a good hypothesis. If you succeed in refuting it, you can adjust the hypothesis and test the updated version, again many times. If you test hypotheses by the method of "my acquaintances said the idea is good, so let's implement it / the idea is bad, so let's not" - then no verification, no grounding of the model in the physical world, takes place.
The same applies to the material of the "Modeling and Coherence" course: the studied material must be constantly applied, grounding it in your own reality, instead of looking for excuses not to. Then you can expect conceptual attention to be set as intended.
How to fix the mistake: build a model, choose a course of action, and do an adequate number of repetitions. If you don't know how many you need, you can use the "100 repetitions" method.
Mistake #6: Not resting while setting conceptual attention. Conceptual attention with S2 involvement is resource-intensive. It requires not only quality concentration (the ability to focus on the task at hand) but also the capacity for such concentration, which is provided by good rest and a stable emotional background. Otherwise, you will make many mistakes in your models, or will not be able to focus on the model at all: instead of actually working on it, you will only mimic working on it (there are no resources left for "real work").
How to fix the mistake: break tasks down so that each can be completed in one "sitting," and schedule those sittings in time slots allocated to performing a role (e.g., engineer or leader). Also plan quality rest outside working hours (there should be enough leisure, and enough capacity for leisure, in your life).
Mistake #7: Not helping yourself design a path for attention. By default, associative attention will take over and unsettle you, drawing attention to something else or switching you from executing the task, and from playing the role, to something else entirely.
How to fix the mistake: organize your workspace, the environment where you play the role (at work and at home). The familiar things around you can help you take on a role or, conversely, hinder you from playing it. For example, if you work from home, it may be a good idea to set up a dedicated workspace and introduce a "do not disturb" rule for family members; and, conversely, not to work outside that workspace, but to give your attention there to family activities instead.
Mistake #8: Not automating the use of quality models. Using quality models allows you to get returns for many years, often nonlinearly. Therefore, if a quality model has been found, it should be used to the maximum. For example, you can change the practice of organizing meetings: determine the meeting's goal, the agenda, the roles that need to be played, and the agents who will perform them and must be present at the meeting; schedule the meeting with reminders (including one a day or a few hours in advance, so participants come prepared); after the meeting, be sure to record the decisions in the minutes (which may look like a formal document or a message in a Telegram chat). Write checklists for the practice, implement it universally, and teach all employees to organize meetings according to this scheme. Organizing meetings then becomes predictable work, and the meetings themselves proceed more predictably - thereby freeing up S2 cognitive resources for discussing content (the questions that need to be addressed or the decisions that need to be made).
If you don't do this, you will essentially be playing an unfamiliar role all the time, performing an unfamiliar practice, and acting with the skill of a novice - and novices find it harder to act than masters.
How to fix the mistake: identify which models should have their use automated first, and apply the appropriate practices to make that happen. How to do this is described in the "Attention Retention" section.