Intelligent agents in systems

People, as well as AI robots within systems, are difficult to account for, because they can simultaneously be

  • the system-of-interest ("product");
  • part of the supra-system, where it is often more convenient to consider not the entire human but a part of them--a body part::hardware, or a skill as part of the personality::software;
  • a piece of "equipment," like a machine tool within the creator (when they do something manually with their hands/manipulators, and tools such as hammers, chisels, and even modeling software are simply extensions of their hands or brain/computer);
  • a construct/"material body" with weight and inertia, moisture and hardness (for example, when accounting for inertial movement in physical practices or in robotics, or in the wristwatch example);
  • a project role, or even a whole set of project roles: we may fail to consider the opinions and interests of animals (though that is also bad), but we fundamentally cannot ignore the opinions of people-in-roles; we have to take into account the decisions of AI and even of plain computer programs (remember "I would help you, but the computer won't let me"), as well as the decisions of organizations as "legal entities."

In addition, people and organizations have subordination/leadership relationships and ownership relationships concerning assets, and people (as well as organizations and AI agents) occupy positions. Humans and organizations "belong" to certain states. With agents, everything is complicated from the start: agents are complex, their behavior is poorly predictable, and they are difficult to describe. Before allocating complex project thinking among different agents (today, not only people!), we recommend that you model and document what you know about the agents, to avoid confusion and not forget anything. When thinking about people, their AI systems, teams and collectives, and organizations (enterprises and extended enterprises), you will unexpectedly have to think and write a lot; thinking about intelligent agents and their groups is not simple. If you move even higher in complexity levels, these will be the levels of communities (including business eco-systems), societies, and humanity as a whole (given the variety of agents today, "agenthood" would be a more accurate term).
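
As a minimal sketch of such documentation (our own illustration; the record structure and all names here are invented, not a notation from this course), an agent can be recorded together with the full set of roles it plays, its positions, and its subordination and ownership relations, so that none of them is forgotten:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """One record per agent: the same agent may simultaneously play many
    roles (system-of-interest, part of the supra-system, equipment,
    construct, project role) and hold several positions."""
    name: str
    kind: str                                            # "person" | "AI" | "organization"
    roles: set[str] = field(default_factory=set)
    positions: set[str] = field(default_factory=set)
    reports_to: list[str] = field(default_factory=list)  # subordination/leadership
    owns: list[str] = field(default_factory=list)        # assets

alice = Agent(
    name="Alice",
    kind="person",
    roles={"construct: body with mass 60 kg",
           "equipment: welder in the hull subproject",
           "project role: quality engineer"},
    positions={"senior engineer at Acme"},
    reports_to=["Bob, chief engineer"],
    owns=["personal toolkit"],
)
assert len(alice.roles) > 1  # one agent, several simultaneous roles
```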

People can agree with each other on the most unexpected occasions and, based on these agreements, take actions that can be very unexpected for the project team. For example, you commission a factory to make a copper heat exchanger based on your 3D engineering model. The factory's engineers suddenly find that there is too much copper there and, with the best of motives, remove some of it: less material, thinner walls, better heat exchange! And they make an "improved version": "our enhancements as a bonus to you." You turn on the heat exchanger, and the thin walls under the pressure of the liquid start to hum louder than a concert organ! The heat exchanger becomes a vibration generator, a noise source. That's the bonus! "We wanted it better, but it turned out as always" is a very common occurrence in projects for creating and developing systems, so any change has "hypothesis" status: always be ready for the finding that "the hypothesis was not confirmed." And yet, without "initiative from below," projects do not survive either!

The best management and engineering methods (for example, the culture of architectural work) are designed so that inevitable errors and inevitable mutations do not lead to the loss of system operability or the need to rebuild the system from scratch. These methods/practices are examined in more detail in the courses "Systems Engineering" (especially the method of continuous architectural decision-making during an engineering project) and "Systems Management."

Agents, including collective agents, usually participate in several teams simultaneously. Remember that in an area of interest (a set of interests in a specific subject area) every role played by agents not only has preferences concerning certain characteristics of interest to it; the agents performing these roles will also change their roles, change themselves (develop their body and exobody, develop their personality by mastering new skills), and change the surrounding physical world in order to realize these preferences. Agents (people, AI agents, their organizations) are devilishly inventive in achieving their preferred state of the world. They will influence the course of the system development project, and the resulting system itself, in ways most unexpected to each other. Some agents will simplify the engineering of the target system without losing functionality, some will negotiate additional financing, some will agree on such a change in the supra-system that the target system can be made cheaper. And someone will commit fraud in the hope that either no one will notice, or that someone will notice but will not be able to do anything about it without severe damage to themselves. Agents are inventive and active. Look in the mirror: you are such an agent, and everyone else is unlikely to be any worse!

The system with people included in it that we have already dissected in this course is the dance performance. The process of changing performance states, unfolding over time, is defined at a lower system level as an enumeration of the changing states of interacting physical objects, and these objects are singled out in the course of the performance. It is best to analyze the performance::system into components during operation/functioning, without stopping the interaction of its parts-subsystems (try to imagine an explosion diagram of a working system): you should not take the system apart but should single out the parts within the working system.

A dance performance can be represented as a system; you just need to consider it in space-time, not just in space. The main functional parts-subsystems here will be the dancing::function dancer roles::subsystems. In the performance, human agents act both as material/constructs (physical bodies changing shape in space-time) and as functional objects--the roles of the dancers. A dance performance is not as complex as an enterprise and therefore does not require consideration of the complex questions of corporate governance, strategic planning, and operational management/work control. Dance performances can be solo, social (in pairs, for each other rather than for an audience), and group/ensemble. In addition to the dancers, the performance space (a room, or even a street for street performances), the audience, sometimes judges (at dance competitions), the music, and the musical equipment need to be taken into account. Not all the dancers there are necessarily plain humans; sometimes there are cyborgs (e.g., a dancer with four additional robotic arms in a backpack[1]). This is an excellent example on which to train your systems thinking.
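
To make the two descriptions concrete, one could hold them as two separate views of the same performance (an illustrative sketch under our own naming; the classes and values are invented for this example):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Construct:
    """Material view: a physical body with extent in space-time."""
    name: str
    mass_kg: float

@dataclass(frozen=True)
class Role:
    """Functional view: what the object does in the working system."""
    function: str
    played_by: Construct  # the same physical body realizes the role

anna = Construct(name="Anna's body", mass_kg=58.0)
lead = Role(function="dancing (leading dancer)", played_by=anna)

# One physical object, two descriptions: `anna` answers questions about
# weight and inertia; `lead` answers questions about the performance.
print(lead.function, "is realized by", lead.played_by.name)
```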

Events are also a classic example of systems with people. Event management methods[2] have even become a university academic discipline: designing and developing a concert (for example, running a concert tour) or an annual rock festival is challenging, but today there is nothing extraordinary about it.

Considering thousands of people as part of an event system is quite possible, and a series of events (for example, an annual festival) can be viewed as producing continuously evolving versions of a system, much like version releases in mechanical engineering or software. Engineering terminology is used here intentionally ("create a concert," "launch a festival") to demonstrate the commonality of thought across very different systems. Of course, event management uses different terminology, but the choice of words/terms is not so important. What is crucial is that when thinking about events you choose the appropriate types of systems thinking and apply the understanding, taken from systems engineering, of how successful systems are made. The actual methods/practices (culture of work, work style, activities, methods--the terms here are not so important) of event management will be covered in applied courses, but aligning all the thinking across the various subprojects (for example, negotiating with sound engineers, making agreements about ventilation, etc.--purely engineering subprojects) can be done using systems thinking. It is essential to pay attention to the general approach to reasoning in all projects involving the creation of systems with agents (people, AI robots, and even organizations) as subsystems.
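
The versioning analogy can be made literal (a toy sketch; the festival editions and fields are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class FestivalEdition:
    """Each annual edition is a released 'version' of one evolving system."""
    version: str        # release identifier, as in software versioning
    year: int
    changes: list[str]  # what changed relative to the previous edition

editions = [
    FestivalEdition("1.0", 2023, ["first open-air stage"]),
    FestivalEdition("1.1", 2024, ["new ventilation contract", "new sound crew"]),
    FestivalEdition("2.0", 2025, ["second stage added", "ticketing system replaced"]),
]
for e in editions:
    print(f"v{e.version} ({e.year}): " + "; ".join(e.changes))
```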

A household with a house, household items, and its residents (including robots)--a case already considered in this course--is another example of a system with human agents.

The most complex systems with people and AI agents are collective creator/enabling/constructor systems such as enterprises, project workgroups, and extended enterprises (an enterprise together with its contractors, engaged in some significant target system, such as a one-of-a-kind nuclear power plant or the serial production of complex transportation systems--aircraft or cars).

Systems with people (including their organizations) are always systems of systems, because people inherently belong to themselves. Therefore, systems with people in them cannot be handled with the simple engineering methods that can be used to design a straightforward mechanical or electronic system, manufacture its parts, and assemble them into a functioning whole. No, the metaphor of a watchmaker making parts and assembling them does not work here. With people (and any other living "grown" systems, as well as AI systems that can be taught), you are better served by agricultural metaphors:

  • in small systems, the scenarios of the caretaker gardener (who has control over what is created, takes care of each seedling in every flowerbed, and knows the position of every tree and cares for it);
  • in large systems, the scenarios of the forester (who does not control where each tree or bush in his forest will grow, but nevertheless has enough influence to prevent serious negative events: he can prevent a fire, feed the animals in winter, and chase away poachers).

In particularly large systems (a large community, society on the scale of a state, all of humanity), one speaks not just of a complicated system but of complexity and complexity thinking: it is impossible to construct actions with predictable results. The results of projects dealing with a large number of people are always to a considerable extent uncertain and are described probabilistically, even though the systems themselves are deterministic (deterministic means that there is no "randomness" in them--everything has causes--but predicting the final outcome is nevertheless impossible).
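
A minimal illustration of "deterministic yet practically unpredictable" (not from this course; the logistic map is a standard toy example of deterministic chaos):

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n): fully deterministic, yet in
# the chaotic regime (r = 4) a tiny measurement error grows exponentially,
# so long-run outcomes can only be described statistically.

def trajectory(x0: float, r: float = 4.0, steps: int = 50) -> list[float]:
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.300000)  # the "true" initial state
b = trajectory(0.300001)  # the same state measured with a 1e-6 error

for n in (0, 10, 25, 50):
    print(f"step {n:2d}: divergence |a-b| = {abs(a[n] - b[n]):.6f}")
# Every step has a cause, yet after a few dozen steps the two futures are
# unrelated: prediction of the final outcome is impossible in practice.
```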

It is always necessary to remember that evolution breeds complexity and creates ever higher levels of organization, and that the driving force of this evolution is the conflicts between system levels, which lead to disorganizations[3]. In very large and complex systems composed of people, robots, and tools/equipment--that is, in communities (including communities of enterprises: eco-systems), societies, and humanity (taking all other agents into account, "agenthood" would be a better term than "humanity")--everything is so interconnected, and the realization of some agents' intentions resists or helps the realization of other agents' intentions in such unexpected ways, that "surprises" are the norm and their absence is almost unheard of; there are no "best solutions," only the "least bad among the knowingly bad ones."

It is important to remember that you cannot design an enterprise or a team and then manufacture it like a computer chip or a water tower. Systems with agents should be discussed using systems thinking, but their planning, creation, development, operation, and dismantling cannot be done with classic "iron" or software engineering methods; you need to use learning methods: personal engineering for individual agents, systems management for collective agents (enterprise development is precisely the learning of new methods of operation by collective agents).

It is hard to imagine the left rocket booster persuading the right rocket booster to fly not to Mars (where they are supposed to fly) but to the Moon, on the grounds that it would be more reliable and faster, there would be enough fuel by design, and it would be less troublesome. However, such situations occur quite often in systems of intelligent agents. Be cautious when applying the engineering methods for "iron" and "software" to systems with people and AI. However, one should be equally careful about the opposite mistake: the non-application of general systems engineering methods, which are a generalization of the work methods for both "iron" and "software," to systems at all evolutionary/system/organizational levels (complexity levels), including not only enterprise engineering but also the social engineering of communities, society, and even humanity as a whole. The implications of this non-application are discussed in John Doyle's works[4].

Discussions about systems with artificial intelligence (AI), starting from a certain level of agency of such systems (the ability to generate predictions of future states of themselves and the world, to generate ideas for improving those states, to choose the best of these ideas, and then to plan and implement the realization of those ideas), and discussions about people, people in collectives, and people in collectives with AI (e.g., enterprises) are the same kind of thinking: systems thinking is scale-free and not tied to an anthropocentric view.
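
The agency loop just described (predict, propose, select, plan, implement) can be sketched in code; this is our own hedged illustration, with all class and method names invented, not an algorithm from this course:

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    description: str
    predicted_value: float  # how much better the predicted world-state is

class AgentLoop:
    """Minimal agency loop: predict -> propose -> select -> plan -> act."""

    def __init__(self, predict, propose, plan, act):
        self.predict = predict  # world state -> predicted future state
        self.propose = propose  # predicted state -> iterable of Proposals
        self.plan = plan        # chosen Proposal -> list of executable steps
        self.act = act          # one step -> None (changes the world)

    def step(self, world_state):
        future = self.predict(world_state)                  # 1. predict future states
        ideas = list(self.propose(future))                  # 2. generate improvement ideas
        best = max(ideas, key=lambda p: p.predicted_value)  # 3. choose the best idea
        for action in self.plan(best):                      # 4. plan its realization
            self.act(action)                                # 5. implement the plan
```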


  1. https://vk.com/wall-179019873_1747 ↩︎

  2. https://en.wikipedia.org/wiki/Event_management ↩︎

  3. https://elementy.ru/nauchno-populyarnaya_biblioteka/434505/Konflikty_kak_osnova_slozhnosti ↩︎

  4. https://ailev.livejournal.com/1622346.html ↩︎