Concept of use
Knowing that systems of various types (suprasystems, subsystems) exist in definite positions relative to the target system in a system breakdown (and specifying a system breakdown is also specifying the time of use) allows the target system to be delineated in the world more strictly and precisely. In physics, the concept of a system means precisely a part of the world separated by a boundary from the rest of the world (the environment; when we are talking about descriptions/texts, the word "context" is used instead).
We single the system out from the world with our attention, treating the boundary as the boundary of our attention rather than as some material envelope. Thus we take in a computer together with its case (the case is not the system boundary! The boundary runs where the molecules of the case end and the molecules of the surrounding air begin, and this boundary is immaterial - it is "in the mind"), a house together with its outer wall, a cable with its sheath, a cell with its membrane.
Next we introduce the concept of a "black box": a system that we represent without any knowledge of its internal structure - we can describe only the function/behavior of the "black box"/system as it manifests at the external boundary, that is, at the boundary of the space the system occupies in the physical world. We know nothing about the internal structure, about the subsystems of the "black box". If we do look inside the system boundary and talk about how the system is arranged, we call it a "transparent box" (sometimes also called a "white box"). There is also the "gray box": we know very little about how the system is structured inside its boundary, but we do know something.
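A rough programming analogy (our own illustration, not standard engineering vocabulary): an abstract interface declares only the behavior visible at the boundary, while a concrete class exposes the internal arrangement.

```python
from abc import ABC, abstractmethod

class Kettle(ABC):
    """Black box: only the behavior at the boundary is declared."""
    @abstractmethod
    def boil(self) -> None: ...

class ElectricKettle(Kettle):
    """Transparent box: the internal parts and their interplay are visible."""
    def __init__(self) -> None:
        self.heating_element_on = False   # subsystem state, hidden from users
        self.water_temperature_c = 20.0
    def boil(self) -> None:
        self.heating_element_on = True
        self.water_temperature_c = 100.0  # simplified: heat until boiling
        self.heating_element_on = False
```

Anyone who works with the system only through `Kettle.boil()` treats it as a black box; anyone who reads `ElectricKettle` is looking at a transparent box.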
We describe the system as a black box in at least four ways, and this is the "system approach" (a minimal data-model sketch follows the list):
- Functionally: as a role (a functional/role object) and its function in interaction with the environment during operation.
- Constructively: as a constructive element that we create and develop during the time of creation.
- Spatially: as the space the black box occupies during operation.
- Cost-wise: as the total cost of ownership of the black box.
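These four descriptions can be read as four mandatory fields of one model of the same system. A minimal data-model sketch (the field names and the kettle example are our own illustration):

```python
from dataclasses import dataclass

@dataclass
class BlackBoxDescription:
    """Four views of the same system considered as a black box."""
    function: str    # role and function in the environment during operation
    construct: str   # what is created and developed during creation
    footprint: str   # space occupied during operation
    cost: float      # total cost of ownership

kettle = BlackBoxDescription(
    function="boils water as part of the kitchen suprasystem",
    construct="stainless-steel electric kettle",
    footprint="0.2 m x 0.2 m x 0.3 m on the kitchen counter",
    cost=120.0,  # purchase price plus electricity over the service life
)
```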
In the system approach, we also consider:
- Creation graph: besides considering the system as a "black box" in the course of its operation, we take into account that someone will create this system and then develop it.
- Evolution: we consider not only the one-time initial creation of the system but also its development: the "black box" will grow/evolve/be modernized; it does not appear once and for all. There will be an MVP and numerous increments - "no version of the system is the last."
To "describe the system" is either "direct engineering", that is, design in terms of imagining what kind of system is needed as part of the suprasystem, or "reverse engineering" of an existing system if such a description is not available but is needed for something.
In terms of "direct engineering," it is incorrect to consider that "we know nothing about the system until it exists, we cannot predict the future" (we mention this only because we have heard this from many students). Almost all engineering is about designing systems that do not yet exist, but this does not prevent describing a nonexistent system, or coming up with hypotheses about what kind of system would be successful. To describe a nonexistent system, you need to hypothesize what kind of system will exist in 4D - to come up with a system that would behave in a way leading the suprasystem to perform a particular important function in the future. Thinking about a building that will be constructed next year is quite possible - builders did this a couple thousand years ago, there are no difficulties in imagining a system in the future that is not yet manufactured. In engineering, we describe a system that does not yet exist in the present, this is a common practice, and all design activities are hypotheses about what the future will be like!
The first system approach is to consider the system as a black box performing its function ("providing irreplaceable benefit") within the suprasystem during operation. The ontological modality of this consideration, in reverse engineering as well as in forward engineering, is the modality of belief (doxastic modality): it is a hypothesis. The hypothesis can either withstand tests of logic and experiment or fail them, and this is handled by the methods of engineering justification:
- Logic check: we show that there are no logical contradictions in the system description. If you say that our "black box" is painted white to a mirror finish so as to reflect sunlight better and avoid overheating, and at the same time claim that our "black box" is a fashionable dark green so as to look aesthetically pleasing, that is an obvious contradiction; the description must be changed to eliminate it (again: it does not matter whether we are describing an already existing system or a future one). Engineering is about people reaching mutual understanding in such situations and resolving the contradictions.
- Experiment check: for an already existing (manufactured/realized) system, we can take measurements in the real world and check whether they match the design values within some confidence interval (a minimal sketch of such a check follows this list).
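A minimal sketch of an experiment check; the parameter names, design values, and tolerances are all made up for illustration (a fixed tolerance band stands in for a proper confidence interval):

```python
# Compare measurements taken on the manufactured system with design values.
design = {                         # name: (design value, allowed deviation)
    "mass_kg":      (12.0, 0.5),
    "power_draw_w": (450.0, 25.0),
    "max_temp_c":   (70.0, 5.0),
}
measured = {                       # repeated measurements in the real world
    "mass_kg":      [11.8, 12.1, 11.9],
    "power_draw_w": [460.0, 455.0, 470.0],
    "max_temp_c":   [76.0, 77.5, 75.0],
}

for name, (target, tol) in design.items():
    mean = sum(measured[name]) / len(measured[name])
    verdict = "confirmed" if abs(mean - target) <= tol else "fails, revise the description"
    print(f"{name}: mean={mean:.2f}, design={target}+/-{tol} -> {verdict}")
```

Here the temperature hypothesis fails the check, so the description (or the system) must be improved; the other two are confirmed within their tolerances.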
In any case, in both forward and reverse engineering, we develop the "black box" description as a hypothesis (we believe it to be true). We then critique this hypothesis, and when errors are found we improve it again and again, working with the description to achieve its consistency and its accuracy in experiments (reverse engineering) and in predictions (forward engineering - though there will still be an experiment: building the system and taking measurements on the manufactured system to confirm the hypothesis).
The description should demonstrate that, functionally, the system "provides irreplaceable benefit"; that, constructively, it is feasible; that, spatially, it can fit where it is supposed to operate; and that, financially, building and operating it will be worthwhile.
The "entrepreneurial hypothesis" is precisely that - a hypothesis that our "black box" will be useful and cost-effective, therefore it will sell well, and investing in its development and production now will yield profit in the future. This is visioneering::method/practice (we avoid using the term "entrepreneurship" although strictly speaking this refers to "entrepreneurship according to Schumpeter", but not everyone understands "entrepreneurship strictly according to Schumpeter", everyone fantasizes something of their own. So we taboo the term "entrepreneur," the practice of "entrepreneurship," and break down the informal understanding of what is happening into several roles. A visioneer::role is precisely an "entrepreneur according to Schumpeter," but it is not a company founder, not someone with a special risk-taking mindset, not a rich person, not an inventor, none of the ordinary associations with an "entrepreneur." But they will assess whether the project will be profitable: they will issue a hypothesis about it. And if they::role consider that there will be no profit, the project will not happen).
If we change the modality of the black box description from doxastic (belief) to deontic (prohibitions and permissions, prescriptions), the black box description is called system requirements. Systems engineering used to develop system requirements; now it develops a concept of operations, which is then refined and detailed into use cases. The two differ primarily in this ontological status, but not only in it. The concept of operations, and then the use cases, describe the behavior of the system as a "black box" - that is, the system's functions, its role in the environment - and describe it as a hypothesis about the imagined behavior of a successful system.
Prior to 2015, requirements engineering was prevalent in systems engineering, but it has since fallen out of use. A quick check: up until 2015, ten different textbooks on requirements engineering methods were published annually; then the focus shifted - books still appear, but these are just "old-timers" catering to the demands of other "old-timers." This shift from requirements engineering to concept-of-operations development is covered in detail in the "Systems Engineering" course.
The main points here are:
- Requirements engineers (often called analysts) inserted themselves between developers and the external project roles, following the idea that "developers should not be distracted from development, and developers cannot talk to clients anyway - this should be done by specially trained people." It turned out that these "specially trained people" (analysts) simply created a game of broken telephone (the client says one thing, the analyst hears a second, documents a third, and the developer reads a fourth out of the requirements), causing more harm than the harm of "distracting developers from their work." The delay from passing information through an extra link was compounded by the loss of context and of the justifications behind particular requirements.
- Requirements stemmed from operational concepts that were gradually detailed. Requirements for individual features were slowly gathered into a large, rigorously harmonized "monolith" (as architects now call such things), approved - and then this "monolith" was handed to development for "unconditional compliance." The "gather everything, approve, hand over for implementation" process turned out to be slow, although it was still far better than the historical situation in which systems were designed without any requirements at all - things were so bad back then that it is better not to recall them. "Having requirements is much better than not having them" was simply the truth! Yet even with requirements, things did not go smoothly. First, there was a delay while diverse requirements were harmonized among external project roles, requirements engineers, developers, and architects. Once approved, the requirements were held to be of great value: so much effort had been invested in them, and so much effort would be needed to re-harmonize them, that they could not be changed! So when obvious errors were found in the requirements, or a situation arose that demanded changes, the preference was to leave the requirements as they were - which made the system worse than it could have been. Promptly adding a feature or removing an unnecessary one caused nervous situations; it was possible, but slow and nerve-wracking: the requirements could be adjusted only "in a special, slow, and labor-intensive way."
- Once the requirements reached the developers, the developers tried to work out what would actually be useful to the external project roles - roles the developers themselves never saw and that had been identified only by the analysts. The problem was not just the difficulty of changing requirements when errors were discovered: the developers also assumed they would work through these requirements exactly once, and tests were planned not to demonstrate the system's suitability for the client (what later came to be understood as validation) but to show that "the requirements have been fulfilled" (such tests were called verification, as distinct from validation). Intent, design, development, implementation, and testing (both verification and validation) were all treated as one-time activities. This made it impossible to improve the system - neither to promptly add a new feature nor to promptly remove an unnecessary one. Doing it slowly, and very nervously, was possible: "there are procedures for changing requirements." But quickly - no, "it can't be done." Just compare: "our hypothesis is wrong; let's quickly fix it" versus "we have been given incorrect requirements; let's not implement them."
After requirements were abandoned, A/B testing came into active use: several hypotheses are put forward at once (whereas requirements usually demand exactly one thing!), all of them are tried in practice, and then the best variant is selected by some criterion. If you have "hypotheses" instead of "requirements," you treat them differently: not "satisfy them," but "review and gradually correct them."
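A minimal sketch of choosing among several hypotheses by a criterion; the checkout variants and conversion counts are invented for illustration:

```python
# Each variant is a hypothesis about "black box" behavior that was actually
# shown to users; the winner is picked by observed conversion rate.
variants = {
    "A: one-click checkout": {"shown": 5000, "converted": 410},
    "B: two-step checkout":  {"shown": 5000, "converted": 355},
    "C: guest checkout":     {"shown": 5000, "converted": 430},
}

def conversion(stats: dict) -> float:
    return stats["converted"] / stats["shown"]

best = max(variants, key=lambda name: conversion(variants[name]))
for name, stats in variants.items():
    print(f"{name}: {conversion(stats):.1%}")
print(f"winner: {best}")  # the surviving hypothesis; the others are discarded
```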
The concept of operations (which replaced the now-rejected requirements as the basis for developing the system) primarily contains information about the system's functions with respect to its operational environment. It therefore consists of various models describing the system's behavior at its external boundary in interaction with the systems around it (the other systems within the suprasystem). The more detailed behavior models are called use cases. Some schools of systems engineering treat use cases as separate from the concept of operations (since they are developed later, after the compact descriptions of system functionality in the concept of operations); others treat them as part of it, with the concept of operations changing slowly as the project progresses: it becomes more specific, more precise, more detailed, and absorbs more and more detailed use cases as the system evolves. We take the second view: use cases are included in the concept of operations; they are one of the models within it. More about this can be learned in the "Systems Engineering" course and the literature it recommends.
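Under this second view, the concept of operations can be pictured as a container of behavior models that accrues ever more detailed use cases. A minimal sketch (the classes and field names are our own illustration, not a standard from the course):

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    name: str
    trigger: str     # what in the environment starts the behavior
    outcome: str     # observable result at the system boundary

@dataclass
class ConceptOfOperations:
    system: str
    functions: list[str]                       # compact functional descriptions
    use_cases: list[UseCase] = field(default_factory=list)

    def refine(self, case: UseCase) -> None:
        """The ConOps grows more detailed as the project progresses."""
        self.use_cases.append(case)

conops = ConceptOfOperations(
    system="mechanical clock",
    functions=["show the current time to a person in the room"],
)
conops.refine(UseCase("read time at a glance",
                      trigger="person looks at the dial",
                      outcome="hands indicate the time within +/-1 minute"))
```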
When we talk about the same set of descriptions at the suprasystem level (for instance, moving from the operational concept of a gear in a mechanical clock to the system concept of the clock that contains the gear as a part), it turns out that the operational concept of the gear is in fact part of the clock concept: what matters for the clock's operation is what the gear does at its boundary, regardless of the gear's internal structure.
Of course, the terminology for all these descriptions of the black and the transparent box can vary, especially when we look not at the descriptions themselves but at the documents expressing them: it is easy to call the operational concept of a gear in a mechanical clock - the description of the gear's behavior - a "questionnaire" sent to gear suppliers, asking "Can you manufacture a gear with these characteristics?" The "questionnaire" will not contain the words "operational concept," nor the older term "requirements." If it indicates anything about what the gear should be made of, it will still not use the term "system concept." Whether or not architectural characteristics (e.g., durability) are specified, the document will be called a "questionnaire." You should be able to look at such a "questionnaire" and determine the type of description: the "questionnaire"::operational concept of a gear (and if architectural characteristics are present, note them as well).
Why is it necessary first to separate out the black-box description as the concept of operations (the system's behavior in its external environment, from the system boundary outward) rather than diving straight into the system concept - how the system is structured internally (which subsystems perform which functions and what they are made of)? The distinction between the black box and the transparent box is needed to keep open the widest variety of options for how the system might be structured and built - to offer different affordances for performing the system function, as in the sketch below.
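In programming terms, the same separation appears as one declared function with several interchangeable realizations. A minimal sketch that anticipates the door-key example below (the three constructions are our own illustration):

```python
from abc import ABC, abstractmethod

class Door:
    def __init__(self) -> None:
        self.locked = True

class DoorKey(ABC):
    """Black box / concept of operations: the function declared at the boundary."""
    @abstractmethod
    def toggle(self, door: Door) -> None:
        """Change the door between 'locked' and 'unlocked'."""

# Different affordances (different internal arrangements) for the same function:
class MetalKey(DoorKey):
    def toggle(self, door: Door) -> None:
        door.locked = not door.locked   # pins and a rotating cylinder inside

class RfidCard(DoorKey):
    def toggle(self, door: Door) -> None:
        door.locked = not door.locked   # a reader and an electric strike inside

class PhoneApp(DoorKey):
    def toggle(self, door: Door) -> None:
        door.locked = not door.locked   # radio, firmware, and a motor inside

door = Door()
PhoneApp().toggle(door)
print(door.locked)  # False: the function is performed regardless of construction
```

As long as the declared function stays fixed, the choice among the three constructions can be postponed or revisited without touching the concept of operations.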
For example, the operational concept of a door key will describe that the key performs the function of switching the door from "locked" to "unlocked" and back. This is the key's main function. Architectural characteristics (e.g., service life until the first failure, ease of repair, etc.) are less important here; the focus is on what the key is for, what changes it brings about (behavior -