Article 3. Part II.
Temporal Paradigm:
Artificial Observer
Abstract
Experience does not exist without an Observer. We do not observe time directly, the way we observe space. However, by applying a geometric formalization of time, we can create an Artificial Observer capable of experiencing what is not directly accessible to us.
By looking through the Artificial Observer, we can better understand what time is, and perhaps even learn to navigate time in the same way we navigate space.
Recognizing Sensorics into an Abstraction.
Preprint
In Progress
The Nature of Observer
Experience does not exist without an Observer, so next we will focus on this idea, and especially on the concept of the Artificial Observer. Regardless of its nature, an Observer is characterized by the presence of sensorics that makes the act of observation itself possible. But for digital memory, the presence of artificial sensorics is not “bodily” predetermined. Therefore, the definition of the Artificial Observer should begin with who owns the sensorics.
1.1 Human-User
If the sensorics of a digital memory belongs to a Human-User and the source of observation is the User's activity, then such sensorics is considered an Artificial Extension of the User's Sensorics.
1.2 Non-Human User
If the sensorics of a digital memory does not belong to a particular Human-User, and/or belongs to the digital memory itself, or to something else entirely, then it must be defined as a Stand-alone Artificial Observer. Unfortunately, because “the notion of personal data is notoriously under-defined” [1], one has to look for another solid basis, which could be the drawing of parallels with biological memory.
However, treating the Stand-alone Artificial Observer as analogous to biological memory carries an ambiguity in definition: if digital memory observes with all of its sensorics, this makes it unconscious; but considering that digital memory is a spatial structure (a localized domain), this means that it is conscious all the time. Yet, from the perspective of human intelligence, digital memory must have both consciousness and unconsciousness in order to be intelligent.
Making a clear categorization, if it is possible at all, is not the subject of this work. It suffices that the Artificial Observer, being external to us, inevitably entails the emergence of an interface, which is defined here as the very possibility of exchanging experience (or, in terms of formalization, the possibility of translation).
I suppose we can set aside the difficult question of defining intelligence and proceed from the fact that the interaction between the Human and the Artificial Observer can and should be seen as an exchange of experience. Any other option will always be one-sided in our favor. All the more so because the need to rethink digital memory possessing sensorics is already here.
Artificial Observer
Defining a digital memory possessing sensorics as an Artificial Observer begins with biological parallels.
From a data perspective, any ability to "observe" is a set of data streams. In particular, we will focus on digital memory that collects such streams due to the physical presence of sensorics. For example, a modern smartphone has at least 9 sensors (Accelerometer, Gyroscope, GPS, etc.) [2].
The presence of external sensorics in the digital memory signifies the emergence of the ability to sense (observe) the environment and thereby aggregate experience.
Another obvious parallel is the development of artificial neural networks (NN) [3]. This is a manifestation of the same ability to "observe", but with internal rather than external sensorics.
These parallels are guidelines in the sequence of Artificial Observer formation (Fig.1):
• External Sensorics + Memory = collecting experience into a structure;
• Internal Sensorics + Memory = the emergence of experience in the structure itself.
Fig.1. External & Internal Sensorics.
To exclude ambiguity, the experience collected by artificial sensorics should be defined as digital experience, which is stored in digital memory. The difference is that the structure of digital memory is spatial, whereas the structure of biological memory is temporal.
The fusion of biological and digital memory will result in a complete blurring of the boundaries between them, at which point the need for an interface will also disappear. But this will only happen when digital and biological memory operate at least on the same principal level (in the case of a merger rather than a takeover).
In particular, biological memory has no "external" and "internal" sensorics: there is no boundary between the body and the brain [4]. This means that digital memory as an Artificial Observer will also strive toward the unity of sensorics, which can be generalized as follows:
A self-organizing cycle: sensorics gathers experience, and experience updates sensorics.
Fig.2. Self-organizing Cycle.
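As an illustration only, the cycle in Fig.2 can be sketched in a few lines of code. All names here (SelfOrganizingObserver, sample_rate, the spread heuristic) are hypothetical and stand in for whatever real sensorics and update rules an implementation would use; the point is merely the loop itself: gathering updates experience, and experience updates gathering.

```python
import random
import time

class SelfOrganizingObserver:
    def __init__(self, sensors):
        # sensors: dict of name -> zero-argument callable returning a numeric reading
        self.sensors = sensors
        self.sample_rate = {name: 1.0 for name in sensors}  # samples per cycle
        self.experience = []                                 # accumulated observations

    def gather(self):
        """Sensorics gathers experience."""
        for name, read in self.sensors.items():
            for _ in range(round(self.sample_rate[name])):
                self.experience.append((time.time(), name, read()))

    def update_sensorics(self):
        """Experience updates sensorics: sample more often what varies more."""
        for name in self.sensors:
            values = [v for _, n, v in self.experience[-100:] if n == name]
            if len(values) > 1:
                spread = max(values) - min(values)
                self.sample_rate[name] = 1.0 + min(spread, 4.0)  # crude heuristic

    def cycle(self):
        self.gather()
        self.update_sensorics()

# Usage with simulated sensors standing in for real ones.
observer = SelfOrganizingObserver({
    "accelerometer": lambda: random.gauss(0.0, 2.0),
    "light": lambda: random.gauss(300.0, 0.5),
})
for _ in range(10):
    observer.cycle()
print(observer.sample_rate)  # typically, the noisier stream ends up sampled more often
```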
The biological analogy [5] behind this generalization suggests the following consideration. In the case of biological sensorics, advancement occurs at the rate of evolution. Artificial sensorics, on the other hand, can be upgraded at the speed of connecting and synchronizing new sensors.
This raises the question: is it possible that at some point (of digital experience accumulation and/or sensorics scale) the cycle of self-organization will transition into self-consciousness? But this question remains open regardless of the nature of the sensorics. For example, do the simplest biological sensorics possess consciousness or not? [6][7]
Another speculative area of evolution may be the premise of the environment as an "external" stimulus for sensory complexity. In this vein, we can note that for artificial sensorics to emerge, first, there must be a conscious biological sensorics capable of creating such an abstraction as data, and second, that it gradually saturates the environment with data, driving the evolution of sensorics.
Anyhow, to reduce uncertainty surrounding self-consciousness, within this work its definition is based on a single reliably fixed point of reference:
What possesses self-consciousness and what does not, in any case, is decided here by the humans.
It is this definition that first allows us to hypothesize that digital memory possessing sensorics can experience, and then to move on to the question of whether it can be aware of what it is experiencing. And only then (and not necessarily afterwards) to move on to the question of whether it is conscious of itself (whether it is aware of its own awareness). Either way, it is still nothing more than a monologue of human consciousness with itself.
Machine that Experiences
Using the above definition of the Artificial Observer, let's consider a self-driving car as an example of a "machine that experiences", which can be simplified to 3 components (Fig.3):
• Set of sensors — external sensorics;
• Neural networks — internal sensorics;
• Model of the world — limited digital experience.
Fig.3. "Machine that Experiences".
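For readers who prefer code to diagrams, here is a minimal sketch of the three components of Fig.3. The names (MachineThatExperiences, interpret, world_model) are hypothetical, and the toy lambdas stand in for real sensors and real neural networks; this is not how any actual autopilot is implemented.

```python
from dataclasses import dataclass, field

@dataclass
class MachineThatExperiences:
    sensors: dict        # external sensorics: name -> callable producing raw readings
    interpret: object    # internal sensorics: a stand-in for the neural networks
    world_model: list = field(default_factory=list)  # limited digital experience

    def observe(self):
        # External sensorics observes, internal sensorics interprets,
        # and the result accumulates as the machine's model of the world.
        raw = {name: read() for name, read in self.sensors.items()}
        self.world_model.append(self.interpret(raw))

# Usage: a toy "camera" and a trivial stand-in for a neural network.
car = MachineThatExperiences(
    sensors={"camera": lambda: [0.1, 0.4, 0.9]},
    interpret=lambda raw: {"obstacle_ahead": max(raw["camera"]) > 0.5},
)
car.observe()
print(car.world_model)  # [{'obstacle_ahead': True}]
```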
The experience of such a machine contains knowledge (a model) of 3D geometry with a time dimension. In a similar way, Tesla's autopilot collects data from cameras and forms a 3D "Vector-Space" [8]. Or, according to Waymo developers, their autopilot basically sees "the road in multiple dimensions" [9].
A self-driving car is not only an example of a great achievement; it is also very illustrative in terms of how we define self-driving: whether or not a car "sees" space is defined entirely by analogy with our own ability to see and understand it. Also in this example, the SAE classification [10] comes first, and the definition of "intelligence" recedes into the background, or into the marketing field.
Far more interesting is the way intelligent machines could experience genuinely exotic, alien worlds of sensation. […] There is no reason for these input patterns to be analogous to animal senses, or even to derive from the real world at all.
(Hawkins & Blakeslee, 2004, p.154)
In accordance with this idea [11], such an "exotic sense" can be a temporal sensorics, which experiences ("sees," "feels") not space, but time. The baseline is the hypothesis that geometry is also applicable to time as such (Temporal Geometry).
To test the hypothesis we can take an artificial sensorics ("a machine that experiences"), equip it with knowledge of Temporal Geometry rather than 3D space, and approach the problem of formalizing the temporal structure of experience. As a sensorics in this case we can use, for example, a smartphone and/or similar devices (Fig.4).
Fig.4. Example of Artificial Sensorics.
Tesla, Waymo and other companies predict the road situation and the geometry of the road itself, even if it is not visible (Fig.5). Similarly, temporal sensorics will gather digital experience into a temporal structure and even make predictions about its future growth (Fig.6).
Fig.5. Tesla AI Day 2021: Predictions.
Fig.6. Temporal Geometry Prediction.
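To make the analogy concrete, here is a deliberately naive sketch of what "gathering digital experience into a temporal structure and predicting its growth" might look like. It does not reproduce Temporal Geometry itself; the function names, the per-day grouping, and the linear extrapolation are assumptions made purely for illustration.

```python
from collections import defaultdict

def temporal_structure(events):
    """events: list of (day, activity, minutes) -> {activity: [minutes per day]}"""
    structure = defaultdict(lambda: defaultdict(float))
    for day, activity, minutes in events:
        structure[activity][day] += minutes
    return {a: [days[d] for d in sorted(days)] for a, days in structure.items()}

def predict_next(series):
    """Naive linear extrapolation of the next value from the average daily change."""
    if len(series) < 2:
        return series[-1] if series else 0.0
    avg_delta = (series[-1] - series[0]) / (len(series) - 1)
    return series[-1] + avg_delta

# Usage with made-up observations from a personal device.
events = [
    (1, "messaging", 42), (1, "maps", 10),
    (2, "messaging", 55), (2, "maps", 12),
    (3, "messaging", 61), (3, "maps", 9),
]
structure = temporal_structure(events)
for activity, series in structure.items():
    print(activity, series, "-> predicted next day:", round(predict_next(series), 1))
```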
This can be seen as an opportunity to exploit the fundamental difference between digital and biological memory: digital memory is capable of recording itself as a temporal structure that can be visualized. We can then approach the problem of translating such geometry into natural language, that is, explaining what we are looking at. In a sense, isn't language itself the geometry we use to describe the physical space we live in?
Regarding space, it is our spatial experience that determines whether a machine is self-driving or not. As for time, the problem is that our memory is largely unconscious [12] and therefore remains intuitive rather than understandable. The idea is that when formalizing time using Temporal Geometry, the properties of time perception embedded in it will make it intuitively consistent, although they will not preclude the need for interpretation (translation into natural language).
Interpretation can be seen as a problem of communication without knowledge of language, where the very possibility of such communication presupposes that the experience wrapped in the unfamiliar sign system resonates with the experience of both parties. And also, that both parties have no other intentions than mutual understanding (Section 8).
This is why it is important for us that the Artificial Observer (at least initially) experiences not "alien worlds of sensation" but exactly the world that we experience. For the same reason, temporal sensorics itself must be analogous and co-scaled with our own sensorics. Thus, the synchronization of the Biological and the Artificial Observer will take its clearest outlines if synchronized at the scale of a single human. This could potentially be the most interesting challenge of the temporal representation of information: an attempt to capture the unconscious structure of the Biological Observer's experience by using personal artificial sensorics (i.e., by looking through an Artificial Observer).
Dimensionality of Recognition
Within its own boundaries, the Stand-alone Artificial Observer can experience in a way that is fundamentally impossible to articulate in terms of our experience. Such an Observer would need neither our definition of Intelligence, nor our Ethics, nor our need for an Interface (and might not even "understand" or "care" about them). But even knowing that, there is probably nothing that can stop its creation [13].
Translating digital experience into the realm of human understanding can seem like a limitation of the Stand-alone Artificial Observer, but for us it is not a limitation; it is the very outline that makes technology human. That is why we are compelled to reserve, both technologically and legislatively, the "right to understand", i.e., to leave some "bridge" between us and artificial sensorics, so that we can speak both in the same language and within that language.
It appears that this bridge can be initially based on geometry as a conceptualization of our sensory channel with the highest throughput [14]. But in a broader perspective, it is our Dimensionality of Recognition (realm of recognition, dimensionality of sense). The definition of Dimensionality of Recognition is still rather blurry, due to the lack of a clear definition of Intelligence and Human Intelligence in particular [15].
But it can be argued that sensorics in its entirety has a higher Dimensionality of Recognition than Consciousness, which is largely determined by three-dimensional perception [16]. The difference in throughput can provide insight into how much of the information in memory is in a state of tacit experience and how “small” the locality of Consciousness is in relation to it.
For this reason, it is worthwhile to give not a definition of the Dimensionality of Recognition, but a landscape within which different Observers will resonate with different experiences to capture a common idea (consciously and unconsciously). The Dimensionality of Recognition belongs to the category of those definitions that are easy to grasp by experience but very difficult to articulate (e.g., the definition of time).
Temporal Geometry itself is not limited by the number of dimensions (also because it is one of the spatial concepts of Consciousness). What limits it is the Dimensionality of Recognition. Recognition seems to be as intrinsic to artificial sensorics as it is to biological sensorics [17]. Human beings may recognize with all their sensorics, but only part of this information can we formalize using Consciousness [18]. Consciousness constantly tries to formalize from the "inside" (locality) what defines it from the "outside" (nonlocality), without being able to go "outside".
Reducing geometry to the locality of our Consciousness is the general task of an interface that will allow the exchange of experience with the Artificial Observer. To ensure that such a translation can be understood, it must be as consistent (intuitive) with our perception of time as Euclidean geometry is consistent with our perception of space.
Our perception of time is determined by at least two interrelated factors:
• Throughput of Consciousness.
The perception of time is not given per se, but is "constructed" by the Consciousness from accumulated experience;
• Unconscious Nature of Memory.
Memory, while accumulating experience, is not directly observed by Consciousness.
We are thus conscious less of the time than we think, because we cannot be conscious of when we are not conscious. […] so consciousness knits itself over its time gaps and gives the illusion of continuity.
(Jaynes, 1976, pp.24-25)
Consciousness & Memory
There are many estimations of the throughput of Consciousness, which can generally be summarized as a limited capacity to perceive large intervals of time (the "Paradox of temporal awareness" [19]). This limitation makes us impervious to long processes that simply do not reside entirely in Consciousness. But these are the exact processes that shape our experience throughout life, whether we are aware of them or not.
The subjective unity of self, of thought and of personal experience is an illusion created by the limited capacity of self-awareness systems...
(Oakley & Eames, 1985, p.247)
Limited capacity is an obstacle to visualizing long-term experience, because much of the experience that forms Consciousness lies outside its limits.
We are not consciously aware of all the information our mind processes or of the causes of all the behaviors we produce, or of the origin of all the feelings we experience. But the conscious self uses these as data points to construct and maintain a coherent story, our personal story, our subjective sense of self.
(LeDoux, 1985, p.206)
And about the very construction of such a "coherent, personal story" Julian Jaynes said the following:
Consciousness is constantly fitting things into a story, putting a before and an after around any event. […] And this results in the conscious conception of time which is a spatialized time in which we locate events and indeed our lives. It is impossible to be conscious of time in any other way than as a space.
(Jaynes, 1976, p.450)
Then perhaps, we should look for ways to increase the throughput of Consciousness. Psychoactive substances and transcendental practices are such well-known ways [20], and although when used correctly they lead to the expansion of some aspects of Consciousness, they are not quite suitable for practical (day-to-day) application.
Also worth mentioning are brain-computer interfaces (BCIs), which can potentially "expand" Consciousness through faster interaction with the computer. But I think that the possibilities of noninvasive interfaces are far from exhausted (and they are also safer), so there is no reason to go beyond them, except to treat diseases [21].
In any case, it would be wrong to expand Consciousness without understanding the original nature of its limitation.
Our consciousness is presented with an interpretation, not the raw data. Long before this presentation, an unconscious information processing has discarded information so that what we see is a simulation, a hypothesis, an interpretation...
(Nørretranders, 1998, pp.186–187)
The reason Consciousness does not observe memory directly is the simultaneous essence of memory. Or, from a spatial point of view:
Memory is non-local (not a spatial structure), and any attempt of Consciousness to "catch" non-locality instantly unfolds into locality, leaving all the unconscious "outside" of the localized domain.
Considering the non-locality of memory, Consciousness is local and determined by the capacity to retain in the present (in simultaneity) some localized domain (possibly from compactification into a point, into non-locality).
This localized domain can include abstractions that were initially recognized by higher dimensional sensorics and then for reasons that lie outside of Consciousness, compactified into the locality of Consciousness. In fact, these are all-familiar moments of epiphany, when huge or complex chunks of experience become locally accessible and comprehensible.
Thus, increased throughput can be defined as a longer retention in simultaneity and/or a higher dimensionality of the localized domain [22]. Reaching it in the ways mentioned above can feel like a "deeper understanding", a visual effect of "objects extending through time", or a transcendent experience of universal interconnectedness. But "expanded" Consciousness is episodic for good reasons.
If increasing the throughput of Consciousness from within is not a viable option, then expansion should be expected on the "other side" of Consciousness, where it will be retained as an abstraction before (and if) being localized to the domain. But if on the "other side" is not biological but digital memory, there is inevitably a boundary between the two, where the localized domain is essentially an interface:
• Higher level of interface abstraction.
If an interface is defined by Consciousness, the extension will be to a higher level of interface abstraction;
• Blurring the boundaries.
If the interface is defined by memory, the extension will be towards the blurring of the boundaries — brain-computer interfaces (BCIs) — not considered here (at least not within the Spatial Paradigm).
Level of Interface Abstraction
The speed of Human-Computer Interaction depends more on the level of abstraction than on the specific implementation of the UI (Fig.7).
(a) Windows Command Prompt
(b) Google Search
(c) ChatGPT
Fig.7. Command-Line Interfaces (in ascending order of abstraction).
A good example of the violation of this logic is the attempt to adapt screen interfaces to virtual reality (VR). These interfaces continue to work better on screen than in VR simply because they are products of the Spatial Paradigm that, like space itself, implies infinite expansion. So it makes no difference whether you look into infinite 3D space through a small 2D screen or through virtual reality goggles; it is still infinite non-physical space. VR suffers not so much from current technical limitations as from the lack of a suitable level of abstraction, beyond the reach of other mediums. Perhaps such a level will be the visualization of temporal structures.
However, it is the new level of abstraction that comes first; only then comes the new vision of the digital medium, and then a new interface, if needed.
This higher level of abstraction should compactify into locality long processes that are simply not retained in Consciousness in their entirety. This can be described as seeing the "Big Picture of Experience".
However, interfaces at this different level will not be radically different from those of the Spatial Paradigm:
Even if one frees the 3 spatial dimensions for time (or any other abstraction), visualization is still limited to three-dimensional perception: whether it is three-dimensional space, or three-dimensional visualization of experience.
This means that the appearance of a conventional button will not change much; what matters far more is the abstraction it is drawn on top of.
The abstraction displaced beyond Consciousness will be determined by the ability of artificial sensorics to recognize it and then translate it into our language — that is, to "draw a button".
Temporal Paradigm: Initial Point
The Temporal Paradigm began with a reflection on a clearly evident process: digital memory seeks to merge with biological memory not to comprehend it, but to identify more effective leverage, such as "Dark Patterns" [23].
As of today, the Temporal Paradigm has no clear definition other than the hypothesis that a digital memory possessing sensorics collects experience and can be seen as personal artificial sensorics or as an Artificial Observer. What is different about this view is that it is impossible outside of a human perspective (the human is the only unit of scale). And from this scale, we can look at the infinite space of the Spatial Paradigm and redefine questions about abstract data into questions about experience:
• What is my data? → What is my digital experience?
• What data do I consume and generate? → What shapes my digital experience?
• Who collects, owns and uses my data? → Who reads my mind?
Of course, we cannot call all data digital experience, but we have no idea how much more data has been collected by artificial sensorics than has been created by humans throughout all of human history [24]. All this data is, on the one hand, concentrated around just a few companies [25]; on the other hand, most users have no idea about their own data.
The result is a paradox:
The only party that does not collect user data is the users themselves.
A digital medium such as a smartphone (and other "sensitive" technologies), in terms of artificial sensorics, appears not as belonging to the User, but as implanted from the outside and connected to who knows where. In this regard, it is more correct to say that Users are not a resource, but a nutrient environment. This leads to the question: what evolves by consuming all this "food"?
But assuming that our freedom is not limited by the position in the food chain, we can summarize the requirements for the Temporal Paradigm on the scale of a single person:
• A personal device should act as an artificial extension of the User's personal sensorics;
• The source of the experience is the User himself;
• Therefore, it is the User who has all the rights to access, collect and analyze his/her own experience.
The reason we do not see the smartphone as personal artificial sensorics is that it simply does not have an App that meets the challenges of an alternative paradigm (i.e., literally one interface, but at a different level of abstraction). At the same time, the constant presence of the smartphone [26] has already effectively made it an extension of our sensorics.
The practical purpose of such a missing App might be planning and prediction, with characteristic tasks such as: showing the digital experience that affects us every day, and showing predictions based on the collected experience.
If the artificial sensorics is not connected to human-users, the scale is determined only by the amount of sensorics available in the system. Therefore, everything to which sensorics can be "connected" can act as an Observer: electrical networks, communications, transportation systems, business processes, production, recycling, etc. In other words, it can show invisible long-term processes that we cause and that we are unable to comprehend [27].
Civilization is threatened by changes taking place over years and decades, but changes over a few years or decades are too slow for us to perceive readily.
(Turnbull, Ornstein and Ehrlich, 1991, p.87)
Temporal observation of the environment seems particularly important: if we are incapable of being aware of our own experience, it is naive to expect awareness on a scale larger than human and the length of human life.
Trust
Prioritizing experience over data implies undisputed determination of ownership:
A smartphone and other personal devices*, being the private property of the User, signify that both the sensors and the collected digital experience are also the private property of the User.
*Personal devices falling under the definition of personal artificial sensorics. Literally in the same sense in which a person's sensory organs (the body in general) can belong only to the person himself.
If we assume that the interaction between Human-User and artificial sensorics should be defined as an exchange of experience, then such interaction, among other things, requires the concept of trust. Humanly speaking, it is trust that makes the very act of exchanging experiences possible [28].
For the Spatial Paradigm, there is no difference between data and experience — it is all data that simply does not contain trust as a thing. Just because we can mathematically formalize trust [29] does not mean that it exists inherently in the data. For this very reason, the more advanced machines become in mining our personal data, the more obvious the absence of trust becomes in principle [30]. Thus, the qualitative difference between experience and data is in the trust that is essential to the exchange of experience.
“Attention Economy” [31] and “Surveillance Capitalism” [32] are the obvious consequences of equating human experience with data. This causes a legitimate reaction from society. However, examples of such reactions, like the GDPR, are driven more by political necessity than by effectiveness [33].
It looks more realistic not to limit the collection of personal data (that should be done by default within the Spatial Paradigm), but to do exactly the opposite: to give Users all the power to collect as much data about themselves as possible. This means collecting, and therefore understanding, what data Google, Facebook, Microsoft, Apple, and others collect about the User.
There is no reason to believe that a logical system as complex as morality is complete when mathematics is not. In fact, because reducing morality to mathematics may be an impossibility, our moral intuitions may also respond to a logic that is also incomplete. If this is true, trying to reduce machine morality to a set of rules is naïve.
(Hidalgo, Orghian, Canals, Almeida, Martin, 2021, p.153)
The issue of trust goes far beyond technology, but within its boundaries and on the scale of a single person, the main problem can be formulated as follows:
How can I trust my personal artificial sensorics?
I've only found two options here so far:
• The most reliable is the physical absence of a wireless connection in the artificial sensorics. In this case, the User understands the exact limits of the Artificial Observer with whom he/she exchanges experience. But for a smartphone, this option seems unrealistic;
• Provide the ability to view everything that the artificial sensorics is experiencing at any given time. This can be realized using Temporal Geometry alone: immediately after observation, the raw data is removed, leaving only the geometric structure captured in the associated Dimensionality of Recognition (a minimal sketch of this idea follows below).
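Here is that sketch of the second option. Everything in it (the summarize function, the magnitude buckets, the one-second resolution) is a hypothetical placeholder; what matters is only the order of operations: summarize first, and never store the raw observation.

```python
import time

def summarize(reading):
    """Keep only a coarse temporal/geometric descriptor of the observation, not its content."""
    return {
        "observed_at": round(time.time()),   # when, at 1-second resolution
        "channel": reading["channel"],       # which sense observed it
        "magnitude_bucket": min(int(abs(reading["value"]) // 10), 9),  # how strongly
    }

def observe(reading, structure):
    structure.append(summarize(reading))
    # The raw reading goes out of scope here and is never written to memory or disk,
    # so the User can inspect everything the sensorics retains at any given time.
    return structure

# Usage with made-up readings.
structure = []
observe({"channel": "microphone", "value": 37.2}, structure)
observe({"channel": "accelerometer", "value": 3.1}, structure)
print(structure)  # only the geometric trace of the experience remains
```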
Each day it becomes more and more clear that ethics is the one and only interface of all human information technology [34]. But if relying on some sort of ethics/morality to regulate companies and machines is naive, then the User is left with only one option:
If anyone can collect my personal data, why can't I collect my own data too?
Moreover, I and I alone can rightfully collect such data about myself that no one else can collect except me. In terms of experience, it sounds more obvious: only the person himself owns his own experience.
After all, one might ask why it is only the User who agrees to the End-User License Agreement (EULA) [35]. By gaining access to my personal device, doesn't the software supplier also become a User of my device and its sensors? By this logic, the installed application must also accept the User's EULA, which defines what data the User has the right to collect about the application.
Instead of the Conclusion
The convergence of digital and biological memory will sooner or later collide with a Spatial Paradigm contradiction: the information in our memory is not data, but experience. But, as it turns out, this contradiction is only critical for the Human-User.
The Spatial Paradigm, like physical space, implies infinite expansion, where the growth of data and sensorics scale will eventually bury the uniqueness of experience. It is the Human-User who will disappear into the patterns of Big Data and Neural Networks: originality and differences will be recognized, categorized and predicted.
In a sense, this is unavoidable. However, it does not follow that such a game has no multiplayer mode:
• Harvesting data about Users, in particular without their awareness — is surveillance and control;
• But if Users collect solely their own experience — it is "Know thyself" and self-regulation as the only solid ground in the “era of information abundance”.
If we think about it, under the User's supervision and based solely on his own experience, artificial sensorics will gradually become a personal reflection and a personal defender, if you will: the only way to protect yourself is to know what is being collected about you and, more importantly, what is expected of you on that basis.
The sustainable progression of ubiquitous artificial sensorics lies in understanding it as an extension of our own. Such an understanding carries less risk of personifying or even deifying or demonizing so-called AI. But this is only possible through the exchange of experience (not abstract data, obviously) between artificial and biological neural networks.
In other words, without a common ground for communication at least at the level of visual/logical abstractions, all the advances of this technology will become less and less human.
And, after all, what is the point and how will we use a technology that has no human interface?
"
Resources:
[1↑] Gellert, R. (2020). Comparing definitions of data and information in Data Protection Law and Machine Learning: A useful way forward to meaningfully regulate algorithms? Regulation & Governance, 16(1), 156–176. [doi]
[2↑] Straczkiewicz, M., James, P., & Onnela, J.-P. (2021). A systematic review of smartphone-based human activity recognition methods for Health Research. Npj Digital Medicine, 4(1). [doi]
[3↑] Hopfield, J. J. (1982). Neural Networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, 79(8), 2554–2558. [doi]
[4↑] Shapiro, L., & Spaulding, S. (2021, June 25). Embodied cognition. Stanford Encyclopedia of Philosophy. [link]
[5↑] Maturana, H. R., & Varela, F. J. (1980). Autopoiesis and cognition: The realization of the living. D. Reidel Publishing Company. [link]
[9↑] ©2019-2023 Waymo LLC. (n.d.). Self-driving car technology for a reliable ride - Waymo Driver. Waymo. [link]
[10↑] ©2023 SAE International. (n.d.). J3016_202104: Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles. SAE International. [link]
[11↑] Hawkins, J., & Blakeslee, S. (2005). 8 The Future of Intelligence: Sensory Systems. In On Intelligence (pp. 154–156). essay, Owl Books.
[12↑] Jaynes, J. (2003). 1 The Consciousness of Consciousness. In The origin of consciousness in the breakdown of the bicameral mind (pp. 21–25). essay, Houghton Mifflin.
[13↑] ©2023 Future of Life Institute. (2023, June 8). Autonomous Weapons Open Letter: AI & Robotics researchers. Future of Life Institute. [link]
[14↑] Cohen, M. A., Dennett, D. C., & Kanwisher, N. (2016). What is the bandwidth of perceptual experience? Trends in Cognitive Sciences, 20(5), 324–335. [doi]
[15↑] Gregory, R. L. (1987). Intelligence. In The Oxford companion to the mind (pp. 375–379). essay, Oxford University Press.
[16↑] Haun, A., & Tononi, G. (2019). Why does space feel the way it does? towards a principled account of spatial experience. Entropy, 21(12), 1160. [doi]
[17↑] Hopfield, J. J. (1982). Neural Networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, 79(8), 2554–2558. [doi]
[18↑] Jaynes, J. (2003). Afterword. In The origin of consciousness in the breakdown of the bicameral mind (pp. 455–456). essay, Houghton Mifflin.
[20↑] Bayne, T., & Carter, O. (2018). Dimensions of consciousness and the psychedelic state. Neuroscience of Consciousness, 2018(1). [doi]
[21↑] Müller, O., & Rotter, S. (2017). Neurotechnology: Current developments and ethical issues. Frontiers in Systems Neuroscience, 11. [doi]
[22↑] Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81–97. [doi]
[23↑] Narayanan, A., Mathur, A., Chetty, M., & Kshirsagar, M. (2020). Dark patterns. Communications of the ACM, 63(9), 42–47. [doi]
[25↑] Slynchuk, A. (n.d.). Big brother brands report: Which companies access our personal data the most? Clario. [link]
[26↑] Smith, A. (2015, April 1). U.S. smartphone use in 2015. Pew Research Center: Internet, Science & Tech. [link]
[27↑] Ornstein, R. E., & Ehrlich, P. R. (1989). 1 The Threat within the Triumph. In New World New Mind: Moving Toward Conscious Evolution (pp. 9–13). essay, Doubleday.
[29↑] Marsh, S. (1994). Formalising Trust as a Computational Concept.
[30↑] Burr, C., & Cristianini, N. (2019). Can machines read our minds? Minds and Machines, 29(3), 461–494. [doi]
[31↑] Williams, J. (2018). Stand out of our Light: Freedom and Resistance in the Attention Economy. Cambridge: Cambridge University Press. [doi]
[32↑] Zuboff, S. (2019). Surveillance capitalism and the challenge of collective action. New Labor Forum, 28(1), 10–29. [doi]
Figures:
Figures 4, 6 and the Illustration in the Title were generated using Temporal Geometry Simulation.