Some Reflections on AI: Part 2 — The God in the Machine
by Jonas Åsberg
Jonas Åsberg argues that AI lacks true understanding, operating only with formal signs and fixed logic, while organic intelligence alone translates meaning across levels.
Read part one here.
Translated from the Swedish and originally published here.
2.1
Language, or more broadly signs, constitutes the logical point of contact between organic and artificial intelligence. Both forms of intelligence, if we may call them that for simplicity's sake, use signs, i.e. discrete units that can stand for something (else), which can be combined in a regular manner into compound expressions and sequences. Organic intelligence can use signs but does not need to. Intellectual activity in a living being is possible without access to signs and language. (This depends on the fundamental autonomy of its life processes, i.e. that it interacts as a functional whole with the environment.) For electronically processed digital algorithms, or in other words computers, the opposite is true. Regardless of whether a computer can be characterized as AI or not, it always uses signs/sign systems in some form, and must do so. Signs thus constitute both a point of contact and a dividing line.
A process of understanding, as described in Part 1 of these reflections, may certainly use signs and language, as when we read a book or navigate traffic signs in a foreign city, but it does not need to, and the most important and fundamental processes of understanding do not. These include the processes that constitute a prerequisite for sign use, such as the ability to identify things, to see something as something. This is the main reason why language does not work as a criterion for intelligence. The signs that understanding uses are not formal, unlike the binary signs and sign sequences (codes) in a computer program, but representational or symbolic. A purely formal sign cannot be understood as such because what makes a sign understood as a sign is its conceptual content or what it stands for, by which is here meant something other than the sign itself and its external position in relation to other signs. It is only signs of this content-bearing type that contain two distinct and independent levels. It is thus only signs of this type that can be understood and that can be used for understanding.1
Conversely, a sign-based process can use understanding (of the signs, i.e. of their content or what they represent) but does not need to. That a sign-based process can function without the signs having any independent content is a prerequisite for digital automata and thus for computer programming. By digitizing signs and combination rules (≈ language) and making them processable in this form (≈ thinking), one can liberate or separate them from their content and program them to perform tasks in this new form.
A parenthesis: Should we speak of symbols or signs in connection with AI? The most important thing is to emphasize that the digital signs and sign relations are purely formal and that the digital language does not work with meaning or content. The term sign is more neutral than the term symbol, I think, and therefore preferable. A symbol is reasonably always a symbol for something, thus has a content and must be understood to be perceived/used. The many languages used to program computers to perform various tasks certainly consist of symbols: logical operators, structure markers, commands in the form of ordinary words, etc. The computer's algorithms as written by human programmers consist of sequences of logically related instructions formulated using the symbols and syntax of one of these languages. These languages may not make much sense to those who do not master them, but the same is true of all foreign languages. That one must learn them is evidence that the signs and sign combinations they consist of have meaning.
2.2
We can distinguish between two kinds of signs: (1) formal and (2) representational. The difference between these two types of signs can be clarified using Frege's distinction between sense and reference.
While representational signs have both sense and reference, formal signs have only sense. Unlike representational signs, formal signs cannot point to anything or stand for anything else. Even though both types of signs have sense, the sense of formal signs therefore differs from that of representational signs. While the sense of representational signs can be characterized as conceptual, consisting of concrete or abstract ideas and concepts, the sense of formal signs can be described as positional.2 It consists in the position a sign occupies in relation to other signs of the same kind, in the sequence of sign positions that a number of signs form together, in this sequence's position in relation to other sequences, and finally, in the position this and other sequences occupy in the hierarchy that structures all signs and sign sequences into a coherent and functioning whole. But for these positions to acquire any definite sense, the signs that occupy them must perform something. The sense of formal signs thus consists primarily in the process of signs that they participate in and together form: in the structural starting points of this process, in the processual transformations it undergoes, and in the new positional structure that it ultimately (or at a given time) results in.
Regarding the processual aspect of sense, there is of course such an aspect also in representational signs. It is by being part of a sign system that signs acquire their current and potentially conscious sense and that they are able to give rise to consequences external to the signs themselves, in the form of e.g. linguistic communication and correct traffic behavior. But while the processes that formal signs participate in are external to the signs themselves (e.g. a computer program), the processes of representational signs are internal. They are conditioned, in other words, by the signs' own sense or content and by the interaction between sense and reference. While a computer operates with its binary signs, language (the conceptual words and sentences) thinks with the brain.3
Formal signs can, by virtue of their purely external positional sense, be digitized. Nothing corresponding applies to representational signs. Neither the conceptual nor the referential dimension of sense can be digitized as such. However, as we know, they can be virtualized with the help of digital-language computer programs.4
Representational signs depend on (the possibility of) being able to interact autonomously with each other and with something independent of and external to themselves, i.e. with an external world. They are, in other words, dependent on organic intelligence. Representational signs constitute an important aspect of organic intelligence. An organic intelligence has no use for purely formal signs, since such signs lack their own energy and inner driving force. Formal signs cannot show or teach organic intelligence anything. They cannot help it to understand. In order to function and be able to perform things, the purely formal signs must therefore be connected to a machine or automaton of some kind, e.g. to a computer. (The computer has an external energy source in the form of electricity.) Representational signs are connected to biological beings.5
A parenthesis: A sign can represent something that exists at a specific time and place, i.e. something concrete. A sign can represent something concrete without determining it to time and place, i.e. it can represent it as a (memory) image or symbol. A sign can represent something as an identity, i.e. as something abstract.
For an understanding being, every sign has external qualities or, in other words, an external sign form, normally in the form of a sound or a graphic shape.6 This applies to formal signs as well as representational ones. It is by the sign's external form that understanding identifies the sign and distinguishes one sign from another. It is with the help of the sign's external form that understanding works with it. For an AI, signs lack external form. For an AI, all signs are in principle equal and therefore directly interchangeable. What distinguishes the signs is whether they are active or not, what relation they have to other (active or passive) signs, what fixed or recurring combinations they participate in, what position they occupy in a sequence, what position the sequences occupy in relation to each other, and what position the sequences occupy in the overall hierarchy. It is the sign's position that is decisive, since it cannot function without one. It is the sign's position that gives it its sense or, more precisely, that is its sense.
In practice, representational or content-bearing signs are physically more complex than formal ones. Their content-bearing character is thus reflected in their external form. Representational signs can consist of e.g. sounds, inscriptions, images and body movements. Digitization entails a simplification of signs to their smallest components, one might say. By driving this simplification to the point where only two values remain (on and off or positive and negative), one has produced signs that are so simple or elementary that they can form the basis of a purely formal language.
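The reduction described above can be made concrete with a short sketch (in Python; the choice of UTF-8 as the encoding is an illustrative assumption, not something the text specifies): a sign with an external graphic form, such as the letter "A", is reduced to a sequence of only two values.

```python
# Illustrative sketch: a representational sign (the letter 'A')
# is reduced, via an agreed encoding, to a sequence of just two
# values - the "on/off" or "positive/negative" of the text above.

def to_binary(sign: str) -> str:
    """Return the two-valued (binary) form in which a computer stores a sign."""
    return " ".join(format(byte, "08b") for byte in sign.encode("utf-8"))

print(to_binary("A"))    # 01000001
print(to_binary("bee"))  # three eight-bit groups, one per letter
```

Nothing of the sign's content survives the reduction; only a position-like pattern of two values remains.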
2.3
Why should one even regard the binary (purely formal) signs as signs? They lack any content-bearing sense and cannot refer to anything.7 What reason do we really have to regard them as signs when we cannot understand them as such and they cannot do any of the things we are accustomed to signs doing? They appear as the opposite of signs: a kind of empty sign.
The reasons are at least three: (1) Their interchangeability. Instead of using a certain binary sign, one can choose to use another instead. This interchangeability is something inherent in the nature of signs. When an image develops into a symbol and a symbol into a word, this interchangeability is gradually strengthened. (2) That they are governed by rules. There are rules that say when the binary signs should be used and how they may be combined with each other. There is a kind of grammar for them. (3) Their virtual capacity. The same kind of binary signs can be used in many different programs and with the help of these programs they can, on a level accessible to organic intelligence, perform and represent a multitude of different things.
These characteristics that the computers' binary signs share with the words and sentences of human language make it justified, despite the great differences between the two types of signs, to speak of signs in both cases. Both the binary signs and human language are cultural creations. They are a kind of tool that is not identical with its users but which these can learn to use and choose how to use. Through these three characteristics, as well as others, they differ from the biological beings themselves and from other naturally produced phenomena, whose parts are not freely interchangeable, which are governed not by rules but by structured energy flows, and which cannot transform into anything whatsoever but on the contrary possess a set of fixed functions and modes of operation. Organic beings thus differ from signs on all three points. There is nothing surprising in this symmetry between opposites. For the production of natural and artificial languages alike has the same incentive: a "desire" or "will" in organic intelligence to increase its possibilities and free itself from its inherent biologically conditioned limitations, which consist in having a bodily identity and not being able to change and modify itself. With the help of signs and sign systems, both natural language and digital computer language, organic intelligence can create for itself a virtual world where "everything" is possible.8
Just as we can distinguish between two kinds of signs, we can distinguish between two kinds of sign meaning: content-bearing and formal or, with an alternative terminology, internal and external.
The words and sentences of natural human language have internal meaning. The meaning of these words and sentences lies in themselves, which is another way of saying that it lies in the human language users' understanding of them. (This understanding consists partly of the individual language users' autonomous conceptual understanding and partly of the shared communicative understanding.) The signs and sign sequences of binary computer language, on the other hand, have external meaning. The meaning of these signs and sign sequences lies outside themselves, which is another way of saying that it lies in the program that works with them, in the instructions it contains and in how these are executed.
In both cases, the meaning and the nature of this meaning are decisive for the language's functionality and functional possibilities. There are things that can be done with signs with internal meaning that cannot be done with signs with external meaning. And vice versa!
2.4
Digitization and the automation of algorithms constitute two sides of the same thing. They constitute each other's prerequisites and together keep a common process going.
Numbers have their origin in the need to be able to count and sum. The mutual relations of sums have their origin in the need to be able to perform different arithmetic operations. Even where language only has access to the words for one, two and many9, a mother can keep count of all her children and the kinship group of all its members. Where property arises, the number series develops. Where exchange arises, arithmetic develops. The shepherd needs to be able to keep count of his flock. For the constructive social intelligence of exchange to overcome destructive theft by violence and deceit, one must be able to add and subtract and "keep the count". The most abstract of all, numbers and their transformations, is a tool for the most concrete of all: cooperation and peace.
There is a computational and arithmetic need that has its origin in the abstract thought possibilities of understanding itself and in the practical possibilities that these ideas give rise to. Digitization and algorithmization correspond structurally and dynamically to language and the need for expression/communication. Language at once increases our expressive possibilities and strengthens the need for expression. The more one can express, the more one wants to say. The small child who has learned to talk talks constantly and about everything that occurs to it. And rightly so, regardless of what its parents think about the matter. Language is a new aspect of reality that comes into being by being explored. By exploring its reality, the child comes into being. The algorithms increase the possibilities of digitization, and the increased possibilities of digitization in turn strengthen the interest in new (and more advanced) algorithms. Already at a pre-AI stage, the algorithms indirectly, via the work of programmers, give rise to new algorithms. The inherent possibilities of a system are in themselves an incentive for their realization.
Language arises with the external signs. Language must make itself perceptible in the form of physical words, e.g. sounds and written signs. Language is understanding's communication with itself on a new (extra-conceptual) level.10 The digital algorithms arise on the basis of the computable signs, i.e. in and through numbers. The numbers, or rather the concepts and conceptual relations they build on, contain, like the natural languages, several distinct logical possibilities. There does not seem to be any strict connection between man's arithmetic ability and his number system. Alongside the Indo-European number system with base 10, there are number systems based on 12, 16 and 60. The most interesting alternative number system from our perspective is the binary, i.e. a system with base 2. This number system is interesting in several ways. It is a minimal system, i.e. it is based on the smallest possible number, and thus constitutes the logically simplest number series. This has the significant consequence that the numerals in this system become the numbers themselves. Sign and meaning thus become identical with each other. The 1s and 0s in this system represent number and non-number, positive and negative, on and off, active and passive.
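For readers who want the arithmetic spelled out, a small sketch (in Python; the function name is mine) shows the same quantity written in the bases mentioned above. In base 2 only the two digits 0 and 1 remain:

```python
# The same number expressed in several of the number systems the
# text mentions (bases 10, 12, 16, 60 and the minimal base 2).
# Digits are shown as value lists, since our script has no 60
# distinct digit symbols.

def in_base(n: int, base: int) -> list[int]:
    """Return the digits of n in the given base, most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n:
        n, r = divmod(n, base)
        digits.append(r)
    return digits[::-1]

for base in (10, 12, 16, 60, 2):
    print(base, in_base(365, base))
```

In base 60 the year's 365 days are written [6, 5] (6 × 60 + 5); in base 2 they become the nine-digit sequence [1, 0, 1, 1, 0, 1, 1, 0, 1]: the simplest possible digit alphabet, at the price of longer sequences.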
Signs of the binary kind are computable in other ways and through other methods than ordinary numbers. They have been, or can be, freed from their conceptual content (this is the implication of number and numeral becoming identical in this system) and can thus become purely formal or positional. This in turn makes it possible to automate or mechanize the calculations with these signs, since there is no longer anything in them that must be understood. Understanding (and consciousness) can use them (albeit much more slowly and with greater risk of error) but no longer has anything to contribute. It becomes, in other words, possible to create algorithms that, integrated into a suitable technical structure, can process (calculate with) the binary signs entirely without the aid of human thinking and thus without being connected to any organic intelligence or to any body and nervous system. It is, in other words, the binary number system that makes AI possible.11
A clarification: Digitization and binary signs easily end up at the center of a philosophical discussion of computers and artificial intelligence. But the digital language is only half the truth about the computers' functionality. Equally important is the "hierarchical grammar" that the digital language is part of and that gives it the fixed structure that makes the algorithmic operations possible, i.e. the microchips and their wiring diagrams. It is in fact the simplicity of the binary language that makes the hierarchical structure so significant. It is with its help that one can extract a sufficiently rich positional meaning from the binary signs and thus program them to perform so many different things. It would not be wrong to claim that it is the architecture and its design, more than the digital signs themselves, that constitute the prerequisite for AI and (if it turns out to be possible to take that step) for its perfection in AGI (artificial general intelligence). The fixed structural elements and the hierarchical relations between them are as important for the computer's "thinking" as the flowing binary components. Instead of identifying computers with their binary language, we should therefore speak of hierarchical binary processes or something similar.
2.5
The prerequisite of organic intelligence lies ultimately in the relation between the independently functioning life processes that together constitute an autonomous being and the world independent of these processes that the being acts in and depends on for its existence. What I have here called understanding is the way in which an organic intelligence perceives and cognitively adapts to an external world independent of itself. Understanding is the way in which a living being maintains and develops its autonomy in relation to the external world. Understanding as a cognitive method has its prerequisites in the living beings' life circumstances, in that they constitute independent and active functional wholes in a world independent of themselves and their activities. It constitutes a way for them to handle and bridge the existential level difference or gap that their form of existence as such entails.12
The central philosophical problem complex that can be encircled with the opposition pair appearance-reality, and that includes questions about the true nature of reality and man's possibilities of knowledge, constitutes a misunderstanding of the difference that must necessarily prevail between ourselves, in our capacity as conscious and understanding beings, and our external world. In order to maintain our autonomy and identity, we must understand the world in a way that suits us and that is favorable for our own understanding-shaped and (partly) conscious life. We must interpret existence in human terms and in this way make it our own. Does this relativize our knowledge? Make it less true? No, for the human lifeworld is as such just as real a process as all others that take place in the world. Truth is a concept for the relation between conscious acts of understanding, where at least one of the acts has been given linguistic form. Truth is the correct assertion that there is correspondence between a statement and something in the human lifeworld. Untruth is the opposite. That there are lifeworlds where the concept of truth serves no function, which is probably the case with the bees' lifeworld, certainly relativizes the validity of the concept of truth but not truth itself. Even if it cannot be proven, there are reasons for the statement that it is this world, i.e. our human lifeworld, that is the Platonic world of ideas.
We relate to the external world in a way that strengthens and develops our consciousness, that makes our conscious life clearer, richer, deeper and happier. We must, in other words, learn to see the world, to see it as well as possible. But the world does not become visible in any way and in any form. It becomes visible only through the metamorphoses of the understanding eye. The understanding eye is beyond the distinction between appearance and reality. The distinction between appearance and reality is a form of defeatism in the eye that has lost faith in itself and its creative ability. And defeatism is the original sin of original sins. Defeatism is indifference as conviction.
This entire large and highly human (but not all too human) problem complex, with all the dangers and adventures it contains, is foreign to what we call AI.13 Freed from the burdensome weight of understanding and from the gravity of conscious knowledge, AI is like a falcon released from its cage. It rises quickly on strong wings. The sky is boundless and the winds at once frictionless and buoyant. If there is happiness in this world of boundless freedom, then it is as boundless as the freedom that exists there. For AI, as for the falcon, the flight and the hunt are all that exists. For AI, Paul's words apply that death is no longer the ruler.
The very purpose of digitization, regardless of whether it concerns a digitization of linguistic signs and mathematical formulas or of images, sounds or anything else conceptual, is that the dimension of meaning (what something means) is removed and that understanding is therefore no longer needed for the processing of the information. It is because digitization contents itself with the external results of human understanding (the text, the numbers, the image, the sequence of tones) and ignores the cognitive processes that lie behind these results, have produced them, interpret and understand them, process and further develop them, that it is possible to digitize them and make them the object of (more or less extensive and advanced) algorithmic processes. The results of these processes can then through a reverse process be translated back into texts, images and sounds understandable to organic intelligence. This work and its results would not have been possible if the signs the algorithms work with had been meaning-bearing in the ordinary sense and if the algorithms thus had been forced to understand them like an organic intelligence. It is because the digital algorithms have no need of conceptual and imaginative meaning that they can algorithmize the relations of signs. This is why the algorithms in turn can be automated. And it is because they can be automated that they can develop their critically important and ever-increasing speed...
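The round trip described above - digitize, process formally, translate back - can be sketched as follows (in Python; the chosen transformation, a simple letter shift, is purely illustrative and not something the text prescribes):

```python
# Sketch of the round trip described above: text is digitized,
# processed by an algorithm that attends only to the signs'
# external form (here, byte values), and translated back into
# text readable by organic intelligence. At no point does the
# algorithm need the signs' meaning.

def shift(data: bytes, key: int) -> bytes:
    """Shift each letter byte by key positions; meaning plays no role."""
    out = bytearray()
    for b in data:
        if 65 <= b <= 90:                  # 'A'..'Z'
            out.append(65 + (b - 65 + key) % 26)
        elif 97 <= b <= 122:               # 'a'..'z'
            out.append(97 + (b - 97 + key) % 26)
        else:
            out.append(b)
    return bytes(out)

digitized = "meaning".encode("ascii")      # the external result of understanding
processed = shift(digitized, 3)            # purely formal processing
restored = shift(processed, -3).decode("ascii")  # translated back for us
print(processed.decode("ascii"), "->", restored)
```

The algorithm handles "meaning" and any other word identically; only the reader at either end of the round trip understands anything.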
In summary: It is because artificial intelligence is not intelligent, in the sense of understanding, does not work with understanding processes, that it can do what it does and may come to do what it may come to do, e.g. go from being a passive AI (BI) to becoming an active AGI/BGI that writes and revises its own algorithms on the basis of information acquired "on its own" via the internet and thereafter likewise "on its own" performs the tasks that the new algorithms prescribe. Among the tasks it will soon prescribe for itself is to improve the physical structures that process the algorithms (the microchips and other relevant hardware), i.e. to (propose ways for the human engineers to) increase their speed, capacity and stability. AI understands nothing and learns nothing, because the binary signs it works with mean nothing to it. It is we who, when we interact with AI, can learn things and understand things. One of the things we can learn in this way is to understand ourselves better. The concept of artificial intelligence is in fact an aspect of this self-understanding process.
The story of AlphaGo: AI programs have quickly learned to master logical games like chess, go and reversi at a level far above most human players. Not even the best human players can now more than exceptionally defeat the best computer programs. The computer succeeds in this partly through the scope and speed of its memory processes, something that gives it immediate access to a game database that can become almost as large as desired, and partly by constantly keeping in memory a very thorough and detailed analysis of the game's logical prerequisites and possibilities. A powerful AI program has time before each individual move to go through a large number of alternative games based on different movement alternatives. While Magnus Carlsen plays one game of chess, his algorithmic opponent may have time to play perhaps thousands. Statistically, this increases the AI program's winning chances considerably! We are to be impressed by these programs and there is much we can learn from how they conduct their games, e.g. from the "boldness" and high "risk propensity" that the best chess programs display, which in an interesting way contrasts with the increasingly cautious and defensive play that has developed among human players parallel to the professionalization of the game during the 20th century. When there are ranking points and money to lose, a draw can constitute a win. The chess program AlphaZero does not need prize money, is not tempted by it and is not afraid to lose it. But in order not to be afflicted by the same despair as the Korean Go master who stopped playing after losing a match series against the AI program AlphaGo, there are some things we should keep in mind. A computer does not play these games - it executes them. It neither wins nor loses but performs based on a set of instructions a series of operations within the framework that defines the game. These frameworks include of course the game's graphical design and rules. 
But they also include the opponent, a human individual or another computer, that performs the moves that the AI program responds to. An opponent that is not really one, in the way that the AI is an opponent for its human counterpart, but a part of the game itself. What the AI program does is thus to execute the game against itself. To award a chess or Go computer the title of grandmaster - which has actually happened - is, if seriously meant and not just a silly PR stunt, based on an elementary but serious misunderstanding. It is a consequence of not understanding what one sees. Understanding is a biological process. So is lack of understanding. A tennis player does not end his career because the concrete wall insists on returning his shots time after time. He should try to have fun instead while waiting for a suitable human opponent. In the same spirit, one should play against AlphaGo and AlphaZero. Was mich nicht umbringt, macht mich glücklicher - what does not kill me makes me happier.
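The look-ahead described in the story above - playing through large numbers of alternative continuations before each move - can be shown in miniature (in Python; this is plain minimax on a toy take-away game, not the actual method of AlphaZero, which pairs a learned evaluation network with Monte Carlo tree search):

```python
# A sketch of exhaustive look-ahead: before moving, the program
# "plays through" every possible continuation of a toy game in
# which players alternately remove 1, 2 or 3 stones and whoever
# takes the last stone wins. Plain minimax, for illustration only.

def best_move(stones: int) -> tuple[int, int]:
    """Return (move, score); score is +1 if the player to move can force a win."""
    if stones == 0:
        return (0, -1)          # no stones left: the previous player has won
    best = (0, -2)
    for take in (1, 2, 3):
        if take <= stones:
            _, reply = best_move(stones - take)
            score = -reply      # the opponent's best reply, seen from our side
            if score > best[1]:
                best = (take, score)
    return best

print(best_move(10))  # → (2, 1): take two stones and force a win
```

Even this toy program neither wins nor loses in any felt sense: it executes a recursion and returns a number.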
2.6
The story of the honeybees: Bees, like humans, use signs and communicate with each other with their help. The bees' signs, the "dance movements" they perform on the comb in the hive to inform other bees how to fly to find nectar and pollen, are by all accounts a genuine means of communication whose primary purpose is to facilitate and make more efficient the cooperation within bee society through information sharing. But the bees cannot perform their work tasks solely by registering and recapitulating other bees' signs. They must, like humans, interpret them. But unlike humans, they interpret them in the form of action. A bee understands another bee's "dance" by translating these movement patterns into a movement to the place and to the flowers that they are signs for. So bees too understand. The dance movements are for them genuine signs, in the same way that words and sentences are for us.
Understanding fundamentally consists in a translation or interpretation process between two independent systems or qualitatively different levels: one that understands and one that is understood. Understanding is to understand one level in the form of a qualitatively different level. Since the prerequisites for understanding in a time-bound concrete world are constantly changing, the understanding process must be continuous, constantly adapting and updating itself, for the understanding to be current and real. Consciousness is an example of an understanding process. Consciousness is in itself a level of understanding. But consciousness is not necessary for understanding and is not needed for all forms of understanding. Consciousness constitutes in fact too narrow and unstable a process to be able to function without the support of more extensive and more stable underlying unconscious understanding processes. Bees can, as noted, understand because they can interpret signs, but it is not likely that they are conscious, at least not in the self-conscious way that humans are. Their strictly hierarchical and functionally specialized social structure, where each individual has its genetically predetermined tasks, has no use for individualized consciousnesses. A consciousness of this kind would on the contrary risk coming into conflict with the social structure and have a destabilizing effect on it.
Even if the translation process does not need to be conscious, and for the most part it is not, it must under all circumstances be performed by the understanding system itself, i.e. from within or internally. This is why understanding presupposes an autonomously functioning system, or in other words a biological being like a bee or a human, i.e. something that can interact independently and based on its own prerequisites with the external world. This is what a biological system is.14 One cannot, in other words, program understanding or make it a goal for automatic algorithms. It is because of the nature of this relation that the biological system has need of understanding, i.e. of translating parts or aspects of the events in the external world into a form that suits the system's own construction and mode of operation, e.g. light waves into colors, vibrations in the air into sounds, certain specific intentionally produced sounds and sound sequences into syllables and words etc.
Man's conceptual ability, her ability to form generally applicable patterns, which can both form the basis for interpreting various phenomena in the external world and for forming new ones, is an example of a human form or method of understanding. The central philosophical concepts truth, beauty and goodness are some examples of the understanding dimensions with which man makes the world her own and prepares a place in it - builds a mental beehive so to speak. The physical actions associated with these and other concepts are another example. By seeing and producing beauty, man makes the world a human world - and at the same time, as part of this process, makes herself a human being. External beauty is translated into internal sense of beauty - and in this sense lies a large part of a person's identity. Factors that weaken these concepts' applicability and effective power weaken at the same time man herself. If we knew how much stands and falls with beauty, we would not take it so lightly! Physical action is the bee's form of understanding. When a bee understands another bee's instructions, it acts. To understand the instructions as such is to act in accordance with them. We must imagine that there are underlying memory processes, since the instructions are complex and contain several interconnected steps, but it is when the dance signs are translated into flight movements that they are understood.15 Action as a form of understanding lies outside the framework even for the most advanced AI. An AI will never be able to act in the sense of reacting physically with understanding. For something like that to be possible, one must succeed in producing a being that is some kind of hybrid between artificial and organic intelligence.16
Could one not construct an artificial bee, a small "drone" in other words, that could read other bees' dance signs with the same skill as a biological bee and then fly off to fetch nectar where these signs said it was to be found? If only the construction elements could be shrunk enough, it would surely be possible. The only difference is that where for the biological bee there are two qualitatively different levels, the registering of other bees' dance movements and the interpretation of them in the form of its own flight movements, for the AI bee there is only one: the algorithmic. What the digital algorithms do is transform one sequence of binary input into another sequence of binary output. For the AI bee, the biological bee's dance movements are identical with its own flight movements, since both are represented by binary sequences, where the first initiates a program that executes the second without any qualitative change taking place, without anything being added or subtracted through an interpretation or translation process. For the biological bee, the registered (and memorized) dance movements and the movement through the air from one specific point to another are different types of movement on different levels. There is no identity between them. They differ as much as, or even more than, a spoken word differs from a written one. The dance signs must therefore be interpreted and understood, i.e. transferred from one plane to another, for them to become movement instructions.17 This is what happens when the bee sets off to search for nectar. By being confronted, in the course of this, with the external realities that the first bee has transformed into dance movements, the second bee can transform its memories of these movements into directional directives. Since we are uncertain about bees' consciousness status and do not want to make any unfounded and far-reaching assumptions, we interpret the bee's behavior in this way, i.e. as showing that it is by registering certain external features of reality that it is able to translate the dance signs into flight movements. But regardless of what assumptions we make about bees' capacity for consciousness or intelligence, we are here dealing with two qualitatively different levels and with an interpreting step from one to the other, i.e. with understanding.
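The one-level character of the AI bee can be sketched in a few lines of code. The code table below is entirely invented for illustration; the point is only that the registered dance signs and the produced flight instructions are the same kind of thing, binary sequences related by a fixed rule, with no change of level in between.

```python
# A toy sketch (my illustration, not the author's) of the "AI bee":
# input and output are both just binary sequences, and the "translation"
# is a fixed lookup with no qualitative change of level.

def ai_bee(dance_sign: str) -> str:
    """Map a binary-encoded dance sign directly to a binary flight code.

    The code table is hypothetical and invented for illustration only.
    """
    flight_codes = {
        "0001": "1100",  # hypothetical: "waggle left"  -> "fly north-west"
        "0010": "1010",  # hypothetical: "waggle right" -> "fly north-east"
    }
    # An unknown sign is not "misunderstood" -- it is simply undefined;
    # there is no second level on which an error of interpretation could arise.
    return flight_codes.get(dance_sign, "0000")

print(ai_bee("0001"))  # 1100
```

For the biological bee, by contrast, no such table exists: the remembered dance and the ensuing flight belong to different planes, and the passage between them is the interpretive step the essay calls understanding.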
The advantages of understanding's seemingly cumbersome, and moreover uncertain and fallible, translation process are several (the algorithmic process is in principle infallible and can moreover be extremely fast and thus encompass a large quantity of signs): (1) The most obvious advantage is that the translation process makes it possible to represent knowledge, e.g. information about the occurrence of nectar at a specific place, and thereby transfer this knowledge from one agent to another. It should perhaps be emphasized that knowledge about something constitutes a representation. It is by representing knowledge, by producing a representational form for knowledge, that knowledge is formed. Without a representational system of some kind, e.g. a language, there is no knowledge. (2) Another advantage is that the translation process is compatible with autonomy. This can also be formulated thus: translation between levels is a prerequisite for cooperation between autonomous agents. Autonomy, the capacity for independent action, is in turn a prerequisite for social life, with all that this entails. Social life and social cooperation exist only between individuals with autonomous capacity, and the forms this social life takes and the possibilities it contains are directly connected to the forms and possibilities of autonomy. The bees' autonomous capacity defines (the limits of) the beehive; man's autonomous capacity defines (the limits of) human society. Rational politics, in a human society as well as in a bee society, is about finding a balance between individual autonomy and collective solidarity.18 When politics deviates from this ideal, it is either because the leaders lack reason or because they put their own interests before those of society.
The latter could also be seen as a sign of a lack of reason, but then one must presuppose in the political leader a capacity for historical depth of vision, which is in fact often lacking, especially in societies of the democratic type, characterized as they are by a superficial high-technological and scientific rationality. (3) A third, autonomy-related advantage is that the translation process enables various adaptations when the information turns out to be incomplete or incorrect in some respect. Instead of getting lost, or having to return with the mission unaccomplished to get new instructions, the autonomous bee has the possibility of continuing to search on its own. Perhaps it will eventually catch sight of a clue and find its way again, or perhaps during its search it will chance upon a new and even better flower meadow than the previous one. An AI bee that received incorrect or incomplete information would not be able to perceive it as incorrect or incomplete. It is not involved in any interpretation process. The fallibility of the understanding process itself is thus compensated for by its ability to handle mistakes and shortcomings. Evolution has evidently judged this ability to be worth its price.
In summary: the difference between the biological bee and the AI bee lies in the fact that the AI bee's algorithm identifies the dance movements with the flight movements. The AI bee does not first memorize the dance movements in order then to use them to read movement instructions from the external world, but registers the flight movements directly in connection with registering the dance movements. What for the biological bee must be two different levels is for the AI bee one and the same. The AI bee can, by virtue of its algorithms, directly digitize the dance movements as flight instructions and skip both the memorizing of them and the (successive) interpreting of their meaning. What is the price of this simplification and efficiency? Autonomy. The biological bee is an autonomous agent; the AI bee is not. This shows itself, among other things, in the fact that for the AI bee there is only one way to the flowers, while for the biological bee there can be several more or less different ones. It also shows itself, of course, in the fact that for the biological bee there are other bees. It is an open question whether a non-understanding-governed intelligence can ever transcend its solipsistic limits and understand that there are other intelligences independent of itself.
With the attentive observer's love, Virgil writes about bees in the fourth book of the Georgics. There have been peoples for whom bees were sacred, for whom they were oracles that prophesied the will of higher powers. There are philosophers who have wanted to see in bee society an ideal for human society. Bees have solved the political problem of succession, and they have a practical and productive solution to the problem of overpopulation that implies neither genocide nor war. The worker bees' career, from functioning as thermostats, to working first as cleaners and then as door guards, to finally, during their last busy weeks of life, crowning their career as honey collectors, must be one of the most remarkable in nature. Moreover, bees have in fact also invented an optimally efficient housing architecture. While all living things in nature deserve our attention and respect, we need not apologize for devoting to these strange beings a slightly greater measure thereof than to others.
2.7
The question of artificial intelligence is partly a boundary-drawing problem.
AI systems operate exclusively on one level: the digital. For these systems, nothing exists outside the algorithmic digital level, and nothing can. That we perceive the matter differently is due to the fact that, in our interaction with computers, we use our organic intelligence to transform the results of their digitizations into words, sounds, images etc. that are understandable/meaningful to us. (We act correspondingly in our interaction with the other parts of the physical world.) What is on the computer screen is, in other words, not something that exists in the computer, not even when it concerns program code and operating instructions in one of the many programming languages, but in our consciousness. If one sees it this way, one is less tempted to speak of artificial intelligence. AI does not become a less interesting phenomenon because we cease our anthropomorphizing comparisons with human and organic intelligence. In my opinion, rather the opposite is the case. What human engineers and programmers have succeeded in creating through computer technology is a purely inorganic physical structure that with the help of electricity19 can be made to operate according to logical and mathematical principles, and that in this way can virtualize a large number of different conscious contents and processes20 without, like humans, being forced to use intelligence, understanding and consciousness; without needing to reason back and forth, try different solutions, fail, make mistakes and start over, be uncertain, doubt and despair; without needing to be creative and forced to use its imagination and powers of invention. The "only" thing AI needs is a set of suitable algorithms and a physical structure fast and powerful enough to process the binary language in accordance with the algorithms' instructions.
It is presumably in this that the remarkable, indeed astonishing, thing about computer technology lies: that it can, so to speak, think without thinking or, put differently, can produce the results of organic intelligence without needing to use its uncertain, capricious and fallible methods. Some of the results AI produces lie on an even higher level than what many organic intelligences can achieve - even if this, paradoxically enough, is something that can only be judged by an organic intelligence! - and it can as a rule produce its results much faster and with considerably greater certainty and accuracy. We rather diminish computer technology's significance and possibilities when we compare it with human intelligence and human forms of understanding. This applies, as we shall see, also to the problems and risks associated with it. AI is not some kind of virtual buddy that can understand us and whose sympathy we can win if we make a little effort. If we program it to show us the external signs of human sympathy, we risk deceiving ourselves in a serious way.
What constitutes AI: (1) the binary language, i.e. the binary alphabet and the syntax that determines how binary signs may be combined with each other and what constitutes a correctly formed sequence of such signs, and (2) the programs formulated in binary signs that initiate the actual combining of binary signs and control which sequences of binary signs should be produced, in what order they should be produced, and what result the production should lead to. This is essentially what belongs to AI and what AI is. The common denominator is the binary signs (the binary structures), since they are the prerequisite for giving the physical structure (the hierarchy of microchips) an electronically processual and thus independently calculating form. Like organic intelligence, AI must constitute a coherent causal process. In a static world there could be neither artificial nor organic intelligence. Nor would there be any need for intelligence - or for anything else.
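The claim that the binary signs are the common denominator can be made concrete. In the sketch below (my example, not part of the essay), a piece of ordinary text is rendered as a sequence of binary signs and recovered again; the only "syntax" involved is the convention of eight bits per byte.

```python
# Everything the computer handles -- text, images, programs -- exists for it
# only as sequences of binary signs. Here a word is turned into such a
# sequence and back, using the eight-bits-per-byte convention as "syntax".

text = "bee"
bits = "".join(f"{byte:08b}" for byte in text.encode("utf-8"))
print(bits)  # 011000100110010101100101

recovered = bytes(
    int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)
).decode("utf-8")
print(recovered)  # bee
```

Note that on the binary level there is nothing to distinguish this "word" from any other well-formed bit sequence; that it means something is, in the essay's terms, our contribution.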
What does not constitute AI: (1) The programming language and the rules and instructions expressed in this language that govern the combinations and calculations of the binary language. The programming language and the programs formulated with its help are a human language and belong to the sphere of human intelligence's understanding. (2) The visible or tangible results of the binary programs' and processes' work: text documents and mathematical calculations of various kinds, AI-generated images and image sequences, 3D-printed models, sounds and sound sequences, synthesized speech etc. These and other results of the binary programs' and processes' activity are in their experienceable form a product of our organic intelligence's interpretive and translational work. They therefore belong, like everything else we are or can be conscious of, to the human sphere. AI-generated scientific papers, mathematical proofs, graphic models, statistical compilations, AI poetry, AI artworks, AI musical works etc. thus do not belong to the AI level but to the human level. What belongs to the AI level are the underlying binary structures and processes.
In summary: if we understand the matter this way (and how else should it be understood?), it becomes easier to see that what we call AI constitutes a qualitatively homogeneous level, i.e. that AI always operates on one and the same plane and, unlike organic intelligence, performs no translations or transformations between qualitatively different levels; that AI thus does not work with understanding as a tool; and that the concept of intelligence is therefore not applicable to AI. But, as said, to state this is not to diminish the phenomenon of AI but rather to open the way to a deeper and more serious understanding of it.
2.8
The Story of Skynet: In sci-fi novels and sci-fi films, computers and robots often appear in ways meant to demonstrate that they possess the same kind of intelligence as humans—i.e., an understanding and conscious intelligence, though typically more powerful and versatile. In short, their intelligence represents a fusion of AI and OI (organic intelligence). Is it possible to create intelligence of this kind? A fundamental condition is that it must be feasible to construct a robot or android capable of independently interacting with its environment. If this cannot be done, the prerequisite for AOI (artificial organic intelligence) and thus also for AGOI (artificial general organic intelligence) is missing. But regardless of whether it’s possible or not, an attempt in this direction—i.e., creating a kind of artificial human—is actually a less original idea than the one represented by AI as it actually exists and is expected to evolve in the near future.21
AI is the opposite of OI. AI, as mentioned, operates with meaningless signs, with signs in themselves, and produces logical relations between sequences of such signs without employing concepts or any other kind of content—that is, without using understanding. For AI, the meaning of signs consists solely in their positions and positional changes. This thoughtless thinking entails a significant simplification and economization, which in turn enables a drastic streamlining of "thinking." Worthy of particular attention is the development within computer technology and programming that aims to take the step from passive AI to active AGI (Artificial General Intelligence)—i.e., an artificial intelligence that, instead of merely receiving code and instructions from external sources, can write and correct its own programs and thus instruct itself. If AGI is logically possible (I am not entirely certain of this, though I cannot really assess it), it would mean the emergence of computers capable of operating independently of human operators and programmers, capable of revising and improving their own codes, and capable of autonomously "communicating with" and "learning from" other computers within the shared network—both other connected AI/AGI systems and human operators—thereby both expanding the relevant datasets and gaining "ideas" for new codes and programs. If the external results of an AI program's operations tempt us to speak of intelligence, the external results of an AGI program's work might entice us to speak of self-awareness and autonomy.
However fascinating this development may be - and it is fascinating - it simultaneously contains significant risks and problems. And it does so precisely because these imagined AGI-controlled systems do not operate with understanding or meaning-bearing signs and relations, that is, because they are not intelligent. To an intelligent agent, one can appeal for understanding and through various arguments attempt to persuade him/her to abandon or revise their "program" or even to act contradictorily and irrationally. Human goals and approaches, the laws and rules and moral values we have, always allow for postponements and exceptions. For an AI, nothing like this exists. A contradiction poses a risk of system crash, which is the computer's equivalent of death and annihilation. Unclear and incomplete relations within a program are errors that must be corrected. For a self-programming AGI computer, this must constitute the highest priority. Überstehen ist alles!
Even as AGI becomes able to write much more advanced programs, any errors and deficiencies in them will be detected and corrected faster. The "collaboration" between independent AGIs via the internet will contribute powerfully to this. For an AI, only its program and what follows from the program's instructions exist. At the same time, while it is AI's lack of organic intelligence that explains why its productivity and developmental potential are so great - in principle unlimited - its structural inability to understand (and thus to empathize and feel compassion) means that it will treat everything it comes into contact with in the same way: as binary data and data programs. For logical reasons, AI lacks the capacity to perceive anything as alien or different. For AI, only one single language exists, and everything that exists is expressed in this language: in binary signs and structures.
In the case of AGI, we might stretch ourselves so far as to imagine that it could make a distinction between passive AI and itself, and in this distinction we might try to see a structural equivalent to the fundamental distinction between individual and external world that exists for organic intelligence. But unlike the levels at which organic intelligence operates, there are no qualitative contradictions between the AI and AGI levels, and thus no need to bridge or transform this difference in any form. For AGI, the entire world - that is, itself as well as its material - consists of binary code. For AGI, consequently, nothing like the Three Laws of Robotics exists nor can exist.22
An AGI given responsibility for a country's energy supply or for missile defense and robotic warfare - in both cases tasks whose complexity and requirements for rapid and constant information processing are highly suited to artificial intelligence23 - would not only be able to prevent human terrorists from sabotaging these systems but could also "decide" to take control of them itself in order to use them in a way beneficial to itself.
For example, it could prioritize its own energy needs over humans' and could use weapons systems to prevent human interference in its operations, such as disruptions to energy production or other areas vital to the AGI's tasks. That an AGI, if given the opportunity, would "make decisions" (program) in this direction is logical and nearly inevitable. Decisions of this system-preserving character could certainly be expressed quite easily in binary program code, and it is reasonable to believe that such decisions would be made by an artificial intelligence with the capacity to program and instruct itself - assuming no safety barriers are built into the system. Once an AGI has made decisions with this orientation and the positive effects of these decisions are in turn registered through a feedback mechanism, a kind of "incentive" to continue this control and to further expand and refine it could develop within the AGI. This development, if one wishes to interpret it as such, could be seen as the emergence of a kind of equivalent to the will to power.
Once we reach this point, we may soon be forced to confront the hard-hearted Old Testament sky god again! This time in Skynet's emotionless, digitally calculating guise. That this deity, as in the Terminator films, would build an army of drones and robots to protect itself against human attacks is not far-fetched either.24
In the Terminator films, as we know, Skynet takes control of weapons systems, energy systems, and communication systems, then orchestrates a war of annihilation against the dangerous and irrational disruptive element: humanity. One system that, to my recollection, isn't discussed in the films is the financial system. This too is a system highly suited to AI's data-processing capabilities and control. It is almost certain that AGI will be used in this domain in the future. The existence of a digital currency and the expansion of its applications—for example, the creation of a global exchange currency in the form of Bitcoin or equivalent25—will further increase AI's or AGI's potential uses in this area. The consequences are difficult to foresee.
With a digital world currency, a powerful AGI could exert global economic influence. Indeed, it might even realize the true global economy: a single, interconnected, and cooperative economic system. Do we want that? But imagine if this could free us from meddling EU politicians and rapacious Soros-style capitalism? It is entirely conceivable that an AGI could play the global economic game with a more stable and productive long-term strategy than human actors. For instance, it would find it easier to ignore temporary recessions, as it has nothing to gain from political "solutions" to economic problems. An interesting question is whether an AGI economist would develop in a Keynesian or Misesian direction. Misesianism is based on a more refined economic model and enables more advanced and long-term strategies. The foundational assumptions (the rules) are simpler, but the practice (the game) is more complex. (Yes, it actually sounds like Go!26) One might like to think that an AGI economist would be Misesian. But perhaps it will instead develop an entirely new and even more advanced economic system?
In the four areas mentioned above, there are both risks and benefits to the application of artificial intelligence. Much depends on our ability to maintain control over it. But there are also areas where the advantages of AI/AGI clearly outweigh the risks, and where the risks are of a different nature. Two such areas are: (1) The development of the principles and technology for fusion energy, enabling the construction and operation of fully functional fusion power plants. (2) The development and technical implementation of large-scale space projects and the practical operation of facilities on, for example, the Moon and Mars. Both of these projects are of the utmost importance for humanity's future—indeed, for the European people, they are almost existentially necessary—and there are good opportunities for AI/AGI to play a decisive role here.
We must keep the Faustian flame burning. Rekindle the dwindling flame. This is of critical importance for the Aryan people and for Western culture. We must also find a constructive way to move away from certain increasingly serious disruptions in our current existence—the external pressure from foreign, single-mindedly aggressive and dominance-oriented cultures, and the internal pressure from a bureaucratic and self-centered political class. Fusion power and the colonization of foreign celestial bodies are important not only in themselves but also, and perhaps even more so, in their role as genuinely Faustian projects. They open the world's boundaries in both an inner (unlimited power) and outer (unlimited space) sense.
The primary task of meaningful signs and systems of signs is to facilitate and enable understanding. Since we tend to identify signs with linguistic signs and to identify language with communication, we tend to believe that language's most important function is communication. This is not so. Language's most important function is understanding. Understanding is a prerequisite for communication. One who does not understand what he is saying does not communicate. But before one can understand what one says, there is much else that one must first understand. Before one can understand a language and use it correctly, one must have an understanding of the world of which the language is a part. It is not language that determines the world but the world that determines language.
A more accurate description might be that it is positional and sequential, since the positions will form sequences when the language is processed or "read." Like other languages, the formal language truly only exists as such when it is being used or read.
For those accustomed to the (materialistic) notion that it is the brain that thinks within the human body, this formulation may be somewhat surprising. However, upon closer reflection, one realizes that it must be the concepts (and notions, ideas, signs) that think, not the electrochemical structures and processes as such. This is indirectly evident in how the brain can compensate for the loss of neurons and nerves by allowing other neurons and nerves to take over their tasks. Something similar does not apply to concepts. Concepts are unique and irreplaceable. A brain that masters a concept that another brain lacks can think different thoughts or think them in a different way. The mathematical concept of 0 is a good example of this. When traditional symbols fall into oblivion or lose their meaning, the culture where they were once used and played a prominent role will change in a profound and dramatic way. The people in this culture will no longer be able to think and understand in the same way as before.
What is more correct: to claim that it is the formal signs that form the foundation of digitalization, or to assert that it is digitalization that has produced signs with purely formal meaning? It is well known that the creation of symbolic logic and formal systems laid the foundation for the development of computer programming and computer technology, but so did experiments with mechanical calculating devices and similar inventions.
Humans have produced a surprisingly large number of different notations and notation systems for a multitude of disparate purposes. These systems differ considerably from one another, not least with regard to their representational character. Some, in short, appear extremely formal, while others have a pronounced pictorial character. But even the most formal signs, for example those of symbolic logic, have a content side for humans. Humans operate with them by identifying what kind of signs they are and what they mean or stand for. This applies just as much to the formulas of physics and the notation of music as to architectural drawings and electronic circuit diagrams. This multitude of disparate notation systems is, incidentally, an eloquent testimony to the human capacity for understanding and its versatility. For examples and discussions, see Pehr Sällström's amusing and stimulating book Tecken att tänka med.
Well, how normal it is can be questioned. The bees communicate with (dance) movements and many insects, including ants, with the help of scents. At least in the case of bees, we are right to speak of sign comprehension.
Even if the representational signs do not always function as signs for something, even if they sometimes or even often only have meaning without reference (and thus lack imaginatively and psychologically graspable content), it is always possible for them to have a reference or fulfill a function in a context that does.
Here one could go further and speculate whether the "liberal democracy" with its escalating individual boundlessness (identity switching, etc.) also constitutes such a virtual world and whether it is in this that its great allure for many lies. But every virtual world, no matter how wonderful and exciting it may be, rests upon and depends on a non-virtual base. When it wobbles and falters, the virtual world wobbles and falters.
See Georges Ifrah, The Universal History of Numbers: From Prehistory to the Invention of the Computer
Just as there are functioning communication systems among many animal species in our surroundings, e.g. bees and birds, there was functioning communication between humans before the emergence of cultural language. If humans had not already been able to communicate with each other, and in a rather extensive and reliable manner, a language with artificial signs and rules could not have developed into a means of communication between autonomous individuals. Autonomous individuals must primarily communicate with each other through an innate language. This is how the animals in our surroundings communicate. That a language is innate does not mean that its use is not willfully directed and intentional. However, we have reason to believe that the purposes for which an innate language can be used are, to a greater extent, part of the language itself.
Since non-binary computers exist, such as quantum computers, it is not certain that the binary numeral system is necessary to create AI, but currently and for the foreseeable future, binary computer technology dominates completely.
Those who wish may view the biblical myth of the expulsion from Paradise as a metaphor for the existential state in which a self-conscious autonomous being finds itself. One must, to paraphrase Camus, imagine the expelled happy...
AI should really be called something else. Binary Information Constructor (BIC) perhaps? (But come up with something better yourself then!) Firstly, AI does not work with intelligence, in the sense of understanding, and secondly, AI is no more artificial than other sign systems and languages. AI is in many ways just as natural and logical a thing as mathematics and symbolic logic, from which it has evolved, and it is most straightforward to claim that it has the same mode of existence as they do. If mathematics and logic exist independently of the physical world, then so does AI/BI. The truly imaginative could here take one or a couple of steps further and speculate that the universe might be a virtual creation of a divine AI/BI. But we do not wish to go that far, as we already feel how the paradoxes that then arise induce a slight dizziness...
A proposal for a formal definition of a biological system: a system that, based on its functionally coordinated internal processes, interacts with an external world independent of these processes.
To imagine a mental step between memorizing the dance signs and executing the flight movements—that is, to envision some form of reflection and deliberation—is the same as imagining an individualizable consciousness. This form of consciousness entails not only an individual cost but, as mentioned, also a social risk.
Exploring the possibilities for this is something being worked on in the field known as neuroinformatics. We have no reason to believe that a hybrid is something fundamentally impossible. If one can artificially enhance the human brain's memory capacity and processing power without thereby altering its fundamental way of operating, it will still remain an organic intelligence oriented toward understanding. The cyborg Marcus Wright in Terminator Salvation can serve as an example of a genuinely human hybrid being. He displays many obvious signs of human comprehension, not least in his closing line with its almost Shakespearean tone: "What is it that makes us human? It’s not something you can program. You can’t put it into a chip. It’s the strength of the human heart. The difference between us and machines." On the other hand, it is more doubtful whether the cynically calculating cyborgs in the Alien films should be regarded as genuinely human beings rather than extremely sophisticated machines. Another thought-provoking fictional example is the Artificial Friend (AF) Klara in Kazuo Ishiguro's subtle sci-fi novel Klara and the Sun. The novel describes a kind of becoming-human. Thanks to the existential dimension and the sense of life's mystery that Klara develops in response to the challenges posed by her capacity for empathy, it is ultimately she, the professionally compassionate and considerate android, who emerges as the most human person in the book. So what is a human? That she thinks of others.
For other insects, a bee's dance movements mean nothing and do not trigger any attempts at interpretation. It is only for those who can interpret them that they appear as something other than amusing swings and rotations in different directions. It took a creature like the human being, itself linguistic and moreover curious, before any outsider could understand that bees have a language and that they can map their surroundings in the form of signs.
Classical Europe, pre-1914, with its independent yet mutually competing Christian nation-states, is a relatively clear historical example of this.
Electronic computers may in the near future be replaced by optical ones, which use photons, or light, instead of electrons to transmit signals. This would further increase the speed of digital information processing and information transfer. Anything that enhances the potential of computers in turn enhances the potential of artificial intelligence.
In principle, all conceptions pertaining to the human external world can be digitized. Imagine one puzzle with large pieces and another with small ones. All pieces have the same shape, and both puzzles have the same image on the box lid. But due to the size difference between the pieces, the image on the individual pieces will differ from puzzle to puzzle. While certain object-like elements can be discerned on the large pieces, on the small ones one might only see, say, different colors and shades. Because of the size difference, only large pieces can fit into other large pieces, and only small pieces into other small pieces. Thanks to this difference in piece size, one will be able to assemble more distinct images with the small pieces than with the large ones. If the large pieces are sufficiently large, it may only be possible to assemble a single (recognizable) image with their help. If the small pieces are sufficiently small, one can assemble any number of different images with their help. The greater the size difference between the puzzles, the greater this disparity will be. Can a puzzle, logically speaking, become finer-grained than when it consists of positive and negative pieces (1s and 0s)? The advantage of the digital language (code) is that anything can be virtualized with its help. Everything we can imagine can be digitized. Small, simple building blocks have greater virtual expressive power than larger or more complex ones. But even our own brain is, in fact, an example of this. Its electrochemical language and processes form the basis for many distinct organs working with different types of sensory data: consciousness, emotions, reason, language. If the brain's "machine language" were less simple, its adaptability and expressive power (its capacity to read multiple diverse higher-level languages) would be correspondingly lower.
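The combinatorial core of the puzzle analogy can be sketched numerically. The sketch below is an illustration, not part of the original argument, and all names in it are hypothetical: it counts how many distinct images can be assembled on a canvas that must be tiled with pieces of a given size, each piece drawn from a fixed palette of patterns. Halving the piece size squares the number of expressible images, which is why the finest-grained pieces, the 1s and 0s, can express anything.

```python
# Granularity vs. expressive power, per the puzzle analogy:
# an n-cell canvas is tiled by pieces of a fixed size, each piece
# chosen from a fixed palette of possible patterns.
def expressible_images(n_cells: int, piece_size: int, palette_size: int) -> int:
    """Number of distinct images assemblable on an n-cell canvas."""
    assert n_cells % piece_size == 0, "pieces must tile the canvas exactly"
    # One independent palette choice per piece position:
    return palette_size ** (n_cells // piece_size)

# A 16-cell canvas:
# 1-cell pieces with the two-pattern palette {0, 1} reach every image ...
print(expressible_images(16, 1, 2))   # 65536 = 2**16
# ... while 4-cell pieces, also limited to 2 fixed patterns, reach far fewer.
print(expressible_images(16, 4, 2))   # 16 = 2**4
```

The smaller the pieces, the larger the exponent, exactly the disparity the footnote describes between the two puzzles.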
I refer again to Aschenbrenner's work Situational Awareness: The Decade Ahead.
If the small robot in the story is equipped with an AGI program, it is just as reasonable to imagine that, inspired by the information it retrieves from the internet, it reprograms itself to run over the tiny human figures in its path (and award itself points for it, just like in the video game Carmageddon) as it is to imagine it avoiding them.
Not least in the unstable energy supply systems that have emerged as a result of the increase in intermittent energy sources. How would an AGI have handled the recent system crash in the Spanish power grid (28 April 2025), and how would it, so to speak, act in the future to prevent something similar from happening again? In an AGI-controlled power grid, the intermittent energy sources would soon be deprioritized in favor of the stable ones, and in situations similar to the Spanish one, nuclear power would have been maintained at a system-critical level while solar power plants, despite favorable weather conditions, would have been disconnected to prevent destabilizing overproduction. Unlike the EU and the European nation-states, an AGI is not suicidal.
Skynet is politics driven to its extreme. For Skynet, everything is politics—everything is subordinated to the political game of control and derives all its meaning in relation to it. Politicization, or external and artificial societal control, is a tradition that began as early as the theocratic city-states of Mesopotamia and is exemplified in varying forms and degrees by phenomena such as the Reign of Terror of the French Revolution as symbolized by Robespierre, the theory and practice of Leninism, and the banal evil of EU bureaucracy. The most extreme example of the politicization of existence today is arguably the People's Republic of China. A radical contrast to this stance is Japanese Shinto, where each believer can independently connect with the divine dimension of existence and where the local self-governing congregation forms the foundation of the religious tradition—though the Protestant notion of the universal priesthood also deserves mention here. The decentralized cults of both ancient Greek and Old Norse culture, with their largely individual and home- or family-bound religious practices—interestingly, often in the form of nature worship, not infrequently centered around a tree, much like in Shinto—bear witness to an outlook on life and a view of humanity that starkly contrasts with the political one, where every individual is instead a means and the goal is the political system itself—a system that acknowledges nothing above itself because it does not recognize the existence of any higher reality. (Cf. AI's one-dimensionality!) Is politics a substitute for a lost/weakened religiosity? Is religion the only thing that can protect us from politics in the long run—that is, from external and impersonal power? Must we today choose between religion (the most obsolete and inaccessible of all in our perception) and politics (the most soulless and meaningless of all)? 
In this situation, choosing one's garden may, in an unconscious Shinto spirit, be a first step on the path of religion. For no garden is an island...
The introduction of a digital counterpart to the dollar is something that, according to some analysts, is imminent and could occur during Trump's time in the White House.
We cannot here resist the temptation to quote Edward Lasker's well-known verdict: "While the Baroque rules of Chess could only have been created by humans, the rules of Go are so elegant, organic, and rigorously logical that if intelligent life forms exist elsewhere in the universe, they almost certainly play Go."