Kristinn Thórisson on constructivist AI

Conversations

Dr. Kristinn R. Thórisson is an Icelandic Artificial Intelligence researcher, founder of the Icelandic Institute for Intelligent Machines (IIIM) and co-founder and former co-director of CADIA: Center for Analysis and Design of Intelligent Agents. Thórisson is one of the leading proponents of artificial intelligence systems integration. Other proponents of this approach include researchers such as Marvin Minsky, Aaron Sloman and Michael A. Arbib. Thórisson is a proponent of Artificial General Intelligence (AGI) (also referred to as Strong AI) and has proposed a new methodology for achieving artificial general intelligence. A demonstration of this constructivist AI methodology has been given in the FP-7-funded HUMANOBS project, where an artificial system autonomously learned how to conduct spoken multimodal interviews by observing humans participate in a TV-style interview. The system, called AERA, autonomously expands its capabilities through self-reconfiguration. Thórisson has also worked extensively on systems integration for artificial intelligence systems in the past, contributing architectural principles for infusing dialogue and human-interaction capabilities into the Honda ASIMO robot.

Kristinn R. Thórisson is currently managing director of the Icelandic Institute for Intelligent Machines and an associate professor at the School of Computer Science at Reykjavik University. He was a co-founder of the semantic web startup Radar Networks, and served as its Chief Technology Officer from 2002 to 2003.


Luke Muehlhauser: In some recent articles (1, 2, 3) you contrast “constructionist” and “constructivist” approaches in AI. Constructionist AI builds systems piece by piece, by hand, whereas constructivist AI builds and grows systems largely by automated methods.

Constructivist AI seems like a more general form of the earlier concept of “seed AI.” How do you see the relation between the two concepts?


Kristinn Thorisson: We sometimes use “seed AI”, or even “developmental AI”, to describe what we are doing – it is often difficult to find a good term for an interdisciplinary research program, because each term evokes different things in people’s minds depending on their background. There are subtle differences between the meanings and histories of these terms, and each brings along its own pros and cons.

I had been working on integrated constructionist systems for close to two decades, where the main focus was on how to integrate many things into a coherent system. When my collaborators and I started to seriously think about how to achieve artificial general intelligence we tried to explain, among other things, how transversal functions – functions of mind that seem to touch pretty much everything in a mind, such as attention, reasoning, and learning – could efficiently and sensibly be implemented in a single AI system. We also looked deeper into autonomy than I had done previously. This brought up all sorts of questions that were new to me, like: What is needed for implementing a system that can act relatively autonomously *after it leaves the lab*, without the constant intervention of its designers, is capable of learning a pretty broad range of relatively unrelated things on its own, and can deal with new tasks, scenarios and environments that were relatively unforeseen by the system’s designers?

My Constructionist Design Methodology (CDM) was conceived over a decade ago as a way to help researchers build big *whole* systems integrating a large number of heterogeneous cognitive functions. Over the past 10 years the CDM has proven excellent for building complex advanced systems – from AI architectures for interactive agents, such as the Honda ASIMO humanoid robot, to novel economic simulations. Since it combines a methodology with a software system for implementing large distributed complex systems with heterogeneous components and data, we naturally started by asking how the CDM could be extended to address the above issues. But no matter how I tried to tweak and re-design this framework/methodology, there seemed to be no way to do so. Primarily through my close collaboration with Eric Nivel, I soon saw that the CDM could not address the issues at hand. But it went further than that: It wasn’t only the CDM but *all methodology of that kind* that was problematic, and it wasn’t simply ‘mildly lacking’ in power, or ‘suboptimal’, but in fact *grossly insufficient* – along with the underlying assumptions that our past research approaches were based on, imported relatively wholesale from the field of computer science. Since the CDM inherited all the limitations of the software and engineering methodologies that are commonly taught in universities and used in industry, no methodology existed, to the best of our knowledge, that could move us toward AGI at what I considered an acceptable speed.

A new methodology was needed. And since we could see so clearly that the present allonomic methodologies – methods that assume a designer outside the system – are essentially ‘constructionist’, putting the system designer/researcher in the role of a construction worker, where each module/class/executable is implemented by hand by a human – our sights turned to self-constructive systems, producing the concept of constructivism. A self-constructive system is capable of bootstrapping itself to some extent, in a new environment, and learning new tasks that its designer did not anticipate. Such a system must of course be supplied with a “seed”, since without a seed there can be no growth, and the implication is then that the system develops on its own, possibly going through cognitive stages in the process. What we do is therefore seed AI, developmental AI, and constructivist AI. The principal concept here is that there are self-organizing principles at play, such that the system-environment couple allows the AI to grow in a reasonably predictable way from a small seed, according to the drives (top-level goals) it was given at the outset. I had been introduced to Piaget’s ideas early in my career, and the concept of constructivism seemed to me to capture the idea very well.
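To make the “seed” idea a little more concrete, here is a minimal Python sketch – purely illustrative, not AERA or any actual CAIM machinery, and with every class and function name invented for the example – of a hand-coded seed of drives plus a growth loop that expands the system’s models from experience:

```python
# Purely illustrative sketch (not AERA): a hand-coded "seed" of top-level
# drives, plus a growth loop that expands the system's models from experience.
# All names here are hypothetical.

import random


class Seed:
    """The tiny hand-coded starting point: drives plus room for acquired models."""

    def __init__(self, drives):
        self.drives = drives      # top-level goals, e.g. "minimize prediction error"
        self.models = []          # knowledge the system constructs on its own


class ConstructivistAgent:
    def __init__(self, seed):
        self.seed = seed

    def observe(self, environment):
        """Sample an (input, outcome) pair; the toy regularity is y = 2 * x."""
        x = random.choice(environment)
        return x, 2 * x

    def grow(self, environment, steps=100):
        """Expand the seed's models from experience, guided by its drives."""
        for _ in range(steps):
            x, y = self.observe(environment)
            candidate = ("times_two", lambda v: 2 * v)
            known = {name for name, _ in self.seed.models}
            # Keep a candidate model only if it predicts correctly and is new:
            # a crude stand-in for drive-guided self-construction.
            if candidate[1](x) == y and candidate[0] not in known:
                self.seed.models.append(candidate)
        return [name for name, _ in self.seed.models]


agent = ConstructivistAgent(Seed(drives=["minimize prediction error"]))
print(agent.grow(environment=list(range(10))))   # -> ['times_two']
```

The point of the sketch is only the division of labor: the designer writes the seed, the system itself does the constructing.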

What we are doing is *our* constructivism, which may or may not overlap with how others use that term – the association with Piaget’s work is at an abstract level, as a nod in his direction. One important difference from how others use the term, as far as I can see, is that while we agree that intelligent systems must be able to acquire their knowledge autonomously (as was Piaget’s main point), our emphasis is on *methodology*: We have very strong reasons to believe that at a high level there are (at least) two *kinds* of methodologies for doing AI, which we could call ‘constructionist’ and ‘constructivist’. Our hypothesis is that only if you pick the latter will you have a shot at producing an AGI worthy of the “G”. And at present, *all* the approaches proposed in AI, from subsumption to GOFAI, from production systems to reasoning systems to search-and-test, from BDI to sub-symbolic – whatever they are called and however you slice the field and its methodological and philosophical approaches – are of the constructionist kind. Our constructivist AI methodology – CAIM – is our current proposal for breaking free from this situation.


Luke: What is the technical content of CAIM, thus far?


Kris: As a methodology, a great deal of CAIM is perhaps closer to philosophy than tech-speak – but there are some fairly specific implications as well, which logically result from these more general concerns. Let’s go from the top down. I have already mentioned where our work on CAIM originated – where the motivation for a new methodology came from: We asked ourselves what a system would need to be capable of to be more or less (mostly more) independent of its designer after it left the lab – to be more or less (mostly more) *autonomous*. Clearly the system would then need to take on *at least* all the tasks that current machine learning systems and cognitive architectures require their designers to do after they have been implemented and released – but probably a lot more too. The former is a long list of things such as identifying worthy tasks, identifying and defining the necessary and sufficient inputs and outputs for tasks, training for a new task, and more. The latter – the list of *new* features that such a system would need and which virtually no system to date deals with – includes e.g. how to re-use skills (transfer of knowledge), how to do ampliative reasoning (unified deduction, induction, abduction), how to identify the need for sub-goal generation, how to properly generate sub-goals, etc., and ultimately: how to evaluate one’s methods for doing all of this and improve them. Obviously none of this is trivial.
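As a toy illustration of what “ampliative reasoning” refers to – this is not CAIM machinery, just three hypothetical functions applied to a single made-up rule – the three inference modes it unifies can be sketched like this:

```python
# Toy illustration only (not CAIM machinery): the three inference modes that
# "ampliative reasoning" unifies, applied to one invented rule.

RULE = ("rain", "wet_grass")   # reads: if rain, then wet_grass


def deduce(fact, rule):
    """Deduction: from the cause and the rule, conclude the effect."""
    cause, effect = rule
    return effect if fact == cause else None


def abduce(observation, rule):
    """Abduction: from the observed effect and the rule, hypothesize a cause."""
    cause, effect = rule
    return cause if observation == effect else None


def induce(observations):
    """Induction: from repeated co-occurrences, propose a general rule."""
    pairs = set(observations)
    return pairs.pop() if len(pairs) == 1 else None


print(deduce("rain", RULE))                                    # wet_grass
print(abduce("wet_grass", RULE))                               # rain
print(induce([("rain", "wet_grass"), ("rain", "wet_grass")]))  # ('rain', 'wet_grass')
```

The hard part, of course, is not any one of these in isolation but running them together, on the system’s own acquired knowledge, under time and resource constraints.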

So there are some high-level principles that we put forth, the first of which I will mention is the need to approach cognitive architectures *holistically*. This is much more difficult than it sounds, which is why nobody really wants to take it on, and why computer science in general still shies away from it. But it is necessary due to the nature of complex systems that implement complex functions coupled via a large number of heterogeneous interactive connections: Such systems behave in very complex ways when you perturb them, and it becomes a race with combinatorics if you try to uncover their workings via standard experimental designs, by tweaking x and observing the effect, tweaking y and observing again, etc. As Newell famously noted in his paper of that title, you can’t play 20 questions with nature and win. When trying to understand how to build a system with a lot of complex interacting functions (‘function’ having the general meaning, not the mathematical one) you must take all the major factors, operations and functions into account from the outset, because if you leave any of them out the whole thing may in fact behave like a different (inconsistent, dysfunctional) system entirely. One such thing that is typically ignored – not just in computer science but in AI as well – is time itself: In the view of CAIM, you cannot and must not ignore such a vital feature of reality, as time is in fact one of the key reasons why intelligence exists at all. At the high level CAIM tells you to make a list of the *most* important features of (natural) intelligences – including having to deal with time and energy, but also with uncertainty, lack of processing power, and lack of knowledge – and from this list you can derive an outline of the requirements for your system.

Now, turning our attention to the lower levels, one of the things we – and others – realized is that you need to give a generally intelligent system a way to inspect its own operation, to make it capable of *reflection*, so that it can monitor its own progress as it develops its processes and skills. There are of course programming languages that allow you to implement reflection – Lisp and Python being two examples – but all of these are severely lacking in other aspects important to our quest for general intelligence, a primary one being that they do not make time a first-class citizen. This is where adopting CAIM steers you in a somewhat more technical direction than many other methodologies do: It proposes new principles for programming such reflective systems, where time is at the core of the language’s representation, and the granularity of an “executable semantic chunk” must be what we refer to as “pee-wee size”: small enough that its execution time is highly consistent and predictable, and flexible enough that larger programs can be built up from such chunks. We have built one proto-architecture with this approach, the Autocatalytic Endogenous Reflective Architecture (AERA). These principles have carried us very far in that effort – much further than I would have predicted based on my experience building and re-building other software architectures – and it has been a pleasant surprise how easy it is to expand the current framework with more features. It really feels like we are on to something. To take an example, the concept of curiosity was not a driving force or principle of our efforts, yet when we tried to expand AERA to incorporate such functionality at its core – in essence, the drive to explore one’s acquired knowledge, to figure out “hidden implications” among other things – it was quite effortless and natural. We are seeing very similar things – although this work is not quite as far along yet – with implementing advanced forms of analogy.
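A rough, hypothetical sketch of those two ideas together – reflection over one’s own operation, and time as a first-class part of the representation – might look like the following. None of this is AERA’s actual representation; every name in it is invented for illustration.

```python
# A minimal sketch, not AERA's actual representation: "pee-wee"-sized executable
# chunks that carry an explicit time budget, plus a reflective trace the system
# itself can query. Every name here is hypothetical.

import time
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class PeeWee:
    """A tiny executable chunk with a predictable time budget (seconds)."""
    name: str
    fn: Callable
    budget: float = 0.001


@dataclass
class ReflectiveRuntime:
    trace: list = field(default_factory=list)   # the system's record of itself

    def run(self, chunk, *args):
        start = time.monotonic()
        result = chunk.fn(*args)
        elapsed = time.monotonic() - start
        # Time is recorded with every execution, so later reasoning can
        # inspect how the system itself has been performing.
        self.trace.append({"chunk": chunk.name, "elapsed": elapsed,
                           "over_budget": elapsed > chunk.budget})
        return result

    def overruns(self):
        """Reflection: the system querying its own operational history."""
        return [entry for entry in self.trace if entry["over_budget"]]


rt = ReflectiveRuntime()
rt.run(PeeWee("add", lambda a, b: a + b), 2, 3)
rt.run(PeeWee("slow", lambda: sum(range(10**6)), budget=1e-9))
print(rt.overruns())   # the "slow" chunk exceeded its time budget
```

The design choice the sketch is meant to convey is simply that timing data is not an afterthought bolted onto a profiler; it is part of what every chunk reports about itself, so the system can reason over it.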


Luke: Among AI researchers who think regularly not just about narrow AI applications but also about the end-goal of AGI, I observe an interesting divide between researchers who think in a “top down” manner and those who think in a “bottom up” manner. You described the top-down method already: think of what capabilities an AGI would need to have, and back-chain from there to figure out what sub-capabilities you should work toward engineering to eventually get to AGI. Others think a bottom-up approach may be more productive: just keep extending and iterating on the most useful techniques we know today (e.g. deep learning), and that will eventually get us to AGI via paths we couldn’t have anticipated if we had tried to guess what was needed from a top-down perspective.

Do you observe this divide as well, or not so much? If you do, then how do you defend the efficiency and productivity of your top-down approach to those who favor bottom-up approaches?


Kris: For any scientific goal you may set yourself you must think about the scope of your work, the hopes you have for finding general principles (induction is, after all, a key tenet of science), and the time it may take you to get there, because this has an impact on the tools and methods you choose for the task. Like in any endeavor, it is a good idea to set yourself milestones, even when the expected time for your research may be years or decades – some would say that is an even greater reason for putting down milestones. We could say that CAIM addresses the top-down and middle-out parts of that spectrum: First, it helps with assessing the scope of the work by highlighting some high-level features of the phenomenon to be researched / engineered (intelligence), and proposing some reasons why one approach is more likely to succeed than others. Second, it proposes mid-level principles that are more likely to achieve the goals of the research program than others – such as reflection, system-wide resource management, and so on. With our work on AERA we now have a physical incarnation of those principles, firmly grounding CAIM in a control-theoretic context.

The top-down / bottom-up dimension is only one of many with a history of importance to the AI community, including symbolic versus sub-symbolic (or non-symbolic / numeric), self-bootstrapped knowledge versus hand-coded, reasoning-based versus connectionist-based, narrow-and-deep versus broad-and-shallow, few-key-principles versus hodgepodge-of-techniques (“the brain is a hack”), must-look-at-nature versus anything-can-be-engineered, and so on. All of these vary in their utility for categorizing people’s views, and with regard to their importance for the subject matter we can say with certainty that some of them are less important than others. Most of them, however, are like outdated political categories: they lack the finesse, detail, and precision to really help move our thinking along. In my mind the most important thing about the top-down versus bottom-up divide as you describe it is that a bottom-up approach without any sense of scope, direction, or proto-theory is essentially no different than a blind search. And any top-down approach without some empirical grounding is philosophy, not science. Neither extreme is bad in and of itself, but let’s try not to confuse them with each other, or with informed scientific research. Most of the time reality falls somewhere in between.

Of all possible approaches, blind search is just about the most time-consuming and least promising way to do science. Some would in fact argue that it is for all practical purposes impossible. Einstein and Newton did not come up with their theories through blind, bottom-up search; they formulated a rough guideline in their heads about how things might hang together, and then they “searched” the very limited space of possibilities thus carved out. You could call these proto-theories, meta-theories, high-level principles, or assumptions: the guiding principles that a researcher has in mind when he or she tries to solve unsolved problems and answer unanswered questions. In theory it is possible to discover how complex things work by simply studying their parts. But however you slice this, eventually someone must put the descriptions of these isolated parts together, and if the system you are studying is greater than the sum of its parts, well, then someone must come up with the theory for how and why they fit together the way they do.

When we try to study intelligence by studying the brain, this is essentially what we get: it is one of the worst cases of the curse of holism – that is, when there is no theory or guiding principles the search is more or less blind. If the system you are studying is large (the brain/mind is), has principles operating on a broad range of timescales (the brain/mind does), and is based on a multitude of physical principles (as the mind/brain is), then you will have a hell of a time putting all of the pieces together into a coherent explanation of the macroscopic phenomenon you are trying to figure out, once you have finished studying the pieces you originally chose to study in isolation. There is another problem that is likely to crop up: How do you know when you have figured out all the pieces, when you don’t really know what the pieces are? So – you don’t know when to stop, you don’t know how to look, and you don’t know how to put your pieces together into sub-systems. The method is slowed down even further because you are likely to get sidetracked, and worse, you don’t actually know when you are sidetracked because you couldn’t know up front whether the sidetrack is actually a main track. For a system like the mind/brain – of which intelligence is a very holistic emergent property – this method might take centuries to deliver something along the lines of an explanation of intelligence and human thought.

This is why methodology matters. The methodology you choose must be checked for its likelihood of helping you with the goals of your research – of helping you answer the questions you are hoping to answer. In AI many people seem not to care; they may be interested in the subject of general intelligence, or human-like intelligence, or some flavor of intelligence of that sort, but they pick the nearest available methodology – the one produced by the computer science community over the past few decades – and cross their fingers. Then they watch AI progress decade by decade, and feel that there are clear signs of progress: In the 90s it was Deep Blue, in the 00s it was the robotic vacuum cleaner, in the 10s it was IBM Watson. And they think to themselves “yes, we’ll get there eventually – we are making sure and steady progress”. It is like the Gary Larson cartoon with the cows practicing pole vaulting, one of them exclaiming “Soon we’ll be ready for the moon!”.

Anyway, to get back to the question, I do believe in the dictum “whatever works” – i.e. bottom-up, top-down, or a mix – if you have a clear idea of your goals, have made sure you are using the best methodology available, for which you must have some idea of the nature of the phenomenon you are studying, and take steps to ensure you won’t get sidetracked too much. If no methodology exists that promises to get you to your final destination you must define intermediate goals, which should be based on rational estimates of where you think the best available methodology is likely to land you. As soon as you find some intermediate answers that can help you identify what exactly are the holes in your methodology you should respond in some sensible way, by honing it or even creating a brand new one; whatever you do, by all means don’t simply fall so much in love with your (in all likelihood, inadequate) methodology that you give up on your original goals, like much of the AI community seems to have done!

In our case what jerked us out of the old constructionist methodology was the realization that to get to general intelligence you’d have to have a system that could more or less self-bootstrap, otherwise it could not handle what we humans refer to as brand-new situations, tasks, or environments. Self-bootstrapping requires introspection and self-programming capabilities, without which your system will not be capable of cognitive growth. Thorough examination of these issues made it clear that we needed a new methodology, and a new top-level proto-theory, that allowed us to design and implement a system with such features. It is not known at present how exactly these features are implemented in either human or animal minds, but this was one of the breadth-first items on our “general intelligence requires” list. Soon after this came the realization that it is difficult to imagine a system with those features that doesn’t have some form of attention – we also call it resource management – and a very general way of learning pretty much anything, including about its own operation.

This may seem like an impossible list of requirements to start with, but I think the “inventor’s paradox” works in our favor: sometimes piling on more constraints makes what used to seem complex suddenly simpler. We started to look for ways to create the kind of controller that could be imbued with those features, and we found one by taking a ‘pure engineering’ route: We don’t limit ourselves to the idea that “it must map to the way the brain seems (to us, now) to do it”, or any other such constraint, because we put engineering goals first, i.e. we targeted creating something with potential for practical applications. Having already obtained very promising results that go far beyond the state of the art in machine learning, we are still exploring how far this new approach will take us.

So you see, even though my concerns may seem to be top-down, there is much more to it; my adoption of a radically different top-level methodology has much more to do with clarifying the scope of the work, trying to set realistic goals and expectations, and going from there, looking at the building blocks as well as the system as a whole – and creating something with practical value. In one sentence, our approach is somewhat of a simultaneous “breadth-first” and “top-to-bottom” – all at once. Strangely enough this paradoxical and seemingly impossible approach is working quite well.


Luke: What kinds of security and safety properties are part of your theoretical view of AGI? E.g. MIRI’s Eliezer Yudkowsky seems to share your broad methodology in some ways, but he emphasizes the need for AGI designs to be “built from the ground up” for security and safety, like today’s safety-critical systems are — for example autopilot software that is written very differently from most software so that it is (e.g.) amenable to formal verification. Do you disagree with the idea that AGI designs should be built from the ground up for security and safety, or… what’s your perspective on that?


Kris: I am a big proponent of safety in the application of scientific knowledge to all areas of life on this planet. Knowledge is power; scientific knowledge can be used for good as well as evil – this I think everyone agrees with. In my opinion, since there is a lot more that we don’t know than that we know and understand, caution should be a natural ingredient in any application of scientific knowledge in society. Sometimes we have a very good idea of the technological risks while suspecting certain risks in how the technology will be managed by people, as is the case with nuclear power plants, and sometimes we really understand neither the technological implications nor the social management processes, as when genetically engineered self-replicating systems (e.g. plants) are released into the wild – the potential interactions of such a technology with the myriad existing biological systems out there that we don’t understand are staggering, and their outcome is thus impossible to predict. Since there is generally no way for us to grasp even a tiny fraction of the potential implications of releasing, say, a self-replicating agent into the wild, genetic engineering is a greater potential threat to our livelihood than nuclear power plants. However, both have associated dangers, and both have their pros and cons.

Some people have suggested banning certain kinds of research or exploration of certain avenues and questions, to directly block off the possibility of creating dangerous knowledge in the first place. The argument goes: if no one knows it, it cannot be used to do harm. This purported solution is not practical, however, as the research avenue in question must be blocked everywhere to be effective. Even if we could institute such a ban in every country on Earth, compliance could be difficult to ensure. And since people are notoriously bad at foreseeing which avenues of research will turn out to bring benefits, a far better approach is to give scientists the freedom to select the research questions they want to try to answer – as long as they observe general safety measures, of course, as appropriate to their field of inquiry. Encouraging disclosure of research results funded by public money, e.g. European competitive research grants, NIH grants, etc., is a sensible step to help ensure that knowledge does not sit exclusively within a small group of individuals – a situation that generally increases the opportunity for (mis)use in favor of one group of people over another.

Rather than banning the pursuit of certain research questions, the best way to deal with the dangers resulting from knowledge is to focus on its application: making, for instance, the use of certain explosives illegal or strictly conditional, and making production facilities for certain chemicals, uranium, etc. conditional on the right permits, regulation, inspection, and so on. This may require strong governmental monitoring and supervision, and effective law enforcement, which have associated costs, but this approach has built-in transparency and has already proven practical.

I think artificial intelligence is not at the maturity stage of either nuclear power or genetically engineered organisms. The implications of applying AI in our lives are thus fairly far from the kinds of dangers posed by either of those, or comparable technologies. The dangers of applying current and near-future AI in some way in society are of the same nature as the dangers inherent in firearms, power tools, explosives, armies, computer viruses, and the like. Current AI technology can be used (and misused) in acts of violence, in breaking the law, for invasion of privacy, or for violating human rights and waging war. If the knowledge of how to use AI technology is distributed unevenly among arguing parties, e.g. those at war – even the technology available now – it could give the knowledgeable party an upper hand. But knowledge and application of present-day AI technology is unlikely to count as anything other than one potential make-or-break factor among many. That being said, of course this may change, even radically, in the coming decades.

My collaborators and I believe that as scientists we should take any sensible opportunity to ensure that our own research results are used responsibly. At the very least we should make the general populace, political leaders, etc., aware of any potential dangers that we believe new knowledge may entail. I created a software license clause to this end, which I prefer to attach to any and all software that I make available; it states that the software may not be used for invasion of privacy, violation of human rights, causing bodily or emotional distress, or for purposes of committing or preparing for any act of war. The clause, called the CADIA Clause (after the AI lab I co-founded), can be appended to any software license by anyone – it is available on the CADIA Website. As far as I know it is one of very few, if not the only one, of its kind. It is a clear and concise ethical statement on these matters. While seemingly a small step in the direction of ensuring the safe use of scientific results, it is to my mind quite odd that more such statements and license extensions don’t exist; one would think that large groups of scientists all over the planet would already be taking steps in this direction.

Some have speculated, among them the famed astrophysicist Stephen Hawking, that future AI systems, especially those endowed with superhuman cognitive powers, may quite possibly pose the biggest threat to humanity that any invention or scientific knowledge ever has. The argument goes like this: Since superhuman AIs must be able to generate sub-goals autonomously, and of course the goals of superhuman AIs will not be hand-coded (unlike virtually 100% of all software created today, including all AI systems in existence), we cannot directly control what sub-goals they may generate; hence we cannot ensure that they behave safely, sensibly, or in any predictable way at all. Such an argument may have some relevance to certain system-development approaches currently underway. However, I believe – based on the evidence I have been exposed to so far – that the fear stems from what I would call a reasonable induction based on incorrect premises. Current methodologies in AI, other than those that my group employs, produce software whose nature is identical to the operating system on your laptop and mobile phone: It is a hand-crafted artifact with no built-in resilience to perturbations to speak of, largely unpredictable responses to unforeseen and unfamiliar input, no self-management mechanisms, no capabilities for cognitive growth, etc. In fact we could say, in short, that such systems are not really intelligent, at least not in the sense that a superhuman intelligence must be. The point is that existing methods for building any kind of safeguard into these systems are very primitive, to say the least, along the lines of the “safeguards” built into nuclear power plant software: These are safeguards invented and implemented by human minds, and limited by the humans who design them. And as we well know, it is difficult for us to truly trust systems built this way. So people tend to think that future superhuman AIs will inherit this trait. But I don’t think so, and my collaborators and I are working on an alternative and pretty interesting new premise for speculating about the nature of future superhuman intelligences, and their inherent pros and cons.

Although our approach has a lot of low-level features in common with organic processes, it is based on explicit deterministic logic, as opposed to largely impenetrable sub-symbolic networks. It does not suffer from the same kind of unpredictability as, say, a new genetically engineered plant that is released into the wild, or an artificial neural net trained on a tiny subset of the inputs it will encounter when deployed. Our system’s knowledge – and I am talking about AERA now – grows during its interaction with its environment under guidance from its top-level goals, or drives, given to it by the programmers. It has built-in self-correcting mechanisms that go well beyond anything implemented (yet) in everyday software systems, and even beyond those still in the laboratory belonging to the class of state-of-the-art “autonomic systems”. Our system is capable of operating at a meta-level, from hand-coded meta-goals, based on self-organizing principles that are very different from what has been done before. In our approach the autonomous system is capable of the same sort of high-level guidance that we see helping biological systems survive; when turned “upside-down” these mechanisms result in the inverse of self-preservation-at-any-cost – a kind of environment-preservation – making them conservative and trustworthy to an extent that no nuclear power plant, or genetically engineered biological agent in the wild, could ever reach using present engineering methodologies. So, we have invented not only the first seed-based AI system but possibly also a new paradigm for ensuring the predictability of self-expanding AIs, as we see no relevance of the concerns fielded by the more pessimistic researchers to our work. That being said, I should emphasize that we are right in the middle of this research, and although we have a seemingly predictable, self-managing, autonomous system on our hands, much work remains to be done to explore these and other related issues of importance. Whether our system can reach superhuman, or even human, levels of intelligence is completely unclear – most would probably say that our chances are slim, based on progress in AI so far, which would be a fair assessment. But it cannot be completely precluded at this stage. The software resulting from our work, by the way, is released under a BSD-like license with the CADIA Clause.


Luke: You write that “the fear [expressed by Hawking and others] stems from… incorrect premises.” But I couldn’t follow which incorrect premises you were pointing to. Which specific claim(s) do you think are incorrect?


Kris: Keep in mind that this discussion is still highly speculative; there are lots of gaps in our knowledge that must be filled in to imagine the still very hypothetical kinds of superhuman intelligences we think may spring to life in the future.

One underlying and incorrect premise is to think that the kind of system necessary and sufficient to implement superhuman intelligence will be cursed with the same limitations and problems as the systems created with today’s methods.

The allonomic methodologies used for all software running on our devices today produce systems that are riddled with problems – primarily fragility, brittleness, and unpredictability – stemming from their strict reliance on allonomically infused semantics, that is, operational semantics coming strictly from outside the system, from the human designer. This results in system unpredictability of two kinds.

First, large complex systems designed and written by hand are bound to contain mistakes in both design and implementation. Such potential inherent failure points, most of which have to do with syntax rather than semantics, will only show up when the system is in a particular state in the context of a particular state of the environment in which it is operating. And since these points of failure can be found at any level of detail – many of them will in fact be at very low levels of detail – the values of the system-environment state pair may be very specific, and thus the number of system-state / environment-state failure pairs may be enormous. To ensure the reliability of a system of this nature, our only choice is to expose it to every potential environmental state it may encounter, which for a complex system in a complex environment is prohibitive due to the combinatorial explosion. We do this for airplanes and other highly visible systems whose failures would obviously be fatal, but for most software this is not only cost-prohibitive but virtually impossible. In fact, we cannot predict beforehand all the ways an allonomic system may fail, partly because the system’s fragility is largely due to syntactic issues, which in turn are an unavoidable side effect of any allonomic methodology.
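As a back-of-the-envelope illustration of that combinatorial explosion – the numbers below are entirely made up – consider how quickly exhaustive testing of system-state / environment-state pairs becomes infeasible:

```python
# Back-of-the-envelope illustration only; the numbers are entirely made up.
# Exhaustively testing every (system state, environment state) pair quickly
# becomes infeasible as the two state spaces grow.

system_states = 10**6         # hypothetical count of internal configurations
environment_states = 10**9    # hypothetical count of distinct input situations
tests_per_second = 10**4      # hypothetical test throughput

pairs = system_states * environment_states
years = pairs / tests_per_second / (60 * 60 * 24 * 365)
print(f"{pairs:.1e} pairs, roughly {years:,.0f} years to test exhaustively")
```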

The other kind of unpredictability also stems from exogenous operational semantics – the fact that the runtime operation of the system is a “blind” one, and hence the achievement of the system’s goal(s) is rendered inherently opaque and impenetrable to the system itself. A system that cannot analyze how it achieves its goals cannot propose or explore possible ways of improving itself. Such systems are truly blindly executed mechanical algorithms. If the software has no sensible, robust way to self-inspect – as no hand-written constructionist system to date has, since their semantics are strictly exogenous – it cannot create a model of itself. Yet a self-model is necessary if a system is to continuously improve in achieving its highest-level goals; in other words, self-inspection is a major way to improve the coherence of the system’s operational semantics.
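A minimal, hypothetical sketch of this idea – a system that builds a crude model of its own performance and uses it to choose how to act, with every name invented for the example and no connection to the AERA design – could look like this:

```python
# Hypothetical sketch only, not the AERA design: a system that records which
# of its own methods achieve a goal, building a crude self-model it then uses
# to choose how to act. All names are invented for the example.

import random


class SelfModelingSystem:
    def __init__(self, methods):
        self.methods = methods
        # The self-model: per-method success statistics acquired by observing
        # its own operation, rather than hand-coded by a designer.
        self.self_model = {name: {"tries": 0, "successes": 0} for name in methods}

    def attempt(self, goal_value):
        name = self.choose()
        result = self.methods[name]()
        stats = self.self_model[name]
        stats["tries"] += 1
        stats["successes"] += int(result == goal_value)

    def choose(self):
        """Prefer whichever method the self-model says works best; explore a little."""
        if random.random() < 0.1:
            return random.choice(list(self.methods))
        return max(self.self_model, key=lambda n: self.self_model[n]["successes"]
                   / max(1, self.self_model[n]["tries"]))


system = SelfModelingSystem({
    "reliable": lambda: 42,
    "flaky": lambda: random.choice([42, 0]),
})
for _ in range(200):
    system.attempt(goal_value=42)
print(system.self_model)   # the self-model reveals which method is trustworthy
```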

So self-inspection and self-modeling can increase system predictability at both the semantic and syntactic levels. Autonomous knowledge acquisition – constructivist-style knowledge creation, as opposed to hand-coded, expert-system-style knowledge – coupled with self-modeling ensures that the system’s operational semantics are native to the system itself, bringing its operation to a level of meaningfulness not yet seen in any modern software.

This is how all natural intelligences operate. Because you have known your grandmother your whole life, you can predict, with reasonable certainty, that she would not rob a bank, and that if she did, she would be unlikely to harm people unnecessarily while doing it; and however unlikely, you can conjure up potential explanations for why she might do either of those things, e.g. if she were to “go crazy”, the likelihood of which can in part be predicted from her family history, medication, etc. The nature of the system referred to as “your grandmother” is understood in the historical and functional context of systems like it – that is, other humans – and in light of her history as an individual.

Modern software systems are not like that. Because we have never really seen artificial systems of this kind, we have a difficult time imagining that such software could exist. We have not really seen any good demonstrations of meta-control or self-organization in artifacts, or seed-AI systems with explicit top-level goals. So we might be inclined to think that superhuman artificial intelligences will be like deranged humans with all the perils of modern software – and possibly new ones. The software of the future that has a sense of self will of course still be software, but it will not be like a “black-box alien” coming to Earth from outer space, or a mad human with brilliant but twisted thoughts, since we can – unlike with humans – open the hood and take a look inside. And the insides are unlikely to be like modern software, because they will operate on quite different principles. So rather than a completely independent and autonomous self-preserving entity behaving like a madman with delusions of grandeur, or a malevolent dictator intent on ensuring its own power and survival, future superhuman software may be more akin to an autonomous hammer: a next-generation tool with an added layer of possible constraints, guidelines, and limitations that gives its human designers yet another level of control over the system – one that allows them to predict the system’s behavior at lower levels of detail and, more importantly, at much higher levels than can be done with today’s software.


Luke: I’m not sure Hawking et al. are operating under that premise. Given their professional association with organizations largely influenced by the Bostrom/Yudkowsky lines of thought on machine superintelligence, I doubt they’re worried about AGIs that are like “deranged humans with all the perils of modern software” — instead, they’re probably worried about problems arising from “five theses”-style reasons (which also motivate Bostrom’s forthcoming Superintelligence). Or do you think the points you’ve made above undercut that line of reasoning as well?


Kris: Yes, absolutely.


Luke: Thanks, Kris!