secret/ blueprint/ path to AGI: novelty detection/ seeking

hi all. kurzweil wrote in 2005 “the singularity is near”. foreboding words! but today, still maybe more of a feeling than a fact. the AI field has definitely started to mature into a new era of steady advance in the last few years, with a burst of energy/ enthusiasm/ innovation heralded by the Google Deepmind acquisition in 2014 and other massive shifts toward increased investment by large corporations and govts. the Musk OpenAI initiative was announced in 2015.

the other massive milestone is the conquering of Go by Google’s AlphaGo in 2016. in late 2017, a new version, AlphaZero, was announced that plays at a level superior to AlphaGo (“beyond human grandmaster level”) after learning merely from the rules and reinforcement learning, ie with no human example play presented as training whatsoever. AlphaZero also plays grandmaster-level chess after learning “from scratch”. this breakthrough is not fully/ widely appreciated in some ways. it is the first case of a potentially more general algorithm for AI emerging from the previously relatively narrow study of AI in games.

AI has the terminology “weak AI” and “strong AI” for different levels of sophistication/ “ability”. more recently the term AGI, Artificial General Intelligence, has been coined.

many expert commentators have noted that there exists no general theory of the nature of intelligence, only many proliferating partial theories about “important aspects”. historically and continuing, in short, it is far from easy, rather extremely delicate/ subtle, to separate the causes and effects of intelligence. and humans are biased in their own (attempted) understanding. from a future pov, likely some of the supposed understanding so far is actually highly veiled confusion.

this essay seeks to fill the vacuum of a glaring lack of a general theory of AI. it boldly but systematically proposes a general mechanism for AGI. its at times a sketch, a roadmap, a blueprint. it necessarily varies between rough and clear. it builds on existing knowledge, but in a distinctly different way, with a key twist. it is mostly formulated using metaphors and analogies, not so much with technical content.

however, the expectation/ assertion is that top researchers adhering to/ following these ideas will lead to AGI, and anyway that they are moving both generally and specifically in this direction even if not at all influenced by this particular analysis. it aims to be “paradigm shifting”. a few scattered top researchers are already nearby/ on the scent of this trail, so to speak; this essay aims to focus the semi-random walk into more definitive directions. speaking with the goal of achieving this “momentous/ ancient dream of humanity”, hopefully some unforeseeable combination/ confluence in the near future will cause it all (ie worldwide AI research) to shift out of the unmistakable underlying “mere” incrementalism bordering on directionlessness and “catch fire” or “reach critical mass” wrt a key related research insight/ milestone and/ or breakthrough, with this essay as a personal attempt/ contribution toward that.

there is a more abstract section followed by a very practical/ pragmatic section with specific short-term action items that will be almost fully recognizable by/ within reach of a talented AI engineer: well-defined milestones, some likely achievable in the short term, ie within a few years, to demonstrate the overall viability/ correctness of the research program agenda, nevertheless ofc highly depending on community recognition/ drive/ dedication/ scale etc.

obviously mere words are not sufficient to evoke AGI, but intelligent words combined with innovative actions can drag the future into the present! so it is also something of a “call to arms”.

⭐ ⭐ ⭐

1st, define intelligence in a way that has been considered at times previously but not exactly focused on.

intelligence is an emergent property arising from the interaction between a learning agent and its complex environment.

now, to unroll the implications of this definition, which is fairly simple on the surface but meaningfully nuanced/ intricate below it. it is tempting to associate intelligence with the agent itself, but according to this definition, intelligence cannot really be exhibited without an accompanying environment. an agent might have the potential for intelligence, but without the environment, it is not capable of expressing this potential.

therefore the “seed” analogy is quite apropos. the emergent property of intelligence is similar to growth, and the same term is used for biological plants as for human intellectual/ cognitive development. an agent that stops learning can be said to lack (“further”) intelligence. moreover, there are many other strong analogies, eg “the tree of knowledge” for intellectual domains.

the AI field is facing the strange paradox of trying to understand intelligence in the behavior of machines. the next analogy will advance this case. what is a simple, or the simplest, device that fulfills the above definition in a rudimentary way?

consider a thermostat. it responds to the environment (temperature). the thermostat does nothing if the external temperature is constant. it “acts” when it “senses” a change in the temperature. it has a sort of “internal drive” to regulate the temperature, either through effecting heating or cooling or both. the thermostat might even have a buffering-type aspect in that it doesnt immediately kick in, depending on the current temperature trend/ history.

the thermostat therefore has really no function outside of a dynamic environment. furthermore, a more “intelligent” thermostat might “tune into” more complex aspects of its environment, such as a daily or yearly cycle. in responding ahead of time to an expected change, it has a kind of “anticipation”.
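to make the metaphor concrete, here is a minimal sketch in python (with made-up setpoint/ band/ window values, not a prescription from this essay) of a thermostat with hysteresis (“buffering”) and a crude trend-based “anticipation”:

```python
class Thermostat:
    """toy thermostat: hysteresis ("buffering") plus trend-based anticipation.
    setpoint/ band/ window values are illustrative assumptions, not tuned."""

    def __init__(self, setpoint=20.0, band=1.0, window=5):
        self.setpoint = setpoint   # target temperature (deg C)
        self.band = band           # hysteresis band: ignore small drift around the target
        self.window = window       # how much recent history to remember
        self.history = []          # the device's tiny "memory" of the environment

    def sense(self, temperature):
        self.history.append(temperature)
        self.history = self.history[-self.window:]

    def trend(self):
        # crude "anticipation": average change per reading over the recent window
        if len(self.history) < 2:
            return 0.0
        return (self.history[-1] - self.history[0]) / (len(self.history) - 1)

    def act(self):
        current = self.history[-1]
        predicted = current + self.trend()   # respond to where the temperature is heading
        if predicted < self.setpoint - self.band:
            return "heat"
        if predicted > self.setpoint + self.band:
            return "cool"
        return "idle"   # constant environment -> the thermostat does nothing

# a steadily falling temperature: the trend term makes the device act at 19.2,
# before the 19.0 edge of the band is actually crossed
t = Thermostat()
for reading in [21.0, 20.4, 19.8, 19.2]:
    t.sense(reading)
    print(reading, t.act())
```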

the next basic metaphor that illustrates a simple intelligent agent is that of a maze. mice/ rats can explore mazes and find their way out of them, remembering their structure. there are purposeful aspects to this exploration, such as finding food, and less purposeful aspects, such as mere exploration that seems to help them establish a mental map of the territory. here the concept of familiar vs unfamiliar territory comes into play.

therefore, mapmaking is a key metaphor for intelligence. except, the map is not an object outside the agent, it is “contained” in the mental encoding of the agent in a symbolic way. such as biological neurons, or some other substrate such as artificial digital neurons, or something else entirely. some encoding.
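a minimal sketch of the mapmaking metaphor, under illustrative assumptions (an arbitrary toy maze and a visit-count notion of novelty, neither prescribed by the essay): the agents “intelligence” lives entirely in the internal map it builds while preferring unfamiliar cells.

```python
# 0 = open cell, 1 = wall; the layout is an arbitrary illustrative maze
MAZE = [
    [0, 0, 1, 0],
    [1, 0, 1, 0],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]

def neighbors(pos):
    r, c = pos
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(MAZE) and 0 <= nc < len(MAZE[0]) and MAZE[nr][nc] == 0:
            yield (nr, nc)

def explore(start=(0, 0), steps=40):
    mental_map = {start: 1}          # the internal "map": cell -> visit count
    pos = start
    for _ in range(steps):
        options = sorted(neighbors(pos), key=lambda p: mental_map.get(p, 0))
        pos = options[0]             # prefer the least-visited (most novel) cell
        mental_map[pos] = mental_map.get(pos, 0) + 1
    return mental_map

# the "familiar territory" ends up covering all reachable open cells
print(sorted(explore()))
```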

⭐ ⭐ ⭐

the next analogy to consider, more abstractly, is a tree. a tree cannot exist without an environment. a tree grows from a seed. some trees live very long, almost indefinitely. if the tree is dead, it is not growing. if the tree is not growing, it is effectively dead. the roots of the tree sustain/ anchor the tree and draw nutrients out of the environment. the tree is actually constructed out of its environment, so to speak.

this analogy is more abstract, but the human nervous system and that of other animals bears a strong resemblance to a tree in many ways. there is “pruning” of neurons at a certain stage of development. the brain exhibits “plasticity”, which is another word for growth, but a growth that is tied to its learning of the environmental structure.

the brain grows in response to its interaction with the complex environment. brain growth is limited in a limited environment. the environment is understood here as not merely elements contributing to the organisms physical survival, such as food. the brain grows in response to the conceptual environment: eg other interacting humans, their complex behaviors such as speech, emotions, etc., and other complex objects in the environment, eg natural objects or complex objects constructed by other humans, etc.

the human brain apparently consumes sensation, information, data almost in the way that the organism consumes food.

the human brain apparently digests this data in a key way. it finds trends, patterns, structure, encoded in memories. it maps the world. in psychology there are special loaded terms such as “frames” or “schemas” for these “pieces/ units of understanding”. the patterns are often embedded/ hierarchical such that patterns contain other patterns. an object has parts, but abstract ideas/ concepts have parts also. the brain data storage system blurs the distinction between physical parts and conceptual parts; it does not make a tight distinction between them.

⭐ ⭐ ⭐

the prior definition of intelligence is mostly uncontroversial and generally understood, and partial. in a way, it is only about half the picture. it is oriented around what is sufficient. now to revise it, and focus on what is necessary. this is the key part that has been eluding prior community focus/ awareness.

the prior definition mentions interaction. this word needs to be expanded on, and it contains the whole secret. 1st, it entails action. the agent takes action. but what guides this action? is there some kind of meaning, goal, or motive behind the action?

the only sensible answer, after throwing out all the ones that fail to be explanatory, is that

the agent acts on the environment in a way (or “ways”) that increases its knowledge.

this definition looks simple, yet it is extremely complex and multifaceted. it might seem “circular” but is really cyclical.

some may think that organisms “main” drives are for food or other aspects of (general) survival such as mating/ reproduction. these are of course key drives, but they dont necessarily entail significant intelligence. even extremely unintelligent organisms such as cells express and can accomplish these basic drives. significant intelligence can be employed to secure survival, but mere survival is not the primary aspect of intelligence.

the agent cannot increase its knowledge without having an effective way to store knowledge, and a way to identify new “findings” that dont fit into that knowledge. a/ the key word from psychology that is increasingly used in AI research is novelty. the agent must be able to detect novelty, and even more than this, focus on and seek it. in other words “novelty seeking” is at the core of intelligence!

there are related terms from AI research. “exploration” and “discovery” are used in a wide set of contexts, and evoke some of the wide/ variegated aspects of novelty seeking.

another key term is structure. the agent must be able to recognize and find different structure in the environment, and that structure is encoded in its memory. novelty is, roughly, “findings outside the known structure”.

the concept of novelty ties in with the growth idea. once a concept is “assimilated” it is no longer novel. therefore what is novel is a continuously moving target. its an endless cycle. moreover the intelligence of the agent is related to how much novelty it can find. an agent that has a low capacity to “encode environmental structure” will eventually not find as much novelty.
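as a toy illustration of “findings outside the known structure” and the moving-target/ assimilation cycle (the vector encoding and the distance threshold are illustrative assumptions, not the essays prescribed mechanism):

```python
import math

class NoveltyMemory:
    """toy memory of "known structure": a list of stored observation vectors.
    the threshold is an illustrative assumption, not a principled value."""

    def __init__(self, threshold=1.0):
        self.known = []
        self.threshold = threshold

    def novelty(self, obs):
        # distance to the nearest already-assimilated observation
        if not self.known:
            return float("inf")
        return min(math.dist(obs, k) for k in self.known)

    def assimilate(self, obs):
        score = self.novelty(obs)
        if score > self.threshold:
            self.known.append(obs)   # grow the known structure
        return score

m = NoveltyMemory()
print(m.assimilate([0.0, 0.0]))   # inf: everything is novel to a blank memory
print(m.assimilate([0.1, 0.0]))   # 0.1: close to known structure, not assimilated
print(m.assimilate([5.0, 5.0]))   # ~7.07: outside known structure, so assimilated
print(m.assimilate([5.0, 5.0]))   # 0.0: the same finding is no longer novel
```

once an observation is assimilated, repeating it scores zero novelty: the moving target in miniature.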

since this new ideology must be discriminated in some way from prior ideologies as a scientific research hypothesis, call it the Novelty Detection/ Seeking Theory.

⭐ ⭐ ⭐

another key concept from neurobiology that is used in AI is “feature detection”. the agent must learn to recognize features, and put them together into structures. structures are collections of features. structures and features are not defined exactly as in the classical sense; they are not invariably inanimate. the response of a ball to being kicked could be regarded in a sense as a “feature”, and a collection of such features could comprise a structure (similarly/ in roughly the sense that physics is “structured” by laws). another similar term, used with a specialized psychological meaning, is a “complex”.

maybe even more or most importantly, a response by another human to the entitys own actions (speech, writing etc) is a key feature of the environment, one tied to dominant human brain structures related to “socializing/ socialization/ society”.

a related area of neurobiological research is “neural darwinism”. this is some (strong) biological evidence for some of these ideas. basically, different neurons are recruited for different functions depending on environmental stimulus, and the greater the complexity of the stimulus, the greater recruitment or differentiation of neurons dedicated to that phenomenon.

a key related area of AI investigation/ research is “supervised vs unsupervised learning”. this dichotomy is both a bit accurate and a bit off from the pov of this new ideology. Novelty Detection/ Seeking is in some ways the ultimate in unsupervised learning. also there is no fixed “evaluation function” as with supervised learning. or rather, there is an evaluation function, but its not based on the latest “input data sample”: the evaluation is whether the agent is finding new novelty and growing its known structure.

related to storing structure is the topic of compression. an agent that can more effectively compress its symbolic structure representations will have a superior chance of storing more structure. compression also helps with novelty detection: if unimportant details are “thrown out” in the compression, the novelty detection functions more effectively.
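one hedged way to make the compression link concrete (a sketch, not the essays prescribed method): measure how many extra compressed bytes a general-purpose compressor like zlib needs to encode a new observation given everything seen so far. a small increment means the observation fits known structure; a large increment suggests novelty.

```python
import zlib

def novelty_bytes(history: bytes, observation: bytes) -> int:
    """extra compressed bytes needed to encode the observation given the history."""
    before = len(zlib.compress(history))
    after = len(zlib.compress(history + observation))
    return after - before

history = b"the cat sat on the mat. " * 20
print(novelty_bytes(history, b"the cat sat on the mat. "))   # small: fits known structure
print(novelty_bytes(history, b"xyzzy plugh quux!"))          # larger: novel material
```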

another related area of novelty measurement is “entropy”. this is a very complex topic that originated in physics but finds major application in computer science and increasing attention in machine learning theory. informally, entropy measures “disorder”. but order vs disorder can be very straightforward, as in physics or chemistry equations, or much more abstract. it appears that the ultimate abstraction of “entropy” would measure all possible structuredness in the environment, and violations of it are effectively “novelty”.
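informally, here is a minimal sketch of shannon entropy over an observed symbol stream; this is only one of many possible formalizations and is chosen purely for illustration. a highly structured (predictable) stream scores low, a disordered one scores high.

```python
import math
from collections import Counter

def entropy(symbols):
    """shannon entropy (bits per symbol) of an observed sequence."""
    counts = Counter(symbols)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(entropy("aaaaaaaa"))        # 0.0: perfectly ordered / fully predictable
print(entropy("abababab"))        # 1.0: simple alternating structure
print(entropy("a7f!q09zk2xw"))    # ~3.58: much closer to "disorder"
```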

an old related concept from centuries-old philosophy is that of the tabula rasa, literally “blank slate”. Novelty Detection/ Seeking is in a way the ultimate “tabula rasa” theory. it was long ago established that humans do not start as a tabula rasa in many important ways, but that is not an effective argument against the feasibility/ validity of the Novelty Detection/ Seeking theory.

a key related concept from machine learning is signal processing, and “finding signal in noise”. signal and noise are in the eyes of the beholder, so to speak, and novelty is the difference between them. compression also ties in with discarding noise.

returning to the tree analogy, there is some more to think about. complex human motor actions are stored in the motor cortex. why is there a specialized component for this? the motor cortex grows with more complex actions over time. the conclusion from Novelty Detection/ Seeking theory is that these are actions which, even though repeated, still lead to novel structures. for example, using words/ talking in a conversation, reading a book (mostly simple eye movements), watching a movie (again mostly simple eye movements), going to a class, going to work, etc…

ie it is reminiscent of the ways that both the roots of the tree and its branches grow over time. in this analogy the roots digging into the environment are the motor actions, and the branches growing into space are like the conceptual structures “extracted” from the environment. one supports the other. one is grounded in the other.

the Novelty Detection/ Seeking theory is a direct affront to one of the largest difficulties/ shortcomings of existing (narrow/ weak) AI (or maybe the core/ overarching deficiency), namely that it is highly domain specific. arguably the greatest shortcoming of AI gives rise to its key overarching principle. Novelty Detection/ Seeking is at an extreme, in a way, by nature/ definition nearly the exact opposite of domain-specificity. it asserts that possibly all domain structure can be acquired starting from scratch; so to speak, “thats where the magic happens”. aka “at long last, the mystery is revealed”!

⭐ ⭐ ⭐

the main objection to this theory is that it cant possibly explain intelligence. a simple counterexample that comes to mind might be speech recognition. how can an entity recognize speech only by pursuing novelty? yet it appears that this is how babies do indeed learn speech.

speech is a complex phenomenon contained in the environment, presented by other humans. it is learned through an interaction starting from individual words, in a vocalization-listening cycle. the words are used as building blocks in the complex structure of language. language is only one kind of structure “contained” in the environment (expressed by the people inhabiting it), but it is a shared structure.

nevertheless the speech objection is very important and forms the basis for the last/ most ambitious of the action items.

there is no question the speech hurdle is a key consideration and will eliminate many limited/ inferior Novelty Detection/ Seeking systems. but the general assertion of this ideology is that a sufficiently advanced Novelty Detection/ Seeking system does exist in theory, it just remains to be discovered/ isolated/ optimized/ perfected.

one might say that if the Novelty Detection/ Seeking theory can explain speech and language acquisition, then that is a very powerful and persuasive element of evidence in its favor.

another objection might be that Novelty Detection/ Seeking is not a major established AI theory in prior literature/ investigation. as the saying goes, “as designed”/ “thats not a bug, its a feature!”

another objection might be the classic “when you have a hammer, everything looks like a nail”. Novelty Detection/ Seeking is not yet a real hammer, its a theoretical one. admittedly some of this theory is likely to be off or too optimistic, but there seem to be no other viable/ plausible contenders for the epically ambitious goal of really explaining intelligence.

⭐ ⭐ ⭐

how exactly can this “novelty” be measured/ quantified? this is one of the main unknown areas of analysis, if not the central one. existing research, with its heavy emphasis on the less ambitious (but still difficult) supervised learning approach, has not gone in the direction of attempting to directly answer this critical/ pivotal/ central/ core question (aka “low hanging fruit”). however, just because it is difficult does not mean it is infeasible or impossible, and if the theory is to be believed, this particular investigation is indispensable/ utterly unavoidable on the path to achieving AGI.

on the other hand the very recent yet astonishing “paring down” of AlphaZero is already a very remarkable/ strong/ dramatic/ extraordinary step in this direction. the immediate breakthrough of AlphaZero already possibly signals a pivot in research direction and a potential large/ mass paradigm shift. and theres a massive edifice of highly related material from machine learning, signal processing, statistics, etc.; and another bold conjecture is that maybe even fairly unsophisticated metrics can possibly scale well. a few ideas for ANNs follow, but notice this theory is not specific to ANNs:

  • novelty is encoded in neuron weights. neurons with constantly varying weights have not converged to a structure. also, low weights that have little influence on the overall neuron function are more likely to be noise.
  • novelty is encoded in/ proportional to the total # of neurons and connections that are not random. (obviously merely having many neurons is not a measure of novelty, but higher novelty/ structure encoding capacity requires more neurons/ connections.)
  • but then, if neuron weights are not evolved based on gradient descent on an evaluation function, what is left? there are some ideas from self-organization working on entirely local rules, such as Hebbian learning (see the sketch after this list).
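as a minimal sketch of that last bullet (assuming a plain hebbian rule with weight decay; the learning rate, decay, and toy activity pattern are illustrative), weights strengthen between co-active units using purely local information, with no gradient descent on any external evaluation function:

```python
import random

def hebbian_step(weights, pre, post, lr=0.01, decay=0.001):
    """purely local update: strengthen weights[i][j] when pre[i] and post[j] fire together."""
    for i in range(len(pre)):
        for j in range(len(post)):
            weights[i][j] += lr * pre[i] * post[j] - decay * weights[i][j]
    return weights

random.seed(0)
w = [[0.0, 0.0] for _ in range(3)]            # 3 input units -> 2 output units
for _ in range(2000):
    a = float(random.random() < 0.5)
    b = 1.0 - a
    pre = [a, a, b]                            # inputs 0 and 1 fire together; input 2 fires when they dont
    post = [a, b]                              # toy "post-synaptic" activity
    w = hebbian_step(w, pre, post)
# connections between co-active units grow; connections between never-co-active units stay at zero
print([[round(x, 2) for x in row] for row in w])
```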

⭐ ⭐ ⭐

now, consider some practical experiments of varying difficulty to try to carry out/ prove the Novelty Detection/ Seeking theory (or from another pov, key/ central conjectures in disguise). these are carefully formulated to leverage existing state-of-the-art technology and knowledge, and yet push it dramatically beyond. in some ways this essay, while conceptually long in formulation (decades), is triggered by the latest Google AlphaZero breakthrough, less than a month old.

the 1st item seems closely within reach of almost existing technology.

Challenge 1. build a system that plays both Go and Chess at expert levels merely through a novelty detection/ seeking approach.

this challenge is inspired by the recent positive/ breakthrough results of AlphaZero which has almost already completely fulfilled the challenge. all that remains is to change this system “not very much” in two key ways:

  • the system should not start out with either the rules of Chess or Go in its “knowledge” but instead discover them through play/ exploration. the agent is aware only of whether a game ends, either by winning or losing, or by making an illegal play (eg leading to a loss)!
  • the system might use something similar to reinforcement learning, but it doesnt have any intrinsic evaluation functions that measure play quality/ winning possibility. instead it discovers those merely by exploring the “gamespace” ie by exploring “most” possible scenarios through novelty detection/ seeking.

it may seem like a radical assertion, or counterintuitive based on current scientific knowledge/ understanding, but the expectation of this theory/ research program is that even with these extremely limited starting conditions, the system/ goal is indeed feasible/ achievable/ within near-term grasp. the time is ripe!
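to make the spirit of Challenge 1 concrete at toy scale (emphatically not AlphaZero; the hidden game here is tic-tac-toe standing in for Go/ Chess, and the referee interface is an assumption), here is a sketch of an agent that does not know the rules, probes the least-tried move in each position, and receives only illegal/ ongoing/ win/ draw feedback:

```python
def referee(board, player, cell):
    """hidden game rules (tic-tac-toe standing in for go/ chess);
    only the outcome label leaks out to the agent."""
    if board[cell] != " ":
        return board, "illegal"
    board = board[:cell] + player + board[cell + 1:]
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    if any(board[a] == board[b] == board[c] == player for a, b, c in lines):
        return board, "win"
    if " " not in board:
        return board, "draw"
    return board, "ongoing"

def novelty_seeking_episode(tried):
    """the agent knows nothing of the rules: it probes the least-tried (most novel)
    action in each state and records the outcome it is told."""
    board, player = " " * 9, "x"
    while True:
        cell = min(range(9), key=lambda c: tried.get((board, c), 0))
        tried[(board, cell)] = tried.get((board, cell), 0) + 1
        board, outcome = referee(board, player, cell)
        if outcome == "illegal":
            continue                  # a discovered constraint: remembered, then move on
        if outcome in ("win", "draw"):
            return outcome            # terminal feedback is the only "knowledge" given
        player = "o" if player == "x" else "x"

tried = {}
results = [novelty_seeking_episode(tried) for _ in range(200)]
print(len(tried), "state-action pairs probed;", results.count("win"), "games ended in a win")
```

the point of the sketch is only the information flow: the rules stay hidden inside the referee, and exploration is steered purely by what has not been tried yet.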

Challenge 2. build a system where a humanoid figure learns to balance/ walk/ run based merely on novelty detection/ seeking.

again, Google is doing research in this area and has already made major progress using a virtual system with a physics engine. again this involves maybe “not major revision” to existing code, mainly taking out the supervised evaluation function that measures balance/ walking distance and replacing it with a novelty detection/ seeking system instead.

Challenge 3. build a system that learns to play video games merely through novelty detection/ seeking.

again, close to existing technology. existing systems use supervised learning based on game scores. can an agent discover/ learn to play a game without even “knowing” the concept of a game score? one might say metaphorically, “learning to play with both hands tied behind its back.” the theory suggests that this is indeed not only possible but expected. the prediction is that the agent will naturally seek more complex games based on building an internal map reflecting the external structure.
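one hedged way to implement “playing without knowing the score” (a sketch under assumptions: observations can be coarsened to hashable keys, and the environment exposes a minimal reset/ step interface; this is count-based exploration, an established stand-in for a full novelty detection/ seeking system, and the same intrinsic-reward substitution applies to the physics-engine challenges): replace the extrinsic game score with an intrinsic reward that is high for rarely seen observations.

```python
from collections import defaultdict

class NoveltyReward:
    """intrinsic reward: 1/sqrt(visit count) of a coarsened observation.
    the coarsening function is an illustrative assumption (eg downsample a frame)."""

    def __init__(self, coarsen=lambda obs: obs):
        self.counts = defaultdict(int)
        self.coarsen = coarsen

    def __call__(self, observation):
        key = self.coarsen(observation)
        self.counts[key] += 1
        return 1.0 / (self.counts[key] ** 0.5)

# usage sketch against a hypothetical env exposing reset()/ step(action):
#   reward_fn = NoveltyReward(coarsen=lambda frame: tuple(frame[::8]))
#   obs = env.reset()
#   for _ in range(100000):
#       action = agent.act(obs)
#       obs, _score, done, _info = env.step(action)   # the game score is ignored entirely
#       agent.learn(obs, reward_fn(obs))              # only novelty drives learning
#       if done:
#           obs = env.reset()
reward_fn = NoveltyReward()
print([round(reward_fn("screen_A"), 2) for _ in range(3)], round(reward_fn("screen_B"), 2))
```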

Challenge 4. an agent that passed Challenge 2 is given a bike, and it learns to ride the bike based on novelty detection/ seeking.

this again seems unrealistic, but the basic idea is that the bike is a foreign object, ie novel. pushing it is increasingly novel. pushing it off the ground, more so. balancing it, even more. mounting it, further. pushing the pedals, more so. etc. also, these challenges, while possible in real-world environments with eg robotics, can be entirely simulated without large computational expense, and are quite within the range of a desktop pc running a physics engine.

Challenge 5. this requires realtime technology; the prior ones do not. take the best performing Novelty Detection/ Seeking system from challenges 1-4, give it a vocal and hearing apparatus, and subject it to spontaneous speaking/ conversations with/ among human speakers (of whatever age/ background etc). the system should “learn” words and speak them, and possibly advance to greater areas of language acquisition/ utilization such as sentences/ question-answer/ conversations etc.

Challenge 5 is admittedly wildly ambitious, and today may sound like science fiction and implausible, or even worse, magical thinking, but successfully achieving challenges 1-4 will make 5 more conceivable/ believable/ within reach. (leading/ zen question: is primitive human speech/ language interaction more complex than grandmaster chess or go? dont forget that apes can do it, albeit not without inevitable controversy.) in the authors estimate, possibly existing supercomputer hardware technology is already sufficient; all that remains to be established is the exact/ tuned Novelty Detection/ Seeking code dynamics.

there are many more challenges to list, but it is not necessary to enumerate them right now after Challenge 5, because it seems quite plausible that very cleverly optimizing Challenge 5 will lead to AGI (cf Turing Test). again others will likely have different opinions at this time, but “rome wasnt built in a day” and “time will tell”.

 


16 thoughts on “secret/ blueprint/ path to AGI: novelty detection/ seeking”

  1. Francis

    Hello! Francis here,
    Very interesting article. I’ve been wondering about the issue since I learned that AI is now a software issue.

    I wanted to know your opinion about my opinion on the language issue. From my point of view language is a tool to reach specific goals, such as the parents’ approval in the case of a child, related to its survival. For a child, language is a way to assure its survival, so wouldn’t the objective be missing in order to develop autonomous speech learning? Which would imply the need to evaluate, for example, syntax. Other primates share the survival drive; it is that which impels them to develop such a skill.
    I imagine the robot throwing out random words until it says one of the desired words from a list (maybe “please”, to later assemble “please plug in my cord”). Alternative words could be allowed (maybe “would”, to later “would you plug in my cord”). Babies pronounce random sounds until someone validates some sounds. Later they are complexified to finally base them on abstract structures.

    Thank you!

    1. vznvzn Post author

      the essay downplays the significance of the survival drive for an AGI. its definitely present in primates, but eg in human babies, survival is generally assured even if they dont speak. yes, you have sketched out (as the essay/ theory predicts) that the AGI will tend to reinforce its acquisition of vocabulary based on “response in the environment” to its own “babblings”, ie other humans reacting to its words. responses can include others facial expressions, gestures, actions, speech responses, etc.

  2. Philippe

    Hi,
    I could not agree more that the first thing to do is to try to have a usable definition of intelligence (if possible, though I doubt it because it seems to me very subjective) or at least some examples or principles to define boundaries allowing us to identify what an AGI should exhibit as « intelligence ». In fact, I think we cannot tackle the AGI problem without knowing what the problem is, and we need to try to have the clearest definition to solve the right problem.
    Let’s begin with your hypothesis : « intelligence is an emergent property arising from the interaction between a learning agent and its complex environment ».
    First : my main source of interest and inspiration is experimental cognitive psychology and I think we cannot define « intelligence » without referring to our only example: animals (humans included). So, I agree with you in a certain way when you write « the AI field is facing the strange paradox of trying to understand intelligence in the behavior of machines ».
    I do agree with you about the need for an environment but I will add that the agent accesses its environment through different types of sensors giving quantities on various physical properties of the external environment.
    Let’s use your thermostat example. The thermostat has one goal : maintain temperature in a certain range. The thermostat collects quantities evolving over time on a specific physical property (external temperature); the thermostat does nothing if the external temperature is constant, so it acts correctly when the external data do not exhibit any change. We could say that its behavior is adequate in that case, it does what it has to do, its internal representation and related actions are correct. If the thermostat notices a change in the temperature, it tries to regulate the temperature either through effecting heating or cooling or both. In this case, its internal representation and the corresponding actions are accurate too. And you are right, we could add memory to the thermostat to manage more complex cases.
    So is the thermostat intelligent ? I think you agree with me when I say : No. We could add very complex algorithms to the thermostat but these algorithms are the result of the experiences of an external agent : you and me. The thermostat cannot adapt its internal representation and actions according to new experiences. So our thermostat should be able to store experiences and, according to its new experiences (it needs a memory to store them), it should build new strategies to regulate the temperature by itself (but this implies a cost function related to the new strategies developed, else why should it do so ?). These « experiences » are data collected through time by sensors from the external environment, and specific structures of these data (combination of quantity and timing) should lead to specific actions.
    So, let’s suppose our thermostat is able to :
    – collect data from the external world and store them (gather experiences),
    – predict what will happen in specific conditions of the external world and do the right action (do nothing or do something which are two pertinent actions) to achieve its goal (and minimize or maximize its cost function).
    In that case, is the thermostat intelligent ? I will be more inclined to say YES, maybe.
    We seem to be close: you suppose the need for an “anticipation” and I talk about predictions (that is a cognitive psychology theory to explain brain function), you suppose internal mapmaking is a key metaphor for intelligence and I need an internal representation, and we cannot avoid some « encoding » and some mechanism to detect new specific external conditions leading to new behavior.
    Let’s go back to your hypothesis : « intelligence is an emergent property arising from the interaction between a learning agent and its complex environment ».
    If we do not know what is inside the agent I say YES, it is an emergent property arising from the agent interacting with its environment when I observe it.
    That’s all for now ! 😉
    Philippe.

  3. vznvzn Post author

    we seem to not disagree on anything. the point of the thermostat example is that it is a (rather rudimentary) machine yet can exhibit intelligence-like “behavior” with some basic conditions met. my other point about the thermostat is that it regulates the environment. the idea is that a supposedly intelligent agent that doesnt act on anything does not seem to have much intelligence. intelligence and motive/ agenda seem tightly connected. there are many “motivations”, but part of the extraordinary idea in this novelty detection paradigm is the idea that increasing (accurate) internal knowledge of the outside world itself is a kind of motivation that can drive action; the idea that this really is the ultimate agenda of any intelligent agent when all else superfluous is stripped away. that is not to say its the only possible agenda; there could be other sub-agendas that relate to it.

    1. Philippe

      I would like to ask a question about this point : ” my other point about the thermostat is that it regulates the environment. the idea is that a supposedly intelligent agent that doesnt act on anything does not seem to have much intelligence”.
      Suppose another machine which is able to adapt to external data (like the thermostat) and to predict what will happen next. You observe this new machine by consulting its predictions. This new machine does not act, but to me it seems as intelligent as in the thermostat case. No ?

      About “part of the extraordinary idea in this novelty detection paradigm is the idea that increasing (accurate) internal knowledge of the outside world itself is a kind of motivation that can drive action” : I reach the same conclusion, it could be the only motive that drives the agent. I also agree about the other sub-agendas.

      In that case, I will continue on the next part of your essay tomorrow.

      1. vznvzn Post author

        predicting the actions of another intelligent agent does involve intelligence, but the key idea in the novelty hypothesis is that AGI necessarily involves interacting with a complex environment. but note that here “environment” has a special meaning and is defined broadly. for example “cyberspace” could be considered an environment, or another group/ society of interacting intelligent agents/ minds, etc.

        how about getting a stackexchange acct & we can chat online at length in this room. https://chat.stackexchange.com/rooms/9446/theory-salon more on chat

      2. Philippe

        => how about getting a stackexchange acct & we can chat online at length in this room.
        Yes, we can talk on any chat if you want but I would prefer by email.

        About stackexchange chat, I do not have the required reputation mentioned below because I have never chatted on the Internet :
        ” for new “se” users, it requires 20 rep pts earned across any se sites to participate in any chat room.”
        I also cannot be a member of The Stack Exchange Network, because I have no reputation on any site on the Internet (I almost do not exist except for my email address 🙂 ).

      3. Philippe

        Hi,
        My demonstration about prediction as a key part of the brain mechanism is ready but it is 17 pages long with paper references, images and graphics, not suitable for a blog.
        Philippe

  4. Philippe

    While waiting for another way to discuss on the subject … 😉

    Restarting from the tree analogy.

    I will skip the analogy with the tree, and yes the brain does exhibit pruning of neurons and neuronal plasticity (among many other things) depending on learning from its external environment, but I would not say that the brain « grows »; instead I would prefer to say that it adapts (we do not know precisely how, because of the multitude of changes we can observe in it).

    So the brain adapts thanks to its interaction with the environment, and the amount of prediction it can do is limited by the quantity of information (complexity) the external environment contains.

    I will try to re-phrase the definition of an environment to be sure we share the same definition.

    An environment is purely data.

    These data can be external, like visual data extracted from the external world with cones and rods converting light to current (ion flow), or auditory data such as frequencies extracted by hair cells converting specific frequencies to electrical impulses (spikes) …

    These data can also be internal, like neuromodulators, where chemical substances regulate diverse populations of neurons, or introspection, when we observe what our brain does, …
    So, it does not matter whether we speak about speech, emotions, thoughts, muscles, visual perception, auditory perception, touch, smell, theory of mind (guessing what other people think or want)…

    So, everything is data.

    I agree when you say that the brain tries to find trends, patterns, structure, encoded in memories. In fact, it seems to me that it tries to find useful pieces of information to predict what will happen next.

    I agree, the structure is hierarchical and there is no distinction on the kind of data managed by the brain.

    I am not sure that an « intelligent » machine needs to act. That was my point when I asked :
    « Suppose another machine which is able to adapt to external data (like the thermostat) and to predict what will happen next. You observe this new machine by consulting its predictions. This new machine does not act, but to me it seems as intelligent as in the thermostat case. No ? »

    I think I made a mistake when I said that I reached the same conclusion about “part of the extraordinary idea in this novelty detection paradigm is the idea that increasing (accurate) internal knowledge of the outside world itself is a kind of motivation that can drive action”. I think that the motivation is to produce accurate predictions, and if not, to adapt its internal representation to be more accurate next time (of course, as you say, by increasing accurate internal knowledge of the outside world, but for me the fundamental principle is “to produce accurate predictions” and “to increase the internal knowledge” is a way to do it).

    I will try to demonstrate this with some scientific papers in cognitive science tomorrow.

  5. vznvzn Post author

    hi, have been doing a lot of research and (maybe like you) have found many scientific refs suggesting that accurate prediction is a key element of intelligence/ consciousness eg in animals & humans. my feeling is that this paradigm is not incompatible with the idea of “increasing internal accurate representation of knowledge” but some bridge needs to be drawn between the two. they seem maybe to be 2 equivalent ways of looking at the same thing.

    its not hard to gain 20 pts on the stackexchange network, there are many sites of interest, and odds are it has already turned up in your internet google searches (its similar to wikipedia that way). this site might interest you. only a few well-phrased questions or answers suffice. https://psychology.stackexchange.com/

    as for “the environment is data”. that is generally true but not exactly. there are some subtleties. for example can an agent learn complex relationships with the environment merely from static data? my answer is to some degree yes, but to some degree no. my feeling is that the most intelligent agents cannot learn key aspects of a dynamic environment from static data. because the environment must interact with the agent and this dynamic interaction ie “action/ reaction/ dynamic feedback” etc is what is largely encoded in the maps. however, as a thought experiment, one can “freeze” the brain of an agent that has learned these maps and that static map “contains” intelligence. but it cannot further “grow”.

    re brains “growing”. it is quite proven wrt the immature animal/ human. its true the brain tends to mature in the adult but it appears brains are continuing to grow in the adult also. possibly through neurogenesis but also, wrt this ideology, neuroplasticity is a kind of growth. ie even if new neurons are not being added, one can say that the brain is growing.

    as for your 17 pg paper, when did you start writing it? plz post a url & will read it asap. 🙂

    1. Philippe

      > have been doing a lot of research and (maybe like you)
      > have found many scientific > refs that are suggesting
      > that accurate prediction is a key element of intelligence

      OK, for consciousness I do not agree, but that is another, extremely complicated debate.

      > my feeling is that this paradigm is not incompatible
      > with the idea of “increasing internal accurate
      > representation of knowledge” but some bridge needs to
      > be drawn between the two. seem maybe to to be 2 equivalent
      > ways of looking at the same thing.

      Yes, you will observe one and the other at the same time, but which one drives the other ?
      In my document, I show an example where you cannot learn if there is no surprise (no error of prediction).
      In fact, to find the right way it would be good to simulate.

      > its not hard to gain 20 pts on the stackexchange network,
      > there are many sites of interest, and odds are it has
      > already turned up in your internet google searches
      > (its similar to wikipedia that way). this site might
      > interest you. only a few well phrase questions or
      > answers suffice. https://psychology.stackexchange.com/

      Yes, maybe, but for professional reasons I prefer to stay as anonymous as possible.

      > as for “the environment is data”. that is generally
      > true but not exactly. there are some subtleties.
      > for example can an agent learn complex relationships
      > with the environment merely from static data? my answer
      > is to some degree yes, but to some degree no. my feeling
      > is that the most intelligent agents cannot learn key
      > aspects of a dynamic environment from static data.

      I have a problem with “static data”; maybe I don’t understand something.
      For an animal, there is no static external data, every piece of data captured through sensors changes over time.
      Could you explain your point of view a little bit more ?

      > re brains “growing”. it is quite proven wrt the immature
      > animal/ human. its true the brain tends to mature in
      > the adult but it appears brains are continuing to grow
      > in the adult also. possibly through neurogenesis but also,
      > wrt this ideology, neuroplasticity is a kind of growth.
      > ie even if new neurons are not being added, one can say
      > that the brain is growing.

      OK, we are splitting hairs; you prefer growth, I prefer adaptation 🙂

      > as for your 17 pg paper, when did you start writing it?
      I wrote it during the exchanges I had with the GoodAI team.
      It is a very small part of the course for researchers at the “College de France” in Paris about experimental cognitive psychology.
      There : https://www.college-de-france.fr/site/stanislas-dehaene/_course.htm
      (Sorry, almost everything is in French in this web site)

      > plz post a url & will read it asap. 🙂
      OK, I will try to find a place to put it.

  6. Philippe

    Sorry for the double answer; the first time my answer did not appear when I posted it, so I wrote it a second time, and now the two answers appear. If you want, you can delete one of them.

  7. Philippe

    There is something strange on your blog: now my 2 answers to your post from April 12, 2018 at 9:22 am starting with “hi, have been doing a lot of research and (maybe like you) …” have completely disappeared. Can you see them with your administrative account ?

  8. Pingback: top AGI leads 2018½ | Turing Machine

  9. Pingback: CURIOSITY PARADIGM OF INTELLIGENCE gains traction! via open AI + deepmind + google brain | Turing Machine
