Pieter Wisse
Polymath Charles S. Peirce (1839-1914) argues that a person’s particular
belief directs some equally specific conduct. A belief, or interpretant as
Peirce also calls it, is established through semiosis, or action of the sign.
As a cognitive entity, belief is ... believed to exist in, say, an
object. Peirce’s model of semiosis therefore exhibits sign, object and
interpretant as elements of an irreducible system.[1]
The semiotic triad has recently been expanded to an enneadic model by the
author.[2] Every element of the original triad was changed to constitute a
dimension. Along each dimension of the semiotic ennead (here reproduced as
figure 5), three more finely grained, relative, elements appear. So, the object
dimension is constituted by situation, identity and conduct. Deviating from
Peirce’s terminology, the now more familiar term behavior substitutes for
conduct.
Starting from situation, identity and behavior as elements, I outline a
direction for artificial intelligence that might lead to a closer formal
correspondence with natural intelligence.[3]
From Peirce I take the cue of cognition’s behavioral relevance. It follows
that artificial intelligence is always artificial behavior, too. I’ll first
develop an approach to behavioral variety as it might be intelligently managed.
For that purpose, the minimal cognitive system consists of three nodes. I’ll
simply call them node a, b and c, respectively. Now one way of configuring
them, is to interpret a as a situation, b as an identity and c as a behavior.
Staying with simple nodes, their connections are not so much quantitatively
weighted as qualitatively determined. Then, proceeding with the configuration
under discussion, between a and b there is a directional situation connection
(determining, please note, for this configuration, a as a situation and b as a
corresponding identity). Likewise, a directional behavior connection occurs
between b as an identity and c as a corresponding behavior.
Including such qualitative connections, three original nodes can result in six
different behavioral configurations, as figure 1 elaborates.
Figure 1: Permutation for configurations.
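The permutation of figure 1 can be checked with a small sketch. The code is my own illustration in Python; the node labels and capacity names follow the text, everything else is assumed:

```python
from itertools import permutations

# The three general nodes of the minimal cognitive system.
nodes = ("a", "b", "c")

# Each ordered assignment of nodes to the capacities
# (situation, identity, behavior) is one behavioral configuration.
configs = list(permutations(nodes))

for situation, identity, behavior in configs:
    print(f"situation={situation}, identity={identity}, behavior={behavior}")

print(len(configs))  # 3! = 6
```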
The permutation still leaves ambiguity unreconciled, though. With several s-connections, respectively b-connections, extending from or reaching toward some identity, it is always a particular pair of them that constitutes a behavioral configuration. As an additional assumption, a connection is therefore fitted with a reference to its complementary connection in the overall configuration. Figure 2 illustrates such extended connections for the left-side behavioral configuration in figure 1. Whereas the nodes remain general for my outline, that is, a, b and c, both connections are necessarily specified accordingly: for example, σn and βn.
Figure 2: Shifting specificity from nodes to connections.
The configuration is disambiguated with
σn = {a, b, βn} and, consequently, βn = {a, b, σn}.
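The disambiguation can be rendered as a minimal data structure in which each specific connection records a reference to its complementary connection. The class and field names below are my own, introduced purely for illustration:

```python
# Sketch of a specific connection that carries a reference to its
# complementary connection, so that, in the text's notation,
# sigma_n = {a, b, beta_n} and beta_n = {a, b, sigma_n}.

class Connection:
    def __init__(self, kind, source, target):
        self.kind = kind          # "situation" or "behavior"
        self.source = source      # node label
        self.target = target      # node label
        self.complement = None    # filled in when the pair is formed

def pair(sigma, beta):
    # A behavioral configuration is constituted by a particular pair
    # of connections; each records the other as its complement.
    sigma.complement = beta
    beta.complement = sigma

sigma_n = Connection("situation", "a", "b")
beta_n = Connection("behavior", "b", "c")
pair(sigma_n, beta_n)

assert sigma_n.complement is beta_n and beta_n.complement is sigma_n
```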
It might be argued that I am moving too far away from traditional
connectionism to continue to properly apply the label. But then again, I’m only
taking connections more seriously than ever.
From my semiotic perspective, I am now ready to explore some of the territory
for artificial intelligence. I’d like to emphasize that mine is an initial
outline, also given limited space. My tentative approach should be taken as an
invitation to further exploration.
For accommodating situationally differentiated behaviors, at the minimum
three nodes specify some particular behavior. For the moment, I also set the
maximum behavioral configuration at three nodes. Then, given a cognitive system
with p nodes, what behavioral variety can it entertain?
The number of different selections of three elements from a set of p elements
is easily derived. It reads ‘p over 3’ and computes as: p! / ((p-3)!3!).
As seen above, every three nodes yield 3! behavioral configurations. It follows
that p cognitive nodes may support p! / (p-3)! situationally differentiated
behaviors.
This number of p! / (p-3)! configurations originating from p (general) nodes
is of course also precisely the number of (specific) situation connections,
respectively behavior connections.
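Both counts are easily verified numerically. The function below is my own sketch of the derivation, not part of the original text:

```python
from math import factorial

def behavioral_variety(p):
    # 'p over 3' selections of nodes, each yielding 3! configurations:
    # (p! / ((p-3)! 3!)) * 3! = p! / (p-3)!
    return factorial(p) // factorial(p - 3)

print(behavioral_variety(3))  # 6: the minimal system of figure 1
print(behavioral_variety(5))  # 60
```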
Explicitly investing connections with qualitative value might appear
counterproductive, at least for mainstream connectionism. Yet, rather than
trying to mitigate it as a risk, i.e. as obstructing options for quantitative
analysis and so on, I confidently continue speculating in that direction for
the new opportunities it offers. For it suggests, as I mentioned at the outset,
increased convergence between models for natural and artificial intelligence.
Here I can only provide a flavor of enhanced semiotic connectionism. I already
remarked on taking situation, identity and behavior as relative. By that I mean
more than that some node may appear in one of all three such capacities.
Relativism also entails the possibility of moving beyond three nodes for a
behavioral configuration. On the left, figure 3 sketches a configuration in
which two hierarchically positioned nodes, a and b, together reflect a
situation. It should be possible to differentiate d as c’s behavior in that
compound situation from d occurring in, for example, ‘just’ b as the situation
in question for c (see the right side of figure 3).
It might not be enough to maintain that βn = {c, d, σn}.
With more nodes in a situational capacity, all connections must be known for
behavioral precision. For the left side of figure 3 that means
βm = {c, d, σq, σr}.
Figure 3: Multi-level situational differentiation of behavior.
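One way to make such multi-level differentiation concrete is to let a behavior connection carry the full set of situation connections of its configuration. The dictionary layout and the sigma labels below are my own hypothetical notation, not the text's:

```python
# Hedged sketch: a behavior connection specified by all situation
# connections of its configuration, as in beta_m = {c, d, sigma_q, sigma_r}
# for the two-level situation on the left of figure 3.
beta_m = {
    "identity": "c",
    "behavior": "d",
    # all situation connections must be known for behavioral precision
    "situation_connections": frozenset({"sigma_q", "sigma_r"}),
}

# The same nodes with a single-node situation form a different,
# distinguishable configuration (right side of figure 3):
beta_n = {
    "identity": "c",
    "behavior": "d",
    "situation_connections": frozenset({"sigma_n"}),
}

assert beta_m != beta_n  # d as behavior is situationally differentiated
```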
In figure 4, the configuration on the left of figure 3 reappears. To its right, a configuration consisting of the same nodes in the same order is added, but with the node acting in the capacity of an identity moved up one level. Of course, the situational representation has shrunk and the behavioral representation has expanded accordingly. All specific connections should reflect the particular configuration they help constitute.
Figure 4: Additional variety through compounded situations and/or behaviors.
Combining nodes for situations and/or behaviors, even when limited to
hierarchical compounds, of course increases the behavioral variety to be accommodated
by a cognitive system. At the minimum, a situation requires one node for its
representation in a particular behavioral configuration. When p nodes are
available, the maximum of nodes making up a situation is (p-2); one node is
always reserved for an identity and the minimum for a behavior is, just like a
situation, one node.
Suppose x nodes constitute a situation. A maximum of p! / (p-x)! situational
subconfigurations results.
With x nodes in a situational capacity, the number of nodes constituting a
behavior may vary from 1 to (p-x-1). Let k range from 1 to (p-x-1), then for
any x the maximum number of behavioral subconfigurations equals the sum of k!
over that range. The number of complete configurations for any x is their
product.
All together, the maximum behavioral variety supported by p nodes equals the sum, for x from 1 to (p-2), of the product of p! / (p-x)! and the sum of k! for k from 1 to (p-x-1).
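Taking the text's expressions at face value, the maximum variety can be computed as follows. This is a sketch under those stated assumptions; the function names are mine:

```python
from math import factorial

def situational_subconfigs(p, x):
    # p! / (p-x)! situational subconfigurations for x situation nodes
    return factorial(p) // factorial(p - x)

def behavioral_subconfigs(p, x):
    # sum of k! for k from 1 to (p-x-1), as stated in the text
    return sum(factorial(k) for k in range(1, p - x))

def total_variety(p):
    # sum over x from 1 to (p-2) of the product of both maxima
    return sum(situational_subconfigs(p, x) * behavioral_subconfigs(p, x)
               for x in range(1, p - 1))

print(total_variety(5))  # 5*9 + 20*3 + 60*1 = 165
```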
The number of specific connections must be adjusted accordingly. With x nodes for situation there are of course x situation connections in a configuration. The same applies to k nodes for behavior. So, a behavioral configuration with x situation nodes and k behavior nodes contains (x + k) specific connections.
I’ve amply shown how differentially (inter)connected nodes might qualify, at least as far as static variety goes. For now I’ll leave aside several aspects of the dynamics of artificial intelligence from my tentative perspective of semiotic connectionism. Let me just make some introductory remarks on a reinterpretation of AI’s well-known BDI orientation: belief, desire and intention.
So far, I have more or less unproblematically applied terms such as identity. My approach is essentially semiotic because nine such concepts constitute an irreducible ennead. Without providing much comment, it is reproduced here as figure 5.
Figure 5: Semiotic ennead.
When the model of figure 5 is applied, the cognitive system (interpretation)
involves elements that correspond to identity, situation and behavior. What is
assumed to be inside cognition, are motive and concept revolving around focus.
Then, a belief that is acted upon is a motivated concept. However, the same may
be said about a desire. Or an intention. Belief, desire and intention are not
atomic concepts. Their dynamics can only be recognized from the vantage point
of an irreducible model.
Human language is a late evolutionary development. Cognition is a much older
phenomenon. Abstracting from (human) language, the semiotic ennead helps to
recognize that cognition essentially involves a behavioral function. Language
use should comply with such a functional explanation, not the other way around.
references
1. Buchler, J. (editor), Philosophical Writings of Peirce, Dover, 1955.
2. Wisse, P.E., Semiosis & Sign Exchange: design for a subjective situationism, including conceptual grounds of business information modeling, Information Dynamics, 2002.
3. Wisse, P.E., Ontology for interdependency: steps to an ecology of information management, PrimaVera working paper 2007-05, University of Amsterdam, 2007.
April 2007, web edition 2007 © Pieter Wisse