These notes are excerpts, mostly taken from my email correspondence during the year 2006.
Only recently have I discovered Buchler’s original work, studying it from the perspective of what I earlier developed as the semiotic ennead. The ennead is my formal extension of Peirce’s dynamic triad, i.e. his model of semiosis. I now find that such a nine-fold structure also helps to further clarify many of Buchler’s concepts, which he has made essentially interdependent.
Anonymity, too, is irreducibly involved in identity.
I was commissioned to contribute a chapter to The Handbook of the History of Information Security, scheduled to appear in 2007 with Elsevier Science. So, I had to pay lip service to both security and history. I am an expert at neither. :-) If ever it appears, the chapter is titled Semiotics of identity management.
A draft version has already been published as a working paper in the PrimaVera series in information management (Amsterdam University). What I hope you'll find of interest is especially part 2, where the semiotic ennead is outlined. I have also included, say, a dialogical sketch of identity requirements. Between 'self' and 'other,' such requirements range from guaranteed anonymity to guaranteed identification. Consummation of interaction rests on approaching symmetry.
My plan is to write an article demonstrating how Buchler's Metaphysics of Natural Complexes — it seems all his earlier books only support my argument — supplies the ontological foundation for a method for conceptual modeling I developed several years ago (metapattern). Contrary to what Buchler states, I find it possible to map the variety exemplified by natural complexes at their multiplicity of ordinal locations. In fact, that is precisely the method required for the information variety at Internet scale (where all information sets may be interconnected, raising the problem of controlling differential meanings).
The 2nd edition of Buchler’s Metaphysics of Natural Complexes includes, as Appendix III, his article On the concept of the world. Referring to the book's page numbers, I am especially drawn to the following sentences:
The idea of total interrelatedness might be held to justify "locating" a complex in a single universal scheme. But positing such a scheme and providing a map of it are two different matters. Complexes are not related to, not located in, the World as World. There is no location for them beyond their ordinal locations. There is no "ultimate" location.[pp. 252-253]
Among Innumerable Complexes there are innumerable differences and innumerable similarities, but there is no final hierarchy of complexes. The World has no form, no boundaries, no constitution. It is not mappable. There are complexes which have a beginning and complexes which have an ending. Innumerable Complexes, the World, has no beginning and no ending.[p. 257]
Frankly, I don't see what "total interrelatedness" has to do with "a single universal scheme." And why are "positing such a scheme and providing a map of it [...] two different matters"?
Actually, I find that metapattern supplies a universally applicable method for mapping (which I call modeling). Please note that I'm limiting myself to 'method,' only. I am very happy about my discovery of Buchler's metaphysics, precisely because such firm denial of "total interrelatedness" gives whatever map/model its discriminating features.
What I find generally confusing with Buchler is how — as I read him, anyway — he uses the term complex for both, say, the contour and what is ordinally located. You might say that I concentrate on — mapping/modeling — what are 'just' ordinally located behaviors. It is always the location/situation which disambiguates behavior, serving as the precondition for mapping. Subsequently, within a particular overall map/model a complex-as-contour is hinted at (also read: emerges) because some situationally differentiated behaviors explicitly share an identity. Hinted at, only, for there is always another perspective/situation, and so on. "Innumerable" is what Buchler calls it, and I agree. At this point, I would like to emphasize that such identity-sharing is how I view a mechanism for "coordinatability of traits." (For me, traits and behavior are synonymous.)
So, an exhaustive map/model is indeed a contradiction in terms. However, my claim is that partial maps/models — where partial once again reflects an "order," too — gain quality when at least a potentially exhaustive method is applied.
I apologize for my dense explanation. As I already wrote to you, I shall attempt aligning metapattern with Buchler's metaphysics. After all, he was there first. Such a paper should of course provide a somewhat gentler introduction.
I've always had mixed feelings about the direction that artificial intelligence took. You might say that my model of dynamics of enneadic semiosis also tries to overcome the oversimplification that I find characteristic of work during AI's first decades. Then again, I'm not at all current on AI details. From what I know, however, and to mention an obvious example, I find Buchler refreshing. He doesn't deny variety, but seeks to come to terms with it.
You may recognize that it also works the other way. I mean, my (meta)model helps to tidy up and extend Buchler's largely informal account. I find that I've designed a more powerful approach to synthesis. Does it really take the odd generation or so to get some attention in the first place? My experience is that all over, i.e. from business to government to academia, people are engaged in their small projects, forgetting about practical opportunities from big(ger) theoretical leaps. Who ever said paradigm innovation is easy?
We’ve spent considerable time trying to discover how Pile meets both what is claimed for it (everything … :-) and what we ourselves find important (unambiguous control of dynamics of multicontextual information variety). We recognize that several representations in Pile could represent the same original information. We are interested in the algorithm used by Pile to construct one — or more? — of such representations; our impression is that such an algorithm could be related to existing compression techniques. We are also interested in the algorithm used to search for data in such a (re)presentation.
We were looking for answers to such questions because we were sufficiently intrigued to investigate how Metapattern/KnitbITs and Pile might complement each other. We still are, as a matter of fact.
So, for that reason we wanted to get a clear idea on how exactly Pile might support, first of all, an ordinary business application. Further along the line of Metapattern/KnitbITs’ essential orientation at integrating information differentials (mechanism: multiple contexts), how could Pile contribute to what we’ve come to call civil informational infrastructure? For user trust requires security on mutually reinforcing aspects such as authenticity, access control, activity coordination (workflow), authorization, audit trails, and digital archiving. Such aspects call for structural transparency (with performance of secondary importance, only).
We find — and please take it as the compliment it is intended to be — that your paper Freeing Data From the Silos does not so much cover Pile as-it-functions. You are proposing much-needed extensions instead, especially suggesting how metainformation for variable typing may be incorporated. We fully agree. As such, your recent paper comes very close indeed to accurately describing the powerful principles of Metapattern/KnitbITs. For requisite variety, how we treat metainformation in practice offers more degrees of freedom than you’ve outlined; but you are certainly pointing in the right direction! It is precisely the interplay between information and metainformation, just as you sketched, which largely extends opportunities.
Rather than claiming radical originality, though, we’d like to emphasize for Metapattern/KnitbITs that we’re ‘only’ building upon earlier work. Anyone who is familiar with the history of electronic data processing should readily acknowledge that a tradition already exists for “A Relationistic Approach to Information Processing” (subtitle of course taken from your own paper on Pile, Freeing Data From the Silos).
Our verifiable claim for Metapattern is that it supports a higher-order synthesis in information modeling. KnitbITs is the corresponding operational platform, from distributed clients to distributed servers and vice versa. For orientation, we’d like to invite you to start from the short text On benefiting from Metapattern. There, you’ll also find references comparing the multicontextual modeling method of Metapattern to the relational approach (sic!) and to object orientation, respectively. And we’ve documented a fundamental analysis of Topic Maps, including how Metapattern departs from it to meet newly arising challenges at infrastructural scale; see Topic Maps uprooted. If you don’t know about Topic Maps yet, you’d be surprised how closely your description matches its principles, too.
Our impression of Pile is that it does have spectacular features. But we believe, as you have actually confirmed by proposing extensions for type control, that Pile is not the generally applicable ‘engine’ for information processing. We still imagine Pile might be productively combined with Metapattern/KnitbITs, with Metapattern/KnitbITs establishing necessary overall structure and Pile providing specialized services.
As a final remark that might interest you, we’re keeping KnitbITs at the frontier of digital programming technology. It now means making full use of .NET.
My paper The pattern of metapattern provides a description at the conceptual level. KnitbITs, as Metapattern's implementation, departs from it in several respects, as operational challenges cannot be side-stepped. In fact, far more flexibility has been added, as what I covered only ‘holds’ conceptually.
Martijn Houtman will be able, and is happy, to explain more. He's been applying .NET at technology's edge, so I'm sure that you can engage in an especially productive discussion.
Please take this message as a short presentation of Metapattern. I’ve included a few references on Metapattern and its implementation, KnitbITs. I’ve made such equally compact materials available on the Internet.
From an original orientation at manufacturing systems and supply chains, you may view Metapattern, more generally, as organizing information order in networked value chains.
I am not proposing just ‘a better mousetrap.’ Metapattern implies a change of perspective.
My main reason for approaching you is that your analysts can be counted upon to recognize opportunities from a qualitative, productive discontinuity when they see one. And when they do, they also know how to communicate such a paradigm shift to the relevant audience(s).
So, what makes Metapattern quite different?
Metapattern’s opportunities essentially derive from — quote taken from the abstract of The pattern of metapattern — including context as a formal variable within information sets, instead of seeing context, often implicitly and therefore unrecognized, as an informal presupposition that is kept outside. An information object may appear in multiple contexts, with unambiguously corresponding variety of behavior. By also paying consistent attention to the aspect of time, the approach is augmented even further.
I’m sure that’s too dense a description for an immediate, full grasp.
Please take away from it for now that controlling contextual and temporal variety without operational limits has already become essential at the scale where information technology is currently applied, not to mention the future.
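To make that dense description slightly more concrete, here is a minimal, purely illustrative sketch — all names and the storage scheme are my own hypothetical choices, not Metapattern's actual constructs — of keeping both context and time as formal variables of an information set:

```python
class InformationSet:
    """Toy store keyed by (object, context); each entry keeps a
    time-ordered history of values, so contextual and temporal
    variety remain explicit instead of implicitly presupposed."""

    def __init__(self):
        # (object, context) -> sorted list of (valid_from, value)
        self._facts = {}

    def assert_fact(self, obj, context, valid_from, value):
        history = self._facts.setdefault((obj, context), [])
        history.append((valid_from, value))
        history.sort(key=lambda entry: entry[0])

    def value_at(self, obj, context, moment):
        # Latest value whose validity started at or before 'moment'.
        best = None
        for valid_from, value in self._facts.get((obj, context), []):
            if valid_from <= moment:
                best = value
        return best

info = InformationSet()
info.assert_fact("smith", "employer", 2004, "clerk")
info.assert_fact("smith", "employer", 2006, "manager")
info.assert_fact("smith", "tennis club", 2005, "treasurer")
print(info.value_at("smith", "employer", 2005))     # clerk
print(info.value_at("smith", "tennis club", 2006))  # treasurer
```

The point of the sketch is only that one and the same object, "smith", exhibits different behavior per context, and that each contextual behavior carries its own history along the time axis.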
Helping to get much-needed change underway — a need still largely unrecognized, which at the same time makes it an opportunity for pioneers! — Metapattern can also ‘hide’ how different it can eventually work out. For nobody likes a revolution. At least, nobody does whom I know in business and government, which of course constitute the relevant audiences. It is therefore a crucial aspect of its design that Metapattern, at first, may appear not quite different, but actually quite the same.
Metapattern supports finely tuned evolution.
In terms of planned change, I’d like to emphasize that in a most practical sense it is precisely the discontinuity of making multiple contexts explicit that allows for … minimal disturbance of continuity when moving to a higher level of information management.
Metapattern makes it quite simple. Legacy systems with all their original details (!) can be unambiguously included in an overall networked configuration when each such system is considered as a wholesale context. Such straightforward initial migration for metainformation, only, serves as the basis for subsequent, step-by-step integration for information and applications proper. (Below, I refer to an additional presentation on practical integration strategy.)
The first wave of opportunities occurs, indeed, from looking at the past. A huge legacy of isolated information systems now demands integration.
Why do attempts at integration consistently fail? Metapattern is unique in that it establishes consistency at the semantic level regardless of scale.
It may sound like a contradiction, but contextual/temporal variety control is an absolutely necessary condition for integration. In short, realistic integration builds upon requisite variety.
The technology of Metapattern’s implementation, KnitbITs, fully supports managing information variety. Any enterprise, government institution etc. can start today.
The second, longer term wave of opportunities results from an orientation at the future, i.e. when executives are really starting to appreciate the power of multicontextual differentiation.
Sooner or later they will (with your help, of course :-). Then, Metapattern assists to develop semantic infrastructure for informational interactions, i.e. connecting companies, citizens etc. New markets, new governance …
For example, my own work on e-government in the Netherlands clearly indicates that its policy goals are simply unattainable without Metapattern's semantic ordering principles and corresponding information modeling (already making it a short-term priority, there, as a matter of fact).
My own idea is that immediate benefits of Metapattern are to be gained from intra-enterprise integration. Multicontextual precision eliminates duplication, resulting in huge financial savings, increased quality etc.
The realistic prospect of enormous financial savings from enterprise application integration, when done ‘the Metapattern way,’ was the reason why I chose to contact your executive program. Don’t multinational enterprises often spend hundreds of millions of US dollars on usually ill-directed integration efforts? I would say that spending only a fraction of the original budget for real results should interest executive management.
You’ll find some pertinent remarks in the short presentation Integration strategy for information resources (starting from a legacy perspective).
And the orientation at the past can be immediately aligned with an orientation at the future. For intra-enterprise integration can, right from the start, improve enterprise agility, too, i.e. prepare the enterprise for a networked future where it acts with far more precision to contribute to, respectively participate in, various value chains.
Actually, including context as a variable — formally, it’s a bit more elaborate, but as I said I’m happy to leave the details here for what they are — means that Metapattern may support a variety of business cases. Admittedly, I’m no business strategist but I’ve drawn up some suggestions by way of Remarks on business cases for Metapattern/KnitbITs. There, you’ll also find ample mention of value chains and how an enterprise can optimally position itself.
I have extensively documented Metapattern, even including its grounding in philosophical semiotics. At this stage of your orientation, let me just conclude by repeating the first message I sent your company some two weeks ago:
I would like to draw your attention to a short text I've written, On benefiting from Metapattern. You may find it highly beneficial for your company, too, to offer advice to your old and new clients about new directions in integrated information management. You'll find that I have provided a solid foundation for what you yourself are clearly recognizing as a major development. Could we explore such opportunities for collaboration?
Should you already want to pursue more details, several further references are given in On benefiting from Metapattern.
Metapattern opens up a whole new conceptual field, with highly practical innovations in its wake.
My general claim for Metapattern is that it exemplifies a genuine trend, one that any trend watcher might like to inform their audiences about (or, putting it the other way around, one that you would hate to have missed).
My more specific claim is that any enterprise can start today at successfully integrating its applications/data, while improving its agility.
Metapattern holds the explicit assumption that it's not just a matter of attributing meaning. Rather, it's a matter of attributing context ... and (only) then meaning is established with accuracy.
My current idea on Pile is that what is 'given' is, on the one side, a data input set and, on the other side, a set of primitive elements. Then, as it were between them, a purely relational structure is produced as a pile. It is a purely mechanical exercise. The input set and the primitive elements remain outside the pile in question. So, the claim is justified that a pile only holds relations. But the complete 'system' includes the input set and primitives, too.
Indeed, a particular primitive element may have several occurrences in the input set. Why not call the 'environment' of each occurrence, as indicated by relations in a pile, a context?
Metapattern’s concept of context should be taken semantically, instead. Its nature explains why I believe that Pile's decomposition through intermediary relations is not the whole answer. What is missing is intentionality, i.e. the use of information, which requires an orientation at what-information-is-about in the first place.
On the importance of relations, I suppose we all agree. As I wrote in Topic Maps uprooted, "I have no argument at all [...] as far as its basic building blocks are concerned." So, Pile, too, "naturally 'goes back' to the same two elements." Two elements, rather than relations only? Yes, when the input set and primitives are necessarily included in the systemic perspective.
We should be careful to distinguish what I've labeled levels in my analysis of Topic Maps. So, ultimately any flexible information processing tool should rely on relation. At the model level, a larger variety of concepts is required.
Accommodating model variety at the implementation level requires, as a matter of maintaining order, that the (far) fewer building blocks there be relationally multidimensional (with dimensions corresponding to what at the model level are still separate concepts). Of course, it is possible to model with 'just' the implementation building blocks, but doing so would all too easily lead us to miss requisite variety.
Pile addresses a particular class of problems/opportunities. My view is that it is optimally suited when an input set and — especially — its subsequent pile can be held in internal memory (which nowadays can be quite a large set, anyway) and where the primitives of choice already carry a moderate degree of semantic relevance.
Metapattern's building blocks are somewhat more elaborately equipped — at least, that's how I now see it — than Pile's. So, whatever structure a pile holds can always be emulated with Metapattern. But of course Metapattern's flexibility comes at a price in other areas, for example lower performance.
Indeed, Metapattern is also 'only' suited for a particular class of problems. One way of appreciating Metapattern is that data is increasingly distributed while the requirements for coordination also increase. So, data persists. And it does in various places, to be subsequently coordinated. Then, for most practical business and government purposes performance is not really an issue (for selected information, computers and telecommunications are 'fast enough,' anyway). Another vital consideration is that a limited set of primitives can no longer be counted upon to carry meaning throughout the larger system of connected databases etc. At the assembly/model level, the emphasis of optimization shifts to unambiguously appointing contexts.
We certainly may not ignore relevant variety. So, at the model level, the priority for context is an attempt to make multiple dimensionality manageable. You might say that it is my 'job' to be sensitive to conceptual requisite variety: Metapattern as method for information modeling. Martijn Houtman’s 'job' is to make the 'thing' KnitbITs work accordingly, in turn stimulating conceptual development, and so on.
An important point of Metapattern is that relations are purposefully made, i.e. establishing a structure that is subsequently kept secure as required for — I've mentioned those aspects earlier — authenticity, authorization, audit trail etc.
KnitbITs essentially applies a basic building block that allows an instantiation to be tied up to (an)other instantiation(s) with requisite variety. Its multiple dimensions are sui generis, i.e. characteristic of the multicontextual — and temporal, for that matter — requirements at the model level.
The basic building block's thing-behavior, then, appears inside a thing-context. Likewise, a characteristic context is erected for accommodating its connection-behavior.
It's like the (non-) dichotomy of light as particle, respectively as wave. We shouldn't decide on just one, absolutely valid explanation. Both are irreducibly required, with one 'occurring' in particular depending on the context.
Or there's object philosophy. Inevitably it arrives at the point where it also needs the concept of process. Quite, process philosophy ends up requiring the concept of object ...
Departing from the practice of keeping properties/attributes, as it were, in a closed container was in fact already known as radical entity-relationship modeling; entity-attribute-relationship modeling, by contrast, considers attributes held inside an entity's container. Metapattern (also) makes all relations explicit.
It is important to distinguish between information and what-information-is-about. As regards the former, at some point information may be considered as having arrived at its practical limit of (syntactical) decomposition. But what-information-is-about may practically still require further decomposition, i.e. conceptual or semantic. In this second sense, Metapattern doesn't set primitives as an absolute limit. It is fundamentally open to continued conceptual decomposition (which is a view I recently found in the work of American philosopher Justus Buchler; my idea is that Buchler didn't have a clue about information technology; but ontologically, he certainly understood requisite variety; I've documented my discovery of Buchler's relevance in a paper).
I agree that (downward) decomposition may be closed, i.e. terminate at a fixed level, as far as elementary sign units are concerned. However, for the class of problems that Metapattern addresses my idea is that it doesn't really bring any additional gains to go to that length. (I readily acknowledge that there is also a class of problems for which it may be very different.)
Along the, say, syntactical dimension for (downward) decomposition, for practical reasons I would suggest stopping two steps before reaching bits (zero and one, only). Still, Metapattern/KnitbITs can go to any imaginable length. Of course I understand what happens when optimizing bit-level re-use (which is precisely why Martijn Houtman compares Pile to compression technology).
Metapattern has (also) a lot in common with so-called object-role modeling. For a proper comparison, the distinction between the two levels of building block, respectively assembly/model is equally valid.
From the concept of role, however, it is difficult to escape traditional decomposition. Of course, a role may be split into sub-roles, and so on. But that leaves the original object intact. Metapattern, on the other hand, recognizes that the role attributed to an object was itself only attributed for that object's particular role. It may sound confusing, but that's what I've come to call upward decomposition (see also, especially, the section on context in The pattern of metapattern).
An awareness that no role is 'initial,' but always already a role ... of a role is essential for modeling variety.
Now is certainly the time for emphasizing connections (relations, associations ...) as they have become sorely neglected. A balance has to be (re)established. But please don't exchange one absolutist approach for another. Depending on what you're dealing with, choose an emphasis and subsequently mitigate the consequences of such unavoidable one-sidedness.
Let me summarize what I've learned so far from your comments (and let me apologize for what I've missed :-):
1. It is essential to distinguish between Pile Engine and Pile Agents.
2. Much of what I expect(ed) Pile Engine to do, in fact resides with a particular, i.e. more or less specialized, Pile Agent.
3. Given Pile, Metapattern/KnitbITs may be considered a Pile Agent.
4. Metapattern might be refactored, with Pile Engine as one of its new modules/components. One problem: persistent, distributed data. Another problem: the requirement for integrated multi-dimensional relations management (recursive context and time), i.e. relational multi-dimensionality actually constituting the lowest 'practical' level for the problem class we're aiming at.
Immediate benefits from Metapattern can be most clearly demonstrated for intra-enterprise application & data integration. There, at least, a problem holder, that is management or, even better, a particular manager, can be identified. But (only) if (s)he experiences a problem with data disorder, a real opportunity exists.
My grounding of information modeling is in semiotics, for which I’ve extended traditional triadic semiotics to enneadic semiotics. With nine, rather than three, elements of course the possibilities for requisite variety are correspondingly widened. So, the semiotic ennead is actually the metamodel for subsequent 'normal' models. You might also consider it the major conceptual tool, or even method.
A critical aspect of metapattern's added value lies with recognizing that too many assumptions are usually kept implicit. Before translating whatever model into a database schema, it pays to make (ontological) assumptions (more) explicit. The result is, first of all, an often quite different conceptual model.
We did look at the associative model of data and its implementation, Sentences, a few years ago, actually. As a method for modeling, the associative approach has its roots here in the Netherlands, with Sjir Nijssen. About thirty-five years ago he came up with NIAM, or Nijssen's Information Analysis Methodology. The basic idea is that elementary facts are always expressed as predicate propositions. It is now called object-role modeling, as a predicate contains 'slots.' An object occupying such a slot plays a corresponding role, hence object-role modeling. The association concept of Sentences is identical with Nijssen's predicate sentence and Halpin's configuration of roles for objects. All such methods can be classified as belonging to — what subsequently became known as — the language act approach to modeling.
Now it might of course be possible that the so-called associative model of data was developed independently. Anyway, what continues to strike me everywhere is how ignorant proponents of one particular approach seem of other, often very, very similar or even identical approaches. It must be commercially advantageous to act dumb ...
Metapattern certainly shares important aspects with the 'traditional' language act approach/paradigm. It essentially differs from, say, object-role modeling in how it unambiguously determines an object's various roles (also read: behaviors). It does so by making context explicit. It makes an information system practically scalable. Rather than encapsulating all roles, a separate 'partial' object is instantiated for each role (as determined by a particular context).
An unavoidable aspect of innovation is terminological. If you were only using terms as current science, custom, or whatever, dictate ... you simply couldn't innovate. Don't worry about conventions, at least not at the stage of discovery.
An orientation at semantics is critically important, yet it is currently sorely neglected. It should be recognized that interconnection requires design etc. at infrastructural scale.
The infrastructural nature of much of current, let alone future, information systems is not yet recognized. A confused notion of infrastructure is to blame. Infrastructure is often mistaken for commodity, for example by Nicholas Carr. Such simplification leaves problems unsolved, and even solidifies them.
The 'big' point, really, that I'm making is that from Metapattern as a new, (far) more comprehensive paradigm both current problems with integration can be solved and new opportunities arise. I'm only too aware, though, that I cannot turn it into equally 'big' business on my own, but I'm sure that your company can.
The contextual turn of Metapattern may look deceptively simple; however, its implications are far-reaching. Unambiguous control of structural information variety can now be indefinitely extended. Yes, I fully realize I'm making an ambitious claim ... which I believe is fully justified.
I’m convinced your company stands to benefit by far the most, commercially, should it embrace Metapattern to serve existing and new markets. Now the employee who is reported to be having a look may be struggling hard with Metapattern’s essential novelty. For a paradigm shift starting from conceptual modeling is required for managing information variety at the scale where meanings/behaviors are necessarily multiple (with Metapattern providing the disambiguating mechanism through precision, regardless of scale, in temporal and — recursive — contextual articulation). Its strategic advantage was immediately clear to you. Should your employee experience difficulties grasping the operational details, please urge him to contact me. I would hate to see a ground-breaking opportunity get lost through simple lack of communication.
As soon as you want to go out and interest an audience for your results, you'll discover that people always apply their existing frame of reference. But the very point of genuine innovation is ... that it cannot be properly explained in older terms, period. Otherwise it wouldn't be new, now would it? I'm afraid you're going to be faced with that dilemma as you prepare to go more public.
With most people you actually don't have to be afraid at all that they might ‘steal’ whatever ground-breaking ideas you may expose. As I said, they are only interested in themselves, anyway. So, another thing is that you shouldn't waste any efforts trying to protect ideas. Anything can always be improved. My basic attitude is that time, intellect and money are best used for further innovation; optimal protection lies in keeping the lead.
What can Metapattern be used for? It supports management of information variety. Its first major area of application seems to be when an organization maintains a whole bunch of so-called legacy systems. With such different, separately developed systems, semantic order is simply missing. For example, one system may apply meaning a to x, while another system applies meaning b to x.
The habitual response for integration is strict standardization. So, either a, or b. It always fails, though. For what happens is that the meaning with the most powerful constituency inside the organization prevails. Metapattern recognizes that both meanings are probably relevant. For that's usually why different systems were developed in the first place. So, it just depends. Metapattern's formal mechanism for coordinating different meanings is context. So, in this abstract example, x will have two contexts, say y and z. Then x-in-context-y has meaning a and x-in-context-z has meaning b. As far as basic concepts for variety control go, that's all, really.
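The abstract example above can be sketched in a few lines of code. This is purely my illustration, not Metapattern's actual machinery: the class and method names are invented, and all it shows is that meaning is assigned to an (object, context) pair rather than to the object alone.

```python
# Hypothetical sketch: meaning attaches to an (object, context) pair,
# so the "same" x carries different meanings in different contexts.

class ContextualRegistry:
    """Maps an (object, context) pair to its meaning."""

    def __init__(self):
        self._meanings = {}

    def assign(self, obj, context, meaning):
        # The same obj may be registered under any number of contexts.
        self._meanings[(obj, context)] = meaning

    def meaning_of(self, obj, context):
        # Meaning is only defined relative to a context; asking for the
        # meaning of obj by itself is not even a well-formed question here.
        return self._meanings[(obj, context)]


# The example from the text: x-in-context-y means a, x-in-context-z means b.
registry = ContextualRegistry()
registry.assign("x", "y", "a")
registry.assign("x", "z", "b")
```

Note that neither meaning is "standardized away": both coexist, and the context does the disambiguating.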
Metapattern's context is a recursive function of relationship and (partial) object. You'll find it explained at the beginning of the paper The pattern of metapattern. The recursive context-function makes the scale at which information variety can be disambiguated practically unlimited. I myself have recently done consultancy work for (Dutch) electronic government. At such a scale, it can only work on the basis of differential precision. Regretfully, policy makers don't see it that way, not yet.
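The recursion mentioned above — context as itself contextually articulated — can be suggested with a toy structure. Again an assumption-laden sketch of mine, not the formalism from The pattern of metapattern: each node's context is another node, so unambiguous identification is a path of arbitrary depth.

```python
# Hypothetical sketch of recursive context: a node's context is itself a
# node with a context, so disambiguation works along a path of any depth.

class Node:
    def __init__(self, name, context=None):
        self.name = name
        self.context = context  # another Node, or None at the root

    def path(self):
        # The full contextual path identifies the node unambiguously;
        # recursion is what makes the scale practically unlimited.
        if self.context is None:
            return self.name
        return self.context.path() + "/" + self.name


# x under context y, where y itself sits under an (invented) root "org".
root = Node("org")
y = Node("y", context=root)
x_in_y = Node("x", context=y)
```

Because contexts nest without limit, the same disambiguating move keeps working at whatever scale — which is the point made above about electronic government.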
I’ve made an original contribution to semiotics by extending the Peircean model of triadic semiosis to a formal ennead. With nine, rather than three, irreducible elements, the semiotic ennead’s explanatory power is of course greatly enhanced.
The semiotic ennead grounds a practical method I’ve developed for so-called information modeling. The method is called Metapattern.
In their turn, Metapattern and semiotic ennead can help to throw new light on metaphysical inquiries, too. I have pursued such an inquiry by analyzing Justus Buchler’s metaphysics of natural complexes.
What I hope that you may find of interest for a publication are, first of all, my development of an enneadic semiotics and, secondly, how it helps to, say, fold back Buchler’s metaphysics to Peirce’s idea of semiosis.
The semiotic ennead may serve as a highly practical metamodel for cognitive science. It grounds a practical method I’ve developed for so-called information modeling. The method is called Metapattern. It pervasively differentiates information according to context and time.
What I would like to suggest is that you might consider especially the ennead to provide an overview of, and possibly establish tighter relationships between, your different research efforts.
It is not a coincidence that one of the ennead’s formal elements is called motive and another concept. All its nine elements are irreducibly related which simply means that the metamodel holds integrative potential.
There's the attitude of trying to get the interpretation of Peirce's work right, i.e. in the sense of approaching what he himself might have meant. Another attitude is to make his work productive for novel tasks. Yes, of course I find that the same spread of attitudes holds for dealing with Buchler's work, and so on. So, even when I may be wrong about interpreting Peirce — but who's to say, really? — I may still be productive with applying such an interpretation. In this respect, I believe it is especially interesting that Peirce called himself an experimentalist, thereby setting himself apart from 'usual' scientists.
In Semiotics of identity management I mention identity and difference as tightly related concepts. Diachronically, we need a concept such as identity to maintain continuity despite differences (anyway, that's my idea ...). While I emphasize that I am certainly not an expert on the logic of identity, it seems to me that classical philosophy has tended to concentrate on a synchronic concept of identity (where it establishes a tautology). It turns out that identity, too, means different 'things' in correspondingly different contexts.
My work as an independent professional doesn't agree with a so-called restraint of trade. As an example you may readily appreciate, suppose you go to a dentist for treatment. Then it's hardly realistic, putting it mildly, that you request from her/him that for some time (s)he may subsequently not treat anyone else. In fact, you should be most happy that (s)he treats other people, too, and continues to do so. You certainly benefit from her/him becoming and remaining a far better dentist because of extended, varied experience.
I've just finished, written in the English language, a review of approx. 5,000 words of Alain Badiou's Being and Event. Subsequently looking for serious possibilities for publication, I retrieved the call for papers for a special issue on Badiou of Cosmos and History (http://www.continental-philosophy.org/category/badiou/). It says the deadline for submission has already expired, that is, on August 1st, 2006.
My initial question therefore is: Do you nevertheless still accept papers?
If so, are you also soliciting critical appraisals? For my conclusion is that Badiou's book is nonsense. There is pretense but no "vision," abstract or not, in Being and Event to meet "the requirements of the epoch." I've therefore titled my review Badiou qua Badiou, or vanity of void ontology. What you may find of interest for the readers of Cosmos and History is how I've documented my struggle to reach such a conclusion. I am thereby attempting to shift attention to social responsibility, a concept I find especially lacking with Badiou in Being and Event.
Yes, of course I also admit to "a particular view." It is one that I seem to have failed to get across, though, which is what I regret. As the Topic Maps standard, I took what was available at the time, taking it for the current position. You're saying that I was already way behind even then. Of course you know far better than I do, but I want to make points at what seems a different conceptual level from the one you are addressing. One is not better than the other; they're just quite different. You can find more information in Metapattern: context and time in information models (Addison-Wesley, 2001).
I'll be happy to look at the "copy of the latest draft of the Topic Maps Reference Model" you've kindly sent me, too. I haven't been discussing drafts in my paper, though. :-) I really thought I was looking at the current 'accepted' standard, but I certainly don't want to argue with you on that.
Talking about particular views, I predict that from my conceptual perspective I probably won't be able to make much sense of what increasingly seems a technical approach to managing information. It is precisely such a technical, or implementation, approach — and I realize you won't agree with me on this, as you'd probably find Topic Maps highly conceptual, too; yes, I agree, it certainly is possible to apply TM as such — which obstructs achieving requisite variety in information management. I suppose you've read in my paper that I find there's nothing Topic Maps cannot do in that respect; it's just that it hasn't been designed — at least, that's still my idea; your remarks haven't modified it, not yet, anyway — with that variety in mind. Whatever later versions of the standard may appear, they of course don't change Topic Maps' original orientation.
I hope you can appreciate that my interest lies with Metapattern, rather than with Topic Maps. Should you want to write a paper comparing Metapattern with Topic Maps, I would greatly welcome it. Above, I've supplied you with a reference (which is also included at the end of my paper on Topic Maps).
I must say that I'm quite impressed by the competitive attitude of Topic Maps' proponents. I never intended to write a challenge, but rather to show how important approaches might converge on the basis of Metapattern as a conceptual metamodel. But you are not at all the first to suggest that I've unfairly represented Topic Maps (which you can take as a compliment for the TM community). I apologize for such unintended 'insult.' Meanwhile, you might be overlooking my serious proposal for moving beyond Topic Maps.
Credibility is believed to be enhanced by “Practice What You Preach.” Indeed, who can take seriously a smoker urging other people to stop smoking? However, diffusion of Rational Unified Process (RUP) as an innovation is only comparable to a software development project to a very limited extent. As Stan Rifkin reports in his paper Why new software processes are not adopted (2003), following the same approach “would miss the point that planning a software project is by and large a solved problem, while planning human changes, especially by engineers and engineering managers, is not.” Rifkin adds that “it is too difficult to estimate the relationships among the variables.” So, diffusion is “a messy process of mutual adaptation, where the technology to be adopted is modified as it is assimilated and the organization transforms, too, as the technology is assimilated.” Rather than apply RUP self-referentially for its diffusion, an immediate paradigm for diffusion of innovation should be consulted.
The theory of innovation diffusion was originally limited to diffusion practice concerning a particular social group whose members were treated individually, for example farmers. It is only more recently that innovation in organizations has also attracted scholarly attention.
Expanding his earlier work, in the fourth edition of Diffusion of Innovations (The Free Press, 1995) Everett M. Rogers distinguishes five stages in the particular innovation process in an organization: 1. agenda-setting, 2. matching, 3. redefining/restructuring, 4. clarifying, and 5. routinizing. The first two stages together constitute initiation, whereas the remaining three stages make up, please note, according to Rogers, implementation. Initiation and implementation are separated by the decision to adopt.
Where Rogers still maintains an essentially linear process for how an organization comes to grips with an innovation, Peter Clark and Neil Staunton in Innovation in Technology and Organization (Routledge, 1989) propose a (more) dynamic configuration: “In any enterprise there will be a diverse plurality of logics and these may or may not be hierarchized and orchestrated in particular directions.” When RUP is thus positioned in a field of “soft determinism and contingent specificity,” there is in fact no strict, universally valid, predetermined paradigm for its implementation. Working on the assumption of a ‘unified process’ for innovation diffusion is surely counterproductive.
You'll continue to find acceptance lacking when, from a staff position, you come up with a detailed planning. Instead, every line manager should own a plan.
The short-term dilemma, of course, is that a manager doesn't yet have sufficient overview of the innovation ... in order to plan its implementation properly for some extended period. However, neither do you sufficiently understand her/his department. And even if you did, you're simply not the manager, meaning that you are not (primarily) responsible for, nor held accountable for, what goes on inside her/his department and the results (s)he has to deliver.
That is why especially a first step usually requires trust. From a position as trusted facilitator, you should in fact help to keep that first step deliberately small, have the manager and his employees learn from it (and you, too), and so on.
Please note, that is essentially the iterative approach that RUP now also favours for software development but that is already common sense for successful organization development. It strikes me as odd that such arguments should be discounted. Frankly, it suggests that whoever decided to have RUP implemented there doesn't really understand what (s)he wants to achieve with it, let alone how it practically functions.
Let me add that change strategy should vary according to the type of organization. No doubt, you’re dealing with so-called professional organizations. It follows that acceptance can be enforced on the basis of authority to an even lesser extent than elsewhere. For a professional finds that (s)he is her/his own judge. If you ignore that in implementing RUP, a stalemate ensues. In the process of drawing up an unrealistic plan, you'll have lost credibility where it practically counts, i.e. with managers and especially the practicing employees throughout the departments.
My suggestion is to openly share the change dilemma with the managers. The question then becomes: What do they want planned? Then facilitate drawing up what essentially is their planning. It is the only relationship from which you can be successful with other support activities such as training and mentoring.
Being a change agent implies responsibilities, too, but they’re quite different from those of the line manager(s).
I’ve spent some time sketching — a foundation for — a general banking model. In order to bracket traditional assumptions, it helps … that I am even unaware of most assumptions applied to banking.
I’ve started from what I find will become, say, a citizen’s right in the information age regarding her/his financial resources management, too. So, a particular (sub)set of financial resources — for easier reference, just think of a bank account — is no longer irreducibly tied up with a particular banking institution. I assume it to be transferable, instead. Then, it’s not just that the one or more ‘owners’ of the account can change its ‘account manager,’ that is, move the account from one bank to another. It should also be possible for such an account to change owner(s). Over time, contexts may change.
Of course I’m aware that at present no bank favors such flexibility on the part of its customers. It prefers to cement loyalty. But I feel it is only a question of time before legislation develops similar to what has forced telephone service providers to let customers keep their existing connection numbers. Or did I miss something, and is that already possible today for bank accounts, too?
Even when I am completely wrong on this, I feel confident that it always pays to inquire into flexibility. At the least, it will make one particular bank more agile. For example, when that bank acquires another bank, accounts are far more easily integrated when the relationship between account and bank can simply be changed (with the former relationships always available: audit trail). In the opposite direction, a bank may wish to divest some of its ‘account management’ business, which can then also be handled smoothly.
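The flexibility argued above can be made concrete with a small sketch. All names here are my own invention, merely illustrating the idea that an account's relationship to its managing bank is a time-stamped, appendable fact rather than a fixed attribute — so a transfer adds a new relationship while the former ones remain available as the audit trail.

```python
# Hypothetical sketch: the account-to-bank relationship is mutable over
# time, with every former relationship preserved (audit trail).

from dataclasses import dataclass, field


@dataclass
class Account:
    number: str
    _managers: list = field(default_factory=list)  # (bank, since) pairs

    def assign_manager(self, bank, since):
        # Moving the account just appends a new relationship;
        # nothing is overwritten.
        self._managers.append((bank, since))

    @property
    def current_manager(self):
        return self._managers[-1][0]

    @property
    def audit_trail(self):
        return list(self._managers)


# An account opened at one bank, later moved to another (e.g. after an
# acquisition or a customer's own choice); the history stays intact.
acct = Account("NL-0001")
acct.assign_manager("Bank A", 1999)
acct.assign_manager("Bank B", 2006)
```

The design choice is simply that of relationship-over-attribute: once "managed by" is a relationship with its own validity period, both acquisition and divestment reduce to appending records.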
At least with me, such a model foundation also raises the question whether such account management is really — still tied to — the core business for a financial institution. Before, financial products & services were sort of shoved into the financial account. There didn’t seem to be a practical alternative.
I would say, from some tinkering with assumptions, that managing financial assets nowadays may, or even should, be made (more) independent from the account management. I can have a mortgage from financial institution A, implemented through my account at account manager X. In time, I can change my mortgage while keeping my account at X. Or retain the original mortgage while changing my account to Y. And/or at some point in time, transfer ‘my’ mortgage to someone else.
One-directional change is a counterproductive assumption. Yet, often some initiative is taken at one point, say, with the intention of (an)other point(s) following suit. Adoption reflects the perspective of what originally might be considered ‘followers,’ while diffusion reflects the original initiator’s perspective.
Matching perspectives constitutes the productive change process.
Perspectives can only be properly matched from respecting different responsibilities.
Respect leads to trust.
Trust is the primary condition for productive collaboration in the absence of immediate organizational hierarchy.
So, without trust, nothing goes.
Worse, without trust but under some false pretense of collaboration, resources are simply wasted.
Mutually establishing realistic expectations is, … of course, a process; see above.
However, when an initiator organizes, well, tries to organize, a change process as dependent on rationality, (s)he often only considers her/his own direction. From such a one-sided perspective of hierarchical rationality, indeed, the decision alone already seems both necessary and sufficient to change the attitude of all internal and external employees involved. Implementation, then, only involves supplying the corresponding instruments (tools) and helping people on their changed way (some awareness, to catch up in attitude, but mostly knowledge & skills through training).
As professional change agents are only too aware, that’s not really how it works. It results in issues being played out in terms of potential competition, rather than collaboration and resolution.
Please note, not only an initiator may operate in a predominantly rational, technocratic mode. A follower can also ‘invite’ it. The latter’s ‘invitation,’ though, is not aimed at succeeding, … but at being able to escape from the change.
Rather than false rationality, from the follower’s perspective that’s usually really rational as long as perspectives haven’t been properly matched.
Whatever a follower’s attitude, or an initiator’s, for that matter, from the change agent’s perspective it is never wrong. It simply is, at a certain point in time.
There is always a danger that change is perceived as one-directional.
It never is, not in reality.
The initiator is not the unequivocal source of the change, with the follower the equally unequivocal destination, or target.
An initiator stands to learn at least as much, if not more, from the so-called follower. Change is bi-directional.
Only when an initiator is also open to change, does a process evolve.
All change needs is a promising start. Then take it from there.
Early on, getting something right is a bonus. At that stage it is essential to learn what goes wrong.
Often, optimizing process only requires keeping an open eye for what-happens-anyway. That way, change can be made to work with very few resources.
A note on terminology: Repeating a complete life cycle is what I call a macro iteration. Then, iterations within a single phase — and, of course, within a single life cycle — should actually be seen as meso, or mid-sized iterations. I’ll call them phase iterations when there’s a danger of misunderstanding. Repetitions ‘inside’ the workflow for a phase, or mid, iteration count as micro iterations.
Adoption/diffusion of a process method for software development is not … a software development project. Rather, it is organization development.
Ignoring real complexity, in particular of matching stakeholders’ perspectives, makes change dead-certain to fail.
Ignoring trust as critical success factor for adoption/diffusion leads to failure, all the more counterproductive because blaming becomes inevitable.
Neglecting commitments undermines credibility.
Confused expectations promote confusion.
Doing the right things at the wrong time ... is completely wrong.
Failure to recognize the client system’s rationality results in a stalemate, at best.
Especially big mistake: Premature closure of what should essentially be left open.
Don’t brood on the past; opportunities are always in the future.
Be careful, though, not to bite off more than you can chew.
And don’t jump to conclusions ... when they’re not the participants’ conclusions.
Don’t take established ways of work division for granted.
Don’t take planning too seriously. You’d be missing opportunities ‘as they present themselves.’
Temper eagerness to do things right the first time around, as opposed to accepting complexity, which requires cycles to master.
Never forget about learning.
Don’t overlook conditional benefits.
Honor that genuine change is team work across units.
In terms of Rational Unified Process, for example, where relevant disciplines (requirements, analysis & design, implementation, etcetera) all reside under single operational management, the condition of artifacts and especially activities being tightly interdependent should not be too difficult to establish. However, outsourcing causes even formalized direct operational responsibilities to reside with different parties. For a typical project, then, both principal and one or more so-called vendors need to make organized contributions across purposely established discontinuities. At the minimum, it clearly increases control complexity. And it might make the practice of iterative development illusory.
Developing software that will operate in isolation has fast become an exception. Increasingly, there is infrastructure for information management, too. As a consequence, a particular development is mainly ‘about’ applying such infrastructure (which itself is also developed as a consequence, and so on).
So-called disciplines do not operate in isolation. After all, it is a process, regardless of particular framework.
As the person-in-between, coordinating, that is, a change agent sees to it that an open dialogue is undertaken immediately. Parties must start exchanging mutual realities/rationalities.
Against the background of the possibility of constructive disagreement (to agree to disagree), often it really shouldn’t be too difficult to recognize closer agreement, after all.
Openness requires being able to listen to explanations, suggestions, etcetera, and to learn from them. It’s therefore useless to jump to necessarily one-sided conclusions. That is not what a change agent carries a characteristically double loyalty for. (S)he’ll have to see where the dialogue leads.
2006, web edition 2006 © Pieter Wisse