What do money and sex have in common?
No. The correct answer was fungibility: sex and money are each exchangeable within their respective universal marketplaces.
Subject to certain constraints, sexual reproduction can be attempted by any pair of mature organisms of the same species.
Likewise, under regulatory constraints, the owner of any commodity or service may attempt to trade with any other owner, by assigning a fungible monetary value to the tradable items or services.
Whilst this is not a new observation, it is notable that Information Science has so far failed to find its own fungible medium of exchange.
There exists no universal medium through which digital information is exchanged.
As long as networked devices can establish a common protocol, they may engage in constrained transactions that might be seen as analogous to barter: point-to-point exchange of specific, pre-determined data through a tightly controlled application interface or schema.
As Economics 101 points out, barter has significant limitations: the traded items must always be moved to the point of sale in search of an unlikely double coincidence of matching supply and demand.
So far we have experienced only this restricted way of working with information systems, so it is not always apparent to us that the current state of the art in Information Science places a logistical burden on users similar to the one that barter places on the trade of goods and services.
Engineered applications generally require tightly constrained data as input and provide comparable output.
As users we have two alternatives. We can use our flexible grey matter to take on the complexity of resolving the separately engineered information structures of multifarious systems, turning ourselves into the bottleneck of the process and requiring us to anticipate the brittleness of the systems we use. Alternatively, we can ask system engineers to engineer a more flexible solution to the problem.
By analysing natural data structures, imposing semi-arbitrary classifications of information, creating and enforcing standards, and using advanced system architectures, engineers have been able to develop application interfaces that allow systems to inter-communicate at somewhat high levels of abstraction. Unfortunately, the effort to develop and maintain these interfaces does not seem to provide sufficient payback to users for such information systems to become commonplace. It is as if we were devising a set of rules for trading any pair of commodities that somebody might bring to the market: a sow and three chickens is equivalent to a canoe of good and recent construction that can carry two people. It is not difficult to see how, by extrapolation, the number of rules needed in a functional market, even for local produce only, would soon exceed the utility of such a design.
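The combinatorial burden of pairwise rules can be made concrete. The sketch below (illustrative only; the function names are our own invention) counts the exchange rules a market needs with and without a common medium of exchange.

```python
# Illustrative sketch: pairwise barter rules grow quadratically with
# the number of goods, while a fungible medium needs one price each.

def pairwise_rules(n_goods: int) -> int:
    # Every unordered pair of goods needs its own exchange rule:
    # n choose 2 = n * (n - 1) / 2.
    return n_goods * (n_goods - 1) // 2

def fungible_rules(n_goods: int) -> int:
    # With a common medium, each good needs only a single price.
    return n_goods

for n in (10, 100, 1000):
    print(f"{n} goods: {pairwise_rules(n)} barter rules "
          f"vs {fungible_rules(n)} prices")
```

Even a modest market of a thousand local goods would need half a million barter rules, against a thousand prices.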
And yet, humans are consistently able to evaluate and process new information quickly and effectively; to generate order from unstructured inputs. So how is it that we humans are able to do what so far computers cannot achieve? To begin with, it does not seem necessary to deploy the vast intellectual resources that many humans have at their disposal to resolve this type of problem. Other animal species are also adept at classifying objects and acting according to these models.
It is possible that the larger brains of humans are required for handling precisely the kind of tasks that information systems are also relatively good at. As a species we are uniquely endowed with a rich culture, passed through generations by a host of communication media such as natural languages. Some of the challenges of language are common to those of system inter-operability described above. To communicate effectively we must first constrain the range of the information we wish to impart, taking account of our audience's likely comprehension.
For reasons of parsimony, language offers fewer ways to describe the sensation of a smell than, say, a mathematical abstraction, which can be mutually constructed without reference to internal experience.
Next, we approximate an idea using judiciously selected words from a finite, pre-determined vocabulary, and we use these words according to well-established grammatical custom. All the while, we seek verbal and non-verbal cues from our counter-party to confirm their comprehension, adjusting our presentation according to the direction we understand their thought to be taking.
For generations, information scientists have seen language acquisition and use as the apex of what systems ought to be capable of achieving. And indeed it is hard to disagree that the goal is ultimately worth pursuing.
Effort A-, Progress C
In focusing on the language metaphor, system designers have failed to deliver even the basics. Systems still lack flexibility. Conspicuously missing is the individual perspective of the real world that is common to all living creatures. The explanation is that we have engineered systems to achieve specific goals. By taking this approach, engineers drastically reduced the information content available in the data that we require our systems to process. They compartmentalized the information to such an extent that different systems' data structures are by and large not inter-operable. The specific goals of each system are certainly useful, and the success of some systems in meeting their goals cannot be denied. Yet the broader goal of universal information inter-operability will never be achieved by merely doing more of the same.
When starting out to develop new systems, natural language was an unfortunate foundation to build upon. Relative to the vast processing power of the brains of humans and other living organisms, language conveys little information by itself. Consequently, the paucity of information with which we have been able to endow digital systems has been a major qualitative constraint on their ability to improve.
Rather than continuing with natural language, we would do better to adopt a new digital communication medium: one that can overcome some of the shortcomings of natural language, and that can also assimilate natural languages and other information schemata into itself.
History of classification
It is nevertheless understandable how matters have turned out this way. Complexity is the enemy of engineers, and engineers have traditionally been the people who develop models for computer systems. It was essential for them to restrict information content to functional essentials, by means of data classification and other reductionist techniques. A dominant theme in the development of human culture has been to classify the world using the tools provided by our intellect, and to propagate these models back into our culture through communication. In other words, accepted models have necessarily been filtered by externalities such as the special talents of the evolved human brain, the particular constraints of natural language, and cultural acceptance.
It would be interesting to know what the earliest examples were of a hierarchy being used to model some abstract characteristic of the natural world, and how the model became adopted into culture. It could, for instance, have been used to describe the patriarchal or matriarchal lineage of certain people. Whether or not this is historically correct, the example clearly shows how, at each node in the tree, half of all the available information would necessarily have been lost.
At the level of the whole information system, such an arbitrary restriction of information would have a large impact on the system's ability to truly document lineage. It is presumed that documenting a complete genetic history would have been the modeler's implicit objective, but that the goal was unachievable because complexity had to be reduced under the constraints imposed by the available tools: in particular, the inability of human culture to develop and communicate a multi-dimensional model, rather than the two-dimensional hierarchy actually represented. We may speculate how culture might have developed differently, had tools allowing a more robust perspective of the world been available to these early modelers.
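The halving of information at each node can be sketched in a few lines (the class and function names below are our own, for illustration only). A strict lineage hierarchy records one parent per person, whereas full ancestry is a two-parent graph rather than a hierarchy.

```python
# Sketch of the information loss a single-parent lineage imposes.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LineageNode:
    name: str
    parent: Optional["LineageNode"] = None  # tree: one recorded parent

@dataclass
class AncestryNode:
    name: str
    mother: Optional["AncestryNode"] = None  # graph: both parents kept
    father: Optional["AncestryNode"] = None

def ancestors_representable(generations: int) -> tuple[int, int]:
    # Single-parent tree: one ancestor per generation back.
    tree = generations
    # Two-parent graph: 2 + 4 + ... + 2**g = 2**(g + 1) - 2 ancestors.
    graph = 2 ** (generations + 1) - 2
    return tree, graph

print(ancestors_representable(4))  # 4 recorded of 30 actual ancestors
```

Four generations back, the hierarchy retains four ancestors out of thirty; the gap widens exponentially with depth.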
Similarly, the development of scientific thought has been characterized by abstraction, functional modeling, communication, and the maintenance of standards and tropes pertinent to the field. Would Linnaeus, for instance, have used a hierarchical paradigm to classify living organisms, if he'd had the tools to model and communicate a more subtle, less arbitrary classification system based on observable characteristics?
The key question here is not whether such hierarchical models are right or wrong. Any model must necessarily implement a degree of abstraction that distinguishes the model from the thing modeled, and the utility of such models is demonstrable. Rather, we are interested in understanding what costs may have been incurred by simplification. It can be argued that hierarchical models introduce and sustain undesirable characteristics. At any degree of complexity beyond the trivial, administrators are invariably needed to maintain the purity of the model's top-down schema. Such an administration, functioning as a bureaucracy, can eventually become more focused on its parochial concerns than on its original purpose. Administrators can lose sight of the trees for the tree structures. Dissent must be suppressed to prevent individual points of view from overturning the model, which could have a negative systemic impact. Thus, information must either be made to conform, or be excluded from the model. Exclusion or distortion of information at the point of impact may seem a small loss; aggregated over the whole system, it can be severely restrictive.
Perhaps the most significant missed opportunity of hierarchical modeling has been inter-operability. Structurally, it is not possible to reference one hierarchy from another while maintaining the integrity of both. The sole exception is a merger of one hierarchy, at its root, to the other at a single point below its root.
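The structural claim can be sketched as follows (an illustration under our own naming, not a reference to any real system): in a strict hierarchy every node has at most one parent, so a node inside tree A cannot also be attached inside tree B without breaking one of the two trees. Attaching A's root beneath a node of B is the one merge that preserves the invariant everywhere.

```python
# Sketch: the single-parent invariant of a strict hierarchy.
class Node:
    def __init__(self, label: str):
        self.label = label
        self.parent = None
        self.children = []

    def attach(self, child: "Node") -> None:
        if child.parent is not None:
            raise ValueError(
                f"{child.label} already has a parent; a second link "
                "would break the single-parent invariant")
        child.parent = self
        self.children.append(child)

# Two independent hierarchies.
a_root, a_leaf = Node("A"), Node("a1")
a_root.attach(a_leaf)
b_root, b_leaf = Node("B"), Node("b1")
b_root.attach(b_leaf)

b_leaf.attach(a_root)      # legal: merge A at its root, below B's root
try:
    b_root.attach(a_leaf)  # illegal: a1 is already a child within A
except ValueError as err:
    print(err)
```

Any richer cross-reference between the two structures requires abandoning the tree for a graph, which is precisely what hierarchical tooling forbids.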
As scientific endeavour became ever more specialised, opportunities for cross-fertilisation between disciplines diminished, in part because of the inhibiting effect of the information structures used.