
Towards a Theory of Meta-Content

R. V. Guha

This document makes a case for a more principled approach to meta-content. In particular, I argue for the use of a single expressive language for encoding meta-content, irrespective of the source, location or format of the content itself.

What is meta-content?

We have a fairly clear idea of what we mean by the term content. It includes most of the documents on our hard disks, pages on the WWW, messages in email folders, etc. Meta-content is anything about this content. The information presented to us by the Macintosh Finder is a very simple (almost trivial) example of meta-content. On the WWW, Yahoo and other hierarchical organizations of pages around topics are examples of meta-content. The hyper-indices used by search engines such as AppleSearch and Lycos are a shallow form of meta-content.

There can be several sets of meta-content around a piece of content. Meta-content need not pertain only to what the content is about. It could pertain to the storage, age or accessibility of the content. It could pertain to opinions or ratings of the content. It could pertain to the logical structure of an argument. In short, the range of meta-content is almost as broad as that of content itself.

There is no clear line between content and meta-content. We can always view any piece of meta-content as content. The best example of this blurring occurs in the case of book reviews. A book review is a piece of meta-information about a piece of content - the book being reviewed. We can, however, regard the review itself as content and provide descriptions of it, such as when it was written, who wrote it or the context of its writing, and this would be meta-content about the review. In fact, we could go one more level meta and talk about the source of this meta-content, and so on.

Why bother with meta-content?

We are already significant users of meta-content. We are working with meta-content when we are working in the Finder, sorting email or using one of the WWW search engines. Almost all the time we spend searching for and organizing information is time spent with meta-content. What is missing is a general theory and approach to dealing with meta-content. At least some of the frustration of dealing with information can be relieved by a more principled approach to meta-content.

To attempt to list all the uses of meta-content would be like trying to list the uses of programming languages. But to motivate the issues, here are three sample uses. I have picked simple examples which do not require any piece of code that we don't know how to write today.

  1. Simple lexical, word-occurrence-based searching is by far the most prevalent way of searching for information today. One of the shortcomings of this approach is its inability to distinguish between different word senses.
     
    Example: The user is using one of the WWW search engines (such as Lycos) to search for pages about lions - the animal - not The Lion King, Red Lion Hotels or the Lions Club. Since all that the search engine is looking for are occurrences of the four characters "lion", there is no way in which it can distinguish between these different uses of the word "lion".
     
    How is one to recognize the word sense of a particular occurrence of "lion" without solving the natural language understanding problem? One way of identifying occurrences of "lion" that have a significantly greater likelihood of referring to the animal is to use a subject categorization of the WWW pages. Yahoo, for example, contains a category corresponding to "Animals & Pets". Pages that occur under this category and that use the word "lion" are more likely to be using it to refer to the animal. Unfortunately, Yahoo does not index the words that occur in the content of pages. But our program can issue a query to a search engine such as Lycos, translate ("lift") the answers into a common meta-content language, filter out those pages that don't occur under the "Animals & Pets" part of the Yahoo hierarchy and give us a small set of pages all of which are most likely about lion the animal.
     
    Of course, we can still get false hits because of pages under the "Animals & Pets" part of Yahoo referring to, for example, some pet-related activity organized by the Lions Club. Also, we are making an assumption that Lycos and Yahoo overlap significantly in their coverage of the WWW (a correct assumption). But this approach at least gives us a start towards solving the problem.
     
    The point here is that as soon as we are able to take meta-content from different sources and put it together, even with relatively simple kinds of meta-content, we can begin to do really interesting things. (A rough sketch of this sort of combination appears after these examples.)
     
  2. There is a strong one-to-one mapping between pages consumed by a reader and pages created by an author. In fact, it is so basic that we hardly even think about it. This can be contrasted with the vision of component-based software, where coherent assemblages of components from different authors/programmers are what is used at any point in time.
     
    The standard model of search assumes this one-to-one mapping. Assume, for example, that the user would like a page listing product offerings from toy companies with retail outlets in the Bay Area. There is no page currently on the WWW that has this information. Under this model, it would be appropriate for a search engine to find nothing.
     
    However, the different pieces of information that would be on this hypothetical page are available from different sources on the WWW. Let us assume that we had meta-content describing the contents of these relevant sources. The system can then assemble for us the page that we want.
     
    The point here is that in a context where there is richly structured meta-content, searching gets transformed into structured query processing and inferencing - a process that is far more powerful than searching for occurrences of words. In such a context, we might begin to look at the processes of authoring and searching in an entirely different light. The search process will itself involve some aspect of authoring.
     
  3. One of the less desirable aspects of the WWW is that the user often finds herself in the middle of a document without any clues about the context of the information she is being presented with. In this sense, the WWW takes today's "news bite" mode of information presentation to an extreme.
     
    Understanding the context of a document (assuming the document is more than just a bunch of pretty pictures) is an integral part of understanding the document. Capturing the entire context of a WWW page is a daunting task, but there are at least some simple contextual aspects of a page - the other pages that point to it, the other pages written by its author and the popularity of the page - that can be automatically gathered. In addition, some authors might want to manually provide some meta-content about the context of their page(s).
     
    If a browser were to have the context meta-content associated with a page, this could be presented to the user in the background and in many cases could help with the comprehension of the page.
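
Returning to the first example, the following is a minimal sketch (in Python) of the kind of combination described there. The functions lycos_search and yahoo_pages_under, and the URLs they return, are hypothetical stand-ins for whatever mechanism actually delivers the word-occurrence hits and the category listing; the point is only that once both answers are lifted into a common form - here, plain sets of URLs - the word-sense filtering step becomes trivial.

  # Hypothetical sketch: combining word-occurrence hits with a subject
  # categorization to approximate word-sense filtering.

  def lycos_search(word):
      # Stand-in for a query to a word-occurrence search engine; returns
      # the URLs of pages whose text contains the word.
      return {
          "http://example.com/lion-king-fan-page",
          "http://example.com/red-lion-hotel",
          "http://example.com/african-wildlife/lions",
      }

  def yahoo_pages_under(category):
      # Stand-in for the set of URLs listed under a subject category
      # (and its subcategories) in a hierarchy such as Yahoo's.
      return {
          "http://example.com/african-wildlife/lions",
          "http://example.com/pet-adoption",
      }

  def pages_about(word, category):
      # Lift both answers into a common form (sets of URLs) and intersect.
      return lycos_search(word) & yahoo_pages_under(category)

  print(pages_about("lion", "Animals & Pets"))
  # -> {'http://example.com/african-wildlife/lions'}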
     

Some aspects of a theory of meta-content

One of the points I am trying to make is that we should use a common language to represent the myriad forms of meta-content we might want to use. This meta-content language is, at its core, just a representation language. We could use a relational database, object oriented database, frame system or logic based knowledge representation system to store and access the meta-content. All that is required is a persistent store and a query mechanism.
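
To make the point about storage concrete, here is a minimal sketch that keeps meta-content statements as subject/predicate/object triples in a relational table and answers queries over them with SQL. The table layout and the sample statements are invented for the illustration; any of the other kinds of store mentioned above would serve equally well.

  # Minimal sketch: meta-content statements as triples in a relational store.
  import sqlite3

  db = sqlite3.connect(":memory:")
  db.execute("CREATE TABLE mc (subject TEXT, predicate TEXT, object TEXT)")
  db.executemany("INSERT INTO mc VALUES (?, ?, ?)", [
      ("http://example.com/lions.html", "category", "Animals & Pets"),
      ("http://example.com/lions.html", "author",   "Jane Doe"),
      ("file:///home/user/notes.txt",   "category", "Animals & Pets"),
  ])

  # The "query mechanism" is just SQL here; note that the statements can
  # be about WWW pages and local files alike.
  rows = db.execute(
      "SELECT subject FROM mc WHERE predicate = 'category' AND object = ?",
      ("Animals & Pets",))
  print([r[0] for r in rows])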

But a theory of meta-content is not the same as a scheme for storing and accessing meta-content. The theory should also provide some guidelines for the organization and use of the meta-content.

A final theory of meta-content will be to content what Lisp or Smalltalk are to programming. It should provide a conceptual framework for the creation, organization and consumption of content. We are still in the "Fortran era" of meta-content. However, there are certain elements that should be part of any good theory of meta-content. Listed below are some that I feel are essential.

  1. The representation, manipulation and storage of meta-content should not be tied to that of the content it describes. For example, it should be possible to have a single collection of meta-content statements (I will refer to a collection of meta-content statements as an MT) that are about pages on the WWW, files on an AppleTalk network, messages in email folders, etc.
     
  2. The meta-content should be machine understandable. Unlike content, which is typically meant for direct human consumption, meta-content is typically meant for machine consumption. Having a few lines of arbitrary text as "comments" about each file (as in the Finder) does not constitute a theory of meta-content.
     
  3. The meta-content language should be very expressive. There is nothing more frustrating than wanting to tell the machine something but not being able to tell it. Non-expressiveness only leads to kludges. The appendix has more on the language itself.
     
    Statements in the meta-content language should be able to refer not just to files, WWW pages and the like, but also to people, places and activities.
     
  4. We should aim for richly structured pieces of meta-content. The quality of services provided by programs that use meta-content is proportional to the richness of the meta-content they use.
     
    Today, meta-content is very poor in structure. Simple searching based on the words occurring in a text is the predominant way of searching for information. Hierarchical organizations of content (such as in the Finder or Yahoo) represent the richest structures available today.
     
    An example of something that goes just beyond this is a program that runs around the web parsing corporate web pages and recognizing the pages that have to do with job listings, product information, news releases, etc. Some other program can then use this to "retrieve" documents for a user (as in our example where the user was interested in product offerings from Bay Area toy companies).
     
  5. The authoring and publication of meta-content should be separable from its consumption. The programs/people who generate MTs could be different from those who consume them. It should also be separable in a stronger sense - different programs such as search engines, browsers and authoring tools should be able to tap into the same MT for very different purposes.
     
  6. The meta-content language should have reflective abilities. It should be possible, from within the language, to view any MT itself as content. This ability is required to combine information from two or more MTs whose vocabularies might vary. Also, as the amount of meta-content grows, we will need meta-meta-content to manage the meta-content itself. At that point we should not have to invent new mechanisms for that purpose.
     
  7. It should be possible to aggregate two or more MTs into a single MT or into an MT of MTs. From a functional perspective, it should be possible to combine information obtained from different sources to fulfill a request. In the example of putting together a page of product offerings from Bay Area toy companies, one source of meta-content (call it XYZ) cannot determine that a certain company is in the Bay Area (or even that it is a toy company). All it can do is determine that a certain page is about the location or product offerings of a company. However, another program (LMN) might be able to integrate this information with information from the Pacific Bell yellow pages to fulfill the user's request. There should be nothing in the meta-content language that makes this difficult.
     
    One consequence of this is that the representation of MTs should be independent of how they are constructed. Portions of an MT might come directly from a user and other portions might come from a program. But a program using the MT should not have to worry about this.
     
  8. The aggregation of MTs should take place not just at the format and syntax level but at the semantic level.
     
    For example, consider two of the more popular WWW hierarchies - Yahoo and Magellan. They have similar categories but use different names for the same categories. It should be possible to map these different names into one another within the language. This is admittedly a trivial example. In general, we should be able to write statements that provide partial mappings from one ontology into another. (A small sketch of such a mapping appears at the end of this section.)
     
    This is related to the use of MTs which are about other MTs (the meta-meta level). Such a mapping can be done either by mapping each vocabulary directly into the other or by mapping both into a neutral common terminology.
     
  9. The meta-content language should come with a rich "starter set" vocabulary that provides the preferred terms to use to refer to people, places, time, etc. This is the analog of the "standard libraries" that most programming languages have. In addition to making it easier to create MTs, this will also make it easier to aggregate MTs.
     
  10. The meta-content language is not really intended to be a programming language or a data structure, but a communication language. The data structure and programming language needs of a program that attempts to do animation in a 2.5D layout of a million-node hierarchy are going to be very different from those of a search program that deals with a billion pieces of meta-content but can take thirty seconds to respond. The databases/programs that have meta-content might store it any way they want, and the programs that use it might likewise store it any way they want. But there is a standard query language with an agreed-upon semantics.

The meta-content language is not only independent of the format and location of the content; it is also independent of the internal data structures used by the application to store the meta-content.
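
As a small illustration of points 7 and 8 (the sketch referred to above), here are two MTs that use different category vocabularies being aggregated under a handful of mapping statements. The MT representation, the category names and the mapping table are all invented for the illustration; in the actual language the mapping would be expressed as lifting statements rather than as a Python dictionary.

  # Sketch: aggregating two MTs whose category vocabularies differ.
  # Each MT is represented as a list of (page, predicate, value) statements.

  yahoo_mt = [
      ("http://example.com/lions.html", "category", "Animals & Pets"),
  ]
  magellan_mt = [
      ("http://example.com/tigers.html", "category", "Wildlife"),
  ]

  # Mapping statements: a (very) partial mapping from one vocabulary
  # into the other.
  magellan_to_yahoo = {
      "Wildlife": "Animals & Pets",
  }

  def lift(statements, mapping):
      # Restate each statement in the target vocabulary where a mapping
      # exists; leave it unchanged otherwise.
      return [(s, p, mapping.get(o, o)) for (s, p, o) in statements]

  # The aggregate MT: both sources, stated in a single vocabulary.
  aggregate_mt = yahoo_mt + lift(magellan_mt, magellan_to_yahoo)
  print(aggregate_mt)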

More on near-term uses of the meta-content language

There are a couple of caveats that apply to any new representation of things:

  1. They will almost never enable some new functionality that could not have been somehow achieved using earlier techniques. So, why should we use this language for the uses listed here? The advantages of using a common language for all these uses/applications are:
     
    1. it provides a high-level launching point for building the application,
       
    2. it provides a separation between the problem of acquiring the meta-content and that of using it. Separation between creation and consumption of the meta-content is a necessary step towards creating a marketplace for meta-content. There is a significant "service component" to the creation and delivery of meta-content. This point by itself should be justification enough for the promotion of a common meta-content language.
       
    3. it promotes shared use of the same meta-content across several applications.
       
  2. A new representation needs a critical mass of "stuff" in it before others will start authoring in it. I would rather not assume that a large number of people are suddenly going to produce meta-content in our language. At least initially, it will all have to come from us, and that means from programs we write. All the applications listed here depend only on meta-content which we know how to collect automatically.
     
    Unfortunately, programs we write will not be capable of producing meta-content that is very rich in structure. And so, the first generation of applications will most likely be a pale shadow of what is really possible. The goal at this stage is to build something that will provide some threshold utility over what is available today.
     

Appendix A: Rough description of the meta-content language

The language is basically first order logic together with a context mechanism. I will assume the reader is familiar with first order logic and only briefly describe the context additions here.

In addition to the components of first order logic (i.e., predicates, variables, objects and logical connectives), we have a set of objects called contexts. The notation ist(c,p) is used for stating that the sentence p is true in the context c. Contexts are "first class objects" in that they can be quantified over, they can be arguments to predicates, etc. In particular, ist(c,p) itself may occur as the second argument to the predicate ist, i.e., it might be true only in an outer context. It is this ability that allows us to talk about clusters of meta-content within the language.

Intuitively, contexts denote theories (i.e., logical closures of sets of axioms). In our use, typically these theories will be clusters of meta-content (MTs). Different contexts may use different predicates and terms and even associate different meanings with the same symbol. Lifting rules may be written to map the contents of one or more contexts into another context.
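
To make this concrete, here is one hypothetical chain of statements; the context names, predicates and constants are invented for the illustration. A statement holding inside a particular MT, a lifting rule importing such statements into an aggregate MT, and a pair of statements one level up (where the first MT is itself treated as an object) might look like:

  ist(YahooMT, about(Page17, Lion))
  (for all x, y) ist(YahooMT, about(x, y)) => ist(AggregateMT, about(x, y))
  ist(OuterMT, ist(YahooMT, about(Page17, Lion)))
  ist(OuterMT, source(YahooMT, Yahoo))

The first statement holds only with respect to YahooMT; the second lifts every "about" statement from YahooMT into AggregateMT; the last two hold in an outer context and describe YahooMT itself, which is the sense in which an MT can be treated as content.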

I have included a context mechanism in the language for two different reasons.

  1. To enable systems which are never permanently stuck with the concepts they use at a given time because they can always transcend the context they are in. Such a capability would allow the designer of the system to include only such phenomena as are required for the system's immediate purpose, retaining the assurance that if a broader system is required later, lifting axioms can be devised to restate the facts from the narrow context in the broader context with qualifications added as necessary.
     
  2. To enable a fluid transition back and forth between viewing a piece of meta-content as meta-content and viewing it as content (in order to use some meta-meta-content to access and manipulate it.)

There are several problems associated with the expressiveness of the language. The computational complexity of inferencing is too high. The language is too expressive for non-logicians to effectively use. It is also very difficult to build graphical interfaces for writing some of the more interesting statements in the language.

The only solution seems to be to use extremely limited subsets of first order logic for the meta-content itself. One interesting subset is a frame-like language with simple inheritance as the only kind of rule. Interestingly, the most important kind of lifting rule, where one context subsumes the contents of another, can be captured as a simple inheritance rule.
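
As a rough sketch of what such a subset might look like in practice (the class and the names below are invented for the illustration, not a proposed design), "context A subsumes the contents of context B" can be modelled simply as A inheriting every statement of B:

  # Sketch: a frame-like subset where simple inheritance is the only rule,
  # and subsumption-style lifting is just inheritance between contexts.

  class Context:
      def __init__(self, name, parents=()):
          self.name = name
          self.parents = list(parents)   # contexts whose contents this one subsumes
          self.statements = set()        # locally asserted sentences

      def holds(self, sentence):
          # A sentence holds here if it is asserted locally or holds in
          # any subsumed (parent) context.
          return sentence in self.statements or any(
              p.holds(sentence) for p in self.parents)

  yahoo_mt = Context("YahooMT")
  yahoo_mt.statements.add("about(Page17, Lion)")

  # AggregateMT subsumes YahooMT; no separate lifting machinery is needed.
  aggregate_mt = Context("AggregateMT", parents=[yahoo_mt])
  print(aggregate_mt.holds("about(Page17, Lion)"))   # True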