If you read at least the introductions of my GMN2009 series (suggestion: search for the tag gmn2009 to see only the introductions), which will be completed in its first run by May 24th, you know that I applied some Artificial Intelligence approaches to my software development activities.
Actually, while working in the late 1980s with Comshare on behalf of Andersen in Italy, I wanted to build a PROLOG-based natural language processing (NLP) interface, modeled on ELIZA (a program emulating a psychotherapist), for DSS/EIS/Control/Audit models.
Why? Because the rules were sometimes, well, complex.
I mean- sometimes customers required analyses that, once the data was provided, could be delivered by a model “number crunching” across multiple dimensions.
If you are lost now… well, do not worry- it is just the usual consultants’ trick of hiding behind words, so that they can overcharge a distracted customer 😀
Ok, just joking.
Let’s talk about something practical.
You know about time (one dimension).
And the quantity (another dimension).
At each point in time, you will have a certain position, given by two coordinates: the time itself and the quantity.
If you add a third element of analysis (say, country), you have three dimensions.
(image from Wikipedia, under CC, converted to PNG for publishing online)
Now, if you want to analyze by product, you add a fourth dimension.
And if you want to analyze also by target market, this is a fifth dimension.
And if you want to add whether the customer is one-off or repeat, this is a sixth dimension.
And if you want to map also the distribution channel used, this is a seventh dimension.
Which means that each position in your model (or “data cell”) will represent:
- When = Time, conventionally called the X
- How much = Quantity, conventionally called the Y
- Where = Country, conventionally called the Z
- What = Product; at this point, you stop assigning letters or trying to represent it graphically
- Who = Target market
- Who2 = Repeat customer or not
- Who3 = Distribution channel
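The addressing scheme above can be sketched as a minimal data structure: each “data cell” is a quantity stored at one combination of coordinates. This is only an illustrative sketch (dimension names and values are made up; real multidimensional engines store this far more compactly):

```python
# A "data cell" addressed by six dimensions (time, country, product,
# target market, repeat/one-off, channel); quantity is the stored value.
# All names below are illustrative, not from any real model.

cells = {}

def set_cell(time, country, product, market, repeat, channel, quantity):
    """Store a quantity at one position in the multidimensional model."""
    cells[(time, country, product, market, repeat, channel)] = quantity

def slice_by(**fixed):
    """Return all cells matching the fixed coordinates (a 'slice')."""
    dims = ("time", "country", "product", "market", "repeat", "channel")
    result = {}
    for key, qty in cells.items():
        coords = dict(zip(dims, key))
        if all(coords[d] == v for d, v in fixed.items()):
            result[key] = qty
    return result

set_cell("2009-Q1", "Italy", "WidgetA", "SMB", "repeat", "direct", 120)
set_cell("2009-Q1", "France", "WidgetA", "SMB", "one-off", "reseller", 45)

# Slice: everything sold in Italy.
italy = slice_by(country="Italy")
```

Fixing some coordinates and letting the others vary is exactly the “analysis across multiple dimensions” described above.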
Incidentally: our world is three dimensional, and by adding time you have a fourth dimension.
If it seems clear so far, now welcome to the real (business) world.
Some products are not available in every market, or for every combination of the “Who” dimensions, and some are sold only in certain countries.
And so on, and so on.
The nice point about this kind of analysis is that first comes reality, then comes a model to try to understand and monitor reality.
My idea was really simple: allow the model to explain its reasoning path.
Some competitors had an “explain” function- but it was written by software designers for software designers: who cares about the formula keywords used? I want to understand the why.
My target? The senior managers using the models who, in the late 1980s, often considered using a keyboard demeaning.
So, there needed to be a good reason.
Eventually, as happened years later with BRIO, when I talked about a knowledge distribution product that I was building, I shelved the Artificial Intelligence module: yes, it could be interesting.
But their idea was to get the specs and say thank you, not to pay for it 😀
Thanks to Internet-based tools, including free websites and Web 2.0, access to online information is now driven by the unstructured interests and needs of the public- individual citizens.
And if you look around you, you will see a renewed interest in machine intelligence.
The reason is quite simple: too much data, and not enough information and expertise to process it.
With minimal help, my 9-year-old nephew was able to search, prepare, structure, and print, in less than an afternoon, a research project on Egypt that in my time would have taken days in the library.
And… the sheer quality of what he prepared was way beyond most research reports that I paid for just a few years ago.
Because he focused on the need and the knowledge, not on the process.
Moreover: as anybody (myself included, of course) with a keyboard and a connection can flood the Internet with written material, you need some “guide” to get you through.
In the beginning, Yahoo had human reviewers- it was acting as a library, classifying and categorizing links.
Then Google came, with an algorithm based, basically, on connections between material (roughly: network analysis; look it up on Wikipedia, and you can read everything about its history).
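The link-analysis idea can be sketched in a few lines- a toy, simplified version of link-based ranking, with a made-up set of pages and links (real search engines are vastly more complex):

```python
# Toy illustration of link-based ranking (simplified PageRank).
# Pages and their outgoing links are invented for this example.

links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, damping=0.85, iterations=50):
    """Iteratively redistribute rank along links until it stabilizes."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {}
        for p in pages:
            # Sum the rank flowing in from every page that links to p.
            incoming = sum(rank[q] / len(links[q])
                           for q in pages if p in links[q])
            new[p] = (1 - damping) / len(pages) + damping * incoming
        rank = new
    return rank

ranks = pagerank(links)
# "C" is the page most pages link to, so it ends up ranked highest.
```

The point of the connection-based approach: importance is inferred from the structure of links, not from any human classification of the content.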
The future of searching
Long ago, in the early 1980s, I sold my first program- built on a Sinclair Spectrum, it used an approach I had seen in videogames (this was before the real Windows) to partition the screen into independent areas, each one devoted to a specific sub-task.
The purpose of the program: input a 2nd degree equation, and obtain the solutions, the chart, and, if possible, a symbolic solution (right now, you can simply use this website).
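The core of that program can be sketched in modern terms- a minimal quadratic solver (the original was in Spectrum BASIC; this Python version is only an illustration of the same idea):

```python
# Sketch of that early program's core: solve a 2nd-degree equation
# a*x^2 + b*x + c = 0, returning the two roots (complex if needed).
import cmath

def solve_quadratic(a, b, c):
    """Return the two roots of a*x^2 + b*x + c = 0 (a must be non-zero)."""
    if a == 0:
        raise ValueError("not a 2nd-degree equation: a must be non-zero")
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

# x^2 - 5x + 6 = 0 has roots 2 and 3.
r1, r2 = solve_quadratic(1, -5, 6)
```

The original program then drew the chart and, when possible, a symbolic solution, each in its own screen area.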
Then, in my favorite science and engineering shop in my hometown, I saw a piece of software called Mathematica, which adopted a similar visual, symbolic approach.
More than software to solve, well, engineering and mathematics problems, it is a mathematical toolkit and language to build models about anything.
Because, for example, recently I read on The New York Times’ Twitter feed some “breaking news” well before Reuters or The New York Times itself posted the information.
And, in science/technology/business, sometimes twitterers (I like- it sounds like “critters”, a science fiction movie) post something that they should not post- or vent instant frustration that gives more than a glimpse of things to come.
The first one (a Google result) is a list of data, to be processed and transformed into information.
The second one (a WolframAlpha result) is instead structured information, with a direct link to produce a report in Acrobat, or to enter Mathematica itself and do further analysis.
It is still preliminary. But it is certainly worth a try- and some “breathing space” (nobody remembers how Google was at the beginning).
So, we have now two search approaches.
The main issue that I see in Alpha is: who decides the structure? That is inherently selective.
I think that probably the best approach would be neither for Alpha to become the next Google, nor for Google to add a further module and become Alpha-like.
They have two approaches. And eventually, probably, both will carve their own niche.
And create new positions (I mean- jobs), like “competitive webintelligence analyst”.
I think that an interesting by-product could be a SaaS (Software as a Service) that uses a real artificial intelligence front-end to guide users through their needs, “feeding” both Google and Alpha to return the “raw data” and the “processed information”.
By using an Eliza-style question-and-answer dialogue, even the most complex queries could be solved.
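The idea can be sketched as an ELIZA-style pattern matcher that turns a user’s sentence into a structured query for the downstream engines. The patterns and the query format below are invented for illustration:

```python
# Toy ELIZA-style front-end: match the user's sentence against a rule
# list and build a structured query for downstream search engines.
# The rules and query fields are made up for this sketch.
import re

RULES = [
    (re.compile(r"compare (\w+) and (\w+)", re.I),
     lambda m: {"intent": "compare", "terms": [m.group(1), m.group(2)]}),
    (re.compile(r"how many (\w+) in (\w+)", re.I),
     lambda m: {"intent": "count", "what": m.group(1), "where": m.group(2)}),
]

def parse(utterance):
    """Return a structured query for the first matching rule."""
    for pattern, builder in RULES:
        m = pattern.search(utterance)
        if m:
            return builder(m)
    # No rule matched: fall back to a plain keyword query.
    return {"intent": "search", "terms": utterance.split()}

query = parse("Compare France and Italy")
```

A real front-end would keep asking follow-up questions, ELIZA-style, until the query is fully specified- then pass it to Google for the raw data and to Alpha for the processed information.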
I had created something a few years ago with exactly that purpose- but, of course, I ended up with just a website (Omnianetwork.Net): the resources required would have been staggering, just to add a front-end to Google.
It would be now interesting to create such a service as a public domain/free service, akin to Archive.org, to allow sharing and searching the data available online, and convert it into knowledge.
Actually, it is something that is happening now.
If you read my article on “games” (GMN2009: GAMES), you saw a reference to training material on game theory.
Including a full-semester video and audio course from Yale.
The European national libraries created a joint website, based in Den Haag (The Hague), in The Netherlands, to search their catalogues.
I think that probably the future is on one or more “portals” that will connect to different resources using different approaches (knowledge is selection and structuring, not just collecting, in my view).
While, behind, each organization manages its own “slice” of knowledge.
Both Google and WolframAlpha will be probably building bricks for these “portals”.
The next project? Design an “intelligent” open source tool to create knowledge portals by interfacing with whatever search resources are deemed of interest.
My approach to knowledge management? Well, I still think that what I wrote online in 2003 (updated in 2008, but you can see both editions) is true.
Knowledge can be consolidated, structured, packaged.
But updating knowledge must involve the people who create it and understand it as part of their ordinary activities.
Remove the knowledge from the people who created it, and it will become at best stale; at worst, it will mislead whoever uses it.
Knowledge Management tools are just that- tools.