Forecasting the future and number crunching

Well, forecasting the past is a pointless exercise, I assume.

I am restudying statistics, restarting from scratch.

When did I study it the first time? A few times in school and at the beginning of university, of course.

Why now? Because I want to understand and review the computations behind some existing models, instead of re-inventing the wheel: with my old models, I needed to understand the computations, not to write them.

The main difference? Models built on historical data and limited behavioural analysis, vs. models built on historical behaviour and limited data.

When, in the late 1980s, I was sent to work on decision support systems (DSS), I wanted to understand something more about the logic of the formulas I was applying; therefore I bought and browsed/read/studied various absolutely boring books on applied statistics, mainly on economic and social studies.

DSS were basically built around a model of reality (how you expect reality to work) based on historical data (what you know); you then introduced assumptions (what you expect reality to be at a specific point in time) to see what the results could be (the consequences at a future point in time).

But they also worked the other way around: you enter a desired end result for your model, and see which “changes” you must make in your assumptions to obtain that result.
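
To make the two directions concrete, here is a toy sketch in Python (my own illustration, not the DSS software of the time): a forward run turns historical data plus an assumption into a projection, and a crude “goal seek” inverts it. All names and figures are invented.

```python
# Toy illustration of a DSS-style model (invented names and numbers).

def projected_revenue(last_year_revenue: float, growth_assumption: float) -> float:
    """Forward run: historical data + assumption -> projected result."""
    return last_year_revenue * (1.0 + growth_assumption)

def required_growth(last_year_revenue: float, target_revenue: float) -> float:
    """Backward run ("goal seek"): desired result -> assumption needed to reach it."""
    return target_revenue / last_year_revenue - 1.0

history = 1_000_000.0                          # what you know
print(projected_revenue(history, 0.05))        # what happens if growth is 5%?
print(required_growth(history, 1_200_000.0))   # what growth would reach 1.2M?
```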

Before that, statistics had been something that I studied along with physics, as I enjoyed the mental abstractions contained in the analysis of particle physics experiments (I chose that subject for my final exam in high school).

But also in business, I came to DSS from something that I had not studied at school: accounting for banks, and management accounting/reporting for large companies.

Well, my approach to number crunching and model building was a mix of all of that and more, where, frankly, knowing about statistics was useful to understand the behaviour of my models, but not to design them, as the software was doing the number-crunching work anyway (e.g. a regression).

A small detail: between my high school physics and the accounting, I also added artificial intelligence, specifically a language called PROLOG, as I appreciated its “style” in approaching natural language analysis and other problems.

Starting from a set of known facts (your “knowledgebase”) and basic rules on how to express new questions or facts, you asked the system to “check” whether your questions had a match in the knowledgebase, or to prove that your new knowledge could be linked to the existing knowledge, and so expand the knowledgebase.
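
To give a rough flavour of that style, here is a sketch in Python rather than PROLOG (the facts and the single rule are invented, and real PROLOG relies on unification, which this toy version does not attempt):

```python
# Invented facts, stored as tuples: ("parent", X, Y) means "X is a parent of Y".
facts = {("parent", "anna", "bruno"), ("parent", "bruno", "carla")}

def grandparent(x, z):
    """Rule: grandparent(X, Z) holds if parent(X, Y) and parent(Y, Z) for some Y."""
    return any(("parent", x, y) in facts and ("parent", y, z) in facts
               for (_, _, y) in facts)

# "Check" a question against the knowledgebase...
print(grandparent("anna", "carla"))   # True

# ...and, once proven, link the new knowledge to the existing knowledge.
if grandparent("anna", "carla"):
    facts.add(("grandparent", "anna", "carla"))
```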

The most interesting concept that I learned from studying those funny photocopies coming from CERN and other sources? Avoid influencing your results by introducing “bias” in your observations.

The most interesting method that I derived from management accounting? Nothing is really too complex or impossible to check if you work in a “matrioska” way; the secret is structuring the right numbers in the most appropriate way for what you want to control, verify, or identify, and proceeding from the general to the specific, or the other way around, according to your needs.

If you consider that nowadays the humble spreadsheet (including the free one available in OpenOffice) has more statistical functions embedded than you will ever use in everyday work, and that the DSS software I was using in the late 1980s also added a multidimensional aspect, I just had to invent a method to merge the concepts, methods, and experience of those who came before me, and apply it to “number crunching” in a business context.

Numbers, even when you are talking about, say, sales or staffing or cost projections, have a certain elegance and continuity.

A model should be simple, linear; as somebody else wrote: as simple as possible, but not simpler.

The complexity is in the interaction of its components (thanks to “Gödel, Escher, Bach” and a book on “computability”, as well as to writing a compiler for a software URM machine, for that insight).

And how do you model that?

Well, in the early 1980s, while I was toying with politics (only teenagers, politicians, and lawyers have the time to spend hours discussing a single comma within a page, in multiple languages :D), I was also occasionally working to sell home computers (Commodore, Sinclair, etc.) and the first personal computers (Apple II), as well as gaming machines.

My first software sold for money was a simple program to solve 2nd degree equations, graphically and symbolically, on a Spectrum, a computer whose screen could be programmed as a set of objects in motion; an interesting learning experience.

The programming approach? I treated it as a game: dividing the screen into sections, having the user enter the equation to solve and any constraints, and then showing the results in different ways: as a graph, as a symbolic solution, and as a numeric solution.
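
The original Spectrum program is long gone, but the core of it can be sketched in a few lines of Python (my rendering, not the original code): solve a·x² + b·x + c = 0 via the quadratic formula, returning complex roots when needed.

```python
import cmath

def solve_quadratic(a: float, b: float, c: float):
    """Return the two roots of a*x^2 + b*x + c = 0 (complex if the discriminant is negative)."""
    if a == 0:
        raise ValueError("not a 2nd degree equation")
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

print(solve_quadratic(1, -3, 2))   # ((2+0j), (1+0j)): two real roots
print(solve_quadratic(1, 0, 1))    # (1j, -1j): complex roots
```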

Working on what was basically a gaming machine (the Sinclair Spectrum) with a programming option, I experimented with reproducing various physical concepts visually, and saw that the easiest way was to observe reality, build a general model, identify the main “characters”, give them appropriate weight, identify potential secondary characters, and then make them interact.

As in any script (except in Altman movies), the main characters have distinctive features and a well-rounded personality, while supporting characters are, well, there just to highlight the main ones.

In my models, if the data were coming from accounting sources, the problem was usually convincing the customer which data should be left out as irrelevant to the model, and separating the “characters” (parts of the model, or “sub-models”) based on data from those based on pure assumptions.

A complexity that is compounded by a small detail: each component is actually a sub-model that can evolve.

I remember building a controlling model out of a handful of data items, spread across products and business units, which allowed us to focus on anomalies or unusual behaviour from agents.

Behaviours such as rotating their customers through various investment options to generate commissions and management-fee-based premiums for new customers, without actually adding any customers, or playing with costs.

And the model was built gradually, step by step, streamlining/detailing the formulas describing each “character” by studying the interactions, and introducing the concept of feedback.

In some models, I was also able to use some tricks (say, a formula producing a range of values) to actually have the model behave as if it modified itself.

An Excel spreadsheet is basically two-dimensional, even if you can use other sheets to build complex formulas; more than once I had to save customers and partners from Excel models so convoluted that they had lost the ability to modify them.

The DSS tools (eventually evolving under other names) that I used until recently allowed multiple dimensions of analysis; say, the number of units sold of a specific product, by a specific manager, across a few customer categories, at a specific point in time.
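
For readers more used to today's tools, the same “multidimensional” slicing can be sketched with pandas (my choice, not the DSS software mentioned above; the column names and figures are invented):

```python
import pandas as pd

# Invented sales records: each grouping key below is one "dimension of analysis".
sales = pd.DataFrame({
    "product": ["A", "A", "B", "B", "A", "B"],
    "manager": ["Kim", "Kim", "Kim", "Lee", "Lee", "Lee"],
    "segment": ["retail", "corporate", "retail", "retail", "corporate", "corporate"],
    "period":  ["2024-Q1", "2024-Q1", "2024-Q1", "2024-Q2", "2024-Q2", "2024-Q2"],
    "units":   [120, 80, 45, 60, 95, 30],
})

# Units sold by product / manager / customer segment / period.
cube = sales.groupby(["product", "manager", "segment", "period"])["units"].sum()

# Slice the cube: product "A" sold by manager "Kim", across all segments and periods.
print(cube.loc[("A", "Kim")])
```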

A pity that most models just report on or study the past; in recent years, the only times I had a chance to enjoy building forecast models were when I had to either review business/marketing plans (and their quixotic claims about market growth), or negotiate and manage budgets on customer accounts.

I saw highly advanced models whose complexity was not in the interaction between components, but in a kind of “unified theory of model behaviour”.

In my experience, those models are nice for forecasting the past: eventually the model becomes a theory, and data are bent out of shape to justify the model, until the model is more important than its purpose.

When I write about “interactions”, I mean not only those between people or organizations, but also those with other environmental factors.

For example: it is fine for an accountant to say that an investment in a building for a new fair will be “depreciated” (i.e. lose value) over, say, 15 years.

But what is the point if, two years down the road, a major multinational company that was at the heart of the district's development reaches the end of its tax-free status, and is already negotiating under an “extend it or we relocate” approach? Most models do not consider risks beyond the obvious (interest, exchange, and tax rates).

If your model does something more than just consolidate numbers, and helps you to move forward, you will need the historical data, your model of how things are done, and at least a set of assumptions on how things could evolve; the assumptions need to be constantly monitored, so that you can verify whether they still make sense.

Leaving number crunching for large entities, I applied my modelling approach whenever there was a new project, initiative, partnership, or negotiation.

And also to show customers the logic of discussing a re-financing of the budget, or of proposing alternative paths (such as postponing or scrapping some activities to cover new requests).

Besides collecting data and assessing any bias introduced by either the collection method or the data themselves, I usually profile both the environment and the key people involved, to understand their motivation and see which interactions should be inside the model.

If you want, this is closer to game theory than to statistics, as my PROLOG simulation experiments were more oriented toward this “path finding” or “reasoning explanation” than toward projecting past figures into the future.

Once you have a general concept of “why” you need the model, and the basic building bricks, you can start identifying the relationships linking the building bricks to the model, while setting aside, for the time being, the internal dynamics of each building brick.

Often, you do not collect all the information needed, even when it is available; that is why keeping each “building brick” simple is important: it lets you model the interactions and revise the model without hiding missing data behind fancy formulas.

If you try to create a single “solve all” formula, instead of “connected clusters” (or, as some friends like to say, “connected dots”), you risk bending the model into a representation of what you already know, and introducing bias that will make it impossible to monitor and consider new factors that could influence the model.

Do you really need to do number crunching? Yes and no.

In a business context, e.g. when negotiating an agreement or facilitating an activity, the “properties” of each building brick include the relevant market conditions, the social role and educational background of the people involved, the prior history of their interactions, and, of course, financial and personal motivation.

Whenever I reviewed a business or marketing plan, I was often puzzled by something that was missing: a model of how the staff would evolve, covering not just the number of people, but also their growth within the company.

Having coached two support teachers for my course while in the Army, I then gained some experience, while working on DSS, in preparing new staff to take over activities.

The main motivation? My time was too limited, and I was too junior to be a manager within the company; therefore, a way to free up some time was to coach other trainers.

I already described my coaching method in other articles, but the main point here is that I had to focus on what I had been asked to deliver to them.

Years later came my longer experience in corporate culture modelling; actually, more than one, across 15 years of on/off activities.

The first model was about a methodology: how do you merge a company that has a “family” environment with the straitjacket that is a methodology, without wiping out creativity and morale?

There were then other “social evolution” models (or “corporate culture evolution” models, if you prefer), such as how to manage the transition to a new technology or a new organizational structure.

Sometimes the “model” was just a mental model, and often I came back to an old book about what I could describe, in simpler terms, as the way neurons fire back to stop further signals from overloading them.

Sometimes this “rough model” was converted into a set of parameters, and I played around with decision tables for simpler cases, or with small multidimensional models for more complex ones.

The secret? Convert each parameter into something that can be measured (see this article on how to convert qualitative information into a measure).
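
As one possible illustration (my own sketch, not the method from the article referenced above), a qualitative parameter can be mapped onto an ordered scale and combined with weights; all parameter names and weights here are invented:

```python
# Invented ordered scale and weights, for illustration only.
SCALE = {"low": 1, "medium": 2, "high": 3}
WEIGHTS = {"management support": 0.5, "staff resistance": 0.3, "time pressure": 0.2}

def readiness_score(assessment: dict) -> float:
    """Turn qualitative ratings into a weighted score, normalised to the 0..1 range."""
    raw = sum(WEIGHTS[name] * SCALE[rating] for name, rating in assessment.items())
    return raw / max(SCALE.values())   # weights sum to 1, so raw is at most 3

print(readiness_score({"management support": "high",
                       "staff resistance": "medium",
                       "time pressure": "low"}))   # ~0.77
```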

But then, convert the model into a simple visual and descriptive document: what matters is providing the logic to evolve, not explaining the smart way that you used to reach that result.

And this brings us to a crucial point: no matter how good your model is, think about your real intended audience, the people who should really receive the results.

A consequence of complexity is that, often, instead of the decision maker, a staff member with enough time to spare is assigned to review the results.

A staff member lacking the knowledge and insight to see where the results are leading.

The consequence? When the decision maker is warned, it is already too late.

And I will spare you countless examples from my own experience.

But trust me: the logic behind a cultural evolution model is often more complex than that behind a financial planning model.

A cultural evolution model needs sub-models/profiling for each stakeholder involved (including how their own motivation will evolve), and accepting its logic requires having had experience of the specific issues, or having observed somebody else's.
