Data, Models & Decisions

Kok How Lee

This is one of those posts where I reuse and share some of the reflections I wrote for a module. Data, Models & Decisions (DMD) is one of the courses the class took in module one of the TIEMBA programme. With the explosion of data, and the capability we now have to store and process it, DMD is particularly important.

"Statistics are like bikinis. What they reveal is suggestive, but what they conceal is vital." - Aaron Levenstein

Overall, my biggest takeaway from the DMD course is that decision-making is as much an art as it is a science. While tools such as decision tree analysis and sensitivity analysis help expedite, codify and communicate the decision-making process, good decision-making still relies heavily on the decision maker's experience, understanding of the issue at hand and intuition – knowing what questions to ask and where to apply these tools. In the following sections, I shall share some of the insights I've drawn from various parts of the DMD course.

"If you do not know how to ask the right question, you discover nothing" - W.Edward Deming

Decision tree analysis is an extremely powerful tool for breaking complex problems down into smaller, simpler scenarios and outcomes for decision-making. This supports a more objective evaluation of difficult choices. Oftentimes, it helps codify or explain what some people call intuition or gut feel. Other times, it shows that the obvious may not really be that obvious at all. As a result, a decision tree is also an excellent communication tool for convincing stakeholders, or even an audit tool for explaining previously made decisions.

However, it is also important to bear in mind that the analysis is only as good as its simplifying assumptions and inputs. Additionally, replacing expected payoff with risk profile as the key consideration can yield vastly different choices, as the sketch below illustrates. It is therefore important to define whether the objective of the exercise is capital preservation or profit maximisation.
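
To make this concrete, here is a minimal sketch, with entirely made-up probabilities and payoffs, of how a two-option decision tree might be scored on both expected payoff and risk profile:

```python
# A minimal decision tree sketch with illustrative, made-up numbers:
# a risky product launch versus a safe licensing deal.
# Each branch is (probability, payoff in $m).
launch = [(0.3, 50.0), (0.5, 10.0), (0.2, -30.0)]   # risky option
license_deal = [(1.0, 8.0)]                          # safe option

def expected_payoff(branches):
    return sum(p * payoff for p, payoff in branches)

def worst_case(branches):
    return min(payoff for _, payoff in branches)

for name, option in [("Launch", launch), ("License", license_deal)]:
    print(f"{name}: expected payoff = {expected_payoff(option):.1f}, "
          f"worst case = {worst_case(option):.1f}")

# By expected payoff the launch wins (14.0 vs 8.0), but its risk profile
# includes a 20% chance of losing 30 -- a capital-preservation objective
# would favour the licensing deal.
```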

Earlier, I highlighted that an analysis is only as good as its assumptions and inputs. Tools such as sensitivity analysis and regression analysis can help improve the quality of these inputs and assumptions. Sensitivity analysis provides reasonable range estimates (e.g. high, low and base case assumptions/inputs), while regression analysis produces best estimates of the variable of interest based on its correlation with other characteristics in the sample population.
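
As an illustration of the regression idea, here is a small sketch using ordinary least squares; the variables (price, advertising, demand) and all figures are fabricated purely for illustration:

```python
# A hedged sketch: estimate a variable of interest (demand) from other
# observed characteristics using ordinary least squares. Data is made up.
import numpy as np

price = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
adverts = np.array([10, 8, 12, 6, 9])
demand = np.array([95, 82, 88, 60, 65])

# Design matrix with an intercept column.
X = np.column_stack([np.ones_like(price), price, adverts])
coef, *_ = np.linalg.lstsq(X, demand, rcond=None)
intercept, b_price, b_adverts = coef

# Best estimate of demand at a new price/advertising level.
print(intercept + b_price * 2.2 + b_adverts * 10)
```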

Additionally, sensitivity analysis allows the decision maker to identify the variables that make the most difference, so resources can be prioritised to 1) reduce the volatility of these variables through measures such as hedging, or 2) collect, collate and better forecast data on them.
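
A minimal "tornado"-style sketch of that idea, again with purely illustrative figures: vary each input between a low and high estimate while holding the others at base case, and see which one moves profit the most.

```python
# Sensitivity sketch: swing each input across its low/high range,
# holding the others at base case. All numbers are assumptions.

def profit(units, price, unit_cost):
    return units * (price - unit_cost)

base = {"units": 10_000, "price": 12.0, "unit_cost": 7.0}
ranges = {
    "units": (8_000, 12_000),
    "price": (10.0, 14.0),
    "unit_cost": (6.0, 9.0),
}

for var, (low, high) in ranges.items():
    lo = profit(**{**base, var: low})
    hi = profit(**{**base, var: high})
    print(f"{var}: profit swings from {lo:,.0f} to {hi:,.0f} "
          f"(spread {abs(hi - lo):,.0f})")

# The variable with the widest spread (here, price) is where hedging or
# better forecasting effort pays off most.
```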

"Torture the data, and it will confess to anything.” - Ronald Coase

The last part of the course touched on sentiment analysis and the design of experiments. Sentiment analysis was of particular interest to me, especially in this age of big data and social media. The ability to measure sentiment from the varying tones of a comment, message or report holds huge potential for product improvement, political analysis and forecasting, among many other applications. Experiment design, be it to measure the impact of a policy or a marketing campaign, is something that is important but often neglected.
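
To make the sentiment idea concrete, here is a toy lexicon-based scorer. Real systems use trained classifiers and are far more sophisticated; the word lists here are illustrative assumptions only.

```python
# Toy sentiment scorer: count positive and negative words and normalise.
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "poor", "hate", "terrible", "angry"}

def sentiment(text: str) -> float:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return score / max(len(words), 1)   # in [-1, 1], 0 = neutral

print(sentiment("Great ride, love the new app"))     # positive
print(sentiment("Terrible wait and poor service"))   # negative
```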

A well-run experiment can help companies optimise their marketing campaigns, understand their customers better and consequently improve return on investment. For instance, the constantly changing promotions run by companies such as Grab, Uber, Didi and Mobike are essentially a series of randomised controlled experiments to understand consumer preferences and travel elasticities.
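
Here is a sketch of how such a randomised controlled promotion experiment might look, with simulated (made-up) conversion rates; riders are randomly assigned to a control or discount group, and uptake is compared.

```python
# Randomised controlled experiment sketch with simulated conversion rates.
import random

random.seed(42)

def run_experiment(n_riders=10_000, base_rate=0.10, lift=0.03):
    control, treatment = [], []
    for _ in range(n_riders):
        group = random.choice(["control", "discount"])
        rate = base_rate + (lift if group == "discount" else 0.0)
        converted = random.random() < rate
        (treatment if group == "discount" else control).append(converted)
    return sum(control) / len(control), sum(treatment) / len(treatment)

c_rate, t_rate = run_experiment()
print(f"control: {c_rate:.3f}, discount: {t_rate:.3f}, "
      f"lift: {t_rate - c_rate:.3f}")

# Because assignment is random, the difference in conversion can be
# attributed to the promotion rather than to who happened to see it.
```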

The ability to understand these tools and concepts, and to integrate and apply them, will allow companies to gain valuable insights and allocate resources more efficiently.