Complexity and Decision-Making

Economics, Social Science — Zac Townsend @ November 6, 2012 9:26 am

The Great Man theory of history is usually considered too limited (see yesterday's post). This argument is perhaps best expressed in War and Peace, where Tolstoy digresses at length on the imagined significance of great men, Napoleon most obviously. Or, as Isaiah Berlin puts it in "The Hedgehog and the Fox: An Essay on Tolstoy's View of History," Tolstoy perceived a "central tragedy" of human life:

...if only men would learn how little the cleverest and most gifted among them can control, how little they can know of all the multitude of factors the orderly movement of which is the history of the world...

We can think about this on a smaller scale when it comes to generals or even CEOs. This comment on Hacker News put it well:

People overrate what people can honestly achieve in highly chaotic environments. 15% of corporate CEOs are replaced every year - notice how companies don't change much from year to year though - I have. However, changing often definitely lets us lionize the lucky ones (see hedge funds, startups, novels, movies, tv shows and any other at-scale, highly path-dependent, chaotic and random systems).

My question, though, is whether this is changing. "Big data" and its associated analyses may give us the ability to understand large systems in ways that were never before possible.

Just as an example, Friedrich Hayek famously argued that the reason the government cannot run the "commanding heights" of the economy is information. Simply put, there is no way for the government to amass and understand the information necessary to set prices and balance supply and demand; price is the only true reflection of preferences. History might have caught up with Hayek. We are entering an era where massive datasets and computational social science will allow us to understand people's revealed preferences better than any mechanism in history, even prices. Obviously I have set up something of a straw man here, but the larger point is that we can learn far more about people's behavior and preferences by gathering massive information about them and their surroundings than can be revealed by a thousand theorems in the American Economic Review.

I am particularly interested in what this means for local, state, and federal governments. (Political campaigns already use massive amounts of voter history and consumer data to microtarget potential voters; see this book and this one.) Governments collect large amounts of data on the services provided to individuals and the outcomes of those services. New York City is slowly building the capability to cross-reference and understand all of this data and its implications for human behavior. As they work to collect, knit together, and derive meaning from massive administrative datasets, the very nature of what governments can know about citizens and how they can provide services could change.
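To make the idea of knitting datasets together concrete, here is a minimal sketch of linking a service record to an outcome record. The file names, columns, and the anonymized client_id key are all illustrative assumptions on my part, not any agency's actual schema.

```python
import pandas as pd

# Hypothetical extracts: one row per service episode, one row per outcome event.
services = pd.read_csv("services.csv")   # assumed columns: client_id, program, start_date
outcomes = pd.read_csv("outcomes.csv")   # assumed columns: client_id, outcome, outcome_date

# Link the two datasets on the anonymized client identifier.
linked = services.merge(outcomes, on="client_id", how="left")

# A simple cross-tabulation: how often each program co-occurs with each outcome.
summary = (
    linked.groupby(["program", "outcome"])
          .size()
          .rename("count")
          .reset_index()
)
print(summary)
```

Even a join this simple already lets an analyst ask questions that neither dataset can answer on its own, which is the point of building the cross-referencing capability in the first place.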

César Hidalgo comments on the data that governments collect and on his hope that they will move toward big data:

Governments are much slower, but they're starting to collect data, and they have always been a very information-intensive business. Governments invented taxing, and taxation requires fine-grain data on how much you earn and where you live. Governments, actually "states" a long time ago, invented last names. People in villages didn't need last names. You were able to get around with just a first name. They had to invent last names for taxation, for drafting, so government is a very information-intensive business. In their innovation agenda, in order to do the things that they do better, governments are going to need to embrace big data.

I see, little by little, that there are people inside all of these organizations that are starting to have that battle. They tend to be younger people and were born into this Internet generation. Sometimes it's hard for them to have this fight. As time goes on, there's going to be more and more people that are going to see the value of data that is not only monetization, but also is providing better services, is understanding the world better, is understanding diseases, understanding the way that cities work, mobility, many types of things. Not just targeting people with ads. I think that there's more than that.

The three main problem paradigms of machine learning, data mining, and artificial intelligence (prediction, modeling, and detection) can be used to ask questions about government service delivery. For example, can we predict, in a child welfare context, who is most likely to end up in the juvenile justice system? Can we model which individuals receiving housing subsidies are most likely to commit crime? Can we detect the spread of knowledge of a new government program? What predictions can we make using leading-indicator data? Allow me to focus on New York City as an illustrative example. New York City has a system that allows users to screen families for more than 35 city, state, and federal benefit programs. At the same time, City agencies collect a massive number of variables related to demographics, location, risk-assessment tools, court dates, child welfare contact, police action, and more. If this data could be knitted together, we could begin to understand the life cycle of families in their behavior and use of government services, and begin to model needs profiles in a way never before possible.
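As a sketch of the prediction paradigm only, here is what a minimal model might look like, assuming a hypothetical, already-linked table of family-level features and a binary outcome flag. The file name, feature columns, label, and the choice of logistic regression are all assumptions for illustration, not a description of any city system.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical linked dataset: one row per family, features plus a binary outcome.
data = pd.read_csv("linked_families.csv")
features = ["num_benefit_programs", "prior_contacts", "household_size", "median_income"]
X = data[features]
y = data["adverse_outcome"]  # illustrative label, e.g. a later acute service need

# Hold out a test set so the prediction claim can be checked out of sample.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Evaluate how well the model discriminates on the held-out families.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))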

So, we live in an era where we have a large number of sensors collecting more data, we have the means and methods to analyze the gathered data, and we have the ability to build dynamic, instantly responsive models. With all of this together, we may enter an era where CEOs, generals, and other leaders are able to understand and respond to chaotic systems.
