
Figure – Poker final hand rankings. Poker is a typical example of *bounded rationality* in our daily lives. Without having all the information available, you still have to make a decision. In one of his works, *Herbert Simon* states: “*boundedly rational agents experience limits in formulating and solving complex problems and in processing (receiving, storing, retrieving, transmitting) information*“.

[…] *Bounded rationality* is the idea that in decision making, rationality of individuals is limited by the information they have, the cognitive limitations of their minds, and the finite amount of time they have to make decisions. It was proposed by *Herbert Simon* as an alternative basis for the mathematical modelling of decision making, as used in economics and related disciplines; it complements rationality as optimization, which views decision making as a fully rational process of finding an optimal choice given the information available. Another way to look at bounded rationality is that, because decision-makers lack the ability and resources to arrive at the optimal solution, they instead apply their rationality only after having greatly simplified the choices available. Thus the decision-maker is a satisfier, one seeking a satisfactory solution rather than the optimal one. *Simon* used the analogy of a pair of scissors, where one blade is the “cognitive limitations” of actual humans and the other the “structures of the environment”; minds with limited cognitive resources can thus be successful by exploiting pre-existing structure and regularity in the environment. Some models of human behaviour in the social sciences assume that humans can be reasonably approximated or described as “*rational*” entities (see for example rational choice theory). Many economics models assume that people are on average rational, and can in large enough quantities be approximated to act according to their preferences. The concept of bounded rationality revises this assumption to account for the fact that perfectly rational decisions are often not feasible in practice due to the finite computational resources available for making them. […] In Wikipedia, (link).

Book cover – *Herbert A. Simon*. *Models of Bounded Rationality*, Volume 1, Economic Analysis and Public Policy, *MIT Press 1984*. The Nobel Prize in Economics was awarded to Herbert Simon in 1978. At Carnegie-Mellon University he holds the title of Professor of Computer Science and Psychology. These two facts together delineate the range and uniqueness of his contributions in creating meaningful interactions among fields that developed in isolation but that are all concerned with human decision-making and problem-solving processes. In particular, Simon has brought the insights of decision theory, organization theory (especially as it applies to the business firm), behavior modeling, cognitive psychology, and the study of artificial intelligence to bear on economic questions. This has led not only to new conceptual dimensions for theoretical constructions, but also to a new humanizing realism in economics, a way of taking into account and dealing with human behavior and interactions that lie at the root of all economic activity. The sixty papers and essays contained in these two volumes are grouped under eight sections, each with a brief introductory essay. These are: Some Questions of Public Policy; Dynamic Programming Under Uncertainty; Technological Change; The Structure of Economic Systems; The Business Firm as an Organization; The Economics of Information Processing; Economics and Psychology; and Substantive and Procedural Rationality. Most of Simon’s papers on classical and neoclassical economic theory are contained in volume one. The second volume collects his papers on behavioral theory, with some overlap between the two volumes. (from MIT).

[…] The type of rationality we assume in economics – perfect, logical, deductive rationality – is extremely useful in generating solutions to theoretical problems. But it demands much of human behavior – much more in fact than it can usually deliver. If we were to imagine the vast collection of decision problems economic agents might conceivably deal with as a sea or an ocean, with the easier problems on top and more complicated ones at increasing depth, then deductive rationality would describe human behavior accurately only within a few feet of the surface. For example, the game Tic-Tac-Toe is simple, and we can readily find a perfectly rational, minimax solution to it. But we do not find rational “solutions” at the depth of Checkers; and certainly not at the still modest depths of Chess and Go.

There are two reasons for perfect or deductive rationality to break down under complication. The obvious one is that beyond a certain complicatedness, our logical apparatus ceases to cope – our rationality is bounded. The other is that in interactive situations of complication, agents cannot rely upon the other agents they are dealing with to behave under perfect rationality, and so they are forced to guess their behavior. This lands them in a world of subjective beliefs, and subjective beliefs about subjective beliefs. Objective, well-defined, shared assumptions then cease to apply. In turn, rational, deductive reasoning – deriving a conclusion by perfect logical processes from well-defined premises – itself cannot apply. The problem becomes ill-defined.

As economists, of course, we are well aware of this. The question is not whether perfect rationality works, but rather what to put in its place. How do we model bounded rationality in economics? Many ideas have been suggested in the small but growing literature on bounded rationality; but there is not yet much convergence among them. In the behavioral sciences this is not the case. Modern psychologists are in reasonable agreement that in situations that are complicated or ill-defined, humans use characteristic and predictable methods of reasoning. These methods are not deductive, but inductive. […] The system that emerges under inductive reasoning will have connections both with evolution and complexity. […]

in “Inductive Reasoning and Bounded Rationality” (The El Farol Problem), by W. Brian Arthur, 1994.

This is something that you face in everyday life, be it in bars, restaurants, supermarket queues or on highways. Buying a house or selling it. So, have a look and decide for yourself! The problem is as follows: There is a particular, finite population of people. Every Thursday night, all of these people want to go to the El Farol Bar. However, the El Farol is quite small, and it’s no fun to go there if it’s too crowded. So much so, in fact, that the following rules are in place:

- If less than 60% of the population go to the bar, they’ll all have a better time than if they stayed at home.
- If more than 60% of the population go to the bar, they’ll all have a worse time than if they stayed at home.

Unfortunately, it is necessary for everyone to decide at the same time whether they will go to the bar or not. They cannot wait and see how many others go on a particular Thursday before deciding to go themselves on that Thursday.

One aspect of the problem is that, no matter what method each person uses to decide if they will go to the bar or not, if everyone uses the same method it is guaranteed to fail. If everyone uses the same deterministic method, then if that method suggests that the bar will not be crowded, everyone will go, and thus it will be crowded; likewise, if that method suggests that the bar will be crowded, nobody will go, and thus it will not be crowded. Often the solution to such problems in game theory is to permit each player to use a mixed strategy, where a choice is made with a particular probability. In the case of the El Farol Bar problem, however, no mixed strategy exists that all players may use in equilibrium.
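The failure of any shared deterministic rule is easy to see in a few lines of code. The sketch below (with an assumed rule, "go if last week's attendance was under the 60% threshold") shows the guaranteed oscillation: with everyone reasoning identically from the same history, attendance flips between empty and full every week.

```python
# Sketch of the shared-rule failure in the El Farol Bar problem.
# The rule below is one assumed example; any common deterministic
# rule produces the same self-defeating dynamics.

POPULATION = 100   # number of people deciding each week
THRESHOLD = 60     # bar stops being fun at 60% attendance

def shared_rule(last_attendance: int) -> bool:
    """The single deterministic rule every agent applies."""
    return last_attendance < THRESHOLD

def simulate(weeks: int, initial_attendance: int = 0) -> list[int]:
    history = [initial_attendance]
    for _ in range(weeks):
        # Everyone sees the same history and applies the same rule,
        # so either all 100 go or nobody goes.
        goes = shared_rule(history[-1])
        history.append(POPULATION if goes else 0)
    return history

print(simulate(6))  # → [0, 100, 0, 100, 0, 100, 0]
```

The oscillation is independent of the rule chosen: whatever a common rule predicts, unanimous action invalidates the prediction, which is exactly why the text notes that no symmetric pure strategy can work.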

[…] Consider now a problem I will construct to illustrate inductive reasoning and how it might be modeled. N people decide independently each week whether to go to a bar that offers entertainment on a certain night. For concreteness, let us set N at 100. Space is limited, and the evening is enjoyable if things are not too crowded – specifically, if fewer than 60% of the possible 100 are present. There is no way to tell the numbers coming for sure in advance; therefore a person or agent goes (deems it worth going) if he expects fewer than 60 to show up, or stays home if he expects more than 60 to go. (There is no need that utility differ much above and below 60.) Choices are unaffected by previous visits; there is no collusion or prior communication among the agents; and the only information available is the numbers who came in past weeks. (The problem was inspired by the bar El Farol in Santa Fe which offers Irish music on Thursday nights; but the reader may recognize it as applying to noontime lunch-room crowding, and to other coordination problems with limits to desired coordination.) Of interest is the dynamics of the numbers attending from week to week.

Notice two interesting features of this problem. First, if there were an obvious model that all agents could use to forecast attendance and base their decisions on, then a deductive solution would be possible. But this is not the case here. Given the numbers attending in the recent past, a large number of expectational models might be reasonable and defensible. Thus, not knowing which model other agents might choose, a reference agent cannot choose his in a well-defined way. There is no deductively rational solution – no “correct” expectational model. From the agents’ viewpoint, the problem is ill-defined and they are propelled into a world of induction. Second, and diabolically, any commonalty of expectations gets broken up: If all believe few will go, all will go. But this would invalidate that belief. Similarly, if all believe most will go, nobody will go, invalidating that belief. Expectations will be forced to differ.

At this stage, I invite the reader to pause and ponder how attendance might behave dynamically over time. Will it converge, and if so to what? Will it become chaotic? How might predictions be arrived at? […]

in “Inductive Reasoning and Bounded Rationality” (The El Farol Problem), by W. Brian Arthur, 1994.
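Arthur's inductive setup can be sketched computationally. The simulation below is a simplified illustration, not Arthur's exact model: the paper does not fix one canonical predictor set, so the four forecasting rules here (last week, mirror image, 4-week average, two weeks ago) are assumptions. Each agent draws a personal subset of predictors, acts each week on whichever has the lowest cumulative forecast error so far, and updates the scores once actual attendance is revealed.

```python
import random

# Simplified sketch of inductive agents in the El Farol Bar problem.
# Predictor set and scoring rule are illustrative assumptions.

N, THRESHOLD = 100, 60

PREDICTORS = [
    lambda h: h[-1],                       # same as last week
    lambda h: 100 - h[-1],                 # mirror image around 50
    lambda h: sum(h[-4:]) // 4,            # 4-week average
    lambda h: h[-2],                       # same as two weeks ago
]

class Agent:
    def __init__(self, rng: random.Random):
        # Each agent holds a personal bag of three predictors.
        self.predictors = rng.sample(PREDICTORS, 3)
        self.scores = [0.0] * len(self.predictors)
        self.forecasts: list[int] = []

    def decide(self, history: list[int]) -> bool:
        self.forecasts = [p(history) for p in self.predictors]
        # Act on the predictor with the lowest cumulative error so far.
        best = min(range(len(self.predictors)), key=lambda i: self.scores[i])
        return self.forecasts[best] < THRESHOLD  # go if forecast uncrowded

    def update(self, actual: int) -> None:
        for i, f in enumerate(self.forecasts):
            self.scores[i] += abs(f - actual)  # accumulate forecast error

rng = random.Random(0)
agents = [Agent(rng) for _ in range(N)]
history = [rng.randrange(101) for _ in range(5)]  # arbitrary seed weeks

for week in range(100):
    attendance = sum(a.decide(history) for a in agents)
    for a in agents:
        a.update(attendance)
    history.append(attendance)

mean = sum(history[-50:]) / 50
print(f"mean attendance over last 50 weeks: {mean:.1f}")
```

Because agents hold different predictor bags and switch among them as their scores evolve, expectations stay heterogeneous, which is the mechanism Arthur identifies: in his simulations, mean attendance self-organizes around the 60-seat capacity even though no agent ever solves the problem deductively.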
