Addition
Addition is a mathematical operation that represents combining collections of objects together into a larger collection. It is signified by the plus sign (+). For example, 3 + 2 apples—meaning three apples and two other apples—is the same as five apples. Therefore, 3 + 2 = 5. Besides counting fruits, addition can also represent combining other physical and abstract quantities using different kinds of numbers: negative numbers, fractions, irrational numbers, vectors, decimals and more.
Addition follows several important patterns. It is commutative, meaning that order does not matter, and it is associative, meaning that when one adds more than two numbers, the order in which addition is performed does not matter (see Summation). Repeated addition of 1 is the same as counting; addition of 0 does not change a number. Addition also obeys predictable rules concerning related operations such as subtraction and multiplication. All of these rules can be proven, starting with the addition of natural numbers and generalizing up through the real numbers and beyond. General binary operations that continue these patterns are studied in abstract algebra.
Performing addition is one of the simplest numerical tasks. Addition of very small numbers is accessible to toddlers; the most basic task, 1 + 1, can be performed by infants as young as five months and even some animals. In primary education, children learn to add numbers in the decimal system, starting with single digits and progressively tackling more difficult problems. Mechanical aids range from the ancient abacus to the modern computer, where research on the most efficient implementations of addition continues to this day.
Notation and terminology
Addition is written using the plus sign "+" between the terms; that is, in infix notation. The result is expressed with an equals sign. For example,
 1 + 1 = 2 (verbally, "one plus one is equal to two")
 2 + 2 = 4 (verbally, "two plus two is equal to four")
 5 + 4 + 2 = 11 (see "associativity" below)
 3 + 3 + 3 + 3 = 12 (see "multiplication" below)
There are also situations where addition is "understood" even though no symbol appears:
 A column of numbers, with the last number in the column underlined, usually indicates that the numbers in the column are to be added, with the sum written below the underlined number.
 A whole number followed immediately by a fraction indicates the sum of the two, called a mixed number.^{[2]} For example,
3½ = 3 + ½ = 3.5.
This notation can cause confusion since in most other contexts juxtaposition denotes multiplication instead.
The sum of a series of related numbers can be expressed through capital sigma notation, which compactly denotes iteration. For example,
 Σ_{k=1}^{5} k^{2} = 1 + 4 + 9 + 16 + 25 = 55.
The numbers or the objects to be added in general addition are called the "terms", the "addends", or the "summands"; this terminology carries over to the summation of multiple terms. This is to be distinguished from factors, which are multiplied. Some authors call the first addend the augend. In fact, during the Renaissance, many authors did not consider the first addend an "addend" at all. Today, due to the symmetry of addition, "augend" is rarely used, and both terms are generally called addends.^{[3]}
All of this terminology derives from Latin. "Addition" and "add" are English words derived from the Latin verb addere, which is in turn a compound of ad "to" and dare "to give", from the Proto-Indo-European root *deh₃ "to give"; thus to add is to give to.^{[3]} Using the gerundive suffix -nd results in "addend", "thing to be added".^{[4]} Likewise from augere "to increase", one gets "augend", "thing to be increased".
"Sum" and "summand" derive from the Latin noun summa "the highest, the top" and associated verb summare. This is appropriate not only because the sum of two positive numbers is greater than either, but because it was once common to add upward, contrary to the modern practice of adding downward, so that a sum was literally higher than the addends.^{[6]} Addere and summare date back at least to Boethius, if not to earlier Roman writers such as Vitruvius and Frontinus; Boethius also used several other terms for the addition operation. The later Middle English terms "adden" and "adding" were popularized by Chaucer.^{[7]}
Interpretations
Addition is used to model countless physical processes. Even for the simple case of adding natural numbers, there are many possible interpretations and even more visual representations.
Combining sets
Possibly the most fundamental interpretation of addition lies in combining sets:
 When two or more disjoint collections are combined into a single collection, the number of objects in the single collection is the sum of the number of objects in the original collections.
This interpretation is easy to visualize, with little danger of ambiguity. It is also useful in higher mathematics; for the rigorous definition it inspires, see Natural numbers below. However, it is not obvious how one should extend this version of addition to include fractional numbers or negative numbers.^{[8]}
One possible fix is to consider collections of objects that can be easily divided, such as pies or, still better, segmented rods.^{[9]} Rather than just combining collections of segments, rods can be joined end-to-end, which illustrates another conception of addition: adding not the rods but the lengths of the rods.
Extending a length
A second interpretation of addition comes from extending an initial length by a given length:
 When an original length is extended by a given amount, the final length is the sum of the original length and the length of the extension.
The sum a + b can be interpreted as a binary operation that combines a and b, in an algebraic sense, or it can be interpreted as the addition of b more units to a. Under the latter interpretation, the parts of a sum a + b play asymmetric roles, and the operation a + b is viewed as applying the unary operation +b to a. Instead of calling both a and b addends, it is more appropriate to call a the augend in this case, since a plays a passive role. The unary view is also useful when discussing subtraction, because each unary addition operation has an inverse unary subtraction operation, and vice versa.
Properties
Commutativity
Addition is commutative, meaning that one can reverse the terms in a sum left-to-right, and the result will be the same. Symbolically, if a and b are any two numbers, then
 a + b = b + a.
The fact that addition is commutative is known as the "commutative law of addition". This phrase suggests that there are other commutative laws: for example, there is a commutative law of multiplication. However, many binary operations are not commutative, such as subtraction and division, so it is misleading to speak of an unqualified "commutative law".
Associativity
A somewhat subtler property of addition is associativity, which comes up when one tries to define repeated addition. Should the expression
 "a + b + c"
be defined to mean (a + b) + c or a + (b + c)? That addition is associative tells us that the choice of definition is irrelevant. For any three numbers a, b, and c, it is true that
 (a + b) + c = a + (b + c).
For example, (1 + 2) + 3 = 3 + 3 = 6 = 1 + 5 = 1 + (2 + 3). Not all operations are associative, so in expressions with other operations like subtraction, it is important to specify the order of operations.
Zero and one
When adding zero to any number, the quantity does not change; zero is the identity element for addition, also known as the additive identity. In symbols, for any a,
 a + 0 = 0 + a = a.
This law was first identified in Brahmagupta's Brahmasphutasiddhanta in 628, although he wrote it as three separate laws, depending on whether a is negative, positive, or zero itself, and he used words rather than algebraic symbols. Later Indian mathematicians refined the concept; around the year 830, Mahavira wrote, "zero becomes the same as what is added to it", corresponding to the unary statement 0 + a = a. In the 12th century, Bhaskara wrote, "In the addition of cipher, or subtraction of it, the quantity, positive or negative, remains the same", corresponding to the unary statement a + 0 = a.^{[10]}
In the context of integers, addition of one also plays a special role: for any integer a, the integer (a + 1) is the least integer greater than a, also known as the successor of a. Because of this succession, the value of a + b can also be seen as the b^{th} successor of a, making addition iterated succession.
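This view of addition as iterated succession can be sketched in a few lines of Python (an illustration added here, not part of the original article):

```python
def successor(a: int) -> int:
    """Return the least integer greater than a."""
    return a + 1

def add_by_succession(a: int, b: int) -> int:
    """Compute a + b as the b-th successor of a (assumes b is a natural number)."""
    result = a
    for _ in range(b):
        result = successor(result)
    return result
```

For example, `add_by_succession(3, 4)` steps through 4, 5, 6, 7, the fourth successor of 3.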
Units
To numerically add physical quantities with units, they must first be expressed with common units. For example, if a measure of 5 feet is extended by 2 inches, the sum is 62 inches, since 60 inches is synonymous with 5 feet. On the other hand, it is usually meaningless to try to add 3 meters and 4 square meters, since those units are incomparable; this sort of consideration is fundamental in dimensional analysis.
Performing addition
Innate ability
Studies on mathematical development starting around the 1980s have exploited the phenomenon of habituation: infants look longer at situations that are unexpected.^{[11]} A seminal experiment by Karen Wynn in 1992 involving Mickey Mouse dolls manipulated behind a screen demonstrated that five-month-old infants expect 1 + 1 to be 2, and they are comparatively surprised when a physical situation seems to imply that 1 + 1 is either 1 or 3. This finding has since been affirmed by a variety of laboratories using different methodologies.^{[12]} Another 1992 experiment with older toddlers, between 18 and 35 months, exploited their development of motor control by allowing them to retrieve ping-pong balls from a box; the youngest responded well for small numbers, while older subjects were able to compute sums up to 5.^{[13]}
Even some nonhuman animals show a limited ability to add, particularly primates. In a 1995 experiment imitating Wynn's 1992 result (but using eggplants instead of dolls), rhesus macaques and cotton-top tamarins performed similarly to human infants. More dramatically, after being taught the meanings of the Arabic numerals 0 through 4, one chimpanzee was able to compute the sum of two numerals without further training.^{[14]}
Elementary methods
Typically children master the art of counting first. When asked a problem requiring two items and three items to be combined, young children will model the situation with physical objects, often fingers or a drawing, and then count the total. As they gain experience, they will learn or discover the strategy of "counting-on": asked to find two plus three, children count three past two, saying "three, four, five" (usually ticking off fingers), and arriving at five. This strategy seems almost universal; children can easily pick it up from peers or teachers.^{[15]} Most discover it independently. With additional experience, children learn to add more quickly by exploiting the commutativity of addition by counting up from the larger number, in this case starting with three and counting "four, five." Eventually children begin to recall certain addition facts ("number bonds"), either through experience or rote memorization. Once some facts are committed to memory, children begin to derive unknown facts from known ones. For example, a child who is asked to add six and seven may know that 6+6=12 and then reason that 6+7 will be one more, or 13.^{[16]} Such derived facts can be found very quickly and most elementary school children eventually rely on a mixture of memorized and derived facts to add fluently.^{[17]}
Decimal system
The prerequisite to addition in the decimal system is the fluent recall or derivation of the 100 single-digit "addition facts". One could memorize all the facts by rote, but pattern-based strategies are more enlightening and, for most people, more efficient:^{[18]}
 One or two more: Adding 1 or 2 is a basic task, and it can be accomplished through counting on or, ultimately, intuition.^{[18]}
 Zero: Since zero is the additive identity, adding zero is trivial. Nonetheless, some children are introduced to addition as a process that always increases the addends; word problems may help rationalize the "exception" of zero.^{[18]}
 Doubles: Adding a number to itself is related to counting by two and to multiplication. Doubles facts form a backbone for many related facts, and fortunately, children find them relatively easy to grasp.^{[18]}
 Near-doubles: Sums such as 6+7=13 can be quickly derived from the doubles fact 6+6=12 by adding one more, or from 7+7=14 by subtracting one.^{[18]}
 Five and ten: Sums of the form 5+x and 10+x are usually memorized early and can be used for deriving other facts. For example, 6+7=13 can be derived from 5+7=12 by adding one more.^{[18]}
 Making ten: An advanced strategy uses 10 as an intermediate for sums involving 8 or 9; for example, 8 + 6 = 8 + 2 + 4 = 10 + 4 = 14.^{[18]}
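As a rough illustration (hypothetical code added here, not part of the article), the making-ten strategy above can be expressed as: complete the larger addend to ten using part of the smaller one, then add what remains.

```python
def make_ten_sum(a: int, b: int) -> int:
    """Add two single-digit numbers by completing the larger addend to ten.

    E.g. 8 + 6: move 2 from the 6 to the 8, giving 10 + 4 = 14.
    """
    big, small = max(a, b), min(a, b)
    to_ten = 10 - big  # amount needed to complete the larger addend to ten
    if small >= to_ten:
        return 10 + (small - to_ten)
    return big + small  # sum stays below ten; no regrouping needed
```

So `make_ten_sum(8, 6)` computes 8 + 2 + 4 = 10 + 4 = 14, mirroring the worked example above.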
As children grow older, they will commit more facts to memory, and learn to derive other facts rapidly and fluently. Many children never commit all the facts to memory, but can still find any basic fact quickly.^{[17]}
The standard algorithm for adding multi-digit numbers is to align the addends vertically and add the columns, starting from the ones column on the right. If the sum of a column exceeds nine, the extra digit is "carried" into the next column.^{[19]} An alternate strategy starts adding from the most significant digit on the left; this route makes carrying a little clumsier, but it is faster at getting a rough estimate of the sum. There are many other alternative methods.
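The standard column algorithm can be sketched in Python as follows (an illustrative implementation, not from the original article):

```python
def add_decimal(x: str, y: str) -> str:
    """Add two non-negative decimal numerals column by column, right to left,
    carrying the extra digit into the next column when a column sum exceeds nine."""
    x, y = x.zfill(len(y)), y.zfill(len(x))  # pad to equal length
    digits, carry = [], 0
    for dx, dy in zip(reversed(x), reversed(y)):
        total = int(dx) + int(dy) + carry
        digits.append(str(total % 10))  # digit written below the column
        carry = total // 10             # digit carried into the next column
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))
```

For instance, adding "27" and "15" produces a carry out of the ones column: 7 + 5 = 12, write 2, carry 1.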
Computers
Analog computers work directly with physical quantities, so their addition mechanisms depend on the form of the addends. A mechanical adder might represent two addends as the positions of sliding blocks, in which case they can be added with an averaging lever. If the addends are the rotation speeds of two shafts, they can be added with a differential. A hydraulic adder can add the pressures in two chambers by exploiting Newton's second law to balance forces on an assembly of pistons. The most common situation for a general-purpose analog computer is to add two voltages (referenced to ground); this can be accomplished roughly with a resistor network, but a better design exploits an operational amplifier.^{[20]}
Addition is also fundamental to the operation of digital computers, where the efficiency of addition, in particular the carry mechanism, is an important limitation to overall performance.
Adding machines, mechanical calculators whose primary function was addition, were the earliest automatic, digital computers. Wilhelm Schickard's 1623 Calculating Clock could add and subtract, but it was severely limited by an awkward carry mechanism. Burnt during its construction in 1624 and unknown to the world for more than three centuries, it was rediscovered in 1957^{[21]} and therefore had no impact on the development of mechanical calculators.^{[22]} Blaise Pascal invented the mechanical calculator in 1642^{[23]} with an ingenious gravityassisted carry mechanism. Pascal's calculator was limited by its carry mechanism in a different sense: its wheels turned only one way, so it could add but not subtract, except by the method of complements. By 1674 Gottfried Leibniz made the first mechanical multiplier; it was still powered, if not motivated, by addition.^{[24]}
Adders execute integer addition in electronic digital computers, usually using binary arithmetic. The simplest architecture is the ripple carry adder, which follows the standard multidigit algorithm taught to children. One slight improvement is the carry skip design, again following human intuition; one does not perform all the carries in computing 999 + 1, but one bypasses the group of 9s and skips to the answer.^{[25]}
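The ripple-carry scheme can be modeled in software (a simulation sketch added here, not part of the article): each "full adder" stage consumes one bit of each addend plus the carry rippling in from the stage before it.

```python
def ripple_carry_add(a: int, b: int, width: int = 8) -> int:
    """Simulate a ripple-carry adder on two unsigned integers of the given bit width.

    Each loop iteration plays the role of one full adder: it combines one bit of
    each addend with the incoming carry, emits a sum bit, and passes the carry on.
    """
    result, carry = 0, 0
    for i in range(width):
        bit_a = (a >> i) & 1
        bit_b = (b >> i) & 1
        total = bit_a + bit_b + carry
        result |= (total & 1) << i  # sum bit for this position
        carry = total >> 1          # carry ripples to the next position
    return result  # any carry out of the top bit is discarded (overflow)
```

Adding 99 + 1 this way makes the cost visible: the carry must propagate through each position in turn, which is exactly why the carry-skip and parallel designs mentioned below were invented.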
Since they compute digits one at a time, the above methods are too slow for most modern purposes. In modern digital computers, integer addition is typically the fastest arithmetic instruction, yet it has the largest impact on performance, since it underlies all the floatingpoint operations as well as such basic tasks as address generation during memory access and fetching instructions during branching. To increase speed, modern designs calculate digits in parallel; these schemes go by such names as carry select, carry lookahead, and the Ling pseudocarry. Almost all modern implementations are, in fact, hybrids of these last three designs.^{[26]}
Unlike addition on paper, addition on a computer often changes the addends. On the ancient abacus and adding board, both addends are destroyed, leaving only the sum. The influence of the abacus on mathematical thinking was strong enough that early Latin texts often claimed that in the process of adding "a number to a number", both numbers vanish.^{[27]} In modern times, the ADD instruction of a microprocessor replaces the augend with the sum but preserves the addend.^{[28]} In a highlevel programming language, evaluating a + b does not change either a or b; to change the value of a one uses the addition assignment operator a += b.
Addition of natural and real numbers
To prove the usual properties of addition, one must first define addition for the context in question. Addition is first defined on the natural numbers. In set theory, addition is then extended to progressively larger sets that include the natural numbers: the integers, the rational numbers, and the real numbers.^{[29]} (In mathematics education,^{[30]} positive fractions are added before negative numbers are even considered; this is also the historical route.^{[31]})
Natural numbers
There are two popular ways to define the sum of two natural numbers a and b. If one defines natural numbers to be the cardinalities of finite sets (the cardinality of a set is the number of elements in the set), then it is appropriate to define their sum as follows:
 Let N(S) be the cardinality of a set S. Take two disjoint sets A and B, with N(A) = a and N(B) = b. Then a + b is defined as N(A ∪ B).^{[32]}
Here, A ∪ B is the union of A and B. An alternate version of this definition allows A and B to possibly overlap and then takes their disjoint union, a mechanism that allows common elements to be separated out and therefore counted twice.
The other popular definition is recursive:
 Let n^{+} be the successor of n, that is the number following n in the natural numbers, so 0^{+}=1, 1^{+}=2. Define a + 0 = a. Define the general sum recursively by a + (b^{+}) = (a + b)^{+}. Hence 1+1=1+0^{+}=(1+0)^{+}=1^{+}=2.^{[33]}
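The recursive definition translates almost line for line into code (an illustration added here, not part of the article), with the successor operation written as `+ 1`:

```python
def peano_add(a: int, b: int) -> int:
    """Recursive definition of addition on the naturals:
    a + 0 = a, and a + succ(b) = succ(a + b)."""
    if b == 0:
        return a                      # base case: a + 0 = a
    return peano_add(a, b - 1) + 1    # a + succ(b-1) = succ(a + (b-1))
```

Tracing `peano_add(1, 1)` reproduces the derivation above: 1 + 1 = 1 + 0⁺ = (1 + 0)⁺ = 1⁺ = 2.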
Again, there are minor variations upon this definition in the literature. Taken literally, the above definition is an application of the Recursion Theorem on the poset N^{2}.^{[34]} On the other hand, some sources prefer to use a restricted Recursion Theorem that applies only to the set of natural numbers. One then considers a to be temporarily "fixed", applies recursion on b to define a function "a + ", and pastes these unary operations for all a together to form the full binary operation.^{[35]}
This recursive formulation of addition was developed by Dedekind as early as 1854, and he would expand upon it in the following decades.^{[36]} He proved the associative and commutative properties, among others, through mathematical induction; for examples of such inductive proofs, see Addition of natural numbers.
Integers
The simplest conception of an integer is that it consists of an absolute value (which is a natural number) and a sign (generally either positive or negative). The integer zero is a special third case, being neither positive nor negative. The corresponding definition of addition must proceed by cases:
 For an integer n, let |n| be its absolute value. Let a and b be integers. If either a or b is zero, treat it as an identity. If a and b are both positive, define a + b = |a| + |b|. If a and b are both negative, define a + b = −(|a| + |b|). If a and b have different signs, define a + b to be the difference between |a| and |b|, with the sign of the term whose absolute value is larger.^{[37]}
Although this definition can be useful for concrete problems, it is far too complicated to produce elegant general proofs; there are too many cases to consider.
A much more convenient conception of the integers is the Grothendieck group construction. The essential observation is that every integer can be expressed (not uniquely) as the difference of two natural numbers, so we may as well define an integer as the difference of two natural numbers. Addition is then defined to be compatible with subtraction:
 Given two integers a − b and c − d, where a, b, c, and d are natural numbers, define (a − b) + (c − d) = (a + c) − (b + d).^{[38]}
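Representing each integer as a pair of naturals makes this definition a one-liner, with no case analysis at all (a sketch added here, not part of the article):

```python
def int_add(x: tuple, y: tuple) -> tuple:
    """Add two integers represented as formal differences of naturals:
    (a - b) + (c - d) = (a + c) - (b + d).
    Only natural-number addition is used; no sign cases are needed."""
    a, b = x
    c, d = y
    return (a + c, b + d)

def value(x: tuple) -> int:
    """Collapse a formal difference (a, b) to the ordinary integer a - b."""
    a, b = x
    return a - b
```

For example, −3 can be written as (2, 5) and 4 as (7, 3); their formal sum (9, 8) represents 1, as expected.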
Rational numbers (fractions)
Addition of rational numbers can be computed using the least common denominator, but a conceptually simpler definition involves only integer addition and multiplication:
 Define a/b + c/d = (ad + bc)/(bd).
The commutativity and associativity of rational addition is an easy consequence of the laws of integer arithmetic.^{[39]} For a more rigorous and general discussion, see field of fractions.
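The definition above uses only integer addition and multiplication, which the following sketch makes concrete (illustrative code added here, not part of the article; the reduction to lowest terms is a convenience, not part of the definition):

```python
from math import gcd

def add_fractions(a: int, b: int, c: int, d: int) -> tuple:
    """Compute a/b + c/d = (a*d + c*b) / (b*d), reduced to lowest terms.
    Assumes b and d are nonzero."""
    num = a * d + c * b   # cross-multiply and add: only integer arithmetic
    den = b * d
    g = gcd(num, den)
    return (num // g, den // g)
```

For example, 1/2 + 1/3 gives (1·3 + 1·2)/(2·3) = 5/6.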
Real numbers
A common construction of the set of real numbers is the Dedekind completion of the set of rational numbers. A real number is defined to be a Dedekind cut of rationals: a nonempty set of rationals that is closed downward and has no greatest element. The sum of real numbers a and b is defined element by element:
 Define a + b = {q + r : q ∈ a, r ∈ b}.^{[40]}
This definition was first published, in a slightly modified form, by Richard Dedekind in 1872.^{[41]} The commutativity and associativity of real addition are immediate; defining the real number 0 to be the set of negative rationals, it is easily seen to be the additive identity. Probably the trickiest part of this construction pertaining to addition is the definition of additive inverses.^{[42]}
Unfortunately, dealing with multiplication of Dedekind cuts is a case-by-case nightmare similar to the addition of signed integers. Another approach is the metric completion of the rational numbers. A real number is essentially defined to be the limit of a Cauchy sequence of rationals, lim a_{n}. Addition is defined term by term:
 Define lim a_{n} + lim b_{n} = lim (a_{n} + b_{n}).^{[43]}
This definition was first published by Georg Cantor, also in 1872, although his formalism was slightly different.^{[44]} One must prove that this operation is well-defined, dealing with co-Cauchy sequences. Once that task is done, all the properties of real addition follow immediately from the properties of rational numbers. Furthermore, the other arithmetic operations, including multiplication, have straightforward, analogous definitions.^{[45]}
Generalizations
 There are many things that can be added: numbers, vectors, matrices, spaces, shapes, sets, functions, equations, strings, chains...—Alexander Bogomolny
There are many binary operations that can be viewed as generalizations of the addition operation on the real numbers. The field of abstract algebra is centrally concerned with such generalized operations, and they also appear in set theory and category theory.
Addition in abstract algebra
In linear algebra, a vector space is an algebraic structure that allows for adding any two vectors and for scaling vectors. A familiar vector space is the set of all ordered pairs of real numbers; the ordered pair (a,b) is interpreted as a vector from the origin in the Euclidean plane to the point (a,b) in the plane. The sum of two vectors is obtained by adding their individual coordinates:
 (a,b) + (c,d) = (a+c,b+d).
This addition operation is central to classical mechanics, in which vectors are interpreted as forces.
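Coordinate-wise vector addition is easily written out (a minimal sketch added here, not part of the article), and it works in any number of dimensions:

```python
def vec_add(u: tuple, v: tuple) -> tuple:
    """Sum of two vectors of equal dimension, coordinate by coordinate:
    (a, b) + (c, d) = (a + c, b + d)."""
    return tuple(x + y for x, y in zip(u, v))
```

For example, two forces (1, 2) and (3, 4) acting on the same point combine to (4, 6).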
In modular arithmetic, the set of integers modulo 12 has twelve elements; it inherits an addition operation from the integers that is central to musical set theory. The set of integers modulo 2 has just two elements; the addition operation it inherits is known in Boolean logic as the "exclusive or" function. In geometry, the sum of two angle measures is often taken to be their sum as real numbers modulo 2π. This amounts to an addition operation on the circle, which in turn generalizes to addition operations on multi-dimensional tori.
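Both inherited operations reduce to one line (an illustration added here, not part of the article); in particular, modulo 2 the result coincides with Python's exclusive-or operator `^`:

```python
def add_mod(a: int, b: int, n: int) -> int:
    """Addition inherited from the integers, reduced modulo n."""
    return (a + b) % n

# n = 12 gives "clock" arithmetic, as used in musical set theory;
# n = 2 gives Boolean exclusive-or.
```

For example, 9 + 5 ≡ 2 (mod 12), and 1 + 1 ≡ 0 (mod 2), matching 1 XOR 1 = 0.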
The general theory of abstract algebra allows an "addition" operation to be any associative and commutative operation on a set. Basic algebraic structures with such an addition operation include commutative monoids and abelian groups.
Addition in set theory and category theory
A far-reaching generalization of addition of natural numbers is the addition of ordinal numbers and cardinal numbers in set theory. These give two different generalizations of addition of natural numbers to the transfinite. Unlike most addition operations, addition of ordinal numbers is not commutative. Addition of cardinal numbers, however, is a commutative operation closely related to the disjoint union operation.
In category theory, disjoint union is seen as a particular case of the coproduct operation, and general coproducts are perhaps the most abstract of all the generalizations of addition. Some coproducts, such as the direct sum and the wedge sum, are named to evoke their connection with addition.
Related operations
Arithmetic
Subtraction can be thought of as a kind of addition—that is, the addition of an additive inverse. Subtraction is itself a sort of inverse to addition, in that adding x and subtracting x are inverse functions.
Given a set with an addition operation, one cannot always define a corresponding subtraction operation on that set; the set of natural numbers is a simple example. On the other hand, a subtraction operation uniquely determines an addition operation, an additive inverse operation, and an additive identity; for this reason, an additive group can be described as a set that is closed under subtraction.^{[46]}
Multiplication can be thought of as repeated addition. If a single term x appears in a sum n times, then the sum is the product of n and x. If n is not a natural number, the product may still make sense; for example, multiplication by −1 yields the additive inverse of a number.
In the real and complex numbers, addition and multiplication can be interchanged by the exponential function:
 e^{a + b} = e^{a} e^{b}.^{[47]}
This identity allows multiplication to be carried out by consulting a table of logarithms and computing addition by hand; it also enables multiplication on a slide rule. The formula is still a good first-order approximation in the broad context of Lie groups, where it relates multiplication of infinitesimal group elements with addition of vectors in the associated Lie algebra.^{[48]}
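The log-table trick follows directly from the identity e^{a + b} = e^{a} e^{b} (a small sketch added here, not part of the article):

```python
import math

def multiply_via_logs(a: float, b: float) -> float:
    """Multiply two positive numbers by adding their logarithms and
    exponentiating the sum: the principle behind log tables and slide rules."""
    return math.exp(math.log(a) + math.log(b))
```

On a slide rule the same addition is performed physically, by placing two logarithmic scales end to end.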
There are even more generalizations of multiplication than addition.^{[49]} In general, multiplication operations always distribute over addition; this requirement is formalized in the definition of a ring. In some contexts, such as the integers, distributivity over addition and the existence of a multiplicative identity is enough to uniquely determine the multiplication operation. The distributive property also provides information about addition; by expanding the product (1 + 1)(a + b) in both ways, one concludes that addition is forced to be commutative. For this reason, ring addition is commutative in general.^{[50]}
Division is an arithmetic operation remotely related to addition. Since a/b = a(b^{−1}), division is right distributive over addition: (a + b)/c = a/c + b/c.^{[51]} However, division is not left distributive over addition; 1/(2 + 2) is not the same as 1/2 + 1/2.
Ordering
The maximum operation "max (a, b)" is a binary operation similar to addition. In fact, if two nonnegative numbers a and b are of different orders of magnitude, then their sum is approximately equal to their maximum. This approximation is extremely useful in the applications of mathematics, for example in truncating Taylor series. However, it presents a perpetual difficulty in numerical analysis, essentially since "max" is not invertible. If b is much greater than a, then a straightforward calculation of (a + b) − b can accumulate an unacceptable round-off error, perhaps even returning zero. See also Loss of significance.
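The round-off hazard is easy to demonstrate with ordinary double-precision floats (an illustration added here, not part of the article):

```python
def cancel(a: float, b: float) -> float:
    """Compute (a + b) - b in floating point. When b dwarfs a,
    the intermediate a + b rounds to b, and a is lost entirely."""
    return (a + b) - b
```

With doubles, `cancel(1.0, 1e20)` returns 0.0 rather than 1.0, because 1e20 + 1 rounds to 1e20; with comparable magnitudes, such as `cancel(1.0, 100.0)`, the exact answer 1.0 survives.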
The approximation becomes exact in a kind of infinite limit; if either a or b is an infinite cardinal number, their cardinal sum is exactly equal to the greater of the two.^{[53]} Accordingly, there is no subtraction operation for infinite cardinals.^{[54]}
Maximization is commutative and associative, like addition. Furthermore, since addition preserves the ordering of real numbers, addition distributes over "max" in the same way that multiplication distributes over addition:
 a + max (b, c) = max (a + b, a + c).
For these reasons, in tropical geometry one replaces multiplication with addition and addition with maximization. In this context, addition is called "tropical multiplication", maximization is called "tropical addition", and the tropical "additive identity" is negative infinity.^{[55]} Some authors prefer to replace addition with minimization; then the additive identity is positive infinity.^{[56]}
Tying these observations together, tropical addition is approximately related to regular addition through the logarithm:
 log (a + b) ≈ max (log a, log b),
which becomes more accurate as the base of the logarithm increases.^{[57]} The approximation can be made exact by extracting a constant h, named by analogy with Planck's constant from quantum mechanics,^{[58]} and taking the "classical limit" as h tends to zero:
 max (a, b) = lim_{h→0} h log (e^{a/h} + e^{b/h}).
In this sense, the maximum operation is a dequantized version of addition.^{[59]}
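The claim that log (a + b) ≈ max (log a, log b) sharpens as the base grows can be checked numerically (an illustration added here, not part of the article):

```python
import math

def tropical_gap(a: float, b: float, base: float) -> float:
    """Gap between log_base(a + b) and max(log_base a, log_base b).
    The gap is always nonnegative for positive a, b and shrinks as base grows."""
    log = lambda x: math.log(x, base)
    return log(a + b) - max(log(a), log(b))
```

For a = 100 and b = 1, the gap in base 10 is already several times smaller than in base 2, in line with the approximation becoming exact in the limit.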
Other ways to add
Incrementation, also known as the successor operation, is the addition of 1 to a number.
Summation describes the addition of arbitrarily many numbers, usually more than just two. It includes the idea of the sum of a single number, which is itself, and the empty sum, which is zero.^{[60]} An infinite summation is a delicate procedure known as a series.^{[61]}
Counting a finite set is equivalent to summing 1 over the set.
Integration is a kind of "summation" over a continuum, or more precisely and generally, over a differentiable manifold. Integration over a zero-dimensional manifold reduces to summation.
Linear combinations combine multiplication and summation; they are sums in which each term has a multiplier, usually a real or complex number. Linear combinations are especially useful in contexts where straightforward addition would violate some normalization rule, such as mixing of strategies in game theory or superposition of states in quantum mechanics.
Convolution is used to add two independent random variables defined by distribution functions. Its usual definition combines integration, subtraction, and multiplication. In general, convolution is useful as a kind of domain-side addition; by contrast, vector addition is a kind of range-side addition.
In literature
 In chapter 9 of Lewis Carroll's Through the Looking-Glass, the White Queen asks Alice, "And you do Addition? ... What's one and one and one and one and one and one and one and one and one and one?" Alice admits that she lost count, and the Red Queen declares, "She can't do Addition".
 In George Orwell's Nineteen Eighty-Four, the value of 2 + 2 is questioned; the State contends that if it declares 2 + 2 = 5, then it is so. See Two plus two make five for the history of this idea.
Notes
 ^ From Enderton (p.138): "...select two sets K and L with card K = 2 and card L = 3. Sets of fingers are handy; sets of apples are preferred by textbooks."
 ^ Devine et al. p.263
 ^ ^{a} ^{b} Schwartzman p.19
 ^ "Addend" is not a Latin word; in Latin it must be further conjugated, as in numerus addendus "the number to be added".
 ^ Karpinski pp.56–57, reproduced on p.104
 ^ Schwartzman (p.212) attributes adding upwards to the Greeks and Romans, saying it was about as common as adding downwards. On the other hand, Karpinski (p.103) writes that Leonard of Pisa "introduces the novelty of writing the sum above the addends"; it is unclear whether Karpinski is claiming this as an original invention or simply the introduction of the practice to Europe.
 ^ Karpinski pp.150–153
 ^ See Viro 2001 for an example of the sophistication involved in adding with sets of "fractional cardinality".
 ^ Adding it up (p.73) compares adding measuring rods to adding sets of cats: "For example, inches can be subdivided into parts, which are hard to tell from the wholes, except that they are shorter; whereas it is painful to cats to divide them into parts, and it seriously changes their nature."
 ^ Kaplan pp.69–71
 ^ Wynn p.5
 ^ Wynn p.15
 ^ Wynn p.17
 ^ Wynn p.19
 ^ F. Smith p.130
 ^ Carpenter, Thomas; Fennema, Elizabeth; Franke, Megan Loef; Levi, Linda; Empson, Susan (1999). Children's mathematics: Cognitively guided instruction. Portsmouth, NH: Heinemann. ISBN 0325001375.
 ^ ^{a} ^{b} Henry, Valerie J.; Brown, Richard S. (2008). "First-grade basic facts: An investigation into teaching and learning of an accelerated, high-demand memorization standard". Journal for Research in Mathematics Education 39 (2): 153–183. doi:10.2307/30034895.
 ^ ^{a} ^{b} ^{c} ^{d} ^{e} ^{f} ^{g} Fosnot and Dolk p. 99
 ^ The word "carry" may be inappropriate for education; Van de Walle (p.211) calls it "obsolete and conceptually misleading", preferring the word "trade".
 ^ Truitt and Rogers pp.1;44–49 and pp.2;77–78
 ^ Jean Marguin p. 48 (1994)
 ^ René Taton, p. 81 (1969)
 ^ Jean Marguin, p. 48 (1994); quoting René Taton (1963)
 ^ Williams pp.122–140
 ^ Flynn and Oberman pp.2, 8
 ^ Flynn and Oberman pp.1–9
 ^ Karpinski pp.102–103
 ^ The identity of the augend and addend varies with architecture. For ADD in x86 see Horowitz and Hill p.679; for ADD in 68k see p.767.
 ^ Enderton chapters 4 and 5, for example, follow this development.
 ^ California standards; see grades 2, 3, and 4.
 ^ Baez (p.37) explains the historical development, in "stark contrast" with the set theory presentation: "Apparently, half an apple is easier to understand than a negative apple!"
 ^ Begle p.49, Johnson p.120, Devine et al. p.75
 ^ Enderton p.79
 ^ For a version that applies to any poset with the descending chain condition, see Bergman p.100.
 ^ Enderton (p.79) observes, "But we want one binary operation +, not all these little one-place functions."
 ^ Ferreirós p.223
 ^ K. Smith p.234, Sparks and Rees p.66
 ^ Enderton p.92
 ^ The verifications are carried out in Enderton p.104 and sketched for a general field of fractions over a commutative ring in Dummit and Foote p.263.
 ^ Enderton p.114
 ^ Ferreirós p.135; see section 6 of Stetigkeit und irrationale Zahlen.
 ^ The intuitive approach, inverting every element of a cut and taking its complement, works only for irrational numbers; see Enderton p.117 for details.
 ^ Textbook constructions are usually not so cavalier with the "lim" symbol; see Burrill (p. 138) for a more careful, drawn-out development of addition with Cauchy sequences.
 ^ Ferreirós p.128
 ^ Burrill p.140
 ^ The set still must be nonempty. Dummit and Foote (p.48) discuss this criterion written multiplicatively.
 ^ Rudin p.178
 ^ Lee p.526, Proposition 20.9
 ^ Linderholm (p.49) observes, "By multiplication, properly speaking, a mathematician may mean practically anything. By addition he may mean a great variety of things, but not so great a variety as he will mean by 'multiplication'."
 ^ Dummit and Foote p.224. For this argument to work, one still must assume that addition is a group operation and that multiplication has an identity.
 ^ For an example of left and right distributivity, see Loday, especially p.15.
 ^ Compare Viro Figure 1 (p.2)
 ^ Enderton calls this statement the "Absorption Law of Cardinal Arithmetic"; it depends on the comparability of cardinals and therefore on the Axiom of Choice.
 ^ Enderton p.164
 ^ Mikhalkin p.1
 ^ Akian et al. p.4
 ^ Mikhalkin p.2
 ^ Litvinov et al. p.3
 ^ Viro p.4
 ^ Martin p.49
 ^ Stewart p.8
References
 History
 Bunt, Jones, and Bedient (1976). The historical roots of elementary mathematics. Prentice-Hall. ISBN 0133890155.
 Ferreirós, José (1999). Labyrinth of thought: A history of set theory and its role in modern mathematics. Birkhäuser. ISBN 0817657495.
 Kaplan, Robert (2000). The nothing that is: A natural history of zero. Oxford UP. ISBN 0195128427.
 Karpinski, Louis (1925). The history of arithmetic. Rand McNally. LCC QA21.K3.
 Schwartzman, Steven (1994). The words of mathematics: An etymological dictionary of mathematical terms used in English. MAA. ISBN 0883855119.
 Williams, Michael (1985). A history of computing technology. Prentice-Hall. ISBN 0133899179.
 Elementary mathematics
 Davison, Landau, McCracken, and Thompson (1999). Mathematics: Explorations & Applications (TE ed.). Prentice Hall. ISBN 0134358171.
 F. Sparks and C. Rees (1979). A survey of basic mathematics. McGraw-Hill. ISBN 0070599025.
 Education
 Begle, Edward (1975). The mathematics of the elementary school. McGraw-Hill. ISBN 0070043256.
 California State Board of Education mathematics content standards Adopted December 1997, accessed December 2005.
 D. Devine, J. Olson, and M. Olson (1991). Elementary mathematics for teachers (2e ed.). Wiley. ISBN 0471859478.
 National Research Council (2001). Adding it up: Helping children learn mathematics. National Academy Press. ISBN 0309069955.
 Van de Walle, John (2004). Elementary and middle school mathematics: Teaching developmentally (5e ed.). Pearson. ISBN 020538689X.
 Cognitive science
 Baroody and Tiilikainen (2003). "Two perspectives on addition development". The development of arithmetic concepts and skills. pp. 75. ISBN 080583155X.
 Fosnot and Dolk (2001). Young mathematicians at work: Constructing number sense, addition, and subtraction. Heinemann. ISBN 032500353X.
 Weaver, J. Fred (1982). "Interpretations of number operations and symbolic representations of addition and subtraction". Addition and subtraction: A cognitive perspective. pp. 60. ISBN 0898591716.
 Wynn, Karen (1998). "Numerical competence in infants". The development of mathematical skills. pp. 3. ISBN 086377816X.
 Mathematical exposition
 Bogomolny, Alexander (1996). "Addition". Interactive Mathematics Miscellany and Puzzles (cut-the-knot.org). Retrieved 3 February 2006.
 Dunham, William (1994). The mathematical universe. Wiley. ISBN 0471536563.
 Johnson, Paul (1975). From sticks and stones: Personal adventures in mathematics. Science Research Associates. ISBN 0574191151.
 Linderholm, Carl (1971). Mathematics Made Difficult. Wolfe. ISBN 0723404151.
 Smith, Frank (2002). The glass wall: Why mathematics can seem difficult. Teachers College Press. ISBN 0807742422.
 Smith, Karl (1980). The nature of modern mathematics (3e ed.). Wadsworth. ISBN 0818503521.
 Advanced mathematics
 Bergman, George (2005). An invitation to general algebra and universal constructions (2.3e ed.). General Printing. ISBN 0965521141.
 Burrill, Claude (1967). Foundations of real numbers. McGraw-Hill. LCC QA248.B95.
 D. Dummit and R. Foote (1999). Abstract algebra (2e ed.). Wiley. ISBN 0471368571.
 Enderton, Herbert (1977). Elements of set theory. Academic Press. ISBN 0122384407.
 Lee, John (2003). Introduction to smooth manifolds. Springer. ISBN 0387954481.
 Martin, John (2003). Introduction to languages and the theory of computation (3e ed.). McGraw-Hill. ISBN 0072322004.
 Rudin, Walter (1976). Principles of mathematical analysis (3e ed.). McGraw-Hill. ISBN 007054235X.
 Stewart, James (1999). Calculus: Early transcendentals (4e ed.). Brooks/Cole. ISBN 0534362982.
 Mathematical research
 Akian, Bapat, and Gaubert (2005). "Min-plus methods in eigenvalue perturbation theory and generalised Lidskii–Vishik–Ljusternik theorem". INRIA reports. arXiv:math.SP/0402090.
 J. Baez and J. Dolan (2001). "From Finite Sets to Feynman Diagrams". Mathematics Unlimited— 2001 and Beyond. pp. 29. arXiv:math.QA/0004133. ISBN 3540669132.
 Litvinov, Maslov, and Sobolevskii (1999). Idempotent mathematics and interval analysis. Reliable Computing, Kluwer.
 Loday, Jean-Louis (2002). "Arithmetree". J. of Algebra 258: 275. arXiv:math/0112034. doi:10.1016/S0021-8693(02)00510-0.
 Mikhalkin, Grigory (2006). "Tropical Geometry and its applications". To appear at the Madrid ICM. arXiv:math.AG/0601041.
 Viro, Oleg (2001), Dequantization of real algebraic geometry on logarithmic paper, in Casacuberta, Carles; Miró-Roig, Rosa Maria; Verdera, Joan et al., "European Congress of Mathematics: Barcelona, July 10–14, 2000, Volume I", Progress in Mathematics (Basel: Birkhäuser) 201: 135–146, arXiv:math/0005163, ISBN 3764364173
 Computing
 M. Flynn and S. Oberman (2001). Advanced computer arithmetic design. Wiley. ISBN 0471412090.
 P. Horowitz and W. Hill (2001). The art of electronics (2e ed.). Cambridge UP. ISBN 0521370957.
 Jackson, Albert (1960). Analog computation. McGrawHill. LCC QA76.4 J3.
 T. Truitt and A. Rogers (1960). Basics of analog computers. John F. Rider. LCC QA76.4 T7.
 Marguin, Jean (1994) (in French). Histoire des instruments et machines à calculer, trois siècles de mécanique pensante 1642–1942. Hermann. ISBN 9782705661663.
 Taton, René (1963) (in French). Le calcul mécanique. Que sais-je ? n° 367. Presses universitaires de France. pp. 20–28.

Arithmetic
Arithmetic or arithmetics (from the Greek word ἀριθμός, "number") is the oldest and most elementary branch of mathematics, used by almost everyone for tasks ranging from simple day-to-day counting to advanced science and business calculations. It involves the study of quantity, especially as the result of combining numbers. In common usage, it refers to the simpler properties when using the traditional operations of addition, subtraction, multiplication and division with smaller values of numbers. Professional mathematicians sometimes use the term (higher) arithmetic^{[1]} when referring to more advanced results related to number theory, but this should not be confused with elementary arithmetic.
History
The prehistory of arithmetic is limited to a very small number of artifacts that may indicate a conception of addition and subtraction, the best-known being the Ishango bone from central Africa, dating from somewhere between 20,000 and 18,000 BC, although its interpretation is disputed.^{[2]}
The earliest written records indicate the Egyptians and Babylonians used all the elementary arithmetic operations as early as 2000 BC. These artifacts do not always reveal the specific process used for solving problems, but the characteristics of the particular numeral system strongly influence the complexity of the methods. The hieroglyphic system for Egyptian numerals, like the later Roman numerals, descended from tally marks used for counting. In both cases, this origin resulted in values that used a decimal base but did not include positional notation. Although addition was generally straightforward, multiplication in Roman arithmetic required the assistance of a counting board to obtain the results.
Early number systems that included positional notation were not decimal, including the sexagesimal (base 60) system for Babylonian numerals and the vigesimal (base 20) system that defined Maya numerals. Because of this place-value concept, the ability to reuse the same digits for different values contributed to simpler and more efficient methods of calculation.
The continuous historical development of modern arithmetic starts with the Hellenistic civilization of ancient Greece, although it originated much later than the Babylonian and Egyptian examples. Prior to the works of Euclid around 300 BC, Greek studies in mathematics overlapped with philosophical and mystical beliefs. For example, Nicomachus summarized the viewpoint of the earlier Pythagorean approach to numbers, and their relationships to each other, in his Introduction to Arithmetic.
Greek numerals, derived from the hieratic Egyptian system, also lacked positional notation, and therefore imposed the same complexity on the basic operations of arithmetic. For example, the ancient mathematician Archimedes devoted his entire work The Sand Reckoner merely to devising a notation for a certain large integer.
The Hindu–Arabic numeral system, developed gradually, independently devised the place-value concept and positional notation, which combined the simpler methods for computations with a decimal base and the use of a digit representing zero. This allowed the system to consistently represent both large and small integers. This approach eventually replaced all other systems. In the early 6th century AD, the Indian mathematician Aryabhata incorporated an existing version of this system in his work and experimented with different notations. In the 7th century, Brahmagupta established the use of zero as a separate number and determined the results for multiplication, division, addition and subtraction of zero and all other numbers, except for the result of division by zero. His contemporary, the Syriac bishop Severus Sebokht, described the excellence of this system as "...valuable methods of calculation which surpass description". The Arabs also learned this new method and called it hesab.
Although the Codex Vigilanus described an early form of Arabic numerals (omitting zero) by 976 AD, Fibonacci was primarily responsible for spreading their use throughout Europe after the publication of his book Liber Abaci in 1202. He considered the significance of this "new" representation of numbers, which he styled the "Method of the Indians" (Latin Modus Indorum), so fundamental that all related mathematical foundations, including the results of Pythagoras and the algorism describing the methods for performing actual calculations, were "almost a mistake" in comparison.
In the Middle Ages, arithmetic was one of the seven liberal arts taught in universities.
The flourishing of algebra in the medieval Islamic world and in Renaissance Europe was an outgrowth of the enormous simplification of computation through decimal notation.
Various types of tools exist to assist in numeric calculations. Examples include slide rules (for multiplication, division, and trigonometry) and nomographs, in addition to the electronic calculator.
Decimal arithmetic
Although decimal notation may conceptually describe any numerals from a system with a decimal base, it is commonly used exclusively for the written forms of numbers with Arabic numerals as the basic digits, especially when the numeral includes a decimal separator preceding a sequence of these digits to represent a fractional part of the number. In this common usage, the written form of the number implies the existence of positional notation. For example, 507.36 denotes 5 hundreds (10^{2}), plus 0 tens (10^{1}), plus 7 units (10^{0}), plus 3 tenths (10^{−1}) plus 6 hundredths (10^{−2}). The conception of zero as a number comparable to the other basic digits, and the corresponding definition of multiplication and addition with zero, is an essential part of this notation.
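As a sketch (not from the source), the positional expansion just described can be computed mechanically; the helper below is hypothetical and mirrors the worked example 507.36.

```python
# Sketch: expanding a decimal numeral into its place values, mirroring
# 507.36 = 5*10^2 + 0*10^1 + 7*10^0 + 3*10^-1 + 6*10^-2.

def place_values(numeral):
    """Return a list of (digit, power-of-ten exponent) for a decimal string."""
    whole, _, frac = numeral.partition(".")
    # Digits left of the separator have non-negative exponents...
    pairs = [(int(d), len(whole) - 1 - i) for i, d in enumerate(whole)]
    # ...and digits right of it have negative exponents.
    pairs += [(int(d), -(i + 1)) for i, d in enumerate(frac)]
    return pairs

parts = place_values("507.36")
value = sum(d * 10.0 ** e for d, e in parts)   # reassemble the number
```

Summing the terms recovers the original value, illustrating that the written form implies positional notation.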
Algorism comprises all of the rules for performing arithmetic computations using this type of written numeral. For example, addition produces the sum of two arbitrary numbers. The result is calculated by the repeated addition of single digits from each number that occupies the same position, proceeding from right to left. An addition table with ten rows and ten columns displays all possible values for each sum. If an individual sum exceeds the value nine, the result is represented with two digits. The rightmost digit is the value for the current position, and the result for the subsequent addition of the digits to the left increases by the value of the second (leftmost) digit, which is always one. This adjustment is termed a carry of the value one.
The process for multiplying two arbitrary numbers is similar to the process for addition. A multiplication table with ten rows and ten columns lists the results for each pair of digits. If an individual product of a pair of digits exceeds nine, the carry adjustment increases the result of any subsequent multiplication from digits to the left by a value equal to the second (leftmost) digit, which is any value from one to eight (9 × 9 = 81). Additional steps define the final result.
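Likewise, long multiplication built from single-digit products and carries can be sketched as follows; the helper name and structure are illustrative assumptions.

```python
# Sketch: long multiplication from single-digit products and carries.
# Each pair of digits contributes to the position given by the sum of
# their positions; carries propagate leftward.

def multiply_digits(a, b):
    """Multiply two non-negative integers given as decimal strings."""
    result = [0] * (len(a) + len(b))          # enough positions for the product
    for i, da in enumerate(reversed(a)):
        carry = 0
        for j, db in enumerate(reversed(b)):
            s = result[i + j] + int(da) * int(db) + carry
            result[i + j] = s % 10
            carry = s // 10                   # per-digit carry, bounded since 9 * 9 = 81
        result[i + len(b)] += carry           # deposit the final carry of this row
    digits = "".join(map(str, reversed(result))).lstrip("0")
    return digits or "0"
```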
Similar techniques exist for subtraction and division.
The creation of a correct process for multiplication relies on the relationship between values of adjacent digits. The value for any single digit in a numeral depends on its position. Also, each position to the left represents a value ten times larger than the position to the right. In mathematical terms, the exponent for the base of ten increases by one (to the left) or decreases by one (to the right). Therefore, the value for any arbitrary digit is multiplied by a value of the form 10^{n} with integer n. The list of values corresponding to all possible positions for a single digit is written as {..., 10^{2}, 10, 1, 10^{−1}, 10^{−2}, ...}.
Repeated multiplication of any value in this list by ten produces another value in the list. In mathematical terminology, this characteristic is defined as closure, and the previous list is described as closed under multiplication. It is the basis for correctly finding the results of multiplication using the previous technique. This outcome is one example of the uses ofnumber theory.
Arithmetic operations
The basic arithmetic operations are addition, subtraction, multiplication and division, although this subject also includes more advanced operations, such as manipulations of percentages, square roots, exponentiation, and logarithmic functions. Arithmetic is performed according to an order of operations. Any set of objects upon which all four arithmetic operations (except division by zero) can be performed, and where these four operations obey the usual laws, is called a field.
Addition (+)
Addition is the basic operation of arithmetic. In its simplest form, addition combines two numbers, the addends or terms, into a single number, the sum of the numbers.
Adding more than two numbers can be viewed as repeated addition; this procedure is known as summation and includes ways to add infinitely many numbers in an infinite series; repeated addition of the number one is the most basic form of counting.
Addition is commutative and associative so the order the terms are added in does not matter. The identity element of addition (the additive identity) is 0, that is, adding zero to any number yields that same number. Also, the inverse element of addition (the additive inverse) is the opposite of any number, that is, adding the opposite of any number to the number itself yields the additive identity, 0. For example, the opposite of 7 is −7, so 7 + (−7) = 0.
Addition can be given geometrically as follows:
 If a and b are the lengths of two sticks, then if we place the sticks one after the other, the length of the stick thus formed is a + b.
Subtraction (−)
Subtraction is the opposite of addition. Subtraction finds the difference between two numbers, the minuend minus the subtrahend. If the minuend is larger than the subtrahend, the difference is positive; if the minuend is smaller than the subtrahend, the difference is negative; if they are equal, the difference is zero.
Subtraction is neither commutative nor associative. For that reason, it is often helpful to look at subtraction as addition of the minuend and the opposite of the subtrahend, that is, a − b = a + (−b). When written as a sum, all the properties of addition hold.
There are several methods for calculating results, some of which are particularly advantageous to machine calculation. For example, digital computers employ the method of two's complement. Of great importance is the counting up method by which change is made. Suppose an amount P is given to pay the required amount Q, with P greater than Q. Rather than performing the subtraction P − Q and counting out that amount in change, money is counted out starting at Q and continuing until reaching P. Although the amount counted out must equal the result of the subtraction P − Q, the subtraction was never really done and the value of P − Q might still be unknown to the changemaker.
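The counting-up method can be sketched as code; this greedy variant, the denominations, and the function name are assumptions for illustration, suited to canonical coin systems.

```python
# Sketch of the counting-up ("making change") method described above.
# Rather than computing paid - price directly, coins are counted out
# starting from the price until the amount paid is reached.
# The denominations (US cents) and greedy largest-first order are
# assumptions for illustration.

def count_up_change(price, paid, denominations=(100, 25, 10, 5, 1)):
    """Return the coins handed back, counting up from price to paid."""
    coins, reached = [], price
    while reached < paid:
        for coin in sorted(denominations, reverse=True):
            if reached + coin <= paid:
                coins.append(coin)   # hand out this coin
                reached += coin      # "count up" toward the amount paid
                break
    return coins
```

For example, paying 100 cents for a 63-cent item counts out 25, 10, 1 and 1 (37 cents in total) without the subtraction 100 − 63 ever being performed.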
Multiplication (× or ·)
Multiplication is the second basic operation of arithmetic. Multiplication also combines two numbers into a single number, the product. The two original numbers are called the multiplier and the multiplicand, sometimes both simply called factors.
Multiplication is best viewed as a scaling operation. If the real numbers are imagined as lying in a line, multiplication by a number, say x, greater than 1 is the same as stretching everything away from zero uniformly, in such a way that the number 1 itself is stretched to where x was. Similarly, multiplying by a number less than 1 can be imagined as squeezing towards zero. (Again, in such a way that 1 goes to the multiplicand.)
Multiplication is commutative and associative; further it is distributive over addition and subtraction. The multiplicative identity is 1, that is, multiplying any number by 1 yields that same number. Also, the multiplicative inverse is the reciprocal of any number (except zero; zero is the only number without a multiplicative inverse), that is, multiplying the reciprocal of any number by the number itself yields the multiplicative identity.
The product of a and b is written as a × b or a • b. When a or b are expressions not written simply with digits, it is also written by simple juxtaposition: ab. In computer programming languages and software packages in which one can only use characters normally found on a keyboard, it is often written with an asterisk: a * b.
Division (÷ or /)
Division is essentially the opposite of multiplication. Division finds the quotient of two numbers, the dividend divided by the divisor. Any dividend divided by zero is undefined. For positive numbers, if the dividend is larger than the divisor, the quotient is greater than one, otherwise it is less than one (a similar rule applies for negative numbers). The quotient multiplied by the divisor always yields the dividend.
Division is neither commutative nor associative. As it is helpful to look at subtraction as addition, it is helpful to look at division as multiplication of the dividend times the reciprocal of the divisor, that is, a ÷ b = a × ^{1}/_{b}. When written as a product, it obeys all the properties of multiplication.
Number theory
The term arithmetic also refers to number theory. This includes the properties of integers related to primality, divisibility, and the solution of equations in integers, as well as modern research that is an outgrowth of this study. It is in this context that one runs across the fundamental theorem of arithmetic and arithmetic functions. A Course in Arithmetic by Jean-Pierre Serre reflects this usage, as do such phrases as first order arithmetic or arithmetical algebraic geometry. Number theory is also referred to as the higher arithmetic, as in the title of Harold Davenport's book on the subject.
Arithmetic in education
Primary education in mathematics often places a strong focus on algorithms for the arithmetic of natural numbers, integers, rational numbers (fractions), and real numbers (using the decimal place-value system). This study is sometimes known as algorism.
The difficulty and unmotivated appearance of these algorithms has long led educators to question this curriculum, advocating the early teaching of more central and intuitive mathematical ideas. One notable movement in this direction was the New Math of the 1960s and 1970s, which attempted to teach arithmetic in the spirit of axiomatic development from set theory, an echo of the prevailing trend in higher mathematics.^{[3]}
Footnotes
 ^ Davenport, Harold, The Higher Arithmetic: An Introduction to the Theory of Numbers (7th ed.), Cambridge University Press, Cambridge, UK, 1999, ISBN 0521634466
 ^ Rudman, Peter Strom (2007). How Mathematics Happened: The First 50,000 Years. Prometheus Books. p. 64. ISBN 9781591024774.
 ^ Mathematically Correct: Glossary of Terms
References
 Cunnington, Susan, The Story of Arithmetic: A Short History of Its Origin and Development, Swan Sonnenschein, London, 1904
 Dickson, Leonard Eugene, History of the Theory of Numbers (3 volumes), reprints: Carnegie Institute of Washington, Washington, 1932; Chelsea, New York, 1952, 1966
 Euler, Leonhard, Elements of Algebra, Tarquin Press, 2007
 Fine, Henry Burchard (1858–1928), The Number System of Algebra Treated Theoretically and Historically, Leach, Shewell & Sanborn, Boston, 1891
 Karpinski, Louis Charles (1878–1956), The History of Arithmetic, Rand McNally, Chicago, 1925; reprint: Russell & Russell, New York, 1965
 Ore, Øystein, Number Theory and Its History, McGraw–Hill, New York, 1948
 Weil, André, Number Theory: An Approach through History, Birkhauser, Boston, 1984; reviewed: Mathematical Reviews 85c:01004
External links
 What is arithmetic?
 MathWorld article about arithmetic
 Interactive Arithmetic Lessons and Practice
 Talking Math Game for kids
 The New Student's Reference Work/Arithmetic (historical)
 Arithmetic Game
 Math Games for kids and adults
 The Great Calculation According to the Indians, of Maximus Planudes – an early Western work on arithmetic at Convergence

Game theory
Game theory falls under the JEL classification code JEL: C7.
In mathematics, game theory models strategic situations, or games, in which an individual's success in making choices depends on the choices of others (Myerson, 1991). It is used in the social sciences (most notably in economics, management, operations research, political science, and social psychology) as well as in other formal sciences (logic, computer science, and statistics) and biology (particularly evolutionary biology and ecology). While initially developed to analyze competitions in which one individual does better at another's expense (zero-sum games), it has been expanded to treat a wide class of interactions, which are classified according to several criteria. Today, "game theory is a sort of umbrella or 'unified field' theory for the rational side of social science, where 'social' is interpreted broadly, to include human as well as non-human players (computers, animals, plants)." (Aumann 1987).
Traditional applications of game theory define and study equilibria in these games. In an equilibrium, each player of the game has adopted a strategy that cannot improve his outcome, given the others' strategies. Many equilibrium concepts have been developed (most famously the Nash equilibrium) to describe aspects of strategic equilibria. These equilibrium concepts are motivated differently depending on the area of application, although they often overlap or coincide. This methodology has received criticism, and debates continue over the appropriateness of particular equilibrium concepts, the appropriateness of equilibria altogether, and the usefulness of mathematical models in the social sciences.
Mathematical game theory began with publications by Émile Borel, which led to his 1938 book Applications aux Jeux de Hasard. However, Borel's results were limited, and his conjecture about the non-existence of mixed-strategy equilibria in two-person zero-sum games was wrong. The modern epoch of game theory began with the statement of the theorem on the existence of mixed-strategy equilibria in two-person zero-sum games and its proof by John von Neumann. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. His paper was followed by his 1944 book Theory of Games and Economic Behavior, written with Oskar Morgenstern, which considered cooperative games of several players. The second edition of this book provided an axiomatic theory of expected utility, which allowed mathematical statisticians and economists to treat decision-making under uncertainty.
This theory was developed extensively in the 1950s by many scholars. Game theory was later explicitly applied to biology in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been widely recognized as an important tool in many fields. Eight game theorists have won the Nobel Memorial Prize in Economic Sciences, and John Maynard Smith was awarded the Crafoord Prize for his application of game theory to biology.
History
The first known discussion of game theory occurred in a letter written by James Waldegrave in 1713. In this letter, Waldegrave provides a minimax mixed strategy solution to a two-person version of the card game le Her.
James Madison made what we now recognize as a gametheoretic analysis of the ways states can be expected to behave under different systems of taxation.^{[1]}^{[2]}
It was not until the publication of Antoine Augustin Cournot's Recherches sur les principes mathématiques de la théorie des richesses (Researches into the Mathematical Principles of the Theory of Wealth) in 1838 that a general game-theoretic analysis was pursued. In this work Cournot considers a duopoly and presents a solution that is a restricted version of the Nash equilibrium.
Although Cournot's analysis is more general than Waldegrave's, game theory did not really exist as a unique field until John von Neumann published a paper in 1928.^{[3]} While the French mathematician Émile Borel did some earlier work on games, von Neumann can rightfully be credited as the inventor of game theory.^{[citation needed]} Von Neumann's work in game theory culminated in the 1944 book Theory of Games and Economic Behavior by von Neumann and Oskar Morgenstern. This foundational work contains the method for finding mutually consistent solutions for twoperson zerosum games. During this time period, work on game theory was primarily focused on cooperative game theory, which analyzes optimal strategies for groups of individuals, presuming that they can enforce agreements between them about proper strategies.
In 1950, the first discussion of the prisoner's dilemma appeared, and an experiment was undertaken on this game at the RAND corporation. Around this same time, John Nash developed a criterion for mutual consistency of players' strategies, known as Nash equilibrium, applicable to a wider variety of games than the criterion proposed by von Neumann and Morgenstern. This equilibrium is sufficiently general to allow for the analysis of noncooperative games in addition to cooperative ones.
Game theory experienced a flurry of activity in the 1950s, during which time the concepts of the core, the extensive form game, fictitious play, repeated games, and the Shapley value were developed. In addition, the first applications of game theory to philosophy and political science occurred during this time.
In 1965, Reinhard Selten introduced his solution concept of subgame perfect equilibria, which further refined the Nash equilibrium (later he would introduce trembling hand perfection as well). In 1967, John Harsanyi developed the concepts of complete information and Bayesian games. Nash, Selten and Harsanyi became Economics Nobel Laureates in 1994 for their contributions to economic game theory.
In the 1970s, game theory was extensively applied in biology, largely as a result of the work of John Maynard Smith and his evolutionarily stable strategy. In addition, the concepts of correlated equilibrium, trembling hand perfection, and common knowledge^{[4]} were introduced and analyzed.
In 2005, game theorists Thomas Schelling and Robert Aumann followed Nash, Selten and Harsanyi as Nobel Laureates. Schelling worked on dynamic models, early examples of evolutionary game theory. Aumann contributed more to the equilibrium school, introducing an equilibrium coarsening, correlated equilibrium, and developing an extensive formal analysis of the assumption of common knowledge and of its consequences.
In 2007, Roger Myerson, together with Leonid Hurwicz and Eric Maskin, was awarded the Nobel Prize in Economics "for having laid the foundations of mechanism design theory." Myerson's contributions include the notion of proper equilibrium, and an important graduate text: Game Theory, Analysis of Conflict (Myerson 1997).
Representation of games
The games studied in game theory are well-defined mathematical objects. A game consists of a set of players, a set of moves (or strategies) available to those players, and a specification of payoffs for each combination of strategies. Most cooperative games are presented in the characteristic function form, while the extensive and the normal forms are used to define non-cooperative games.
Extensive form
The extensive form can be used to formalize games with a time sequencing of moves. Games here are played on trees (as pictured to the left). Here each vertex (or node) represents a point of choice for a player. The player is specified by a number listed by the vertex. The lines out of the vertex represent the possible actions for that player. The payoffs are specified at the bottom of the tree. The extensive form can be viewed as a multi-player generalization of a decision tree. (Fudenberg & Tirole 1991, p. 67)
In the game pictured to the left, there are two players. Player 1 moves first and chooses either F or U. Player 2 sees Player 1's move and then chooses A or R. If Player 1 chooses U and Player 2 then chooses A, Player 1 gets 8 and Player 2 gets 2.
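A sequential game like this one can be solved by backward induction: the second mover's best reply is computed at each node, and the first mover chooses with that reply anticipated. The sketch below encodes the game just described; only the (U, A) payoff of (8, 2) is given in the text, so the payoffs on the other branches are invented placeholders, not the figure's actual values.

```python
# Backward induction on a small two-stage extensive-form game.
# Player 1 chooses F or U; Player 2 observes and chooses A or R.
# Only the (U, A) -> (8, 2) payoff comes from the article; the rest
# are illustrative placeholders.
tree = {
    "F": {"A": (3, 1), "R": (0, 0)},   # placeholder payoffs
    "U": {"A": (8, 2), "R": (1, 1)},   # (U, A) -> (8, 2) as in the text
}

def backward_induction(tree):
    # At each of Player 1's moves, Player 2 best-responds by
    # maximizing its own payoff (the second coordinate).
    best = {}
    for move1, replies in tree.items():
        move2 = max(replies, key=lambda m: replies[m][1])
        best[move1] = (move2, replies[move2])
    # Player 1 anticipates those replies and maximizes the first coordinate.
    move1 = max(best, key=lambda m: best[m][1][0])
    move2, payoffs = best[move1]
    return move1, move2, payoffs

print(backward_induction(tree))  # ('U', 'A', (8, 2))
```

With these placeholder numbers, Player 2 answers either opening move with A, so Player 1 chooses U and the play (U, A) with payoffs (8, 2) results, matching the text.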
The extensive form can also capture simultaneous-move games and games with imperfect information. To represent it, either a dotted line connects different vertices to represent them as being part of the same information set (i.e., the players do not know at which point they are), or a closed line is drawn around them. (See example in the imperfect information section.)
Normal form
                         Player 2 chooses Left    Player 2 chooses Right
Player 1 chooses Up      4, 3                     –1, –1
Player 1 chooses Down    0, 0                     3, 4

Normal form or payoff matrix of a 2-player, 2-strategy game
The normal (or strategic form) game is usually represented by a matrix which shows the players, strategies, and payoffs (see the example to the right). More generally it can be represented by any function that associates a payoff for each player with every possible combination of actions. In the accompanying example there are two players; one chooses the row and the other chooses the column. Each player has two strategies, which are specified by the number of rows and the number of columns. The payoffs are provided in the interior. The first number is the payoff received by the row player (Player 1 in our example); the second is the payoff for the column player (Player 2 in our example). Suppose that Player 1 plays Up and that Player 2 plays Left. Then Player 1 gets a payoff of 4, and Player 2 gets 3.
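The "function that associates a payoff for each player with every possible combination of actions" can be written down directly. A minimal sketch, using the numbers from the example matrix:

```python
# A 2x2 normal-form game as a map from strategy profiles to payoff
# pairs (row player's payoff first), with the article's example values.
game = {
    ("Up", "Left"): (4, 3),   ("Up", "Right"): (-1, -1),
    ("Down", "Left"): (0, 0), ("Down", "Right"): (3, 4),
}

# The profile described in the text: Player 1 plays Up, Player 2 plays Left.
p1_payoff, p2_payoff = game[("Up", "Left")]
print(p1_payoff, p2_payoff)  # 4 3
```

Any finite normal-form game, with any number of players, can be represented the same way by using longer strategy tuples as keys.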
When a game is presented in normal form, it is presumed that each player acts simultaneously or, at least, without knowing the actions of the other. If players have some information about the choices of other players, the game is usually presented in extensive form.
Every extensive-form game has an equivalent normal-form game; however, the transformation to normal form may result in an exponential blowup in the size of the representation, making it computationally impractical. (Leyton-Brown & Shoham 2008, p. 35)
Characteristic function form
In cooperative games with transferable utility no individual payoffs are given. Instead, the characteristic function determines the payoff of each coalition. The standard assumption is that the empty coalition obtains a payoff of 0.
The origin of this form is to be found in the seminal book of von Neumann and Morgenstern, who, when studying coalitional normal form games, assumed that when a coalition C forms, it plays against the complementary coalition (N \ C) as if they were playing a 2-player game. The equilibrium payoff of C is its characteristic value. There are now different models for deriving coalitional values from normal form games, but not all games in characteristic function form can be derived from normal form games.
Formally, a characteristic function form game (also known as a TU-game) is given as a pair (N, v), where N denotes the set of players and v : 2^N → R is a characteristic function.
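The pair (N, v) translates directly into code: a player set plus a map from coalitions to their worth. The sketch below uses invented worths for a three-player game and checks superadditivity, a standard property of characteristic functions (merging disjoint coalitions never hurts):

```python
from itertools import combinations

# A TU-game (N, v): N is the player set, v assigns a worth to every
# coalition (a frozenset). The worths here are invented for illustration.
N = frozenset({1, 2, 3})
v = {
    frozenset(): 0,                       # the empty coalition gets 0
    frozenset({1}): 1, frozenset({2}): 1, frozenset({3}): 1,
    frozenset({1, 2}): 3, frozenset({1, 3}): 3, frozenset({2, 3}): 3,
    N: 6,
}

def is_superadditive(v, N):
    # Checks v(S ∪ T) >= v(S) + v(T) for all disjoint coalitions S, T.
    coalitions = [frozenset(c) for r in range(len(N) + 1)
                  for c in combinations(N, r)]
    return all(v[S | T] >= v[S] + v[T]
               for S in coalitions for T in coalitions if not (S & T))

print(is_superadditive(v, N))  # True
```

Solution concepts for cooperative games, such as the core or the Shapley value, are computed from exactly this (N, v) data.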
The characteristic function form has been generalised to games without the assumption of transferable utility.
Partition function form
The characteristic function form ignores the possible externalities of coalition formation. In the partition function form the payoff of a coalition depends not only on its members, but also on the way the rest of the players are partitioned (Thrall & Lucas 1963).
General and applied uses
As a method of applied mathematics, game theory has been used to study a wide variety of human and animal behaviors. It was initially developed in economics to understand a large collection of economic behaviors, including behaviors of firms, markets, and consumers. The use of game theory in the social sciences has expanded, and game theory has been applied to political, sociological, and psychological behaviors as well.
Game-theoretic analysis was initially used to study animal behavior by Ronald Fisher in the 1930s (although even Charles Darwin makes a few informal game-theoretic statements). This work predates the name "game theory", but it shares many important features with this field. The developments in economics were later applied to biology largely by John Maynard Smith in his book Evolution and the Theory of Games.
In addition to being used to predict and explain behavior, game theory has also been used to attempt to develop theories of ethical or normative behavior. In economics and philosophy, scholars have applied game theory to help in the understanding of good or proper behavior. Game-theoretic arguments of this type can be found as far back as Plato.^{[5]}
Description and modeling
The first known use is to describe and model how human populations behave. Some scholars believe that by finding the equilibria of games they can predict how actual human populations will behave when confronted with situations analogous to the game being studied. This particular view of game theory has come under recent criticism. First, it is criticized because the assumptions made by game theorists are often violated. Game theorists may assume players always act in a way to directly maximize their wins (the Homo economicus model), but in practice, human behavior often deviates from this model. Explanations of this phenomenon are many: irrationality, new models of deliberation, or even different motives (like that of altruism). Game theorists respond by comparing their assumptions to those used in physics. Thus while their assumptions do not always hold, they can treat game theory as a reasonable scientific ideal akin to the models used by physicists. However, additional criticism of this use of game theory has been levied because some experiments have demonstrated that individuals do not play equilibrium strategies. For instance, in the centipede game, guess 2/3 of the average game, and the dictator game, people regularly do not play Nash equilibria. There is an ongoing debate regarding the importance of these experiments.^{[6]}
Alternatively, some authors claim that Nash equilibria do not provide predictions for human populations, but rather provide an explanation for why populations that play Nash equilibria remain in that state. However, the question of how populations reach those points remains open.
Some game theorists have turned to evolutionary game theory in order to resolve these worries. These models presume either no rationality or bounded rationality on the part of players. Despite the name, evolutionary game theory does not necessarily presume natural selection in the biological sense. Evolutionary game theory includes both biological as well as cultural evolution and also models of individual learning (for example, fictitious play dynamics).
Prescriptive or normative analysis
             Cooperate    Defect
Cooperate    –1, –1       –10, 0
Defect       0, –10       –5, –5

The Prisoner's Dilemma
On the other hand, some scholars see game theory not as a predictive tool for the behavior of human beings, but as a suggestion for how people ought to behave. Since a Nash equilibrium of a game constitutes one's best response to the actions of the other players, playing a strategy that is part of a Nash equilibrium seems appropriate. However, this use for game theory has also come under criticism. First, in some cases it is appropriate to play a non-equilibrium strategy if one expects others to play non-equilibrium strategies as well. For an example, see Guess 2/3 of the average.
Second, the Prisoner's Dilemma presents another potential counterexample. In the Prisoner's Dilemma, each player pursuing his own self-interest leads both players to be worse off than had they not pursued their own self-interests.
Economics and business
Economists have long used game theory to analyze a wide array of economic phenomena, including auctions, bargaining, duopolies, fair division, oligopolies, social network formation, and voting systems and to model across such broad classifications as mathematical economics,^{[7]} behavioral economics,^{[8]} political economy,^{[9]} and industrial organization.^{[10]}
This research usually focuses on particular sets of strategies known as equilibria in games. These "solution concepts" are usually based on what is required by norms of rationality. In non-cooperative games, the most famous of these is the Nash equilibrium. A set of strategies is a Nash equilibrium if each represents a best response to the other strategies. So, if all the players are playing the strategies in a Nash equilibrium, they have no unilateral incentive to deviate, since their strategy is the best they can do given what others are doing.
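The "no unilateral incentive to deviate" condition can be checked mechanically: a profile is a pure-strategy Nash equilibrium exactly when each player's strategy is a best response holding the other's fixed. A minimal sketch for two-player matrix games, applied to the normal-form example from this article:

```python
from itertools import product

def pure_nash(payoffs, rows, cols):
    # payoffs[(r, c)] = (row player's payoff, column player's payoff).
    # A profile is a Nash equilibrium if neither player can gain by
    # deviating alone.
    eq = []
    for r, c in product(rows, cols):
        row_ok = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in rows)
        col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in cols)
        if row_ok and col_ok:
            eq.append((r, c))
    return eq

# The 2x2 normal-form example used earlier in the article:
game = {
    ("Up", "Left"): (4, 3),   ("Up", "Right"): (-1, -1),
    ("Down", "Left"): (0, 0), ("Down", "Right"): (3, 4),
}
print(pure_nash(game, ["Up", "Down"], ["Left", "Right"]))
# [('Up', 'Left'), ('Down', 'Right')]
```

This exhaustive check only finds pure-strategy equilibria; mixed-strategy equilibria (which Nash's theorem guarantees exist in every finite game) require solving for probability distributions instead.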
The payoffs of the game are generally taken to represent the utility of individual players. Often in modeling situations the payoffs represent money, which presumably corresponds to an individual's utility. This assumption, however, can be faulty.
A prototypical paper on game theory in economics begins by presenting a game that is an abstraction of some particular economic situation. One or more solution concepts are chosen, and the author demonstrates which strategy sets in the presented game are equilibria of the appropriate type. Naturally one might wonder to what use this information should be put. Economists and business professors suggest two primary uses: descriptive and prescriptive.
Political science
The application of game theory to political science is focused in the overlapping areas of fair division, political economy, public choice, war bargaining, positive political theory, and social choice theory. In each of these areas, researchers have developed game-theoretic models in which the players are often voters, states, special interest groups, and politicians.
For early examples of game theory applied to political science, see the work of Anthony Downs. In his book An Economic Theory of Democracy (Downs 1957), he applies the Hotelling firm location model to the political process. In the Downsian model, political candidates commit to ideologies on a one-dimensional policy space. The theorist shows how the political candidates will converge to the ideology preferred by the median voter.
A game-theoretic explanation for democratic peace is that public and open debate in democracies sends clear and reliable information regarding their intentions to other states. In contrast, it is difficult to know the intentions of non-democratic leaders, what effect concessions will have, and if promises will be kept. Thus there will be mistrust and unwillingness to make concessions if at least one of the parties in a dispute is a non-democracy (Levy & Razin 2003).
Biology
        Hawk      Dove
Hawk    20, 20    80, 40
Dove    40, 80    60, 60

The hawk-dove game
Unlike economics, the payoffs for games in biology are often interpreted as corresponding to fitness. In addition, the focus has been less on equilibria that correspond to a notion of rationality and more on ones that would be maintained by evolutionary forces. The best known equilibrium in biology is the evolutionarily stable strategy (or ESS), first introduced in (Smith & Price 1973). Although its initial motivation did not involve any of the mental requirements of the Nash equilibrium, every ESS is a Nash equilibrium.
In biology, game theory has been used to understand many different phenomena. It was first used to explain the evolution (and stability) of the approximate 1:1 sex ratios. (Fisher 1930) suggested that the 1:1 sex ratios are a result of evolutionary forces acting on individuals who could be seen as trying to maximize their number of grandchildren.
Additionally, biologists have used evolutionary game theory and the ESS to explain the emergence of animal communication (Harper & Maynard Smith 2003). The analysis of signaling games and other communication games has provided some insight into the evolution of communication among animals. For example, the mobbing behavior of many species, in which a large number of prey animals attack a larger predator, seems to be an example of spontaneous emergent organization. Ants have also been shown to exhibit feedforward behavior akin to fashion (see Butterfly Economics).
Biologists have used the game of chicken to analyze fighting behavior and territoriality.^{[citation needed]}
Maynard Smith, in the preface to Evolution and the Theory of Games, writes, "paradoxically, it has turned out that game theory is more readily applied to biology than to the field of economic behaviour for which it was originally designed". Evolutionary game theory has been used to explain many seemingly incongruous phenomena in nature.^{[11]}
One such phenomenon is known as biological altruism. This is a situation in which an organism appears to act in a way that benefits other organisms and is detrimental to itself. This is distinct from traditional notions of altruism because such actions are not conscious, but appear to be evolutionary adaptations to increase overall fitness. Examples can be found in species ranging from vampire bats that regurgitate blood they have obtained from a night's hunting and give it to group members who have failed to feed, to worker bees that care for the queen bee for their entire lives and never mate, to Vervet monkeys that warn group members of a predator's approach, even when it endangers that individual's chance of survival.^{[12]} All of these actions increase the overall fitness of a group, but occur at a cost to the individual.
Evolutionary game theory explains this altruism with the idea of kin selection. Altruists discriminate between the individuals they help and favor relatives. Hamilton's rule explains the evolutionary reasoning behind this selection with the inequality c < b*r, where the cost (c) to the altruist must be less than the benefit (b) to the recipient multiplied by the coefficient of relatedness (r). The more closely related two organisms are, the more often altruism occurs between them, because they share many of the same alleles. This means that the altruistic individual, by ensuring that the alleles of its close relative are passed on through the survival of its offspring, can forgo the option of having offspring itself, because the same number of alleles is passed on. Helping a sibling, for example (in diploid animals), has a coefficient of ½, because (on average) an individual shares ½ of the alleles in its sibling's offspring. Ensuring that enough of a sibling's offspring survive to adulthood precludes the necessity of the altruistic individual producing offspring.^{[12]} The coefficient values depend heavily on the scope of the playing field; for example, if the choice of whom to favor includes all genetic living things rather than just all relatives, and the discrepancy between all humans accounts for only approximately 1% of the diversity in the playing field, then a coefficient that was ½ in the smaller field becomes 0.995. Similarly, if information other than that of a genetic nature (e.g. epigenetics, religion, science, etc.) is considered to persist through time, the playing field becomes larger still, and the discrepancies smaller.
Computer science and logic
Game theory has come to play an increasingly important role in logic and in computer science. Several logical theories have a basis in game semantics. In addition, computer scientists have used games to model interactive computations. Also, game theory provides a theoretical basis to the field of multi-agent systems.
Separately, game theory has played a role in online algorithms, in particular the k-server problem, which has in the past been referred to as games with moving costs and request-answer games (Ben-David, Borodin & Karp et al. 1994). Yao's principle is a game-theoretic technique for proving lower bounds on the computational complexity of randomized algorithms, and especially of online algorithms.
The field of algorithmic game theory combines computer science concepts of complexity and algorithm design with game theory and economic theory. The emergence of the internet has motivated the development of algorithms for finding equilibria in games, markets, computational auctions, peer-to-peer systems, and security and information markets.^{[13]}
Philosophy
        Stag    Hare
Stag    3, 3    0, 2
Hare    2, 0    2, 2

Stag hunt
Game theory has been put to several uses in philosophy. Responding to two papers by W.V.O. Quine (1960, 1967), Lewis (1969) used game theory to develop a philosophical account of convention. In so doing, he provided the first analysis of common knowledge and employed it in analyzing play in coordination games. In addition, he first suggested that one can understand meaning in terms of signaling games. This later suggestion has been pursued by several philosophers since Lewis (Skyrms (1996), Grim, Kokalis, and Alai-Tafti et al. (2004)). Following Lewis's (1969) game-theoretic account of conventions, Ullmann-Margalit (1977) and Bicchieri (2006) have developed theories of social norms that define them as Nash equilibria that result from transforming a mixed-motive game into a coordination game.^{[14]}
Game theory has also challenged philosophers to think in terms of interactive epistemology: what it means for a collective to have common beliefs or knowledge, and what are the consequences of this knowledge for the social outcomes resulting from agents' interactions. Philosophers who have worked in this area include Bicchieri (1989, 1993),^{[15]} Skyrms (1990),^{[16]} and Stalnaker (1999).^{[17]}
In ethics, some authors have attempted to pursue the project, begun by Thomas Hobbes, of deriving morality from self-interest. Since games like the Prisoner's dilemma present an apparent conflict between morality and self-interest, explaining why cooperation is required by self-interest is an important component of this project. This general strategy is a component of the general social contract view in political philosophy (for examples, see Gauthier (1986) and Kavka (1986)).^{[18]}
Other authors have attempted to use evolutionary game theory in order to explain the emergence of human attitudes about morality and corresponding animal behaviors. These authors look at several games including the Prisoner's dilemma, Stag hunt, and the Nash bargaining game as providing an explanation for the emergence of attitudes about morality (see, e.g., Skyrms (1996, 2004) and Sober and Wilson (1999)).
Some assumptions used in some parts of game theory have been challenged in philosophy; for example, psychological egoism states that rationality reduces to self-interest, a claim debated among philosophers (see Psychological egoism#Criticisms).
Types of games
Cooperative or noncooperative
A game is cooperative if the players are able to form binding commitments. For instance, the legal system requires them to adhere to their promises. In non-cooperative games this is not possible.
Often it is assumed that communication among players is allowed in cooperative games, but not in noncooperative ones. This classification on two binary criteria has been rejected (Harsanyi 1974).
Of the two types of games, non-cooperative games are able to model situations in the finest detail, producing accurate results. Cooperative games focus on the game at large. Considerable efforts have been made to link the two approaches. The so-called Nash programme^{[clarification needed]} has already established many of the cooperative solutions as non-cooperative equilibria.
Hybrid games contain cooperative and non-cooperative elements. For instance, coalitions of players are formed in a cooperative game, but these play in a non-cooperative fashion.
Symmetric and asymmetric
     E       F
E    1, 2    0, 0
F    0, 0    1, 2

An asymmetric game
A symmetric game is a game where the payoffs for playing a particular strategy depend only on the other strategies employed, not on who is playing them. If the identities of the players can be changed without changing the payoff to the strategies, then a game is symmetric. Many of the commonly studied 2×2 games are symmetric. The standard representations of chicken, the prisoner's dilemma, and the stag hunt are all symmetric games. Some scholars would consider certain asymmetric games as examples of these games as well. However, the most common payoffs for each of these games are symmetric.
The most commonly studied asymmetric games are games where the strategy sets are not identical for both players. For instance, the ultimatum game and similarly the dictator game have different strategies for each player. It is possible, however, for a game to have identical strategies for both players, yet be asymmetric. For example, the game pictured to the right is asymmetric despite having identical strategy sets for both players.
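The symmetry condition stated above (payoffs depend on the strategies, not on who plays them) amounts to checking u1(s, t) = u2(t, s) for every strategy pair. A small sketch, applied to the asymmetric example from this article and, for contrast, to a standard prisoner's dilemma:

```python
def is_symmetric(game, strategies):
    # A two-player game is symmetric when playing s against t yields
    # the same payoff whichever player does so: u1(s, t) == u2(t, s).
    return all(game[(s, t)][0] == game[(t, s)][1]
               for s in strategies for t in strategies)

# The asymmetric example from this article: identical strategy sets,
# yet asymmetric payoffs.
asym = {("E", "E"): (1, 2), ("E", "F"): (0, 0),
        ("F", "E"): (0, 0), ("F", "F"): (1, 2)}

# A standard prisoner's dilemma, which is symmetric.
pd = {("C", "C"): (-1, -1), ("C", "D"): (-10, 0),
      ("D", "C"): (0, -10), ("D", "D"): (-5, -5)}

print(is_symmetric(asym, ["E", "F"]))  # False
print(is_symmetric(pd, ["C", "D"]))    # True
```

The first check fails already at (E, E): the row player gets 1 while the column player gets 2, so the players' identities matter.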
Zero-sum and non-zero-sum
     A        B
A    –1, 1    3, –3
B    0, 0     –2, 2

A zero-sum game
Zero-sum games are a special case of constant-sum games, in which choices by players can neither increase nor decrease the available resources. In zero-sum games the total benefit to all players in the game, for every combination of strategies, always adds to zero (more informally, a player benefits only at the equal expense of others). Poker exemplifies a zero-sum game (ignoring the possibility of the house's cut), because one wins exactly the amount one's opponents lose. Other zero-sum games include matching pennies and most classical board games including Go and chess.
Many games studied by game theorists (including the famous prisoner's dilemma) are non-zero-sum games, because some outcomes have net results greater or less than zero. Informally, in non-zero-sum games, a gain by one player does not necessarily correspond with a loss by another.
Constant-sum games correspond to activities like theft and gambling, but not to the fundamental economic situation in which there are potential gains from trade. It is possible to transform any game into a (possibly asymmetric) zero-sum game by adding an additional dummy player (often called "the board"), whose losses compensate the players' net winnings.
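The defining property (payoffs in every outcome sum to zero) is a one-line check. A sketch using the zero-sum example matrix from this section:

```python
def is_zero_sum(game):
    # In a zero-sum game, the payoffs in every outcome sum to zero.
    return all(sum(payoffs) == 0 for payoffs in game.values())

# The zero-sum example from this article:
zs = {("A", "A"): (-1, 1), ("A", "B"): (3, -3),
      ("B", "A"): (0, 0),  ("B", "B"): (-2, 2)}

print(is_zero_sum(zs))  # True
```

The "dummy player" transformation mentioned above works the same way in code: append to each payoff tuple the negated sum of the others, and the check passes for any game.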
Simultaneous and sequential
Simultaneous games are games where both players move simultaneously, or if they do not move simultaneously, the later players are unaware of the earlier players' actions (making them effectively simultaneous). Sequential games (or dynamic games) are games where later players have some knowledge about earlier actions. This need not be perfect information about every action of earlier players; it might be very little knowledge. For instance, a player may know that an earlier player did not perform one particular action, while he does not know which of the other available actions the first player actually performed.
The difference between simultaneous and sequential games is captured in the different representations discussed above. Often, normal form is used to represent simultaneous games, and extensive form is used to represent sequential ones. The transformation of extensive to normal form is one-way, meaning that multiple extensive form games correspond to the same normal form. Consequently, notions of equilibrium for simultaneous games are insufficient for reasoning about sequential games; see subgame perfection.
Perfect information and imperfect information
An important subset of sequential games consists of games of perfect information. A game is one of perfect information if all players know the moves previously made by all other players. Thus, only sequential games can be games of perfect information, since in simultaneous games not every player knows the actions of the others. Most games studied in game theory are imperfect-information games, although there are some interesting examples of perfect-information games, including the ultimatum game and centipede game. Recreational games of perfect information include chess, go, and mancala. Many card games are games of imperfect information, for instance poker or contract bridge.
Perfect information is often confused with complete information, which is a similar concept. Complete information requires that every player know the strategies and payoffs of the other players but not necessarily the actions. Games of incomplete information can, however, be reduced to games of imperfect information by introducing "moves by nature". (Leyton-Brown & Shoham 2008, p. 60)
Combinatorial games
Games in which the difficulty of finding an optimal strategy stems from the multiplicity of possible moves are called combinatorial games. Examples include chess and go. Games that involve imperfect or incomplete information may also have a strong combinatorial character, for instance backgammon. There is no unified theory addressing combinatorial elements in games. There are however mathematical tools that can solve particular problems and answer some general questions.^{[19]}
Games of perfect information have been studied in combinatorial game theory, which has developed novel representations, e.g. surreal numbers, as well as combinatorial and algebraic (and sometimes non-constructive) proof methods to solve games of certain types, including some "loopy" games that may result in infinitely long sequences of moves. These methods address games with higher combinatorial complexity than those usually considered in traditional (or "economic") game theory.^{[20]}^{[21]} A typical game that has been solved this way is hex. A related field of study, drawing from computational complexity theory, is game complexity, which is concerned with estimating the computational difficulty of finding optimal strategies.^{[22]}
Research in artificial intelligence has addressed both perfect and imperfect (or incomplete) information games that have very complex combinatorial structure (like chess, go, or backgammon) for which no provable optimal strategies have been found. The practical solutions involve computational heuristics, like alpha-beta pruning or use of artificial neural networks trained by reinforcement learning, which make games more tractable in computing practice.^{[23]}^{[19]}
Infinitely long games
Games, as studied by economists and realworld game players, are generally finished in finitely many moves. Pure mathematicians are not so constrained, and set theorists in particular study games that last for infinitely many moves, with the winner (or other payoff) not known until after all those moves are completed.
The focus of attention is usually not so much on what is the best way to play such a game, but simply on whether one or the other player has a winning strategy. (It can be proven, using the axiom of choice, that there are games—even with perfect information, and where the only outcomes are "win" or "lose"—for which neither player has a winning strategy.) The existence of such strategies, for cleverly designed games, has important consequences in descriptive set theory.
Discrete and continuous games
Much of game theory is concerned with finite, discrete games that have a finite number of players, moves, events, outcomes, etc. Many concepts can be extended, however. Continuous games allow players to choose a strategy from a continuous strategy set. For instance, Cournot competition is typically modeled with players' strategies being any non-negative quantities, including fractional quantities.
Differential games such as the continuous pursuit and evasion game are continuous games.
Manyplayer and population games
Games with an arbitrary, but finite number of players are often called n-person games (Luce & Raiffa 1957). Evolutionary game theory considers games involving a population of decision makers, where the frequency with which a particular decision is made can change over time in response to the decisions made by all individuals in the population. In biology, this is intended to model (biological) evolution, where genetically programmed organisms pass along some of their strategy programming to their offspring. In economics, the same theory is intended to capture population changes because people play the game many times within their lifetime, and consciously (and perhaps rationally) switch strategies. (Webb 2007)
Stochastic outcomes (and relation to other fields)
Individual decision problems with stochastic outcomes are sometimes considered "one-player games". These situations are not considered game-theoretical by some authors.^{[by whom?]} They may be modeled using similar tools within the related disciplines of decision theory, operations research, and areas of artificial intelligence, particularly AI planning (with uncertainty) and multi-agent systems. Although these fields may have different motivations, the mathematics involved is substantially the same, e.g. using Markov decision processes (MDPs).^{[citation needed]}
Stochastic outcomes can also be modeled in terms of game theory by adding a randomly acting player who makes "chance moves", also known as "moves by nature" (Osborne & Rubinstein 1994). This player is not typically considered a third player in what is otherwise a twoplayer game, but merely serves to provide a roll of the dice where required by the game.
For some problems, different approaches to modeling stochastic outcomes may lead to different solutions. For example, the difference in approach between MDPs and the minimax solution is that the latter considers the worst-case over a set of adversarial moves, rather than reasoning in expectation about these moves given a fixed probability distribution. The minimax approach may be advantageous where stochastic models of uncertainty are not available, but may also overestimate extremely unlikely (but costly) events, dramatically swaying the strategy in such scenarios if it is assumed that an adversary can force such an event to happen.^{[24]} (See black swan theory for more discussion on this kind of modeling issue, particularly as it relates to predicting and limiting losses in investment banking.)
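The contrast just described (worst-case versus expectation under a fixed distribution) can be made concrete for a single uncertain event. The payoffs and probabilities below are invented for illustration:

```python
# Two ways to evaluate an uncertain situation: worst-case (the
# minimax view, as if an adversary picks the event) versus expected
# value under a fixed distribution (the MDP view).
# All numbers are illustrative.
payoff = {"storm": -100, "calm": 5}    # our payoff under each event
prob = {"storm": 0.01, "calm": 0.99}   # assumed probability of each event

worst_case = min(payoff.values())
expected = sum(prob[e] * payoff[e] for e in payoff)

print(worst_case)            # -100: the worst-case view is dominated
                             # by the rare disaster
print(round(expected, 2))    # 3.95: the expectation nearly ignores it
```

This is exactly the trade-off noted above: minimax guards against the rare but costly event even when no adversary can actually force it, while expectation-based reasoning can nearly ignore it.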
General models that include all elements of stochastic outcomes, adversaries, and partial or noisy observability (of moves by other players) have also been studied. The "gold standard" is considered to be partially observable stochastic game (POSG), but few realistic problems are computationally feasible in POSG representation.^{[24]}
Metagames
Metagames are games whose play consists of developing the rules for another game, the target or subject game. Metagames seek to maximize the utility value of the rule set developed. The theory of metagames is related to mechanism design theory.
The term metagame analysis is also used to refer to a practical approach developed by Nigel Howard (Howard 1971) whereby a situation is framed as a strategic game in which stakeholders try to realise their objectives by means of the options available to them. Subsequent developments have led to the formulation of drama theory.
See also
 Chainstore paradox
 Combinatorial game theory
 Glossary of game theory
 Intra-household bargaining
 List of games in game theory
 Quantum game theory
 Rationality
 Reverse game theory
 Self-confirming equilibrium
Notes
 ^ James Madison, Vices of the Political System of the United States, April, 1787. Link
 ^ Jack Rakove, "James Madison and the Constitution", History Now, Issue 13 September 2007. Link
 ^ J. v. Neumann (1928). "Zur Theorie der Gesellschaftsspiele," Mathematische Annalen, 100(1), pp. 295–320. English translation: "On the Theory of Games of Strategy," in A. W. Tucker and R. D. Luce, ed. (1959), Contributions to the Theory of Games, v. 4, pp. 13–42.
 ^ Although common knowledge was first discussed by the philosopher David Lewis in his dissertation (and later book) Convention in the late 1960s, it was not widely considered by economists until Robert Aumann's work in the 1970s.
 ^ Ross, Don. "Game Theory". The Stanford Encyclopedia of Philosophy (Spring 2008 Edition). Edward N. Zalta (ed.). Retrieved 2008-08-21.
 ^ Experimental work in game theory goes by many names; experimental economics, behavioral economics, and behavioural game theory are several. For a recent discussion of this field see Camerer (2003).
 ^ Kenneth J. Arrow, and Michael D. Intriligator, ed. (1981), Handbook of Mathematical Economics (1981), v. 1.
 ^ Faruk Gul, 2008. "behavioural economics and game theory," The New Palgrave Dictionary of Economics, 2nd Edition. Abstract.
 ^ Martin Shubik (1981). "Game Theory Models and Methods in Political Economy," in Handbook of Mathematical Economics, v. 1, pp. 285–330.
 ^ Jean Tirole, 1988. The Theory of Industrial Organization, MIT Press. Description and chapter-preview links via scroll down.
 ^ Evolutionary Game Theory (Stanford Encyclopedia of Philosophy)
 ^ ^{a} ^{b} Biological Altruism (Stanford Encyclopedia of Philosophy)
 ^ Algorithmic Game Theory, Cambridge University Press
 ^ E. Ullmann-Margalit, The Emergence of Norms, Oxford University Press, 1977. C. Bicchieri, The Grammar of Society: the Nature and Dynamics of Social Norms, Cambridge University Press, 2006.
 ^ "Self-Refuting Theories of Strategic Interaction: A Paradox of Common Knowledge", Erkenntnis 30, 1989: 69–85. See also Rationality and Coordination, Cambridge University Press, 1993.
 ^ The Dynamics of Rational Deliberation, Harvard University Press, 1990.
 ^ "Knowledge, Belief, and Counterfactual Reasoning in Games." In Cristina Bicchieri, Richard Jeffrey, and Brian Skyrms, eds., The Logic of Strategy. New York: Oxford University Press, 1999.
 ^ For a more detailed discussion of the use of Game Theory in ethics see the Stanford Encyclopedia of Philosophy's entry game theory and ethics.
 ^ ^{a} ^{b} Jörg Bewersdorff (2005). Luck, logic, and white lies: the mathematics of games. A K Peters, Ltd. pp. ix–xii and chapter 31. ISBN 9781568812106.
 ^ Albert, Michael H.; Nowakowski, Richard J.; Wolfe, David (2007). Lessons in Play: An Introduction to Combinatorial Game Theory. A K Peters Ltd. pp. 3–4. ISBN 9781568812779.
 ^ Beck, József (2008). Combinatorial games: tic-tac-toe theory. Cambridge University Press. pp. 1–3. ISBN 9780521461009.
 ^ Robert A. Hearn; Erik D. Demaine (2009). Games, Puzzles, and Computation. A K Peters, Ltd. ISBN 9781568813226.
 ^ M. Tim Jones (2008). Artificial Intelligence: A Systems Approach. Jones & Bartlett Learning. pp. 106–118. ISBN 9780763773373.
 ^ ^{a} ^{b} Hugh Brendan McMahan (2006), Robust Planning in Domains with Stochastic Outcomes, Adversaries, and Partial Observability, CMU-CS-06-166, pp. 3–4
References and further reading
Textbooks and general references
 Aumann, Robert J. (1987), "game theory", The New Palgrave: A Dictionary of Economics, 2, pp. 460–82.
 Aumann, Robert J., and Sergiu Hart, ed. (1992, 1994, 2002). Handbook of Game Theory with Economic Applications, 3 v., Elsevier. Table of Contents and "Review Article" (Abstract) links.
 The New Palgrave Dictionary of Economics, (2008). 2nd Edition:
 "game theory" by Robert J. Aumann. Abstract.
 "game theory in economics, origins of," by Robert Leonard. Abstract.
 "behavioural economics and game theory" by Faruk Gul. Abstract.
 Camerer, Colin (2003), Behavioral Game Theory: Experiments in Strategic Interaction, Russell Sage Foundation, ISBN 9780691090399. Description and Introduction, pp. 1–25.
 Dutta, Prajit K. (1999), Strategies and games: theory and practice, MIT Press, ISBN 9780262041690. Suitable for undergraduate and business students.
 Fernandez, L F.; Bierman, H S. (1998), Game theory with economic applications, Addison-Wesley, ISBN 9780201847581. Suitable for upper-level undergraduates.
 Fudenberg, Drew; Tirole, Jean (1991), Game theory, MIT Press, ISBN 9780262061414. Acclaimed reference text. Description.
 Gibbons, Robert D. (1992), Game theory for applied economists, Princeton University Press, ISBN 9780691003955. Suitable for advanced undergraduates.

 Published in Europe as Robert Gibbons (2001), A Primer in Game Theory, London: Harvester Wheatsheaf, ISBN 9780745011592.
 Gintis, Herbert (2000), Game theory evolving: a problem-centered introduction to modeling strategic behavior, Princeton University Press, ISBN 9780691009438
 Green, Jerry R.; Mas-Colell, Andreu; Whinston, Michael D. (1995), Microeconomic theory, Oxford University Press, ISBN 9780195073409. Presents game theory in a formal way suitable for the graduate level.
 Hansen, Pelle G.; Hendricks, Vincent F., eds. (2007), Game Theory: 5 Questions, New York, London: Automatic Press / VIP, ISBN 9788799101344. Snippets from interviews.
 Howard, Nigel (1971), Paradoxes of Rationality: Games, Metagames, and Political Behavior, Cambridge, Massachusetts: The MIT Press, ISBN 9780262582377
 Isaacs, Rufus (1999), Differential Games: A Mathematical Theory With Applications to Warfare and Pursuit, Control and Optimization, New York: Dover Publications, ISBN 9780486406824
 Leyton-Brown, Kevin; Shoham, Yoav (2008), Essentials of Game Theory: A Concise, Multidisciplinary Introduction, San Rafael, CA: Morgan & Claypool Publishers, ISBN 9781598295931. An 88-page mathematical introduction; free online at many universities.
 Miller, James H. (2003), Game theory at work: how to use game theory to outthink and outmaneuver your competition, New York: McGrawHill, ISBN 9780071400206. Suitable for a general audience.
 Myerson, Roger B. (1991), Game theory: analysis of conflict, Harvard University Press, ISBN 9780674341166
 Osborne, Martin J. (2004), An introduction to game theory, Oxford University Press, ISBN 9780195128956. Undergraduate textbook.
 Papayoanou, Paul (2010), Game Theory for Business, Probabilistic Publishing, ISBN 9780964793873. Primer for business practitioners.
 Osborne, Martin J.; Rubinstein, Ariel (1994), A course in game theory, MIT Press, ISBN 9780262650403. A modern introduction at the graduate level.
 Poundstone, William (1992), Prisoner's Dilemma: John von Neumann, Game Theory and the Puzzle of the Bomb, Anchor, ISBN 9780385415804. A general history of game theory and game theoreticians.
 Rasmusen, Eric (2006), Games and Information: An Introduction to Game Theory (4th ed.), WileyBlackwell, ISBN 9781405136662
 Shoham, Yoav; Leyton-Brown, Kevin (2009), Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations, New York: Cambridge University Press, ISBN 9780521899437. A comprehensive reference from a computational perspective; downloadable free online.
 Williams, John Davis (1954) (PDF), The Compleat Strategyst: Being a Primer on the Theory of Games of Strategy, Santa Monica: RAND Corp., ISBN 9780833042224. Praised primer and popular introduction for everybody, never out of print.
 Roger McCain's Game Theory: A Nontechnical Introduction to the Analysis of Strategy (Revised Edition)
 Christopher Griffin (2010) Game Theory: Penn State Math 486 Lecture Notes, pp. 169, CC BY-NC-SA license, suitable introduction for undergraduates
 Webb, James N. (2007), Game theory: decisions, interaction and evolution, Springer undergraduate mathematics series, Springer, ISBN 1846284236. Consistent treatment of game types usually claimed by different applied fields, e.g. Markov decision processes.
 Joseph E. Harrington (2008) Games, strategies, and decision making, Worth, ISBN 0716766302. Textbook suitable for undergraduates in applied fields; numerous examples, fewer formalisms in concept presentation.
Historically important texts
 Aumann, R.J. and Shapley, L.S. (1974), Values of Non-Atomic Games, Princeton University Press
 Cournot, A. Augustin (1838), "Recherches sur les principes mathématiques de la théorie des richesses", Libraire des sciences politiques et sociales (Paris: M. Rivière & C.ie)
 Edgeworth, Francis Y. (1881), Mathematical Psychics, London: Kegan Paul
 Farquharson, Robin (1969), Theory of Voting, Blackwell (Yale U.P. in the U.S.), ISBN 0631124608
 Luce, R. Duncan; Raiffa, Howard (1957), Games and decisions: introduction and critical survey, New York: Wiley

 reprinted edition: R. Duncan Luce; Howard Raiffa (1989), Games and decisions: introduction and critical survey, New York: Dover Publications, ISBN 9780486659435
 Maynard Smith, John (1982), Evolution and the theory of games, Cambridge University Press, ISBN 9780521288842
 Maynard Smith, John; Price, George R. (1973), "The logic of animal conflict", Nature 246 (5427): 15–18, doi:10.1038/246015a0
 Nash, John (1950), "Equilibrium points in n-person games", Proceedings of the National Academy of Sciences of the United States of America 36 (1): 48–49, doi:10.1073/pnas.36.1.48, PMC 1063129, PMID 16588946
 Shapley, L. S. (1953), A Value for n-person Games, In: Contributions to the Theory of Games volume II, H. W. Kuhn and A. W. Tucker (eds.)
 Shapley, L. S. (1953), Stochastic Games, Proceedings of National Academy of Science Vol. 39, pp. 1095–1100.
 von Neumann, John (1928), "Zur Theorie der Gesellschaftsspiele", Mathematische Annalen 100 (1): 295–320. English translation: "On the Theory of Games of Strategy," in A. W. Tucker and R. D. Luce, ed. (1959), Contributions to the Theory of Games, v. 4, pp. 13–42. Princeton University Press.
 von Neumann, John; Morgenstern, Oskar (1944), Theory of games and economic behavior, Princeton University Press
 Zermelo, Ernst (1913), "Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels", Proceedings of the Fifth International Congress of Mathematicians 2: 501–4
Other print references
 Ben-David, S.; Borodin, Allan; Karp, Richard; Tardos, G.; Wigderson, A. (1994), "On the Power of Randomization in Online Algorithms" (PDF), Algorithmica 11 (1): 2–14, doi:10.1007/BF01294260