

21/05/2011

The Structure of Finite Algebras


Source : http://www.ams.org/publications/online-books/conm76-index

CONM/76 The Structure of Finite Algebras

David Hobby and Ralph McKenzie

Publication Date:  1988
Number of pages: 209 pp.
Publisher:  AMS
ISBN 0-8218-3400-2 
CONM/76.E

 Download Individual Chapters - FREE (17 files - 17.33 MB)

Frontmatter

  • Title
  • Copyright page
  • Dedication
  • Table of Contents
  • Introduction
Endmatter
  • Problems
  • Appendix, added in July 1996
  • Bibliography
  • Added in July 1996
  • Index to Terms
  • Index to Notation


Omar Khayyam


Birth name: غياث الدین ابو الفتح عمر بن ابراهیم خیام نيشابوری
Occupations: poet, mathematician, philosopher, astronomer
Born: 18 May 1048, Nishapur, Persia, Seljuk Empire
Died: 4 December 1131, Nishapur, Persia, Seljuk Empire
Writing language: Persian

Source: http://fr.wikipedia.org/wiki/Omar_Khayyam

 

The Persian writer and scholar known in the French-speaking world as Omar Khayyām[1] or Khayyām[2] is believed to have been born on 18 May 1048 in Nishapur, Persia (present-day Iran), where he died on 4 December 1131.[3]

His name is also found spelled Omar Khayam, as in the translations by Armand Robin (1958) or by M. F. Farzaneh and Jean Malaplate (in the critical edition of Sadegh Hedayat, Corti, 1993).


Biography

Khayyam's life is shrouded in mystery, and few sources are available to let us retrace it with precision. Scholars generally believe that Omar Khayyam was born into a family of artisans in Nishapur (his father was probably a tentmaker). He spent his childhood in the town of Balhi, where he studied under the direction of Sheikh Mohammad Mansuri, one of the most celebrated scholars of his time. In his youth, Omar Khayyām also studied under the imam Mowaffak of Nishapur, considered the best teacher in Khorassan.

Legend has it that Abou-Ali Hassan (Nizam al-Mulk) and Hassan Sabbah were then also studying under this master, and that a legendary pact was concluded among the three students: "Whichever of us attains glory or fortune shall share it equally with the other two." This alliance remains improbable given that Nizam al-Mulk was thirty years Omar's senior and that Hassan Sabbah must have been at least ten years older than Khayyam.

Nizam al-Mulk nevertheless became grand vizier of Persia, and the other two went to his court. Hassan Sabbah, ambitious, asked for a position in the government; he obtained it immediately and later used it to try to seize power from his benefactor. After his failure he became the leader of the Hashishins. Khayyam, less drawn to political power, asked for no official post, but for a place to live, study science, and pray. He received a pension of 1,200 gold mithkals from the royal treasury; this pension was paid to him until the death of Nizam al-Mulk (killed by an assassin).

Khayyam's name

If the name is deciphered with the abjad system, the result gives al-Ghaqi, "the squanderer of goods", an expression which in Sufi terminology is applied to "one who distributes or ignores the goods of this world, which constitute a burden on the journey he undertakes along the Sufi path" (Omar Ali-Shah).[citation needed]

"Khayyam, who sewed the tents of intelligence,
Fell into a forge of sufferings and suddenly burned;
Shears cut the tent-ropes of his life;
The broker of destinies sold him for a breath of wind.[4]"

Mathematician and astronomer

Omar Khayyâm is considered "one of the greatest mathematicians of the Middle Ages".[5] But his algebraic work only became known[6] in Europe in the 19th century.[7]

In his Demonstrations of Problems of Algebra of 1070, Khayyam shows that a cubic equation can have more than one root. He also reports equations with two solutions, but finds none with three. He was the first mathematician to treat cubic equations systematically, using the intersections of conics to determine the number of real roots and to evaluate them approximately. Besides his treatise on algebra, Omar Khayyâm wrote several texts on the extraction of cube roots and on certain definitions of Euclid, and compiled astronomical tables known as the Zidj-e Malikshahi.
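
To illustrate the method in modern notation (a reconstruction; Khayyam worked rhetorically and geometrically, without symbolic algebra), a cubic of the form $x^3 + px = q$ with $p, q > 0$ can be solved by intersecting a parabola with a circle:

\[
x^2 = \sqrt{p}\,y \quad\text{(parabola)}, \qquad x^2 + y^2 = \tfrac{q}{p}\,x \quad\text{(circle)}.
\]

Substituting $y = x^2/\sqrt{p}$ into the circle's equation gives $x^4/p + x^2 = (q/p)\,x$, that is, $x^3 + px = q$ for $x \neq 0$: the abscissa of the intersection point away from the origin is a root of the cubic, which is exactly the kind of construction by conics described above.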

As director of the Isfahan observatory in 1074, he reformed the Persian calendar at the request of Sultan Malik Shah (the reform is known as the Jalali reform). He introduced a leap year and measured the length of the year as 365.24219858156 days. Now, the length of the year changes in the sixth decimal place over a human lifetime. The Jalali estimate would prove more exact than the Gregorian one created five centuries later, although their practical result is exactly the same, since a year must contain a whole number of days. At the end of the 19th century the year was 365.242196 days long, and today it is 365.242190 days.
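
A quick numerical comparison of the two calendars' mean year lengths (an illustrative Python sketch; the 8-leap-years-per-33-years cycle used below is the intercalation scheme commonly attributed to the Jalali reform):

    # Mean calendar-year lengths, in days
    gregorian = 365 + 97 / 400   # 97 leap years every 400 years
    jalali    = 365 + 8 / 33     # 8 leap years every 33 years (assumed cycle)
    tropical  = 365.242190       # modern value quoted above

    for name, length in [("Gregorian", gregorian), ("Jalali", jalali)]:
        drift = abs(length - tropical)          # error in days per year
        print(f"{name}: {length:.6f} days, "
              f"one day of drift in ~{1 / drift:,.0f} years")

The Gregorian mean year (365.2425 days) drifts by one day in roughly 3,200 years, the 33-year Jalali cycle (about 365.2424 days) in roughly 4,300 years, consistent with the claim that the Jalali estimate is the more exact.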

Poet and philosopher

His poems are called "rubaiyat" (Persian رباعى [rabāʿi], plural رباعیات [rubāʿiyāt][8]), meaning "quatrains". Khayyam's quatrains, often cited in the West for their scepticism, would conceal, according to Idries Shah, "mystical pearls", making Khayyam a Sufi. He is said to have advocated the intoxication of God, and called himself an unbeliever yet a believer.[9] Beyond the first, hedonistic reading, the quatrains would thus, for this commentator, have a mystical dimension.

In practice, if one keeps to the text, Khayyam proves to be very critical indeed of the religious figures, and the religion, of his time. As for the wine so frequently mentioned in his quatrains, the context in which it constantly appears (the pleasant company of young women or cupbearers, the refusal to pursue the search for that knowledge which Khayyam once loved so much) leaves it little latitude to be allegorical.

One can therefore only note the existence of these two points of view. The quatrains below follow F. Toussaint's translation.

Sorrow and despair

(VIII)

"In this world, be content to have few friends.
Do not seek to make lasting
the sympathy you may feel for someone.
Before taking a man's hand,
ask yourself whether it will not strike you one day."

(CXX)

"You can sound the night that surrounds us.
You can charge at that night... You will not get out of it.
Adam and Eve, how atrocious it must have been, your first kiss,
since you created us in despair!"

Lucidity and scepticism

(CXLI)

"Be content to know that everything is a mystery:
the creation of the world and your own,
the destiny of the world and your own.
Smile at these mysteries as at a danger you would despise."

"Do not believe that you will know anything
when you have passed through the door of Death.
Peace to man in the black silence of the Beyond!"

Wisdom and epicureanism

(XXV)

"In spring, I sometimes go and sit at the edge of a flowering field.
When a beautiful young girl brings me a cup of wine, I hardly think of my salvation.
If I had that preoccupation, I would be worth less than a dog."

(CLXX)

"Lutes, perfumes and cups,
lips, tresses and long eyes,
playthings that Time destroys, playthings!
Austerity, solitude and toil,
meditation, prayer and renunciation,
ashes that Time crushes, ashes!"

It is on this 170th piece, as if in conclusion to what precedes, that the collection ends.

Distance from orthodox Islam

(CVII)

"In former days, when I frequented the mosques,
I uttered no prayer there,
but I came away rich in hope.
I still go and sit in the mosques,
where the shade is conducive to sleep."

(CLIX)

"'Allah is great!' That cry of the muezzin sounds like an immense lament.
Five times a day, is it the Earth groaning toward its indifferent creator?"

(CLIII)

"Since our lot here below is to suffer and then to die,
should we not wish to return our wretched body to the earth as soon as possible?
And our soul, which Allah awaits to judge according to its merits, you say?
I shall answer you on that when I have been informed by someone returning from among the dead."

Universal fame and an ambiguous image

Omar Khayyam's mausoleum in Nishapur

Western agnostics see in him one of their brothers born too soon, while some Muslims perceive in him instead an esoteric symbolism, attached to Sufism.

Khayyam would indicate, as Djalâl ad-Dîn Rûmî would do later, that the man on the path of God does not need a dedicated place to venerate his God, and that frequenting religious sanctuaries is neither a guarantee of contact with God nor an indicator of the observance of an inner discipline.

The present Islamic Republic of Iran does not deny Khayyam's positions, but in the early 1980s it published an official list of the quatrains it considered authentic (as with Pascal's Pensées, their number and numbering differ according to the compilers).

The vision of an esotericist Khayyam is not shared by those who see in him above all a tolerant and sceptical hedonist. Indeed, while some[citation needed] equate the wine of his poems with a sort of celestial manna, others such as Sadegh Hedayat consider the poet rather as a champion of individual freedom, who refuses to pronounce on mysteries that seem to him beyond man's reach. His simple appreciation of earthly pleasures after the fourfold disappointment of religion (quatrains 25, 76, 141), of men (quatrains 8, 18, 33), of science (quatrains 26-30, 77, 81) and of the human condition itself (quatrains 32, 67, 107, 120, 170) excludes no hypothesis (quatrains 1, 23-25, 52).[10]

While each of the two interpretations is contested by the partisans of the other, they are not necessarily mutually exclusive: Khayyam presents, "without order and without method" (to borrow an expression of Montaigne's in the preface to the Essais), and hence with no strategy aimed at convincing, his hopes, his doubts and his discouragements, in what seems an effort at human truth. That is perhaps one of the reasons for the quatrains' worldwide success.

Translations

Controversies surrounding the manuscripts and translations

  • The diversity of the manuscripts and their authenticity, as well as knowledge of the language and of eleventh-century Persia, show the difficulties of a translation. Marguerite Yourcenar said on this subject: "Whatever one does, one always rebuilds the monument in one's own way. But it is already much to use only authentic stones."[11] Armand Robin drew up a list of these stones in Ce qu'en 1958 on peut savoir sur les « quatrains » d'Omar Khayam at the time of his own translation (cf. Bibliography).
    • Manuscript of 1460, Bodleian Library, Oxford: 158 quatrains, translated into English by FitzGerald (1859) and into French by Charles Grolleau (1909). About a hundred of these quatrains are uncertain.
    • Manuscript of 464 quatrains, translated into French by J.-B. Nicolas (1861).
    • Istanbul manuscript: 375 quatrains, studied in the late 19th and early 20th centuries.
    • Lucknow manuscript: 845 quatrains, studied in the late 19th and early 20th centuries.
    • Manuscript of 1259, known as the "Chester Beatty" manuscript, by the scribe Mohammed al Qâwim of Nishapur: 172 quatrains, translated into French by Vincent Monteil (1983).
    • Manuscript of 1207, known as the "Cambridge" manuscript, bought in 1950: an anthology of 250 quatrains translated by Professor Arthur J. Arberry (1952; he had appraised the "Chester Beatty" manuscript).
    • Manuscript of 1153, discovered "in an immense family library": 111 quatrains, translated into English by Omar Ali-Shah, "a native speaker of Persian and a Sufi..." (1964).
  • Translations and interpretations.

The fact that the rubaiyat are a collection of quatrains (which can be selected and rearranged subjectively so as to support one interpretation or another) has led to widely differing versions. J.-B. Nicolas[12] took the position that Khayyam clearly considered himself a Sufi. Others have seen in the poems signs of mysticism, or even of atheism, and others, on the contrary, the sign of a devout and orthodox Islam. FitzGerald gave the Rubaiyat a fatalistic atmosphere, though it is said that he softened the impact of Khayyam's nihilism and of his preoccupation with death and the transience of all things. Whether Khayyam was for or against the consumption of wine is, for some, itself a matter of controversy![13]

In the new translation that Jean-Yves Lacroix (Le cure-dent, ed. Allias) made of the quatrains, the "Rubaï'yat" of the great Persian, described as a "venomous serpent for the divine law" by the chronicler al-Qifti, Khayyam writes: "Everyone knows that I have never murmured the slightest prayer", and elsewhere: "Close your Quran. Think freely and look freely at the sky and the earth."

Khayyâm's quatrains are the subject of some controversies of translation as well as of editions. In Europe, FitzGerald and Toussaint are the most common references. It is, however, difficult, as in any poetic translation, to render the full original meaning of the verses. The mystical sense of this poetry may escape the non-specialist. As for FitzGerald, he sometimes combines distinct quatrains to make a rhyme possible (Toussaint, dissatisfied with FitzGerald's translation, preferred a prose to which he gave a poetic breath).

The original content of Khayyâm's collection of quatrains is also the subject of wide debate. Tradition attributes more than 1,000 quatrains to Khayyam, while most researchers attribute to him with certainty only 50, with about 200 other quatrains in dispute.[14] In Toussaint and FitzGerald the number is 170.

The Iranian government published in the 1980s the list of the quatrains it officially recognizes.

The discovery of Omar Khayyâm in the West through Edward FitzGerald's translations

It was Edward FitzGerald's English translation that made Khayyam's poetic work known to the general public in 1859 and that served as a reference for translations into many other languages.

FitzGerald had to make a choice among the thousand or so poems attributed to Khayyam by tradition, for the literary genre he had inaugurated enjoyed such success that the generic term khayyam came to be used for any disillusioned lament on the human condition. FitzGerald produced four editions of the quatrains containing between 75 and 110 quatrains. Surprisingly, it is often still one of the compilations established by FitzGerald that serves as a reference for a large number of the other translations.

FitzGerald's translations are still much debated, notably as regards their authenticity, FitzGerald having used these translations to completely rewrite passages outside the spirit of the original poet, as most translators of the time did. Thus Omar Ali-Shah takes the example of the first quatrain to show the astonishing divergences of meaning between the English translation and a literal French translation.

Persian text in Latin script (quatrain I):

Khurshid kamândi sobh bar bâm afgand
Kai Khusro i roz bâdah dar jâm afgand
Mai khur ki manadi sahri gi khizân
Awaza i ishrabu dar ayâm afgand.

FitzGerald's English translation:

Awake! for Morning in the Bowl of Night
Has flung the Stone that puts the Stars to Flight:
And Lo! the Hunter of the East has caught
The Sultan's Turret in a Noose of Light.

French translation made after FitzGerald (rendered here in English):

Awake! For morning, in the bowl of night,
Has thrown the stone that puts the stars to flight:
And see! The hunter of the East has seized
The sultan's turret in a knot of light.

Omar Ali-Shah's English version of the same Persian quatrain (given here from its French rendering):

While Dawn, herald of the day, riding across the whole sky,
Offers the sleeping world a toast "To Wine",
The Sun spills morning gold on the city rooftops,
Royal Host of the day, filling his jug.

Franz Toussaint's French translation from the Persian

The French orientalist Franz Toussaint preferred to make a new translation from the original Persian text rather than from the English, with the deliberate choice of not trying to render the quatrains as quatrains, but in a poetic prose he considered more faithful. His French translation, consisting of 170 quatrains, was contested by some and vigorously defended by others. Today, after the disappearance of the Éditions d'art Henri Piazza, which distributed it widely between 1924 and 1979, this translation is itself the subject of translations into other languages. Toussaint, who died in 1955, did not live to see this success.

The translators' dilemma

Some quatrains seem to escape any definitive translation, owing to the complexity of the Persian language. Thus Khayyam mentions a certain Bahram (probably Vahram V Gour) who in his lifetime took great pleasure in catching onagers (Bahram ke Gour migerefti hame 'omr), and adds laconically that it is the tomb that caught Bahram. The words for onager and tomb are phonetically close in Persian, both sounding like gour (Didi keh chegune gour bahram gereft).

The recent edition of the French translation of the quatrains by Omar Ali-Shah criticizes most earlier translations, beginning with FitzGerald's and certain[citation needed] French translations. According to Omar Ali-Shah, the Persian of Khayyâm's quatrains constantly refers to Sufi vocabulary and has been unjustly translated in oblivion of its spiritual meaning. Thus he asserts that Khayyâm's "Wine" is a spiritual wine, that the Tariqa is the Way (understood in the Sufi sense of the mystical path toward God) and not the "road" or "secondary road" found, according to him, in some translations (he does not specify which). Nevertheless, the quatrains that display a disillusioned scepticism find no explanation in this view.

It is not known whether the translation made by the Imprimerie nationale is faithful, but it contains, for its part, no metre suggesting (or "rendering") the effect of poetic craftsmanship.

Some quatrains

"Yesterday is past, let us think of it no more
Tomorrow is not here, let us think of it no more
Let us think of the sweet moments of life
What is no more, let us think of it no more"

"This vase was the poor lover of a beloved
It was ensnared by the hair of a beloved
The handle you see on the neck of this vase
Was once an arm around the neck of a beloved!"

"How quickly it passes, this caravan of our life
Lose nothing of the sweet moments of our life
Do not think of the day after this night
Take wine, we must seize the sweet moments of our life"

— Dictionnaire des poètes renommés persans: A partir de l'apparition du persan dari jusqu'à nos jours, Tehran, Aryan-Tarjoman, 2007.

Khayyâm the inspirer

Omar Khayyâm, since his discovery in the West, has exercised a recurrent fascination on European writers such as Marguerite Yourcenar, who confessed that "another historical figure [than that of the emperor Hadrian] tempted me with an almost equal insistence: Omar Khayyam... But [his] life... is that of the pure contemplator, and of the pure scorner", while adding, with a humility that many "translators" lack, "Besides, I do not know Persia and do not know its language."[15]

He also inspired Amin Maalouf's novel Samarcande.

Musically, he also inspired a number of composers.

Miscellaneous

  • A lunar crater was named after him in 1970.
  • The asteroid 3095 Omarkhayyam was named in his honour in 1980.

Bibliography

Poetic works

  • L'amour, le désir, & le vin. Omar Khâyyâm (60 poems on love and wine). Calligraphy by Lassaad Métoui. Paris, Alternatives, 2008, 128 p.
  • Les Quatrains d'Omar Khâyyâm, translated from the Persian and presented by Charles Grolleau, Ed. Charles Corrington, 1902. (Reprinted by Champ libre / Ivrea, 1978; by 1001 Nuits, 79 p., 1995; by Allia, 2008.)
  • Rubayat Omar Khayam, translation by Armand Robin (1958). (Reprinted with a preface by André Velter, Poésie/Gallimard, 109 p., 1994, ISBN 2-07-032785-X.)
  • Quatrains Omar Khayyâm suivi de Ballades Hâfez, selected poems, translated and presented by Vincent Monteil, bilingual edition with calligraphy by Blandine Furet, 171 p., Coll. La Bibliothèque persane, Ed. Sindbad, 1983.
  • Les Chants d'Omar Khayyâm, critical edition, translated from the Persian by M. F. Farzaneh and Jean Malaplate, Éditions José Corti, 1993.
  • Les Chants d'Omar Khayam, translated from the Persian by Sadegh Hedayat, Éditions José Corti, 1993.
  • Quatrains d'Omar Khayyâm, bilingual edition, poems translated from the Persian by Vincent-Mansour Monteil, Éditions Actes Sud, Collection Babel, 1998, ISBN 2-7427-4744-3.
  • Cent un quatrains de libre pensée d'Omar Khayyâm, bilingual edition, translated from the Persian by Gilbert Lazard, Éditions Gallimard, Connaissance de l'Orient, 2002, ISBN 978-2-07-076720-5.
  • Les quatrains d'Omar Khayyâm, translation from the Persian and preface by Omar Ali-Shah, translated from the English by Patrice Ricord, Coll. Spiritualités vivantes, Albin Michel, 146 p., 2005, ISBN 2-226-15913-4.

Mathematical works

See also the bibliography of the IREMs (France).

  • Treatise on algebra (c. 1070). See Roshdi Rashed and Ahmed Djebbar, L'oeuvre algébrique d'al-Khayyâm, Aleppo, 1981.

At the end of the year 1077 he completed his commentary on certain problematic premises of Euclid's book, in three chapters.

Studies of Khayyâm the poet

Studies of Khayyâm the mathematician

  • R. Rashed, Al-Khayyam mathématicien, in collaboration with B. Vahabzadeh, Paris, Librairie Blanchard, 1999, 438 p. English version: Omar Khayyam. The Mathematician, Persian Heritage Series no. 40, New York, Bibliotheca Persica Press, 2000, 268 p. (without the Arabic texts).
  • R. Rashed, L'Œuvre algébrique d'al-Khayyam (in collaboration with A. Djebbar), Aleppo: Presses de l'Université d'Alep, 1981, 336 p.
  • A. Djebbar, "L'émergence du concept de nombre réel positif dans l'Épître d'al-Khayyâm (1048-1131)".

Fictional approaches to Khayyâm

  • Amin Maalouf evokes Omar Khayyâm, as well as Nizam al-Mulk and Hassan ibn al-Sabbah, in his novel Samarcande (1988).
  • Omar Khayyâm also appears, in the background, in Vladimir Bartol's novel Alamut, as a companion of the youth of Hassan ibn al-Sabbah, founder of the sect of the Assassins.
  • Mehdi Aminrazavi, The Wine of Wisdom: The Life, Poetry and Philosophy of Omar Khayyam, Oneworld, Oxford, 2005.
  • Jacques Attali refers to Omar Khayyâm in his novel La confrérie des éveillés (2005).
  • Linda Lê cites him several times in her novel Les Trois Parques (Christian Bourgois, 1997).
  • Marjane Satrapi quotes a poem by Khayyâm in the comic book Poulet aux prunes.
  • Denis Guedj mentions him in Le Théorème du Perroquet, his novel retracing the history of mathematics.
  • Jean-Yves Lacroix offers a fictionalized biography of Omar Khayyâm, Le Cure-Dent (ISBN 978-2-84485-283-0).
  • Xavier Philiponet devotes an entire chapter to Omar Khayyâm and Samarkand in the tale "Le troisième oeil", Les Joueurs d'Astres (April 2010, ISBN 978-2-9531182-1-6).
  • In his novel The Sea-Wolf, Jack London has the narrator introduce Omar Khayyâm to Captain Wolf Larsen; for three days Humphrey recites Khayyâm's quatrains to Larsen.
  • (ar) Rubayyat Al-Khayam: a work by the Egyptian poet Ahmed Rami.

Notes and references

  1.  Ghiyath ed-din Abdoul Fath Omar Ibn Ibrahim al-Khayyām Nishabouri (Persian: غياث الدين ابو الفتح عمر بن ابراهيم خيام نيشابوري [ḡīyāṯ ad-dīn abū al-fatḥ ʿumar ben ibrāhīm ḫayām nīšābūrī]).
  2.  From the Persian خيام [ḫayām], Arabic خَيَّميّ [ḫayyamī]: tentmaker.
  3.  Samarcande, by Amin Maalouf. The date 1123 is sometimes given.
  4.  Omar Khayyam, Rubayat, Poésie/Gallimard.
  5.  G. Sarton, Introduction to the History of Science, Washington, 1927.[incomplete reference]
  6.  Franck Woepcke, L'algèbre d'Omar Alkhayyämi, publiée, traduite et accompagnée de manuscrits inédits, 51 + 127 p., Paris, 1851. The work can be read on Gallica.[incomplete reference]
  7.  Armand Robin, "Un algébriste lyrique, Omar Khayam", in La Gazette Littéraire de Lausanne, 13-14 December 1958.
  8.  From the Arabic رباعية [rubāʿīya], plural رباعيات [rubāʿiyāt]: "quatrains" (cf. Steingass, Francis Joseph, A Comprehensive Persian-English Dictionary..., London, Routledge & K. Paul, 1892, p. 567). The rhythm of lines 1, 2 and 4 of each quatrain should in principle follow the pattern "¯ ¯ ˘ ˘ ¯ ¯ ˘ ˘ ¯ ¯ ˘ ˘ ¯" (short: ˘; long: ¯) (cf. Hayyim, Sulayman, New Persian-English Dictionary..., vol. 1, Tehran, Librairie-imprimerie Beroukhim, 1934-1936, p. 920). The number 4 is ʾarbaʿa (أربعة) in Arabic, derived from rubʿ (ربع), "the quarter", and šahār (چ‍ﮩ‍ار) in Persian.
  9.  He writes, however: "On the many-coloured Earth walks someone who is neither Muslim nor infidel, neither rich nor humble. / He reveres neither God nor the laws. / He does not believe in the truth, he never affirms anything. / On the many-coloured Earth, who is this brave and sad man?" (Roubaïat 108, Toussaint).
  10.  The numbers refer to Toussaint's translation.
  11.  Carnets de notes de "Mémoires d'Hadrien", Folio/Gallimard, printing of 2007, p. 342.
  12.  Consul of France at Rasht, interpreter at the French legation in Tehran; his 1861 translation of the quatrains was made from a manuscript containing 464 quatrains.
  13.  "Why not make of him a Leibniz writing love-notes and very small poems on the corner of a table from time to time, when he had had enough of everything he had in his brain?" Armand Robin, in Traduction, p. 105; cf. Bibliography.
  14.  "Omar Khayyam", Encyclopaedia Britannica.
  15.  Carnets de notes de "Mémoires d'Hadrien", p. 329, Folio/Gallimard, printing of 2007.



Al-Khawarizmi


Al-Khawarizmi[1], born around 783, a native of Khiva in the region of Khwarezm[2] which gave him his name, and who died around 850 in Baghdad, was a Persian Muslim mathematician, geographer, astrologer and astronomer whose writings, composed in Arabic, enabled the introduction of algebra into Europe.[3] His whole life unfolded in the era of the Abbasid dynasty. He is at the origin of the words "algorithm" (which is simply his Latinized name, "algoritmi"[3]) and "algebra" (taken from a method and from the title of one of his works), as well as of the use of Arabic numerals, whose diffusion in the Middle East and in Europe stems from another of his books (itself treating of Indian mathematics).

His contribution to mathematics was such that he is also called "the father of algebra", along with Diophantus of Alexandria, whose work he took up. Indeed, he was the first to catalogue systematically methods for solving equations by classifying them.

This Arab-Muslim mathematician should not be confused with another Persian mathematician, Abu-'Abdollâh Mohammad Khuwârizmi, author of the Mafâtih al-'Olum (a mathematical work written around 976).

A lunar crater has been named in his honour: the crater Al-Khwarizmi.


Contributions

He did not invent algorithms (the oldest algorithm then known was Euclid's, and 20th-century scholars would discover others on ancient Babylonian tablets, used for computing taxes), but he formalized their theory by identifying their common features, in particular the need for a termination criterion.

In mathematics

First page of the Kitāb al-mukhtaṣar fī ḥisāb al-jabr wa-l-muqābala

He is the author of several mathematical works, one of the most famous of which is entitled Kitāb al-mukhtaṣar fī ḥisāb al-jabr wa'l-muqābalah (كتاب المختصر في حساب الجبر والمقابلة), or The Compendious Book on Calculation by Restoration and Comparison, published in 825. This book contains six chapters, each devoted to a particular type of equation. It contains no numerals: all the equations are expressed in words. The square of the unknown is called "the square" or māl, the unknown is "the thing" or shay (šay) or jidhr, and the constant is the dirham or adǎd. The term al-jabr[4] was taken up by Europeans and later became the word algebra.
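
In modern notation (a reconstruction: the treatise states everything in words with positive coefficients, and the grouping below is the classification usually reported for its six chapters), the six types are

\[
x^2 = bx, \qquad x^2 = c, \qquad bx = c,
\]
\[
x^2 + bx = c, \qquad x^2 + c = bx, \qquad x^2 = bx + c,
\]

with $b, c > 0$. Since negative quantities are not admitted, al-jabr ("restoration") moves a subtracted term to the other side of an equation, and al-muqābala ("comparison") cancels equal terms appearing on both sides.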

Another work, which has not come down to us, the Kitāb 'al-ĵāmi` wa'l-tafrīq bī h'isāb 'al-Hind (كتاب الجامع و التفريق بحساب الهند, "Book of addition and subtraction according to the Indian calculation"), which described the system of "Arabic" numerals (in fact borrowed from the Indians), was the vehicle for the diffusion of these numerals in the Middle East and in the Caliphate of Cordoba, from which Gerbert d'Aurillac (Sylvester II) would pass them on to the Christian world.[5]

In astronomy

Al-Khawarizmi is the author of a zij, which appeared in 820, known as the Zīj al-Sindhind (Indian Tables).

The principle of algorithms had been known since antiquity (Euclid's algorithm), and Donald Knuth even mentions their use by the Babylonians.

Notes and references

  1.  Or Al-Khuwarizmi; his full name is, in Persian, Abû Ja`far Muhammad ben Mūsā Khwārezmī (ابوجعفر محمد بن موسی خوارزمی) or Abû `Abd Allah Muhammad ben Mūsā al-Khawārizmī (Arabic: أبو عبد الله محمد بن موسى الخوارزمي, also transliterated Abu Abdullah Muhammad bin Musa al-Khwarizmi or Al-Khorezmi).
  2.  It is not known whether he was born in Khiva and then emigrated to Baghdad, or whether it was his parents who emigrated, in which case he may have been born in Baghdad.
  3. ↑ a and b Encyclopaedia Britannica, "al-Khwarizmi".
  4.  Al-jabr survived with its original sense of restoration / setting back in place in the Spanish word algebrista, which designates a bonesetter who puts dislocated joints and bones back in place. See (es) algebrista (2) in the Diccionario de la lengua española.
  5.  See André Allard (ed.), Muhammad Ibn Mūsā Al-Khwārizmī. Le calcul indien (algorismus), Librairie scientifique et technique A. Blanchard, Paris; Société des Études classiques, Namur, 1992. (ISBN 978-2-87037-174-9)



Addition

3 + 2 = 5 with apples, a popular choice in textbooks[1]

Addition is a mathematical operation that represents combining collections of objects together into a larger collection. It is signified by the plus sign (+). For example, in the picture on the right, there are 3 + 2 apples, meaning three apples and two other apples, which is the same as five apples. Therefore, 3 + 2 = 5. Besides counting fruits, addition can also represent combining other physical and abstract quantities using different kinds of numbers: negative numbers, fractions, irrational numbers, vectors, decimals and more.

Addition follows several important patterns. It is commutative, meaning that order does not matter, and it is associative, meaning that when one adds more than two numbers, the order in which the additions are performed does not matter (see Summation). Repeated addition of 1 is the same as counting; addition of 0 does not change a number. Addition also obeys predictable rules concerning related operations such as subtraction and multiplication. All of these rules can be proven, starting with the addition of natural numbers and generalizing up through the real numbers and beyond. General binary operations that continue these patterns are studied in abstract algebra.

Performing addition is one of the simplest numerical tasks. Addition of very small numbers is accessible to toddlers; the most basic task, 1 + 1, can be performed by infants as young as five months and even some animals. In primary education, children learn to add numbers in the decimal system, starting with single digits and progressively tackling more difficult problems. Mechanical aids range from the ancient abacus to the modern computer, where research on the most efficient implementations of addition continues to this day.


Notation and terminology

The plus sign

Addition is written using the plus sign "+" between the terms; that is, in infix notation. The result is expressed with an equals sign. For example,

1 + 1 = 2 (verbally, "one plus one is equal to two")
2 + 2 = 4 (verbally, "two plus two is equal to four")
5 + 4 + 2 = 11 (see "associativity" below)
3 + 3 + 3 + 3 = 12 (see "multiplication" below)

There are also situations where addition is "understood" even though no symbol appears:

Columnar addition:
5 + 12 = 17
  • A column of numbers, with the last number in the column underlined, usually indicates that the numbers in the column are to be added, with the sum written below the underlined number.
  • A whole number followed immediately by a fraction indicates the sum of the two, called a mixed number.[2] For example,
          3½ = 3 + ½ = 3.5.
    This notation can cause confusion since in most other contexts juxtaposition denotes multiplication instead.

The sum of a series of related numbers can be expressed through capital sigma notation, which compactly denotes iteration. For example,

$\sum_{k=1}^{5} k^2 = 1^2 + 2^2 + 3^2 + 4^2 + 5^2 = 55.$
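
The same sum in one line of Python (an illustrative snippet, not part of the original article):

    total = sum(k**2 for k in range(1, 6))  # 1 + 4 + 9 + 16 + 25
    print(total)  # 55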

The numbers or the objects to be added in general addition are called the "terms", the "addends", or the "summands"; this terminology carries over to the summation of multiple terms. This is to be distinguished from factors, which are multiplied. Some authors call the first addend the augend. In fact, during the Renaissance, many authors did not consider the first addend an "addend" at all. Today, due to the symmetry of addition, "augend" is rarely used, and both terms are generally called addends.[3]

All of this terminology derives from Latin. "Addition" and "add" are English words derived from the Latin verb addere, which is in turn a compound of ad "to" and dare "to give", from the Proto-Indo-European root *deh₃- "to give"; thus to add is to give to.[3] Using the gerundive suffix -nd results in "addend", "thing to be added".[4] Likewise from augere "to increase", one gets "augend", "thing to be increased".

Redrawn illustration from The Art of Nombryng, one of the first English arithmetic texts (15th century)[5]

"Sum" and "summand" derive from the Latin noun summa "the highest, the top" and associated verbsummare. This is appropriate not only because the sum of two positive numbers is greater than either, but because it was once common to add upward, contrary to the modern practice of adding downward, so that a sum was literally higher than the addends.[6] Addere and summare date back at least to Boethius, if not to earlier Roman writers such as Vitruvius and Frontinus; Boethius also used several other terms for the addition operation. The later Middle English terms "adden" and "adding" were popularized by Chaucer.[7]

Interpretations

Addition is used to model countless physical processes. Even for the simple case of adding natural numbers, there are many possible interpretations and even more visual representations.

Combining sets


Possibly the most fundamental interpretation of addition lies in combining sets:

  • When two or more disjoint collections are combined into a single collection, the number of objects in the single collection is the sum of the number of objects in the original collections.

This interpretation is easy to visualize, with little danger of ambiguity. It is also useful in higher mathematics; for the rigorous definition it inspires, see Natural numbers below. However, it is not obvious how one should extend this version of addition to include fractional numbers or negative numbers.[8]

One possible fix is to consider collections of objects that can be easily divided, such as pies or, still better, segmented rods.[9] Rather than just combining collections of segments, rods can be joined end-to-end, which illustrates another conception of addition: adding not the rods but the lengths of the rods.

Extending a length

A second interpretation of addition comes from extending an initial length by a given length:

  • When an original length is extended by a given amount, the final length is the sum of the original length and the length of the extension.
A number-line visualization of the algebraic addition 2 + 4 = 6. A translation by 2 followed by a translation by 4 is the same as a translation by 6.
A number-line visualization of the unary addition 2 + 4 = 6. A translation by 4 is equivalent to four translations by 1.

The sum a + b can be interpreted as a binary operation that combines a and b, in an algebraic sense, or it can be interpreted as the addition of b more units to a. Under the latter interpretation, the parts of a sum a + b play asymmetric roles, and the operation a + b is viewed as applying the unary operation +b to a. Instead of calling both a and b addends, it is more appropriate to call a the augend in this case, since a plays a passive role. The unary view is also useful when discussing subtraction, because each unary addition operation has an inverse unary subtraction operation, and vice versa.

Properties

Commutativity

4 + 2 = 2 + 4 with blocks

Addition is commutative, meaning that one can change the order of the terms in a sum, and the result is the same. Symbolically, if a and b are any two numbers, then

a + b = b + a.

The fact that addition is commutative is known as the "commutative law of addition". This phrase suggests that there are other commutative laws: for example, there is a commutative law of multiplication. However, many binary operations are not commutative, such as subtraction and division, so it is misleading to speak of an unqualified "commutative law".

Associativity

2+(1+3) = (2+1)+3 with segmented rods

A somewhat subtler property of addition is associativity, which comes up when one tries to define repeated addition. Should the expression

"a + b + c"

be defined to mean (a + b) + c or a + (b + c)? That addition is associative tells us that the choice of definition is irrelevant. For any three numbers a, b, and c, it is true that

(a + b) + c = a + (b + c).

For example, (1 + 2) + 3 = 3 + 3 = 6 = 1 + 5 = 1 + (2 + 3). Not all operations are associative, so in expressions with other operations like subtraction, it is important to specify the order of operations.

Zero and one

5 + 0 = 5 with bags of dots

When adding zero to any number, the quantity does not change; zero is the identity element for addition, also known as the additive identity. In symbols, for any a,

a + 0 = 0 + a = a.

This law was first identified in Brahmagupta's Brahmasphutasiddhanta in 628, although he wrote it as three separate laws, depending on whether a is negative, positive, or zero itself, and he used words rather than algebraic symbols. Later Indian mathematicians refined the concept; around the year 830, Mahavira wrote, "zero becomes the same as what is added to it", corresponding to the unary statement 0 + a = a. In the 12th century, Bhaskara wrote, "In the addition of cipher, or subtraction of it, the quantity, positive or negative, remains the same", corresponding to the unary statement a + 0 = a.[10]

In the context of integers, addition of one also plays a special role: for any integer a, the integer (a + 1) is the least integer greater than a, also known as the successor of a. Because of this succession, the value of some a + b can also be seen as the bth successor of a, making addition iterated succession.

Units

To numerically add physical quantities with units, they must first be expressed with common units. For example, if a measure of 5 feet is extended by 2 inches, the sum is 62 inches, since 60 inches is synonymous with 5 feet. On the other hand, it is usually meaningless to try to add 3 meters and 4 square meters, since those units are incomparable; this sort of consideration is fundamental in dimensional analysis.
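
The same unit bookkeeping in a small Python sketch (illustrative names):

    FEET_TO_INCHES = 12

    # Convert both lengths to a common unit (inches) before adding
    total_inches = 5 * FEET_TO_INCHES + 2
    print(total_inches)  # 62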

Performing addition

Innate ability

Studies on mathematical development starting around the 1980s have exploited the phenomenon of habituation: infants look longer at situations that are unexpected.[11] A seminal experiment by Karen Wynn in 1992 involving Mickey Mouse dolls manipulated behind a screen demonstrated that five-month-old infants expect 1 + 1 to be 2, and they are comparatively surprised when a physical situation seems to imply that 1 + 1 is either 1 or 3. This finding has since been affirmed by a variety of laboratories using different methodologies.[12] Another 1992 experiment with older toddlers, between 18 and 35 months, exploited their development of motor control by allowing them to retrieve ping-pong balls from a box; the youngest responded well for small numbers, while older subjects were able to compute sums up to 5.[13]

Even some nonhuman animals show a limited ability to add, particularly primates. In a 1995 experiment imitating Wynn's 1992 result (but using eggplants instead of dolls), rhesus macaques and cottontop tamarins performed similarly to human infants. More dramatically, after being taught the meanings of the Arabic numerals 0 through 4, one chimpanzee was able to compute the sum of two numerals without further training.[14]

Elementary methods

Typically children master the art of counting first. When asked a problem requiring two items and three items to be combined, young children will model the situation with physical objects, often fingers or a drawing, and then count the total. As they gain experience, they will learn or discover the strategy of "counting-on": asked to find two plus three, children count three past two, saying "three, four, five" (usually ticking off fingers), and arriving at five. This strategy seems almost universal; children can easily pick it up from peers or teachers.[15] Most discover it independently. With additional experience, children learn to add more quickly by exploiting the commutativity of addition by counting up from the larger number, in this case starting with three and counting "four, five." Eventually children begin to recall certain addition facts ("number bonds"), either through experience or rote memorization. Once some facts are committed to memory, children begin to derive unknown facts from known ones. For example, a child who is asked to add six and seven may know that 6+6=12 and then reason that 6+7 will be one more, or 13.[16] Such derived facts can be found very quickly and most elementary school children eventually rely on a mixture of memorized and derived facts to add fluently.[17]

Decimal system

The prerequisite to addition in the decimal system is the fluent recall or derivation of the 100 single-digit "addition facts". One could memorize all the facts by rote, but pattern-based strategies are more enlightening and, for most people, more efficient:[18]

  • One or two more: Adding 1 or 2 is a basic task, and it can be accomplished through counting on or, ultimately, intuition.[18]
  • Zero: Since zero is the additive identity, adding zero is trivial. Nonetheless, some children are introduced to addition as a process that always increases the addends; word problems may help rationalize the "exception" of zero.[18]
  • Doubles: Adding a number to itself is related to counting by two and to multiplication. Doubles facts form a backbone for many related facts, and fortunately, children find them relatively easy to grasp.[18]
  • Near-doubles: Sums such as 6+7=13 can be quickly derived from the doubles fact 6+6=12 by adding one more, or from 7+7=14 by subtracting one.[18]
  • Five and ten: Sums of the form 5+x and 10+x are usually memorized early and can be used for deriving other facts. For example, 6+7=13 can be derived from 5+7=12 by adding one more.[18]
  • Making ten: An advanced strategy uses 10 as an intermediate for sums involving 8 or 9; for example, 8 + 6 = 8 + 2 + 4 = 10 + 4 = 14.[18]

As children grow older, they will commit more facts to memory, and learn to derive other facts rapidly and fluently. Many children never commit all the facts to memory, but can still find any basic fact quickly.[17]

The standard algorithm for adding multidigit numbers is to align the addends vertically and add the columns, starting from the ones column on the right. If a column exceeds ten, the extra digit is "carried" into the next column.[19] An alternate strategy starts adding from the most significant digit on the left; this route makes carrying a little clumsier, but it is faster at getting a rough estimate of the sum. There are many other alternative methods.
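
A minimal Python sketch of the standard right-to-left algorithm described above (illustrative; digits are stored least significant first so that the loop mirrors the columns):

    def add_decimal(a_digits, b_digits):
        # Column-by-column addition with carries; digit lists are
        # least-significant-first, e.g. 786 -> [6, 8, 7]
        result, carry = [], 0
        for i in range(max(len(a_digits), len(b_digits))):
            column = carry
            if i < len(a_digits):
                column += a_digits[i]
            if i < len(b_digits):
                column += b_digits[i]
            result.append(column % 10)  # digit written under the column
            carry = column // 10        # digit carried into the next column
        if carry:
            result.append(carry)
        return result

    print(add_decimal([6, 8, 7], [9, 5, 2]))  # 786 + 259 = 1045 -> [5, 4, 0, 1]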

Computers

Addition with an op-amp. See Summing amplifier for details.

Analog computers work directly with physical quantities, so their addition mechanisms depend on the form of the addends. A mechanical adder might represent two addends as the positions of sliding blocks, in which case they can be added with an averaging lever. If the addends are the rotation speeds of two shafts, they can be added with a differential. A hydraulic adder can add the pressures in two chambers by exploiting Newton's second law to balance forces on an assembly of pistons. The most common situation for a general-purpose analog computer is to add two voltages (referenced to ground); this can be accomplished roughly with a resistor network, but a better design exploits an operational amplifier.[20]

Addition is also fundamental to the operation of digital computers, where the efficiency of addition, in particular the carry mechanism, is an important limitation to overall performance.

Part of Charles Babbage's Difference Engine including the addition and carry mechanisms

Adding machines, mechanical calculators whose primary function was addition, were the earliest automatic, digital computers. Wilhelm Schickard's 1623 Calculating Clock could add and subtract, but it was severely limited by an awkward carry mechanism. Burnt during its construction in 1624 and unknown to the world for more than three centuries, it was rediscovered in 1957[21] and therefore had no impact on the development of mechanical calculators.[22] Blaise Pascal invented the mechanical calculator in 1642[23] with an ingenious gravity-assisted carry mechanism. Pascal's calculator was limited by its carry mechanism in a different sense: its wheels turned only one way, so it could add but not subtract, except by the method of complements. By 1674 Gottfried Leibniz made the first mechanical multiplier; it was still powered, if not motivated, by addition.[24]

"Full adder" logic circuit that adds two binary digits, A and B, along with a carry input Cin, producing the sum bit, S, and a carry output, Cout.

Adders execute integer addition in electronic digital computers, usually using binary arithmetic. The simplest architecture is the ripple carry adder, which follows the standard multi-digit algorithm taught to children. One slight improvement is the carry skip design, again following human intuition; one does not perform all the carries in computing 999 + 1, but one bypasses the group of 9s and skips to the answer.[25]
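
A Python model of a one-bit full adder and of a ripple carry adder built by chaining it (an illustrative sketch, not any particular hardware description):

    def full_adder(a, b, carry_in):
        # One-bit full adder: sum bit and carry-out from bits a, b, carry_in
        sum_bit = a ^ b ^ carry_in
        carry_out = (a & b) | (carry_in & (a ^ b))
        return sum_bit, carry_out

    def ripple_carry_add(a_bits, b_bits):
        # Equal-length bit lists, least significant bit first; the carry
        # "ripples" from each stage into the next
        out, carry = [], 0
        for a, b in zip(a_bits, b_bits):
            s, carry = full_adder(a, b, carry)
            out.append(s)
        out.append(carry)
        return out

    # 6 + 7 = 13: bits of 6 and 7 written least significant first
    print(ripple_carry_add([0, 1, 1], [1, 1, 1]))  # [1, 0, 1, 1] = binary 1101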

Since they compute digits one at a time, the above methods are too slow for most modern purposes. In modern digital computers, integer addition is typically the fastest arithmetic instruction, yet it has the largest impact on performance, since it underlies all the floating-point operations as well as such basic tasks as address generation during memory access and fetching instructions during branching. To increase speed, modern designs calculate digits in parallel; these schemes go by such names as carry select, carry lookahead, and the Ling pseudocarry. Almost all modern implementations are, in fact, hybrids of these last three designs.[26]

Unlike addition on paper, addition on a computer often changes the addends. On the ancient abacus and adding board, both addends are destroyed, leaving only the sum. The influence of the abacus on mathematical thinking was strong enough that early Latin texts often claimed that in the process of adding "a number to a number", both numbers vanish.[27] In modern times, the ADD instruction of a microprocessor replaces the augend with the sum but preserves the addend.[28] In a high-level programming language, evaluating a + b does not change either a or b; to change the value of a one uses the addition assignment operator a += b.

Addition of natural and real numbers

To prove the usual properties of addition, one must first define addition for the context in question. Addition is first defined on the natural numbers. In set theory, addition is then extended to progressively larger sets that include the natural numbers: the integers, the rational numbers, and the real numbers.[29] (In mathematics education,[30] positive fractions are added before negative numbers are even considered; this is also the historical route.[31])

Natural numbers

There are two popular ways to define the sum of two natural numbers a and b. If one defines natural numbers to be the cardinalities of finite sets (the cardinality of a set is the number of elements in the set), then it is appropriate to define their sum as follows:

  • Let N(S) be the cardinality of a set S. Take two disjoint sets A and B, with N(A) = a and N(B) = b. Then a + b is defined as $N(A \cup B)$.[32]

Here, A ∪ B is the union of A and B. An alternate version of this definition allows A and B to possibly overlap and then takes their disjoint union, a mechanism that allows common elements to be separated out and therefore counted twice.

The other popular definition is recursive:

  • Let n+ be the successor of n, that is the number following n in the natural numbers, so 0+ = 1, 1+ = 2. Define a + 0 = a. Define the general sum recursively by a + (b+) = (a + b)+. Hence 1 + 1 = 1 + 0+ = (1 + 0)+ = 1+ = 2.[33]

Again, there are minor variations upon this definition in the literature. Taken literally, the above definition is an application of the Recursion Theorem on the poset N².[34] On the other hand, some sources prefer to use a restricted Recursion Theorem that applies only to the set of natural numbers. One then considers a to be temporarily "fixed", applies recursion on b to define a function "a +", and pastes these unary operations for all a together to form the full binary operation.[35]

This recursive formulation of addition was developed by Dedekind as early as 1854, and he would expand upon it in the following decades.[36] He proved the associative and commutative properties, among others, through mathematical induction; for examples of such inductive proofs, see Addition of natural numbers.
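
A minimal Python sketch of the recursive definition (an illustrative encoding: zero is the empty tuple and the successor wraps a number in one more tuple):

    ZERO = ()

    def succ(n):
        # Successor: one more layer of nesting
        return (n,)

    def add(a, b):
        # a + 0 = a; a + succ(b) = succ(a + b)
        return a if b == ZERO else succ(add(a, b[0]))

    def to_int(n):
        # Count the nesting depth, for display only
        return 0 if n == ZERO else 1 + to_int(n[0])

    one = succ(ZERO)
    print(to_int(add(one, one)))  # 2, i.e. 1 + 1 = 2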

Integers

Defining (−2) + 1 using only addition of positive numbers: (2 − 4) + (3 − 2) = 5 − 6.

The simplest conception of an integer is that it consists of an absolute value (which is a natural number) and a sign (generally either positive or negative). The integer zero is a special third case, being neither positive nor negative. The corresponding definition of addition must proceed by cases:

  • For an integer n, let |n| be its absolute value. Let a and b be integers. If either a or b is zero, treat it as an identity. If a and b are both positive, define a + b = |a| + |b|. If a and b are both negative, define a + b = −(|a|+|b|). If a and b have different signs, define a + b to be the difference between |a| and |b|, with the sign of the term whose absolute value is larger.[37]

Although this definition can be useful for concrete problems, it is far too complicated to produce elegant general proofs; there are too many cases to consider.
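
Spelled out in Python, the case analysis looks like this (an illustrative sketch; it leans on built-in arithmetic for the natural-number parts, and its only point is to show how many cases the definition needs):

    def add_int(a, b):
        if a == 0:                      # zero acts as an identity
            return b
        if b == 0:
            return a
        if a > 0 and b > 0:             # both positive: add magnitudes
            return abs(a) + abs(b)
        if a < 0 and b < 0:             # both negative: add and negate
            return -(abs(a) + abs(b))
        # Mixed signs: difference of magnitudes, sign of the larger
        big, small = (a, b) if abs(a) >= abs(b) else (b, a)
        magnitude = abs(big) - abs(small)
        return magnitude if big > 0 else -magnitude

    print(add_int(-2, 1))  # -1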

A much more convenient conception of the integers is the Grothendieck group construction. The essential observation is that every integer can be expressed (not uniquely) as the difference of two natural numbers, so we may as well define an integer as the difference of two natural numbers. Addition is then defined to be compatible with subtraction:

  • Given two integers a − b and c − d, where a, b, c, and d are natural numbers, define (a − b) + (c − d) = (a + c) − (b + d).[38]
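
A small Python sketch of this pair representation (illustrative names; a pair (a, b) stands for the integer a − b):

    def add_pairs(x, y):
        # (a - b) + (c - d) = (a + c) - (b + d), using only
        # natural-number addition on the components
        return (x[0] + y[0], x[1] + y[1])

    def same_int(x, y):
        # (a - b) and (c - d) name the same integer iff a + d = c + b
        return x[0] + y[1] == y[0] + x[1]

    # The example from the caption above: (2 - 4) + (3 - 2) = 5 - 6
    print(add_pairs((2, 4), (3, 2)))                    # (5, 6)
    print(same_int(add_pairs((2, 4), (3, 2)), (0, 1)))  # True: both mean -1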

Rational numbers (Fractions)

Addition of rational numbers can be computed using the least common denominator, but a conceptually simpler definition involves only integer addition and multiplication:

  • Define $\frac{a}{b} + \frac{c}{d} = \frac{ad + bc}{bd}$.

The commutativity and associativity of rational addition is an easy consequence of the laws of integer arithmetic.[39] For a more rigorous and general discussion, see field of fractions.
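
The definition in a few lines of Python (illustrative; the reduction to lowest terms is a convenience, not part of the definition):

    from math import gcd

    def add_rational(ab, cd):
        # a/b + c/d = (ad + bc) / bd, represented as (numerator, denominator)
        (a, b), (c, d) = ab, cd
        num, den = a * d + b * c, b * d
        g = gcd(num, den)
        return (num // g, den // g)

    print(add_rational((1, 2), (1, 3)))  # (5, 6): 1/2 + 1/3 = 5/6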

Real numbers

Adding π²/6 and e using Dedekind cuts of rationals

A common construction of the set of real numbers is the Dedekind completion of the set of rational numbers. A real number is defined to be a Dedekind cut of rationals: a non-empty set of rationals that is closed downward and has no greatest element. The sum of real numbers a and b is defined element by element:

  • Define $a + b = \{q + r \mid q \in a,\ r \in b\}$.

This definition was first published, in a slightly modified form, by Richard Dedekind in 1872.[41] The commutativity and associativity of real addition are immediate; defining the real number 0 to be the set of negative rationals, it is easily seen to be the additive identity. Probably the trickiest part of this construction pertaining to addition is the definition of additive inverses.[42]

Adding π²/6 and e using Cauchy sequences of rationals

Unfortunately, dealing with multiplication of Dedekind cuts is a case-by-case nightmare similar to the addition of signed integers. Another approach is the metric completion of the rational numbers. A real number is essentially defined to be the limit of a Cauchy sequence of rationals, $\lim a_n$. Addition is defined term by term:

  • Define $\lim_n a_n + \lim_n b_n = \lim_n (a_n + b_n)$.

This definition was first published by Georg Cantor, also in 1872, although his formalism was slightly different.[44] One must prove that this operation is well-defined, dealing with co-Cauchy sequences. Once that task is done, all the properties of real addition follow immediately from the properties of rational numbers. Furthermore, the other arithmetic operations, including multiplication, have straightforward, analogous definitions.[45]

Generalizations

There are many things that can be added: numbers, vectors, matrices, spaces, shapes, sets, functions, equations, strings, chains... (Alexander Bogomolny)

There are many binary operations that can be viewed as generalizations of the addition operation on the real numbers. The field of abstract algebra is centrally concerned with such generalized operations, and they also appear in set theory and category theory.

Addition in abstract algebra

In linear algebra, a vector space is an algebraic structure that allows for adding any two vectors and for scaling vectors. A familiar vector space is the set of all ordered pairs of real numbers; the ordered pair (a,b) is interpreted as a vector from the origin in the Euclidean plane to the point (a,b) in the plane. The sum of two vectors is obtained by adding their individual coordinates:

(a,b) + (c,d) = (a+c,b+d).

This addition operation is central to classical mechanics, in which vectors are interpreted as forces.
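
Coordinatewise vector addition in Python (illustrative):

    def vector_add(v, w):
        # Add corresponding coordinates of two equal-length vectors
        return tuple(x + y for x, y in zip(v, w))

    print(vector_add((1, 2), (3, -1)))  # (4, 1)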

In modular arithmetic, the set of integers modulo 12 has twelve elements; it inherits an addition operation from the integers that is central to musical set theory. The set of integers modulo 2 has just two elements; the addition operation it inherits is known in Boolean logic as the "exclusive or" function. In geometry, the sum of two angle measures is often taken to be their sum as real numbers modulo 2π. This amounts to an addition operation on the circle, which in turn generalizes to addition operations on many-dimensional tori.
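
Two of these inherited additions in Python (illustrative):

    # Pitch classes add modulo 12, as in musical set theory
    print((7 + 8) % 12)        # 3

    # Integers modulo 2: addition coincides with exclusive or
    print((1 + 1) % 2, 1 ^ 1)  # 0 0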

The general theory of abstract algebra allows an "addition" operation to be any associative and commutative operation on a set. Basic algebraic structures with such an addition operation include commutative monoids and abelian groups.

Addition in set theory and category theory

A far-reaching generalization of addition of natural numbers is the addition of ordinal numbers and cardinal numbers in set theory. These give two different generalizations of addition of natural numbers to the transfinite. Unlike most addition operations, addition of ordinal numbers is not commutative. Addition of cardinal numbers, however, is a commutative operation closely related to the disjoint union operation.

In category theory, disjoint union is seen as a particular case of the coproduct operation, and general coproducts are perhaps the most abstract of all the generalizations of addition. Some coproducts, such as the direct sum and the wedge sum, are named to evoke their connection with addition.

Related operations

Arithmetic

Subtraction can be thought of as a kind of addition—that is, the addition of an additive inverse. Subtraction is itself a sort of inverse to addition, in that adding x and subtracting x are inverse functions.

Given a set with an addition operation, one cannot always define a corresponding subtraction operation on that set; the set of natural numbers is a simple example. On the other hand, a subtraction operation uniquely determines an addition operation, an additive inverse operation, and an additive identity; for this reason, an additive group can be described as a set that is closed under subtraction.[46]

Multiplication can be thought of as repeated addition. If a single term x appears in a sum n times, then the sum is the product of n and x. If n is not a natural number, the product may still make sense; for example, multiplication by −1 yields the additive inverse of a number.

A circular slide rule

In the real and complex numbers, addition and multiplication can be interchanged by the exponential function:

e^(a + b) = e^a e^b.[47]

This identity allows multiplication to be carried out by consulting a table of logarithms and computing addition by hand; it also enables multiplication on a slide rule. The formula is still a good first-order approximation in the broad context of Lie groups, where it relates multiplication of infinitesimal group elements with addition of vectors in the associated Lie algebra.[48]
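
A sketch of the log-table idea in Python, replacing one multiplication by an addition of logarithms (the values are chosen arbitrarily):

```python
from math import exp, log

# Since e^(a+b) = e^a * e^b, a product can be computed as exp of a sum.
a, b = 37.0, 4.2
print(exp(log(a) + log(b)))  # ≈ 155.4
print(a * b)                 # 155.4, for comparison
```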

There are even more generalizations of multiplication than addition.[49] In general, multiplication operations always distribute over addition; this requirement is formalized in the definition of a ring. In some contexts, such as the integers, distributivity over addition and the existence of a multiplicative identity is enough to uniquely determine the multiplication operation. The distributive property also provides information about addition; by expanding the product (1 + 1)(a + b) in both ways, one concludes that addition is forced to be commutative. For this reason, ring addition is commutative in general.[50]

Division is an arithmetic operation remotely related to addition. Since a/b = a(b^−1), division is right distributive over addition: (a + b)/c = a/c + b/c.[51] However, division is not left distributive over addition; 1/(2 + 2) is not the same as 1/2 + 1/2.

Ordering

Log-log plot of x + 1 and max(x, 1) from x = 0.001 to 1000[52]

The maximum operation "max (a, b)" is a binary operation similar to addition. In fact, if two nonnegative numbers a and b are of different orders of magnitude, then their sum is approximately equal to their maximum. This approximation is extremely useful in the applications of mathematics, for example in truncating Taylor series. However, it presents a perpetual difficulty in numerical analysis, essentially since "max" is not invertible. If b is much greater than a, then a straightforward calculation of (a + b) − b can accumulate an unacceptable round-off error, perhaps even returning zero. See also Loss of significance.
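
The hazard is easy to reproduce with floating-point numbers; a small Python sketch:

```python
# When b is much greater than a, the float sum a + b absorbs a entirely,
# so (a + b) - b returns zero rather than a.
a, b = 1e-20, 1.0
print(a + b == b)    # True: a is lost in the rounded sum
print((a + b) - b)   # 0.0, not 1e-20
print(max(a, b))     # 1.0; here the "max" approximation is exact
```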

The approximation becomes exact in a kind of infinite limit; if either a or b is an infinite cardinal number, their cardinal sum is exactly equal to the greater of the two.[53] Accordingly, there is no subtraction operation for infinite cardinals.[54]

Maximization is commutative and associative, like addition. Furthermore, since addition preserves the ordering of real numbers, addition distributes over "max" in the same way that multiplication distributes over addition:

a + max (bc) = max (a + ba + c).

For these reasons, in tropical geometry one replaces multiplication with addition and addition with maximization. In this context, addition is called "tropical multiplication", maximization is called "tropical addition", and the tropical "additive identity" is negative infinity.[55] Some authors prefer to replace addition with minimization; then the additive identity is positive infinity.[56]

Tying these observations together, tropical addition is approximately related to regular addition through the logarithm:

log (a + b) ≈ max (log a, log b),

which becomes more accurate as the base of the logarithm increases.[57] The approximation can be made exact by extracting a constant h, named by analogy with Planck's constantfrom quantum mechanics,[58] and taking the "classical limit" as h tends to zero:

\max(a, b) = \lim_{h \to 0} h \log(e^{a/h} + e^{b/h}).

In this sense, the maximum operation is a dequantized version of addition.[59]
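
The limit can be checked numerically; the sketch below uses a numerically stable rearrangement (factoring out the maximum) so that small h does not overflow:

```python
from math import exp, log

# h*log(e^(a/h) + e^(b/h)) = m + h*log(e^((a-m)/h) + e^((b-m)/h)),
# with m = max(a, b); the rewritten form never overflows.
def soft_max(a, b, h):
    m = max(a, b)
    return m + h * log(exp((a - m) / h) + exp((b - m) / h))

for h in (1.0, 0.1, 0.01):
    print(h, soft_max(2.0, 3.0, h))  # tends to max(2, 3) = 3 as h -> 0
```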

Other ways to add

Incrementation, also known as the successor operation, is the addition of 1 to a number.

Summation describes the addition of arbitrarily many numbers, usually more than just two. It includes the idea of the sum of a single number, which is itself, and the empty sum, which is zero.[60] An infinite summation is a delicate procedure known as a series.[61]

Counting a finite set is equivalent to summing 1 over the set.

Integration is a kind of "summation" over a continuum, or more precisely and generally, over a differentiable manifold. Integration over a zero-dimensional manifold reduces to summation.

Linear combinations combine multiplication and summation; they are sums in which each term has a multiplier, usually a real or complex number. Linear combinations are especially useful in contexts where straightforward addition would violate some normalization rule, such as mixing of strategies in game theory or superposition of states in quantum mechanics.

Convolution is used to add two independent random variables defined by distribution functions. Its usual definition combines integration, subtraction, and multiplication. In general, convolution is useful as a kind of domain-side addition; by contrast, vector addition is a kind of range-side addition.
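
For discrete random variables the convolution reduces to sums of products of probabilities; a sketch for the sum of two fair dice (a standard example, not from the source):

```python
from collections import Counter
from fractions import Fraction

# Distribution of one fair die.
die = {k: Fraction(1, 6) for k in range(1, 7)}

# Convolution: P(X + Y = s) = sum over x + y = s of P(X = x) * P(Y = y).
total = Counter()
for x, px in die.items():
    for y, py in die.items():
        total[x + y] += px * py

print(total[7])  # 1/6, the most likely sum of two dice
```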

In literature

  • In chapter 9 of Lewis Carroll's Through the Looking-Glass, the White Queen asks Alice, "And you do Addition? ... What's one and one and one and one and one and one and one and one and one and one?" Alice admits that she lost count, and the Red Queen declares, "She can't do Addition".
  • In George Orwell's Nineteen Eighty-Four, the value of 2 + 2 is questioned; the State contends that if it declares 2 + 2 = 5, then it is so. See Two plus two make five for the history of this idea.

Notes

  1. ^ From Enderton (p.138): "...select two sets K and L with card K = 2 and card L = 3. Sets of fingers are handy; sets of apples are preferred by textbooks."
  2. ^ Devine et al. p.263
  3. ^ Schwartzman p.19
  4. ^ "Addend" is not a Latin word; in Latin it must be further conjugated, as in numerus addendus "the number to be added".
  5. ^ Karpinski pp.56–57, reproduced on p.104
  6. ^ Schwartzman (p.212) attributes adding upwards to the Greeks and Romans, saying it was about as common as adding downwards. On the other hand, Karpinski (p.103) writes that Leonard of Pisa "introduces the novelty of writing the sum above the addends"; it is unclear whether Karpinski is claiming this as an original invention or simply the introduction of the practice to Europe.
  7. ^ Karpinski pp.150–153
  8. ^ See Viro 2001 for an example of the sophistication involved in adding with sets of "fractional cardinality".
  9. ^ Adding it up (p.73) compares adding measuring rods to adding sets of cats: "For example, inches can be subdivided into parts, which are hard to tell from the wholes, except that they are shorter; whereas it is painful to cats to divide them into parts, and it seriously changes their nature."
  10. ^ Kaplan pp.69–71
  11. ^ Wynn p.5
  12. ^ Wynn p.15
  13. ^ Wynn p.17
  14. ^ Wynn p.19
  15. ^ F. Smith p.130
  16. ^ Carpenter, Thomas; Fennema, Elizabeth; Franke, Megan Loef; Levi, Linda; Empson, Susan (1999).Children's mathematics: Cognitively guided instruction. Portsmouth, NH: Heinemann. ISBN 0-325-00137-5.
  17. ^ Henry, Valerie J.; Brown, Richard S. (2008). "First-grade basic facts: An investigation into teaching and learning of an accelerated, high-demand memorization standard". Journal for Research in Mathematics Education 39 (2): 153–183. doi:10.2307/30034895.
  18. ^ Fosnot and Dolk p. 99
  19. ^ The word "carry" may be inappropriate for education; Van de Walle (p.211) calls it "obsolete and conceptually misleading", preferring the word "trade".
  20. ^ Truitt and Rogers pp.1;44–49 and pp.2;77–78
  21. ^ Jean Marguin p. 48 (1994)
  22. ^ René Taton, p. 81 (1969)
  23. ^ Jean Marguin, p. 48 (1994) ; Quoting René Taton(1963)
  24. ^ Williams pp.122–140
  25. ^ Flynn and Oberman pp.2, 8
  26. ^ Flynn and Oberman pp.1–9
  27. ^ Karpinski pp.102–103
  28. ^ The identity of the augend and addend varies with architecture. For ADD in x86 see Horowitz and Hill p.679; for ADD in 68k see p.767.
  29. ^ Enderton chapters 4 and 5, for example, follow this development.
  30. ^ California standards; see grades 2, 3, and 4.
  31. ^ Baez (p.37) explains the historical development, in "stark contrast" with the set theory presentation: "Apparently, half an apple is easier to understand than a negative apple!"
  32. ^ Begle p.49, Johnson p.120, Devine et al. p.75
  33. ^ Enderton p.79
  34. ^ For a version that applies to any poset with the descending chain condition, see Bergman p.100.
  35. ^ Enderton (p.79) observes, "But we want one binary operation +, not all these little one-place functions."
  36. ^ Ferreirós p.223
  37. ^ K. Smith p.234, Sparks and Rees p.66
  38. ^ Enderton p.92
  39. ^ The verifications are carried out in Enderton p.104 and sketched for a general field of fractions over a commutative ring in Dummit and Foote p.263.
  40. ^ Enderton p.114
  41. ^ Ferreirós p.135; see section 6 of Stetigkeit und irrationale Zahlen.
  42. ^ The intuitive approach, inverting every element of a cut and taking its complement, works only for irrational numbers; see Enderton p.117 for details.
  43. ^ Textbook constructions are usually not so cavalier with the "lim" symbol; see Burrill (p. 138) for a more careful, drawn-out development of addition with Cauchy sequences.
  44. ^ Ferreirós p.128
  45. ^ Burrill p.140
  46. ^ The set still must be nonempty. Dummit and Foote (p.48) discuss this criterion written multiplicatively.
  47. ^ Rudin p.178
  48. ^ Lee p.526, Proposition 20.9
  49. ^ Linderholm (p.49) observes, "By multiplication, properly speaking, a mathematician may mean practically anything. By addition he may mean a great variety of things, but not so great a variety as he will mean by 'multiplication'."
  50. ^ Dummit and Foote p.224. For this argument to work, one still must assume that addition is a group operation and that multiplication has an identity.
  51. ^ For an example of left and right distributivity, see Loday, especially p.15.
  52. ^ Compare Viro Figure 1 (p.2)
  53. ^ Enderton calls this statement the "Absorption Law of Cardinal Arithmetic"; it depends on the comparability of cardinals and therefore on the Axiom of Choice.
  54. ^ Enderton p.164
  55. ^ Mikhalkin p.1
  56. ^ Akian et al. p.4
  57. ^ Mikhalkin p.2
  58. ^ Litvinov et al. p.3
  59. ^ Viro p.4
  60. ^ Martin p.49
  61. ^ Stewart p.8

References

History
  • Bunt, Jones, and Bedient (1976). The historical roots of elementary mathematics. Prentice-Hall. ISBN 0-13-389015-5.
  • Ferreirós, José (1999). Labyrinth of thought: A history of set theory and its role in modern mathematics. Birkhäuser. ISBN 0-8176-5749-5.
  • Kaplan, Robert (2000). The nothing that is: A natural history of zero. Oxford UP. ISBN 0-19-512842-7.
  • Karpinski, Louis (1925). The history of arithmetic. Rand McNally. LCC QA21.K3.
  • Schwartzman, Steven (1994). The words of mathematics: An etymological dictionary of mathematical terms used in English. MAA. ISBN 0-88385-511-9.
  • Williams, Michael (1985). A history of computing technology. Prentice-Hall. ISBN 0-13-389917-9.
Elementary mathematics
  • Davison, Landau, McCracken, and Thompson (1999). Mathematics: Explorations & Applications (TE ed.). Prentice Hall. ISBN 0-13-435817-1.
  • F. Sparks and C. Rees (1979). A survey of basic mathematics. McGraw-Hill. ISBN 0-07-059902-5.
Cognitive science
  • Baroody and Tiilikainen (2003). "Two perspectives on addition development". The development of arithmetic concepts and skills. pp. 75. ISBN 0-8058-3155-X.
  • Fosnot and Dolk (2001). Young mathematicians at work: Constructing number sense, addition, and subtraction. Heinemann. ISBN 0-325-00353-X.
  • Weaver, J. Fred (1982). "Interpretations of number operations and symbolic representations of addition and subtraction". Addition and subtraction: A cognitive perspective. pp. 60. ISBN 0-89859-171-6.
  • Wynn, Karen (1998). "Numerical competence in infants". The development of mathematical skills. pp. 3. ISBN 0-86377-816-X.
Mathematical exposition
  • Bogomolny, Alexander (1996). "Addition". Interactive Mathematics Miscellany and Puzzles (cut-the-knot.org). Retrieved 3 February 2006.
  • Dunham, William (1994). The mathematical universe. Wiley. ISBN 0-471-53656-3.
  • Johnson, Paul (1975). From sticks and stones: Personal adventures in mathematics. Science Research Associates. ISBN 0-574-19115-1.
  • Linderholm, Carl (1971). Mathematics Made Difficult. Wolfe. ISBN 0-7234-0415-1.
  • Smith, Frank (2002). The glass wall: Why mathematics can seem difficult. Teachers College Press. ISBN 0-8077-4242-2.
  • Smith, Karl (1980). The nature of modern mathematics (3e ed.). Wadsworth. ISBN 0-8185-0352-1.
Computing
  • M. Flynn and S. Oberman (2001). Advanced computer arithmetic design. Wiley. ISBN 0-471-41209-0.
  • P. Horowitz and W. Hill (2001). The art of electronics (2e ed.). Cambridge UP. ISBN 0-521-37095-7.
  • Jackson, Albert (1960). Analog computation. McGraw-Hill. LCC QA76.4 J3.
  • T. Truitt and A. Rogers (1960). Basics of analog computers. John F. Rider. LCC QA76.4 T7.
  • Marguin, Jean (1994) (in French). Histoire des instruments et machines à calculer, trois siècles de mécanique pensante 1642-1942. Hermann. ISBN 978-2705661663.
  • Taton, René (1963) (in French). Le calcul mécanique. Que sais-je ? n° 367. Presses universitaires de France. pp. 20–28.


Arithmetic

From Wikipedia, the free encyclopedia
Arithmetic tables for children, Lausanne, 1835

Arithmetic or arithmetics (from the Greek word ἀριθμός = number) is the oldest and most elementary branch of mathematics, used by almost everyone, for tasks ranging from simple day-to-day counting to advanced science and business calculations. It involves the study of quantity, especially as the result of combining numbers. In common usage, it refers to the simpler properties when using the traditional operations of addition, subtraction, multiplication and division with smaller values of numbers. Professional mathematicians sometimes use the term (higher) arithmetic[1] when referring to more advanced results related to number theory, but this should not be confused with elementary arithmetic.


History

The prehistory of arithmetic is limited to a very small number of small artifacts which may indicate conception of addition and subtraction, the best-known being the Ishango bone from central Africa, dating from somewhere between 20,000 and 18,000 BC, although its interpretation is disputed.[2]

The earliest written records indicate the Egyptians and Babylonians used all the elementary arithmetic operations as early as 2000 BC. These artifacts do not always reveal the specific process used for solving problems, but the characteristics of the particular numeral system strongly influence the complexity of the methods. The hieroglyphic system for Egyptian numerals, like the later Roman numerals, descended from tally marks used for counting. In both cases, this origin resulted in values that used a decimal base but did not include positional notation. Although addition was generally straightforward, multiplication in Roman arithmetic required the assistance of a counting board to obtain the results.

Early number systems that included positional notation were not decimal, including the sexagesimal (base 60) system for Babylonian numerals and the vigesimal (base 20) system that defined Maya numerals. Because of this place-value concept, the ability to reuse the same digits for different values contributed to simpler and more efficient methods of calculation.

The continuous historical development of modern arithmetic starts with the Hellenistic civilization of ancient Greece, although it originated much later than the Babylonian and Egyptian examples. Prior to the works of Euclid around 300 BC, Greek studies in mathematics overlapped with philosophical and mystical beliefs. For example, Nicomachus summarized the viewpoint of the earlier Pythagorean approach to numbers, and their relationships to each other, in his Introduction to Arithmetic.

Greek numerals, derived from the hieratic Egyptian system, also lacked positional notation, and therefore imposed the same complexity on the basic operations of arithmetic. For example, the ancient mathematician Archimedes devoted his entire work The Sand Reckoner merely to devising a notation for a certain large integer.

In the gradual development of the Hindu-Arabic numeral system, the place-value concept and positional notation were devised independently; these combined the simpler methods for computations with a decimal base and the use of a digit representing zero. This allowed the system to consistently represent both large and small integers. This approach eventually replaced all other systems. In the early 6th century AD, the Indian mathematician Aryabhata incorporated an existing version of this system in his work, and experimented with different notations. In the 7th century, Brahmagupta established the use of zero as a separate number and determined the results for multiplication, division, addition and subtraction of zero and all other numbers, except for the result of division by zero. His contemporary, the Syriac bishop Severus Sebokht described the excellence of this system as "...valuable methods of calculation which surpass description". The Arabs also learned this new method and called it hesab.

Although the Codex Vigilanus described an early form of Arabic numerals (omitting zero) by 976 AD, Fibonacci was primarily responsible for spreading their use throughout Europe after the publication of his book Liber Abaci in 1202. He considered the significance of this "new" representation of numbers, which he styled the "Method of the Indians" (Latin Modus Indorum), so fundamental that all related mathematical foundations, including the results of Pythagoras and the algorism describing the methods for performing actual calculations, were "almost a mistake" in comparison.

In the Middle Ages, arithmetic was one of the seven liberal arts taught in universities.

The flourishing of algebra in the medieval Islamic world and in Renaissance Europe was an outgrowth of the enormous simplification of computation through decimal notation.

Various types of tools exist to assist in numeric calculations. Examples include slide rules (for multiplication, division, and trigonometry) and nomographs, in addition to the electrical calculator.

Decimal arithmetic

Although decimal notation may conceptually describe any numerals from a system with a decimal base, it is commonly used exclusively for the written forms of numbers with Arabic numerals as the basic digits, especially when the numeral includes a decimal separator preceding a sequence of these digits to represent a fractional part of the number. In this common usage, the written form of the number implies the existence of positional notation. For example, 507.36 denotes 5 hundreds (10^2), plus 0 tens (10^1), plus 7 units (10^0), plus 3 tenths (10^−1), plus 6 hundredths (10^−2). The conception of zero as a number comparable to the other basic digits, and the corresponding definition of multiplication and addition with zero, is an essential part of this notation.

Algorism comprises all of the rules for performing arithmetic computations using this type of written numeral. For example, addition produces the sum of two arbitrary numbers. The result is calculated by the repeated addition of single digits from each number that occupies the same position, proceeding from right to left. An addition table with ten rows and ten columns displays all possible values for each sum. If an individual sum exceeds the value nine, the result is represented with two digits. The rightmost digit is the value for the current position, and the result for the subsequent addition of the digits to the left increases by the value of the second (leftmost) digit, which is always one. This adjustment is termed a carry of the value one.
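
A Python sketch of this right-to-left procedure for natural numbers (my own helper, not a standard routine):

```python
# Add two natural numbers digit by digit, right to left, carrying one
# whenever a column sum exceeds nine.
def add_digits(x, y):
    xs, ys = [int(d) for d in str(x)], [int(d) for d in str(y)]
    digits, carry = [], 0
    while xs or ys or carry:
        s = (xs.pop() if xs else 0) + (ys.pop() if ys else 0) + carry
        digits.append(s % 10)  # rightmost digit is the current position
        carry = s // 10        # the carry is always zero or one
    return int("".join(str(d) for d in reversed(digits)))

assert add_digits(507, 36) == 543
```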

The process for multiplying two arbitrary numbers is similar to the process for addition. A multiplication table with ten rows and ten columns lists the results for each pair of digits. If an individual product of a pair of digits exceeds nine, the carry adjustment increases the result of any subsequent multiplication from digits to the left by a value equal to the second (leftmost) digit, which is any value from one to eight (9 × 9 = 81). Additional steps define the final result.
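
The multiplication procedure can be sketched the same way; as noted above, the carry here ranges up to eight (again a hypothetical helper):

```python
# Long multiplication: one partial product per digit of y, each computed
# with digit products and carries, then shifted and summed.
def long_multiply(x, y):
    total = 0
    for shift, d in enumerate(reversed([int(c) for c in str(y)])):
        partial, carry = 0, 0
        for place, e in enumerate(reversed([int(c) for c in str(x)])):
            p = d * e + carry
            partial += (p % 10) * 10 ** place
            carry = p // 10  # at most 8, since 9 * 9 = 81
        partial += carry * 10 ** (place + 1)
        total += partial * 10 ** shift
    return total

assert long_multiply(76, 34) == 2584
```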

Similar techniques exist for subtraction and division.

The creation of a correct process for multiplication relies on the relationship between values of adjacent digits. The value for any single digit in a numeral depends on its position. Also, each position to the left represents a value ten times larger than the position to the right. In mathematical terms, the exponent for the base of ten increases by one (to the left) or decreases by one (to the right). Therefore, the value for any arbitrary digit is multiplied by a value of the form 10^n with integer n. The list of values corresponding to all possible positions for a single digit is written as {..., 10^2, 10, 1, 10^−1, 10^−2, ...}.

Repeated multiplication of any value in this list by ten produces another value in the list. In mathematical terminology, this characteristic is defined as closure, and the previous list is described as closed under multiplication. It is the basis for correctly finding the results of multiplication using the previous technique. This outcome is one example of the uses of number theory.

Arithmetic operations

The basic arithmetic operations are addition, subtraction, multiplication and division, although this subject also includes more advanced operations, such as manipulations of percentages, square roots, exponentiation, and logarithmic functions. Arithmetic is performed according to an order of operations. Any set of objects upon which all four arithmetic operations (except division by zero) can be performed, and where these four operations obey the usual laws, is called a field.

Addition (+)

Addition is the basic operation of arithmetic. In its simplest form, addition combines two numbers, the addends or terms, into a single number, the sum of the numbers.

Adding more than two numbers can be viewed as repeated addition; this procedure is known as summation and includes ways to add infinitely many numbers in an infinite series; repeated addition of the number one is the most basic form of counting.

Addition is commutative and associative so the order the terms are added in does not matter. The identity element of addition (the additive identity) is 0, that is, adding zero to any number yields that same number. Also, the inverse element of addition (the additive inverse) is the opposite of any number, that is, adding the opposite of any number to the number itself yields the additive identity, 0. For example, the opposite of 7 is −7, so 7 + (−7) = 0.

Addition can be given geometrically as follows:

If a and b are the lengths of two sticks, then if we place the sticks one after the other, the length of the stick thus formed is a + b.

Subtraction (−)

Subtraction is the opposite of addition. Subtraction finds the difference between two numbers, the minuend minus the subtrahend. If the minuend is larger than the subtrahend, the difference is positive; if the minuend is smaller than the subtrahend, the difference is negative; if they are equal, the difference is zero.

Subtraction is neither commutative nor associative. For that reason, it is often helpful to look at subtraction as addition of the minuend and the opposite of the subtrahend, that is, a − b = a + (−b). When written as a sum, all the properties of addition hold.

There are several methods for calculating results, some of which are particularly advantageous to machine calculation. For example, digital computers employ the method of two's complement. Of great importance is the counting up method by which change is made. Suppose an amount P is given to pay the required amount Q, with P greater than Q. Rather than performing the subtraction P − Q and counting out that amount in change, money is counted out starting at Q and continuing until reaching P. Although the amount counted out must equal the result of the subtraction P − Q, the subtraction was never really done and the value of P − Q might still be unknown to the change-maker.
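
A sketch of counting up in Python (the coin denominations are hypothetical, in cents):

```python
# Make change by counting up from Q (owed) to P (paid); the coins handed
# over sum to P - Q, though that subtraction is never computed.
def count_up_change(paid, owed, denominations=(100, 25, 10, 5, 1)):
    change, running = [], owed
    for coin in denominations:
        while running + coin <= paid:
            change.append(coin)
            running += coin
    return change

print(count_up_change(paid=500, owed=387))  # [100, 10, 1, 1, 1]
```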

Multiplication (× or ·)

Multiplication is the second basic operation of arithmetic. Multiplication also combines two numbers into a single number, the product. The two original numbers are called the multiplier and the multiplicand, sometimes both simply called factors.

Multiplication is best viewed as a scaling operation. If the real numbers are imagined as lying in a line, multiplication by a number, say x, greater than 1 is the same as stretching everything away from zero uniformly, in such a way that the number 1 itself is stretched to where x was. Similarly, multiplying by a number less than 1 can be imagined as squeezing towards zero. (Again, in such a way that 1 goes to the multiplicand.)

Multiplication is commutative and associative; further it is distributive over addition and subtraction. The multiplicative identity is 1, that is, multiplying any number by 1 yields that same number. Also, the multiplicative inverse is the reciprocal of any number (except zero; zero is the only number without a multiplicative inverse), that is, multiplying the reciprocal of any number by the number itself yields the multiplicative identity.

The product of a and b is written as a × b or a • b. When a or b are expressions not written simply with digits, it is also written by simple juxtaposition: ab. In computer programming languages and software packages in which one can only use characters normally found on a keyboard, it is often written with an asterisk: a * b.

Division (÷ or /)

Division is essentially the opposite of multiplication. Division finds the quotient of two numbers, the dividend divided by the divisor. Any dividend divided by zero is undefined. For positive numbers, if the dividend is larger than the divisor, the quotient is greater than one, otherwise it is less than one (a similar rule applies for negative numbers). The quotient multiplied by the divisor always yields the dividend.

Division is neither commutative nor associative. As it is helpful to look at subtraction as addition, it is helpful to look at division as multiplication of the dividend times the reciprocal of the divisor, that is, a ÷ b = a × 1/b. When written as a product, it obeys all the properties of multiplication.

Number theory

The term arithmetic also refers to number theory. This includes the properties of integers related to primality, divisibility, and the solution of equations in integers, as well as modern research that is an outgrowth of this study. It is in this context that one runs across the fundamental theorem of arithmetic and arithmetic functions. A Course in Arithmetic by Jean-Pierre Serre reflects this usage, as do such phrases as first order arithmetic or arithmetical algebraic geometry. Number theory is also referred to as the higher arithmetic, as in the title of Harold Davenport's book on the subject.

Arithmetic in education

Primary education in mathematics often places a strong focus on algorithms for the arithmetic of natural numbers, integers, rational numbers (fractions), and real numbers (using the decimal place-value system). This study is sometimes known as algorism.

The difficulty and unmotivated appearance of these algorithms has long led educators to question this curriculum, advocating the early teaching of more central and intuitive mathematical ideas. One notable movement in this direction was the New Math of the 1960s and 1970s, which attempted to teach arithmetic in the spirit of axiomatic development from set theory, an echo of the prevailing trend in higher mathematics.[3]

Footnotes

  1. ^ Davenport, Harold, The Higher Arithmetic: An Introduction to the Theory of Numbers (7th ed.), Cambridge University Press, Cambridge, UK, 1999, ISBN 0-521-63446-6
  2. ^ Rudman, Peter Strom (2007). How Mathematics Happened: The First 50,000 Years. Prometheus Books. p. 64. ISBN 978-1591024774.
  3. ^ Mathematically Correct: Glossary of Terms

References

  • Cunnington, Susan, The Story of Arithmetic: A Short History of Its Origin and Development, Swan Sonnenschein, London, 1904
  • Dickson, Leonard Eugene, History of the Theory of Numbers (3 volumes), reprints: Carnegie Institute of Washington, Washington, 1932; Chelsea, New York, 1952, 1966
  • Euler, Leonhard, Elements of Algebra, Tarquin Press, 2007
  • Fine, Henry Burchard (1858–1928), The Number System of Algebra Treated Theoretically and Historically, Leach, Shewell & Sanborn, Boston, 1891
  • Karpinski, Louis Charles (1878–1956), The History of Arithmetic, Rand McNally, Chicago, 1925; reprint: Russell & Russell, New York, 1965
  • Ore, Øystein, Number Theory and Its History, McGraw–Hill, New York, 1948
  • Weil, André, Number Theory: An Approach through History, Birkhäuser, Boston, 1984; reviewed: Mathematical Reviews 85c:01004



Game theory

From Wikipedia, the free encyclopedia
 

In mathematics, game theory models strategic situations, or games, in which an individual's success in making choices depends on the choices of others (Myerson, 1991). It is used in the social sciences (most notably in economics, management, operations research, political science, and social psychology) as well as in other formal sciences (logic, computer science, and statistics) and biology (particularly evolutionary biology and ecology). While initially developed to analyze competitions in which one individual does better at another's expense (zero sum games), it has been expanded to treat a wide class of interactions, which are classified according to several criteria. Today, "game theory is a sort of umbrella or 'unified field' theory for the rational side of social science, where 'social' is interpreted broadly, to include human as well as non-human players (computers, animals, plants)." (Aumann 1987).

Traditional applications of game theory define and study equilibria in these games. In an equilibrium, each player of the game has adopted a strategy that cannot improve his outcome, given the others' strategy. Many equilibrium concepts have been developed (most famously the Nash equilibrium) to describe aspects of strategic equilibria. These equilibrium concepts are motivated differently depending on the area of application, although they often overlap or coincide. This methodology has received criticism, and debates continue over the appropriateness of particular equilibrium concepts, the appropriateness of equilibria altogether, and the usefulness of mathematical models in the social sciences.

Mathematical game theory began with some publications by Émile Borel, which led to his 1938 book Applications aux Jeux de Hasard. However, Borel's results were limited, and his conjecture about the non-existence of mixed-strategy equilibria in two-person zero-sum games was wrong. The modern epoch of game theory began with the statement of the theorem on the existence of mixed-strategy equilibria in two-person zero-sum games and its proof by John von Neumann. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. His paper was followed by his 1944 book Theory of Games and Economic Behavior, written with Oskar Morgenstern, which considered cooperative games of several players. The second edition of this book provided an axiomatic theory of expected utility, which allowed mathematical statisticians and economists to treat decision-making under uncertainty.

This theory was developed extensively in the 1950s by many scholars. Game theory was later explicitly applied to biology in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been widely recognized as an important tool in many fields. Eight game-theorists have won the Nobel Memorial Prize in Economic Sciences, and John Maynard Smith was awarded the Crafoord Prize for his application of game theory to biology.


History

The first known discussion of game theory occurred in a letter written by James Waldegrave in 1713. In this letter, Waldegrave provides a minimax mixed strategy solution to a two-person version of the card game le Her.

James Madison made what we now recognize as a game-theoretic analysis of the ways states can be expected to behave under different systems of taxation.[1][2]

It was not until the publication of Antoine Augustin Cournot's Recherches sur les principes mathématiques de la théorie des richesses (Researches into the Mathematical Principles of the Theory of Wealth) in 1838 that a general game-theoretic analysis was pursued. In this work Cournot considers a duopoly and presents a solution that is a restricted version of the Nash equilibrium.

Although Cournot's analysis is more general than Waldegrave's, game theory did not really exist as a unique field until John von Neumann published a paper in 1928.[3] While the French mathematician Émile Borel did some earlier work on games, von Neumann can rightfully be credited as the inventor of game theory.[citation needed] Von Neumann's work in game theory culminated in the 1944 book Theory of Games and Economic Behavior by von Neumann and Oskar Morgenstern. This foundational work contains the method for finding mutually consistent solutions for two-person zero-sum games. During this time period, work on game theory was primarily focused on cooperative game theory, which analyzes optimal strategies for groups of individuals, presuming that they can enforce agreements between them about proper strategies.

In 1950, the first discussion of the prisoner's dilemma appeared, and an experiment was undertaken on this game at the RAND corporation. Around this same time, John Nash developed a criterion for mutual consistency of players' strategies, known as Nash equilibrium, applicable to a wider variety of games than the criterion proposed by von Neumann and Morgenstern. This equilibrium is sufficiently general to allow for the analysis of non-cooperative games in addition to cooperative ones.

Game theory experienced a flurry of activity in the 1950s, during which time the concepts of the core, the extensive form game, fictitious play, repeated games, and the Shapley value were developed. In addition, the first applications of game theory to philosophy and political science occurred during this time.

In 1965, Reinhard Selten introduced his solution concept of subgame perfect equilibria, which further refined the Nash equilibrium (later he would introduce trembling hand perfection as well). In 1967, John Harsanyi developed the concepts of incomplete information and Bayesian games. Nash, Selten and Harsanyi became Economics Nobel Laureates in 1994 for their contributions to economic game theory.

In the 1970s, game theory was extensively applied in biology, largely as a result of the work of John Maynard Smith and his evolutionarily stable strategy. In addition, the concepts of correlated equilibrium, trembling hand perfection, and common knowledge[4] were introduced and analyzed.

In 2005, game theorists Thomas Schelling and Robert Aumann followed Nash, Selten and Harsanyi as Nobel Laureates. Schelling worked on dynamic models, early examples of evolutionary game theory. Aumann contributed more to the equilibrium school, introducing an equilibrium coarsening, correlated equilibrium, and developing an extensive formal analysis of the assumption of common knowledge and of its consequences.

In 2007, Roger Myerson, together with Leonid Hurwicz and Eric Maskin, was awarded the Nobel Prize in Economics "for having laid the foundations of mechanism design theory." Myerson's contributions include the notion of proper equilibrium, and an important graduate text: Game Theory, Analysis of Conflict (Myerson 1997).

Representation of games

The games studied in game theory are well-defined mathematical objects. A game consists of a set of players, a set of moves (or strategies) available to those players, and a specification of payoffs for each combination of strategies. Most cooperative games are presented in the characteristic function form, while the extensive and the normal forms are used to define noncooperative games.

Extensive form

An extensive form game

The extensive form can be used to formalize games with a time sequencing of moves. Games here are played on trees (as pictured to the left). Here each vertex (or node) represents a point of choice for a player. The player is specified by a number listed by the vertex. The lines out of the vertex represent a possible action for that player. The payoffs are specified at the bottom of the tree. The extensive form can be viewed as a multi-player generalization of a decision tree. (Fudenberg & Tirole 1991, p. 67)

In the game pictured to the left, there are two players. Player 1 moves first and chooses either F or U. Player 2 sees Player 1's move and then chooses A or R. If Player 1 chooses U and Player 2 then chooses A, then Player 1 gets 8 and Player 2 gets 2.
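
A sketch of this tree in Python, solved by backward induction; only the (U, A) payoff (8, 2) is stated above, so the other payoffs here are hypothetical stand-ins:

```python
# Leaves hold (Player 1 payoff, Player 2 payoff); internal dicts are moves.
tree = {
    "F": (0, 0),                      # hypothetical payoff for F
    "U": {"A": (8, 2), "R": (1, 1)},  # (U, R) payoff is hypothetical
}

# Player 2 moves last: at the U node, pick the pair maximizing her payoff.
best_reply = max(tree["U"].values(), key=lambda p: p[1])
# Player 1 then compares F against what choosing U will actually yield.
outcome = max([tree["F"], best_reply], key=lambda p: p[0])
print(outcome)  # (8, 2): Player 1 plays U and Player 2 replies A
```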

The extensive form can also capture simultaneous-move games and games with imperfect information. To represent it, either a dotted line connects different vertices to represent them as being part of the same information set (i.e., the players do not know at which point they are), or a closed line is drawn around them. (See example in the imperfect information section.)

Normal form

                        Player 2 chooses Left   Player 2 chooses Right
Player 1 chooses Up            4, 3                   −1, −1
Player 1 chooses Down          0, 0                    3, 4

Normal form or payoff matrix of a 2-player, 2-strategy game

The normal (or strategic form) game is usually represented by a matrix which shows the players, strategies, and payoffs (see the example to the right). More generally it can be represented by any function that associates a payoff for each player with every possible combination of actions. In the accompanying example there are two players; one chooses the row and the other chooses the column. Each player has two strategies, which are specified by the number of rows and the number of columns. The payoffs are provided in the interior. The first number is the payoff received by the row player (Player 1 in our example); the second is the payoff for the column player (Player 2 in our example). Suppose that Player 1 plays Up and that Player 2 plays Left. Then Player 1 gets a payoff of 4, and Player 2 gets 3.

When a game is presented in normal form, it is presumed that each player acts simultaneously or, at least, without knowing the actions of the other. If players have some information about the choices of other players, the game is usually presented in extensive form.

Every extensive-form game has an equivalent normal-form game; however, the transformation to normal form may result in an exponential blowup in the size of the representation, making it computationally impractical. (Leyton-Brown & Shoham 2008, p. 35)
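
For a game this small, the normal form can be searched exhaustively; the sketch below finds the cells of the matrix above from which neither player gains by deviating alone (the pure-strategy Nash equilibria, a concept discussed later in this article):

```python
from itertools import product

# payoffs[(row, col)] = (row player's payoff, column player's payoff)
payoffs = {
    ("Up", "Left"): (4, 3), ("Up", "Right"): (-1, -1),
    ("Down", "Left"): (0, 0), ("Down", "Right"): (3, 4),
}
rows, cols = ("Up", "Down"), ("Left", "Right")

for r, c in product(rows, cols):
    best_row = all(payoffs[(r, c)][0] >= payoffs[(q, c)][0] for q in rows)
    best_col = all(payoffs[(r, c)][1] >= payoffs[(r, q)][1] for q in cols)
    if best_row and best_col:
        print(r, c)  # prints (Up, Left) and (Down, Right)
```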

Characteristic function form

In cooperative games with transferable utility no individual payoffs are given. Instead, the characteristic function determines the payoff of each coalition. The standard assumption is that the empty coalition obtains a payoff of 0.

The origin of this form is to be found in the seminal book of von Neumann and Morgenstern who, when studying coalitional normal form games, assumed that when a coalition C forms, it plays against the complementary coalition (N \setminus C) as if they were playing a 2-player game. The equilibrium payoff of C is characteristic. Now there are different models to derive coalitional values from normal form games, but not all games in characteristic function form can be derived from normal form games.

Formally, a characteristic function form game (also known as a TU-game) is given as a pair (N, v), where N denotes a set of players and v : 2^N \longrightarrow \mathbb{R} is a characteristic function.
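
A sketch of such a pair (N, v) as plain Python data, with hypothetical coalition values:

```python
# A TU-game: v maps every coalition (subset of N) to a real value.
N = frozenset({1, 2, 3})
v = {
    frozenset(): 0,  # the empty coalition obtains 0, as assumed above
    frozenset({1}): 1, frozenset({2}): 1, frozenset({3}): 1,
    frozenset({1, 2}): 3, frozenset({1, 3}): 3, frozenset({2, 3}): 3,
    N: 6,
}
# This v is superadditive: merging disjoint coalitions never loses value.
print(v[N] >= v[frozenset({1})] + v[frozenset({2, 3})])  # True
```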

The characteristic function form has been generalised to games without the assumption of transferable utility.

Partition function form

The characteristic function form ignores the possible externalities of coalition formation. In the partition function form the payoff of a coalition depends not only on its members, but also on the way the rest of the players are partitioned (Thrall & Lucas 1963).

General and applied uses

As a method of applied mathematics, game theory has been used to study a wide variety of human and animal behaviors. It was initially developed in economics to understand a large collection of economic behaviors, including behaviors of firms, markets, and consumers. The use of game theory in the social sciences has expanded, and game theory has been applied to political, sociological, and psychological behaviors as well.

Game-theoretic analysis was initially used to study animal behavior by Ronald Fisher in the 1930s (although even Charles Darwin makes a few informal game-theoretic statements). This work predates the name "game theory", but it shares many important features with this field. The developments in economics were later applied to biology largely by John Maynard Smith in his book Evolution and the Theory of Games.

In addition to being used to predict and explain behavior, game theory has also been used to attempt to develop theories of ethical or normative behavior. In economics and philosophy, scholars have applied game theory to help in the understanding of good or proper behavior. Game-theoretic arguments of this type can be found as far back as Plato.[5]

Description and modeling

A three stage Centipede Game

The first known use is to describe and model how human populations behave. Some scholars believe that by finding the equilibria of games they can predict how actual human populations will behave when confronted with situations analogous to the game being studied. This particular view of game theory has come under recent criticism. First, it is criticized because the assumptions made by game theorists are often violated. Game theorists may assume players always act in a way to directly maximize their wins (the Homo economicus model), but in practice, human behavior often deviates from this model. Explanations of this phenomenon are many: irrationality, new models of deliberation, or even different motives (like that of altruism). Game theorists respond by comparing their assumptions to those used in physics. Thus while their assumptions do not always hold, they can treat game theory as a reasonable scientific ideal akin to the models used by physicists. However, additional criticism of this use of game theory has been levied because some experiments have demonstrated that individuals do not play equilibrium strategies. For instance, in the centipede game, guess 2/3 of the average game, and the dictator game, people regularly do not play Nash equilibria. There is an ongoing debate regarding the importance of these experiments.[6]

Alternatively, some authors claim that Nash equilibria do not provide predictions for human populations, but rather provide an explanation for why populations that play Nash equilibria remain in that state. However, the question of how populations reach those points remains open.

Some game theorists have turned to evolutionary game theory in order to resolve these worries. These models presume either no rationality or bounded rationality on the part of players. Despite the name, evolutionary game theory does not necessarily presume natural selection in the biological sense. Evolutionary game theory includes both biological as well as cultural evolution and also models of individual learning (for example, fictitious play dynamics).

Prescriptive or normative analysis

  Cooperate Defect
Cooperate -1, -1 -10, 0
Defect 0, -10 -5, -5
The Prisoner's Dilemma

On the other hand, some scholars see game theory not as a predictive tool for the behavior of human beings, but as a suggestion for how people ought to behave. Since a Nash equilibrium of a game constitutes one's best response to the actions of the other players, playing a strategy that is part of a Nash equilibrium seems appropriate. However, this use for game theory has also come under criticism. First, in some cases it is appropriate to play a non-equilibrium strategy if one expects others to play non-equilibrium strategies as well. For an example, see Guess 2/3 of the average.

Second, the Prisoner's dilemma presents another potential counterexample. In the Prisoner's Dilemma, each player pursuing his own self-interest leads both players to be worse off than had they not pursued their own self-interests.
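
The dilemma can be verified mechanically from the payoff table above:

```python
# pd[(p1_move, p2_move)] = (Player 1 payoff, Player 2 payoff)
pd = {
    ("C", "C"): (-1, -1), ("C", "D"): (-10, 0),
    ("D", "C"): (0, -10), ("D", "D"): (-5, -5),
}
# Defect strictly dominates Cooperate for Player 1 (the game is symmetric):
assert pd[("D", "C")][0] > pd[("C", "C")][0]  # 0 > -1
assert pd[("D", "D")][0] > pd[("C", "D")][0]  # -5 > -10
# ...yet mutual defection leaves both worse off than mutual cooperation:
assert pd[("D", "D")][0] < pd[("C", "C")][0]  # -5 < -1
```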

Economics and business

Economists have long used game theory to analyze a wide array of economic phenomena, including auctions, bargaining, duopolies, fair division, oligopolies, social network formation, and voting systems, and to model across such broad classifications as mathematical economics,[7] behavioral economics,[8] political economy,[9] and industrial organization.[10]

This research usually focuses on particular sets of strategies known as equilibria in games. These "solution concepts" are usually based on what is required by norms of rationality. In non-cooperative games, the most famous of these is the Nash equilibrium. A set of strategies is a Nash equilibrium if each represents a best response to the other strategies. So, if all the players are playing the strategies in a Nash equilibrium, they have no unilateral incentive to deviate, since their strategy is the best they can do given what others are doing.

The payoffs of the game are generally taken to represent the utility of individual players. Often in modeling situations the payoffs represent money, which presumably corresponds to an individual's utility. This assumption, however, can be faulty.

A prototypical paper on game theory in economics begins by presenting a game that is an abstraction of some particular economic situation. One or more solution concepts are chosen, and the author demonstrates which strategy sets in the presented game are equilibria of the appropriate type. Naturally one might wonder to what use this information should be put. Economists and business professors suggest two primary uses: descriptive and prescriptive.

Political science

The application of game theory to political science is focused in the overlapping areas of fair division, political economy, public choice, war bargaining, positive political theory, and social choice theory. In each of these areas, researchers have developed game-theoretic models in which the players are often voters, states, special interest groups, and politicians.

For early examples of game theory applied to political science, see the work of Anthony Downs. In his book An Economic Theory of Democracy (Downs 1957), he applies the Hotelling firm location model to the political process. In the Downsian model, political candidates commit to ideologies on a one-dimensional policy space. The theorist shows how the political candidates will converge to the ideology preferred by the median voter.

A game-theoretic explanation for democratic peace is that public and open debate in democracies sends clear and reliable information regarding their intentions to other states. In contrast, it is difficult to know the intentions of nondemocratic leaders, what effect concessions will have, and if promises will be kept. Thus there will be mistrust and unwillingness to make concessions if at least one of the parties in a dispute is a non-democracy (Levy & Razin 2003).

Biology

  Hawk Dove
Hawk 20, 20 80, 40
Dove 40, 80 60, 60
The hawk-dove game

Unlike economics, the payoffs for games in biology are often interpreted as corresponding to fitness. In addition, the focus has been less on equilibria that correspond to a notion of rationality and more on ones that would be maintained by evolutionary forces. The best known equilibrium in biology is the evolutionarily stable strategy (or ESS), first introduced in (Smith & Price 1973). Although its initial motivation did not involve any of the mental requirements of the Nash equilibrium, every ESS is a Nash equilibrium.

In biology, game theory has been used to understand many different phenomena. It was first used to explain the evolution (and stability) of the approximate 1:1 sex ratios. (Fisher 1930) suggested that the 1:1 sex ratios are a result of evolutionary forces acting on individuals who could be seen as trying to maximize their number of grandchildren.

Additionally, biologists have used evolutionary game theory and the ESS to explain the emergence of animal communication (Harper & Maynard Smith 2003). The analysis of signaling games and other communication games has provided some insight into the evolution of communication among animals. For example, the mobbing behavior of many species, in which a large number of prey animals attack a larger predator, seems to be an example of spontaneous emergent organization. Ants have also been shown to exhibit feed-forward behavior akin to fashion, see Butterfly Economics.

Biologists have used the game of chicken to analyze fighting behavior and territoriality.[citation needed]

Maynard Smith, in the preface to Evolution and the Theory of Games, writes, "paradoxically, it has turned out that game theory is more readily applied to biology than to the field of economic behaviour for which it was originally designed". Evolutionary game theory has been used to explain many seemingly incongruous phenomena in nature.[11]

One such phenomenon is known as biological altruism. This is a situation in which an organism appears to act in a way that benefits other organisms and is detrimental to itself. This is distinct from traditional notions of altruism because such actions are not conscious, but appear to be evolutionary adaptations to increase overall fitness. Examples can be found in species ranging from vampire bats that regurgitate blood they have obtained from a night's hunting and give it to group members who have failed to feed, to worker bees that care for the queen bee for their entire lives and never mate, to Vervet monkeys that warn group members of a predator's approach, even when it endangers that individual's chance of survival.[12] All of these actions increase the overall fitness of a group, but occur at a cost to the individual.

Evolutionary game theory explains this altruism with the idea of kin selection. Altruists discriminate between the individuals they help and favor relatives. Hamilton's rule explains the evolutionary reasoning behind this selection with the inequality c < b·r, where the cost (c) to the altruist must be less than the benefit (b) to the recipient multiplied by the coefficient of relatedness (r). The more closely related two organisms are, the higher the incidence of altruism becomes, because they share many of the same alleles. This means that the altruistic individual, by ensuring that the alleles of its close relative are passed on (through survival of its offspring), can forgo the option of having offspring itself, because the same number of alleles are passed on. Helping a sibling, for example (in diploid animals), has a coefficient of ½, because (on average) an individual shares ½ of the alleles in its sibling's offspring. Ensuring that enough of a sibling's offspring survive to adulthood precludes the necessity of the altruistic individual producing offspring.[12] The coefficient values depend heavily on the scope of the playing field; for example, if the choice of whom to favor includes all genetic living things, not just all relatives, and we assume the discrepancy between all humans accounts for only approximately 1% of the diversity in the playing field, a coefficient that was ½ in the smaller field becomes 0.995. Similarly, if information other than that of a genetic nature (e.g. epigenetics, religion, science, etc.) is considered to persist through time, the playing field becomes larger still, and the discrepancies smaller.
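
Hamilton's rule itself is a one-line comparison; a sketch with the relatedness coefficients mentioned above (the cost and benefit numbers are made up):

```python
# Hamilton's rule: altruism toward kin is favored when c < b * r.
def altruism_favored(cost, benefit, relatedness):
    return cost < benefit * relatedness

print(altruism_favored(cost=1.0, benefit=3.0, relatedness=0.5))    # True: a sibling
print(altruism_favored(cost=1.0, benefit=3.0, relatedness=0.125))  # False: a cousin
```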

Computer science and logic

Game theory has come to play an increasingly important role in logic and in computer science. Several logical theories have a basis in game semantics. In addition, computer scientists have used games to model interactive computations. Also, game theory provides a theoretical basis to the field of multi-agent systems.

Separately, game theory has played a role in online algorithms. In particular, the k-server problem has in the past been referred to in terms of games with moving costs and request-answer games (Ben David, Borodin & Karp et al. 1994). Yao's principle is a game-theoretic technique for proving lower bounds on the computational complexity of randomized algorithms, and especially of online algorithms.

The field of algorithmic game theory combines computer science concepts of complexity and algorithm design with game theory and economic theory. The emergence of the internet has motivated the development of algorithms for finding equilibria in games, markets, computational auctions, peer-to-peer systems, and security and information markets.[13]

Philosophy

  Stag Hare
Stag 3, 3 0, 2
Hare 2, 0 2, 2
Stag hunt

Game theory has been put to several uses in philosophy. Responding to two papers by W.V.O. Quine (1960, 1967), Lewis (1969) used game theory to develop a philosophical account of convention. In so doing, he provided the first analysis of common knowledge and employed it in analyzing play in coordination games. In addition, he first suggested that one can understand meaning in terms of signaling games. This later suggestion has been pursued by several philosophers since Lewis (Skyrms (1996), Grim, Kokalis, and Alai-Tafti et al. (2004)). Following Lewis's (1969) game-theoretic account of conventions, Ullmann Margalit (1977) and Bicchieri (2006) have developed theories of social norms that define them as Nash equilibria that result from transforming a mixed-motive game into a coordination game.[14]

Game theory has also challenged philosophers to think in terms of interactive epistemology: what it means for a collective to have common beliefs or knowledge, and what are the consequences of this knowledge for the social outcomes resulting from agents' interactions. Philosophers who have worked in this area include Bicchieri (1989, 1993),[15] Skyrms (1990),[16] and Stalnaker (1999).[17]

In ethics, some authors have attempted to pursue the project, begun by Thomas Hobbes, of deriving morality from self-interest. Since games like the Prisoner's dilemma present an apparent conflict between morality and self-interest, explaining why cooperation is required by self-interest is an important component of this project. This general strategy is a component of the general social contract view in political philosophy (for examples, see Gauthier (1986) and Kavka (1986)).[18]

Other authors have attempted to use evolutionary game theory in order to explain the emergence of human attitudes about morality and corresponding animal behaviors. These authors look at several games including the Prisoner's dilemma, Stag hunt, and the Nash bargaining game as providing an explanation for the emergence of attitudes about morality (see, e.g., Skyrms (1996, 2004) and Sober and Wilson (1999)).

Some assumptions used in parts of game theory have been challenged in philosophy. Psychological egoism, for instance, holds that rationality reduces to self-interest, a claim debated among philosophers (see the criticisms of psychological egoism).

Types of games

Cooperative or non-cooperative

A game is cooperative if the players are able to form binding commitments; for instance, the legal system may require them to adhere to their promises. In noncooperative games this is not possible.

Often it is assumed that communication among players is allowed in cooperative games, but not in noncooperative ones. This classification on two binary criteria has been rejected (Harsanyi 1974).

Of the two types of games, noncooperative games are able to model situations in the finest detail, producing more accurate results. Cooperative games focus on the game at large. Considerable efforts have been made to link the two approaches. The so-called Nash programme, which seeks to provide noncooperative foundations for cooperative solution concepts, has already established many of the cooperative solutions as noncooperative equilibria; a sketch of one cooperative solution concept, the Shapley value, follows below.

Hybrid games contain cooperative and non-cooperative elements. For instance, coalitions of players are formed in a cooperative game, but these play in a non-cooperative fashion.
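
To make one cooperative solution concept concrete, here is a minimal sketch computing the Shapley value of a three-player cooperative game; the characteristic function below is an illustrative assumption:

    from itertools import permutations

    # A cooperative game: the characteristic function v assigns to each
    # coalition the value it can guarantee itself (illustrative numbers).
    players = ("A", "B", "C")
    v = {frozenset(): 0,
         frozenset("A"): 1, frozenset("B"): 1, frozenset("C"): 0,
         frozenset("AB"): 4, frozenset("AC"): 3, frozenset("BC"): 2,
         frozenset("ABC"): 6}

    def shapley(players, v):
        """Average each player's marginal contribution over all join orders."""
        value = {p: 0.0 for p in players}
        orders = list(permutations(players))
        for order in orders:
            coalition = frozenset()
            for p in order:
                value[p] += v[coalition | {p}] - v[coalition]
                coalition = coalition | {p}
        return {p: value[p] / len(orders) for p in players}

    print(shapley(players, v))  # the three values sum to v(ABC) = 6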

Symmetric and asymmetric

         E       F
   E     1, 2    0, 0
   F     0, 0    1, 2
An asymmetric game (payoffs listed as row player, column player)

A symmetric game is a game where the payoffs for playing a particular strategy depend only on the other strategies employed, not on who is playing them. If the identities of the players can be changed without changing the payoff to the strategies, then a game is symmetric. Many of the commonly studied 2×2 games are symmetric. The standard representations of chicken, the prisoner's dilemma, and the stag hunt are all symmetric games. Some scholars would consider certain asymmetric games as examples of these games as well. However, the most common payoffs for each of these games are symmetric.

The most commonly studied asymmetric games are games where the players do not have identical strategy sets. For instance, the ultimatum game and similarly the dictator game have different strategies for each player. It is possible, however, for a game to have identical strategies for both players yet be asymmetric. For example, the game pictured above is asymmetric despite having identical strategy sets for both players; the sketch below checks the symmetry condition mechanically.
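
One mechanical test: writing A for the row player's payoff matrix and B for the column player's, a two-player game is symmetric exactly when B is the transpose of A. A minimal sketch applying the test to the game pictured above and to a standard prisoner's dilemma:

    def is_symmetric(A, B):
        """A bimatrix game (A, B) is symmetric iff B equals A transposed."""
        n = len(A)
        return all(B[i][j] == A[j][i] for i in range(n) for j in range(n))

    # The game from the table above: identical strategy sets {E, F},
    # asymmetric payoffs.
    A = [[1, 0], [0, 1]]  # row player's payoffs
    B = [[2, 0], [0, 2]]  # column player's payoffs
    print(is_symmetric(A, B))  # False

    # A standard prisoner's dilemma, by contrast, is symmetric.
    PD_row = [[3, 0], [5, 1]]
    PD_col = [[3, 5], [0, 1]]
    print(is_symmetric(PD_row, PD_col))  # True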

Zero-sum and non-zero-sum

         A        B
   A     –1, 1    3, –3
   B     0, 0     –2, 2
A zero-sum game (payoffs listed as row player, column player)

Zero-sum games are a special case of constant-sum games, in which choices by players can neither increase nor decrease the available resources. In zero-sum games the total benefit to all players in the game, for every combination of strategies, always adds to zero (more informally, a player benefits only at the equal expense of others). Poker exemplifies a zero-sum game (ignoring the possibility of the house's cut), because one wins exactly the amount one's opponents lose. Other zero-sum games include matching pennies and most classical board games including Go and chess.

Many games studied by game theorists (including the famous prisoner's dilemma) are non-zero-sum games, because some outcomes have net results greater or less than zero. Informally, in non-zero-sum games, a gain by one player does not necessarily correspond with a loss by another.

Constant-sum games correspond to activities like theft and gambling, but not to the fundamental economic situation in which there are potential gains from trade. It is possible to transform any game into a (possibly asymmetric) zero-sum game by adding an additional dummy player (often called "the board"), whose losses compensate the players' net winnings.
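
For two-player zero-sum games, optimal play is given by the maximin mixed strategy. A minimal sketch solving the game from the table above by a simple grid search (for larger games one would solve a linear program instead):

    # Row player's payoffs in the zero-sum game pictured above; the
    # column player's payoffs are the negatives of these.
    A = [[-1, 3],
         [ 0, -2]]

    def worst_case(p):
        """Row player's expected payoff under mix (p, 1 - p), against
        the column player's most damaging pure response."""
        return min(p * A[0][j] + (1 - p) * A[1][j] for j in range(2))

    # Grid search over the probability of playing the first row.
    best_p = max((i / 1000 for i in range(1001)), key=worst_case)
    print(best_p, worst_case(best_p))
    # Near p = 1/3 the value of the game is -1/3: even under optimal
    # play the row player loses 1/3 per round on average.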

Simultaneous and sequential

Simultaneous games are games where both players move simultaneously, or, if they do not move simultaneously, where the later players are unaware of the earlier players' actions (making the moves effectively simultaneous). Sequential games (or dynamic games) are games where later players have some knowledge about earlier actions. This need not be perfect information about every action of earlier players; it might be very little knowledge. For instance, a player may know that an earlier player did not perform one particular action, while not knowing which of the other available actions the first player actually performed.

The difference between simultaneous and sequential games is captured in the different representations discussed above. Often, normal form is used to represent simultaneous games, and extensive form is used to represent sequential ones. The transformation from extensive to normal form is one-way: multiple extensive-form games can correspond to the same normal form. Consequently, notions of equilibrium for simultaneous games are insufficient for reasoning about sequential games; see subgame perfection.
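
Subgame-perfect play in a sequential game can be computed by backward induction on the extensive form. A minimal sketch over a toy two-stage tree; the tree and its payoffs are illustrative assumptions:

    # A node is either a leaf, i.e. a tuple of payoffs (player 0, player 1),
    # or a pair (player_to_move, list_of_children).

    def backward_induction(node):
        """Return the payoff profile reached under subgame-perfect play."""
        if isinstance(node, tuple):  # leaf: payoff profile
            return node
        player, children = node
        outcomes = [backward_induction(child) for child in children]
        # The player to move picks the branch maximizing their own payoff.
        return max(outcomes, key=lambda payoff: payoff[player])

    # Player 0 moves first; player 1 responds in each resulting subgame.
    game = (0, [(1, [(3, 1), (0, 0)]),   # after player 0's first action
                (1, [(2, 2), (1, 3)])])  # after player 0's second action
    print(backward_induction(game))  # (3, 1): player 0 takes the first
                                     # action, anticipating player 1's reply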

Perfect information and imperfect information

A game of imperfect information (the dotted line represents ignorance on the part of player 2, formally called an information set)

An important subset of sequential games consists of games of perfect information. A game is one of perfect information if all players know the moves previously made by all other players. Thus, only sequential games can be games of perfect information, since in simultaneous games not every player knows the actions of the others. Most games studied in game theory are imperfect-information games, although there are some interesting examples of perfect-information games, including the ultimatum game and centipede game. Recreational games of perfect information include chess, go, and mancala. Many card games are games of imperfect information, for instance poker or contract bridge.

Perfect information is often confused with complete information, which is a similar concept. Complete information requires that every player know the strategies and payoffs of the other players, but not necessarily the actions. Games of incomplete information can, however, be reduced to games of imperfect information by introducing "moves by nature". (Leyton-Brown & Shoham 2008, p. 60)

Combinatorial games

Games in which the difficulty of finding an optimal strategy stems from the multiplicity of possible moves are called combinatorial games. Examples include chess and go. Games that involve imperfect or incomplete information may also have a strong combinatorial character, for instance backgammon. There is no unified theory addressing combinatorial elements in games. There are, however, mathematical tools that can solve particular problems and answer some general questions.[19]

Games of perfect information have been studied in combinatorial game theory, which has developed novel representations, e.g. surreal numbers, as well as combinatorial and algebraic (and sometimes non-constructive) proof methods to solve games of certain types, including some "loopy" games that may result in infinitely long sequences of moves. These methods address games with higher combinatorial complexity than those usually considered in traditional (or "economic") game theory.[20][21] A typical game that has been solved this way is hex. A related field of study, drawing from computational complexity theory, is game complexity, which is concerned with estimating the computational difficulty of finding optimal strategies.[22]

Research in artificial intelligence has addressed both perfect- and imperfect- (or incomplete-) information games that have very complex combinatorial structure (like chess, go, or backgammon) for which no provable optimal strategies have been found. The practical solutions involve computational heuristics, like alpha-beta pruning or the use of artificial neural networks trained by reinforcement learning, which make games more tractable in computing practice.[23][19]
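
For a taste of such heuristics, here is a minimal alpha-beta pruning sketch over a toy game tree (nested lists are internal nodes, numbers are leaf evaluations from the maximizing player's point of view; the tree itself is an illustrative assumption):

    import math

    def alphabeta(node, alpha=-math.inf, beta=math.inf, maximizing=True):
        """Minimax value of a tree, skipping branches that cannot matter."""
        if not isinstance(node, list):  # leaf evaluation
            return node
        if maximizing:
            value = -math.inf
            for child in node:
                value = max(value, alphabeta(child, alpha, beta, False))
                alpha = max(alpha, value)
                if beta <= alpha:  # prune: the minimizer avoids this branch
                    break
            return value
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:      # prune: the maximizer avoids this branch
                break
        return value

    tree = [[3, 5], [6, [9, 8]], [1, 2]]
    print(alphabeta(tree))  # 6, computed without visiting every leaf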

Infinitely long games

Games, as studied by economists and real-world game players, are generally finished in finitely many moves. Pure mathematicians are not so constrained, and set theorists in particular study games that last for infinitely many moves, with the winner (or other payoff) not known until after all those moves are completed.

The focus of attention is usually not so much on the best way to play such a game as on whether one or the other player has a winning strategy. (It can be proven, using the axiom of choice, that there are games, even games of perfect information in which the only outcomes are "win" or "lose", for which neither player has a winning strategy.) The existence of such strategies, for cleverly designed games, has important consequences in descriptive set theory.

Discrete and continuous games

Much of game theory is concerned with finite, discrete games, that have a finite number of players, moves, events, outcomes, etc. Many concepts can be extended, however. Continuous games allow players to choose a strategy from a continuous strategy set. For instance, Cournot competition is typically modeled with players' strategies being any non-negative quantities, including fractional quantities.

Differential games such as the continuous pursuit and evasion game are continuous games.
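
As a concrete continuous game, here is a minimal sketch computing the equilibrium of a Cournot duopoly by best-response iteration; the linear demand and cost parameters are illustrative assumptions:

    # Cournot duopoly: inverse demand P = a - b * (q1 + q2), constant
    # marginal cost c (illustrative parameter values).
    a, b, c = 100.0, 1.0, 10.0

    def best_response(q_other):
        """Quantity maximizing profit (P - c) * q given the rival's output."""
        return max(0.0, (a - c - b * q_other) / (2 * b))

    q1 = q2 = 0.0
    for _ in range(100):  # iterate best responses toward the fixed point
        q1, q2 = best_response(q2), best_response(q1)

    print(q1, q2)  # both converge to (a - c) / (3 * b) = 30, the
                   # Cournot equilibrium quantity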

Many-player and population games

Games with an arbitrary, but finite number of players are often called n-person games (Luce & Raiffa 1957). Evolutionary game theory considers games involving a population of decision makers, where the frequency with which a particular decision is made can change over time in response to the decisions made by all individuals in the population. In biology, this is intended to model (biological) evolution, where genetically programmed organisms pass along some of their strategy programming to their offspring. In economics, the same theory is intended to capture population changes because people play the game many times within their lifetime, and consciously (and perhaps rationally) switch strategies. (Webb 2007)
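
A standard formal tool here is the replicator dynamic, under which a strategy's population share grows when its payoff exceeds the population average. A minimal discrete-time sketch for the hawk-dove game; the payoff parameters (resource value V = 2, fight cost C = 4) are illustrative assumptions:

    # Hawk-dove payoffs with V = 2, C = 4: hawk vs hawk gives (V - C) / 2.
    payoff = [[-1.0, 2.0],   # hawk's payoff vs (hawk, dove)
              [ 0.0, 1.0]]   # dove's payoff vs (hawk, dove)

    x = 0.1    # initial share of hawks in the population
    dt = 0.1   # Euler step size
    for _ in range(2000):
        f_hawk = x * payoff[0][0] + (1 - x) * payoff[0][1]
        f_dove = x * payoff[1][0] + (1 - x) * payoff[1][1]
        f_avg = x * f_hawk + (1 - x) * f_dove
        x += dt * x * (f_hawk - f_avg)  # replicator equation, discretized
    print(x)  # approaches V / C = 0.5, the evolutionarily stable mix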

Stochastic outcomes (and relation to other fields)

Individual decision problems with stochastic outcomes are sometimes considered "one-player games". Some authors do not consider these situations game-theoretic; they may be modeled using similar tools within the related disciplines of decision theory, operations research, and areas of artificial intelligence, particularly AI planning (with uncertainty) and multi-agent systems. Although these fields may have different motivations, the mathematics involved is substantially the same, e.g. using Markov decision processes (MDPs).

Stochastic outcomes can also be modeled in terms of game theory by adding a randomly acting player who makes "chance moves", also known as "moves by nature" (Osborne & Rubinstein 1994). This player is not typically considered a third player in what is otherwise a two-player game, but merely serves to provide a roll of the dice where required by the game.

For some problems, different approaches to modeling stochastic outcomes may lead to different solutions. For example, the difference in approach between MDPs and the minimax solution is that the latter considers the worst case over a set of adversarial moves, rather than reasoning in expectation about these moves given a fixed probability distribution. The minimax approach may be advantageous where stochastic models of uncertainty are not available, but may also overweight extremely unlikely (but costly) events, dramatically swaying the strategy in such scenarios if it is assumed that an adversary can force such an event to happen.[24] (See black swan theory for more discussion on this kind of modeling issue, particularly as it relates to predicting and limiting losses in investment banking.)
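
The contrast is easy to demonstrate. In the minimal sketch below, the same two actions are ranked in expectation (the MDP view) and by worst case (the minimax view), and the two criteria disagree; all numbers are illustrative assumptions:

    # Each action maps to (probability, payoff) outcomes; the
    # probabilities are used only by the expected-value criterion.
    actions = {"risky": [(0.99, 10.0), (0.01, -50.0)],
               "safe":  [(1.00, 5.0)]}

    def expected(outcomes):
        return sum(p * v for p, v in outcomes)

    def worst_case(outcomes):
        return min(v for _, v in outcomes)

    print(max(actions, key=lambda a: expected(actions[a])))
    # "risky": its expectation is 0.99 * 10 - 0.01 * 50 = 9.4 > 5
    print(max(actions, key=lambda a: worst_case(actions[a])))
    # "safe": minimax guards against the -50 outcome as if an
    # adversary could force it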

General models that include all elements of stochastic outcomes, adversaries, and partial or noisy observability (of moves by other players) have also been studied. The "gold standard" is considered to be the partially observable stochastic game (POSG), but few realistic problems are computationally feasible in POSG representation.[24]

Metagames

These are games the play of which is the development of the rules for another game, the target or subject game. Metagames seek to maximize the utility value of the rule set developed. The theory of metagames is related to mechanism design theory.

The term metagame analysis is also used to refer to a practical approach developed by Nigel Howard (Howard 1971) whereby a situation is framed as a strategic game in which stakeholders try to realise their objectives by means of the options available to them. Subsequent developments have led to the formulation of drama theory.

See also

Notes

  1. ^ James Madison, Vices of the Political System of the United States, April 1787.
  2. ^ Jack Rakove, "James Madison and the Constitution", History Now, Issue 13, September 2007.
  3. ^ J. v. Neumann (1928). "Zur Theorie der Gesellschaftsspiele," Mathematische Annalen, 100(1), pp. 295-320. English translation: "On the Theory of Games of Strategy," in A. W. Tucker and R. D. Luce, ed. (1959), Contributions to the Theory of Games, v. 4, pp. 13-42.
  4. ^ Although common knowledge was first discussed by the philosopher David Lewis in his dissertation (and later book) Convention in the late 1960s, it was not widely considered by economists until Robert Aumann's work in the 1970s.
  5. ^ Ross, Don. "Game Theory". The Stanford Encyclopedia of Philosophy (Spring 2008 Edition). Edward N. Zalta (ed.). Retrieved 2008-08-21.
  6. ^ Experimental work in game theory goes by many names; experimental economics, behavioral economics, and behavioural game theory are several. For a recent discussion of this field see Camerer (2003).
  7. ^ Kenneth J. Arrow, and Michael D. Intriligator, ed. (1981), Handbook of Mathematical Economics (1981), v. 1.
  8. ^ Faruk Gul, 2008. "behavioural economics and game theory," The New Palgrave Dictionary of Economics, 2nd Edition. Abstract.
  9. ^ Martin Shubik (1981). "Game Theory Models and Methods in Political Economy," in Handbook of Mathematical Economics, v. 1, pp. 285-330.
  10. ^ Jean Tirole, 1988. The Theory of Industrial Organization, MIT Press. Description and chapter-preview links via scroll down.
  11. ^ Evolutionary Game Theory (Stanford Encyclopedia of Philosophy)
  12. a b Biological Altruism (Stanford Encyclopedia of Philosophy)
  13. ^ Algorithmic Game Theory, Cambridge University Press.
  14. ^ E. Ullmann-Margalit, The Emergence of Norms, Oxford University Press, 1977. C. Bicchieri, The Grammar of Society: the Nature and Dynamics of Social Norms, Cambridge University Press, 2006.
  15. ^ "Self-Refuting Theories of Strategic Interaction: A Paradox of Common Knowledge ", Erkenntnis 30, 1989: 69-85. See also Rationality and Coordination, Cambridge University Press, 1993.
  16. ^ The Dynamics of Rational Deliberation, Harvard University Press, 1990.
  17. ^ "Knowledge, Belief, and Counterfactual Reasoning in Games." In Cristina Bicchieri, Richard Jeffrey, and Brian Skyrms, eds., The Logic of Strategy. New York: Oxford University Press, 1999.
  18. ^ For a more detailed discussion of the use of Game Theory in ethics see the Stanford Encyclopedia of Philosophy's entry game theory and ethics.
  19. a b Jörg Bewersdorff (2005). Luck, logic, and white lies: the mathematics of games. A K Peters, Ltd. pp. ix-xii and chapter 31. ISBN 9781568812106.
  20. ^ Albert, Michael H.; Nowakowski, Richard J.; Wolfe, David (2007). Lessons in Play: An Introduction to Combinatorial Game Theory. A K Peters, Ltd. pp. 3-4. ISBN 978-1-56881-277-9.
  21. ^ Beck, József (2008). Combinatorial games: tic-tac-toe theory. Cambridge University Press. pp. 1-3. ISBN 9780521461009.
  22. ^ Robert A. Hearn; Erik D. Demaine (2009). Games, Puzzles, and Computation. A K Peters, Ltd. ISBN 9781568813226.
  23. ^ M. Tim Jones (2008). Artificial Intelligence: A Systems Approach. Jones & Bartlett Learning. pp. 106–118. ISBN 9780763773373.
  24. a b Hugh Brendan McMahan (2006), Robust Planning in Domains with Stochastic Outcomes, Adversaries, and Partial Observability, CMU-CS-06-166, pp. 3-4

References and further reading

Textbooks and general references

"game theory" by Robert J. Aumann. Abstract.
"game theory in economics, origins of," by Robert Leonard. Abstract.
"behavioural economics and game theory" by Faruk GulAbstract.
  • Published in Europe as Robert Gibbons (2001), A Primer in Game Theory, London: Harvester Wheatsheaf, ISBN 978-0-7450-1159-2.

Historically important texts

Other print references

Websites

 

22:07 Published in Game theory | Permalink | Comments (0)

Mathematics: compiled from the best authors and intended to be the ..., Volume 1

22:03 | Permalink | Comments (0)

Mathematics: frontiers and perspectives, by V. I. Arnold

22:02 | Permalink | Comments (0)

Mathematics: the new golden age, by Keith J. Devlin

22:01 | Permalink | Comments (0)

Mathematics, by Timothy Gowers

22:00 | Permalink | Comments (0)