Thursday, 15 October 2009

Technological evolution

We need a measure of technological sophistication.

Technology changes through history, and technological sophistication is closely related to scale and societal eigenmode.

The internet, for example, makes possible and is made possible by the high scale of the modern world.

  • It should be obvious that the internet makes the modern world possible. Consider the direct impact on many businesses if their web and email facilities were suddenly shut off, and consider the indirect impact on many others.
  • As for the modern world making the internet possible, imagine a group of, say, a hundred idealists who decide to cut themselves off on a desert island. Could they produce or maintain all the familiar internet facilities like Google, Amazon and Wikipedia? Obviously not. Even if they took this technology with them, they would soon fall behind what was happening in the outside world, where thousands, even millions, of people are continually advancing the relevant services and underlying software.

In order to talk comparatively about technological change, we need a uniform way of describing the degree of sophistication of any given technology. This has to be applicable to everything from a stone axe to a Saturn V rocket and beyond.

Technological sophistication reflects four factors relating to the creation of an artefact instantiating that technology:
  1. The sophistication of the inputs or precursor processes and materials.
  2. The amount of effort needed for preparation (e.g. assembling the materials in one place).
  3. The amount of effort needed for actually producing the artefact.
  4. The amount of skill required.

The higher the skill, the higher the effort of preparation and production, and the higher the sophistication of precursors or inputs, the greater the sophistication of the technology.
  • A crude stone axe requires no input except a stone, which is about as unsophisticated as one can get, and minimal preparation. It may take some skill but this is relatively easily acquired, and the process of production may involve only a few minutes of effort.
  • A space rocket has very sophisticated inputs, including specialist plastics and alloys, complex microelectronics, and a large ground-based infrastructure for mission control. Preparation and production may take many years of effort by many people, and they will be drawing on an extensive education, from kindergarten through university to specific on-the-job training.

We can reduce the four factors to a common form by considering them in terms of time resources (also known as effort), i.e. the number of people involved in each activity multiplied by the amount of time each person contributes. Specifically, we define technological sophistication as equivalent to total time resources to produce the artefact:
t = tm + tp + ts + Σi ti
where
t = total time resource to produce artefact ( ≡ technological sophistication)
tm = time resource for actually making the artefact
tp = time resource for preparation
ts = time resource for skill acquisition
ti = total time resource to produce input i ( ≡ technological sophistication of input i)

Technological sophistication therefore has units of person-seconds (or equivalently person-hours, person-days, person-years, whichever is most suitable). Note that technological sophistication is a characteristic of an artefact, not of a society or of a period in history.

Technological sophistication can be thought of as the amount of time it would take one person, starting from scratch, to manufacture the artefact in question, including all its precursors and materials. If the artefact were, for example, a Saturn V, the person would have to begin by learning basic geology and making a spade or pick in preparation for mining the ore to produce the metal from which the rocket's parts would eventually be constructed. The technological sophistication of the rocket could amount to many human lifetimes.
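
As a rough sketch of how the definition works in practice, the recursive sum can be written in a few lines of Python. The figures below are arbitrary placeholders, purely to show the shape of the calculation, not real estimates:

```python
# Sketch of the definition: total time resource t for an artefact is
# making + preparation + skill acquisition, plus the same measure applied
# recursively to each input. All figures here are illustrative placeholders.

def sophistication(artefact):
    """Return t = tm + tp + ts + sum of ti over all inputs (person-seconds)."""
    return (artefact["making"]
            + artefact["preparation"]
            + artefact["skill"]
            + sum(sophistication(i) for i in artefact["inputs"]))

# Hypothetical example: a hafted axe whose single input is a crude stone blade.
stone_blade = {"making": 60, "preparation": 300, "skill": 600, "inputs": []}
hafted_axe = {"making": 1800, "preparation": 1800, "skill": 28_800,
              "inputs": [stone_blade]}

print(sophistication(stone_blade))  # 960, i.e. roughly 1000 person-seconds
print(sophistication(hafted_axe))   # 33,360 person-seconds
```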

The actual time resources going into the creation of an artefact are not, in general, equal to the theoretical time resources used in the above definition of technological sophistication. This is because few artefacts are made starting from scratch. Instead, effort is amortised over many artefacts.
  • For instance, once a digger is available for mining ore, it can be used on many projects, not just to produce the one Saturn V. The time resources absorbed by producing the digger are in reality shared across many projects, and the Saturn V is responsible only for a small fraction of that effort.
  • Similarly, if multiple copies of an artefact are produced, the effort for skill acquisition, and possibly some of the preparation, does not need to be repeated. The skill acquisition and preparation time resources per artefact are therefore a fraction of what they would be for just one artefact.
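
A back-of-envelope illustration of the difference, with made-up numbers:

```python
# Illustrative only: how effort per artefact falls when a fixed cost is shared.
digger_cost = 1_000_000      # person-seconds to produce the digger (hypothetical)
projects_served = 100        # the digger is used on many projects, not just one
direct_cost = 50_000         # person-seconds spent directly on each artefact

from_scratch = digger_cost + direct_cost                  # 1,050,000 (the definition)
amortised = digger_cost / projects_served + direct_cost   # 60,000 in practice
```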

Nevertheless, it makes sense to define technological sophistication as if one were starting from scratch while producing only enough of everything to manufacture one final artefact, even though that does not happen in practice. It is this definition that gives the truest account of what goes into making an artefact.
  • Consider writing a letter on a computer versus writing it with quill pen and parchment. The effort involved in each case may be about the same (e.g. it could take an hour to produce the letter either way). This is also true of preparation (switching on the computer, trimming the quill pen) and skill acquisition (learning to type, learning proper calligraphy)--the effort may be about the same in each case. Even the time resources devoted to the inputs might be similar if the time to build a computer is about the same as the time to prepare a piece of parchment from animal skin. Therefore, if we considered only the actual time resources going into the modern and medieval letters, there would seem to be no difference in technological sophistication. However, there obviously is a big difference in technological sophistication, which is reflected in the fact that a lot more effort and knowhow went into the development of the modern computer than ever went into the development of parchment and quill pens. This is what our definition of technological sophistication captures by assuming one starts absolutely from scratch.

  • Consider also that one person probably could produce a parchment letter from scratch (e.g. starting by tanning the goatskin and mixing soot and egg white to make the ink* etc.). However, to make a computer starting from scratch (i.e. beginning at the level of mining the ore) would take a lifetime, or probably several lifetimes. The time resources going into a computer are so high that, to make computers feasible/affordable, the time resources have to be amortised over many units, both for the final artefact and for the components from which it is built. This industry can only survive if it is done on a large scale, serving a large customer base. In this respect, medieval society was neither populous nor connected enough to support computing, and parchment-based letter writing technology was all that was feasible.
*This is merely illustration, not an accurate description of how to make medieval ink.
To demonstrate this definition of technological sophistication, I will calculate the changing sophistication of cutting tools, from the stone age onwards.

I cannot provide absolutely accurate values of technological sophistication, especially for the more complex technologies. This would require a vast amount of research. The figures given below are only estimates. My main purpose is to show how the definition of technological sophistication works in practice.

The first cutting tools used by humans were made of stone (=lithics). The tools of the palaeolithic and mesolithic (old and middle stone ages) can be classified into five lithic modes, reflecting increasing levels of sophistication (see Grahame Clark, World prehistory: a new outline, 2nd edn [1969]). The cutting tools of the neolithic (new stone age) were of higher sophistication again. After this came the successive increases in sophistication of copper, bronze and finally iron or steel tools.


Mode 1 stone tools involve the creation of a cutting edge by the application of a few sharp blows from another stone. This requires skill, but not much effort, and little attention is paid to the final form, which is rough and ready. The tool may either be the stone or one of the flakes chipped from it. Sometimes the tool is 'retouched' by chipping off a few small flakes to restore a cutting edge after it has been blunted or broken in use.
Mode 1 stone tools: typically prepared with just a few blows, ad hoc in size and shape


Mode 1 tools were already in use with early hominids in Africa, 2.5 million years before the appearance of modern humans. Some of the tools used by Australian aborigines, who are fully modern humans, continue to belong to this most basic category.

-- Sophistication estimate --

Factor | Discussion | Time resources (person-seconds)
Inputs | None | 0
Skill acquisition | Ten minutes to pick up the basic technique, though performance would improve with practice | 600
Preparation | 5 minutes to select a suitable stone and hammer stone. Any kind of stone lying around would be suitable, provided it was of reasonable size and shape. | 300
Manufacture | 1 minute to knock off a few chips | 60
TOTAL | | ≈ 1000



Mode 2 stone tools require at least twice as many blows as a Mode 1 tool, and they are made to take a definite, standardised, symmetrical form, that of the classic hand axe. Instead of chipping with another stone, a soft implement of wood, antler or bone is typically used, often with pressure flaking, to provide fine control over the shape. The removed flakes are not used.


Mode 2 stone tools: chipped all the way round with a soft hammer (wood, bone) to achieve a definite symmetrical form, requiring at least a dozen blows


Mode 2 tools were in use with pre-human hominids from about 1.5 million years before the emergence of humans. The technology developed in Africa and was carried into Europe and Asia by the pre-human hominids that colonised these regions from around 1 million years ago. Again, Mode 2 tools have continued in use with the Australian aborigines.

-- Key innovation --

Specialised ancillary tool (wood etc. hammer); aim of producing a repeatable, pre-conceived form.


-- Sophistication estimate --

Factor | Discussion | Time resources (person-seconds)
Inputs | The bone/wood/antler flaking tool needs to be sourced and prepared, cutting it to the right length and maybe shaping it a bit. Perhaps twenty minutes. | 1200
Skill acquisition | It should be possible to get the technique (from sourcing the stone and flaking tool to the design of the axe) in an hour. | 3600
Preparation | A more specific size, shape and type of stone is required. Going to a likely site and selecting a suitable stone might take about half an hour. | 1800
Manufacture | More blows are required and more careful attention, in order to get the symmetrical shape. Perhaps 5 minutes. | 300
TOTAL | | ≈ 7000



Mode 3 stone tools involve the Levallois technique, in which a stone core is first carefully prepared and then a single large flake is struck off from it with a sharp blow. In contrast to Mode 2, where the shape emerges gradually, allowing some trial and error, this technique requires a thorough understanding of how flint fractures and an ability to picture in advance the flake that will be produced. Preparation of the core requires a hundred or more shaping blows before the final blow that removes the flake. However, the precise shape of the flake is not particularly standardised.


Mode 3 stone tools: struck with one movement from a carefully prepared core, requiring around a hundred preliminary blows along with the expertise and imagination necessary to envisage how the stone will fracture at the last critical blow


Mode 3 tools are associated with the near-human Neanderthals and were in use from about 200,000 years ago. The first modern humans also sometimes used Mode 3 tools, and indeed the Australian aborigines have never used anything more than Mode 3.

-- Key innovation --

Extensive preparatory work during which finished item is not apparent.


-- Sophistication estimate --

Factor | Discussion | Time resources (person-seconds)
Inputs | Again a special tool is used for flaking. To prepare it: twenty minutes. | 1200
Skill acquisition | A period of practising more basic techniques would be needed to develop the necessary understanding of stone's characteristics. One 8-hour day. | 28,800
Preparation | Special types of stone, similar to Mode 2, would be required. Fetching time: 1 hour | 3600
Manufacture | A long period of shaping the stone is required before striking off the final product: 10 minutes | 600
TOTAL | | ≈ 35,000



Mode 4 stone tools are based on long, narrow blades with two sharp edges. These are struck from a core whose preparation is more complicated than for Mode 3, requiring some 250 blows with a bone rather than stone hammer, but then yielding five times as many tools from one block of stone. Blades are also versatile. For example, one edge may be blunted, to create a scraper, or the blade may be shaped into a burin, which has a sharp point and can be used to gouge holes in other materials.

Mode 4 stone tools: struck successively from a prepared core, having long sharp edges and possibly further shaped into specialised tools such as the burin (right), used for drilling holes


Mode 4 tools came into use among fully modern humans at the beginning of the Upper Palaeolithic, i.e. G 1 (c. 50,000 years ago). Whereas Mode 3 tool users stuck to stone almost exclusively, blade tools are associated with equal numbers of tools made from bone and antler. Only modern humans used Mode 4 tools, but not all modern humans used them, since the Australian aborigines and some extinct cultures of Southeast Asia never did.

-- Key innovation --

Preparatory work to produce savings downstream as many blades can be mass-produced from one core; creation of tools to make tools (e.g. burin is used for making holes in bone/ivory to produce needles).


-- Sophistication estimate --

Factor | Discussion | Time resources (person-seconds)
Inputs | Again a specialist hammer: twenty minutes | 1200
Skill acquisition | Much practice is needed for genuine competence: two 8-hour days | 57,600
Preparation | Greater care is needed in selecting the best stone (flint or similar). This might take a day (8 hours) to fetch. In reality, stone might be traded so that people would not have to find it themselves, but this is the sort of efficiency saving we ignore in the calculation of technological sophistication. | 28,800
Manufacture | The preparation of the core requires 250 blows and the blade is further refined after being struck: 25 minutes | 1500
TOTAL | | ≈ 90,000



Mode 5 stone tools consist of microliths, i.e. small flakes of around an inch long, or less. They come in many precise forms, including triangle, rectangle, rhombus, trapezium, crescent and leaf-shape. They are not complete in themselves but belong to composite tools, such as knives, sickles, spears, harpoons and arrows, with several microliths being fixed into a bone or wooden handle or shaft using resin and possibly some kind of fibre.


Mode 5 stone tools: tiny microliths, chipped into precise shapes and stuck into wooden or bone handles/shafts, using resin and fibre, to produce composite tools such as an arrow (left, with trapezoid head) or harpoon (right)


Mode 5 tools appeared in Africa and India around G 475-875 (40,000-30,000 years ago). By G 1275-1675 (20,000-10,000 years ago) they were in use almost everywhere. There was, however, much more variation than for Modes 4 and below, with groups only a hundred miles apart favouring different shapes and styles of microlith. The greater sophistication of Mode 5 technology is also apparent from the way they were associated with simple 'machines' multiplying human muscle power, namely the bow and the spear thrower, both of which were in use by G 1475 (15,000 years ago, the bow may have been in use much earlier).

-- Key innovation --

Complex composite tools, themselves part of compound systems (e.g. bow and arrow).


-- Sophistication estimate --

Factor | Discussion | Time resources (person-seconds)
Inputs | The inputs are finished stone blades (the microliths). I will assume these have the technological sophistication calculated above for Mode 4 (90,000). Another input is string, for which I will conservatively assume a technological sophistication of 8 hours. | 118,800
Skill acquisition | Training is needed in sourcing resin, carving a stick to the right size and shape, and hafting the microliths to it. 8 hours | 28,800
Preparation | Obtaining the resin and a suitable stick (the string and microliths are already available, having just been made). Half an hour. | 1800
Manufacture | Carving the stick and attaching the microliths. Half an hour. | 1800
TOTAL | | ≈ 150,000



Neolithic stone tools involve the imposition of a preconceived shape on a piece of stone. This is either by minutely detailed chipping, to create arrowheads and daggers, or by polishing, to create axes and hammers. In both cases, the form transcends the material of which it is made, in the sense that the object's function, rather than the behaviour of stone, is the primary driver. One is looking at an object that happens to be made of stone, rather than at a stone that has been hacked into a useful shape.


Neolithic stone tools: (not to scale) standardised forms determined by the intended function rather than by the properties of the stone, and involving either minutely detailed chipping to create arrowheads (left) and daggers (centre left), or polishing to create axeheads (centre right) and socketed axe-hammers (right)


Neolithic stone tools: (continued) a neolithic polished axehead, and the tool as it would have been used


The beginning of the Neolithic, and hence of tools like this, is synonymous with the beginning of farming around G 1600 (10,000 BC). The technique of polishing axeheads was perhaps suggested by the technique of grinding corn between two stones, where the stones became smooth as they rubbed against each other.

This technology emerged first in North Africa and the Middle East, and later in Europe, Asia and the Americas. It continued after the invention of metalworking, among people who could not afford or obtain metal tools. The above dagger dates from around G 1940 (1500 BC) and seems to have been inspired by bronze weapons.

Besides their practical purpose, polished axeheads had value as a medium of exchange and store of wealth. The clip below is of a hoard of axeheads found in a burial mound in Brittany, France, and shows they must have been produced in huge quantities.



-- Key innovation --

Form determined by function rather than by properties of underlying material.


-- Sophistication estimate --

Factor | Discussion | Time resources (person-seconds)
Inputs | A suitable block of stone would need to be prepared as a grinding platform (8 hours). Animal hide would need to be obtained (by hunting) and prepared for binding the axe in a tool (4 hours). An initial set of stone tools would be needed for the carving and cutting tasks associated with these and subsequent activities (assume sophistication of Mode 4 tools: 90,000 person-secs). | 133,200
Skill acquisition | The basic grinding/polishing technique could be picked up quite easily, although to create sharp, smooth and symmetrical axes would require longer practice. 2 hours | 7200
Preparation | The hide is assumed available. Other parts to be sourced and fetched are: the stone to be polished, resin for gluing it in the handle, the handle itself, and one or more abrasives (sand) to be used in polishing. Total: 1.5 hours | 5400
Manufacture | The stone would be roughed out by chipping then polished by rubbing against the platform, using successively finer abrasives to get the final smooth surface (1 day). It would then be mounted in the handle (1 hour). | 32,400
TOTAL | | ≈ 180,000



Copper tools are made either by beating the solid metal into shape or by melting it and casting it in moulds. In a very few places, such as parts of the North American Great Lakes, copper can be taken from the ground in virtually pure form. However, most of the time it has to be extracted from an ore by heating. It thus requires an extra stage of transformation compared to the shaping of a stone. Yet unlike a stone, the molten metal can be cast into arbitrary forms, and once a mould has been produced, identical copies can be turned out one after the other. Copper is also less brittle than stone and, if broken, can be melted down and recast. Since copper is not nearly as common as stone, the widespread use of copper requires long-distance exchange between producers and consumers. Typically, the ore is refined close to the mine location then transported in the form of standardised ingots.

Copper tools: the metal is either beaten into shape with a hammer (e.g. spearhead, left; note the groove round the hammer stone for attachment of a handle), or molten and cast in a mould (e.g. axehead, right); the resulting copper tool (centre, reconstruction) can be neater and more compact than its stone equivalent


Copper tools: (continued) this is a classic oxhide ingot of the ancient copper trade; the ingot is 27 inches by 16 inches (70 cm by 40 cm) and weighs around 82 lb (37 kg); it is a convenient shape for carrying by two people

Copper and gold were the first metals to be worked by humans, beginning in ancient Iraq around G 1760 (6000 BC). Copper was in common use in Europe and Egypt by G 1860 (3500 BC). The reconstructed copper axe above belonged to Ötzi, the man from G 1870 (3300 BC) whose body was found in an Alpine glacier in G 2080:16 (AD 1991). Copper chisels were used in the building of the Giza pyramids around G 1900 (2500 BC). This 'copper age', or chalcolithic ('copper-stone') period, does not seem to have begun in sub-Saharan Africa until G 1960-1980 (1000-500 BC).

The discovery of copper metallurgy is related to the invention of pottery, which meant that people were already experimenting with heating earthy materials in a fire. Pottery, in turn, could have been suggested by the practice of heating stone tools to give them strength; this may have led people to experiment with the effects of heat on other materials.

-- Key innovation --

Transformation of raw material (ore) whose properties are not those of the finished product.


-- Sophistication estimate --

Factor | Discussion | Time resources (person-seconds)
Inputs | One input is the copper ore. This requires developing some knowledge of geology (4 hours), and then the actual location and extraction of the ore (8 hours). A set of stone tools would be needed for this (assume Mode 4: 90,000 person-secs). There is also a need for a pottery crucible and charcoal for the fire: assume 8 hours to make these. | 162,000
Skill acquisition | It is necessary to understand the construction of a cast and the melting and pouring of the copper. 12 hours. | 43,200
Preparation | The copper must first be produced from the copper ore. 8 hours. | 28,800
Manufacture | A mould has to be made, then the copper poured. After the copper is removed from the mould, it requires tidying up and polishing. For this: 12 hours. Finally, the object needs to be mounted in a suitable manner: 4 hours. | 57,600
TOTAL | | ≈ 290,000



Bronze tools are made from a mixture of typically 90 percent copper and 10 percent tin or arsenic. The metals are melted together and cast in a mould. Bronze is much harder than pure copper, and can hold a sharp edge. Tin-bronze is superior to arsenic-bronze, which therefore tends to be found only in early or less developed bronze industries. However, tin is even rarer than copper, so a tin-bronze industry presumes a well-developed trade network connecting the point where the ore is mined and refined with the regions where the bronze artefacts are to be produced.


Bronze tools: an alloy of copper and tin, bronze is stronger than either and capable of carrying a sharp edge; varying sophistication is evident in both the amount of metal used and the complexity of the shape to achieve a given effect, here ranging from the flat axe (far left) to the same with small flanges that hold it more firmly in the handle (centre left), to the palstave axe with an attachment loop and shaped mounting area separate from the blade (centre right), and finally to the fully socketed axe (right)


The earliest bronze-working societies were in the areas of modern Turkey, Syria and Iraq, beginning around G 1860 (3500 BC). Bronze was in use in China by G 1912 (2200 BC), in north-western Europe by G 1930 (1800 BC), and in India and Egypt by G 1940 (1500 BC). In the Americas, metalworking, with gold, silver and copper, began around G 1980 (500 BC), and copper-silver and copper-gold alloys appeared around G 1993 (200 BC). Arsenical bronze did not appear until around G 2040 (AD 1000), and classic tin-bronze was only introduced by the Incas around G 2060 (AD 1475), shortly before the Spanish conquest. American pre-Columbian metalwork tended to consist of decorative and prestige objects, rather than tools or weapons.

Arsenic often occurs naturally in conjunction with copper, which would have facilitated the discovery of arsenical bronze and perhaps suggested the possibility of experimenting with other adulterating metals.

Styles of bronze artefacts evolved continuously, tending to become both more efficient and more mass produced in their appearance, as illustrated in the following clip.



I have participated in a couple of bronze-making workshops run by Dave Chapman. The first was to make a leaf-shaped sword based on one in the Pitt-Rivers museum; this was cast in a stone mould. The second was to make an early bronze age-style axehead, using the lost-wax technique.

Replica leaf-shaped bronze sword cast in stone mould

Replica early bronze age axehead made using lost-wax technique


-- Key innovation --

Combination of raw materials to produce substance not found in nature.


-- Sophistication estimate --

Factor | Discussion | Time resources (person-seconds)
Inputs | For the ores, geology knowledge is required - more than for copper as there are now two metals involved, so 6 hours. For locating and mining the ores, two 8-hour days. Again there is a need for a set of stone tools (assume Mode 4: 90,000 person-secs), and for a pottery crucible and charcoal for the fire (8 hours). | 198,000
Skill acquisition | Similar skills are needed as for copper, but now two metals are involved. Assume 50 percent more effort: 18 hours. | 64,800
Preparation | The metals need to be separately refined from their ores: 12 hours | 43,200
Manufacture | The actual melting and pouring of the bronze takes relatively little time, but there is much work first in creating the mould into which the metal will be poured and then in cleaning up and polishing the object after it has been removed from the mould. For this, two 8-hour days. Finally the object needs to be mounted in a suitably carved handle: 4 hours. | 72,000
TOTAL | | ≈ 380,000



Iron tools are usually made from iron combined with small amounts of carbon (up to about 2 percent), and possibly with other elements, to create various kinds of steel. In terms of hardness and sharpness, decent bronze can actually be superior to an average piece of iron. On the other hand, deposits of iron ore are relatively common, which means that, compared to bronze, the technology is less reliant on far-flung trade networks, and this makes it cheaper. Iron has a higher melting point than bronze (around 1500°C compared to 1000°C), so that refining or casting it requires more sophisticated furnaces and handling equipment. However, the metal can be worked at lower temperature in a forge. A sword, for example, can be made from a bundle of rods heated and hammered together -- the metal becomes soft enough to take on a new shape, but does not actually melt. Iron can also be welded. This involves causing two pieces of metal to fuse by the local application of intense heat.


Iron tools: made from a metal that is widely available but only melts at high temperature (although becoming soft enough at lower temperatures to be worked in a forge); iron is usually mixed with small amounts of carbon, and sometimes other elements, to create various steel alloys; iron and steel remain in common use for a wide range of tools; here are shown an axehead of around G 1980 (500 BC) from the Black Sea region (top left), a replica Roman sword (left), a Roman axe (centre), a modern axe (right), and a chainsaw (bottom right)


Iron was first produced in the period after G 1920 (2000 BC), in India, the Middle East and east Africa, but this was only in small quantities and as a kind of novelty. Around G 1960 (1000 BC), iron came into widespread use, overtaking bronze as the material of choice for tools and weapons. This occurred first in the Middle East and Mediterranean countries from Egypt to Italy. In central Europe, iron technology took off 10 g later, i.e. around G 1970 (750 BC), and in north-western Europe 10 g later still, i.e. around G 1980 (500 BC). Iron technology also became fully established in sub-Saharan Africa in this same period, G 1960-1980. Africa was unusual in that iron and bronze came into use there at around the same time, instead of a lengthy bronze age preceding the take-up of iron. Iron was not known in the Americas until after the Columbian contact (G 2060:17 = AD 1492).

The addition of carbon to iron, to make steel, was a fairly natural development, since carbon would previously have been used in bronze casting, where it prevents a skin forming over the molten metal. The carbon came from charcoal (85-95 percent carbon), which is obtained by heating wood in the absence of oxygen and burns at the high temperatures needed for melting metal. In a primitive foundry, with a charcoal fire force fed by bellows, there would be plenty of carbon dust floating around in the air, and early metallurgists probably could not avoid it getting into the mix.

Over the generations, the technology of iron-making has evolved in several ways. One goal has been to allow iron to be handled in larger quantities, while another has been to adjust the amount of carbon and other elements so as to produce iron/steel with varying qualities (in terms of melting point, malleability, rust-resistance etc.) suitable for performing varying tasks. One major innovation, the Bessemer process, was made only just over 6 g (150 years) ago, and iron-making patents continue to be taken out to this day. Steel remains important, although plastics and sophisticated composite materials are increasingly dominant.

-- Key innovation --

Iron-making was perhaps not as revolutionary as some earlier transitions between lithic modes or the first use of metals, but the development of the high-temperature furnace was a breakthrough.


-- Sophistication estimate --

Factor | Discussion | Time resources (person-seconds)
Inputs | A knowledge of geology is required: 4 hours. To obtain the ore (more widely available than copper ore): 4 hours. Also required are a crucible and high-temperature furnace, along with charcoal fuel: two 8-hour days. Tools are needed to mine the ore and construct the furnace, for which assume a bronze package: 350,000 person-secs. | 436,400
Skill acquisition | The necessary skills include producing the high temperatures for melting iron, handling the molten metal, and understanding how carbon or other ingredients affect the metal's properties: 20 hours. | 72,000
Preparation | The iron has to be smelted from its ore: 12 hours. | 43,200
Manufacture | The work involves creating a mould, melting the iron, and polishing the cast object into a finished product: 2 days. Finally, it has to be mounted: 4 hours. | 72,000
TOTAL | | ≈ 625,000



Summary

The following table summarises the technological sophistication of different types of cutting tools and the times at which they first appeared.

Technology | Sophistication (person-seconds) | Appearance
Mode 1 | 1000 | Pre-G 1
Mode 2 | 7000 | Pre-G 1
Mode 3 | 35,000 | Pre-G 1
Mode 4 | 90,000 | G 1
Mode 5 | 150,000 | G 500
Neolithic | 180,000 | G 1600
Copper | 290,000 | G 1760
Bronze | 380,000 | G 1860
Iron | 625,000 | G 1920

This chart shows growth of technological sophistication over time, based on the above table:



While these figures for technological sophistication are rough and ready, it is not surprising to see the kind of accelerating growth shown in the chart.

To make the numbers easier to write, it will be helpful to introduce some abbreviations. Thus, 90,000 person-seconds = 90 x 10^3 ps = 90 kps, where ps is short for person-seconds and kps is short for kilo-person-seconds, i.e. 1000 person-seconds; similarly we can have Mps (mega = 10^6), Gps (giga = 10^9) and Tps (tera = 10^12).
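
A small helper along these lines (the function name is my own invention) will print any value in whichever of these units fits best:

```python
def format_ps(value):
    """Express a person-seconds value using the ps/kps/Mps/Gps/Tps abbreviations."""
    for factor, unit in [(10**12, "Tps"), (10**9, "Gps"), (10**6, "Mps"), (10**3, "kps")]:
        if value >= factor:
            return f"{value / factor:g} {unit}"
    return f"{value:g} ps"

print(format_ps(90_000))   # 90 kps
print(format_ps(625_000))  # 625 kps
```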



To conclude, I want to make two final points:
  • Only modern humans have used tools with sophistication 90 kps and above (Mode 4 lithics and higher). However, this does not mean modern humans only use tools above 90 kps. Humans continued to use Modes 1-3 lithics alongside more sophisticated tools, while some groups, like Australian aborigines, did not use anything higher than Mode 3. On aborigine tool-making, see R. Foley and M.M. Lahr, 'Mode 3 Technologies and the Evolution of Modern Humans', Cambridge Archaeological Journal, 1997, 7(1): 3-36; A. Brumm and M.W. Moore, 'Symbolic Revolutions and the Australian Archaeological Record', Cambridge Archaeological Journal, 2005, 15(2): 157-175.
  • The adoption of a more sophisticated technology does not mean the abandonment of less sophisticated ones. At most, less sophisticated technologies become rarer over time, as more sophisticated ones are taken up. However, a relatively simple technology, such as a hammer, can be well adapted to its purpose and remain in widespread use despite massive growth of technological sophistication in other areas. Thus lower mode lithics continued alongside higher ones, neolithic tools continued alongside bronze, and bronze continued alongside iron. In principle, an astronaut landing on the moon could still pick up a pebble to fashion a Mode 1 tool for a purpose like prising open an equipment canister.

Tuesday, 18 August 2009

The first is the best

In Works and Days, the ancient Greek poet Hesiod wrote that history began with a golden age, which was followed by a silver age, a bronze age, and finally the miserable iron age of his own time. He recognised that technological progress had occurred, but nevertheless believed that humanity's finest times lay in the past.

James Lovelock has called this grandfather's law, the belief that the old days were the best.

Yet there may be more at stake here than simple prejudice.

Suppose the world's population were asked to choose just one iconic building to stand for the whole of human architectural achievement. What would they vote for? The Taj Mahal, the Eiffel Tower, the US Capitol, the Forbidden City, the Parthenon, the Colosseum?

I think there is a good chance, when all is said and done, that they might settle on the Great Pyramid of Cheops. It is only within the last century that significantly taller buildings have appeared, and, while these may be more sophisticated than the Great Pyramid, they do not have its simplicity, nor are they likely to last as long.

That the Pyramid of Cheops should remain one of the world's largest and most iconic structures might seem extraordinary, considering it was built by people who were still using stone tools, but it illustrates a general principle: in many areas of human endeavour, first efforts are often the best.

The Apollo 11 landing, for example, will probably stand for all time as a highpoint of space exploration. People will one day return to the moon, and will eventually reach other planets and the stars beyond, and they will use technologies of unimaginably greater sophistication than those of Apollo. Yet whatever they do, the Apollo achievement will in some ways never be equalled -- going from a standing start to landing a series of crews on the moon within the decade, in the most primitive craft, and then returning them to earth without a single fatality.

Michael Collins, the Apollo 11 team member who remained in orbit while Neil Armstrong and Buzz Aldrin landed on the moon, has revealed that his biggest fear was that the lunar module ascent stage, which had never previously been tested under lunar conditions, would fail to fire, and he would have to return to earth alone. President Nixon had a speech prepared for this eventuality, in which he would have said that while Armstrong and Aldrin knew there was no hope of rescue they also knew their sacrifice would not be in vain. The speech was never needed, for the ascent stage performed flawlessly, and the mission was in every respect a triumph.

The observation that earliest examples are the best is encountered in all sorts of cultural phenomena.

  • The peak of Egyptian sculpture was achieved in the fifth dynasty, around the time the pyramids were being built (c. 2680 BC). This was never surpassed in the remaining two-and-a-half millennia of Egyptian history, though there was something of a renaissance in the eighteenth dynasty.
  • Experts on Mayan ceramics tend to note that the earliest designs are the most aesthetically pleasing and technically accomplished.
  • Drama in the modern sense began to develop in England from the mid-sixteenth century. Within forty years, it had already produced William Shakespeare, whose fame extends around the world. (A German friend once told me how he was shocked, when he was growing up, to discover that Shakespeare was not German.) In later centuries, Britain has produced other great dramatists, such as Oscar Wilde and George Bernard Shaw, but they are not in the same league.
  • The earliest known cave paintings, at Chauvet Cave in southern France, have been described as "the best we know of Palaeolithic art...a confident peak from which later cave painting could only go downhill" (S. Oppenheimer, Out of Eden: The peopling of the world, [London, 2004], p. 121).

There are several possible reasons why the earliest examples of a given cultural activity should be superior to those that come later.
  1. People have a need for mastery, to prove that they can do something. Once they have mastered whatever it is, their interest wanes. To land on the moon is a fantastic challenge that can inspire people to heights of daring and ingenuity, pushing contemporary technology to its limit. To land on it again is a humdrum task that people will get round to in due course when technology has advanced to the point that they can scarcely avoid it. Once people had built the Great Pyramid, they had proved their point. They would never build quite so ambitious a pyramid again, and before long they would stop building pyramids altogether.
  2. The first patrons of a new cultural product are elites, who can afford to pay for quality. As time goes on, people ever lower down the social scale seek their own versions of the product, in imitation of their betters, and this demand is satisfied by mass production, skimping on materials and cutting corners. The earliest Mayan ceramics were rare items destined for royal usage. Later ones were cheap imitations to be found in every peasant home.
  3. The first geographers to explore a new continent will be the ones to discover the biggest mountains, widest rivers and most spectacular views. Their successors can only fill in the details and will inevitably seem lesser folk. Similarly, the first people to explore a new cultural medium will access its finest opportunities, leaving only lesser achievements for those that come later.
  4. In attempting to assert their own creative individuality, people distance themselves from the cultural forms of the past. When what was achieved in the past was perfection, cultural products that seek to be different and distant will end up looking flamboyant, bizarre or degraded.


The 'first-is-best' rule does not always apply, but even when it does not, the peak of achievement in a cultural activity often comes in a short burst, and involves a cluster of exceptional individuals. This was the finding of the anthropologist A L Kroeber in his book Configurations of Culture Growth, where he investigated cultural 'efflorescences' in fields such as painting, sculpture, philosophy and science, and in societies ranging from Greece and Rome to China and Japan.
  • The peak sometimes comes early in the efflorescence, and sometimes late. It is less common for the peak to come in the middle.
  • Wherever the peak comes, the people who are responsible for the peak, i.e. the highest achievers in the given field, tend to be contemporaries or nearly so. An outstanding example is the Italian Renaissance, where Raphael (1483-1520), Titian (1490-1576), Michelangelo (1475-1564) and Leonardo da Vinci (1452-1519) were all active in each other's lifetimes.

As Kroeber argued, the clustering of talent shows that high cultural achievement is a sociological phenomenon, with a dynamic of its own, and is not dependent on the chance appearance of individual geniuses. In other words, phases of great brilliance, rather than being random occurrences, have sociological causes and are susceptible to sociological explanations. This means that they can and should be accommodated and accounted for in a theory of history.

In addition, the fact that humans' earliest cultural products can surpass their more recent ones teaches us something about our own situation: it is not because we are cleverer than ancient Egyptians or stone age hunters that we are more technically advanced. It is because we live at the latest moment in history, and are the beneficiaries of these ancient peoples' achievements. Rather than being in every way superior to those who inhabited the planet before us, we are in some respects their degenerate and less accomplished grandchildren.

Dating schemes

(To those of you seeking a utility for converting BP and BC dates, scroll down to the applet at the bottom. You will need the JRE.)

In the west, we number years counting up from the birth of Jesus Christ. The year 2009 literally means in the 2009th year of Christ's age (although Christ is no longer around on earth, he is still, in Christian belief, very much alive). This is the 'Dionysian era', named after the monk, Dionysius Exiguus, who introduced it in 525. It became widespread when it was adopted by the Venerable Bede in the 700s.

One way of representing the Dionysian era is with the phrase 'In the Year of Our Lord' or the Latin equivalent 'Anno Domini'. Hence, we can say 'In the Year of Our Lord 2009' or 'Anno Domini 2009', abbreviated to AD 2009. Notice that the letters 'AD' should logically come before the year number, although it is now so common to write 2009 AD that the logical version might be considered almost pedantic.

The plaque left behind by the Apollo 11 astronauts reads: "Here men from the planet Earth first set foot upon the Moon, July 1969 AD. We came in peace for all mankind." Its author, William Safire, who later wrote a newspaper column on language and grammar, was mortified when he realised he should have put AD 1969 instead of 1969 AD.
While the term "AD" is standard in English-speaking countries, alternative but equivalent terms are sometimes used in other parts of the west. E.g. the French use "l'an de grâce" = "the year of grace".

To refer to dates before AD 1, we count backwards, using the term "Before [the birth of] Christ", abbreviated to BC.

The introduction of the Dionysian era greatly simplified the problem of dating, which until then had used a series of weird and wonderful schemes, such as naming years after the annually elected Roman consuls, or specifying the year of a particular king's or emperor's reign.

However, the AD scheme still has one quirk, stemming from the fact that there is no year 0 (1 BC was followed by AD 1). It means that BC dates cannot be treated as simply negative AD dates. E.g. the difference between 1 BC and AD 1 is just 1 year, not 2 years as it would be if we used the mathematics of negative numbers, saying 1 - (-1) = 1 + 1 = 2. Obviously, this is not really a problem, and we just need to remember that to calculate the year-difference between a BC date and an AD date, we add the BC date to the AD date, then subtract 1.
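
As a quick sketch of this rule (not part of any scheme used in this post, just a check of the arithmetic):

```python
def years_between(bc_year, ad_year):
    """Year-difference between a BC date and an AD date; there is no year 0."""
    return bc_year + ad_year - 1

print(years_between(1, 1))      # 1 year between 1 BC and AD 1
print(years_between(44, 2009))  # 2052 years between 44 BC and AD 2009
```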

As the traditional Christian ethos of western society has been called into question under the influence of multiculturalism, some have wished to distance themselves from the terms AD and BC, which are closely tied to Christian doctrine. Instead, the terms Common Era (CE) and Before Common Era (BCE) are increasingly in vogue, especially in academic works, as replacements for AD and BC respectively. (In calendrical terminology, an 'era' is a date from which other dates are reckoned.) The CE/BCE scheme still uses the year of Christ's birth as its era, but this is treated as just a convenient point that happens to be in common use, and its significance is not explicitly acknowledged.

When we look at dates in the past, it can be difficult to get a real feel for their significance. Suppose we are told for example that two European countries fought each other in 1530 and again in 1580. Anyone with a reasonable awareness of history can probably conjure up a mental picture of the 1500s, such as the costumes and technologies of that century and some of its more famous personalities and events. However, unless one has made an in-depth study of the period, the distinction between the 1530s and 1580s is much hazier. The result is that the dates 1530 and 1580 sound quite close together, and subconsciously, we think of the two wars as following pretty much one after the other, and involving the same people and the same issues. This in turn reinforces our view of the past as relatively unchanging when compared to the kaleidoscopic unfolding of events in our own lifetimes.

A similar thing applies when we are told the Athenians did something in 600 BC and something else in 400 BC, or the Hittites arose in 1600 BC and their empire collapsed in 1200 BC, or people built Stonehenge in 3000 BC, extended it in 2600 BC and buried someone there in 2000 BC. The numbers are rather abstract and lose their meaning, while the Athenians, Hittites or users of Stonehenge tend to exist in our minds as though they are the same people, doing first one thing then another. However, the later Athenians, Hittites etc. were in fact the many-times-great-grandchildren of the earlier ones, and the earlier and later sets of people would not in general have had the same thoughts, attitudes or experiences.

To get a more realistic feel for ancient dates, I suggest the technique of mentally converting them into equivalent modern dates.
  1. For example, when I read 1530 and 1580, I convert them in my mind into 1930 and 1980, e.g. imagining the 1530 people as being in the Depression Era, driving around in black sedans, and the 1580 people as watching 'Dallas' on TV while electing Ronald Reagan to the US presidency. Thus, I have a reasonable feel for the differences between 1930 and 1980, and this allows me to get a feel for the corresponding differences between 1530 and 1580, i.e. how personalities, costume and technology might have moved on, and how 1530 would have seemed quite old-fashioned from the perspective of 1580.
  2. For dates spanning centuries, I think of the earlier date as equivalent to the corresponding period before our own time. For example, to the Athenians of 400 BC, people and events of 600 BC would have seemed rather like the people and events of AD 1800 seem to us. Similarly, if the Hittite empire spanned the period 1600 BC to 1200 BC, it is rather like an empire that lasted from AD 1600 to the present. Finally, the Stonehenge dates of 3000, 2600 and 2000 BC would correspond to AD 1000, 1400 and the present. This should make it apparent that the rebuilding of Stonehenge was not just the continuation of a general programme of construction, but was a fresh initiative, undertaken by people who may have known very little about the original builders and did not necessarily think about Stonehenge in the same way. Ditto the people who performed the burial - to them Stonehenge was already an ancient monument and the way they were using it may have had little to do with the intentions of its builders.
In archaeology, most dates are BC. Yet prehistoric artefacts do not usually come with absolute dates attached, and what archaeologists work out first, especially with techniques like carbon dating, is how old artefacts are. Obviously, ages can be converted into dates by subtracting from the present. However, this can seem artificial, and it often makes more sense to talk of something having happened "thirty thousand years ago", rather than to convert this into "28,000 BC", especially as archaeological age estimates are seldom accurate to the year anyway.

In light of the artificiality of BC dates for their subject, archaeologists have adopted the approach of specifying dates in terms of years 'Before Present', abbreviated to BP. So something that happened 4200 years ago would be said to have happened 4200 BP. Now, if BP were taken to mean literally 'before the present', something dated to, say, 561 BP one year, would be 562 BP the next year, and 563 BP the year after that. Evidently, this is totally impractical. Therefore, archaeologists have adopted AD 1950 as the standard 'present'. Years BP means years before 1950.

In my work on this blog so far, I have felt the need for a consistent and meaningful dating scheme, as none of the existing methods seems fully satisfactory. I am conscious of the pedantry and parochialism of the BC/AD scheme, but I balk at the clumsy and merely cosmetic CE/BCE alternative. I do not want to keep switching arbitrarily between BP and BC/AD, and would like a standard approach. However, when I am dealing with events of the upper palaeolithic and rough orders of magnitude, quoting BC dates seems rather absurd, but when I refer to recent historical events the use of BP would become equally nonsensical, as I would find myself saying "the first world war broke out in 36 BP" and people would wonder what I was talking about. Furthermore, BC and BP involve counting backwards, whereas it would be preferable to be able to count forwards. It would also be good if the dating scheme could help drive home the distinction between 1530 and 1580, or between 1600 BC and 1200 BC etc.

These thoughts have led me to the idea of expressing dates in terms of 'generations' from a given starting point.
  • Since I am concerned with history from the upper palaeolithic onwards, the starting point, or era, I will use is 50,000 BC.
  • The generation length I choose is 25 years. This has the advantage that it divides neatly into 100 years and makes it possible to translate easily from ordinary years to generations. Obviously the 'generations' I am using are schematised but they are not wholly disconnected from reality. We won't go far wrong if we imagine that people's oldest grandchildren are being born 2 generations = 50 years after their own births.
  • The idea behind using generations is that it should drive home the point that the Athenians, Hittites or Stonehenge-users of generation N were not the same people as the Athenians, Hittites or Stonehenge-users of generation N+10 or N+20, whatever it might be, but were their distant descendants.
  • The use of generations also gives smaller and more manageable numbers, and I hope it should be easier to visualise and make sense of the spans of time involved.

To convert a span of years to the equivalent number of generations, we divide by 25. Alternatively, if it is an exact number of centuries, we multiply the number of centuries by 4.

If we call the people living in 50,000 BC, generation 1, then to convert a date into a generation number, we calculate 1 plus the number of generations that have passed since 50,000 BC.

For example, the Last Glacial Maximum (LGM) was at about 18,000 BC. This is 32,000 years after 50,000 BC (50,000 - 32,000 = 18,000). In terms of generations, it is 32,000 ÷ 25 = 1280 generations later. (Alternatively, it is 320 centuries, and 320 x 4 = 1280.) Therefore, the people living at the LGM would be generation 1281 (because 1 +1280 = 1281).

When we calculate generation numbers for AD dates, we have to take into account the absence of a year 0. It was in the year AD 1 that 50,000 years = 2000 generations had passed since 50,000 BC. Therefore, AD 1 corresponded to generation 2001. For any general AD date, the total number of generations since 50,000 BC is 2000 plus the number of generations since AD 1. To find the number of generations since AD 1, we find the number of years since AD 1 and divide by 25. However, the number of years since AD 1 is not the number of the year but the number of the year minus 1. Thus, the year AD 2 was not 2 years after AD 1 but only 1 year after AD 1 (2 - 1 = 1). It was AD 3 that was 2 years after AD 1 (3 - 1 = 2). In the same way, it was AD 26 that was 25 years or 1 generation after AD 1. Therefore, AD 26 (not AD 25) was the beginning of generation 2002.

The generation number today is 2081, and it began in 2001. This is because AD 2001 was 2000 years or 80 generations (2000 ÷ 25 = 80) after AD 1, making 2080 generations since 50,000 BC.

Clearly, the generation number only narrows a date down to a 25-year window. All the years from 2001 to 2025 correspond to generation 2081, say. For my purposes, this degree of precision is usually going to be enough, even for historical dates. When talking about the colonisation of Australia or the discovery of agriculture, a 25-year window is obviously more than adequate. However, I am equally content to know, for example, that Columbus's discovery of America was in generation 2060 while the American Revolution was 12 generations later, in generation 2072. I am not writing narrative history so it is not necessary to be absolutely precise (even in ordinary history, the year is often good enough and it is not necessary to give the exact day).

Nevertheless, it would be desirable to be able to refer to the exact year if necessary. We can do this by including the phase, which means the position of the year within the 25-year generation. For instance, the 1st year within generation 2081 was AD 2001, so that AD 2001 has phase 1. The 2nd year within generation 2081 was AD 2002, which has phase 2. The 3rd year was 2003, with phase 3, and so on. This continues up to AD 2025, which will have phase 25. The next year, AD 2026, will be the 1st year of generation 2082, with a phase of 1 again.

We can now put all this together, to obtain some conversion formulas.

For these formulas we will use '%' to mean 'the remainder after dividing by' (or, for the mathematically savvy, 'modulo'). For example,
  • 6 % 3 = 0
  • 7 % 3 = 1
  • 8 % 3 = 2
  • 9 % 3 = 0
  • 10 % 3 = 1
  • etc.

To convert from a BC/AD date to a generation date (the division by 25 is rounded down to a whole number):

. . For BC dates:

. . . . generation = 1 + (50,000 - year) / 25
. . . . phase = 1 + (50,000 - year) % 25

. . For AD dates:

. . . . generation = 2001 + (year - 1) / 25
. . . . phase = 1 + (year - 1) % 25

To convert from a generation/phase to a BC/AD date:

. . For generation <= 2000 (less than or equal to 2000):

. . . . year (BC) = 50,000 - (generation - 1) x 25 - (phase - 1)

. . For generation > 2000 (greater than 2000):

. . . . year (AD) = (generation - 2001) x 25 + phase


We also need a notation for dates expressed in generations:
  • To represent an absolute date, we put a 'G' then the generation number, then a colon (':'), then the phase (if required). E.g. the year AD 1492, becomes G 2060:17, meaning it had phase 17 within generation number 2060.
  • To represent a duration, we put the number of generations, a colon, the additional phase (if required), then a 'g'. E.g. a duration of 65 years becomes 2:15 g, meaning 2 generations (50 years) plus an extra 15 years.
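
Since the applet below may no longer run in modern browsers, here is a rough Python sketch of the same conversions (my own translation of the formulas above; function names are mine, and BP is taken, as described earlier, to mean years before AD 1950):

```python
# Rough stand-alone equivalent of the conversion formulas above. Division
# rounds down (//), matching the worked examples in the post.

def bc_ad_to_generation(year, era):
    """era is 'BC' or 'AD'; returns (generation, phase)."""
    if era == "BC":
        elapsed = 50_000 - year              # years since 50,000 BC
    else:                                    # 'AD': no year 0, so AD 1 is year 50,000
        elapsed = 50_000 + (year - 1)
    return 1 + elapsed // 25, 1 + elapsed % 25

def bp_to_generation(years_bp):
    """Years before AD 1950, the archaeologists' standard 'present'."""
    ad = 1950 - years_bp
    return bc_ad_to_generation(ad, "AD") if ad >= 1 else bc_ad_to_generation(1 - ad, "BC")

def generation_to_bc_ad(generation, phase=1):
    """Returns (year, era)."""
    if generation <= 2000:
        return 50_000 - (generation - 1) * 25 - (phase - 1), "BC"
    return (generation - 2001) * 25 + phase, "AD"

# Examples from the post:
print(bc_ad_to_generation(1492, "AD"))   # (2060, 17)  i.e. G 2060:17
print(bc_ad_to_generation(753, "BC"))    # (1970, 23)  i.e. G 1970:23
print(bp_to_generation(4200))            # (1910, 25)  i.e. c. 2250 BC
print(generation_to_bc_ad(2081, 9))      # (2009, 'AD')
```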

I will now provide some example dates, converted into generations (note: I have rounded some of the figures - e.g. 18,000 BC actually equates to G 1281, but I have rounded this to G 1280, since we are only talking approximate dates):

Event | Conventional date | Generation date
Beginning of upper palaeolithic | c. 50,000 BC | c. G 1:1
Last glacial maximum | c. 18,000 BC | c. G 1280
Invention of agriculture | c. 10,000 BC | c. G 1600
Founding of Egyptian 1st dynasty | c. 3100 BC | c. G 1876
Pyramid of Cheops | c. 2500 BC | c. G 1900
Beginning of bronze age | c. 2100 BC | c. G 1916
Beginning of iron age | c. 1000 BC | c. G 1960
Foundation of Rome | 753 BC | G 1970:23
Birth of Christ | 4 BC | G 2000:22
End of western Roman empire | AD 476 | G 2020:1
Battle of Hastings | AD 1066 | G 2043:16
Discovery of America | AD 1492 | G 2060:17
Battle of Waterloo | AD 1815 | G 2073:15
Apollo 11 moon landing | AD 1969 | G 2079:19
9/11 attacks | AD 2001 | G 2081:1
Today | AD 2009 | G 2081:9


The generational dates should give a feel for relative timescales. E.g. the founding of the Egyptian first dynasty is around 200 g ago, compared to the 2080 g of the human story as a whole. The discovery of America is very recent at only 20 g ago.

It is also useful to note that one lifetime is approximately 3 g (75 years). So the period from the Battle of Waterloo to the first moon landing, which is 6 g -- meaning we would count grandparent, parent, child, twice -- is equivalent to 2 lifetimes laid end to end. From the building of the Pyramid of Cheops to today is 180 g or 60 lifetimes.
It is commonplace to note that life expectancy has been increasing, so it would not always be true that 3 g = 1 lifetime. However, most of the increase in life expectancy is due to the reduction in infant mortality, not to people living longer. Even the Bible considers the typical lifespan to be 70-80 years. Bones of our most ancient, upper palaeolithic ancestors suggest they may have died younger, typically in their 40s, but the 'natural' human lifespan, under reasonably favourable conditions, seems to be around 75 years, as the Bible has it.
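
As a quick check of this arithmetic (an illustrative snippet only, using the generation numbers derived above; note that the unrounded figure for the Pyramid of Cheops is G 1901):

    # Waterloo (G 2073) to the Apollo 11 landing (G 2079), at roughly
    # 3 generations per lifetime, gives 2 lifetimes; Cheops (G 1901,
    # unrounded) to today (G 2081) gives 60 lifetimes.
    print((2079 - 2073) / 3)   # 2.0
    print((2081 - 1901) / 3)   # 60.0
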
Here is an applet for converting AD/BC dates to generations and vice versa. (Instructions: (1) Enter a year into the year field, select AD, BC or BP; click "Convert to gen", and the generation number and phase appear in the generation fields. (2) Enter a generation number and phase into the generation fields; click "Convert to year", and the year appears in the year field with AD or BC selected as appropriate; click "Convert to BP" and the BP figure appears in the year field with BP selected. (3) To convert BC to BP etc., first convert to generations by (1) then convert back to AD/BC or BP by (2).)




While I propose to start using generational dating in future posts, I will include in brackets the more conventional date in either BC/AD or BP format. This provides a compromise between consistency and intelligibility.

Finally, I note four other aspects of the generational dating scheme:
  1. If somebody was born in G n, then their parents were almost certainly born in G n-1, while their grandparents were almost certainly born in G n-2.
  2. If someone was born in G n, and did not die prematurely, they were probably still alive in G n+3 but dead by G n+4.
  3. The life of a person born in G n will typically just about overlap the lives of people born between G n-3 and G n+3.
  4. Each generation reacts against its predecessor, so that people tend to have more in common with their grandparents than with their parents. This implies a two-generation oscillation in social attitudes, which should show up in the generational scheme as a difference between odd-numbered and even-numbered generations. The pattern will not be exact since our standardised generation length of 25 years is not necessarily equal to the 'true' generation length. However, it might hold roughly over short periods of one or two centuries. This roughly 50-year (2 g) oscillation might be the same as the roughly 60-year Kondratiev wave.

Sunday, 9 August 2009

Scale and competition

In The Dynamic Society, Graeme Snooks stresses the importance of the demand for ideas, as opposed to the supply of ideas, in driving technological change. In other words, necessity is the mother of invention.

A society's technology is wrapped up with its other characteristics in an eigenmode. An invention like writing should be seen not as a lucky discovery but as an inevitable concomitant of a particular level of social development. Inventing writing is not really that hard. It comes into existence in a high-scale society because such a society cannot function without some means of recording information. It is not fruitful to ask whether writing causes or is caused by a given scale. They go hand in hand, that is all it is meaningful to say.

To extend the point to a recent, familiar example, the internet is associated with an increase in the scale of global society (we can get in touch with more people, more easily). The conventional view would be that some boffins invented the internet, and scale increased as a result. However, it could equally be argued that the development of the internet was driven by the needs of governments and businesses struggling to deal with increases in social scale. We have all heard of inventions like Leonardo's helicopter that languish in limbo because they are 'ahead of their time', showing that merely coming up with an idea is not enough. People invested in the internet only because it filled a real technological gap. Again, the eigenmode concept says we do not need to choose between these opposing viewpoints, i.e. as to whether the internet led to increased scale or increased scale led to the internet. The internet and increased scale both caused each other, while the precise steps by which this came about would not tell us much even if we knew what they were.

I say all this because Snooks's observations have made me think again about geographical influences on technological development, and how I may have been insufficiently rigorous when discussing this in an earlier post.

Thus, I previously put up the following diagram, as part of an explanation of why development first took off in the more centrally located regions of the world's landmasses.



The argument was that more centrally located regions had higher scale, i.e. higher social interactivity, because there were more people within shorter range than was the case for societies around the periphery, and this higher scale meant a higher level of technological development. (I went on to explain that as technology, especially sea-going technology, evolved, it changed which societies counted as central and thus changed which regions had the highest scale and were the most advanced.)

While I continue to stand by this argument, I may have been misleading in implying that it was the flow of ideas from neighbouring societies that was the critical factor stimulating the development of the centre.

What I now want to emphasise is that all we can really say here is that high scale (i.e. proximity of large populations, due to the central location) meant there was societal development and complexification. The details of how this happened are not critical. It may be that centrally located societies were stimulated by the strong flux of ideas reaching them from all the surrounding societies. However, Snooks would argue that the important thing was the pressure exerted on the central societies by their neighbours. In his view, the central societies, with so many close rivals, had to struggle harder to survive compared to the more isolated, peripheral societies, and it was this intense competition that stimulated or compelled them to develop. As before, it is fruitless to get into a debate about which of these viewpoints is correct. Probably both aspects played a part, and there may be other factors or mechanisms as well.

It is not my intention to provide a full review of Snooks's book, which is one of a series in which he sets out laws of history. However, it is worth saying that I do not agree with his assertion that the demand for ideas was the only issue, while for the most part I found his book pretty confused and simplistic.
  • Snooks presents his theory as describing biological as well as sociological evolution. This, to me, is a red herring. (It is, however, surprisingly common. Kenneth Boulding does this in Ecodynamics, as does Stuart Kauffman in At Home in the Universe and also arguably Richard Dawkins with his concept of gene-like social memes). Yes, there are superficial analogies between biological and sociological phenomena - e.g. the Roman empire was born, lived and died - but they disappear on close examination - e.g. the Roman empire did not actually 'die', nor was it really 'born'. Biology and sociology exist on quite different levels and need their own conceptual tools. On the sociological side, which is what I am concerned with, we need to use the ideas of politics, economics and cultural anthropology, not the ideas that make sense in biology.

  • Snooks argues that the lack of development of the Australian aborigines was because their isolation meant they were not exposed to significant competitive pressure. (Felipe Fernández-Armesto takes a similar view in Pathfinders [p. 11], where he describes the aborigines as "the 'dropouts' of 50,000 years ago, opting out of worlds of change in order to settle a new continent, where they could maintain a traditional way of life".) However, one could ask why the aboriginal tribes did not compete with each other; it might be thought that being cooped up in a small continent could even have increased competitive pressure. Elsewhere, Snooks introduces the notion of 'funnels of transformation', which are narrow regions like Mesoamerica and the Middle East, where many peoples passed through, creating pressure for development. Australia, apparently, had no such funnel. There is something in this, but why do certain geographical conformations have this funnelling effect? Snooks does not really address this. However, it emerges naturally from the 'scale' concept and the idea that sophisticated social mechanisms are needed to deal with intense interaction among close-packed populations connected by short lines of communication.

  • Snooks refers repeatedly to 'The Industrial Revolution', which seems to play a large role in his thinking. I find this especially surprising when he himself points out, for example, that sixteenth century growth rates, i.e. two centuries before the 'industrial revolution', exceeded those of any other period bar the 1950s and 1960s. (And he notes high growth rates at other times as well.) The 'industrial revolution', in the sense of a special period in history when technological change suddenly became dramatic, is an illusion. Industrial development during the late eighteenth and nineteenth centuries grew seamlessly out of what had gone before, and was then just the latest twist in the ongoing acceleration of technological evolution. The term 'industrial revolution' originally arose as a pun, jokingly implying that, while France and other countries had political revolutions between about 1750 and 1850, Britain had an industrial revolution. The concept then stuck. It seems that people have a weakness for explanations of history that assign special significance to particular periods and 'turning points'.

Despite my above criticisms, I would still recommend reading Snooks's books. His work has made me more aware of the issue of demand-side versus supply-side explanations of the evolution of ideas, and indeed of the fact I may have lazily slipped into naive, supply-side explanations myself. He also makes other worthwhile points, such as that co-operation and competition are both necessary in an economy. However, I have reservations concerning his overall model. It is not that it is necessarily 'wrong' in a straightforward sense, but I think it is too vague and impressionistic to be any real use as a theory of history.

Monday, 27 July 2009

World languages

The distribution of languages conveys information about ancient migrations and cultural encounters. To historians this is arguably more useful than the information provided by gene distributions, even though those are currently highly fashionable. Whereas genetics preserves a biological record of human movements, language preserves a sociological record.

For example, genetics seems to show that most modern Britons have ancestral roots in Britain going back to the Mesolithic period. This challenges the traditional belief that waves of Anglo-Saxon, Viking and Norman invaders arrived in more recent times. It suggests these invaders did not displace the existing population, and were in fact relatively few in number.

Yet this does not mean the Anglo-Saxon, Viking or Norman settlements were trivial and irrelevant, as some interpreters of the genetics research tend to imply. They were each historically important in shaping British society and culture, and their effects continue to reverberate down to the present.

  • Bede tells us of the struggles of the English (Anglo-Saxons) with the British, both on the battlefield and in the domain of religion, the result of which was that the Celtic Christianity of the indigenous 'British' was replaced by the Roman Catholicism of the newly converted 'English'. Many of those defining themselves as English may in fact have had British blood or British genes (perhaps having found it paid to be English since they were the ones winning the battles), but that makes little real difference to the process of cultural change, which is what matters for history. One legacy of the process Bede describes is the separate national consciousness of Welsh and English in modern Britain, and that has very real consequences, whatever the DNA might say. It has been argued this separate consciousness pre-existed the Anglo-Saxon invasion and that may be the case. Nevertheless, it is the Anglo-Saxon invasion that is used today to justify and explain the situation, so in that respect it remains historically pertinent.

  • The Vikings long raided Britain, causing disruption and defensive responses among communities on the coasts and major rivers, and they went on to establish more permanent jurisdiction, known as the Danelaw, over an area of eastern Britain. People in the rest of the country were forced to pay a special tax, the Danegeld, to keep the Vikings at bay. Although the consequences of the Viking occupation are not so obvious today, it dominated several centuries of British history, affecting settlement patterns, taxation, law and commerce on both sides of the divide. There may or may not be much Viking DNA in the modern British population, but British history is steeped in their presence.

  • The Norman conquest of 1066 dispossessed many of England's existing lords and landowners, and substituted a new French-leaning nobility. This ensured that for the next few centuries English kings would often be at war in France, defending their French territories. It also provided a conduit for the infusion of continental fashions into English society. The Normans' genetic contribution may have been minimal, given that they were a tiny elite, and indeed many of the great families that later claimed to have 'come over with William the Conqueror' probably had mainly Anglo-Saxon forebears. Nevertheless, their cultural contribution was profound, and 1066 is now usually seen as the beginning of English history, with everything beforehand a kind of prologue.

The point is that these invasions, though perhaps scarcely visible in British genes, all left their mark on Britain's language.
  • The fact that (most) British people speak English is thanks to the Anglo-Saxons, whose language it is.
  • The Vikings contributed a few words to the English language and a greater number to various dialects - the north-eastern dialect word 'ta' (for 'thank you') comes from the Scandinavian 'tak' - and the area of the former Danelaw is characterised by numerous place names ending in -by and -thorpe.
  • The Normans introduced many French words, giving English a particularly rich vocabulary, often with both Germanic and Romance words as alternatives for the same concept. Modern English even contains vestiges of the social difference between the French-speaking elite and their Anglo-Saxon subjects. The words for farm animals, which were tended by the peasants, are Anglo-Saxon (e.g. pig, cow), while the words for meat, which was eaten by the lords, are French (e.g. pork/porc, beef/boeuf). In general, Anglo-Saxon words tend to be more earthy and vulgar.

Anyone living in Britain and participating in British society, regardless of their genes (including any genes for skin colour), is an inheritor of the cultural legacy of 1066 and all the other events and processes that have made Britain what it is today. The attitudes and practices that people share by virtue of belonging to the British population are transmitted by social learning, not handed down in a person's DNA. Someone who looks and acts completely British could be descended from very recent immigrants, while a 'British' baby brought up in France would become culturally French. Language is a more meaningful indicator of the historical background of a given society than the genetic make-up of its individual members.

Furthermore, English is now spoken in many places outside Britain in a pattern that reflects the events of world history. It is the main language of North America, Australia and New Zealand because those are places that once belonged to the British Empire and where the indigenous population was overwhelmed by European settlers. It is also an important language in India and some African countries because they too belonged to the British Empire, though the native society survived British rule. English continues to spread around the world thanks to the influence of the United States and its cultural exports. Meanwhile some words have entered English from the languages of Britain's former colonies (see here for a list of loanwords, and here for discussion), again providing a linguistic record of historical contacts.

Nor is it just English, of course. The fact that a form of Dutch (Afrikaans) is spoken in South Africa or that French is spoken in Quebec similarly records past migrations.

Languages, meanwhile, change over time. When a language is carried into a new region, it may drift apart from its parent. American and British English have developed differences of grammar and vocabulary. French, Spanish and Italian all evolved from Latin, reflecting the fact the relevant countries were once part of the Roman Empire, but their evolution followed different paths.

This means that the imprint of history is to be found not just in the distribution of a particular language but in the distribution of a family of related languages that may have evolved from some parent language.

Furthermore, families of closely related languages can be grouped into larger families of more distantly related languages descending from a grandparent or great-grandparent language.

Some linguists have taken this grouping process to the limit, classifying all the world's languages into a small number of enormous groups. Their work remains controversial, as people dispute not only the assignment of languages to groups (e.g. Basque to Dene-Caucasian) but even the very existence of top-level language families.

The following figure shows the distribution of these proposed language groups (for the key, see below). It is based on C. Goucher and L. Walton, World History: Journeys from Past to Present, p. 6. As just explained, similarities of language reflect past (or ongoing) migrations and/or cultural contacts. Hence the distribution of language groups is also the distribution of historically connected societies. Note that the map does not reflect recent European migrations, i.e. it considers only the native American and aboriginal Australian languages--not the English, Spanish or Portuguese that in fact now dominate in those regions.

(To open the map in a full Google Maps page, click here. For a Google Earth equivalent, click here.)



Here is the key to the language families:



Glottochronology allows linguists not only to determine that languages are related but also to determine when they split from a common ancestor. It relies on assumptions about the rates of change in languages' sounds, vocabulary and grammar. Again, both the technique and its findings remain controversial.

To the extent that glottochronology can be trusted, it means we can go beyond the simple conclusion that societies are historically connected, and deduce when the relevant contacts or migrations occurred.

Language is therefore a vital resource for those wishing to study how societies became what and where they are now. For those who want to pursue such a study of what language can tell us about the past, the Tower of Babel website is a good starting point, with detailed maps and databases of the world's languages.

Without embarking on a detailed study, what can we learn from the broad-brush map of top-level language groups shown above?

First, note that the most linguistically diverse areas are in the tropics: Africa, Central America and South-East Asia. This point is reinforced by the map below, which shows the geographic centres of the world's languages. It can be seen that language is most diverse in the equatorial regions of Africa, America and the Far East. (The most linguistically diverse area on the planet is the island of New Guinea.)



This linguistic diversity of Africa, Central America and South-East Asia tells us that these are regions where societies have been settled in a relatively stable configuration for a long time, rubbing up against each other but never seeing any society completely overcome the others.

Thus, while it is a natural assumption that languages drift apart when the populations speaking them are isolated from each other, language in fact changes most rapidly among populations that are in contact but that perceive themselves as separate. Language differences are used to emphasise contrasting senses of identity. This was shown by Labov's work on how the inhabitants of a tourist resort changed their accent in order to differentiate themselves from the incoming holidaymakers. The original paper is: William Labov, 'The social motivation of a sound change', Word 19 (1963), pp. 273-309.

What is true of language is true of culture in general. With the emergence of modern humans during the upper palaeolithic, there was a quickening rate of local cultural divergence, and this can be linked to population pressure. As humans filled up the world, rival groups were increasingly forced into contact, and responded by creating distinctions between themselves through linguistic and stylistic markers.

The antiquity of societies in the inter-tropical belt can be linked to the changing climate of the late Pleistocene and Holocene. During the Last Glacial Maximum (LGM), northern and southern latitudes became at best inhospitable, at worst inaccessible, through being covered with enormous ice sheets. Societies in the inter-tropical zone have been adjusting to each other since the start of the upper palaeolithic some fifty thousand years ago. Societies outside this zone are the result of more recent expansions and have had less time to adjust and reach stasis. The sequence of events, in outline, was as follows:

  1. With the emergence of truly modern human societies at the start of the upper palaeolithic, humans spread rapidly all over the world, including to Australia and the Americas.

  2. Once the world was full, societies became tied to particular regions and developed separate identities. They evolved cultural and linguistic markers to differentiate themselves from their neighbours.

  3. As world climate cooled at the onset of the LGM, societies from northern and southern regions were forced back towards the tropics while retaining their separate sense of identity and concomitant linguistic markers. There emerged a situation in which numerous societies speaking multiple languages were concentrated in a relatively small space. This is the reason for the diversity of top-level language groups in the equatorial band.

  4. As climate warmed again and the ice sheets retreated, humans spread back away from the tropics. This re-expansion was carried largely by a single language group, the one on the periphery, which was best poised to move into the newly emerging lands. Those within Africa or South-East Asia were blocked by their neighbours from expanding north.

  5. In each region, societies continued to segment and differentiate themselves, so that languages continued to diversify. This process has proceeded furthest in the tropical zone, where societies have been packed together the longest.

Note that a similar argument can be constructed about genetic diversity. This suggests that the geneticists' current assumption, that humans spread out in one direction from a common source region, with diversity diminishing the further you get from the source, is overly simplistic. They need to take into account rebound effects, whether due to the LGM or for other reasons. This will make the analysis messier and more complicated, and should challenge some of their conclusions, e.g. about timings of migrations.

Another point to note from the distribution of top-level language groups is the presence of three such groups in the Americas.
  • The main group, Amerind, is only found in the Americas. It encompasses societies descended from the first settlers in the Americas, who came as part of the original expansion pulse of modern humans, fifty thousand years ago. We do not know how these people arrived. They probably came across the Bering land-bridge that in those times connected north-east Asia to north-west America. However, they may have come across the Atlantic or Pacific oceans, or some combination of these routes.

  • The second group, Dene-Caucasian, is found in both northern North America and east Asia. This represents a wave of settlers coming across the Bering land-bridge as part of the re-expansion into northern latitudes during early stages of warming after the LGM.

  • The third group, Eurasiatic, is the same as that spread across most of Asia and Europe. It represents societies that expanded more recently, during the Holocene proper, overwhelming earlier Dene-Caucasian settlers with higher technology. This expansion, no doubt like earlier ones, has had several major and minor phases. On the most recent, major phase, see The Horse, The Wheel, and Language: How Bronze-Age Riders from the Eurasian Steppes Shaped the Modern World by David W. Anthony. These societies also penetrated into North America, travelling by boat across the top of the North Pacific. By this time their culture was adapted to northern living.

A third observation concerns the existence of vestiges of earlier settlers that held out against the Eurasiatic expansion just referred to. These comprise Dene-Caucasian speakers in the Basque country of highland Spain, in the Caucasus, and in central Siberia, and Kartvelian speakers, again in the Caucasus. Other vestiges survived for varying periods but have by now given up their separate identities.

A final point is the adventurousness of the Austric-speaking societies of maritime south-east Asia. People from these societies settled the far-flung islands of the Pacific.
  • Maritime south-east Asia seems to have been the locus of early maritime technology, stimulated no doubt by the environment of the Indonesian archipelago, which presented sea journeys that were challenging but not too challenging.

  • The last great colonisations by these people were those of Hawaii, Easter Island, New Zealand and Madagascar, between 1000 and 2000 years ago.

  • In Madagascar, they encountered earlier settlers who had arrived from the African mainland during the original human expansion. The far-flung Pacific islands, however, had not been reached in the original human expansion, and Austric-speakers were the first to settle them.

  • Once the whole Pacific had been colonised, sea-going activity declined, and only short-distance journeys continued to be made. This demonstrates the point that humans expand very rapidly into empty lands, but settle down and lose their mobility when they have become surrounded by neighbours.

Friday, 22 May 2009

Understanding early human migrations

I wish to explain why I do not worry that my beliefs concerning the initial colonisation of the world--i.e. that it occurred in a sudden pulse 40-50,000 years ago--are at variance with current mainstream thinking on this subject. In a nutshell, it is because mainstream thinking is undoubtedly immature and will be revised extensively as time goes on.

In the 1970s, archaeologists thought they had a good idea of how the world was settled, and, in particular, their understanding seemed to supersede the older idea that humans had spread out from a single source during the upper palaeolithic.
[Today] many authorities believe that...leptolithic [upper palaeolithic] man (of the modern species, Homo sapiens sapiens) was the direct descendant of the Neandertalers. Such a view contrasts strongly with the earlier opinion that the leptolithic cultures were brought to Europe by an immigration from the Near East of H. sapiens, who wiped out the earlier and 'inferior' Neandertal population.
David and Ruth Whitehouse, Archaeological atlas of the world, 1975, p. 39.

This view has itself now been superseded, and the latest genetics research is taking us back to the original idea, that humans originated in one place and, from there, colonised the rest of the planet.
Clear genetic trees for both modern Y chromosomes and mtDNA point back to a recent common ancestor of all modern humans within the last 200,000 years and a migration out of Africa less than 100,000 years ago. This new line rather quickly replaced all pre-existing human genetic lines, including the Neanderthals...[T]here is no convincing evidence for [interbreeding] in our male and female gene lines.
Stephen Oppenheimer, Out of Eden: The peopling of the world, 2004, p. 347.

It would be premature to assume that we now have perfect knowledge and that the latest ideas will not be superseded in their turn. Genetics is a new tool, and in the rush to apply it there has been little attention paid to its limitations, which will gradually come to light. The development of understanding is a process of successive refinement. It will not be complete until history itself comes to an end.

A major reason why ideas about early human migrations (or any other aspect of the past) keep changing is that they focus on material evidence and often on only one kind of such evidence (pots, genes). Material remains can never provide a full record of the living, breathing past, and will always leave ambiguities. To round them out, it is necessary to take account of everything we know, from contemporary and historical experience, about how societies function. Although some archaeologists do study modern farmers and hunter-gatherers in order to understand those of the past, what I am really referring to is the kind of total historical theory based on abstract theoretical principles to which this website is devoted. Such a theory, treating all human experience as one, will allow us to build a more reliable understanding than comes from concentrating on pots, genes or whatever kind of evidence might come along in future.

Humans have always wondered how they got to be where they are now. Many peoples have foundation myths in which their ancestors arrived from somewhere else. The Aztecs recalled their origins on the North American plains. Many Andean societies believed they had emerged from underground. Some Maori tribes celebrate the ancestor Kupe who first came to New Zealand. The English, and to a lesser extent some continental European nations, link themselves to barbarian invaders who plundered the collapsing Roman Empire.

These traditional stories are often fantastical in their details, and they leave open such questions as whether an invading group replaced the original population or merely superimposed the thin layer of a conquering elite.

In the early twentieth century, archaeology appeared to be shining the light of science on the problem. The distinctive styles of ancient artefacts (e.g. shape/decoration of pots and tools) seemed to demarcate distinctive cultures associated with different human groups. When artefacts of a particular style were found to spread from one area to another, this was assumed to reflect a displacement or expansion of the people of the corresponding culture.

By the later twentieth century, theorists began to doubt the formerly confident conclusions they had arrived at in this way. It was recognised that the diffusion of cultural objects did not necessarily require the movement of people. Artefacts might travel along trading networks, or one group's styles might simply be adopted by its neighbours. Archaeologists went almost to the opposite extreme, denying the possibility of population movement at all.

Today the evidence and its implications remain uncertain. The main conclusion is the sheer complexity of the interactions and travels of prehistoric peoples.
  • The Beaker Folk were once deemed to be recognisable by their pottery (see picture, right) and related artefacts, and the spread of Beaker objects in the third millennium BC was thought to represent a wave of technologically advanced people colonising much of north-western Europe. Later, it was thought to represent just the transmission of the technology and an associated belief system. The modern thinking is that both aspects played some part, though any migration was tentative and targeted, not the sweeping motion once envisaged.

  • The Anglo-Saxons were long thought to have invaded Britain after the collapse of the Roman Empire, bringing a new language, religion and way of life, and driving the original Britons west into Wales and Cornwall. This is the story we have from Bede, who lived within a couple of hundred years of the events. It seemed to be confirmed by the appearance of new kinds of artefacts in the archaeological record, around the relevant time. However, the archaeological evidence has come to look problematic, with close intermingling of the supposed 'British' and 'Anglo-Saxon' material, and some of the changes taking place apparently before the time of the alleged invasion. In the last decade or so, it has been increasingly argued (by both professional and amateur theorists) that there never were any Anglo-Saxon invasions, and the distinction between eastern 'English' and western 'British' has existed in Britain from Mesolithic times. This is despite explicit references to an invasion in contemporary sources (e.g. the Gallic Chronicle of 452, under the year 408). Most historians and archaeologists would today be cautious about making firm statements in any direction.
While the archaeology has looked increasingly debatable, it is now genetics that seems to have all the answers, and again it seems to be based on clear-cut scientific reasoning.

The genetics approach makes use of haplotypes, i.e. specific markers in mitochondrial and Y-chromosome DNA, which are inherited in only the female and male lines respectively, and which embody a precise record of a person's ancestry. For a clear description, see this charts and diagrams website. The technique was also discussed in an earlier post, where I mentioned my suspicion that the flaws in its assumptions will be revealed in due course.

The Genographic Project, a National Geographic website, presents an Atlas of the Human Journey, based on the new genetic findings, and allows you to get your own DNA tested and your origins worked out.

Before the use of haplotypes came along, human genes had already been studied for a long time by another method. This is population genetics, which considers the genes on ordinary chromosomes that can be passed down by either the male or female line. Unlike the haplotype technique, population genetics offers few easy answers and creates a complicated picture.

Population geneticists are sceptical of the new arguments of the haplotype researchers. At a 2005 symposium, geneticists working on haplotypes were invited to analyse an artificial dataset whose 'ground truth' was known. While it was agreed the precise results would not be revealed, it has been admitted they were less than spectacularly successful (see Simulations, genetics and human prehistory, ed. S. Matsumura et al., p. 192).

Another indication that haplotype data may be more ambiguous than its practitioners care to acknowledge is the fact that commercial companies offering ancestral DNA testing have been found to give inconsistent results. The biochemical reactions extracting the DNA profile are presumably precise and repeatable, but the significance of the results is more a matter for opinion and interpretation.

None of this is to say the new genetic methods, or indeed traditional archaeology, are useless for understanding early human migrations. The point is simply that we should take the latest pronouncements on this subject with a pinch of salt. We are still in the early stages of unravelling this aspect of our past.

Friday, 4 July 2008

Climate and history

While history should be explained in terms of society's internal dynamic and never purely in terms of external factors, the environment does impose constraints on what can happen. People cannot live on ice sheets, for example, and difficult terrain, such as forest or desert, means low scale and low development.

To understand how people colonised the planet, we therefore need to take account of the environmental context, i.e. the changes in climate, sea level and ice cover over the last 40,000 years.

The methods for reconstructing ancient climates are described by a number of books. The one I used was Global Environments Through the Quaternary by David Anderson, Andrew Goudie and Adrian Parker. The book's Amazon page links to some other titles covering the same area. Another one worth mentioning is Earth's Climate: Past and Future by William F Ruddiman.

The Quaternary Ice Age

Viewed on geological timescales, the 40 thousand years of the human story have occurred during an unusually cold phase of our planet's history. This is the Quaternary Period, which began some 2 million years ago and is regarded by geologists as an ice age.

Geological time is divided into various periods, which can be recognised by differences in the type of rock laid down (e.g. rocks from different periods contain different kinds of fossil). These periods are often named for the regions where the associated rocks were first noticed (e.g. the Jurassic takes its name from the Jura), although some get their names in other ways (e.g. the Cretaceous takes its name from the Latin for chalk, since that is what the rocks consist of).

The Quaternary is the most recent period. Its name comes from a former scheme that divided earth history into Primary, Secondary, Tertiary and Quaternary periods. Of these, only the Tertiary and Quaternary are still recognised (though even this terminology is now coming into question). Conventionally, the Quaternary began 1.8 million years ago, but many geologists now put this back to 2.6 million years. The controversy is irrelevant to us, since we are only interested in the last 40 thousand years - roughly the last 2 percent of the Quaternary - the time for which modern humans have been in existence.

What distinguishes the Quaternary is that during this period the earth has been in its 'ice house' mode, whereby there are ice caps at the north and south poles. It is not usual for the earth to have such ice caps. There were none during the great age of the dinosaurs, 100 million years ago, for example. However, from time to time, the earth's climate goes through a noticeably cooler phase. No one knows why this occurs, even though there are many theories. Whatever the reason, over the last 3 billion years, there have been some half dozen such glacial episodes, each lasting up to 100 million years. In the Quaternary, we are currently in the early stages (if past durations are anything to go by) of the latest of these ice ages.

The current cooling of the earth's climate began as long as 65 million years ago, at the end of the Cretaceous (when the dinosaurs died out). The cooling trend continued throughout the Tertiary, the period immediately before our own. In fact, ice sheets were already appearing in the late Tertiary. The point at which these became persistent, and temperatures reached their current low, is taken to mark the start of the Quaternary.

Climate chaos

While the Quaternary as a whole has been cold, temperatures have by no means remained constant during this time. In fact, not only have temperatures fluctuated but there have been smaller fluctuations within larger fluctuations. The more evidence climatologists gather, and the closer they look, the more ups and downs become apparent. Within the Quaternary, there have been warmer and colder millennia; within a given millennium, there have been warmer and colder centuries; within a given century, there have been warmer and colder decades; within a given decade, there have been warmer and colder years. Climate fluctuates on all scales.

The fact that climate fluctuates on all scales suggests it is a chaotic system. External inputs--such as variations in solar activity, continental drift, the passage of the solar system through interstellar clouds, or gases released by volcanoes and living organisms--energise the system. They do not directly drive change, except possibly large, long-term change, but rather prevent the atmosphere from reaching equilibrium. The system remains unstable in itself, and is bound to change in an essentially patternless, unpredictable manner, on account of complex, multi-level feedbacks between climatic variables.

One of the earliest pieces of work in chaos theory was, in fact, the research of Edward Lorenz on computer models of the atmosphere. He found his simulated weather systems wandered all over the place in a way that had no simple relationship to the input conditions. Given that the weather seemed to vary continuously and unpredictably, he questioned whether it even makes sense to speak of the earth's 'normal' climate.

The diagram below shows some of the differing climatic regimes of the Quaternary, which will be explained in the following sections. Red represents warmer phases, and blue colder ones.



Glacials and interglacials

The grossest temperature fluctuations within the Quaternary are the glacials (or glaciations), when ice sheets grew massively around the world, and interglacials, when the ice sheets retreated and in some cases disappeared.

The most recent glacial is known as the Würm glacial in Alpine Europe or as the Wisconsin in North America (and has other names in other regions). This began around 75,000 years ago, was at a point of maximum coldness around 20,000 years ago (the Last Glacial Maximum or LGM), and came to an end around 11,500 years ago. We are currently in an interglacial.

Pleistocene and Holocene

The relatively warm period that began 11,500 years ago is considered to be a distinct sub-division (or epoch) of the Quaternary, called the Holocene. All the rest of the Quaternary, before this, is called the Pleistocene. Pleistocene means 'most recent'. Holocene means 'completely recent'.

Some geologists dispute the notion of the Holocene as a special epoch. They have a point, as it is only the last of a series of interglacials. The idea that it marks a new epoch seems to be exaggerating the importance of a relatively minor change simply because of our closeness to the event. That said, the Holocene provides a useful label for the current warm phase, whatever its geological status.

Marine isotope stages

The broad pattern of glacials and interglacials was first identified in the nineteenth century by the characteristic valleys and deposits of debris left by ancient glaciers. However, a much more detailed record of our planet's changing ice cover is now available from studies of ocean sediments.

The water molecules (H2O) in seawater contain oxygen of two isotopes: 16O, the lighter of the two, and 18O, the heavier. When water is evaporated from the oceans, to fall as rain or snow over the continents, the lighter water molecules, containing 16O, evaporate a little more easily. If the earth is going through a cold spell, and ice sheets are growing, these lighter water molecules become locked up in the ice, leaving the oceans with a higher concentration of the heavier molecules. Later, when the earth warms and the ice sheets melt, the lighter water molecules return to the oceans, reducing the concentration of heavier molecules there. This means that the concentration of 18O in seawater reflects the size of the earth's ice sheets, with higher concentrations of 18O when the ice volume is larger.

We can reconstruct past variation in 18O concentrations because the shells of tiny animals that live in the oceans reflect the chemical composition of the water that surrounds them. If 18O is more concentrated, these shells also contain a higher concentration of 18O. When the animals die, their shells fall to the bottom of the oceans and build up as layers of sediment, creating a record of the changing concentration of 18O and hence of the changing size of the earth's ice sheets.

Although this 'marine isotope' record shows fluctuations of all sizes and durations, geologists recognise in it an overarching pattern of swings between warmer (less ice) and colder (more ice) stages. The current warm swing is designated marine isotope stage (MIS) 1. The previous cold swing is MIS 2. The warm swing before that is MIS 3, and so on. Odd numbers correspond to warmer stages and even numbers to colder stages.

The marine isotope record does not tie up exactly with the more traditional division into glacials and interglacials, but rather reveals the complexity of climatic fluctuations. The current warming (MIS 1) had its beginnings in the last few millennia of the Würm glacial. The previous cold phase (MIS 2) was an exceptionally cold part of the Würm glacial, spanning the LGM. Before that, MIS 3 was a less cold phase, but still within the Würm glacial and not as warm as today.

The ending of the last glacial

The overlap between the nominal end of the last glacial and the beginning of MIS 1 reflects the fact that the colder climate did not terminate in a once-and-for-all manner. Within the warming, there were setbacks, involving a temporary return to colder conditions. First there was a warming lasting a little under 2000 years (the Bølling), then a short cold snap of about 3 centuries (the Older Dryas), then another warming of a little under 1000 years (the Allerød), then a longer cold snap of nearly 1500 years (the Younger Dryas). The end of the Younger Dryas marks the end of the glacial.

It should be apparent from all this that identifying the termination of the glacial is somewhat arbitrary, and requires a degree of hindsight we do not currently possess. Whether the present warming should be seen as part of a longer-term warm phase or just as a warmer interval in a longer-term cold phase depends on what happens in the future.

Climatic variation in the Holocene

During the Holocene, climates have continued to fluctuate. In Europe, the first 1500 years (the PreBoreal) were relatively cool. There then followed 7500 years of relative warmth, ending 2500 years ago (i.e. around 500 BC). The beginning and end of this phase (the Boreal and SubBoreal) were both warm and dry, while the middle part (the Atlantic), lasting about 3000 years, was warm and wet. Finally, the last 2500 years (the SubAtlantic) have been relatively cool again.

Chronozone | Climate | Chronology
SubAtlantic | generally deteriorating climate with cooler and wetter conditions | 600 BC to present
SubBoreal | climatic optimum with warmer and drier conditions | 3800 BC to 600 BC
Atlantic | climatic optimum with warmer and wetter conditions | 6900 BC to 3800 BC
Boreal | climatic amelioration, warmer and drier | 8100 BC to 6900 BC
PreBoreal | subarctic conditions | 9600 BC to 8100 BC

Adapted from: D Anderson et al. Global environments through the Quaternary (Oxford 2007) p. 11.


Again, within the current (SubAtlantic) phase, there have been shorter term fluctuations. The heyday of the Roman Empire was relatively warm. The end of the Empire and the early medieval period (the 'Dark Ages') were colder. The high middle ages, the time of the monasteries and crusades, were warm, with grapes being grown in Britain and the Vikings settling Greenland. The early modern period, the time of Shakespeare, Elizabeth I and Philip II, and on up to the Victorian period, was colder (the 'Little Ice Age').

The last hundred years or so have been relatively warm, but still by no means uniformly so. The first half of the twentieth century was warm, and scientists spoke of global warming as a boon to humanity, bringing not just better weather but better growing conditions for crops. The late 1940s to 1970s were cooler, leading to talk of a renewed ice age, with soaring energy costs for heating, and the threat of famine; in the 1970s, British harvests were on average 11 days later than they had been in the mid-twentieth century. The 1980s and especially the 1990s were warm again, so that talk was once more of global warming, though now as a source of concern and even fear. Finally, temperatures in the first decade of the twenty-first century have shown little trend either way.

To repeat, climatic fluctuations occur on all scales, and the closer one looks, the more variation one sees. This is variation not just in time but also in space. Episodes like the medieval warm period and subsequent little ice age do not appear to have occurred in other regions the same way they occurred in Europe, and temperature changes in the southern hemisphere seem sometimes to have been in the opposite direction to those in the northern hemisphere.

Sea level changes

Changes in global ice cover cause corresponding changes in the global sea level. More ice means less water in the oceans and larger areas of dry land.

This would have affected people's ability to get from A to B, and is important for how they migrated around the world.

The chart at right (source: Wikipedia) shows changes in sea level through the late Quaternary. The light and dark shaded bands indicate the marine isotope stages (note that MIS 5 is subdivided into 5a, 5b etc.).

For most of human existence, sea level has been lower than today, reaching a minimum at the LGM, when it was more than 100 metres below the present level. Around 5000 years ago, however, sea level was some 10 metres higher than it is today.

We can translate the above chart into maps of how the continents would have looked at different times, courtesy of an applet developed by Sebastien Merkel at the University of Lille. You enter a given sea level (metres above or below the present) and the applet draws the land as it would then appear. (There are actually several applets, for the world as a whole and for different regions.)

I have used Sebastien's applet to create a slideshow of changing sea level, spaced at 5000-year intervals, from 40,000 years ago to the present.

Here is a Youtube version:



I have also created a set of Google Earth layers showing the ancient coastlines. (This does not include a layer for the present, since you can get that from Google Earth itself.)

Below is an animated version, which requires the Google Earth plug-in to view. Move the slider to change the date. (If you do not want to install the plug-in, but have a standalone Google Earth browser, you can download the animated coastlines here.)



Lower sea levels meant that the world's land surface was more connected in the past than it is today. The British Isles were joined to continental Europe. There was a land bridge, known as Beringia, between Asia and North America. The islands of modern Indonesia were mostly joined to each other and to the mainland. Australia was the only separate continent, cut off by a sea crossing of about 100 miles, but was joined to New Guinea. The entrance to the Black Sea was dry land, so the Black Sea was then a lake. However, the Gibraltar Strait remained submerged, so there was a short sea crossing between Africa and Spain, while the Mediterranean still opened into the Atlantic Ocean.

Early human migrants would have followed the coasts in spreading around the continents, and followed the rivers into the interior. The early settlement of Australia shows they could also cross the sea. Evidently, they had boats, which would have served them for both fishing and transport. They could thus have crossed between Africa and Spain, and reached offshore islands, such as those of the Mediterranean, Caribbean and South China Sea.

The rise in sea levels means that most of the sites occupied by human migrants 40-50,000 years ago are now beneath the waves. Future advances in underwater archaeology can be expected to reveal much more about this time, and give a clearer picture of the colonisation of our planet.

Climate maps

The pattern of coastlines and land connections represents only part of the information we need for thinking about how early humans moved around the planet. We also need to know the type of terrain that confronted them.

For example, while the low sea levels of the LGM produced the Beringia land bridge between Asia and North America, the heavy glaciation of that time meant that Beringia was blocked from the rest of the North American continent by an ice sheet. Given the traditional belief that humans only reached the Americas after the LGM, the moment at which the ice sheet had retreated enough to leave an ice-free corridor from Alaska to the Great Plains provides an important constraint on the timing of their arrival. (For an animation of the retreat of the North American ice sheet, see this site.)

My view is that the earth, including America, was colonised in essentially one great movement, at the start of the upper palaeolithic, i.e. 20-30,000 years before the LGM. Beringia existed at that time, while the North American ice sheet extended only a little way beyond Hudson Bay, and did not block movement via the west coast. That said, we should not discount the possibility that humans arrived in America via the Atlantic or Pacific. It may be a long voyage, but the Americas present a huge target.

I have prepared a set of maps, in Google Earth, of global climate/environment at 10,000 year intervals, from 40,000 years ago to the present. They are derived from those produced by the Quaternary Environments Network plus a certain amount of guesswork (the QEN maps are quite patchy in their chronological and regional coverage, and I have had to fill in the gaps to create a consistent set of maps at regular intervals).

Terrain is classified into eight types, using the following colour scheme. In a nutshell, the lighter the green, the drier and more open the terrain (plus yellow for desert, including polar desert, and white for ice).



You can see these maps either with the Google Earth plug-in below, or by accessing the Google Earth files directly here. Move the slider to change the date.



Note that these maps take into account the different coastlines at different periods, which is why they may show grassland etc. in what is now sea. The map for the present shows potential vegetation (i.e. as it would be in the absence of human influence). Actual vegetation can be very different due to the effects of industry and agriculture, the main thing being the widespread clearance of forests.

For most of human existence, the climate has been colder and drier than today, resulting in more desert and less woodland. However, around 10,000 years ago, climate was generally moister than today, and the Sahara desert was converted to grassland and steppe. That said, the global environment did not vary as one. Climatic change could, for example, mean a shift in wind patterns, carrying moisture away from one region and towards another, so that the first region became drier and the second one wetter.

Conclusion

This post has provided a narrative of climatic variation over the period of human existence, plus some relevant resources in terms of maps of changing coastlines and terrestrial environments. It has not reached any particular conclusions but is intended to provide the high-level background for subsequent discussion of humans' discovery and conquest of their world.