
Why has the khopesh never been used since ancient Egypt?


The ancient Egyptians used a weapon called the khopesh. It was a curved blade that was excellent for getting around shields and puncturing body parts like kidneys.

Why did no army in the ancient or medieval world ever use the khopesh again, at least as a weapon type intended for specific enemies?


The khopesh was a solution to the limitations of bronze as a sword material.

Bronze swords can't be too long or they break: bronze is more brittle and less flexible than iron. For this reason, bronze swords were used only as secondary weapons. The Greek xiphos was drawn only when the dory (spear) broke, and some hoplites never even trained with it.

The khopesh is a sickle sword. The curved blade has a very forward point of balance, making it similar to an axe (in fact, it evolved from axes).

It's an excellent hacking weapon, very useful for attacking limbs not protected by armor and even for breaking shields. It wasn't used "to get around shields" but to go through them, and not "to puncture" either: it is difficult to change direction between cuts, the point is nearly blunt and angled backwards, and the sword is too short for good lunges.

When swords began to be made of iron and steel, the extreme shape of the khopesh became less valuable. "Hacking" swords persisted, though, like the Greek kopis (note the similar name) and later the shotel and the scimitar, swords that favor the cut over the thrust. The form we usually call a "machete" (falchion, etc.) has been popular throughout history.

So, basically, ancient and medieval armies kept using khopesh-like weapons, but because those weapons were made of steel, they no longer needed its exact shape.


Also used in Elam, Syria & Canaan (emphasis mine):

During the Middle Bronze Age the new sickle-sword spread rapidly throughout the Near East, appearing in Elam, Syria, Canaan, and eventually Egypt. Egypt seems to have been the last region to acquire the weapon. It doesn't appear in Middle Kingdom Egyptian art, making it likely that the weapon was initially acquired by Egyptians through trade or plunder from Canaan. There is mention of thirty-three "scimitars" - literally "reaping implements" (ECI 79 n49) - taken as plunder in Syria during the reign of Amenemhet II {1929-1895}. Presumably these are versions of the sickle-swords found in the royal tombs of Byblos in Syria and Shechem in Canaan during this period. The weapon does not seem to have been manufactured in Egypt until the New Kingdom, when it frequently appears in modified form as the Egyptian khopesh (khps), or scimitar, where the haft of the weapon is reduced to about one third and the blade extended to two thirds (AW 1:206-7; FP 51).

Hamblin, 2006. Warfare in the Ancient Near East, p.71


Because Shields and Armor Changed

It was not because bronze was softer or more brittle than iron, as the accepted answer stipulates.

The khopesh was mostly abandoned between ~1200-1100 BCE, which coincides neatly with the Bronze Age Collapse, but the fact that it was historically made of bronze has little to do with why the design was abandoned. Properly work-hardened tin bronze can be tougher than mild steel (wrought iron), and steel did not significantly surpass it until the medieval period, when people figured out how to make medium-carbon steels without too much sulfur and phosphorus contamination. Furthermore, not all Bronze Age swords were short. Many, like those used by the Minoans of Crete, had blade lengths in excess of 1 meter.

The Bronze Age Collapse was the result of a devastating series of wars that began with the destruction of all the major Greek cities and then spread to consume the entire Eastern Mediterranean. The Bronze Age ended not because the iron of the day was better, but because these wars cut off the tin trade, so bronze was no longer a readily available alloy. Iron smelting had already been discovered around 3000 BCE, long before the end of the Bronze Age, but bronze was the preferred metal for weapon making because the techniques available to make it weapon- and armor-ready were better.

It's also not because iron could not be used to make the same sort of shapes

While the exact shape of the khopesh was abandoned, the iron age saw several civilizations using equally recurved swords but with different edge profiles. The Dacian falx and Celtic sickle swords had nearly identical blades except that the cutting edge is on the inside of the hook instead of the outside.

So, if the absence of bronze was not to blame, the next most logical thing to look at is whether the nature of warfare itself had changed.

It is unclear whether the series of wars that ended the Bronze Age was started by the Greeks themselves, but the one civilization that survived to leave us a written record was the Egyptians. In their descriptions and illustrations of the "Sea Peoples," it is very clear that these attackers fought in a fashion very similar to the Mycenaean Greeks.

So, to understand what these wars meant for the decline of the khopesh, you need to consider how the spread of Greek-style warfare differed fundamentally from warfare in the regions that used the khopesh.

In the regions where the khopesh was popular, like Egypt and Canaan, most armies fought without any armor but used large shields made of wicker or a wooden frame covered in hide. Everything about the design of the khopesh made it ideal for cutting through these shields: it was front-heavy, which gives it a lot of momentum when swung, and it also had a long, curved cutting surface, so it could draw-cut through these soft shields. There is also a kinesiological advantage to using a weapon that cuts in advance of the hand rather than in line with it.
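To put a rough number on the "front-heavy means more momentum" point, here is a minimal sketch comparing the rotational energy of an evenly balanced blade against one with the same total mass shifted toward the tip. All masses, lengths, and swing speeds below are illustrative assumptions, not measurements of any historical weapon.

```python
# Illustrative only: compare the rotational energy delivered by a
# hilt-balanced blade vs. a forward-balanced (khopesh-like) blade.
# All masses and lengths are assumed round numbers, not measured values.

def moment_of_inertia(masses, positions):
    """I = sum(m * r^2) about the grip (r = 0), treating the blade
    as a handful of point masses."""
    return sum(m * r**2 for m, r in zip(masses, positions))

total_mass = 0.9       # total blade mass in kg (assumed)
omega = 10.0           # swing speed in rad/s (assumed, same for both)

# Balanced blade: mass spread evenly along the length.
balanced = moment_of_inertia([total_mass / 5] * 5,
                             [0.1, 0.2, 0.3, 0.4, 0.5])

# Forward-balanced blade: the same total mass shifted toward the tip.
front_heavy = moment_of_inertia([0.1, 0.1, 0.2, 0.2, 0.3],
                                [0.1, 0.2, 0.3, 0.4, 0.5])

for name, inertia in [("balanced", balanced), ("front-heavy", front_heavy)]:
    energy = 0.5 * inertia * omega**2    # rotational kinetic energy, joules
    print(f"{name:12s} I = {inertia:.3f} kg*m^2, "
          f"energy at impact ~ {energy:.1f} J")
```

At the same swing speed, the forward-weighted distribution stores roughly a third more energy at impact, which is the axe-like quality described above; the trade-off is the slower recovery between cuts mentioned earlier.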

In the Mycenaean Greek tradition of warfare, armor was much more popular. A panoply at the time would include bronze scale or even Dendra-style plate armor, a smaller but probably tougher shield, a boar-tusk helmet, a short straight sword, and a spear. While the khopesh could cleave through hide and wicker just fine, cutting through metal armor (be it bronze or steel) is virtually impossible.

It was a curved blade that was excellent for getting around shields

This is a false assumption. Because the edge of the khopesh is on the outside of the curve, you have to turn the blade away from the target to hook a shield. Even when you do turn the khopesh backwards, the point does not lead the hand, so it is no easier to hit someone around a shield than with a straight sword. The point is also not in line with the thrust, which means less penetration than a straight blade.

Against armor or a well-made shield, a short, straight, well-balanced sword is far better suited: it is more maneuverable and has a stronger, more accurate thrust for getting into the armor's gaps or around the shield. So, when the Sea Peoples invaded the areas where the khopesh was common, the defenders switched to straight swords after seeing their advantages.

Throughout history, the general concept of the khopesh would re-emerge over and over again in the form of the kopis, the scimitar, the cutlass, the kukri, etc., but it never again adopted the same question-mark-like curve with an outer edge, because every civilization from then on knew that its blades might occasionally need to maneuver around a defense too tough to simply cut through.


In ancient Egypt, taxidermy was not used as a means of putting animals on display but rather of preserving animals that were pets or were beloved by pharaohs and other nobility. The Egyptians developed the first methods for preserving both humans and animals, using embalming tools, spices, injections, and oils.

Animals were preserved so that they could be buried alongside the pharaoh or noble. One of the most notable animals the Egyptians preserved was a hippopotamus. While their methods were not intended to put animals on display, they did pave the way for further developments and new taxidermy techniques.


Prehistory

When exactly early hominids first arrived in Egypt is unclear. The earliest migration of hominids out of Africa took place almost 2 million years ago, with modern humans dispersing out of Africa about 100,000 years ago. Egypt may have served as a corridor to Asia in some of these migrations.

Villages dependent on agriculture began to appear in Egypt about 7,000 years ago, and the civilization's earliest written inscriptions date back about 5,200 years; they discuss the early rulers of Egypt. These early rulers include Iry-Hor, who, according to recently discovered inscriptions, founded Memphis, a city that served as Egypt's capital for much of its history. When and how Egypt was united is unclear and is a matter of debate among archaeologists and historians.

Egypt's climate was much wetter in prehistoric times than it is today, which means that some areas that are now barren desert were once fertile. One famous archaeological site where this can be seen is the "cave of swimmers" (as it is called today) on the Gilf Kebir plateau in southwest Egypt. The cave is now surrounded by miles of barren desert; however, it contains rock art showing what some scholars interpret as people swimming. The exact date of the rock art is unclear, although scholars think it was created in prehistoric times.


Flooding of the River Nile

Flooding of the River Nile is a crucial natural cycle that has been celebrated in Egypt since ancient times. The flooding is marked by an annual holiday that begins on August 15 and lasts for two weeks. The ancient Egyptians believed that the river flooded each year because of Isis's tears as she wept for Osiris, her dead husband. In fact, the flooding is the outcome of an annual monsoon between May and August that brings heavy precipitation to the Ethiopian highlands, whose peaks reach 14,928 ft. The Atbarah River and the Blue Nile carry most of this rainwater into the River Nile, while a smaller amount flows in through the White Nile and the Sobat.

The ancient Egyptians were not aware of these facts; they could only observe the flooding of the Nile. They could, however, forecast the flood's level and date by transmitting the readings of the gauge at Aswan to the lower parts of Egypt. What remained unpredictable was the total discharge and the extent of the flooding. The Egyptian calendar was split into three seasons: Shemu (Harvest), Peret (Growth), and Akhet (Inundation). The Nile flooded during Akhet, and the first sign of flooding was seen at Aswan in June.
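As a toy illustration of why relaying the Aswan reading downstream was so useful, here is a sketch of the forecasting idea: an upstream gauge reading arrives days before the water itself, giving each downstream town advance notice of the flood's timing and scale. The towns, travel times, and cubit thresholds below are hypothetical placeholders, not historical values.

```python
# A minimal sketch of the nilometer idea: an upstream gauge reading at
# Aswan gives downstream communities advance notice of the flood.
# Travel times and level thresholds are invented for illustration.

from datetime import date, timedelta

# Hypothetical flow travel times downstream from Aswan, in days.
TRAVEL_DAYS = {"Thebes": 4, "Memphis": 12}

def classify_flood(level_cubits: float) -> str:
    """Rough categories; the cutoffs here are assumptions."""
    if level_cubits < 12:
        return "low flood (famine risk)"
    if level_cubits <= 16:
        return "good flood"
    return "dangerously high flood"

def forecast(reading_date: date, level_cubits: float, town: str):
    """Estimate when the flood crest reaches a downstream town."""
    arrival = reading_date + timedelta(days=TRAVEL_DAYS[town])
    return arrival, classify_flood(level_cubits)

arrival, verdict = forecast(date(2024, 6, 20), 14.5, "Memphis")
print(f"Expected at Memphis around {arrival}: {verdict}")
```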



Apothecary jar for mumia from the 18th century.

As supplies of bitumen became increasingly scarce, perhaps partially because of its wonder-drug reputation, these embalmed cadavers presented a potential new source. So what if it had to be scraped from the surface of ancient bodies?

The meaning of mumia shifted in a big way in the 12th century when Gerard of Cremona, a translator of Arabic-language manuscripts, defined the word as “the substance found in the land where bodies are buried with aloes by which the liquid of the dead, mixed with the aloes, is transformed and is similar to marine pitch.” After this point the meaning of mumia expanded to include not just asphalt and other hardened, resinous material from an embalmed body but the flesh of that embalmed body as well.

Take Two Drops of Mummy and Call Me in the Morning

Eating mummies for their reserves of medicinal bitumen may seem extreme, but this behavior still has a hint of rationality. As with a game of telephone, where meaning changes with each transference, people eventually came to believe that the mummies themselves (not the sticky stuff used to embalm them) possessed the power to heal. Scholars long debated whether bitumen was an actual ingredient in the Egyptian embalming process. For a long time they believed that what looked like bitumen slathered on mummies was actually resin, moistened and blackened with age. More recent studies have shown that bitumen was used at some point but not on the royal mummies many early modern Europeans might have thought they were ingesting. Ironically, Westerners may have believed themselves to be reaping medicinal benefits by eating Egyptian royalty, but any such healing power came from the remains of commoners, not long-dead pharaohs.

Even today the word mummy conjures images of King Tut and other carefully prepared pharaohs. But Egypt’s first mummies weren’t necessarily royalty, and they were preserved by accident by the dry sands in which they were buried more than 5,000 years ago. Egyptians then spent thousands of years trying to replicate nature’s work. By the early Fourth Dynasty, around 2600 BCE, Egyptians began experimenting with embalming techniques, and the process continued to evolve over the centuries. The earliest detailed accounts of embalming materials didn’t appear until Herodotus listed myrrh, cassia, cedar oil, gum, aromatic spices, and natron in the 5th century BCE. By the 1st century BCE, Diodorus Siculus had added cinnamon and Dead Sea bitumen to the list.


Research published in 2012 by British chemical archaeologist Stephen Buckley shows that bitumen didn’t appear as an embalming ingredient until after 1000 BCE, when it was used as a cheaper substitute for more expensive resins. This is the period when mummification hit the mainstream.

Bitumen was useful for embalming for the same reasons it was valuable for medicine. It protected a cadaver’s flesh from moisture, insects, bacteria, and fungi, and its antimicrobial properties helped prevent decay. Some scholars have suggested that there was also a symbolic use for bitumen in mummification: its black color was associated with the Egyptian god Osiris, a symbol of fertility and rebirth.


4. Luxor

Located in Upper Egypt, Luxor has long been known as an open-air museum for its intact ancient Egyptian temples, which date back around 4,000 years. Luxor alone holds one-third of the world's ancient monuments.

I recommend a hot air balloon ride at sunrise to enjoy the spectacular view from above.

Luxor is also famous for the Valley of the Kings. This valley became a royal burial ground for pharaohs such as Tutankhamun, Seti I, and Ramses II, as well as queens, high priests, and other elites of the 18th, 19th, and 20th dynasties.

Once you have finished exploring this one-of-a-kind city, you will love taking a Nile cruise to your next destination, which I recommend be Aswan or Nubia!


The Step Pyramid

The implications of the architecture behind the Step Pyramid of Djoser are drastic, to say the least.

Djoser is the name given to the Third Dynasty ruler by New Kingdom visitors to the site more than one thousand years later. The only royal name found on the walls of the complex is the king's Horus name, Netjerykhet.

Prior to this Third Dynasty king, the most commonly used material in the construction of larger buildings was mudbrick. With Djoser's reign, however, this, like many other things, changed.

His royal architect Imhotep, chancellor and Great Seer of the sun god Ra (see here a glimpse of why I believe that the worship of the Sun directly influenced the construction of the Step Pyramid, and why I agree with Czech astronomer Ladislav Krivsky), revolutionized ancient Egyptian architecture by building the Step Pyramid at Saqqara.

The Step Pyramid of Djoser was enclosed by a massive limestone wall, 10.5 meters high and 1,645 meters long. Inside it, a vast complex was built, spreading across 15 ha (37 acres) of land, the size of a large town in the 3rd millennium BC.
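As a quick consistency check on these figures: assuming the commonly cited rectangular footprint of roughly 545 by 277 meters (an assumption, not stated above), the quoted wall length and enclosed area agree with each other.

```python
# Sanity-check the enclosure figures quoted above. The 545 m x 277 m
# rectangular footprint is the commonly cited figure for the Djoser
# complex; treat it here as an assumption used for the arithmetic.

north_south = 545.0   # meters (assumed footprint)
east_west = 277.0     # meters (assumed footprint)

perimeter_m = 2 * (north_south + east_west)
area_m2 = north_south * east_west
area_ha = area_m2 / 10_000          # 1 hectare = 10,000 m^2
area_acres = area_m2 / 4_046.86     # 1 acre ~ 4,046.86 m^2

print(f"wall length: {perimeter_m:.0f} m (text says ~1,645 m)")
print(f"enclosed area: {area_ha:.1f} ha ~ {area_acres:.0f} acres "
      f"(text says 15 ha / 37 acres)")
```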

Inside this enclosure is a plethora of buildings, temples, and dummy structures, many of which are still not fully understood to this day.

But of all the structures contained within the limestone wall, the centerpiece was the Step Pyramid itself, a massive monument rising some 65 meters into the air and containing around 330,400 cubic meters of clay and stone. The pyramid was made of six superimposed structures stacked one atop the other.
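The quoted volume can likewise be bracketed with simple geometry. Assuming the commonly cited base of roughly 121 by 109 meters (again an assumption, not stated above), a smooth-sided pyramid of that base and height would be the lower bound and the full rectangular prism the upper bound; the ~330,400 figure for a stepped profile falls between them.

```python
# Bracket the quoted ~330,400 m^3 volume. The ~121 m x 109 m base is
# the commonly cited footprint (an assumption here); the 65 m height
# comes from the text above.

base_a, base_b, height = 121.0, 109.0, 65.0   # meters

smooth_pyramid = base_a * base_b * height / 3   # lower bound: smooth sides
bounding_box = base_a * base_b * height         # upper bound: a full prism

print(f"smooth pyramid: {smooth_pyramid:,.0f} m^3")
print(f"bounding box:   {bounding_box:,.0f} m^3")
# A six-step profile falls between these bounds, consistent with the
# quoted ~330,400 m^3.
```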


A Brief History of Silver – Magic, Money, and Medicine

Silver in its pure, native form (elemental silver) is actually quite rare in nature; native silver usually occurs mixed into ores with other metals. Gold, on the other hand, is often found in its pure elemental form, in nuggets of varying sizes. In this sense, silver can be considered somewhat rarer and more precious than gold.

The first known efforts to mine silver from the earth occurred in Anatolia, in present-day Turkey. The ancient Egyptians also found richer-than-average silver deposits in the gold they mined. They eventually learned how to separate the two precious metals and refine the silver into what they knew as "white gold." By the year 2500 BC, the Chaldeans were purifying silver from the lead ores they mined. This is generally a more efficient method of refining silver, and it led to a flood of silver across the ancient world. Another ancient culture prominent in silver's history was the Greeks, who discovered rich deposits of silver near ancient Athens. These discoveries eventually led to the infamous Laurium mines, which produced silver for hundreds of years.

Silver was an invaluable metal to ancient peoples. Its malleability and durability led to the widespread use of silver in money, art, and jewelry. Silver, like gold, is considered a noble metal: it will not readily react and will never rust. Naturally, these properties led people to ascribe a variety of supernatural and mystical powers to silver. In modern times, we know that silver can eventually develop a blackish surface tarnish if exposed to sulfur; however, it is only since the industrial revolution that there has been sufficient sulfur in the atmosphere to tarnish silver.

Ancient civilizations were laden with religious and mythical beliefs, and as such, silver came to be associated with various gods. Silver and gold were believed to be favored by the gods, who kept the metals shiny and rust-free. Silver also has unique antimicrobial properties, which were recognized even by ancient peoples who had no knowledge of modern medicine and biology. People observed that wine stored in silver vessels remained drinkable longer than wine in containers of other materials. The Romans knew that dropping silver coins into water storage containers meant fewer soldiers would become sick after drinking. Silver powders and tinctures were applied to wounds because silver was known to prevent sepsis. It was also noticed that spoiled food and drink would often turn silver drinking cups and silverware black on contact, which encouraged the widespread custom of using silver in dinnerware. People probably didn't know that the reason spoiled food blackened the silver is that many spoiled foods contain high concentrations of sulfur; the common belief was instead that silver had supernatural powers that explained these properties. These mystical properties of silver are further expressed in supernatural literature: silver is believed to be harmful and potentially lethal to vampires, and one needs a silver bullet to shoot down a werewolf.

Silver is the most reflective of all metals, which led to its common use in mirrors. Silver's reflectivity and color also led to its frequent association with the moon. Women, too, are often associated with the moon, and over time this has led to silver being linked with the feminine. Silver is commonly used in magic and ancient shamanistic rituals, where it is credited with all sorts of powers. Generally, it is seen as a beneficial substance that strengthens the effects of magic and protects and focuses the wearer. Silver is also believed to reflect harmful energies and spirits.

In more modern times, silver has found a myriad of roles in industry. Silver can be found in electronics, solar panels, photographic film, batteries, medicines, and countless other products crucial to modern living. It has been used as currency in coinage since ancient times. The ancient Greek silver drachmas, for example, were popular trade coins that spread through the entire Mediterranean region. Other ancient cultures in India, Persia, South America and Europe used silver coinage.

Silver has been used for coins instead of other materials for many reasons:

  • Silver is easily traded (liquid) and has a narrow spread between buy and sell prices.
  • Silver coins and bars are fungible. That is, one unit is equivalent to another unit of the same measure.
  • Silver is an easily transportable physical store of wealth. Precious metals like silver and gold have a high value to weight ratio.
  • Pure silver can be divided into smaller units without destroying its value. It can be melted and re-melted into various forms and measures.
  • Pure silver coins and bars have a definable weight, or measure, to verify their legitimacy.
  • Silver is durable. A silver coin will not rust away.
  • Silver has a stable value. It has always been a scarce and useful metal.

Despite attempts by governments and banks to demonetize silver over the last century, silver bullion, commemorative, and circulation coins are still minted today. They are popular among collectors and investors who want a store of wealth they can physically control and that acts as a hedge against inflation. Silver's investment demand has increased rapidly in the last several years as the governments of the world continue to inflate their currencies. With its myriad industrial uses and growing investment demand, silver may one day be valued equal to or even higher than gold.


History of Pesticide Use

The practice of agriculture first began about 10,000 years ago in the Fertile Crescent of Mesopotamia (part of present-day Iraq, Turkey, Syria and Jordan), where edible seeds were initially gathered by a population of hunter-gatherers [1]. Cultivation of wheat, barley, peas, lentils, chickpeas, bitter vetch and flax then followed as the population became more settled and farming became the way of life. Similarly, rice and millet were domesticated in China, whilst about 7,500 years ago rice and sorghum were farmed in the Sahel region of Africa. Local crops were domesticated independently in West Africa and possibly in New Guinea and Ethiopia. Three regions of the Americas independently domesticated corn, squashes, potato and sunflowers [2].

It is clear that the farmed crops would suffer from pests and diseases, causing a large loss in yield, with the ever-present possibility of famine for the population. Even today, with advances in agricultural science, losses due to pests and diseases range from 10-90%, with an average of 35 to 40%, for all potential food and fibre crops [3]. There was thus a great incentive to find ways of overcoming the problems caused by pests and diseases. The first recorded use of insecticides was about 4,500 years ago by the Sumerians, who used sulphur compounds to control insects and mites, whilst about 3,200 years ago the Chinese were using mercury and arsenical compounds to control body lice [4].

Writings from ancient Greece and Rome show that religion, folk magic and what may be termed chemical methods were all tried for the control of plant diseases, weeds, insects and animal pests. As there was no chemical industry, any products used had to be of plant or animal derivation or, if of mineral nature, easily obtainable. Thus, for example, smokes are recorded as being used against mildew and blights: the principle was to burn some material such as straw, chaff, hedge clippings, crabs, fish, dung, or ox or other animal horn to windward, so that the smoke, preferably malodorous, would spread throughout the orchard, crop or vineyard and, it was generally held, dispel the blight or mildew. Smokes were also used against insects, as were various plant extracts such as bitter lupin or wild cucumber. Tar was also used on tree trunks to trap crawling insects. Weeds were controlled mainly by hand weeding, but various "chemical" methods are also described, such as the use of salt or sea water [5,6].

Pyrethrum, which is derived from the dried flowers of Chrysanthemum cinerariaefolium, the "pyrethrum daisy", has been used as an insecticide for over 2,000 years. The Persians used the powder to protect stored grain, and later, Crusaders brought information back to Europe that the dried daisies controlled head lice [7]. Many inorganic chemicals have been used as pesticides since ancient times [8]; indeed, Bordeaux mixture, based on copper sulphate and lime, is still used against various fungal diseases.

Up until the 1940s, inorganic substances such as sodium chlorate and sulphuric acid, or organic chemicals derived from natural sources, were still widely used in pest control. However, some pesticides were by-products of coal gas production or other industrial processes. Thus early organics such as nitrophenols, chlorophenols, creosote, naphthalene and petroleum oils were used against fungal and insect pests, whilst ammonium sulphate and sodium arsenate were used as herbicides. The drawback of many of these products was their high rates of application, lack of selectivity and phytotoxicity [9]. The growth in synthetic pesticides accelerated in the 1940s with the discovery of the effects of DDT, BHC, aldrin, dieldrin, endrin, chlordane, parathion, captan and 2,4-D. These products were effective and inexpensive, with DDT being the most popular because of its broad-spectrum activity [4,10]. DDT was widely used, appeared to have low toxicity to mammals, and reduced insect-borne diseases like malaria, yellow fever and typhus; consequently, in 1948, Dr. Paul Müller was awarded the Nobel Prize in Physiology or Medicine for discovering its insecticidal properties. However, in 1946 resistance to DDT by house flies was reported, and, because of its widespread use, there were reports of harm to non-target plants and animals and problems with residues [4,10].

Throughout most of the 1950s, consumers and most policy makers were not overly concerned about the potential health risks of using pesticides. Food was cheaper because of the new chemical formulations, and with the new pesticides there were no documented cases of people dying or being seriously hurt by their "normal" use [11]. There were some cases of harm from misuse of the chemicals, but the new pesticides seemed rather safe, especially compared with the forms of arsenic that had killed people in the 1920s and 1930s [12]. However, problems could arise through indiscriminate use, and in 1962 these were highlighted by Rachel Carson in her book Silent Spring [13]. The book brought home the problems that could be associated with indiscriminate pesticide use and paved the way for safer and more environmentally friendly products.

Research into pesticides continued, and the 1970s and 1980s saw the introduction of the world's greatest-selling herbicide, glyphosate; the low-use-rate sulfonylurea and imidazolinone (imi) herbicides; and the dinitroaniline, aryloxyphenoxypropionate (fop) and cyclohexanedione (dim) families. For insecticides there was the synthesis of a third generation of pyrethroids and the introduction of avermectins, benzoylureas and Bt (Bacillus thuringiensis) as a spray treatment. This period also saw the introduction of the triazole, morpholine, imidazole, pyrimidine and dicarboxamide families of fungicides. As many of the agrochemicals introduced at this time had a single mode of action, making them more selective, problems with resistance occurred, and management strategies were introduced to combat this negative effect.

In the 1990s, research activities concentrated on finding new members of existing families with greater selectivity and better environmental and toxicological profiles. In addition, new families of agrochemicals were introduced to the market, such as the triazolopyrimidine, triketone and isoxazole herbicides; the strobilurin and azolone fungicides; and the chloronicotinyl, spinosyn, fiprole and diacylhydrazine insecticides. Many of the new agrochemicals can be used at rates of grams, rather than kilograms, per hectare.

New insecticide [14] and fungicide [15] chemistry has allowed better resistance management and improved selectivity. This period also saw the refinement of mature products in terms of use patterns, with the introduction of newer, more user-friendly and environmentally safe formulations [9]. Integrated pest management systems, which use all available pest control techniques to discourage the development of pest populations and reduce pesticides and other interventions to levels that are economically justified, have also contributed to reducing pesticide use [16].

Today the pest management toolbox has expanded to include genetically engineered crops designed to produce their own insecticides or to exhibit resistance to broad-spectrum herbicides [9]. These include herbicide-tolerant soybeans, corn, canola and cotton, as well as varieties of corn and cotton resistant to corn borer and bollworm, respectively. The spread of such crops, together with the wider adoption of integrated pest management, has altered the nature of pest control and has the potential to reduce and/or change the nature of the agrochemicals used.

[1] Kislev, M.E., Weiss, E. and Hartmann, A. (2004). Impetus for sowing and the beginning of agriculture: ground collecting of wild cereals. Proceedings of the National Academy of Sciences, 101(9), 2692-2694.

[2] Primal Seeds, Origin of Agriculture.

[3] Peshin, R. (2002). Economic benefits of pest management. In Encyclopedia of Pest Management, pp. 224-227. Marcel Dekker.


CONCLUSIONS

Since time immemorial, people have tried to find medications to alleviate pain and cure illness. In every period, through every successive century of human development and advancing civilization, the healing properties of certain medicinal plants were identified, noted, and conveyed to succeeding generations. The knowledge gained by one society was passed on to others, which built on the old findings and discovered new ones, down to the present day. People's continuous and enduring interest in medicinal plants has brought about today's modern and sophisticated methods of processing and using them.