A Culture of Decadence

John Love 

After the Second World War, life for the average white middle-class American was better than it had ever been. Technology was advancing at breakneck speed, giving ordinary Americans access to inventions like the refrigerator, washing machine, and television. Nearly every standard-of-living metric was at an all-time high, from diet and physical health to the strength of family and community networks. Life was good. However, much has changed since then.

Today, I believe we live in a culture of decadence, and I know I am not alone in this sentiment. Many people, including New York Times columnist Ross Douthat, now believe we live in a time characterized by “economic stagnation, institutional decay, and cultural and intellectual exhaustion.” I believe a few core assumptions spawned during the so-called “good times” have invaded our cultural psyche and led us down the path to our current age of decadence.

Technological Advancement and a Cultural Decline

As previously mentioned, technology was advancing at a record pace in the post-WWII boom era. This brought many goods to the world. Life spans lengthened greatly thanks to advances in modern medicine and food technology. Time spent on household chores was slashed by the dishwasher and washing machine, freeing women to pursue less menial activities and jobs. However, with these benefits came some negative side effects.

When technology advances so rapidly, our culture often has a hard time keeping pace. Take the case of television. Until 1973, a town in Canada codenamed “Notel” had no access to television due to its isolated mountainous geography. Just before television arrived, Dr. Tannis MacBeth began a study comparing Notel to analogous nearby towns that did have television, making Notel the best possible control group for the effects of the medium. The results speak for themselves: Notel residents showed higher IQs, less aggression, and more self-control and empathy.

Two years after television was introduced, MacBeth returned to Notel. The change was apparent. Before the arrival of TV, the town had a strong sense of community, with regular sports matches and community dinners. Afterward, attendance at and demand for these events had diminished, as the new default after work was to go home and watch TV.

The Rise of the Pursuit of Pleasure 

The trend towards television is just one part of a broader societal shift towards the emphasis on, and glorification of, pleasure. This is easily seen in content made for youth and teens. Before this period of decadence, young people loved stories and magazines about great feats, ingenuity, and exploration, as shown in the popularity of the western genre in America and publications like Boys of the Empire in the old British Empire. However, valor and bravery are no longer what the younger generations find desirable; the new be-all and end-all is more along the lines of Dazed and Confused: sex, drugs, and just having a good time. After all, “if we’re all going to die anyway, shouldn’t we enjoy ourselves now?”

Disordered Judgment 

This feeling of nihilism and self-pleasure is driven in part by the decline of organized religion in the USA and the West as a whole. However, that has not kept the now-churchless from using and corrupting parts of Christianity’s teachings in pursuit of a new meaning in life. One of the most famous teachings of Jesus is “do not judge, or you too will be judged” (Matthew 7:1 NIV). So famous is this teaching that it persists in people’s minds even after they stop practicing their faith, and they interpret it as a decree against judgment of all kinds. We therefore act in a laissez-faire manner towards others, treating such tolerance as love and judgment as hate. All actions are good besides those seen as uniquely evil: Nazism, racism, and the like.

However, this ignores the rest, and the original meaning, of the passage: “how can you say to your brother, ‘Let me take the speck out of your eye,’ when all the time there is a plank in your own eye? You hypocrite, first take the plank out of your own eye, and then you will see clearly to remove the speck from your brother’s eye” (Matthew 7:4-5). This advice holds true not just for the faithful, but for all. Anyone honorable and just should notice the flaws they hate in others, then look for those same flaws in themselves. Doing so allows us to improve ourselves and gives us greater empathy for the plights of others. Instead, people trend towards one of two extremes. Some refuse to judge others at all, citing a supposed lack of objective morality. Others judge freely but refuse to judge themselves, believing themselves superior to those without any sense of objective morality; this leads to nothing but vitriol and anger.

This lack of healthy judgment does not just stain the behavior and manners of the individual; it harms society at large. Postmodernism can largely be derived from this dismissal of objective fact and of casting judgment, in favor of subjectivity and skepticism. The stance is inherently meaningless and illogical, as it holds that multiple truths are possible, which is easily disproved by letting the ideology interact with the real world: relativism in the fields of science, engineering, and technology would make the world unlivable. Because of our inability as a society to judge, we find it hard to reject this deeply flawed stance, and so it remains prominent in intellectual spheres. The result is the promotion of obscurantism, the decline of art and culture, and difficulty in expanding analytical or empirical knowledge.

To Conclude

One may read all this and be quick to blame the generation that first went through these changes, the Baby Boomers. However, we have done nothing to change since then, only sinking further into these negative practices. Instead of watching television for hours a day, we now spend hours each day on social media, isolating ourselves even more from others and the outside world. The media we consume depicts more degenerate scenes than ever, as in the hit show Euphoria. We still lack the ability to judge others and to improve ourselves. We are still part of the same “macro generation.”

In order to break this cycle, we must strive to improve ourselves. Instead of scrolling social media today, go for a walk, call up a friend, or read a good book. Don’t seek leisure or pleasure as an end in itself; strive to be the best man or woman you can be. Only then will you have taken the plank out of your own eye. Only then can we end this culture of decadence.

We Need to Look Out for Our Future

John Love

As individuals, we often struggle to see past the immediate impacts of our actions, which can cause problems further down the line. Some effects are relatively minor: faced with a good meal or a tasty sweet, we eat past our limits, satisfying our taste buds in the moment, only to regret it when we feel bloated or see a higher number on the scale. Other effects are much greater: 75% of Americans have retirement savings that fall short of even the most conservative savings targets, and 21% don’t save at all. The effects of short-sightedness vary just as widely for a nation.

“A society grows great when old men plant trees in whose shade they know they shall never sit.” Some attribute this quote to the Ancient Greeks, some to modern motivational speakers, but all can agree that the sentiment rings true. Western societies of old were renowned for their ability to look beyond short-term benefit to the long-run good of their societies, as seen in the cathedrals of medieval Europe, which could take hundreds of years to complete. Nowadays, however, we have collectively decided to stop looking out for the needs of the future in order to satisfy the wants of the present.

Sacrificing Long-term Investment for Short-term Gains

Shareholder capitalism is the driving economic system in the USA and much of the modern world. In such a system, corporate leaders are legally required to operate companies in a way that maximizes value for shareholders, meaning those who have purchased a share of the company. In practice, this means executives doing anything they can to keep share (stock) prices high and growth metrics consistent. While the system delivers great short-term results for investors, it can often be to the detriment of a company’s employees and customers, or even of the company itself.

If a company makes a profit, it would be logical for it to reinvest that profit in itself, in order to keep earning profits in the future. However, that is often not the case. Corporate leaders are under such pressure to keep share prices high that they often cut back on investments in R&D, employees, or capital expenditures, all of which would help the company succeed in the long run. It was not always this way. Financial markets used to be seen as a way to make long-term investments in sound businesses. But as markets have grown more complex, the growing ranks of financial intermediaries have come to treat market investments as a sort of paper asset: something to be traded in the short term, not invested in for the long term. This kind of trading makes some, including the intermediaries, quite wealthy in the short term, but it is a zero-sum game, sorely lacking in long-term wealth creation.

The Debt-to-GDP Ratio: A Marker of Economic Health

As financial markets in the USA have matured and grown more complex, the country’s debt-to-GDP ratio has grown as well. The ratio, defined as a country’s total government (sovereign) debt divided by its GDP, or economic output, is usually a bellwether for the health and performance of an economy. For example, a World Bank study found that countries with a debt-to-GDP ratio above 77% can expect economic slowdowns and recessions. As of 2022, the ratio in the USA is over 120%.
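In plain arithmetic, the ratio is just debt divided by output. As a rough worked example, using approximate 2022 figures of about $30.9 trillion in federal debt and about $25.5 trillion in GDP:

\[
\text{Debt-to-GDP} = \frac{\text{total sovereign debt}}{\text{nominal GDP}} \times 100\% \approx \frac{\$30.9\ \text{trillion}}{\$25.5\ \text{trillion}} \times 100\% \approx 121\%
\]

well above the 77% threshold the World Bank identifies.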

Traditionally, the debt-to-GDP ratio has remained quite low in the USA, spiking only during periods of large government spending, mainly wars and financial recessions. These spikes make sense in both the short and the long term: one cannot fight or win a war without spending big on the armed forces, and fiscal spending and loose monetary policy can help a struggling economy bounce back. However, both come with a caveat: once the war or recession is over, spending must go down and taxes must go up.

While America historically did well at maintaining a low federal debt-to-GDP ratio, it has not done nearly as well since the 1980s. In response to the recession, inflation, and oil crisis of the 1970s, Reagan was elected president on a platform of lowering taxes to help the American economy recover. The annual federal budget deficit grew from $41 billion in 1971, before his presidency, to $212 billion in 1985, after his first term in office.

By then, the American economy was booming again, meaning spending should have been cut and taxes raised to maintain a stable debt-to-GDP ratio. That has not happened. Aside from Clinton’s second term in office, every American presidency since has seen an increase in the debt-to-GDP ratio. Spending continues to balloon during each recession (the deficit grew from $161 billion in 2007, before the Great Recession, to $1.41 trillion in 2009, after it), but even when policy is tightened, it rarely produces a budget surplus, just a smaller deficit.

Utopia vs Reality

Why would American politicians engage in such reckless fiscal policy? The answer is quite simple. Americans, like people everywhere, hate to pay taxes. At the same time, Americans love government benefits and programs, such as Medicare, Social Security, and infrastructure. These desires, while compatible in a theoretical utopia without scarcity, are not compatible in the real world. Yet no presidential candidate is likely to be elected on a platform of making voters’ lives worse, even if only in the short term. Instead, Republicans run on decreasing taxes (with negligible spending decreases), while Democrats run on increasing spending (with negligible tax increases). Both yield the same result for the debt-to-GDP ratio.
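The arithmetic here is unforgiving. In any given year, the deficit is simply spending minus revenue, and each year’s deficit is added to the debt:

\[
\text{deficit}_t = \text{spending}_t - \text{revenue}_t, \qquad \text{debt}_{t+1} = \text{debt}_t + \text{deficit}_t
\]

Cutting taxes without cutting spending and raising spending without raising taxes both widen the deficit; the only difference is which side of the subtraction moves.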

These problems are far from the only instances where we prioritize short-term benefits over long-term success. We prioritize rising home values over home affordability through restrictive zoning policy, helping retirees preserve their home values at the expense of the young looking for a place to live. We raise our children in ways that keep them protected and sheltered, but cause them to flounder in the real world. It is in our personal lives, our culture, and our elected governments.

To Conclude 

When looking towards the future, we have two choices: continue to benefit in the short term at the expense of our own future or future generations’ futures, or bite the bullet now and do things that may hurt in the short term but will greatly benefit us many years from now. We can stop structuring our financial markets in their current predatory forms, which leech from companies instead of investing in their futures. One potential solution is to make shareholder voting rights depend on how long shareholders have held their shares, not just on the number of shares they currently hold. We can stop supporting politicians who care more about getting elected than about the economic health of the nation. We can look out for the needs of the future instead of the wants of the present.
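To make the voting-rights idea concrete, here is a minimal sketch of what time-weighted voting could look like. Everything in it (the linear weighting, the four-vote cap, the names) is a hypothetical illustration, not the specification of any existing proposal:

```python
from dataclasses import dataclass

@dataclass
class Holding:
    shares: int        # number of shares currently held
    years_held: float  # how long the position has been held

def voting_power(holding: Holding, max_multiplier: float = 4.0) -> float:
    """Votes grow linearly with holding time: one extra vote per share
    for each full year held, capped at max_multiplier votes per share."""
    multiplier = min(1.0 + holding.years_held, max_multiplier)
    return holding.shares * multiplier

# A short-term trader with more shares can end up with fewer votes
# than a patient investor with fewer shares.
trader = Holding(shares=10_000, years_held=1 / 12)  # held one month
investor = Holding(shares=4_000, years_held=5.0)    # held five years
print(voting_power(trader))    # ~10,833 votes
print(voting_power(investor))  # 16,000 votes (multiplier capped at 4)
```

Under a scheme like this, influence accrues to owners who stay, blunting the pressure on executives to chase quarterly share-price moves.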

Why the West is Wealthy

John Love

Have you ever wondered why Mexico is so much poorer than the USA, despite sharing a border? Or why Asia, despite its vast population, is poorer than the West? You’re not alone. Historians, economists, sociologists, and politicians have been asking the same questions for centuries, and have proposed many theories over the years.

In the past, theories centered largely on differences in race, with Africa and Asia supposedly doomed by racial inferiority. Over time, however, this theory has been discredited by the failures of societies like Nazi Germany and the successes of Asian societies like Japan. Today, the theories for the differences in the wealth of societies can be boiled down into three categories: geography, culture, and institutions.

The Geography Hypothesis

The geography hypothesis postulates, in short, that countries’ successes are bound by their geographies more than by the cultures, leaders, or institutions within them. The most famous work supporting this hypothesis is likely Jared Diamond’s Pulitzer Prize-winning Guns, Germs, and Steel. In it, Diamond rejects the idea that culture, institutions, and the actions of specific leaders have determined the course of history, arguing instead that where countries are, and the resources they contain, are the main determinants of a nation’s future success. The book’s most famous claim, and one growing increasingly prevalent today, is that cold nations are destined to be more successful than warm ones.

Diamond argues that warmer, equatorial areas suffer many negative factors, including tropical diseases and inferior staple crops (such as rice, maize, and bananas). Compare this to colder areas further from the equator, which are relatively free of such diseases and have crops that contain more protein and are easier to farm and store (such as barley, wheat, and flax). Additionally, warmer climates tend to invite physical laziness, as anyone who has lived through a Texas summer can attest. In colder regions, one could not afford to be lazy during the summer months, as hard work and collaboration were required to survive the harsh winters.

Looking at a map of the world today, his hypothesis that cold nations fare better than warm ones seems to ring true. Canada, the USA, western Europe, and the Nordic countries have some of the highest GDPs per capita in the world, especially compared to some of the warmest and poorest countries; the USA, for example, has 86 times the GDP per capita of Burundi. Even within the USA, the northern states have tended to fare better economically than the southern states: the wealthiest state by GDP per capita, New York, has over twice the figure of the poorest state, Mississippi.

This hypothesis, while tending to hold in the modern day (with some exceptions: Russia has a GDP per capita of $27,044, compared to Qatar’s $85,300), does not hold up as well across history. Over the centuries, the wealthiest countries per capita have varied greatly. Around the time of Christ, for example, Iraq had an estimated GDP per capita (in 2011 dollars) of $1,225, while the Netherlands had a GDP per capita of $600. Today, the Netherlands has a GDP per capita more than three times that of Iraq. Nor is this cherry-picked data: warmer nations in the Middle East and Mediterranean consistently had greater GDPs per capita than colder nations in western and northern Europe until at least the Middle Ages. What caused this to change? Why did these nations lose their economic advantage? One potential explanation lies in the differences between these nations’ cultures.

The Cultural Hypothesis

Proponents of the cultural hypothesis believe that properties of cultures, most often their religions, social values, and family ties, are the main determinants of whether a nation is financially successful. They argue that some cultures are better at promoting fiscal responsibility, hard work, and the acceptance of technological advancement.

A tenet of the cultural hypothesis is that some cultures are better at adopting technological advancements, allowing them to excel in the long run. An example can be found in how different cultures across Eurasia reacted to the Mongol invasions of the 13th century. Both China and the Muslim world turned insular, with the study of Confucian texts or the Quran treated as more important than the study of the real world. This led both cultures, which had been at the forefront of worldwide scientific development, to fall behind. The West, on the other hand, adopted Thomas Aquinas’s idea of natural theology: that reason and the study of the Earth, God’s creation, are of the utmost importance. This helped the West eventually surpass the rest of the world scientifically and begin the Industrial Revolution first.

Perhaps the most famous example of the cultural hypothesis in action is the so-called “Protestant work ethic.” Coined by German sociologist Max Weber, the concept asserts that Protestant ethics and Calvinist doctrine helped produce the prosperity of western civilization. In Protestant doctrine, hard work, education, and frugality were among the most important marks of being a good steward of God, something Weber argued was absent from the pre-Reformation culture of Catholic Europe. This, in turn, helped launch the spread of capitalism across the countries of western Europe, leading to their prosperity in the modern day.

A map of Catholic and Protestant nations bears this out in broad strokes (the USA and Canada are wealthier than Latin America, and northern Europe is wealthier than Mediterranean Europe). However, Weber’s concept is not without its critics: some Catholic nations are wealthier than nearby Protestant counterparts, and some academics argue capitalism first emerged before the Reformation, in northern Italy. Additionally, one can argue that what matters is not the culture that gave rise to capitalism, but the laws and institutions of a capitalist society.

The Institutional Hypothesis

The third of the main three hypotheses, the institutional hypothesis, postulates that differences in how societies organize themselves and set laws are the main influences on the differences in prosperity between nations. Its proponents point to the fact that countries that are very similar geographically and culturally can have vastly different economic institutions and outcomes.

An example can be found in the differences between North and South Korea. North Korea, with its centralized economic planning, lack of markets, and lack of property rights, has become deeply impoverished, especially relative to its main neighbors, South Korea and China. South Korea, on the other hand, has excelled economically since adopting laws protecting property and other free-market policies. Before the split, both halves of the Korean peninsula shared a similar culture, and they remain similar geographically, each dominated by mountainous terrain.

Another example of institutions shaping the economic success of nations lies in the outcomes of former British colonies compared to former Spanish ones. Several British colonies, namely the USA, Canada, and Australia, are world powers and economic juggernauts. One can argue this is because English common law forms the foundation of these countries’ legal systems, protecting the civil rights and property of those living under it and increasing investment by increasing stability. Many other former British colonies, such as Egypt and British India (including Bangladesh and Pakistan), while poorer than the colonies just mentioned, are still wealthier in GDP per capita than many of their neighbors. Their gap with their richer former colonial peers can be attributed to their having abandoned some facets of English common law, such as strong property rights, for at least a time after independence.

Former Spanish colonies, on the other hand, are much poorer than their English counterparts. These colonies were designed not as economic engines but as extractive economies, much to the detriment of the nations they have since become. In addition, governing power flowed largely from the foreign Spanish crown, whereas the English system concentrated more power in the colonies’ local residents. This hamstrung the Spanish colonies’ decision-making and limited economic growth. Furthermore, without English common law to set a baseline for individual and property rights, former Spanish colonies have struggled to build stable market economies free from the threat of military dictatorships or socialist regimes.

Which Theory is Right?

This is by no means a comprehensive list of theories. There are many more examples supporting each of the hypotheses above, as well as additional theories entirely (such as the Great Leader Hypothesis, in which great figures in history determine which nations prosper; think Alexander the Great, Confucius, or George Washington). I personally believe that each of these theories has its merits, and that none is the sole factor in why some nations are more prosperous than others.

In some cases, such as that of the natives of the New World, poor geography and fauna made a prosperous civilization extremely difficult to build. Very few domesticable animals and poorly accessible mineral resources made urbanized civilization hard to create, meaning few natives were ever exposed to, and thus gained immunity against, the diseases that later wiped out much of their population. In other cases, a nation’s culture can hold it back, as with the historically passive Hindu culture and caste system of India, which allowed very little social mobility. In still other cases, a nation’s institutions can hold back a great geographic position or a culturally strong people, as when communism kept Eastern Europe from prospering. When analyzing why a country is successful, look at the whole picture, not just a few of the factors, to understand what could have gone right or wrong.