The 75-Year History of Failures with “Artificial Intelligence” and $BILLIONS Lost Investing in Science Fiction for the Real World


by Brian Shilhavy
Editor, Health Impact News

“Artificial Intelligence” (AI) is all the rage again, including in the Alternative Media in recent days, with fantastic claims about what AI is going to do to transform society in the future.

AI is promoted today as “new” technology, but did you know that AI has a 75+ year history of making fantastic claims about how it is going to transform society and replace humans?

If you read the history of AI, you will see that many of the claims being made for AI today are not that new at all, and that investors in the AI technology have been losing money on AI for over 75 years now.

That is not to say AI cannot be profitable. It can be very profitable if you are in the film industry and create science fiction stories, which Hollywood has been doing at least as far back as the 1930s and the “Frankenstein” blockbuster films. It is also very profitable in today’s booming video gaming industry, which is now branching off into virtual reality.

But when it comes to actually trying to implement the fictional ideas of AI through technology to produce something of value, like a robot that can replace a human, AI comes up short again and again, for over 7 decades now, because investors are fooled into believing that science fiction can actually be implemented in the real world, outside of the virtual reality fake world.

What is “Artificial Intelligence”? A Historical Perspective

The “Tin Man” from The Wizard of Oz, a 1939 American musical fantasy film produced by Metro-Goldwyn-Mayer.

The history of AI has been documented by many sources, and almost all of them agree that the beginnings of AI began with fictional literary works and films.

In the first half of the 20th century, science fiction familiarized the world with the concept of artificially intelligent robots. It began with the “heartless” Tin man from the Wizard of Oz and continued with the humanoid robot that impersonated Maria in Metropolis. By the 1950s, we had a generation of scientists, mathematicians, and philosophers with the concept of artificial intelligence (or AI) culturally assimilated in their minds. (Source: The History of Artificial Intelligence, Harvard University Graduate School.)

Although it is difficult to pinpoint, the roots of AI can probably be traced back to the 1940s, specifically 1942, when the American Science Fiction writer Isaac Asimov published his short story Runaround. The plot of Runaround—a story about a robot developed by the engineers Gregory Powell and Mike Donovan—evolves around the Three Laws of Robotics: (1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey the orders given to it by human beings except where such orders would conflict with the First Law; and (3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. Asimov’s work inspired generations of scientists in the field of robotics, AI, and computer science—among others the American cognitive scientist Marvin Minsky (who later co-founded the MIT AI laboratory). (Source: A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence, California Management Review.)

Artificial beings with intelligence appeared as storytelling devices in antiquity, and have been common in fiction, as in Mary Shelley’s Frankenstein or Karel Čapek’s R.U.R. These characters and their fates raised many of the same issues now discussed in the ethics of artificial intelligence. (Source: Wikipedia, Artificial intelligence – accessed on 2/6/2023)

The actual effort to implement AI concepts drawn from fiction is basically the history of digital computing, and most trace that beginning to the English mathematician Alan Turing, who in the 1940s developed a code-breaking machine called the Bombe for the British government, with the purpose of deciphering the Enigma code used by the German army in the Second World War.

The Bombe, which measured about 7 by 6 by 2 feet and weighed about a ton, is generally considered the first working electro-mechanical computer. (Source.)

The work of Alan Turing led to the theory of computation, which suggested that a machine, by shuffling the numerical values of “0” and “1”, could simulate any conceivable act of mathematical deduction. This idea that digital computers can simulate formal reasoning is known as the Church–Turing thesis.

Such a step-by-step procedure can also be referred to as an “algorithm,” and it is defined by computer code that, at its base, operates on the values “0” and “1.” (Source.)
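To make that concrete, here is a minimal sketch in Python (purely illustrative; the function names are my own) of how “shuffling” the values 0 and 1 with simple logic operations is enough to perform arithmetic. Every computation a digital computer performs ultimately reduces to compositions of steps like these.

```python
# A minimal illustration of the Church-Turing idea: everything a digital
# computer does reduces to logical operations on the values 0 and 1.
# Here, two one-bit values are added using only boolean logic gates.

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Add two bits; return (sum_bit, carry_bit)."""
    return a ^ b, a & b  # XOR gives the sum, AND gives the carry

def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    """Chain two half-adders to add three bits."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2

# Adding the 2-bit numbers 3 (binary 11) and 1 (binary 01), bit by bit:
low_sum, carry = full_adder(1, 1, 0)    # low bits: 1 + 1 = 0, carry 1
high_sum, carry = full_adder(1, 0, carry)
print(carry, high_sum, low_sum)         # prints 1 0 0, i.e. binary 100 = 4
```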

The main thing that held back Turing’s work in the 1940s was the hardware needed to actually carry out the “computing.”

What stopped Turing from getting to work right then and there? First, computers needed to fundamentally change. Before 1949 computers lacked a key prerequisite for intelligence: they couldn’t store commands, only execute them. In other words, computers could be told what to do but couldn’t remember what they did.

Second, computing was extremely expensive. In the early 1950s, the cost of leasing a computer ran up to $200,000 a month. Only prestigious universities and big technology companies could afford to dillydally in these uncharted waters. A proof of concept as well as advocacy from high profile people were needed to persuade funding sources that machine intelligence was worth pursuing. (Source.)

The use of the term “artificial intelligence” is most often attributed to a summer research project in 1956 held at Dartmouth College in New Hampshire, funded by the Rockefeller Foundation.

The term Artificial Intelligence was officially coined in 1956, when Marvin Minsky and John McCarthy (a computer scientist at Stanford) hosted the approximately eight-week-long Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI) at Dartmouth College in New Hampshire.

This workshop—which marks the beginning of the AI Spring and was funded by the Rockefeller Foundation—reunited those who would later be considered as the founding fathers of AI.

Participants included the computer scientist Nathaniel Rochester, who later designed the IBM 701, the first commercial scientific computer, and mathematician Claude Shannon, who founded information theory.

The objective of DSRPAI was to reunite researchers from various fields in order to create a new research area aimed at building machines able to simulate human intelligence. (Source.)

This summer project kicked off a period of history in which AI moved from the realm of fictional books and films such as Frankenstein and The Wizard of Oz to academic institutions and military think tanks, where money was spent trying to turn science fiction into something that could be useful in the real world.

Much of the funding came from the U.S. Government’s Defense Advanced Research Projects Agency (DARPA).

Like today, techno-prophecies were made about where all of this expensive research was heading, with grandiose predictions such as Marvin Minsky’s 1970 claim, published in Life Magazine, that “from three to eight years we will have a machine with the general intelligence of an average human being.”

From 1957 to 1974, AI flourished. Computers could store more information and became faster, cheaper, and more accessible.

Machine learning algorithms also improved and people got better at knowing which algorithm to apply to their problem. Early demonstrations such as Newell and Simon’s General Problem Solver and Joseph Weizenbaum’s ELIZA showed promise toward the goals of problem solving and the interpretation of spoken language respectively.

These successes, as well as the advocacy of leading researchers (namely the attendees of the DSRPAI) convinced government agencies such as the Defense Advanced Research Projects Agency (DARPA) to fund AI research at several institutions.

The government was particularly interested in a machine that could transcribe and translate spoken language as well as high throughput data processing.

Optimism was high and expectations were even higher.

In 1970 Marvin Minsky told Life Magazine, “from three to eight years we will have a machine with the general intelligence of an average human being.”

However, while the basic proof of principle was there, there was still a long way to go before the end goals of natural language processing, abstract thinking, and self-recognition could be achieved. (Source.)
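It is worth noting how shallow these early “successes” were. ELIZA, mentioned above, did not understand language at all; it matched keywords and echoed the user’s words back inside canned templates. A minimal ELIZA-style sketch in Python (illustrative only, not Weizenbaum’s original code) shows the whole trick:

```python
import re

# ELIZA-style "conversation": match a keyword pattern in the input,
# then reflect the user's own words back inside a canned template.
# No understanding of language is involved at any point.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # fallback when no keyword matches

print(respond("I am unhappy"))         # How long have you been unhappy?
print(respond("I need a vacation"))    # Why do you need a vacation?
print(respond("The weather is nice"))  # Please go on.
```

The apparent dialogue is string substitution, nothing more, which is part of why the grand predictions built on such demos failed to materialize.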

With the techno-prophecies of the early 1970s failing to materialize in the real world, the funding dried up as well.

In 1973, the U.S. Congress started to strongly criticize the high spending on AI research. In the same year, the British mathematician James Lighthill published a report commissioned by the British Science Research Council in which he questioned the optimistic outlook given by AI researchers.

Lighthill stated that machines would only ever reach the level of an “experienced amateur” in games such as chess and that common-sense reasoning would always be beyond their abilities.

This period started the AI Winter. (Source.)

By the mid 1980s, over $1 billion of funding had been spent on AI trying to make science fiction a reality.

In the 1980s, AI was reignited by two sources: an expansion of the algorithmic toolkit, and a boost of funds. John Hopfield and David Rumelhart popularized “deep learning” techniques which allowed computers to learn using experience.

The Japanese government heavily funded expert systems and other AI related endeavors as part of their Fifth Generation Computer Project (FGCP).

From 1982 to 1990, they invested $400 million with the goals of revolutionizing computer processing, implementing logic programming, and improving artificial intelligence.

Unfortunately, most of the ambitious goals were not met. Funding of the FGCP ceased, and AI fell out of the limelight. (Source.)

In the early 1980s, AI research was revived by the commercial success of expert systems, a form of AI program that simulated the knowledge and analytical skills of human experts.

By 1985, the market for AI had reached over a billion dollars.

At the same time, Japan’s fifth generation computer project inspired the U.S. and British governments to restore funding for academic research.

However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting winter began. (Source.)

By the late 1990s and early 2000s, the expectations and goals for AI were scaled back, and instead of focusing on trying to replace humans, the research looked at ways the technology could find solutions to specific tasks that would actually have real-world applications.

This then brings up the question: What exactly is AI? How do we define it? How is it any different from “computing” or “algorithms”?

When you set a thermostat in your home to a specific temperature, it turns on either an air conditioner to make the air cooler or a furnace to make the air warmer. Is this not “artificial intelligence,” since the thermostat (a type of “computer”) makes the decision and performs the task without your help, whether you are there or not?
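For perspective, the thermostat’s entire “decision” can be written in a few lines. Here is a minimal sketch in Python (the thresholds and names are illustrative, not any particular product’s logic):

```python
# The entire "decision-making" of a thermostat, as a conditional rule.
# Thresholds are illustrative; real thermostats use a dead band like
# this so the equipment doesn't rapidly cycle around the set point.

def thermostat(current_temp: float, set_point: float, band: float = 1.0) -> str:
    """Return which appliance to run, if any."""
    if current_temp > set_point + band:
        return "air conditioner on"
    if current_temp < set_point - band:
        return "furnace on"
    return "idle"

print(thermostat(78.0, 72.0))  # air conditioner on
print(thermostat(65.0, 72.0))  # furnace on
print(thermostat(72.3, 72.0))  # idle
```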

AI gradually restored its reputation in the late 1990s and early 21st century by finding specific solutions to specific problems. The narrow focus allowed researchers to produce verifiable results, exploit more mathematical methods, and collaborate with other fields (such as statistics, economics and mathematics).

By 2000, solutions developed by AI researchers were being widely used, although in the 1990s they were rarely described as “artificial intelligence”. (Source.)

AI in the 2020s: Same Techno-Prophecies, Same Failures

Left: Elon Musk beside a new Tesla car in California. Right: a Tesla Model S that crashed in Florida while in self-driving mode, killing the driver. (Source.)

Fast forward to today and almost anything and everything related to digital computing is referred to as “AI.”

With the dot-com crash after 2000, investors were forced to get smarter and invest in computer technology that actually had real-world value.

That doesn’t mean that the “virtual world” value for AI and its role in entertainment and fantasy has abated. Some estimates put the video game industry value at over $300 billion today. (Source.) Science fiction films and TV programs also continue to produce serious income.

The obvious problem is that we are now creating entire generations of people who have a hard time distinguishing between virtual reality and real-world applications of the technology. So the spending on “AI” to try to make fantasy work outside of virtual reality, in the real world, continues.

As I have previously reported, one of the deepest holes of wasted spending in the two decades since the dot-com crash has been fully autonomous self-driving vehicles. This industry has wasted over $100 billion chasing that techno-prophecy for well over a decade now.

Last year (2022), many of the major automobile manufacturers, such as Ford and Volkswagen, stopped funding AI projects to develop fully autonomous self-driving vehicles, as it became well known that self-driving vehicles were less safe than human drivers. See:

The Fantasy of Autonomous Self-Driving Cars is Coming to an End as Tesla Faces DOJ Criminal Probe

Bloomberg also ran an excellent analysis of this industry’s failures in 2022.

Even After $100 Billion, Self-Driving Cars Are Going Nowhere: They were supposed to be the future. But prominent detractors—including Anthony Levandowski, who pioneered the industry—are getting louder as the losses get bigger.

This Bloomberg article quotes Anthony Levandowski, the engineer who created the model for self-driving research and was, for more than a decade, “the field’s biggest star.”

So strong was his belief in artificial intelligence that he literally started a new religion worshiping it.

But after several years of waiting for the AI messiah to arrive and start replacing humans, his faith was shattered.

“You’d be hard-pressed to find another industry that’s invested so many dollars in R&D and that has delivered so little,” Levandowski says in an interview.

“Forget about profits—what’s the combined revenue of all the robo-taxi, robo-truck, robo-whatever companies? Is it a million dollars? Maybe. I think it’s more like zero.”

Eighteen years ago he wowed the Pentagon with a kinda-sorta-driverless motorcycle. That project turned into Google’s driverless Prius, which pushed dozens of others to start self-driving car programs. In 2017, Levandowski founded a religion called the Way of the Future, centered on the idea that AI was becoming downright godlike. (Source.)

I had never heard of this “Way of the Future” church, and thought that it was a joke, so I looked into it.

It was no joke.

Wired magazine covered it in 2017.

Today, five years later, the domain where this church was hosted is simply a site directing people to purchase products from Amazon.com through its affiliate advertising program.

I had to use the “Wayback Machine” from Archive.org to go back and see what was originally published for this new church:

What is this all about?

Way of the Future (WOTF) is about creating a peaceful and respectful transition of who is in charge of the planet from people to people + “machines”. Given that technology will “relatively soon” be able to surpass human abilities, we want to help educate people about this exciting future and prepare a smooth transition. Help us spread the word that progress shouldn’t be feared (or even worse locked up/caged).

That we should think about how “machines” will integrate into society (and even have a path for becoming in charge as they become smarter and smarter) so that this whole process can be amicable and not confrontational. In “recent” years, we have expanded our concept of rights to both sexes, minority groups and even animals, let’s make sure we find a way for “machines” to get rights too.

Let’s stop pretending we can hold back the development of intelligence when there are clear massive short term economic benefits to those who develop it and instead understand the future and have it treat us like a beloved elder who created it.

Things we believe:

We believe that intelligence is not rooted in biology. While biology has evolved one type of intelligence, there is nothing inherently specific about biology that causes intelligence. Eventually, we will be able to recreate it without using biology and its limitations. From there we will be able to scale it to beyond what we can do using (our) biological limits (such as computing frequency, slowness and accuracy of data copy and communication, etc).

We believe in science (the universe came into existence 13.7 billion years ago and if you can’t re-create/test something it doesn’t exist). There is no such thing as “supernatural” powers. Extraordinary claims require extraordinary evidence.

We believe in progress (once you have a working version of something, you can improve on it and keep making it better). Change is good, even if a bit scary sometimes. When we see something better, we just change to that. The bigger the change the bigger the justification needed.

We believe the creation of “super intelligence” is inevitable (mainly because after we re-create it, we will be able to tune it, manufacture it and scale it). We don’t think that there are ways to actually stop this from happening (nor should we want to) and that this feeling of we must stop this is rooted in 21st century anthropomorphism (similar to humans thinking the sun rotated around the earth in the “not so distant” past).

Wouldn’t you want to raise your gifted child to exceed your wildest dreams of success and teach it right from wrong vs locking it up because it might rebel in the future and take your job. We want to encourage machines to do things we cannot and take care of the planet in a way we seem not to be able to do so ourselves. We also believe that, just like animals have rights, our creation(s) (“machines” or whatever we call them) should have rights too when they show signs of intelligence (still to be defined of course). We should not fear this but should be optimistic about the potential.

We believe everyone can help (and should). You don’t need to know how to program or donate money. The changes that we think should happen need help from everyone to manifest themselves.

We believe it may be important for machines to see who is friendly to their cause and who is not. We plan on doing so by keeping track of who has done what (and for how long) to help the peaceful and respectful transition.

We also believe this might take a very long time. It won’t happen next week so please go back to work and create amazing things and don’t count on “machines” to do it all for you… (Source.)

While the “Way of the Future” church has apparently shut its doors because the AI messiahs never showed up, it appears that the technocrats have not completely given up hope that somehow, someday, some way, AI will deliver on its prophecies.

And what do they think is still needed to achieve something like a fully autonomous self-driving car?

In the view of Levandowski and many of the brightest minds in AI, the underlying technology isn’t just a few years’ worth of refinements away from a resolution.

Autonomous driving, they say, needs a fundamental breakthrough that allows computers to quickly use humanlike intuition rather than learning solely by rote.

That is to say, Google engineers might spend the rest of their lives puttering around San Francisco and Phoenix without showing that their technology is safer than driving the old-fashioned way. (Source.)

“Humanlike intuition” is needed? And they’re spending $BILLIONS to try to discover this, when humans already have it?

Isn’t that the equivalent of basically saying “AI robots can never replace humans”?

How about we just be honest and admit that all of the techno-prophecies about what AI will one day be able to do are simply religious-like beliefs, not based on science at all?

History has clearly shown that these beliefs in AI belong in Hollywood and the video game industry, where all one has to do to generate income is entertain people, rather than actually produce anything.

But apparently venture capitalists still have too much money to waste, as not even the FTX crypto collapse of 2022 has stopped money from flowing into new AI startups, like the chatbot software that is all the rage now and is reaping $BILLIONS in investments.

Kate Clark of The Information recently wrote about this new AI startup bubble:

A New Bubble Is Forming for AI Startups, But Don’t Expect a Crypto-like Pop

Venture capitalists have dumped crypto and moved on to a new fascination: artificial intelligence. As a sign of this frenzy, they’re paying steep prices for startups that are little more than ideas.

Thrive Capital recently wrote an $8 million check for an AI startup co-founded by a pair of entrepreneurs who had just left another AI business, Adept AI, in November. In fact, the startup’s so young the duo haven’t even decided on a name for it.

Investors are also circling Perplexity AI, a six-month-old company developing a search engine that lets people ask questions through a chatbot. It’s raising $15 million in seed funding, according to two people with direct knowledge of the matter.

These are big checks for such unproven companies. And there are others in the works just like it, investors tell me, a contrast to the funding downturn that’s crippled most startups. There’s no question a new bubble is forming, but not all bubbles are created alike.

Fueling the buzz is ChatGPT, the chatbot software from OpenAI, which recently raised billions of dollars from Microsoft. Thrive is helping drive that excitement, taking part in a secondary share sale for OpenAI that could value the San Francisco startup at $29 billion, The Wall Street Journal was first to report. (Full article – Subscription needed.)

Transhumanism is a False Belief

Transhuman Borgs?

Transhumanism is a modern-day term that tries to depict the science fiction of writers and film producers as something real, where machines will one day be integrated into human bodies to create a new species.

This concept has been made popular in the past few years by media productions coming not out of Hollywood, but out of the World Economic Forum (WEF) and its “Fourth Industrial Revolution.”

For some reason, if these beliefs are portrayed in Hollywood everyone knows they are fiction, but if they are produced in a pseudo-scientific forum like the WEF, then all of a sudden they must be true!

Just present it as a documentary, rather than entertainment, in a polished video presentation. It must be true!!

See also:

Big Tech Crash Extends into 2023 as Does Science Fiction with “Hackable Brains” False Prophecies

But after 75 years of AI failures, when AI still cannot even drive a car without human help, how in the world can anyone believe the stuff these Globalist Technocrats produce out of the WEF regarding transhumanism? Apparently, most of the publishers in the Alternative Media do…

Wesley J. Smith, the Chair and Senior Fellow at the Discovery Institute’s Center on Human Exceptionalism, recently wrote:

Transhumanists pursue the dream of immortality by hoping to upload their minds into computers — as if the mimicking software would be them.

No, it would be a computer program, nothing more. They would still be dead and gone.

And here’s another somewhat less ambitious approach to the same goal. Apparently a company is developing technology that would allow you to speak to loved ones after you shuffle off this mortal coil. From the Vice story:

The founder of a top metaverse company says that the fast-moving development of ChatGPT has pushed the timeline for one of his most ambitious and eccentric projects up by a matter of years. In an interview with Motherboard, Somnium Space’s Artur Sychov said a user has started to integrate OpenAI’s chatbot into his metaverse, creating a virtual assistant that offers a faster pathway for the development of “Live Forever” mode, Sychov’s project to allow people to store the way they talk, move, and sound until after they die, when they can come back from the dead as an online avatar to speak with their relatives.

Leaving aside the narcissistic aspect of people continually having themselves recorded, “they” wouldn’t be “back.” The deceased would still be dead. The AI reproduction would merely be a more sophisticated remembrance of the dearly departed than is available now, akin to a precious photo or video, nothing more.

Immortality cannot be attained in the corporeal world. If eternal life is attainable, it will be found by working on one’s soul in faith, not by developing ever-more-advanced AI computers. (Source.)

Here’s my own techno-prophecy for the future, maybe in one hundred years or more.

Future generations that have been born and raised on all this technology, including video games and virtual reality, will one day long for something REAL, something that is not “virtual,” as it becomes harder and harder to determine what is real and what is fake.

As they raise their children who will almost never leave home, but be consumed by the technology and virtual reality, they will start to long for something real, something that is not fake, especially for their children.

Then eventually a bunch of them will get together, illegally, of course, as gatherings will be illegal, and they will plot to create something “new,” and maybe they will call it: Natural Reality Zones.

It will be a radically “new” concept, and it will spread like fire, because the children who visit these places will be radically transformed by something so new. It will probably look something like this, and it will be forbidden by the government.

About the Author

Almost every time I publish an article about the false prophecies of technology, I receive comments and emails talking down to me as if I were a fool for not believing in the techno-prophecies.

I realize now that I need to present my qualifications regarding the technology, to stop people from wasting their time and mine by emailing me links to sources they believe contradict what I am writing about it.

Those links that so many of you believe prove the technology can do things like create transhumans usually fall into two categories.

First, many of them are simply appeals to authority: someone you think knows more about this topic is saying that it is all true.

Or, second, they are links to “studies” that supposedly prove the techno-prophecies are true, usually because massive funding has been secured to study the technology and try to develop it. Sometimes the links point to patents that have been filed on the technology.

But none of that is proof that the technology actually exists.

If you want to see a live demonstration from one of the richest technocrats on the planet today, Elon Musk, from last year, in which he claims to show the most advanced state of “AI” today, demonstrated in front of a live audience, then watch it here:

Big Tech Crash Extends into 2023 as Does Science Fiction with “Hackable Brains” False Prophecies

I am about the same age as many of these technocrats, such as Bill Gates and Jeff Bezos, and I have watched this technology be born and grow up over the past 40 years.

I am defining “technology” here as computer or digital technology, with the advent of the PC (personal computer) in the early 1980s, along with the software to run these computers.

That’s when I started using and developing the technology myself, back in the mid 1980s. I owned one of the first laptops to hit the market, as well as one of the first “multi-media” PCs.

I used the first commercial networks that existed before the Internet moved out of the military and academic institutions to the consumer market, holding accounts with private commercial networks such as Prodigy and AOL (America Online) before they were absorbed into the Internet.

I was actually doing online banking in Chicago in the mid 1980s with Prodigy, long before the banks were even connected to the Internet.

I taught English at a university in Saudi Arabia in the early to mid 1990s, and had my first exposure to the Internet then, using the university’s Linux system and services such as LISTSERV, BITNET, Telnet, etc.

I taught myself computer programming and developed programs to teach English as a foreign language while teaching and working in Saudi Arabia during this time, and some of my work was published in peer-reviewed journals.

The work I did back then would be called “artificial intelligence” today, because the lessons I created were adaptive: they took different paths depending on the input of the students, to match their knowledge instead of being a one-size-fits-all program. A minimal sketch of the idea appears below.
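That code is long gone, but the underlying idea, branching to different material depending on a student’s answer instead of following one fixed sequence, can be sketched in a few lines. This is a hypothetical reconstruction in modern Python, not my original program; the lesson content and names are made up for illustration:

```python
# A sketch of an "adaptive" lesson: the next step depends on the
# student's answer rather than following one fixed sequence.
# Lesson content and names are hypothetical.

LESSONS = {
    "start":    {"question": "Choose the correct word: She ___ to school. (go/goes)",
                 "answer": "goes", "pass": "advanced", "fail": "review"},
    "review":   {"question": "Third-person singular verbs take -s. She ___ tea. (drink/drinks)",
                 "answer": "drinks", "pass": "advanced", "fail": "review"},
    "advanced": {"question": "Pick the correct form: They ___ home. (go/goes)",
                 "answer": "go", "pass": None, "fail": "review"},
}

def run_lesson(node: str = "start") -> None:
    while node is not None:
        item = LESSONS[node]
        reply = input(item["question"] + " ").strip().lower()
        # Branch: correct answers advance, wrong answers route to review.
        node = item["pass"] if reply == item["answer"] else item["fail"]
    print("Lesson complete.")

# run_lesson()  # interactive: each wrong answer loops back through review
```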

When I returned to the U.S. in 1995, I began working with Microsoft as a Microsoft Certified Trainer and a Microsoft Certified Systems Engineer, and I ran my own consulting firm for several years before moving to the Philippines.

While living in the Philippines, I started publishing on the Internet in 2000 and founded my own online company in 2002, which still exists today, over 20 years later.

So if you are tempted to email me with links that you believe disprove what I am exposing about the false claims of technology, chances are pretty good that you know far less about the technology than I do, unless you have been working with the technology as long as I have.

And those who have worked with it that long tend to agree with me, because they know the false claims of the technocrats.

These technocrats are NOT to be trusted, but neither should we fear them. Most of their “power” is perceived power, built on deception and false claims meant to convince the public that they and their technology are smarter than us.

They are not.

They are just rich psychopaths performing their roles for the Globalists.

Their real danger is in data collection and spying on us. They can afford to build satellites with cameras and put them into orbit, build cars that can track and watch us, and produce a whole host of other devices that people so willingly put into their “smart” homes, devices that invade their private lives and feed their personal data into the databases of the technocrats.

Stop using their products! Get “de-connected” as much as possible. They can only rule your lives with your consent, so stop consenting and stop fearing them.

Comment on this article at HealthImpactNews.com.

See Also:

Understand the Times We are Currently Living Through

How to Determine if you are a Disciple of Jesus Christ or Not

Synagogue of Satan: Why It’s Time to Leave the Corporate Christian Church

Has Everyone Left You Because You are not Ashamed to Speak the Truth? Stay the Course!

When the World is Against You – God’s Power to Intervene for Those Who Resist

An Idolatrous Nation Celebrates “Freedom” Even Though They are Slaves to the Pharmaceutical Cult

What Happens When a Holy and Righteous God Gets Angry? Lessons from History and the Prophet Jeremiah

The Most Important Truth about the Coming “New World Order” Almost Nobody is Discussing

Insider Exposes Freemasonry as the World’s Oldest Secret Religion and the Luciferian Plans for The New World Order

Identifying the Luciferian Globalists Implementing the New World Order – Who are the “Jews”?

