AI Isn't Cutting-Edge Tech After All: Exploring Its Historical Foundations and Maturity

Robert

Key Takeaways

  • AI has existed as an idea since ancient times; the Ancient Greeks wrote about artificial, human-like beings.
  • The first system that used AI to function was created in 1955, and AI has evolved greatly since then, with advancements in natural language processing and other technologies.
  • Today, AI is highly significant, and its presence is expected to continue growing across industries, with AI-based chatbots and virtual assistants being popular applications.

The use of artificial intelligence has been increasing across most industries over the past decade or so, and this technology has a lot of potential. However, is AI a fairly new development, or did its roots begin in a much earlier time? Here’s how old AI really is.

When Was AI Conceptualized?

The idea of machines gaining consciousness, or at least mimicking human behavior, came about a very long time ago. In its most basic form, the concept of AI first appeared among the Ancient Greeks, when the poet Hesiod wrote about humans creating an artificial, human-like being in the story of Talos.

In the following centuries, more stories, predictions, and myths of artificial, human-like creations were written, such as Paracelsus’s discussion of the creation of an “artificial man” in his work ‘Of the Nature of Things.’ It wasn’t until the mid-20th century that the idea of artificial intelligence became a reality.

The Creation of AI

The first system that used artificial intelligence to function was created in 1955 by Herbert Simon, Clifford Shaw, and Allen Newell, and was named Logic Theorist. Simon, a political scientist and sociologist, along with Newell, a computer scientist, developed Logic Theorist in order to artificially mimic certain human thought processes. The program itself was written by Shaw, a computer programmer who worked for RAND at the time. When Logic Theorist was being developed in 1955, the term “artificial intelligence” hadn’t even been coined yet; John McCarthy would coin it a year later.

Logic Theorist was specifically designed to solve mathematical problems using skills attributed to humans, thereby simulating a basic version of the human mind. As stated in a 2006 academic article on the topic of Logic Theorist, the program was “perhaps the first working program that simulated some aspects of peoples’ ability to solve complex problems.”


How AI Has Evolved

The first AI system created and the AI systems we see today differ vastly from one another. As our understanding of technology has grown, we’ve been able to continuously improve AI’s capabilities over time, which has given way to the impressive AI-based tools we see today. But this wasn’t an easy journey.

As explained by HODS, a period known as the AI Winter set in during the 1970s and 1980s (the term itself was coined in the 1980s). In this period, the development and improvement of AI systems plateaued due to the limited computing power available at the time. A drop in funding for AI research also contributed to the stagnation, as scientists didn’t have the financial backing needed to make any significant advancements.

But as the late 80s came around, this freeze began to thaw, with AI developments beginning to ramp up again. Backpropagation, a machine learning training algorithm, stood as the catalyst for this resurgence.

Backpropagation was first introduced by Paul Werbos in the 1970s. In his later paper, ‘Generalization of Backpropagation with Application to a Recurrent Gas Market Model,’ Werbos broke down how backpropagation works and discussed some possible applications for the algorithm. In short, backpropagation allows a neural network to adjust its connections so that they produce better outputs: based on the errors it has made, the network updates its weights and biases, a process known as gradient descent.
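
To make that concrete, here is a minimal sketch of backpropagation and gradient descent in Python, assuming NumPy is installed. It trains a tiny one-hidden-layer network on the XOR problem; the network size, learning rate, and variable names are illustrative choices, not anything taken from Werbos’s paper.

```python
import numpy as np

# Tiny training set: the XOR function (inputs -> expected outputs).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden weights and biases
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output weights and biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for step in range(10_000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through each layer
    # to get the gradient of the error with respect to every weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent: nudge weights and biases against their gradients.
    W2 -= learning_rate * h.T @ d_out
    b2 -= learning_rate * d_out.sum(axis=0)
    W1 -= learning_rate * X.T @ d_h
    b1 -= learning_rate * d_h.sum(axis=0)

print(np.round(out, 2))  # Predictions approach [0, 1, 1, 0] after training.
```

The “backward” step is where the algorithm gets its name: the prediction error is sent back through the layers so that every connection learns how much it contributed to the mistake.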

In the 1980s, the implementation of backpropagation allowed researchers to efficiently train artificial neural networks (ANNs), which was a huge step forward in the AI field. Over the next few years, a few key advancements were made with AI systems, including improvements in natural language processing (NLP), a technology that is crucial to many of the AI tools we see today.

At the turn of the century, AI was about to get even more advanced. During the 2000s, many important technologies were created or vastly improved upon with the help of AI, including search algorithms, social media tailoring, and algorithmic trading in the financial industry. Among these many notable moments for AI in the 2000s was IBM Watson.

IBM Watson began as a research project in 2006 and led to a number of impressive outcomes. One such outcome, the Watson supercomputer, used AI to answer questions that most preexisting computers could not.

Watson was a QA (question answering) supercomputer that implemented natural language processing, machine learning, and a number of additional technologies to perform its functions, gaining mainstream recognition in 2011 after winning first place in a game of ‘Jeopardy!’

Things didn’t slow down for AI in the 2010s, with advancements in the mid-to-late period of the decade entirely changing how we can receive and analyze data. So, what happened here?


AI Today

Today, the global AI market is worth over $2 trillion, according to a Statista study. By 2026, the market is expected to exceed $5 trillion, and this number will likely only increase over time.

Perhaps the most notable example of AI in our world today is ChatGPT, a widely popular chatbot powered by an AI architecture known as a large language model (LLM). Throughout the late 2010s, LLMs stood at the forefront of AI advancement. An LLM is a kind of neural network that is trained using vast amounts of data and taught to process human language, identify relevancy and language connections, and provide an effective response.
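
As a rough illustration of that pipeline, the sketch below uses the open-source Hugging Face transformers library, with the small GPT-2 model standing in for a full-scale LLM; the prompt and generation settings are arbitrary examples, and a service like ChatGPT runs far larger models behind an API.

```python
# Minimal text-generation sketch with the Hugging Face `transformers` library.
# GPT-2 stands in for a much larger LLM; install with: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence was first conceptualized"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The model predicts one token at a time, each conditioned on everything before it.
print(result[0]["generated_text"])
```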

The first bona fide LLMs grew out of the Transformer architecture that Google researchers introduced in 2017. An early example was BERT, which Google released in 2018 and used to improve search results. BERT, or Bidirectional Encoder Representations from Transformers, enabled Google to better interpret users’ search inputs, producing more relevant and useful results.

BERT is still used by Google today, but it is far from the only LLM out there. As previously stated, ChatGPT currently stands as one of the most well-known LLM-based tools, as it has a huge range of capabilities and can be used by almost anyone. Not only can ChatGPT answer factual questions, but it can also analyze and translate text, generate creative content, write code, and more.

ChatGPT does this using a Generative Pre-trained Transformer (GPT), developed by its creator company, OpenAI. Transformers are vital to LLMs, as they allow the neural network to determine the importance of each word in a given prompt in order to derive context. This, in turn, lets the LLM provide a useful response.
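
At the heart of a transformer is the attention calculation that does this weighting. Here is a minimal sketch of scaled dot-product attention in Python with NumPy; the token vectors are made-up stand-ins for real learned embeddings, so it only illustrates the arithmetic, not GPT’s actual internals.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Each row of Q asks 'which other tokens matter to me?';
    the softmax scores answer that, and V supplies the content."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity between every pair of tokens
    weights = softmax(scores, axis=-1)   # importance of each token to each other token
    return weights @ V, weights

# Three "tokens", each represented by a 4-dimensional vector (made-up numbers).
rng = np.random.default_rng(42)
tokens = rng.normal(size=(3, 4))

# In a real transformer, Q, K, and V are separate learned projections of the tokens.
output, weights = scaled_dot_product_attention(tokens, tokens, tokens)
print(np.round(weights, 2))  # each row sums to 1: how much a token attends to the others
```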

GPT-3.5 and GPT-4 are OpenAI’s currently available versions of the technology, with the former being entirely free and the latter behind a $20 monthly paywall. Along with OpenAI’s ChatGPT, you can also use a range of other LLM-based chatbots, including Claude and Google Bard.

But AI also has its uses in a lot of other technologies, such as virtual assistants. Siri, Alexa, Google Assistant, and Cortana all use AI to better understand users’ verbal commands. Moreover, the recommendations you’ll get on social media, online retailers, and similar platforms are also often powered by AI. You’ve likely come into contact with AI multiple times without even realizing it.

The Future of AI

The future of AI is a topic that has stirred a lot of concern, mainly because AI’s potential is essentially endless. As technology advances, AI systems can gain greater computing power, more refined neural networks, and increased capability overall. We’ll start with the more realistic future applications of AI, and then get into the more sci-fi aspects.

In recent years, AI has become the focus of car brands looking to introduce autonomous driving to customers. The most notable example of this is Tesla’s work with AI for its Autopilot feature.

In a more conceptual respect, AI may one day have the potential to meet or surpass human intelligence. You’ve likely heard of the “singularity”, which is a term referring to the point when the growth and evolution of AI becomes unstoppable. This would likely be the result of AI systems being able to evolve on their own and pass the point of human intelligence, meaning they no longer need to be trained or programmed by humans to develop. This is seen by many as the point at which humans lose control of AI technology.

Even today, AI outshines human performance in some settings. According to a study by Our World In Data, AI systems have been able to outperform humans in a range of tasks, such as image recognition and language understanding, since the mid-to-late 2010s. This goes to show that AI does, indeed, already have the potential to surpass human capabilities in certain areas.

In the next few years, we may see AI-based chatbots improve in their capabilities and accuracy, and it’s likely that AI’s presence will increase across almost all industries.

AI Has Undoubtedly Changed the World

Even though AI still has a long way to go before it can truly mimic the human brain, this technology has already changed the world. There’s no knowing how AI will develop in the future, and the question of AI surpassing human intelligence is still up in the air. But there’s no doubt that AI has already changed the online landscape, and likely the future of humanity as a whole.
