
Overview

The history of the Turing Test and expert systems has revealed that people have overestimated the progress of A.I. since its early days. The hype has far surpassed the actual accomplishments of A.I. In the 1970s, the A.I. industry entered a phase known as the “AI Winter.” The term was borrowed from the theory of “nuclear winter,” which predicted that the mass use of nuclear weapons would cause global temperatures to plummet, leading to the extinction of humanity. Commercial and scientific activity in A.I. declined significantly during this period, and it could be argued that A.I. is still recovering from the nearly two-decade-long winter. This section will discuss the triggers and processes that led to the A.I. winter and try to understand the causes and implications of the hype in A.I.’s history.

Trigger of AI Winter

The start of the A.I. winter can be traced back to the government’s decision to reduce funding for A.I. research, which was driven mainly by two notorious reports: the report of the Automatic Language Processing Advisory Committee (ALPAC) for the U.S. government in 1966, and the Lighthill report for the British government in 1973.

The ALPAC report

In the United States, one of the main motivations for funding research on artificial intelligence (A.I.) was the promise of machine translation (M.T.). During the Cold War, the U.S. government was particularly interested in the automatic, instantaneous translation of Russian. In 1954, the Georgetown-IBM experiment demonstrated the potential of M.T. Although the system was rudimentary, with only six grammar rules, a 250-word vocabulary, and a specialization in organic chemistry, it attracted significant public attention. The following day, the New York Times ran an article titled “Russian is turned into English by a fast electronic translator,” and similar reports appeared in newspapers and magazines across the country in the following months. It remains the most extensive and influential publicity M.T. has ever received. Almost all reports cited the prediction made by Leon Dostert, who led the experiment:

Five, perhaps three, years hence, interlingual meaning conversion by electronic process in important functional areas of several languages may well be an accomplished fact.
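To give a concrete sense of how small a dictionary-plus-rules system of that era was, here is a minimal sketch in the same spirit. The lexicon and the pass-through behaviour below are invented purely for illustration; they are not the actual six rules or 250-word vocabulary of the Georgetown-IBM system.

```python
# A toy dictionary-based translator in the spirit of the 1954
# Georgetown-IBM demo. The lexicon is invented for illustration;
# it is NOT the system's actual vocabulary or grammar rules.

LEXICON = {
    "kachestvo": "the quality",
    "uglia": "of coal",
    "opredeliaetsia": "is determined",
    "kaloriinostju": "by calorific value",
}

def translate(sentence: str) -> str:
    """Word-by-word lookup; unknown words pass through unchanged."""
    return " ".join(LEXICON.get(w, w) for w in sentence.lower().split())

print(translate("Kachestvo uglia opredeliaetsia kaloriinostju"))
# -> the quality of coal is determined by calorific value
```

A system of this shape looks impressive on sentences its authors chose in advance, which is precisely why the demo generated such optimism.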

In June 1956, spurred on by the positive attention and by the Soviet government’s involvement in the field, American agencies began supporting M.T. research. However, progress over the following decade was sluggish, prompting the establishment of ALPAC to investigate the underlying reasons.

The ALPAC report was a significant event in the field of M.T. research, and its impact was widely felt. It resulted in the cessation of substantial funding for M.T. in the U.S. The report also profoundly affected the general public and the scientific community, conveying a clear message that M.T. was hopeless. The repercussions were evident for two decades, during which people refrained from openly discussing their interest in M.T.

The ALPAC report focused on the economic benefits of M.T. research rather than its broader scientific value. According to the committee, M.T. research should show a clear potential for reducing costs, improving performance, or meeting operational needs [Hutchins96]. Notably, the report only considered the U.S. government and military’s requirements for automatically translating Russian documents, ignoring potential uses of M.T. for other languages.

The report began by examining the availability of Russian translators in academic and government agencies. It found no shortage of translators: on average, only 300 of 4,000 contracted translators were utilized each month. The report therefore concluded that M.T. would have to deliver significant improvements in quality, speed, or cost to justify further research, and M.T. was not performing satisfactorily in any of these respects at the time. Poor output quality was a particular concern, as it required extensive post-editing to be readable by humans, and the post-editing could take longer than having a human translator work from scratch.

The report’s conclusion emphasized the government’s poor return on investment. It accurately reflected the disappointment felt by the public and correctly noted that further basic research was needed before M.T. could become practical. Indeed, M.T. remains an area of active research to this day.

The Lighthill report

The “Lighthill Report” is a well-known publication titled “Artificial Intelligence: A General Survey,” authored by Professor Sir James Lighthill of Cambridge University in 1973. The British Science Research Council, the primary funding body for scientific research in British universities, commissioned Lighthill’s review of A.I. to aid in evaluating requests for support of A.I. research. Lighthill’s survey painted a bleak picture of A.I., stating that “in no part of the field have discoveries made so far produced the significant impact that was then promised” [Lighthill73]. Although Lighthill, a hydrodynamicist, had no previous experience of A.I., his paper had a profound impact: the British government eventually discontinued funding for A.I. research at all but four British universities.

According to Lighthill, A.I. research falls into three categories: Category A, advanced automation or applications; Category C, computer-based studies of the central nervous system; and Category B, a “bridge” between A and C whose justification depends entirely on its contributions to the other two. Lighthill concluded that A.I. research had contributed significantly to neither Category A nor Category C, and was therefore not worth continuing.

The central section of the document was titled “Past Disappointments.” Lighthill gave the example of an automatic aeroplane-landing system, where conventional engineering techniques based on radio waves proved more effective than A.I. methods. He believed that although A.I. techniques might one day help land an aircraft in an uncontrolled environment, they were not yet a practical solution. He also pointed out that the sophistication of chess programs was limited, with the best reaching only the level of an “experienced amateur.” Lighthill used these examples to argue that successful A.I. applications required substantial knowledge of the subject matter, and that A.I. methods were not truly intelligent because they could not acquire that knowledge automatically. He further raised the issue of “Combinatorial Explosion”: existing A.I. techniques worked well only in small laboratory domains and were inadequate for large-scale, realistic problems.
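The combinatorial-explosion argument is easy to make quantitative. The sketch below uses rough, commonly cited figures (a chess branching factor of about 35 and an optimistic search rate, both assumptions for illustration only) to show how quickly exhaustive search becomes infeasible.

```python
# Back-of-the-envelope illustration of combinatorial explosion in
# game-tree search. The branching factor and search rate are rough,
# illustrative assumptions, not measurements of any 1970s system.

branching_factor = 35          # typical number of legal moves in chess
positions_per_second = 10**6   # an optimistic rate for era hardware

for depth in (2, 4, 6, 8, 10):
    nodes = branching_factor ** depth
    hours = nodes / positions_per_second / 3600
    print(f"depth {depth:2d}: {nodes:.2e} positions, ~{hours:.1e} hours")
```

Each additional ply multiplies the work by the branching factor, which is why techniques that succeeded in toy domains stalled on realistic problems.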

Both reports documented significant disappointments in A.I. technology and led to deep cuts in government funding for A.I. research. It is crucial to note that government funding plays a vital role in the development of all computing technologies, particularly in their initial stages. These reports heralded the A.I. winter.

The Duration of AI Winter

During the AI winter, A.I. research programs had to disguise themselves under different names to continue receiving funding. Many deliberately ambiguous labels that carried only a hint of A.I. emerged during this time, such as “Machine Learning,” “Informatics,” “Knowledge-based systems,” and “Pattern recognition.” The re-branding allowed A.I. research to continue through the winter, but it also meant that fewer advances were perceived as belonging to A.I., further aggravating the overall decline in support.

The commercial A.I. industry was hit even harder by the winter. A.I. programs intrinsically need a large amount of computing capacity, and by the early 70s they had begun to exceed the limits of standard research computers.

The situation was further exacerbated by the nature of the LISP symbolic programming language, which was ill-suited to standard commercial computers optimized for assembly and FORTRAN. So, beginning in the 70s, many organizations started offering machines specially tailored to the semantics of LISP that could run larger A.I. programs. With the onset of the A.I. winter, however, interest shifted away from the LISP language and LISP machines. Coupled with the beginning of the “P.C. revolution,” many LISP companies such as Symbolics, LISP Machines Inc., and Lucid Inc. failed, as the shrinking A.I. demand could no longer pay the premium for the specialized machines.

The popularity of the LISP programming language originated mainly in academia, where fast prototyping and its script-like semantics were highly favourable. It achieved only limited success in commercial software development, owing to the inherent inefficiencies then associated with functional programming languages. As a result, several concepts pioneered by LISP, such as garbage collection, dynamic typing, and object-oriented programming, fell into oblivion along with it. Although these ideas were not exclusive to A.I., they returned to mainstream programming in the late 90s.
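A minimal sketch of those three LISP-pioneered concepts, shown here in Python, one of the languages that helped revive them in the 1990s. The class and function below are invented purely for illustration:

```python
# Dynamic typing, garbage collection, and object-oriented programming:
# ideas pioneered by LISP, illustrated here in Python.

class Symbol:
    """Object-oriented programming: data bundled with behaviour."""
    def __init__(self, name: str):
        self.name = name

    def __repr__(self) -> str:
        return f"Symbol({self.name!r})"

def evaluate(expr):
    # Dynamic typing: expr may be a number, a Symbol, or a list of
    # sub-expressions; its type is inspected at run time.
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, Symbol):
        return expr.name
    if isinstance(expr, list):
        return [evaluate(e) for e in expr]
    raise TypeError(f"unknown expression: {expr!r}")

# Garbage collection: the nested list is reclaimed automatically once
# no references to it remain; no manual memory management is needed.
print(evaluate([1, Symbol("x"), [2.5, Symbol("y")]]))
# -> [1, 'x', [2.5, 'y']]
```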

The development of the AI Winter showcased a positive-feedback cycle, in which symptoms tended to exacerbate their own causes. The effect of the initial triggers in the late 60s continued to be amplified throughout the following decades, until activity in the A.I. ecosystem died down in the 80s. A special self-reinforcing element in this vicious cycle was the so-called “A.I. effect”: the tendency for people to discount advances in A.I. after the fact. Once some intelligent behaviour is achieved in a computer, its inner workings are laid bare as plain lines of computer code. The mystery is gone, and people quickly dismiss the accomplishment as mere calculation. At the AAAI conference, Michael Kearns suggested that, at a subconscious level, people have an inherent desire to preserve a unique position for themselves in the vast expanse of the universe. As a result, A.I. was often mocked for being “almost functional,” and disappointment grew as A.I. research consistently fell short of its goals.

Discussion

The downfall of A.I. was triggered by the disappointing reports of ALPAC and Lighthill. Before the ALPAC report, the public had been given a false impression that good-quality machine translation was much closer to reality than it actually was, which led to more liberal funding and sponsorship of M.T. research than was appropriate. The early success of the Georgetown-IBM experiment was itself artificial: the grammar and vocabulary rules were created specifically for the text samples used, presenting the system in the best possible light. There was speculation that Leon Dostert had demonstrated the system prematurely in order to secure funding for additional research at Georgetown. M.T. researchers saw it only as a first effort or prototype, but the press and funding agencies took no notice of that fact. The success of miniature systems can be deceiving, especially when amplified by enthusiastic media coverage. Lighthill raised the same concern about the scalability of A.I. methods in his “Combinatorial Explosion” argument: success in narrow domains alone does not justify funding large-scale applications.

The disappointments in A.I. stemmed from a lack of understanding of the complexity of real-life problems. A.I. applications were often studied superficially, and there was a tendency to make overly optimistic predictions for publicity purposes. John McCarthy, a pioneer of A.I. research, criticized much of the work in A.I. for being focused not on studying intellectual mechanisms but on generating public amazement. A.I. scientists frequently claimed to have discovered a general scheme of intelligent behaviour applicable to all problem-solving, but these claims were invariably premature. Many formalisms led to predictions that computers would become intelligent by specific dates, and none proved accurate. Researchers could have avoided the disappointment had they truly understood the inner workings and shortcomings of their A.I. algorithms and exercised caution when making hopeful claims about any supposed panacea.


Further, a deeper cause of the disappointments lies in the general lack of understanding of A.I. itself. The premise of the Lighthill report, i.e., its classification of A.I. technology, represented a common view that A.I. was an applied science derived from biology. Such an understanding inevitably led to inflated expectations of the productivity of A.I. technology. John McCarthy, however, offered a more precise characterization of A.I.:

A.I. is not simply the application of existing methods to specific tasks, nor the replication of biological structures on a computer; it is the study of intellectual mechanisms in their own right. Neglecting those deeper intellectual challenges inevitably leads to disappointment.

Finally, many emerging computing technologies besides A.I. have had similar wintry periods in their history; the Internet industry, for example, rode a comparable roller-coaster that ended in the dot-com crash of 2000. The commonality lies in the hype that so often accompanies the rise and fall of new technologies. The Hype Cycle, developed by the Gartner Group, provides a clear timeline for the adoption of new technologies.

Figure 1. The Hype Cycle and the AI Winter [Menzies03].

According to Gartner, the model comprises five phases:

  1. Technology Trigger: an event that generates significant press and interest.
  2. Peak of Inflated Expectations: over-enthusiasm produces more failures than successes.
  3. Trough of Disillusionment: the technology is no longer fashionable.
  4. Slope of Enlightenment: those who persist come to understand the technology’s benefits and practical applications.
  5. Plateau of Productivity: the benefits become widely accepted.

The history of A.I. leading up to the AI Winter aligns closely with this model. Attendance at the National Conference on Artificial Intelligence, for instance, closely traced the Hype Cycle pattern.

Figure 2. Attendance at the National Conference on Artificial Intelligence [Menzies03].

The prevalence of the Hype Cycle suggests that the A.I. winter arose both from the nature of the technology and from a common tendency in human cognition: curiosity, excitement, and disappointment are all inherent parts of exploring the unknown. Without the initial great excitement, and thus the subsequent heavy support, A.I. technology might not have taken off in the first place. Now that the hype has cooled down, people have the opportunity to recognize the fundamental obstacles in the field and to develop fresh perspectives for future research. A certain amount of hype may even be catalytic to a technology.

Conclusion

In general, the examination of the AI Winter brought to light several key takeaways:

Small-scale success in A.I. can be deceptive. The intricate nature of A.I. means that many challenges can only be confronted and resolved when dealing with large-scale problems in real-world scenarios.

The study of A.I. presents complex intellectual challenges of its own, not restricted to particular applications or to specific biological structures. It requires combined basic research in cognition, statistics, algorithms, linguistics, neuroscience, and beyond.

Hype is a double-edged sword. It initially boosted the rise of A.I. but later did great harm. Researchers, funders, and the public all share the responsibility of restraining it so that an A.I. winter does not recur.
