
Introduction to Artificial Intelligence

1.1 Introduction

Artificial intelligence (AI) represents humanity’s attempt to create machines that can perceive, reason, learn, and act intelligently. While computers excel at computationally intensive tasks, matching human intelligence and intuition has been an aspirational goal since the inception of general-purpose computers.

The field of AI has evolved through several distinct phases:

  1. Early Excitement (1950s-1960s): Initial optimism about machines that could reason

  2. First AI Winter (1970s): Reality check due to limited computational power

  3. Expert Systems Era (1980s): Rise of knowledge-based systems

  4. Second AI Winter (late 1980s-1990s): Collapse of the LISP machine market and funding cuts

  5. Machine Learning Renaissance (2000s-2010s): Revival through statistical methods

  6. Deep Learning Revolution (2010s-present): Breakthrough with neural networks and massive data

1.2 What is Artificial Intelligence?

Artificial intelligence is the branch of computer science concerned with automating intelligent behaviors: tasks such as speech recognition, visual perception, and language translation.

1.3 Roots of Artificial Intelligence

1.4 Timeline of AI History

The Turing Test

In 1950, Alan Turing proposed a test for machine intelligence that remains influential today.

Test Procedure:

  1. All communication is text-based

  2. Judge asks questions to two hidden respondents

  3. One is human, one is computer

  4. Computer “passes” if judge cannot reliably distinguish it
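The procedure above can be sketched as a minimal simulation. The respondent functions and the question here are hypothetical stand-ins for illustration; a real test would involve a human judge over many rounds.

```python
import random

def ask_round(question, human_reply, machine_reply):
    """One round of the imitation game.

    The two answers are shuffled so the judge sees only anonymous
    slots 'A' and 'B' and must guess which one hides the machine.
    """
    answers = [("human", human_reply(question)),
               ("machine", machine_reply(question))]
    random.shuffle(answers)
    # Map hidden identities to the anonymous slots the judge can see.
    return {slot: answer for slot, answer in zip(("A", "B"), answers)}

# Hypothetical canned respondents, for illustration only.
round_result = ask_round(
    "What is your favourite colour?",
    human_reply=lambda q: "Probably blue, though it changes.",
    machine_reply=lambda q: "Blue is a common favourite.",
)
# The machine "passes" if, over many such rounds, the judge's guesses
# are no better than chance.
```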

Current Status:

  • No system has convincingly passed the full Turing Test

  • Modern chatbots (e.g., ChatGPT) show impressive abilities but have clear limitations

  • AGI remains an aspirational goal, not a present reality

Criticisms of the Turing Test:

  • Tests deception rather than intelligence

  • Focuses on linguistic ability only

  • Doesn’t require understanding or consciousness

  • May be too anthropocentric

AI Subfields


Applications of AI in Industry

  • Anomaly detection in processes and equipment

  • Process optimization to improve yield

  • Smarter decision-making and risk minimization

  • Prediction of future scenarios with neural networks


1.5 Is artificial intelligence dangerous?

  • AI can be dangerous if misused or poorly designed

  • Risks include:

    • Job displacement

    • Privacy concerns

    • Bias and discrimination

    • Autonomous weapons

  • Importance of ethical AI development and regulation


1.6 How to achieve AI?