
Wednesday, September 08, 2021

Short note on Artificial Intelligence

Artificial Intelligence is a branch of computer science concerned with making machines behave intelligently, the way humans do. The term "A.I." was introduced by John McCarthy in 1956. McCarthy also designed LISP (List Processing), a high-level programming language. Given a set of rules written in a programming language, a computer executes those rules strictly. Scientists can therefore test their theories about human behavior by converting a theory's rules into a computer program and observing whether the computer's behavior in executing the program resembles the natural behavior of a human being, or at least the small subset of human behavior they are studying. A computer scientist can treat modeling human behavior as a challenge to their programming abilities: if a person can do something, can we write a computer program that does the same thing? The aim of artificial intelligence is to make a computer perform tasks that humans tend to be good at.
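To make the "rules as a program" idea concrete, here is a minimal, hedged sketch; the behavioral rules and the observations below are entirely hypothetical, invented only to show how a theory encoded as a program can be checked against observed behavior:

```python
# A minimal sketch (not from the original post) of testing a behavioral
# theory by encoding its rules as a program and comparing the program's
# predictions with observed human behavior. All rules and observations
# here are hypothetical examples.

def predict_response(stimulus: str) -> str:
    """A toy 'theory of behavior' expressed as explicit rules."""
    rules = {
        "greeting": "greet back",
        "question": "answer",
        "insult": "withdraw",
    }
    return rules.get(stimulus, "ignore")

# Compare the rule-based predictions against (hypothetical) observations.
observations = [("greeting", "greet back"), ("question", "answer")]
for stimulus, observed in observations:
    assert predict_response(stimulus) == observed  # theory matches behavior
```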
 
The seeds of modern A.I. were planted by classical philosophers who attempted to explain the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the digital programmable computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain. The field of A.I. research was founded at a workshop held on the campus of Dartmouth College during the summer of 1956. Those who attended the workshop would become the leaders of A.I. research for decades. Many of them predicted that a machine as intelligent as a human being would exist within a generation, and they were provided with millions of dollars to make this vision come true. Eventually, it became evident that they had grossly underestimated the difficulty of the project. In 1973, in response to the criticism of James Lighthill and ongoing pressure from Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence, and the difficult years that followed would later be known as an "A.I. winter." Seven years later, a visionary initiative by the Japanese government inspired governments and industry to provide A.I. with billions of dollars, but by the late 1980s the investors became disillusioned and withdrew funding again. Interest and funding in A.I. boomed in the first decades of the 21st century, when machine learning was successfully applied to many problems in academia and industry. As in previous "A.I. summers," some observers predicted the imminent arrival of artificial general intelligence: a machine with intellectual capabilities that exceed the abilities of human beings.
 
In 1956:- The first Dartmouth College summer A.I. conference was organized by John McCarthy, Marvin Minsky, Nathaniel Rochester of IBM (International Business Machines), and Claude Shannon, and the name "artificial intelligence" was used for the first time as the topic of the conference. The Logic Theorist (LT), written by Allen Newell, J.C. Shaw, and Herbert A. Simon at the Carnegie Institute of Technology (now Carnegie Mellon University), was demonstrated for the first time; it is often called the first A.I. program.
 
In 1957:- The General Problem Solver (GPS) was demonstrated by Newell, Shaw, and Simon.
 
In 1958-1960:- John McCarthy invented the Lisp programming language. Herbert Gelernter and Nathan Rochester described a theorem prover in geometry that exploits a semantic model of the domain in diagrams of typical cases. The Teddington Conference on the Mechanization of Thought Processes was held in the U.K.; among the papers presented was John McCarthy's "Programs with Common Sense."
 
In 1959:- John McCarthy and Marvin Minsky founded the MIT AI Lab. Margaret Masterman and colleagues at the University of Cambridge designed semantic nets for machine translation. Ray Solomonoff laid the foundations of a mathematical theory of A.I., introducing universal Bayesian methods for inductive inference and prediction. J.C.R. Licklider published "Man-Computer Symbiosis."
 
In 1961-2000:- James Slagle wrote the first symbolic integration program in Lisp, which solved calculus problems at the college level. In "Minds, Machines and Gödel," John Lucas argued against the possibility of machine intelligence, drawing on Kurt Gödel's result that sufficiently powerful formal systems are either inconsistent or allow for formulating true theorems unprovable by any theorem-proving A.I. that derives all provable theorems from the axioms; since humans can see the truth of such theorems, machines were deemed inferior. Unimation's industrial robot Unimate worked on a General Motors automobile assembly line. Thomas Evans demonstrated that computers could solve the same analogy problems as are given on I.Q. tests. Leonard Uhr and Charles Vossler published a pattern-recognition program that generates, evaluates, and adjusts its own operators, one of the first machine learning programs that could acquire and modify features. Danny Bobrow's MIT dissertation (Project MAC) showed that computers can understand natural language well enough to solve algebra word problems correctly. Bertram Raphael's MIT dissertation on the SIR program demonstrated the power of a logical representation of knowledge for question-answering systems. J. Alan Robinson invented a mechanical proof procedure, the resolution method, which allowed programs to work efficiently with formal logic as a representation language. Joseph Weizenbaum's ELIZA became a popular toy at A.I. centers on the ARPANET when a version that simulated the dialogue of a psychotherapist was programmed. Edward Feigenbaum initiated Dendral, a ten-year effort to develop software to deduce the molecular structure of organic compounds using scientific instrument data; it was the first expert system. In 1967 came the first successful knowledge-based program for scientific reasoning, and in 1968 the first successful knowledge-based program in mathematics. In 1969, Roger Schank at Stanford defined the conceptual dependency model for natural-language understanding, later used in the first semantics-driven machine translation program. In 1970, Jaime Carbonell developed SCHOLAR, an interactive program for computer-assisted instruction based on semantic nets as the representation of knowledge, and Bill Woods described the augmented transition network (ATN) as a representation for natural-language understanding. In 1973, the assembly robotics group at the University of Edinburgh built Freddy, a robot capable of using visual perception to locate and assemble models. In 1975, the Meta-Dendral learning program reported new results in chemistry (some rules of mass spectrometry), the first scientific discovery by a computer to be published in a peer-reviewed journal. In 1978, Herbert A. Simon won the Nobel Prize in economics for his theory of bounded rationality, one of the milestones of A.I. known as "satisficing." That year, the MOLGEN program, written by Mark Stefik and Peter Friedland, demonstrated that an object-oriented representation of knowledge could be used to plan gene-cloning experiments. In 1979, the Stanford Cart, developed by Hans Moravec, became the first computer-controlled, autonomous vehicle when it successfully traversed a room and circumnavigated the Stanford AI Lab. The late 1970s demonstrated the power of the ARPANET for scientific collaboration. In the 1980s, Lisp machines were developed and marketed, and the first expert-system shells and commercial applications appeared. In 1980, the first national conference of the American Association for Artificial Intelligence (AAAI) was held at Stanford. In 1981, Danny Hillis designed the Connection Machine, which utilizes parallel computing to bring new power to A.I. and to computation in general.
In 1982, Japan's Ministry of International Trade and Industry launched the Fifth Generation Computer Systems project. In 1986, Ernst Dickmanns at Bundeswehr University Munich built the first robot cars, driving up to 55 mph on empty streets. The Alacrity advisory system, with its underlying engine developed by Paul Tarvydas, also included a small financial expert system that interpreted financial statements and models. In the early 1990s, TD-Gammon, a backgammon program written by Gerald Tesauro, demonstrated that reinforcement learning is powerful enough to create a championship-level game-playing program by competing favorably with world-class players. The 1990s saw major advances in all areas of A.I., with significant demonstrations in machine learning, intelligent tutoring, case-based reasoning, multi-agent planning, scheduling, uncertain reasoning, data mining, natural-language understanding and translation, vision, virtual reality, games, and other topics. In 1991, the DART scheduling application deployed in the first Gulf War paid back DARPA's 30 years of investment in A.I. research. In 1993, Ian Horswill extended behavior-based robotics by creating Polly, the first robot to navigate using vision and operate at animal-like speeds (1 meter/second). In 1995, "No Hands Across America," a semi-autonomous car, drove coast-to-coast across the United States with computer-controlled steering for 2,797 miles (4,501 km) of the 2,849 miles (4,585 km); the throttle and brakes were controlled by a human driver. In the late 1990s, Web crawlers and other A.I.-based information-extraction programs became essential to the widespread use of the World Wide Web, MIT's A.I. lab demonstrated an intelligent room and emotional agents, and work began on the Oxygen architecture, which connects mobile and stationary computers in an adaptive network.
 
In 2001-2016:- In 2004, NASA's robotic exploration rovers Spirit and Opportunity autonomously navigated the surface of Mars, and DARPA introduced the DARPA Grand Challenge, requiring competitors to produce autonomous vehicles for prize money. In 2005, Honda's ASIMO robot, an artificially intelligent humanoid robot, could walk as fast as a human and deliver trays to customers in restaurant settings, and the Blue Brain project was born, aiming to simulate the brain in molecular detail. In 2009, Google built a self-driving car. In 2010, Microsoft launched Kinect for Xbox 360, the first gaming device to track human body movement using just a 3D camera and infrared detection, enabling users to play their Xbox 360 wirelessly; the award-winning machine learning behind the device's human-motion-capture technology was developed by the computer vision group at Microsoft Research. In 2011, Apple's Siri, Google's Google Now, and Microsoft's Cortana appeared as smartphone apps that use natural language to answer questions, make recommendations, and perform actions. In 2013, NEIL, the Never-Ending Image Learner, was released at Carnegie Mellon University to constantly compare and analyze relationships between different images. In 2015, an open letter calling for a ban on the development and use of autonomous weapons was signed by Hawking, Musk, Wozniak, and 3,000 researchers in A.I. and robotics.
 
Applications of A.I.
 
1. Natural Language Processing:- A computer system capable of understanding a message in natural language would seem to require both the contextual knowledge and the processes for making the inferences (from this contextual knowledge and from the message) assumed by the message generator. Some progress has been made toward computer systems of this sort for understanding spoken and written fragments of language. Fundamental to the development of such systems are specific A.I. ideas about structures for representing contextual knowledge and particular techniques for making inferences.
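As a loose illustration of that last point, the hedged sketch below (all facts and rules invented for this example) shows how a message yields a new conclusion only when combined with stored contextual knowledge:

```python
# A minimal sketch (hypothetical facts and rules) of the idea that
# understanding a message requires contextual knowledge plus inference.
# The message "Socrates is a man" only yields "Socrates is mortal"
# when combined with the stored rule "all men are mortal".

context = {("man", "mortal")}          # contextual knowledge: man -> mortal
message_facts = {("Socrates", "man")}  # facts extracted from the message

def infer(facts, rules):
    """Forward-chain one step: apply every rule to every known fact."""
    derived = set(facts)
    for entity, category in facts:
        for premise, conclusion in rules:
            if category == premise:
                derived.add((entity, conclusion))
    return derived

print(infer(message_facts, context))
# {('Socrates', 'man'), ('Socrates', 'mortal')}
```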
 
2. Expert consulting systems:- A.I. methods have also been employed to develop automatic consulting systems. These systems provide human users with expert conclusions about specialized subject areas. Automatic consulting systems have been built that can diagnose diseases, evaluate potential ore deposits, suggest structures for complex organic chemicals, and even provide advice about how to use other computer systems.
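A common way such systems work is forward chaining over if-then rules. The sketch below is a minimal, hedged example of that mechanism; the "medical" rules in it are invented for illustration, not real diagnostic knowledge:

```python
# A minimal forward-chaining expert-system sketch. The rules below are
# invented for illustration, not real diagnostic knowledge.

RULES = [
    ({"fever", "cough"}, "flu-like illness"),
    ({"flu-like illness", "short of breath"}, "see a doctor"),
]

def consult(symptoms: set[str]) -> set[str]:
    """Repeatedly fire any rule whose conditions are all satisfied."""
    facts = set(symptoms)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts - symptoms  # return only the derived conclusions

print(consult({"fever", "cough", "short of breath"}))
# {'flu-like illness', 'see a doctor'}
```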
 
3. Robotics:- Research on robots, or robotics, has helped to develop many A.I. ideas. It has led to several techniques for modeling the states of the world and describing the processes of change from one world state to another. It has led to a better understanding of how to generate plans as action sequences and how to monitor the execution of those plans. Complex robot control problems have forced us to develop methods for planning first at high levels of abstraction, ignoring details, and then at lower and lower levels, where details become essential.
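The following hedged sketch illustrates the "world states and actions" idea in its simplest form: a toy one-block world (the states and actions are hypothetical) searched breadth-first for an action sequence that reaches a goal state:

```python
# A minimal sketch of modeling world states and planning a sequence of
# actions from one state to another, here with a toy one-block world.
# States and actions are hypothetical illustrations.

from collections import deque

ACTIONS = {
    "pick_up":  lambda s: "holding" if s == "on_table" else None,
    "put_down": lambda s: "on_table" if s == "holding" else None,
    "stack":    lambda s: "on_block" if s == "holding" else None,
}

def plan(start: str, goal: str) -> list[str]:
    """Breadth-first search over world states for an action sequence."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, actions = queue.popleft()
        if state == goal:
            return actions
        for name, effect in ACTIONS.items():
            nxt = effect(state)
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, actions + [name]))
    return []

print(plan("on_table", "on_block"))  # ['pick_up', 'stack']
```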
 
4. Automatic Programming:- The task of writing a computer program is related both to theorem proving and to robotics; much of the basic research in automatic programming, theorem proving, and robot problem-solving overlaps. In a sense, existing compilers already do "automatic programming": they take a full source-code specification of what a program is to accomplish and write an object-code program to do it.
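To make the "specification in, executable program out" idea tangible, here is a minimal, hedged sketch; the specification format and the synthesize helper are invented for illustration, not a real automatic-programming system:

```python
# A minimal sketch of the 'compiler as automatic programmer' idea: a
# declarative specification is turned into an executable program.
# The spec format and synthesize() are invented for illustration.

spec = {"name": "double_plus_one", "input": "x", "expression": "2 * x + 1"}

def synthesize(spec: dict):
    """Write and compile a small program from the specification."""
    source = f"def {spec['name']}({spec['input']}):\n"
    source += f"    return {spec['expression']}\n"
    namespace = {}
    exec(source, namespace)  # the 'object code': an executable function
    return namespace[spec["name"]]

double_plus_one = synthesize(spec)
print(double_plus_one(5))  # 11
```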
