retroelectro
adapt to increasingly complex environments, Shannon hopes to build models replicating this adaptability in ‘automata’, ultimately advancing our understanding of mechanized intelligence. Proposal for research by M. L. Minsky As a graduate student, Marvin Minsky built one of the first ‘neural network’ learning machines (the ‘Stochastic Neural Analog Reinforcement Calculator’, or ‘SNARC’) in the early 1950s. A Navy veteran, he held degrees from Harvard and Princeton. He co-founded MIT’s Artificial Intelligence Lab and remained there from its inception in 1963 until his death in 2016. Minsky’s proposal focused on designing a machine capable of learning through sensory and ‘motor abstractions’. Minsky
grammar, and syntax, any thinking machine would likely need to operate in a similar way, governed by the same kinds of rules. Neuron nets “How can a set of (hypothetical) neurons be arranged so as to form concepts.” As scientists began to grapple with the challenge of mimicking
which may best be described as self-improvement.” The vision of creating a truly intelligent machine led to a fascinating concept: self-improvement. Researchers speculated that for a machine to be intelligent, it would need the ability to enhance its own capabilities over time. Abstraction “A number of types of ‘abstraction’ can be distinctly defined and several others less distinctly. A direct attempt to classify these and to describe machine methods of forming abstractions from sensory and other data would seem worthwhile.” Abstraction, the ability to distill complex information into simpler concepts, was identified as a key process in human thought. To replicate this in machines, scientists needed to classify and define different types of abstraction. This task was seen as essential for enabling machines to interpret sensory data and other information in a human-like manner. Randomness and creativity “A fairly attractive and yet clearly incomplete conjecture is that the difference between creative thinking and unimaginative competent thinking lies in the injection of some randomness.”
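The conjecture about injected randomness can be made concrete with a toy search problem. The landscape, function names, and restart strategy below are a modern illustration of the idea, not anything from the proposal itself: a purely ‘competent’ greedy search stops on the nearest peak, while the same search with a dash of randomness tends to find the higher one.

```python
import random

# Hypothetical toy landscape for illustration: two peaks,
# a lower one at x=3 (height 5) and a higher one at x=15 (height 10).
def f(x):
    return max(5 - abs(x - 3), 10 - abs(x - 15), 0)

def hill_climb(f, x, steps=100):
    """'Unimaginative competent' search: step to a strictly better neighbor."""
    for _ in range(steps):
        for nx in (x - 1, x + 1):
            if f(nx) > f(x):
                x = nx
                break
        else:
            return x  # no better neighbor: stuck on the nearest peak
    return x

def hill_climb_random(f, lo, hi, restarts=50, seed=0):
    """The same search with injected randomness: restart from random points."""
    rng = random.Random(seed)
    best = lo
    for _ in range(restarts):
        x = hill_climb(f, rng.randint(lo, hi))
        if f(x) > f(best):
            best = x
    return best

# Started at 0, the greedy climber halts on the lower peak at x=3;
# with random restarts the search almost always reaches the peak at x=15.
```

Random restarts are only one modern descendant of the conjecture; simulated annealing and Monte Carlo methods inject randomness in the same guided spirit.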
“We will concentrate on a problem of devising a way of programming a calculator to form concepts and to form generalizations. This, of course, is subject to change when the group gets together.”
Figure 2. Claude Shannon with his self-solving ‘mouse-in-a-maze’ machine, Theseus.
nature of creativity, they considered the role of randomness in the creative process. The intriguing idea emerged that the difference between routine and creative thinking might lie in the controlled injection of randomness. This theory suggested that when guided by intuition, randomness could be the secret ingredient that makes creative thinking possible. Proposal for research by C. E. Shannon Claude Shannon’s master’s thesis, A Symbolic Analysis of Relay and Switching Circuits, is credited with bringing Boolean logic to electronic circuit design and laying the foundation of the digital age. After completing his doctorate at MIT, Shannon worked at Bell Labs, where he collaborated with and mentored McCarthy and Minsky in 1951 and 1952. Together, they developed ‘Theseus’, a self-solving ‘mouse in a maze’ using relay logic. Shannon’s research proposal for the Summer Research Project delved into two key areas related to information theory and brain models:
Application of information theory to computing machines and brain models Shannon’s first research focus addresses the challenge of reliably transmitting information across noisy channels using unreliable components. He explores how information flows in parallel data streams over closed-loop networks and examines the complications that may arise, such as propagation delays and redundancy. Shannon proposes investigating new approaches to minimize these delays, ensuring reliable transmission of information across complex systems. The matched environment and brain model approach to automata In the second topic, Shannon theorizes that both animal and human brain development occurs in stages, beginning with simpler environments and eventually moving toward more complex ones. As a person matures, their brain comes to comprehend more of the universe around it. He wanted to explore the specific stages of brain development and express them mathematically. By understanding how brains
human thought, they turned to the brain’s fundamental building blocks: neurons. The question was how to arrange a set of hypothetical neurons to form concepts. Pioneers in the field had made strides in both theoretical and experimental work, but the problem remained far from solved. Theory of the size of a calculation “If we are given a well-defined problem, one way of solving it is to try all possible answers in order.” In their quest to solve complex problems, early computer scientists realized that brute-force methods were too time-consuming. To address this, they sought to understand and measure how efficient a calculation could be. Self-improvement “Probably a truly intelligent machine will carry out activities
programmed to replicate that task. Here, they admit that the speed and memory sizes of the machines they had at the time were ‘insufficient’ to simulate higher brain function. One issue they felt they could tackle was that no programming language yet existed to attempt such a thing in the first place. How can a computer be programmed to use a language “It may be speculated that a large part of human thought consists of manipulating words according to rules of reasoning and rules of conjecture... This idea has never been very precisely formulated nor have examples been worked out.” Up to this point, the closest thing available for programming was assembly language. Here, the thought was that since much of thinking is really made up of words,
As researchers delved into the
Figure 3. Marvin Minsky at the piano.
we get technical