DigiKey eMagazine: Edge AI & ML, Volume 10

retroelectro

describes a machine that can be trained via a ‘trial and error’ process to perform specific tasks within an environment and to exhibit ‘goal-seeking’ behavior. This hypothetical machine could process inputs, generate outputs, and adapt to success or failure by reading its sensors, similar to Shannon’s Theseus project and to the SNARC that Minsky designed. Minsky emphasizes the importance of pairing sensory and motor controls so that the machine can act on, and learn from, its environment. Progress in the machine’s learning would depend on its ability to relate changes in the environment to the corresponding changes in its sensor readings. Minsky further explains that the machine should develop an internal, abstract model of its environment, stored in memory. This internal model would let it experiment internally before conducting external tests, enabling it to perform tasks more intelligently. The machine’s behavior would appear imaginative because it could

predict and anticipate changes in the environment based on its motor actions.

“Unless the machine is provided with, or is able to develop, a way of abstracting sensory material, it can progress through a complicated environment only through painfully slow steps, and in general will not reach a high level of behavior.” – Minsky

Proposal for research by N. Rochester

Nathaniel Rochester worked at IBM at the time. He graduated from MIT in 1941 and then developed radar systems for the US Navy during the war. He joined IBM in 1948, after wartime development dried up. A few years later, IBM released the first of its 700 series of electronic computers, the IBM 701, for which Rochester was the lead developer. At the time of the proposal, Rochester headed a research group studying information theory and automatic pattern recognition. McCarthy and Rochester first met when IBM gifted an IBM 704 to MIT’s research lab, specifically for research on ‘neural networks.’

Figure 4. Nathaniel Rochester designed the first electronic IBM computer.

Rochester’s research proposal centers on the challenge of creating a machine capable of exhibiting originality in its problem-solving abilities. Typically, machines like automatic calculators are programmed with a fixed set of rules to address specific contingencies and failures, leaving them without the flexibility to act intuitively or with common sense. For example, if you divide by ‘0’ on a calculator, you will likely get an error of some sort, but only because the calculator was programmed to return an error when asked to divide by ‘0’, rather than learning on its own that dividing by ‘0’ doesn’t work and developing its own rules. Rochester highlights the frustration that arises when machines fail due to rigid or contradictory rules and suggests that a more sophisticated approach is needed for machines to behave intelligently.

Rochester draws on Kenneth Craik’s model of human thought, which theorizes that the brain constructs ‘engines’ that simulate and predict outcomes in the environment. He proposes that machines could similarly be designed to form abstractions of sensory data, define problems, and then simulate possible solutions, evaluating their success before acting. While this approach works for well-understood problems, Rochester notes that solving new or long-unsolved problems requires randomness and creativity. He argues that randomness could be the key to overcoming the limitations of pre-programmed rules and to enabling machines to behave in original ways, much as scientists may rely on a ‘hunch’ when approaching difficult problems. Rochester discusses the Monte Carlo method, which involves conducting hundreds or thousands of random experiments to approximate solutions to complex problems. He sees potential in applying this method to machine learning, suggesting that machines could explore many possibilities simultaneously and uncover solutions that traditional methods might miss.

“So the mathematician has the machine making a few thousand random experiments … the results of these experiments provide a rough guess as to what the answer may be.” – Rochester

However, he acknowledges that simulating human-like randomness in machines is challenging, as the brain’s control mechanisms differ significantly from those of calculators and computers.

Proposal for research by John McCarthy

Figure 5. John McCarthy working with chess computers.

John McCarthy, an Army veteran, is famously known for coining the term ‘Artificial Intelligence.’ Following his doctorate at Princeton, he held a few assistant professor positions before landing at Dartmouth College in the summer of 1955. As a graduate student, he interned with Marvin Minsky at Bell Labs, where he was mentored by Claude Shannon. After the Summer Research Project, he took a position at MIT alongside Marvin Minsky, continuing his work in AI and developing the LISP programming language.

McCarthy’s proposal focuses on studying the relationship between language and intelligence. It argues that direct applications of trial-and-error methods to the interaction between sensory data and motor activity are unlikely to yield complex behaviors. Instead, he advocates applying trial and error at a higher level of abstraction. He highlights language as a crucial tool people use to handle intricate phenomena, noting that human minds use language to formulate conjectures and test them. McCarthy points out that English has several properties advantageous for complex thought, properties that programming languages developed for computers often lack. These include the ability to make concise arguments supplemented by informal mathematics, a way of incorporating other languages within English, and the ability for speakers to refer to their own progress on a problem. He also
