Artificial intelligence (“AI”) is, simply put, “The science and engineering of making intelligent machines.” Research into AI has produced much of the technology we associate with the modern age.
Neuroscientists look deep into the workings of the brain, and some are trying to build an artificial brain from scratch by simulating each individual neuron, through each neocortical column, up to an entire mind. (Check out If I Only Had a (Blue) Brain.) Skynet and HAL are AI archetypes to be aspired to even as they’re feared, but exploring the possibilities of artificial sentience forces us to look critically at what makes us human. What is consciousness? Is it something we can build, or something that simply emerges with self-awareness?
There is a lot of debate as to what an intelligent machine would be. Would it merely need to respond to input like a human, or really think like a human, be self-aware, or act on its own apart from human intervention? Similarly, artificial creativity (also called computational creativity) aims to figure out what creativity is in a human, then replicate or enhance it by artificial means.
What is creativity, and who has it?
A key development within AI programs is the incorporation of dynamic processes we associate with intelligent life. In a shift away from Turing-tradition weak AI, some projects have begun incorporating elements inspired by biological functions. Particularly salient are algorithms inspired by genetics and network structures based on neurological connections. Evolutionary algorithms, of which genetic algorithms are a subset, generate solutions to optimization problems using strategies such as reproduction, mutation, and inheritance.
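To make those strategies concrete, here is a minimal sketch of a genetic algorithm in Python. The problem it solves (evolving an all-ones bitstring) and every name and parameter in it are hypothetical, chosen only to illustrate reproduction, mutation, and inheritance:

```python
import random

random.seed(0)
TARGET = [1] * 20  # hypothetical optimization goal: an all-ones bitstring

def fitness(genome):
    # How close this candidate is to the goal (higher is better).
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Mutation: randomly flip a small fraction of bits.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Inheritance: a child combines genetic material from both parents.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=50, generations=100):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]  # selection: fittest half survives
        # Reproduction: parents pair off to produce the next generation.
        population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                      for _ in range(pop_size)]
    return max(population, key=fitness)

best = evolve()
```

Note that no solution is ever programmed in; fit candidates simply out-reproduce unfit ones until the population converges near the optimum.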
Artificial neural networks were inspired by the inner workings of the brain and are often adaptive systems that change their structure in response to the information flowing through them. Neural networks are generally “trained” on paradigmatic examples from the domain of interest — such as art, science, or technology. The network learns by strengthening or weakening the connections to any given node depending on the desirability or correctness of its output, much as the human brain reinforces commonly used neural pathways and prunes undesirable connections.
Using neural networks, Stephen Thaler built a “Creativity Machine” in 1994 that autonomously produced patentable inventions and composed music. Thaler compared the neural circuitry in the machine, achieved through genetic programming, to reflex reactions in the brain and spinal cord. The Creativity Machine consisted of two interconnected neural networks. One network was designed to be “noisy” by randomly removing small bits of information it had learned during training. The noise allowed the first network to generate novel output by filling in the missing information with patterns it extrapolated from training data. The other network analyzed that output and adjusted the parameters of the first network to optimize performance. If the first network was too noisy, it generated output of dubious usefulness; if it was too constrained, it generated little at all.
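The two-network feedback loop can be caricatured in a few lines. This is not Thaler’s implementation — the patterns, the Hamming-distance “critic,” and the noise-adjustment rule below are all invented for illustration — but it shows the dynamic: a noisy generator perturbs what it has learned, and a second component nudges the noise level toward the useful middle ground:

```python
import random

random.seed(2)

training_patterns = [[1, 1, 0, 0], [0, 0, 1, 1]]  # what the first network "learned"

def generate(noise_level):
    # Noisy network: start from a learned pattern, randomly corrupt some bits.
    pattern = list(random.choice(training_patterns))
    for i in range(len(pattern)):
        if random.random() < noise_level:
            pattern[i] = 1 - pattern[i]
    return pattern

def critic(candidate):
    # Second network (stand-in): output is "useful" if it is novel yet still
    # resembles the training data — here judged by Hamming distance.
    closest = min(sum(a != b for a, b in zip(candidate, p))
                  for p in training_patterns)
    return 0 < closest <= 1

noise = 0.5
accepted = 0
for _ in range(200):
    if critic(generate(noise)):
        accepted += 1
        noise = max(0.05, noise - 0.01)  # useful output: rein the noise in
    else:
        noise = min(0.95, noise + 0.01)  # too tame or too wild: loosen it
```

Too little noise and the generator only parrots its training; too much and the critic rejects nearly everything — the same trade-off the article describes.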
If a machine is creative in and of itself, no one could claim authorship in what the machine produces. If the artificial neural network learns for itself, then IT would have a colorable claim to the intellectual property rights in its creations. Since machines aren’t terribly interested in economic incentives and proprietary rights, what they produce should flow smoothly into the public domain. Early generations of the Creativity Machine produced novel chemical inventions and poetry, and more recently creativity machines have been used by the US military to design new weapons. The latest versions of the Machine incorporate self-training artificial neural network objects (STANNO) that essentially allow the machines to “dream” in a virtual reality, running simulations and exercising crucial skills that they can perfect in an ongoing bootstrapping cycle. These machines can learn and train themselves with little to no human input beyond the initial engineering. As such, there are instances when no human creativity is directly involved in the “creative” output, even if humans built the machine itself.
In closing: many have speculated that someday machines will become intelligent enough to self-manage and self-improve. Well, it’s happening. Perhaps it’s time we thought about what we train these machines to be.
- John McCarthy, Basic Questions, What is Artificial Intelligence? Stanford.edu (Revised November 12, 2007). Practical applications of artificial intelligence techniques include data mining, automated bots, self-managing systems, as well as common uses in computer aided design (“CAD”) or the non-player characters (“NPCs”) in video games.
- See Jeff Hawkins, On Intelligence, 207-10 (Times Books 2004).
- Simple neural networks consist of three layers – input, hidden, output – each made up of highly interconnected nodes. Somewhat problematically, the nodes are connected between layers but not laterally: input comes in, is processed by the “input neurons,” is filtered through one or more layers of “hidden” neurons, and reaches the “output neurons,” which produce the network’s response. More complicated neural networks attempt to allow for more human-like functions, such as extrapolating from part to whole and considering input over time.
- Neural networks are used in many types of data processing and classification. For example, geneticists train neural networks to predict which genetic sequences are likely to code for proteins, and some spam filters utilize neural networks to maximize accuracy and efficiency.