// Because the results of evolution provide the experience of unlimited trial and error, much time can be saved that would otherwise be spent re-exploring.

There are four attributes of neocortical memory that are fundamentally different from computer memory:
• The neocortex stores sequences of patterns.
• The neocortex recalls patterns a...
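The listed attributes can be made concrete with a toy sketch. This is my own illustration, not code from the book: the `SequenceMemory` class and the melody example are invented. It shows the first attribute, storing sequences of patterns, together with recall of a whole stored sequence from a partial cue.

```python
# Toy illustration (not from the book) of sequence memory:
# a partial cue retrieves the full stored sequence.

class SequenceMemory:
    def __init__(self):
        # each stored sequence is a tuple of patterns (strings here)
        self.sequences = []

    def store(self, sequence):
        self.sequences.append(tuple(sequence))

    def recall(self, partial):
        """Return the first stored sequence containing the
        partial cue as a contiguous subsequence, else None."""
        partial = tuple(partial)
        for seq in self.sequences:
            for i in range(len(seq) - len(partial) + 1):
                if seq[i:i + len(partial)] == partial:
                    return seq
        return None

mem = SequenceMemory()
mem.store(["A", "B", "C", "D"])      # e.g. the notes of a melody
print(mem.recall(["B", "C"]))        # a fragment cues the whole sequence
```

A computer memory, by contrast, returns only what is stored at an exact address; nothing here depends on addresses, only on partial content.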
Jeff Hawkins: …To create such metaphors he had to see a succession of clever analogies.

In fact, highly creative works of art are appreciated because they violate our predictions. When you see a film that breaks the familiar mold of a character, story line, or cinematography (including special effects), you like it because it is not the same old same old. Paintings, music, poetry, novels— all creative artistic forms— strive to break convention and violate the expectations of an audience. There is a contradictory tension in what makes a work of art great. We want art to be familiar yet at the same time to be unique and unexpected. Too much familiarity is retread or kitsch; too much uniqueness is jarring and difficult to appreciate. The best works break some expected patterns while simultaneously teaching us new ones.

Consider a great piece of classical music. The best music has an appeal at a simple level— good beat, simple melody and phrasing. Anyone can understand and appreciate it. However, it is also a little different and unexpected. But the more you listen to it, the more you see there is pattern in the unexpected parts, such as repeated unusual harmonies or key changes. The same is true with great literature or great movies. The more you read or see them, the more creative detail and complexity of structure you observe.

A related question I often hear is, "If all brains are inherently creative, why are there differences in our creativity?" The memory-prediction framework points to two possible answers. One has to do with nature and the other with nurture.

We may never know why Einstein was as creative and smart as he was, but it is a safe bet that part of his talent derived from genetic factors. Whatever the difference between brilliant and average brains, we are all creative. And through practice and study we can enhance our skills and talents.
// Also, recent neuroscience research indicates that neurons can regenerate and rewire regardless of age, and that genetic factors play a role. Heredity may determine whether you keep thinking, feeling, and doing a certain kind of thing, so the importance of awareness is self-evident. Genes are like a code left by your ancestors, and you need to find the way to decode it.

Can You Train Yourself to be More Creative?

Yes, most definitely. I have found there are ways to foster finding useful analogies when working on problems. First, you need to assume up front that there is an answer to what you are trying to solve. People give up too easily. You need confidence that a solution is waiting to be discovered and you must persist in thinking about the problem for an extended period of time.

Second, you need to let your mind wander. You need to give your brain the time and space to discover the solution. Finding a solution to a problem is literally finding a pattern in the world, or a stored pattern in your cortex that is analogous to the problem you are working on. If you are stuck on a problem, the memory-prediction model suggests that you should find different ways to look at it to increase the likelihood of seeing an analogy with a past experience. If you just sit there and stare at it over and over, you won't get very far. Try taking the parts of your problem and rearranging them in different ways— literally and figuratively.

// This passage wasn't even translated in the Chinese edition... Come on, you were paid for this; couldn't you take it a bit more seriously?

If you get stuck on a problem, go away for a little while. Do something else. Then start again, rephrasing the problem anew. If you do this enough times something will click sooner or later. It may take days or weeks, but eventually it will happen. The goal is to find an analogous situation somewhere in your past or present experience. To succeed you must ponder the problem often but also do other things so the cortex will have the opportunity to find an analogous memory.

Kepler's excitement serves as a cautionary tale for scientists, and indeed for all thinkers. The brain is an organ that builds models and makes creative predictions, but its models and predictions can as easily be specious as valid.
Our brains are always looking at patterns and making analogies. If correct correlations cannot be found, the brain is more than happy to accept false ones. Pseudoscience, bigotry, faith, and intolerance are often rooted in false analogy.

People with a condition called synesthesia have brains that blur the distinction between the senses— certain sounds have a color, or certain textures have a color. This tells us that the qualitative aspect of a sense is not immutable. Through some sort of physical modification, a brain can impart a qualitative aspect of vision to an auditory input.

By closing their eyes and imagining each and every turn, every obstacle, and even being on the winning stand, they increase their chances of success. Imagining is just another word for planning. This is where the predictive ability of our cortex pays off. It permits us to know what the consequences of our actions will be before we do them.

Many aspects of the world around us are so consistent that nearly every human has the same internal model of them. As a baby, you learned that the light falling on a round object produces a certain shadow, and that you can assess the shape of most objects by cues from the natural world. You learned that if you flung a cup off your highchair, gravity always pulled it to the floor. You learned textures, geometry, colors, and the rhythms of day and night. The simple physical properties of the world are learned consistently by all people.

But much of our world model is based on custom, culture, and what our parents teach us. These parts of our model are less consistent and might be totally different for different people. A child who is raised in a loving, caring home with parents who respond to his or her emotional needs will probably grow to adulthood predicting that the world is safe and loving.
Children abused by one or both parents are more likely to see future events as dangerous and cruel, and believe that no one is to be trusted— no matter how well they are treated later. Much of psychology is based on the consequences of early life experience, attachment, and nurturance because that is when the brain first lays down its model of the world.

Your culture thoroughly shapes your world model. For example, studies show that Asians and Westerners perceive space and objects differently. Asians attend more to the space between objects, whereas Westerners mostly attend to objects— a difference that translates into separate aesthetics and ways of solving problems. Research has shown that some cultures, such as tribes in Afghanistan and some communities in the American South, are built on principles of honor and, as a result, are more likely to accept the naturalness of violence. Differing religious beliefs learned early in life can lead to completely different models of morality, how men and women are to be treated, and even the value of life itself. Clearly these differing models of the world can't all be correct in some absolute, universal way, even though they may seem correct to an individual. Moral reasoning, both the good and the bad, is learned.

Your culture (and family experience) teaches you stereotypes, which are unfortunately an unavoidable part of life. Throughout this book, you could substitute the word stereotype for invariant memory (or invariant representation) without substantially altering the meaning. Prediction by analogy is pretty much the same as judgment by stereotype. Negative stereotyping has terrible social consequences. If my theory of intelligence is right, we cannot rid people of their propensity to think in stereotypes, because stereotypes are how the cortex works.
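The claim that prediction by analogy and judgment by stereotype are one mechanism can be sketched in miniature. This is my own toy illustration, not from the book; the feature vectors and the "safe"/"dangerous" labels are invented. A new observation is matched to the nearest stored invariant pattern, and the prediction attached to that pattern is reused, whether or not it actually fits the new case.

```python
# Toy illustration (not from the book) of prediction by analogy:
# reuse the outcome attached to the closest stored pattern.

def hamming(a, b):
    """Count the positions where two equal-length tuples differ."""
    return sum(x != y for x, y in zip(a, b))

# invariant patterns the "cortex" has stored, with the outcome each predicts
stored = {
    (1, 0, 1, 1): "safe",
    (0, 1, 0, 0): "dangerous",
}

def predict(observation):
    # pick the stored pattern at the smallest Hamming distance
    best = min(stored, key=lambda p: hamming(p, observation))
    return stored[best]

print(predict((1, 0, 1, 0)))  # nearest to (1, 0, 1, 1) -> "safe"
print(predict((0, 1, 1, 0)))  # nearest to (0, 1, 0, 0) -> "dangerous"
```

The mechanism never checks whether the reused outcome is warranted; a misleading stored pattern yields a confident wrong prediction, which is exactly the failure mode of a false stereotype.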
Stereotyping is an inherent feature of the brain. // I don't think so.

The way to eliminate the harm caused by stereotypes is to teach our children to recognize false stereotypes, to be empathetic, and to be skeptical. We need to promote these critical-thinking skills in addition to instilling the best values we know. Skepticism, the heart of the scientific method, is the only way we know how to ferret out fact from fiction.

// My view: once you become aware of what your own patterns are, and recognize how those patterns were formed, you are no longer bound by stereotypes. It's just that neither of those things is easy.

Should We Build Intelligent Machines?

Throughout the twenty-first century, intelligent machines will emerge from the realm of science fiction into fact. Before we get there, we should think through the ethical issues and weigh whether the possible dangers of intelligent machines outweigh the likely benefits.

The prospect of machines that can think and act on their own has worried people for a long time. This is understandable. New areas of knowledge and new technologies always scare people when they first come along. Human creativity lets us imagine the terrible ways a new technology may take over our bodies, outmode our usefulness, or cancel out the very value of human life. But history shows that these dark imaginings almost never play out the way we expect. When the industrial revolution came along, we feared electricity (remember Frankenstein?) and steam engines. Machinery that had its own energy, that could move itself in complex ways, seemed miraculous and at the same time potentially sinister. But electricity and internal combustion engines are no longer strange and sinister. They are as much a part of our environment as air and water.

When the information revolution began, we quickly came to fear computers. There were countless science-fiction stories about powerful computers or computer networks that spontaneously became self-aware and then turned on their organic masters. But now that computers have become integrated into daily life, this fear seems absurd.
The computer in your home, or the Internet, has as much chance of spontaneously turning sentient as does a cash register.

Any technology can be applied to good or evil ends, of course, but some are more inherently prone to misuse or catastrophe than others. Atomic energy is dangerous whether it's in the form of nuclear warheads or power plants because a single accident or a single misuse could harm or kill millions of people. And although nuclear energy is valuable, alternatives are available. Vehicular technology can take the form of tanks and fighter jets, or it can take the form of cars and passenger airplanes, and a mishap or misuse can cause harm to many people. But vehicles are arguably both more essential to modern life and less dangerous than nuclear power. The damage caused by a single misuse of an airplane is much less than that of a nuclear bomb.

There are many technologies that are almost wholly beneficial. Telephones are an example. Overwhelmingly, their tendency to bring and keep people together exceeds any negative effects. The same goes for electricity and public health science. In my opinion, intelligent machines are going to be one of the least dangerous, most beneficial technologies we have ever developed.

Still, some thinkers, like Sun Microsystems cofounder Bill Joy, fear that we may develop intelligent robots that could escape our control, swarm the Earth, and remake it according to their own agenda. The image puts me in mind of those magically animated broomsticks from the Sorcerer's Apprentice, regenerating themselves from their splinters and working tirelessly to bring about disaster. Along similar lines, some AI optimists offer extended-life prophecies that are unsettling. For instance, Ray Kurzweil talks about the day when nanorobots will crawl within your brain, recording every synapse and every connection, and then report all the information to a supercomputer, which will reconfigure itself into you!
You'll become a "software" version of yourself that will be practically immortal. These two predictions about machine intelligence, the intelligent machines–run-amok scenario and the upload-your-brain-into-a-computer scenario, seem to surface over and over.

Building intelligent machines is not the same as building self-replicating machines. There is no logical connection whatsoever between them. Neither brains nor computers can directly self-replicate, and brainlike memory systems will be no different. While one of the strengths of intelligent machines will be our ability to mass-produce them, that's a world apart from self-replication in the manner of bacteria and viruses. Self-replication does not require intelligence, and intelligence does not require self-replication.

Further, I seriously doubt we will ever be able to copy our minds into machines. There are at present, as far as I know, no actual or imagined methods capable of recording the trillions of details that make "you." We would need to record and re-create all of your nervous system and your body, not just your neocortex. And we would need to understand how all of it works. One day, certainly, we might be able to do this, but the challenges extend far beyond understanding how the cortex works. Figuring out the neocortical algorithm and building it into machines from scratch is one thing, but scanning in the zillions of operational details of a living brain and replicating them in a machine is something completely different.

* * *

Beyond self-replication and the copying of minds, people have another concern with intelligent machines. Might intelligent machines somehow threaten large portions of the population, as nuclear bombs do? Might their presence lead to the superempowerment of small groups or malevolent individuals? Or might the machines become evil and work against us, like the implacable villains in The Terminator or the Matrix movies?

The answer to these questions is no.
As information devices, brainlike memory systems are going to be among the most useful technologies we have yet developed. But like cars and computers, they will only be tools. Just because they are going to be intelligent does not mean they will have special abilities to destroy property or manipulate people. And just as we wouldn't put the control of the world's nuclear arsenal under the authority of one person or one computer, we will have to be careful not to rely too much on intelligent machines, for they will fail as all technology does.

This gets us to the malevolence question. Some people assume that being intelligent is basically the same as having human mentality. They fear that intelligent machines will resent being "enslaved" because humans hate being enslaved. They fear that intelligent machines will try to take over the world because intelligent people throughout history have tried to take over the world. But these fears rest on a false analogy. They are based on a conflation of intelligence— the neocortical algorithm— with the emotional drives of the old brain— things like fear, paranoia, and desire. But intelligent machines will not have these faculties. They will not have personal ambition. They will not desire wealth, social recognition, or sensual gratification. They will not have appetites, addictions, or mood disorders. Intelligent machines will not have anything resembling human emotion unless we painstakingly design them to.

The strongest applications of intelligent machines will be where the human intellect has difficulty, areas in which our senses are inadequate, or in activities we find boring. In general, these activities have little emotional content. Intelligent machines will range from simple, single-application systems to very powerful superhuman intelligent systems, but unless we go out of our way to make them humanlike, they won't be.
Maybe someday we will have to place restrictions on what people can do with intelligent machines, but that day is a long way off, and when it comes, the ethical issues are likely to be relatively easy compared with such present-day moral questions as those surrounding genetics and nuclear technology.