Publisher: Academic Press
Publication date: 2006-3-10
Pages: 856
Price: USD 89.95
Binding: Hardcover
ISBN: 9780123695314
About the Authors · · · · · ·
Sergios Theodoridis received his B.Sc. in Physics from the University of Athens in 1973, and his M.Sc. and Ph.D. in Signal Processing and Communications from the University of Birmingham, UK, in 1975 and 1978, respectively. Since 1995 he has been a professor in the Department of Informatics and Telecommunications at the University of Athens, Greece. Four of his papers have received Excellent Paper Awards from the IEEE Transactions on Neural Networks, and he is a senior member of the IET and the IEEE.
Konstantinos Koutroumbas graduated from the Department of Computer Engineering and Informatics at the University of Patras, Greece, in 1989, received an M.Sc. in Computer Science from the University of London in 1990, and a Ph.D. from the University of Athens in 1995. Since 2001 he has been with the Institute for Space Applications and Remote Sensing of the National Observatory of Athens, Greece, and is an internationally recognized expert.
Short Reviews · · · · · ·
Reviews of Pattern Recognition, Third Edition · · · · · · (all 4)
An excellent textbook
A very good reference for classification and clustering
> More reviews (4)
Reading Notes · · · · · ·
allenchen (慎独) · 2011-08-09 14:55
"From the moment we move away from the step function, all we have said before about mapping the input vectors onto the vertices of a unit hypercube is no longer valid. It is now the cost function that takes on the burden for correct classification."

allenchen (慎独) · 2011-08-09 14:30
"A popular family of continuous differentiable functions, which approximate the step function, is the family of sigmoid functions. A typical representative is the logistic function: f(x) = \frac{1}{1+\exp(-ax)}. All these functions are also known as squashing functions since their output is limited in a finite range of values."
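The squashing behaviour quoted in this note is easy to check numerically; a minimal sketch (the slope values for `a` are chosen arbitrarily here):

```python
import math

def logistic(x, a=1.0):
    """Logistic sigmoid f(x) = 1 / (1 + exp(-a*x)); output lies in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-a * x))

# As the slope a grows, the logistic approaches the step function,
# yet it remains continuous and differentiable everywhere.
for a in (1.0, 5.0, 50.0):
    print([round(logistic(x, a), 3) for x in (-2.0, -0.1, 0.0, 0.1, 2.0)])
```

For every `a`, the outputs stay strictly inside (0, 1), which is exactly the "finite range" the quote refers to.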
allenchen (慎独) · 2011-08-09 14:13
"The other direction we will follow to design a multilayer perceptron is to fix the architecture and compute its synaptic parameters so as to minimize an appropriate cost function of its output. This method not only overcomes the drawback of the resulting large networks of the previous section but also makes these networks powerful tools for a number of other applications, beyond pattern recognition. However, this method is confronted with a serious difficulty. This is the discontinuity of the step (activation) function, prohibiting differentiation with respect to the unknown parameters (synaptic weights)."
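Why the discontinuity matters becomes concrete once the step function is swapped for a sigmoid: the cost then has a gradient in the synaptic weights and plain gradient descent works. A toy sketch on a single neuron with a hand-made 1-D data set (the learning rate and iteration count are arbitrary choices, not from the book):

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy 1-D training set: (input, class label 0/1).
data = [(-2.0, 0.0), (-1.0, 0.0), (1.0, 1.0), (2.0, 1.0)]

w, b = 0.0, 0.0   # synaptic weight and bias, initialised at zero
lr = 0.5          # learning rate

for _ in range(200):  # batch gradient descent on squared error
    gw = gb = 0.0
    for x, y in data:
        out = logistic(w * x + b)
        # derivative of (out - y)^2, using f'(v) = f(v) * (1 - f(v))
        delta = 2.0 * (out - y) * out * (1.0 - out)
        gw += delta * x
        gb += delta
    w -= lr * gw
    b -= lr * gb

print(w, b)  # w grows positive: the neuron now separates the two classes
```

With a step activation, `delta` would be zero almost everywhere and undefined at the jump, so no such update rule could be derived; the sigmoid is what makes the cost-minimization route feasible.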
allenchen (慎独) · 2011-08-09 13:36
"A general principle adopted by most of these techniques is the decomposition of the problem into smaller problems that are easier to handle. For each smaller problem, a single node is employed. Its parameters are determined iteratively using appropriate learning algorithms. From the way these algorithms build the network, they are sometimes referred to as constructive techniques."

Other editions of this book · · · · · · (all 8)
Recommended in these lists · · · · · · (all)
 Machine Learning ([deactivated account])
 AI Books (英子)
 PRML (灯)
 Robotics (Mr. L)
Who is reading this book?
No short reviews yet