# 4. Linear Boundaries

With x1 on the horizontal axis and x2 on the vertical axis, w1 is the weight for x1, w2 is the weight for x2, and b is the bias. Writing W = (w1, w2) and x = (x1, x2), Wx is the product of the vector (w1, w2) with the vector (x1, x2), and the perceptron's boundary is the line Wx + b = 0.
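As a minimal sketch (the weights and point values here are illustrative, not from the course), the boundary Wx + b = 0 can be evaluated with a dot product:

```python
import numpy as np

# A hypothetical boundary: 2*x1 + 1*x2 - 2 = 0
W = np.array([2.0, 1.0])
b = -2.0

def classify(x):
    # Return 1 if the point is on or above the line Wx + b = 0, else 0.
    return int(np.dot(W, x) + b >= 0)

print(classify(np.array([1.0, 1.0])))  # 2 + 1 - 2 = 1 >= 0 -> 1
print(classify(np.array([0.0, 0.0])))  # -2 < 0 -> 0
```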

# 7. Why Neural Networks?

• The structure of a perceptron closely resembles that of a neuron in the brain.
• The left image shows a perceptron with four inputs; based on its equation and the inputs, the perceptron decides whether to return 0 or 1.
• The right image shows an analogous brain neuron, which receives its inputs, in the form of nervous impulses, through its dendrites.

# 8. Perceptrons as Logical Operators

## 8.1 AND Perceptron

```python
import pandas as pd

# TODO: Set weight1, weight2, and bias
weight1 = 1.0
weight2 = 1.0
bias = -1.5

# DON'T CHANGE ANYTHING BELOW
# Inputs and outputs
test_inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
correct_outputs = [False, False, False, True]
outputs = []

# Generate and check output
for test_input, correct_output in zip(test_inputs, correct_outputs):
    linear_combination = weight1 * test_input[0] + weight2 * test_input[1] + bias
    output = int(linear_combination >= 0)
    is_correct_string = 'Yes' if output == correct_output else 'No'
    outputs.append([test_input[0], test_input[1], linear_combination, output, is_correct_string])

# Print output
num_wrong = len([output[4] for output in outputs if output[4] == 'No'])
output_frame = pd.DataFrame(outputs, columns=['Input 1', '  Input 2', '  Linear Combination', '  Activation Output', '  Is Correct'])
if not num_wrong:
    print('Nice!  You got it all correct.\n')
else:
    print('You got {} wrong.  Keep trying!\n'.format(num_wrong))
print(output_frame.to_string(index=False))
```

```
Nice!  You got it all correct.

Input 1    Input 2    Linear Combination    Activation Output   Is Correct
0          0                  -1.5                    0          Yes
0          1                  -0.5                    0          Yes
1          0                  -0.5                    0          Yes
1          1                   0.5                    1          Yes
```

## 8.2 OR Perceptron

The OR perceptron is like the AND perceptron, but with the line shifted down. To turn the AND perceptron above into an OR perceptron, you can either:

• Increase the weights
• Decrease the magnitude of the bias
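For example (these particular values are one possible choice, not the only one), keeping the AND weights but shrinking the bias magnitude to 0.5 implements OR:

```python
# One possible OR perceptron: same weights as AND, smaller bias magnitude.
weight1, weight2, bias = 1.0, 1.0, -0.5

def or_perceptron(x1, x2):
    # Fire (return 1) when the linear combination is non-negative.
    return int(weight1 * x1 + weight2 * x2 + bias >= 0)

for inputs in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(inputs, '->', or_perceptron(*inputs))  # 0 only for (0, 0)
```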

## 8.3 NOT Perceptron

Unlike the other perceptrons we just looked at, the NOT operation only cares about one input: if that input is 1, the operation returns 0, and if the input is 0, it returns 1. The perceptron's other inputs are ignored.

```python
import pandas as pd

# TODO: Set weight1, weight2, and bias
weight1 = 0
weight2 = -1
bias = 0.5

# DON'T CHANGE ANYTHING BELOW
# Inputs and outputs
test_inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
correct_outputs = [True, False, True, False]
outputs = []

# Generate and check output
for test_input, correct_output in zip(test_inputs, correct_outputs):
    linear_combination = weight1 * test_input[0] + weight2 * test_input[1] + bias
    output = int(linear_combination >= 0)
    is_correct_string = 'Yes' if output == correct_output else 'No'
    outputs.append([test_input[0], test_input[1], linear_combination, output, is_correct_string])

# Print output
num_wrong = len([output[4] for output in outputs if output[4] == 'No'])
output_frame = pd.DataFrame(outputs, columns=['Input 1', '  Input 2', '  Linear Combination', '  Activation Output', '  Is Correct'])
if not num_wrong:
    print('Nice!  You got it all correct.\n')
else:
    print('You got {} wrong.  Keep trying!\n'.format(num_wrong))
print(output_frame.to_string(index=False))
```
• The output is shown below (Linear Combination is the computed value; Activation Output is the thresholded result: 1 if the combination is greater than or equal to 0, otherwise 0):
```
Nice!  You got it all correct.

Input 1    Input 2    Linear Combination    Activation Output   Is Correct
0          0                   0.5                    1          Yes
0          1                  -0.5                    0          Yes
1          0                   0.5                    1          Yes
1          1                  -0.5                    0          Yes
```

## 8.4 XOR Perceptron

The XOR operation returns 1 only when exactly one of the two inputs is 1. Unlike AND, OR, and NOT, XOR cannot be implemented by a single perceptron, because its points are not linearly separable; instead it is built as a small multi-layer network combining the perceptrons above, e.g. XOR(x1, x2) = AND(OR(x1, x2), NOT(AND(x1, x2))).
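That composition can be sketched by reusing the AND/OR weight choices from above (the NAND weights here are simply the negated AND weights):

```python
def perceptron(x1, x2, w1, w2, b):
    # Generic two-input perceptron with a step activation.
    return int(w1 * x1 + w2 * x2 + b >= 0)

def xor(x1, x2):
    or_out = perceptron(x1, x2, 1.0, 1.0, -0.5)     # OR
    nand_out = perceptron(x1, x2, -1.0, -1.0, 1.5)  # NOT(AND), i.e. NAND
    # Second layer: XOR = AND(OR, NAND)
    return perceptron(or_out, nand_out, 1.0, 1.0, -1.5)

for inputs in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(inputs, '->', xor(*inputs))  # 1 only for (0, 1) and (1, 0)
```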

# 10. The Perceptron Algorithm

1. Start with random weights and a random bias.
2. Compute the classification for every point.
3. Loop over all misclassified points:
4. If a point is classified negative but its label is positive, update each weight to `wi + a*xi` and the bias to `b + a` (where `a` is the learning rate).
5. If a point is classified positive but its label is negative, update each weight to `wi - a*xi` and the bias to `b - a`.

## 10.1 Coding the Perceptron Algorithm

• `data.csv`
```
0.78051,-0.063669,1
0.28774,0.29139,1
0.40714,0.17878,1
0.44274,0.59205,0
0.85176,0.6612,0
0.60436,0.86605,0
0.68243,0.48301,0
...
```
• `perceptron.py`
```python
import numpy as np

# Setting the random seed, feel free to change it and see different solutions.
np.random.seed(42)

def stepFunction(t):
    if t >= 0:
        return 1
    return 0

def prediction(X, W, b):
    return stepFunction((np.matmul(X, W) + b)[0])

# TODO: Fill in the code below to implement the perceptron trick.
# The function should receive as inputs the data X, the labels y,
# the weights W (as an array), and the bias b,
# update the weights and bias W, b, according to the perceptron algorithm,
# and return W and b.
def perceptronStep(X, y, W, b, learn_rate=0.01):
    for i in range(len(X)):
        y_hat = prediction(X[i], W, b)
        if y[i] - y_hat == 1:
            W[0] += X[i][0] * learn_rate
            W[1] += X[i][1] * learn_rate
            b += learn_rate
        elif y[i] - y_hat == -1:
            W[0] -= X[i][0] * learn_rate
            W[1] -= X[i][1] * learn_rate
            b -= learn_rate
    return W, b

# This function runs the perceptron algorithm repeatedly on the dataset,
# and returns a few of the boundary lines obtained in the iterations,
# for plotting purposes.
# Feel free to play with the learning rate and the num_epochs,
# and see your results plotted below.
def trainPerceptronAlgorithm(X, y, learn_rate=0.01, num_epochs=25):
    x_min, x_max = min(X.T[0]), max(X.T[0])
    y_min, y_max = min(X.T[1]), max(X.T[1])
    W = np.array(np.random.rand(2, 1))
    b = np.random.rand(1)[0] + x_max
    # These are the solution lines that get plotted below.
    boundary_lines = []
    for i in range(num_epochs):
        # In each epoch, we apply the perceptron step.
        W, b = perceptronStep(X, y, W, b, learn_rate)
        boundary_lines.append((-W[0] / W[1], -b / W[1]))
    return boundary_lines
```
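As a quick sanity check, the same update rule can be run on a tiny linearly separable dataset of my own (this toy data and compact helper are not part of the course files); after a few epochs it should classify every point correctly:

```python
import numpy as np

def step(t):
    return 1 if t >= 0 else 0

def perceptron_step(X, y, W, b, learn_rate=0.1):
    # One pass of the perceptron trick over the dataset.
    for i in range(len(X)):
        y_hat = step(np.matmul(X[i], W) + b)
        if y[i] - y_hat == 1:       # classified 0 but labeled 1: move the line toward the point
            W = W + X[i] * learn_rate
            b += learn_rate
        elif y[i] - y_hat == -1:    # classified 1 but labeled 0: move the line away
            W = W - X[i] * learn_rate
            b -= learn_rate
    return W, b

# Toy linearly separable data: label 1 when x1 + x2 is small.
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.8]])
y = np.array([1, 1, 0, 0])

# Zero initialization keeps the run deterministic.
W, b = np.zeros(2), 0.0
for _ in range(25):
    W, b = perceptron_step(X, y, W, b)

preds = [step(np.matmul(x, W) + b) for x in X]
print(preds)  # [1, 1, 0, 0]
```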

# 15. Multiclass Classification and Softmax

```python
import numpy as np

def softmax(L):
    expL = np.exp(L)
    sumExpL = sum(expL)
    result = []
    for i in expL:
        result.append(i * 1.0 / sumExpL)
    return result

# Note: The function np.divide can also be used here, as follows:
# def softmax(L):
#     expL = np.exp(L)
#     return np.divide(expL, expL.sum())

L = np.array([5, 6, 7, 8, 9])
print(softmax(L))
```
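One practical caveat not covered by the snippet above: `np.exp` overflows for large scores. Implementations therefore usually subtract the maximum score first, which leaves the result unchanged because softmax is shift-invariant; a sketch of that variant:

```python
import numpy as np

def softmax_stable(L):
    # Shifting by max(L) avoids overflow in np.exp without changing the result,
    # since exp(x - c) / sum(exp(x - c)) == exp(x) / sum(exp(x)).
    shifted = L - np.max(L)
    expL = np.exp(shifted)
    return expL / expL.sum()

# Scores this large would overflow the naive version; here they are fine.
probs = softmax_stable(np.array([1000.0, 1001.0, 1002.0]))
print(probs)
print(softmax_stable(np.array([5, 6, 7, 8, 9])))
```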