Tags: oonisim/python-programs

NLP_NN_WORD2VEC_SGRAM

word2vec CBOW & skip-gram
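Both CBOW and skip-gram train on (center, context) pairs drawn from a sliding window. As a rough illustration of the skip-gram side only (the function name and shape here are hypothetical, not the repository's API):

```python
def skipgram_pairs(sequence, window=2):
    """Generate (center, context) token-id pairs for skip-gram training."""
    pairs = []
    for i, center in enumerate(sequence):
        lo = max(0, i - window)
        hi = min(len(sequence), i + window + 1)
        for j in range(lo, hi):
            if j != i:  # skip the center token itself
                pairs.append((center, sequence[j]))
    return pairs

# For the token-id sequence [0, 1, 2] with window=1:
# [(0, 1), (1, 0), (1, 2), (2, 1)]
```

CBOW inverts the pairing: the averaged context predicts the center token instead.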

NLP_NN_STANDARDIZATION

Added standardization layer; removed batch normalization (BN) from the network objective stack.
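A standardization layer rescales each feature to zero mean and unit variance over the batch. A minimal forward-pass sketch (names are illustrative, not the repository's layer API):

```python
import numpy as np

def standardize(X, eps=1e-8):
    """Standardize each feature (column) of batch X to zero mean, unit variance."""
    mean = X.mean(axis=0)
    sd = X.std(axis=0)
    return (X - mean) / (sd + eps)  # eps guards against a zero-variance column

X = np.array([[1.0, 10.0], [3.0, 30.0]])
Z = standardize(X)
# Each column of Z now has mean ~0 and standard deviation ~1.
```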

NLP_NN_LAYER_STANDARDIZATION

NLP NN standardization layer implemented and tested

NLP_MULTI_LAYER_MNIST

MNIST multilayer with standardized input.

NLP_NN_LAYER_SEQUENTIAL

NLP NN Sequential layer implemented and tested

NLP_TEST_NN_W0_ZERO_CLEARED

Test effect of zero-clear bias weight for zero-centered data.

Because the data is almost zero-centered, the bias input x0 is not required; hence set the bias weight w0 to zero to shortcut the training.

import copy

W1_bias_0 = copy.deepcopy(W1)  # np.copy() is sufficient without deepcopy.
W2_bias_0 = copy.deepcopy(W2)
W1_bias_0[:, 0] = 0  # zero-clear the bias weight column w0
W2_bias_0[:, 0] = 0

NLP_MATMUL_ADD_BIAS_INTERNALLY

Add bias internally in matmul
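Handling the bias inside the matmul layer typically means prepending a column of ones to X internally and keeping the bias weight w0 as the first column of W, so a single product covers both the linear term and the bias. A sketch under that assumption (the function name is hypothetical):

```python
import numpy as np

def matmul_with_bias(X, W):
    """Y = X_aug @ W.T where W's first column w0 is the bias weight.

    A ones column is prepended to X internally, so callers pass the
    raw (N, D) batch and an (M, D+1) weight matrix.
    """
    X_aug = np.hstack([np.ones((X.shape[0], 1)), X])  # (N, D+1)
    return X_aug @ W.T                                # (N, M)

X = np.array([[1.0, 2.0]])       # N=1, D=2
W = np.array([[0.5, 1.0, 1.0]])  # M=1, bias w0 = 0.5
# matmul_with_bias(X, W) -> [[0.5 + 1*1 + 1*2]] = [[3.5]]
```

This is also why zero-clearing column 0 of W (as in the test above) disables the bias.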

NLP_BATCH_NORMALIZATION

Implemented batch normalization
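Batch normalization standardizes each feature over the mini-batch, then applies a learned scale gamma and shift beta. A minimal training-mode forward sketch (running statistics and backward pass omitted; not the repository's implementation):

```python
import numpy as np

def batchnorm_forward(X, gamma, beta, eps=1e-5):
    """BN forward pass: standardize per feature, then scale and shift."""
    mean = X.mean(axis=0)
    var = X.var(axis=0)
    Xhat = (X - mean) / np.sqrt(var + eps)
    return gamma * Xhat + beta

X = np.array([[1.0, 4.0], [3.0, 8.0]])
out = batchnorm_forward(X, gamma=np.ones(2), beta=np.zeros(2))
# With gamma=1, beta=0 each column of out has mean ~0 and unit variance.
```

Unlike the plain standardization layer above, gamma and beta are trainable, letting the network undo the normalization where that helps.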

NLP_TWO_LAYER_CLASSIFIER_ON_NON_LINEAR

Two layer (matmul + ReLU) * 2 + log loss output training on non-linear circles of A NOT B.

NLP_SOFTMAX_CROSS_ENTROPY_LOG_LOSS

Softmax cross entropy log loss tested. Tag before modifying the cross_entropy_log_loss() for sigmoid/binary label.
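For reference, a numerically stable softmax cross-entropy log loss for integer class labels. This is a generic sketch, not the repository's cross_entropy_log_loss() signature, which the tag notes was about to change for sigmoid/binary labels:

```python
import numpy as np

def softmax(X):
    """Row-wise softmax, shifted by each row's max for numerical stability."""
    Z = X - X.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def softmax_cross_entropy_log_loss(X, T, eps=1e-12):
    """Mean negative log-likelihood of true integer labels T given logits X."""
    P = softmax(X)
    N = X.shape[0]
    return -np.mean(np.log(P[np.arange(N), T] + eps))

X = np.array([[2.0, 1.0, 0.1]])
loss = softmax_cross_entropy_log_loss(X, np.array([0]))
# loss is small (~0.42) because class 0 already has the highest logit.
```

The max shift prevents overflow in np.exp, and eps keeps the log finite when a predicted probability underflows to zero.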