
Tests

Training Time and Error Benchmarking

Target error: 0.02 (Mean Squared Error)
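As a reminder of the metric (this is the standard definition; the symbols are introduced here only for this note), the mean squared error over n training samples with targets y_i and network outputs ŷ_i is

\[ \mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2 , \]

so the target above corresponds to driving the MSE down to 0.02 or below.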

Date/Time              Task                                              Epoch  Error     Training Time (ms)
8/24/2017 10:34:41 AM  Train on: C:\User\Perceptron network frame        1:1    0.570838  156
8/24/2017 10:35:11 AM  Train on: C:\User\Perceptron network frame        1:5    0.570126  422
8/24/2017 10:35:18 AM  Train on: C:\User\Perceptron network frame        1:10   0.58272   733
8/24/2017 10:35:24 AM  Train on: C:\User\Perceptron network frame        1:15   0.56775   1046
8/24/2017 10:35:32 AM  Train on: C:\User\Perceptron network frame        1:20   0.567693  1451
8/24/2017 10:35:43 AM  Train on: C:\User\Perceptron network frame        1:30   0.57577   2091
8/24/2017 10:35:52 AM  Train on: C:\User\Perceptron network frame        1:40   0.57287   2746
8/24/2017 10:36:05 AM  Train on: C:\User\Perceptron network frame        1:50   0.57335   3261
8/24/2017 10:36:37 AM  Train on: C:\User\Perceptron network frame        1:100  0.567109  6350

8/24/2017 10:37:23 AM  Train on: C:\User\Back propagation network frame  1:1    0.194059  7333
8/24/2017 10:38:44 AM  Train on: C:\User\Back propagation network frame  1:5    0.190227  32573
8/24/2017 10:40:11 AM  Train on: C:\User\Back propagation network frame  1:10   0.189882  72556
8/24/2017 10:42:08 AM  Train on: C:\User\Back propagation network frame  1:15   0.189822  108296
8/24/2017 10:44:48 AM  Train on: C:\User\Back propagation network frame  1:20   0.189851  137972
8/24/2017 10:51:23 AM  Train on: C:\User\Back propagation network frame  1:30   0.189813  211678
8/24/2017 10:56:31 AM  Train on: C:\User\Back propagation network frame  1:40   0.18993   287173
8/24/2017 11:02:52 AM  Train on: C:\User\Back propagation network frame  1:50   0.189789  367179
8/24/2017 11:16:43 AM  Train on: C:\User\Back propagation network frame  1:100  0.189803  734109

8/24/2017 11:21:21 AM  Train on: C:\User\Progress P-network frame        1:1    0.058916  343
8/24/2017 11:22:01 AM  Train on: C:\User\Progress P-network frame        1:5    0.018097  640
8/24/2017 11:22:08 AM  Train on: C:\User\Progress PANN frame             1:10   0.018097  640
8/24/2017 11:22:17 AM  Train on: C:\User\Progress PANN frame             1:15   0.018097  671
8/24/2017 11:22:26 AM  Train on: C:\User\Progress PANN frame             1:20   0.018097  3640
8/24/2017 11:22:35 AM  Train on: C:\User\Progress PANN frame             1:30   0.018097  9252
8/24/2017 11:23:47 AM  Train on: C:\User\Progress PANN frame             1:40   0.018097  12262
8/24/2017 11:24:15 AM  Train on: C:\User\Progress PANN frame             1:50   0.018097  14702
8/24/2017 11:25:00 AM  Train on: C:\User\Progress PANN frame             1:50   0.018097  15631
8/24/2017 11:25:55 AM  Train on: C:\User\Progress PANN frame             1:100  0.018097  30670
[Graph: comparison of PANN™ with other technologies]

PANN™ reduces its training error to the desired minimum in less than one second.

The Perceptron and backpropagation ANNs retain a substantial error that decreases slowly and shows no tendency to reach the target error.

Training set used in the test: 30,000 images.

Additional Comparison to Alternatives

Parameters           PANN™     NeuroSolutions Data Manager
Standard deviation   0.0035    0.011
Working time         3.297 s   1,938 s = 32 m 18 s
Number of epochs     8         44,000
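For orientation, the working times in this table imply a speed advantage for PANN™ of roughly

\[ \frac{1938\ \text{s}}{3.297\ \text{s}} \approx 588 , \]

i.e., about 588 times faster on this task.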
[Graph: comparison of PANN™ with other AI technologies]

The terms records, images, lines, samples, data set, and sample data are used synonymously.

Comparison of training time between IBM SPSS Statistics 22 and PANN™ for the same problem, tested on an Apple iMac 27″ (3.5 GHz quad-core Intel Core i7, 8 GB of 1600 MHz DDR3 memory, SSD).

Network                 Images  Training time
PANN™                   7,000   4 s
IBM SPSS Statistics 22  7,000   3 h 43 m

Training advantage factor:
3 h 43 m ≈ 13,400 s; 13,400 s ÷ 4 s = 3,350

IBM SPSS Statistics 22 shows exponential growth of training time, whereas PANN™ shows linear growth.

PANN™ vs. Classical Neural Networks

Network Intelligence is proportional to the number of elements and problems at hand

[Graph: comparison of PANN™ with other AI technologies]

PANN™ requires minutes (hours at most) to reach a level of network intelligence that classical neural networks could not reach in thousands of years.

Image compression test

Image files from the CIFAR-10 dataset

[Graph: comparison of PANN™ with other AI technologies]

GPU breakthrough

Amdahl’s Law and PANN™

PANN™'s simple matrix-algebra mathematics allows for 100% parallel processing; by Amdahl's law, speed therefore increases linearly with additional GPUs and CPUs.
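As a sketch of how that follows from Amdahl's law (the standard formula; p and N are notation introduced here), the speedup on N processors of a workload with parallelizable fraction p is

\[ S(N) = \frac{1}{(1-p) + p/N} , \]

which reduces to S(N) = N when p = 1, i.e., linear scaling with the number of processors; for any p < 1 the speedup would instead saturate at 1/(1 - p).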

Progress, Inc.'s US patent application 15/449,614, covering the application of matrix algebra in PANN™, was filed on March 3, 2017.

[Graph: comparison of PANN™ with other AI technologies]

This allows building computers and other electronics with:

  • very high processing speed, and
  • a reduced number of GPUs and CPUs
Training speed: Comparison of PANN™ and nVidia cuDNN

PANN™’s training speed on CPUs and GPUs is a thousand times higher than that of existing ANNs.

PANN™ provides acceleration ranging from 60× (the number of threads on a GPU) up to 201,000×. Acceleration is proportional to the number N of GPUs: 201,000 × N.

PANN™ makes it possible to:

  • Improve ANN training speed thousands of times
  • Build supercomputers on GPUs
  • Build hypercomputers on GPUs
[Graph: comparison of PANN™ with other AI technologies]

Comparison of PANN™ training on CPU vs. GPU

Inputs  Outputs  Images  CPU time (ms)  GPU time (ms)  Log CPU time  Log GPU time  Difference (ms)  CPU/GPU ratio
10      10       10      3.00
100     100      10      2.00           3.10           0.30          0.49          -1.10            0.65
1,000   1,000    10      321.00         19.90          2.51          1.30          301.10           16.13
5,000   5,000    10      7,872.00       271.00         3.90          2.43          7,601.00         29.05
CPU — central processing unit
GPU — graphics processing unit
1s = 1,000ms
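The derived columns in this table (and in the one further below) follow directly from the measured CPU and GPU times. A minimal sketch of that arithmetic in Python, with the function and field names chosen here purely for illustration:

    import math

    def derived_columns(cpu_ms: float, gpu_ms: float) -> dict:
        """Derived benchmark columns: base-10 logarithms of the raw training
        times, their difference in milliseconds, and the CPU/GPU ratio."""
        return {
            "log_cpu": round(math.log10(cpu_ms), 2),
            "log_gpu": round(math.log10(gpu_ms), 2),
            "difference_ms": round(cpu_ms - gpu_ms, 2),
            "cpu_over_gpu": round(cpu_ms / gpu_ms, 2),
        }

    # Row "5,000 inputs, 5,000 outputs, 10 images" from the table above:
    print(derived_columns(7872.00, 271.00))
    # {'log_cpu': 3.9, 'log_gpu': 2.43, 'difference_ms': 7601.0, 'cpu_over_gpu': 29.05}

The same formulas reproduce the log, difference, and ratio columns of the second CPU/GPU table below.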

Test computer: CPU speed / GPU speed = 4

[Graph: comparison of PANN™ with other AI technologies]
Inputs  Outputs  Images  CPU time (ms)  GPU time (ms)  Log CPU time  Log GPU time  Difference (ms)  CPU/GPU ratio
100     100      10      2.00           69.10          0.30          1.84          -67.10           0.03
100     1,000    10      28.00          69.30          1.45          1.84          -41.30           0.40
100     100,000  10      3,719.00       86.40          3.57          1.94          3,632.60         43.04
100     500,000  10      18,440.00      125.30         4.27          2.10          18,314.70        147.17
CPU — central processing unit
GPU — graphics processing unit
1s = 1,000ms
[Graph: comparison of PANN™ with other AI technologies]