Training Time and Error Benchmarking
Target error: 0.02 (Mean Squared Error)
| Date/Time | Task | Epoch | Error | Training Time (ms) |
|---|---|---|---|---|
| 8/24/2017 10:34:41AM | Train on: C:\User\Perceptron network frame1: | 1 | 0.570838 | 156 |
| 8/24/2017 10:35:11AM | Train on: C:\User\Perceptron network frame1: | 5 | 0.570126 | 422 |
| 8/24/2017 10:35:18AM | Train on: C:\User\Perceptron network frame1: | 10 | 0.58272 | 733 |
| 8/24/2017 10:35:24AM | Train on: C:\User\Perceptron network frame1: | 15 | 0.56775 | 1046 |
| 8/24/2017 10:35:32AM | Train on: C:\User\Perceptron network frame1: | 20 | 0.567693 | 1451 |
| 8/24/2017 10:35:43AM | Train on: C:\User\Perceptron network frame1: | 30 | 0.57577 | 2091 |
| 8/24/2017 10:35:52AM | Train on: C:\User\Perceptron network frame1: | 40 | 0.57287 | 2746 |
| 8/24/2017 10:36:05AM | Train on: C:\User\Perceptron network frame1: | 50 | 0.57335 | 3261 |
| 8/24/2017 10:36:37AM | Train on: C:\User\Perceptron network frame1: | 100 | 0.567109 | 6350 |
| 8/24/2017 10:37:23AM | Train on: C:\User\Back propagation network frame1: | 1 | 0.194059 | 7333 |
| 8/24/2017 10:38:44AM | Train on: C:\User\Back propagation network frame1: | 5 | 0.190227 | 32573 |
| 8/24/2017 10:40:11AM | Train on: C:\User\Back propagation network frame1: | 10 | 0.189882 | 72556 |
| 8/24/2017 10:42:08AM | Train on: C:\User\Back propagation network frame1: | 15 | 0.189822 | 108296 |
| 8/24/2017 10:44:48AM | Train on: C:\User\Back propagation network frame1: | 20 | 0.189851 | 137972 |
| 8/24/2017 10:51:23AM | Train on: C:\User\Back propagation network frame1: | 30 | 0.189813 | 211678 |
| 8/24/2017 10:56:31AM | Train on: C:\User\Back propagation network frame1: | 40 | 0.18993 | 287173 |
| 8/24/2017 11:02:52AM | Train on: C:\User\Back propagation network frame1: | 50 | 0.189789 | 367179 |
| 8/24/2017 11:16:43AM | Train on: C:\User\Back propagation network frame1: | 100 | 0.189803 | 734109 |
| 8/24/2017 11:21:21AM | Train on: C:\User\Progress P-network frame1: | 1 | 0.058916 | 343 |
| 8/24/2017 11:22:01AM | Train on: C:\User\Progress P-network frame1: | 5 | 0.018097 | 640 |
| 8/24/2017 11:22:08AM | Train on: C:\User\Progress PANN frame1: | 10 | 0.018097 | 640 |
| 8/24/2017 11:22:17AM | Train on: C:\User\Progress PANN frame1: | 15 | 0.018097 | 671 |
| 8/24/2017 11:22:26AM | Train on: C:\User\Progress PANN frame1: | 20 | 0.018097 | 3640 |
| 8/24/2017 11:22:35AM | Train on: C:\User\Progress PANN frame1: | 30 | 0.018097 | 9252 |
| 8/24/2017 11:23:47AM | Train on: C:\User\Progress PANN frame1: | 40 | 0.018097 | 12262 |
| 8/24/2017 11:24:15AM | Train on: C:\User\Progress PANN frame1: | 50 | 0.018097 | 14702 |
| 8/24/2017 11:25:00AM | Train on: C:\User\Progress PANN frame1: | 50 | 0.018097 | 15631 |
| 8/24/2017 11:25:55AM | Train on: C:\User\Progress PANN frame1: | 100 | 0.018097 | 30670 |

PANN™ reduces its training error to the desired minimum (0.02 MSE) in less than one second.
The Perceptron and Back Propagation ANNs retain a substantial error that decreases slowly; neither shows any tendency to reach the target error.
Training set tested: 30,000 images.
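The conclusion above can be checked directly against the benchmark table. A minimal sketch (final MSE and timing values copied from the table rows; the 0.02 target is the stated goal):

```python
# Check which networks reached the target MSE, using the epoch-100
# rows of the benchmark table above.
TARGET_MSE = 0.02

final_runs = {
    # network: (final MSE, training time in ms)
    "Perceptron": (0.567109, 6_350),
    "Back propagation": (0.189803, 734_109),
    "PANN": (0.018097, 30_670),
}

for name, (mse, time_ms) in final_runs.items():
    print(f"{name}: MSE={mse}, reached target: {mse <= TARGET_MSE}")

# Per the table, PANN already reached 0.018097 at epoch 5, after 640 ms.
assert final_runs["PANN"][0] <= TARGET_MSE
```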
Additional Comparison to Alternatives


| Parameters | PANN™ | NeuroSolutions Data Manager |
|---|---|---|
| Standard deviation | 0.0035 | 0.011 |
| Working time | 3.297s | 1,938s = 32m 18s |
| Number of epochs | 8 | 44,000 |
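
The advantage ratios implied by this table (not stated explicitly in it) can be computed directly; a quick sketch using the table's values:

```python
# Derived ratios from the PANN vs. NeuroSolutions comparison table.
pann_time_s, neuro_time_s = 3.297, 1_938      # working time, seconds
pann_epochs, neuro_epochs = 8, 44_000         # epochs to converge

time_ratio = neuro_time_s / pann_time_s       # working-time advantage
epoch_ratio = neuro_epochs / pann_epochs      # epoch-count advantage
print(f"time advantage: {time_ratio:.1f}x, epoch advantage: {epoch_ratio:.0f}x")
```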

The terms records, images, lines, samples, data set, and sample data are used synonymously.
Comparison of training time between IBM SPSS Statistics 22 and PANN™ on the same problem, tested on an Apple iMac 27″ (3.5 GHz quad-core Intel Core i7, 8 GB of 1600 MHz DDR3 memory, SSD).
| Network | Images | Training time |
|---|---|---|
| PANN™ | 7,000 | 4s |
| IBM SPSS Statistics 22 | 7,000 | 3h 43m |
Training advantage factor:
3h 43m = 13,380s; 13,380s ÷ 4s ≈ 3,345
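The arithmetic behind the advantage factor, computed exactly from the 3 h 43 m figure (without rounding):

```python
# Convert IBM SPSS's training time (3 h 43 m) to seconds, then divide
# by PANN's 4 s to get the training advantage factor.
spss_seconds = 3 * 3600 + 43 * 60   # 13,380 s
pann_seconds = 4
factor = spss_seconds / pann_seconds
print(spss_seconds, factor)
```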
IBM SPSS Statistics 22 shows exponential growth of training time. PANN™ shows linear growth of training time.
PANN™ vs. Classical Neural Networks
Network intelligence is proportional to the number of network elements and the problems at hand.

PANN™ requires minutes (hours, at most) to reach a level of network intelligence that classical neural networks could not reach in thousands of years.
Image compression test
Image files from the CIFAR-10 dataset
GPU breakthrough
Amdahl’s Law and PANN™
PANN™’s simple matrix algebra mathematics allows for 100% parallel processing. Thus, speed increases linearly with additional GPUs and CPUs.
Progress, Inc.’s US patent application 15/449,614 covering matrix algebra application with PANN™, was filed on March 3rd, 2017.

This allows building computers and other electronics with:
- very high processing speed, and
- reduced number of GPUs and CPUs
Training speed: Comparison of PANN™ and NVIDIA cuDNN
PANN™'s training speed on CPUs and GPUs is thousands of times higher than that of existing ANNs.
PANN™ provides acceleration ranging from 60× (GPU threads) up to 201,000×. Acceleration is proportional to the number (N) of GPUs: 201,000 × N.
PANN™ makes it possible to:
- Improve ANN training speed thousands of times
- Build supercomputers on GPUs
- Build hypercomputers on GPUs

Comparison of PANN™ with CPU/GPU
| Inputs | Outputs | Images | CPU time (ms) | GPU time (ms) | Log CPU time | Log GPU time | Difference (ms) | Times (CPU ÷ GPU) |
|---|---|---|---|---|---|---|---|---|
| 10 | 10 | 10 | 3.00 | | | | | |
| 100 | 100 | 10 | 2.00 | 3.10 | 0.30 | 0.49 | -1.10 | 0.65 |
| 1,000 | 1,000 | 10 | 321.00 | 19.90 | 2.51 | 1.30 | 301.10 | 16.13 |
| 5,000 | 5,000 | 10 | 7,872.00 | 271.00 | 3.90 | 2.43 | 7,601.00 | 29.05 |
GPU — graphics processing unit
1s = 1,000ms
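The derived columns in the table above (log times, difference, and the CPU ÷ GPU ratio) follow directly from the raw CPU and GPU timings. A sketch using the 1,000-input / 1,000-output row:

```python
import math

# Raw timings (ms) copied from the 1,000 x 1,000 row of the table above.
cpu_ms, gpu_ms = 321.00, 19.90

log_cpu = math.log10(cpu_ms)     # base-10 log of CPU time
log_gpu = math.log10(gpu_ms)     # base-10 log of GPU time
difference = cpu_ms - gpu_ms     # absolute time saved, in ms
times = cpu_ms / gpu_ms          # CPU time divided by GPU time
print(round(log_cpu, 2), round(log_gpu, 2), difference, round(times, 2))
```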
Testing computer with CPU speed / GPU speed = 4
| Inputs | Outputs | Images | CPU time (ms) | GPU time (ms) | Log CPU time | Log GPU time | Difference (ms) | Times (CPU ÷ GPU) |
|---|---|---|---|---|---|---|---|---|
| 100 | 100 | 10 | 2.00 | 69.10 | 0.30 | 1.84 | -67.10 | 0.03 |
| 100 | 1,000 | 10 | 28.00 | 69.30 | 1.45 | 1.84 | -41.30 | 0.40 |
| 100 | 100,000 | 10 | 3,719.00 | 86.40 | 3.57 | 1.94 | 3,632.60 | 43.04 |
| 100 | 500,000 | 10 | 18,440.00 | 125.30 | 4.27 | 2.10 | 18,314.70 | 147.17 |


