Chatbot Tutorial#
Created On: Aug 14, 2018 | Last Updated: Jan 24, 2025 | Last Verified: Nov 05, 2024
Author: Matthew Inkawhich
In this tutorial, we explore a fun and interesting use-case of recurrent sequence-to-sequence models. We will train a simple chatbot using movie scripts from the Cornell Movie-Dialogs Corpus.
Conversational models are a hot topic in artificial intelligence research. Chatbots can be found in a variety of settings, including customer service applications and online helpdesks. These bots are often powered by retrieval-based models, which output predefined responses to questions of certain forms. In a highly restricted domain like a company’s IT helpdesk, these models may be sufficient; however, they are not robust enough for more general use-cases. Teaching a machine to carry out a meaningful conversation with a human in multiple domains is a research question that is far from solved. Recently, the deep learning boom has allowed for powerful generative models like Google’s Neural Conversational Model, which marks a large step towards multi-domain generative conversational models. In this tutorial, we will implement this kind of model in PyTorch.
> hello?
Bot: hello .
> where am I?
Bot: you re in a hospital .
> who are you?
Bot: i m a lawyer .
> how are you doing?
Bot: i m fine .
> are you my friend?
Bot: no .
> you're under arrest
Bot: i m trying to help you !
> i'm just kidding
Bot: i m sorry .
> where are you from?
Bot: san francisco .
> it's time for me to leave
Bot: i know .
> goodbye
Bot: goodbye .
Tutorial Highlights
Handle loading and preprocessing of Cornell Movie-Dialogs Corpus dataset
Implement a sequence-to-sequence model with Luong attention mechanism(s)
Jointly train encoder and decoder models using mini-batches
Implement greedy-search decoding module
Interact with trained chatbot
Acknowledgments
This tutorial borrows code from the following sources:
Yuan-Kuei Wu’s pytorch-chatbot implementation: ywk991112/pytorch-chatbot
Sean Robertson’s practical-pytorch seq2seq-translation example: spro/practical-pytorch
FloydHub Cornell Movie Corpus preprocessing code: floydhub/textutil-preprocess-cornell-movie-corpus
Preparations#
To get started, download the Movie-Dialogs Corpus zip file and put it in a data/ directory under the current directory.
After that, let’s import some necessities.
import torch
from torch.jit import script, trace
import torch.nn as nn
from torch import optim
import torch.nn.functional as F
import csv
import random
import re
import os
import unicodedata
import codecs
from io import open
import itertools
import math
import json
# If the current accelerator (https://pytorch.org/docs/stable/torch.html#accelerators) is available,
# we will use it. Otherwise, we use the CPU.
device = torch.accelerator.current_accelerator().type if torch.accelerator.is_available() else "cpu"
print(f"Using {device} device")
Using cuda device
Load & Preprocess Data#
The next step is to reformat our data file and load the data into structures that we can work with.
The Cornell Movie-Dialogs Corpus is a rich dataset of movie character dialog:
220,579 conversational exchanges between 10,292 pairs of movie characters
9,035 characters from 617 movies
304,713 total utterances
This dataset is large and diverse, and there is a great variation of language formality, time periods, sentiment, etc. Our hope is that this diversity makes our model robust to many forms of inputs and queries.
First, we’ll take a look at some lines of our datafile to see the original format.
corpus_name = "movie-corpus"
corpus = os.path.join("data", corpus_name)
def printLines(file, n=10):
with open(file, 'rb') as datafile:
lines = datafile.readlines()
for line in lines[:n]:
print(line)
printLines(os.path.join(corpus, "utterances.jsonl"))
b'{"id": "L1045", "conversation_id": "L1044", "text": "They do not!", "speaker": "u0", "meta": {"movie_id": "m0", "parsed": [{"rt": 1, "toks": [{"tok": "They", "tag": "PRP", "dep": "nsubj", "up": 1, "dn": []}, {"tok": "do", "tag": "VBP", "dep": "ROOT", "dn": [0, 2, 3]}, {"tok": "not", "tag": "RB", "dep": "neg", "up": 1, "dn": []}, {"tok": "!", "tag": ".", "dep": "punct", "up": 1, "dn": []}]}]}, "reply-to": "L1044", "timestamp": null, "vectors": []}\n'
b'{"id": "L1044", "conversation_id": "L1044", "text": "They do to!", "speaker": "u2", "meta": {"movie_id": "m0", "parsed": [{"rt": 1, "toks": [{"tok": "They", "tag": "PRP", "dep": "nsubj", "up": 1, "dn": []}, {"tok": "do", "tag": "VBP", "dep": "ROOT", "dn": [0, 2, 3]}, {"tok": "to", "tag": "TO", "dep": "dobj", "up": 1, "dn": []}, {"tok": "!", "tag": ".", "dep": "punct", "up": 1, "dn": []}]}]}, "reply-to": null, "timestamp": null, "vectors": []}\n'
b'{"id": "L985", "conversation_id": "L984", "text": "I hope so.", "speaker": "u0", "meta": {"movie_id": "m0", "parsed": [{"rt": 1, "toks": [{"tok": "I", "tag": "PRP", "dep": "nsubj", "up": 1, "dn": []}, {"tok": "hope", "tag": "VBP", "dep": "ROOT", "dn": [0, 2, 3]}, {"tok": "so", "tag": "RB", "dep": "advmod", "up": 1, "dn": []}, {"tok": ".", "tag": ".", "dep": "punct", "up": 1, "dn": []}]}]}, "reply-to": "L984", "timestamp": null, "vectors": []}\n'
b'{"id": "L984", "conversation_id": "L984", "text": "She okay?", "speaker": "u2", "meta": {"movie_id": "m0", "parsed": [{"rt": 1, "toks": [{"tok": "She", "tag": "PRP", "dep": "nsubj", "up": 1, "dn": []}, {"tok": "okay", "tag": "RB", "dep": "ROOT", "dn": [0, 2]}, {"tok": "?", "tag": ".", "dep": "punct", "up": 1, "dn": []}]}]}, "reply-to": null, "timestamp": null, "vectors": []}\n'
b'{"id": "L925", "conversation_id": "L924", "text": "Let\'s go.", "speaker": "u0", "meta": {"movie_id": "m0", "parsed": [{"rt": 0, "toks": [{"tok": "Let", "tag": "VB", "dep": "ROOT", "dn": [2, 3]}, {"tok": "\'s", "tag": "PRP", "dep": "nsubj", "up": 2, "dn": []}, {"tok": "go", "tag": "VB", "dep": "ccomp", "up": 0, "dn": [1]}, {"tok": ".", "tag": ".", "dep": "punct", "up": 0, "dn": []}]}]}, "reply-to": "L924", "timestamp": null, "vectors": []}\n'
b'{"id": "L924", "conversation_id": "L924", "text": "Wow", "speaker": "u2", "meta": {"movie_id": "m0", "parsed": [{"rt": 0, "toks": [{"tok": "Wow", "tag": "UH", "dep": "ROOT", "dn": []}]}]}, "reply-to": null, "timestamp": null, "vectors": []}\n'
b'{"id": "L872", "conversation_id": "L870", "text": "Okay -- you\'re gonna need to learn how to lie.", "speaker": "u0", "meta": {"movie_id": "m0", "parsed": [{"rt": 4, "toks": [{"tok": "Okay", "tag": "UH", "dep": "intj", "up": 4, "dn": []}, {"tok": "--", "tag": ":", "dep": "punct", "up": 4, "dn": []}, {"tok": "you", "tag": "PRP", "dep": "nsubj", "up": 4, "dn": []}, {"tok": "\'re", "tag": "VBP", "dep": "aux", "up": 4, "dn": []}, {"tok": "gon", "tag": "VBG", "dep": "ROOT", "dn": [0, 1, 2, 3, 6, 12]}, {"tok": "na", "tag": "TO", "dep": "aux", "up": 6, "dn": []}, {"tok": "need", "tag": "VB", "dep": "xcomp", "up": 4, "dn": [5, 8]}, {"tok": "to", "tag": "TO", "dep": "aux", "up": 8, "dn": []}, {"tok": "learn", "tag": "VB", "dep": "xcomp", "up": 6, "dn": [7, 11]}, {"tok": "how", "tag": "WRB", "dep": "advmod", "up": 11, "dn": []}, {"tok": "to", "tag": "TO", "dep": "aux", "up": 11, "dn": []}, {"tok": "lie", "tag": "VB", "dep": "xcomp", "up": 8, "dn": [9, 10]}, {"tok": ".", "tag": ".", "dep": "punct", "up": 4, "dn": []}]}]}, "reply-to": "L871", "timestamp": null, "vectors": []}\n'
b'{"id": "L871", "conversation_id": "L870", "text": "No", "speaker": "u2", "meta": {"movie_id": "m0", "parsed": [{"rt": 0, "toks": [{"tok": "No", "tag": "UH", "dep": "ROOT", "dn": []}]}]}, "reply-to": "L870", "timestamp": null, "vectors": []}\n'
b'{"id": "L870", "conversation_id": "L870", "text": "I\'m kidding. You know how sometimes you just become this \\"persona\\"? And you don\'t know how to quit?", "speaker": "u0", "meta": {"movie_id": "m0", "parsed": [{"rt": 2, "toks": [{"tok": "I", "tag": "PRP", "dep": "nsubj", "up": 2, "dn": []}, {"tok": "\'m", "tag": "VBP", "dep": "aux", "up": 2, "dn": []}, {"tok": "kidding", "tag": "VBG", "dep": "ROOT", "dn": [0, 1, 3]}, {"tok": ".", "tag": ".", "dep": "punct", "up": 2, "dn": [4]}, {"tok": " ", "tag": "_SP", "dep": "", "up": 3, "dn": []}]}, {"rt": 1, "toks": [{"tok": "You", "tag": "PRP", "dep": "nsubj", "up": 1, "dn": []}, {"tok": "know", "tag": "VBP", "dep": "ROOT", "dn": [0, 6, 11]}, {"tok": "how", "tag": "WRB", "dep": "advmod", "up": 3, "dn": []}, {"tok": "sometimes", "tag": "RB", "dep": "advmod", "up": 6, "dn": [2]}, {"tok": "you", "tag": "PRP", "dep": "nsubj", "up": 6, "dn": []}, {"tok": "just", "tag": "RB", "dep": "advmod", "up": 6, "dn": []}, {"tok": "become", "tag": "VBP", "dep": "ccomp", "up": 1, "dn": [3, 4, 5, 9]}, {"tok": "this", "tag": "DT", "dep": "det", "up": 9, "dn": []}, {"tok": "\\"", "tag": "``", "dep": "punct", "up": 9, "dn": []}, {"tok": "persona", "tag": "NN", "dep": "attr", "up": 6, "dn": [7, 8, 10]}, {"tok": "\\"", "tag": "\'\'", "dep": "punct", "up": 9, "dn": []}, {"tok": "?", "tag": ".", "dep": "punct", "up": 1, "dn": [12]}, {"tok": " ", "tag": "_SP", "dep": "", "up": 11, "dn": []}]}, {"rt": 4, "toks": [{"tok": "And", "tag": "CC", "dep": "cc", "up": 4, "dn": []}, {"tok": "you", "tag": "PRP", "dep": "nsubj", "up": 4, "dn": []}, {"tok": "do", "tag": "VBP", "dep": "aux", "up": 4, "dn": []}, {"tok": "n\'t", "tag": "RB", "dep": "neg", "up": 4, "dn": []}, {"tok": "know", "tag": "VB", "dep": "ROOT", "dn": [0, 1, 2, 3, 7, 8]}, {"tok": "how", "tag": "WRB", "dep": "advmod", "up": 7, "dn": []}, {"tok": "to", "tag": "TO", "dep": "aux", "up": 7, "dn": []}, {"tok": "quit", "tag": "VB", "dep": "xcomp", "up": 4, "dn": [5, 6]}, {"tok": "?", "tag": ".", "dep": "punct", "up": 4, "dn": []}]}]}, "reply-to": null, "timestamp": null, "vectors": []}\n'
b'{"id": "L869", "conversation_id": "L866", "text": "Like my fear of wearing pastels?", "speaker": "u0", "meta": {"movie_id": "m0", "parsed": [{"rt": 0, "toks": [{"tok": "Like", "tag": "IN", "dep": "ROOT", "dn": [2, 6]}, {"tok": "my", "tag": "PRP$", "dep": "poss", "up": 2, "dn": []}, {"tok": "fear", "tag": "NN", "dep": "pobj", "up": 0, "dn": [1, 3]}, {"tok": "of", "tag": "IN", "dep": "prep", "up": 2, "dn": [4]}, {"tok": "wearing", "tag": "VBG", "dep": "pcomp", "up": 3, "dn": [5]}, {"tok": "pastels", "tag": "NNS", "dep": "dobj", "up": 4, "dn": []}, {"tok": "?", "tag": ".", "dep": "punct", "up": 0, "dn": []}]}]}, "reply-to": "L868", "timestamp": null, "vectors": []}\n'
Create formatted data file#
For convenience, we’ll create a nicely formatted data file in which each line contains a tab-separated query sentence and a response sentence pair.
The following functions facilitate the parsing of the raw
utterances.jsonl data file.
loadLinesAndConversations splits each line of the file into a dictionary of lines with fields: lineID, characterID, and text, and then groups them into conversations with fields: conversationID, movieID, and lines.
extractSentencePairs extracts pairs of sentences from conversations.
# Splits each line of the file to create lines and conversations
def loadLinesAndConversations(fileName):
lines = {}
conversations = {}
with open(fileName, 'r', encoding='iso-8859-1') as f:
for line in f:
lineJson = json.loads(line)
# Extract fields for line object
lineObj = {}
lineObj["lineID"] = lineJson["id"]
lineObj["characterID"] = lineJson["speaker"]
lineObj["text"] = lineJson["text"]
lines[lineObj['lineID']] = lineObj
# Extract fields for conversation object
if lineJson["conversation_id"] not in conversations:
convObj = {}
convObj["conversationID"] = lineJson["conversation_id"]
convObj["movieID"] = lineJson["meta"]["movie_id"]
convObj["lines"] = [lineObj]
else:
convObj = conversations[lineJson["conversation_id"]]
convObj["lines"].insert(0, lineObj)
conversations[convObj["conversationID"]] = convObj
return lines, conversations
# Extracts pairs of sentences from conversations
def extractSentencePairs(conversations):
qa_pairs = []
for conversation in conversations.values():
# Iterate over all the lines of the conversation
for i in range(len(conversation["lines"]) - 1): # We ignore the last line (no answer for it)
inputLine = conversation["lines"][i]["text"].strip()
targetLine = conversation["lines"][i+1]["text"].strip()
# Filter wrong samples (if one of the lists is empty)
if inputLine and targetLine:
qa_pairs.append([inputLine, targetLine])
return qa_pairs
Now we’ll call these functions and create the file. We’ll call it
formatted_movie_lines.txt.
# Define path to new file
datafile = os.path.join(corpus, "formatted_movie_lines.txt")
delimiter = '\t'
# Unescape the delimiter
delimiter = str(codecs.decode(delimiter, "unicode_escape"))
# Initialize lines dict and conversations dict
lines = {}
conversations = {}
# Load lines and conversations
print("\nProcessing corpus into lines and conversations...")
lines, conversations = loadLinesAndConversations(os.path.join(corpus, "utterances.jsonl"))
# Write new csv file
print("\nWriting newly formatted file...")
with open(datafile, 'w', encoding='utf-8') as outputfile:
writer = csv.writer(outputfile, delimiter=delimiter, lineterminator='\n')
for pair in extractSentencePairs(conversations):
writer.writerow(pair)
# Print a sample of lines
print("\nSample lines from file:")
printLines(datafile)
Processing corpus into lines and conversations...
Writing newly formatted file...
Sample lines from file:
b'They do to!\tThey do not!\n'
b'She okay?\tI hope so.\n'
b"Wow\tLet's go.\n"
b'"I\'m kidding. You know how sometimes you just become this ""persona""? And you don\'t know how to quit?"\tNo\n'
b"No\tOkay -- you're gonna need to learn how to lie.\n"
b"I figured you'd get to the good stuff eventually.\tWhat good stuff?\n"
b'What good stuff?\t"The ""real you""."\n'
b'"The ""real you""."\tLike my fear of wearing pastels?\n'
b'do you listen to this crap?\tWhat crap?\n'
b"What crap?\tMe. This endless ...blonde babble. I'm like, boring myself.\n"
Load and trim data#
Our next order of business is to create a vocabulary and load query/response sentence pairs into memory.
Note that we are dealing with sequences of words, which do not have an implicit mapping to a discrete numerical space. Thus, we must create one by mapping each unique word that we encounter in our dataset to an index value.
For this we define a Voc class, which keeps a mapping from words to
indexes, a reverse mapping of indexes to words, a count of each word and
a total word count. The class provides methods for adding a word to the
vocabulary (addWord), adding all words in a sentence
(addSentence) and trimming infrequently seen words (trim). More
on trimming later.
# Default word tokens
PAD_token = 0 # Used for padding short sentences
SOS_token = 1 # Start-of-sentence token
EOS_token = 2 # End-of-sentence token
class Voc:
def __init__(self, name):
self.name = name
self.trimmed = False
self.word2index = {}
self.word2count = {}
self.index2word = {PAD_token: "PAD", SOS_token: "SOS", EOS_token: "EOS"}
self.num_words = 3 # Count SOS, EOS, PAD
def addSentence(self, sentence):
for word in sentence.split(' '):
self.addWord(word)
def addWord(self, word):
if word not in self.word2index:
self.word2index[word] = self.num_words
self.word2count[word] = 1
self.index2word[self.num_words] = word
self.num_words += 1
else:
self.word2count[word] += 1
# Remove words below a certain count threshold
def trim(self, min_count):
if self.trimmed:
return
self.trimmed = True
keep_words = []
for k, v in self.word2count.items():
if v >= min_count:
keep_words.append(k)
print('keep_words {} / {} = {:.4f}'.format(
len(keep_words), len(self.word2index), len(keep_words) / len(self.word2index)
))
# Reinitialize dictionaries
self.word2index = {}
self.word2count = {}
self.index2word = {PAD_token: "PAD", SOS_token: "SOS", EOS_token: "EOS"}
self.num_words = 3 # Count default tokens
for word in keep_words:
self.addWord(word)
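As a quick, hypothetical sanity check (not part of the data pipeline; toy_voc is a made-up name), a toy Voc behaves as follows:
toy_voc = Voc("toy")
toy_voc.addSentence("hello there hello")
print(toy_voc.num_words)    # 5 -> PAD, SOS, EOS, "hello", "there"
print(toy_voc.word2count)   # {'hello': 2, 'there': 1}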
Now we can assemble our vocabulary and query/response sentence pairs. Before we are ready to use this data, we must perform some preprocessing.
First, we must convert the Unicode strings to ASCII using
unicodeToAscii. Next, we should convert all letters to lowercase and
trim all non-letter characters except for basic punctuation
(normalizeString). Finally, to aid in training convergence, we will
filter out sentences with length greater than the MAX_LENGTH
threshold (filterPairs).
MAX_LENGTH = 10 # Maximum sentence length to consider
# Turn a Unicode string to plain ASCII, thanks to
# https://stackoverflow.com/a/518232/2809427
def unicodeToAscii(s):
return ''.join(
c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn'
)
# Lowercase, trim, and remove non-letter characters
def normalizeString(s):
s = unicodeToAscii(s.lower().strip())
s = re.sub(r"([.!?])", r" \1", s)
s = re.sub(r"[^a-zA-Z.!?]+", r" ", s)
s = re.sub(r"\s+", r" ", s).strip()
return s
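# For example (hypothetical input):
#   normalizeString("Aren't   you coming?!")  ->  'aren t you coming ? !'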
# Read query/response pairs and return a voc object
def readVocs(datafile, corpus_name):
print("Reading lines...")
# Read the file and split into lines
lines = open(datafile, encoding='utf-8').\
read().strip().split('\n')
# Split every line into pairs and normalize
pairs = [[normalizeString(s) for s in l.split('\t')] for l in lines]
voc = Voc(corpus_name)
return voc, pairs
# Returns True if both sentences in a pair 'p' are under the MAX_LENGTH threshold
def filterPair(p):
# Input sequences need to preserve the last word for EOS token
return len(p[0].split(' ')) < MAX_LENGTH and len(p[1].split(' ')) < MAX_LENGTH
# Filter pairs using the ``filterPair`` condition
def filterPairs(pairs):
return [pair for pair in pairs if filterPair(pair)]
# Using the functions defined above, return a populated voc object and pairs list
def loadPrepareData(corpus, corpus_name, datafile, save_dir):
print("Start preparing training data ...")
voc, pairs = readVocs(datafile, corpus_name)
print("Read {!s} sentence pairs".format(len(pairs)))
pairs = filterPairs(pairs)
print("Trimmed to {!s} sentence pairs".format(len(pairs)))
print("Counting words...")
for pair in pairs:
voc.addSentence(pair[0])
voc.addSentence(pair[1])
print("Counted words:", voc.num_words)
return voc, pairs
# Load/Assemble voc and pairs
save_dir = os.path.join("data", "save")
voc, pairs = loadPrepareData(corpus, corpus_name, datafile, save_dir)
# Print some pairs to validate
print("\npairs:")
for pair in pairs[:10]:
print(pair)
Start preparing training data ...
Reading lines...
Read 221282 sentence pairs
Trimmed to 64313 sentence pairs
Counting words...
Counted words: 18082
pairs:
['they do to !', 'they do not !']
['she okay ?', 'i hope so .']
['wow', 'let s go .']
['what good stuff ?', 'the real you .']
['the real you .', 'like my fear of wearing pastels ?']
['do you listen to this crap ?', 'what crap ?']
['well no . . .', 'then that s all you had to say .']
['then that s all you had to say .', 'but']
['but', 'you always been this selfish ?']
['have fun tonight ?', 'tons']
Another tactic that is beneficial to achieving faster convergence during training is trimming rarely used words out of our vocabulary. Decreasing the feature space will also soften the difficulty of the function that the model must learn to approximate. We will do this as a two-step process:
Trim words used under MIN_COUNT threshold using the voc.trim function.
Filter out pairs with trimmed words.
MIN_COUNT = 3 # Minimum word count threshold for trimming
def trimRareWords(voc, pairs, MIN_COUNT):
# Trim words used under the MIN_COUNT from the voc
voc.trim(MIN_COUNT)
# Filter out pairs with trimmed words
keep_pairs = []
for pair in pairs:
input_sentence = pair[0]
output_sentence = pair[1]
keep_input = True
keep_output = True
# Check input sentence
for word in input_sentence.split(' '):
if word not in voc.word2index:
keep_input = False
break
# Check output sentence
for word in output_sentence.split(' '):
if word not in voc.word2index:
keep_output = False
break
# Only keep pairs that do not contain trimmed word(s) in their input or output sentence
if keep_input and keep_output:
keep_pairs.append(pair)
print("Trimmed from {} pairs to {}, {:.4f} of total".format(len(pairs), len(keep_pairs), len(keep_pairs) / len(pairs)))
return keep_pairs
# Trim voc and pairs
pairs = trimRareWords(voc, pairs, MIN_COUNT)
keep_words 7833 / 18079 = 0.4333
Trimmed from 64313 pairs to 53131, 0.8261 of total
Prepare Data for Models#
Although we have put a great deal of effort into preparing and massaging our data into a nice vocabulary object and list of sentence pairs, our models will ultimately expect numerical torch tensors as inputs. One way to prepare the processed data for the models can be found in the seq2seq translation tutorial. In that tutorial, we use a batch size of 1, meaning that all we have to do is convert the words in our sentence pairs to their corresponding indexes from the vocabulary and feed this to the models.
However, if you’re interested in speeding up training and/or would like to leverage GPU parallelization capabilities, you will need to train with mini-batches.
Using mini-batches also means that we must be mindful of the variation of sentence length in our batches. To accommodate sentences of different sizes in the same batch, we will make our batched input tensor of shape (max_length, batch_size), where sentences shorter than the max_length are zero padded after an EOS_token.
If we simply convert our English sentences to tensors by converting
words to their indexes (indexesFromSentence) and zero-pad, our
tensor would have shape (batch_size, max_length) and indexing the
first dimension would return a full sequence across all time-steps.
However, we need to be able to index our batch along time, and across
all sequences in the batch. Therefore, we transpose our input batch
shape to (max_length, batch_size), so that indexing across the first
dimension returns a time step across all sentences in the batch. We
handle this transpose implicitly in the zeroPadding function.
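As a small, hypothetical illustration (toy_batch is a made-up example), itertools.zip_longest performs exactly that transpose while padding the shorter sequences:
toy_batch = [[5, 6, 2], [7, 2]]   # two toy index sequences of lengths 3 and 2
print(list(itertools.zip_longest(*toy_batch, fillvalue=PAD_token)))
# [(5, 7), (6, 2), (2, 0)] -- one tuple per time step: shape (max_length, batch_size)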
The inputVar function handles the process of converting sentences to
tensor, ultimately creating a correctly shaped zero-padded tensor. It
also returns a tensor of lengths for each of the sequences in the
batch, which will be passed to our encoder later.
The outputVar function performs a similar function to inputVar,
but instead of returning a lengths tensor, it returns a binary mask
tensor and a maximum target sentence length. The binary mask tensor has
the same shape as the output target tensor, but every element that is a
PAD_token is 0 and all others are 1.
batch2TrainData simply takes a bunch of pairs and returns the input
and target tensors using the aforementioned functions.
def indexesFromSentence(voc, sentence):
return [voc.word2index[word] for word in sentence.split(' ')] + [EOS_token]
def zeroPadding(l, fillvalue=PAD_token):
return list(itertools.zip_longest(*l, fillvalue=fillvalue))
def binaryMatrix(l, value=PAD_token):
m = []
for i, seq in enumerate(l):
m.append([])
for token in seq:
if token == PAD_token:
m[i].append(0)
else:
m[i].append(1)
return m
# Returns padded input sequence tensor and lengths
def inputVar(l, voc):
indexes_batch = [indexesFromSentence(voc, sentence) for sentence in l]
lengths = torch.tensor([len(indexes) for indexes in indexes_batch])
padList = zeroPadding(indexes_batch)
padVar = torch.LongTensor(padList)
return padVar, lengths
# Returns padded target sequence tensor, padding mask, and max target length
def outputVar(l, voc):
indexes_batch = [indexesFromSentence(voc, sentence) for sentence in l]
max_target_len = max([len(indexes) for indexes in indexes_batch])
padList = zeroPadding(indexes_batch)
mask = binaryMatrix(padList)
mask = torch.BoolTensor(mask)
padVar = torch.LongTensor(padList)
return padVar, mask, max_target_len
# Returns all items for a given batch of pairs
def batch2TrainData(voc, pair_batch):
pair_batch.sort(key=lambda x: len(x[0].split(" ")), reverse=True)
input_batch, output_batch = [], []
for pair in pair_batch:
input_batch.append(pair[0])
output_batch.append(pair[1])
inp, lengths = inputVar(input_batch, voc)
output, mask, max_target_len = outputVar(output_batch, voc)
return inp, lengths, output, mask, max_target_len
# Example for validation
small_batch_size = 5
batches = batch2TrainData(voc, [random.choice(pairs) for _ in range(small_batch_size)])
input_variable, lengths, target_variable, mask, max_target_len = batches
print("input_variable:", input_variable)
print("lengths:", lengths)
print("target_variable:", target_variable)
print("mask:", mask)
print("max_target_len:", max_target_len)
input_variable: tensor([[ 40, 90, 24, 280, 24],
[ 104, 36, 385, 994, 10],
[1784, 79, 39, 14, 2],
[ 72, 1697, 36, 2, 0],
[ 5, 1352, 14, 0, 0],
[ 409, 10, 2, 0, 0],
[ 6, 2, 0, 0, 0],
[ 2, 0, 0, 0, 0]])
lengths: tensor([8, 7, 6, 4, 3])
target_variable: tensor([[ 40, 162, 914, 24, 280],
[ 36, 14, 17, 694, 890],
[ 17, 2, 62, 10, 14],
[499, 0, 147, 2, 2],
[104, 0, 284, 0, 0],
[208, 0, 10, 0, 0],
[135, 0, 2, 0, 0],
[ 48, 0, 0, 0, 0],
[ 6, 0, 0, 0, 0],
[ 2, 0, 0, 0, 0]])
mask: tensor([[ True, True, True, True, True],
[ True, True, True, True, True],
[ True, True, True, True, True],
[ True, False, True, True, True],
[ True, False, True, False, False],
[ True, False, True, False, False],
[ True, False, True, False, False],
[ True, False, False, False, False],
[ True, False, False, False, False],
[ True, False, False, False, False]])
max_target_len: 10
Define Models#
Seq2Seq Model#
The brains of our chatbot is a sequence-to-sequence (seq2seq) model. The goal of a seq2seq model is to take a variable-length sequence as an input, and return a variable-length sequence as an output using a fixed-sized model.
Sutskever et al. discovered that by using two separate recurrent neural nets together, we can accomplish this task. One RNN acts as an encoder, which encodes a variable length input sequence to a fixed-length context vector. In theory, this context vector (the final hidden layer of the RNN) will contain semantic information about the query sentence that is input to the bot. The second RNN is a decoder, which takes an input word and the context vector, and returns a guess for the next word in the sequence and a hidden state to use in the next iteration.
Image source: https://jeddy92.github.io/JEddy92.github.io/ts_seq2seq_intro/
Encoder#
The encoder RNN iterates through the input sentence one token (e.g. word) at a time, at each time step outputting an “output” vector and a “hidden state” vector. The hidden state vector is then passed to the next time step, while the output vector is recorded. The encoder transforms the context it saw at each point in the sequence into a set of points in a high-dimensional space, which the decoder will use to generate a meaningful output for the given task.
At the heart of our encoder is a multi-layered Gated Recurrent Unit, invented by Cho et al. in 2014. We will use a bidirectional variant of the GRU, meaning that there are essentially two independent RNNs: one that is fed the input sequence in normal sequential order, and one that is fed the input sequence in reverse order. The outputs of each network are summed at each time step. Using a bidirectional GRU will give us the advantage of encoding both past and future contexts.
Bidirectional RNN:
Image source: https://colah.github.io/posts/2015-09-NN-Types-FP/
Note that an embedding layer is used to encode our word indices in
an arbitrarily sized feature space. For our models, this layer will map
each word to a feature space of size hidden_size. When trained, these
values should encode semantic similarity between similar meaning words.
Finally, if passing a padded batch of sequences to an RNN module, we
must pack and unpack padding around the RNN pass using
nn.utils.rnn.pack_padded_sequence and
nn.utils.rnn.pad_packed_sequence respectively.
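Here is a minimal, hypothetical sketch of that pack/unpack round trip (toy sizes; toy_emb, toy_padded, and toy_gru are made-up names). It also shows why a bidirectional GRU’s raw output has twice the hidden size before we sum the two directions:
toy_emb = nn.Embedding(10, 4)                               # toy 10-word vocabulary, 4-dim embeddings
toy_padded = torch.LongTensor([[1, 4], [2, 5], [3, 0]])     # (max_length=3, batch_size=2); 0 pads the shorter sequence
toy_lengths = torch.tensor([3, 2])
toy_gru = nn.GRU(4, 4, bidirectional=True)
toy_packed = nn.utils.rnn.pack_padded_sequence(toy_emb(toy_padded), toy_lengths)
toy_out, toy_hidden = toy_gru(toy_packed)                   # the GRU skips the padded positions
toy_unpacked, toy_out_lengths = nn.utils.rnn.pad_packed_sequence(toy_out)
print(toy_unpacked.shape)   # torch.Size([3, 2, 8]) -- 2 x hidden_size, one half per direction
print(toy_out_lengths)      # tensor([3, 2])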
Computation Graph:
Convert word indexes to embeddings.
Pack padded batch of sequences for RNN module.
Forward pass through GRU.
Unpack padding.
Sum bidirectional GRU outputs.
Return output and final hidden state.
Inputs:
input_seq: batch of input sentences; shape=(max_length, batch_size)
input_lengths: list of sentence lengths corresponding to each sentence in the batch; shape=(batch_size)
hidden: hidden state; shape=(n_layers x num_directions, batch_size, hidden_size)
Outputs:
outputs: output features from the last hidden layer of the GRU (sum of bidirectional outputs); shape=(max_length, batch_size, hidden_size)
hidden: updated hidden state from GRU; shape=(n_layers x num_directions, batch_size, hidden_size)
class EncoderRNN(nn.Module):
def __init__(self, hidden_size, embedding, n_layers=1, dropout=0):
super(EncoderRNN, self).__init__()
self.n_layers = n_layers
self.hidden_size = hidden_size
self.embedding = embedding
# Initialize GRU; the input_size and hidden_size parameters are both set to 'hidden_size'
# because our input size is a word embedding with number of features == hidden_size
self.gru = nn.GRU(hidden_size, hidden_size, n_layers,
dropout=(0 if n_layers == 1 else dropout), bidirectional=True)
def forward(self, input_seq, input_lengths, hidden=None):
# Convert word indexes to embeddings
embedded = self.embedding(input_seq)
# Pack padded batch of sequences for RNN module
packed = nn.utils.rnn.pack_padded_sequence(embedded, input_lengths)
# Forward pass through GRU
outputs, hidden = self.gru(packed, hidden)
# Unpack padding
outputs, _ = nn.utils.rnn.pad_packed_sequence(outputs)
# Sum bidirectional GRU outputs
outputs = outputs[:, :, :self.hidden_size] + outputs[:, : ,self.hidden_size:]
# Return output and final hidden state
return outputs, hidden
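As a quick, hypothetical sanity check (not part of the tutorial’s pipeline; toy_embedding and toy_encoder are made-up names), we can push the small validation batch from earlier through a freshly initialized encoder and inspect the shapes:
toy_embedding = nn.Embedding(voc.num_words, 500)
toy_encoder = EncoderRNN(500, toy_embedding, n_layers=2)
enc_outputs, enc_hidden = toy_encoder(input_variable, lengths)
print(enc_outputs.shape)   # (max input length in this batch, batch_size, hidden_size=500)
print(enc_hidden.shape)    # (n_layers x num_directions, batch_size, hidden_size) = (4, 5, 500)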
Decoder#
The decoder RNN generates the response sentence in a token-by-token fashion. It uses the encoder’s context vectors, and internal hidden states to generate the next word in the sequence. It continues generating words until it outputs an EOS_token, representing the end of the sentence. A common problem with a vanilla seq2seq decoder is that if we rely solely on the context vector to encode the entire input sequence’s meaning, it is likely that we will have information loss. This is especially the case when dealing with long input sequences, greatly limiting the capability of our decoder.
To combat this, Bahdanau et al. created an “attention mechanism” that allows the decoder to pay attention to certain parts of the input sequence, rather than using the entire fixed context at every step.
At a high level, attention is calculated using the decoder’s current hidden state and the encoder’s outputs. The output attention weights have the same shape as the input sequence, allowing us to multiply them by the encoder outputs, giving us a weighted sum which indicates the parts of encoder output to pay attention to. Sean Robertson’s figure describes this very well:
Luong et al. improved upon Bahdanau et al.’s groundwork by creating “Global attention”. The key difference is that with “Global attention”, we consider all of the encoder’s hidden states, as opposed to Bahdanau et al.’s “Local attention”, which only considers the encoder’s hidden state from the current time step. Another difference is that with “Global attention”, we calculate attention weights, or energies, using the hidden state of the decoder from the current time step only. Bahdanau et al.’s attention calculation requires knowledge of the decoder’s state from the previous time step. Also, Luong et al. provides various methods to calculate the attention energies between the encoder output and decoder output which are called “score functions”:
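From Luong et al. (2015), the three score functions are:
\[
\mathrm{score}(h_t, \bar{h}_s) =
\begin{cases}
h_t^{\top} \bar{h}_s & \text{dot} \\
h_t^{\top} W_a \bar{h}_s & \text{general} \\
v_a^{\top} \tanh\left(W_a \left[h_t ; \bar{h}_s\right]\right) & \text{concat}
\end{cases}
\]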
where \(h_t\) = current target decoder state and \(\bar{h}_s\) = all encoder states.
Overall, the Global attention mechanism can be summarized by the
following figure. Note that we will implement the “Attention Layer” as a
separate nn.Module called Attn. The output of this module is a
softmax normalized weights tensor of shape (batch_size, 1,
max_length).
# Luong attention layer
class Attn(nn.Module):
def __init__(self, method, hidden_size):
super(Attn, self).__init__()
self.method = method
if self.method not in ['dot', 'general', 'concat']:
raise ValueError(self.method, "is not an appropriate attention method.")
self.hidden_size = hidden_size
if self.method == 'general':
self.attn = nn.Linear(self.hidden_size, hidden_size)
elif self.method == 'concat':
self.attn = nn.Linear(self.hidden_size * 2, hidden_size)
self.v = nn.Parameter(torch.FloatTensor(hidden_size))
def dot_score(self, hidden, encoder_output):
return torch.sum(hidden * encoder_output, dim=2)
def general_score(self, hidden, encoder_output):
energy = self.attn(encoder_output)
return torch.sum(hidden * energy, dim=2)
def concat_score(self, hidden, encoder_output):
energy = self.attn(torch.cat((hidden.expand(encoder_output.size(0), -1, -1), encoder_output), 2)).tanh()
return torch.sum(self.v * energy, dim=2)
def forward(self, hidden, encoder_outputs):
# Calculate the attention weights (energies) based on the given method
if self.method == 'general':
attn_energies = self.general_score(hidden, encoder_outputs)
elif self.method == 'concat':
attn_energies = self.concat_score(hidden, encoder_outputs)
elif self.method == 'dot':
attn_energies = self.dot_score(hidden, encoder_outputs)
# Transpose max_length and batch_size dimensions
attn_energies = attn_energies.t()
# Return the softmax normalized probability scores (with added dimension)
return F.softmax(attn_energies, dim=1).unsqueeze(1)
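As a hypothetical shape check (random toy tensors and made-up names, not part of the tutorial), the attention module maps a single decoder step and the encoder outputs to per-position weights that sum to one:
toy_attn = Attn('dot', 500)
dec_step = torch.randn(1, 5, 500)     # (1, batch_size, hidden_size): one decoder GRU output
enc_out = torch.randn(8, 5, 500)      # (max_length, batch_size, hidden_size)
weights = toy_attn(dec_step, enc_out)
print(weights.shape)                  # torch.Size([5, 1, 8]) -> (batch_size, 1, max_length)
print(weights.sum(dim=2))             # each batch element's weights sum to 1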
Now that we have defined our attention submodule, we can implement the actual decoder model. For the decoder, we will manually feed our batch one time step at a time. This means that our embedded word tensor and GRU output will both have shape (1, batch_size, hidden_size).
Computation Graph:
Get embedding of current input word.
Forward through unidirectional GRU.
Calculate attention weights from the current GRU output from (2).
Multiply attention weights to encoder outputs to get new “weighted sum” context vector.
Concatenate weighted context vector and GRU output using Luong eq. 5.
Predict next word using Luong eq. 6 (without softmax).
Return output and final hidden state.
Inputs:
input_step: one time step (one word) of input sequence batch; shape=(1, batch_size)
last_hidden: final hidden layer of GRU; shape=(n_layers x num_directions, batch_size, hidden_size)
encoder_outputs: encoder model’s output; shape=(max_length, batch_size, hidden_size)
Outputs:
output: softmax normalized tensor giving probabilities of each word being the correct next word in the decoded sequence; shape=(batch_size, voc.num_words)
hidden: final hidden state of GRU; shape=(n_layers x num_directions, batch_size, hidden_size)
class LuongAttnDecoderRNN(nn.Module):
def __init__(self, attn_model, embedding, hidden_size, output_size, n_layers=1, dropout=0.1):
super(LuongAttnDecoderRNN, self).__init__()
# Keep for reference
self.attn_model = attn_model
self.hidden_size = hidden_size
self.output_size = output_size
self.n_layers = n_layers
self.dropout = dropout
# Define layers
self.embedding = embedding
self.embedding_dropout = nn.Dropout(dropout)
self.gru = nn.GRU(hidden_size, hidden_size, n_layers, dropout=(0 if n_layers == 1 else dropout))
self.concat = nn.Linear(hidden_size * 2, hidden_size)
self.out = nn.Linear(hidden_size, output_size)
self.attn = Attn(attn_model, hidden_size)
def forward(self, input_step, last_hidden, encoder_outputs):
# Note: we run this one step (word) at a time
# Get embedding of current input word
embedded = self.embedding(input_step)
embedded = self.embedding_dropout(embedded)
# Forward through unidirectional GRU
rnn_output, hidden = self.gru(embedded, last_hidden)
# Calculate attention weights from the current GRU output
attn_weights = self.attn(rnn_output, encoder_outputs)
# Multiply attention weights to encoder outputs to get new "weighted sum" context vector
context = attn_weights.bmm(encoder_outputs.transpose(0, 1))
# Concatenate weighted context vector and GRU output using Luong eq. 5
rnn_output = rnn_output.squeeze(0)
context = context.squeeze(1)
concat_input = torch.cat((rnn_output, context), 1)
concat_output = torch.tanh(self.concat(concat_input))
# Predict next word using Luong eq. 6
output = self.out(concat_output)
output = F.softmax(output, dim=1)
# Return output and final hidden state
return output, hidden
Define Training Procedure#
Masked loss#
Since we are dealing with batches of padded sequences, we cannot simply
consider all elements of the tensor when calculating loss. We define
maskNLLLoss to calculate our loss based on our decoder’s output
tensor, the target tensor, and a binary mask tensor describing the
padding of the target tensor. This loss function calculates the average
negative log likelihood of the elements that correspond to a 1 in the
mask tensor.
def maskNLLLoss(inp, target, mask):
nTotal = mask.sum()
crossEntropy = -torch.log(torch.gather(inp, 1, target.view(-1, 1)).squeeze(1))
loss = crossEntropy.masked_select(mask).mean()
loss = loss.to(device)
return loss, nTotal.item()
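A tiny, hypothetical example (toy probabilities and made-up names) shows what the mask does: only the unpadded position contributes to the loss.
toy_inp = torch.tensor([[0.2, 0.5, 0.3], [0.1, 0.8, 0.1]])   # "softmax" outputs for a batch of 2
toy_target = torch.tensor([1, 0])
toy_mask = torch.tensor([True, False])                        # second element is padding
toy_loss, toy_n = maskNLLLoss(toy_inp, toy_target, toy_mask)
print(toy_loss)   # -log(0.5) ~= 0.6931; the masked element is ignored
print(toy_n)      # 1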
Single training iteration#
The train function contains the algorithm for a single training
iteration (a single batch of inputs).
We will use a couple of clever tricks to aid in convergence:
The first trick is using teacher forcing. This means that at some probability, set by teacher_forcing_ratio, we use the current target word as the decoder’s next input rather than using the decoder’s current guess. This technique acts as training wheels for the decoder, aiding in more efficient training. However, teacher forcing can lead to model instability during inference, as the decoder may not have a sufficient chance to truly craft its own output sequences during training. Thus, we must be mindful of how we are setting the teacher_forcing_ratio, and not be fooled by fast convergence.
The second trick that we implement is gradient clipping. This is a commonly used technique for countering the “exploding gradient” problem. In essence, by clipping or thresholding gradients to a maximum value, we prevent the gradients from growing exponentially and either overflowing (NaN) or overshooting steep cliffs in the cost function. A toy illustration of clipping is sketched below.
Image source: Goodfellow et al. Deep Learning. 2016. https://www.deeplearningbook.org/
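As a quick, hypothetical illustration (not part of the tutorial’s training loop; p is a made-up parameter), clipping rescales a gradient whose norm exceeds the threshold:
p = nn.Parameter(torch.tensor([3.0, 4.0]))
p.grad = torch.tensor([30.0, 40.0])            # gradient norm is 50
nn.utils.clip_grad_norm_([p], max_norm=5.0)    # rescale in place so the norm is at most 5
print(p.grad)                                  # approximately tensor([3., 4.])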
Sequence of Operations:
Forward pass entire input batch through encoder.
Initialize decoder inputs as SOS_token, and hidden state as the encoder’s final hidden state.
Forward input batch sequence through decoder one time step at a time.
If teacher forcing: set next decoder input as the current target; else: set next decoder input as current decoder output.
Calculate and accumulate loss.
Perform backpropagation.
Clip gradients.
Update encoder and decoder model parameters.
Note
PyTorch’s RNN modules (RNN, LSTM, GRU) can be used like any
other non-recurrent layers by simply passing them the entire input
sequence (or batch of sequences). We use the GRU layer like this in
the encoder. The reality is that under the hood, there is an
iterative process looping over each time step calculating hidden states.
Alternatively, you can run these modules one time-step at a time. In
this case, we manually loop over the sequences during the training
process like we must do for the decoder model. As long as you
maintain the correct conceptual model of these modules, implementing
sequential models can be very straightforward.
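The following is a small, hypothetical sketch of this point (toy sizes; gru and seq are made-up names): running a GRU over a whole sequence in one call produces the same outputs as looping over it one time step at a time.
gru = nn.GRU(4, 4)                       # toy GRU: input_size=4, hidden_size=4
seq = torch.randn(6, 1, 4)               # (seq_len, batch_size, input_size)
full_out, full_hidden = gru(seq)         # one call over the entire sequence
hidden = None
step_outs = []
for t in range(seq.size(0)):
    out_t, hidden = gru(seq[t:t+1], hidden)   # one time step at a time
    step_outs.append(out_t)
print(torch.allclose(full_out, torch.cat(step_outs), atol=1e-6))   # True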
def train(input_variable, lengths, target_variable, mask, max_target_len, encoder, decoder, embedding,
encoder_optimizer, decoder_optimizer, batch_size, clip, max_length=MAX_LENGTH):
# Zero gradients
encoder_optimizer.zero_grad()
decoder_optimizer.zero_grad()
# Set device options
input_variable = input_variable.to(device)
target_variable = target_variable.to(device)
mask = mask.to(device)
# Lengths for RNN packing should always be on the CPU
lengths = lengths.to("cpu")
# Initialize variables
loss = 0
print_losses = []
n_totals = 0
# Forward pass through encoder
encoder_outputs, encoder_hidden = encoder(input_variable, lengths)
# Create initial decoder input (start with SOS tokens for each sentence)
decoder_input = torch.LongTensor([[SOS_token for _ in range(batch_size)]])
decoder_input = decoder_input.to(device)
# Set initial decoder hidden state to the encoder's final hidden state
decoder_hidden = encoder_hidden[:decoder.n_layers]
# Determine if we are using teacher forcing this iteration
use_teacher_forcing = True if random.random() < teacher_forcing_ratio else False
# Forward batch of sequences through decoder one time step at a time
if use_teacher_forcing:
for t in range(max_target_len):
decoder_output, decoder_hidden = decoder(
decoder_input, decoder_hidden, encoder_outputs
)
# Teacher forcing: next input is current target
decoder_input = target_variable[t].view(1, -1)
# Calculate and accumulate loss
mask_loss, nTotal = maskNLLLoss(decoder_output, target_variable[t], mask[t])
loss += mask_loss
print_losses.append(mask_loss.item() * nTotal)
n_totals += nTotal
else:
for t in range(max_target_len):
decoder_output, decoder_hidden = decoder(
decoder_input, decoder_hidden, encoder_outputs
)
# No teacher forcing: next input is decoder's own current output
_, topi = decoder_output.topk(1)
decoder_input = torch.LongTensor([[topi[i][0] for i in range(batch_size)]])
decoder_input = decoder_input.to(device)
# Calculate and accumulate loss
mask_loss, nTotal = maskNLLLoss(decoder_output, target_variable[t], mask[t])
loss += mask_loss
print_losses.append(mask_loss.item() * nTotal)
n_totals += nTotal
# Perform backpropagation
loss.backward()
# Clip gradients: gradients are modified in place
_ = nn.utils.clip_grad_norm_(encoder.parameters(), clip)
_ = nn.utils.clip_grad_norm_(decoder.parameters(), clip)
# Adjust model weights
encoder_optimizer.step()
decoder_optimizer.step()
return sum(print_losses) / n_totals
Training iterations#
It is finally time to tie the full training procedure together with the
data. The trainIters function is responsible for running
n_iterations of training given the passed models, optimizers, data,
etc. This function is quite self explanatory, as we have done the heavy
lifting with the train function.
One thing to note is that when we save our model, we save a tarball
containing the encoder and decoder state_dicts (parameters), the
optimizers’ state_dicts, the loss, the iteration, etc. Saving the model
in this way will give us the ultimate flexibility with the checkpoint.
After loading a checkpoint, we will be able to use the model parameters
to run inference, or we can continue training right where we left off.
def trainIters(model_name, voc, pairs, encoder, decoder, encoder_optimizer, decoder_optimizer, embedding, encoder_n_layers, decoder_n_layers, save_dir, n_iteration, batch_size, print_every, save_every, clip, corpus_name, loadFilename):
# Load batches for each iteration
training_batches = [batch2TrainData(voc, [random.choice(pairs) for _ in range(batch_size)])
for _ in range(n_iteration)]
# Initializations
print('Initializing ...')
start_iteration = 1
print_loss = 0
if loadFilename:
start_iteration = checkpoint['iteration'] + 1
# Training loop
print("Training...")
for iteration in range(start_iteration, n_iteration + 1):
training_batch = training_batches[iteration - 1]
# Extract fields from batch
input_variable, lengths, target_variable, mask, max_target_len = training_batch
# Run a training iteration with batch
loss = train(input_variable, lengths, target_variable, mask, max_target_len, encoder,
decoder, embedding, encoder_optimizer, decoder_optimizer, batch_size, clip)
print_loss += loss
# Print progress
if iteration % print_every == 0:
print_loss_avg = print_loss / print_every
print("Iteration: {}; Percent complete: {:.1f}%; Average loss: {:.4f}".format(iteration, iteration / n_iteration * 100, print_loss_avg))
print_loss = 0
# Save checkpoint
if (iteration % save_every == 0):
directory = os.path.join(save_dir, model_name, corpus_name, '{}-{}_{}'.format(encoder_n_layers, decoder_n_layers, hidden_size))
if not os.path.exists(directory):
os.makedirs(directory)
torch.save({
'iteration': iteration,
'en': encoder.state_dict(),
'de': decoder.state_dict(),
'en_opt': encoder_optimizer.state_dict(),
'de_opt': decoder_optimizer.state_dict(),
'loss': loss,
'voc_dict': voc.__dict__,
'embedding': embedding.state_dict()
}, os.path.join(directory, '{}_{}.tar'.format(iteration, 'checkpoint')))
Define Evaluation#
After training a model, we want to be able to talk to the bot ourselves. First, we must define how we want the model to decode the encoded input.
Greedy decoding#
Greedy decoding is the decoding method that we use during training when
we are NOT using teacher forcing. In other words, for each time
step, we simply choose the word from decoder_output with the highest
softmax value. This decoding method is optimal on a single time-step
level.
To facilitate the greedy decoding operation, we define a
GreedySearchDecoder class. When run, an object of this class takes
an input sequence (input_seq) of shape (input_seq length, 1), a
scalar input length (input_length) tensor, and a max_length to
bound the response sentence length. The input sentence is evaluated
using the following computational graph:
Computation Graph:
Forward input through encoder model.
Prepare encoder’s final hidden layer to be first hidden input to the decoder.
Initialize decoder’s first input as SOS_token.
Initialize tensors to append decoded words to.
Iteratively decode one word token at a time:
Forward pass through decoder.
Obtain most likely word token and its softmax score.
Record token and score.
Prepare current token to be next decoder input.
Return collections of word tokens and scores.
class GreedySearchDecoder(nn.Module):
def __init__(self, encoder, decoder):
super(GreedySearchDecoder, self).__init__()
self.encoder = encoder
self.decoder = decoder
def forward(self, input_seq, input_length, max_length):
# Forward input through encoder model
encoder_outputs, encoder_hidden = self.encoder(input_seq, input_length)
# Prepare encoder's final hidden layer to be first hidden input to the decoder
decoder_hidden = encoder_hidden[:self.decoder.n_layers]
# Initialize decoder input with SOS_token
decoder_input = torch.ones(1, 1, device=device, dtype=torch.long) * SOS_token
# Initialize tensors to append decoded words to
all_tokens = torch.zeros([0], device=device, dtype=torch.long)
all_scores = torch.zeros([0], device=device)
# Iteratively decode one word token at a time
for _ in range(max_length):
# Forward pass through decoder
decoder_output, decoder_hidden = self.decoder(decoder_input, decoder_hidden, encoder_outputs)
# Obtain most likely word token and its softmax score
decoder_scores, decoder_input = torch.max(decoder_output, dim=1)
# Record token and score
all_tokens = torch.cat((all_tokens, decoder_input), dim=0)
all_scores = torch.cat((all_scores, decoder_scores), dim=0)
# Prepare current token to be next decoder input (add a dimension)
decoder_input = torch.unsqueeze(decoder_input, 0)
# Return collections of word tokens and scores
return all_tokens, all_scores
Evaluate my text#
Now that we have our decoding method defined, we can write functions for
evaluating a string input sentence. The evaluate function manages
the low-level process of handling the input sentence. We first format
the sentence as an input batch of word indexes with batch_size==1. We
do this by converting the words of the sentence to their corresponding
indexes, and transposing the dimensions to prepare the tensor for our
models. We also create a lengths tensor which contains the length of
our input sentence. In this case, lengths is scalar because we are
only evaluating one sentence at a time (batch_size==1). Next, we obtain
the decoded response sentence tensor using our GreedySearchDecoder
object (searcher). Finally, we convert the response’s indexes to
words and return the list of decoded words.
evaluateInput acts as the user interface for our chatbot. When
called, an input text field will spawn in which we can enter our query
sentence. After typing our input sentence and pressing Enter, our text
is normalized in the same way as our training data, and is ultimately
fed to the evaluate function to obtain a decoded output sentence. We
loop this process, so we can keep chatting with our bot until we enter
either “q” or “quit”.
Finally, if a sentence is entered that contains a word that is not in the vocabulary, we handle this gracefully by printing an error message and prompting the user to enter another sentence.
def evaluate(encoder, decoder, searcher, voc, sentence, max_length=MAX_LENGTH):
### Format input sentence as a batch
# words -> indexes
indexes_batch = [indexesFromSentence(voc, sentence)]
# Create lengths tensor
lengths = torch.tensor([len(indexes) for indexes in indexes_batch])
# Transpose dimensions of batch to match models' expectations
input_batch = torch.LongTensor(indexes_batch).transpose(0, 1)
# Use appropriate device
input_batch = input_batch.to(device)
lengths = lengths.to("cpu")
# Decode sentence with searcher
tokens, scores = searcher(input_batch, lengths, max_length)
# indexes -> words
decoded_words = [voc.index2word[token.item()] for token in tokens]
return decoded_words
def evaluateInput(encoder, decoder, searcher, voc):
input_sentence = ''
while(1):
try:
# Get input sentence
input_sentence = input('> ')
# Check if it is quit case
if input_sentence == 'q' or input_sentence == 'quit': break
# Normalize sentence
input_sentence = normalizeString(input_sentence)
# Evaluate sentence
output_words = evaluate(encoder, decoder, searcher, voc, input_sentence)
# Format and print response sentence
output_words[:] = [x for x in output_words if not (x == 'EOS' or x == 'PAD')]
print('Bot:', ' '.join(output_words))
except KeyError:
print("Error: Encountered unknown word.")
Run Model#
Finally, it is time to run our model!
Regardless of whether we want to train or test the chatbot model, we must initialize the individual encoder and decoder models. In the following block, we set our desired configurations, choose to start from scratch or set a checkpoint to load from, and build and initialize the models. Feel free to play with different model configurations to optimize performance.
# Configure models
model_name = 'cb_model'
attn_model = 'dot'
# attn_model = 'general'
# attn_model = 'concat'
hidden_size = 500
encoder_n_layers = 2
decoder_n_layers = 2
dropout = 0.1
batch_size = 64
# Set checkpoint to load from; set to None if starting from scratch
loadFilename = None
checkpoint_iter = 4000
Sample code to load from a checkpoint:
loadFilename = os.path.join(save_dir, model_name, corpus_name,
'{}-{}_{}'.format(encoder_n_layers, decoder_n_layers, hidden_size),
'{}_checkpoint.tar'.format(checkpoint_iter))
# Load model if a ``loadFilename`` is provided
if loadFilename:
# If loading on same machine the model was trained on
checkpoint = torch.load(loadFilename)
# If loading a model trained on GPU to CPU
#checkpoint = torch.load(loadFilename, map_location=torch.device('cpu'))
encoder_sd = checkpoint['en']
decoder_sd = checkpoint['de']
encoder_optimizer_sd = checkpoint['en_opt']
decoder_optimizer_sd = checkpoint['de_opt']
embedding_sd = checkpoint['embedding']
voc.__dict__ = checkpoint['voc_dict']
print('Building encoder and decoder ...')
# Initialize word embeddings
embedding = nn.Embedding(voc.num_words, hidden_size)
if loadFilename:
embedding.load_state_dict(embedding_sd)
# Initialize encoder & decoder models
encoder = EncoderRNN(hidden_size, embedding, encoder_n_layers, dropout)
decoder = LuongAttnDecoderRNN(attn_model, embedding, hidden_size, voc.num_words, decoder_n_layers, dropout)
if loadFilename:
encoder.load_state_dict(encoder_sd)
decoder.load_state_dict(decoder_sd)
# Use appropriate device
encoder = encoder.to(device)
decoder = decoder.to(device)
print('Models built and ready to go!')
Building encoder and decoder ...
Models built and ready to go!
Run Training#
Run the following block if you want to train the model.
First we set training parameters, then we initialize our optimizers, and
finally we call the trainIters function to run our training
iterations.
# Configure training/optimization
clip = 50.0
teacher_forcing_ratio = 1.0
learning_rate = 0.0001
decoder_learning_ratio = 5.0
n_iteration = 4000
print_every = 1
save_every = 500
# Ensure dropout layers are in train mode
encoder.train()
decoder.train()
# Initialize optimizers
print('Building optimizers ...')
encoder_optimizer = optim.Adam(encoder.parameters(), lr=learning_rate)
decoder_optimizer = optim.Adam(decoder.parameters(), lr=learning_rate * decoder_learning_ratio)
if loadFilename:
encoder_optimizer.load_state_dict(encoder_optimizer_sd)
decoder_optimizer.load_state_dict(decoder_optimizer_sd)
# If you have an accelerator, move the optimizers' state tensors to it
for state in encoder_optimizer.state.values():
for k, v in state.items():
if isinstance(v, torch.Tensor):
state[k] = v.to(device)
for state in decoder_optimizer.state.values():
for k, v in state.items():
if isinstance(v, torch.Tensor):
state[k] = v.to(device)
# Run training iterations
print("Starting Training!")
trainIters(model_name, voc, pairs, encoder, decoder, encoder_optimizer, decoder_optimizer,
embedding, encoder_n_layers, decoder_n_layers, save_dir, n_iteration, batch_size,
print_every, save_every, clip, corpus_name, loadFilename)
Building optimizers ...
Starting Training!
Initializing ...
Training...
Iteration: 1; Percent complete: 0.0%; Average loss: 8.9677
Iteration: 2; Percent complete: 0.1%; Average loss: 8.8489
Iteration: 3; Percent complete: 0.1%; Average loss: 8.6647
Iteration: 4; Percent complete: 0.1%; Average loss: 8.4242
Iteration: 5; Percent complete: 0.1%; Average loss: 7.9713
Iteration: 6; Percent complete: 0.1%; Average loss: 7.5042
Iteration: 7; Percent complete: 0.2%; Average loss: 6.9863
Iteration: 8; Percent complete: 0.2%; Average loss: 7.2617
Iteration: 9; Percent complete: 0.2%; Average loss: 7.0283
Iteration: 10; Percent complete: 0.2%; Average loss: 6.5441
Iteration: 11; Percent complete: 0.3%; Average loss: 6.4359
Iteration: 12; Percent complete: 0.3%; Average loss: 5.9177
Iteration: 13; Percent complete: 0.3%; Average loss: 5.5309
Iteration: 14; Percent complete: 0.4%; Average loss: 5.5942
Iteration: 15; Percent complete: 0.4%; Average loss: 5.4137
Iteration: 16; Percent complete: 0.4%; Average loss: 5.3876
Iteration: 17; Percent complete: 0.4%; Average loss: 5.3524
Iteration: 18; Percent complete: 0.4%; Average loss: 5.0748
Iteration: 19; Percent complete: 0.5%; Average loss: 5.0086
Iteration: 20; Percent complete: 0.5%; Average loss: 4.9907
Iteration: 21; Percent complete: 0.5%; Average loss: 4.7849
Iteration: 22; Percent complete: 0.5%; Average loss: 4.9835
Iteration: 23; Percent complete: 0.6%; Average loss: 5.1221
Iteration: 24; Percent complete: 0.6%; Average loss: 4.8828
Iteration: 25; Percent complete: 0.6%; Average loss: 4.7687
Iteration: 26; Percent complete: 0.7%; Average loss: 4.6692
Iteration: 27; Percent complete: 0.7%; Average loss: 4.9793
Iteration: 28; Percent complete: 0.7%; Average loss: 4.7594
Iteration: 29; Percent complete: 0.7%; Average loss: 4.9713
Iteration: 30; Percent complete: 0.8%; Average loss: 4.7983
Iteration: 31; Percent complete: 0.8%; Average loss: 4.6639
Iteration: 32; Percent complete: 0.8%; Average loss: 4.8564
Iteration: 33; Percent complete: 0.8%; Average loss: 4.7391
Iteration: 34; Percent complete: 0.9%; Average loss: 4.6077
Iteration: 35; Percent complete: 0.9%; Average loss: 4.8570
Iteration: 36; Percent complete: 0.9%; Average loss: 4.6240
Iteration: 37; Percent complete: 0.9%; Average loss: 4.7627
Iteration: 38; Percent complete: 0.9%; Average loss: 4.6784
Iteration: 39; Percent complete: 1.0%; Average loss: 4.7290
Iteration: 40; Percent complete: 1.0%; Average loss: 4.7723
Iteration: 41; Percent complete: 1.0%; Average loss: 4.5816
Iteration: 42; Percent complete: 1.1%; Average loss: 4.8357
Iteration: 43; Percent complete: 1.1%; Average loss: 4.6893
Iteration: 44; Percent complete: 1.1%; Average loss: 4.7060
Iteration: 45; Percent complete: 1.1%; Average loss: 4.3711
Iteration: 46; Percent complete: 1.1%; Average loss: 4.6236
Iteration: 47; Percent complete: 1.2%; Average loss: 4.7331
Iteration: 48; Percent complete: 1.2%; Average loss: 4.6273
Iteration: 49; Percent complete: 1.2%; Average loss: 4.5580
Iteration: 50; Percent complete: 1.2%; Average loss: 4.4319
Iteration: 51; Percent complete: 1.3%; Average loss: 4.6779
Iteration: 52; Percent complete: 1.3%; Average loss: 4.4961
Iteration: 53; Percent complete: 1.3%; Average loss: 4.6399
Iteration: 54; Percent complete: 1.4%; Average loss: 4.5255
Iteration: 55; Percent complete: 1.4%; Average loss: 4.5374
Iteration: 56; Percent complete: 1.4%; Average loss: 4.5876
Iteration: 57; Percent complete: 1.4%; Average loss: 4.4755
Iteration: 58; Percent complete: 1.5%; Average loss: 4.6317
Iteration: 59; Percent complete: 1.5%; Average loss: 4.8395
Iteration: 60; Percent complete: 1.5%; Average loss: 4.6783
Iteration: 61; Percent complete: 1.5%; Average loss: 4.5508
Iteration: 62; Percent complete: 1.6%; Average loss: 4.4918
Iteration: 63; Percent complete: 1.6%; Average loss: 4.7047
Iteration: 64; Percent complete: 1.6%; Average loss: 4.4736
Iteration: 65; Percent complete: 1.6%; Average loss: 4.6185
Iteration: 66; Percent complete: 1.7%; Average loss: 4.4589
Iteration: 67; Percent complete: 1.7%; Average loss: 4.6220
Iteration: 68; Percent complete: 1.7%; Average loss: 4.6213
Iteration: 69; Percent complete: 1.7%; Average loss: 4.6121
Iteration: 70; Percent complete: 1.8%; Average loss: 4.5905
Iteration: 71; Percent complete: 1.8%; Average loss: 4.4507
Iteration: 72; Percent complete: 1.8%; Average loss: 4.4853
Iteration: 73; Percent complete: 1.8%; Average loss: 4.7812
Iteration: 74; Percent complete: 1.8%; Average loss: 4.7597
Iteration: 75; Percent complete: 1.9%; Average loss: 4.4568
Iteration: 76; Percent complete: 1.9%; Average loss: 4.3884
Iteration: 77; Percent complete: 1.9%; Average loss: 4.3959
Iteration: 78; Percent complete: 1.9%; Average loss: 4.4371
Iteration: 79; Percent complete: 2.0%; Average loss: 4.6048
Iteration: 80; Percent complete: 2.0%; Average loss: 4.2015
Iteration: 81; Percent complete: 2.0%; Average loss: 4.5601
Iteration: 82; Percent complete: 2.1%; Average loss: 4.4828
Iteration: 83; Percent complete: 2.1%; Average loss: 4.3503
Iteration: 84; Percent complete: 2.1%; Average loss: 4.5203
Iteration: 85; Percent complete: 2.1%; Average loss: 4.5748
Iteration: 86; Percent complete: 2.1%; Average loss: 4.5859
Iteration: 87; Percent complete: 2.2%; Average loss: 4.4729
Iteration: 88; Percent complete: 2.2%; Average loss: 4.5324
Iteration: 89; Percent complete: 2.2%; Average loss: 4.3732
Iteration: 90; Percent complete: 2.2%; Average loss: 4.5500
Iteration: 91; Percent complete: 2.3%; Average loss: 4.4336
Iteration: 92; Percent complete: 2.3%; Average loss: 4.4903
Iteration: 93; Percent complete: 2.3%; Average loss: 4.4008
Iteration: 94; Percent complete: 2.4%; Average loss: 4.4617
Iteration: 95; Percent complete: 2.4%; Average loss: 4.6883
Iteration: 96; Percent complete: 2.4%; Average loss: 4.3674
Iteration: 97; Percent complete: 2.4%; Average loss: 4.5508
Iteration: 98; Percent complete: 2.5%; Average loss: 4.3364
Iteration: 99; Percent complete: 2.5%; Average loss: 4.7339
Iteration: 100; Percent complete: 2.5%; Average loss: 4.3004
Iteration: 101; Percent complete: 2.5%; Average loss: 4.3175
Iteration: 102; Percent complete: 2.5%; Average loss: 4.4883
Iteration: 103; Percent complete: 2.6%; Average loss: 4.5355
Iteration: 104; Percent complete: 2.6%; Average loss: 4.2802
Iteration: 105; Percent complete: 2.6%; Average loss: 4.4496
Iteration: 106; Percent complete: 2.6%; Average loss: 4.4971
Iteration: 107; Percent complete: 2.7%; Average loss: 4.3787
Iteration: 108; Percent complete: 2.7%; Average loss: 4.5909
Iteration: 109; Percent complete: 2.7%; Average loss: 4.3611
Iteration: 110; Percent complete: 2.8%; Average loss: 4.4355
Iteration: 111; Percent complete: 2.8%; Average loss: 4.5000
Iteration: 112; Percent complete: 2.8%; Average loss: 4.4339
Iteration: 113; Percent complete: 2.8%; Average loss: 4.3753
Iteration: 114; Percent complete: 2.9%; Average loss: 4.4075
Iteration: 115; Percent complete: 2.9%; Average loss: 4.3355
Iteration: 116; Percent complete: 2.9%; Average loss: 4.3510
Iteration: 117; Percent complete: 2.9%; Average loss: 4.4689
Iteration: 118; Percent complete: 2.9%; Average loss: 4.1765
Iteration: 119; Percent complete: 3.0%; Average loss: 4.4101
Iteration: 120; Percent complete: 3.0%; Average loss: 4.3985
Iteration: 121; Percent complete: 3.0%; Average loss: 4.4656
Iteration: 122; Percent complete: 3.0%; Average loss: 4.3883
Iteration: 123; Percent complete: 3.1%; Average loss: 4.5977
Iteration: 124; Percent complete: 3.1%; Average loss: 4.3814
Iteration: 125; Percent complete: 3.1%; Average loss: 4.2317
Iteration: 126; Percent complete: 3.1%; Average loss: 4.3935
Iteration: 127; Percent complete: 3.2%; Average loss: 4.5228
Iteration: 128; Percent complete: 3.2%; Average loss: 4.3545
Iteration: 129; Percent complete: 3.2%; Average loss: 4.3522
Iteration: 130; Percent complete: 3.2%; Average loss: 4.4202
Iteration: 131; Percent complete: 3.3%; Average loss: 4.3792
Iteration: 132; Percent complete: 3.3%; Average loss: 4.4358
Iteration: 133; Percent complete: 3.3%; Average loss: 4.5251
Iteration: 134; Percent complete: 3.4%; Average loss: 4.5731
Iteration: 135; Percent complete: 3.4%; Average loss: 4.1155
Iteration: 136; Percent complete: 3.4%; Average loss: 4.0688
Iteration: 137; Percent complete: 3.4%; Average loss: 4.3959
Iteration: 138; Percent complete: 3.5%; Average loss: 4.0572
Iteration: 139; Percent complete: 3.5%; Average loss: 4.1666
Iteration: 140; Percent complete: 3.5%; Average loss: 4.1263
Iteration: 141; Percent complete: 3.5%; Average loss: 4.2930
Iteration: 142; Percent complete: 3.5%; Average loss: 4.1735
Iteration: 143; Percent complete: 3.6%; Average loss: 4.3252
Iteration: 144; Percent complete: 3.6%; Average loss: 4.2369
Iteration: 145; Percent complete: 3.6%; Average loss: 4.3786
Iteration: 146; Percent complete: 3.6%; Average loss: 4.3728
Iteration: 147; Percent complete: 3.7%; Average loss: 4.1624
Iteration: 148; Percent complete: 3.7%; Average loss: 4.1732
Iteration: 149; Percent complete: 3.7%; Average loss: 4.2397
Iteration: 150; Percent complete: 3.8%; Average loss: 4.2573
Iteration: 151; Percent complete: 3.8%; Average loss: 4.4492
Iteration: 152; Percent complete: 3.8%; Average loss: 4.1974
Iteration: 153; Percent complete: 3.8%; Average loss: 3.9867
Iteration: 154; Percent complete: 3.9%; Average loss: 4.1610
Iteration: 155; Percent complete: 3.9%; Average loss: 4.0461
Iteration: 156; Percent complete: 3.9%; Average loss: 4.1510
Iteration: 157; Percent complete: 3.9%; Average loss: 4.1212
Iteration: 158; Percent complete: 4.0%; Average loss: 4.2186
Iteration: 159; Percent complete: 4.0%; Average loss: 4.0340
Iteration: 160; Percent complete: 4.0%; Average loss: 4.4121
Iteration: 161; Percent complete: 4.0%; Average loss: 4.1275
Iteration: 162; Percent complete: 4.0%; Average loss: 4.2042
Iteration: 163; Percent complete: 4.1%; Average loss: 4.2995
Iteration: 164; Percent complete: 4.1%; Average loss: 4.2606
Iteration: 165; Percent complete: 4.1%; Average loss: 4.1400
Iteration: 166; Percent complete: 4.2%; Average loss: 4.2493
Iteration: 167; Percent complete: 4.2%; Average loss: 4.2389
Iteration: 168; Percent complete: 4.2%; Average loss: 4.2045
Iteration: 169; Percent complete: 4.2%; Average loss: 4.4763
Iteration: 170; Percent complete: 4.2%; Average loss: 4.3119
Iteration: 171; Percent complete: 4.3%; Average loss: 4.4006
Iteration: 172; Percent complete: 4.3%; Average loss: 4.2532
Iteration: 173; Percent complete: 4.3%; Average loss: 4.1543
Iteration: 174; Percent complete: 4.3%; Average loss: 4.2419
Iteration: 175; Percent complete: 4.4%; Average loss: 4.0877
Iteration: 176; Percent complete: 4.4%; Average loss: 4.1221
Iteration: 177; Percent complete: 4.4%; Average loss: 4.3432
Iteration: 178; Percent complete: 4.5%; Average loss: 4.1705
Iteration: 179; Percent complete: 4.5%; Average loss: 4.2470
Iteration: 180; Percent complete: 4.5%; Average loss: 4.1435
Iteration: 181; Percent complete: 4.5%; Average loss: 4.2203
Iteration: 182; Percent complete: 4.5%; Average loss: 4.1612
Iteration: 183; Percent complete: 4.6%; Average loss: 4.1335
Iteration: 184; Percent complete: 4.6%; Average loss: 4.0319
Iteration: 185; Percent complete: 4.6%; Average loss: 4.1879
Iteration: 186; Percent complete: 4.7%; Average loss: 3.8903
Iteration: 187; Percent complete: 4.7%; Average loss: 4.0495
Iteration: 188; Percent complete: 4.7%; Average loss: 4.2330
Iteration: 189; Percent complete: 4.7%; Average loss: 4.1349
Iteration: 190; Percent complete: 4.8%; Average loss: 4.2583
Iteration: 191; Percent complete: 4.8%; Average loss: 4.1122
Iteration: 192; Percent complete: 4.8%; Average loss: 4.2149
Iteration: 193; Percent complete: 4.8%; Average loss: 4.1816
Iteration: 194; Percent complete: 4.9%; Average loss: 4.4870
Iteration: 195; Percent complete: 4.9%; Average loss: 4.1052
Iteration: 196; Percent complete: 4.9%; Average loss: 3.8987
Iteration: 197; Percent complete: 4.9%; Average loss: 4.3020
Iteration: 198; Percent complete: 5.0%; Average loss: 4.0485
Iteration: 199; Percent complete: 5.0%; Average loss: 4.1462
Iteration: 200; Percent complete: 5.0%; Average loss: 4.1812
Iteration: 201; Percent complete: 5.0%; Average loss: 3.9534
Iteration: 202; Percent complete: 5.1%; Average loss: 4.0880
Iteration: 203; Percent complete: 5.1%; Average loss: 4.3541
Iteration: 204; Percent complete: 5.1%; Average loss: 4.5056
Iteration: 205; Percent complete: 5.1%; Average loss: 4.1442
Iteration: 206; Percent complete: 5.1%; Average loss: 3.9794
Iteration: 207; Percent complete: 5.2%; Average loss: 4.3374
Iteration: 208; Percent complete: 5.2%; Average loss: 3.8139
Iteration: 209; Percent complete: 5.2%; Average loss: 4.1454
Iteration: 210; Percent complete: 5.2%; Average loss: 4.2976
Iteration: 211; Percent complete: 5.3%; Average loss: 4.1819
Iteration: 212; Percent complete: 5.3%; Average loss: 4.0810
Iteration: 213; Percent complete: 5.3%; Average loss: 3.8918
Iteration: 214; Percent complete: 5.3%; Average loss: 4.0540
Iteration: 215; Percent complete: 5.4%; Average loss: 4.3341
Iteration: 216; Percent complete: 5.4%; Average loss: 4.1415
Iteration: 217; Percent complete: 5.4%; Average loss: 4.0394
Iteration: 218; Percent complete: 5.5%; Average loss: 4.1628
Iteration: 219; Percent complete: 5.5%; Average loss: 3.7505
Iteration: 220; Percent complete: 5.5%; Average loss: 4.1949
Iteration: 221; Percent complete: 5.5%; Average loss: 4.1310
Iteration: 222; Percent complete: 5.5%; Average loss: 3.8754
Iteration: 223; Percent complete: 5.6%; Average loss: 3.8692
Iteration: 224; Percent complete: 5.6%; Average loss: 4.2203
Iteration: 225; Percent complete: 5.6%; Average loss: 4.1983
Iteration: 226; Percent complete: 5.7%; Average loss: 3.9881
Iteration: 227; Percent complete: 5.7%; Average loss: 3.9298
Iteration: 228; Percent complete: 5.7%; Average loss: 4.2319
Iteration: 229; Percent complete: 5.7%; Average loss: 4.1279
Iteration: 230; Percent complete: 5.8%; Average loss: 4.0096
Iteration: 231; Percent complete: 5.8%; Average loss: 4.1995
Iteration: 232; Percent complete: 5.8%; Average loss: 4.1315
Iteration: 233; Percent complete: 5.8%; Average loss: 3.8511
Iteration: 234; Percent complete: 5.9%; Average loss: 4.1081
Iteration: 235; Percent complete: 5.9%; Average loss: 4.3616
Iteration: 236; Percent complete: 5.9%; Average loss: 3.8615
Iteration: 237; Percent complete: 5.9%; Average loss: 3.9285
Iteration: 238; Percent complete: 5.9%; Average loss: 3.9994
Iteration: 239; Percent complete: 6.0%; Average loss: 4.0291
Iteration: 240; Percent complete: 6.0%; Average loss: 4.3780
Iteration: 241; Percent complete: 6.0%; Average loss: 3.9633
Iteration: 242; Percent complete: 6.0%; Average loss: 3.9619
Iteration: 243; Percent complete: 6.1%; Average loss: 3.9780
Iteration: 244; Percent complete: 6.1%; Average loss: 3.8342
Iteration: 245; Percent complete: 6.1%; Average loss: 4.0704
Iteration: 246; Percent complete: 6.2%; Average loss: 4.0599
Iteration: 247; Percent complete: 6.2%; Average loss: 4.0480
Iteration: 248; Percent complete: 6.2%; Average loss: 4.0698
Iteration: 249; Percent complete: 6.2%; Average loss: 4.0862
Iteration: 250; Percent complete: 6.2%; Average loss: 4.2187
Iteration: 251; Percent complete: 6.3%; Average loss: 3.9466
Iteration: 252; Percent complete: 6.3%; Average loss: 4.1025
Iteration: 253; Percent complete: 6.3%; Average loss: 4.0623
Iteration: 254; Percent complete: 6.3%; Average loss: 4.2472
Iteration: 255; Percent complete: 6.4%; Average loss: 3.9243
Iteration: 256; Percent complete: 6.4%; Average loss: 3.9620
Iteration: 257; Percent complete: 6.4%; Average loss: 4.0553
Iteration: 258; Percent complete: 6.5%; Average loss: 3.9751
Iteration: 259; Percent complete: 6.5%; Average loss: 4.0414
Iteration: 260; Percent complete: 6.5%; Average loss: 3.8882
Iteration: 261; Percent complete: 6.5%; Average loss: 4.1518
Iteration: 262; Percent complete: 6.6%; Average loss: 4.1304
Iteration: 263; Percent complete: 6.6%; Average loss: 4.2028
Iteration: 264; Percent complete: 6.6%; Average loss: 3.9345
Iteration: 265; Percent complete: 6.6%; Average loss: 3.9047
Iteration: 266; Percent complete: 6.7%; Average loss: 4.1461
Iteration: 267; Percent complete: 6.7%; Average loss: 3.8547
Iteration: 268; Percent complete: 6.7%; Average loss: 4.0279
Iteration: 269; Percent complete: 6.7%; Average loss: 4.1638
Iteration: 270; Percent complete: 6.8%; Average loss: 3.8026
Iteration: 271; Percent complete: 6.8%; Average loss: 4.0010
Iteration: 272; Percent complete: 6.8%; Average loss: 3.7556
Iteration: 273; Percent complete: 6.8%; Average loss: 4.1937
Iteration: 274; Percent complete: 6.9%; Average loss: 4.1635
Iteration: 275; Percent complete: 6.9%; Average loss: 3.8932
Iteration: 276; Percent complete: 6.9%; Average loss: 3.6934
Iteration: 277; Percent complete: 6.9%; Average loss: 4.0370
Iteration: 278; Percent complete: 7.0%; Average loss: 3.9497
Iteration: 279; Percent complete: 7.0%; Average loss: 3.8581
Iteration: 280; Percent complete: 7.0%; Average loss: 4.1665
Iteration: 281; Percent complete: 7.0%; Average loss: 4.0340
Iteration: 282; Percent complete: 7.0%; Average loss: 4.0830
Iteration: 283; Percent complete: 7.1%; Average loss: 3.9049
Iteration: 284; Percent complete: 7.1%; Average loss: 4.0809
Iteration: 285; Percent complete: 7.1%; Average loss: 3.8332
Iteration: 286; Percent complete: 7.1%; Average loss: 3.7919
Iteration: 287; Percent complete: 7.2%; Average loss: 3.5050
Iteration: 288; Percent complete: 7.2%; Average loss: 4.0305
Iteration: 289; Percent complete: 7.2%; Average loss: 3.9768
Iteration: 290; Percent complete: 7.2%; Average loss: 3.8893
Iteration: 291; Percent complete: 7.3%; Average loss: 3.8792
Iteration: 292; Percent complete: 7.3%; Average loss: 3.9273
Iteration: 293; Percent complete: 7.3%; Average loss: 3.6877
Iteration: 294; Percent complete: 7.3%; Average loss: 4.1445
Iteration: 295; Percent complete: 7.4%; Average loss: 3.9141
Iteration: 296; Percent complete: 7.4%; Average loss: 3.8690
Iteration: 297; Percent complete: 7.4%; Average loss: 4.0055
Iteration: 298; Percent complete: 7.4%; Average loss: 3.9307
Iteration: 299; Percent complete: 7.5%; Average loss: 3.8353
Iteration: 300; Percent complete: 7.5%; Average loss: 4.0568
Iteration: 301; Percent complete: 7.5%; Average loss: 3.9099
Iteration: 302; Percent complete: 7.5%; Average loss: 3.9484
Iteration: 303; Percent complete: 7.6%; Average loss: 4.0159
Iteration: 304; Percent complete: 7.6%; Average loss: 3.8265
Iteration: 305; Percent complete: 7.6%; Average loss: 3.8975
Iteration: 306; Percent complete: 7.6%; Average loss: 3.6030
Iteration: 307; Percent complete: 7.7%; Average loss: 4.0749
Iteration: 308; Percent complete: 7.7%; Average loss: 3.8971
Iteration: 309; Percent complete: 7.7%; Average loss: 3.8870
Iteration: 310; Percent complete: 7.8%; Average loss: 4.0134
Iteration: 311; Percent complete: 7.8%; Average loss: 3.9102
Iteration: 312; Percent complete: 7.8%; Average loss: 3.8669
Iteration: 313; Percent complete: 7.8%; Average loss: 3.7283
Iteration: 314; Percent complete: 7.8%; Average loss: 3.6276
Iteration: 315; Percent complete: 7.9%; Average loss: 3.7574
Iteration: 316; Percent complete: 7.9%; Average loss: 4.0053
Iteration: 317; Percent complete: 7.9%; Average loss: 4.0940
Iteration: 318; Percent complete: 8.0%; Average loss: 4.0071
Iteration: 319; Percent complete: 8.0%; Average loss: 3.8063
Iteration: 320; Percent complete: 8.0%; Average loss: 3.6326
Iteration: 321; Percent complete: 8.0%; Average loss: 4.0798
Iteration: 322; Percent complete: 8.1%; Average loss: 3.8873
Iteration: 323; Percent complete: 8.1%; Average loss: 3.8080
Iteration: 324; Percent complete: 8.1%; Average loss: 3.9043
Iteration: 325; Percent complete: 8.1%; Average loss: 3.9297
Iteration: 326; Percent complete: 8.2%; Average loss: 3.8290
Iteration: 327; Percent complete: 8.2%; Average loss: 3.8424
Iteration: 328; Percent complete: 8.2%; Average loss: 4.0637
Iteration: 329; Percent complete: 8.2%; Average loss: 4.1607
Iteration: 330; Percent complete: 8.2%; Average loss: 3.8039
Iteration: 331; Percent complete: 8.3%; Average loss: 3.5755
Iteration: 332; Percent complete: 8.3%; Average loss: 3.8203
Iteration: 333; Percent complete: 8.3%; Average loss: 3.7876
Iteration: 334; Percent complete: 8.3%; Average loss: 3.8371
Iteration: 335; Percent complete: 8.4%; Average loss: 3.9825
Iteration: 336; Percent complete: 8.4%; Average loss: 3.8021
Iteration: 337; Percent complete: 8.4%; Average loss: 3.7078
Iteration: 338; Percent complete: 8.5%; Average loss: 3.7965
Iteration: 339; Percent complete: 8.5%; Average loss: 3.7356
Iteration: 340; Percent complete: 8.5%; Average loss: 4.1032
Iteration: 341; Percent complete: 8.5%; Average loss: 3.7303
Iteration: 342; Percent complete: 8.6%; Average loss: 3.9257
Iteration: 343; Percent complete: 8.6%; Average loss: 3.9932
Iteration: 344; Percent complete: 8.6%; Average loss: 3.8309
Iteration: 345; Percent complete: 8.6%; Average loss: 3.8157
Iteration: 346; Percent complete: 8.6%; Average loss: 4.2014
Iteration: 347; Percent complete: 8.7%; Average loss: 3.7933
Iteration: 348; Percent complete: 8.7%; Average loss: 3.8170
Iteration: 349; Percent complete: 8.7%; Average loss: 3.9705
Iteration: 350; Percent complete: 8.8%; Average loss: 4.1438
Iteration: 351; Percent complete: 8.8%; Average loss: 3.7620
Iteration: 352; Percent complete: 8.8%; Average loss: 3.8290
Iteration: 353; Percent complete: 8.8%; Average loss: 4.0410
Iteration: 354; Percent complete: 8.8%; Average loss: 4.1317
Iteration: 355; Percent complete: 8.9%; Average loss: 3.8459
Iteration: 356; Percent complete: 8.9%; Average loss: 4.0127
Iteration: 357; Percent complete: 8.9%; Average loss: 3.8084
Iteration: 358; Percent complete: 8.9%; Average loss: 3.7608
Iteration: 359; Percent complete: 9.0%; Average loss: 3.7037
Iteration: 360; Percent complete: 9.0%; Average loss: 3.7931
Iteration: 361; Percent complete: 9.0%; Average loss: 3.9486
Iteration: 362; Percent complete: 9.0%; Average loss: 3.8238
Iteration: 363; Percent complete: 9.1%; Average loss: 4.0060
Iteration: 364; Percent complete: 9.1%; Average loss: 4.0815
Iteration: 365; Percent complete: 9.1%; Average loss: 3.5604
Iteration: 366; Percent complete: 9.2%; Average loss: 3.8567
Iteration: 367; Percent complete: 9.2%; Average loss: 3.9598
Iteration: 368; Percent complete: 9.2%; Average loss: 3.8365
Iteration: 369; Percent complete: 9.2%; Average loss: 4.0385
Iteration: 370; Percent complete: 9.2%; Average loss: 3.8534
Iteration: 371; Percent complete: 9.3%; Average loss: 3.9619
Iteration: 372; Percent complete: 9.3%; Average loss: 3.6625
Iteration: 373; Percent complete: 9.3%; Average loss: 3.7770
Iteration: 374; Percent complete: 9.3%; Average loss: 4.0254
Iteration: 375; Percent complete: 9.4%; Average loss: 3.7801
Iteration: 376; Percent complete: 9.4%; Average loss: 3.7729
Iteration: 377; Percent complete: 9.4%; Average loss: 3.6190
Iteration: 378; Percent complete: 9.4%; Average loss: 3.8711
Iteration: 379; Percent complete: 9.5%; Average loss: 4.0142
Iteration: 380; Percent complete: 9.5%; Average loss: 3.6725
Iteration: 381; Percent complete: 9.5%; Average loss: 3.7654
Iteration: 382; Percent complete: 9.6%; Average loss: 3.6124
Iteration: 383; Percent complete: 9.6%; Average loss: 3.4957
Iteration: 384; Percent complete: 9.6%; Average loss: 3.7750
Iteration: 385; Percent complete: 9.6%; Average loss: 4.0406
Iteration: 386; Percent complete: 9.7%; Average loss: 3.9162
Iteration: 387; Percent complete: 9.7%; Average loss: 3.8136
Iteration: 388; Percent complete: 9.7%; Average loss: 3.9107
Iteration: 389; Percent complete: 9.7%; Average loss: 3.9300
Iteration: 390; Percent complete: 9.8%; Average loss: 3.8445
Iteration: 391; Percent complete: 9.8%; Average loss: 3.8330
Iteration: 392; Percent complete: 9.8%; Average loss: 3.5156
Iteration: 393; Percent complete: 9.8%; Average loss: 3.8662
Iteration: 394; Percent complete: 9.8%; Average loss: 3.9397
Iteration: 395; Percent complete: 9.9%; Average loss: 3.8498
Iteration: 396; Percent complete: 9.9%; Average loss: 3.9313
Iteration: 397; Percent complete: 9.9%; Average loss: 3.5495
Iteration: 398; Percent complete: 10.0%; Average loss: 3.9890
Iteration: 399; Percent complete: 10.0%; Average loss: 3.3176
Iteration: 400; Percent complete: 10.0%; Average loss: 3.6557
Iteration: 401; Percent complete: 10.0%; Average loss: 4.0952
Iteration: 402; Percent complete: 10.1%; Average loss: 3.8098
Iteration: 403; Percent complete: 10.1%; Average loss: 3.6158
Iteration: 404; Percent complete: 10.1%; Average loss: 3.6954
Iteration: 405; Percent complete: 10.1%; Average loss: 3.5176
Iteration: 406; Percent complete: 10.2%; Average loss: 3.8116
Iteration: 407; Percent complete: 10.2%; Average loss: 4.0575
Iteration: 408; Percent complete: 10.2%; Average loss: 3.8834
Iteration: 409; Percent complete: 10.2%; Average loss: 3.9984
Iteration: 410; Percent complete: 10.2%; Average loss: 3.8630
Iteration: 411; Percent complete: 10.3%; Average loss: 4.0027
Iteration: 412; Percent complete: 10.3%; Average loss: 3.9732
Iteration: 413; Percent complete: 10.3%; Average loss: 3.9559
Iteration: 414; Percent complete: 10.3%; Average loss: 3.8754
Iteration: 415; Percent complete: 10.4%; Average loss: 4.0068
Iteration: 416; Percent complete: 10.4%; Average loss: 3.8877
Iteration: 417; Percent complete: 10.4%; Average loss: 3.8971
Iteration: 418; Percent complete: 10.4%; Average loss: 3.7083
Iteration: 419; Percent complete: 10.5%; Average loss: 3.7787
Iteration: 420; Percent complete: 10.5%; Average loss: 3.7614
Iteration: 421; Percent complete: 10.5%; Average loss: 3.7064
Iteration: 422; Percent complete: 10.5%; Average loss: 3.8114
Iteration: 423; Percent complete: 10.6%; Average loss: 3.7065
Iteration: 424; Percent complete: 10.6%; Average loss: 3.7077
Iteration: 425; Percent complete: 10.6%; Average loss: 3.5929
Iteration: 426; Percent complete: 10.7%; Average loss: 3.6716
Iteration: 427; Percent complete: 10.7%; Average loss: 3.4459
Iteration: 428; Percent complete: 10.7%; Average loss: 4.1008
Iteration: 429; Percent complete: 10.7%; Average loss: 3.5332
Iteration: 430; Percent complete: 10.8%; Average loss: 3.8122
Iteration: 431; Percent complete: 10.8%; Average loss: 3.4046
Iteration: 432; Percent complete: 10.8%; Average loss: 3.8666
Iteration: 433; Percent complete: 10.8%; Average loss: 3.8032
Iteration: 434; Percent complete: 10.8%; Average loss: 3.6778
Iteration: 435; Percent complete: 10.9%; Average loss: 3.5258
Iteration: 436; Percent complete: 10.9%; Average loss: 4.1024
Iteration: 437; Percent complete: 10.9%; Average loss: 3.9715
Iteration: 438; Percent complete: 10.9%; Average loss: 3.9079
Iteration: 439; Percent complete: 11.0%; Average loss: 3.8847
Iteration: 440; Percent complete: 11.0%; Average loss: 3.5484
Iteration: 441; Percent complete: 11.0%; Average loss: 3.6256
Iteration: 442; Percent complete: 11.1%; Average loss: 3.7091
Iteration: 443; Percent complete: 11.1%; Average loss: 3.8648
Iteration: 444; Percent complete: 11.1%; Average loss: 4.0930
Iteration: 445; Percent complete: 11.1%; Average loss: 3.7707
Iteration: 446; Percent complete: 11.2%; Average loss: 3.8253
Iteration: 447; Percent complete: 11.2%; Average loss: 3.7380
Iteration: 448; Percent complete: 11.2%; Average loss: 3.6475
Iteration: 449; Percent complete: 11.2%; Average loss: 3.7344
Iteration: 450; Percent complete: 11.2%; Average loss: 3.8756
Iteration: 451; Percent complete: 11.3%; Average loss: 3.7633
Iteration: 452; Percent complete: 11.3%; Average loss: 3.7012
Iteration: 453; Percent complete: 11.3%; Average loss: 3.5666
Iteration: 454; Percent complete: 11.3%; Average loss: 3.4895
Iteration: 455; Percent complete: 11.4%; Average loss: 4.0164
Iteration: 456; Percent complete: 11.4%; Average loss: 3.9388
Iteration: 457; Percent complete: 11.4%; Average loss: 3.6576
Iteration: 458; Percent complete: 11.5%; Average loss: 3.5237
Iteration: 459; Percent complete: 11.5%; Average loss: 3.9305
Iteration: 460; Percent complete: 11.5%; Average loss: 3.6557
Iteration: 461; Percent complete: 11.5%; Average loss: 3.7201
Iteration: 462; Percent complete: 11.6%; Average loss: 3.8532
Iteration: 463; Percent complete: 11.6%; Average loss: 3.6276
Iteration: 464; Percent complete: 11.6%; Average loss: 3.8499
Iteration: 465; Percent complete: 11.6%; Average loss: 3.7729
Iteration: 466; Percent complete: 11.7%; Average loss: 3.9641
Iteration: 467; Percent complete: 11.7%; Average loss: 3.6395
Iteration: 468; Percent complete: 11.7%; Average loss: 3.7219
Iteration: 469; Percent complete: 11.7%; Average loss: 3.6572
Iteration: 470; Percent complete: 11.8%; Average loss: 3.8740
Iteration: 471; Percent complete: 11.8%; Average loss: 3.6155
Iteration: 472; Percent complete: 11.8%; Average loss: 3.5467
Iteration: 473; Percent complete: 11.8%; Average loss: 3.7896
Iteration: 474; Percent complete: 11.8%; Average loss: 3.5869
Iteration: 475; Percent complete: 11.9%; Average loss: 3.6987
Iteration: 476; Percent complete: 11.9%; Average loss: 3.5364
Iteration: 477; Percent complete: 11.9%; Average loss: 3.7209
Iteration: 478; Percent complete: 11.9%; Average loss: 3.7101
Iteration: 479; Percent complete: 12.0%; Average loss: 3.8650
Iteration: 480; Percent complete: 12.0%; Average loss: 3.6450
Iteration: 481; Percent complete: 12.0%; Average loss: 3.8937
Iteration: 482; Percent complete: 12.0%; Average loss: 3.9716
Iteration: 483; Percent complete: 12.1%; Average loss: 3.8603
Iteration: 484; Percent complete: 12.1%; Average loss: 3.8289
Iteration: 485; Percent complete: 12.1%; Average loss: 3.4791
Iteration: 486; Percent complete: 12.2%; Average loss: 3.6609
Iteration: 487; Percent complete: 12.2%; Average loss: 3.7869
Iteration: 488; Percent complete: 12.2%; Average loss: 3.8097
Iteration: 489; Percent complete: 12.2%; Average loss: 3.5806
Iteration: 490; Percent complete: 12.2%; Average loss: 3.5751
Iteration: 491; Percent complete: 12.3%; Average loss: 3.6585
Iteration: 492; Percent complete: 12.3%; Average loss: 3.8953
Iteration: 493; Percent complete: 12.3%; Average loss: 3.7717
Iteration: 494; Percent complete: 12.3%; Average loss: 3.8508
Iteration: 495; Percent complete: 12.4%; Average loss: 3.7069
Iteration: 496; Percent complete: 12.4%; Average loss: 3.8791
Iteration: 497; Percent complete: 12.4%; Average loss: 3.6629
Iteration: 498; Percent complete: 12.4%; Average loss: 3.6385
Iteration: 499; Percent complete: 12.5%; Average loss: 3.6278
Iteration: 500; Percent complete: 12.5%; Average loss: 3.7384
Iteration: 501; Percent complete: 12.5%; Average loss: 3.5171
Iteration: 502; Percent complete: 12.6%; Average loss: 3.8080
Iteration: 503; Percent complete: 12.6%; Average loss: 3.6109
Iteration: 504; Percent complete: 12.6%; Average loss: 3.8207
Iteration: 505; Percent complete: 12.6%; Average loss: 3.7491
Iteration: 506; Percent complete: 12.7%; Average loss: 3.5856
Iteration: 507; Percent complete: 12.7%; Average loss: 4.0429
Iteration: 508; Percent complete: 12.7%; Average loss: 3.6104
Iteration: 509; Percent complete: 12.7%; Average loss: 3.6777
Iteration: 510; Percent complete: 12.8%; Average loss: 3.9257
Iteration: 511; Percent complete: 12.8%; Average loss: 3.9674
Iteration: 512; Percent complete: 12.8%; Average loss: 4.0506
Iteration: 513; Percent complete: 12.8%; Average loss: 3.8899
Iteration: 514; Percent complete: 12.8%; Average loss: 3.7715
Iteration: 515; Percent complete: 12.9%; Average loss: 3.4001
Iteration: 516; Percent complete: 12.9%; Average loss: 3.7521
Iteration: 517; Percent complete: 12.9%; Average loss: 3.5753
Iteration: 518; Percent complete: 13.0%; Average loss: 3.6523
Iteration: 519; Percent complete: 13.0%; Average loss: 3.8037
Iteration: 520; Percent complete: 13.0%; Average loss: 3.8359
Iteration: 521; Percent complete: 13.0%; Average loss: 3.7518
Iteration: 522; Percent complete: 13.1%; Average loss: 3.8039
Iteration: 523; Percent complete: 13.1%; Average loss: 4.0000
Iteration: 524; Percent complete: 13.1%; Average loss: 3.6198
Iteration: 525; Percent complete: 13.1%; Average loss: 3.7087
Iteration: 526; Percent complete: 13.2%; Average loss: 3.5222
Iteration: 527; Percent complete: 13.2%; Average loss: 3.7202
Iteration: 528; Percent complete: 13.2%; Average loss: 3.6809
Iteration: 529; Percent complete: 13.2%; Average loss: 3.8738
Iteration: 530; Percent complete: 13.2%; Average loss: 3.9853
Iteration: 531; Percent complete: 13.3%; Average loss: 3.9304
Iteration: 532; Percent complete: 13.3%; Average loss: 3.7952
Iteration: 533; Percent complete: 13.3%; Average loss: 3.5060
Iteration: 534; Percent complete: 13.4%; Average loss: 3.6853
Iteration: 535; Percent complete: 13.4%; Average loss: 3.8735
Iteration: 536; Percent complete: 13.4%; Average loss: 3.6824
Iteration: 537; Percent complete: 13.4%; Average loss: 3.7468
Iteration: 538; Percent complete: 13.5%; Average loss: 3.5391
Iteration: 539; Percent complete: 13.5%; Average loss: 3.7282
Iteration: 540; Percent complete: 13.5%; Average loss: 3.5981
Iteration: 541; Percent complete: 13.5%; Average loss: 3.7984
Iteration: 542; Percent complete: 13.6%; Average loss: 3.7997
Iteration: 543; Percent complete: 13.6%; Average loss: 3.6686
Iteration: 544; Percent complete: 13.6%; Average loss: 3.9047
Iteration: 545; Percent complete: 13.6%; Average loss: 3.6045
Iteration: 546; Percent complete: 13.7%; Average loss: 3.6233
Iteration: 547; Percent complete: 13.7%; Average loss: 3.7402
Iteration: 548; Percent complete: 13.7%; Average loss: 3.8524
Iteration: 549; Percent complete: 13.7%; Average loss: 3.4094
Iteration: 550; Percent complete: 13.8%; Average loss: 3.8688
Iteration: 551; Percent complete: 13.8%; Average loss: 3.7893
Iteration: 552; Percent complete: 13.8%; Average loss: 3.8568
Iteration: 553; Percent complete: 13.8%; Average loss: 3.9488
Iteration: 554; Percent complete: 13.9%; Average loss: 3.6641
Iteration: 555; Percent complete: 13.9%; Average loss: 3.6934
Iteration: 556; Percent complete: 13.9%; Average loss: 3.7508
Iteration: 557; Percent complete: 13.9%; Average loss: 3.7550
Iteration: 558; Percent complete: 14.0%; Average loss: 3.7563
Iteration: 559; Percent complete: 14.0%; Average loss: 3.4859
Iteration: 560; Percent complete: 14.0%; Average loss: 3.7684
Iteration: 561; Percent complete: 14.0%; Average loss: 3.7683
Iteration: 562; Percent complete: 14.1%; Average loss: 3.9050
Iteration: 563; Percent complete: 14.1%; Average loss: 3.6082
Iteration: 564; Percent complete: 14.1%; Average loss: 3.8594
Iteration: 565; Percent complete: 14.1%; Average loss: 3.5786
Iteration: 566; Percent complete: 14.1%; Average loss: 3.6175
Iteration: 567; Percent complete: 14.2%; Average loss: 3.7797
Iteration: 568; Percent complete: 14.2%; Average loss: 3.5750
Iteration: 569; Percent complete: 14.2%; Average loss: 3.7774
Iteration: 570; Percent complete: 14.2%; Average loss: 3.5178
Iteration: 571; Percent complete: 14.3%; Average loss: 3.5666
Iteration: 572; Percent complete: 14.3%; Average loss: 3.7323
Iteration: 573; Percent complete: 14.3%; Average loss: 3.7744
Iteration: 574; Percent complete: 14.3%; Average loss: 3.3974
Iteration: 575; Percent complete: 14.4%; Average loss: 3.6833
Iteration: 576; Percent complete: 14.4%; Average loss: 3.7777
Iteration: 577; Percent complete: 14.4%; Average loss: 3.8178
Iteration: 578; Percent complete: 14.4%; Average loss: 3.5840
Iteration: 579; Percent complete: 14.5%; Average loss: 3.5485
Iteration: 580; Percent complete: 14.5%; Average loss: 3.8730
Iteration: 581; Percent complete: 14.5%; Average loss: 3.8940
Iteration: 582; Percent complete: 14.5%; Average loss: 3.5914
Iteration: 583; Percent complete: 14.6%; Average loss: 3.6704
Iteration: 584; Percent complete: 14.6%; Average loss: 3.7930
Iteration: 585; Percent complete: 14.6%; Average loss: 3.6630
Iteration: 586; Percent complete: 14.6%; Average loss: 3.6569
Iteration: 587; Percent complete: 14.7%; Average loss: 3.7160
Iteration: 588; Percent complete: 14.7%; Average loss: 3.4996
Iteration: 589; Percent complete: 14.7%; Average loss: 3.8272
Iteration: 590; Percent complete: 14.8%; Average loss: 3.6682
Iteration: 591; Percent complete: 14.8%; Average loss: 3.6012
Iteration: 592; Percent complete: 14.8%; Average loss: 3.7329
Iteration: 593; Percent complete: 14.8%; Average loss: 3.6346
Iteration: 594; Percent complete: 14.8%; Average loss: 3.6291
Iteration: 595; Percent complete: 14.9%; Average loss: 3.5679
Iteration: 596; Percent complete: 14.9%; Average loss: 3.5779
Iteration: 597; Percent complete: 14.9%; Average loss: 3.5496
Iteration: 598; Percent complete: 14.9%; Average loss: 3.7969
Iteration: 599; Percent complete: 15.0%; Average loss: 3.4528
Iteration: 600; Percent complete: 15.0%; Average loss: 3.7705
Iteration: 601; Percent complete: 15.0%; Average loss: 3.5590
Iteration: 602; Percent complete: 15.0%; Average loss: 3.5960
Iteration: 603; Percent complete: 15.1%; Average loss: 3.6093
Iteration: 604; Percent complete: 15.1%; Average loss: 3.7514
Iteration: 605; Percent complete: 15.1%; Average loss: 3.7414
Iteration: 606; Percent complete: 15.2%; Average loss: 3.5306
Iteration: 607; Percent complete: 15.2%; Average loss: 3.5596
Iteration: 608; Percent complete: 15.2%; Average loss: 3.6473
Iteration: 609; Percent complete: 15.2%; Average loss: 3.9049
Iteration: 610; Percent complete: 15.2%; Average loss: 3.5952
Iteration: 611; Percent complete: 15.3%; Average loss: 3.7238
Iteration: 612; Percent complete: 15.3%; Average loss: 3.5230
Iteration: 613; Percent complete: 15.3%; Average loss: 3.6684
Iteration: 614; Percent complete: 15.3%; Average loss: 3.6766
Iteration: 615; Percent complete: 15.4%; Average loss: 3.7386
Iteration: 616; Percent complete: 15.4%; Average loss: 3.6785
Iteration: 617; Percent complete: 15.4%; Average loss: 3.5180
Iteration: 618; Percent complete: 15.4%; Average loss: 3.7043
Iteration: 619; Percent complete: 15.5%; Average loss: 3.8317
Iteration: 620; Percent complete: 15.5%; Average loss: 3.5057
Iteration: 621; Percent complete: 15.5%; Average loss: 3.6357
Iteration: 622; Percent complete: 15.6%; Average loss: 3.7311
Iteration: 623; Percent complete: 15.6%; Average loss: 3.7211
Iteration: 624; Percent complete: 15.6%; Average loss: 3.5660
Iteration: 625; Percent complete: 15.6%; Average loss: 3.3923
Iteration: 626; Percent complete: 15.7%; Average loss: 3.8520
Iteration: 627; Percent complete: 15.7%; Average loss: 3.8315
Iteration: 628; Percent complete: 15.7%; Average loss: 3.6607
Iteration: 629; Percent complete: 15.7%; Average loss: 3.8664
Iteration: 630; Percent complete: 15.8%; Average loss: 3.9574
Iteration: 631; Percent complete: 15.8%; Average loss: 3.4593
Iteration: 632; Percent complete: 15.8%; Average loss: 3.7166
Iteration: 633; Percent complete: 15.8%; Average loss: 3.7794
Iteration: 634; Percent complete: 15.8%; Average loss: 3.5726
Iteration: 635; Percent complete: 15.9%; Average loss: 3.6716
Iteration: 636; Percent complete: 15.9%; Average loss: 3.5634
Iteration: 637; Percent complete: 15.9%; Average loss: 3.5444
Iteration: 638; Percent complete: 16.0%; Average loss: 3.9334
Iteration: 639; Percent complete: 16.0%; Average loss: 3.6855
Iteration: 640; Percent complete: 16.0%; Average loss: 3.4293
Iteration: 641; Percent complete: 16.0%; Average loss: 3.7961
Iteration: 642; Percent complete: 16.1%; Average loss: 3.7726
Iteration: 643; Percent complete: 16.1%; Average loss: 3.6211
Iteration: 644; Percent complete: 16.1%; Average loss: 3.5342
Iteration: 645; Percent complete: 16.1%; Average loss: 3.7779
Iteration: 646; Percent complete: 16.2%; Average loss: 3.6823
Iteration: 647; Percent complete: 16.2%; Average loss: 3.5495
Iteration: 648; Percent complete: 16.2%; Average loss: 3.5750
Iteration: 649; Percent complete: 16.2%; Average loss: 3.4393
Iteration: 650; Percent complete: 16.2%; Average loss: 3.7575
Iteration: 651; Percent complete: 16.3%; Average loss: 3.4783
Iteration: 652; Percent complete: 16.3%; Average loss: 3.6058
Iteration: 653; Percent complete: 16.3%; Average loss: 3.8463
Iteration: 654; Percent complete: 16.4%; Average loss: 3.6970
Iteration: 655; Percent complete: 16.4%; Average loss: 3.6046
Iteration: 656; Percent complete: 16.4%; Average loss: 3.7191
Iteration: 657; Percent complete: 16.4%; Average loss: 3.4647
Iteration: 658; Percent complete: 16.4%; Average loss: 3.6327
Iteration: 659; Percent complete: 16.5%; Average loss: 3.5413
Iteration: 660; Percent complete: 16.5%; Average loss: 3.6346
Iteration: 661; Percent complete: 16.5%; Average loss: 3.6465
Iteration: 662; Percent complete: 16.6%; Average loss: 3.5484
Iteration: 663; Percent complete: 16.6%; Average loss: 3.9470
Iteration: 664; Percent complete: 16.6%; Average loss: 3.7282
Iteration: 665; Percent complete: 16.6%; Average loss: 3.7753
Iteration: 666; Percent complete: 16.7%; Average loss: 3.5770
Iteration: 667; Percent complete: 16.7%; Average loss: 3.7366
Iteration: 668; Percent complete: 16.7%; Average loss: 3.7016
Iteration: 669; Percent complete: 16.7%; Average loss: 3.8357
Iteration: 670; Percent complete: 16.8%; Average loss: 3.5739
Iteration: 671; Percent complete: 16.8%; Average loss: 3.6595
Iteration: 672; Percent complete: 16.8%; Average loss: 3.8988
Iteration: 673; Percent complete: 16.8%; Average loss: 3.8008
Iteration: 674; Percent complete: 16.9%; Average loss: 3.4702
Iteration: 675; Percent complete: 16.9%; Average loss: 3.6384
Iteration: 676; Percent complete: 16.9%; Average loss: 3.8778
Iteration: 677; Percent complete: 16.9%; Average loss: 3.3870
Iteration: 678; Percent complete: 17.0%; Average loss: 3.6841
Iteration: 679; Percent complete: 17.0%; Average loss: 3.7699
Iteration: 680; Percent complete: 17.0%; Average loss: 3.8130
Iteration: 681; Percent complete: 17.0%; Average loss: 3.7629
Iteration: 682; Percent complete: 17.1%; Average loss: 3.5801
Iteration: 683; Percent complete: 17.1%; Average loss: 3.9418
Iteration: 684; Percent complete: 17.1%; Average loss: 3.5579
Iteration: 685; Percent complete: 17.1%; Average loss: 3.7449
Iteration: 686; Percent complete: 17.2%; Average loss: 3.7102
Iteration: 687; Percent complete: 17.2%; Average loss: 3.6320
Iteration: 688; Percent complete: 17.2%; Average loss: 3.3382
Iteration: 689; Percent complete: 17.2%; Average loss: 3.8538
Iteration: 690; Percent complete: 17.2%; Average loss: 3.6852
Iteration: 691; Percent complete: 17.3%; Average loss: 3.7397
Iteration: 692; Percent complete: 17.3%; Average loss: 3.3392
Iteration: 693; Percent complete: 17.3%; Average loss: 3.8371
Iteration: 694; Percent complete: 17.3%; Average loss: 3.6402
Iteration: 695; Percent complete: 17.4%; Average loss: 3.8674
Iteration: 696; Percent complete: 17.4%; Average loss: 3.5827
Iteration: 697; Percent complete: 17.4%; Average loss: 3.3658
Iteration: 698; Percent complete: 17.4%; Average loss: 3.6832
Iteration: 699; Percent complete: 17.5%; Average loss: 3.7427
Iteration: 700; Percent complete: 17.5%; Average loss: 3.7784
Iteration: 701; Percent complete: 17.5%; Average loss: 3.4988
Iteration: 702; Percent complete: 17.5%; Average loss: 3.7061
Iteration: 703; Percent complete: 17.6%; Average loss: 3.6004
Iteration: 704; Percent complete: 17.6%; Average loss: 3.5874
Iteration: 705; Percent complete: 17.6%; Average loss: 3.4132
Iteration: 706; Percent complete: 17.6%; Average loss: 3.5969
Iteration: 707; Percent complete: 17.7%; Average loss: 3.5648
Iteration: 708; Percent complete: 17.7%; Average loss: 3.5220
Iteration: 709; Percent complete: 17.7%; Average loss: 3.4916
Iteration: 710; Percent complete: 17.8%; Average loss: 3.7138
Iteration: 711; Percent complete: 17.8%; Average loss: 3.6902
Iteration: 712; Percent complete: 17.8%; Average loss: 3.4556
Iteration: 713; Percent complete: 17.8%; Average loss: 3.4787
Iteration: 714; Percent complete: 17.8%; Average loss: 3.6965
Iteration: 715; Percent complete: 17.9%; Average loss: 3.6139
Iteration: 716; Percent complete: 17.9%; Average loss: 3.3855
Iteration: 717; Percent complete: 17.9%; Average loss: 3.5513
Iteration: 718; Percent complete: 17.9%; Average loss: 3.4405
Iteration: 719; Percent complete: 18.0%; Average loss: 3.7319
Iteration: 720; Percent complete: 18.0%; Average loss: 3.6163
Iteration: 721; Percent complete: 18.0%; Average loss: 3.7134
Iteration: 722; Percent complete: 18.1%; Average loss: 3.6189
Iteration: 723; Percent complete: 18.1%; Average loss: 3.6989
Iteration: 724; Percent complete: 18.1%; Average loss: 3.7957
Iteration: 725; Percent complete: 18.1%; Average loss: 3.5342
Iteration: 726; Percent complete: 18.1%; Average loss: 3.3714
Iteration: 727; Percent complete: 18.2%; Average loss: 3.5367
Iteration: 728; Percent complete: 18.2%; Average loss: 3.5888
Iteration: 729; Percent complete: 18.2%; Average loss: 3.5716
Iteration: 730; Percent complete: 18.2%; Average loss: 3.6536
Iteration: 731; Percent complete: 18.3%; Average loss: 3.6755
Iteration: 732; Percent complete: 18.3%; Average loss: 3.3566
Iteration: 733; Percent complete: 18.3%; Average loss: 3.7513
Iteration: 734; Percent complete: 18.4%; Average loss: 3.4614
Iteration: 735; Percent complete: 18.4%; Average loss: 3.5590
Iteration: 736; Percent complete: 18.4%; Average loss: 3.5839
Iteration: 737; Percent complete: 18.4%; Average loss: 3.5506
Iteration: 738; Percent complete: 18.4%; Average loss: 3.3596
Iteration: 739; Percent complete: 18.5%; Average loss: 3.9398
Iteration: 740; Percent complete: 18.5%; Average loss: 3.9055
Iteration: 741; Percent complete: 18.5%; Average loss: 3.4500
Iteration: 742; Percent complete: 18.6%; Average loss: 3.7697
Iteration: 743; Percent complete: 18.6%; Average loss: 3.6409
Iteration: 744; Percent complete: 18.6%; Average loss: 3.4680
Iteration: 745; Percent complete: 18.6%; Average loss: 3.4288
Iteration: 746; Percent complete: 18.6%; Average loss: 3.7318
Iteration: 747; Percent complete: 18.7%; Average loss: 3.7366
Iteration: 748; Percent complete: 18.7%; Average loss: 3.4892
Iteration: 749; Percent complete: 18.7%; Average loss: 3.4980
Iteration: 750; Percent complete: 18.8%; Average loss: 3.6177
Iteration: 751; Percent complete: 18.8%; Average loss: 3.5624
Iteration: 752; Percent complete: 18.8%; Average loss: 3.3490
Iteration: 753; Percent complete: 18.8%; Average loss: 3.4964
Iteration: 754; Percent complete: 18.9%; Average loss: 3.5433
Iteration: 755; Percent complete: 18.9%; Average loss: 3.5425
Iteration: 756; Percent complete: 18.9%; Average loss: 3.4935
Iteration: 757; Percent complete: 18.9%; Average loss: 3.6162
Iteration: 758; Percent complete: 18.9%; Average loss: 3.5233
Iteration: 759; Percent complete: 19.0%; Average loss: 3.4903
Iteration: 760; Percent complete: 19.0%; Average loss: 3.6448
Iteration: 761; Percent complete: 19.0%; Average loss: 3.4129
Iteration: 762; Percent complete: 19.1%; Average loss: 3.4567
Iteration: 763; Percent complete: 19.1%; Average loss: 3.6260
Iteration: 764; Percent complete: 19.1%; Average loss: 3.6357
Iteration: 765; Percent complete: 19.1%; Average loss: 3.5911
Iteration: 766; Percent complete: 19.1%; Average loss: 3.3860
Iteration: 767; Percent complete: 19.2%; Average loss: 3.6708
Iteration: 768; Percent complete: 19.2%; Average loss: 3.5071
Iteration: 769; Percent complete: 19.2%; Average loss: 3.8534
Iteration: 770; Percent complete: 19.2%; Average loss: 3.5214
Iteration: 771; Percent complete: 19.3%; Average loss: 3.5695
Iteration: 772; Percent complete: 19.3%; Average loss: 3.3206
Iteration: 773; Percent complete: 19.3%; Average loss: 3.9209
Iteration: 774; Percent complete: 19.4%; Average loss: 3.4900
Iteration: 775; Percent complete: 19.4%; Average loss: 3.8013
Iteration: 776; Percent complete: 19.4%; Average loss: 3.4229
Iteration: 777; Percent complete: 19.4%; Average loss: 3.3376
Iteration: 778; Percent complete: 19.4%; Average loss: 3.2574
Iteration: 779; Percent complete: 19.5%; Average loss: 3.3396
Iteration: 780; Percent complete: 19.5%; Average loss: 3.1391
Iteration: 781; Percent complete: 19.5%; Average loss: 3.5524
Iteration: 782; Percent complete: 19.6%; Average loss: 3.4498
Iteration: 783; Percent complete: 19.6%; Average loss: 3.5892
Iteration: 784; Percent complete: 19.6%; Average loss: 3.5821
Iteration: 785; Percent complete: 19.6%; Average loss: 3.4151
Iteration: 786; Percent complete: 19.7%; Average loss: 3.5051
Iteration: 787; Percent complete: 19.7%; Average loss: 3.4656
Iteration: 788; Percent complete: 19.7%; Average loss: 3.7464
Iteration: 789; Percent complete: 19.7%; Average loss: 3.6749
Iteration: 790; Percent complete: 19.8%; Average loss: 3.4393
Iteration: 791; Percent complete: 19.8%; Average loss: 3.4857
Iteration: 792; Percent complete: 19.8%; Average loss: 3.4733
Iteration: 793; Percent complete: 19.8%; Average loss: 3.4619
Iteration: 794; Percent complete: 19.9%; Average loss: 3.7161
Iteration: 795; Percent complete: 19.9%; Average loss: 3.6916
Iteration: 796; Percent complete: 19.9%; Average loss: 3.5518
Iteration: 797; Percent complete: 19.9%; Average loss: 3.5397
Iteration: 798; Percent complete: 20.0%; Average loss: 3.7605
Iteration: 799; Percent complete: 20.0%; Average loss: 3.3767
Iteration: 800; Percent complete: 20.0%; Average loss: 3.3425
Iteration: 801; Percent complete: 20.0%; Average loss: 3.6607
Iteration: 802; Percent complete: 20.1%; Average loss: 3.5036
Iteration: 803; Percent complete: 20.1%; Average loss: 3.6164
Iteration: 804; Percent complete: 20.1%; Average loss: 3.6282
Iteration: 805; Percent complete: 20.1%; Average loss: 3.3015
Iteration: 806; Percent complete: 20.2%; Average loss: 3.4481
Iteration: 807; Percent complete: 20.2%; Average loss: 3.2945
Iteration: 808; Percent complete: 20.2%; Average loss: 3.4845
Iteration: 809; Percent complete: 20.2%; Average loss: 3.7300
Iteration: 810; Percent complete: 20.2%; Average loss: 3.3230
Iteration: 811; Percent complete: 20.3%; Average loss: 3.4715
Iteration: 812; Percent complete: 20.3%; Average loss: 3.7406
Iteration: 813; Percent complete: 20.3%; Average loss: 3.5987
Iteration: 814; Percent complete: 20.3%; Average loss: 3.6548
Iteration: 815; Percent complete: 20.4%; Average loss: 3.5691
Iteration: 816; Percent complete: 20.4%; Average loss: 3.4793
Iteration: 817; Percent complete: 20.4%; Average loss: 3.7156
Iteration: 818; Percent complete: 20.4%; Average loss: 3.6629
Iteration: 819; Percent complete: 20.5%; Average loss: 3.4442
Iteration: 820; Percent complete: 20.5%; Average loss: 3.3586
Iteration: 821; Percent complete: 20.5%; Average loss: 3.3689
Iteration: 822; Percent complete: 20.5%; Average loss: 3.6085
Iteration: 823; Percent complete: 20.6%; Average loss: 3.3575
Iteration: 824; Percent complete: 20.6%; Average loss: 3.5538
Iteration: 825; Percent complete: 20.6%; Average loss: 3.6595
Iteration: 826; Percent complete: 20.6%; Average loss: 3.2270
Iteration: 827; Percent complete: 20.7%; Average loss: 3.4557
Iteration: 828; Percent complete: 20.7%; Average loss: 3.5090
Iteration: 829; Percent complete: 20.7%; Average loss: 3.5863
Iteration: 830; Percent complete: 20.8%; Average loss: 3.3397
Iteration: 831; Percent complete: 20.8%; Average loss: 3.6224
Iteration: 832; Percent complete: 20.8%; Average loss: 3.4824
Iteration: 833; Percent complete: 20.8%; Average loss: 3.5428
Iteration: 834; Percent complete: 20.8%; Average loss: 3.4644
Iteration: 835; Percent complete: 20.9%; Average loss: 3.4857
Iteration: 836; Percent complete: 20.9%; Average loss: 3.6019
Iteration: 837; Percent complete: 20.9%; Average loss: 4.0313
Iteration: 838; Percent complete: 20.9%; Average loss: 3.3695
Iteration: 839; Percent complete: 21.0%; Average loss: 3.6107
Iteration: 840; Percent complete: 21.0%; Average loss: 3.8395
Iteration: 841; Percent complete: 21.0%; Average loss: 3.2692
Iteration: 842; Percent complete: 21.1%; Average loss: 3.6167
Iteration: 843; Percent complete: 21.1%; Average loss: 3.4677
Iteration: 844; Percent complete: 21.1%; Average loss: 3.6754
Iteration: 845; Percent complete: 21.1%; Average loss: 3.7368
Iteration: 846; Percent complete: 21.1%; Average loss: 3.4284
Iteration: 847; Percent complete: 21.2%; Average loss: 3.4353
Iteration: 848; Percent complete: 21.2%; Average loss: 3.4431
Iteration: 849; Percent complete: 21.2%; Average loss: 3.8129
Iteration: 850; Percent complete: 21.2%; Average loss: 3.5008
Iteration: 851; Percent complete: 21.3%; Average loss: 3.5038
Iteration: 852; Percent complete: 21.3%; Average loss: 3.8256
Iteration: 853; Percent complete: 21.3%; Average loss: 3.4449
Iteration: 854; Percent complete: 21.3%; Average loss: 3.7559
Iteration: 855; Percent complete: 21.4%; Average loss: 3.5090
Iteration: 856; Percent complete: 21.4%; Average loss: 3.4622
Iteration: 857; Percent complete: 21.4%; Average loss: 3.2889
Iteration: 858; Percent complete: 21.4%; Average loss: 3.5508
Iteration: 859; Percent complete: 21.5%; Average loss: 3.6091
Iteration: 860; Percent complete: 21.5%; Average loss: 3.5453
Iteration: 861; Percent complete: 21.5%; Average loss: 3.5477
Iteration: 862; Percent complete: 21.6%; Average loss: 3.7165
Iteration: 863; Percent complete: 21.6%; Average loss: 3.4953
Iteration: 864; Percent complete: 21.6%; Average loss: 3.7139
Iteration: 865; Percent complete: 21.6%; Average loss: 3.5419
Iteration: 866; Percent complete: 21.6%; Average loss: 3.5629
Iteration: 867; Percent complete: 21.7%; Average loss: 3.6388
Iteration: 868; Percent complete: 21.7%; Average loss: 3.7029
Iteration: 869; Percent complete: 21.7%; Average loss: 3.3400
Iteration: 870; Percent complete: 21.8%; Average loss: 3.6262
Iteration: 871; Percent complete: 21.8%; Average loss: 3.6103
Iteration: 872; Percent complete: 21.8%; Average loss: 3.4809
Iteration: 873; Percent complete: 21.8%; Average loss: 3.4438
Iteration: 874; Percent complete: 21.9%; Average loss: 3.4514
Iteration: 875; Percent complete: 21.9%; Average loss: 3.7070
Iteration: 876; Percent complete: 21.9%; Average loss: 3.2813
Iteration: 877; Percent complete: 21.9%; Average loss: 3.5660
Iteration: 878; Percent complete: 21.9%; Average loss: 3.6650
Iteration: 879; Percent complete: 22.0%; Average loss: 3.3871
Iteration: 880; Percent complete: 22.0%; Average loss: 3.2584
Iteration: 881; Percent complete: 22.0%; Average loss: 3.6568
Iteration: 882; Percent complete: 22.1%; Average loss: 3.6387
Iteration: 883; Percent complete: 22.1%; Average loss: 3.3470
Iteration: 884; Percent complete: 22.1%; Average loss: 3.5054
Iteration: 885; Percent complete: 22.1%; Average loss: 3.2777
Iteration: 886; Percent complete: 22.1%; Average loss: 3.5477
Iteration: 887; Percent complete: 22.2%; Average loss: 3.5653
Iteration: 888; Percent complete: 22.2%; Average loss: 3.6301
Iteration: 889; Percent complete: 22.2%; Average loss: 3.5113
Iteration: 890; Percent complete: 22.2%; Average loss: 3.5324
Iteration: 891; Percent complete: 22.3%; Average loss: 3.4610
Iteration: 892; Percent complete: 22.3%; Average loss: 3.5902
Iteration: 893; Percent complete: 22.3%; Average loss: 3.4940
Iteration: 894; Percent complete: 22.4%; Average loss: 3.7322
Iteration: 895; Percent complete: 22.4%; Average loss: 3.7156
Iteration: 896; Percent complete: 22.4%; Average loss: 3.4100
Iteration: 897; Percent complete: 22.4%; Average loss: 3.3929
Iteration: 898; Percent complete: 22.4%; Average loss: 3.5393
Iteration: 899; Percent complete: 22.5%; Average loss: 3.5685
Iteration: 900; Percent complete: 22.5%; Average loss: 3.5330
Iteration: 901; Percent complete: 22.5%; Average loss: 3.5801
Iteration: 902; Percent complete: 22.6%; Average loss: 3.6759
Iteration: 903; Percent complete: 22.6%; Average loss: 3.6003
Iteration: 904; Percent complete: 22.6%; Average loss: 3.8019
Iteration: 905; Percent complete: 22.6%; Average loss: 3.7457
Iteration: 906; Percent complete: 22.7%; Average loss: 3.6420
Iteration: 907; Percent complete: 22.7%; Average loss: 3.5594
Iteration: 908; Percent complete: 22.7%; Average loss: 3.1248
Iteration: 909; Percent complete: 22.7%; Average loss: 3.5218
Iteration: 910; Percent complete: 22.8%; Average loss: 3.4817
Iteration: 911; Percent complete: 22.8%; Average loss: 3.4053
Iteration: 912; Percent complete: 22.8%; Average loss: 3.3702
Iteration: 913; Percent complete: 22.8%; Average loss: 3.5852
Iteration: 914; Percent complete: 22.9%; Average loss: 3.3985
Iteration: 915; Percent complete: 22.9%; Average loss: 3.4500
Iteration: 916; Percent complete: 22.9%; Average loss: 3.8993
Iteration: 917; Percent complete: 22.9%; Average loss: 3.3578
Iteration: 918; Percent complete: 22.9%; Average loss: 3.5954
Iteration: 919; Percent complete: 23.0%; Average loss: 3.4335
Iteration: 920; Percent complete: 23.0%; Average loss: 3.8025
Iteration: 921; Percent complete: 23.0%; Average loss: 3.7765
Iteration: 922; Percent complete: 23.1%; Average loss: 3.4502
Iteration: 923; Percent complete: 23.1%; Average loss: 3.7500
Iteration: 924; Percent complete: 23.1%; Average loss: 3.3571
Iteration: 925; Percent complete: 23.1%; Average loss: 3.2654
Iteration: 926; Percent complete: 23.2%; Average loss: 3.7120
Iteration: 927; Percent complete: 23.2%; Average loss: 3.5343
Iteration: 928; Percent complete: 23.2%; Average loss: 3.5077
Iteration: 929; Percent complete: 23.2%; Average loss: 3.2325
Iteration: 930; Percent complete: 23.2%; Average loss: 3.5154
Iteration: 931; Percent complete: 23.3%; Average loss: 3.4658
Iteration: 932; Percent complete: 23.3%; Average loss: 3.4635
Iteration: 933; Percent complete: 23.3%; Average loss: 3.4102
Iteration: 934; Percent complete: 23.4%; Average loss: 3.5579
Iteration: 935; Percent complete: 23.4%; Average loss: 3.7341
Iteration: 936; Percent complete: 23.4%; Average loss: 3.5975
Iteration: 937; Percent complete: 23.4%; Average loss: 3.4826
Iteration: 938; Percent complete: 23.4%; Average loss: 3.6914
Iteration: 939; Percent complete: 23.5%; Average loss: 3.5294
Iteration: 940; Percent complete: 23.5%; Average loss: 3.4254
Iteration: 941; Percent complete: 23.5%; Average loss: 3.4201
Iteration: 942; Percent complete: 23.5%; Average loss: 3.5511
Iteration: 943; Percent complete: 23.6%; Average loss: 3.3995
Iteration: 944; Percent complete: 23.6%; Average loss: 3.5470
Iteration: 945; Percent complete: 23.6%; Average loss: 3.6714
Iteration: 946; Percent complete: 23.6%; Average loss: 3.3833
Iteration: 947; Percent complete: 23.7%; Average loss: 3.5781
Iteration: 948; Percent complete: 23.7%; Average loss: 3.4168
Iteration: 949; Percent complete: 23.7%; Average loss: 3.3897
Iteration: 950; Percent complete: 23.8%; Average loss: 3.7391
Iteration: 951; Percent complete: 23.8%; Average loss: 3.4689
Iteration: 952; Percent complete: 23.8%; Average loss: 3.6240
Iteration: 953; Percent complete: 23.8%; Average loss: 3.5929
Iteration: 954; Percent complete: 23.8%; Average loss: 3.5811
Iteration: 955; Percent complete: 23.9%; Average loss: 3.5016
Iteration: 956; Percent complete: 23.9%; Average loss: 3.3392
Iteration: 957; Percent complete: 23.9%; Average loss: 3.6288
Iteration: 958; Percent complete: 23.9%; Average loss: 3.5400
Iteration: 959; Percent complete: 24.0%; Average loss: 3.8342
Iteration: 960; Percent complete: 24.0%; Average loss: 3.6551
Iteration: 961; Percent complete: 24.0%; Average loss: 3.5424
Iteration: 962; Percent complete: 24.1%; Average loss: 3.3254
Iteration: 963; Percent complete: 24.1%; Average loss: 3.4494
Iteration: 964; Percent complete: 24.1%; Average loss: 3.5808
Iteration: 965; Percent complete: 24.1%; Average loss: 3.6079
Iteration: 966; Percent complete: 24.1%; Average loss: 3.4571
Iteration: 967; Percent complete: 24.2%; Average loss: 3.3497
Iteration: 968; Percent complete: 24.2%; Average loss: 3.5841
Iteration: 969; Percent complete: 24.2%; Average loss: 3.4764
Iteration: 970; Percent complete: 24.2%; Average loss: 3.3483
Iteration: 971; Percent complete: 24.3%; Average loss: 3.5130
Iteration: 972; Percent complete: 24.3%; Average loss: 3.3429
Iteration: 973; Percent complete: 24.3%; Average loss: 3.6474
Iteration: 974; Percent complete: 24.3%; Average loss: 3.5238
Iteration: 975; Percent complete: 24.4%; Average loss: 3.3342
Iteration: 976; Percent complete: 24.4%; Average loss: 3.3777
Iteration: 977; Percent complete: 24.4%; Average loss: 3.5110
Iteration: 978; Percent complete: 24.4%; Average loss: 3.2734
Iteration: 979; Percent complete: 24.5%; Average loss: 3.6451
Iteration: 980; Percent complete: 24.5%; Average loss: 3.3805
Iteration: 981; Percent complete: 24.5%; Average loss: 3.4948
Iteration: 982; Percent complete: 24.6%; Average loss: 3.5098
Iteration: 983; Percent complete: 24.6%; Average loss: 3.5675
Iteration: 984; Percent complete: 24.6%; Average loss: 3.2855
Iteration: 985; Percent complete: 24.6%; Average loss: 3.4607
Iteration: 986; Percent complete: 24.6%; Average loss: 3.2718
Iteration: 987; Percent complete: 24.7%; Average loss: 3.4356
Iteration: 988; Percent complete: 24.7%; Average loss: 3.2502
Iteration: 989; Percent complete: 24.7%; Average loss: 3.4954
Iteration: 990; Percent complete: 24.8%; Average loss: 3.4678
Iteration: 991; Percent complete: 24.8%; Average loss: 3.3082
Iteration: 992; Percent complete: 24.8%; Average loss: 3.5365
Iteration: 993; Percent complete: 24.8%; Average loss: 3.4208
Iteration: 994; Percent complete: 24.9%; Average loss: 3.2777
Iteration: 995; Percent complete: 24.9%; Average loss: 3.4547
Iteration: 996; Percent complete: 24.9%; Average loss: 3.5770
Iteration: 997; Percent complete: 24.9%; Average loss: 3.3767
Iteration: 998; Percent complete: 24.9%; Average loss: 3.6290
Iteration: 999; Percent complete: 25.0%; Average loss: 3.5502
Iteration: 1000; Percent complete: 25.0%; Average loss: 3.7027
Iteration: 1001; Percent complete: 25.0%; Average loss: 3.4950
Iteration: 1002; Percent complete: 25.1%; Average loss: 3.5409
Iteration: 1003; Percent complete: 25.1%; Average loss: 3.4783
Iteration: 1004; Percent complete: 25.1%; Average loss: 3.4754
Iteration: 1005; Percent complete: 25.1%; Average loss: 3.1935
Iteration: 1006; Percent complete: 25.1%; Average loss: 3.4731
Iteration: 1007; Percent complete: 25.2%; Average loss: 3.4304
Iteration: 1008; Percent complete: 25.2%; Average loss: 3.5595
Iteration: 1009; Percent complete: 25.2%; Average loss: 3.2592
Iteration: 1010; Percent complete: 25.2%; Average loss: 3.4289
Iteration: 1011; Percent complete: 25.3%; Average loss: 3.5131
Iteration: 1012; Percent complete: 25.3%; Average loss: 3.5201
Iteration: 1013; Percent complete: 25.3%; Average loss: 3.2373
Iteration: 1014; Percent complete: 25.4%; Average loss: 3.3758
Iteration: 1015; Percent complete: 25.4%; Average loss: 3.3403
Iteration: 1016; Percent complete: 25.4%; Average loss: 3.3417
Iteration: 1017; Percent complete: 25.4%; Average loss: 3.3256
Iteration: 1018; Percent complete: 25.4%; Average loss: 3.4696
Iteration: 1019; Percent complete: 25.5%; Average loss: 3.4116
Iteration: 1020; Percent complete: 25.5%; Average loss: 3.4420
Iteration: 1021; Percent complete: 25.5%; Average loss: 3.5688
Iteration: 1022; Percent complete: 25.6%; Average loss: 3.3767
Iteration: 1023; Percent complete: 25.6%; Average loss: 3.4275
Iteration: 1024; Percent complete: 25.6%; Average loss: 3.3518
Iteration: 1025; Percent complete: 25.6%; Average loss: 3.3672
Iteration: 1026; Percent complete: 25.7%; Average loss: 3.2592
Iteration: 1027; Percent complete: 25.7%; Average loss: 3.2851
Iteration: 1028; Percent complete: 25.7%; Average loss: 3.3408
Iteration: 1029; Percent complete: 25.7%; Average loss: 3.2385
Iteration: 1030; Percent complete: 25.8%; Average loss: 3.6180
Iteration: 1031; Percent complete: 25.8%; Average loss: 3.7811
Iteration: 1032; Percent complete: 25.8%; Average loss: 3.6011
Iteration: 1033; Percent complete: 25.8%; Average loss: 3.3141
Iteration: 1034; Percent complete: 25.9%; Average loss: 3.4590
Iteration: 1035; Percent complete: 25.9%; Average loss: 3.3139
Iteration: 1036; Percent complete: 25.9%; Average loss: 3.6135
Iteration: 1037; Percent complete: 25.9%; Average loss: 3.3165
Iteration: 1038; Percent complete: 25.9%; Average loss: 3.5990
Iteration: 1039; Percent complete: 26.0%; Average loss: 3.3636
Iteration: 1040; Percent complete: 26.0%; Average loss: 3.6980
Iteration: 1041; Percent complete: 26.0%; Average loss: 3.7072
Iteration: 1042; Percent complete: 26.1%; Average loss: 3.3279
Iteration: 1043; Percent complete: 26.1%; Average loss: 3.5330
Iteration: 1044; Percent complete: 26.1%; Average loss: 3.5946
Iteration: 1045; Percent complete: 26.1%; Average loss: 3.5565
Iteration: 1046; Percent complete: 26.2%; Average loss: 3.7377
Iteration: 1047; Percent complete: 26.2%; Average loss: 3.4804
Iteration: 1048; Percent complete: 26.2%; Average loss: 3.4865
Iteration: 1049; Percent complete: 26.2%; Average loss: 3.2366
Iteration: 1050; Percent complete: 26.2%; Average loss: 3.0979
Iteration: 1051; Percent complete: 26.3%; Average loss: 3.3402
Iteration: 1052; Percent complete: 26.3%; Average loss: 3.2435
Iteration: 1053; Percent complete: 26.3%; Average loss: 3.5019
Iteration: 1054; Percent complete: 26.4%; Average loss: 3.3436
Iteration: 1055; Percent complete: 26.4%; Average loss: 3.4624
Iteration: 1056; Percent complete: 26.4%; Average loss: 3.5759
Iteration: 1057; Percent complete: 26.4%; Average loss: 3.4316
Iteration: 1058; Percent complete: 26.5%; Average loss: 3.4246
Iteration: 1059; Percent complete: 26.5%; Average loss: 3.5691
Iteration: 1060; Percent complete: 26.5%; Average loss: 3.2951
Iteration: 1061; Percent complete: 26.5%; Average loss: 3.1187
Iteration: 1062; Percent complete: 26.6%; Average loss: 3.5095
Iteration: 1063; Percent complete: 26.6%; Average loss: 3.3168
Iteration: 1064; Percent complete: 26.6%; Average loss: 3.2514
Iteration: 1065; Percent complete: 26.6%; Average loss: 3.4187
Iteration: 1066; Percent complete: 26.7%; Average loss: 3.3420
Iteration: 1067; Percent complete: 26.7%; Average loss: 3.3965
Iteration: 1068; Percent complete: 26.7%; Average loss: 3.6370
Iteration: 1069; Percent complete: 26.7%; Average loss: 3.3953
Iteration: 1070; Percent complete: 26.8%; Average loss: 3.2160
Iteration: 1071; Percent complete: 26.8%; Average loss: 3.3588
Iteration: 1072; Percent complete: 26.8%; Average loss: 3.3535
Iteration: 1073; Percent complete: 26.8%; Average loss: 3.2983
Iteration: 1074; Percent complete: 26.9%; Average loss: 3.4144
Iteration: 1075; Percent complete: 26.9%; Average loss: 3.3487
Iteration: 1076; Percent complete: 26.9%; Average loss: 3.0569
Iteration: 1077; Percent complete: 26.9%; Average loss: 3.5018
Iteration: 1078; Percent complete: 27.0%; Average loss: 3.4444
Iteration: 1079; Percent complete: 27.0%; Average loss: 3.4004
Iteration: 1080; Percent complete: 27.0%; Average loss: 3.5225
Iteration: 1081; Percent complete: 27.0%; Average loss: 3.0570
Iteration: 1082; Percent complete: 27.1%; Average loss: 3.1967
Iteration: 1083; Percent complete: 27.1%; Average loss: 3.5183
Iteration: 1084; Percent complete: 27.1%; Average loss: 3.4649
Iteration: 1085; Percent complete: 27.1%; Average loss: 3.4474
Iteration: 1086; Percent complete: 27.2%; Average loss: 3.8973
Iteration: 1087; Percent complete: 27.2%; Average loss: 3.5619
Iteration: 1088; Percent complete: 27.2%; Average loss: 3.4813
Iteration: 1089; Percent complete: 27.2%; Average loss: 3.5106
Iteration: 1090; Percent complete: 27.3%; Average loss: 3.6807
Iteration: 1091; Percent complete: 27.3%; Average loss: 3.1485
Iteration: 1092; Percent complete: 27.3%; Average loss: 3.3107
Iteration: 1093; Percent complete: 27.3%; Average loss: 3.5739
Iteration: 1094; Percent complete: 27.4%; Average loss: 3.2044
Iteration: 1095; Percent complete: 27.4%; Average loss: 3.4004
Iteration: 1096; Percent complete: 27.4%; Average loss: 3.6171
Iteration: 1097; Percent complete: 27.4%; Average loss: 3.3204
Iteration: 1098; Percent complete: 27.5%; Average loss: 3.4213
Iteration: 1099; Percent complete: 27.5%; Average loss: 3.2818
Iteration: 1100; Percent complete: 27.5%; Average loss: 3.1425
Iteration: 1101; Percent complete: 27.5%; Average loss: 3.2003
Iteration: 1102; Percent complete: 27.6%; Average loss: 3.3168
Iteration: 1103; Percent complete: 27.6%; Average loss: 3.3159
Iteration: 1104; Percent complete: 27.6%; Average loss: 3.4803
Iteration: 1105; Percent complete: 27.6%; Average loss: 3.6283
Iteration: 1106; Percent complete: 27.7%; Average loss: 3.7063
Iteration: 1107; Percent complete: 27.7%; Average loss: 3.2756
Iteration: 1108; Percent complete: 27.7%; Average loss: 3.1132
Iteration: 1109; Percent complete: 27.7%; Average loss: 3.4848
Iteration: 1110; Percent complete: 27.8%; Average loss: 3.5246
Iteration: 1111; Percent complete: 27.8%; Average loss: 3.3526
Iteration: 1112; Percent complete: 27.8%; Average loss: 3.5368
Iteration: 1113; Percent complete: 27.8%; Average loss: 3.6266
Iteration: 1114; Percent complete: 27.9%; Average loss: 3.3898
Iteration: 1115; Percent complete: 27.9%; Average loss: 3.4547
Iteration: 1116; Percent complete: 27.9%; Average loss: 3.4910
Iteration: 1117; Percent complete: 27.9%; Average loss: 3.4710
Iteration: 1118; Percent complete: 28.0%; Average loss: 3.3515
Iteration: 1119; Percent complete: 28.0%; Average loss: 3.3311
Iteration: 1120; Percent complete: 28.0%; Average loss: 3.4355
Iteration: 1121; Percent complete: 28.0%; Average loss: 3.3531
Iteration: 1122; Percent complete: 28.1%; Average loss: 3.3008
Iteration: 1123; Percent complete: 28.1%; Average loss: 3.3486
Iteration: 1124; Percent complete: 28.1%; Average loss: 3.2903
Iteration: 1125; Percent complete: 28.1%; Average loss: 3.4498
Iteration: 1126; Percent complete: 28.1%; Average loss: 3.6292
Iteration: 1127; Percent complete: 28.2%; Average loss: 3.6457
Iteration: 1128; Percent complete: 28.2%; Average loss: 3.4517
Iteration: 1129; Percent complete: 28.2%; Average loss: 3.3525
Iteration: 1130; Percent complete: 28.2%; Average loss: 3.1865
Iteration: 1131; Percent complete: 28.3%; Average loss: 3.7445
Iteration: 1132; Percent complete: 28.3%; Average loss: 3.2832
Iteration: 1133; Percent complete: 28.3%; Average loss: 3.4884
Iteration: 1134; Percent complete: 28.3%; Average loss: 3.4539
Iteration: 1135; Percent complete: 28.4%; Average loss: 3.3450
Iteration: 1136; Percent complete: 28.4%; Average loss: 3.5163
Iteration: 1137; Percent complete: 28.4%; Average loss: 3.6730
Iteration: 1138; Percent complete: 28.4%; Average loss: 3.4980
Iteration: 1139; Percent complete: 28.5%; Average loss: 3.4120
Iteration: 1140; Percent complete: 28.5%; Average loss: 3.8214
Iteration: 1141; Percent complete: 28.5%; Average loss: 3.7444
Iteration: 1142; Percent complete: 28.5%; Average loss: 3.5663
Iteration: 1143; Percent complete: 28.6%; Average loss: 3.3830
Iteration: 1144; Percent complete: 28.6%; Average loss: 3.2595
Iteration: 1145; Percent complete: 28.6%; Average loss: 3.6170
Iteration: 1146; Percent complete: 28.6%; Average loss: 3.5608
Iteration: 1147; Percent complete: 28.7%; Average loss: 3.5602
Iteration: 1148; Percent complete: 28.7%; Average loss: 3.4814
Iteration: 1149; Percent complete: 28.7%; Average loss: 3.4253
Iteration: 1150; Percent complete: 28.7%; Average loss: 3.2796
Iteration: 1151; Percent complete: 28.8%; Average loss: 3.4333
Iteration: 1152; Percent complete: 28.8%; Average loss: 3.5837
Iteration: 1153; Percent complete: 28.8%; Average loss: 3.3733
Iteration: 1154; Percent complete: 28.8%; Average loss: 3.3650
Iteration: 1155; Percent complete: 28.9%; Average loss: 3.3925
Iteration: 1156; Percent complete: 28.9%; Average loss: 3.3622
Iteration: 1157; Percent complete: 28.9%; Average loss: 3.5346
Iteration: 1158; Percent complete: 28.9%; Average loss: 3.3993
Iteration: 1159; Percent complete: 29.0%; Average loss: 3.6456
Iteration: 1160; Percent complete: 29.0%; Average loss: 3.1799
Iteration: 1161; Percent complete: 29.0%; Average loss: 3.1856
Iteration: 1162; Percent complete: 29.0%; Average loss: 3.6562
Iteration: 1163; Percent complete: 29.1%; Average loss: 3.4562
Iteration: 1164; Percent complete: 29.1%; Average loss: 3.2275
Iteration: 1165; Percent complete: 29.1%; Average loss: 3.3771
Iteration: 1166; Percent complete: 29.1%; Average loss: 3.3313
Iteration: 1167; Percent complete: 29.2%; Average loss: 3.2640
Iteration: 1168; Percent complete: 29.2%; Average loss: 3.3774
Iteration: 1169; Percent complete: 29.2%; Average loss: 3.5335
Iteration: 1170; Percent complete: 29.2%; Average loss: 3.1957
Iteration: 1171; Percent complete: 29.3%; Average loss: 3.3304
Iteration: 1172; Percent complete: 29.3%; Average loss: 3.3880
Iteration: 1173; Percent complete: 29.3%; Average loss: 3.5970
Iteration: 1174; Percent complete: 29.3%; Average loss: 3.4693
Iteration: 1175; Percent complete: 29.4%; Average loss: 3.5326
Iteration: 1176; Percent complete: 29.4%; Average loss: 3.3651
Iteration: 1177; Percent complete: 29.4%; Average loss: 3.6594
Iteration: 1178; Percent complete: 29.4%; Average loss: 3.4350
Iteration: 1179; Percent complete: 29.5%; Average loss: 3.7035
Iteration: 1180; Percent complete: 29.5%; Average loss: 3.5005
Iteration: 1181; Percent complete: 29.5%; Average loss: 3.5195
Iteration: 1182; Percent complete: 29.5%; Average loss: 3.5163
Iteration: 1183; Percent complete: 29.6%; Average loss: 3.5198
Iteration: 1184; Percent complete: 29.6%; Average loss: 3.2338
Iteration: 1185; Percent complete: 29.6%; Average loss: 3.3958
Iteration: 1186; Percent complete: 29.6%; Average loss: 3.4614
Iteration: 1187; Percent complete: 29.7%; Average loss: 3.2707
Iteration: 1188; Percent complete: 29.7%; Average loss: 3.4770
Iteration: 1189; Percent complete: 29.7%; Average loss: 3.4978
Iteration: 1190; Percent complete: 29.8%; Average loss: 3.4308
Iteration: 1191; Percent complete: 29.8%; Average loss: 3.2581
Iteration: 1192; Percent complete: 29.8%; Average loss: 3.5005
Iteration: 1193; Percent complete: 29.8%; Average loss: 3.4145
Iteration: 1194; Percent complete: 29.8%; Average loss: 3.4410
Iteration: 1195; Percent complete: 29.9%; Average loss: 3.3277
Iteration: 1196; Percent complete: 29.9%; Average loss: 3.3863
Iteration: 1197; Percent complete: 29.9%; Average loss: 3.5020
Iteration: 1198; Percent complete: 29.9%; Average loss: 3.5083
Iteration: 1199; Percent complete: 30.0%; Average loss: 3.3127
Iteration: 1200; Percent complete: 30.0%; Average loss: 3.3241
Iteration: 1201; Percent complete: 30.0%; Average loss: 3.0744
Iteration: 1202; Percent complete: 30.0%; Average loss: 3.4632
Iteration: 1203; Percent complete: 30.1%; Average loss: 3.5075
Iteration: 1204; Percent complete: 30.1%; Average loss: 3.2945
Iteration: 1205; Percent complete: 30.1%; Average loss: 3.4797
Iteration: 1206; Percent complete: 30.1%; Average loss: 3.2957
Iteration: 1207; Percent complete: 30.2%; Average loss: 3.4036
Iteration: 1208; Percent complete: 30.2%; Average loss: 3.3812
Iteration: 1209; Percent complete: 30.2%; Average loss: 3.2934
Iteration: 1210; Percent complete: 30.2%; Average loss: 3.4460
Iteration: 1211; Percent complete: 30.3%; Average loss: 3.2659
Iteration: 1212; Percent complete: 30.3%; Average loss: 3.4973
Iteration: 1213; Percent complete: 30.3%; Average loss: 3.2760
Iteration: 1214; Percent complete: 30.3%; Average loss: 3.2438
Iteration: 1215; Percent complete: 30.4%; Average loss: 3.2699
Iteration: 1216; Percent complete: 30.4%; Average loss: 3.4419
Iteration: 1217; Percent complete: 30.4%; Average loss: 3.3888
Iteration: 1218; Percent complete: 30.4%; Average loss: 3.4649
Iteration: 1219; Percent complete: 30.5%; Average loss: 3.2310
Iteration: 1220; Percent complete: 30.5%; Average loss: 3.3129
Iteration: 1221; Percent complete: 30.5%; Average loss: 3.3018
Iteration: 1222; Percent complete: 30.6%; Average loss: 3.2379
Iteration: 1223; Percent complete: 30.6%; Average loss: 3.2491
Iteration: 1224; Percent complete: 30.6%; Average loss: 3.5126
Iteration: 1225; Percent complete: 30.6%; Average loss: 3.2817
Iteration: 1226; Percent complete: 30.6%; Average loss: 3.3177
Iteration: 1227; Percent complete: 30.7%; Average loss: 3.2455
Iteration: 1228; Percent complete: 30.7%; Average loss: 3.3451
Iteration: 1229; Percent complete: 30.7%; Average loss: 3.4134
Iteration: 1230; Percent complete: 30.8%; Average loss: 3.5416
Iteration: 1231; Percent complete: 30.8%; Average loss: 3.3598
Iteration: 1232; Percent complete: 30.8%; Average loss: 3.2630
Iteration: 1233; Percent complete: 30.8%; Average loss: 3.3090
Iteration: 1234; Percent complete: 30.9%; Average loss: 3.2365
Iteration: 1235; Percent complete: 30.9%; Average loss: 3.3608
Iteration: 1236; Percent complete: 30.9%; Average loss: 3.5540
Iteration: 1237; Percent complete: 30.9%; Average loss: 3.6398
Iteration: 1238; Percent complete: 30.9%; Average loss: 3.3913
Iteration: 1239; Percent complete: 31.0%; Average loss: 3.4549
Iteration: 1240; Percent complete: 31.0%; Average loss: 3.5788
Iteration: 1241; Percent complete: 31.0%; Average loss: 3.6203
Iteration: 1242; Percent complete: 31.1%; Average loss: 3.3872
Iteration: 1243; Percent complete: 31.1%; Average loss: 3.4481
Iteration: 1244; Percent complete: 31.1%; Average loss: 3.5431
Iteration: 1245; Percent complete: 31.1%; Average loss: 3.5494
Iteration: 1246; Percent complete: 31.1%; Average loss: 3.2603
Iteration: 1247; Percent complete: 31.2%; Average loss: 3.1153
Iteration: 1248; Percent complete: 31.2%; Average loss: 3.3627
Iteration: 1249; Percent complete: 31.2%; Average loss: 3.5035
Iteration: 1250; Percent complete: 31.2%; Average loss: 3.7244
Iteration: 1251; Percent complete: 31.3%; Average loss: 2.9725
Iteration: 1252; Percent complete: 31.3%; Average loss: 3.3687
Iteration: 1253; Percent complete: 31.3%; Average loss: 3.3505
Iteration: 1254; Percent complete: 31.4%; Average loss: 3.4157
Iteration: 1255; Percent complete: 31.4%; Average loss: 3.3456
Iteration: 1256; Percent complete: 31.4%; Average loss: 3.5342
Iteration: 1257; Percent complete: 31.4%; Average loss: 3.7370
Iteration: 1258; Percent complete: 31.4%; Average loss: 3.4114
Iteration: 1259; Percent complete: 31.5%; Average loss: 3.4559
Iteration: 1260; Percent complete: 31.5%; Average loss: 3.2935
Iteration: 1261; Percent complete: 31.5%; Average loss: 3.1072
Iteration: 1262; Percent complete: 31.6%; Average loss: 3.4643
Iteration: 1263; Percent complete: 31.6%; Average loss: 3.2874
Iteration: 1264; Percent complete: 31.6%; Average loss: 3.3807
Iteration: 1265; Percent complete: 31.6%; Average loss: 3.3983
Iteration: 1266; Percent complete: 31.6%; Average loss: 3.2151
Iteration: 1267; Percent complete: 31.7%; Average loss: 3.3642
Iteration: 1268; Percent complete: 31.7%; Average loss: 3.0878
Iteration: 1269; Percent complete: 31.7%; Average loss: 3.6190
Iteration: 1270; Percent complete: 31.8%; Average loss: 3.4926
Iteration: 1271; Percent complete: 31.8%; Average loss: 3.2080
Iteration: 1272; Percent complete: 31.8%; Average loss: 3.3260
Iteration: 1273; Percent complete: 31.8%; Average loss: 3.6174
Iteration: 1274; Percent complete: 31.9%; Average loss: 3.2172
Iteration: 1275; Percent complete: 31.9%; Average loss: 3.3851
Iteration: 1276; Percent complete: 31.9%; Average loss: 3.3057
Iteration: 1277; Percent complete: 31.9%; Average loss: 3.2023
Iteration: 1278; Percent complete: 31.9%; Average loss: 3.3451
Iteration: 1279; Percent complete: 32.0%; Average loss: 3.2549
Iteration: 1280; Percent complete: 32.0%; Average loss: 3.4662
Iteration: 1281; Percent complete: 32.0%; Average loss: 3.4990
Iteration: 1282; Percent complete: 32.0%; Average loss: 3.3381
Iteration: 1283; Percent complete: 32.1%; Average loss: 3.4179
Iteration: 1284; Percent complete: 32.1%; Average loss: 3.5251
Iteration: 1285; Percent complete: 32.1%; Average loss: 3.4308
Iteration: 1286; Percent complete: 32.1%; Average loss: 3.2705
Iteration: 1287; Percent complete: 32.2%; Average loss: 3.4758
Iteration: 1288; Percent complete: 32.2%; Average loss: 3.5897
Iteration: 1289; Percent complete: 32.2%; Average loss: 3.4418
Iteration: 1290; Percent complete: 32.2%; Average loss: 3.4034
Iteration: 1291; Percent complete: 32.3%; Average loss: 3.2418
Iteration: 1292; Percent complete: 32.3%; Average loss: 3.5878
Iteration: 1293; Percent complete: 32.3%; Average loss: 3.2458
Iteration: 1294; Percent complete: 32.4%; Average loss: 3.3567
Iteration: 1295; Percent complete: 32.4%; Average loss: 3.5488
Iteration: 1296; Percent complete: 32.4%; Average loss: 3.1679
Iteration: 1297; Percent complete: 32.4%; Average loss: 3.3555
Iteration: 1298; Percent complete: 32.5%; Average loss: 3.5130
Iteration: 1299; Percent complete: 32.5%; Average loss: 3.2494
Iteration: 1300; Percent complete: 32.5%; Average loss: 3.2121
Iteration: 1301; Percent complete: 32.5%; Average loss: 3.5097
Iteration: 1302; Percent complete: 32.6%; Average loss: 3.5001
Iteration: 1303; Percent complete: 32.6%; Average loss: 3.3930
Iteration: 1304; Percent complete: 32.6%; Average loss: 3.3583
Iteration: 1305; Percent complete: 32.6%; Average loss: 3.4863
Iteration: 1306; Percent complete: 32.6%; Average loss: 3.3154
Iteration: 1307; Percent complete: 32.7%; Average loss: 3.1691
Iteration: 1308; Percent complete: 32.7%; Average loss: 3.3528
Iteration: 1309; Percent complete: 32.7%; Average loss: 3.6236
Iteration: 1310; Percent complete: 32.8%; Average loss: 3.2453
Iteration: 1311; Percent complete: 32.8%; Average loss: 3.4827
Iteration: 1312; Percent complete: 32.8%; Average loss: 3.0283
Iteration: 1313; Percent complete: 32.8%; Average loss: 3.2514
Iteration: 1314; Percent complete: 32.9%; Average loss: 3.1634
Iteration: 1315; Percent complete: 32.9%; Average loss: 3.3001
Iteration: 1316; Percent complete: 32.9%; Average loss: 3.2565
Iteration: 1317; Percent complete: 32.9%; Average loss: 3.3846
Iteration: 1318; Percent complete: 33.0%; Average loss: 3.3350
Iteration: 1319; Percent complete: 33.0%; Average loss: 3.3087
Iteration: 1320; Percent complete: 33.0%; Average loss: 3.4932
Iteration: 1321; Percent complete: 33.0%; Average loss: 3.0886
Iteration: 1322; Percent complete: 33.1%; Average loss: 3.5343
Iteration: 1323; Percent complete: 33.1%; Average loss: 3.4054
Iteration: 1324; Percent complete: 33.1%; Average loss: 3.4476
Iteration: 1325; Percent complete: 33.1%; Average loss: 3.2123
Iteration: 1326; Percent complete: 33.1%; Average loss: 3.3934
Iteration: 1327; Percent complete: 33.2%; Average loss: 3.2498
Iteration: 1328; Percent complete: 33.2%; Average loss: 3.2216
Iteration: 1329; Percent complete: 33.2%; Average loss: 3.3350
Iteration: 1330; Percent complete: 33.2%; Average loss: 3.1715
Iteration: 1331; Percent complete: 33.3%; Average loss: 3.1441
Iteration: 1332; Percent complete: 33.3%; Average loss: 3.5293
Iteration: 1333; Percent complete: 33.3%; Average loss: 3.2777
Iteration: 1334; Percent complete: 33.4%; Average loss: 3.3753
Iteration: 1335; Percent complete: 33.4%; Average loss: 3.2583
Iteration: 1336; Percent complete: 33.4%; Average loss: 3.3213
Iteration: 1337; Percent complete: 33.4%; Average loss: 3.2357
Iteration: 1338; Percent complete: 33.5%; Average loss: 3.3865
Iteration: 1339; Percent complete: 33.5%; Average loss: 3.3587
Iteration: 1340; Percent complete: 33.5%; Average loss: 3.2669
Iteration: 1341; Percent complete: 33.5%; Average loss: 3.2565
Iteration: 1342; Percent complete: 33.6%; Average loss: 3.4353
Iteration: 1343; Percent complete: 33.6%; Average loss: 3.3155
Iteration: 1344; Percent complete: 33.6%; Average loss: 3.2615
Iteration: 1345; Percent complete: 33.6%; Average loss: 3.3499
Iteration: 1346; Percent complete: 33.7%; Average loss: 3.3992
Iteration: 1347; Percent complete: 33.7%; Average loss: 3.3341
Iteration: 1348; Percent complete: 33.7%; Average loss: 3.0641
Iteration: 1349; Percent complete: 33.7%; Average loss: 3.4529
Iteration: 1350; Percent complete: 33.8%; Average loss: 3.0967
Iteration: 1351; Percent complete: 33.8%; Average loss: 3.4435
Iteration: 1352; Percent complete: 33.8%; Average loss: 2.9339
Iteration: 1353; Percent complete: 33.8%; Average loss: 3.3915
Iteration: 1354; Percent complete: 33.9%; Average loss: 3.3461
Iteration: 1355; Percent complete: 33.9%; Average loss: 3.3141
Iteration: 1356; Percent complete: 33.9%; Average loss: 3.3043
Iteration: 1357; Percent complete: 33.9%; Average loss: 3.4777
Iteration: 1358; Percent complete: 34.0%; Average loss: 3.4711
Iteration: 1359; Percent complete: 34.0%; Average loss: 3.2906
Iteration: 1360; Percent complete: 34.0%; Average loss: 3.4328
Iteration: 1361; Percent complete: 34.0%; Average loss: 3.4079
Iteration: 1362; Percent complete: 34.1%; Average loss: 3.1755
Iteration: 1363; Percent complete: 34.1%; Average loss: 3.1894
Iteration: 1364; Percent complete: 34.1%; Average loss: 3.2520
Iteration: 1365; Percent complete: 34.1%; Average loss: 3.3296
Iteration: 1366; Percent complete: 34.2%; Average loss: 3.4477
Iteration: 1367; Percent complete: 34.2%; Average loss: 3.4416
Iteration: 1368; Percent complete: 34.2%; Average loss: 3.1817
Iteration: 1369; Percent complete: 34.2%; Average loss: 3.1382
Iteration: 1370; Percent complete: 34.2%; Average loss: 3.4894
Iteration: 1371; Percent complete: 34.3%; Average loss: 3.3088
Iteration: 1372; Percent complete: 34.3%; Average loss: 3.2634
Iteration: 1373; Percent complete: 34.3%; Average loss: 3.3667
Iteration: 1374; Percent complete: 34.4%; Average loss: 3.4230
Iteration: 1375; Percent complete: 34.4%; Average loss: 3.2535
Iteration: 1376; Percent complete: 34.4%; Average loss: 3.2763
Iteration: 1377; Percent complete: 34.4%; Average loss: 3.3875
Iteration: 1378; Percent complete: 34.4%; Average loss: 3.1983
Iteration: 1379; Percent complete: 34.5%; Average loss: 3.4005
Iteration: 1380; Percent complete: 34.5%; Average loss: 3.4155
Iteration: 1381; Percent complete: 34.5%; Average loss: 3.2601
Iteration: 1382; Percent complete: 34.5%; Average loss: 3.4974
Iteration: 1383; Percent complete: 34.6%; Average loss: 3.4876
Iteration: 1384; Percent complete: 34.6%; Average loss: 3.2175
Iteration: 1385; Percent complete: 34.6%; Average loss: 3.3750
Iteration: 1386; Percent complete: 34.6%; Average loss: 3.2959
Iteration: 1387; Percent complete: 34.7%; Average loss: 3.3136
Iteration: 1388; Percent complete: 34.7%; Average loss: 3.2199
Iteration: 1389; Percent complete: 34.7%; Average loss: 3.3659
Iteration: 1390; Percent complete: 34.8%; Average loss: 3.3112
Iteration: 1391; Percent complete: 34.8%; Average loss: 3.2928
Iteration: 1392; Percent complete: 34.8%; Average loss: 3.4390
Iteration: 1393; Percent complete: 34.8%; Average loss: 3.3026
Iteration: 1394; Percent complete: 34.8%; Average loss: 3.5703
Iteration: 1395; Percent complete: 34.9%; Average loss: 3.1602
Iteration: 1396; Percent complete: 34.9%; Average loss: 3.6111
Iteration: 1397; Percent complete: 34.9%; Average loss: 3.4526
Iteration: 1398; Percent complete: 34.9%; Average loss: 3.3102
Iteration: 1399; Percent complete: 35.0%; Average loss: 3.2991
Iteration: 1400; Percent complete: 35.0%; Average loss: 3.3250
Iteration: 1401; Percent complete: 35.0%; Average loss: 2.9113
Iteration: 1402; Percent complete: 35.0%; Average loss: 3.3546
Iteration: 1403; Percent complete: 35.1%; Average loss: 3.2563
Iteration: 1404; Percent complete: 35.1%; Average loss: 3.0790
Iteration: 1405; Percent complete: 35.1%; Average loss: 3.4005
Iteration: 1406; Percent complete: 35.1%; Average loss: 3.5099
Iteration: 1407; Percent complete: 35.2%; Average loss: 3.5311
Iteration: 1408; Percent complete: 35.2%; Average loss: 3.5618
Iteration: 1409; Percent complete: 35.2%; Average loss: 3.6133
Iteration: 1410; Percent complete: 35.2%; Average loss: 3.2928
Iteration: 1411; Percent complete: 35.3%; Average loss: 3.2234
Iteration: 1412; Percent complete: 35.3%; Average loss: 3.1712
Iteration: 1413; Percent complete: 35.3%; Average loss: 3.0984
Iteration: 1414; Percent complete: 35.4%; Average loss: 3.3264
Iteration: 1415; Percent complete: 35.4%; Average loss: 3.3573
Iteration: 1416; Percent complete: 35.4%; Average loss: 3.2736
Iteration: 1417; Percent complete: 35.4%; Average loss: 3.2339
Iteration: 1418; Percent complete: 35.4%; Average loss: 3.2718
Iteration: 1419; Percent complete: 35.5%; Average loss: 3.2627
Iteration: 1420; Percent complete: 35.5%; Average loss: 3.1621
Iteration: 1421; Percent complete: 35.5%; Average loss: 3.3909
Iteration: 1422; Percent complete: 35.5%; Average loss: 3.1570
Iteration: 1423; Percent complete: 35.6%; Average loss: 3.4295
Iteration: 1424; Percent complete: 35.6%; Average loss: 3.4028
Iteration: 1425; Percent complete: 35.6%; Average loss: 3.4407
Iteration: 1426; Percent complete: 35.6%; Average loss: 3.2752
Iteration: 1427; Percent complete: 35.7%; Average loss: 3.0910
Iteration: 1428; Percent complete: 35.7%; Average loss: 3.2460
Iteration: 1429; Percent complete: 35.7%; Average loss: 3.2867
Iteration: 1430; Percent complete: 35.8%; Average loss: 3.2188
Iteration: 1431; Percent complete: 35.8%; Average loss: 3.4557
Iteration: 1432; Percent complete: 35.8%; Average loss: 3.1674
Iteration: 1433; Percent complete: 35.8%; Average loss: 3.2161
Iteration: 1434; Percent complete: 35.9%; Average loss: 3.2229
Iteration: 1435; Percent complete: 35.9%; Average loss: 3.3507
Iteration: 1436; Percent complete: 35.9%; Average loss: 3.3177
Iteration: 1437; Percent complete: 35.9%; Average loss: 3.2452
Iteration: 1438; Percent complete: 35.9%; Average loss: 3.4029
Iteration: 1439; Percent complete: 36.0%; Average loss: 3.2786
Iteration: 1440; Percent complete: 36.0%; Average loss: 3.5239
Iteration: 1441; Percent complete: 36.0%; Average loss: 2.8716
Iteration: 1442; Percent complete: 36.0%; Average loss: 3.2957
Iteration: 1443; Percent complete: 36.1%; Average loss: 3.2057
Iteration: 1444; Percent complete: 36.1%; Average loss: 3.2093
Iteration: 1445; Percent complete: 36.1%; Average loss: 3.4530
Iteration: 1446; Percent complete: 36.1%; Average loss: 3.4159
Iteration: 1447; Percent complete: 36.2%; Average loss: 3.2303
Iteration: 1448; Percent complete: 36.2%; Average loss: 3.4006
Iteration: 1449; Percent complete: 36.2%; Average loss: 3.2597
Iteration: 1450; Percent complete: 36.2%; Average loss: 3.1486
Iteration: 1451; Percent complete: 36.3%; Average loss: 3.2493
Iteration: 1452; Percent complete: 36.3%; Average loss: 3.3558
Iteration: 1453; Percent complete: 36.3%; Average loss: 3.2705
Iteration: 1454; Percent complete: 36.4%; Average loss: 3.3070
Iteration: 1455; Percent complete: 36.4%; Average loss: 3.3996
Iteration: 1456; Percent complete: 36.4%; Average loss: 3.3812
Iteration: 1457; Percent complete: 36.4%; Average loss: 3.5273
Iteration: 1458; Percent complete: 36.4%; Average loss: 3.4004
Iteration: 1459; Percent complete: 36.5%; Average loss: 3.2609
Iteration: 1460; Percent complete: 36.5%; Average loss: 3.4931
Iteration: 1461; Percent complete: 36.5%; Average loss: 3.1994
Iteration: 1462; Percent complete: 36.5%; Average loss: 3.3970
Iteration: 1463; Percent complete: 36.6%; Average loss: 3.4160
Iteration: 1464; Percent complete: 36.6%; Average loss: 3.1655
Iteration: 1465; Percent complete: 36.6%; Average loss: 3.3437
Iteration: 1466; Percent complete: 36.6%; Average loss: 3.0108
Iteration: 1467; Percent complete: 36.7%; Average loss: 3.1035
Iteration: 1468; Percent complete: 36.7%; Average loss: 3.3709
Iteration: 1469; Percent complete: 36.7%; Average loss: 3.3805
Iteration: 1470; Percent complete: 36.8%; Average loss: 3.1902
Iteration: 1471; Percent complete: 36.8%; Average loss: 3.2189
Iteration: 1472; Percent complete: 36.8%; Average loss: 3.0524
Iteration: 1473; Percent complete: 36.8%; Average loss: 3.1169
Iteration: 1474; Percent complete: 36.9%; Average loss: 3.5064
Iteration: 1475; Percent complete: 36.9%; Average loss: 3.3970
Iteration: 1476; Percent complete: 36.9%; Average loss: 3.3141
Iteration: 1477; Percent complete: 36.9%; Average loss: 3.2827
Iteration: 1478; Percent complete: 37.0%; Average loss: 3.2451
Iteration: 1479; Percent complete: 37.0%; Average loss: 3.4474
Iteration: 1480; Percent complete: 37.0%; Average loss: 3.3095
Iteration: 1481; Percent complete: 37.0%; Average loss: 3.2228
Iteration: 1482; Percent complete: 37.0%; Average loss: 3.4599
Iteration: 1483; Percent complete: 37.1%; Average loss: 3.4774
Iteration: 1484; Percent complete: 37.1%; Average loss: 3.3090
Iteration: 1485; Percent complete: 37.1%; Average loss: 3.5553
Iteration: 1486; Percent complete: 37.1%; Average loss: 3.2463
Iteration: 1487; Percent complete: 37.2%; Average loss: 3.0203
Iteration: 1488; Percent complete: 37.2%; Average loss: 3.3033
Iteration: 1489; Percent complete: 37.2%; Average loss: 3.1384
Iteration: 1490; Percent complete: 37.2%; Average loss: 3.2887
Iteration: 1491; Percent complete: 37.3%; Average loss: 3.2004
Iteration: 1492; Percent complete: 37.3%; Average loss: 3.2898
Iteration: 1493; Percent complete: 37.3%; Average loss: 3.5410
Iteration: 1494; Percent complete: 37.4%; Average loss: 3.2754
Iteration: 1495; Percent complete: 37.4%; Average loss: 3.3105
Iteration: 1496; Percent complete: 37.4%; Average loss: 3.2335
Iteration: 1497; Percent complete: 37.4%; Average loss: 3.1527
Iteration: 1498; Percent complete: 37.5%; Average loss: 3.0610
Iteration: 1499; Percent complete: 37.5%; Average loss: 3.2645
Iteration: 1500; Percent complete: 37.5%; Average loss: 3.3724
Iteration: 1501; Percent complete: 37.5%; Average loss: 3.2614
Iteration: 1502; Percent complete: 37.5%; Average loss: 3.1476
Iteration: 1503; Percent complete: 37.6%; Average loss: 3.2956
Iteration: 1504; Percent complete: 37.6%; Average loss: 3.2306
Iteration: 1505; Percent complete: 37.6%; Average loss: 3.2336
Iteration: 1506; Percent complete: 37.6%; Average loss: 3.1960
Iteration: 1507; Percent complete: 37.7%; Average loss: 3.1332
Iteration: 1508; Percent complete: 37.7%; Average loss: 3.3136
Iteration: 1509; Percent complete: 37.7%; Average loss: 3.2377
Iteration: 1510; Percent complete: 37.8%; Average loss: 3.3246
Iteration: 1511; Percent complete: 37.8%; Average loss: 3.2722
Iteration: 1512; Percent complete: 37.8%; Average loss: 3.2855
Iteration: 1513; Percent complete: 37.8%; Average loss: 3.4685
Iteration: 1514; Percent complete: 37.9%; Average loss: 3.2029
Iteration: 1515; Percent complete: 37.9%; Average loss: 3.4033
Iteration: 1516; Percent complete: 37.9%; Average loss: 3.4400
Iteration: 1517; Percent complete: 37.9%; Average loss: 3.1741
Iteration: 1518; Percent complete: 38.0%; Average loss: 3.2997
Iteration: 1519; Percent complete: 38.0%; Average loss: 3.3158
Iteration: 1520; Percent complete: 38.0%; Average loss: 3.4597
Iteration: 1521; Percent complete: 38.0%; Average loss: 3.2690
Iteration: 1522; Percent complete: 38.0%; Average loss: 3.2758
Iteration: 1523; Percent complete: 38.1%; Average loss: 3.3588
Iteration: 1524; Percent complete: 38.1%; Average loss: 3.1880
Iteration: 1525; Percent complete: 38.1%; Average loss: 3.4091
Iteration: 1526; Percent complete: 38.1%; Average loss: 3.1329
Iteration: 1527; Percent complete: 38.2%; Average loss: 3.3221
Iteration: 1528; Percent complete: 38.2%; Average loss: 3.2561
Iteration: 1529; Percent complete: 38.2%; Average loss: 3.2830
Iteration: 1530; Percent complete: 38.2%; Average loss: 3.1161
Iteration: 1531; Percent complete: 38.3%; Average loss: 3.0602
Iteration: 1532; Percent complete: 38.3%; Average loss: 3.2351
Iteration: 1533; Percent complete: 38.3%; Average loss: 3.2324
Iteration: 1534; Percent complete: 38.4%; Average loss: 3.3812
Iteration: 1535; Percent complete: 38.4%; Average loss: 3.1562
Iteration: 1536; Percent complete: 38.4%; Average loss: 3.2998
Iteration: 1537; Percent complete: 38.4%; Average loss: 3.2734
Iteration: 1538; Percent complete: 38.5%; Average loss: 2.7647
Iteration: 1539; Percent complete: 38.5%; Average loss: 3.1862
Iteration: 1540; Percent complete: 38.5%; Average loss: 3.2737
Iteration: 1541; Percent complete: 38.5%; Average loss: 3.3207
Iteration: 1542; Percent complete: 38.6%; Average loss: 3.2240
Iteration: 1543; Percent complete: 38.6%; Average loss: 3.3933
Iteration: 1544; Percent complete: 38.6%; Average loss: 3.4304
Iteration: 1545; Percent complete: 38.6%; Average loss: 3.4470
Iteration: 1546; Percent complete: 38.6%; Average loss: 3.2784
Iteration: 1547; Percent complete: 38.7%; Average loss: 3.2898
Iteration: 1548; Percent complete: 38.7%; Average loss: 3.1266
Iteration: 1549; Percent complete: 38.7%; Average loss: 3.7085
Iteration: 1550; Percent complete: 38.8%; Average loss: 3.1412
Iteration: 1551; Percent complete: 38.8%; Average loss: 3.3900
Iteration: 1552; Percent complete: 38.8%; Average loss: 3.3368
Iteration: 1553; Percent complete: 38.8%; Average loss: 3.3565
Iteration: 1554; Percent complete: 38.9%; Average loss: 3.5470
Iteration: 1555; Percent complete: 38.9%; Average loss: 3.4423
Iteration: 1556; Percent complete: 38.9%; Average loss: 3.3178
Iteration: 1557; Percent complete: 38.9%; Average loss: 2.9431
Iteration: 1558; Percent complete: 39.0%; Average loss: 3.2284
Iteration: 1559; Percent complete: 39.0%; Average loss: 3.2477
Iteration: 1560; Percent complete: 39.0%; Average loss: 3.4029
Iteration: 1561; Percent complete: 39.0%; Average loss: 3.2218
Iteration: 1562; Percent complete: 39.1%; Average loss: 3.3428
Iteration: 1563; Percent complete: 39.1%; Average loss: 3.1307
Iteration: 1564; Percent complete: 39.1%; Average loss: 3.1675
Iteration: 1565; Percent complete: 39.1%; Average loss: 3.2568
Iteration: 1566; Percent complete: 39.1%; Average loss: 3.4463
Iteration: 1567; Percent complete: 39.2%; Average loss: 3.2803
Iteration: 1568; Percent complete: 39.2%; Average loss: 3.1995
Iteration: 1569; Percent complete: 39.2%; Average loss: 3.2313
Iteration: 1570; Percent complete: 39.2%; Average loss: 3.3836
Iteration: 1571; Percent complete: 39.3%; Average loss: 3.1900
Iteration: 1572; Percent complete: 39.3%; Average loss: 3.1521
Iteration: 1573; Percent complete: 39.3%; Average loss: 3.1894
Iteration: 1574; Percent complete: 39.4%; Average loss: 3.2558
Iteration: 1575; Percent complete: 39.4%; Average loss: 3.2768
Iteration: 1576; Percent complete: 39.4%; Average loss: 3.2660
Iteration: 1577; Percent complete: 39.4%; Average loss: 3.1501
Iteration: 1578; Percent complete: 39.5%; Average loss: 3.3736
Iteration: 1579; Percent complete: 39.5%; Average loss: 3.2788
Iteration: 1580; Percent complete: 39.5%; Average loss: 3.1940
Iteration: 1581; Percent complete: 39.5%; Average loss: 3.6187
Iteration: 1582; Percent complete: 39.6%; Average loss: 3.0492
Iteration: 1583; Percent complete: 39.6%; Average loss: 3.6141
Iteration: 1584; Percent complete: 39.6%; Average loss: 3.2673
Iteration: 1585; Percent complete: 39.6%; Average loss: 3.3570
Iteration: 1586; Percent complete: 39.6%; Average loss: 3.3023
Iteration: 1587; Percent complete: 39.7%; Average loss: 3.5556
Iteration: 1588; Percent complete: 39.7%; Average loss: 3.0490
Iteration: 1589; Percent complete: 39.7%; Average loss: 3.1045
Iteration: 1590; Percent complete: 39.8%; Average loss: 2.9904
Iteration: 1591; Percent complete: 39.8%; Average loss: 3.2292
Iteration: 1592; Percent complete: 39.8%; Average loss: 3.1664
Iteration: 1593; Percent complete: 39.8%; Average loss: 3.0775
Iteration: 1594; Percent complete: 39.9%; Average loss: 3.0289
Iteration: 1595; Percent complete: 39.9%; Average loss: 3.3664
Iteration: 1596; Percent complete: 39.9%; Average loss: 3.3480
Iteration: 1597; Percent complete: 39.9%; Average loss: 3.3657
Iteration: 1598; Percent complete: 40.0%; Average loss: 3.3031
Iteration: 1599; Percent complete: 40.0%; Average loss: 3.1987
Iteration: 1600; Percent complete: 40.0%; Average loss: 3.1996
Iteration: 1601; Percent complete: 40.0%; Average loss: 3.0931
Iteration: 1602; Percent complete: 40.1%; Average loss: 3.3244
Iteration: 1603; Percent complete: 40.1%; Average loss: 3.3195
Iteration: 1604; Percent complete: 40.1%; Average loss: 3.2042
Iteration: 1605; Percent complete: 40.1%; Average loss: 3.0110
Iteration: 1606; Percent complete: 40.2%; Average loss: 3.2909
Iteration: 1607; Percent complete: 40.2%; Average loss: 3.4815
Iteration: 1608; Percent complete: 40.2%; Average loss: 3.0713
Iteration: 1609; Percent complete: 40.2%; Average loss: 3.0685
Iteration: 1610; Percent complete: 40.2%; Average loss: 3.1198
Iteration: 1611; Percent complete: 40.3%; Average loss: 3.3782
Iteration: 1612; Percent complete: 40.3%; Average loss: 3.2228
Iteration: 1613; Percent complete: 40.3%; Average loss: 3.1116
Iteration: 1614; Percent complete: 40.4%; Average loss: 3.1477
Iteration: 1615; Percent complete: 40.4%; Average loss: 3.3601
Iteration: 1616; Percent complete: 40.4%; Average loss: 3.2215
Iteration: 1617; Percent complete: 40.4%; Average loss: 3.0668
Iteration: 1618; Percent complete: 40.5%; Average loss: 3.2546
Iteration: 1619; Percent complete: 40.5%; Average loss: 3.1816
Iteration: 1620; Percent complete: 40.5%; Average loss: 3.1743
Iteration: 1621; Percent complete: 40.5%; Average loss: 3.0550
Iteration: 1622; Percent complete: 40.6%; Average loss: 3.1844
Iteration: 1623; Percent complete: 40.6%; Average loss: 3.2004
Iteration: 1624; Percent complete: 40.6%; Average loss: 3.1701
Iteration: 1625; Percent complete: 40.6%; Average loss: 3.6294
Iteration: 1626; Percent complete: 40.6%; Average loss: 3.2425
Iteration: 1627; Percent complete: 40.7%; Average loss: 3.3056
Iteration: 1628; Percent complete: 40.7%; Average loss: 3.1709
Iteration: 1629; Percent complete: 40.7%; Average loss: 3.2972
Iteration: 1630; Percent complete: 40.8%; Average loss: 3.2493
Iteration: 1631; Percent complete: 40.8%; Average loss: 3.2967
Iteration: 1632; Percent complete: 40.8%; Average loss: 3.1424
Iteration: 1633; Percent complete: 40.8%; Average loss: 2.9650
Iteration: 1634; Percent complete: 40.8%; Average loss: 3.2061
Iteration: 1635; Percent complete: 40.9%; Average loss: 3.3901
Iteration: 1636; Percent complete: 40.9%; Average loss: 3.2673
Iteration: 1637; Percent complete: 40.9%; Average loss: 3.1117
Iteration: 1638; Percent complete: 40.9%; Average loss: 3.2322
Iteration: 1639; Percent complete: 41.0%; Average loss: 3.1342
Iteration: 1640; Percent complete: 41.0%; Average loss: 3.1885
Iteration: 1641; Percent complete: 41.0%; Average loss: 3.3173
Iteration: 1642; Percent complete: 41.0%; Average loss: 3.1884
Iteration: 1643; Percent complete: 41.1%; Average loss: 3.2050
Iteration: 1644; Percent complete: 41.1%; Average loss: 2.8775
Iteration: 1645; Percent complete: 41.1%; Average loss: 3.1130
Iteration: 1646; Percent complete: 41.1%; Average loss: 3.4739
Iteration: 1647; Percent complete: 41.2%; Average loss: 3.2247
Iteration: 1648; Percent complete: 41.2%; Average loss: 3.1590
Iteration: 1649; Percent complete: 41.2%; Average loss: 3.3660
Iteration: 1650; Percent complete: 41.2%; Average loss: 3.1890
Iteration: 1651; Percent complete: 41.3%; Average loss: 3.1083
Iteration: 1652; Percent complete: 41.3%; Average loss: 3.2978
Iteration: 1653; Percent complete: 41.3%; Average loss: 3.3990
Iteration: 1654; Percent complete: 41.3%; Average loss: 3.2624
Iteration: 1655; Percent complete: 41.4%; Average loss: 3.1726
Iteration: 1656; Percent complete: 41.4%; Average loss: 3.2106
Iteration: 1657; Percent complete: 41.4%; Average loss: 3.1238
Iteration: 1658; Percent complete: 41.4%; Average loss: 3.3776
Iteration: 1659; Percent complete: 41.5%; Average loss: 3.1127
Iteration: 1660; Percent complete: 41.5%; Average loss: 3.4151
Iteration: 1661; Percent complete: 41.5%; Average loss: 3.0635
Iteration: 1662; Percent complete: 41.5%; Average loss: 3.2381
Iteration: 1663; Percent complete: 41.6%; Average loss: 3.4107
Iteration: 1664; Percent complete: 41.6%; Average loss: 3.4089
Iteration: 1665; Percent complete: 41.6%; Average loss: 3.1427
Iteration: 1666; Percent complete: 41.6%; Average loss: 3.3037
Iteration: 1667; Percent complete: 41.7%; Average loss: 3.4504
Iteration: 1668; Percent complete: 41.7%; Average loss: 3.0540
Iteration: 1669; Percent complete: 41.7%; Average loss: 3.3464
Iteration: 1670; Percent complete: 41.8%; Average loss: 3.3697
Iteration: 1671; Percent complete: 41.8%; Average loss: 3.1482
Iteration: 1672; Percent complete: 41.8%; Average loss: 3.0597
Iteration: 1673; Percent complete: 41.8%; Average loss: 3.2390
Iteration: 1674; Percent complete: 41.9%; Average loss: 3.3887
Iteration: 1675; Percent complete: 41.9%; Average loss: 3.1776
Iteration: 1676; Percent complete: 41.9%; Average loss: 3.0507
Iteration: 1677; Percent complete: 41.9%; Average loss: 2.9863
Iteration: 1678; Percent complete: 41.9%; Average loss: 3.1910
Iteration: 1679; Percent complete: 42.0%; Average loss: 3.6251
Iteration: 1680; Percent complete: 42.0%; Average loss: 3.1217
Iteration: 1681; Percent complete: 42.0%; Average loss: 3.0321
Iteration: 1682; Percent complete: 42.0%; Average loss: 3.0983
Iteration: 1683; Percent complete: 42.1%; Average loss: 3.2249
Iteration: 1684; Percent complete: 42.1%; Average loss: 3.0918
Iteration: 1685; Percent complete: 42.1%; Average loss: 3.2616
Iteration: 1686; Percent complete: 42.1%; Average loss: 3.0851
Iteration: 1687; Percent complete: 42.2%; Average loss: 3.0399
Iteration: 1688; Percent complete: 42.2%; Average loss: 3.2663
Iteration: 1689; Percent complete: 42.2%; Average loss: 3.6563
Iteration: 1690; Percent complete: 42.2%; Average loss: 3.3812
Iteration: 1691; Percent complete: 42.3%; Average loss: 3.1511
Iteration: 1692; Percent complete: 42.3%; Average loss: 3.3258
Iteration: 1693; Percent complete: 42.3%; Average loss: 3.4110
Iteration: 1694; Percent complete: 42.4%; Average loss: 3.1868
Iteration: 1695; Percent complete: 42.4%; Average loss: 3.2952
Iteration: 1696; Percent complete: 42.4%; Average loss: 3.3234
Iteration: 1697; Percent complete: 42.4%; Average loss: 3.3566
Iteration: 1698; Percent complete: 42.4%; Average loss: 3.4101
Iteration: 1699; Percent complete: 42.5%; Average loss: 3.2529
Iteration: 1700; Percent complete: 42.5%; Average loss: 3.2332
Iteration: 1701; Percent complete: 42.5%; Average loss: 3.1302
Iteration: 1702; Percent complete: 42.5%; Average loss: 3.3687
Iteration: 1703; Percent complete: 42.6%; Average loss: 3.1476
Iteration: 1704; Percent complete: 42.6%; Average loss: 3.0277
Iteration: 1705; Percent complete: 42.6%; Average loss: 3.2638
Iteration: 1706; Percent complete: 42.6%; Average loss: 3.2702
Iteration: 1707; Percent complete: 42.7%; Average loss: 3.2387
Iteration: 1708; Percent complete: 42.7%; Average loss: 3.3197
Iteration: 1709; Percent complete: 42.7%; Average loss: 3.2621
Iteration: 1710; Percent complete: 42.8%; Average loss: 2.9938
Iteration: 1711; Percent complete: 42.8%; Average loss: 3.3258
Iteration: 1712; Percent complete: 42.8%; Average loss: 3.0890
Iteration: 1713; Percent complete: 42.8%; Average loss: 3.6095
Iteration: 1714; Percent complete: 42.9%; Average loss: 3.3512
Iteration: 1715; Percent complete: 42.9%; Average loss: 2.9541
Iteration: 1716; Percent complete: 42.9%; Average loss: 3.2970
Iteration: 1717; Percent complete: 42.9%; Average loss: 3.3994
Iteration: 1718; Percent complete: 43.0%; Average loss: 3.0629
Iteration: 1719; Percent complete: 43.0%; Average loss: 2.9826
Iteration: 1720; Percent complete: 43.0%; Average loss: 3.2798
Iteration: 1721; Percent complete: 43.0%; Average loss: 3.2825
Iteration: 1722; Percent complete: 43.0%; Average loss: 3.1715
Iteration: 1723; Percent complete: 43.1%; Average loss: 3.3072
Iteration: 1724; Percent complete: 43.1%; Average loss: 3.2425
Iteration: 1725; Percent complete: 43.1%; Average loss: 3.2648
Iteration: 1726; Percent complete: 43.1%; Average loss: 3.2995
Iteration: 1727; Percent complete: 43.2%; Average loss: 3.2765
Iteration: 1728; Percent complete: 43.2%; Average loss: 3.2206
Iteration: 1729; Percent complete: 43.2%; Average loss: 3.4010
Iteration: 1730; Percent complete: 43.2%; Average loss: 3.3551
Iteration: 1731; Percent complete: 43.3%; Average loss: 3.2555
Iteration: 1732; Percent complete: 43.3%; Average loss: 3.3251
Iteration: 1733; Percent complete: 43.3%; Average loss: 2.9463
Iteration: 1734; Percent complete: 43.4%; Average loss: 3.2982
Iteration: 1735; Percent complete: 43.4%; Average loss: 3.1850
Iteration: 1736; Percent complete: 43.4%; Average loss: 3.2149
Iteration: 1737; Percent complete: 43.4%; Average loss: 3.0848
Iteration: 1738; Percent complete: 43.5%; Average loss: 3.4000
Iteration: 1739; Percent complete: 43.5%; Average loss: 3.1406
Iteration: 1740; Percent complete: 43.5%; Average loss: 3.4902
Iteration: 1741; Percent complete: 43.5%; Average loss: 3.1772
Iteration: 1742; Percent complete: 43.5%; Average loss: 2.9069
Iteration: 1743; Percent complete: 43.6%; Average loss: 3.2991
Iteration: 1744; Percent complete: 43.6%; Average loss: 3.3429
Iteration: 1745; Percent complete: 43.6%; Average loss: 3.0230
Iteration: 1746; Percent complete: 43.6%; Average loss: 3.3042
Iteration: 1747; Percent complete: 43.7%; Average loss: 3.3483
Iteration: 1748; Percent complete: 43.7%; Average loss: 3.5824
Iteration: 1749; Percent complete: 43.7%; Average loss: 3.1267
Iteration: 1750; Percent complete: 43.8%; Average loss: 3.2874
Iteration: 1751; Percent complete: 43.8%; Average loss: 3.1686
Iteration: 1752; Percent complete: 43.8%; Average loss: 3.3388
Iteration: 1753; Percent complete: 43.8%; Average loss: 3.2387
Iteration: 1754; Percent complete: 43.9%; Average loss: 3.2589
Iteration: 1755; Percent complete: 43.9%; Average loss: 3.3628
Iteration: 1756; Percent complete: 43.9%; Average loss: 3.4778
Iteration: 1757; Percent complete: 43.9%; Average loss: 3.3497
Iteration: 1758; Percent complete: 44.0%; Average loss: 2.9971
Iteration: 1759; Percent complete: 44.0%; Average loss: 3.1216
Iteration: 1760; Percent complete: 44.0%; Average loss: 3.0502
Iteration: 1761; Percent complete: 44.0%; Average loss: 3.4261
Iteration: 1762; Percent complete: 44.0%; Average loss: 3.3394
Iteration: 1763; Percent complete: 44.1%; Average loss: 3.1951
Iteration: 1764; Percent complete: 44.1%; Average loss: 3.3443
Iteration: 1765; Percent complete: 44.1%; Average loss: 2.7812
Iteration: 1766; Percent complete: 44.1%; Average loss: 3.1652
Iteration: 1767; Percent complete: 44.2%; Average loss: 3.2538
Iteration: 1768; Percent complete: 44.2%; Average loss: 3.3535
Iteration: 1769; Percent complete: 44.2%; Average loss: 3.2385
Iteration: 1770; Percent complete: 44.2%; Average loss: 3.2567
Iteration: 1771; Percent complete: 44.3%; Average loss: 3.1825
Iteration: 1772; Percent complete: 44.3%; Average loss: 3.1488
Iteration: 1773; Percent complete: 44.3%; Average loss: 3.1791
Iteration: 1774; Percent complete: 44.4%; Average loss: 3.3675
Iteration: 1775; Percent complete: 44.4%; Average loss: 3.3444
Iteration: 1776; Percent complete: 44.4%; Average loss: 3.0171
Iteration: 1777; Percent complete: 44.4%; Average loss: 3.1287
Iteration: 1778; Percent complete: 44.5%; Average loss: 3.1481
Iteration: 1779; Percent complete: 44.5%; Average loss: 3.1951
Iteration: 1780; Percent complete: 44.5%; Average loss: 3.1957
Iteration: 1781; Percent complete: 44.5%; Average loss: 3.3952
Iteration: 1782; Percent complete: 44.5%; Average loss: 3.4256
Iteration: 1783; Percent complete: 44.6%; Average loss: 3.0526
Iteration: 1784; Percent complete: 44.6%; Average loss: 2.9873
Iteration: 1785; Percent complete: 44.6%; Average loss: 3.2838
Iteration: 1786; Percent complete: 44.6%; Average loss: 3.4532
Iteration: 1787; Percent complete: 44.7%; Average loss: 3.4196
Iteration: 1788; Percent complete: 44.7%; Average loss: 3.2043
Iteration: 1789; Percent complete: 44.7%; Average loss: 3.0286
Iteration: 1790; Percent complete: 44.8%; Average loss: 2.9583
Iteration: 1791; Percent complete: 44.8%; Average loss: 3.0257
Iteration: 1792; Percent complete: 44.8%; Average loss: 2.7808
Iteration: 1793; Percent complete: 44.8%; Average loss: 3.1514
Iteration: 1794; Percent complete: 44.9%; Average loss: 3.2737
Iteration: 1795; Percent complete: 44.9%; Average loss: 3.2825
Iteration: 1796; Percent complete: 44.9%; Average loss: 3.2237
Iteration: 1797; Percent complete: 44.9%; Average loss: 3.1440
Iteration: 1798; Percent complete: 45.0%; Average loss: 3.2237
Iteration: 1799; Percent complete: 45.0%; Average loss: 3.1375
Iteration: 1800; Percent complete: 45.0%; Average loss: 3.4507
Iteration: 1801; Percent complete: 45.0%; Average loss: 3.1137
Iteration: 1802; Percent complete: 45.1%; Average loss: 3.1971
Iteration: 1803; Percent complete: 45.1%; Average loss: 3.2653
Iteration: 1804; Percent complete: 45.1%; Average loss: 3.3665
Iteration: 1805; Percent complete: 45.1%; Average loss: 3.2305
Iteration: 1806; Percent complete: 45.1%; Average loss: 3.1893
Iteration: 1807; Percent complete: 45.2%; Average loss: 3.1688
Iteration: 1808; Percent complete: 45.2%; Average loss: 3.2250
Iteration: 1809; Percent complete: 45.2%; Average loss: 3.0329
Iteration: 1810; Percent complete: 45.2%; Average loss: 3.2236
Iteration: 1811; Percent complete: 45.3%; Average loss: 3.2071
Iteration: 1812; Percent complete: 45.3%; Average loss: 2.9032
Iteration: 1813; Percent complete: 45.3%; Average loss: 2.9606
Iteration: 1814; Percent complete: 45.4%; Average loss: 3.1072
Iteration: 1815; Percent complete: 45.4%; Average loss: 3.0906
Iteration: 1816; Percent complete: 45.4%; Average loss: 3.2019
Iteration: 1817; Percent complete: 45.4%; Average loss: 3.0157
Iteration: 1818; Percent complete: 45.5%; Average loss: 3.1911
Iteration: 1819; Percent complete: 45.5%; Average loss: 3.1921
Iteration: 1820; Percent complete: 45.5%; Average loss: 3.2470
Iteration: 1821; Percent complete: 45.5%; Average loss: 3.0591
Iteration: 1822; Percent complete: 45.6%; Average loss: 3.3088
Iteration: 1823; Percent complete: 45.6%; Average loss: 3.1945
Iteration: 1824; Percent complete: 45.6%; Average loss: 3.0563
Iteration: 1825; Percent complete: 45.6%; Average loss: 3.2119
Iteration: 1826; Percent complete: 45.6%; Average loss: 3.0624
Iteration: 1827; Percent complete: 45.7%; Average loss: 3.2560
Iteration: 1828; Percent complete: 45.7%; Average loss: 3.2533
Iteration: 1829; Percent complete: 45.7%; Average loss: 3.1337
Iteration: 1830; Percent complete: 45.8%; Average loss: 3.2273
Iteration: 1831; Percent complete: 45.8%; Average loss: 3.4976
Iteration: 1832; Percent complete: 45.8%; Average loss: 3.1607
Iteration: 1833; Percent complete: 45.8%; Average loss: 3.1203
Iteration: 1834; Percent complete: 45.9%; Average loss: 3.0393
Iteration: 1835; Percent complete: 45.9%; Average loss: 3.1065
Iteration: 1836; Percent complete: 45.9%; Average loss: 3.0437
Iteration: 1837; Percent complete: 45.9%; Average loss: 3.4095
Iteration: 1838; Percent complete: 46.0%; Average loss: 3.2891
Iteration: 1839; Percent complete: 46.0%; Average loss: 3.1812
Iteration: 1840; Percent complete: 46.0%; Average loss: 3.3283
Iteration: 1841; Percent complete: 46.0%; Average loss: 3.3209
Iteration: 1842; Percent complete: 46.1%; Average loss: 3.1900
Iteration: 1843; Percent complete: 46.1%; Average loss: 3.2484
Iteration: 1844; Percent complete: 46.1%; Average loss: 3.3544
Iteration: 1845; Percent complete: 46.1%; Average loss: 3.1091
Iteration: 1846; Percent complete: 46.2%; Average loss: 3.1929
Iteration: 1847; Percent complete: 46.2%; Average loss: 3.1815
Iteration: 1848; Percent complete: 46.2%; Average loss: 3.2486
Iteration: 1849; Percent complete: 46.2%; Average loss: 3.2282
Iteration: 1850; Percent complete: 46.2%; Average loss: 2.9989
Iteration: 1851; Percent complete: 46.3%; Average loss: 3.1863
Iteration: 1852; Percent complete: 46.3%; Average loss: 3.4067
Iteration: 1853; Percent complete: 46.3%; Average loss: 3.2082
Iteration: 1854; Percent complete: 46.4%; Average loss: 3.2785
Iteration: 1855; Percent complete: 46.4%; Average loss: 3.3747
Iteration: 1856; Percent complete: 46.4%; Average loss: 3.1456
Iteration: 1857; Percent complete: 46.4%; Average loss: 3.1870
Iteration: 1858; Percent complete: 46.5%; Average loss: 3.2537
Iteration: 1859; Percent complete: 46.5%; Average loss: 3.2692
Iteration: 1860; Percent complete: 46.5%; Average loss: 3.1879
Iteration: 1861; Percent complete: 46.5%; Average loss: 3.2469
Iteration: 1862; Percent complete: 46.6%; Average loss: 3.2049
Iteration: 1863; Percent complete: 46.6%; Average loss: 3.1014
Iteration: 1864; Percent complete: 46.6%; Average loss: 3.1133
Iteration: 1865; Percent complete: 46.6%; Average loss: 3.1178
Iteration: 1866; Percent complete: 46.7%; Average loss: 3.0179
Iteration: 1867; Percent complete: 46.7%; Average loss: 3.1701
Iteration: 1868; Percent complete: 46.7%; Average loss: 3.3251
Iteration: 1869; Percent complete: 46.7%; Average loss: 3.1607
Iteration: 1870; Percent complete: 46.8%; Average loss: 3.2089
Iteration: 1871; Percent complete: 46.8%; Average loss: 3.1621
Iteration: 1872; Percent complete: 46.8%; Average loss: 3.3727
Iteration: 1873; Percent complete: 46.8%; Average loss: 3.0894
Iteration: 1874; Percent complete: 46.9%; Average loss: 3.0530
Iteration: 1875; Percent complete: 46.9%; Average loss: 3.0392
Iteration: 1876; Percent complete: 46.9%; Average loss: 3.3275
Iteration: 1877; Percent complete: 46.9%; Average loss: 3.2805
Iteration: 1878; Percent complete: 46.9%; Average loss: 3.0333
Iteration: 1879; Percent complete: 47.0%; Average loss: 3.1586
Iteration: 1880; Percent complete: 47.0%; Average loss: 3.1585
Iteration: 1881; Percent complete: 47.0%; Average loss: 3.1715
Iteration: 1882; Percent complete: 47.0%; Average loss: 3.2450
Iteration: 1883; Percent complete: 47.1%; Average loss: 3.6144
Iteration: 1884; Percent complete: 47.1%; Average loss: 3.2692
Iteration: 1885; Percent complete: 47.1%; Average loss: 3.2761
Iteration: 1886; Percent complete: 47.1%; Average loss: 3.0325
Iteration: 1887; Percent complete: 47.2%; Average loss: 3.0972
Iteration: 1888; Percent complete: 47.2%; Average loss: 3.1938
Iteration: 1889; Percent complete: 47.2%; Average loss: 3.1295
Iteration: 1890; Percent complete: 47.2%; Average loss: 3.0874
Iteration: 1891; Percent complete: 47.3%; Average loss: 3.3238
Iteration: 1892; Percent complete: 47.3%; Average loss: 3.2715
Iteration: 1893; Percent complete: 47.3%; Average loss: 3.1978
Iteration: 1894; Percent complete: 47.3%; Average loss: 3.0901
Iteration: 1895; Percent complete: 47.4%; Average loss: 3.1968
Iteration: 1896; Percent complete: 47.4%; Average loss: 3.2934
Iteration: 1897; Percent complete: 47.4%; Average loss: 3.1383
Iteration: 1898; Percent complete: 47.4%; Average loss: 3.2630
Iteration: 1899; Percent complete: 47.5%; Average loss: 3.3959
Iteration: 1900; Percent complete: 47.5%; Average loss: 3.1393
Iteration: 1901; Percent complete: 47.5%; Average loss: 3.2345
Iteration: 1902; Percent complete: 47.5%; Average loss: 3.1546
Iteration: 1903; Percent complete: 47.6%; Average loss: 3.2041
Iteration: 1904; Percent complete: 47.6%; Average loss: 3.0299
Iteration: 1905; Percent complete: 47.6%; Average loss: 3.1293
Iteration: 1906; Percent complete: 47.6%; Average loss: 3.0386
Iteration: 1907; Percent complete: 47.7%; Average loss: 3.2141
Iteration: 1908; Percent complete: 47.7%; Average loss: 2.8928
Iteration: 1909; Percent complete: 47.7%; Average loss: 3.0346
Iteration: 1910; Percent complete: 47.8%; Average loss: 3.2183
Iteration: 1911; Percent complete: 47.8%; Average loss: 3.1884
Iteration: 1912; Percent complete: 47.8%; Average loss: 2.9810
Iteration: 1913; Percent complete: 47.8%; Average loss: 3.1253
Iteration: 1914; Percent complete: 47.9%; Average loss: 3.2205
Iteration: 1915; Percent complete: 47.9%; Average loss: 3.0170
Iteration: 1916; Percent complete: 47.9%; Average loss: 3.1652
Iteration: 1917; Percent complete: 47.9%; Average loss: 2.9246
Iteration: 1918; Percent complete: 47.9%; Average loss: 3.0870
Iteration: 1919; Percent complete: 48.0%; Average loss: 3.3475
Iteration: 1920; Percent complete: 48.0%; Average loss: 3.5096
Iteration: 1921; Percent complete: 48.0%; Average loss: 3.0381
Iteration: 1922; Percent complete: 48.0%; Average loss: 3.0902
Iteration: 1923; Percent complete: 48.1%; Average loss: 3.3573
Iteration: 1924; Percent complete: 48.1%; Average loss: 3.3599
Iteration: 1925; Percent complete: 48.1%; Average loss: 3.1934
Iteration: 1926; Percent complete: 48.1%; Average loss: 3.1238
Iteration: 1927; Percent complete: 48.2%; Average loss: 3.0758
Iteration: 1928; Percent complete: 48.2%; Average loss: 3.1309
Iteration: 1929; Percent complete: 48.2%; Average loss: 3.1304
Iteration: 1930; Percent complete: 48.2%; Average loss: 3.0470
Iteration: 1931; Percent complete: 48.3%; Average loss: 3.1914
Iteration: 1932; Percent complete: 48.3%; Average loss: 3.1885
Iteration: 1933; Percent complete: 48.3%; Average loss: 3.0388
Iteration: 1934; Percent complete: 48.4%; Average loss: 3.0372
Iteration: 1935; Percent complete: 48.4%; Average loss: 2.9806
Iteration: 1936; Percent complete: 48.4%; Average loss: 3.2232
Iteration: 1937; Percent complete: 48.4%; Average loss: 3.3090
Iteration: 1938; Percent complete: 48.4%; Average loss: 3.1786
Iteration: 1939; Percent complete: 48.5%; Average loss: 3.2134
Iteration: 1940; Percent complete: 48.5%; Average loss: 3.0538
Iteration: 1941; Percent complete: 48.5%; Average loss: 3.2921
Iteration: 1942; Percent complete: 48.5%; Average loss: 3.0853
Iteration: 1943; Percent complete: 48.6%; Average loss: 3.2331
Iteration: 1944; Percent complete: 48.6%; Average loss: 3.2072
Iteration: 1945; Percent complete: 48.6%; Average loss: 3.4785
Iteration: 1946; Percent complete: 48.6%; Average loss: 2.9828
Iteration: 1947; Percent complete: 48.7%; Average loss: 2.9508
Iteration: 1948; Percent complete: 48.7%; Average loss: 3.5496
Iteration: 1949; Percent complete: 48.7%; Average loss: 3.1289
Iteration: 1950; Percent complete: 48.8%; Average loss: 3.0523
Iteration: 1951; Percent complete: 48.8%; Average loss: 3.0588
Iteration: 1952; Percent complete: 48.8%; Average loss: 3.0513
Iteration: 1953; Percent complete: 48.8%; Average loss: 3.2701
Iteration: 1954; Percent complete: 48.9%; Average loss: 3.1780
Iteration: 1955; Percent complete: 48.9%; Average loss: 2.9624
Iteration: 1956; Percent complete: 48.9%; Average loss: 3.1952
Iteration: 1957; Percent complete: 48.9%; Average loss: 2.9746
Iteration: 1958; Percent complete: 48.9%; Average loss: 3.1789
Iteration: 1959; Percent complete: 49.0%; Average loss: 3.2643
Iteration: 1960; Percent complete: 49.0%; Average loss: 2.8616
Iteration: 1961; Percent complete: 49.0%; Average loss: 3.2614
Iteration: 1962; Percent complete: 49.0%; Average loss: 3.0526
Iteration: 1963; Percent complete: 49.1%; Average loss: 3.2833
Iteration: 1964; Percent complete: 49.1%; Average loss: 2.9454
Iteration: 1965; Percent complete: 49.1%; Average loss: 3.2621
Iteration: 1966; Percent complete: 49.1%; Average loss: 3.0893
Iteration: 1967; Percent complete: 49.2%; Average loss: 3.2134
Iteration: 1968; Percent complete: 49.2%; Average loss: 3.2407
Iteration: 1969; Percent complete: 49.2%; Average loss: 3.3096
Iteration: 1970; Percent complete: 49.2%; Average loss: 3.1080
Iteration: 1971; Percent complete: 49.3%; Average loss: 3.0142
Iteration: 1972; Percent complete: 49.3%; Average loss: 3.2731
Iteration: 1973; Percent complete: 49.3%; Average loss: 3.4868
Iteration: 1974; Percent complete: 49.4%; Average loss: 3.1199
Iteration: 1975; Percent complete: 49.4%; Average loss: 3.0493
Iteration: 1976; Percent complete: 49.4%; Average loss: 3.1532
Iteration: 1977; Percent complete: 49.4%; Average loss: 3.3482
Iteration: 1978; Percent complete: 49.5%; Average loss: 2.9784
Iteration: 1979; Percent complete: 49.5%; Average loss: 3.2034
Iteration: 1980; Percent complete: 49.5%; Average loss: 3.0286
Iteration: 1981; Percent complete: 49.5%; Average loss: 2.8742
Iteration: 1982; Percent complete: 49.5%; Average loss: 2.9711
Iteration: 1983; Percent complete: 49.6%; Average loss: 3.2717
Iteration: 1984; Percent complete: 49.6%; Average loss: 3.3487
Iteration: 1985; Percent complete: 49.6%; Average loss: 3.1621
Iteration: 1986; Percent complete: 49.6%; Average loss: 3.1107
Iteration: 1987; Percent complete: 49.7%; Average loss: 3.1927
Iteration: 1988; Percent complete: 49.7%; Average loss: 2.9303
Iteration: 1989; Percent complete: 49.7%; Average loss: 3.0200
Iteration: 1990; Percent complete: 49.8%; Average loss: 3.2569
Iteration: 1991; Percent complete: 49.8%; Average loss: 3.1894
Iteration: 1992; Percent complete: 49.8%; Average loss: 2.9170
Iteration: 1993; Percent complete: 49.8%; Average loss: 3.1459
Iteration: 1994; Percent complete: 49.9%; Average loss: 3.3948
Iteration: 1995; Percent complete: 49.9%; Average loss: 3.1232
Iteration: 1996; Percent complete: 49.9%; Average loss: 2.9305
Iteration: 1997; Percent complete: 49.9%; Average loss: 2.8886
Iteration: 1998; Percent complete: 50.0%; Average loss: 3.3030
Iteration: 1999; Percent complete: 50.0%; Average loss: 3.0960
Iteration: 2000; Percent complete: 50.0%; Average loss: 3.2913
Iteration: 2001; Percent complete: 50.0%; Average loss: 3.1615
Iteration: 2002; Percent complete: 50.0%; Average loss: 3.0965
Iteration: 2003; Percent complete: 50.1%; Average loss: 3.0974
Iteration: 2004; Percent complete: 50.1%; Average loss: 3.0813
Iteration: 2005; Percent complete: 50.1%; Average loss: 3.1367
Iteration: 2006; Percent complete: 50.1%; Average loss: 3.1961
Iteration: 2007; Percent complete: 50.2%; Average loss: 3.2221
Iteration: 2008; Percent complete: 50.2%; Average loss: 3.1590
Iteration: 2009; Percent complete: 50.2%; Average loss: 3.1506
Iteration: 2010; Percent complete: 50.2%; Average loss: 3.0128
Iteration: 2011; Percent complete: 50.3%; Average loss: 3.2422
Iteration: 2012; Percent complete: 50.3%; Average loss: 3.0215
Iteration: 2013; Percent complete: 50.3%; Average loss: 3.1729
Iteration: 2014; Percent complete: 50.3%; Average loss: 3.1252
Iteration: 2015; Percent complete: 50.4%; Average loss: 3.1548
Iteration: 2016; Percent complete: 50.4%; Average loss: 3.2367
Iteration: 2017; Percent complete: 50.4%; Average loss: 2.8376
Iteration: 2018; Percent complete: 50.4%; Average loss: 3.1875
Iteration: 2019; Percent complete: 50.5%; Average loss: 3.3657
Iteration: 2020; Percent complete: 50.5%; Average loss: 3.3521
Iteration: 2021; Percent complete: 50.5%; Average loss: 3.1347
Iteration: 2022; Percent complete: 50.5%; Average loss: 3.3619
Iteration: 2023; Percent complete: 50.6%; Average loss: 2.8478
Iteration: 2024; Percent complete: 50.6%; Average loss: 3.3792
Iteration: 2025; Percent complete: 50.6%; Average loss: 3.1803
Iteration: 2026; Percent complete: 50.6%; Average loss: 2.9781
Iteration: 2027; Percent complete: 50.7%; Average loss: 3.1255
Iteration: 2028; Percent complete: 50.7%; Average loss: 2.9772
Iteration: 2029; Percent complete: 50.7%; Average loss: 3.2694
Iteration: 2030; Percent complete: 50.7%; Average loss: 3.1925
Iteration: 2031; Percent complete: 50.8%; Average loss: 3.2006
Iteration: 2032; Percent complete: 50.8%; Average loss: 3.0220
Iteration: 2033; Percent complete: 50.8%; Average loss: 2.7527
Iteration: 2034; Percent complete: 50.8%; Average loss: 3.0773
Iteration: 2035; Percent complete: 50.9%; Average loss: 3.0677
Iteration: 2036; Percent complete: 50.9%; Average loss: 2.9347
Iteration: 2037; Percent complete: 50.9%; Average loss: 3.2037
Iteration: 2038; Percent complete: 50.9%; Average loss: 3.2424
Iteration: 2039; Percent complete: 51.0%; Average loss: 3.2766
Iteration: 2040; Percent complete: 51.0%; Average loss: 3.1153
Iteration: 2041; Percent complete: 51.0%; Average loss: 3.1877
Iteration: 2042; Percent complete: 51.0%; Average loss: 2.9961
Iteration: 2043; Percent complete: 51.1%; Average loss: 2.9694
Iteration: 2044; Percent complete: 51.1%; Average loss: 3.0603
Iteration: 2045; Percent complete: 51.1%; Average loss: 3.1616
Iteration: 2046; Percent complete: 51.1%; Average loss: 3.3332
Iteration: 2047; Percent complete: 51.2%; Average loss: 3.1617
Iteration: 2048; Percent complete: 51.2%; Average loss: 3.2225
Iteration: 2049; Percent complete: 51.2%; Average loss: 2.9827
Iteration: 2050; Percent complete: 51.2%; Average loss: 2.9887
Iteration: 2051; Percent complete: 51.3%; Average loss: 3.0270
Iteration: 2052; Percent complete: 51.3%; Average loss: 3.0424
Iteration: 2053; Percent complete: 51.3%; Average loss: 3.2729
Iteration: 2054; Percent complete: 51.3%; Average loss: 3.0727
Iteration: 2055; Percent complete: 51.4%; Average loss: 3.3532
Iteration: 2056; Percent complete: 51.4%; Average loss: 3.2882
Iteration: 2057; Percent complete: 51.4%; Average loss: 3.1864
Iteration: 2058; Percent complete: 51.4%; Average loss: 3.2492
Iteration: 2059; Percent complete: 51.5%; Average loss: 3.1152
Iteration: 2060; Percent complete: 51.5%; Average loss: 3.1598
Iteration: 2061; Percent complete: 51.5%; Average loss: 3.2821
Iteration: 2062; Percent complete: 51.5%; Average loss: 3.1712
Iteration: 2063; Percent complete: 51.6%; Average loss: 3.0602
Iteration: 2064; Percent complete: 51.6%; Average loss: 2.9962
Iteration: 2065; Percent complete: 51.6%; Average loss: 3.0554
Iteration: 2066; Percent complete: 51.6%; Average loss: 2.9547
Iteration: 2067; Percent complete: 51.7%; Average loss: 3.1610
Iteration: 2068; Percent complete: 51.7%; Average loss: 3.3597
Iteration: 2069; Percent complete: 51.7%; Average loss: 3.2854
Iteration: 2070; Percent complete: 51.7%; Average loss: 3.2837
Iteration: 2071; Percent complete: 51.8%; Average loss: 3.2382
Iteration: 2072; Percent complete: 51.8%; Average loss: 3.2694
Iteration: 2073; Percent complete: 51.8%; Average loss: 2.8747
Iteration: 2074; Percent complete: 51.8%; Average loss: 3.0446
Iteration: 2075; Percent complete: 51.9%; Average loss: 3.0351
Iteration: 2076; Percent complete: 51.9%; Average loss: 3.4074
Iteration: 2077; Percent complete: 51.9%; Average loss: 3.1550
Iteration: 2078; Percent complete: 51.9%; Average loss: 3.1246
Iteration: 2079; Percent complete: 52.0%; Average loss: 3.2109
Iteration: 2080; Percent complete: 52.0%; Average loss: 2.9621
Iteration: 2081; Percent complete: 52.0%; Average loss: 3.1017
Iteration: 2082; Percent complete: 52.0%; Average loss: 2.7995
Iteration: 2083; Percent complete: 52.1%; Average loss: 2.9485
Iteration: 2084; Percent complete: 52.1%; Average loss: 3.3603
Iteration: 2085; Percent complete: 52.1%; Average loss: 3.2510
Iteration: 2086; Percent complete: 52.1%; Average loss: 2.9379
Iteration: 2087; Percent complete: 52.2%; Average loss: 3.1878
Iteration: 2088; Percent complete: 52.2%; Average loss: 3.1299
Iteration: 2089; Percent complete: 52.2%; Average loss: 3.1659
Iteration: 2090; Percent complete: 52.2%; Average loss: 3.1484
Iteration: 2091; Percent complete: 52.3%; Average loss: 2.9840
Iteration: 2092; Percent complete: 52.3%; Average loss: 3.2708
Iteration: 2093; Percent complete: 52.3%; Average loss: 2.9718
Iteration: 2094; Percent complete: 52.3%; Average loss: 3.3906
Iteration: 2095; Percent complete: 52.4%; Average loss: 3.0195
Iteration: 2096; Percent complete: 52.4%; Average loss: 3.1280
Iteration: 2097; Percent complete: 52.4%; Average loss: 3.1130
Iteration: 2098; Percent complete: 52.4%; Average loss: 2.9602
Iteration: 2099; Percent complete: 52.5%; Average loss: 3.0890
Iteration: 2100; Percent complete: 52.5%; Average loss: 3.0430
Iteration: 2101; Percent complete: 52.5%; Average loss: 3.3791
Iteration: 2102; Percent complete: 52.5%; Average loss: 3.0404
Iteration: 2103; Percent complete: 52.6%; Average loss: 2.9540
Iteration: 2104; Percent complete: 52.6%; Average loss: 3.2697
Iteration: 2105; Percent complete: 52.6%; Average loss: 2.8141
Iteration: 2106; Percent complete: 52.6%; Average loss: 3.1806
Iteration: 2107; Percent complete: 52.7%; Average loss: 3.2243
Iteration: 2108; Percent complete: 52.7%; Average loss: 3.0978
Iteration: 2109; Percent complete: 52.7%; Average loss: 3.1776
Iteration: 2110; Percent complete: 52.8%; Average loss: 3.0445
Iteration: 2111; Percent complete: 52.8%; Average loss: 3.1184
Iteration: 2112; Percent complete: 52.8%; Average loss: 3.4417
Iteration: 2113; Percent complete: 52.8%; Average loss: 3.3090
Iteration: 2114; Percent complete: 52.8%; Average loss: 3.0701
Iteration: 2115; Percent complete: 52.9%; Average loss: 2.9201
Iteration: 2116; Percent complete: 52.9%; Average loss: 3.2525
Iteration: 2117; Percent complete: 52.9%; Average loss: 2.9560
Iteration: 2118; Percent complete: 52.9%; Average loss: 3.4223
Iteration: 2119; Percent complete: 53.0%; Average loss: 3.2335
Iteration: 2120; Percent complete: 53.0%; Average loss: 3.0983
Iteration: 2121; Percent complete: 53.0%; Average loss: 3.0506
Iteration: 2122; Percent complete: 53.0%; Average loss: 3.0845
Iteration: 2123; Percent complete: 53.1%; Average loss: 3.1852
Iteration: 2124; Percent complete: 53.1%; Average loss: 3.0808
Iteration: 2125; Percent complete: 53.1%; Average loss: 3.0904
Iteration: 2126; Percent complete: 53.1%; Average loss: 3.0581
Iteration: 2127; Percent complete: 53.2%; Average loss: 3.1964
Iteration: 2128; Percent complete: 53.2%; Average loss: 2.9927
Iteration: 2129; Percent complete: 53.2%; Average loss: 2.9588
Iteration: 2130; Percent complete: 53.2%; Average loss: 3.0105
Iteration: 2131; Percent complete: 53.3%; Average loss: 3.0822
Iteration: 2132; Percent complete: 53.3%; Average loss: 2.9769
Iteration: 2133; Percent complete: 53.3%; Average loss: 2.9754
Iteration: 2134; Percent complete: 53.3%; Average loss: 3.2250
Iteration: 2135; Percent complete: 53.4%; Average loss: 3.0454
Iteration: 2136; Percent complete: 53.4%; Average loss: 3.1644
Iteration: 2137; Percent complete: 53.4%; Average loss: 3.3373
Iteration: 2138; Percent complete: 53.4%; Average loss: 3.0577
Iteration: 2139; Percent complete: 53.5%; Average loss: 3.2700
Iteration: 2140; Percent complete: 53.5%; Average loss: 3.0290
Iteration: 2141; Percent complete: 53.5%; Average loss: 2.8429
Iteration: 2142; Percent complete: 53.5%; Average loss: 3.3855
Iteration: 2143; Percent complete: 53.6%; Average loss: 2.9711
Iteration: 2144; Percent complete: 53.6%; Average loss: 3.1375
Iteration: 2145; Percent complete: 53.6%; Average loss: 3.1186
Iteration: 2146; Percent complete: 53.6%; Average loss: 2.9837
Iteration: 2147; Percent complete: 53.7%; Average loss: 3.1893
Iteration: 2148; Percent complete: 53.7%; Average loss: 3.1531
Iteration: 2149; Percent complete: 53.7%; Average loss: 3.1012
Iteration: 2150; Percent complete: 53.8%; Average loss: 3.0022
Iteration: 2151; Percent complete: 53.8%; Average loss: 3.3139
Iteration: 2152; Percent complete: 53.8%; Average loss: 3.1064
Iteration: 2153; Percent complete: 53.8%; Average loss: 2.9993
Iteration: 2154; Percent complete: 53.8%; Average loss: 3.1809
Iteration: 2155; Percent complete: 53.9%; Average loss: 3.1385
Iteration: 2156; Percent complete: 53.9%; Average loss: 2.9968
Iteration: 2157; Percent complete: 53.9%; Average loss: 3.0580
Iteration: 2158; Percent complete: 53.9%; Average loss: 3.1416
Iteration: 2159; Percent complete: 54.0%; Average loss: 2.9879
Iteration: 2160; Percent complete: 54.0%; Average loss: 3.0494
Iteration: 2161; Percent complete: 54.0%; Average loss: 3.2595
Iteration: 2162; Percent complete: 54.0%; Average loss: 3.1467
Iteration: 2163; Percent complete: 54.1%; Average loss: 3.1041
Iteration: 2164; Percent complete: 54.1%; Average loss: 2.8093
Iteration: 2165; Percent complete: 54.1%; Average loss: 2.9628
Iteration: 2166; Percent complete: 54.1%; Average loss: 3.1432
Iteration: 2167; Percent complete: 54.2%; Average loss: 3.0674
Iteration: 2168; Percent complete: 54.2%; Average loss: 3.2925
Iteration: 2169; Percent complete: 54.2%; Average loss: 3.1030
Iteration: 2170; Percent complete: 54.2%; Average loss: 3.1607
Iteration: 2171; Percent complete: 54.3%; Average loss: 3.0815
Iteration: 2172; Percent complete: 54.3%; Average loss: 3.1635
Iteration: 2173; Percent complete: 54.3%; Average loss: 3.0857
Iteration: 2174; Percent complete: 54.4%; Average loss: 3.0782
Iteration: 2175; Percent complete: 54.4%; Average loss: 3.3357
Iteration: 2176; Percent complete: 54.4%; Average loss: 3.0787
Iteration: 2177; Percent complete: 54.4%; Average loss: 3.0243
Iteration: 2178; Percent complete: 54.4%; Average loss: 2.9352
Iteration: 2179; Percent complete: 54.5%; Average loss: 3.0739
Iteration: 2180; Percent complete: 54.5%; Average loss: 3.1272
Iteration: 2181; Percent complete: 54.5%; Average loss: 2.9976
Iteration: 2182; Percent complete: 54.5%; Average loss: 3.0696
Iteration: 2183; Percent complete: 54.6%; Average loss: 2.9507
Iteration: 2184; Percent complete: 54.6%; Average loss: 3.2398
Iteration: 2185; Percent complete: 54.6%; Average loss: 3.1508
Iteration: 2186; Percent complete: 54.6%; Average loss: 3.0823
Iteration: 2187; Percent complete: 54.7%; Average loss: 2.8472
Iteration: 2188; Percent complete: 54.7%; Average loss: 3.1374
Iteration: 2189; Percent complete: 54.7%; Average loss: 3.1290
Iteration: 2190; Percent complete: 54.8%; Average loss: 3.0017
Iteration: 2191; Percent complete: 54.8%; Average loss: 3.0786
Iteration: 2192; Percent complete: 54.8%; Average loss: 3.1933
Iteration: 2193; Percent complete: 54.8%; Average loss: 2.9073
Iteration: 2194; Percent complete: 54.9%; Average loss: 3.1540
Iteration: 2195; Percent complete: 54.9%; Average loss: 3.0673
Iteration: 2196; Percent complete: 54.9%; Average loss: 2.8850
Iteration: 2197; Percent complete: 54.9%; Average loss: 3.0750
Iteration: 2198; Percent complete: 54.9%; Average loss: 3.3076
Iteration: 2199; Percent complete: 55.0%; Average loss: 3.1254
Iteration: 2200; Percent complete: 55.0%; Average loss: 2.9518
Iteration: 2201; Percent complete: 55.0%; Average loss: 3.4106
Iteration: 2202; Percent complete: 55.0%; Average loss: 3.0467
Iteration: 2203; Percent complete: 55.1%; Average loss: 3.2525
Iteration: 2204; Percent complete: 55.1%; Average loss: 3.1485
Iteration: 2205; Percent complete: 55.1%; Average loss: 2.9525
Iteration: 2206; Percent complete: 55.1%; Average loss: 3.0722
Iteration: 2207; Percent complete: 55.2%; Average loss: 3.1427
Iteration: 2208; Percent complete: 55.2%; Average loss: 2.9837
Iteration: 2209; Percent complete: 55.2%; Average loss: 2.8696
Iteration: 2210; Percent complete: 55.2%; Average loss: 3.1792
Iteration: 2211; Percent complete: 55.3%; Average loss: 3.2736
Iteration: 2212; Percent complete: 55.3%; Average loss: 3.0997
Iteration: 2213; Percent complete: 55.3%; Average loss: 3.3162
Iteration: 2214; Percent complete: 55.4%; Average loss: 3.0543
Iteration: 2215; Percent complete: 55.4%; Average loss: 3.2006
Iteration: 2216; Percent complete: 55.4%; Average loss: 3.2189
Iteration: 2217; Percent complete: 55.4%; Average loss: 3.0145
Iteration: 2218; Percent complete: 55.5%; Average loss: 3.0734
Iteration: 2219; Percent complete: 55.5%; Average loss: 3.0758
Iteration: 2220; Percent complete: 55.5%; Average loss: 2.8386
Iteration: 2221; Percent complete: 55.5%; Average loss: 3.0896
Iteration: 2222; Percent complete: 55.5%; Average loss: 3.1227
Iteration: 2223; Percent complete: 55.6%; Average loss: 3.0687
Iteration: 2224; Percent complete: 55.6%; Average loss: 2.9712
Iteration: 2225; Percent complete: 55.6%; Average loss: 3.0230
Iteration: 2226; Percent complete: 55.6%; Average loss: 3.1718
Iteration: 2227; Percent complete: 55.7%; Average loss: 3.2168
Iteration: 2228; Percent complete: 55.7%; Average loss: 3.0715
Iteration: 2229; Percent complete: 55.7%; Average loss: 3.1592
Iteration: 2230; Percent complete: 55.8%; Average loss: 3.1037
Iteration: 2231; Percent complete: 55.8%; Average loss: 3.1571
Iteration: 2232; Percent complete: 55.8%; Average loss: 2.8425
Iteration: 2233; Percent complete: 55.8%; Average loss: 3.0250
Iteration: 2234; Percent complete: 55.9%; Average loss: 3.1982
Iteration: 2235; Percent complete: 55.9%; Average loss: 3.0243
Iteration: 2236; Percent complete: 55.9%; Average loss: 3.1260
Iteration: 2237; Percent complete: 55.9%; Average loss: 2.9199
Iteration: 2238; Percent complete: 56.0%; Average loss: 3.1112
Iteration: 2239; Percent complete: 56.0%; Average loss: 3.0639
Iteration: 2240; Percent complete: 56.0%; Average loss: 3.2016
Iteration: 2241; Percent complete: 56.0%; Average loss: 3.3070
Iteration: 2242; Percent complete: 56.0%; Average loss: 3.0757
Iteration: 2243; Percent complete: 56.1%; Average loss: 3.1562
Iteration: 2244; Percent complete: 56.1%; Average loss: 3.1295
Iteration: 2245; Percent complete: 56.1%; Average loss: 3.0391
Iteration: 2246; Percent complete: 56.1%; Average loss: 3.0745
Iteration: 2247; Percent complete: 56.2%; Average loss: 3.2417
Iteration: 2248; Percent complete: 56.2%; Average loss: 3.2657
Iteration: 2249; Percent complete: 56.2%; Average loss: 3.0046
Iteration: 2250; Percent complete: 56.2%; Average loss: 2.9461
Iteration: 2251; Percent complete: 56.3%; Average loss: 2.8856
Iteration: 2252; Percent complete: 56.3%; Average loss: 2.7938
Iteration: 2253; Percent complete: 56.3%; Average loss: 3.2884
Iteration: 2254; Percent complete: 56.4%; Average loss: 2.8962
Iteration: 2255; Percent complete: 56.4%; Average loss: 3.1368
Iteration: 2256; Percent complete: 56.4%; Average loss: 3.1053
Iteration: 2257; Percent complete: 56.4%; Average loss: 3.1103
Iteration: 2258; Percent complete: 56.5%; Average loss: 3.2070
Iteration: 2259; Percent complete: 56.5%; Average loss: 3.2243
Iteration: 2260; Percent complete: 56.5%; Average loss: 3.3060
Iteration: 2261; Percent complete: 56.5%; Average loss: 3.3790
Iteration: 2262; Percent complete: 56.5%; Average loss: 3.1651
Iteration: 2263; Percent complete: 56.6%; Average loss: 3.2335
Iteration: 2264; Percent complete: 56.6%; Average loss: 3.1744
Iteration: 2265; Percent complete: 56.6%; Average loss: 3.1973
Iteration: 2266; Percent complete: 56.6%; Average loss: 3.1652
Iteration: 2267; Percent complete: 56.7%; Average loss: 3.3111
Iteration: 2268; Percent complete: 56.7%; Average loss: 3.2705
Iteration: 2269; Percent complete: 56.7%; Average loss: 2.9600
Iteration: 2270; Percent complete: 56.8%; Average loss: 3.1375
Iteration: 2271; Percent complete: 56.8%; Average loss: 3.0359
Iteration: 2272; Percent complete: 56.8%; Average loss: 3.2223
Iteration: 2273; Percent complete: 56.8%; Average loss: 3.0060
Iteration: 2274; Percent complete: 56.9%; Average loss: 3.0528
Iteration: 2275; Percent complete: 56.9%; Average loss: 3.1650
Iteration: 2276; Percent complete: 56.9%; Average loss: 3.2438
Iteration: 2277; Percent complete: 56.9%; Average loss: 2.7522
Iteration: 2278; Percent complete: 57.0%; Average loss: 3.1004
Iteration: 2279; Percent complete: 57.0%; Average loss: 3.1077
Iteration: 2280; Percent complete: 57.0%; Average loss: 2.9915
Iteration: 2281; Percent complete: 57.0%; Average loss: 3.1750
Iteration: 2282; Percent complete: 57.0%; Average loss: 3.1409
Iteration: 2283; Percent complete: 57.1%; Average loss: 3.0043
Iteration: 2284; Percent complete: 57.1%; Average loss: 2.9327
Iteration: 2285; Percent complete: 57.1%; Average loss: 3.1149
Iteration: 2286; Percent complete: 57.1%; Average loss: 3.1714
Iteration: 2287; Percent complete: 57.2%; Average loss: 3.1427
Iteration: 2288; Percent complete: 57.2%; Average loss: 3.0129
Iteration: 2289; Percent complete: 57.2%; Average loss: 3.2388
Iteration: 2290; Percent complete: 57.2%; Average loss: 3.2221
Iteration: 2291; Percent complete: 57.3%; Average loss: 3.0104
Iteration: 2292; Percent complete: 57.3%; Average loss: 2.8813
Iteration: 2293; Percent complete: 57.3%; Average loss: 3.1123
Iteration: 2294; Percent complete: 57.4%; Average loss: 3.1865
Iteration: 2295; Percent complete: 57.4%; Average loss: 3.1488
Iteration: 2296; Percent complete: 57.4%; Average loss: 3.1431
Iteration: 2297; Percent complete: 57.4%; Average loss: 2.9209
Iteration: 2298; Percent complete: 57.5%; Average loss: 3.0813
Iteration: 2299; Percent complete: 57.5%; Average loss: 3.0110
Iteration: 2300; Percent complete: 57.5%; Average loss: 3.0721
Iteration: 2301; Percent complete: 57.5%; Average loss: 2.9130
Iteration: 2302; Percent complete: 57.6%; Average loss: 3.1787
Iteration: 2303; Percent complete: 57.6%; Average loss: 3.1660
Iteration: 2304; Percent complete: 57.6%; Average loss: 2.7454
Iteration: 2305; Percent complete: 57.6%; Average loss: 3.1480
Iteration: 2306; Percent complete: 57.6%; Average loss: 3.1294
Iteration: 2307; Percent complete: 57.7%; Average loss: 3.1462
Iteration: 2308; Percent complete: 57.7%; Average loss: 2.8211
Iteration: 2309; Percent complete: 57.7%; Average loss: 3.0286
Iteration: 2310; Percent complete: 57.8%; Average loss: 2.9441
Iteration: 2311; Percent complete: 57.8%; Average loss: 3.1251
Iteration: 2312; Percent complete: 57.8%; Average loss: 2.8802
Iteration: 2313; Percent complete: 57.8%; Average loss: 3.2242
Iteration: 2314; Percent complete: 57.9%; Average loss: 2.9639
Iteration: 2315; Percent complete: 57.9%; Average loss: 3.0197
Iteration: 2316; Percent complete: 57.9%; Average loss: 3.1668
Iteration: 2317; Percent complete: 57.9%; Average loss: 3.1809
Iteration: 2318; Percent complete: 58.0%; Average loss: 2.9720
Iteration: 2319; Percent complete: 58.0%; Average loss: 3.0380
Iteration: 2320; Percent complete: 58.0%; Average loss: 2.8116
Iteration: 2321; Percent complete: 58.0%; Average loss: 2.9268
Iteration: 2322; Percent complete: 58.1%; Average loss: 3.0346
Iteration: 2323; Percent complete: 58.1%; Average loss: 3.1640
Iteration: 2324; Percent complete: 58.1%; Average loss: 3.1167
Iteration: 2325; Percent complete: 58.1%; Average loss: 2.8885
Iteration: 2326; Percent complete: 58.1%; Average loss: 2.9017
Iteration: 2327; Percent complete: 58.2%; Average loss: 3.1084
Iteration: 2328; Percent complete: 58.2%; Average loss: 2.9918
Iteration: 2329; Percent complete: 58.2%; Average loss: 2.9203
Iteration: 2330; Percent complete: 58.2%; Average loss: 2.9109
Iteration: 2331; Percent complete: 58.3%; Average loss: 2.8863
Iteration: 2332; Percent complete: 58.3%; Average loss: 3.1468
Iteration: 2333; Percent complete: 58.3%; Average loss: 2.9875
Iteration: 2334; Percent complete: 58.4%; Average loss: 3.0780
Iteration: 2335; Percent complete: 58.4%; Average loss: 3.0201
Iteration: 2336; Percent complete: 58.4%; Average loss: 3.1307
Iteration: 2337; Percent complete: 58.4%; Average loss: 3.0465
Iteration: 2338; Percent complete: 58.5%; Average loss: 3.2366
Iteration: 2339; Percent complete: 58.5%; Average loss: 3.4331
Iteration: 2340; Percent complete: 58.5%; Average loss: 3.1236
Iteration: 2341; Percent complete: 58.5%; Average loss: 2.9198
Iteration: 2342; Percent complete: 58.6%; Average loss: 3.0133
Iteration: 2343; Percent complete: 58.6%; Average loss: 2.9411
Iteration: 2344; Percent complete: 58.6%; Average loss: 2.9121
Iteration: 2345; Percent complete: 58.6%; Average loss: 3.2238
Iteration: 2346; Percent complete: 58.7%; Average loss: 2.8825
Iteration: 2347; Percent complete: 58.7%; Average loss: 3.0535
Iteration: 2348; Percent complete: 58.7%; Average loss: 3.1272
Iteration: 2349; Percent complete: 58.7%; Average loss: 3.0360
Iteration: 2350; Percent complete: 58.8%; Average loss: 3.2393
Iteration: 2351; Percent complete: 58.8%; Average loss: 3.0790
Iteration: 2352; Percent complete: 58.8%; Average loss: 3.2993
Iteration: 2353; Percent complete: 58.8%; Average loss: 3.3225
Iteration: 2354; Percent complete: 58.9%; Average loss: 2.8577
Iteration: 2355; Percent complete: 58.9%; Average loss: 2.9782
Iteration: 2356; Percent complete: 58.9%; Average loss: 3.1777
Iteration: 2357; Percent complete: 58.9%; Average loss: 3.1653
Iteration: 2358; Percent complete: 59.0%; Average loss: 3.1195
Iteration: 2359; Percent complete: 59.0%; Average loss: 2.9775
Iteration: 2360; Percent complete: 59.0%; Average loss: 2.9467
Iteration: 2361; Percent complete: 59.0%; Average loss: 3.1119
Iteration: 2362; Percent complete: 59.1%; Average loss: 2.8101
Iteration: 2363; Percent complete: 59.1%; Average loss: 2.9123
Iteration: 2364; Percent complete: 59.1%; Average loss: 3.0645
Iteration: 2365; Percent complete: 59.1%; Average loss: 2.9947
Iteration: 2366; Percent complete: 59.2%; Average loss: 2.8189
Iteration: 2367; Percent complete: 59.2%; Average loss: 3.0736
Iteration: 2368; Percent complete: 59.2%; Average loss: 3.1171
Iteration: 2369; Percent complete: 59.2%; Average loss: 3.0211
Iteration: 2370; Percent complete: 59.2%; Average loss: 3.1870
Iteration: 2371; Percent complete: 59.3%; Average loss: 3.2016
Iteration: 2372; Percent complete: 59.3%; Average loss: 3.0459
Iteration: 2373; Percent complete: 59.3%; Average loss: 3.0949
Iteration: 2374; Percent complete: 59.4%; Average loss: 3.1999
Iteration: 2375; Percent complete: 59.4%; Average loss: 2.9559
Iteration: 2376; Percent complete: 59.4%; Average loss: 2.8808
Iteration: 2377; Percent complete: 59.4%; Average loss: 3.3126
Iteration: 2378; Percent complete: 59.5%; Average loss: 3.1430
Iteration: 2379; Percent complete: 59.5%; Average loss: 3.1294
Iteration: 2380; Percent complete: 59.5%; Average loss: 3.3463
Iteration: 2381; Percent complete: 59.5%; Average loss: 2.9956
Iteration: 2382; Percent complete: 59.6%; Average loss: 3.0552
Iteration: 2383; Percent complete: 59.6%; Average loss: 3.0598
Iteration: 2384; Percent complete: 59.6%; Average loss: 2.8995
Iteration: 2385; Percent complete: 59.6%; Average loss: 3.0686
Iteration: 2386; Percent complete: 59.7%; Average loss: 2.9967
Iteration: 2387; Percent complete: 59.7%; Average loss: 3.2053
Iteration: 2388; Percent complete: 59.7%; Average loss: 3.1688
Iteration: 2389; Percent complete: 59.7%; Average loss: 3.1419
Iteration: 2390; Percent complete: 59.8%; Average loss: 3.1862
Iteration: 2391; Percent complete: 59.8%; Average loss: 2.9159
Iteration: 2392; Percent complete: 59.8%; Average loss: 2.9708
Iteration: 2393; Percent complete: 59.8%; Average loss: 3.1303
Iteration: 2394; Percent complete: 59.9%; Average loss: 2.9206
Iteration: 2395; Percent complete: 59.9%; Average loss: 3.3261
Iteration: 2396; Percent complete: 59.9%; Average loss: 3.1418
Iteration: 2397; Percent complete: 59.9%; Average loss: 3.1940
Iteration: 2398; Percent complete: 60.0%; Average loss: 3.0047
Iteration: 2399; Percent complete: 60.0%; Average loss: 2.8955
Iteration: 2400; Percent complete: 60.0%; Average loss: 3.1216
Iteration: 2401; Percent complete: 60.0%; Average loss: 2.7779
Iteration: 2402; Percent complete: 60.1%; Average loss: 2.9333
Iteration: 2403; Percent complete: 60.1%; Average loss: 3.1830
Iteration: 2404; Percent complete: 60.1%; Average loss: 2.8827
Iteration: 2405; Percent complete: 60.1%; Average loss: 3.2037
Iteration: 2406; Percent complete: 60.2%; Average loss: 2.8206
Iteration: 2407; Percent complete: 60.2%; Average loss: 2.7791
Iteration: 2408; Percent complete: 60.2%; Average loss: 2.7770
Iteration: 2409; Percent complete: 60.2%; Average loss: 3.1777
Iteration: 2410; Percent complete: 60.2%; Average loss: 2.9960
Iteration: 2411; Percent complete: 60.3%; Average loss: 3.1159
Iteration: 2412; Percent complete: 60.3%; Average loss: 3.0140
Iteration: 2413; Percent complete: 60.3%; Average loss: 3.1245
Iteration: 2414; Percent complete: 60.4%; Average loss: 2.8609
Iteration: 2415; Percent complete: 60.4%; Average loss: 3.0542
Iteration: 2416; Percent complete: 60.4%; Average loss: 2.9416
Iteration: 2417; Percent complete: 60.4%; Average loss: 3.1126
Iteration: 2418; Percent complete: 60.5%; Average loss: 2.9259
Iteration: 2419; Percent complete: 60.5%; Average loss: 3.1029
Iteration: 2420; Percent complete: 60.5%; Average loss: 2.9979
Iteration: 2421; Percent complete: 60.5%; Average loss: 3.0361
Iteration: 2422; Percent complete: 60.6%; Average loss: 3.0488
Iteration: 2423; Percent complete: 60.6%; Average loss: 3.0126
Iteration: 2424; Percent complete: 60.6%; Average loss: 2.8545
Iteration: 2425; Percent complete: 60.6%; Average loss: 3.2068
Iteration: 2426; Percent complete: 60.7%; Average loss: 3.1456
Iteration: 2427; Percent complete: 60.7%; Average loss: 3.1989
Iteration: 2428; Percent complete: 60.7%; Average loss: 3.1395
Iteration: 2429; Percent complete: 60.7%; Average loss: 3.1532
Iteration: 2430; Percent complete: 60.8%; Average loss: 3.2671
Iteration: 2431; Percent complete: 60.8%; Average loss: 3.0271
Iteration: 2432; Percent complete: 60.8%; Average loss: 3.1716
Iteration: 2433; Percent complete: 60.8%; Average loss: 3.0810
Iteration: 2434; Percent complete: 60.9%; Average loss: 2.7967
Iteration: 2435; Percent complete: 60.9%; Average loss: 2.7483
Iteration: 2436; Percent complete: 60.9%; Average loss: 3.1457
Iteration: 2437; Percent complete: 60.9%; Average loss: 3.0020
Iteration: 2438; Percent complete: 61.0%; Average loss: 3.2609
Iteration: 2439; Percent complete: 61.0%; Average loss: 2.7873
Iteration: 2440; Percent complete: 61.0%; Average loss: 3.0489
Iteration: 2441; Percent complete: 61.0%; Average loss: 3.0253
Iteration: 2442; Percent complete: 61.1%; Average loss: 3.1105
Iteration: 2443; Percent complete: 61.1%; Average loss: 3.0353
Iteration: 2444; Percent complete: 61.1%; Average loss: 3.0857
Iteration: 2445; Percent complete: 61.1%; Average loss: 3.3480
Iteration: 2446; Percent complete: 61.2%; Average loss: 3.2690
Iteration: 2447; Percent complete: 61.2%; Average loss: 3.0704
Iteration: 2448; Percent complete: 61.2%; Average loss: 3.1106
Iteration: 2449; Percent complete: 61.2%; Average loss: 3.0685
Iteration: 2450; Percent complete: 61.3%; Average loss: 3.2848
Iteration: 2451; Percent complete: 61.3%; Average loss: 3.1682
Iteration: 2452; Percent complete: 61.3%; Average loss: 3.2259
Iteration: 2453; Percent complete: 61.3%; Average loss: 2.8256
Iteration: 2454; Percent complete: 61.4%; Average loss: 2.9663
Iteration: 2455; Percent complete: 61.4%; Average loss: 2.8566
Iteration: 2456; Percent complete: 61.4%; Average loss: 2.9189
Iteration: 2457; Percent complete: 61.4%; Average loss: 3.1689
Iteration: 2458; Percent complete: 61.5%; Average loss: 3.1516
Iteration: 2459; Percent complete: 61.5%; Average loss: 2.7921
Iteration: 2460; Percent complete: 61.5%; Average loss: 2.9963
Iteration: 2461; Percent complete: 61.5%; Average loss: 2.9360
Iteration: 2462; Percent complete: 61.6%; Average loss: 3.0643
Iteration: 2463; Percent complete: 61.6%; Average loss: 2.9337
Iteration: 2464; Percent complete: 61.6%; Average loss: 3.0586
Iteration: 2465; Percent complete: 61.6%; Average loss: 3.2495
Iteration: 2466; Percent complete: 61.7%; Average loss: 2.8991
Iteration: 2467; Percent complete: 61.7%; Average loss: 2.8905
Iteration: 2468; Percent complete: 61.7%; Average loss: 2.9900
Iteration: 2469; Percent complete: 61.7%; Average loss: 2.8405
Iteration: 2470; Percent complete: 61.8%; Average loss: 3.0347
Iteration: 2471; Percent complete: 61.8%; Average loss: 3.1597
Iteration: 2472; Percent complete: 61.8%; Average loss: 3.1502
Iteration: 2473; Percent complete: 61.8%; Average loss: 2.8665
Iteration: 2474; Percent complete: 61.9%; Average loss: 3.2096
Iteration: 2475; Percent complete: 61.9%; Average loss: 2.9214
Iteration: 2476; Percent complete: 61.9%; Average loss: 2.9636
Iteration: 2477; Percent complete: 61.9%; Average loss: 2.9369
Iteration: 2478; Percent complete: 62.0%; Average loss: 3.1269
Iteration: 2479; Percent complete: 62.0%; Average loss: 3.2667
Iteration: 2480; Percent complete: 62.0%; Average loss: 3.1637
Iteration: 2481; Percent complete: 62.0%; Average loss: 2.8544
Iteration: 2482; Percent complete: 62.1%; Average loss: 3.1016
Iteration: 2483; Percent complete: 62.1%; Average loss: 2.7889
Iteration: 2484; Percent complete: 62.1%; Average loss: 3.0368
Iteration: 2485; Percent complete: 62.1%; Average loss: 3.1059
Iteration: 2486; Percent complete: 62.2%; Average loss: 3.2594
Iteration: 2487; Percent complete: 62.2%; Average loss: 3.0247
Iteration: 2488; Percent complete: 62.2%; Average loss: 2.8928
Iteration: 2489; Percent complete: 62.2%; Average loss: 2.9729
Iteration: 2490; Percent complete: 62.3%; Average loss: 2.9137
Iteration: 2491; Percent complete: 62.3%; Average loss: 3.0212
Iteration: 2492; Percent complete: 62.3%; Average loss: 3.0233
Iteration: 2493; Percent complete: 62.3%; Average loss: 3.1743
Iteration: 2494; Percent complete: 62.4%; Average loss: 3.0530
Iteration: 2495; Percent complete: 62.4%; Average loss: 3.1795
Iteration: 2496; Percent complete: 62.4%; Average loss: 2.9842
Iteration: 2497; Percent complete: 62.4%; Average loss: 3.0195
Iteration: 2498; Percent complete: 62.5%; Average loss: 3.1736
Iteration: 2499; Percent complete: 62.5%; Average loss: 2.9895
Iteration: 2500; Percent complete: 62.5%; Average loss: 3.1359
Iteration: 2501; Percent complete: 62.5%; Average loss: 2.8333
Iteration: 2502; Percent complete: 62.5%; Average loss: 3.1269
Iteration: 2503; Percent complete: 62.6%; Average loss: 3.0549
Iteration: 2504; Percent complete: 62.6%; Average loss: 3.2510
Iteration: 2505; Percent complete: 62.6%; Average loss: 2.9155
Iteration: 2506; Percent complete: 62.6%; Average loss: 2.9115
Iteration: 2507; Percent complete: 62.7%; Average loss: 2.9230
Iteration: 2508; Percent complete: 62.7%; Average loss: 2.9838
Iteration: 2509; Percent complete: 62.7%; Average loss: 3.0885
Iteration: 2510; Percent complete: 62.7%; Average loss: 2.9786
Iteration: 2511; Percent complete: 62.8%; Average loss: 3.1011
Iteration: 2512; Percent complete: 62.8%; Average loss: 3.0806
Iteration: 2513; Percent complete: 62.8%; Average loss: 2.8309
Iteration: 2514; Percent complete: 62.8%; Average loss: 3.0174
Iteration: 2515; Percent complete: 62.9%; Average loss: 3.1679
Iteration: 2516; Percent complete: 62.9%; Average loss: 2.8480
Iteration: 2517; Percent complete: 62.9%; Average loss: 2.7689
Iteration: 2518; Percent complete: 62.9%; Average loss: 3.1078
Iteration: 2519; Percent complete: 63.0%; Average loss: 2.9649
Iteration: 2520; Percent complete: 63.0%; Average loss: 3.0112
Iteration: 2521; Percent complete: 63.0%; Average loss: 3.1670
Iteration: 2522; Percent complete: 63.0%; Average loss: 3.0615
Iteration: 2523; Percent complete: 63.1%; Average loss: 2.9889
Iteration: 2524; Percent complete: 63.1%; Average loss: 2.9638
Iteration: 2525; Percent complete: 63.1%; Average loss: 3.1207
Iteration: 2526; Percent complete: 63.1%; Average loss: 2.9509
Iteration: 2527; Percent complete: 63.2%; Average loss: 3.0689
Iteration: 2528; Percent complete: 63.2%; Average loss: 3.0261
Iteration: 2529; Percent complete: 63.2%; Average loss: 3.1077
Iteration: 2530; Percent complete: 63.2%; Average loss: 3.0860
Iteration: 2531; Percent complete: 63.3%; Average loss: 3.1168
Iteration: 2532; Percent complete: 63.3%; Average loss: 3.1949
Iteration: 2533; Percent complete: 63.3%; Average loss: 2.9313
Iteration: 2534; Percent complete: 63.3%; Average loss: 3.0361
Iteration: 2535; Percent complete: 63.4%; Average loss: 2.8950
Iteration: 2536; Percent complete: 63.4%; Average loss: 3.2540
Iteration: 2537; Percent complete: 63.4%; Average loss: 2.9422
Iteration: 2538; Percent complete: 63.4%; Average loss: 3.1153
Iteration: 2539; Percent complete: 63.5%; Average loss: 3.0752
Iteration: 2540; Percent complete: 63.5%; Average loss: 2.9277
Iteration: 2541; Percent complete: 63.5%; Average loss: 2.8672
Iteration: 2542; Percent complete: 63.5%; Average loss: 3.0995
Iteration: 2543; Percent complete: 63.6%; Average loss: 3.0790
Iteration: 2544; Percent complete: 63.6%; Average loss: 3.1075
Iteration: 2545; Percent complete: 63.6%; Average loss: 2.9523
Iteration: 2546; Percent complete: 63.6%; Average loss: 3.2824
Iteration: 2547; Percent complete: 63.7%; Average loss: 3.0544
Iteration: 2548; Percent complete: 63.7%; Average loss: 3.0881
Iteration: 2549; Percent complete: 63.7%; Average loss: 3.1549
Iteration: 2550; Percent complete: 63.7%; Average loss: 2.8493
Iteration: 2551; Percent complete: 63.8%; Average loss: 3.0098
Iteration: 2552; Percent complete: 63.8%; Average loss: 2.9038
Iteration: 2553; Percent complete: 63.8%; Average loss: 2.9215
Iteration: 2554; Percent complete: 63.8%; Average loss: 3.1478
Iteration: 2555; Percent complete: 63.9%; Average loss: 2.8944
Iteration: 2556; Percent complete: 63.9%; Average loss: 3.0087
Iteration: 2557; Percent complete: 63.9%; Average loss: 2.9883
Iteration: 2558; Percent complete: 63.9%; Average loss: 2.9813
Iteration: 2559; Percent complete: 64.0%; Average loss: 2.9587
Iteration: 2560; Percent complete: 64.0%; Average loss: 2.8310
Iteration: 2561; Percent complete: 64.0%; Average loss: 3.1036
Iteration: 2562; Percent complete: 64.0%; Average loss: 2.8954
Iteration: 2563; Percent complete: 64.1%; Average loss: 2.9977
Iteration: 2564; Percent complete: 64.1%; Average loss: 3.0980
Iteration: 2565; Percent complete: 64.1%; Average loss: 3.0149
Iteration: 2566; Percent complete: 64.1%; Average loss: 3.0338
Iteration: 2567; Percent complete: 64.2%; Average loss: 2.7970
Iteration: 2568; Percent complete: 64.2%; Average loss: 2.9522
Iteration: 2569; Percent complete: 64.2%; Average loss: 3.0697
Iteration: 2570; Percent complete: 64.2%; Average loss: 3.0363
Iteration: 2571; Percent complete: 64.3%; Average loss: 2.9534
Iteration: 2572; Percent complete: 64.3%; Average loss: 2.8975
Iteration: 2573; Percent complete: 64.3%; Average loss: 2.8203
Iteration: 2574; Percent complete: 64.3%; Average loss: 3.1844
Iteration: 2575; Percent complete: 64.4%; Average loss: 3.0432
Iteration: 2576; Percent complete: 64.4%; Average loss: 2.6609
Iteration: 2577; Percent complete: 64.4%; Average loss: 2.9433
Iteration: 2578; Percent complete: 64.5%; Average loss: 3.1609
Iteration: 2579; Percent complete: 64.5%; Average loss: 3.0516
Iteration: 2580; Percent complete: 64.5%; Average loss: 3.1884
Iteration: 2581; Percent complete: 64.5%; Average loss: 2.9809
Iteration: 2582; Percent complete: 64.5%; Average loss: 2.9642
Iteration: 2583; Percent complete: 64.6%; Average loss: 3.1269
Iteration: 2584; Percent complete: 64.6%; Average loss: 2.9235
Iteration: 2585; Percent complete: 64.6%; Average loss: 3.1152
Iteration: 2586; Percent complete: 64.6%; Average loss: 2.9154
Iteration: 2587; Percent complete: 64.7%; Average loss: 2.9997
Iteration: 2588; Percent complete: 64.7%; Average loss: 2.9737
Iteration: 2589; Percent complete: 64.7%; Average loss: 3.1851
Iteration: 2590; Percent complete: 64.8%; Average loss: 3.0746
Iteration: 2591; Percent complete: 64.8%; Average loss: 3.0736
Iteration: 2592; Percent complete: 64.8%; Average loss: 2.8225
Iteration: 2593; Percent complete: 64.8%; Average loss: 2.9701
Iteration: 2594; Percent complete: 64.8%; Average loss: 3.0926
Iteration: 2595; Percent complete: 64.9%; Average loss: 2.9422
Iteration: 2596; Percent complete: 64.9%; Average loss: 3.0347
Iteration: 2597; Percent complete: 64.9%; Average loss: 3.0552
Iteration: 2598; Percent complete: 65.0%; Average loss: 2.9342
Iteration: 2599; Percent complete: 65.0%; Average loss: 2.9643
Iteration: 2600; Percent complete: 65.0%; Average loss: 2.9978
Iteration: 2601; Percent complete: 65.0%; Average loss: 2.9693
Iteration: 2602; Percent complete: 65.0%; Average loss: 3.2252
Iteration: 2603; Percent complete: 65.1%; Average loss: 3.0352
Iteration: 2604; Percent complete: 65.1%; Average loss: 2.9877
Iteration: 2605; Percent complete: 65.1%; Average loss: 2.9323
Iteration: 2606; Percent complete: 65.1%; Average loss: 2.9469
Iteration: 2607; Percent complete: 65.2%; Average loss: 2.7574
Iteration: 2608; Percent complete: 65.2%; Average loss: 3.0883
Iteration: 2609; Percent complete: 65.2%; Average loss: 3.1155
Iteration: 2610; Percent complete: 65.2%; Average loss: 3.0649
Iteration: 2611; Percent complete: 65.3%; Average loss: 3.0018
Iteration: 2612; Percent complete: 65.3%; Average loss: 3.1841
Iteration: 2613; Percent complete: 65.3%; Average loss: 2.9810
Iteration: 2614; Percent complete: 65.3%; Average loss: 3.0224
Iteration: 2615; Percent complete: 65.4%; Average loss: 3.2351
Iteration: 2616; Percent complete: 65.4%; Average loss: 2.7673
Iteration: 2617; Percent complete: 65.4%; Average loss: 3.0494
Iteration: 2618; Percent complete: 65.5%; Average loss: 2.8780
Iteration: 2619; Percent complete: 65.5%; Average loss: 2.9536
Iteration: 2620; Percent complete: 65.5%; Average loss: 2.8164
Iteration: 2621; Percent complete: 65.5%; Average loss: 3.1733
Iteration: 2622; Percent complete: 65.5%; Average loss: 3.0005
Iteration: 2623; Percent complete: 65.6%; Average loss: 2.6649
Iteration: 2624; Percent complete: 65.6%; Average loss: 2.8761
Iteration: 2625; Percent complete: 65.6%; Average loss: 3.2471
Iteration: 2626; Percent complete: 65.6%; Average loss: 2.8190
Iteration: 2627; Percent complete: 65.7%; Average loss: 2.9581
Iteration: 2628; Percent complete: 65.7%; Average loss: 3.0161
Iteration: 2629; Percent complete: 65.7%; Average loss: 3.2684
Iteration: 2630; Percent complete: 65.8%; Average loss: 3.3107
Iteration: 2631; Percent complete: 65.8%; Average loss: 3.0210
Iteration: 2632; Percent complete: 65.8%; Average loss: 2.9288
Iteration: 2633; Percent complete: 65.8%; Average loss: 3.1511
Iteration: 2634; Percent complete: 65.8%; Average loss: 2.9170
Iteration: 2635; Percent complete: 65.9%; Average loss: 3.2501
Iteration: 2636; Percent complete: 65.9%; Average loss: 3.0515
Iteration: 2637; Percent complete: 65.9%; Average loss: 3.1142
Iteration: 2638; Percent complete: 66.0%; Average loss: 3.0626
Iteration: 2639; Percent complete: 66.0%; Average loss: 2.9838
Iteration: 2640; Percent complete: 66.0%; Average loss: 2.9518
Iteration: 2641; Percent complete: 66.0%; Average loss: 3.1271
Iteration: 2642; Percent complete: 66.0%; Average loss: 3.0067
Iteration: 2643; Percent complete: 66.1%; Average loss: 3.0490
Iteration: 2644; Percent complete: 66.1%; Average loss: 3.0389
Iteration: 2645; Percent complete: 66.1%; Average loss: 3.0729
Iteration: 2646; Percent complete: 66.1%; Average loss: 2.9344
Iteration: 2647; Percent complete: 66.2%; Average loss: 2.9009
Iteration: 2648; Percent complete: 66.2%; Average loss: 2.9723
Iteration: 2649; Percent complete: 66.2%; Average loss: 3.1050
Iteration: 2650; Percent complete: 66.2%; Average loss: 3.0571
Iteration: 2651; Percent complete: 66.3%; Average loss: 2.8668
Iteration: 2652; Percent complete: 66.3%; Average loss: 2.9548
Iteration: 2653; Percent complete: 66.3%; Average loss: 2.9722
Iteration: 2654; Percent complete: 66.3%; Average loss: 2.8832
Iteration: 2655; Percent complete: 66.4%; Average loss: 3.1768
Iteration: 2656; Percent complete: 66.4%; Average loss: 3.0233
Iteration: 2657; Percent complete: 66.4%; Average loss: 3.1687
Iteration: 2658; Percent complete: 66.5%; Average loss: 3.0417
Iteration: 2659; Percent complete: 66.5%; Average loss: 3.1343
Iteration: 2660; Percent complete: 66.5%; Average loss: 3.0237
Iteration: 2661; Percent complete: 66.5%; Average loss: 2.8906
Iteration: 2662; Percent complete: 66.5%; Average loss: 3.0473
Iteration: 2663; Percent complete: 66.6%; Average loss: 3.0533
Iteration: 2664; Percent complete: 66.6%; Average loss: 2.9063
Iteration: 2665; Percent complete: 66.6%; Average loss: 2.8994
Iteration: 2666; Percent complete: 66.6%; Average loss: 3.0611
Iteration: 2667; Percent complete: 66.7%; Average loss: 2.9517
Iteration: 2668; Percent complete: 66.7%; Average loss: 3.0570
Iteration: 2669; Percent complete: 66.7%; Average loss: 3.1459
Iteration: 2670; Percent complete: 66.8%; Average loss: 2.9101
Iteration: 2671; Percent complete: 66.8%; Average loss: 3.0858
Iteration: 2672; Percent complete: 66.8%; Average loss: 2.7744
Iteration: 2673; Percent complete: 66.8%; Average loss: 2.6155
Iteration: 2674; Percent complete: 66.8%; Average loss: 3.2035
Iteration: 2675; Percent complete: 66.9%; Average loss: 2.8818
Iteration: 2676; Percent complete: 66.9%; Average loss: 3.0628
Iteration: 2677; Percent complete: 66.9%; Average loss: 3.2500
Iteration: 2678; Percent complete: 67.0%; Average loss: 2.8774
Iteration: 2679; Percent complete: 67.0%; Average loss: 2.9722
Iteration: 2680; Percent complete: 67.0%; Average loss: 3.0053
Iteration: 2681; Percent complete: 67.0%; Average loss: 2.9738
Iteration: 2682; Percent complete: 67.0%; Average loss: 3.1108
Iteration: 2683; Percent complete: 67.1%; Average loss: 3.0110
Iteration: 2684; Percent complete: 67.1%; Average loss: 2.8704
Iteration: 2685; Percent complete: 67.1%; Average loss: 2.8683
Iteration: 2686; Percent complete: 67.2%; Average loss: 2.8904
Iteration: 2687; Percent complete: 67.2%; Average loss: 2.9122
Iteration: 2688; Percent complete: 67.2%; Average loss: 2.9720
Iteration: 2689; Percent complete: 67.2%; Average loss: 3.1178
Iteration: 2690; Percent complete: 67.2%; Average loss: 3.0845
Iteration: 2691; Percent complete: 67.3%; Average loss: 2.9864
Iteration: 2692; Percent complete: 67.3%; Average loss: 2.8723
Iteration: 2693; Percent complete: 67.3%; Average loss: 3.0085
Iteration: 2694; Percent complete: 67.3%; Average loss: 2.9094
Iteration: 2695; Percent complete: 67.4%; Average loss: 2.9530
Iteration: 2696; Percent complete: 67.4%; Average loss: 2.9444
Iteration: 2697; Percent complete: 67.4%; Average loss: 2.8381
Iteration: 2698; Percent complete: 67.5%; Average loss: 2.7954
Iteration: 2699; Percent complete: 67.5%; Average loss: 2.8808
Iteration: 2700; Percent complete: 67.5%; Average loss: 3.1020
Iteration: 2701; Percent complete: 67.5%; Average loss: 3.0904
Iteration: 2702; Percent complete: 67.5%; Average loss: 2.7689
Iteration: 2703; Percent complete: 67.6%; Average loss: 2.8128
Iteration: 2704; Percent complete: 67.6%; Average loss: 3.2111
Iteration: 2705; Percent complete: 67.6%; Average loss: 2.9842
Iteration: 2706; Percent complete: 67.7%; Average loss: 2.8912
Iteration: 2707; Percent complete: 67.7%; Average loss: 2.9886
Iteration: 2708; Percent complete: 67.7%; Average loss: 3.3287
Iteration: 2709; Percent complete: 67.7%; Average loss: 2.9146
Iteration: 2710; Percent complete: 67.8%; Average loss: 2.9380
Iteration: 2711; Percent complete: 67.8%; Average loss: 3.1938
Iteration: 2712; Percent complete: 67.8%; Average loss: 2.8940
Iteration: 2713; Percent complete: 67.8%; Average loss: 2.9836
Iteration: 2714; Percent complete: 67.8%; Average loss: 2.7057
Iteration: 2715; Percent complete: 67.9%; Average loss: 2.9255
Iteration: 2716; Percent complete: 67.9%; Average loss: 3.0103
Iteration: 2717; Percent complete: 67.9%; Average loss: 2.9567
Iteration: 2718; Percent complete: 68.0%; Average loss: 2.8760
Iteration: 2719; Percent complete: 68.0%; Average loss: 2.9047
Iteration: 2720; Percent complete: 68.0%; Average loss: 3.1092
Iteration: 2721; Percent complete: 68.0%; Average loss: 2.7312
Iteration: 2722; Percent complete: 68.0%; Average loss: 2.7852
Iteration: 2723; Percent complete: 68.1%; Average loss: 3.1487
Iteration: 2724; Percent complete: 68.1%; Average loss: 2.7801
Iteration: 2725; Percent complete: 68.1%; Average loss: 2.9709
Iteration: 2726; Percent complete: 68.2%; Average loss: 3.0618
Iteration: 2727; Percent complete: 68.2%; Average loss: 2.9820
Iteration: 2728; Percent complete: 68.2%; Average loss: 2.9472
Iteration: 2729; Percent complete: 68.2%; Average loss: 2.6411
Iteration: 2730; Percent complete: 68.2%; Average loss: 3.0458
Iteration: 2731; Percent complete: 68.3%; Average loss: 2.7600
Iteration: 2732; Percent complete: 68.3%; Average loss: 2.8667
Iteration: 2733; Percent complete: 68.3%; Average loss: 2.9776
Iteration: 2734; Percent complete: 68.3%; Average loss: 2.9455
Iteration: 2735; Percent complete: 68.4%; Average loss: 2.9274
Iteration: 2736; Percent complete: 68.4%; Average loss: 2.8199
Iteration: 2737; Percent complete: 68.4%; Average loss: 3.0890
Iteration: 2738; Percent complete: 68.5%; Average loss: 2.8525
Iteration: 2739; Percent complete: 68.5%; Average loss: 3.1658
Iteration: 2740; Percent complete: 68.5%; Average loss: 2.9149
Iteration: 2741; Percent complete: 68.5%; Average loss: 2.8385
Iteration: 2742; Percent complete: 68.5%; Average loss: 2.9738
Iteration: 2743; Percent complete: 68.6%; Average loss: 2.9675
Iteration: 2744; Percent complete: 68.6%; Average loss: 2.8366
Iteration: 2745; Percent complete: 68.6%; Average loss: 3.0248
Iteration: 2746; Percent complete: 68.7%; Average loss: 2.9565
Iteration: 2747; Percent complete: 68.7%; Average loss: 2.7112
Iteration: 2748; Percent complete: 68.7%; Average loss: 2.9862
Iteration: 2749; Percent complete: 68.7%; Average loss: 2.8125
Iteration: 2750; Percent complete: 68.8%; Average loss: 2.8237
Iteration: 2751; Percent complete: 68.8%; Average loss: 3.0470
Iteration: 2752; Percent complete: 68.8%; Average loss: 3.0035
Iteration: 2753; Percent complete: 68.8%; Average loss: 2.9950
Iteration: 2754; Percent complete: 68.8%; Average loss: 3.1112
Iteration: 2755; Percent complete: 68.9%; Average loss: 2.9374
Iteration: 2756; Percent complete: 68.9%; Average loss: 3.0840
Iteration: 2757; Percent complete: 68.9%; Average loss: 2.8090
Iteration: 2758; Percent complete: 69.0%; Average loss: 3.1298
Iteration: 2759; Percent complete: 69.0%; Average loss: 2.9901
Iteration: 2760; Percent complete: 69.0%; Average loss: 2.9684
Iteration: 2761; Percent complete: 69.0%; Average loss: 2.9467
Iteration: 2762; Percent complete: 69.0%; Average loss: 3.0737
Iteration: 2763; Percent complete: 69.1%; Average loss: 2.9684
Iteration: 2764; Percent complete: 69.1%; Average loss: 2.9597
Iteration: 2765; Percent complete: 69.1%; Average loss: 2.8289
Iteration: 2766; Percent complete: 69.2%; Average loss: 2.7168
Iteration: 2767; Percent complete: 69.2%; Average loss: 2.9380
Iteration: 2768; Percent complete: 69.2%; Average loss: 2.7418
Iteration: 2769; Percent complete: 69.2%; Average loss: 2.8882
Iteration: 2770; Percent complete: 69.2%; Average loss: 2.8777
Iteration: 2771; Percent complete: 69.3%; Average loss: 3.0406
Iteration: 2772; Percent complete: 69.3%; Average loss: 2.9849
Iteration: 2773; Percent complete: 69.3%; Average loss: 3.1402
Iteration: 2774; Percent complete: 69.3%; Average loss: 3.0718
Iteration: 2775; Percent complete: 69.4%; Average loss: 3.1531
Iteration: 2776; Percent complete: 69.4%; Average loss: 2.9245
Iteration: 2777; Percent complete: 69.4%; Average loss: 3.0400
Iteration: 2778; Percent complete: 69.5%; Average loss: 3.1181
Iteration: 2779; Percent complete: 69.5%; Average loss: 2.9887
Iteration: 2780; Percent complete: 69.5%; Average loss: 2.9169
Iteration: 2781; Percent complete: 69.5%; Average loss: 2.8779
Iteration: 2782; Percent complete: 69.5%; Average loss: 2.9222
Iteration: 2783; Percent complete: 69.6%; Average loss: 2.9562
Iteration: 2784; Percent complete: 69.6%; Average loss: 3.0585
Iteration: 2785; Percent complete: 69.6%; Average loss: 2.9296
Iteration: 2786; Percent complete: 69.7%; Average loss: 3.2789
Iteration: 2787; Percent complete: 69.7%; Average loss: 2.9222
Iteration: 2788; Percent complete: 69.7%; Average loss: 2.6774
Iteration: 2789; Percent complete: 69.7%; Average loss: 2.8645
Iteration: 2790; Percent complete: 69.8%; Average loss: 2.8281
Iteration: 2791; Percent complete: 69.8%; Average loss: 3.2805
Iteration: 2792; Percent complete: 69.8%; Average loss: 2.9456
Iteration: 2793; Percent complete: 69.8%; Average loss: 2.9166
Iteration: 2794; Percent complete: 69.8%; Average loss: 3.0444
Iteration: 2795; Percent complete: 69.9%; Average loss: 3.0245
Iteration: 2796; Percent complete: 69.9%; Average loss: 2.9060
Iteration: 2797; Percent complete: 69.9%; Average loss: 3.3147
Iteration: 2798; Percent complete: 70.0%; Average loss: 2.8014
Iteration: 2799; Percent complete: 70.0%; Average loss: 2.9676
Iteration: 2800; Percent complete: 70.0%; Average loss: 2.6976
Iteration: 2801; Percent complete: 70.0%; Average loss: 3.0034
Iteration: 2802; Percent complete: 70.0%; Average loss: 2.8883
Iteration: 2803; Percent complete: 70.1%; Average loss: 2.8688
Iteration: 2804; Percent complete: 70.1%; Average loss: 3.1687
Iteration: 2805; Percent complete: 70.1%; Average loss: 3.0178
Iteration: 2806; Percent complete: 70.2%; Average loss: 2.8227
Iteration: 2807; Percent complete: 70.2%; Average loss: 2.9587
Iteration: 2808; Percent complete: 70.2%; Average loss: 2.9437
Iteration: 2809; Percent complete: 70.2%; Average loss: 2.7922
Iteration: 2810; Percent complete: 70.2%; Average loss: 2.8750
Iteration: 2811; Percent complete: 70.3%; Average loss: 2.8557
Iteration: 2812; Percent complete: 70.3%; Average loss: 2.7271
Iteration: 2813; Percent complete: 70.3%; Average loss: 3.0849
Iteration: 2814; Percent complete: 70.3%; Average loss: 2.7036
Iteration: 2815; Percent complete: 70.4%; Average loss: 2.9678
Iteration: 2816; Percent complete: 70.4%; Average loss: 3.0545
Iteration: 2817; Percent complete: 70.4%; Average loss: 2.8535
Iteration: 2818; Percent complete: 70.5%; Average loss: 2.9874
Iteration: 2819; Percent complete: 70.5%; Average loss: 3.0658
Iteration: 2820; Percent complete: 70.5%; Average loss: 2.8187
Iteration: 2821; Percent complete: 70.5%; Average loss: 2.6173
Iteration: 2822; Percent complete: 70.5%; Average loss: 2.7550
Iteration: 2823; Percent complete: 70.6%; Average loss: 3.0473
Iteration: 2824; Percent complete: 70.6%; Average loss: 2.7725
Iteration: 2825; Percent complete: 70.6%; Average loss: 2.9399
Iteration: 2826; Percent complete: 70.7%; Average loss: 2.9249
Iteration: 2827; Percent complete: 70.7%; Average loss: 3.0302
Iteration: 2828; Percent complete: 70.7%; Average loss: 3.0538
Iteration: 2829; Percent complete: 70.7%; Average loss: 2.9765
Iteration: 2830; Percent complete: 70.8%; Average loss: 2.7801
Iteration: 2831; Percent complete: 70.8%; Average loss: 3.0060
Iteration: 2832; Percent complete: 70.8%; Average loss: 3.1318
Iteration: 2833; Percent complete: 70.8%; Average loss: 3.0651
Iteration: 2834; Percent complete: 70.9%; Average loss: 2.8576
Iteration: 2835; Percent complete: 70.9%; Average loss: 2.7553
Iteration: 2836; Percent complete: 70.9%; Average loss: 2.9088
Iteration: 2837; Percent complete: 70.9%; Average loss: 2.6560
Iteration: 2838; Percent complete: 71.0%; Average loss: 2.7468
Iteration: 2839; Percent complete: 71.0%; Average loss: 2.5710
Iteration: 2840; Percent complete: 71.0%; Average loss: 2.8881
Iteration: 2841; Percent complete: 71.0%; Average loss: 2.5676
Iteration: 2842; Percent complete: 71.0%; Average loss: 2.9333
Iteration: 2843; Percent complete: 71.1%; Average loss: 2.7670
Iteration: 2844; Percent complete: 71.1%; Average loss: 2.9497
Iteration: 2845; Percent complete: 71.1%; Average loss: 2.8754
Iteration: 2846; Percent complete: 71.2%; Average loss: 2.7976
Iteration: 2847; Percent complete: 71.2%; Average loss: 2.7636
Iteration: 2848; Percent complete: 71.2%; Average loss: 2.8556
Iteration: 2849; Percent complete: 71.2%; Average loss: 3.1143
Iteration: 2850; Percent complete: 71.2%; Average loss: 3.0316
Iteration: 2851; Percent complete: 71.3%; Average loss: 3.0404
Iteration: 2852; Percent complete: 71.3%; Average loss: 2.9800
Iteration: 2853; Percent complete: 71.3%; Average loss: 2.8132
Iteration: 2854; Percent complete: 71.4%; Average loss: 2.7723
Iteration: 2855; Percent complete: 71.4%; Average loss: 2.9254
Iteration: 2856; Percent complete: 71.4%; Average loss: 2.6468
Iteration: 2857; Percent complete: 71.4%; Average loss: 2.7376
Iteration: 2858; Percent complete: 71.5%; Average loss: 2.9094
Iteration: 2859; Percent complete: 71.5%; Average loss: 2.7635
Iteration: 2860; Percent complete: 71.5%; Average loss: 3.0673
Iteration: 2861; Percent complete: 71.5%; Average loss: 2.9742
Iteration: 2862; Percent complete: 71.5%; Average loss: 2.9803
Iteration: 2863; Percent complete: 71.6%; Average loss: 2.9614
Iteration: 2864; Percent complete: 71.6%; Average loss: 2.9568
Iteration: 2865; Percent complete: 71.6%; Average loss: 2.8518
Iteration: 2866; Percent complete: 71.7%; Average loss: 3.1368
Iteration: 2867; Percent complete: 71.7%; Average loss: 2.7761
Iteration: 2868; Percent complete: 71.7%; Average loss: 2.7534
Iteration: 2869; Percent complete: 71.7%; Average loss: 2.9827
Iteration: 2870; Percent complete: 71.8%; Average loss: 2.9082
Iteration: 2871; Percent complete: 71.8%; Average loss: 2.9055
Iteration: 2872; Percent complete: 71.8%; Average loss: 2.7378
Iteration: 2873; Percent complete: 71.8%; Average loss: 2.8103
Iteration: 2874; Percent complete: 71.9%; Average loss: 3.0206
Iteration: 2875; Percent complete: 71.9%; Average loss: 3.2515
Iteration: 2876; Percent complete: 71.9%; Average loss: 2.7706
Iteration: 2877; Percent complete: 71.9%; Average loss: 2.6890
Iteration: 2878; Percent complete: 72.0%; Average loss: 2.8035
Iteration: 2879; Percent complete: 72.0%; Average loss: 2.9565
Iteration: 2880; Percent complete: 72.0%; Average loss: 2.8803
Iteration: 2881; Percent complete: 72.0%; Average loss: 2.7489
Iteration: 2882; Percent complete: 72.0%; Average loss: 2.6326
Iteration: 2883; Percent complete: 72.1%; Average loss: 2.7035
Iteration: 2884; Percent complete: 72.1%; Average loss: 2.8486
Iteration: 2885; Percent complete: 72.1%; Average loss: 2.9400
Iteration: 2886; Percent complete: 72.2%; Average loss: 3.1438
Iteration: 2887; Percent complete: 72.2%; Average loss: 2.7724
Iteration: 2888; Percent complete: 72.2%; Average loss: 2.8803
Iteration: 2889; Percent complete: 72.2%; Average loss: 3.2009
Iteration: 2890; Percent complete: 72.2%; Average loss: 2.7684
Iteration: 2891; Percent complete: 72.3%; Average loss: 3.2246
Iteration: 2892; Percent complete: 72.3%; Average loss: 2.9228
Iteration: 2893; Percent complete: 72.3%; Average loss: 2.7688
Iteration: 2894; Percent complete: 72.4%; Average loss: 3.0342
Iteration: 2895; Percent complete: 72.4%; Average loss: 3.0984
Iteration: 2896; Percent complete: 72.4%; Average loss: 2.7456
Iteration: 2897; Percent complete: 72.4%; Average loss: 2.8128
Iteration: 2898; Percent complete: 72.5%; Average loss: 2.7381
Iteration: 2899; Percent complete: 72.5%; Average loss: 3.2141
Iteration: 2900; Percent complete: 72.5%; Average loss: 2.6974
Iteration: 2901; Percent complete: 72.5%; Average loss: 3.2045
Iteration: 2902; Percent complete: 72.5%; Average loss: 2.8978
Iteration: 2903; Percent complete: 72.6%; Average loss: 2.7016
Iteration: 2904; Percent complete: 72.6%; Average loss: 2.8775
Iteration: 2905; Percent complete: 72.6%; Average loss: 2.9560
Iteration: 2906; Percent complete: 72.7%; Average loss: 2.9312
Iteration: 2907; Percent complete: 72.7%; Average loss: 2.8211
Iteration: 2908; Percent complete: 72.7%; Average loss: 2.9401
Iteration: 2909; Percent complete: 72.7%; Average loss: 3.2993
Iteration: 2910; Percent complete: 72.8%; Average loss: 3.1776
Iteration: 2911; Percent complete: 72.8%; Average loss: 2.9322
Iteration: 2912; Percent complete: 72.8%; Average loss: 2.7870
Iteration: 2913; Percent complete: 72.8%; Average loss: 2.8152
Iteration: 2914; Percent complete: 72.9%; Average loss: 2.8374
Iteration: 2915; Percent complete: 72.9%; Average loss: 2.7000
Iteration: 2916; Percent complete: 72.9%; Average loss: 2.9289
Iteration: 2917; Percent complete: 72.9%; Average loss: 2.6556
Iteration: 2918; Percent complete: 73.0%; Average loss: 2.7131
Iteration: 2919; Percent complete: 73.0%; Average loss: 2.9849
Iteration: 2920; Percent complete: 73.0%; Average loss: 2.9162
Iteration: 2921; Percent complete: 73.0%; Average loss: 2.8818
Iteration: 2922; Percent complete: 73.0%; Average loss: 2.9776
Iteration: 2923; Percent complete: 73.1%; Average loss: 2.9765
Iteration: 2924; Percent complete: 73.1%; Average loss: 2.8742
Iteration: 2925; Percent complete: 73.1%; Average loss: 2.6370
Iteration: 2926; Percent complete: 73.2%; Average loss: 2.8123
Iteration: 2927; Percent complete: 73.2%; Average loss: 2.7309
Iteration: 2928; Percent complete: 73.2%; Average loss: 3.1246
Iteration: 2929; Percent complete: 73.2%; Average loss: 2.9417
Iteration: 2930; Percent complete: 73.2%; Average loss: 2.6263
Iteration: 2931; Percent complete: 73.3%; Average loss: 2.9436
Iteration: 2932; Percent complete: 73.3%; Average loss: 2.8075
Iteration: 2933; Percent complete: 73.3%; Average loss: 2.9640
Iteration: 2934; Percent complete: 73.4%; Average loss: 2.6118
Iteration: 2935; Percent complete: 73.4%; Average loss: 2.8350
Iteration: 2936; Percent complete: 73.4%; Average loss: 2.9050
Iteration: 2937; Percent complete: 73.4%; Average loss: 3.2951
Iteration: 2938; Percent complete: 73.5%; Average loss: 2.9660
Iteration: 2939; Percent complete: 73.5%; Average loss: 2.7640
Iteration: 2940; Percent complete: 73.5%; Average loss: 2.9731
Iteration: 2941; Percent complete: 73.5%; Average loss: 2.8948
Iteration: 2942; Percent complete: 73.6%; Average loss: 2.7833
Iteration: 2943; Percent complete: 73.6%; Average loss: 2.9825
Iteration: 2944; Percent complete: 73.6%; Average loss: 2.9497
Iteration: 2945; Percent complete: 73.6%; Average loss: 2.8787
Iteration: 2946; Percent complete: 73.7%; Average loss: 3.0099
Iteration: 2947; Percent complete: 73.7%; Average loss: 2.8666
Iteration: 2948; Percent complete: 73.7%; Average loss: 2.7745
Iteration: 2949; Percent complete: 73.7%; Average loss: 3.0214
Iteration: 2950; Percent complete: 73.8%; Average loss: 2.8211
Iteration: 2951; Percent complete: 73.8%; Average loss: 2.7760
Iteration: 2952; Percent complete: 73.8%; Average loss: 2.8491
Iteration: 2953; Percent complete: 73.8%; Average loss: 2.6295
Iteration: 2954; Percent complete: 73.9%; Average loss: 2.7983
Iteration: 2955; Percent complete: 73.9%; Average loss: 2.6004
Iteration: 2956; Percent complete: 73.9%; Average loss: 2.8007
Iteration: 2957; Percent complete: 73.9%; Average loss: 2.6231
Iteration: 2958; Percent complete: 74.0%; Average loss: 3.0967
Iteration: 2959; Percent complete: 74.0%; Average loss: 2.6976
Iteration: 2960; Percent complete: 74.0%; Average loss: 2.9035
Iteration: 2961; Percent complete: 74.0%; Average loss: 2.7975
Iteration: 2962; Percent complete: 74.1%; Average loss: 2.7284
Iteration: 2963; Percent complete: 74.1%; Average loss: 2.9908
Iteration: 2964; Percent complete: 74.1%; Average loss: 2.9319
Iteration: 2965; Percent complete: 74.1%; Average loss: 3.0000
Iteration: 2966; Percent complete: 74.2%; Average loss: 3.0771
Iteration: 2967; Percent complete: 74.2%; Average loss: 2.7732
Iteration: 2968; Percent complete: 74.2%; Average loss: 2.8123
Iteration: 2969; Percent complete: 74.2%; Average loss: 2.9557
Iteration: 2970; Percent complete: 74.2%; Average loss: 2.8569
Iteration: 2971; Percent complete: 74.3%; Average loss: 3.0067
Iteration: 2972; Percent complete: 74.3%; Average loss: 3.0110
Iteration: 2973; Percent complete: 74.3%; Average loss: 2.9510
Iteration: 2974; Percent complete: 74.4%; Average loss: 2.7795
Iteration: 2975; Percent complete: 74.4%; Average loss: 2.8455
Iteration: 2976; Percent complete: 74.4%; Average loss: 2.5821
Iteration: 2977; Percent complete: 74.4%; Average loss: 2.9718
Iteration: 2978; Percent complete: 74.5%; Average loss: 2.7949
Iteration: 2979; Percent complete: 74.5%; Average loss: 2.6559
Iteration: 2980; Percent complete: 74.5%; Average loss: 2.9543
Iteration: 2981; Percent complete: 74.5%; Average loss: 2.8579
Iteration: 2982; Percent complete: 74.6%; Average loss: 2.9636
Iteration: 2983; Percent complete: 74.6%; Average loss: 2.6860
Iteration: 2984; Percent complete: 74.6%; Average loss: 2.8188
Iteration: 2985; Percent complete: 74.6%; Average loss: 2.9248
Iteration: 2986; Percent complete: 74.7%; Average loss: 2.8349
Iteration: 2987; Percent complete: 74.7%; Average loss: 2.6990
Iteration: 2988; Percent complete: 74.7%; Average loss: 2.6005
Iteration: 2989; Percent complete: 74.7%; Average loss: 2.7985
Iteration: 2990; Percent complete: 74.8%; Average loss: 2.8673
Iteration: 2991; Percent complete: 74.8%; Average loss: 2.8840
Iteration: 2992; Percent complete: 74.8%; Average loss: 2.6953
Iteration: 2993; Percent complete: 74.8%; Average loss: 2.8289
Iteration: 2994; Percent complete: 74.9%; Average loss: 2.9132
Iteration: 2995; Percent complete: 74.9%; Average loss: 2.8303
Iteration: 2996; Percent complete: 74.9%; Average loss: 2.9072
Iteration: 2997; Percent complete: 74.9%; Average loss: 2.9230
Iteration: 2998; Percent complete: 75.0%; Average loss: 2.7891
Iteration: 2999; Percent complete: 75.0%; Average loss: 2.8957
Iteration: 3000; Percent complete: 75.0%; Average loss: 2.8737
Iteration: 3001; Percent complete: 75.0%; Average loss: 2.8934
Iteration: 3002; Percent complete: 75.0%; Average loss: 2.8810
Iteration: 3003; Percent complete: 75.1%; Average loss: 2.8016
Iteration: 3004; Percent complete: 75.1%; Average loss: 2.6074
Iteration: 3005; Percent complete: 75.1%; Average loss: 2.7419
Iteration: 3006; Percent complete: 75.1%; Average loss: 2.9264
Iteration: 3007; Percent complete: 75.2%; Average loss: 2.8015
Iteration: 3008; Percent complete: 75.2%; Average loss: 2.9257
Iteration: 3009; Percent complete: 75.2%; Average loss: 2.7780
Iteration: 3010; Percent complete: 75.2%; Average loss: 2.9801
Iteration: 3011; Percent complete: 75.3%; Average loss: 2.9700
Iteration: 3012; Percent complete: 75.3%; Average loss: 2.9900
Iteration: 3013; Percent complete: 75.3%; Average loss: 3.0331
Iteration: 3014; Percent complete: 75.3%; Average loss: 2.7929
Iteration: 3015; Percent complete: 75.4%; Average loss: 2.8229
Iteration: 3016; Percent complete: 75.4%; Average loss: 3.0506
Iteration: 3017; Percent complete: 75.4%; Average loss: 2.6807
Iteration: 3018; Percent complete: 75.4%; Average loss: 3.0159
Iteration: 3019; Percent complete: 75.5%; Average loss: 2.9724
Iteration: 3020; Percent complete: 75.5%; Average loss: 2.9481
Iteration: 3021; Percent complete: 75.5%; Average loss: 2.9513
Iteration: 3022; Percent complete: 75.5%; Average loss: 2.7272
Iteration: 3023; Percent complete: 75.6%; Average loss: 2.8780
Iteration: 3024; Percent complete: 75.6%; Average loss: 2.9713
Iteration: 3025; Percent complete: 75.6%; Average loss: 2.8262
Iteration: 3026; Percent complete: 75.6%; Average loss: 2.9883
Iteration: 3027; Percent complete: 75.7%; Average loss: 2.6539
Iteration: 3028; Percent complete: 75.7%; Average loss: 2.8720
Iteration: 3029; Percent complete: 75.7%; Average loss: 2.8764
Iteration: 3030; Percent complete: 75.8%; Average loss: 2.9450
Iteration: 3031; Percent complete: 75.8%; Average loss: 3.1883
Iteration: 3032; Percent complete: 75.8%; Average loss: 3.0377
Iteration: 3033; Percent complete: 75.8%; Average loss: 2.6199
Iteration: 3034; Percent complete: 75.8%; Average loss: 2.8545
Iteration: 3035; Percent complete: 75.9%; Average loss: 2.9269
Iteration: 3036; Percent complete: 75.9%; Average loss: 2.5840
Iteration: 3037; Percent complete: 75.9%; Average loss: 2.5942
Iteration: 3038; Percent complete: 75.9%; Average loss: 2.9527
Iteration: 3039; Percent complete: 76.0%; Average loss: 2.7986
Iteration: 3040; Percent complete: 76.0%; Average loss: 2.6576
Iteration: 3041; Percent complete: 76.0%; Average loss: 3.0597
Iteration: 3042; Percent complete: 76.0%; Average loss: 2.8059
Iteration: 3043; Percent complete: 76.1%; Average loss: 2.8118
Iteration: 3044; Percent complete: 76.1%; Average loss: 2.8716
Iteration: 3045; Percent complete: 76.1%; Average loss: 2.7269
Iteration: 3046; Percent complete: 76.1%; Average loss: 2.6697
Iteration: 3047; Percent complete: 76.2%; Average loss: 3.0192
Iteration: 3048; Percent complete: 76.2%; Average loss: 2.6631
Iteration: 3049; Percent complete: 76.2%; Average loss: 2.9076
Iteration: 3050; Percent complete: 76.2%; Average loss: 2.9328
Iteration: 3051; Percent complete: 76.3%; Average loss: 2.7759
Iteration: 3052; Percent complete: 76.3%; Average loss: 2.9889
Iteration: 3053; Percent complete: 76.3%; Average loss: 2.7445
Iteration: 3054; Percent complete: 76.3%; Average loss: 2.7664
Iteration: 3055; Percent complete: 76.4%; Average loss: 2.8200
Iteration: 3056; Percent complete: 76.4%; Average loss: 3.0524
Iteration: 3057; Percent complete: 76.4%; Average loss: 2.8026
Iteration: 3058; Percent complete: 76.4%; Average loss: 2.9343
Iteration: 3059; Percent complete: 76.5%; Average loss: 2.7815
Iteration: 3060; Percent complete: 76.5%; Average loss: 2.7296
Iteration: 3061; Percent complete: 76.5%; Average loss: 3.0104
Iteration: 3062; Percent complete: 76.5%; Average loss: 2.8482
Iteration: 3063; Percent complete: 76.6%; Average loss: 2.9636
Iteration: 3064; Percent complete: 76.6%; Average loss: 3.0932
Iteration: 3065; Percent complete: 76.6%; Average loss: 2.8751
Iteration: 3066; Percent complete: 76.6%; Average loss: 2.9809
Iteration: 3067; Percent complete: 76.7%; Average loss: 2.8049
Iteration: 3068; Percent complete: 76.7%; Average loss: 2.9057
Iteration: 3069; Percent complete: 76.7%; Average loss: 2.7936
Iteration: 3070; Percent complete: 76.8%; Average loss: 3.0530
Iteration: 3071; Percent complete: 76.8%; Average loss: 2.8860
Iteration: 3072; Percent complete: 76.8%; Average loss: 2.7040
Iteration: 3073; Percent complete: 76.8%; Average loss: 2.8605
Iteration: 3074; Percent complete: 76.8%; Average loss: 3.0214
Iteration: 3075; Percent complete: 76.9%; Average loss: 2.7845
Iteration: 3076; Percent complete: 76.9%; Average loss: 2.7235
Iteration: 3077; Percent complete: 76.9%; Average loss: 2.8897
Iteration: 3078; Percent complete: 77.0%; Average loss: 2.8738
Iteration: 3079; Percent complete: 77.0%; Average loss: 2.7909
Iteration: 3080; Percent complete: 77.0%; Average loss: 2.6971
Iteration: 3081; Percent complete: 77.0%; Average loss: 2.8084
Iteration: 3082; Percent complete: 77.0%; Average loss: 2.5646
Iteration: 3083; Percent complete: 77.1%; Average loss: 2.9479
Iteration: 3084; Percent complete: 77.1%; Average loss: 2.7448
Iteration: 3085; Percent complete: 77.1%; Average loss: 3.0027
Iteration: 3086; Percent complete: 77.1%; Average loss: 2.8079
Iteration: 3087; Percent complete: 77.2%; Average loss: 2.6177
Iteration: 3088; Percent complete: 77.2%; Average loss: 2.7676
Iteration: 3089; Percent complete: 77.2%; Average loss: 2.8454
Iteration: 3090; Percent complete: 77.2%; Average loss: 3.1089
Iteration: 3091; Percent complete: 77.3%; Average loss: 2.9546
Iteration: 3092; Percent complete: 77.3%; Average loss: 2.8640
Iteration: 3093; Percent complete: 77.3%; Average loss: 2.9625
Iteration: 3094; Percent complete: 77.3%; Average loss: 2.9898
Iteration: 3095; Percent complete: 77.4%; Average loss: 3.2034
Iteration: 3096; Percent complete: 77.4%; Average loss: 2.6945
Iteration: 3097; Percent complete: 77.4%; Average loss: 2.8235
Iteration: 3098; Percent complete: 77.5%; Average loss: 2.8468
Iteration: 3099; Percent complete: 77.5%; Average loss: 2.8602
Iteration: 3100; Percent complete: 77.5%; Average loss: 2.7432
Iteration: 3101; Percent complete: 77.5%; Average loss: 3.0285
Iteration: 3102; Percent complete: 77.5%; Average loss: 3.0102
Iteration: 3103; Percent complete: 77.6%; Average loss: 2.8848
Iteration: 3104; Percent complete: 77.6%; Average loss: 2.9388
Iteration: 3105; Percent complete: 77.6%; Average loss: 3.0118
Iteration: 3106; Percent complete: 77.6%; Average loss: 2.8944
Iteration: 3107; Percent complete: 77.7%; Average loss: 2.9613
Iteration: 3108; Percent complete: 77.7%; Average loss: 2.7550
Iteration: 3109; Percent complete: 77.7%; Average loss: 2.9466
Iteration: 3110; Percent complete: 77.8%; Average loss: 2.9728
Iteration: 3111; Percent complete: 77.8%; Average loss: 2.9667
Iteration: 3112; Percent complete: 77.8%; Average loss: 3.0147
Iteration: 3113; Percent complete: 77.8%; Average loss: 2.7174
Iteration: 3114; Percent complete: 77.8%; Average loss: 2.7620
Iteration: 3115; Percent complete: 77.9%; Average loss: 2.8064
Iteration: 3116; Percent complete: 77.9%; Average loss: 2.8897
Iteration: 3117; Percent complete: 77.9%; Average loss: 2.7415
Iteration: 3118; Percent complete: 78.0%; Average loss: 2.6005
Iteration: 3119; Percent complete: 78.0%; Average loss: 2.7443
Iteration: 3120; Percent complete: 78.0%; Average loss: 2.9433
Iteration: 3121; Percent complete: 78.0%; Average loss: 2.8083
Iteration: 3122; Percent complete: 78.0%; Average loss: 2.7998
Iteration: 3123; Percent complete: 78.1%; Average loss: 2.6117
Iteration: 3124; Percent complete: 78.1%; Average loss: 2.9413
Iteration: 3125; Percent complete: 78.1%; Average loss: 2.8819
Iteration: 3126; Percent complete: 78.1%; Average loss: 2.9276
Iteration: 3127; Percent complete: 78.2%; Average loss: 2.7037
Iteration: 3128; Percent complete: 78.2%; Average loss: 2.7136
Iteration: 3129; Percent complete: 78.2%; Average loss: 2.7105
Iteration: 3130; Percent complete: 78.2%; Average loss: 2.9309
Iteration: 3131; Percent complete: 78.3%; Average loss: 2.9661
Iteration: 3132; Percent complete: 78.3%; Average loss: 2.9727
Iteration: 3133; Percent complete: 78.3%; Average loss: 2.9963
Iteration: 3134; Percent complete: 78.3%; Average loss: 3.0398
Iteration: 3135; Percent complete: 78.4%; Average loss: 2.9339
Iteration: 3136; Percent complete: 78.4%; Average loss: 2.6943
Iteration: 3137; Percent complete: 78.4%; Average loss: 2.7315
Iteration: 3138; Percent complete: 78.5%; Average loss: 2.7150
Iteration: 3139; Percent complete: 78.5%; Average loss: 2.7296
Iteration: 3140; Percent complete: 78.5%; Average loss: 2.7252
Iteration: 3141; Percent complete: 78.5%; Average loss: 2.5710
Iteration: 3142; Percent complete: 78.5%; Average loss: 2.6085
Iteration: 3143; Percent complete: 78.6%; Average loss: 2.7828
Iteration: 3144; Percent complete: 78.6%; Average loss: 2.9356
Iteration: 3145; Percent complete: 78.6%; Average loss: 3.0146
Iteration: 3146; Percent complete: 78.6%; Average loss: 3.1151
Iteration: 3147; Percent complete: 78.7%; Average loss: 2.9539
Iteration: 3148; Percent complete: 78.7%; Average loss: 3.1828
Iteration: 3149; Percent complete: 78.7%; Average loss: 2.8574
Iteration: 3150; Percent complete: 78.8%; Average loss: 2.7243
Iteration: 3151; Percent complete: 78.8%; Average loss: 2.6467
Iteration: 3152; Percent complete: 78.8%; Average loss: 2.9655
Iteration: 3153; Percent complete: 78.8%; Average loss: 2.8363
Iteration: 3154; Percent complete: 78.8%; Average loss: 3.0328
Iteration: 3155; Percent complete: 78.9%; Average loss: 2.6598
Iteration: 3156; Percent complete: 78.9%; Average loss: 3.0374
Iteration: 3157; Percent complete: 78.9%; Average loss: 2.7707
Iteration: 3158; Percent complete: 79.0%; Average loss: 2.9893
Iteration: 3159; Percent complete: 79.0%; Average loss: 3.0518
Iteration: 3160; Percent complete: 79.0%; Average loss: 2.7990
Iteration: 3161; Percent complete: 79.0%; Average loss: 2.7292
Iteration: 3162; Percent complete: 79.0%; Average loss: 2.8481
Iteration: 3163; Percent complete: 79.1%; Average loss: 2.9587
Iteration: 3164; Percent complete: 79.1%; Average loss: 3.1039
Iteration: 3165; Percent complete: 79.1%; Average loss: 2.9562
Iteration: 3166; Percent complete: 79.1%; Average loss: 2.8616
Iteration: 3167; Percent complete: 79.2%; Average loss: 2.8051
Iteration: 3168; Percent complete: 79.2%; Average loss: 2.9125
Iteration: 3169; Percent complete: 79.2%; Average loss: 2.5866
Iteration: 3170; Percent complete: 79.2%; Average loss: 2.7619
Iteration: 3171; Percent complete: 79.3%; Average loss: 2.7401
Iteration: 3172; Percent complete: 79.3%; Average loss: 2.7738
Iteration: 3173; Percent complete: 79.3%; Average loss: 2.8589
Iteration: 3174; Percent complete: 79.3%; Average loss: 2.8823
Iteration: 3175; Percent complete: 79.4%; Average loss: 2.7904
Iteration: 3176; Percent complete: 79.4%; Average loss: 3.0113
Iteration: 3177; Percent complete: 79.4%; Average loss: 2.6455
Iteration: 3178; Percent complete: 79.5%; Average loss: 2.6796
Iteration: 3179; Percent complete: 79.5%; Average loss: 2.5787
Iteration: 3180; Percent complete: 79.5%; Average loss: 2.6542
Iteration: 3181; Percent complete: 79.5%; Average loss: 2.6439
Iteration: 3182; Percent complete: 79.5%; Average loss: 2.9058
Iteration: 3183; Percent complete: 79.6%; Average loss: 2.7914
Iteration: 3184; Percent complete: 79.6%; Average loss: 2.9720
Iteration: 3185; Percent complete: 79.6%; Average loss: 2.9058
Iteration: 3186; Percent complete: 79.7%; Average loss: 2.9869
Iteration: 3187; Percent complete: 79.7%; Average loss: 2.8183
Iteration: 3188; Percent complete: 79.7%; Average loss: 2.8371
Iteration: 3189; Percent complete: 79.7%; Average loss: 2.7776
Iteration: 3190; Percent complete: 79.8%; Average loss: 2.7613
Iteration: 3191; Percent complete: 79.8%; Average loss: 2.9674
Iteration: 3192; Percent complete: 79.8%; Average loss: 2.9549
Iteration: 3193; Percent complete: 79.8%; Average loss: 2.9594
Iteration: 3194; Percent complete: 79.8%; Average loss: 2.7329
Iteration: 3195; Percent complete: 79.9%; Average loss: 2.8074
Iteration: 3196; Percent complete: 79.9%; Average loss: 2.6561
Iteration: 3197; Percent complete: 79.9%; Average loss: 2.7443
Iteration: 3198; Percent complete: 80.0%; Average loss: 2.6066
Iteration: 3199; Percent complete: 80.0%; Average loss: 2.8317
Iteration: 3200; Percent complete: 80.0%; Average loss: 2.7500
Iteration: 3201; Percent complete: 80.0%; Average loss: 2.8004
Iteration: 3202; Percent complete: 80.0%; Average loss: 2.6914
Iteration: 3203; Percent complete: 80.1%; Average loss: 2.8438
Iteration: 3204; Percent complete: 80.1%; Average loss: 2.7708
Iteration: 3205; Percent complete: 80.1%; Average loss: 2.9199
Iteration: 3206; Percent complete: 80.2%; Average loss: 2.5783
Iteration: 3207; Percent complete: 80.2%; Average loss: 2.8577
Iteration: 3208; Percent complete: 80.2%; Average loss: 2.8174
Iteration: 3209; Percent complete: 80.2%; Average loss: 2.7610
Iteration: 3210; Percent complete: 80.2%; Average loss: 2.6854
Iteration: 3211; Percent complete: 80.3%; Average loss: 2.7561
Iteration: 3212; Percent complete: 80.3%; Average loss: 2.9036
Iteration: 3213; Percent complete: 80.3%; Average loss: 2.8052
Iteration: 3214; Percent complete: 80.3%; Average loss: 3.0007
Iteration: 3215; Percent complete: 80.4%; Average loss: 2.7575
Iteration: 3216; Percent complete: 80.4%; Average loss: 2.6973
Iteration: 3217; Percent complete: 80.4%; Average loss: 2.7970
Iteration: 3218; Percent complete: 80.5%; Average loss: 2.8918
Iteration: 3219; Percent complete: 80.5%; Average loss: 2.9152
Iteration: 3220; Percent complete: 80.5%; Average loss: 2.6621
Iteration: 3221; Percent complete: 80.5%; Average loss: 2.8784
Iteration: 3222; Percent complete: 80.5%; Average loss: 3.0926
Iteration: 3223; Percent complete: 80.6%; Average loss: 2.8750
Iteration: 3224; Percent complete: 80.6%; Average loss: 2.8580
Iteration: 3225; Percent complete: 80.6%; Average loss: 2.9297
Iteration: 3226; Percent complete: 80.7%; Average loss: 2.8308
Iteration: 3227; Percent complete: 80.7%; Average loss: 2.7933
Iteration: 3228; Percent complete: 80.7%; Average loss: 2.7561
Iteration: 3229; Percent complete: 80.7%; Average loss: 2.6562
Iteration: 3230; Percent complete: 80.8%; Average loss: 2.6789
Iteration: 3231; Percent complete: 80.8%; Average loss: 2.8531
Iteration: 3232; Percent complete: 80.8%; Average loss: 3.0052
Iteration: 3233; Percent complete: 80.8%; Average loss: 2.6047
Iteration: 3234; Percent complete: 80.8%; Average loss: 2.4950
Iteration: 3235; Percent complete: 80.9%; Average loss: 2.7674
Iteration: 3236; Percent complete: 80.9%; Average loss: 2.9976
Iteration: 3237; Percent complete: 80.9%; Average loss: 2.7712
Iteration: 3238; Percent complete: 81.0%; Average loss: 2.7501
Iteration: 3239; Percent complete: 81.0%; Average loss: 2.7968
Iteration: 3240; Percent complete: 81.0%; Average loss: 2.6339
Iteration: 3241; Percent complete: 81.0%; Average loss: 2.8814
Iteration: 3242; Percent complete: 81.0%; Average loss: 2.7787
Iteration: 3243; Percent complete: 81.1%; Average loss: 2.8817
Iteration: 3244; Percent complete: 81.1%; Average loss: 2.8635
Iteration: 3245; Percent complete: 81.1%; Average loss: 2.7408
Iteration: 3246; Percent complete: 81.2%; Average loss: 2.9884
Iteration: 3247; Percent complete: 81.2%; Average loss: 3.0731
Iteration: 3248; Percent complete: 81.2%; Average loss: 2.7087
Iteration: 3249; Percent complete: 81.2%; Average loss: 2.8798
Iteration: 3250; Percent complete: 81.2%; Average loss: 2.7542
Iteration: 3251; Percent complete: 81.3%; Average loss: 2.6715
Iteration: 3252; Percent complete: 81.3%; Average loss: 3.0571
Iteration: 3253; Percent complete: 81.3%; Average loss: 3.3330
Iteration: 3254; Percent complete: 81.3%; Average loss: 2.6983
Iteration: 3255; Percent complete: 81.4%; Average loss: 3.0250
Iteration: 3256; Percent complete: 81.4%; Average loss: 2.9733
Iteration: 3257; Percent complete: 81.4%; Average loss: 2.6828
Iteration: 3258; Percent complete: 81.5%; Average loss: 2.6881
Iteration: 3259; Percent complete: 81.5%; Average loss: 3.0033
Iteration: 3260; Percent complete: 81.5%; Average loss: 2.8634
Iteration: 3261; Percent complete: 81.5%; Average loss: 2.8723
Iteration: 3262; Percent complete: 81.5%; Average loss: 2.9353
Iteration: 3263; Percent complete: 81.6%; Average loss: 2.9779
Iteration: 3264; Percent complete: 81.6%; Average loss: 2.8291
Iteration: 3265; Percent complete: 81.6%; Average loss: 2.8049
Iteration: 3266; Percent complete: 81.7%; Average loss: 2.7633
Iteration: 3267; Percent complete: 81.7%; Average loss: 3.0322
Iteration: 3268; Percent complete: 81.7%; Average loss: 2.7045
Iteration: 3269; Percent complete: 81.7%; Average loss: 2.9799
Iteration: 3270; Percent complete: 81.8%; Average loss: 2.6534
Iteration: 3271; Percent complete: 81.8%; Average loss: 3.0116
Iteration: 3272; Percent complete: 81.8%; Average loss: 2.7875
Iteration: 3273; Percent complete: 81.8%; Average loss: 3.0172
Iteration: 3274; Percent complete: 81.8%; Average loss: 3.0087
Iteration: 3275; Percent complete: 81.9%; Average loss: 2.8148
Iteration: 3276; Percent complete: 81.9%; Average loss: 2.7528
Iteration: 3277; Percent complete: 81.9%; Average loss: 2.7721
Iteration: 3278; Percent complete: 82.0%; Average loss: 2.8741
Iteration: 3279; Percent complete: 82.0%; Average loss: 2.8069
Iteration: 3280; Percent complete: 82.0%; Average loss: 2.6741
Iteration: 3281; Percent complete: 82.0%; Average loss: 2.7394
Iteration: 3282; Percent complete: 82.0%; Average loss: 2.8972
Iteration: 3283; Percent complete: 82.1%; Average loss: 2.6988
Iteration: 3284; Percent complete: 82.1%; Average loss: 2.7338
Iteration: 3285; Percent complete: 82.1%; Average loss: 2.8319
Iteration: 3286; Percent complete: 82.2%; Average loss: 2.8005
Iteration: 3287; Percent complete: 82.2%; Average loss: 2.7547
Iteration: 3288; Percent complete: 82.2%; Average loss: 2.6666
Iteration: 3289; Percent complete: 82.2%; Average loss: 2.8168
Iteration: 3290; Percent complete: 82.2%; Average loss: 2.5369
Iteration: 3291; Percent complete: 82.3%; Average loss: 2.7085
Iteration: 3292; Percent complete: 82.3%; Average loss: 2.7805
Iteration: 3293; Percent complete: 82.3%; Average loss: 2.7245
Iteration: 3294; Percent complete: 82.3%; Average loss: 3.1890
Iteration: 3295; Percent complete: 82.4%; Average loss: 2.9149
Iteration: 3296; Percent complete: 82.4%; Average loss: 2.7657
Iteration: 3297; Percent complete: 82.4%; Average loss: 2.7136
Iteration: 3298; Percent complete: 82.5%; Average loss: 2.9279
Iteration: 3299; Percent complete: 82.5%; Average loss: 3.0155
Iteration: 3300; Percent complete: 82.5%; Average loss: 2.7234
Iteration: 3301; Percent complete: 82.5%; Average loss: 2.9938
Iteration: 3302; Percent complete: 82.5%; Average loss: 2.8014
Iteration: 3303; Percent complete: 82.6%; Average loss: 2.6247
Iteration: 3304; Percent complete: 82.6%; Average loss: 2.5831
Iteration: 3305; Percent complete: 82.6%; Average loss: 2.7363
Iteration: 3306; Percent complete: 82.7%; Average loss: 2.7949
Iteration: 3307; Percent complete: 82.7%; Average loss: 2.8906
Iteration: 3308; Percent complete: 82.7%; Average loss: 2.7224
Iteration: 3309; Percent complete: 82.7%; Average loss: 2.8082
Iteration: 3310; Percent complete: 82.8%; Average loss: 2.9249
Iteration: 3311; Percent complete: 82.8%; Average loss: 2.8466
Iteration: 3312; Percent complete: 82.8%; Average loss: 2.8677
Iteration: 3313; Percent complete: 82.8%; Average loss: 2.6547
Iteration: 3314; Percent complete: 82.8%; Average loss: 2.8269
Iteration: 3315; Percent complete: 82.9%; Average loss: 2.7362
Iteration: 3316; Percent complete: 82.9%; Average loss: 2.5891
Iteration: 3317; Percent complete: 82.9%; Average loss: 2.9444
Iteration: 3318; Percent complete: 83.0%; Average loss: 2.6676
Iteration: 3319; Percent complete: 83.0%; Average loss: 2.5693
Iteration: 3320; Percent complete: 83.0%; Average loss: 2.6553
Iteration: 3321; Percent complete: 83.0%; Average loss: 3.1853
Iteration: 3322; Percent complete: 83.0%; Average loss: 2.8278
Iteration: 3323; Percent complete: 83.1%; Average loss: 2.7003
Iteration: 3324; Percent complete: 83.1%; Average loss: 2.6991
Iteration: 3325; Percent complete: 83.1%; Average loss: 2.5779
Iteration: 3326; Percent complete: 83.2%; Average loss: 2.7598
Iteration: 3327; Percent complete: 83.2%; Average loss: 2.6882
Iteration: 3328; Percent complete: 83.2%; Average loss: 2.6490
Iteration: 3329; Percent complete: 83.2%; Average loss: 2.5372
Iteration: 3330; Percent complete: 83.2%; Average loss: 2.9420
Iteration: 3331; Percent complete: 83.3%; Average loss: 2.6753
Iteration: 3332; Percent complete: 83.3%; Average loss: 2.8137
Iteration: 3333; Percent complete: 83.3%; Average loss: 2.5238
Iteration: 3334; Percent complete: 83.4%; Average loss: 2.9174
Iteration: 3335; Percent complete: 83.4%; Average loss: 2.7533
Iteration: 3336; Percent complete: 83.4%; Average loss: 2.8130
Iteration: 3337; Percent complete: 83.4%; Average loss: 2.8712
Iteration: 3338; Percent complete: 83.5%; Average loss: 2.3551
Iteration: 3339; Percent complete: 83.5%; Average loss: 3.0112
Iteration: 3340; Percent complete: 83.5%; Average loss: 2.8787
Iteration: 3341; Percent complete: 83.5%; Average loss: 2.8312
Iteration: 3342; Percent complete: 83.5%; Average loss: 2.5435
Iteration: 3343; Percent complete: 83.6%; Average loss: 2.8911
Iteration: 3344; Percent complete: 83.6%; Average loss: 2.6441
Iteration: 3345; Percent complete: 83.6%; Average loss: 2.8864
Iteration: 3346; Percent complete: 83.7%; Average loss: 2.8653
Iteration: 3347; Percent complete: 83.7%; Average loss: 2.9198
Iteration: 3348; Percent complete: 83.7%; Average loss: 2.9492
Iteration: 3349; Percent complete: 83.7%; Average loss: 2.7297
Iteration: 3350; Percent complete: 83.8%; Average loss: 2.8134
Iteration: 3351; Percent complete: 83.8%; Average loss: 2.9488
Iteration: 3352; Percent complete: 83.8%; Average loss: 2.8816
Iteration: 3353; Percent complete: 83.8%; Average loss: 2.4518
Iteration: 3354; Percent complete: 83.9%; Average loss: 2.7271
Iteration: 3355; Percent complete: 83.9%; Average loss: 2.8993
Iteration: 3356; Percent complete: 83.9%; Average loss: 2.6532
Iteration: 3357; Percent complete: 83.9%; Average loss: 2.9320
Iteration: 3358; Percent complete: 84.0%; Average loss: 2.9911
Iteration: 3359; Percent complete: 84.0%; Average loss: 2.7944
Iteration: 3360; Percent complete: 84.0%; Average loss: 3.0085
Iteration: 3361; Percent complete: 84.0%; Average loss: 2.9394
Iteration: 3362; Percent complete: 84.0%; Average loss: 2.7193
Iteration: 3363; Percent complete: 84.1%; Average loss: 2.9555
Iteration: 3364; Percent complete: 84.1%; Average loss: 2.7566
Iteration: 3365; Percent complete: 84.1%; Average loss: 2.6247
Iteration: 3366; Percent complete: 84.2%; Average loss: 2.7382
Iteration: 3367; Percent complete: 84.2%; Average loss: 2.6969
Iteration: 3368; Percent complete: 84.2%; Average loss: 2.9589
Iteration: 3369; Percent complete: 84.2%; Average loss: 2.6176
Iteration: 3370; Percent complete: 84.2%; Average loss: 2.8449
Iteration: 3371; Percent complete: 84.3%; Average loss: 2.6390
Iteration: 3372; Percent complete: 84.3%; Average loss: 2.8939
Iteration: 3373; Percent complete: 84.3%; Average loss: 2.8959
Iteration: 3374; Percent complete: 84.4%; Average loss: 2.6608
Iteration: 3375; Percent complete: 84.4%; Average loss: 2.9910
Iteration: 3376; Percent complete: 84.4%; Average loss: 3.0239
Iteration: 3377; Percent complete: 84.4%; Average loss: 2.8846
Iteration: 3378; Percent complete: 84.5%; Average loss: 2.5685
Iteration: 3379; Percent complete: 84.5%; Average loss: 2.8665
Iteration: 3380; Percent complete: 84.5%; Average loss: 2.7580
Iteration: 3381; Percent complete: 84.5%; Average loss: 2.8552
Iteration: 3382; Percent complete: 84.5%; Average loss: 2.9449
Iteration: 3383; Percent complete: 84.6%; Average loss: 2.7559
Iteration: 3384; Percent complete: 84.6%; Average loss: 2.8599
Iteration: 3385; Percent complete: 84.6%; Average loss: 2.8505
Iteration: 3386; Percent complete: 84.7%; Average loss: 2.6894
Iteration: 3387; Percent complete: 84.7%; Average loss: 2.8143
Iteration: 3388; Percent complete: 84.7%; Average loss: 2.6176
Iteration: 3389; Percent complete: 84.7%; Average loss: 2.6595
Iteration: 3390; Percent complete: 84.8%; Average loss: 2.7361
Iteration: 3391; Percent complete: 84.8%; Average loss: 2.6742
Iteration: 3392; Percent complete: 84.8%; Average loss: 2.4893
Iteration: 3393; Percent complete: 84.8%; Average loss: 2.6910
Iteration: 3394; Percent complete: 84.9%; Average loss: 2.7149
Iteration: 3395; Percent complete: 84.9%; Average loss: 2.9347
Iteration: 3396; Percent complete: 84.9%; Average loss: 2.9463
Iteration: 3397; Percent complete: 84.9%; Average loss: 2.7062
Iteration: 3398; Percent complete: 85.0%; Average loss: 2.6465
Iteration: 3399; Percent complete: 85.0%; Average loss: 2.8524
Iteration: 3400; Percent complete: 85.0%; Average loss: 2.7265
Iteration: 3401; Percent complete: 85.0%; Average loss: 2.8430
Iteration: 3402; Percent complete: 85.0%; Average loss: 2.7522
Iteration: 3403; Percent complete: 85.1%; Average loss: 2.7838
Iteration: 3404; Percent complete: 85.1%; Average loss: 2.7734
Iteration: 3405; Percent complete: 85.1%; Average loss: 2.8113
Iteration: 3406; Percent complete: 85.2%; Average loss: 2.9767
Iteration: 3407; Percent complete: 85.2%; Average loss: 2.7896
Iteration: 3408; Percent complete: 85.2%; Average loss: 2.8635
Iteration: 3409; Percent complete: 85.2%; Average loss: 2.7222
Iteration: 3410; Percent complete: 85.2%; Average loss: 2.6737
Iteration: 3411; Percent complete: 85.3%; Average loss: 2.7309
Iteration: 3412; Percent complete: 85.3%; Average loss: 2.7945
Iteration: 3413; Percent complete: 85.3%; Average loss: 2.4796
Iteration: 3414; Percent complete: 85.4%; Average loss: 2.5152
Iteration: 3415; Percent complete: 85.4%; Average loss: 2.9534
Iteration: 3416; Percent complete: 85.4%; Average loss: 2.9114
Iteration: 3417; Percent complete: 85.4%; Average loss: 2.7668
Iteration: 3418; Percent complete: 85.5%; Average loss: 2.7216
Iteration: 3419; Percent complete: 85.5%; Average loss: 2.7876
Iteration: 3420; Percent complete: 85.5%; Average loss: 2.7566
Iteration: 3421; Percent complete: 85.5%; Average loss: 2.8427
Iteration: 3422; Percent complete: 85.5%; Average loss: 2.8872
Iteration: 3423; Percent complete: 85.6%; Average loss: 2.7596
Iteration: 3424; Percent complete: 85.6%; Average loss: 2.9765
Iteration: 3425; Percent complete: 85.6%; Average loss: 2.8705
Iteration: 3426; Percent complete: 85.7%; Average loss: 2.8591
Iteration: 3427; Percent complete: 85.7%; Average loss: 2.7527
Iteration: 3428; Percent complete: 85.7%; Average loss: 2.8470
Iteration: 3429; Percent complete: 85.7%; Average loss: 2.8831
Iteration: 3430; Percent complete: 85.8%; Average loss: 2.4198
Iteration: 3431; Percent complete: 85.8%; Average loss: 2.7363
Iteration: 3432; Percent complete: 85.8%; Average loss: 2.5565
Iteration: 3433; Percent complete: 85.8%; Average loss: 2.9529
Iteration: 3434; Percent complete: 85.9%; Average loss: 2.7397
Iteration: 3435; Percent complete: 85.9%; Average loss: 2.9667
Iteration: 3436; Percent complete: 85.9%; Average loss: 2.6194
Iteration: 3437; Percent complete: 85.9%; Average loss: 2.8362
Iteration: 3438; Percent complete: 86.0%; Average loss: 2.8041
Iteration: 3439; Percent complete: 86.0%; Average loss: 2.7526
Iteration: 3440; Percent complete: 86.0%; Average loss: 2.9423
Iteration: 3441; Percent complete: 86.0%; Average loss: 2.8427
Iteration: 3442; Percent complete: 86.1%; Average loss: 2.5859
Iteration: 3443; Percent complete: 86.1%; Average loss: 2.7284
Iteration: 3444; Percent complete: 86.1%; Average loss: 2.8330
Iteration: 3445; Percent complete: 86.1%; Average loss: 3.0725
Iteration: 3446; Percent complete: 86.2%; Average loss: 2.8052
Iteration: 3447; Percent complete: 86.2%; Average loss: 2.5332
Iteration: 3448; Percent complete: 86.2%; Average loss: 2.8889
Iteration: 3449; Percent complete: 86.2%; Average loss: 2.7940
Iteration: 3450; Percent complete: 86.2%; Average loss: 2.6807
Iteration: 3451; Percent complete: 86.3%; Average loss: 2.9548
Iteration: 3452; Percent complete: 86.3%; Average loss: 2.7014
Iteration: 3453; Percent complete: 86.3%; Average loss: 2.7773
Iteration: 3454; Percent complete: 86.4%; Average loss: 2.7812
Iteration: 3455; Percent complete: 86.4%; Average loss: 2.6772
Iteration: 3456; Percent complete: 86.4%; Average loss: 2.8876
Iteration: 3457; Percent complete: 86.4%; Average loss: 2.8706
Iteration: 3458; Percent complete: 86.5%; Average loss: 2.6216
Iteration: 3459; Percent complete: 86.5%; Average loss: 2.5615
Iteration: 3460; Percent complete: 86.5%; Average loss: 2.7425
Iteration: 3461; Percent complete: 86.5%; Average loss: 3.2615
Iteration: 3462; Percent complete: 86.6%; Average loss: 2.5941
Iteration: 3463; Percent complete: 86.6%; Average loss: 2.6503
Iteration: 3464; Percent complete: 86.6%; Average loss: 2.7168
Iteration: 3465; Percent complete: 86.6%; Average loss: 2.8192
Iteration: 3466; Percent complete: 86.7%; Average loss: 2.7710
Iteration: 3467; Percent complete: 86.7%; Average loss: 2.6374
Iteration: 3468; Percent complete: 86.7%; Average loss: 2.6457
Iteration: 3469; Percent complete: 86.7%; Average loss: 3.1214
Iteration: 3470; Percent complete: 86.8%; Average loss: 2.7612
Iteration: 3471; Percent complete: 86.8%; Average loss: 2.9156
Iteration: 3472; Percent complete: 86.8%; Average loss: 2.5887
Iteration: 3473; Percent complete: 86.8%; Average loss: 2.8506
Iteration: 3474; Percent complete: 86.9%; Average loss: 3.0231
Iteration: 3475; Percent complete: 86.9%; Average loss: 2.7657
Iteration: 3476; Percent complete: 86.9%; Average loss: 2.7487
Iteration: 3477; Percent complete: 86.9%; Average loss: 2.7630
Iteration: 3478; Percent complete: 87.0%; Average loss: 2.6255
Iteration: 3479; Percent complete: 87.0%; Average loss: 2.5134
Iteration: 3480; Percent complete: 87.0%; Average loss: 2.6127
Iteration: 3481; Percent complete: 87.0%; Average loss: 2.6256
Iteration: 3482; Percent complete: 87.1%; Average loss: 2.8629
Iteration: 3483; Percent complete: 87.1%; Average loss: 2.8521
Iteration: 3484; Percent complete: 87.1%; Average loss: 2.9215
Iteration: 3485; Percent complete: 87.1%; Average loss: 2.8800
Iteration: 3486; Percent complete: 87.2%; Average loss: 2.8826
Iteration: 3487; Percent complete: 87.2%; Average loss: 2.7339
Iteration: 3488; Percent complete: 87.2%; Average loss: 2.7265
Iteration: 3489; Percent complete: 87.2%; Average loss: 2.6762
Iteration: 3490; Percent complete: 87.2%; Average loss: 2.8876
Iteration: 3491; Percent complete: 87.3%; Average loss: 2.8557
Iteration: 3492; Percent complete: 87.3%; Average loss: 2.8067
Iteration: 3493; Percent complete: 87.3%; Average loss: 2.8012
Iteration: 3494; Percent complete: 87.4%; Average loss: 2.6266
Iteration: 3495; Percent complete: 87.4%; Average loss: 2.9209
Iteration: 3496; Percent complete: 87.4%; Average loss: 2.6566
Iteration: 3497; Percent complete: 87.4%; Average loss: 2.4138
Iteration: 3498; Percent complete: 87.5%; Average loss: 2.8948
Iteration: 3499; Percent complete: 87.5%; Average loss: 2.9890
Iteration: 3500; Percent complete: 87.5%; Average loss: 2.8858
Iteration: 3501; Percent complete: 87.5%; Average loss: 2.8892
Iteration: 3502; Percent complete: 87.5%; Average loss: 2.7708
Iteration: 3503; Percent complete: 87.6%; Average loss: 2.8064
Iteration: 3504; Percent complete: 87.6%; Average loss: 2.7272
Iteration: 3505; Percent complete: 87.6%; Average loss: 2.8758
Iteration: 3506; Percent complete: 87.6%; Average loss: 2.8260
Iteration: 3507; Percent complete: 87.7%; Average loss: 2.8110
Iteration: 3508; Percent complete: 87.7%; Average loss: 2.8147
Iteration: 3509; Percent complete: 87.7%; Average loss: 2.5350
Iteration: 3510; Percent complete: 87.8%; Average loss: 2.7618
Iteration: 3511; Percent complete: 87.8%; Average loss: 2.7367
Iteration: 3512; Percent complete: 87.8%; Average loss: 2.5429
Iteration: 3513; Percent complete: 87.8%; Average loss: 2.8456
Iteration: 3514; Percent complete: 87.8%; Average loss: 2.8184
Iteration: 3515; Percent complete: 87.9%; Average loss: 2.7593
Iteration: 3516; Percent complete: 87.9%; Average loss: 2.9297
Iteration: 3517; Percent complete: 87.9%; Average loss: 2.7752
Iteration: 3518; Percent complete: 87.9%; Average loss: 2.7934
Iteration: 3519; Percent complete: 88.0%; Average loss: 2.7745
Iteration: 3520; Percent complete: 88.0%; Average loss: 2.7127
Iteration: 3521; Percent complete: 88.0%; Average loss: 2.5408
Iteration: 3522; Percent complete: 88.0%; Average loss: 2.6953
Iteration: 3523; Percent complete: 88.1%; Average loss: 2.6942
Iteration: 3524; Percent complete: 88.1%; Average loss: 2.8877
Iteration: 3525; Percent complete: 88.1%; Average loss: 2.5886
Iteration: 3526; Percent complete: 88.1%; Average loss: 2.8689
Iteration: 3527; Percent complete: 88.2%; Average loss: 2.6981
Iteration: 3528; Percent complete: 88.2%; Average loss: 2.7380
Iteration: 3529; Percent complete: 88.2%; Average loss: 2.6991
Iteration: 3530; Percent complete: 88.2%; Average loss: 2.8321
Iteration: 3531; Percent complete: 88.3%; Average loss: 2.6135
Iteration: 3532; Percent complete: 88.3%; Average loss: 2.7975
Iteration: 3533; Percent complete: 88.3%; Average loss: 3.0085
Iteration: 3534; Percent complete: 88.3%; Average loss: 2.7302
Iteration: 3535; Percent complete: 88.4%; Average loss: 2.7317
Iteration: 3536; Percent complete: 88.4%; Average loss: 2.8889
Iteration: 3537; Percent complete: 88.4%; Average loss: 2.7129
Iteration: 3538; Percent complete: 88.4%; Average loss: 2.8233
Iteration: 3539; Percent complete: 88.5%; Average loss: 2.5585
Iteration: 3540; Percent complete: 88.5%; Average loss: 2.7944
Iteration: 3541; Percent complete: 88.5%; Average loss: 2.8150
Iteration: 3542; Percent complete: 88.5%; Average loss: 2.7078
Iteration: 3543; Percent complete: 88.6%; Average loss: 2.7777
Iteration: 3544; Percent complete: 88.6%; Average loss: 2.8696
Iteration: 3545; Percent complete: 88.6%; Average loss: 2.7816
Iteration: 3546; Percent complete: 88.6%; Average loss: 2.6814
Iteration: 3547; Percent complete: 88.7%; Average loss: 2.5339
Iteration: 3548; Percent complete: 88.7%; Average loss: 3.0442
Iteration: 3549; Percent complete: 88.7%; Average loss: 2.8717
Iteration: 3550; Percent complete: 88.8%; Average loss: 2.5845
Iteration: 3551; Percent complete: 88.8%; Average loss: 2.8913
Iteration: 3552; Percent complete: 88.8%; Average loss: 2.8831
Iteration: 3553; Percent complete: 88.8%; Average loss: 2.5544
Iteration: 3554; Percent complete: 88.8%; Average loss: 2.7045
Iteration: 3555; Percent complete: 88.9%; Average loss: 2.7274
Iteration: 3556; Percent complete: 88.9%; Average loss: 2.7077
Iteration: 3557; Percent complete: 88.9%; Average loss: 2.8538
Iteration: 3558; Percent complete: 88.9%; Average loss: 2.6295
Iteration: 3559; Percent complete: 89.0%; Average loss: 2.6692
Iteration: 3560; Percent complete: 89.0%; Average loss: 2.8003
Iteration: 3561; Percent complete: 89.0%; Average loss: 2.5662
Iteration: 3562; Percent complete: 89.0%; Average loss: 2.5610
Iteration: 3563; Percent complete: 89.1%; Average loss: 2.6709
Iteration: 3564; Percent complete: 89.1%; Average loss: 2.7066
Iteration: 3565; Percent complete: 89.1%; Average loss: 2.9992
Iteration: 3566; Percent complete: 89.1%; Average loss: 2.6012
Iteration: 3567; Percent complete: 89.2%; Average loss: 2.7902
Iteration: 3568; Percent complete: 89.2%; Average loss: 2.6511
Iteration: 3569; Percent complete: 89.2%; Average loss: 2.6067
Iteration: 3570; Percent complete: 89.2%; Average loss: 2.5993
Iteration: 3571; Percent complete: 89.3%; Average loss: 2.7052
Iteration: 3572; Percent complete: 89.3%; Average loss: 2.5281
Iteration: 3573; Percent complete: 89.3%; Average loss: 2.7585
Iteration: 3574; Percent complete: 89.3%; Average loss: 2.7528
Iteration: 3575; Percent complete: 89.4%; Average loss: 2.6565
Iteration: 3576; Percent complete: 89.4%; Average loss: 2.5340
Iteration: 3577; Percent complete: 89.4%; Average loss: 2.9019
Iteration: 3578; Percent complete: 89.5%; Average loss: 2.7524
Iteration: 3579; Percent complete: 89.5%; Average loss: 2.6416
Iteration: 3580; Percent complete: 89.5%; Average loss: 2.8374
Iteration: 3581; Percent complete: 89.5%; Average loss: 2.7307
Iteration: 3582; Percent complete: 89.5%; Average loss: 2.7991
Iteration: 3583; Percent complete: 89.6%; Average loss: 2.7595
Iteration: 3584; Percent complete: 89.6%; Average loss: 2.7182
Iteration: 3585; Percent complete: 89.6%; Average loss: 2.7601
Iteration: 3586; Percent complete: 89.6%; Average loss: 2.6779
Iteration: 3587; Percent complete: 89.7%; Average loss: 2.6435
Iteration: 3588; Percent complete: 89.7%; Average loss: 2.7722
Iteration: 3589; Percent complete: 89.7%; Average loss: 2.5008
Iteration: 3590; Percent complete: 89.8%; Average loss: 2.7120
Iteration: 3591; Percent complete: 89.8%; Average loss: 2.6623
Iteration: 3592; Percent complete: 89.8%; Average loss: 2.7538
Iteration: 3593; Percent complete: 89.8%; Average loss: 2.7138
Iteration: 3594; Percent complete: 89.8%; Average loss: 2.6425
Iteration: 3595; Percent complete: 89.9%; Average loss: 2.5590
Iteration: 3596; Percent complete: 89.9%; Average loss: 2.9277
Iteration: 3597; Percent complete: 89.9%; Average loss: 2.8287
Iteration: 3598; Percent complete: 90.0%; Average loss: 2.7855
Iteration: 3599; Percent complete: 90.0%; Average loss: 2.6606
Iteration: 3600; Percent complete: 90.0%; Average loss: 2.9628
Iteration: 3601; Percent complete: 90.0%; Average loss: 2.9510
Iteration: 3602; Percent complete: 90.0%; Average loss: 2.7397
Iteration: 3603; Percent complete: 90.1%; Average loss: 2.7124
Iteration: 3604; Percent complete: 90.1%; Average loss: 2.6421
Iteration: 3605; Percent complete: 90.1%; Average loss: 2.6359
Iteration: 3606; Percent complete: 90.1%; Average loss: 2.7313
Iteration: 3607; Percent complete: 90.2%; Average loss: 2.8215
Iteration: 3608; Percent complete: 90.2%; Average loss: 3.0418
Iteration: 3609; Percent complete: 90.2%; Average loss: 2.6691
Iteration: 3610; Percent complete: 90.2%; Average loss: 2.6880
Iteration: 3611; Percent complete: 90.3%; Average loss: 2.7029
Iteration: 3612; Percent complete: 90.3%; Average loss: 2.6654
Iteration: 3613; Percent complete: 90.3%; Average loss: 2.6689
Iteration: 3614; Percent complete: 90.3%; Average loss: 2.9069
Iteration: 3615; Percent complete: 90.4%; Average loss: 2.8511
Iteration: 3616; Percent complete: 90.4%; Average loss: 2.7859
Iteration: 3617; Percent complete: 90.4%; Average loss: 2.6409
Iteration: 3618; Percent complete: 90.5%; Average loss: 2.5545
Iteration: 3619; Percent complete: 90.5%; Average loss: 2.5503
Iteration: 3620; Percent complete: 90.5%; Average loss: 2.4519
Iteration: 3621; Percent complete: 90.5%; Average loss: 2.7404
Iteration: 3622; Percent complete: 90.5%; Average loss: 2.5949
Iteration: 3623; Percent complete: 90.6%; Average loss: 2.6594
Iteration: 3624; Percent complete: 90.6%; Average loss: 2.5480
Iteration: 3625; Percent complete: 90.6%; Average loss: 2.7288
Iteration: 3626; Percent complete: 90.6%; Average loss: 2.6305
Iteration: 3627; Percent complete: 90.7%; Average loss: 2.4757
Iteration: 3628; Percent complete: 90.7%; Average loss: 2.7828
Iteration: 3629; Percent complete: 90.7%; Average loss: 2.6561
Iteration: 3630; Percent complete: 90.8%; Average loss: 2.6882
Iteration: 3631; Percent complete: 90.8%; Average loss: 2.6821
Iteration: 3632; Percent complete: 90.8%; Average loss: 2.8865
Iteration: 3633; Percent complete: 90.8%; Average loss: 2.7930
Iteration: 3634; Percent complete: 90.8%; Average loss: 2.8731
Iteration: 3635; Percent complete: 90.9%; Average loss: 2.8932
Iteration: 3636; Percent complete: 90.9%; Average loss: 2.6577
Iteration: 3637; Percent complete: 90.9%; Average loss: 2.8576
Iteration: 3638; Percent complete: 91.0%; Average loss: 2.5843
Iteration: 3639; Percent complete: 91.0%; Average loss: 2.6472
Iteration: 3640; Percent complete: 91.0%; Average loss: 2.7376
Iteration: 3641; Percent complete: 91.0%; Average loss: 2.7860
Iteration: 3642; Percent complete: 91.0%; Average loss: 2.7021
Iteration: 3643; Percent complete: 91.1%; Average loss: 2.8267
Iteration: 3644; Percent complete: 91.1%; Average loss: 2.4940
Iteration: 3645; Percent complete: 91.1%; Average loss: 2.4316
Iteration: 3646; Percent complete: 91.1%; Average loss: 2.7669
Iteration: 3647; Percent complete: 91.2%; Average loss: 2.8125
Iteration: 3648; Percent complete: 91.2%; Average loss: 2.5776
Iteration: 3649; Percent complete: 91.2%; Average loss: 2.6557
Iteration: 3650; Percent complete: 91.2%; Average loss: 2.6198
Iteration: 3651; Percent complete: 91.3%; Average loss: 2.6538
Iteration: 3652; Percent complete: 91.3%; Average loss: 2.6557
Iteration: 3653; Percent complete: 91.3%; Average loss: 2.5738
Iteration: 3654; Percent complete: 91.3%; Average loss: 2.8869
Iteration: 3655; Percent complete: 91.4%; Average loss: 2.9378
Iteration: 3656; Percent complete: 91.4%; Average loss: 2.5412
Iteration: 3657; Percent complete: 91.4%; Average loss: 2.5504
Iteration: 3658; Percent complete: 91.5%; Average loss: 2.5977
Iteration: 3659; Percent complete: 91.5%; Average loss: 2.7018
Iteration: 3660; Percent complete: 91.5%; Average loss: 2.8918
Iteration: 3661; Percent complete: 91.5%; Average loss: 2.7883
Iteration: 3662; Percent complete: 91.5%; Average loss: 2.5903
Iteration: 3663; Percent complete: 91.6%; Average loss: 2.7988
Iteration: 3664; Percent complete: 91.6%; Average loss: 2.5127
Iteration: 3665; Percent complete: 91.6%; Average loss: 2.7208
Iteration: 3666; Percent complete: 91.6%; Average loss: 2.8109
Iteration: 3667; Percent complete: 91.7%; Average loss: 2.6483
Iteration: 3668; Percent complete: 91.7%; Average loss: 2.8302
Iteration: 3669; Percent complete: 91.7%; Average loss: 2.6553
Iteration: 3670; Percent complete: 91.8%; Average loss: 3.0120
Iteration: 3671; Percent complete: 91.8%; Average loss: 2.7848
Iteration: 3672; Percent complete: 91.8%; Average loss: 2.6769
Iteration: 3673; Percent complete: 91.8%; Average loss: 2.6282
Iteration: 3674; Percent complete: 91.8%; Average loss: 2.7295
Iteration: 3675; Percent complete: 91.9%; Average loss: 2.7143
Iteration: 3676; Percent complete: 91.9%; Average loss: 2.5716
Iteration: 3677; Percent complete: 91.9%; Average loss: 2.7935
Iteration: 3678; Percent complete: 92.0%; Average loss: 2.8182
Iteration: 3679; Percent complete: 92.0%; Average loss: 2.6473
Iteration: 3680; Percent complete: 92.0%; Average loss: 2.7498
Iteration: 3681; Percent complete: 92.0%; Average loss: 2.7282
Iteration: 3682; Percent complete: 92.0%; Average loss: 2.5375
Iteration: 3683; Percent complete: 92.1%; Average loss: 2.5238
Iteration: 3684; Percent complete: 92.1%; Average loss: 2.6492
Iteration: 3685; Percent complete: 92.1%; Average loss: 2.6260
Iteration: 3686; Percent complete: 92.2%; Average loss: 2.7377
Iteration: 3687; Percent complete: 92.2%; Average loss: 2.5227
Iteration: 3688; Percent complete: 92.2%; Average loss: 2.9575
Iteration: 3689; Percent complete: 92.2%; Average loss: 2.8390
Iteration: 3690; Percent complete: 92.2%; Average loss: 2.7277
Iteration: 3691; Percent complete: 92.3%; Average loss: 2.7572
Iteration: 3692; Percent complete: 92.3%; Average loss: 2.4945
Iteration: 3693; Percent complete: 92.3%; Average loss: 2.7131
Iteration: 3694; Percent complete: 92.3%; Average loss: 3.0265
Iteration: 3695; Percent complete: 92.4%; Average loss: 2.9605
Iteration: 3696; Percent complete: 92.4%; Average loss: 2.7964
Iteration: 3697; Percent complete: 92.4%; Average loss: 2.4079
Iteration: 3698; Percent complete: 92.5%; Average loss: 2.4906
Iteration: 3699; Percent complete: 92.5%; Average loss: 2.5959
Iteration: 3700; Percent complete: 92.5%; Average loss: 2.5600
Iteration: 3701; Percent complete: 92.5%; Average loss: 2.7676
Iteration: 3702; Percent complete: 92.5%; Average loss: 2.5792
Iteration: 3703; Percent complete: 92.6%; Average loss: 2.6941
Iteration: 3704; Percent complete: 92.6%; Average loss: 2.4646
Iteration: 3705; Percent complete: 92.6%; Average loss: 2.6584
Iteration: 3706; Percent complete: 92.7%; Average loss: 2.8391
Iteration: 3707; Percent complete: 92.7%; Average loss: 2.7096
Iteration: 3708; Percent complete: 92.7%; Average loss: 2.5859
Iteration: 3709; Percent complete: 92.7%; Average loss: 2.4536
Iteration: 3710; Percent complete: 92.8%; Average loss: 2.7044
Iteration: 3711; Percent complete: 92.8%; Average loss: 2.6563
Iteration: 3712; Percent complete: 92.8%; Average loss: 2.8460
Iteration: 3713; Percent complete: 92.8%; Average loss: 2.5310
Iteration: 3714; Percent complete: 92.8%; Average loss: 2.4377
Iteration: 3715; Percent complete: 92.9%; Average loss: 2.7251
Iteration: 3716; Percent complete: 92.9%; Average loss: 2.7344
Iteration: 3717; Percent complete: 92.9%; Average loss: 2.8797
Iteration: 3718; Percent complete: 93.0%; Average loss: 2.9595
Iteration: 3719; Percent complete: 93.0%; Average loss: 2.7771
Iteration: 3720; Percent complete: 93.0%; Average loss: 2.6576
Iteration: 3721; Percent complete: 93.0%; Average loss: 2.6256
Iteration: 3722; Percent complete: 93.0%; Average loss: 2.5817
Iteration: 3723; Percent complete: 93.1%; Average loss: 2.7199
Iteration: 3724; Percent complete: 93.1%; Average loss: 2.7414
Iteration: 3725; Percent complete: 93.1%; Average loss: 2.4906
Iteration: 3726; Percent complete: 93.2%; Average loss: 2.6650
Iteration: 3727; Percent complete: 93.2%; Average loss: 2.6611
Iteration: 3728; Percent complete: 93.2%; Average loss: 2.8548
Iteration: 3729; Percent complete: 93.2%; Average loss: 2.4581
Iteration: 3730; Percent complete: 93.2%; Average loss: 2.5229
Iteration: 3731; Percent complete: 93.3%; Average loss: 2.8147
Iteration: 3732; Percent complete: 93.3%; Average loss: 2.6782
Iteration: 3733; Percent complete: 93.3%; Average loss: 2.3824
Iteration: 3734; Percent complete: 93.3%; Average loss: 2.6514
Iteration: 3735; Percent complete: 93.4%; Average loss: 2.6404
Iteration: 3736; Percent complete: 93.4%; Average loss: 2.6637
Iteration: 3737; Percent complete: 93.4%; Average loss: 2.6848
Iteration: 3738; Percent complete: 93.5%; Average loss: 2.9877
Iteration: 3739; Percent complete: 93.5%; Average loss: 2.6647
Iteration: 3740; Percent complete: 93.5%; Average loss: 2.4273
Iteration: 3741; Percent complete: 93.5%; Average loss: 2.6972
Iteration: 3742; Percent complete: 93.5%; Average loss: 2.8252
Iteration: 3743; Percent complete: 93.6%; Average loss: 2.7190
Iteration: 3744; Percent complete: 93.6%; Average loss: 2.7739
Iteration: 3745; Percent complete: 93.6%; Average loss: 2.5601
Iteration: 3746; Percent complete: 93.7%; Average loss: 2.6571
Iteration: 3747; Percent complete: 93.7%; Average loss: 2.6198
Iteration: 3748; Percent complete: 93.7%; Average loss: 2.7366
Iteration: 3749; Percent complete: 93.7%; Average loss: 2.9103
Iteration: 3750; Percent complete: 93.8%; Average loss: 2.5969
Iteration: 3751; Percent complete: 93.8%; Average loss: 2.4738
Iteration: 3752; Percent complete: 93.8%; Average loss: 2.6347
Iteration: 3753; Percent complete: 93.8%; Average loss: 2.5298
Iteration: 3754; Percent complete: 93.8%; Average loss: 2.7946
Iteration: 3755; Percent complete: 93.9%; Average loss: 2.7999
Iteration: 3756; Percent complete: 93.9%; Average loss: 2.6498
Iteration: 3757; Percent complete: 93.9%; Average loss: 2.8439
Iteration: 3758; Percent complete: 94.0%; Average loss: 2.5057
Iteration: 3759; Percent complete: 94.0%; Average loss: 2.5144
Iteration: 3760; Percent complete: 94.0%; Average loss: 2.5888
Iteration: 3761; Percent complete: 94.0%; Average loss: 2.6864
Iteration: 3762; Percent complete: 94.0%; Average loss: 2.7400
Iteration: 3763; Percent complete: 94.1%; Average loss: 2.8270
Iteration: 3764; Percent complete: 94.1%; Average loss: 2.5577
Iteration: 3765; Percent complete: 94.1%; Average loss: 2.6489
Iteration: 3766; Percent complete: 94.2%; Average loss: 2.4966
Iteration: 3767; Percent complete: 94.2%; Average loss: 2.4537
Iteration: 3768; Percent complete: 94.2%; Average loss: 2.6680
Iteration: 3769; Percent complete: 94.2%; Average loss: 2.5815
Iteration: 3770; Percent complete: 94.2%; Average loss: 2.7102
Iteration: 3771; Percent complete: 94.3%; Average loss: 2.5469
Iteration: 3772; Percent complete: 94.3%; Average loss: 2.8005
Iteration: 3773; Percent complete: 94.3%; Average loss: 2.8022
Iteration: 3774; Percent complete: 94.3%; Average loss: 2.5847
Iteration: 3775; Percent complete: 94.4%; Average loss: 2.4928
Iteration: 3776; Percent complete: 94.4%; Average loss: 2.8989
Iteration: 3777; Percent complete: 94.4%; Average loss: 2.5363
Iteration: 3778; Percent complete: 94.5%; Average loss: 2.6962
Iteration: 3779; Percent complete: 94.5%; Average loss: 2.6829
Iteration: 3780; Percent complete: 94.5%; Average loss: 2.8136
Iteration: 3781; Percent complete: 94.5%; Average loss: 2.5867
Iteration: 3782; Percent complete: 94.5%; Average loss: 2.8782
Iteration: 3783; Percent complete: 94.6%; Average loss: 2.8201
Iteration: 3784; Percent complete: 94.6%; Average loss: 2.6383
Iteration: 3785; Percent complete: 94.6%; Average loss: 2.5777
Iteration: 3786; Percent complete: 94.7%; Average loss: 2.4788
Iteration: 3787; Percent complete: 94.7%; Average loss: 2.5911
Iteration: 3788; Percent complete: 94.7%; Average loss: 2.7576
Iteration: 3789; Percent complete: 94.7%; Average loss: 2.7454
Iteration: 3790; Percent complete: 94.8%; Average loss: 2.7091
Iteration: 3791; Percent complete: 94.8%; Average loss: 2.6213
Iteration: 3792; Percent complete: 94.8%; Average loss: 2.5496
Iteration: 3793; Percent complete: 94.8%; Average loss: 2.7322
Iteration: 3794; Percent complete: 94.8%; Average loss: 2.3886
Iteration: 3795; Percent complete: 94.9%; Average loss: 2.6657
Iteration: 3796; Percent complete: 94.9%; Average loss: 2.8564
Iteration: 3797; Percent complete: 94.9%; Average loss: 2.4456
Iteration: 3798; Percent complete: 95.0%; Average loss: 2.5692
Iteration: 3799; Percent complete: 95.0%; Average loss: 2.8010
Iteration: 3800; Percent complete: 95.0%; Average loss: 2.2614
Iteration: 3801; Percent complete: 95.0%; Average loss: 2.7984
Iteration: 3802; Percent complete: 95.0%; Average loss: 2.7884
Iteration: 3803; Percent complete: 95.1%; Average loss: 2.7712
Iteration: 3804; Percent complete: 95.1%; Average loss: 2.5229
Iteration: 3805; Percent complete: 95.1%; Average loss: 2.4450
Iteration: 3806; Percent complete: 95.2%; Average loss: 2.6248
Iteration: 3807; Percent complete: 95.2%; Average loss: 2.5651
Iteration: 3808; Percent complete: 95.2%; Average loss: 2.4469
Iteration: 3809; Percent complete: 95.2%; Average loss: 2.7313
Iteration: 3810; Percent complete: 95.2%; Average loss: 2.5153
Iteration: 3811; Percent complete: 95.3%; Average loss: 2.5567
Iteration: 3812; Percent complete: 95.3%; Average loss: 2.6061
Iteration: 3813; Percent complete: 95.3%; Average loss: 2.4531
Iteration: 3814; Percent complete: 95.3%; Average loss: 2.6182
Iteration: 3815; Percent complete: 95.4%; Average loss: 2.5087
Iteration: 3816; Percent complete: 95.4%; Average loss: 2.7127
Iteration: 3817; Percent complete: 95.4%; Average loss: 2.5835
Iteration: 3818; Percent complete: 95.5%; Average loss: 2.8127
Iteration: 3819; Percent complete: 95.5%; Average loss: 2.5900
Iteration: 3820; Percent complete: 95.5%; Average loss: 2.6101
Iteration: 3821; Percent complete: 95.5%; Average loss: 2.6999
Iteration: 3822; Percent complete: 95.5%; Average loss: 2.6357
Iteration: 3823; Percent complete: 95.6%; Average loss: 2.6386
Iteration: 3824; Percent complete: 95.6%; Average loss: 2.6161
Iteration: 3825; Percent complete: 95.6%; Average loss: 2.5907
Iteration: 3826; Percent complete: 95.7%; Average loss: 2.6566
Iteration: 3827; Percent complete: 95.7%; Average loss: 2.4484
Iteration: 3828; Percent complete: 95.7%; Average loss: 2.6673
Iteration: 3829; Percent complete: 95.7%; Average loss: 2.6375
Iteration: 3830; Percent complete: 95.8%; Average loss: 2.3896
Iteration: 3831; Percent complete: 95.8%; Average loss: 2.5844
Iteration: 3832; Percent complete: 95.8%; Average loss: 2.8421
Iteration: 3833; Percent complete: 95.8%; Average loss: 2.4778
Iteration: 3834; Percent complete: 95.9%; Average loss: 2.6856
Iteration: 3835; Percent complete: 95.9%; Average loss: 2.7101
Iteration: 3836; Percent complete: 95.9%; Average loss: 2.8520
Iteration: 3837; Percent complete: 95.9%; Average loss: 2.4607
Iteration: 3838; Percent complete: 96.0%; Average loss: 2.6522
Iteration: 3839; Percent complete: 96.0%; Average loss: 2.8054
Iteration: 3840; Percent complete: 96.0%; Average loss: 2.7752
Iteration: 3841; Percent complete: 96.0%; Average loss: 2.7811
Iteration: 3842; Percent complete: 96.0%; Average loss: 2.4438
Iteration: 3843; Percent complete: 96.1%; Average loss: 2.8324
Iteration: 3844; Percent complete: 96.1%; Average loss: 2.4786
Iteration: 3845; Percent complete: 96.1%; Average loss: 2.6973
Iteration: 3846; Percent complete: 96.2%; Average loss: 2.5719
Iteration: 3847; Percent complete: 96.2%; Average loss: 2.6068
Iteration: 3848; Percent complete: 96.2%; Average loss: 2.4777
Iteration: 3849; Percent complete: 96.2%; Average loss: 2.6046
Iteration: 3850; Percent complete: 96.2%; Average loss: 2.5650
Iteration: 3851; Percent complete: 96.3%; Average loss: 2.7757
Iteration: 3852; Percent complete: 96.3%; Average loss: 2.6050
Iteration: 3853; Percent complete: 96.3%; Average loss: 2.6056
Iteration: 3854; Percent complete: 96.4%; Average loss: 2.7620
Iteration: 3855; Percent complete: 96.4%; Average loss: 2.8233
Iteration: 3856; Percent complete: 96.4%; Average loss: 2.6272
Iteration: 3857; Percent complete: 96.4%; Average loss: 2.6986
Iteration: 3858; Percent complete: 96.5%; Average loss: 2.8050
Iteration: 3859; Percent complete: 96.5%; Average loss: 2.6891
Iteration: 3860; Percent complete: 96.5%; Average loss: 2.7739
Iteration: 3861; Percent complete: 96.5%; Average loss: 2.5481
Iteration: 3862; Percent complete: 96.5%; Average loss: 2.6327
Iteration: 3863; Percent complete: 96.6%; Average loss: 2.6629
Iteration: 3864; Percent complete: 96.6%; Average loss: 2.5793
Iteration: 3865; Percent complete: 96.6%; Average loss: 2.5490
Iteration: 3866; Percent complete: 96.7%; Average loss: 2.6750
Iteration: 3867; Percent complete: 96.7%; Average loss: 2.5447
Iteration: 3868; Percent complete: 96.7%; Average loss: 2.4130
Iteration: 3869; Percent complete: 96.7%; Average loss: 2.7150
Iteration: 3870; Percent complete: 96.8%; Average loss: 2.5076
Iteration: 3871; Percent complete: 96.8%; Average loss: 2.6300
Iteration: 3872; Percent complete: 96.8%; Average loss: 2.8918
Iteration: 3873; Percent complete: 96.8%; Average loss: 2.6902
Iteration: 3874; Percent complete: 96.9%; Average loss: 2.6164
Iteration: 3875; Percent complete: 96.9%; Average loss: 2.9506
Iteration: 3876; Percent complete: 96.9%; Average loss: 2.8916
Iteration: 3877; Percent complete: 96.9%; Average loss: 2.6287
Iteration: 3878; Percent complete: 97.0%; Average loss: 2.5706
Iteration: 3879; Percent complete: 97.0%; Average loss: 2.8168
Iteration: 3880; Percent complete: 97.0%; Average loss: 2.8103
Iteration: 3881; Percent complete: 97.0%; Average loss: 2.6701
Iteration: 3882; Percent complete: 97.0%; Average loss: 2.4896
Iteration: 3883; Percent complete: 97.1%; Average loss: 2.7113
Iteration: 3884; Percent complete: 97.1%; Average loss: 2.9153
Iteration: 3885; Percent complete: 97.1%; Average loss: 2.4936
Iteration: 3886; Percent complete: 97.2%; Average loss: 2.6706
Iteration: 3887; Percent complete: 97.2%; Average loss: 2.5846
Iteration: 3888; Percent complete: 97.2%; Average loss: 2.7374
Iteration: 3889; Percent complete: 97.2%; Average loss: 2.7016
Iteration: 3890; Percent complete: 97.2%; Average loss: 2.7296
Iteration: 3891; Percent complete: 97.3%; Average loss: 2.6881
Iteration: 3892; Percent complete: 97.3%; Average loss: 2.4964
Iteration: 3893; Percent complete: 97.3%; Average loss: 2.7003
Iteration: 3894; Percent complete: 97.4%; Average loss: 2.7689
Iteration: 3895; Percent complete: 97.4%; Average loss: 2.4430
Iteration: 3896; Percent complete: 97.4%; Average loss: 2.6478
Iteration: 3897; Percent complete: 97.4%; Average loss: 2.7050
Iteration: 3898; Percent complete: 97.5%; Average loss: 2.3715
Iteration: 3899; Percent complete: 97.5%; Average loss: 2.4509
Iteration: 3900; Percent complete: 97.5%; Average loss: 2.7987
Iteration: 3901; Percent complete: 97.5%; Average loss: 2.6503
Iteration: 3902; Percent complete: 97.5%; Average loss: 2.6403
Iteration: 3903; Percent complete: 97.6%; Average loss: 2.6875
Iteration: 3904; Percent complete: 97.6%; Average loss: 2.6681
Iteration: 3905; Percent complete: 97.6%; Average loss: 2.7524
Iteration: 3906; Percent complete: 97.7%; Average loss: 2.6458
Iteration: 3907; Percent complete: 97.7%; Average loss: 2.6074
Iteration: 3908; Percent complete: 97.7%; Average loss: 2.5235
Iteration: 3909; Percent complete: 97.7%; Average loss: 2.6419
Iteration: 3910; Percent complete: 97.8%; Average loss: 2.5056
Iteration: 3911; Percent complete: 97.8%; Average loss: 2.8410
Iteration: 3912; Percent complete: 97.8%; Average loss: 2.4414
Iteration: 3913; Percent complete: 97.8%; Average loss: 2.6087
Iteration: 3914; Percent complete: 97.9%; Average loss: 2.6337
Iteration: 3915; Percent complete: 97.9%; Average loss: 2.6766
Iteration: 3916; Percent complete: 97.9%; Average loss: 2.6845
Iteration: 3917; Percent complete: 97.9%; Average loss: 2.6012
Iteration: 3918; Percent complete: 98.0%; Average loss: 2.7017
Iteration: 3919; Percent complete: 98.0%; Average loss: 2.3983
Iteration: 3920; Percent complete: 98.0%; Average loss: 2.6920
Iteration: 3921; Percent complete: 98.0%; Average loss: 2.7779
Iteration: 3922; Percent complete: 98.0%; Average loss: 2.5649
Iteration: 3923; Percent complete: 98.1%; Average loss: 2.6647
Iteration: 3924; Percent complete: 98.1%; Average loss: 2.5300
Iteration: 3925; Percent complete: 98.1%; Average loss: 2.4739
Iteration: 3926; Percent complete: 98.2%; Average loss: 2.7045
Iteration: 3927; Percent complete: 98.2%; Average loss: 2.8321
Iteration: 3928; Percent complete: 98.2%; Average loss: 2.5566
Iteration: 3929; Percent complete: 98.2%; Average loss: 2.5265
Iteration: 3930; Percent complete: 98.2%; Average loss: 2.5537
Iteration: 3931; Percent complete: 98.3%; Average loss: 2.7507
Iteration: 3932; Percent complete: 98.3%; Average loss: 2.6466
Iteration: 3933; Percent complete: 98.3%; Average loss: 2.4592
Iteration: 3934; Percent complete: 98.4%; Average loss: 2.6940
Iteration: 3935; Percent complete: 98.4%; Average loss: 2.4690
Iteration: 3936; Percent complete: 98.4%; Average loss: 2.5306
Iteration: 3937; Percent complete: 98.4%; Average loss: 2.6120
Iteration: 3938; Percent complete: 98.5%; Average loss: 2.5561
Iteration: 3939; Percent complete: 98.5%; Average loss: 2.6312
Iteration: 3940; Percent complete: 98.5%; Average loss: 2.5883
Iteration: 3941; Percent complete: 98.5%; Average loss: 2.4378
Iteration: 3942; Percent complete: 98.6%; Average loss: 2.7191
Iteration: 3943; Percent complete: 98.6%; Average loss: 2.5410
Iteration: 3944; Percent complete: 98.6%; Average loss: 2.5397
Iteration: 3945; Percent complete: 98.6%; Average loss: 2.3133
Iteration: 3946; Percent complete: 98.7%; Average loss: 2.5999
Iteration: 3947; Percent complete: 98.7%; Average loss: 2.6999
Iteration: 3948; Percent complete: 98.7%; Average loss: 2.4258
Iteration: 3949; Percent complete: 98.7%; Average loss: 2.6387
Iteration: 3950; Percent complete: 98.8%; Average loss: 2.8639
Iteration: 3951; Percent complete: 98.8%; Average loss: 2.6980
Iteration: 3952; Percent complete: 98.8%; Average loss: 2.8548
Iteration: 3953; Percent complete: 98.8%; Average loss: 2.5589
Iteration: 3954; Percent complete: 98.9%; Average loss: 2.6482
Iteration: 3955; Percent complete: 98.9%; Average loss: 2.5818
Iteration: 3956; Percent complete: 98.9%; Average loss: 2.5540
Iteration: 3957; Percent complete: 98.9%; Average loss: 2.5336
Iteration: 3958; Percent complete: 99.0%; Average loss: 2.7245
Iteration: 3959; Percent complete: 99.0%; Average loss: 2.4131
Iteration: 3960; Percent complete: 99.0%; Average loss: 2.4995
Iteration: 3961; Percent complete: 99.0%; Average loss: 2.6014
Iteration: 3962; Percent complete: 99.1%; Average loss: 2.5670
Iteration: 3963; Percent complete: 99.1%; Average loss: 2.4730
Iteration: 3964; Percent complete: 99.1%; Average loss: 2.4302
Iteration: 3965; Percent complete: 99.1%; Average loss: 2.4215
Iteration: 3966; Percent complete: 99.2%; Average loss: 2.7706
Iteration: 3967; Percent complete: 99.2%; Average loss: 2.4771
Iteration: 3968; Percent complete: 99.2%; Average loss: 2.4999
Iteration: 3969; Percent complete: 99.2%; Average loss: 2.7363
Iteration: 3970; Percent complete: 99.2%; Average loss: 2.7672
Iteration: 3971; Percent complete: 99.3%; Average loss: 2.7598
Iteration: 3972; Percent complete: 99.3%; Average loss: 2.6914
Iteration: 3973; Percent complete: 99.3%; Average loss: 2.6367
Iteration: 3974; Percent complete: 99.4%; Average loss: 2.5510
Iteration: 3975; Percent complete: 99.4%; Average loss: 2.5416
Iteration: 3976; Percent complete: 99.4%; Average loss: 2.7459
Iteration: 3977; Percent complete: 99.4%; Average loss: 2.5209
Iteration: 3978; Percent complete: 99.5%; Average loss: 2.5614
Iteration: 3979; Percent complete: 99.5%; Average loss: 2.5514
Iteration: 3980; Percent complete: 99.5%; Average loss: 2.5577
Iteration: 3981; Percent complete: 99.5%; Average loss: 2.5287
Iteration: 3982; Percent complete: 99.6%; Average loss: 2.6252
Iteration: 3983; Percent complete: 99.6%; Average loss: 2.5812
Iteration: 3984; Percent complete: 99.6%; Average loss: 2.5684
Iteration: 3985; Percent complete: 99.6%; Average loss: 2.6958
Iteration: 3986; Percent complete: 99.7%; Average loss: 2.5775
Iteration: 3987; Percent complete: 99.7%; Average loss: 2.5995
Iteration: 3988; Percent complete: 99.7%; Average loss: 2.6701
Iteration: 3989; Percent complete: 99.7%; Average loss: 2.6282
Iteration: 3990; Percent complete: 99.8%; Average loss: 2.7365
Iteration: 3991; Percent complete: 99.8%; Average loss: 2.7266
Iteration: 3992; Percent complete: 99.8%; Average loss: 2.5220
Iteration: 3993; Percent complete: 99.8%; Average loss: 2.6839
Iteration: 3994; Percent complete: 99.9%; Average loss: 2.6884
Iteration: 3995; Percent complete: 99.9%; Average loss: 2.2967
Iteration: 3996; Percent complete: 99.9%; Average loss: 2.4534
Iteration: 3997; Percent complete: 99.9%; Average loss: 2.6195
Iteration: 3998; Percent complete: 100.0%; Average loss: 2.6342
Iteration: 3999; Percent complete: 100.0%; Average loss: 2.4696
Iteration: 4000; Percent complete: 100.0%; Average loss: 2.4503
Run Evaluation#
To chat with your model, run the following block.
# Put the encoder and decoder in ``eval`` mode, which disables dropout
encoder.eval()
decoder.eval()
# Initialize search module
searcher = GreedySearchDecoder(encoder, decoder)
# Begin chatting (uncomment and run the following line to begin)
# evaluateInput(encoder, decoder, searcher, voc)
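If you would rather query the model programmatically than through the interactive prompt, a minimal single-turn helper might look like the sketch below. The ``ask_bot`` name is hypothetical, and the sketch assumes the ``normalizeString`` and ``evaluate`` helpers defined earlier in the tutorial, with ``evaluate(encoder, decoder, searcher, voc, sentence)`` returning a list of output word tokens.
# A minimal single-turn query sketch (assumes ``normalizeString`` and ``evaluate``
# from earlier in this tutorial; ``ask_bot`` is a hypothetical helper name).
def ask_bot(sentence):
    # Normalize the raw input string the same way the training data was normalized
    sentence = normalizeString(sentence)
    # Decode a response with the trained encoder/decoder and greedy searcher
    output_words = evaluate(encoder, decoder, searcher, voc, sentence)
    # Drop end-of-sentence and padding tokens before joining into a string
    output_words = [w for w in output_words if w not in ("EOS", "PAD")]
    return " ".join(output_words)

# Example (uncomment after training):
# print(ask_bot("where am I?"))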
Conclusion#
That’s all for this one, folks. Congratulations, you now know the fundamentals of building a generative chatbot model! If you’re interested, you can try tailoring the chatbot’s behavior by tweaking the model and training parameters, and by customizing the data you train the model on; one way to start is sketched below.
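For example, you might experiment with the configuration values set earlier in the tutorial. The variable names below are the ones used in that configuration cell, and the values are illustrative assumptions to try, not recommendations.
# Illustrative tweaks to the configuration defined earlier in this tutorial
# (values are assumptions to experiment with, not tuned recommendations).
hidden_size = 1024              # larger GRU hidden state
encoder_n_layers = 3            # deeper encoder
decoder_n_layers = 3            # deeper decoder
dropout = 0.2                   # stronger regularization
n_iteration = 8000              # train for more iterations
teacher_forcing_ratio = 0.9     # occasionally decode from the model's own predictions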
Check out the other tutorials for more cool deep learning applications in PyTorch!
Total running time of the script: (2 minutes 16.167 seconds)