wjxabai/Next-Generation-Agent

To address the drawbacks of large language models, this project designs a new agent intended to overcome them. The agent contains four deep neural networks: an abstract network, a concrete network, a decision network, and an execution network. Each network does only one thing, so it can focus on what it does well, much as the brain divides labor. The input is perceptual information from sensors. Output 1 is classes and attributes (common sense); output 2 is memory, or consciousness; output 3 is logic and theory; output 4 is action and practice. Networks 1 and 2 together constitute an auto-encoder.

With one-hot encoding, the class dimension becomes very high when the grain size is small, so binary encoding can be used instead. With the help of multi-level classes and sentence structure, output 1 is produced as a sentence expressing common sense. With the help of two-order dimensions, output 3 is also a sentence, so reasoning is much faster than in an LLM. The probability of a sentence is the joint probability of its words, so only high-probability words can be produced, which alleviates the hallucination problem.

With the help of a tanh select gate, the computation time is half that of self-attention when n is small, and the savings grow as n becomes large. With the help of multi-value functions, independent consciousness is produced: the agent acts on its own initiative rather than relying on prompts. Causal reasoning and continuous reasoning are realized by concatenating output 3 with the input of network 3 to form network 3's new input.
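The one-hot versus binary encoding trade-off can be sketched in a few lines. The class count of 1,000,000 below is an illustrative assumption, not a number from this repository:

```python
import math

def one_hot(index, num_classes):
    """One-hot: vector length equals the number of classes."""
    v = [0] * num_classes
    v[index] = 1
    return v

def binary_code(index, num_classes):
    """Binary code: vector length is only ceil(log2(num_classes))."""
    bits = max(1, math.ceil(math.log2(num_classes)))
    return [(index >> i) & 1 for i in reversed(range(bits))]

num_classes = 1_000_000          # fine-grained class inventory
print(len(one_hot(0, num_classes)))      # 1000000 dimensions
print(len(binary_code(0, num_classes)))  # 20 dimensions
```

At fine grain sizes the saving is dramatic: a million classes shrink from a million dimensions to twenty bits.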
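The hallucination-mitigation idea, scoring a sentence by the joint probability of its words and emitting only high-probability words, can be sketched as follows. The probability values and the 0.1 floor are assumptions for illustration:

```python
import math

def sentence_log_prob(word_probs):
    """Joint log-probability of a sentence: the sum of per-word log-probs,
    i.e. the log of the product of word probabilities."""
    return sum(math.log(p) for p in word_probs)

def accept(word_probs, min_word_prob=0.1):
    """Emit a sentence only if every word clears a probability floor,
    so low-confidence (hallucination-prone) words are filtered out."""
    return all(p >= min_word_prob for p in word_probs)

confident = [0.8, 0.6, 0.9]
shaky     = [0.8, 0.02, 0.9]   # one low-probability word sinks the sentence
print(accept(confident))  # True
print(accept(shaky))      # False
```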
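The exact form of the tanh select gate is not specified here, so the following is only an assumed sketch contrasting its cost with self-attention: the gate touches each of the n positions once (linear in n), while dot-product self-attention compares every pair of positions (quadratic in n), which is why the gap widens as n grows:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 4                       # sequence length, feature dimension
x = rng.standard_normal((n, d))
W = rng.standard_normal((d, d))   # hypothetical gate weights

def self_attention(x):
    """Plain dot-product self-attention: cost grows as O(n^2 * d)."""
    scores = x @ x.T / np.sqrt(x.shape[1])
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return weights @ x

def tanh_select_gate(x):
    """An assumed per-position tanh gate: values in (-1, 1) select which
    features pass through; cost grows only linearly in n."""
    gate = np.tanh(x @ W)
    return gate * x

print(self_attention(x).shape)    # (8, 4)
print(tanh_select_gate(x).shape)  # (8, 4)
```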
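The causal and continuous reasoning loop, concatenating output 3 with the input of network 3 to form its new input, might look like the sketch below. `decision_network` is a hypothetical stand-in, since the real network's weights and form are not published in this repository:

```python
import math

def decision_network(vec, out_dim=3):
    """Hypothetical stand-in for network 3: maps the concatenated input
    to a bounded conclusion vector via tanh."""
    return [math.tanh(sum(vec) / (i + 1)) for i in range(out_dim)]

x = [0.2, -0.1, 0.4]          # perception-derived input features
out = [0.0, 0.0, 0.0]         # no previous conclusion at step 0
for step in range(4):         # continuous reasoning over several steps
    new_input = x + out       # concatenate output 3 with the input
    out = decision_network(new_input)
print(len(out))  # 3
```

Each pass feeds the previous conclusion back in, so later conclusions can depend causally on earlier ones rather than on the raw perception alone.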