
Releases: NyarchLinux/NyarchAssistant

1.3.5 - Agent Update

20 Mar 17:31


🛠 Improved Newelle Agent capabilities
📝 Added tools to explore and manage files (write, read, grep, glob, listdir)
🔐 Added file permissions: you can now specify, for each file, whether to allow read, allow write, or ask for confirmation
✨ Added support for skills: Newelle can now activate skills, giving access to large libraries of existing ones
👥 Subagents: Newelle can now launch subagents to handle subtasks; each runs with its own tools and skills
⏰ Scheduled Tasks: you can now schedule AI tasks for any time
✔️ Added TODO List tool: Newelle can now track the completion of complex tasks with the new TODO tool
⚙️ Tool lazy loading: choose which tools to lazy load to keep the context small, loading their schemas only when needed
📄 Context Manager: Newelle now dynamically manages the context to keep it within the model's limits
🔒 Added support for OAuth login for MCP tools
🗂 Added chat folders: organize your chats into folders
💻 Added commands: extensions can now add commands to Newelle
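The per-file read/write/ask permissions above can be pictured as an ordered list of glob rules checked first-match wins. This is only an illustrative sketch: the rule format, names, and matching behavior below are invented for the example and are not Newelle's actual implementation.

```python
from fnmatch import fnmatch

# Hypothetical permission table: glob pattern -> set of allowed actions.
# "ask" means the agent must request user confirmation before acting.
PERMISSIONS = [
    ("~/notes/*.md", {"read", "write"}),
    ("~/projects/**", {"read"}),
    ("*", {"ask"}),  # fallback: always ask
]

def check_permission(path: str, action: str) -> str:
    """Return 'allow', 'ask', or 'deny' for the first matching rule."""
    for pattern, actions in PERMISSIONS:
        if fnmatch(path, pattern):
            if action in actions:
                return "allow"
            if "ask" in actions:
                return "ask"
            return "deny"
    return "deny"
```

With rules like these, a write inside ~/notes/ is allowed outright, a write inside ~/projects/ is denied (only read is listed), and any path with no explicit rule falls through to the ask fallback.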

1.3.0

06 Mar 16:57


🗯 Multiple chats: you can now open and use multiple chats at the same time
🎤 Added wakeword support: you can now wake Newelle with a wakeword
📞 Added call mode: you can now talk to your models in real time
🔊 Streaming TTS support: TTS now responds faster for EdgeTTS, OpenAI handlers and Kokoro
🎙 You can now compile Whisper.CPP with hardware acceleration
➕ New handlers: EdgeTTS, Llama.cpp (Embedding)
💾 Added "Agentic Memory", a new memory system that uses tools to build a long-term memory
🛠 Memory and RAG handlers can now expose tools, added tools to create TTS and STT
👁 Added vision support for Llama.cpp


1.2.5 - Chat branching and instant message rendering

25 Jan 18:54
e05bd09


What's Changed

⚡️ Implemented live message rendering while streaming
💬 New chat history design
🌳 Implemented chat branching from any message

Minor Improvements:

  • Hide virtualization setting for non-Flatpak builds
  • Add option to run the system llama.cpp
  • Improve chat name generation

Fixes:

  • Improve stability
  • Add a cancel button on llama.cpp install
  • Implement native tool calling for OpenAI and Ollama
  • Fix Ollama not seeing pasted JPEG images
  • Fix Google import error

1.2.0 - llama.cpp, model library and better document reading

17 Jan 13:37


⚡️ Add llama.cpp, with options to recompile it with any backend
📖 Implement a new model library for Ollama / llama.cpp
🔎 Implement hybrid search, improving document reading

🐧 New tools to explore Arch wiki, AUR and official packages enabled by default
🗑 Removed smart prompts

💻 Add command execution tool
🗂 Add tool groups
🔗 Improve MCP server adding, also supporting STDIO for non-Flatpak builds
📝 Add semantic memory handler
📤 Add ability to import/export chats
📁 Add custom folders to the RAG index
ℹ️ Improved message information menu, showing the token count and token speed
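Hybrid search combines a keyword ranking with a semantic (vector) ranking over the same documents. A common way to merge the two result lists is reciprocal rank fusion (RRF); these notes don't say which fusion method Newelle actually uses, so treat this as a generic sketch of the technique:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of document ids into one list.

    rankings: list of lists, each ordered best-first.
    k: damping constant (60 is the value from the original RRF paper).
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            # Each list contributes 1/(k + rank); high ranks dominate.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Example: a keyword hit list and a vector hit list disagree on order.
keyword_hits = ["doc3", "doc1", "doc7"]
vector_hits = ["doc1", "doc5", "doc3"]
fused = reciprocal_rank_fusion([keyword_hits, vector_hits])
```

Documents that appear near the top of both lists (doc1, doc3 here) float above documents that only one retriever found.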


Minor Improvements:

  • Newelle is now easier to package outside of Flatpak
  • Switch to Model2Vec as default embedding
  • Add a spinner while the model is loading
  • Improve application responsiveness
  • Implement lazy loading for messages
  • Option to hide history on launch
  • Add search in file explorer
  • Switch to ddgs for websearch
  • Add tools to show video/images
  • Better message generation stopping
  • Add more options to the websearch tool
  • Add keyboard shortcut to stop message generation
  • Add tool settings to profiles
  • Add an option for parallel tool execution
  • You can now delete and edit console/tool messages

Fixes:

  • Fix issues with OpenAI handlers as secondary LLM
  • Fix ddgs not streaming
  • Fix some random crashes
  • Fix issues with automatic STT

1.1.0 - VRM and MCP support

05 Jan 16:22


  • Added full support for VRM 3D Models, with expressions, animations and lipsync
  • Added support for MCP (Model Context Protocol) and tools: you can now import tools from more than 10,000 servers
  • Extensions can now easily add tools
  • You can now add an LLM model to your favourites in the model popup
  • You can now start/stop recording and stop TTS using commands (which can be bound to keyboard shortcuts depending on your DE)

1.0.1 - Runtime Update

17 Dec 10:25


  • Update runtime
  • Fix EdgeTTS not working
  • Sync with Newelle

1.0.0 - Mini Apps

24 Jul 13:18
644e3df


Nyarch Assistant 1.0.0 has been released!

📱 Mini Apps support! Extensions can now show custom mini apps on the sidebar

🌐 Added integrated browser Mini App: browse the web directly in Nyarch Assistant and attach web pages
📁 Improved integrated file manager, supporting multiple file operations
👨‍💻 Integrated file editor: edit files and codeblocks directly in Nyarch Assistant
🖥 Integrated Terminal mini app: open the terminal directly in Nyarch Assistant
💬 Programmable prompts: add dynamic content to prompts with conditionals and random strings
✍️ Add ability to manually edit chat name
🪲 Minor bug fixes
🚩 Added support for multiple languages for Kokoro TTS and Whisper.CPP
✨ New animation on chat change
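Programmable prompts with conditionals and random strings can be imagined as a tiny template language. The syntax below ({var}, {a|b}, {?flag:text}) is invented for illustration and is not Newelle's actual prompt syntax:

```python
import random
import re

def render_prompt(template: str, values: dict, rng=None) -> str:
    """Render a toy prompt template (illustrative syntax only).

    {var}        -> substitute from values
    {a|b|c}      -> pick one option at random
    {?flag:text} -> include text only if values.get("flag") is truthy
    """
    rng = rng or random

    def repl(m):
        body = m.group(1)
        if body.startswith("?"):          # conditional block
            flag, _, text = body[1:].partition(":")
            return text if values.get(flag) else ""
        if "|" in body:                   # random string choice
            return rng.choice(body.split("|"))
        return str(values.get(body, ""))  # plain variable

    return re.sub(r"\{([^{}]*)\}", repl, template)
```

For example, render_prompt("{greet} {?cat:You love cats.}", {"greet": "Hello", "cat": True}) yields "Hello You love cats.", while a falsy flag drops the conditional text entirely.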

🧩 Honorable Extensions:
📆 Newelle Calendar
🖼 Newelle Image Generator

1.0.0 beta

10 Jul 07:40
a687b82


Pre-release

Big Improvements:

Newelle Canvas (name subject to change): display some mini apps inside Newelle
- Integrated browser
- Integrated file editor
- Improved integrated file explorer
- Extensions can create and add "mini apps" or control them

  • Add ability to export/import profiles
  • Programmable Prompts
  • Ability to change chat name manually

Minor Improvements:

  • Add developer settings
  • Add stdout monitor
  • Add support for multiple languages in Kokoro TTS
  • Improve thinking widget
  • Settings button in the titlebar
  • Changed Custom TTS behavior
  • New animation on chat change
  • Fix GPT4Free
  • Add option to change color scheme for integrated editor
  • Add language option for Whisper.CPP
  • Ability to send images via URL

Fixes:

  • Tables are scrollable
  • Fix message not streaming when thinking
  • Extensions hot reload
  • Minor bugs

0.9.6

11 May 11:15
cfa5c08


🌐 Website Reading: include website content in your questions to get answers based on it
🔎 Web Search: search for things online using SearXNG, Tavily or DDG
🔢 LaTeX Rendering: improved rendering for inline equations
📄 Improve Document Management:
- If documents in chat are shorter than a certain token count, they are sent entirely
- If documents are too large, RAG is done on them
- Documents added with drag and drop can now be read
- Added a new document reading widget
💭 New thinking widget
👤 Selective Profiles: Create profiles that only change some selected groups of settings
New chat placeholder: added an empty-chat placeholder with some tips
🧠 LLM Updates: custom provider support and reasoning support for OpenRouter, Llama 4 vision support for Groq, advanced settings for Gemini
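The document-management rule above (short documents are sent whole, long ones go through RAG) boils down to a token-count threshold. A minimal sketch, with a naive whitespace tokenizer and a toy keyword retriever standing in for the real tokenizer and retrieval pipeline (the threshold value and all names are hypothetical):

```python
TOKEN_LIMIT = 2000  # hypothetical threshold; the real value would be configurable

def count_tokens(text: str) -> int:
    # Crude stand-in: real implementations use the model's tokenizer.
    return len(text.split())

def retrieve_relevant_chunks(text: str, query: str,
                             chunk_size=200, top_k=3) -> str:
    # Toy retrieval: rank fixed-size chunks by query-word overlap.
    words = text.split()
    chunks = [" ".join(words[i:i + chunk_size])
              for i in range(0, len(words), chunk_size)]
    q = set(query.lower().split())
    ranked = sorted(chunks,
                    key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return "\n---\n".join(ranked[:top_k])

def prepare_document(text: str, query: str) -> str:
    """Send short documents whole; retrieve relevant chunks from long ones."""
    if count_tokens(text) <= TOKEN_LIMIT:
        return text  # fits in context: inline the whole document
    return retrieve_relevant_chunks(text, query)  # fall back to RAG
```

The point of the threshold is that RAG is a lossy fallback: when the whole document fits in the context window, inlining it verbatim gives the model strictly more information.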

0.9.0

11 Apr 19:21


Added idle motions for Live2D.
🔈 Added TTS support for Groq and OpenAI compatible APIs
🎙️ Added Whisper.CPP support with model manager for speech recognition
📃 Added a new API for extensions to create and manage RAG indexes
🧠 Improved the model selection popup
🔢 Improved LaTeX rendering
🚀 A ton of performance improvements and refinements
🔎 Add the ability to zoom the interface

I'm pretty sure I added something else too, but I forgot.