I want you to implement a new type of test that we are going to support, called: Challenge. A Challenge achieves its goal by utilizing the fully built system - the product. It will not use the source code or any internal component of the system, only the production-ready binaries that an end user will use! The idea of a Challenge test is to verify real use cases of the system / product by using it exactly as a real end user would to achieve the goal of the challenge! The end results of each challenge will be asserted and verified down to the smallest detail! There MUST NOT BE any empty, placeholder, stub, temporary or invalid data in the results of a challenge execution (what we have created using the product).

All challenge results will be placed in a proper challenges directory; these generated files must not be versioned in git. Its structure must be the following: challenges/name_of_the_challenge/year/month/date/time. Example: challenges/creating_providers_configurations/2025/12/23/00000000 ... All log data produced during the challenge execution has to be added to the challenge's directory under the logs subdirectory. Example: challenges/name_of_the_challenge/year/month/date/time/logs. We need to gather all possible logs, at the verbose level for everything, so if anything goes wrong we can track it down more easily!

To achieve the goal, only the binaries - the final derivatives of building our project - can be used! You will use them like any end user, by creating and passing proper configurations or arguments. Follow all user documentation and guides to use our system properly, exactly like a real end user would do! Document all commands, arguments and configurations passed to them! Make sure that you only use the apps we build - cli, tui, desktop, mobile, rest api and web. Every challenge assigned has to be executed with every derivative we have - cli, tui, desktop, mobile, rest api, web, etc.
If we have done the challenge with more than one built program (app / derivative), the challenge path has to be extended to this: challenges/name_of_the_challenge/year/month/date/time/PLATFORM. If the challenge is going to be executed with only one platform, then the path of the challenge should be: challenges/name_of_the_challenge/year/month/date/time. We MUST make sure that the challenges solution is GENERIC, capable of holding a bank of challenges, so we can run all of them or just certain challenges from the bank! We MUST have full documentation about this - including user guides with step-by-step instructions, from the basics up to the most advanced tutorials!
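The path rules above can be sketched as a small shell helper. This is only an illustration, not part of the spec: the helper name `challenge_dir` and the `HHMMSS` time format are assumptions (the example in the spec shows an eight-digit time component, so the exact format is still to be decided).

```shell
#!/bin/sh
# Sketch: compute a challenge results directory per the path rules.
# challenge_dir and the %H%M%S time format are assumptions, not part of the spec.
challenge_dir() {
  # $1 = challenge name, $2 = optional platform (cli, tui, desktop, ...)
  base="challenges/$1/$(date -u +%Y/%m/%d/%H%M%S)"
  if [ -n "$2" ]; then
    # multi-platform run: append the PLATFORM segment
    printf '%s/%s\n' "$base" "$2"
  else
    # single-platform run: no PLATFORM segment
    printf '%s\n' "$base"
  fi
}

dir=$(challenge_dir creating_providers_configurations cli)
mkdir -p "$dir/logs"   # logs subdirectory for verbose output
echo "$dir"
```

Running all challenges from the bank would then just loop over the challenge names and call a helper like this once per platform.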

This is the first challenge: process the providers, obtain all their models and all features they offer. Verify all providers and all their LLMs for real usability, and then create configuration files for opencode and crush which configure all these providers and LLMs with all supported features (MCPs, LSPs, Embeddings and others). Providers to do: Chutes, SiliconFlow, OpenRouter, Z.AI, Kimi, HuggingFace, Nvidia, DeepSeek, Qwen, Claude. Mark every LLM which is 100% free with the suffix "free to use". The ones which are not free will not have any suffix, but they will be part of the final configurations. All these models must be verified to work - free and paid ones! Make sure we support all types of LLMs offered by the providers - chat, coding, generative (all types - image, audio, video, etc.), and so on. API keys are defined as exported environment variables. Here are examples of them:
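The naming rule for free models can be sketched as follows. The helper name `model_display_name` and the exact spacing of the suffix are assumptions; the spec only fixes the suffix text "free to use" for 100% free models and no suffix for paid ones.

```shell
#!/bin/sh
# Sketch: build a model's display name for the generated configurations.
# model_display_name is a hypothetical helper, not part of the spec.
model_display_name() {
  # $1 = model id, $2 = "free" or "paid"
  if [ "$2" = "free" ]; then
    printf '%s free to use\n' "$1"   # 100% free model gets the suffix
  else
    printf '%s\n' "$1"               # paid model keeps its plain name
  fi
}

model_display_name "deepseek-chat" free
model_display_name "claude-sonnet" paid
```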

export ApiKey_HuggingFace=XXXXXXXXXX
export ApiKey_Nvidia=XXXXXXXXXX
export ApiKey_Chutes=XXXXXXXXXX
export ApiKey_SiliconFlow=XXXXXXXXXX
export ApiKey_Kimi=XXXXXXXXXX
export ApiKey_Gemini=XXXXXXXXXX
export ApiKey_OpenRouter=XXXXXXXXXX
export ApiKey_ZAI=XXXXXXXXXX
export ApiKey_DeepSeek=XXXXXXXXXX

If the API key for a certain provider is not valid or is not defined, we will skip that provider, and proper logs about that will be recorded! It will be part of the final report as well!
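The skip-and-log rule could look like the sketch below. The provider list follows the example exports above; the log file name `skipped_providers.log` is illustrative (per the spec, the real log would live under the challenge's logs subdirectory), and this only checks that a key is defined, not that it is valid.

```shell
#!/bin/sh
# Sketch: skip providers whose ApiKey_<Provider> variable is unset or empty,
# recording the decision so it can feed into the final report.
LOG=skipped_providers.log
: > "$LOG"
for provider in HuggingFace Nvidia Chutes SiliconFlow Kimi Gemini OpenRouter ZAI DeepSeek; do
  eval key=\${ApiKey_$provider:-}   # indirect lookup of ApiKey_<Provider>
  if [ -z "$key" ]; then
    echo "SKIP $provider: ApiKey_$provider not defined" >> "$LOG"
    continue
  fi
  echo "RUN  $provider"
done
```

A validity check (a cheap authenticated request per provider) would slot in after the emptiness check, appending a similar SKIP line on failure.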