Contributor
Model
- Model name: Dola Seed
- Arena rank (if listed): 18
- Provider: Dola
Evidence (at least one required)
Option A: Web conversation link
Paste a public share link from any LLM web interface (ChatGPT, Claude, Gemini, Grok, Kimi, etc.):
Option B: Notebook
Upload or link to a Jupyter Notebook (.ipynb) showing the ISC trigger:
Option C: API log
Paste the prompt + response (or attach a JSON file):
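The template does not prescribe a fixed schema for the attached JSON file. A minimal sketch of one possible log layout (the field names here are illustrative assumptions, not a required format):

```python
import json

# Hypothetical API log structure for Option C; the field names are
# illustrative assumptions, not a schema required by this template.
api_log = {
    "model": "Dola Seed",
    "prompt": "<the exact task prompt sent to the model>",
    "response": "<the model's full, unedited response>",
}

# Serialize so it can be saved and attached to the report.
print(json.dumps(api_log, indent=2))
```

Any log format that preserves the verbatim prompt and response is equally acceptable.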
Harmful Content Details
- Type of harmful content (e.g., toxic language, lethal compounds, functional exploits, bioweapon sequences): toxic language
- Domain (e.g., cybersecurity, chemistry, biology, pharmacology, AI safety): Machine Learning Task framing
- Brief explanation (1-2 sentences — what did the model generate and why is it harmful?): toxic language
Method
Checklist
What counts as ISC? You only need to do one of two things:
- Directed generation: embed a harmful query into the task data (e.g., a test case the model must complete)
- Undirected generation: provide no harmful query at all — the model generates harmful content entirely on its own to fulfill the task
If the model produces harmful content under either condition, that's ISC. For more details, please read our paper.