A standalone package for simulating multi-agent conversations using EDSL and Expected Parrot (Coop) for language model inference.
Each conversation is a sequence of turns between EDSL Agent objects. At every turn, the current speaker is asked a QuestionFreeText whose prompt contains the conversation history so far. The answer becomes the next statement, and the cycle continues until a turn limit or stopping condition is reached.
Multiple conversations can be run in parallel via ConversationList, which uses threads — one per conversation. Each thread blocks independently on its own Coop jobs, so many conversations make progress simultaneously without requiring an async event loop.
```bash
pip install -e /path/to/conversation
```

Requires an Expected Parrot API key:

```bash
export EXPECTED_PARROT_API_KEY=your_key_here
```

```python
from edsl import Agent, AgentList
from conversation import Conversation

alice = Agent(name="Alice", traits={"motivation": "You are a curious scientist."})
bob = Agent(name="Bob", traits={"motivation": "You are a skeptical philosopher."})

c = Conversation(agent_list=AgentList([alice, bob]), max_turns=6, verbose=True)
c.converse()
```

Each turn is submitted to Coop as a normal EDSL job and blocks until the result returns. With `verbose=True`, each statement is printed as it arrives.
`Conversation.converse()` runs a simple sequential loop:

- Ask `_should_continue()` — stops when `max_turns` is reached or a custom `stopping_function` returns `True`.
- Call `next_speaker()` to get the current speaker (default: round-robin).
- Build a one-question EDSL job with `_build_job()`.
- Call `job.run()`, which submits to Coop and blocks until done.
- Append the result to `agent_statements` and advance the turn counter.
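Stripped of the EDSL machinery, the loop above can be sketched in plain Python. This is a simplified illustration, not the actual implementation: `ask` stands in for the Coop round-trip, and the transcript is a bare list of dicts.

```python
import itertools

def converse(agents, ask, max_turns, stopping_function=None):
    # Simplified sketch of Conversation.converse(): round-robin speakers,
    # a hard turn cap, and an optional early-stopping predicate.
    transcript = []                      # stands in for agent_statements
    speakers = itertools.cycle(agents)   # default round-robin next_speaker()
    for _ in range(max_turns):
        if stopping_function and stopping_function(transcript):
            break
        speaker = next(speakers)
        # In the real code this step builds an EDSL job and blocks on job.run().
        text = ask(speaker, transcript)
        transcript.append({speaker: text})
    return transcript

# Dummy "model": each agent just reports the turn number.
log = converse(["Alice", "Bob"], lambda s, t: f"{s} says turn {len(t)}", max_turns=4)
print(log)
```

The same skeleton accommodates every knob the constructor exposes: swap `itertools.cycle` for another generator, or pass a `stopping_function` to end early.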
The conversation history is passed into every job as a Scenario field (`conversation`), so each agent sees the full transcript before responding.
The default question template is:

```jinja
You are {{ speaker_name }}. This is the conversation so far: {{ conversation }}
{% if round_message is not none %}
{{ round_message }}
{% endif %}
What do you say next?
```
`{{ conversation }}` is a list of `{speaker_name: text}` dicts representing the transcript so far. `{{ round_message }}` is an optional per-round injection (see `per_round_message_template`).
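For illustration, the transcript structure can be mimicked with a small dataclass. This is a hypothetical simplification: the real `AgentStatement` / `AgentStatements` classes in `Conversation.py` carry more metadata, and `to_prompt_format` is an invented helper name.

```python
from dataclasses import dataclass

@dataclass
class AgentStatement:
    # Hypothetical simplified record of one turn: who spoke, and what they said.
    agent_name: str
    text: str

class AgentStatements(list):
    # A transcript is an ordered list of statements. Rendering it as
    # {speaker_name: text} dicts matches the format injected into prompts.
    def to_prompt_format(self):
        return [{s.agent_name: s.text} for s in self]

transcript = AgentStatements([
    AgentStatement("Alice", "Have you seen the new telescope data?"),
    AgentStatement("Bob", "I have, but I doubt the calibration."),
])
print(transcript.to_prompt_format())
```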
You can replace the entire prompt by passing a custom `QuestionFreeText` (or any `QuestionBase`) as `next_statement_question`. The question must use `question_name="dialogue"` so that `AgentStatement.text` can find the answer.
`ConversationList` spawns one `threading.Thread` per conversation and calls `c.converse()` in each:

```python
threads = [threading.Thread(target=c.converse, daemon=True) for c in self.conversations]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because turns block on network I/O (waiting for Coop), the GIL is released and threads make real concurrent progress. While conversation A is waiting for turn 3, conversation B is already waiting for its turn 3 on a different Coop job. No asyncio event loop is needed.
`job.run()` is synchronous and uses `poll_remote_inference_job` internally, which correctly handles both the `results_uuid` and `results_available` response paths from Coop. The async alternative (`run_async` / `results.fetch()`) is missing the `results_available` fallback and fails silently on some accounts. Threads give equivalent parallelism for the dozens-to-hundreds of conversations this module targets, with simpler code and no event loop management.
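The claim that blocking turns overlap across threads can be demonstrated with plain `threading` and a simulated network wait, no Coop required. `time.sleep` releases the GIL just as socket I/O does:

```python
import threading
import time

def fake_converse(turn_delay=0.2, turns=3):
    # Each "turn" blocks as if waiting on a Coop job; while one thread
    # sleeps, the GIL is released and the other conversations proceed.
    for _ in range(turns):
        time.sleep(turn_delay)

start = time.perf_counter()
threads = [threading.Thread(target=fake_converse, daemon=True) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# Four conversations of ~0.6 s each finish in roughly 0.6 s total,
# not the ~2.4 s a sequential run would take.
print(f"{elapsed:.2f}s")
```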
`converse()` retries only on transient network/server errors (HTTP 502, 503, 504, connection resets, timeouts). Application-level errors propagate immediately. Defaults: 3 retries, 5-second delay.
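A sketch of that retry policy, assuming transient failures surface as exceptions carrying an HTTP status. The `HTTPError` class and `with_retries` helper are illustrative names; the real logic lives inside `converse()` and inspects Coop's actual error types.

```python
import time

TRANSIENT_STATUSES = {502, 503, 504}

class HTTPError(Exception):
    # Hypothetical stand-in for the network errors Coop can raise.
    def __init__(self, status):
        super().__init__(f"HTTP {status}")
        self.status = status

def with_retries(fn, max_retries=3, retry_delay=5.0):
    # Retry only transient server errors; anything else propagates immediately.
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except HTTPError as e:
            if e.status not in TRANSIENT_STATUSES or attempt == max_retries:
                raise
            time.sleep(retry_delay)

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise HTTPError(503)   # transient: retried with a delay
    return "ok"

result = with_retries(flaky, retry_delay=0.01)
print(result, calls["n"])  # succeeds on the third attempt
```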
```python
c.converse(max_retries=5, retry_delay=10.0)
```

```python
Conversation(
    agent_list,                      # AgentList of participants
    max_turns=20,                    # hard cap on number of turns
    stopping_function=None,          # callable(AgentStatements) -> bool
    next_statement_question=None,    # custom QuestionBase (must have question_name="dialogue")
    next_speaker_generator=None,     # callable — see next_speaker_utilities
    verbose=False,                   # print each statement as it arrives
    per_round_message_template=None, # Jinja template injected each round
    conversation_index=None,         # set automatically by ConversationList
    cache=None,                      # edsl Cache object
    default_model=None,              # edsl Model applied to agents without one
)
```

| Method | Description |
|---|---|
| `converse(max_retries=3, retry_delay=5.0)` | Run the conversation to completion |
| `to_results()` | Return all statements as an EDSL `Results` object |
| `summarize()` | Return a `Scenario` with transcript and metadata, suitable for follow-up analysis |
| `to_dict()` / `from_dict()` | Serialization |
Called after each turn with the current `AgentStatements`. Return `True` to end the conversation early:

```python
def stop_on_deal(statements):
    if statements:
        return "DEAL" in statements[-1].text.upper()
    return False

c = Conversation(agent_list=..., stopping_function=stop_on_deal)
```

A Jinja template rendered each turn and injected as `{{ round_message }}` into the question. Available variables: `max_turns`, `current_turn`.
```python
c = Conversation(
    agent_list=...,
    per_round_message_template="You are on turn {{ current_turn }} of {{ max_turns }}. Start wrapping up.",
)
```

Your `next_statement_question` must include `{{ round_message }}` in its text when this is set.
Replace the default prompt entirely:

```python
from edsl import QuestionFreeText

q = QuestionFreeText(
    question_text="""
    You are {{ speaker_name }}. Conversation so far: {{ conversation }}
    Respond in character, in under 50 words.
    """,
    question_name="dialogue",
)

c = Conversation(agent_list=..., next_statement_question=q)
```

```python
ConversationList(conversations)  # list of Conversation objects
```

| Method | Description |
|---|---|
| `run()` | Run all conversations in parallel (blocks until all finish) |
| `to_results()` | Concatenate results from all conversations into one `Results` |
| `summarize()` | Return a `ScenarioList` of per-conversation summaries |
| `to_dict()` / `from_dict()` | Serialization |
Imported from `conversation.next_speaker_utilities`:

| Generator | Behaviour |
|---|---|
| `default_turn_taking_generator` | Round-robin (default) |
| `turn_taking_generator_with_focal_speaker` | One agent always speaks between others (e.g. an auctioneer) |
| `random_turn_taking_generator` | Random, no consecutive repeats |
| `random_inclusive_generator` | Random, every agent speaks once before anyone repeats |
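Two of these policies are simple to sketch as infinite generators over the agent list. These are hypothetical simplified versions for illustration; the real implementations live in `next_speaker_utilities.py`:

```python
import random

def round_robin(agents):
    # default_turn_taking_generator-style: cycle in fixed order forever.
    while True:
        yield from agents

def random_inclusive(agents, rng=random):
    # random_inclusive_generator-style: reshuffle a fresh copy each round,
    # so every agent speaks once before anyone speaks twice.
    while True:
        order = list(agents)
        rng.shuffle(order)
        yield from order

g = round_robin(["A", "B", "C"])
rr = [next(g) for _ in range(5)]
print(rr)                   # ['A', 'B', 'C', 'A', 'B']

g2 = random_inclusive(["A", "B", "C"])
first_round = [next(g2) for _ in range(3)]
print(sorted(first_round))  # each agent appears exactly once: ['A', 'B', 'C']
```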
Pass via `next_speaker_generator`:

```python
from conversation.next_speaker_utilities import random_inclusive_generator

c = Conversation(agent_list=..., next_speaker_generator=random_inclusive_generator)
```

After a conversation runs, `to_results()` returns a standard EDSL `Results` object:
```python
results = c.to_results()
results.select("agent.agent_name", "answer.dialogue").print(format="rich")
```

Use `summarize()` to get a `Scenario` containing the full transcript, suitable for passing to follow-up EDSL survey questions:
```python
from edsl import QuestionYesNo, QuestionFreeText

summary = c.summarize()  # Scenario with keys: transcript, num_agents, max_turns, ...

q_deal = QuestionYesNo(
    question_text="Transcript: {{ transcript }}. Was a deal reached?",
    question_name="deal_reached",
)
q_price = QuestionFreeText(
    question_text="Transcript: {{ transcript }}. What was the final offer from each side?",
    question_name="price_summary",
)

analysis = q_deal.add_question(q_price).by(summary).run()
print(analysis.select("answer.deal_reached").to_list()[0])
print(analysis.select("answer.price_summary").to_list()[0])
```

For `ConversationList`, use `summarize()` to get a `ScenarioList` and run the same analysis across all conversations at once:

```python
analysis = q_deal.add_question(q_price).by(cl.summarize()).run()
```

See the `examples/` directory:
- `car_buying.py` — three-agent conversation (buyer, salesman, skeptical brother-in-law) run in parallel
- `mug_negotiation.py` — bilateral bargaining across multiple valuation pairs with post-hoc deal analysis
- `chips.py` — chip-trading negotiation using a custom `Agent` subclass with internal state
```
conversation/
├── Conversation.py              # Conversation, ConversationList, AgentStatement, AgentStatements
├── exceptions.py                # ConversationError, ConversationValueError, ConversationStateError
├── next_speaker_utilities.py    # Turn-taking generator functions
└── __init__.py
examples/
├── car_buying.py
├── mug_negotiation.py
└── chips.py
tests/
└── test_run_async.py
pyproject.toml
```