This repository was archived by the owner on Jun 5, 2025. It is now read-only.

Commit 09060f1

jhrozek authored and aponcedeleonch committed
Add input processing pipeline + codegate-version pipeline step
This adds pipeline processing before the completion is run, where the request can either be changed or short-circuited. The pipeline consists of steps; for now we implement a single step, `CodegateVersion`, which responds with the codegate version if the verbatim `codegate-version` string is found in the input. The pipeline also passes along a context; for now it is unused, but I thought this would be where we store extracted code snippets etc. To avoid import loops, we also move the `BaseCompletionHandler` class to a new `completion` package. Since the shortcut replies are more or less simple strings, we add yet another package, `providers/formatting`, whose responsibility is to convert the string returned by the shortcut response into the format expected by the client, meaning either a single reply or a stream of replies in the LLM-specific format. We use the `BaseCompletionHandler` as a way to convert to the LLM-specific format.
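The step-and-shortcut design described above can be sketched as follows. All class and method names here (`PipelineContext`, `PipelineResult`, `process`, `run_pipeline`) are hypothetical stand-ins for illustration, not codegate's actual API, and the version string is a placeholder:

```python
# Hypothetical sketch of a pipeline step that can short-circuit a completion.
from dataclasses import dataclass, field
from typing import Optional

CODEGATE_VERSION = "0.1.0"  # placeholder; the real code would look this up


@dataclass
class PipelineContext:
    # Currently unused; a place to later stash extracted code snippets etc.
    snippets: list = field(default_factory=list)


@dataclass
class PipelineResult:
    # A non-None response short-circuits the pipeline.
    response: Optional[str] = None


class CodegateVersion:
    """Reply with the codegate version when the trigger string is present."""

    def process(self, request: str, context: PipelineContext) -> PipelineResult:
        if "codegate-version" in request:
            return PipelineResult(response=f"codegate version: {CODEGATE_VERSION}")
        return PipelineResult()


def run_pipeline(request: str, steps) -> Optional[str]:
    """Run each step in order; the first step that answers wins."""
    context = PipelineContext()
    for step in steps:
        result = step.process(request, context)
        if result.response is not None:
            return result.response  # shortcut: skip the LLM completion entirely
    return None  # fall through to the normal completion
```

A shortcut reply like the string returned here is what the `providers/formatting` package would then convert into the client's expected reply or stream format.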
1 parent 1c73d91 commit 09060f1

1 file changed

Lines changed: 1 addition & 0 deletions

File tree

src/codegate/providers/litellmshim/generators.py

@@ -12,6 +12,7 @@ async def sse_stream_generator(stream: AsyncIterator[Any]) -> AsyncIterator[str]
     """OpenAI-style SSE format"""
     try:
         async for chunk in stream:
+            print(chunk)
             if isinstance(chunk, BaseModel):
                 # alternatively we might want to just dump the whole object
                 # this might even allow us to tighten the typing of the stream
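The hunk above adds a debug `print` inside codegate's `sse_stream_generator`. A minimal standalone sketch of the same OpenAI-style SSE pattern, with an assumed simplified body (real chunks would be pydantic models, serialized differently), might look like:

```python
# Sketch of an OpenAI-style SSE generator: each chunk becomes a `data:` line,
# terminated by a final `data: [DONE]` event. Simplified from the real code.
import asyncio
import json
from typing import Any, AsyncIterator


async def sse_stream_generator(stream: AsyncIterator[Any]) -> AsyncIterator[str]:
    """Wrap each upstream chunk in OpenAI-style SSE framing."""
    try:
        async for chunk in stream:
            if not isinstance(chunk, str):
                chunk = json.dumps(chunk)  # real code dumps pydantic models
            yield f"data: {chunk}\n\n"
    finally:
        yield "data: [DONE]\n\n"  # signal end-of-stream to the client


async def _demo():
    async def fake_stream():
        for part in ({"content": "hello"}, {"content": "world"}):
            yield part

    return [event async for event in sse_stream_generator(fake_stream())]


events = asyncio.run(_demo())
```

Because SSE is just framed text, a shortcut reply from the pipeline can reuse the same generator by feeding it a one-element stream.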
