17 changes: 17 additions & 0 deletions CHANGELOG.md
Original file line number Diff line number Diff line change
@@ -2,6 +2,23 @@

All notable changes to `uipath_llm_client` (core package) will be documented in this file.

## [1.8.0] - 2026-04-08

### Added
- `UiPathLiteLLM` — provider-agnostic LLM client powered by LiteLLM
- `completion` / `acompletion` for chat completions across all providers
- `embedding` / `aembedding` for embeddings
- Automatic model discovery from the UiPath backend — detects vendor, api_flavor, and model family
- Optional `vendor_type` and `api_flavor` overrides (same pattern as LangChain factory)
- Supports OpenAI (chat-completions + responses API), Gemini, Bedrock (invoke + converse), and Vertex AI Claude
- All HTTP routed through UiPath httpx transport (auth, retry, headers) — no direct calls to Google/AWS/OpenAI
- Explicit completion parameters with full IDE autocomplete
- `litellm` as an optional dependency (`uv add uipath-llm-client[litellm]`)
- `_strict_response_validation` parameter to all Anthropic client classes
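A rough usage sketch of the new client, based on the entries above (the model name and message shape are illustrative, and the import is guarded so the snippet is a no-op where the package or its `litellm` extra is absent):

```python
# Hedged sketch of UiPathLiteLLM usage, per the changelog entries above.
# Everything beyond the import path (model name, message shape) is
# illustrative, not a documented contract.
try:
    from uipath.llm_client.clients.litellm import UiPathLiteLLM
except ImportError:  # core package or its `litellm` extra not installed
    UiPathLiteLLM = None

if UiPathLiteLLM is not None:
    client = UiPathLiteLLM(model_name="gpt-5.2-2025-12-11")
    # Chat completion routed through the UiPath httpx transport:
    response = client.completion(
        messages=[{"role": "user", "content": "Hello!"}]
    )
```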

### Changed
- Updated dependency versions: `uipath-platform>=0.1.21`, `anthropic>=0.91.0`, `litellm>=1.83.4`

## [1.7.0] - 2026-04-03

### Added
13 changes: 7 additions & 6 deletions README.md
@@ -1,14 +1,13 @@
# UiPath LLM Client

A Python client for interacting with UiPath's LLM services. This package provides both a low-level HTTP client and framework-specific integrations (LangChain, LlamaIndex) for accessing LLMs through UiPath's infrastructure.
A Python client for interacting with UiPath's LLM services. This package provides both a low-level HTTP client and a LangChain integration for accessing LLMs through UiPath's infrastructure.

## Architecture Overview

This repository is organized as a monorepo with the following packages:

- **`uipath_llm_client`** (root): Core HTTP client with authentication, retry logic, and request handling
- **`uipath_langchain_client`** (packages/): LangChain-compatible chat models and embeddings
- **`uipath_llamaindex_client`** (packages/): LlamaIndex-compatible integrations

### Supported Backends

@@ -816,7 +815,7 @@ uv run pyright

### Testing

Tests use [VCR.py](https://vcrpy.readthedocs.io/) to record and replay HTTP interactions. Cassettes (recorded responses) are stored in `tests/cassettes/` using Git LFS.
Tests use [VCR.py](https://vcrpy.readthedocs.io/) to record and replay HTTP interactions. Cassettes (recorded responses) are stored in `tests/cassettes.db` (SQLite) via `pytest-recording`.

**Important:** Tests must pass locally before submitting a PR. The CI pipeline does not make any real API requests—it only runs tests using the pre-recorded cassettes.

@@ -855,7 +854,9 @@ uipath-llm-client/
│ ├── clients/ # Native SDK wrappers
│ │ ├── openai/ # UiPathOpenAI, UiPathAzureOpenAI, etc.
│ │ ├── anthropic/ # UiPathAnthropic, UiPathAnthropicBedrock, etc.
│ │ └── google/ # UiPathGoogle
│ │ ├── google/ # UiPathGoogle
│ │ ├── normalized/ # UiPathNormalizedClient (provider-agnostic)
│ │ └── litellm/ # UiPathLiteLLM (via LiteLLM)
│ ├── settings/ # Backend-specific settings & auth
│ │ ├── base.py # UiPathBaseSettings, UiPathAPIConfig
│ │ ├── platform/ # PlatformSettings, PlatformAuth
@@ -878,9 +879,9 @@
│ │ ├── vertexai/ # UiPathChatAnthropicVertex
│ │ ├── bedrock/ # UiPathChatBedrock, UiPathChatBedrockConverse
│ │ ├── fireworks/ # UiPathChatFireworks, UiPathFireworksEmbeddings
│ │ ├── litellm/ # UiPathChatLiteLLM, UiPathLiteLLMEmbeddings
│ │ └── azure/ # UiPathAzureAIChatCompletionsModel
│ └── uipath_llamaindex_client/ # LlamaIndex integration (planned)
└── tests/ # Test suite with VCR cassettes
└── tests/ # Test suite with VCR cassettes (SQLite)
```

## License
10 changes: 10 additions & 0 deletions packages/uipath_langchain_client/CHANGELOG.md
@@ -2,6 +2,16 @@

All notable changes to `uipath_langchain_client` will be documented in this file.

## [1.8.0] - 2026-04-08

### Added
- `UiPathChatLiteLLM` — LangChain chat model powered by LiteLLM, supporting all UiPath gateway providers
- `langchain-litellm` as an optional dependency for LiteLLM integration

### Changed
- Updated dependency versions: `anthropic[bedrock,vertex]>=0.91.0`
- Version bump to match core package 1.8.0

## [1.7.1] - 2026-04-04

### Added
17 changes: 10 additions & 7 deletions packages/uipath_langchain_client/pyproject.toml
@@ -5,35 +5,38 @@ dynamic = ["version"]
readme = "README.md"
requires-python = ">=3.11"
dependencies = [
"langchain>=1.2.13",
"uipath-llm-client>=1.7.0",
"langchain>=1.2.15",
"uipath-llm-client>=1.8.0",
]

[project.optional-dependencies]
openai = [
"langchain-openai>=1.1.11",
"langchain-openai>=1.1.12",
]
google = [
"langchain-google-genai>=4.2.1",
]
anthropic = [
"langchain-anthropic>=1.4.0",
"anthropic[bedrock,vertex]>=0.86.0",
"anthropic[bedrock,vertex]>=0.91.0",
]
aws = [
"langchain-aws[anthropic]>=1.4.1",
"langchain-aws[anthropic]>=1.4.3",
]
vertexai = [
"langchain-google-vertexai>=3.2.2",
]
azure = [
"langchain-azure-ai>=1.1.1",
"langchain-azure-ai>=1.2.1",
]
fireworks = [
"langchain-fireworks>=1.1.0",
]
litellm = [
"langchain-litellm>=0.6.4",
]
all = [
"uipath-langchain-client[openai,aws,google,anthropic,azure,vertexai,fireworks]"
"uipath-langchain-client[openai,aws,google,anthropic,azure,vertexai,fireworks,litellm]"
]

[build-system]
@@ -1,3 +1,3 @@
__title__ = "UiPath LangChain Client"
__description__ = "A Python client for interacting with UiPath's LLM services via LangChain."
__version__ = "1.7.1"
__version__ = "1.8.0"
@@ -1,3 +1,5 @@
"""LangChain-compatible LLM client wrappers for UiPath providers."""

from uipath_langchain_client.clients.normalized import UiPathChat, UiPathEmbeddings

__all__ = [
@@ -1,3 +1,10 @@
"""LangChain chat model for Anthropic Claude via UiPath.

Requires the ``anthropic`` optional extra::

uv add uipath-langchain-client[anthropic]
"""

from uipath_langchain_client.clients.anthropic.chat_models import UiPathChatAnthropic

__all__ = ["UiPathChatAnthropic"]
@@ -1,3 +1,10 @@
"""LangChain chat models and embeddings for Azure AI via UiPath.

Requires the ``azure`` optional extra::

uv add uipath-langchain-client[azure]
"""

from uipath_langchain_client.clients.azure.chat_models import UiPathAzureAIChatCompletionsModel
from uipath_langchain_client.clients.azure.embeddings import UiPathAzureAIEmbeddingsModel

@@ -1,3 +1,10 @@
"""LangChain chat models and embeddings for AWS Bedrock via UiPath.

Requires the ``aws`` optional extra::

uv add uipath-langchain-client[aws]
"""

from uipath_langchain_client.clients.bedrock.chat_models import (
UiPathChatAnthropicBedrock,
UiPathChatBedrock,
@@ -1,3 +1,10 @@
"""LangChain chat models and embeddings for Fireworks AI via UiPath.

Requires the ``fireworks`` optional extra::

uv add uipath-langchain-client[fireworks]
"""

from uipath_langchain_client.clients.fireworks.chat_models import UiPathChatFireworks
from uipath_langchain_client.clients.fireworks.embeddings import UiPathFireworksEmbeddings

@@ -1,3 +1,10 @@
"""LangChain chat models and embeddings for Google Gemini via UiPath.

Requires the ``google`` optional extra::

uv add uipath-langchain-client[google]
"""

from uipath_langchain_client.clients.google.chat_models import UiPathChatGoogleGenerativeAI
from uipath_langchain_client.clients.google.embeddings import UiPathGoogleGenerativeAIEmbeddings

@@ -0,0 +1,11 @@
"""UiPath LangChain LiteLLM chat model and embeddings.

Requires the ``litellm`` optional extra::

uv add uipath-langchain-client[litellm]
"""

from uipath_langchain_client.clients.litellm.chat_models import UiPathChatLiteLLM
from uipath_langchain_client.clients.litellm.embeddings import UiPathLiteLLMEmbeddings

__all__ = ["UiPathChatLiteLLM", "UiPathLiteLLMEmbeddings"]
@@ -0,0 +1,144 @@
# pyright: reportAttributeAccessIssue=false
"""UiPath LangChain chat model powered by LiteLLM.

Inherits ``UiPathBaseChatModel`` and ``ChatLiteLLM`` — UiPath handles
authentication, routing, retries; LiteLLM handles provider-specific formatting.

Example:
>>> from uipath_langchain_client.clients.litellm import UiPathChatLiteLLM
>>>
>>> chat = UiPathChatLiteLLM(model="gpt-5.2-2025-12-11")
>>> response = chat.invoke("Hello!")
>>> print(response.content)
"""

from typing import Any, Dict, Optional

from langchain_core.callbacks import (
AsyncCallbackManagerForLLMRun,
CallbackManagerForLLMRun,
)
from pydantic import Field, model_validator
from typing_extensions import Self

from uipath.llm_client.clients.litellm import UiPathLiteLLM
from uipath.llm_client.settings.constants import ApiFlavor, ApiType, RoutingMode, VendorType
from uipath_langchain_client.base_client import UiPathBaseChatModel
from uipath_langchain_client.settings import UiPathAPIConfig

try:
from langchain_litellm import ChatLiteLLM
except ImportError as e:
raise ImportError(
"The 'litellm' extra is required to use UiPathChatLiteLLM. "
"Install it with: uv add uipath-langchain-client[litellm]"
) from e

# Keys that the core UiPathLiteLLM handles internally — strip them
# from ChatLiteLLM's kwargs before delegating to the core client.
_CORE_HANDLED_KEYS = frozenset(
{
"model",
"api_base",
"api_key",
"custom_llm_provider",
"client",
"async_client",
"num_retries",
"max_retries",
"num_ctx",
"base_model",
}
)


class UiPathChatLiteLLM(UiPathBaseChatModel, ChatLiteLLM): # type: ignore[override]
"""LangChain chat model that routes through UiPath LLM Gateway via LiteLLM.

Discovers the model from the UiPath backend and uses LiteLLM for
provider-specific request formatting. Authentication, URL routing,
and retries are handled by the UiPath httpx client.

Args:
model: The model name (e.g., "gpt-5.2-2025-12-11", "gemini-2.5-flash").
settings: UiPath client settings. Defaults to environment-based settings.
vendor_type: Filter/override vendor type from discovery.
api_flavor: Override API flavor (e.g., ApiFlavor.RESPONSES, ApiFlavor.CONVERSE).
"""

api_config: UiPathAPIConfig = Field(
default_factory=lambda: UiPathAPIConfig(
routing_mode=RoutingMode.PASSTHROUGH,
vendor_type=VendorType.OPENAI, # placeholder — overridden by _setup_uipath
api_type=ApiType.COMPLETIONS,
freeze_base_url=True,
),
)

vendor_type: VendorType | str | None = Field(default=None, exclude=True)
api_flavor: ApiFlavor | str | None = Field(default=None, exclude=True)

# Internal core client — handles discovery, HTTPHandler lifecycle, provider resolution
_core: UiPathLiteLLM | None = None

@model_validator(mode="after")
def _setup_uipath(self) -> Self:
"""Create a UiPathLiteLLM core client and configure ChatLiteLLM fields."""
core = UiPathLiteLLM(
model_name=self.model_name,
byo_connection_id=self.byo_connection_id,
client_settings=self.client_settings,
vendor_type=self.vendor_type,
api_flavor=self.api_flavor,
timeout=self.request_timeout,
max_retries=self.max_retries,
default_headers=self.default_headers,
captured_headers=self.captured_headers,
retry_config=self.retry_config,
logger=self.logger,
)
self._core = core

# Sync api_config from discovery for UiPathBaseLLMClient httpx clients
self.api_config = UiPathAPIConfig(
vendor_type=core._api_config.vendor_type,
api_flavor=core._api_config.api_flavor,
)

# Set ChatLiteLLM fields so _default_params builds correct kwargs
self.custom_llm_provider = core._custom_llm_provider
self.api_key = "PLACEHOLDER"
self.api_base = str(core._completion_client.client.base_url)
self.model_name = core._litellm_model # type: ignore[assignment]

return self

def completion_with_retry(
self, run_manager: Optional[CallbackManagerForLLMRun] = None, **kwargs: Any
) -> Any:
"""Delegate to the core UiPathLiteLLM.completion()."""
assert self._core is not None
for key in _CORE_HANDLED_KEYS:
kwargs.pop(key, None)
return self._core.completion(**kwargs)

async def acompletion_with_retry(
self, run_manager: Optional[AsyncCallbackManagerForLLMRun] = None, **kwargs: Any
) -> Any:
"""Delegate to the core UiPathLiteLLM.acompletion()."""
assert self._core is not None
for key in _CORE_HANDLED_KEYS:
kwargs.pop(key, None)
return await self._core.acompletion(**kwargs)

@property
def _llm_type(self) -> str:
return "uipath-litellm"

@property
def _identifying_params(self) -> Dict[str, Any]:
return {
"model": self.model_name,
"provider": self._core._custom_llm_provider if self._core else None,
"api_base": self.api_base,
}
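The delegation pattern in `completion_with_retry`, where keys the core client manages are stripped before the remaining kwargs are forwarded, can be sketched in isolation (the names below are illustrative stand-ins, not the package's API):

```python
# Stand-alone sketch of the kwarg-stripping delegation above: keys the
# core client owns are dropped, everything else is forwarded untouched.
CORE_HANDLED_KEYS = frozenset({"model", "api_base", "api_key", "max_retries"})

def delegate(core_completion, **kwargs):
    """Drop core-managed keys, then forward the remaining kwargs."""
    for key in CORE_HANDLED_KEYS:
        kwargs.pop(key, None)  # absent keys are ignored
    return core_completion(**kwargs)

# A stand-in "core client" that simply echoes what it receives:
received = delegate(lambda **kw: kw, model="gpt-x", temperature=0.2, max_retries=3)
print(received)  # → {'temperature': 0.2}
```

Popping with a default (`kwargs.pop(key, None)`) keeps the strip loop safe when a caller never supplied a core-managed key in the first place.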