Rendezvous hashing filesystem cache in (s3/etc)Cluster functions #82511

Merged
nickitat merged 9 commits into ClickHouse:master from ianton-ru:rendezvous-hashing-filesystem-cache
Aug 13, 2025

Conversation

@ianton-ru
Contributor

Changelog category (leave one):

  • Improvement

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

Rendezvous hashing to improve cache locality.

Documentation entry for user-facing changes

Improvement of #77326.
In the original PR, the "primary" replica for an object is selected based on the node's number in the cluster:

ConsistentHashing(sipHash64(file_path), connection_to_files.size());

With this logic the distribution across nodes stays consistent when a new node gets the maximal number, or when the node with the maximal number is removed. But a node's number is not tied to the node itself: a new node can be inserted in the middle of the list, in which case all nodes after it change their numbers. For an example, see Cluster::Cluster(Cluster::ReplicasAsShardsTag, ...).
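To illustrate the problem, here is a stand-alone sketch. It uses Lamport and Veach's jump consistent hash as a stand-in for the ConsistentHashing call quoted above, and crc32 in place of sipHash64; the node names and file paths are made up. The bucket index a file maps to is stable, but once a node is inserted in the middle of the list, that index resolves to a different host, so far more files change their "primary" node than the ideal 1/N:

```python
import zlib

def jump_consistent_hash(key, num_buckets):
    # Lamport & Veach's jump consistent hash; stand-in for the
    # ConsistentHashing(sipHash64(...), N) call quoted above.
    b, j = -1, 0
    while j < num_buckets:
        b = j
        key = (key * 2862933555777941757 + 1) & 0xFFFFFFFFFFFFFFFF
        j = int((b + 1) * ((1 << 31) / ((key >> 33) + 1)))
    return b

def primary(nodes, path):
    # Assignment is by node *index*, not by node identity.
    return nodes[jump_consistent_hash(zlib.crc32(path.encode()), len(nodes))]

files = [f"data/file_{i}.parquet" for i in range(1000)]
old_nodes = ["a:9000", "c:9000", "d:9000"]
new_nodes = ["a:9000", "b:9000", "c:9000", "d:9000"]  # "b" inserted in the middle
# Far more files move than the ~1/4 that would ideally go to the new node,
# because every index at or after the insertion point now means a different host.
moved = sum(primary(old_nodes, f) != primary(new_nodes, f) for f in files)
```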

This PR makes the distribution more consistent in this case: nodes are selected based on their host:port using the rendezvous hashing algorithm.

  • Documentation is written (mandatory for new features)
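The fix can be sketched as follows. This is a minimal Python illustration, not the actual C++ implementation (which hashes host:port together with the file path via sipHash64); sha256 here is just a stand-in hash, and the node names are made up:

```python
import hashlib

def rendezvous_pick(node_ids, file_path):
    """Highest-random-weight (rendezvous) hashing: the primary node for a
    file is the node whose hash, combined with the file path, is largest."""
    def weight(node_id):
        digest = hashlib.sha256(f"{node_id}|{file_path}".encode()).digest()
        return int.from_bytes(digest[:8], "big")
    return max(node_ids, key=weight)

nodes = ["host1:9000", "host2:9000", "host3:9000"]
files = [f"data/file_{i}.parquet" for i in range(1000)]
before = {f: rendezvous_pick(nodes, f) for f in files}
# Drop one node: only files whose primary WAS that node get a new primary;
# every other file keeps its assignment, regardless of list order.
survivors = [n for n in nodes if n != "host2:9000"]
after = {f: rendezvous_pick(survivors, f) for f in files}
moved = [f for f in files if before[f] != after[f]]
```

Because the weight depends only on the (node id, file path) pair, inserting or removing a node never disturbs the relative weights of the remaining nodes, which is exactly the property the index-based scheme lacks.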

@kssenii kssenii changed the title Rendezvous hashing filesystem cache Rendezvous hashing filesystem cache in (s3/etc)Cluster functions Jun 24, 2025
@bharatnc bharatnc added the can be tested Allows running workflows for external contributors label Jun 24, 2025
@clickhouse-gh
Contributor

clickhouse-gh bot commented Jun 24, 2025

Workflow [PR], commit [0e3b8ef]

Summary:

job_name:  Stateless tests (amd_debug, parallel)
test_name: 01730_distributed_group_by_no_merge_order_by_long
status:    FAIL

@clickhouse-gh clickhouse-gh bot added the pr-improvement Pull request with some product improvements label Jun 24, 2025
@nickitat nickitat self-assigned this Jun 24, 2025
Comment on lines +46 to +47
for obj in minio.list_objects(cluster.minio_bucket, recursive=True):
print(obj.object_name)
Member

let's remove

try:
    cluster = ClickHouseCluster(__file__)
    # clickhouse0 not a member of cluster_XXX
    for i in range(6):
Member

What's the purpose of adding it here in the code vs. doing it via the config?

Contributor Author

You mean a config like this one?
I think this loop is simpler than a loop over a list like ["clickhouse0", "clickhouse1", ... , "clickhouse5"].

return int(s3_get_first), int(s3_get_second)


def check_s3_gets_repeat(cluster, node, expected_result, cluster_first, cluster_second, enable_filesystem_cache):
Member

This test looks too heavy (duration-wise):

https://s3.amazonaws.com/clickhouse-test-reports/json.html?PR=82511&sha=d0a0f1aef82a4345983a9447aaa5f0c73566f88e&name_0=PR&name_1=Integration+tests+%28asan%2C+flaky+check%29

and a little too complex. We want to test that more or less the same files will be scheduled on each node. So, maybe let's just run some queries with different cluster configurations and then check that each replica has ~const * number_of_files / number_of_replicas + eps files in its cache.
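A check along the lines suggested here might look roughly like this (hypothetical helper; the tolerance value is illustrative and plays the role of the "eps" above):

```python
def is_balanced(counts, total_files, tolerance=0.25):
    """Check each replica's cache holds roughly total_files / len(counts)
    files, within a relative tolerance."""
    expected = total_files / len(counts)
    return all(abs(c - expected) <= tolerance * expected for c in counts)
```

Called with the per-replica cached-file counts after running the queries, e.g. `is_balanced([34, 33, 33], 100)` passes while `is_balanced([90, 5, 5], 100)` fails.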

Contributor Author
@ianton-ru Jul 8, 2025

It's a complex thing.
The base logic in StorageObjectStorageStableTaskDistributor (which existed before this PR; I did not touch this part) is:

  • calculate the "primary" replica for each file, depending on the file path and the replica info (replica number or replica address);
  • but if some replica finishes processing all of its "own" files early, it picks up unprocessed files whose "primary" is another replica.

As a result, files at the head of the list have a near-100% chance of being processed on their "primary" replicas, while files at the tail have a high chance of being caught by "non-primary" replicas. I can't control the speed of execution, and in a single run these tails can be randomly large, so the test may fail. That's why I wrote the test with several runs and an averaged result, to minimize the chance of this kind of random failure.

A possible way to make the distribution predictable would be to remove the ability to process a file on a "non-primary" replica, but that is definitely not for production: if some replica is overloaded or lost, the query would fail with a timeout.

It might be possible to use something like FailPoints, but without an exception, to turn this off for test purposes only; I'm not sure that would be the correct approach though.
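The two-step scheme described above (primary assignment plus work stealing) can be sketched as a toy simulation. All names here are illustrative, as is the round-robin speed model; this is not the real C++ StorageObjectStorageStableTaskDistributor:

```python
from collections import deque

def distribute(files, replicas, pick_primary, speed):
    """Toy model: queue each file on its primary replica; a replica that
    drains its own queue steals files still unprocessed on other replicas."""
    queues = {r: deque(f for f in files if pick_primary(replicas, f) == r)
              for r in replicas}
    processed = {r: [] for r in replicas}
    while any(queues.values()):
        for r in replicas:
            # Faster replicas take more files per round.
            for _ in range(speed.get(r, 1)):
                # Work stealing: if my queue is empty, raid the longest one.
                source = r if queues[r] else max(queues, key=lambda x: len(queues[x]))
                if queues[source]:
                    processed[r].append(queues[source].popleft())
    return processed

replicas = ["r1", "r2"]
files = list(range(20))
pick = lambda reps, f: reps[f % len(reps)]       # stand-in primary selection
result = distribute(files, replicas, pick, {"r1": 3, "r2": 1})  # r1 is 3x faster
```

With unequal speeds, the fast replica ends up processing files whose primary was the slow one, which is exactly the tail effect that makes the per-replica counts unpredictable in a single run.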

Member

Could we test this logic with a unit test?

Member

Pls continue this pr, it makes a lot of sense

Contributor Author

@nickitat Sorry, summer vacation time. Added some unit tests in src/Storages/ObjectStorage/tests/gtest_rendezvous_hashing.cpp

Member

Do you think we still need to have tests/integration/test_s3_cache_locality/test.py?

Contributor Author

Ok, let's remove it.

      : iterator(std::move(iterator_))
-     , connection_to_files(number_of_replicas_)
+     , connection_to_files(ids_of_nodes_.size())
+     , ids_of_nodes(ids_of_nodes_)
Member
move pls

@ianton-ru
Contributor Author

Stateless test 02443_detach_attach_partition failed:
https://s3.amazonaws.com/clickhouse-test-reports/json.html?PR=82511&sha=937e345af636b394ed2544d363bd673cf8d2122c&name_0=PR
Looks unrelated to this PR.

2025-08-11 13:00:34 Reason: having stderror:  
2025-08-11 13:00:34 [c2bb8f811880] 2025.08.11 10:00:25.946019 [ 400896 ] {9029162d-5af0-40dd-a397-25f2f3448158} <Error> void DB::MetadataOperationsHolder::commitImpl(const TransactionCommitOptionsVariant &, SharedMutex &): Code: 521. DB::ErrnoException: Cannot rename /home/ubuntu/actions-runner/_work/ClickHouse/ClickHouse/ci/tmp/run_r0/disks/s3/cached_s3_encrypted/store/3fa/3fa93329-98ee-45ef-8c07-ac0a16dedb08/detached/all_35_35_0 to /home/ubuntu/actions-runner/_work/ClickHouse/ClickHouse/ci/tmp/run_r0/disks/s3/cached_s3_encrypted/store/3fa/3fa93329-98ee-45ef-8c07-ac0a16dedb08/detached/attaching_all_35_35_0 because the second path already exists: , errno: 17, strerror: File exists. (ATOMIC_RENAME_FAIL), Stack trace (when copying this message, always include the lines below):
2025-08-11 13:00:34 
2025-08-11 13:00:34 0. ./contrib/llvm-project/libcxx/include/__exception/exception.h:113: Poco::Exception::Exception(String const&, int) @ 0x000000001f3e01f2
2025-08-11 13:00:34 1. ./ci/tmp/build/./src/Common/Exception.cpp:119: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000f93b79e
2025-08-11 13:00:34 2. DB::Exception::Exception(String&&, int, String, bool) @ 0x0000000008ef87ce
2025-08-11 13:00:34 3. DB::Exception::Exception(PreformattedMessage&&, int) @ 0x0000000008ef8100
2025-08-11 13:00:34 4. ./src/Common/Exception.h:238: DB::ErrnoException::ErrnoException<String const&, String const&>(int, FormatStringHelperImpl<std::type_identity<String const&>::type, std::type_identity<String const&>::type>, String const&, String const&) @ 0x0000000014b520f3
2025-08-11 13:00:34 5. ./ci/tmp/build/./src/Common/atomicRename.cpp:94: DB::renameat2(String const&, String const&, int) @ 0x0000000014b51eb8
2025-08-11 13:00:34 6. ./ci/tmp/build/./src/Common/atomicRename.cpp:227: DB::renameNoReplace(String const&, String const&) @ 0x0000000014b51a5b
2025-08-11 13:00:34 7. ./ci/tmp/build/./src/Disks/DiskLocal.cpp:318: DB::DiskLocal::moveFile(String const&, String const&) @ 0x0000000014d29491
2025-08-11 13:00:34 8. ./ci/tmp/build/./src/Disks/ObjectStorages/MetadataOperationsHolder.cpp:65: DB::MetadataOperationsHolder::commitImpl(std::variant<std::monostate, DB::MetaInKeeperCommitOptions<zkutil::ZooKeeper>, DB::MetaInKeeperCommitOptions<DB::ZooKeeperWithFaultInjection>> const&, DB::SharedMutex&) @ 0x0000000014dbc8dc
2025-08-11 13:00:34 9. ./ci/tmp/build/./src/Disks/ObjectStorages/DiskObjectStorageTransaction.cpp:1096: DB::DiskObjectStorageTransaction::commit(std::variant<std::monostate, DB::MetaInKeeperCommitOptions<zkutil::ZooKeeper>, DB::MetaInKeeperCommitOptions<DB::ZooKeeperWithFaultInjection>> const&) @ 0x0000000014d868ba
2025-08-11 13:00:34 10. ./ci/tmp/build/./src/Disks/ObjectStorages/DiskObjectStorage.cpp:240: DB::DiskObjectStorage::moveFile(String const&, String const&) @ 0x0000000014d7258f
2025-08-11 13:00:34 11. ./src/Disks/DiskEncryptedTransaction.h:102: DB::DiskEncryptedTransaction::moveFile(String const&, String const&) @ 0x0000000014d69ce7
2025-08-11 13:00:34 12. ./src/Disks/DiskEncrypted.h:99: DB::DiskEncrypted::moveFile(String const&, String const&) @ 0x0000000014d617eb
2025-08-11 13:00:34 13. ./ci/tmp/build/./src/Storages/MergeTree/MergeTreeData.cpp:4508: DB::MergeTreeData::PartsTemporaryRename::tryRenameAll() @ 0x0000000018db0540
2025-08-11 13:00:34 14. ./ci/tmp/build/./src/Storages/MergeTree/MergeTreeData.cpp:7264: DB::MergeTreeData::tryLoadPartsToAttach(std::shared_ptr<DB::IAST> const&, bool, std::shared_ptr<DB::Context const>, DB::MergeTreeData::PartsTemporaryRename&) @ 0x0000000018dda7d7
2025-08-11 13:00:34 15. ./ci/tmp/build/./src/Storages/StorageReplicatedMergeTree.cpp:6995: DB::StorageReplicatedMergeTree::attachPartition(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, bool, std::shared_ptr<DB::Context const>) @ 0x000000001889771a
2025-08-11 13:00:34 16. ./ci/tmp/build/./src/Storages/MergeTree/MergeTreeData.cpp:6130: DB::MergeTreeData::alterPartition(std::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::vector<DB::PartitionCommand, std::allocator<DB::PartitionCommand>> const&, std::shared_ptr<DB::Context const>) @ 0x0000000018dca8b0
2025-08-11 13:00:34 17. ./ci/tmp/build/./src/Interpreters/InterpreterAlterQuery.cpp:280: DB::InterpreterAlterQuery::executeToTable(DB::ASTAlterQuery const&) @ 0x00000000159163df
2025-08-11 13:00:34 18. ./ci/tmp/build/./src/Interpreters/InterpreterAlterQuery.cpp:82: DB::InterpreterAlterQuery::execute() @ 0x0000000015913a8d
2025-08-11 13:00:34 19. ./ci/tmp/build/./src/Interpreters/executeQuery.cpp:1561: DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum, std::unique_ptr<DB::ReadBuffer, std::default_delete<DB::ReadBuffer>>&, std::shared_ptr<DB::IAST>&, std::shared_ptr<DB::ImplicitTransactionControlExecutor>) @ 0x0000000015cf4c2e
2025-08-11 13:00:34 20. ./ci/tmp/build/./src/Interpreters/executeQuery.cpp:1770: DB::executeQuery(String const&, std::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x0000000015cf036e
2025-08-11 13:00:34 21. ./ci/tmp/build/./src/Server/TCPHandler.cpp:739: DB::TCPHandler::runImpl() @ 0x0000000019468aaa
2025-08-11 13:00:34 22. ./ci/tmp/build/./src/Server/TCPHandler.cpp:2740: DB::TCPHandler::run() @ 0x0000000019485196
2025-08-11 13:00:34 23. ./ci/tmp/build/./base/poco/Net/src/TCPServerConnection.cpp:40: Poco::Net::TCPServerConnection::start() @ 0x000000001f49b207
2025-08-11 13:00:34 24. ./ci/tmp/build/./base/poco/Net/src/TCPServerDispatcher.cpp:115: Poco::Net::TCPServerDispatcher::run() @ 0x000000001f49b7be
2025-08-11 13:00:34 25. ./ci/tmp/build/./base/poco/Foundation/src/ThreadPool.cpp:205: Poco::PooledThread::run() @ 0x000000001f43a9ff
2025-08-11 13:00:34 26. ./base/poco/Foundation/src/Thread_POSIX.cpp:341: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000001f43800f
2025-08-11 13:00:34 27. ? @ 0x0000000000094ac3
2025-08-11 13:00:34 28. ? @ 0x0000000000126850
2025-08-11 13:00:34  (version 25.8.1.1)
2025-08-11 13:00:34 [c2bb8f811880] 2025.08.11 10:00:25.946180 [ 400896 ] {9029162d-5af0-40dd-a397-25f2f3448158} <Warning> test_flwmfxmj.alter_table1 (3fa93329-98ee-45ef-8c07-ac0a16dedb08): Cannot rename parts to perform operation on them: Code: 521. DB::ErrnoException: Cannot rename /home/ubuntu/actions-runner/_work/ClickHouse/ClickHouse/ci/tmp/run_r0/disks/s3/cached_s3_encrypted/store/3fa/3fa93329-98ee-45ef-8c07-ac0a16dedb08/detached/attaching_all_35_35_0 to /home/ubuntu/actions-runner/_work/ClickHouse/ClickHouse/ci/tmp/run_r0/disks/s3/cached_s3_encrypted/store/3fa/3fa93329-98ee-45ef-8c07-ac0a16dedb08/detached/all_35_35_0 because the second path already exists: , errno: 17, strerror: File exists: While rolling back operation #0. (ATOMIC_RENAME_FAIL) (version 25.8.1.1)
2025-08-11 13:00:34 [c2bb8f811880] 2025.08.11 10:00:25.946330 [ 400896 ] {9029162d-5af0-40dd-a397-25f2f3448158} <Error> executeQuery: Code: 521. DB::ErrnoException: Cannot rename /home/ubuntu/actions-runner/_work/ClickHouse/ClickHouse/ci/tmp/run_r0/disks/s3/cached_s3_encrypted/store/3fa/3fa93329-98ee-45ef-8c07-ac0a16dedb08/detached/attaching_all_35_35_0 to /home/ubuntu/actions-runner/_work/ClickHouse/ClickHouse/ci/tmp/run_r0/disks/s3/cached_s3_encrypted/store/3fa/3fa93329-98ee-45ef-8c07-ac0a16dedb08/detached/all_35_35_0 because the second path already exists: , errno: 17, strerror: File exists: While rolling back operation #0. (ATOMIC_RENAME_FAIL) (version 25.8.1.1) (from [::1]:60604) (comment: 02443_detach_attach_partition.sh) (query 1, line 1) (in query: ALTER TABLE alter_table1 ATTACH PARTITION ID 'all'), Stack trace (when copying this message, always include the lines below):
2025-08-11 13:00:34 
2025-08-11 13:00:34 0. ./contrib/llvm-project/libcxx/include/__exception/exception.h:113: Poco::Exception::Exception(String const&, int) @ 0x000000001f3e01f2
2025-08-11 13:00:34 1. ./ci/tmp/build/./src/Common/Exception.cpp:119: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000f93b79e
2025-08-11 13:00:34 2. DB::Exception::Exception(String&&, int, String, bool) @ 0x0000000008ef87ce
2025-08-11 13:00:34 3. DB::Exception::Exception(PreformattedMessage&&, int) @ 0x0000000008ef8100
2025-08-11 13:00:34 4. ./src/Common/Exception.h:238: DB::ErrnoException::ErrnoException<String const&, String const&>(int, FormatStringHelperImpl<std::type_identity<String const&>::type, std::type_identity<String const&>::type>, String const&, String const&) @ 0x0000000014b520f3
2025-08-11 13:00:34 5. ./ci/tmp/build/./src/Common/atomicRename.cpp:94: DB::renameat2(String const&, String const&, int) @ 0x0000000014b51eb8
2025-08-11 13:00:34 6. ./ci/tmp/build/./src/Common/atomicRename.cpp:227: DB::renameNoReplace(String const&, String const&) @ 0x0000000014b51a5b
2025-08-11 13:00:34 7. ./ci/tmp/build/./src/Disks/DiskLocal.cpp:318: DB::DiskLocal::moveFile(String const&, String const&) @ 0x0000000014d29491
2025-08-11 13:00:34 8. ./ci/tmp/build/./src/Disks/ObjectStorages/MetadataOperationsHolder.cpp:22: DB::MetadataOperationsHolder::rollback(std::unique_lock<DB::SharedMutex>&, unsigned long) @ 0x0000000014dbc540
2025-08-11 13:00:34 9. ./ci/tmp/build/./src/Disks/ObjectStorages/MetadataOperationsHolder.cpp:73: DB::MetadataOperationsHolder::commitImpl(std::variant<std::monostate, DB::MetaInKeeperCommitOptions<zkutil::ZooKeeper>, DB::MetaInKeeperCommitOptions<DB::ZooKeeperWithFaultInjection>> const&, DB::SharedMutex&) @ 0x0000000014dbcc03
2025-08-11 13:00:34 10. ./ci/tmp/build/./src/Disks/ObjectStorages/DiskObjectStorageTransaction.cpp:1096: DB::DiskObjectStorageTransaction::commit(std::variant<std::monostate, DB::MetaInKeeperCommitOptions<zkutil::ZooKeeper>, DB::MetaInKeeperCommitOptions<DB::ZooKeeperWithFaultInjection>> const&) @ 0x0000000014d868ba
2025-08-11 13:00:34 11. ./ci/tmp/build/./src/Disks/ObjectStorages/DiskObjectStorage.cpp:240: DB::DiskObjectStorage::moveFile(String const&, String const&) @ 0x0000000014d7258f
2025-08-11 13:00:34 12. ./src/Disks/DiskEncryptedTransaction.h:102: DB::DiskEncryptedTransaction::moveFile(String const&, String const&) @ 0x0000000014d69ce7
2025-08-11 13:00:34 13. ./src/Disks/DiskEncrypted.h:99: DB::DiskEncrypted::moveFile(String const&, String const&) @ 0x0000000014d617eb
2025-08-11 13:00:34 14. ./ci/tmp/build/./src/Storages/MergeTree/MergeTreeData.cpp:4508: DB::MergeTreeData::PartsTemporaryRename::tryRenameAll() @ 0x0000000018db0540
2025-08-11 13:00:34 15. ./ci/tmp/build/./src/Storages/MergeTree/MergeTreeData.cpp:7264: DB::MergeTreeData::tryLoadPartsToAttach(std::shared_ptr<DB::IAST> const&, bool, std::shared_ptr<DB::Context const>, DB::MergeTreeData::PartsTemporaryRename&) @ 0x0000000018dda7d7
2025-08-11 13:00:34 16. ./ci/tmp/build/./src/Storages/StorageReplicatedMergeTree.cpp:6995: DB::StorageReplicatedMergeTree::attachPartition(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, bool, std::shared_ptr<DB::Context const>) @ 0x000000001889771a
2025-08-11 13:00:34 17. ./ci/tmp/build/./src/Storages/MergeTree/MergeTreeData.cpp:6130: DB::MergeTreeData::alterPartition(std::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::vector<DB::PartitionCommand, std::allocator<DB::PartitionCommand>> const&, std::shared_ptr<DB::Context const>) @ 0x0000000018dca8b0
2025-08-11 13:00:34 18. ./ci/tmp/build/./src/Interpreters/InterpreterAlterQuery.cpp:280: DB::InterpreterAlterQuery::executeToTable(DB::ASTAlterQuery const&) @ 0x00000000159163df
2025-08-11 13:00:34 19. ./ci/tmp/build/./src/Interpreters/InterpreterAlterQuery.cpp:82: DB::InterpreterAlterQuery::execute() @ 0x0000000015913a8d
2025-08-11 13:00:34 20. ./ci/tmp/build/./src/Interpreters/executeQuery.cpp:1561: DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum, std::unique_ptr<DB::ReadBuffer, std::default_delete<DB::ReadBuffer>>&, std::shared_ptr<DB::IAST>&, std::shared_ptr<DB::ImplicitTransactionControlExecutor>) @ 0x0000000015cf4c2e
2025-08-11 13:00:34 21. ./ci/tmp/build/./src/Interpreters/executeQuery.cpp:1770: DB::executeQuery(String const&, std::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x0000000015cf036e
2025-08-11 13:00:34 22. ./ci/tmp/build/./src/Server/TCPHandler.cpp:739: DB::TCPHandler::runImpl() @ 0x0000000019468aaa
2025-08-11 13:00:34 23. ./ci/tmp/build/./src/Server/TCPHandler.cpp:2740: DB::TCPHandler::run() @ 0x0000000019485196
2025-08-11 13:00:34 24. ./ci/tmp/build/./base/poco/Net/src/TCPServerConnection.cpp:40: Poco::Net::TCPServerConnection::start() @ 0x000000001f49b207
2025-08-11 13:00:34 25. ./ci/tmp/build/./base/poco/Net/src/TCPServerDispatcher.cpp:115: Poco::Net::TCPServerDispatcher::run() @ 0x000000001f49b7be
2025-08-11 13:00:34 26. ./ci/tmp/build/./base/poco/Foundation/src/ThreadPool.cpp:205: Poco::PooledThread::run() @ 0x000000001f43a9ff
2025-08-11 13:00:34 27. ./base/poco/Foundation/src/Thread_POSIX.cpp:341: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000001f43800f
2025-08-11 13:00:34 28. ? @ 0x0000000000094ac3
2025-08-11 13:00:34 29. ? @ 0x0000000000126850
2025-08-11 13:00:34 
2025-08-11 13:00:34 Received exception from server (version 25.8.1):
2025-08-11 13:00:34 Code: 521. DB::Exception: Received from localhost:9000. DB::ErrnoException. DB::ErrnoException: Cannot rename /home/ubuntu/actions-runner/_work/ClickHouse/ClickHouse/ci/tmp/run_r0/disks/s3/cached_s3_encrypted/store/3fa/3fa93329-98ee-45ef-8c07-ac0a16dedb08/detached/attaching_all_35_35_0 to /home/ubuntu/actions-runner/_work/ClickHouse/ClickHouse/ci/tmp/run_r0/disks/s3/cached_s3_encrypted/store/3fa/3fa93329-98ee-45ef-8c07-ac0a16dedb08/detached/all_35_35_0 because the second path already exists: , errno: 17, strerror: File exists: While rolling back operation #0. (ATOMIC_RENAME_FAIL)
2025-08-11 13:00:34 (query: ALTER TABLE alter_table1 ATTACH PARTITION ID 'all')
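For context on the trace above: ATOMIC_RENAME_FAIL comes from renameNoReplace, which (on Linux) uses renameat2 with RENAME_NOREPLACE and so refuses to overwrite an existing destination, failing with EEXIST. A rough Python analogue on POSIX, assuming the filesystem supports hard links:

```python
import os

def rename_no_replace(src, dst):
    """Rename src to dst, failing with FileExistsError if dst already
    exists, instead of silently replacing it like os.rename() would on POSIX."""
    os.link(src, dst)   # atomically fails with EEXIST if dst exists
    os.unlink(src)
```

The failure in the log is the second-path-already-exists case: the detached part's target name was already taken when the attach operation tried to rename it.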

@nickitat
Member

AST fuzzer (amd_ubsan) - #85469
Stress test (amd_debug) - #81144
Stateless tests (amd_debug, parallel) - #85270
Stateless tests (amd_binary, old analyzer, s3 storage, DatabaseReplicated, parallel) - #54748

@nickitat nickitat added this pull request to the merge queue Aug 13, 2025
Merged via the queue into ClickHouse:master with commit fa5747d Aug 13, 2025
120 of 123 checks passed
@robot-ch-test-poll2 robot-ch-test-poll2 added the pr-synced-to-cloud The PR is synced to the cloud repo label Aug 13, 2025