Conversation

@CarlosFelipeOR
Collaborator

Changelog category (leave one):

  • CI Fix or Improvement (changelog entry is not required)
  • Not for changelog (changelog entry is not required)

CI/CD Options

Exclude tests:

  • Fast test
  • Integration Tests
  • Stateless tests
  • Stateful tests
  • Performance tests
  • All with ASAN
  • All with TSAN
  • All with MSAN
  • All with UBSAN
  • All with Coverage
  • All with Aarch64
  • All Regression
  • Disable CI Cache

Regression jobs to run:

  • Fast suites (mostly <1h)
  • Aggregate Functions (2h)
  • Alter (1.5h)
  • Benchmark (30m)
  • ClickHouse Keeper (1h)
  • Iceberg (2h)
  • LDAP (1h)
  • Parquet (1.5h)
  • RBAC (1.5h)
  • SSL Server (1h)
  • S3 (2h)
  • Tiered Storage (2h)

@github-actions

github-actions bot commented Jan 28, 2026

Workflow [PR], commit [b5db9c9]

@MyroTk merged commit f2260bb into antalya-25.8 on Jan 29, 2026
69 checks passed
@CarlosFelipeOR
Collaborator Author

This PR only updates the regression commit hash. The remaining CI failures were reviewed and can be ignored; see the analysis in the comments below.

@Selfeer
Collaborator

Selfeer commented Jan 29, 2026

The /s3/minio/export tests/export part/system monitoring/system tables columns test fails here (https://github.com/Altinity/ClickHouse/actions/runs/21453767736/job/61812473251) because the changes made in #1330 do not appear to be part of this PR, so the required field is missing from the system table:

Missing columns in system.exports: {'query_id'}.
Expected: ['bytes_read_uncompressed', 'create_time', 'destination_database', 'destination_file_paths', 'destination_table', 'elapsed', 'memory_usage', 'part_name', 'peak_memory_usage', 'query_id', 'rows_read', 'source_database', 'source_table', 'total_rows_to_read', 'total_size_bytes_compressed', 'total_size_bytes_uncompressed']
Actual: ['bytes_read_uncompressed', 'create_time', 'destination_database', 'destination_file_paths', 'destination_table', 'elapsed', 'memory_usage', 'part_name', 'peak_memory_usage', 'rows_read', 'source_database', 'source_table', 'total_rows_to_read', 'total_size_bytes_compressed', 'total_size_bytes_uncompressed']

You can see the export part test passing in the antalya branch itself after the merge: https://github.com/Altinity/ClickHouse/actions/runs/21463972117/job/61822269243

Run python3 -u s3/regression.py \
  --clickhouse https://altinity-build-artifacts.s3.amazonaws.com/REFs/antalya-25.8/f3a7dcee7b8562c69d7c37f353fbb4a39ba0c03f/build_amd_release/clickhouse \
  --storage minio --test-to-end --no-colors --local --collect-service-logs --output new-fails \
  --attr project="${GITHUB_REPOSITORY}" project.id="${GITHUB_REPOSITORY_ID}" user.name="${GITHUB_ACTOR}" \
    version="25.8.14.20001.altinityantalya" package="$clickhouse_path" \
    repository="https://github.com/Altinity/clickhouse-regression" commit.hash="$(git rev-parse HEAD)" \
    job.name=$GITHUB_JOB job.retry=$GITHUB_RUN_ATTEMPT \
    job.url="${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}/actions/runs/${GITHUB_RUN_ID}" arch="$(uname -i)" \
  --cicd --log raw.log --with-analyzer --only ":/try*" "minio/export tests/export part/*" \
  || EXITCODE=$?; .github/add_link_to_logs.sh; exit $EXITCODE

Coverage

SRS-015 ClickHouse S3 External Storage
  137 requirements (137 untested 100.0%)

SRS-015 ClickHouse Export Part to S3
  50 requirements (44 satisfied 88.0%, 6 untested 12.0%)

Total
  187 requirements (44 satisfied 23.5%, 143 untested 76.5%)

1 module (1 ok)
16 features (16 ok)
83 scenarios (83 ok)
49 combinations (49 ok)
169 examples (169 ok)
1386 retries (831 ok, 243 failed, 312 retried)
70600 steps (70565 ok, 35 retried)

Total time 1h 20m

@Selfeer
Collaborator

Selfeer commented Jan 29, 2026

Summary

A test involving a distributed RIGHT SEMI JOIN between an icebergS3Cluster(...) table function and an Iceberg catalog table failed with an unexpected error code when executed on a replicated cluster.

Report: https://altinity-build-artifacts.s3.amazonaws.com/REFs/1352/merge/b5db9c9ec117976e02c68ee92fe521e94a2610dc/regression/x86_64/with_analyzer/zookeeper/without_thread_fuzzer/swarms/report.html


Failing Test Details

  • Query type: RIGHT SEMI JOIN
  • Left side: icebergS3Cluster(...) table function
  • Right side: Iceberg catalog table
  • Settings:
    • object_storage_cluster_join_mode = 'allow'
    • object_storage_cluster = 'replicated_cluster_three_nodes'

Error Observed

The test expected the join query to return exitcode 81, but ClickHouse returned exitcode 10 (NOT_FOUND_COLUMN_IN_BLOCK).

    Received exception from server (version 25.8.14):
    Code: 10. DB::Exception: Received from localhost:9000. DB::Exception: Received from clickhouse1:9000. DB::Exception: Not found column __table2.boolean_col in block __table1.boolean_col Nullable(Bool) Nullable(size = 0, UInt8(size = 0), UInt8(size = 0)), __table1.long_col Nullable(Int64) Nullable(size = 0, Int64(size = 0), UInt8(size = 0)), __table1.double_col Nullable(Float64) Nullable(size = 0, Float64(size = 0), UInt8(size = 0)), __table1.string_col Nullable(String) Nullable(size = 0, String(size = 0), UInt8(size = 0)), __table1.timestamp_col Nullable(DateTime64(6)) Nullable(size = 0, DateTime64(size = 0), UInt8(size = 0)), __table1.date_col Nullable(Date) Nullable(size = 0, UInt16(size = 0), UInt8(size = 0)), __table1.time_col Nullable(Int64) Nullable(size = 0, Int64(size = 0), UInt8(size = 0)), __table1.timestamptz_col Nullable(DateTime64(6, 'UTC')) Nullable(size = 0, DateTime64(size = 0), UInt8(size = 0)), __table1.integer_col Nullable(Int32) Nullable(size = 0, Int32(size = 0), UInt8(size = 0)), __table1.float_col Nullable(Float32) Nullable(size = 0, Float32(size = 0), UInt8(size = 0)), __table1.decimal_col Nullable(Decimal(10, 2)) Nullable(size = 0, Decimal64(size = 0), UInt8(size = 0)). (NOT_FOUND_COLUMN_IN_BLOCK)
    (query: SELECT *
            FROM icebergS3Cluster(replicated_cluster, 'http://minio:9000/warehouse/data2', '[masked]:Secret(name='minio_root_user')', '[masked]:Secret(name='minio_root_password')') AS t1
            LEFT SEMI JOIN database_9654bf5b_fcc7_11f0_9e01_92000714a5df.`namespace_96b2d081_fcc7_11f0_90ab_92000714a5df.table_96b2d1bd_fcc7_11f0_83ee_92000714a5df` AS t2
             ON t1.timestamptz_col = t2.timestamptz_col ORDER BY tuple(*) SETTINGS object_storage_cluster_join_mode='allow', object_storage_cluster='replicated_cluster' FORMAT Values
    )

    Assertion values
      assert r.exitcode == exitcode, error(r.output)
             ^ is <testflows.connect.shell.Command object at 0x72fbf885da90>
      assert r.exitcode == exitcode, error(r.output)
             ^ is = 10
      assert r.exitcode == exitcode, error(r.output)
                           ^ is 81
      assert r.exitcode == exitcode, error(r.output)
                        ^ is = False
      assert r.exitcode == exitcode, error(r.output)
      ^ is False

The test appears to be flaky, based on the observations in Altinity/clickhouse-regression#94, and the failure could not have been caused by the changes in #1330.

The bug involves:
• The icebergS3Cluster table function (a stripped-down sketch of the query shape follows this list)
• The RIGHT SEMI JOIN clause
• The object_storage_cluster_join_mode='allow' setting
• Column resolution (NOT_FOUND_COLUMN_IN_BLOCK: the plan looks for __table2.* columns in a block that only contains __table1.* columns)
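
For reference, a hypothetical stripped-down sketch of the failing query shape, mirroring the logged query; the endpoint, credentials, and table names are placeholders, not the originals from the test:

    -- Hypothetical minimal shape of the failing query; the endpoint,
    -- credentials, and table names below are placeholders.
    SELECT *
    FROM icebergS3Cluster(
             'replicated_cluster',
             'http://minio:9000/warehouse/data',
             'ACCESS_KEY', 'SECRET_KEY') AS t1
    LEFT SEMI JOIN iceberg_db.iceberg_table AS t2
        ON t1.timestamptz_col = t2.timestamptz_col
    ORDER BY tuple(*)
    SETTINGS object_storage_cluster_join_mode = 'allow',
             object_storage_cluster = 'replicated_cluster';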

PR #1330 touches none of these areas:
• No changes to Iceberg functionality
• No changes to cluster table functions
• No changes to JOIN processing
• No changes to distributed query execution
• No changes to column resolution logic
