== Physical Plan ==
* CometColumnarToRow (25)
+- CometHashAggregate (24)
   +- CometExchange (23)
      +- CometHashAggregate (22)
         +- CometProject (21)
            +- CometSortMergeJoin (20)
               :- CometSort (11)
               :  +- CometHashAggregate (10)
               :     +- CometExchange (9)
               :        +- CometHashAggregate (8)
               :           +- CometProject (7)
               :              +- CometBroadcastHashJoin (6)
               :                 :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales (1)
               :                 +- CometBroadcastExchange (5)
               :                    +- CometProject (4)
               :                       +- CometFilter (3)
               :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim (2)
               +- CometSort (19)
                  +- CometHashAggregate (18)
                     +- CometExchange (17)
                        +- CometHashAggregate (16)
                           +- CometProject (15)
                              +- CometBroadcastHashJoin (14)
                                 :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales (12)
                                 +- ReusedExchange (13)


(1) CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales
Output [3]: [ss_item_sk#1, ss_customer_sk#2, ss_sold_date_sk#3]
Batched: true
Location: InMemoryFileIndex []
PartitionFilters: [isnotnull(ss_sold_date_sk#3), dynamicpruningexpression(ss_sold_date_sk#3 IN dynamicpruning#4)]
ReadSchema: struct<ss_item_sk:int,ss_customer_sk:int>

(2) CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim
Output [2]: [d_date_sk#5, d_month_seq#6]
Batched: true
Location [not included in comparison]/{warehouse_dir}/date_dim]
PushedFilters: [IsNotNull(d_month_seq), GreaterThanOrEqual(d_month_seq,1200), LessThanOrEqual(d_month_seq,1211), IsNotNull(d_date_sk)]
ReadSchema: struct<d_date_sk:int,d_month_seq:int>

(3) CometFilter
Input [2]: [d_date_sk#5, d_month_seq#6]
Condition : (((isnotnull(d_month_seq#6) AND (d_month_seq#6 >= 1200)) AND (d_month_seq#6 <= 1211)) AND isnotnull(d_date_sk#5))

(4) CometProject
Input [2]: [d_date_sk#5, d_month_seq#6]
Arguments: [d_date_sk#5], [d_date_sk#5]

(5) CometBroadcastExchange
Input [1]: [d_date_sk#5]
Arguments: [d_date_sk#5]

(6) CometBroadcastHashJoin
Left output [3]: [ss_item_sk#1, ss_customer_sk#2, ss_sold_date_sk#3]
Right output [1]: [d_date_sk#5]
Arguments: [ss_sold_date_sk#3], [d_date_sk#5], Inner, BuildRight

(7) CometProject
Input [4]: [ss_item_sk#1, ss_customer_sk#2, ss_sold_date_sk#3, d_date_sk#5]
Arguments: [ss_item_sk#1, ss_customer_sk#2], [ss_item_sk#1, ss_customer_sk#2]

(8) CometHashAggregate
Input [2]: [ss_item_sk#1, ss_customer_sk#2]
Keys [2]: [ss_customer_sk#2, ss_item_sk#1]
Functions: []

(9) CometExchange
Input [2]: [ss_customer_sk#2, ss_item_sk#1]
Arguments: hashpartitioning(ss_customer_sk#2, ss_item_sk#1, 5), ENSURE_REQUIREMENTS, CometNativeShuffle, [plan_id=1]

(10) CometHashAggregate
Input [2]: [ss_customer_sk#2, ss_item_sk#1]
Keys [2]: [ss_customer_sk#2, ss_item_sk#1]
Functions: []

(11) CometSort
Input [2]: [customer_sk#7, item_sk#8]
Arguments: [customer_sk#7, item_sk#8], [customer_sk#7 ASC NULLS FIRST, item_sk#8 ASC NULLS FIRST]

(12) CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales
Output [3]: [cs_bill_customer_sk#9, cs_item_sk#10, cs_sold_date_sk#11]
Batched: true
Location: InMemoryFileIndex []
PartitionFilters: [isnotnull(cs_sold_date_sk#11), dynamicpruningexpression(cs_sold_date_sk#11 IN dynamicpruning#12)]
ReadSchema: struct<cs_bill_customer_sk:int,cs_item_sk:int>

(13) ReusedExchange [Reuses operator id: 5]
Output [1]: [d_date_sk#13]

(14) CometBroadcastHashJoin
Left output [3]: [cs_bill_customer_sk#9, cs_item_sk#10, cs_sold_date_sk#11]
Right output [1]: [d_date_sk#13]
Arguments: [cs_sold_date_sk#11], [d_date_sk#13], Inner, BuildRight

(15) CometProject
Input [4]: [cs_bill_customer_sk#9, cs_item_sk#10, cs_sold_date_sk#11, d_date_sk#13]
Arguments: [cs_bill_customer_sk#9, cs_item_sk#10], [cs_bill_customer_sk#9, cs_item_sk#10]

(16) CometHashAggregate
Input [2]: [cs_bill_customer_sk#9, cs_item_sk#10]
Keys [2]: [cs_bill_customer_sk#9, cs_item_sk#10]
Functions: []

(17) CometExchange
Input [2]: [cs_bill_customer_sk#9, cs_item_sk#10]
Arguments: hashpartitioning(cs_bill_customer_sk#9, cs_item_sk#10, 5), ENSURE_REQUIREMENTS, CometNativeShuffle, [plan_id=2]

(18) CometHashAggregate
Input [2]: [cs_bill_customer_sk#9, cs_item_sk#10]
Keys [2]: [cs_bill_customer_sk#9, cs_item_sk#10]
Functions: []

(19) CometSort
Input [2]: [customer_sk#14, item_sk#15]
Arguments: [customer_sk#14, item_sk#15], [customer_sk#14 ASC NULLS FIRST, item_sk#15 ASC NULLS FIRST]

(20) CometSortMergeJoin
Left output [2]: [customer_sk#7, item_sk#8]
Right output [2]: [customer_sk#14, item_sk#15]
Arguments: [customer_sk#7, item_sk#8], [customer_sk#14, item_sk#15], FullOuter

(21) CometProject
Input [4]: [customer_sk#7, item_sk#8, customer_sk#14, item_sk#15]
Arguments: [customer_sk#7, customer_sk#14], [customer_sk#7, customer_sk#14]

(22) CometHashAggregate
Input [2]: [customer_sk#7, customer_sk#14]
Keys: []
Functions [3]: [partial_sum(CASE WHEN (isnotnull(customer_sk#7) AND isnull(customer_sk#14)) THEN 1 ELSE 0 END), partial_sum(CASE WHEN (isnull(customer_sk#7) AND isnotnull(customer_sk#14)) THEN 1 ELSE 0 END), partial_sum(CASE WHEN (isnotnull(customer_sk#7) AND isnotnull(customer_sk#14)) THEN 1 ELSE 0 END)]

(23) CometExchange
Input [3]: [sum#16, sum#17, sum#18]
Arguments: SinglePartition, ENSURE_REQUIREMENTS, CometNativeShuffle, [plan_id=3]

(24) CometHashAggregate
Input [3]: [sum#16, sum#17, sum#18]
Keys: []
Functions [3]: [sum(CASE WHEN (isnotnull(customer_sk#7) AND isnull(customer_sk#14)) THEN 1 ELSE 0 END), sum(CASE WHEN (isnull(customer_sk#7) AND isnotnull(customer_sk#14)) THEN 1 ELSE 0 END), sum(CASE WHEN (isnotnull(customer_sk#7) AND isnotnull(customer_sk#14)) THEN 1 ELSE 0 END)]

(25) CometColumnarToRow [codegen id : 1]
Input [3]: [store_only#19, catalog_only#20, store_and_catalog#21]

===== Subqueries =====

Subquery:1 Hosting operator id = 1 Hosting Expression = ss_sold_date_sk#3 IN dynamicpruning#4
BroadcastExchange (30)
+- * CometColumnarToRow (29)
   +- CometProject (28)
      +- CometFilter (27)
         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim (26)


(26) CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim
Output [2]: [d_date_sk#5, d_month_seq#6]
Batched: true
Location [not included in comparison]/{warehouse_dir}/date_dim]
PushedFilters: [IsNotNull(d_month_seq), GreaterThanOrEqual(d_month_seq,1200), LessThanOrEqual(d_month_seq,1211), IsNotNull(d_date_sk)]
ReadSchema: struct<d_date_sk:int,d_month_seq:int>

(27) CometFilter
Input [2]: [d_date_sk#5, d_month_seq#6]
Condition : (((isnotnull(d_month_seq#6) AND (d_month_seq#6 >= 1200)) AND (d_month_seq#6 <= 1211)) AND isnotnull(d_date_sk#5))

(28) CometProject
Input [2]: [d_date_sk#5, d_month_seq#6]
Arguments: [d_date_sk#5], [d_date_sk#5]

(29) CometColumnarToRow [codegen id : 1]
Input [1]: [d_date_sk#5]

(30) BroadcastExchange
Input [1]: [d_date_sk#5]
Arguments: HashedRelationBroadcastMode(List(cast(input[0, int, true] as bigint)),false), [plan_id=4]

Subquery:2 Hosting operator id = 12 Hosting Expression = cs_sold_date_sk#11 IN dynamicpruning#4