
Conversation

@tac0turtle
Contributor

Overview

@tac0turtle tac0turtle marked this pull request as ready for review January 19, 2026 10:47
@gemini-code-assist
Contributor

Summary of Changes

Hello @tac0turtle, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the project's documentation by introducing a new section on Data Availability (DA) layer batching strategies and their associated configuration parameters. The update provides users with clear guidance on how to configure block submission to the DA layer, enabling them to optimize for latency, cost, or throughput based on their specific needs. This addition improves the clarity and completeness of the configuration reference.

Highlights

  • New DA Batching Strategy Documentation: Added comprehensive documentation for the DA (Data Availability) batching strategies, detailing how blocks are batched before submission to the DA layer.
  • Detailed Strategy Explanations: Provided in-depth descriptions for four distinct batching strategies: immediate, size, time, and adaptive, outlining their trade-offs in terms of latency, cost efficiency, and throughput.
  • New DA Batching Configuration Parameters: Documented new configuration options related to DA batching, including DA Batch Size Threshold, DA Batch Max Delay, and DA Batch Min Items, along with their descriptions, YAML examples, command-line flags, and default values (see the sketch after this list).
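
To help reviewers picture how the documented options fit together, here is a hedged Go sketch of a batching configuration. The struct, field, and YAML tag names are illustrative assumptions, not ev-node's actual identifiers; the real names, flags, and defaults are the ones documented in config.md.

```go
package config

import "time"

// DABatchingConfig is an illustrative grouping of the documented DA batching
// options; the names and YAML tags here are assumptions, not ev-node's own.
type DABatchingConfig struct {
	// Strategy selects one of: "immediate", "size", "time", "adaptive".
	Strategy string `yaml:"strategy"`

	// BatchSizeThreshold is a fraction of the DA layer's maximum blob size;
	// e.g. 0.8 submits once the pending batch reaches 80% of that limit.
	// Documented as applying to the size and adaptive strategies.
	BatchSizeThreshold float64 `yaml:"batch_size_threshold"`

	// BatchMaxDelay bounds how long pending items wait before submission;
	// a zero value falls back to the DA BlockTime.
	BatchMaxDelay time.Duration `yaml:"batch_max_delay"`

	// BatchMinItems is the minimum number of headers or data items to
	// accumulate before a submission is considered.
	BatchMinItems uint64 `yaml:"batch_min_items"`
}
```

Following the trade-offs described above, a cost-sensitive deployment would lean on the size or adaptive strategy with a high threshold, while a latency-sensitive one would stay on immediate.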


@claude
Contributor

claude bot commented Jan 19, 2026

Claude finished @tac0turtle's task in 1m 10s


Code Review Complete ✓

Todo List

  • Read the changed documentation file
  • Compare changes with base branch
  • Review documentation for technical accuracy and clarity
  • Provide comprehensive feedback

Summary

This PR adds comprehensive documentation for DA (Data Availability) batching strategies and their associated configuration parameters. The documentation is well-structured and provides clear guidance for users configuring their ev-node deployments.

Overall Assessment

✅ Strengths:

  • Clear, detailed explanations of each batching strategy with trade-off analysis
  • Consistent formatting that matches the existing documentation style
  • Helpful examples and default values for all parameters
  • Good use of bold formatting to highlight strategy names and key concepts
  • Practical guidance on when to use each strategy

⚠️ Issues Found:

1. Table of Contents Inconsistency (Minor)

Location: docs/learn/config.md:32

The new TOC entry for "DA Request Timeout" was added as part of this PR, but this configuration parameter already existed in the documentation (lines 498-513 in the original file). The TOC update appears to be fixing a previous omission rather than documenting something new in this PR.

Observation: This is actually a good catch and fix, but it suggests the documentation may have been out of sync.

2. Strategy Description Clarity (Minor Enhancement)

Location: docs/learn/config.md:519-526

The description states "All strategies pass through the DA submitter which performs additional size checks and may further split batches that exceed the DA layer's blob size limit."

Suggestion: Consider adding a brief note about what happens when batches are split (e.g., "split batches are submitted as separate DA transactions"). This would help users understand the cost implications.
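To make the cost implication concrete, here is a minimal Go sketch of the kind of splitting being described; the function is an assumption for illustration, not the actual DA submitter code. Each resulting chunk would go out as its own DA submission, which is where the extra cost comes from.

```go
package batching

// splitByBlobSize divides a batch so that no chunk exceeds maxBlobSize.
// Illustrative only: a single item larger than maxBlobSize still ends up
// in its own oversized chunk and is not handled here.
func splitByBlobSize(items [][]byte, maxBlobSize int) [][][]byte {
	var chunks [][][]byte
	var current [][]byte
	currentSize := 0
	for _, item := range items {
		// Start a new chunk once adding this item would cross the limit.
		if len(current) > 0 && currentSize+len(item) > maxBlobSize {
			chunks = append(chunks, current)
			current, currentSize = nil, 0
		}
		current = append(current, item)
		currentSize += len(item)
	}
	if len(current) > 0 {
		chunks = append(chunks, current)
	}
	return chunks
}
```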

3. Default Value Alignment (Documentation)

Location: docs/learn/config.md:579 (DA Batch Max Delay)

The description says "When set to 0, defaults to the DA BlockTime value" and the default is listed as 0 (uses DA BlockTime). This is clear, but it might be worth noting the effective default value in parentheses for user convenience.

Suggestion: Consider _Default:_ 0 (effectively "6s" when using the default DA BlockTime)
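
A tiny hedged sketch of the defaulting rule being discussed (the helper name is hypothetical, not from the codebase): with the default DA BlockTime of 6s, a zero batch_max_delay behaves like 6s.

```go
package config

import "time"

// effectiveBatchMaxDelay applies the documented fallback: a zero max delay
// uses the configured DA BlockTime instead (6s with the default settings).
func effectiveBatchMaxDelay(batchMaxDelay, daBlockTime time.Duration) time.Duration {
	if batchMaxDelay == 0 {
		return daBlockTime
	}
	return batchMaxDelay
}
```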

4. Parameter Interdependencies (Enhancement)

Location: Throughout the new batching configuration section

The documentation explains which parameters apply to which strategies, but users might benefit from a summary table showing the relationship:

| Strategy   | Uses Size Threshold | Uses Max Delay | Uses Min Items |
|------------|---------------------|----------------|----------------|
| immediate  ||||
| size       ||||
| time       ||||
| adaptive   ||||

This would make it easier to understand configuration requirements at a glance.

5. Technical Accuracy Check (Question)

Location: docs/learn/config.md:520

The documentation mentions "fraction of max blob size" for the size threshold. Can you confirm:

  • Is the maximum blob size auto-detected from the DA layer, or is it a hardcoded constant?
  • Does this maximum blob size vary between DA layers (Celestia, Avail, etc.)?

If it varies or requires configuration, it might be worth adding a note or cross-reference.


Documentation Quality Checklist

  • Formatting: Consistent markdown, proper code blocks, correct heading hierarchy
  • Completeness: All parameters documented with YAML examples, CLI flags, defaults, and constants
  • Clarity: Clear descriptions with practical guidance
  • Examples: Appropriate example values provided
  • Structure: Follows existing documentation patterns
  • Table of Contents: Updated correctly (with the minor note about DA Request Timeout)
  • Grammar & Spelling: No issues found


Recommendations

Priority: Low - The documentation is production-ready as-is. The suggestions above are enhancements that would improve usability but are not blockers.

  1. Consider adding the parameter interdependency table for quick reference
  2. Consider clarifying what happens when batches are split by the submitter
  3. Verify maximum blob size behavior across different DA layers (if applicable)

Approval Recommendation: Approve - This is high-quality documentation that significantly improves the configuration reference. The minor suggestions can be addressed in follow-up PRs if desired.


@github-actions
Contributor

The latest Buf updates on your PR. Results from workflow CI / buf-check (pull_request).

| Build | Format | Lint | Breaking | Updated (UTC) |
|-------|--------|------|----------|---------------|
| ✅ passed | ⏩ skipped | ✅ passed | ✅ passed | Jan 19, 2026, 10:47 AM |

@tac0turtle tac0turtle enabled auto-merge January 19, 2026 10:48
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds documentation for new Data Availability (DA) batching strategy configurations. The new documentation is detailed and covers several new parameters. My review found a couple of minor inaccuracies in the descriptions when compared to the implementation code. Specifically, the valid range for batch_size_threshold could be described more precisely, and the statement about batch_min_items being respected by all strategies is not entirely correct, as the immediate strategy does not use it. I've added suggestions to clarify these points in the documentation.

### DA Batch Size Threshold

**Description:**
The minimum blob size threshold (as a fraction of the maximum blob size, between 0.0 and 1.0) before submitting a batch. Only applies to the `size` and `adaptive` strategies. For example, a value of 0.8 means the batch will be submitted when it reaches 80% of the maximum blob size.
Contributor


medium

The description states the range for batch_size_threshold is between 0.0 and 1.0. This is slightly ambiguous. The implementation defaults to 0.8 if the value is less than or equal to 0.0. To be more precise, it would be better to state that the value must be in the range (0.0, 1.0].

Suggested change
The minimum blob size threshold (as a fraction of the maximum blob size, between 0.0 and 1.0) before submitting a batch. Only applies to the `size` and `adaptive` strategies. For example, a value of 0.8 means the batch will be submitted when it reaches 80% of the maximum blob size.
The minimum blob size threshold (as a fraction of the maximum blob size, in the range (0.0, 1.0]) before submitting a batch. Only applies to the `size` and `adaptive` strategies. For example, a value of 0.8 means the batch will be submitted when it reaches 80% of the maximum blob size.
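
As a hedged illustration of the threshold semantics and the fallback behaviour noted above (the names are assumptions, not the real implementation):

```go
package batching

// shouldSubmitBySize reports whether a pending batch has reached the
// configured fraction of the DA layer's maximum blob size. A non-positive
// threshold falls back to 0.8, matching the behaviour noted in the review.
func shouldSubmitBySize(batchBytes, maxBlobSize int, threshold float64) bool {
	if threshold <= 0 {
		threshold = 0.8
	}
	return float64(batchBytes) >= threshold*float64(maxBlobSize)
}
```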

### DA Batch Min Items

**Description:**
The minimum number of items (headers or data) to accumulate before considering submission. This helps avoid submitting single items when more are expected soon, improving batching efficiency. All strategies respect this minimum.
Contributor


medium

The documentation states that "All strategies respect this minimum." However, based on the implementation of ImmediateStrategy, it seems to submit as soon as pendingCount > 0 without checking for minItems. The size, time, and adaptive strategies do respect this minimum. The documentation should be updated to reflect this.

Suggested change
The minimum number of items (headers or data) to accumulate before considering submission. This helps avoid submitting single items when more are expected soon, improving batching efficiency. All strategies respect this minimum.
The minimum number of items (headers or data) to accumulate before considering submission. This helps avoid submitting single items when more are expected soon, improving batching efficiency. This is respected by the `size`, `time`, and `adaptive` strategies.
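
An illustrative Go sketch of the contrast being pointed out; the function names are assumptions rather than the actual ImmediateStrategy code:

```go
package batching

// immediateCheck mirrors the behaviour described above: submit as soon as
// anything is pending, ignoring the configured minimum item count.
func immediateCheck(pendingCount, minItems int) bool {
	return pendingCount > 0
}

// minItemsCheck is the gate the size, time, and adaptive strategies layer on
// top of their own conditions: nothing is submitted until the minimum number
// of items has accumulated.
func minItemsCheck(pendingCount, minItems int) bool {
	return pendingCount >= minItems
}
```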

@codecov

codecov bot commented Jan 19, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 59.24%. Comparing base (dfd76ac) to head (4c82c17).
⚠️ Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2992      +/-   ##
==========================================
- Coverage   59.30%   59.24%   -0.07%     
==========================================
  Files         105      105              
  Lines        9998     9998              
==========================================
- Hits         5929     5923       -6     
- Misses       3442     3447       +5     
- Partials      627      628       +1     
| Flag | Coverage Δ |
|------|------------|
| combined | 59.24% <ø> (-0.07%) ⬇️ |

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

@tac0turtle tac0turtle added this pull request to the merge queue Jan 19, 2026
Merged via the queue into main with commit 50b35a9 Jan 19, 2026
33 of 37 checks passed
@tac0turtle tac0turtle deleted the marko/strategy_docs branch January 19, 2026 11:12
@github-actions
Contributor

PR Preview Action v1.8.0
Preview removed because the pull request was closed.
2026-01-19 11:13 UTC

alpe added a commit that referenced this pull request Jan 19, 2026
* main:
  chore: update calculator for strategies  (#2995)
  chore: adding tracing for da submitter (#2993)
  feat(tracing): part 10 da retriever tracing (#2991)
  chore: add da posting strategy to docs (#2992)