chore: add da posting strategy to docs #2992
Conversation
**Summary of Changes**

Hello @tac0turtle, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances the project's documentation by introducing a new section on Data Availability (DA) layer batching strategies and their associated configuration parameters. The update provides users with clear guidance on how to configure block submission to the DA layer, enabling them to optimize for latency, cost, or throughput based on their specific needs. This addition improves the clarity and completeness of the configuration reference.

**Highlights**
Claude finished @tac0turtle's task in 1m 10s — Code Review Complete ✓
**Summary**

This PR adds comprehensive documentation for DA (Data Availability) batching strategies and their associated configuration parameters. The documentation is well structured and provides clear guidance for users configuring their ev-node deployments.

**Overall Assessment**

✅ Strengths:
**1. Table of Contents Inconsistency (Minor)**

Location: The new TOC entry for "DA Request Timeout" was added as part of this PR, but this configuration parameter already existed in the documentation (lines 498-513 in the original file). The TOC update appears to be fixing a previous omission rather than documenting something new in this PR.

Observation: This is actually a good catch and fix, but it suggests the documentation may have been out of sync.

**2. Strategy Description Clarity (Minor Enhancement)**

Location: The description states "All strategies pass through the DA submitter which performs additional size checks and may further split batches that exceed the DA layer's blob size limit."

Suggestion: Consider adding a brief note about what happens when batches are split (e.g., "split batches are submitted as separate DA transactions"). This would help users understand the cost implications.

**3. Default Value Alignment (Documentation)**

Location: The description says "When set to 0, defaults to the DA BlockTime value" and the default is listed as

Suggestion: Consider:

**4. Parameter Interdependencies (Enhancement)**

Location: Throughout the new batching configuration section

The documentation explains which parameters apply to which strategies, but users might benefit from a summary table showing the relationship:

| Strategy  | Uses Size Threshold | Uses Max Delay | Uses Min Items |
|-----------|---------------------|----------------|----------------|
| immediate | ❌ | ❌ | ✅ |
| size      | ✅ | ❌ | ✅ |
| time      | ❌ | ✅ | ✅ |
| adaptive  | ✅ | ✅ | ✅ |

This would make it easier to understand configuration requirements at a glance.

**5. Technical Accuracy Check (Question)**

Location: The documentation mentions "fraction of max blob size" for the size threshold. Can you confirm:
If it varies or requires configuration, it might be worth adding a note or cross-reference.

**Documentation Quality Checklist**

✅ Formatting: Consistent markdown, proper code blocks, correct heading hierarchy

**Recommendations**

Priority: Low - The documentation is production-ready as-is. The suggestions above are enhancements that would improve usability but are not blockers.
Approval Recommendation: ✅ Approve - This is high-quality documentation that significantly improves the configuration reference. The minor suggestions can be addressed in follow-up PRs if desired.
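For readers skimming the review, the parameters discussed above might appear together in a node configuration roughly like this. This is a sketch only: the key names, nesting, and `da` section are assumptions for illustration, not taken from ev-node's actual config schema.

```yaml
da:
  # Which batching strategy the submitter uses:
  # immediate | size | time | adaptive
  batch_strategy: adaptive
  # Fraction of the maximum blob size, range (0.0, 1.0];
  # consulted by the size and adaptive strategies.
  batch_size_threshold: 0.8
  # Longest a batch may wait before submission; a value of 0
  # falls back to the DA BlockTime. Used by time and adaptive.
  batch_max_delay: "6s"
  # Minimum pending items (headers or data) before submission
  # is considered by the size, time, and adaptive strategies.
  batch_min_items: 1
```

A deployment tuning for cost would typically raise `batch_size_threshold` and `batch_min_items`, while one tuning for latency would pick the `immediate` strategy and ignore the other knobs.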
The latest Buf updates on your PR. Results from workflow CI / buf-check (pull_request).
Code Review
This pull request adds documentation for new Data Availability (DA) batching strategy configurations. The new documentation is detailed and covers several new parameters. My review found a couple of minor inaccuracies in the descriptions compared with the implementation code. Specifically, the valid range for `batch_size_threshold` could be described more precisely, and the statement that `batch_min_items` is respected by all strategies is not entirely correct, as the `immediate` strategy does not use it. I've added suggestions to clarify these points in the documentation.
> ### DA Batch Size Threshold
>
> **Description:**
> The minimum blob size threshold (as a fraction of the maximum blob size, between 0.0 and 1.0) before submitting a batch. Only applies to the `size` and `adaptive` strategies. For example, a value of 0.8 means the batch will be submitted when it reaches 80% of the maximum blob size.
The description states the range for `batch_size_threshold` is between 0.0 and 1.0. This is slightly ambiguous: the implementation defaults to 0.8 if the value is less than or equal to 0.0. To be more precise, it would be better to state that the value must be in the range (0.0, 1.0].
Suggested change:

```diff
- The minimum blob size threshold (as a fraction of the maximum blob size, between 0.0 and 1.0) before submitting a batch. Only applies to the `size` and `adaptive` strategies. For example, a value of 0.8 means the batch will be submitted when it reaches 80% of the maximum blob size.
+ The minimum blob size threshold (as a fraction of the maximum blob size, in the range (0.0, 1.0]) before submitting a batch. Only applies to the `size` and `adaptive` strategies. For example, a value of 0.8 means the batch will be submitted when it reaches 80% of the maximum blob size.
```
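The fallback behavior described here can be sketched as a small Go helper. The function and constant names are illustrative, not taken from the ev-node source; the point is why the effective range is (0.0, 1.0] rather than [0.0, 1.0].

```go
package main

import "fmt"

// defaultSizeThreshold is the fallback described in the review comment
// above. The name is illustrative, not the actual ev-node identifier.
const defaultSizeThreshold = 0.8

// normalizeSizeThreshold sketches the described behavior: any configured
// value less than or equal to 0.0 falls back to the default, so 0.0
// itself is never an effective threshold.
func normalizeSizeThreshold(v float64) float64 {
	if v <= 0.0 {
		return defaultSizeThreshold
	}
	return v
}

func main() {
	fmt.Println(normalizeSizeThreshold(-0.5)) // out of range: default applies → 0.8
	fmt.Println(normalizeSizeThreshold(0.5))  // in range: used as-is → 0.5
}
```

Documenting the range as (0.0, 1.0] matches this logic exactly, whereas "between 0.0 and 1.0" could be read as including 0.0.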
> ### DA Batch Min Items
>
> **Description:**
> The minimum number of items (headers or data) to accumulate before considering submission. This helps avoid submitting single items when more are expected soon, improving batching efficiency. All strategies respect this minimum.
The documentation states that "All strategies respect this minimum." However, based on the implementation of `ImmediateStrategy`, it appears to submit as soon as `pendingCount > 0` without checking `minItems`. The `size`, `time`, and `adaptive` strategies do respect this minimum. The documentation should be updated to reflect this.
Suggested change:

```diff
- The minimum number of items (headers or data) to accumulate before considering submission. This helps avoid submitting single items when more are expected soon, improving batching efficiency. All strategies respect this minimum.
+ The minimum number of items (headers or data) to accumulate before considering submission. This helps avoid submitting single items when more are expected soon, improving batching efficiency. This is respected by the `size`, `time`, and `adaptive` strategies.
```
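The distinction the reviewer is drawing can be made concrete with a minimal Go sketch. All type, field, and method names here are illustrative stand-ins (not the real ev-node types), assuming each strategy exposes a decision function over the pending batch state.

```go
package main

import "fmt"

// batchState holds the inputs a strategy inspects (illustrative names).
type batchState struct {
	pendingCount int
	pendingBytes int
}

// submitStrategy decides whether the pending batch should be submitted now.
type submitStrategy interface {
	shouldSubmit(s batchState, minItems int) bool
}

// immediateStrategy submits as soon as anything is pending and, per the
// reviewer's reading of the implementation, never consults minItems.
type immediateStrategy struct{}

func (immediateStrategy) shouldSubmit(s batchState, _ int) bool {
	return s.pendingCount > 0
}

// sizeStrategy waits for both the minimum item count and a byte
// threshold (the fixed byte threshold here stands in for the
// fraction-of-max-blob-size calculation described in the docs).
type sizeStrategy struct{ thresholdBytes int }

func (st sizeStrategy) shouldSubmit(s batchState, minItems int) bool {
	return s.pendingCount >= minItems && s.pendingBytes >= st.thresholdBytes
}

func main() {
	state := batchState{pendingCount: 1, pendingBytes: 100}
	// One pending item with minItems = 5:
	fmt.Println(immediateStrategy{}.shouldSubmit(state, 5))               // true: minItems ignored
	fmt.Println(sizeStrategy{thresholdBytes: 50}.shouldSubmit(state, 5)) // false: below minItems
}
```

Under this reading, the suggested wording change above is the accurate one: `immediate` bypasses `batch_min_items` entirely, so only `size`, `time`, and `adaptive` should be documented as respecting it.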
**Codecov Report**

✅ All modified and coverable lines are covered by tests.

```
@@            Coverage Diff             @@
##             main    #2992      +/-   ##
==========================================
- Coverage   59.30%   59.24%   -0.07%
==========================================
  Files         105      105
  Lines        9998     9998
==========================================
- Hits         5929     5923       -6
- Misses       3442     3447       +5
- Partials      627      628       +1
```