RFC 004: Add delayed rewards support for trajectory-based scoring #337
Merged
Conversation
Extends RFC 004 to address Issue #107: the per-step `forward(action, obs)` API doesn't support delayed rewards where the score depends on future events.

Key additions:
- `TrajectoryRubric` base class that accumulates (action, obs) pairs
- `ExponentialDiscountingTrajectoryRubric` with gamma-based credit assignment
- CPU-only memory model to avoid GPU pressure
- Examples: Chess (win/loss), Cursor Plan Mode, Codenames
- Environment integration and training loop patterns

Design insight: Since OpenEnv doesn't batch (one env = one trajectory), the rubric itself accumulates the trajectory internally. No separate trajectory buffer is needed.

Resolves: #107
Greptile Summary

This PR extends RFC 004 to add delayed rewards support through a `TrajectoryRubric` base class that accumulates (action, obs) pairs and scores the full trajectory once the episode ends.

The RFC additions are well-structured, include clear examples, and fit naturally into the existing Rubric framework. Confidence Score: 5/5
Sequence Diagram

```mermaid
sequenceDiagram
    participant Agent
    participant Env as Environment
    participant TR as TrajectoryRubric
    participant Buffer as Internal Trajectory Buffer

    Note over Env,TR: Episode Start
    Agent->>Env: reset()
    Env->>TR: reset()
    TR->>Buffer: Clear trajectory []
    Env-->>Agent: initial_observation

    Note over Env,TR: During Episode (Step 1)
    Agent->>Env: step(action_1)
    Env->>TR: __call__(action_1, obs_1)
    TR->>Buffer: append((action_1, obs_1))
    Note over TR: obs_1.done = False
    TR-->>Env: return 0.0 (intermediate_reward)
    Env-->>Agent: obs_1 (reward=0.0)

    Note over Env,TR: During Episode (Step 2)
    Agent->>Env: step(action_2)
    Env->>TR: __call__(action_2, obs_2)
    TR->>Buffer: append((action_2, obs_2))
    Note over TR: obs_2.done = False
    TR-->>Env: return 0.0 (intermediate_reward)
    Env-->>Agent: obs_2 (reward=0.0)

    Note over Env,TR: Final Step (obs.done=True)
    Agent->>Env: step(action_T)
    Env->>TR: __call__(action_T, obs_T)
    TR->>Buffer: append((action_T, obs_T))
    Note over TR: obs_T.done = True
    TR->>TR: score_trajectory(buffer)
    Note over TR: Compute final score from<br/>full trajectory
    TR-->>Env: return final_score
    Env-->>Agent: obs_T (reward=final_score)

    Note over Agent,Buffer: Credit Assignment (optional)
    Agent->>Env: rubric.compute_step_rewards()
    TR->>TR: Apply discounting strategy<br/>r_t = gamma^(T-1-t) * R_final
    TR-->>Agent: [r_0, r_1, ..., r_T]

    Note over Agent,Buffer: Next Episode
    Agent->>Env: reset()
    Env->>TR: reset()
    TR->>Buffer: Clear trajectory []
```
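To make the wiring concrete, here is a minimal sketch of an environment that delegates scoring to a trajectory rubric, following the call sequence above. `Observation`, `RubricScoredEnv`, and `_transition` are illustrative names, not part of the RFC:

```python
from dataclasses import dataclass
from typing import Any

# Hypothetical observation type; OpenEnv's actual Observation class may differ.
@dataclass
class Observation:
    state: Any = None
    done: bool = False
    reward: float = 0.0

class RubricScoredEnv:
    """Delegates per-step scoring to a (trajectory) rubric, as in the diagram."""

    def __init__(self, rubric):
        self.rubric = rubric

    def reset(self) -> Observation:
        self.rubric.reset()                    # clear the internal trajectory buffer
        return Observation()

    def step(self, action) -> Observation:
        obs = self._transition(action)         # environment-specific dynamics
        obs.reward = self.rubric(action, obs)  # 0.0 until obs.done is True
        return obs

    def _transition(self, action) -> Observation:
        raise NotImplementedError              # implemented per environment
```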
Merging this quickly so we have a complete RFC.
## Summary
Extends RFC 004 to address Issue #107: the per-step `forward(action, obs)` API doesn't support delayed rewards where the score depends on future events.

Examples of delayed reward scenarios: Chess (win/loss), Cursor Plan Mode, Codenames.
## Key Additions to RFC 004
### Self-Accumulating TrajectoryRubric
Since OpenEnv doesn't batch (one env = one trajectory), the rubric itself accumulates the trajectory internally:
- `TrajectoryRubric.__call__(action, obs)` records each step internally
- Returns `0.0` (or a configurable intermediate reward) until `obs.done=True`
- `reset()` clears the internal buffer
- Composable with `Sequential`, `RubricDict`, etc.
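A minimal sketch of such a base class, under the assumption that observations expose a `done` flag; naming and defaults are illustrative rather than the RFC's final API:

```python
from abc import ABC, abstractmethod
from typing import Any, List, Tuple

class TrajectoryRubric(ABC):
    """Accumulates (action, obs) pairs and defers scoring until episode end."""

    def __init__(self, intermediate_reward: float = 0.0):
        self.intermediate_reward = intermediate_reward
        self._trajectory: List[Tuple[Any, Any]] = []

    def __call__(self, action: Any, obs: Any) -> float:
        self._trajectory.append((action, obs))  # record every step
        if not getattr(obs, "done", False):
            return self.intermediate_reward     # defer scoring until done
        return self.score_trajectory(self._trajectory)

    def reset(self) -> None:
        self._trajectory = []                   # new episode, fresh buffer

    @abstractmethod
    def score_trajectory(self, trajectory: List[Tuple[Any, Any]]) -> float:
        """Compute the final score from the full (action, obs) sequence."""
```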
### ExponentialDiscountingTrajectoryRubric

Standard gamma-based discounting:

`r_t = gamma^(T-1-t) * R_final`
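A sketch of the discounting subclass, building on the `TrajectoryRubric` sketch above. `compute_step_rewards` mirrors the optional credit-assignment call in the sequence diagram; task-specific subclasses would still implement `score_trajectory`:

```python
from typing import Any, List

class ExponentialDiscountingTrajectoryRubric(TrajectoryRubric):
    """Spreads the final score backwards over the trajectory with gamma."""

    def __init__(self, gamma: float = 0.99, **kwargs):
        super().__init__(**kwargs)
        self.gamma = gamma
        self._final_score = 0.0

    def __call__(self, action: Any, obs: Any) -> float:
        reward = super().__call__(action, obs)
        if getattr(obs, "done", False):
            self._final_score = reward          # cache R_final for credit assignment
        return reward

    def compute_step_rewards(self) -> List[float]:
        # r_t = gamma^(T-1-t) * R_final: full credit at the final step,
        # exponentially less for earlier steps.
        T = len(self._trajectory)
        return [self.gamma ** (T - 1 - t) * self._final_score for t in range(T)]
```

A training loop would call `compute_step_rewards()` once the episode finishes to obtain per-step reward targets, matching the diagram's credit-assignment phase.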
### Memory Model

CPU-only trajectories to avoid GPU pressure. Environments with GPU tensors must move them to CPU before returning from `step()`.
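For example (illustrative helper, assuming PyTorch tensors on the observation):

```python
import torch

def detach_to_cpu(obs):
    """Move any tensor fields on an observation to CPU before it is stored."""
    for name, value in vars(obs).items():
        if isinstance(value, torch.Tensor):
            setattr(obs, name, value.detach().cpu())
    return obs
```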
### Examples

Chess (win/loss), Cursor Plan Mode, Codenames.
## Test Plan
Resolves: #107