
_reward_feet_distance() return issue #37

@likeLyl

Description

Hi,

Thank you for the excellent work.

I'm running the default configuration with g1_ground and found that _reward_feet_distance() is a little weird, as shown below:

    def _reward_feet_distance(self):
        left_foot_pos = self.rigid_body_states[:, self.left_foot_indices, :3].clone()
        right_foot_pos = self.rigid_body_states[:, self.right_foot_indices, :3].clone()
        feet_distances = torch.norm(left_foot_pos - right_foot_pos, dim=-1)
        reward = tolerance(feet_distances, [0, 0.4], 0.38, 0.05)
        return (feet_distances > 0.9).squeeze(1)

The function seems to encourage the feet to maintain a certain distance, but it returns (feet_distances > 0.9).squeeze(1), which is not related to the previous computation. Should it return reward instead?
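For comparison, here is a minimal, self-contained sketch of what returning `reward` would look like. The `tolerance` helper below is hypothetical (the repo's actual implementation isn't shown here); it is modeled on the dm_control-style shaping where the reward is 1 inside the bounds and decays smoothly to `value_at_margin` at a distance of `margin` outside them:

```python
import torch

def tolerance(x, bounds, margin, value_at_margin):
    # Hypothetical stand-in for the repo's tolerance() helper:
    # 1.0 inside [lower, upper], smooth Gaussian-style decay outside,
    # reaching value_at_margin when x is `margin` past the bounds.
    lower, upper = bounds
    in_bounds = (x >= lower) & (x <= upper)
    d = torch.where(x < lower, lower - x, x - upper) / margin
    decay = torch.exp(torch.log(torch.tensor(value_at_margin)) * d ** 2)
    return torch.where(in_bounds, torch.ones_like(x), decay)

def reward_feet_distance(left_foot_pos, right_foot_pos):
    # Per-env Euclidean distance between the two feet.
    feet_distances = torch.norm(left_foot_pos - right_foot_pos, dim=-1)
    reward = tolerance(feet_distances, [0.0, 0.4], 0.38, 0.05)
    # Return the shaped reward, not the (feet_distances > 0.9) mask.
    return reward.squeeze(1)  # shape: (num_envs,)

# Two toy envs: feet 0.2 m apart (inside bounds) and 1.0 m apart (outside).
left = torch.tensor([[[0.0, 0.1, 0.0]], [[0.0, 0.5, 0.0]]])
right = torch.tensor([[[0.0, -0.1, 0.0]], [[0.0, -0.5, 0.0]]])
r = reward_feet_distance(left, right)
```

With this version the first env gets the full reward of 1.0 and the second gets a small value, whereas the current `(feet_distances > 0.9).squeeze(1)` returns a boolean mask that ignores `reward` entirely.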

I'm not sure whether I'm misunderstanding the function or whether this is a bug. If it's intentional, could you explain the design?

Thanks for your time, and for the great work on this project.
