
Request for Comments: Walking Engine next gen #411

Open
schluis opened this issue Jul 2, 2023 · 2 comments

Comments

@schluis
Contributor

schluis commented Jul 2, 2023

Walking by inverted pendulum

  • Like a Segway: weight shift
  • Stabilization is the basis

Walking with straightened knees

  • Increase the walk hip height to the maximum

Regarding Stabilization

  • Predict impossible steps earlier
    • not only for the next cycle
  • Stabilizing step with a blocked heel -> the heel foot needs to become the swing foot as soon as possible
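Predicting impossible steps more than one cycle ahead could be sketched with the linear inverted pendulum model (LIPM) mentioned above. The following is a minimal, hypothetical illustration, not code from any existing walking engine: all names, thresholds, and the one-dimensional simplification are assumptions.

```rust
// Hypothetical sketch: multi-step feasibility check using the linear
// inverted pendulum model (LIPM). All names and thresholds are invented
// for illustration.

const GRAVITY: f32 = 9.81;

/// One-dimensional LIPM state in the sagittal plane.
#[derive(Clone, Copy, Debug)]
struct PendulumState {
    position: f32, // CoM position relative to the support foot [m]
    velocity: f32, // CoM velocity [m/s]
}

/// Instantaneous capture point: where the foot would have to be placed
/// to bring the pendulum to rest (x + v / omega).
fn capture_point(state: PendulumState, pendulum_height: f32) -> f32 {
    let omega = (GRAVITY / pendulum_height).sqrt();
    state.position + state.velocity / omega
}

/// Integrate the LIPM analytically over one step duration.
fn integrate_step(state: PendulumState, pendulum_height: f32, duration: f32) -> PendulumState {
    let omega = (GRAVITY / pendulum_height).sqrt();
    let (sh, ch) = ((omega * duration).sinh(), (omega * duration).cosh());
    PendulumState {
        position: state.position * ch + state.velocity / omega * sh,
        velocity: state.position * omega * sh + state.velocity * ch,
    }
}

/// Check several planned steps ahead, not only the next cycle: after
/// each step the capture point must stay within the reachable foot
/// placement range, otherwise the plan is flagged as impossible.
fn steps_are_feasible(
    mut state: PendulumState,
    planned_steps: &[f32], // forward step sizes [m]
    pendulum_height: f32,
    step_duration: f32,
    max_reachable_step: f32,
) -> bool {
    for &step in planned_steps {
        state = integrate_step(state, pendulum_height, step_duration);
        if capture_point(state, pendulum_height).abs() > max_reachable_step {
            return false;
        }
        // Support exchange: CoM is now measured from the new support foot.
        state.position -= step;
    }
    true
}
```

Checking the whole remaining step plan this way would let the engine reject a step sequence before the first unstable step is even started, instead of discovering the problem one cycle before it happens.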
@schluis schluis pinned this issue Jul 2, 2023
@schluis schluis changed the title from "Requests for comments: Walking Engine next gen" to "Request for Comments: Walking Engine next gen" Jul 2, 2023
@philipniklas-r
Contributor

As suggested by @schluis, here are my thoughts for a next-gen walking engine:

The coolest idea

Learned walk/motion like in https://arxiv.org/pdf/2304.13653.pdf

  • As the simulator is open source, I am confident that something like a fast, stable walk could be learned. Our (B-Human) walk already showed at the GORE that walking on the tips of the toes is not as harmful as one might think (at least as long as you do not do it the wrong way), so it should only be a question of "can this neural network fit on the NAO?"

Learning simple predictions

  • We tried out predicting leg joint positions with a NN, with good results. The paper will be presented at the symposium
  • Doing the same with more sensor data could be interesting too

Other ideas that are independent of what kind of walk is implemented (except if it is learned)

Classical compensation. Disclaimer: I have NOT tested this idea yet

  • Most problems arise from the fact that joints do not do what they are commanded to. Either the support foot has stuck joints because of the limited strength of the motors, the joints do something else because of wear and tear, or the motor controllers cannot handle simple position errors (which upsets me the most ...)
  • I tried to handle the support foot this year, to prevent its joints from doing stupid stuff (like having stuck joints while the non-stuck joints destabilize the robot even further)
  • Another idea I did not try out yet but could give good results:
    • Apply some of the position (translation) errors from the support foot to the swing foot.
    • Here some examples:
      • When the robot is tilting backwards, most often the knee of the support foot is stuck and has more positive values than requested. This results in a more backward translation and therefore a much bigger forward step than planned. Applying the error to the swing foot would automatically pull the swing foot backward
      • When the robot is tilting forward (worn-out robots like to do that), the support foot's hip pitch is stuck and has more negative values than requested. This results in a more forward translation and therefore a much bigger backward (or much smaller forward) step than planned. Applying the error to the swing foot would automatically pull the swing foot forward
    • This compensation approach could (in theory) give good results for worn-out robots and also stabilize the walk in extreme situations, as it would auto-correct the walk.
  • One disadvantage: the robot might more often unintentionally walk against the ball to stabilize itself
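The error-transfer idea above could be sketched as follows. This is a simplified, untested illustration (matching the disclaimer above): it works in joint space on the pitch joints only, whereas the described idea compares translations, which would require forward kinematics. All struct names, fields, and the gain are invented.

```rust
// Hypothetical sketch of the compensation idea: measure how far the
// support leg's pitch joints deviate from their commanded positions and
// transfer (a fraction of) that error onto the swing leg targets. This
// joint-space version is a simplification of the translation-based idea.

#[derive(Clone, Copy, Debug)]
struct LegJoints {
    hip_pitch: f32,
    knee_pitch: f32,
    ankle_pitch: f32,
}

/// Compensate the swing leg targets with the measured support leg error.
/// `gain` in [0, 1] controls how much of the error is transferred;
/// 1.0 would mirror the full deviation onto the swing leg.
fn compensate_swing_leg(
    commanded_support: LegJoints,
    measured_support: LegJoints,
    swing_target: LegJoints,
    gain: f32,
) -> LegJoints {
    // A stuck support knee with "more positive values than requested"
    // shows up as a positive error here; adding it to the swing targets
    // pulls the swing foot back so the planned step size stays intact.
    LegJoints {
        hip_pitch: swing_target.hip_pitch
            + gain * (measured_support.hip_pitch - commanded_support.hip_pitch),
        knee_pitch: swing_target.knee_pitch
            + gain * (measured_support.knee_pitch - commanded_support.knee_pitch),
        ankle_pitch: swing_target.ankle_pitch
            + gain * (measured_support.ankle_pitch - commanded_support.ankle_pitch),
    }
}
```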

Torso orientation estimation

  • The better the torso orientation estimation is, the better you can handle orientation errors of the swing foot (and prevent it from crashing into the ground)
  • I really dislike the angle estimations from the built-in IMU, as they can often give extremely wrong values; e.g. we had cases where the robot thought it was falling forward, but was actually falling backwards. The pitch orientation was 30 degrees off ...
  • Our currently published estimation is bad too. Side walking causes the roll direction to be off by a few degrees. Also, when turning (letting the robot rotate in the yaw direction), the pitch and roll directions are sometimes off by a few degrees for multiple seconds
  • Once you have a good enough estimation, a prediction is helpful too, to react faster to disturbances or return to normal walking earlier
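A minimal sketch of the estimation-plus-prediction idea, assuming a simple complementary filter over gyroscope and accelerometer pitch: this is one common way to fuse the two signals, not any team's actual estimator, and the cycle time and weights are placeholders.

```rust
// Minimal sketch: complementary filter for torso pitch plus a
// constant-rate prediction. All names and constants are illustrative.

struct PitchEstimator {
    pitch: f32,       // estimated torso pitch [rad]
    gyro_weight: f32, // trust in the integrated gyro (e.g. 0.98)
}

impl PitchEstimator {
    /// Fuse the integrated gyro pitch with the accelerometer-derived
    /// pitch. The gyro is smooth but drifts; the accelerometer is
    /// noisy but absolute, so it slowly corrects the drift.
    fn update(&mut self, gyro_pitch_rate: f32, accel_pitch: f32, dt: f32) {
        let integrated = self.pitch + gyro_pitch_rate * dt;
        self.pitch = self.gyro_weight * integrated + (1.0 - self.gyro_weight) * accel_pitch;
    }

    /// Predict the pitch a short horizon ahead from the current gyro
    /// rate, to react to disturbances before they fully develop.
    fn predict(&self, gyro_pitch_rate: f32, horizon: f32) -> f32 {
        self.pitch + gyro_pitch_rate * horizon
    }
}
```

A more serious estimator would fuse roll and yaw as well and account for the support foot contact, but even this one-axis form shows where the prediction hooks in.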

Reducing the yaw direction rotation speed

  • We are currently reducing the yaw rotation speed based on the state of the robot. For example, when tilted forward, the legs are not allowed to rotate outwards: we observed that the pitch joints often get stuck, and the HipYawPitch joint, which is responsible for this rotation, then rotates the robot even further forward. Therefore we reduce the speed at which this rotation happens. Note: this is not done only at the beginning of the step, but updated in every motion frame, i.e. whenever new joint positions are generated. This increased the stability of worn-out robots significantly. It is one of the few changes from this year that I can highly recommend.
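The per-motion-frame yaw limiting described above could look roughly like this. The function names, limits, and the single tilt threshold are all invented placeholders; the actual implementation presumably uses a more refined state than just the torso pitch.

```rust
// Sketch of per-frame yaw-speed limiting: every motion frame, clamp the
// requested step rotation depending on the current torso tilt, so a
// forward-tilted robot is not rotated even further by the HipYawPitch
// joint. Thresholds and limits are illustrative placeholders.

/// Maximum allowed yaw change per motion frame [rad], as a function of
/// the current torso pitch [rad].
fn max_yaw_delta_per_frame(torso_pitch: f32) -> f32 {
    const NORMAL_LIMIT: f32 = 0.03;
    const REDUCED_LIMIT: f32 = 0.005;
    const TILT_THRESHOLD: f32 = 0.15; // ~8.6 degrees of tilt
    if torso_pitch.abs() > TILT_THRESHOLD {
        REDUCED_LIMIT
    } else {
        NORMAL_LIMIT
    }
}

/// Called every motion frame, not only at step start: move the
/// commanded yaw toward the target, never faster than the limit allows.
fn limited_yaw(current_yaw: f32, target_yaw: f32, torso_pitch: f32) -> f32 {
    let limit = max_yaw_delta_per_frame(torso_pitch);
    current_yaw + (target_yaw - current_yaw).clamp(-limit, limit)
}
```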

@philipniklas-r
Contributor

Another idea:

Preprocessing the FSR Sensor Data. Deciding when to switch the swing and support foot is a hard problem!

Problems

  • When walking on the heels/tips of the toes/<other edge cases>, wrong support foot switches can easily occur
    • This is because both feet measure close to nothing. Once one sole measures some pressure, a wrong switch can happen while the now-new swing foot actually still supports most of the robot's weight
  • The rUNSWift walk is a very hard walk (and as a result the head shakes a lot). Yet it can often happen that both feet measure some pressure, but a support foot switch would make the robot fall over, because the weight is not yet fully on the new support foot

I am currently writing this down for our (B-Human) wiki and noticed that in your current implementation you do:

let left_foot_pressure = context.sensor_data.force_sensitive_resistors.left.sum();
let right_foot_pressure = context.sensor_data.force_sensitive_resistors.right.sum();
let has_support_changed = match self.swing_side {
    Side::Left => left_foot_pressure > context.config.foot_pressure_threshold,
    Side::Right => right_foot_pressure > context.config.foot_pressure_threshold,
};
if has_support_changed && self.t > context.config.minimal_step_duration { ...

This check ignores that the swing foot could have, say, 200 g of pressure while the current support foot has 5000 g. The current code would initiate a support foot switch, which might be too early. This could also explain why your robots start new walking steps very rapidly when falling backwards and occasionally fall randomly to the side/diagonally (but this is just a blind guess on my side).

Proposed Change

  • Compare the pressures of both soles

  • For a detected change, the swing sole must have more pressure than the support sole (kind of similar to what you did before, I think)

  • The swing sole must have a minimum pressure. You are using 200 g for that; when I evaluated this 1.5 years ago, I think 300 g was a better fit.

  • Test whether the sensor weights from rUNSWift (a factor of 0.8 for the outer sensors, 0.3 for the inner sensors) improve the detection. We never did that.

  • If the support foot switches now occur too slowly, you might want to add some kind of prediction. HTWK Robots' version did not work for us (but predicts 2-3 frames earlier), and ours needs some more overhead (and predicts about 1 frame earlier).

  • As an extra, you could add an automatic calibration for the FSRs (i.e. min and max pressures). I don't know if this improves the detection, but it could make it more robust.
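Putting the first four points together, the quoted check could be replaced by something along these lines. The 300 g threshold and the 0.8/0.3 weights come from the discussion above; the sensor ordering, type names, and everything else are assumptions for illustration, not the actual `context`-based implementation.

```rust
// Sketch of the proposed support-switch check: minimum swing-sole
// pressure, comparison of both soles, and rUNSWift-style per-sensor
// weights. Names and the sensor ordering are invented.

#[derive(Clone, Copy)]
enum Side {
    Left,
    Right,
}

/// Four FSR readings per sole, in kilograms: [front_outer, front_inner,
/// rear_inner, rear_outer] (this ordering is an assumption).
type SolePressures = [f32; 4];

/// Weighted sum, emphasizing the outer sensors as rUNSWift does.
fn weighted_pressure(sole: SolePressures) -> f32 {
    const WEIGHTS: [f32; 4] = [0.8, 0.3, 0.3, 0.8];
    sole.iter().zip(WEIGHTS).map(|(pressure, weight)| pressure * weight).sum()
}

fn has_support_changed(swing_side: Side, left: SolePressures, right: SolePressures) -> bool {
    const MINIMUM_SWING_PRESSURE: f32 = 0.3; // 300 g
    let (swing, support) = match swing_side {
        Side::Left => (weighted_pressure(left), weighted_pressure(right)),
        Side::Right => (weighted_pressure(right), weighted_pressure(left)),
    };
    // The swing sole must carry real weight AND more than the support
    // sole, so a 200 g touch against a 5 kg support load never triggers.
    swing > MINIMUM_SWING_PRESSURE && swing > support
}
```

The minimal-step-duration guard from the original snippet would stay in place around this check.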
