
The pygame screen uses a very different coordinate system than the robot's reference coordinate system. Its origin is in the top-left corner of the game window, with positive X to the right and positive Y down. All pygame methods take coordinates in this system, so to avoid confusion between the two, we work with the data in our own coordinates and only transform it to pygame's coordinates at the very end, with custom methods:

# snippet from 'robot.py'
def toFieldCoords(self, pose: Pose):
    # window (pixel) pose -> field pose: recenter on the middle of the screen,
    # convert pixels to field units and shift the heading by -90°
    # (field X points up on the screen, field Y points right)
    return Pose((self.constants.screen_size.half_h - pose.y) * self.constants.HALF_UNIT_MEASURE_LINE / self.constants.PIXELS_2_DEC, 
                (pose.x - self.constants.screen_size.half_w) * self.constants.HALF_UNIT_MEASURE_LINE / self.constants.PIXELS_2_DEC, 
                normalizeDegrees(pose.head - 90))
    
def toWindowCoords(self, pose: Pose):
    # field pose -> window (pixel) pose: the inverse of the transform above
    return Pose(self.constants.screen_size.half_w + pose.y * self.constants.PIXELS_2_DEC / self.constants.HALF_UNIT_MEASURE_LINE, 
                self.constants.screen_size.half_h - pose.x * self.constants.PIXELS_2_DEC / self.constants.HALF_UNIT_MEASURE_LINE, 
                normalizeDegrees(pose.head + 90))


For the next section, the code can be found in the 'robot.py' file; I won't include it in the documentation.


Now we can talk about the robot itself.
Pygame loads the default robot image from the library and finds its dimensions (in pixels). Knowing the robot's real dimensions (in centimeters), it uses the pixel conversion to scale the robot image down to the size of the coordinate system. Another, separate scale factor (the 'scale percent') is applied at the end to get the final size of the displayed robot.
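
As a rough sketch of that scaling step (the file name, dimensions and conversion constants below are placeholders, not the project's actual values):

# sketch: scale the loaded image to the robot's real-world size
import pygame

PIXELS_PER_CM = 3.5        # assumed pixel-to-centimeter conversion
ROBOT_WIDTH_CM = 18.0      # assumed physical robot width
ROBOT_LENGTH_CM = 18.0     # assumed physical robot length
SCALE_PERCENT = 1.0        # the extra 'scale percent' applied at the end

original_image = pygame.image.load('robot_image.png')   # placeholder path

width_px = int(ROBOT_WIDTH_CM * PIXELS_PER_CM * SCALE_PERCENT)
height_px = int(ROBOT_LENGTH_CM * PIXELS_PER_CM * SCALE_PERCENT)
scaled_image = pygame.transform.scale(original_image, (width_px, height_px))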

We store two different instances of the robot image: the original, scaled photo, and the 'rotating' photo that is displayed on the screen. This trick ensures that no image corruption accumulates from rotating the same image over and over. When the robot turns, we rotate a fresh copy of the original.
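
A minimal sketch of that pattern (not the project's exact code) could look like this:

# sketch: always rotate a copy of the pristine original image
import pygame

def rotated_robot(original: pygame.Surface, heading_deg: float, center):
    # rotating the already-rotated surface repeatedly would accumulate blur
    # and clipping, so we start from the untouched original every frame
    rotated = pygame.transform.rotate(original, heading_deg)
    # the rotated surface is larger, so re-center it on the robot's position
    rect = rotated.get_rect(center=center)
    return rotated, rect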

But how does the robot move? Actual movement requires a bit of math. Knowing the maximum linear velocity of the robot (in cm/s), we multiply that number by the value of the left joystick (in the [-1, 1] interval) to scale it according to the user input. Let's use the vertical ('Y') axis of the left joystick on our controller for this.
Turning is done with a similar technique: knowing the robot's maximum angular velocity (in rad/s), we multiply it by the horizontal ('X') axis value of the right joystick and get a scaled angular velocity.
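
In code, that step is roughly the following (the axis indices and the two maximum velocities are assumptions, not values taken from the project):

# sketch: scale the maximum velocities by the joystick input
import pygame

MAX_LINEAR_VEL = 30.0     # cm/s -- assumed value
MAX_ANGULAR_VEL = 3.0     # rad/s -- assumed value

pygame.init()
pygame.joystick.init()
joystick = pygame.joystick.Joystick(0)   # assumes a controller is connected
joystick.init()

left_y = joystick.get_axis(1)    # left stick vertical axis (index varies by controller)
right_x = joystick.get_axis(2)   # right stick horizontal axis (index varies by controller)

linear_velocity = -left_y * MAX_LINEAR_VEL      # minus: pushing the stick up reads negative
angular_velocity = right_x * MAX_ANGULAR_VEL    # sign depends on the chosen turning convention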

Combining this with the kinematics of a two-wheel differential drive robot (also known as a tank drive), we decompose the linear velocity into its 'x' and 'y' components. Converting the three resulting velocities (x, y and angular) into distances over the frame time and adding them to the last position, we get our new position.
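
The pose update itself is only a few lines; here is a sketch under those definitions (dt, the heading convention and the variable names are assumptions):

# sketch: integrate the velocities over one frame
import math

def update_pose(x, y, heading, linear_vel, angular_vel, dt):
    # decompose the linear velocity along the current heading,
    # then integrate all three velocities over the frame time dt
    new_x = x + linear_vel * math.cos(heading) * dt
    new_y = y + linear_vel * math.sin(heading) * dt
    new_heading = heading + angular_vel * dt
    return new_x, new_y, new_heading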

This is called arcade drive, which is also a robot-centric drive. This means that if you push the left joystick forward, the robot goes forward, so it's like driving from the robot's perspective. Another mode, commonly used for holonomic (omnidirectional) robots, is the field-centric (or driver-centric) mode.

For a two-wheel drive, which is non-holonomic (it can't go sideways without turning), field-centric sounded quite unintuitive, but we made it work. Firstly, in this driving mode we got rid of the right joystick. Using only the left joystick, we treat the stick as the tip of a vector in its own coordinate system. Combining the 'x' and 'y' inputs, we define the linear velocity as hypot(x, y) and the target robot orientation as atan2(y, x). Because this gives us an orientation rather than an angular velocity, a PID controller is used to get the angular velocity needed for the same calculations as in the robot-centric mode.
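
A stripped-down sketch of that idea, using only a proportional term where the simulator uses a full PID (the gain and names here are illustrative):

# sketch: field-centric input -> linear and angular velocity
import math

KP = 2.0   # illustrative proportional gain

def field_centric(stick_x, stick_y, current_heading, max_linear_vel):
    # the stick is treated as a vector: its length is the speed request,
    # its direction is the target heading on the field
    linear_velocity = math.hypot(stick_x, stick_y) * max_linear_vel
    target_heading = math.atan2(stick_y, stick_x)

    # heading error wrapped to [-pi, pi], fed to the (here: P-only) controller
    error = math.atan2(math.sin(target_heading - current_heading),
                       math.cos(target_heading - current_heading))
    angular_velocity = KP * error
    return linear_velocity, angular_velocity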

So, knowing the linear and angular velocities of the robot, we have everything we need to move it anywhere on the field!

Knowing the robot's position and its dimensions, drawing a rectangle around it (like a hitbox) is trivially easy. This 'box' is used to check whether the robot gets out of the screen. When the screen border is on, the robot isn't allowed to exit the user's field of view. Before updating the position, we check if the new position violates the border; if so, we just don't move in that direction. If the robot is already outside the border when you activate the screen border, it will back off onto the screen, depending on which direction it sticks out the furthest. If it's completely outside the screen, it'll just teleport back to the center.
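
One way to express that check with pygame rectangles (a sketch, not the project's actual logic):

# sketch: keep the robot's hitbox inside the visible screen area
import pygame

def keep_on_screen(robot_rect: pygame.Rect, screen_rect: pygame.Rect) -> pygame.Rect:
    clamped = robot_rect.copy()
    if not clamped.colliderect(screen_rect):
        # completely off-screen: teleport the hitbox back to the center
        clamped.center = screen_rect.center
    else:
        # partially off-screen: push it back just enough to fit inside
        clamped = clamped.clamp(screen_rect)
    return clamped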

How does it teleport? By setting the pose. You can also set a pose for the robot to be at, btw :D

This class is also responsible for displaying the robot position information (in the bottom-right corner) and the cursor position (in the bottom-left corner), both in field coordinates. This is done exactly like the 'x' and 'y' text renders from the background.
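
That text rendering boils down to something like this (the font, size and color are illustrative; the Pose fields match the snippet above):

# sketch: render a pose as text anchored to a screen corner
import pygame

pygame.font.init()
font = pygame.font.SysFont('Arial', 16)   # illustrative font choice

def draw_pose_text(screen: pygame.Surface, pose, bottom_right):
    text = font.render(f'x: {pose.x:.1f}  y: {pose.y:.1f}  head: {pose.head:.1f}',
                       True, (255, 255, 255))
    screen.blit(text, text.get_rect(bottomright=bottom_right))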

next page →

← previous page