Robot Learns to Play Badminton – And Might Train Human Athletes
Swiss scientists have built a quadruped robot that can play badminton against human opponents. The robot, ANYmal-D, shows that hand-eye coordination, a capability that has long proven challenging in robotics, can be engineered into a machine.
Why researchers created the robot
Robotic limbs have evolved to offer impressive mobility and dexterity, but the real difficulty lies in synchronising them with visual sensors capable of simulating ultra-fast reflexes: a vital function for tasks like catching or dodging a projectile. The human combination of eyes, nervous system, and brain has tended to be much better at this kind of fast focus and rapid visual processing.
Nevertheless, a research team at ETH Zürich set out to develop a robot with reflexes fast enough to serve as a capable sporting partner for a human. According to their paper in Science Robotics, ANYmal-D is built on a four-legged platform originally developed by the Swiss robotics company ANYbotics for use in industrial settings.
The team added a dynamic arm to hold a racket, as well as a stereoscopic camera to track the shuttlecock in motion and pick up other environmental cues. They then trained ANYmal-D’s controller, which is the “brain” that processes sensor information and sends commands to motors or actuators, using reinforcement learning algorithms.
These algorithms let the robot try out actions in a virtual environment, learning effective movement strategies by using rewards and penalties as feedback. In this instance, the virtual environment was a badminton court, and the robot used it to learn several different skills, including predicting the shuttlecock’s trajectory and adjusting its camera to follow it.
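The reward-and-penalty feedback described above can be illustrated with a toy scalar reward function. This is a hypothetical sketch, not the reward terms from the ETH Zürich paper: the function names, weights, and inputs here are invented for illustration.

```python
def reward(hit_success: bool, tracking_error: float, joint_effort: float) -> float:
    """Toy RL reward: a bonus for striking the shuttlecock, with penalties
    for losing visual track of it and for wasteful actuator effort.
    The weights below are arbitrary illustrative values."""
    r = 0.0
    if hit_success:
        r += 10.0                  # sparse bonus for a successful hit
    r -= 2.0 * tracking_error      # penalise camera tracking error (metres)
    r -= 0.1 * joint_effort        # penalise motor effort to discourage flailing
    return r
```

During training, the learning algorithm adjusts the controller to favour actions that accumulate higher reward over many simulated rallies.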
ANYmal-D was equipped with a “perception noise model” that allowed it to compare its simulated learning experiences with real-time data when facing human opponents. Through this process, ANYmal-D learned to reposition itself toward the centre of the court after each shot and to stand up on its hind legs to give its camera a better view of the shuttlecock.
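A perception noise model of this kind typically corrupts the simulator's perfect observations so the policy learns to cope with real sensor imperfections. The sketch below is a minimal illustration under that assumption; the parameter values and the function itself are invented, not taken from the paper.

```python
import random

def noisy_observation(true_position, noise_std=0.05, dropout_prob=0.1):
    """Toy perception-noise model: perturb the simulator's exact shuttlecock
    position with Gaussian noise, and occasionally drop the observation
    entirely, as a real stereo camera might miss a frame."""
    if random.random() < dropout_prob:
        return None  # frame lost: shuttlecock not detected this step
    return [p + random.gauss(0.0, noise_std) for p in true_position]
```

Training against observations degraded this way lets the controller's behaviour in simulation transfer more reliably to noisy real-time camera data.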
It also learned not to attempt a return if doing so could cause it to harm itself. Researcher Yuntao Ma said a key lesson ANYmal-D had to learn was to get its speed right when approaching the shuttlecock.
“When it moves slowly, the chances of a successful play are lower,” Ma told Ars Technica. “When it moves fast, the camera gets shaky, which increases the margin of error in tracking the shuttlecock. It’s a trade-off, and we wanted it to learn resolving such trade-offs.”
ANYmal-D eventually learned to hit 10 successive shots in a rally against a human opponent, and can reach a maximum executed swing velocity of 12.06 m/s, demonstrating the feasibility of autonomous sports-playing robots as training tools for athletes.
However, its reaction time is still constrained by several factors: a limited field of view that restricts how long it can track the shuttlecock, the margin of error in positioning introduced by its camera, and the maximum speed of its actuators. As a result, ANYmal-D struggles to return fast or aggressive shots, though these limitations could be mitigated with hardware upgrades.
More ways to use this research method
“Beyond badminton, the method offers a template for deploying legged manipulators in other dynamic tasks where accurate sensing and rapid, whole-body responses are both critical,” the researchers said in a statement.
Curious who’s building the bots that could beat you at your next badminton match? eWeek rounds up the seven robotics companies to watch in 2025. Also, check out Gemini’s robots, which are performing slam dunks and making salads.
The post Robot Learns to Play Badminton – And Might Train Human Athletes appeared first on eWEEK.