Got the leader arm together and working.
It’s still pretty awkward to use, but learning it is like a video game.
The next recommended step is to train an ACT policy, which is not a pretrained model that needs fine-tuning — it learns to mimic your teleoperation from scratch. So the plan is to capture around 30 episodes of teleop-ing the same task with the robot arm, while introducing a little variability, like shifting the lighting & changing exact positions.
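Conceptually, each episode is just the leader arm's joint positions sampled at a steady rate while I demonstrate the task. A rough sketch of that idea (not the real library's API — `read_leader_joints` is a made-up stand-in for however your setup reads the arm):

```python
import time

def read_leader_joints():
    # Hypothetical stand-in for reading the leader arm's six joint angles.
    return [0.0] * 6

def record_episode(fps=30, seconds=10):
    """Sample the leader arm's joints at a fixed rate for one teleop episode."""
    frames = []
    for _ in range(fps * seconds):
        frames.append(read_leader_joints())
        time.sleep(1 / fps)  # hold a steady sampling rate
    return frames

# The plan: ~30 episodes of the same task, varying lighting & positions a bit.
# episodes = [record_episode() for _ in range(30)]
```

The policy then trains on those stacked frames, which is why a little variability between runs matters: it keeps the model from memorizing one exact trajectory.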
Ridiculously, the part I’m stuck on now is deciding what task to train it to do.
In the meantime, I had Cursor whip me up a little user interface so I don’t have to keep entering commands manually. I’ve got the Raspberry Pi running the arm connected to a small touchscreen I used with my old CNC machine. Got that screen rigged up with a 3D-printed clamp alongside the leader and follower arms, which are also clamped to the table.
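Under the hood the UI is nothing fancy — each touchscreen button just runs one of the shell commands I was typing by hand. A minimal sketch of that dispatch idea (the command names here are placeholders, not my actual commands):

```python
import subprocess

# Hypothetical command map — each button on the touchscreen runs one of these.
COMMANDS = {
    "teleop": ["echo", "starting teleop"],
    "record": ["echo", "recording episode"],
}

def run(name):
    """Dispatch a named command, the way a touchscreen button press would."""
    result = subprocess.run(COMMANDS[name], capture_output=True, text=True)
    return result.stdout.strip()
```

Wrap those in big buttons with any GUI toolkit and you've got a control panel that works with fat fingers on a tiny screen.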
It’s starting to feel like the future..! Just need to figure out what to train it to do.
I bet this will be a major problem in the future: folks won't be able to figure out useful tasks for their bots to do. So they'll rent them out to the folks who do know what to do with them, who we'll call Entrepreneurs.
