I’ve just assembled Wheelson and I am interested in learning how to code it for autonomous driving. It’s advertised in the creator’s booklet that I can:
- learn how computer vision works
- calibrate the camera
- make my car navigate a road autonomously
- recognise a QR code
- recognise objects via camera and image processing algorithms
Reading through existing posts suggests that QR-code recognition on Wheelson is something the team is still working on. Maybe I could program one of these codes to trigger an autonomous driving mode (as explored below).
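If that pans out, the kind of logic I have in mind looks something like this rough Python sketch. To be clear, `handle_qr` and the mode names are my own invention, not a real Wheelson API — the idea is just that a decoded QR payload selects a driving mode:

```python
# Rough sketch: map decoded QR payloads to driving modes.
# The payload strings and mode names are hypothetical placeholders
# for whatever the Wheelson firmware eventually exposes.

MODES = {"AUTO": "autonomous", "MANUAL": "manual", "STOP": "idle"}

def handle_qr(payload, current_mode):
    """Return the new driving mode for a decoded QR payload.

    Unrecognised payloads leave the current mode unchanged, so
    random QR codes in view can't put the car in a weird state.
    """
    return MODES.get(payload, current_mode)
```

So showing the car an "AUTO" card would flip it into autonomous mode, and anything it can't decode would be ignored.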
I’m guessing that object recognition is also something the team is still working on? When my Wheelson is connected to CircuitBlocks, no VISION component appears in the left pane, even though a promotional video says that devices with cameras should display a VISION category for programming.
What about calibrating the camera? What does that entail?
Where can I learn more about how computer vision works?
Another newbie question: how does uploading code onto Wheelson work? Running code while Wheelson is connected over the cable works fine, except for Blinky, which returns a range error. But disconnecting the cable stops everything, and since the stock firmware was overwritten when MicroPython was installed, I no longer have access to the firmware’s 6-item menu. How does this all work? None of the guides have helped me understand the workflow of coding, uploading, and then triggering/accessing functions via the device’s on-screen navigation.
My main goal is to have Wheelson navigate autonomously like an on-road vehicle: following trail markers (for example, keeping to the right of a trail of clothes pegs, and stopping/starting whenever a stop sign enters/leaves its field of vision) and avoiding collisions (for example, when a random object is placed in front of Wheelson).
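As a rough sketch of the behaviour I’m imagining (pure decision logic in Python — the vision inputs like `stop_sign_visible` and `peg_offset` are hypothetical placeholders for whatever camera processing becomes available, not anything Wheelson provides today):

```python
# Sketch of a simple drive-decision step, assuming some vision layer
# can report: is a stop sign visible, is an obstacle ahead, and where
# the peg trail sits in the frame (-1.0 = far left .. 1.0 = far right).

TARGET = -0.5   # keep the peg trail in the left half of the frame
GAIN = 1.0      # proportional steering gain (made-up tuning value)

def decide(stop_sign_visible, obstacle_ahead, peg_offset):
    """Return (command, steering) where steering > 0 means turn right.

    If the trail sits right of where we want it (peg_offset > TARGET),
    steer right so the trail drifts left in the camera view, i.e. the
    car stays to the right of the pegs.
    """
    if stop_sign_visible or obstacle_ahead:
        return ("stop", 0.0)
    steering = GAIN * (peg_offset - TARGET)
    return ("drive", max(-1.0, min(1.0, steering)))  # clamp to [-1, 1]
```

Even if the real vision calls end up looking completely different, this is the shape of the control loop I’d like to get to.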