Libraries and examples for using the AIY Vision Kit with CogniFly (or, really, any RPI Zero...)
Before you start, make sure to back up your current RPI Zero W SD card (instructions here). After that, you can restore the image supplied in this release; the same release also contains the dataset and the TensorFlow checkpoints used in these step-by-step instructions. All notebooks were created with Google Colab, so there's no need for a computer with a GPU. When these scripts were created, there was a problem with the AIY Vision Bonnet that forced the camera to be connected directly to the RPI Zero W. Everything here therefore expects the camera to be connected directly to the RPI Zero W, and the original examples from aiyprojects-raspbian will not work without some "little" modifications.
- Start by collecting images for the dataset (Darth_Vader_Convert_RPI_Videos_2_PNG_images.ipynb)
- Annotate the images (Darth_Vader_Annotate_Images.ipynb)
- Train the model (actually transfer learning, based on the TensorFlow Object Detection API; Darth_Vader_Training.ipynb)
- Export the model and compile it (Darth_Vader_Exporting_Testing_Compiling.ipynb)
- And that's it: you just need to deploy it to the AIY Vision Kit (the supplied image already has the model and all the files needed to test it)!
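The first step above builds the dataset out of videos recorded on the RPI. A minimal sketch of the subsampling idea behind that conversion (the notebook's exact procedure may differ, and `frame_indices` is a hypothetical helper, not something from the notebooks): keep one frame every few seconds so consecutive PNGs aren't near-duplicates of each other.

```python
# Hypothetical helper: pick which video frames to save as PNG dataset images.
def frame_indices(total_frames, fps, step_s=1.0):
    """Return indices of the frames worth keeping, one every step_s seconds."""
    if fps <= 0 or step_s <= 0:
        raise ValueError("fps and step_s must be positive")
    step = max(1, round(fps * step_s))  # frames to skip between saved images
    return list(range(0, total_frames, step))

# Example: a 10 s clip recorded at 30 fps, sampled every 2 s -> 5 frames.
print(frame_indices(300, 30, 2.0))  # [0, 60, 120, 180, 240]
```

The actual frame extraction (e.g. with OpenCV or ffmpeg) would then read only those indices and write each one out as a PNG.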
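For the annotation step, the TensorFlow Object Detection API tooling commonly consumes Pascal VOC-style XML, one file per image with each bounding box in absolute pixel coordinates. A hedged sketch of generating such a file with the standard library (the annotation notebook may store its labels in a different format; `voc_annotation` and the "darth_vader" label are illustrative, not taken from the notebooks):

```python
import xml.etree.ElementTree as ET

# Hypothetical sketch: build a Pascal VOC-style annotation for one image.
def voc_annotation(filename, width, height, boxes):
    """boxes: list of (label, xmin, ymin, xmax, ymax) in pixel coordinates."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    size = ET.SubElement(root, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    for label, xmin, ymin, xmax, ymax in boxes:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = label
        bndbox = ET.SubElement(obj, "bndbox")
        for tag, value in zip(("xmin", "ymin", "xmax", "ymax"),
                              (xmin, ymin, xmax, ymax)):
            ET.SubElement(bndbox, tag).text = str(value)
    return ET.tostring(root, encoding="unicode")

xml_text = voc_annotation("frame_0001.png", 640, 480,
                          [("darth_vader", 120, 80, 360, 400)])
```

Annotations in this shape are what the usual VOC-to-TFRecord conversion scripts expect before training.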
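When testing the exported model, keep in mind that TensorFlow Object Detection API models report boxes as normalized `[ymin, xmin, ymax, xmax]` values in the 0..1 range. A small hypothetical helper (`to_pixels` is not from the notebooks) to map a detection back to pixel coordinates on the original frame:

```python
# Convert one normalized [ymin, xmin, ymax, xmax] detection box to the
# (left, top, right, bottom) pixel rectangle on the original image.
def to_pixels(box, img_width, img_height):
    ymin, xmin, ymax, xmax = box
    return (int(xmin * img_width), int(ymin * img_height),
            int(xmax * img_width), int(ymax * img_height))

# A detection covering the middle of a 640x480 frame:
print(to_pixels((0.25, 0.25, 0.75, 0.75), 640, 480))  # (160, 120, 480, 360)
```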