Hello! 👋 I'm Jake, a final-year Creative Computing student with a passion for programming. My time at IADT has provided me with extensive experience in various technologies and fields, including frontend development, mobile app development, games, and artificial intelligence. To share my enthusiasm for programming with others, I now aspire to become a lecturer. In pursuit of this goal, I'll be completing a master's in Computer Science next year.
A three-part demonstration of the system: first, creating an account, uploading and annotating images, and discussing a previously trained model; then testing the system in a controlled setting using photographs; and finally testing it again in a more realistic setting with my two dogs, Maggie and Lola.
Users annotate their pet photos to train a custom computer vision model capable of recognising their pets.
An example of a trained model: a user annotated and uploaded 235 images, producing a model with 95.6% accuracy.
When the system sees a photo of Lola, it correctly recognises her as 'dog-pug'. It also determines, with a high degree of confidence, that she is Lola and not some other pug.
Here the system predicts 'cat-bengal'. This prediction triggers the Raspberry Pi's buzzer and automatically captures a screenshot of the cat.
Testing the system in a more natural setting. Here Lola was again correctly identified as 'dog-pug', then recognised individually, so no action was taken.
Testing with a different dog. The system sees Maggie and realises that, although she is also a pug, she is not Lola. The Raspberry Pi's buzzer is activated and a screenshot is captured.
Captured screenshots are categorised by species and uploaded to Cloudinary directories unique to each user.
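As a rough sketch of this step, the snippet below shows how a captured screenshot could be uploaded to a per-user, per-species Cloudinary folder using the official Python SDK. The folder naming scheme and the credential placeholders are illustrative assumptions, not RAID's exact configuration.

```python
# Minimal sketch: uploading a captured screenshot to a per-user,
# per-species Cloudinary folder. Folder naming and credentials are
# illustrative placeholders, not RAID's actual setup.
import cloudinary
import cloudinary.uploader

cloudinary.config(
    cloud_name="YOUR_CLOUD_NAME",   # placeholder credentials
    api_key="YOUR_API_KEY",
    api_secret="YOUR_API_SECRET",
)

def upload_screenshot(image_path: str, user_id: str, species: str) -> str:
    """Upload a screenshot into a folder unique to the user and species."""
    result = cloudinary.uploader.upload(
        image_path,
        folder=f"{user_id}/{species}",  # e.g. "user_42/cat-bengal"
    )
    return result["secure_url"]

# Example: upload_screenshot("capture_001.jpg", "user_42", "cat-bengal")
```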
Reviewing some automatically captured screenshots of Lily and Holly, identified as 'cat-bengal' or 'cat-abyssinian'.
Reviewing some automatically captured screenshots of Maggie and Lola, identified as 'dog-pug'.
The Raspberry Pi 4B with its piezo buzzer connected via a breadboard.
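For illustration, a minimal gpiozero sketch of sounding a buzzer from the Pi is shown below; the GPIO pin number and timing are assumptions rather than RAID's actual wiring.

```python
# Minimal sketch of driving a buzzer from the Raspberry Pi with gpiozero.
# The GPIO pin (17) and the one-second duration are assumptions for
# illustration, not RAID's actual wiring or timing.
from time import sleep
from gpiozero import Buzzer

buzzer = Buzzer(17)  # assume the buzzer's signal pin is wired to GPIO17

def sound_alarm(duration: float = 1.0) -> None:
    """Sound the buzzer briefly to scare off an unrecognised animal."""
    buzzer.on()
    sleep(duration)
    buzzer.off()
```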
A more technical explanation of the system, covering authentication, Darknet configuration, image annotation, and the Roboflow integration.
RAID is a system that uses computer vision not only to identify animals by species, but also to distinguish one individual from another of the same species, based on pet images uploaded by the user. It does this in real time via a live camera feed. RAID is intended to act as a deterrent for wild animals or stray pets: a two-stage object detection pipeline first detects the animal's species, then determines whether the animal is the user's pet. When an unknown animal is detected, a buzzer sounds to scare it away, and RAID automatically captures a screenshot of the animal, categorised by species.
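To make the two-stage flow concrete, the sketch below outlines the decision loop on a live camera feed. The stub functions `detect_species` and `is_users_pet`, the 0.5 confidence threshold, and camera index 0 are placeholders standing in for the trained Darknet models, not RAID's actual implementation.

```python
# Simplified sketch of the two-stage decision loop on a live camera feed.
# detect_species and is_users_pet are hypothetical stubs standing in for
# the trained models; the 0.5 threshold and camera index 0 are assumptions.
import cv2

def detect_species(frame):
    """Stage 1 (stub): return (species_label, confidence) for the frame."""
    return None, 0.0  # placeholder; a real model would run inference here

def is_users_pet(frame, species):
    """Stage 2 (stub): return True if the detected animal is the user's pet."""
    return False  # placeholder; a real model would compare individuals here

def monitor(camera_index=0):
    cap = cv2.VideoCapture(camera_index)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        species, confidence = detect_species(frame)
        if species is not None and confidence > 0.5:
            if not is_users_pet(frame, species):
                # Unknown animal: save a screenshot categorised by species.
                # Here RAID would also sound the buzzer and upload the image
                # (see the buzzer and Cloudinary sketches above).
                cv2.imwrite(f"capture_{species}.jpg", frame)
    cap.release()
```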