It was around December 2017 that I decided to do something about it and started thinking hard about how to do it myself. I brought it up in conversation with friends and started to get a lot of good guidance and advice. Two of them were instrumental in achieving the goal:
I started from zero, with no knowledge of any of the technologies I would use besides Python, and it took me roughly 200 hours spread across 6 months.
I defined my objective as: build a machine that can reliably sort 10–20 types of Lego bricks, without manual feeding, using an image-based neural network classification model.
I was confident I could take the Google Cloud use case and replicate it. As is true of any technology endeavor, along the way I identified a lean path to the objective and made some trade-offs to achieve a reasonable ‘time to market’.
This is part of a 5-blog series covering the mechanical and software design for the Lego Sorter, as well as sharing the training set and some evaluation sets:
1. Lego Sorter using TensorFlow on Raspberry Pi
2. Mechanical Separation (Design, Motors and Sensors)
3. Overview of the Software stack
4. Using Inception V3 to Identify LEGO vs. Generic Bricks
5. Try It Yourself: 2 Big Data Sets so you can Replicate this Project
I will provide more details in the Mechanical and Software blogs, but at a high level, this is how I designed the separator:
I’m using a Motor and Servo HAT, as well as a custom board to control the IR beam sensors and backlight LEDs. I use GPIO and PWM signals in Python to control the movement of the entire machine, and image recognition with OpenCV to detect any shortcomings in the mechanical separation (e.g. two Lego pieces in a single image).
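To make that OpenCV check concrete, here is a minimal sketch of how a separation failure could be detected by counting silhouettes against the backlight. The function name, file name, and threshold values are illustrative assumptions, not the project’s actual code, and would need tuning to the rig’s backlight brightness and camera exposure:

```python
import cv2

def count_pieces(image_path, dark_threshold=40, min_area=500):
    """Count distinct bricks in a backlit frame (hypothetical helper)."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Bricks show up dark against the bright backlight, so invert the threshold.
    _, mask = cv2.threshold(gray, dark_threshold, 255, cv2.THRESH_BINARY_INV)
    # OpenCV 4.x returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Ignore small specks and dust below min_area pixels.
    return sum(1 for c in contours if cv2.contourArea(c) > min_area)

if count_pieces("capture.jpg") > 1:
    print("Separation failure: more than one piece in frame, skipping classification.")
```

The idea is simply that a properly separated frame should contain exactly one large dark blob; anything else gets flagged and re-fed rather than classified.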
I used a retrained Inception V3 model to classify the 11 brick classes. I ran the training with the GPU build of TensorFlow, which leverages my desktop’s CUDA-enabled NVIDIA GPU.
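For reference, this is roughly how a retrained graph of that kind is queried at classification time. The sketch below uses the TensorFlow 1.x API of that era and assumes the default output names of TensorFlow’s retrain.py script (retrained_graph.pb, retrained_labels.txt, the DecodeJpeg/contents:0 input and final_result:0 output tensors); these are assumptions for illustration, not the project’s actual code:

```python
import tensorflow as tf

# Load the frozen, retrained Inception V3 graph (file names are the
# retrain.py defaults; adjust to your own paths).
with tf.gfile.GFile("retrained_graph.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name="")

# One label per line, in the order the classifier outputs scores.
labels = [line.strip() for line in tf.gfile.GFile("retrained_labels.txt")]

# Raw JPEG bytes go straight into the graph's decode node.
image_data = tf.gfile.GFile("brick_capture.jpg", "rb").read()

with tf.Session() as sess:
    preds = sess.run("final_result:0",
                     {"DecodeJpeg/contents:0": image_data})

# Print the top-3 predicted brick classes with their scores.
for i in preds[0].argsort()[::-1][:3]:
    print(labels[i], preds[0][i])
```

On the Pi side, only this inference step needs to run; the heavy retraining pass stays on the CUDA-enabled desktop.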
Disclaimer: Below are the results of the first run, and they are quite exceptional. I do believe there will be significant variation across runs, and I expect the yield to fluctuate in the 75–85% range.
My initial run showed high mechanical and classifier accuracy, but I did see the same drop mentioned in the article when moving from training accuracy to real-world performance.
I came very close to replicating the case, with the key differences being:
Automatic Feeder and Separation: Having an automatic feeder and separation mechanism automated the capture of the training set, which provided a material time saving.
Training Set and Camera: My setup has a single camera and a training set roughly one-third the size.
Overall, this is how the scorecard came out:
This would not have been possible without the great help of these fantastic companies, individuals and organizations: