If you missed our Kickstarter, you can pre-order your Pixy here. Thank you!
"This is the single most important robotics product since the Arduino." Ted Macy, Contributing Editor, Robot Magazine
"Charmed Labs is bringing the eventual robot uprising one step closer with their camera sensor." Chris-Rachael Oseland, Austin Post
"This vision sensor could be the future eyes of robots." Mashable.com
"The sheer power, flexibility and ease of use of Pixy could kick-start a whole new generation of robotics." Codeduino.com
"It’s revolutionary because of its speed and simplicity." StartupsFM
Update: we recently made another video! It shows Pixy playing with some lasers, and potentially bugging some cats, although no actual cats were annoyed in the video. Watch below:
Image sensors are useful because they are so flexible. With the right algorithm, an image sensor can sense or detect practically anything. But image sensors have two drawbacks: 1) they output lots of data, dozens of megabytes per second, and 2) processing this amount of data can overwhelm many processors. Even if the processor can keep up with the data, much of its processing power won't be available for other tasks.
Pixy addresses these problems by pairing a powerful dedicated processor with the image sensor. Pixy processes images from the image sensor and only sends the useful information (e.g. purple dinosaur detected at x=54, y=103) to your microcontroller. And it does this at frame rate (50 Hz). The information is available through one of several interfaces: UART serial, SPI, I2C, digital out, or analog out. So your Arduino or other microcontroller can talk easily with Pixy and still have plenty of CPU available for other tasks.
It's possible to hook up multiple Pixys to your microcontroller -- for example, a robot with 4 Pixys and 360 degrees of sensing. Or use Pixy without a microcontroller and use the digital or analog outputs to trigger events, switches, servos, etc.
Purple dinosaurs (and other things)
Pixy uses a hue-based color filtering algorithm to detect objects. Most of us are familiar with RGB (red, green, and blue) as a way to represent colors. Pixy calculates the hue and saturation of each RGB pixel from the image sensor and uses these as its primary filtering parameters. This matters because changes in lighting and exposure can have a frustrating effect on color filtering algorithms, causing them to break, but the hue of an object remains largely unchanged under such changes. As a result, Pixy's filtering algorithm is robust to lighting and exposure changes, and significantly better in this respect than previous versions of the CMUcam.
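To see why hue is a good filtering parameter, consider the same colored pixel under bright and dim lighting. A minimal sketch using Python's standard `colorsys` module (this is an illustration of the general idea, not Pixy's actual firmware code):

```python
import colorsys

def hue_sat(r, g, b):
    """Convert an 8-bit RGB pixel to (hue, saturation), both in 0..1."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h, s

# The same orange object under bright and dim lighting: the
# brightness changes a lot, but the hue is essentially unchanged.
bright = hue_sat(255, 128, 0)
dim = hue_sat(128, 64, 0)
print(bright[0], dim[0])  # nearly identical hues
```

Filtering on hue (and saturation) rather than raw RGB values is what lets a color signature keep matching as the lighting shifts.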
Seven color signatures
Pixy remembers up to 7 different color signatures, which means that if you have 7 different objects with unique colors, Pixy’s color filtering algorithm will have no problem identifying them. If you need more than seven, you can use color codes (see below).
Hundreds of objects
Pixy can find literally hundreds of objects at a time. It uses a connected components algorithm to determine where one object begins and another ends. Pixy then compiles the sizes and locations of each object and reports them through one of its interfaces (e.g. SPI).
50 frames per second
What does “50 frames per second” mean? In short, it means Pixy is fast. Pixy processes an entire 640x400 image frame every 1/50th of a second (20 milliseconds), which means you get a complete update of all detected objects' positions every 20 ms. At this rate, tracking the path of a falling or bouncing ball is possible. (A ball traveling at 30 mph moves less than a foot in 20 ms.)
Teach it the objects you're interested in
Pixy is unique because you can physically teach it what you are interested in sensing. Purple dinosaur? Place the dinosaur in front of Pixy and press the button. Orange ball? Place the ball in front of Pixy and press the button. It’s easy, and it's fast.
More specifically, you teach Pixy by holding the object in front of its lens while holding down the button located on top. While you do this, the RGB LED under the lens provides feedback about the object directly in front of Pixy; for example, the LED turns orange when an orange ball is placed in front of the lens. When you release the button, Pixy generates a statistical model of the colors contained in the object and stores it in flash. From then on, it uses this model to find objects with similar color signatures in each frame.
Pixy can learn seven color signatures, numbered 1-7. Color signature 1 is the default signature. To teach Pixy the other signatures (2-7) requires a simple button pressing sequence.
About the cost of two sonar sensors
We’ve done our best to keep the cost of Pixy as low as possible. Improvements in technology deserve much of the credit, but this Kickstarter campaign is a big help also. The Kickstarter funds allow us to manufacture in sufficient quantity to get the parts and manufacturing costs down. The result is that Pixy is available to a wider audience, which has always been the point of the CMUcam: to put a capable, easy to use vision sensor in the hands of lots of people.
PixyMon lets you see what Pixy sees
PixyMon is an application that runs on your PC or Mac. It allows you to see what Pixy sees, either as raw or processed video. It also allows you to configure your Pixy, set the output port and manage color signatures. PixyMon communicates with Pixy over a standard mini USB cable.
PixyMon is great for debugging your application. You can plug a USB cable into the back of Pixy and run PixyMon and then see what Pixy sees while it is hooked to your Arduino or other microcontroller -- no need to unplug anything. PixyMon is open source, like everything else. It's written using the Qt framework.
What’s a “color code”?
A color code (CC) is two or more color tags placed close together. Pixy can detect and decode CCs and present them as special objects. CCs are useful if you have lots of objects you want to detect and identify (i.e. more than could be detected with the seven separate color signatures alone.)
In the video, we created CCs with 2 color tags using 4 different colors (identified by teaching Pixy 4 different color signatures). Depending on your requirements, such a scheme (2 tags, 4 colors) can detect up to 12 unique objects. CCs with 3, 4 and 5 tags and/or more different colors are possible and can allow for many, many more unique objects. (In fact, thousands of unique codes are possible by using CCs with 5 tags and 6 colors. Hello inkjet printer!)
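The counting behind those numbers can be sketched with a simple model: a code is a sequence of tags in which adjacent tags use different colors. This is an illustrative assumption on our part, but it reproduces both figures above (the real decoder may fold together additional ambiguous cases, such as codes that read the same in reverse):

```python
def color_code_count(colors, tags):
    """Number of distinct color codes with `tags` tags drawn from
    `colors` signatures, assuming adjacent tags must differ in color.
    Any color can start the code; each later tag has (colors - 1)
    choices, since it only has to differ from its neighbor."""
    return colors * (colors - 1) ** (tags - 1)

print(color_code_count(4, 2))  # 12 -- the 2-tag, 4-color scheme
print(color_code_count(6, 5))  # 3750 -- "thousands" with 5 tags, 6 colors
```

The count grows geometrically with the number of tags, which is why a modest palette of color signatures goes such a long way.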
Why Color Codes?
As noted above, CCs let you detect and identify more objects than the seven separate color signatures alone would allow. CCs also improve detection accuracy by decreasing false detections: there is a low probability that specific colors will occur both in a specific order and close together by chance. The drawback is that you need to place a CC on each object you're interested in detecting. Often the object you're interested in (a yellow ball, a purple toy) has a unique color signature of its own, and CCs aren't needed. Objects with CCs and objects without CCs can be used side by side with no problems, so you are free to use CCs for some objects and not others.
CCs give you an accurate angle estimate of the object (in addition to the position and size). This is a computational “freebie” that some applications may find useful. The angle estimate, decoded CCs, regular objects and all of their positions and sizes are provided at 50 frames per second.
CCs might be particularly useful for helping a robot navigate. For example, an indoor environment with CCs uniquely identifying each doorway and hallway would be both low-cost and robust.
- Rich LeGrand has been playing with robots and embedded systems for 25 years. He started Charmed Labs in 2002 with the goal of “bringing advanced technologies to new audiences by making the technologies easy to use and affordable. And to have fun in the process.” (That's Pixy!)
- Scott Robinson started working on Pixy as a graduate student at CMU. His focus has typically been on embedded systems, AI, and computer vision, which is why the CMUcam has been such a great fit for him. He's now working at Sandia National Laboratories as an R&D Engineer and continues to provide frequent updates to the CMUcam project. You can check out his blog at stackabuse.com.
- Anthony Rowe is a faculty member at Carnegie Mellon University who has worked in the area of embedded systems for more than a decade. His group at CMU studies the design of next-generation connected sensing and control systems. Part of this work included earlier versions of the CMUcam, which we hope to take to the next level through this project.
How will the funds be used?
In order to get the pricing down for Pixy, we need to manufacture in sufficient quantities (1,000 or more). The funds will be used to buy parts and to make manufacturing and test fixtures for future builds.
- Processor: NXP LPC4330, 204 MHz, dual core
- Image sensor: Omnivision OV9715, 1/4", 1280x800
- Lens field-of-view: 75 degrees horizontal, 47 degrees vertical
- Lens type: standard M12 (several different types available)
- Power consumption: 140 mA typical
- Power input: USB input (5V) or unregulated input (6V to 10V)
- RAM: 264K bytes
- Flash: 1M bytes
- Available data outputs: UART serial, SPI, I2C, USB, digital, analog
- Dimensions: 2.1" x 1.75" x 1.4"
Pixy running your algorithm
Pixy will ship with the software and firmware that will do everything seen in the video and described here. You can use this software and firmware as-is (we expect 90% of users to do this). Or you can build on it, modify it, or completely replace it. Pixy is a good, low-cost, general purpose vision platform for running other algorithms -- a QR code decoder, a face detector, an interactive pointing device (a la mouse), or something entirely different and new. All software and firmware is open source and GPL licensed. All hardware is open under the Open Source Hardware License. (This means you can do pretty much anything you want -- even sell what you've created.)
Communication (protocols and such)
Pixy communicates through UART, SPI, I2C, digital or analog output. You can configure which interface you want to use through PixyMon. The serial interfaces (UART, SPI, I2C) use a simple binary protocol. Every 20 ms, Pixy outputs a list of objects it has detected with each object represented by an "Object Block" (see below).
- The protocol is data-efficient binary.
- The objects in each frame are sorted by size, with the largest objects sent first.
- You can configure the maximum number of objects sent per frame.
- I2C relies on polling to receive updates.
- USB is also available and uses a different protocol (the same protocol PixyMon uses).
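To give a feel for the serial protocol, here is a sketch of decoding one Object Block. We're assuming the documented layout of a sync word followed by six 16-bit words (checksum, signature, x, y, width, height); check this against the protocol reference for your firmware version before relying on it:

```python
SYNC = 0xAA55  # sync word that precedes each normal object block

def parse_block(words):
    """Decode one object block from the six 16-bit words that follow
    the sync word: checksum, signature, x, y, width, height.
    The checksum is the 16-bit sum of the five payload words."""
    checksum, sig, x, y, w, h = words
    assert checksum == (sig + x + y + w + h) & 0xFFFF, "bad checksum"
    return {"signature": sig, "x": x, "y": y, "width": w, "height": h}

# Example: signature 1 detected at (54, 103), 20x10 pixels
block = parse_block([1 + 54 + 103 + 20 + 10, 1, 54, 103, 20, 10])
print(block)
```

On an Arduino you would normally let the Pixy library handle this framing for you; the sketch just shows how little data each detection costs, six words per object per frame.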
No microcontroller necessary
Pixy can also use its simple digital and analog outputs to communicate with simple devices such as relays, motors, and lights.
- When communicating via analog, the analog output represents the X or Y position of the largest detected object of a specified color signature that exceeds a specified size.
- When communicating through digital, the digital output goes high when an object of a specified color signature exceeds a specified size.
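The two output rules above can be sketched in a few lines. The `frame_width` and 0..3.3 V scaling here are illustrative assumptions, not the measured behavior of the hardware:

```python
def digital_out(objects, signature, min_size):
    """High (True) when any detected object of `signature`
    exceeds `min_size` pixels."""
    return any(o["sig"] == signature and o["size"] > min_size
               for o in objects)

def analog_out(objects, signature, min_size, frame_width=640, vmax=3.3):
    """Map the x position of the largest qualifying object onto a
    0..vmax voltage; 0 V when nothing qualifies."""
    hits = [o for o in objects
            if o["sig"] == signature and o["size"] > min_size]
    if not hits:
        return 0.0
    biggest = max(hits, key=lambda o: o["size"])
    return vmax * biggest["x"] / frame_width

objs = [{"sig": 1, "size": 400, "x": 320},
        {"sig": 1, "size": 900, "x": 160}]
print(digital_out(objs, 1, 500))  # True: the 900-pixel object qualifies
print(analog_out(objs, 1, 500))   # its x=160 maps to 0.825 V
```

This is why no microcontroller is needed for simple setups: a comparator, relay, or servo driver can act on these outputs directly.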
Image sensors are consumer devices -- new sensors are released every year that take advantage of new process technologies, and older sensors are discontinued. It's not unusual for an image sensor to have a lifetime of only a couple years. Any device that uses an image sensor will have to contend with this issue -- there will come a day when the specific image sensor in the design will no longer be available. Previous CMUcams had this problem and had to be discontinued.
Pixy addresses this issue on several fronts:
- We chose an image sensor that is targeted for the automotive industry. Automotive products (silicon devices) have longer lifetimes.
- We chose a basic sensor with raw Bayer output. This is the most common type of sensor, and when/if we need to switch sensors, we will have a large selection to choose from, and the code changes will be minimal.
- We did not choose a camera module (sensor/lens combination). We mount the image sensor ourselves and use a standard M12 lens holder and lens. Camera modules tend to have the shortest product lifetimes because they target the fast-paced smart phone, tablet, and laptop industry. An M12 lens is also replaceable -- you can choose a different lens with a field-of-view and f-stop that works best for your application. (We chose a lens with a 75-degree horizontal field-of-view and f1.2, which is great for robotics applications, but you can easily replace it.)
Pixy is ready
We have lined up a reputable contract manufacturer that can easily handle the quantities we anticipate. They will handle all component procurement, PCB fabrication, component placement and assembled product testing. This contract manufacturer was able to complete every manufacturing step and deliver a working product in our most recent production run. When we have the funds for a larger production run, our contract manufacturer will be ready!
Risks and challenges
Pixy has already been through two revisions and two small production runs. The current version is mature and ready for production. The latest was a trial production run with the contract manufacturer that we plan to use for large-quantity production. Still, there are risks with any hardware product, particularly with parts procurement and manufacturing issues, but we feel we have reduced these risks significantly.
We have verified all algorithms that will ship with the completed version, but there is still a fair amount of software that needs to be readied for release. Reducing the amount of time this takes will be our biggest challenge.