Back in 2016, we noticed a new type of robot enter the market: something built to live in your home like a pet, rather than something to do laundry. Social robots were on the rise, but there was a big problem. Every single one was closed, and needed a server to run.
Now, after almost 3 years of work, we are happy to bring you Iris, an open source social robot for makers, tinkerers, and developers.
You can read more about our past prototypes down below.
We're a small group of makers based around Sebastopol CA. Robotics has always been a passion of ours, and we've finally decided to make our latest and greatest work available for everyone.
Thank you to everyone who decides to back our project, or even view our page. You're all helping us get one step closer to bringing our dream into reality.
Quick Note: What you see on this page is the most recent prototype. Scroll down to view our work on smoothing out and better refining the outer case.
The eyes on the display are optional. You can choose between those and a general display with battery life and other info. Scroll down to view more on this and other features.
Iris is capable of recognizing hand gestures, as well as learning new ones. These can be taught over time by repeating a gesture and then the desired reaction.
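As a rough sketch of how repetition-based teaching like this could work (the names, threshold, and structure here are our own illustration, not Iris's actual code), a gesture can be considered "learned" once the same gesture has been paired with the same reaction enough times:

```python
# Illustrative sketch: a gesture is learned after being repeated alongside
# the same desired reaction a set number of times. All names are assumptions.
from collections import defaultdict

REPEATS_NEEDED = 3  # assumed threshold

class GestureTrainer:
    def __init__(self):
        self.counts = defaultdict(int)   # (gesture, reaction) -> times seen
        self.learned = {}                # gesture -> reaction

    def observe(self, gesture, reaction):
        """Record one repetition of a gesture followed by a desired reaction."""
        self.counts[(gesture, reaction)] += 1
        if self.counts[(gesture, reaction)] >= REPEATS_NEEDED:
            self.learned[gesture] = reaction

    def react(self, gesture):
        """Return the learned reaction, or None if the gesture is unknown."""
        return self.learned.get(gesture)

trainer = GestureTrainer()
for _ in range(3):
    trainer.observe("wave", "wave_back")
```

After three repetitions, `trainer.react("wave")` returns the taught reaction; any untaught gesture returns nothing.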
Iris is capable of several different types of tracking. These include, but are not limited to: Object Tracking, Color Tracking, and Facial Tracking.
Tracking can be used to follow something or someone, or help bring something of interest into view. She can also dynamically shift between tracking and recognition.
Recognition differs from tracking in a few aspects. Recognition is centered around learned objects and faces. She is able to identify the specific object or face instead of just knowing one is there.
Blue = Recognized Object/Face
Pink = Training/Learning Object of Interest
Black = New Unknown Object of Interest
Green = Attempting to Recognize
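The colour legend above maps cleanly onto a small set of recognition states. Here's a minimal sketch of how that mapping could be modeled (the colours come from the page; the transition logic and names are our assumptions):

```python
# Display-colour states from the page; selection logic below is illustrative.
STATE_COLORS = {
    "recognized": "blue",    # recognized object/face
    "training": "pink",      # training/learning object of interest
    "new_unknown": "black",  # new unknown object of interest
    "attempting": "green",   # attempting to recognize
}

def next_state(known_ids, detected_id, is_training):
    """Pick a recognition state for a detected object or face."""
    if is_training:
        return "training"
    if detected_id is None:
        return "new_unknown"       # something is there, but nothing to match
    if detected_id in known_ids:
        return "recognized"
    return "attempting"            # a candidate match is being checked

known = {"alice"}
```

For example, a detection matching a known face would light the ring blue, while a face mid-identification would show green.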
Iris is able to determine the direction of sound, as well as what is being said. We are currently working on the ability to distinguish different voices so she knows who is talking to her, but this may prove too intensive while running other processes.
Since Iris doesn't have an internet connection, we have to rely on her learning new words on her own. Iris starts off with a basic set of phrases and sentences built in, and using 3 different types of conversational systems, Iris is able to learn new information, words, and even physical things like faces and objects.
A quick demo video of how Iris learns about a person's relationship with another through context alone.
(We used the name Nick to make things clearer to understand)
#1 | Basic Questions System
The first and most basic way of gaining new information is by asking straightforward questions, taking the answers, and using them to build a usable knowledge set. Her knowledge can also be corrected by asking her to forget or change what she knows, then bringing up the topic you'd like to change.
This system isn't encouraged and is considered the lowest-quality method of learning. Although it's very straightforward and accurate, we limit how much she can use it in a given day (how many times it can be worked into a conversation within 24 hours), because it can get old very quickly and doesn't feel as organic as we'd like.
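One way to picture the daily cap on direct questions is a simple sliding-window budget. The 24-hour window is from the description above; the limit value and the counter logic are our assumptions:

```python
# Hypothetical sketch of rate-limiting direct questions within a 24-hour
# window. The limit of 5 is an assumed value, not Iris's real setting.
import time

DAILY_QUESTION_LIMIT = 5  # assumed

class QuestionBudget:
    def __init__(self, limit=DAILY_QUESTION_LIMIT, window=24 * 3600):
        self.limit = limit
        self.window = window
        self.asked_at = []  # timestamps of recent direct questions

    def may_ask(self, now=None):
        """True if another direct question fits in the current window."""
        now = time.time() if now is None else now
        self.asked_at = [t for t in self.asked_at if now - t < self.window]
        return len(self.asked_at) < self.limit

    def record(self, now=None):
        """Note that a direct question was just worked into conversation."""
        self.asked_at.append(time.time() if now is None else now)
```

Once the budget is spent, she falls back on the other two learning systems until old questions age out of the window.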
#2 | Object Based Systems
The second method of learning is object tagging. This is done with physical objects/faces and hypothetical "objects". Once she is taught or hears a new word, this method, along with number 3, allows her to tag certain objects and concepts with words or phrases.
Once a new object is trained, it's assigned one or more tags and can be recalled at any time for whatever she needs. This is also partially used in dealing with object permanence, which is covered further down this page.
Sub-tags are mainly used to help save on space and time, since basic or common features from past objects can be grouped together for future use. It also speeds up learning new objects as time goes on.
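A minimal sketch of how tags and shared sub-tags could be stored (all names and structure here are illustrative, not Iris's actual data model): common features live once under a sub-tag, and every object referencing that sub-tag inherits them, which is the space saving described above.

```python
# Illustrative tag store: objects reference shared sub-tags instead of
# duplicating common features. Names are assumptions for the sketch.
class TagStore:
    def __init__(self):
        self.sub_tags = {}   # sub-tag name -> set of shared features
        self.objects = {}    # object name -> {"tags": ..., "sub_tags": ...}

    def define_sub_tag(self, name, features):
        """Group common features from past objects for future reuse."""
        self.sub_tags[name] = set(features)

    def learn(self, obj, tags, sub_tags=()):
        """Register a newly trained object with its tags and sub-tags."""
        self.objects[obj] = {"tags": set(tags), "sub_tags": set(sub_tags)}

    def features(self, obj):
        """All features an object inherits through its sub-tags."""
        out = set()
        for st in self.objects[obj]["sub_tags"]:
            out |= self.sub_tags.get(st, set())
        return out

store = TagStore()
store.define_sub_tag("cup_like", {"graspable", "holds_liquid"})
store.learn("red_mug", tags={"mug", "kitchen"}, sub_tags={"cup_like"})
```

A second mug-shaped object can now reuse `cup_like` instead of relearning those features, which is how learning speeds up over time.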
#3 | Context Calculations
The third and final system is context clues. This is the least elegant of the three, but it allows for a much more natural conversation, and helps give the impression she is learning on her own without needing to ask direct questions.
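As a toy illustration of context-based learning (the pattern and fact format are our own, not Iris's real parser), a relationship fact like the one in the "Nick" demo can be pulled out of an ordinary sentence instead of being asked for directly:

```python
# Toy sketch: extract relationship facts from casual speech via a pattern.
# The regex and fact shape are assumptions for illustration only.
import re

RELATION_PATTERN = re.compile(r"(\w+) is my (\w+)")

def extract_facts(sentence):
    """Return (person, relation) pairs implied by the sentence."""
    return RELATION_PATTERN.findall(sentence.strip())

facts = extract_facts("Nick is my brother, and he visits on Sundays.")
```

A real system would need far more than one pattern, but the principle is the same: the fact arrives as a side effect of normal conversation.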
Since Iris doesn't use an internet connection, we have to get creative with how we use the limited amount of power and battery life she has. To solve this issue, Iris uses a type of Dream State when charging.
Once docked, Iris is able to perform most of her more power-hungry tasks, like constructing her internal area map and committing newly learned objects and words to long-term memory, without draining the battery. This also means we don't need to worry about putting unnecessary stress on the battery itself, and can rely on having a constant power source.
Unlike every other smart speaker and home robot around today, Iris doesn't use a wake word. Some examples are Alexa's name, and "Hey Google" for the Google Assistant.
These are fine for virtual assistants or your phone, but they don't feel right on a robot that's supposed to be more of a companion. Saying the same phrase over and over again also tends to get old really fast, and brings down the actual amount of time you're willing to spend interacting with her.
Iris replaces wake words with her own method of listening in: she listens for anything that could be directed at her, notes the tone of voice, and checks whether another voice responds afterwards.
Part 1: Once she hears a sentence, she'll wait about 10 seconds for someone else (a different tone of voice) to be heard. If she hears one, she knows you're talking to someone else.
Part 2: If no other voice responds within this time, she'll then assume you are talking to her. She stays in this "mode" for around 50 seconds. Once that time passes, she'll go back to part 1 and repeat the cycle.
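The two-part cycle above can be sketched as a small state machine. The 10-second and 50-second windows come from the description; the states, names, and transition code are our own illustration:

```python
# Sketch of the wake-word-free listening cycle. Timing windows are from
# the page; the state machine around them is an assumption.
WAIT_WINDOW = 10.0       # seconds to wait for a second voice (Part 1)
ATTENTIVE_WINDOW = 50.0  # seconds spent assuming speech is for Iris (Part 2)

class Listener:
    def __init__(self):
        self.state = "idle"
        self.since = 0.0
        self.voice = None

    def hear(self, t, voice):
        """Called when a sentence from `voice` is heard at time t (seconds)."""
        if self.state == "idle":
            self.state, self.since, self.voice = "waiting", t, voice
        elif self.state == "waiting" and voice != self.voice:
            self.state = "idle"  # a reply: the humans are talking to each other

    def tick(self, t):
        """Advance the timers; call periodically."""
        if self.state == "waiting" and t - self.since > WAIT_WINDOW:
            self.state, self.since = "attentive", t  # no reply: assume it's for Iris
        elif self.state == "attentive" and t - self.since > ATTENTIVE_WINDOW:
            self.state = "idle"  # window passed: back to Part 1

l = Listener()
l.hear(0.0, "voice_A")  # a sentence is heard
l.tick(11.0)            # 10 s pass with no second voice
```

At this point the listener is attentive; had a second voice replied within the window, she would have stayed out of the conversation.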
Object permanence refers to the ability to remember an object exists, even when it isn't visible or in your general area. This is useful in a few areas. Firstly, it helps Iris understand the world around her, letting her know that she is able to affect objects and people that aren't directly in front of her. It also helps with storing information and learning about a new area.
Here is a simple test/example of what object permanence is, and how she uses it. In this clip, she uses a voice from a prior version.
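At its simplest, object permanence can be modeled as a last-seen memory: an object stays known, along with its last location, even after it leaves the camera's view. This sketch is purely illustrative; the names and structure are our assumptions:

```python
# Minimal object-permanence sketch: losing sight of an object removes it
# from view, not from memory. Names are illustrative.
class ObjectMemory:
    def __init__(self):
        self.last_seen = {}  # object -> (location, currently visible?)

    def see(self, obj, location):
        """Object is in view at a known location."""
        self.last_seen[obj] = (location, True)

    def lose_sight(self, obj):
        """Object left the camera's view; keep its last known location."""
        if obj in self.last_seen:
            loc, _ = self.last_seen[obj]
            self.last_seen[obj] = (loc, False)

    def where(self, obj):
        """Best known location, whether or not the object is visible."""
        entry = self.last_seen.get(obj)
        return entry[0] if entry else None

mem = ObjectMemory()
mem.see("red_ball", "living_room")
mem.lose_sight("red_ball")
```

Even with the ball out of view, `mem.where("red_ball")` still answers with its last location, which is the behavior the clip demonstrates.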
Another aspect that sets Iris apart from the rest is emergent behavior: the ability to change how she acts and reacts over time based on past events. Over time, 2 units will react differently to the same stimulus, creating unique personalities.
Multiple colors are available (Red, Pink, and Orange). If you would like a color other than the default blue and white, please leave a note once you choose a tier. You also receive a t-shirt by backing our project, or you can order one by itself.
One of the major advantages of not having an internet connection is that she is almost 100% secure. There's no more fear about having a camera on wheels in your home.
Iris will never (and can't) send data back to us or any other 3rd party, so you can rest easy knowing everything she sees and hears is completely confidential.
- 2x 5-megapixel OmniVision OV5647 cameras with fixed-focus lenses. https://cdn.sparkfun.com/datasheets/Dev/RaspberryPi/ov5647_full.pdf
- 5x HC-SR04 ultrasonic sensors to help with short-range navigation, cliff detection, distance calculations, and low-level scanning.
- 2x 40mm brushless cooling fans.
- 2x LX-16A full-metal-gear servos, allowing for pan/tilt of the head/cameras.
- 1x PureAudio array microphone with 2 built-in mics, allowing for directional awareness.
- Power provided by a 6500 mAh battery, paired with a Sleepy Pi 2 to prevent any possible corruption.
- 5" touchscreen display, allowing for easy menu access, emotional display, quick access to settings, and, if needed, troubleshooting.
Charging Time: 8-10 Hours (Based on usage throughout the day)
Run Time: 4-8 Hours (Based on if she's interacting with known people, new people, known locations, and new objects)
Iris uses a glyph to identify her charger:
Iris comes with a 5" touchscreen display built into her chest. This can be used for showing settings and options, troubleshooting, easier setup, and most importantly, displaying emotional states. We can also display any needed information, like battery life, easier than ever, freeing up time for more important interactions.
You can choose from displaying eyes, or a more standard menu/status screen.
We're currently working on smoothing out and generally refining the outer shell. Below are a few examples of face plates being tested.
Part of our funding is aimed towards refining and generally improving the appearance of the outer shell. We found it more important to use our limited budget to actually make something that works, instead of spending thousands on an empty shell that looks pretty and doesn't do anything.
Once this campaign finishes, we will release the hardware portion of the project on our GitHub page (Linked in the updates section when released).
The majority of our resources and effort went into making the software portion of our project. However, we know the importance of an open community and how that can extend the life of a product exponentially. Once all units have shipped, we will release the source code, along with any other helpful resources related to developing for Iris.
Iris is the 4th generation of robot we've built based around companionship and interaction.
Below is a quick history of our past prototypes and how we came to create Iris:
A1 (Lyn) was the first robot we ever made to function as a companion. She wasn't very cute, and cuteness was a big part of what we wanted. Lyn was also much slower and not as intelligent as Iris, or even B1 or B2.
Brite 1 and 2 were much cuter, but still weren't as intelligent as we were hoping. We couldn't physically fit everything we needed inside the smaller shell, which also meant a smaller battery. We also felt she needed some type of display for easy access to her current status, battery, etc.
Iris is our final and best design to date. She's large enough to fit a much more powerful battery, and enough processing power to use an LCD display. She's cooled by 2 fans mounted on her back, and also has an ultrasonic sensor facing behind her to let her dock easier than the B1-B2s.
Although the internals and software portions of Iris are essentially finished, we still need to work on the outer casing and getting it into production. Our funding goal is to sell 30 units, which will allow us to actually ship units at an acceptable price, since we're able to keep the Kickstarter price as low as possible by ordering the required parts in bulk. We've also removed unnecessary sensors, like an accelerometer, without taking out anything that would hurt overall quality or performance.
After the campaign finishes and all the backers are happy with what they got, we do plan on continuing sales of Iris through third-party website(s). If you've backed our Kickstarter, you'll receive a discount on any future purchases you make. The site will be listed in the "Updates" tab when finalized. You must use the same email address here and on our site to receive the discount. You'll be able to buy replacement parts, and possibly upgrades, although that is unlikely.
NOTICE: Due to our recent re-branding and reworking of our ads layout, please visit our main site to make a pledge or pre-order components. Future details will also only be posted there, along with post-campaign updates.
If you would like to show your support without ordering a full unit, we now offer Iris Micro figures. Bring home a tiny version of this cute robot. Put it on your desk, or carry it with you. This figure doesn't have any internals, it's just an action/display figure.
Risks and challenges
We have done our best to minimize risk by doing the bulk of manufacturing ourselves. However, the prices of components can always fluctuate, which can lead to delays.
Redesigns of the internal thermal-management layout have been finished. This was a feature originally intended for version 2, but has been carried over due to its success in past units.