This project aims to develop an inexpensive lidar from affordable components: a laser pointer, a small single-board computer running Linux, and a webcam. Unlike more expensive lidars, which measure the timing between emitted and returned light, this lidar will use software to calculate angles and distances to reflected spots and output a serial signal with the XYZ coordinates of the reflected points relative to the camera's 0,0,0 position.
These funds will help produce precision-machined parts and buy the components required to make this lidar precise. Even with the foam prototype the error was no more than 2%; e.g., at a distance of 50 inches it could be off by 1 inch.
Currently the prototype works reliably between 1 ft and 16 ft, but running MATLAB on an embedded computer is not the way to go, so I'm writing concise C code to capture images, recognize reflected spots, and triangulate distances to them.
Since childhood we have had great ideas about how to make robots do interesting, amazing things. Those who actually get into robotics see how frustrating it is to use the same old weak hardware and processors, the same unreliable sonic rangefinders, the same toy servos. I want to bring more capabilities to inventors, engineers, and designers who may start on a small budget and don't have $1,600 for the most basic lidar on the market.
Current lidars have sophisticated lasers and detectors that are very expensive. They measure the time between when the laser pulse is emitted and when its reflection is received; those intervals are billionths of a second. Knowing the speed of light, the lidar calculates the distance to the object. My lidar will work on a different principle, one that is not new: the laser points the same way as the camera. If the laser is reflected by a very near object, the spot appears in the camera's "peripheral vision"; if it is reflected far away, it appears almost at the center of the camera's view. By knowing which pixel is "shining" in the camera image, I can calculate (triangulate) the distance to the reflected spot.
Robots with one camera have no depth perception. There are ways to let a robot know where it is in space so it won't hit anything: complex software for a single camera, two cameras for stereoscopic vision plus powerful software, or a lidar. Lidar (Light Detection and Ranging) lets a robot see its surroundings in 3D more reliably and simply. The hardware can be as simple as a toy-robot processor, and the software can also be quite simple. Both of these allow a lidar to be inexpensive and widely available.
Both the Kinect and the Neato vacuum cleaner's lidar are products in their own right, built for their own purposes. I want to create a new product made specifically for robot developers, where all you have to do is plug USB into your robot and read XYZ coordinates in character or another form.
Support this project