In a nutshell, Computer Vision.
AppSeed uses Computer Vision to identify what you draw and turn it into an active UI element. By isolating each drawn shape, we gain much more control over how the prototype works and speed up the process of building it. It is this key difference that lets you draw a rectangle and turn it into a text input box that beta testers can actually use, complete with the animations and behaviours involved, such as bringing up the keyboard. It also lets you move elements around, edit each piece individually in Photoshop, and make a drawn box look and act like a map. (Note: Protosketch does use OpenCV for cropping.)