About this project
The Mustang-200 is an affordable and scalable computing accelerator for speeding up demanding applications. Equipped with two Intel® Core™ i5/i7 processors, 32GB (4x 8GB) of RAM, and 1TB (2x 512GB) of Intel NVMe SSD storage, this PCIe card works with your existing system, enabling high-performance computing without costing a fortune. You can install multiple Mustang-200 cards to further boost your computing capabilities and assign tasks to any of these units as needed.
Each CPU on the Mustang-200 is paired with 16GB (2x 8GB) of RAM and an Intel® 600P series 512GB NVMe SSD.
Once the card is installed in a PCIe x4 slot, the host computer connects to both computing nodes on the Mustang-200 over 10GbE networks. The advantage of this network-based structure is that no proprietary hardware is needed, which keeps costs low. The computing nodes are powered by QTS-Lite, a lightweight version of QNAP's award-winning QTS operating system, and the onboard eMMC component serves as storage for QTS-Lite.
The integrated QTS-Lite operating system supports various virtualization technologies such as containers and virtual machines, so you can convert your physical system into a virtual one (P2V) and assign it to one of the nodes on the Mustang-200. Performance can be instantly boosted without interruption or additional physical space requirements.
No matter what kind of software is used, it can be hosted on the Mustang-200, allowing you to do more and achieve more in performance-critical applications such as artificial intelligence, academic research, and simulations.
The Mustang-200 needs no proprietary hardware and can be immediately installed into your existing system. If you need to perform additional calculations, you can always add more Mustang-200 cards, as they work independently of each other. The maximum number of Mustang-200 cards is limited only by the number of available PCIe x4 slots in your system. This gives you enormous potential to expand your total computing capabilities.
With the Mustang-200, every additional CPU works independently, so you can assign tasks to any node of your choice and have real-time control over how every node works.
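As an illustration of how tasks might be distributed across independent nodes, here is a minimal Python sketch. The node addresses, task names, and round-robin policy are all assumptions for illustration; the Mustang-200's actual control interface is not shown here.

```python
from itertools import cycle

def assign_tasks(tasks, nodes):
    """Assign tasks to compute nodes in round-robin order."""
    assignment = {node: [] for node in nodes}
    for task, node in zip(tasks, cycle(nodes)):
        assignment[node].append(task)
    return assignment

# Hypothetical addresses: each Mustang-200 exposes two nodes over 10GbE.
nodes = ["10.0.0.11", "10.0.0.12"]
tasks = ["job-a", "job-b", "job-c", "job-d", "job-e"]
print(assign_tasks(tasks, nodes))
```

Because each node works independently, any assignment policy could be substituted here; round-robin simply spreads load evenly.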
With robust computing capabilities and scalable characteristics, the Mustang-200 is perfectly suited for fog computing. With fog computing, you can pre-process data generated within your organization or across your devices on-premises, filtering out irrelevant information and keeping only valuable insights before sending them to cloud platforms. This can save a great deal in cloud platform and bandwidth fees, as only filtered, relevant data is analyzed further.
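The on-premises filtering step described above can be sketched in a few lines of Python. The sensor names, reading format, and threshold are hypothetical; this is a generic illustration of the fog-computing pattern, not the Mustang-200's SDK.

```python
def filter_readings(readings, threshold):
    """Keep only readings whose value exceeds the alert threshold."""
    return [r for r in readings if r["value"] > threshold]

# Hypothetical on-premises sensor data.
readings = [
    {"sensor": "s1", "value": 21.5},
    {"sensor": "s2", "value": 87.0},
    {"sensor": "s3", "value": 19.0},
]
relevant = filter_readings(readings, threshold=80.0)
# Only `relevant` would be uploaded to the cloud, cutting bandwidth use.
```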
Video transcoding and streaming are also ideal applications of the Mustang-200. Its powerful processors can easily process high-definition 360° surround videos, and its networked structure is well suited to render farms, where large amounts of parallel computing power are needed. The Mustang-200 can help creative professionals streamline their workflows and accelerate their processes.
This is a single-shot demo of the video transcoding/streaming and virtualization technologies of the Mustang-200. Note that the user interface and detailed controls are still under active development.
Below are links to GitHub repositories containing our source code and documentation for the transcoding host server SDKs and web applications. Be advised that the Mustang-200 is still under active development and programs may change without prior notice.
Risks and challenges
We have already developed working prototypes of the Mustang-200 with Intel Core i5 CPUs. While we are still polishing final details and preparing the product for further testing, we will endeavor to begin shipping both i5 and i7 versions of the Mustang-200 to our backers in mid-October 2017.