Mustang-200 Accelerate to the Future
[Figure: Multiple Applications & Tasking. Micro-server nodes run independent workloads in parallel: Big Data computing, face recognition, license plate identification, content encryption, video transcoding, and live streaming.]
Intelligent, Versatile, Dense Computing Accelerator for standard servers and
cloud networks.
In the era of information explosion, digital services such as over-the-top (OTT) providers, multiple-system operators (MSO), content delivery networks (CDN), and SaaS providers face a shortage of computing resources. These service providers need computing power strong enough to cope with enormous amounts of information: data, audio, video, and more.
In the past, when space was not a constraint, the answer was simply to add more servers to handle the growing volume of data. But rack space is ultimately limited. Within that limited space, FPGA or GPU expansion cards can increase a server's performance; however, these cards typically serve only a single function and lack flexibility. The real solution is an intelligent, versatile, dense computing accelerator.
That is why the Mustang-200 was born!
● Video transcoding / live streaming
● Large-scale file encryption/decryption
● Large-scale face analysis
● Large-scale license plate analysis
www.ieiworld.com
Accelerator Card Comparison

                                  General Purpose GPU   Fixed Function HW     Flexible Accelerator
                                                        (ASIC, FPGA)          (CPU, x86 architecture)
  Multiple Applications & Tasking No                    No                    Yes
  Flexibility                     Low                   Low                   High
  Power Consumption               High                  Low                   Low
  Development Cost                High                  High                  Low
About IEI-2018-V10
Mustang-200 Overview
10Gbps Network Based x86 Computing Accelerator
● 10 Gigabit Ethernet based x86 computing nodes support a decentralized computing architecture
● Tightly integrated QNAP QTS-Lite provides a flexible and secure development environment
● Supports virtualization: virtual machine (VM) and container technology
● Fits standard servers; compatible with PCI-Express x4, x8, and x16 slots
● Increases computing power without changing or adding servers
● Achieves higher computing density and lowers total cost
Mustang-200 Block Diagram
Every CPU on the Mustang-200 is paired with 16GB (2 x 8GB) of RAM and an Intel® 600P series 512GB NVMe SSD. Once the card is installed in a PCIe x4 slot, the host computer connects to both computing nodes on the Mustang-200 over 10GbE networks. The advantage of this network-based structure is that no proprietary hardware is needed, so costs stay low. The computing nodes are powered by QTS-Lite, a lightweight version of QNAP's award-winning QTS operating system, with the onboard eMMC serving as storage for QTS-Lite.
[Block diagram: two identical computing nodes, each with an Intel® i7-7567U CPU (Iris™ Plus 650 graphics), 16GB of DDR4 SO-DIMM memory, an Intel® 600P 512GB NVMe SSD, eMMC 5.0 boot storage, and a 10G MAC with a 10G Ethernet link. Both nodes connect over PCIe x4 through a PCIe switch to the card's PCIe x4 golden finger.]
Mustang-200: The best platform for application developers
● Hardware platform: IEI, with 20 years of focus on industrial PCs, provides a stable and durable H/W platform.
● Software platform: QNAP QTS-Lite, which supports virtualization technology, information security, and data protection, is a flexible, secure, and friendly S/W platform.
● Application development: the Mustang-200 combines IEI hardware and QNAP software as a complete platform for integrating your software application into a solution.
Distribute tasks among units of Mustang-200
With the Mustang-200, every additional CPU works independently, so you can assign tasks to any nodes of your choice and monitor, in real time, how every node is working.
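The node names, task names, and dispatch helper below are hypothetical, not part of the QTS-Lite API; this is a minimal sketch of the idea of assigning independent tasks to independent computing nodes, here with a simple round-robin plan:

```python
from itertools import cycle

# Hypothetical node identifiers -- in practice these would be the
# addresses QTS-Lite exposes for each Mustang-200 computing node.
NODES = ["node-1-1", "node-1-2", "node-2-1", "node-2-2"]

def assign_tasks(tasks, nodes=NODES):
    """Round-robin each task onto the next node.

    Returns a mapping of node -> list of assigned tasks. This mimics
    the per-node task assignment described above; it is an illustrative
    sketch, not the actual QTS-Lite scheduler.
    """
    plan = {node: [] for node in nodes}
    for node, task in zip(cycle(nodes), tasks):
        plan[node].append(task)
    return plan

plan = assign_tasks(["big-data", "face-recog", "plate-id", "transcode", "encrypt"])
# The fifth task wraps around to the first node.
```

Because every node works independently, a plan like this can be executed without any coordination between nodes beyond the initial assignment.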
Scalable infrastructure to suit your needs
The Mustang-200 needs no proprietary hardware and can be installed
into your existing system immediately. If you need to perform
additional calculations, you can always add more Mustang-200 cards,
as they work independently of each other. The maximum number of
Mustang-200 cards is limited only by the number of available PCIe x4
slots in your system, giving you enormous potential to expand your
total computing capability.
Perfect for fog computing
With robust computing capability and scalable characteristics, the Mustang-200 is perfectly suited for fog/edge computing. With fog/edge
computing, you pre-process data generated within your organization or across your devices on premises, filtering out irrelevant information
and keeping only valuable insights, which you then send or upload to cloud platforms. Because data is filtered locally and only relevant data
travels onward, you can save a great deal in cloud platform and bandwidth fees.
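The record format and the threshold rule below are invented for illustration; this is a minimal sketch of the on-premises filtering step, where only anomalous readings are kept for upload to the cloud:

```python
def filter_for_cloud(readings, threshold=80.0):
    """Keep only the readings worth uploading.

    Fog/edge pre-processing: discard routine readings and forward only
    the anomalies, cutting cloud storage and bandwidth costs. The
    threshold comparison is a placeholder for whatever relevance test
    your application actually needs.
    """
    return [r for r in readings if r["value"] > threshold]

readings = [
    {"sensor": "cam-1", "value": 42.0},
    {"sensor": "cam-2", "value": 97.5},  # anomaly -> keep for upload
    {"sensor": "cam-3", "value": 61.3},
]
to_upload = filter_for_cloud(readings)
```

In this sketch only one of three readings would be uploaded, so cloud-side storage and transfer shrink to a third of the raw volume.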
[Figure: multiple fog computing sites communicating with a public cloud platform (fog-to-cloud communication).]
Software of the Mustang-200
The integrated QTS-Lite operating system supports various virtualization technologies such as containers and virtual machines, so you can
convert your physical system into a virtual one (P2V) and assign it to one of the nodes on the Mustang-200. Performance can be instantly
boosted without interruption or additional physical space requirements.
No matter what kind of software you use, it can be hosted inside the Mustang-200, allowing you to do more and achieve more in performance-critical applications such as artificial intelligence, academic research, and simulations.
QTS Lite Features
● Real-time computing
● Batch computing
● Parallel processing
● Automatic load balancing across computing nodes
● Combine multiple cards into a cluster via QTS Lite and assign every node to compute simultaneously
● Control and manage QTS Lite via APIs
3 ways to drive Mustang-200 Accelerator
Container Station
Design your own container applications
QNAP Container Station exclusively integrates LXC (Linux Container) and Docker® lightweight virtualization technologies, allowing you to
operate multiple isolated Linux® systems on a QNAP NAS as well as download thousands of apps from all over the world.
Container Station extends the JeOS (Just enough OS) concept and uses lightweight virtualization technology so that developers and IT
administrators can easily and freely switch between the Mustang-200 and the cloud.
● Micro services, quick deployment
● Best partner for IoT maintenance and operation
● A growing number of popular Apps
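As a sketch of this container workflow, a service could be described with a standard Docker Compose file and deployed to a node through Container Station's Docker support. The service name, image name, and port below are placeholders, not a shipped configuration:

```yaml
# docker-compose.yml -- hypothetical face-recognition service for one
# Mustang-200 computing node; Container Station works with standard
# Docker images, so the same definition also runs in the cloud.
version: "3"
services:
  face-recog:
    image: example/face-recog:latest   # placeholder image name
    restart: unless-stopped
    ports:
      - "8080:8080"                    # placeholder service port
```

Keeping the definition in a standard Compose file is what makes the "switch between Mustang-200 and cloud" workflow practical: the same file deploys unchanged in either environment.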
Virtualization Station
Run your existing software applications
QNAP's Virtualization Station is a full virtualization solution for the x86-based IEI x QNAP Mustang-200 with virtualization extensions, allowing you
to operate and manage multiple virtual machines (VMs) on the Mustang-200. Virtualization Station adds incredible versatility to your Mustang-200:
you can build a high-density computing environment by creating virtual machines and running the programs or services you already have on
several PCs.
Mustang-200 App (QPKG)
Development
Use the following approaches to design applications:
1. The QTS-Lite App (QPKG) development platform allows developers to design software applications that run on the Mustang-200.
2. Development Toolkit (API & SDK): developers can design smartphone or PC applications that remotely manage and access files and
software applications on the Mustang-200.
The development platform is designed for professional software developers, network and system integrators, and independent
software developers to construct complete hardware and software integration platforms and develop applications. We welcome all
passionate professionals to join our development team and help create a win-win future for IEI x QNAP and you.