This is my first blog post, so please feel free to leave feedback with questions or comments, especially if you feel I've gotten anything wrong or if there's some critical bit of information missing.
Windows CE currently ships audio driver samples descended from three distinct codebases: MDD/PDD, WaveDev2, and UAM. There are historical and functional reasons for this, but the existence of different driver models that all do more-or-less the same thing has caused some confusion. I'll try to clear things up a little in this post.
First off, all three sample designs adhere to the same WaveAPI driver interface. They all hook into the system as device drivers, export WAV_Open, WAV_IOControl, WAV_Close, etc. entry points, and handle IOCTL_WAV_MESSAGE IoControl codes to interact with the waveapi subsystem. That upper edge is hardware-independent; all the hardware-dependent code lives inside the driver. The difference between the samples is in their internal design.
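To make that upper edge concrete, here is a minimal, self-contained sketch of the entry-point shape shared by all three designs. The types, the IOCTL value, and the bodies are simplified stand-ins (real drivers pull these from the CE headers); only the overall shape, where everything funnels through WAV_IOControl as an IOCTL_WAV_MESSAGE, reflects the actual interface.

```c
/* Sketch of the WaveAPI driver upper edge. Types and the IOCTL value
 * are simplified stand-ins for the real windows.h/CE definitions. */
typedef unsigned long DWORD;
typedef int BOOL;
typedef unsigned char *PBYTE;

/* Hypothetical stand-in value; the real code comes from the CE headers. */
#define IOCTL_WAV_MESSAGE 0x001D000CUL

static int g_open_count = 0;

DWORD WAV_Open(DWORD dwData, DWORD dwAccess, DWORD dwShareMode)
{
    (void)dwData; (void)dwAccess; (void)dwShareMode;
    g_open_count++;
    return 1; /* non-zero open-context "handle" */
}

BOOL WAV_Close(DWORD dwOpenContext)
{
    (void)dwOpenContext;
    g_open_count--;
    return 1;
}

/* All waveapi traffic arrives as IOCTL_WAV_MESSAGE; anything else is rejected. */
BOOL WAV_IOControl(DWORD dwOpenContext, DWORD dwCode,
                   PBYTE pIn, DWORD inLen, PBYTE pOut, DWORD outLen,
                   DWORD *pActualOut)
{
    (void)dwOpenContext; (void)pIn; (void)inLen;
    (void)pOut; (void)outLen; (void)pActualOut;
    if (dwCode != IOCTL_WAV_MESSAGE)
        return 0; /* not a wave message */
    /* A real driver would unpack the message parameters here and
     * dispatch on the wave message (open, write, close, ...). */
    return 1;
}
```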
MDD/PDD
The oldest design and the one most in use today among Windows CE embedded platforms is the MDD/PDD model. The MDD/PDD implementation splits the driver into two pieces, a "sort-of hardware independent" MDD layer, and a "really hardware dependent" PDD layer. The MDD portion is shipped as public code (in public/COMMON/oak/drivers/wavedev/mdd), and generates a library named wavemdd.lib. The PDD layer must be written (or ported from public/COMMON/oak/drivers/wavedev/pdd) by the OEM. To build a complete driver, the two layers are statically linked together. Between the MDD and PDD layers there is a functional interface defined by public/common/oak/inc/waveddsi.h.
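The layering can be sketched roughly as below: the MDD calls down into a small set of PDD entry points across the waveddsi.h boundary. The function names follow the shipped samples, but the signatures here are simplified assumptions; treat the declarations in waveddsi.h as authoritative.

```c
/* Conceptual sketch of the MDD/PDD split. PDD names follow the shipped
 * samples; the signatures and the MDD_InitDriver wrapper are simplified
 * illustrations, not the real waveddsi.h declarations. */
typedef unsigned long DWORD;
typedef int BOOL;

/* --- PDD layer: "really hardware dependent", written by the OEM --- */
static BOOL g_hw_initialized = 0;

BOOL PDD_AudioInitialize(DWORD dwIndex)
{
    (void)dwIndex;
    /* Real code: map registers, allocate DMA buffers, hook the interrupt. */
    g_hw_initialized = 1;
    return 1;
}

DWORD PDD_WaveProc(DWORD uMsg, DWORD dwParam1, DWORD dwParam2)
{
    (void)uMsg; (void)dwParam1; (void)dwParam2;
    /* Real code: switch on the wave message (start/stop DMA, set volume, ...). */
    return g_hw_initialized ? 0 : (DWORD)-1;
}

/* --- MDD layer: "sort-of hardware independent", built as wavemdd.lib
 * and statically linked with the PDD to form the complete driver --- */
BOOL MDD_InitDriver(void)
{
    /* Only one device is assumed, so a single PDD init call suffices. */
    return PDD_AudioInitialize(0);
}
```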
The waveapi driver interface already does a pretty good job of distilling hardware dependencies down to the driver level, so one might wonder how MDD/PDD can further split the driver into hardware-independent and hardware-dependent layers. To do this, the MDD layer makes a couple of assumptions about how the hardware works and what types of features you want to support.
Here are some assumptions MDD/PDD makes:
- Only one device (waveOutGetNumDevs always returns 1).
- Only one stream per device (i.e. one input and one output stream). Note that waveapi includes an internal "software mixer" which can virtualize the single output stream into multiple streams at the application level.
- Input and output DMA share the same interrupt.
By making these assumptions, the MDD/PDD model greatly simplifies the PDD layer, and the MDD/PDD driver is relatively easy to port if you have fairly generic audio hardware and you have fairly generic needs. However, if your hardware is nonstandard, or if you need to implement some special handling, you may find yourself itching to modify the MDD source code. At that point you may be fighting against the MDD/PDD interface design and creating more complexity than needed.
Wavedev2
At the start of the Smartphone project in 2000 we had a number of audio requirements which we found the MDD/PDD model could not meet without major changes to the MDD/PDD interface. In addition, at that point in time (WinCE 3.0) there was no waveapi “software mixer” to allow us to play multiple sounds concurrently, so we knew we would have to take care of that in the driver. The solution was to start over and implement a new design which became informally known as wavedev2 (the original wave driver was under platform/hornet/drivers/wavedev, so when it came time to start on the new design it got put in the wavedev2 subdirectory).
Wavedev2 is a monolithic design in which all the source files are located in a single directory. To port a wavedev2 driver you just copy all the files from an existing sample and start modifying. This actually isn't as bad as it sounds because in most cases the only files you need to modify are hwctxt.h and hwctxt.cpp. In retrospect it would have been better to put the files in different directories to make this a little clearer, reduce the tendency of OEMs to make random changes in the other files, and simplify the task of fixing bugs in the other files. That's probably something we'll be looking at cleaning up in the future.
The most recent wavedev2 sample was shipped as part of WinCE 6 under public/common/oak/drivers/wavedev/wavedev2/ensoniq. This latest wavedev2 driver includes the following features which are not found in the other driver implementations:
- “MIDI” synthesizer. I put MIDI in quotes because, frankly, it’s a pretty minimal implementation which only supports sine wave output (this will probably be another blog topic). However, it works great for the types of things a phone needs to do: play DTMF and call progress tones and simple melodies. (See The Wavedev2 MIDI Implementation.)
- Sample-rate-conversion and mixing on both input and output streams. The driver can mix multiple output streams at different sample rates into a single output stream (something that can now be done with the MDD/PDD driver using the software mixer). It can also split the single input stream and source it to multiple applications at different sample rates (something no other driver design can currently do).
- A “gain class” interface. Each output stream is associated with a specific class. Whenever an app creates a new stream it is associated with class 0, although the app can move its stream to a different class via a waveOutMessage call to the driver. The system can use a separate waveOutMessage call to the driver to control the volume level on a per-class basis. This interface is used by the shell to do things like mute audio playback when a phone call is in progress. This is probably another blog topic for later. (See The Wavedev2 Gainclass Implementation.)
- A “forcespeaker” interface which is used by the shell to “hint” to the driver that a specific sound should be played out the speaker even if a headset is plugged in. This is typically used to allow an OEM to play ringtones out a speaker even if a headset is plugged in. (See The Wavedev2 ForceSpeaker API.)
- Support for an S/PDIF interface and for streaming of WMAPro compressed content across S/PDIF. This is a recent addition, specific to the Ensoniq version, which was used as a proof-of-concept for the Tomatin project. (See Multichannel Audio in Windows CE.)
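The per-stream mixing and sample-rate conversion described in the list above can be sketched in a few lines. This is an illustrative, self-contained example, not the actual wavedev2 code: each source stream is stepped through at a fixed-point ratio of the two sample rates, linearly interpolated, and summed into the device-rate output with saturation.

```c
/* Minimal sketch of output-side mixing with sample-rate conversion,
 * in the spirit of what wavedev2 does per stream. Names and the
 * linear-interpolation resampler are illustrative assumptions. */
#include <stdint.h>

static int16_t saturate16(int32_t s)
{
    if (s > 32767)  return 32767;
    if (s < -32768) return -32768;
    return (int16_t)s;
}

/* Mix up to dst_len device-rate samples from src (src_rate) into dst
 * (dst_rate), stepping through the source by src_rate/dst_rate. */
void mix_resample(int16_t *dst, int dst_len, int dst_rate,
                  const int16_t *src, int src_len, int src_rate)
{
    /* 16.16 fixed-point position/step through the source buffer. */
    uint32_t step = (uint32_t)(((uint64_t)src_rate << 16) / (uint32_t)dst_rate);
    uint32_t pos = 0;
    for (int i = 0; i < dst_len; i++) {
        int idx = (int)(pos >> 16);
        if (idx + 1 >= src_len)
            break; /* source exhausted */
        int32_t frac = (int32_t)(pos & 0xFFFF);
        /* Linear interpolation between adjacent source samples. */
        int32_t s = src[idx] + (((src[idx + 1] - src[idx]) * frac) >> 16);
        dst[i] = saturate16((int32_t)dst[i] + s); /* sum with saturation */
        pos += step;
    }
}
```

Calling mix_resample once per open stream into the same device buffer gives the "multiple streams at different sample rates into a single output stream" behavior; the input path is the same idea in reverse, copying the single capture stream out to each reader at its requested rate.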
[Note: a previous version of this blog claimed that a wavedev2 sample shipped with the Tomatin (NMD) feature pack under public/fp_nmd/common/oak/drivers/wavedev/wavedev2/ensoniq. I was wrong; the files did not ship in that release. I apologize to anyone I misled. The sample code in the CE6 release should be backward compatible, although I have no idea whether there are any licensing issues with using CE6 sample code with a CE5 device.]
If you’re developing a Windows Mobile Smartphone or PocketPC Phone, you pretty much have to start with the wavedev2 sample: the system depends on a number of the extensions implemented in wavedev2. On the other hand, if you’re developing an embedded Windows CE product you can use whichever design best fits your needs.
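One of those extensions, the gain class interface described earlier, boils down to a small amount of bookkeeping in the mix path. The sketch below is a hypothetical, self-contained illustration of the idea (the class count, the 16.16 fixed-point format, and every name here are my assumptions, not the actual wavedev2 symbols): each stream carries a class index, the system sets a gain per class, and the mixer applies the product of the stream gain and its class gain.

```c
/* Illustrative sketch of per-class gain, not the real wavedev2 code.
 * Gains are 16.16 fixed point; 0x10000 is unity. */
#include <stdint.h>

#define NUM_GAIN_CLASSES 4

static uint32_t g_class_gain[NUM_GAIN_CLASSES] = {
    0x10000, 0x10000, 0x10000, 0x10000
};

/* The system would drive this via a waveOutMessage to the driver. */
void set_class_gain(int cls, uint32_t gain)
{
    g_class_gain[cls] = gain;
}

/* Effective gain for a stream = stream gain x its class gain. */
uint32_t effective_gain(uint32_t stream_gain, int cls)
{
    return (uint32_t)(((uint64_t)stream_gain * g_class_gain[cls]) >> 16);
}

/* Applied to each sample of the stream during mixing. */
int16_t apply_gain(int16_t sample, uint32_t stream_gain, int cls)
{
    int32_t s = (int32_t)(((int64_t)sample * effective_gain(stream_gain, cls)) >> 16);
    if (s > 32767)  s = 32767;
    if (s < -32768) s = -32768;
    return (int16_t)s;
}
```

Muting playback during a phone call then reduces to setting one class's gain to zero, which silences every stream in that class without touching the streams' own volume settings.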
UAM
During the development of WinCE 4.2 the audio team was working on adding support for DirectSound and needed a sample driver to demonstrate exposing DirectSound support from the device driver. As was discovered during the Smartphone effort, retrofitting the MDD/PDD driver would entail a number of changes. Instead, a new monolithic driver was written using some bits of the wavedev2 design, with added support for the Ensoniq-specific feature of mixing two audio streams in hardware (and falling back to the software mixer for any additional streams). While there are superficial similarities between UAM and wavedev2, the two are quite different internally.
However, support for DirectSound was dropped in WinCE 5.0, and it’s very rare to find audio hardware that can mix audio streams in hardware. There’s absolutely nothing wrong with UAM, and many OEMs still use it as the basis for their audio driver ports; but for new designs it doesn’t add much value over either of the other models.