The Web Audio API allows developers to leverage powerful audio-processing techniques in the browser using JavaScript, without the need for plugins. As well as processing file-based audio sources in real time, it can synthesize sounds based upon various waveforms; this can be useful for web apps that are often consumed over low-bandwidth networks.
In this tutorial, I'm going to introduce you to the Web Audio API by presenting some of its more useful methods. I'll demonstrate how it can be used to load and play an MP3 file, as well as to add notification sounds to a user interface (demo).
If you like this article and want to go into this topic in more depth, I'm producing a 5-part screencast series for SitePoint Premium named You Ain't Heard Nothing Yet!
What Can I Do With the Web Audio API?
The use cases for the API in production are diverse, but some of the most common include:
- Real-time audio processing e.g. adding reverb to a user's voice
- Generating sound effects for games
- Adding notification sounds to user interfaces
In this article, we'll ultimately write some code to implement the third use case.
Is it Well Supported by Browsers?
Web Audio is supported by Chrome, Edge, Firefox, Opera, and Safari. That said, at the time of writing, Safari considers this browser feature experimental and requires a webkit prefix.
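A common way to handle that prefix is to fall back to Safari's prefixed constructor when the standard one isn't available. Here's a minimal sketch of that pattern; browsers with no Web Audio support at all will leave both names undefined:

```js
// Use the standard constructor where available, falling back to
// Safari's prefixed version
const AudioContext = window.AudioContext || window.webkitAudioContext;

if (AudioContext) {
  const context = new AudioContext();
  // ... build your audio graph here
} else {
  console.warn('Web Audio API is not supported in this browser');
}
```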
Using the API
The entry point of the Web Audio API is a global constructor called AudioContext. When instantiated, it provides methods for defining various nodes that conform to the AudioNode interface. These can be split into three groups:
- Source nodes - e.g. an MP3 source or a synthesized source
- Effect nodes - e.g. panning
- Destination nodes - exposed by an AudioContext instance as destination; this represents a user's default output device, such as speakers or headphones
These nodes can be chained in a variety of combinations using the connect method. Here's the general idea of an audio graph built with the Web Audio API.
Source: MDN
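To make that concrete, here's a minimal sketch of one such chain: a source node routed through a gain node (an effect) into the destination. The oscillator source is covered in more detail later in this article, and the 0.5 gain value is just illustrative:

```js
const context = new AudioContext();

// Source node: a simple oscillator (covered in more detail below)
const oscillator = context.createOscillator();

// Effect node: a gain node, used here to halve the volume
const gain = context.createGain();
gain.gain.value = 0.5;

// Chain the nodes: source -> effect -> destination
oscillator.connect(gain);
gain.connect(context.destination);

oscillator.start();
```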
Here's an example of converting an MP3 file to an AudioBufferSourceNode and playing it via the AudioContext instance's destination node:
See the Pen Playing an MP3 file with the Web Audio API by SitePoint (@SitePoint) on CodePen.
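In outline, the approach looks something like the following sketch. The file name track.mp3 is a placeholder URL for illustration; decodeAudioData returns a promise in current browsers:

```js
const context = new AudioContext();

// Fetch the file, decode it into an AudioBuffer, then play it
fetch('track.mp3')
  .then(response => response.arrayBuffer())
  .then(arrayBuffer => context.decodeAudioData(arrayBuffer))
  .then(audioBuffer => {
    const source = context.createBufferSource();
    source.buffer = audioBuffer;

    // Route the source straight to the default output device
    source.connect(context.destination);
    source.start();
  });
```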
Generating Audio
As well as supporting recorded audio via AudioBufferSourceNode, the Web Audio API provides another source node called OscillatorNode. It allows frequencies to be generated against a specified waveform. But what does that actually mean?
At a high level, frequency determines the pitch of the sound, measured in Hz. The higher the frequency, the higher the pitch will be. As well as custom waves, OscillatorNode provides some predefined waveforms, which can be specified via an instance's type property:
Source: Omegatron/Wikipedia
- 'sine' - sounds similar to whistling
- 'square' - this was often used for synthesizing sounds with old video game consoles
- 'triangle' - almost a hybrid of a sine and square wave
- 'sawtooth' - generates a strong, buzzing sound
Here's an example of how OscillatorNode can be used to synthesize sound in real time:
See the Pen Generating sound with OscillatorNode by SitePoint (@SitePoint) on CodePen.
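To tie this back to the notification use case, here's one way an oscillator could be wrapped into a short UI beep. The 880 Hz frequency and 150 ms duration are arbitrary choices for illustration:

```js
const context = new AudioContext();

function beep() {
  // Oscillator nodes are single-use, so create a fresh one per sound
  const oscillator = context.createOscillator();
  oscillator.type = 'sine';         // one of the predefined waveforms
  oscillator.frequency.value = 880; // pitch in Hz

  oscillator.connect(context.destination);
  oscillator.start();

  // Schedule the sound to stop 150 ms from now
  oscillator.stop(context.currentTime + 0.15);
}

// e.g. call beep() when a new message arrives
```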
by James Wright via SitePoint