This number is based on the Computer ID number of the computer on which the software is installed. Each computer has a unique number, similar to a license plate. An activation code is created based on that number. When you register the software, Sony will generate an activation code for you. Once the code is entered, the software will not time out. Since the activation number is based on the Computer ID, it is important that you have the software installed on the computer where you will be using it.
A Microsoft technology that enables different programs to share information. ActiveX extends Microsoft Windows-based architecture to include Internet and corporate intranet features and capabilities. Developers use it to build user interactivity into programs and World Wide Web pages.
Adaptive Delta Pulse Code Modulation (ADPCM)
A method of compressing audio data. Although the theory for compression using ADPCM is standard, there are many different algorithms employed. For example, Microsoft’s ADPCM algorithm is not compatible with the International Multimedia Association’s (IMA) approved ADPCM.
Advanced Streaming Format (ASF)
See Windows Media Format.
A type of distortion that occurs when digitally recording high frequencies with a low sample rate. For example, in a motion picture, when a car’s wheels appear to slowly spin backward while the car is quickly moving forward, you are seeing the effects of aliasing. Similarly, when you try to record a frequency greater than one half of the sampling rate (the Nyquist Frequency), instead of hearing a high pitch, you may hear a low-frequency rumble.
To prevent aliasing, an anti-aliasing filter is used to remove high frequencies before recording. Once the sound has been recorded, aliasing distortion is impossible to remove without also removing other frequencies from the sound. The same anti-aliasing filter must be applied when resampling to a lower sample rate.
A low-latency audio driver model developed by Steinberg Media Technologies AG.
The attack of a sound is the initial portion of the sound. Percussive sounds (drums, piano, guitar plucks) are said to have a fast attack. This means that the sound reaches its maximum amplitude in a very short time. Sounds that slowly swell up in volume (soft strings and wind sounds) are said to have a slow attack.
Audio Compression Manager (ACM)
The Audio Compression Manager, from Microsoft, is a standard interface for audio compression and signal processing for Windows. The ACM can be used by Windows programs to compress and decompress .wav files.
An audio proxy (.sfap0) file is created when an audio stream cannot be accessed efficiently or does not seek accurately. The application takes the audio stream from the file and saves it to a separate, more manageable audio proxy file. While audio proxy files may be large (because they are uncompressed), the performance increase is significant.
The file is saved as a proprietary *.sfap0 file with the same name as the original media file, and it has the same characteristics as the original audio stream. For example, movie.avi yields a movie.avi.sfap0 audio proxy. Additional audio streams in the same file are saved as movie.avi.sfap1, movie.avi.sfap2, and so on. This is a one-time process that greatly speeds up editing. The conversion happens automatically and does not result in a loss of quality or synchronization. The original source file remains unchanged (the entire process is nondestructive). Audio proxy files can be safely deleted at any time, since the application will recreate them as needed.
Audio proxy files are saved to the same folder as the source media. If the source media folder is read-only (e.g. a CD-ROM), the files will be saved to a temporary directory.
ASF Stream Redirector file. See Redirector file.
A decrease in the level of a signal.
When discussing audio equalization, each frequency band has a width associated with it that determines the range of frequencies that are affected by the EQ. An EQ band with a wide bandwidth will affect a wider range of frequencies than one with a narrow bandwidth.
When discussing network connections, refers to the rate of signals transmitted; the amount of data that can be transmitted in a fixed amount of time (stated in bits/second): a 56 Kbps network connection is capable of receiving 56,000 bits of data per second.
Beats Per Minute (BPM)
The tempo of a piece of music can be written as a number of beats in one minute. If the tempo is 60 BPM, a single beat will occur once every second.
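The relationship between tempo and beat duration can be sketched in Python (the function name is illustrative, not part of any ACID API):

```python
def beat_duration(bpm):
    """Seconds between successive beats at the given tempo."""
    return 60.0 / bpm

# At 60 BPM each beat lasts one second; at 120 BPM, half a second.
beat_duration(60)   # 1.0
beat_duration(120)  # 0.5
```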
The most elementary unit in digital systems. Its value can only be 1 or 0, corresponding to a voltage in an electronic circuit. Bits are used to represent values in the binary numbering system. As an example, the 8-bit binary number 10011010 represents the unsigned value of 154 in the decimal system. In digital sampling, a binary number is used to store individual sound levels, called samples.
The number of bits used to represent a single sample. For example, 8- or 16-bit are common sample sizes. While 8-bit samples take up less memory (and hard disk space), they are inherently noisier than 16-bit samples.
Memory used as an intermediate repository in which data is temporarily held while waiting to be transferred between two locations. A buffer ensures that there is an uninterrupted flow of data between computers. Media players may need to rebuffer when there is network congestion.
A virtual pathway where signals from tracks and effects are mixed. A bus’s output is a physical audio device in the computer from which the signal will be heard.
Refers to a set of 8 bits. An 8-bit sample requires one byte of memory to store, while a 16-bit sample takes two bytes of memory to store.
A clip refers to a media file on a track. A single audio track can contain any combination of loops, one-shots, or Beatmapped clips. MIDI tracks can contain only MIDI clips.
When a clip is used on the timeline, an event is drawn to represent the clip.
The clipboard is where data that you have cut or copied from an ACID project is stored. You can then paste the data back into a project at a different location.
Occurs when the amplitude of a sound is above the maximum allowed recording level. In digital systems, clipping is seen as a clamping of the data to a maximum value, such as 32,767 in 16-bit data. Clipping causes sound to distort.
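A minimal sketch of how digital clipping clamps samples to the legal range, assuming signed integer samples (the helper function is hypothetical, for illustration only):

```python
def clip(sample, bit_depth=16):
    """Clamp a sample to the legal signed range for the given bit depth.

    16-bit samples span -32768..32767; 8-bit samples span -128..127.
    """
    max_val = 2 ** (bit_depth - 1) - 1
    min_val = -(2 ** (bit_depth - 1))
    return max(min_val, min(max_val, sample))

# A sample that exceeds the maximum level is clamped, causing distortion:
clip(40000)   # 32767
clip(-40000)  # -32768
```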
Coder/decoder: refers to any technology for compressing and decompressing data. The term codec can refer to software, hardware, or a combination of both technologies.
Compression Ratio (audio)
A compression ratio controls the ratio of input to output levels above a specific threshold. This ratio determines how much a signal has to rise above the threshold for every 1 dB of increase in the output. For example, with a ratio of 3:1, the input level must increase by three decibels to produce a one-decibel output-level increase:
Threshold = -10 dB
Compression Ratio = 3:1
Input = -7 dB
Output = -9 dB
Because the input is 3 dB louder than the threshold and the compression ratio is 3:1, the resulting signal is 1 dB louder than the threshold.
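The calculation above can be sketched in Python (a simplified model of compressor gain staging; the function name and defaults are illustrative):

```python
def compressed_output(input_db, threshold_db=-10.0, ratio=3.0):
    """Every `ratio` dB of input above the threshold yields
    1 dB of output above the threshold."""
    if input_db <= threshold_db:
        return input_db  # below threshold: signal passes unchanged
    return threshold_db + (input_db - threshold_db) / ratio

# The example from the text: -7 dB in, 3:1 ratio, -10 dB threshold.
compressed_output(-7.0)  # -9.0
```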
Compression Ratio (file size)
The ratio of the size of the original uncompressed file to the compressed contents. For example, a 3:1 compression ratio means that the compressed file is one-third the size of the original.
Each computer has a unique number, similar to a license plate. An activation number is created based on that number. Since the activation number is based on the Computer ID, it is important that you have the software installed on the computer where you will be using it. The Computer ID is automatically detected and provided to you when you install the software.
The Computer ID is used for registration purposes only. It doesn’t give Sony access to any personal information and can’t be used for any purpose other than for generating a unique activation number for you to use the software.
Mixing two pieces of audio by fading one out as the other fades in.
DC offset occurs when hardware, such as a sound card, adds DC current to a recorded audio signal. This current results in a recorded waveform that is not centered around the zero baseline. Glitches and other unexpected results can occur when sound effects are applied to files that contain DC offsets.
In the following example, the red line represents 0 dB. The lower waveform exhibits DC offset; note that the waveform is centered approximately 2 dB above the baseline.
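Conceptually, removing DC offset re-centers the waveform on the baseline. A minimal sketch (a hypothetical helper, not the application's actual algorithm) subtracts the waveform's mean value from every sample:

```python
def remove_dc_offset(samples):
    """Re-center a waveform on the zero baseline by subtracting its mean."""
    offset = sum(samples) / len(samples)  # the DC component
    return [s - offset for s in samples]

# A waveform floating above the baseline is pulled back to center:
remove_dc_offset([10.0, 20.0, 30.0])  # [-10.0, 0.0, 10.0]
```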
A unit used to represent a ratio between two numbers using a logarithmic scale. For example, when comparing the numbers 14 and 7, you could say 14 is two times greater than the number 7; or you could say 14 is 6 dB greater than the number 7. Where did we pull that 6 dB from? Engineers use the equation dB = 20 x log (V1/V2) when comparing two instantaneous values. Decibels are commonly used when dealing with sound because the ear perceives loudness in a logarithmic scale.
In an ACID project, most measurements are given in decibels. For example, if you want to double the amplitude of a sound, you apply a 6 dB gain. A sample value of 32,767 (maximum positive sample value for 16-bit sound) can be referred to as having a value of 0 dB. Likewise, a sample value of 16,384 can be referred to having a value of -6 dB.
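The decibel formula from the definition can be sketched in Python (the function name is illustrative):

```python
import math

def ratio_to_db(v1, v2):
    """dB = 20 * log10(v1 / v2) for instantaneous (amplitude) values."""
    return 20 * math.log10(v1 / v2)

# Comparing 14 to 7 (a doubling of amplitude) gives about 6 dB,
# and a sample value of 16,384 sits about 6 dB below 32,767 (0 dB).
round(ratio_to_db(14, 7), 2)      # 6.02
round(ratio_to_db(16384, 32767))  # -6
```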
A program that enables Windows to connect different hardware and software. For example, a sound card device driver is used by Windows software to control sound card recording and playback.
Digital Rights Management (DRM)
A system for delivering songs, videos, and other media over the Internet in a file format that protects copyrighted material. Current proposals include some form of certificates that validate copyright ownership and restrict unauthorized redistribution.
Digital Signal Processing (DSP)
A general term describing anything that alters digital data. Signal processors have existed for a very long time (tone controls, distortion boxes, wah-wah pedals) in the analog (electrical) domain. Digital signal processors alter the data after it has been digitized by using a combination of programming and mathematical techniques. DSP techniques are used to perform many effects, such as equalization and reverb simulation.
Since most DSP is performed with simple arithmetic operations (additions and multiplications), both your computer’s processor and specialized DSP chips can be used to perform any DSP operation. The difference is that DSP chips are optimized specifically for mathematical functions while your computer’s microprocessor is not. This results in a difference in processing speed.
Disk-based files are usually longer audio clips that are played from hard disk rather than being stored in RAM. Disk-based files are used for vocals or any other long audio file that does not loop.
DLS and DLS-2 refer to the Downloadable Sounds specifications. DLS extends General MIDI by allowing you to use your own sounds for MIDI files, rather than relying on the General MIDI sound set. With DLS, you can add new instrument sounds by simply downloading a new sample bank.
Drag and Drop
A quick way to perform certain operations using the mouse. To drag and drop, you click and hold a highlighted selection, drag it (hold the left mouse button down and move the mouse) and drop it (let go of the mouse button) at another position on the screen.
The difference between the maximum and minimum signal levels. It can refer to a musical performance (high-volume vs. low-volume signals) or to electrical equipment (peak level before distortion vs. noise floor). For example, orchestral music has a wide dynamic range, while thrash metal has a very small (always loud) range.
Envelopes allow you to automate the change of a certain parameter over time. In the case of volume, you can create a fade out (which requires a change over time) by adding an envelope and creating a point in the line to indicate where the fade starts. Then you pull the end point of the envelope down to -infinity.
Equalizing a sound file is a process by which certain frequency bands are raised or lowered in level. EQ has various uses; the most common for ACID users is simply adjusting the subjective timbral qualities of a sound.
An instance of a media file on a track. An event may play an entire media file or a portion of the file.
A file format specifies the way in which data is stored on your floppy disks or hard drive. In Windows, the most common file format is the Microsoft .wav format. ACID software supports many other file formats as well.
Audio uses frame rates only for the purpose of synching to video or other audio. To synchronize with audio, a rate of 30 fps non-drop is typically used. To synchronize with video, 29.97 fps drop-frame is usually used.
The frequency spectrum of a signal refers to its range of frequencies. In audio, the frequency range is basically 20 Hz to 20,000 Hz. The frequency spectrum sometimes refers to the distribution of these frequencies. For example, bass-heavy sounds have a large frequency content in the low end (20 Hz – 200 Hz) of the spectrum.
A groove refers to the rhythmic pattern of a piece of music. By deviating from a machine-quantized beat, individual beats may be played early or late to change the feel of the music. Applying a groove can simulate the timing patterns of human musicians, lending a human feel to MIDI-generated music or quantizing several distinct pieces of music to a common timing.
The unit of measurement for frequency or cycles per second (CPS).
An in-place plug-in processes audio data so that the output length always matches the input length. A non-in-place plug-in’s output length need not match a given input length at any time: for example, Time Stretch, Gapper/Snipper, Pitch-Shift (without preserving duration), and some Vibrato settings can create an output that is longer or shorter than the input.
Plug-ins that generate tails when there is no more input but otherwise operate in-place (such as reverb and delay) are considered in-place plug-ins.
The insertion point (also referred to as the cursor position) is analogous to the cursor in a word processor. It is where markers or commands may be inserted depending on the operation. The Insertion Point appears as a vertical flashing black line and can be moved by clicking the left mouse button anywhere in the full-view control area.
Loops are small audio clips that are designed to create a repeating beat or pattern. Loops are usually one to four measures long and are stored completely in RAM for playback.
A marker is an anchored, accessible reference point in a file.
Media Control Interface (MCI)
A standard way for Windows programs to communicate with multimedia devices such as sound cards and CD players. If a device has an MCI device driver, it can easily be controlled by most multimedia Windows software.
A MIDI device-specific timing reference. It is not absolute time like MIDI Time Code (MTC); instead it is a tempo-dependent number of “ticks” per quarter note. MIDI clock is convenient for synchronizing devices that need to perform tempo changes mid-song. ACID software supports MIDI clock out, but does not support MIDI clock in.
A MIDI port is the physical MIDI connection on a piece of MIDI hardware. This port can be a MIDI in, out or through. Your computer must have a MIDI-capable card to output MIDI time code to an external device or to receive MIDI time code from an external device.
MIDI Time Code (MTC)
MTC is an addendum to the MIDI 1.0 specification and provides a way to specify absolute time for synchronizing MIDI-capable applications. MTC is essentially a MIDI representation of SMPTE time code.
Multiple-bit-rate encoding (also known as Intelligent Streaming for the Windows Media platform and SureStream™ for the RealMedia G2 platform) allows you to create a single file that contains streams for several bit rates. A multiple-bit-rate file can accommodate users with different Internet connection speeds, or these files can automatically change to a different bit rate to compensate for network congestion without interrupting playback.
To take advantage of multiple-bit-rate encoding, you must publish your media files to a Windows Media server or a RealServer G2.
Musical Instrument Device Interface (MIDI)
A standard language of control messages that provides for communication between any MIDI-compliant devices. Anything from synthesizers to lights to factory equipment can be controlled via MIDI. ACID software uses MIDI for synchronization purposes.
Refers to raising the volume so that the highest level sample in the file reaches a user defined level. Use normalization to make sure you are using all of the dynamic range available to you.
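A minimal sketch of peak normalization, assuming signed integer samples (the helper function is hypothetical, not the application's actual algorithm):

```python
def normalize(samples, target_peak=32767):
    """Scale samples so the loudest one just reaches the target level."""
    peak = max(abs(s) for s in samples)
    gain = target_peak / peak
    return [round(s * gain) for s in samples]

# A quiet recording is scaled up so its peak uses the full 16-bit range:
normalize([0, 16384, -16384])  # [0, 32767, -32767]
```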
The Nyquist Frequency (or Nyquist Rate) is one half of the sample rate and represents the highest frequency that can be recorded using the sample rate without aliasing. For example, the Nyquist Frequency of 44,100 Hz is 22,050 Hz. Any frequencies higher than 22,050 Hz will produce aliasing distortion in the sample if no anti-aliasing filter is used while recording.
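These two relationships can be sketched in Python. The second function is a deliberately simplified model of first-order aliasing (frequencies between the Nyquist frequency and the sample rate fold back below Nyquist); real aliasing behavior is more involved:

```python
def nyquist(sample_rate):
    """Highest frequency that can be recorded without aliasing."""
    return sample_rate / 2

def aliased_frequency(freq, sample_rate):
    """Simplified folding model: a frequency above Nyquist (but below
    the sample rate) is heard reflected back below Nyquist."""
    return freq if freq <= sample_rate / 2 else sample_rate - freq

# At 44,100 Hz, a 30,000 Hz tone recorded without an anti-aliasing
# filter would be heard as a 14,100 Hz alias:
nyquist(44100)                   # 22050.0
aliased_frequency(30000, 44100)  # 14100
```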
A media file that cannot be located on the computer. If you choose to leave the media offline, you can continue to edit events on the track; the events will point to the original location of the source media file.
One-shots are RAM-based audio clips that are not designed to loop. Things such as cymbal crashes and sound bites could be considered one-shots. Longer files can be treated as one-shots if your computer has sufficient memory.
To place a mono or stereo sound source perceptually between two or more speakers.
Peak Data File
The file created when a file is opened for the first time. This file stores the information regarding the graphic display of the waveform so that opening a file is almost instantaneous. This file is stored in the directory where the audio file resides and has a .sfk extension. If this file is not in the same directory as the audio file or is deleted, it will be recalculated the next time you open the file.
Inverting the phase of sound data reverses the polarity of a waveform around its baseline. Inverting a waveform does not change the sound of a file; however, when you mix different sound files, phase cancellation can occur, producing a “hollow” sound. Inverting one of the files can prevent phase cancellation.
In the following example, the red line represents the baseline, and the lower waveform is the inverted image of the upper waveform.
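Phase inversion and the cancellation it can cause are easy to sketch (illustrative helper, assuming integer samples):

```python
def invert_phase(samples):
    """Reverse the polarity of every sample around the zero baseline."""
    return [-s for s in samples]

# Mixing a waveform with its inverted copy cancels completely,
# which is the extreme case of the "hollow" sound described above:
wave = [0, 100, 200, 100, 0, -100, -200]
mixed = [a + b for a, b in zip(wave, invert_phase(wave))]
# mixed is all zeros: total phase cancellation
```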
Pre-roll is the amount of time elapsed before an event occurs. Post-roll is the amount of time after the event. The time selection defines the pre- and post-roll when recording into a selected event.
See Audio Proxy File.
Pulse Code Modulation (PCM)
PCM is the most common representation of uncompressed audio signals. This method of coding yields the highest fidelity possible when using digital storage. PCM is the standard format for .wav and .aif files.
To conform to prescribed values. For example, if a recorded MIDI file consisted of notes with irregular timing, you could quantize the notes to a straight time. If a file consisted of notes played in straight time, you could quantize those notes to a groove to apply a different feel. Snapping is a form of quantization that forces edits to divisions on the timeline grid or ruler.
Real-Time Streaming Protocol (RTSP)
A proposed standard for controlling broadcast of streaming media. RTSP was submitted by a body of companies including RealNetworks and Netscape.
A metafile that provides information to a media player about streaming-media files. To start a streaming media presentation, a Web page will include a link to a redirector file. Linking to a redirector file allows a file to stream; if you link to the media file, it will be downloaded before playback.
Windows Media redirector files use the .asx or .wax extension; RealMedia redirector files use the .ram, .rpm, or .smi extension.
The act of recalculating samples in a sound file at a different rate than the file was originally recorded. If a sample is resampled at a lower rate, sample points are removed from the sound file, decreasing its size, but also decreasing its available frequency range. When resampling to a higher sample rate, the software will interpolate extra sample points in the sound file. This increases the size of the sound file, but does not increase the quality. When down-sampling, one must be aware of aliasing.
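Down-sampling by an integer factor can be sketched as naive decimation. This sketch deliberately omits the anti-aliasing (low-pass) filter that a real resampler applies first, which is exactly why aliasing is a concern:

```python
def downsample(samples, factor):
    """Naive decimation: keep every `factor`-th sample.

    A real resampler low-pass filters the signal first so that
    frequencies above the new Nyquist limit do not alias.
    """
    return samples[::factor]

# Halving the sample rate keeps every other sample:
downsample([1, 2, 3, 4, 5, 6], 2)  # [1, 3, 5]
```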
The word sample is used in many different (and often confusing) ways when talking about digital sound. Here are some of the different meanings:
A discrete point in time at which a sound signal is measured when digitizing. For example, an audio CD contains 44,100 samples per second. Each sample is really just a number representing the amplitude of the waveform at that instant.
A sound that has been recorded in a digital format; used by musicians who make short recordings of musical instruments to be used for composition and performance of music or sound effects. These recordings are called samples. In this Help system, we try to use sound file instead of sample whenever referring to a digital recording.
The act of recording sound digitally, i.e. to sample an instrument means to digitize and store it.
The Sample Rate (also referred to as the Sampling Rate or Sampling Frequency) is the number of samples per second used to store a sound. High sample rates, such as 44,100 Hz provide higher fidelity than lower sample rates, such as 11,025 Hz. However, more storage space is required when using higher sample rates.
See Bit Depth.
The Sample Value (also referred to as sample amplitude) is the number stored by a single sample. In 16-bit audio, these values range from -32768 to 32767. In 8-bit audio, they range from -128 to 127. The maximum allowed sample value is often referred to as 100% or 0 dB.
Secure Digital Music Initiative (SDMI)
The Secure Digital Music Initiative (SDMI) is a consortium of recording industry and technology companies organized to develop standards for the secure distribution of digital music. The SDMI specification will answer consumer demand for convenient accessibility to quality digital music, enable copyright protection for artists’ work, and enable technology and music companies to build successful businesses.
A context-sensitive menu that appears when you click on certain areas of the screen. The functions available in the shortcut menu depend on the object being clicked on as well as the state of the program. As with any menu, you can select an item from the shortcut menu to perform an operation. Shortcut menus are used frequently for quick access to many commands.
The signal-to-noise ratio (SNR) is a measurement of the difference between a recorded signal and noise levels. A high SNR is always the goal.
The maximum signal-to-noise ratio of digital audio is determined by the number of bits per sample. In 16-bit audio, the signal-to-noise ratio is 96 dB, while in 8-bit audio it is 48 dB. However, in practice this SNR is never achieved, especially when using low-end electronics.
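The common rule of thumb behind those figures (roughly 6.02 dB of theoretical SNR per bit) can be sketched as:

```python
def max_snr_db(bit_depth):
    """Theoretical best-case SNR grows by about 6.02 dB per bit."""
    return 6.02 * bit_depth

# 16-bit audio: about 96 dB; 8-bit audio: about 48 dB.
round(max_snr_db(16))  # 96
round(max_snr_db(8))   # 48
```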
Society of Motion Picture and Television Engineers (SMPTE)
SMPTE time code is used to synchronize time between devices. The time code is calculated in Hours:Minutes:Second:Frames, where Frames are fractions of a second based on the frame rate. Frame rates for SMPTE time code are 24, 25, 29.97 and 30 frames per second.
A soft synth is a software-based synthesizer. Downloadable Sounds (DLS) and Virtual Studio Technology Instruments (VSTi) are two types of soft synths.
A method of data transfer in which a file is played while it is downloading. Streaming technologies allow Internet users to receive data as a steady, continuous stream after a brief buffering period. Without streaming, users would have to download files completely before playback.
Tempo is the rhythmic rate of a musical composition, usually specified in beats per minute (BPM).
A threshold determines the level at which the signal processor begins acting on the signal. During normalization, levels above this threshold are attenuated (cut).
The format used to display the time ruler and selection times. These can include: Time, Seconds, Frames and all standard SMPTE frame rates.
A discrete timeline for audio data. Audio events sit on tracks and determine when a sound starts and stops. Multiple audio tracks are played together to give you a composite sound that you hear through your speakers.
The Track List contains the master controls for each track. From here you can adjust the mix, select playback devices, and reorder tracks.
The majority of the Track View is made up of the space where you will draw events on each track.
µ-Law (mu-Law) is a companded compression algorithm for voice signals defined by the Geneva Recommendations (G.711). The G.711 recommendation defines µ-Law as a method of encoding 16-bit PCM signals into a nonlinear 8-bit format. The algorithm is commonly used in North American and Japanese telecommunications. µ-Law is very similar to A-Law; however, each uses a slightly different coder and decoder.
These commands allow you to change a project back to a previous state, when you don’t like the changes you have made, or reapply the changes after you have undone them.
Virtual MIDI Router (VMR)
A software-only router for MIDI data between programs. The VMR is used to receive MIDI time code and send MIDI clock. No MIDI hardware or cables are required for a VMR, so routing can only be performed between programs running on the same PC. Sony supplies a VMR called the Sony Virtual MIDI Router.
A Virtual Studio Technology instrument (VSTi) is a software synthesizer plug-in produced by Steinberg Media Technologies AG.
A digital audio standard developed by Microsoft and IBM. One minute of uncompressed CD-quality audio requires about 10 MB of storage.
A waveform is the visual representation of wave-like phenomena, such as sound or light. For example, when the amplitude of sound pressure is graphed over time, pressure variations usually form a smooth waveform.
Each event shows a graph of the sound data waveform. The vertical axis corresponds to the amplitude of the wave. For 16-bit sounds, the amplitude range is -32,768 to +32,767. For 8-bit sounds, the range is -128 to +127. The horizontal axis corresponds to time, with the leftmost point being the start of the waveform. In memory, the horizontal axis corresponds to the number of samples from the start of the sound file.
Microsoft’s Windows Media file format that can handle audio and video presentations and other data such as scripts, URL flips, images and HTML tags. Advanced Streaming Format files can be saved with the .asf or .wma extensions.