If I connect a signal to a DX88p that is connected to a dLive Surface, how much latency does it add compared to connecting the signal directly to the Surface I/O?
As far as I know, using the DX protocol adds 8 samples = 83 µs of latency, if I’ve interpreted your question correctly.
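To put that figure in context, converting a sample count to time is simple arithmetic at the dLive's 96 kHz sample rate. A quick sketch (the helper name is just for illustration):

```python
# Convert a latency measured in samples to microseconds at a given
# sample rate. The dLive system runs at 96 kHz, so 8 samples of DX
# transport latency works out to roughly 83 µs.
def samples_to_us(samples: int, sample_rate_hz: int = 96_000) -> float:
    return samples / sample_rate_hz * 1_000_000

print(round(samples_to_us(8), 1))  # 8 samples at 96 kHz ≈ 83.3 µs
```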
Shouldn’t that be the same 0.7 ms as any input?
The 0.7 ms of latency that A&H advertises is roughly the entire amount of latency audio experiences going through the system. This figure includes the initial A→D conversion, any and all processing in the console, and the D→A conversion as the audio leaves the console. The actual latency the system produces is lower than 0.7 ms, and while adding stage boxes to the system will increase the total latency for that I/O, it will still measure under the 0.7 ms number. However, any added I/O (i.e. a DX stage box connected to the Mixrack) will have a different total latency than I/O connected directly to the Mixrack.
That being said, adding more I/O to the system through stage boxes does not “double” the total latency, because the only difference is the “transport” latency (the time it takes for the audio to travel from the stage box to the console). All of the other latency is already included in the console’s advertised 0.7 ms figure. In other words, even though audio coming into an attached stage box goes through an A→D conversion in the stage box, it doesn’t go through another A→D conversion in the console, nor is the audio “double processed” in the console, so the added latency is very small and nowhere near “double”.
In technical terms, audio sent to/from the system via different sources (i.e. a stage box vs. the Mixrack vs. a Dante source vs. a MADI source, etc.) will each have a different total latency than I/O connected directly to the Mixrack. However, these times are close enough to each other that you generally don’t need to compensate for them unless you are creating identical audio paths, traveling to/from different sources/destinations, that will be summed together down the line (i.e. both signals coming out of the same PA speakers). If the identical audio sources are going to different destinations (one to the PA in the room and one to a broadcast mix), you don’t need to worry about it. But if you are sending two identical audio sources to the same destination through different audio paths, then you want to measure the difference and delay the faster path to match the slower path in order to prevent comb filtering.
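The “delay the faster path to match the slower path” step above can be sketched as a few lines of arithmetic. The latency figures here are illustrative placeholders, not measured dLive values:

```python
# Sketch: time-align two copies of the same signal that arrive with
# different path latencies, by delaying the faster path to match the
# slower one. Results are in whole samples at the system sample rate.
SAMPLE_RATE = 96_000  # dLive system sample rate in Hz

def align_delay_samples(latency_a_s: float, latency_b_s: float):
    """Return (delay for path A, delay for path B) in whole samples."""
    slower = max(latency_a_s, latency_b_s)
    delay_a = round((slower - latency_a_s) * SAMPLE_RATE)
    delay_b = round((slower - latency_b_s) * SAMPLE_RATE)
    return delay_a, delay_b

# Example: a direct Mixrack input vs. the same source through a DX box
# adding ~83 µs of transport latency (hypothetical numbers).
print(align_delay_samples(0.0, 83e-6))  # delay the direct path by 8 samples
```

In practice you would measure the two paths (e.g. with a transfer-function measurement tool) rather than calculate them, then dial the resulting delay into the faster channel.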
Interesting. So a DX input will not be compensated relative to a Surface input? It’s only a few samples of course, but nevertheless good to know.
I thought the dLive always applied 0.7 ms so that all possible inputs stay aligned.
The dLive automatically compensates for all of its internal processing, but latency that occurs before or after the Mixrack’s internal processing cannot be automatically compensated for, because there are effectively infinite combinations of variables that could occur.
This includes adding stage boxes, because even that introduces too many variables: is it a single DX box, are there two DX boxes daisy-chained together, is it a GX box, is it a GX box with a DX box connected, is it a GX box with two DX boxes daisy-chained to it, etc.? All of those scenarios would produce a different overall latency for each stage box being used.
Again, we are talking about 83 µs (0.083 ms) of latency for a single instance of the DX protocol, and the odds of routing identical audio paths to the same destination through different paths (one going out the Mixrack I/O while the other goes out an attached stage box) are pretty low. This is because if the audio is going to the same destination, it would be natural to set the system up so the I/O travels through the same physical box (i.e. both out the Mixrack, or both out the stage box). So unless it is a very strange situation, you shouldn’t have to worry about differences in total latency due to which physical I/O is being used.
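For a sense of why even 83 µs matters when two copies of a signal *are* summed: the first comb-filter null of two summed copies with offset Δt lands at 1/(2·Δt). A quick check:

```python
# First comb-filter null when two identical signals are summed with a
# time offset delta_t: f_null = 1 / (2 * delta_t). For the ~83 µs DX
# offset this falls around 6 kHz, well inside the audible range.
def first_null_hz(delta_t_s: float) -> float:
    return 1.0 / (2.0 * delta_t_s)

print(round(first_null_hz(83e-6)))  # ≈ 6024 Hz
```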
Yeah, just good to know. Maybe I’ll test that on Monday: an analog split of a mic into the Surface and into a DX on the Mixrack. If I can do that, I’ll report back.
It’s really irrelevant.
You shouldn’t have a problem splitting a source into two different analog inputs, even if those inputs are on different pieces of hardware (i.e. the DX88p and the Surface). I say that because if you are going to the trouble of splitting the source into two analog inputs, then clearly you must want to apply different processing on the two channels and/or send them to two different destinations. This means that even though the original source is identical, the audio will not be identical when it is output or summed together. Therefore there is no need to time-align the two audio paths, because they are either not identical or not being summed together before being output.
PS - this raises another point that we haven’t discussed in this thread yet. Even I/O on the Surface is going to have a different total latency number than I/O on the Mixrack. Again, it is normally not something you have to worry about, but technically it is a different figure.
Actually I don’t want to split the source, I just want to know if connecting the source to a DX88P adds latency, compared to connecting the source to the surface. That’s all.