I copied this post from a Meridian owners' site. This is very promising.

Back in October, Bob Stuart invited a few of us to HQ to talk about a new technology. Ferrets that we are, we did some Googling, which turned up the MQA and MQL trademark applications; further digging revealed a patent (surprised no-one else has found this yet!) describing an encoding and decoding system. Adding other pieces to the jigsaw, such as the SE loudspeakers' tweeter, the “higher bandwidth” analogue electronics, “DAC management”, and other snippets gleaned from chats with Meridian staff at shows and events, we guessed what MQA might be. Yet we were absolutely not prepared for what we were about to hear.

Bob extended the definition of “lossless” audio to include the A2D and D2A processes. Meridian has developed “pipelines” which characterise the actual ADCs used (and additional analogue processing equipment as well). This information is encoded as metadata in the MQA file and, once decoded, can be used to *manage the DACs* (!!!), essentially providing an audibly lossless A2D2A chain.
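
I have no idea what that metadata actually looks like, so purely as a hypothetical sketch of the idea (every field name below is invented for illustration, not taken from Meridian or any MQA documentation), an encoder-side description of the capture chain riding along with the file might look something like this:

```python
# Hypothetical sketch only: the field names and structure are invented for
# illustration and are not from any MQA specification.
from dataclasses import dataclass, asdict
import json

@dataclass
class CaptureProfile:
    """Describes the A2D chain so a decoder could compensate on the D2A side."""
    adc_model: str               # which ADC captured the master
    sample_rate_hz: int          # original capture rate
    antialias_filter: str        # e.g. the filter family used during capture
    correction_coeffs: list     # values a decoder might apply to "manage" the DAC

profile = CaptureProfile("example-adc", 192000, "apodising", [1.0, -0.12, 0.03])
metadata_blob = json.dumps(asdict(profile))   # would travel inside the MQA file
print(metadata_blob)
```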

He then went on to challenge what was meant by "HiRes" audio.

In the last decade, there have been tremendous strides in psychoacoustics and, importantly, in neuroscience (which has informed the psychoacoustics). The short of it is that the industry has been grossly mistaken about the relative importance of the frequency domain versus the time domain. Yes, there is anecdotal evidence that higher sample rates sound better, but no-one has ever really articulated why (other than the pre- and post-ringing "naturalness" arguments).

The latest findings, grounded in science, are that, when it comes to human hearing, the time domain is up to 5x more important than the frequency domain. If you hear a twig snap in the woods, you know immediately where it is (time domain); you actually “decode” what it was afterwards (frequency domain). This is evolution at work: hearing is the most important sense for survival, because it works when your eyes are shut, when you’re not looking in the relevant direction, and in the dark.

The human hearing system is sensitive to time differences of about 10 microseconds, and here’s the kicker: much, if not most, of this resolution is destroyed in anything encoded digitally below a 192kHz sampling rate.

That’s right: 96kHz is NOT enough.
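
Taking one sample period as a crude proxy for time resolution (my simplification, not Bob's argument), the back-of-the-envelope arithmetic behind that claim is easy to check:

```python
# Back-of-the-envelope check: compare one sample period at common PCM rates
# against the ~10 microsecond figure quoted above.
CLAIMED_RESOLUTION_US = 10.0

for rate_hz in (44_100, 48_000, 96_000, 192_000):
    period_us = 1_000_000 / rate_hz
    verdict = "finer" if period_us < CLAIMED_RESOLUTION_US else "coarser"
    print(f"{rate_hz:>7} Hz -> {period_us:5.1f} us per sample ({verdict} than 10 us)")
```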

However, is the public about to download or stream 192/24 audio? No, because it’s not *convenient*. How, then, to provide audio of the highest quality to the masses? The short of it is that Meridian has found a way of folding the time-resolution information into a regular PCM file at a lower sample rate (it’s actually hidden below the noise floor). It’s a stroke of genius: to anything other than an MQA decoder, an MQA file appears as an ordinary, playable PCM file, but an MQA decoder can "unfold" it back to the original sample rate, restoring the time-resolution information.
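
Meridian hasn't published how the folding actually works, so the following is only a toy illustration of the general idea of hiding extra information below the noise floor, in the least significant bits of a lower-rate PCM stream; it is emphatically not the MQA algorithm:

```python
# Toy illustration only: hide a small "detail" payload in the least
# significant bits of 24-bit PCM words, then recover it. The real MQA
# folding is not public and is certainly far more sophisticated than this.

HIDDEN_BITS = 4                    # LSBs we borrow, i.e. "below the noise floor"
MASK = (1 << HIDDEN_BITS) - 1

def fold(pcm_words, detail):
    """Replace the bottom HIDDEN_BITS of each 24-bit word with detail data."""
    return [(w & ~MASK) | (d & MASK) for w, d in zip(pcm_words, detail)]

def unfold(folded_words):
    """Split each word back into the audible part and the hidden detail."""
    return [w & ~MASK for w in folded_words], [w & MASK for w in folded_words]

# A non-MQA player simply plays `folded` as ordinary PCM; the hidden values
# look like low-level noise. An MQA-aware decoder would call unfold() first.
samples = [0x123456, 0x0FEDCB, 0x7FFFFF]   # made-up 24-bit sample words
detail  = [0x3, 0xA, 0x5]                  # made-up "unfolded band" data
folded = fold(samples, detail)
print([hex(w) for w in folded])            # ['0x123453', '0xfedca', '0x7ffff5']
print(unfold(folded))
```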

Another crucial learning from neuroscience is that the brain has three times as many nerves sending signals TO the cochlea as sending information FROM the cochlea to the brain. This is an incredible fact; the brain actively switches the ear’s sensitivity (to frequency) depending on the situation (natural sounds, animal sounds, and speech). The encoding algorithm takes these different hearing modes into account (don’t ask me how!), and the "compression" applied to the master file (which can be anything from a (non-ideal) 44.1/16 master up to 8x sample rate) is not lossy in the conventional sense. Nothing is removed from the file that would allow a human being to differentiate between the MQA encoding and the master as heard in the studio. Lossy? No, that would be an extremely unfair and naive description. "Encoded for human hearing" would be more accurate.

So what is MQA? It stands for “Master Quality Authenticated”. Master Quality because it is able to deliver essentially what the recording artist heard in the studio. Authenticated because the audio data are signed (no, not DRM) so that an MQA decoder can verify the authenticity of the MQA file: that it is intact and exactly as intended when it left the studio, having been signed off by the artist and producer.
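
I have no idea which signature scheme is actually used, so purely as a generic illustration of how a decoder could verify that a stream is intact and studio-approved, here is an ordinary public-key signature check (Ed25519 via the third-party Python `cryptography` package, and emphatically not MQA's actual mechanism):

```python
# Generic signed-audio illustration, not MQA's actual scheme.
# Requires the third-party `cryptography` package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Studio side: sign the finished audio payload with the studio's private key.
studio_key = Ed25519PrivateKey.generate()
audio_payload = b"...encoded audio data..."
signature = studio_key.sign(audio_payload)

# Decoder side: the matching public key ships with, or is fetched for, the file.
public_key = studio_key.public_key()
try:
    public_key.verify(signature, audio_payload)   # raises if the data was altered
    print("Authenticated: stream is intact and as signed off in the studio.")
except InvalidSignature:
    print("Verification failed: the file has been tampered with.")
```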

MQA has broad music industry backing from execs, artists, and producers. Meridian has been working on it for the past 4-5 years and for the last three years has taken the technology on a roadshow, demonstrating it and working with recording artists and producers. MQA is very much artist endorsed. It is an enabling technology: Meridian isn’t going to be MQA-encoding the whole back catalogue of recorded music; that’s the job of the studios. The first MQA files are expected to be released early in 2015. All of the major studios are on board, plus smaller labels. MQA decoding will not be restricted to Meridian hardware and software.

Given the number of parties involved, it is frankly staggering that this has all come together. That it has is testament to Bob's vision, determination, and no small amount of hard work by him and his team.

Oh, and it sounds more real than you have ever heard. Period. Hearing Louis Armstrong through a pair of 7200SEs – as if he was in the room – was a jaw-dropping moment that I will never forget. It *is* that good.


Postscript: Obviously we couldn’t comment until now, but we are very grateful to Bob, Richard, John, Chris, and the other staff at Meridian for trusting us with this information. On a very poignant note, our trip to HQ was the last time we met with Paul Webb, only a few days before he died. He would have loved to have seen the end of this part of the journey and the beginning of another.