What is the single most-requested feature for any video encoder? Don’t let the title of this blog post fool you; it is actually quality. After all, everyone loves a crisp, high-definition image. So, how do you improve the quality of an encoder? Simple: by adding and enabling new encoding features. But adding those features comes at a price, and that price is most often speed.
Quality vs speed: The cause-and-effect relationship
New encoder features typically target quality improvements, but the compromise they generally demand is reduced speed. It doesn’t have to be this way. At MainConcept®, we accommodate this simple cause-and-effect relationship in every one of our video encoders by letting the user select a “performance level.” The lower the level, the faster the encoder runs, though at reduced quality; the higher the level, the better the quality, though encoding takes more time.
However, we wanted to give our customers better quality without increasing encoding time. To do this, we set out to solve the not-so-simple dilemma of how to balance the need for quality and speed.
How do you improve the quality of an encoder without extending encoding time?
To improve quality without extending processing time, we streamline the design of our encoders so they work more efficiently, using less calculation time and thus increasing encoding speed. Ultimately, building acceleration into our encoding algorithms offers our customers flexibility: they can save on encoding time at a fixed quality setting, or select a higher performance level to improve quality while maintaining encoding time. And, of course, users can balance the two factors depending on specific use case requirements.
For example, many over-the-top (OTT) streaming and delivery customers need to save on encoding time during live events, adjusting quality as a compromise to prevent video buffering due to bandwidth congestion. Conversely, customers in the production and broadcast space require the highest video quality (4:2:2) for mastering broadcast encodings for playout, dedicating more encoding time to meet more stringent video quality demands. By building encoding efficiency directly into our video encoders, our customers receive out-of-the-box flexibility to tune the encoder to match their needs.
In our MainConcept SDK release earlier this week, we made performance-enhancing changes available across our SDKs. The most impressive of these updates is to the MainConcept AVC/H.264 Video Encoder: overall speed for all AVC formats has improved by roughly 20%. Our ARM-based SDKs for Apple Silicon and Windows for ARM achieve an additional 20% speed increase through further optimization of our assembly code.
How can you speed up AVC even more?
In case this speed improvement is not enough, MainConcept’s AVC/H.264 video encoder now includes access to NVIDIA NVENC technology through the same familiar MainConcept API, blending the benefits of software and hardware encoding. Built-in hardware detection and software fallback allow for the simplest integration into any AVC/H.264 workflow.
Are there changes to the MainConcept AVC decoder?
While making these performance-enhancing changes to the AVC encoder, our engineering teams also optimized routines in the MainConcept AVC/H.264 video decoder so it uses fewer CPU cycles, yielding a 10% overall speed boost compared to previous versions.
What other changes were made to the MainConcept SDKs this summer?
Other feature extensions in this week’s release improve the HEVC SDK’s High Dynamic Range (HDR) capabilities in two ways: first, by offering pre-defined, simple-to-use templates for common colorimetry settings; second, by adding HDR signaling at the container level (MP4 and MXF) for improved interoperability. Either way, MainConcept’s components all integrate seamlessly into any HDR workflow. We’ve also made some exciting updates to MPEG-2, but that is for another blog post...
To learn more, read the press release or watch our inaugural BrightTALK webinar, “How to Improve Digital Video Workflow Efficiency using MainConcept SDKs,” live on 22 July or on demand afterwards.