Some experiment results from today:
(0) Using NVENC directly is much better than going through the NvPipe library, which wraps it.
(1) NVENC's lossless video encode mode does worse than the prior open source lossless library (6-8 Mbps versus 4-5).
(2) NVENC only allows one encoding context at a time for depth, since the second available context must be used for the color images. So multiple depth images need to be combined into one frame before encoding, or depth has to be paired with software encoding. Still not sure how to make this practical; a sketch of the frame-packing idea is below.
(3) Using Zstd for the high bits is best done the naive way, without any filtering. Tried a few options and that works best.
(4) Rescaling the input data to 0...2047 (as wide as possible) to reduce the impact of low-bit errors ends up introducing more high-bit errors.
(5) Using Zstd for the high bits + NVENC for the low bits produces files that can be arbitrarily small, in trade for errors in the output. It's hard to judge the impact of the errors it introduces, though. If the bandwidth required for acceptable results exceeds the lossless solution, there's no reason to do the lossy one. This is probably application-dependent. See the split-plane sketch below.
(6) HEVC encodings produce fewer errors than H264 and run fast.
Going to experiment with software encoding with x265 tomorrow to see if it makes this more practical.
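A minimal sketch of the frame-packing idea from (2): stack several depth frames into one tall image so a single NVENC session can cover all of them. The function name, the vertical layout, and the four-camera assumption are illustrative, not from the gist.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Packs several depth frames (each width x height, 16-bit) into one tall
// frame of size width x (height * frames.size()) by stacking them vertically.
std::vector<uint16_t> PackDepthFrames(const std::vector<const uint16_t*>& frames,
                                      int width, int height)
{
    const size_t framePixels = static_cast<size_t>(width) * height;
    std::vector<uint16_t> packed(framePixels * frames.size());
    for (size_t i = 0; i < frames.size(); ++i) {
        std::memcpy(packed.data() + i * framePixels,
                    frames[i],
                    framePixels * sizeof(uint16_t));
    }
    return packed;
}
```

The packed frame would then be handed to the single remaining NVENC context; the receiver splits it back into per-camera frames using the same layout.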
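And a minimal sketch of the split described in (3) and (5), assuming 11-bit depth samples: the low 8 bits go to NVENC as a lossy 8-bit grayscale plane, and the high 3 bits (stored one per byte) are compressed the naive way with Zstd, no filtering. The struct and function names are assumptions for illustration, not the gist's actual code.

```cpp
#include <cstdint>
#include <vector>
#include <zstd.h>

struct SplitDepthFrame {
    std::vector<uint8_t> low;       // low 8 bits per pixel -> lossy video encode (NVENC)
    std::vector<uint8_t> highZstd;  // high 3 bits per pixel -> lossless Zstd
};

SplitDepthFrame SplitAndCompress(const uint16_t* depth, size_t pixelCount, int zstdLevel = 1)
{
    SplitDepthFrame out;
    out.low.resize(pixelCount);
    std::vector<uint8_t> high(pixelCount);

    for (size_t i = 0; i < pixelCount; ++i) {
        out.low[i] = static_cast<uint8_t>(depth[i] & 0xFF);        // lossy plane
        high[i]    = static_cast<uint8_t>((depth[i] >> 8) & 0x07); // lossless plane
    }

    // Naive Zstd compression of the raw high-bit plane, no predictive filtering.
    out.highZstd.resize(ZSTD_compressBound(pixelCount));
    const size_t written = ZSTD_compress(out.highZstd.data(), out.highZstd.size(),
                                         high.data(), high.size(), zstdLevel);
    out.highZstd.resize(ZSTD_isError(written) ? 0 : written);
    return out;
}
```

Decoding would reverse this: decode the low plane from the video stream, decompress the high plane with ZSTD_decompress, and recombine each sample as (high << 8) | low, so lossy errors stay confined to the low 8 bits as long as the high bits are intact.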