
NVIDIA’s AI Paper Presents a Proposal for Compact NGP (Neural Graphics Primitives): A Machine Learning Platform that Employs Learned Probes and Hash Tables for Maximum Speed and Compression

Neural Graphics Primitives (NGP) are changing how visual assets are represented and deployed across applications. By encoding images, shapes, volumetric and spatial-directional data, they support novel view synthesis (NeRFs), generative modeling, light caching, and more. To push this further, researchers at NVIDIA and the University of Toronto have proposed Compact NGP, a machine-learning framework that combines the speed of hash tables with the compression efficiency of learned indexing through learned probing.

This combination is achieved by unifying all feature grids into a shared framework in which each grid acts as an indexing function mapping into a table of feature vectors. Compact NGP was designed with content distribution in mind, aiming to amortize compression overhead. Its design keeps decoding on user equipment low-cost, low-power, and multi-scale, enabling graceful degradation in bandwidth-constrained environments.
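To make the "indexing function into a feature table" idea concrete, here is a minimal sketch of a spatial-hash lookup in the spirit of Instant NGP, on which Compact NGP builds: a grid vertex's integer coordinates are mixed with per-dimension primes and reduced modulo the table size. All names, constants, and the nearest-vertex simplification (real implementations interpolate across a voxel's corners and multiple resolutions) are illustrative assumptions, not the paper's exact code.

```python
import numpy as np

# Per-dimension mixing primes, as commonly used for spatial hashing.
PRIMES = (1, 2654435761, 805459861)

def hash_index(vertex, table_size):
    """Map an integer 3D grid vertex to a slot in the feature table."""
    h = 0
    for coord, prime in zip(vertex, PRIMES):
        h ^= int(coord) * prime
    return h % table_size

def encode(point, resolution, feature_table):
    """Look up the feature of the grid vertex nearest a point in [0, 1)^3."""
    vertex = np.floor(np.asarray(point) * resolution).astype(np.int64)
    return feature_table[hash_index(vertex, len(feature_table))]

# A table of 2^14 feature vectors with 2 features each.
table = np.random.default_rng(0).normal(size=(2**14, 2))
feat = encode((0.25, 0.5, 0.75), resolution=128, feature_table=table)
```

The key property is that the table is shared: many spatial locations collide into the same slot, and training lets the network disambiguate them, which is what makes the representation so compact.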

The data structures can be combined in novel ways through basic arithmetic on their indices, yielding state-of-the-art compression-versus-quality trade-offs. Mathematically, these arithmetic combinations assign the different data structures to disjoint subsets of the bits within the indexing function, which sharply reduces the cost of learned indexing, a cost that otherwise scales exponentially with the number of bits.
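The bit-partition idea can be sketched as follows: the feature-table index is built by concatenating bits from two sources, with a cheap spatial hash supplying the upper bits and a small learned probe supplying only the lower bits. Because only the probe bits are learned, the learned search space is 2^N_probe rather than 2^(N_hash + N_probe). The bit widths and names below are illustrative assumptions, not the paper's configuration.

```python
N_PROBE_BITS = 2   # learned probe selects among 2^2 = 4 nearby slots
N_HASH_BITS = 12   # spatial hash supplies the coarse table position

def combined_index(hash_value, learned_probe):
    """Concatenate hash bits (upper) and probe bits (lower) into one index."""
    assert 0 <= learned_probe < (1 << N_PROBE_BITS)
    coarse = hash_value % (1 << N_HASH_BITS)
    return (coarse << N_PROBE_BITS) | learned_probe

# The full table has 2^(12 + 2) = 16384 slots, but learning only ever
# chooses among the 4 slots addressed by the probe bits.
table_size = 1 << (N_HASH_BITS + N_PROBE_BITS)
idx = combined_index(hash_value=0xABCDEF, learned_probe=3)
```

Training can then optimize the probe per entry (e.g., by scoring each of the four candidate slots), which is tractable precisely because the exponential blow-up is confined to the probe bits.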

Their approach inherits the speed advantages of hash tables while achieving significantly improved compression, approaching levels comparable to JPEG in image representation. It retains differentiability and does not rely on a dedicated decompression scheme like an entropy code. Compact NGP demonstrates versatility across various user-controllable compression rates and offers streaming capabilities, allowing partial results to be loaded, especially in bandwidth-limited environments.

To evaluate Compact NGP, the researchers benchmarked NeRF compression on both real-world and synthetic scenes, comparing against several contemporary NeRF compression techniques, most of them based on TensoRF. Across both scene types, Compact NGP outperformed Instant NGP on the trade-off between quality and size.

Compact NGP is tailor-made for real-world applications where random-access decompression, level-of-detail streaming, and high performance are key. Potential uses range from streaming and video game texture compression to live training and beyond. We can't wait to see how Compact NGP will transform the world of AI!
