Exact-NeRF: An Exploration of a Precise Volumetric Parameterization for Neural Radiance Fields

Neural Radiance Fields (NeRF) have attracted significant attention due to their ability to synthesize novel scene views with great accuracy. However, inherent to their underlying formulation, the sampling of points along a ray with zero width may result in ambiguous representations that lead to further rendering artifacts such as aliasing in the final scene. To address this issue, the recent variant mip-NeRF proposes an Integrated Positional Encoding (IPE) based on a conical view frustum. Although this is expressed with an integral formulation, mip-NeRF instead approximates this integral as the expected value of a multivariate Gaussian distribution. This approximation is reliable for short frustums but degrades with highly elongated regions, which arise when dealing with distant scene objects under a larger depth of field. In this paper, we explore the use of an exact approach for calculating the IPE by using a pyramid-based integral formulation instead of an approximated conical-based one. We denote this formulation as Exact-NeRF and contribute the first approach to offer a precise analytical solution to the IPE within the NeRF domain. Our exploratory work illustrates that such an exact formulation (Exact-NeRF) matches the accuracy of mip-NeRF and furthermore provides a natural extension to more challenging scenarios without further modification, such as in the case of unbounded scenes. Our contribution aims both to address the hitherto unexplored issues of frustum approximation in earlier NeRF work and to provide insight into the potential consideration of analytical solutions in future NeRF extensions.


Introduction
Novel view synthesis is a classical and long-standing task in computer vision that has been thoroughly re-investigated via recent work on Neural Radiance Fields (NeRF) [23]. NeRF learns an implicit representation of a 3D scene from a set of 2D images via a Multi-Layer Perceptron (MLP) that predicts the visual properties of 3D points uniformly sampled along the viewing ray given its coordinates and viewing direction. This parameterization gives NeRF the dual ability to both represent 3D scenes and synthesize unseen views. In its original formulation, NeRF illustrates strong reconstruction performance for synthetic datasets comprising object-centric scenes with no background (bounded) and for forward-facing real-world scenes. Among its applications, NeRF has been used for urban scene representation [29,32,36], human body reconstruction [3,19], image processing [14,20,22] and physics [10,17].

Figure 1. Comparison of Exact-NeRF (ours) with mip-NeRF 360 [2]. Our method is able to both match the performance and obtain superior depth estimation over a larger depth of field.
Nonetheless, the underlying sparse representation of 3D points learnt by the MLP may cause ambiguities that can lead to aliasing and blurring. To overcome these issues, Barron et al. proposed mip-NeRF [1], an architecture that uses cone tracing instead of rays. This architecture encodes conical frustums as the inputs of the MLP by approximating the integral of a sine/cosine function over a region in space with a multivariate Gaussian. This reparameterization notably increases reconstruction quality on multi-scale datasets. However, this approximation is only valid for bounded scenes, where the conical frustums do not suffer from the large elongations attributable to a large depth of field within the scene.
The NeRF concept has been extended to represent increasingly difficult scenes. For instance, mip-NeRF 360 [2] learns a representation of unbounded scenes with a central object by giving more capacity to points that are near the camera, modifying the network architecture and introducing a regularizer that penalizes 'floaters' (unconnected depth regions in free space) and other small unconnected regions. In order to model distant regions, mip-NeRF 360 transforms the multivariate Gaussians with a contraction function. This modification allows a better representation and outperforms standard mip-NeRF on an unbounded-scenes dataset. However, the modification of the Gaussians requires careful analysis to encode the correct information in the contracted space, including the linearization of the contraction function to accommodate the Gaussian approximations. This leads to degraded performance of mip-NeRF 360 when the camera is far from the object. Additionally, mip-NeRF 360 struggles to render thin structures such as tree branches or bicycle spokes.
Motivated by this, we present Exact-NeRF as an exploration of an alternative exact parameterization of the underlying volumetric regions that are used in the context of mip-NeRF (Fig. 1). We propose a closed-form volumetric positional encoding formulation (Sec. 3) based on pyramidal frustums instead of the multivariate Gaussian approximation used by mip-NeRF and mip-NeRF 360. Exact-NeRF matches the performance of mip-NeRF on a synthetic dataset while obtaining sharper reconstructions around edges. Our approach can be applied without further modification to the contracted space of mip-NeRF 360. Our naive implementation of Exact-NeRF for the unbounded scenes of mip-NeRF 360 shows a small decrease in performance, but it is able to obtain cleaner reconstructions of the background. Additionally, the depth map estimations obtained by Exact-NeRF are less noisy than those of mip-NeRF 360. Our key contribution is the formulation of a general integrated positional encoding framework that can be applied to any shape whose surface can be broken into triangles (i.e., a polyhedron). We intend our work to serve as motivation to investigate different shapes and analytical solutions for volumetric positional encoding. The code is available at https://github.com/KostadinovShalon/exact-nerf.

Related Work
Numerous works have focused on improving NeRF since its original inception [23], such as decreasing the training time [5,6,9,13,37], increasing the synthesis speed [11,31,38], reducing the number of input images [27] and improving the rendering quality [1,2,18,21,33,39]. Within the latter, one focus has been to change the positional encoding to account for the volumetric nature of the regions that contribute to pixel rendering [1,2].

Positional Encoding
NeRF uses a positional encoding (PE) on the raw coordinates of the input points in order to induce the network to learn higher-frequency features [28]. However, the sampled points in NeRF are intended to represent a region in the volumetric space. This can lead to ambiguities that may cause aliasing. To address this, mip-NeRF [1] uses volumetric rendering by casting cones instead of rays, changing the input of the MLP from points to cone frustums. These regions are encoded using an integrated positional encoding (IPE), which aims to integrate the PE over the cone frustums. Given that the associated integral has no closed-form solution, they formulate the IPE as the expected value of the positional encoding under a 3D Gaussian distribution centred in the frustum. The IPE reduces aliasing by reducing the ambiguity of single-point encoding. Mip-NeRF 360 [2] uses a contracted space representation to extend the mip-NeRF parameterization to 360° unbounded scenes, since the approximation given in mip-NeRF degrades for the elongated frustums which arise in the background. Additionally, and similar to DONeRF [26], mip-NeRF 360 samples the intervals of the volumetric regions using the inverse of the distance in order to assign greater capacity to nearer objects. By contrast, in this work we explore the use of pyramid-based frustums in order to enable an exact integral formulation of the IPE which can be applied to bounded and unbounded scenes alike.
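As a concrete reference, the per-coordinate positional encoding described above can be sketched as follows (a minimal NumPy version; the frequency convention, e.g. an extra π factor used in some implementations, is an assumption here):

```python
import numpy as np

def positional_encoding(x, L):
    """Element-wise PE mapping R -> R^{2L}:
    [sin(2^0 x), ..., sin(2^{L-1} x), cos(2^0 x), ..., cos(2^{L-1} x)]."""
    x = np.asarray(x, dtype=np.float64)
    freqs = 2.0 ** np.arange(L)          # 2^0 ... 2^{L-1}
    angles = x[..., None] * freqs        # (..., L)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
```

Each scalar coordinate expands into 2L features; the IPE discussed next replaces each of these point evaluations with an average over a volumetric region.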

NeRF and Mip-NeRF parameterization
NeRF uses an MLP $f$ with parameters $\Theta$ to obtain the colour $\mathbf{c} \in \mathbb{R}^3$ and density $\sigma \in [0, +\infty)$ given a point $\mathbf{x} \in \mathbb{R}^3$ and a viewing direction $\mathbf{v} \in S^2$, where $S^2$ is the unit sphere, such that

$$(\mathbf{c}, \sigma) = f(\gamma(\mathbf{x}), \gamma(\mathbf{v}); \Theta), \tag{1}$$

where the positional encoding $\gamma : \mathbb{R} \rightarrow \mathbb{R}^{2L}$,

$$\gamma(x) = \left[\sin(2^0 x), \cos(2^0 x), \ldots, \sin(2^{L-1} x), \cos(2^{L-1} x)\right], \tag{2}$$

is applied to each coordinate of $\mathbf{x}$ and each component of $\mathbf{v}$ independently. The sampling strategy of NeRF consists of sampling random points along the ray that passes through a pixel. This ray is represented by $\mathbf{r}(t) = t\mathbf{d} + \mathbf{o}$, where $\mathbf{o}$ is the camera position and $\mathbf{d}$ is the vector that goes from the camera centre to the pixel in the image plane. The ray is divided into $N$ intervals and the points $\mathbf{r}(t_i)$ are drawn from a uniform distribution over each interval, such that

$$t_i \sim \mathcal{U}\left[t_n + \tfrac{i-1}{N}(t_f - t_n),\; t_n + \tfrac{i}{N}(t_f - t_n)\right], \tag{3}$$

where $t_n$ and $t_f$ are the near and far planes. The colour and density of each point along the ray are then obtained as $(\mathbf{c}_i, \sigma_i) = f(\gamma(\mathbf{r}(t_i)), \gamma(\mathbf{v}); \Theta)$. Finally, the pixel colour $\hat{C}(\mathbf{r})$ is obtained using numerical quadrature,

$$\hat{C}(\mathbf{r}) = \sum_i T_i \left(1 - e^{-\sigma_i \delta_i}\right)\mathbf{c}_i, \qquad T_i = \exp\Big(-\sum_{j<i} \sigma_j \delta_j\Big), \tag{4}$$

where $\delta_i = t_{i+1} - t_i$. This process is carried out hierarchically by using coarse $\hat{C}_c$ and fine $\hat{C}_f$ samplings, where the 3D points in the latter are drawn from the PDF formed by the weights of the density values of the coarse sampling. The loss is then the combination of the mean-squared errors of the coarse and fine renderings for all rays $\mathbf{r} \in \mathcal{R}$, i.e.,

$$\mathcal{L} = \sum_{\mathbf{r} \in \mathcal{R}} \left[\big\lVert \hat{C}_c(\mathbf{r}) - C(\mathbf{r}) \big\rVert_2^2 + \big\lVert \hat{C}_f(\mathbf{r}) - C(\mathbf{r}) \big\rVert_2^2\right]. \tag{5}$$

Mip-NeRF [1] is similar to NeRF, but it utilises cone tracing instead of ray tracing. This change has the direct consequence of replacing ray intervals by conical frustums $F(\mathbf{d}, \mathbf{o}, \rho, t_i, t_{i+1})$, where $\rho$ is the radius of the circular section of the cone at the image plane (Fig. 2a). This leads to the need for a new positional encoding that summarizes the function in Eq. (2) over the region defined by the frustum. The proposed IPE is thus given by

$$\gamma^*(F) = \frac{\int_F \gamma(\mathbf{x})\, d\mathbf{x}}{\int_F d\mathbf{x}}. \tag{6}$$

Since the integral in the numerator of Eq. (6) has no closed-form solution, mip-NeRF proposes to approximate it by considering the cone frustums as multivariate Gaussians. Subsequently, the approximated IPE $\gamma^*$ is given by

$$\gamma^*(\boldsymbol{\mu}, \boldsymbol{\Sigma}) = \left[\sin(\boldsymbol{\mu}_\gamma) \circ \exp\!\left(-\tfrac{1}{2}\operatorname{diag}(\boldsymbol{\Sigma}_\gamma)\right),\; \cos(\boldsymbol{\mu}_\gamma) \circ \exp\!\left(-\tfrac{1}{2}\operatorname{diag}(\boldsymbol{\Sigma}_\gamma)\right)\right], \tag{7}$$

where $\boldsymbol{\mu} = \mathbf{o} + \mu_t \mathbf{d}$ is the centre of the Gaussian for a frustum defined by $\mathbf{o}$ and $\mathbf{d}$ with mean distance along the ray $\mu_t$, $\boldsymbol{\Sigma}$ is the covariance matrix, $\circ$ denotes the element-wise product, and $\boldsymbol{\mu}_\gamma$, $\boldsymbol{\Sigma}_\gamma$ are the mean and covariance lifted into the encoding space. This formulation was empirically shown to be accurate for bounded scenes where a central object is the main part of the scene and no background information is present. However, the approximation deteriorates for highly elongated frustums. To avoid this, mip-NeRF 360 [2] instead uses a contracted space where points that lie beyond the unit sphere are mapped using the function

$$f(\mathbf{x}) = \begin{cases} \mathbf{x} & \lVert \mathbf{x} \rVert \leq 1 \\ \left(2 - \frac{1}{\lVert \mathbf{x} \rVert}\right) \dfrac{\mathbf{x}}{\lVert \mathbf{x} \rVert} & \lVert \mathbf{x} \rVert > 1. \end{cases} \tag{9}$$

Subsequently, the new $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$ values are given by $f(\boldsymbol{\mu})$ and $\mathbf{J}_f(\boldsymbol{\mu})\,\boldsymbol{\Sigma}\,\mathbf{J}_f(\boldsymbol{\mu})^\top$, where $\mathbf{J}_f$ is the Jacobian matrix of $f$. Empirically, this re-parameterization allows learning the representation of scenes with distant backgrounds (i.e., over a longer depth of field).
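The contraction above is simple to state in code; a minimal per-point sketch (our own, applied to individual points rather than to a Gaussian):

```python
import numpy as np

def contract(x):
    """mip-NeRF 360 scene contraction: identity inside the unit ball,
    maps all of space into a ball of radius 2 outside it."""
    x = np.asarray(x, dtype=np.float64)
    n = np.linalg.norm(x)
    if n <= 1.0:
        return x
    return (2.0 - 1.0 / n) * (x / n)
```

Note that applying this directly to sample points needs no Jacobian; the linearization arises only because mip-NeRF 360 must push a whole Gaussian through the mapping.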

Exact-NeRF
In this paper, we present Exact-NeRF as an exploration of how the IPE approximations of earlier work [1,2] based on a conic parameterization can be replaced with a square pyramid-based formulation in order to obtain an exact IPE $\gamma_E$, as shown in Fig. 2. The motivation behind this formulation is to match the volumetric rendering with the pixel footprint, which is itself a rectangle.

Volume of pyramidal frustums
A pyramidal frustum can be defined by a set of 8 vertices $\mathcal{V} = \{v_i\}_{i=1}^{8}$ and 6 quadrilateral faces $\mathcal{F} = \{f_j\}_{j=1}^{6}$. In order to obtain the volume in the denominator of Eq. (6), we use the divergence theorem:

$$\iiint_V (\nabla \cdot \mathbf{F})\, dV = \oiint_S (\mathbf{F} \cdot \hat{\mathbf{n}})\, dS, \tag{10}$$

with $\mathbf{F} = \frac{1}{3}[x, y, z]^\top$, yielding the volume as

$$V = \frac{1}{3} \oiint_S [x, y, z] \cdot \hat{\mathbf{n}}\, dS. \tag{11}$$

Without loss of generality, we divide each face into triangles, giving a set of triangular faces $\mathcal{T}$ such that the polyhedra formed by faces $\mathcal{F}$ and $\mathcal{T}$ are the same. Each triangle $\tau$ is defined by three points $P_{\tau,0}$, $P_{\tau,1}$ and $P_{\tau,2}$, with $P_{\tau,i} \in \mathcal{V}$, such that the cross product of the edges $E_{\tau,1} = P_{\tau,1} - P_{\tau,0}$ and $E_{\tau,2} = P_{\tau,2} - P_{\tau,0}$ points outside the frustum (Fig. 3). As a result, Eq. (11) equates to the sum of the surface integrals over each triangle $\tau \in \mathcal{T}$,

$$V = \frac{1}{3} \sum_{\tau \in \mathcal{T}} \iint_{\tau} [x, y, z] \cdot \hat{\mathbf{n}}_\tau\, dS. \tag{12}$$

The points lying in the triangle $P_{\tau,0} P_{\tau,1} P_{\tau,2}$ can hence be parameterized as

$$P_\tau(u, v) = P_{\tau,0} + u\, E_{\tau,1} + v\, E_{\tau,2}, \qquad u \in [0, 1],\; v \in [0, 1 - u]. \tag{13}$$

The differential term of Eq. (12) is then

$$\hat{\mathbf{n}}_\tau\, dS = (E_{\tau,1} \times E_{\tau,2})\, du\, dv. \tag{15}$$

By substituting Eq. (15) into Eq. (12), and noting that $[x, y, z]^\top = P_\tau(u, v)$, we obtain

$$V = \frac{1}{3} \sum_{\tau \in \mathcal{T}} \int_0^1 \!\! \int_0^{1-u} P_\tau(u, v) \cdot (E_{\tau,1} \times E_{\tau,2})\, dv\, du. \tag{16}$$

Since the dot product of any point $P_\tau$ in a face $\tau$ with a vector $N_\tau$ normal to $\tau$ is constant, the product inside the integral of Eq. (16) is constant. Subsequently, $P_\tau(u, v)$ can be replaced with any point, such as $P_{\tau,0}$. Finally, the required volume is obtained as

$$V = \frac{1}{6} \sum_{\tau \in \mathcal{T}} P_{\tau,0} \cdot (E_{\tau,1} \times E_{\tau,2}). \tag{17}$$
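The final volume expression can be checked numerically; a small sketch (the helper name and the tetrahedron test case are ours, chosen because its volume is known to be 1/6):

```python
import numpy as np

def polyhedron_volume(triangles):
    """Volume via the divergence theorem, Eq. (17):
    V = (1/6) * sum over triangles of P0 . (E1 x E2),
    where `triangles` is a (T, 3, 3) array of triangle vertices whose
    edge cross products point outward from the solid."""
    p0 = triangles[:, 0]
    e1 = triangles[:, 1] - triangles[:, 0]
    e2 = triangles[:, 2] - triangles[:, 0]
    return np.einsum('ij,ij->', p0, np.cross(e1, e2)) / 6.0

# outward-oriented faces of the tetrahedron with vertices O, A, B, C
O, A, B, C = [0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]
tris = np.array([[O, B, A], [O, A, C], [O, C, B], [A, B, C]], dtype=np.float64)
```

The same routine applies unchanged to the 12 triangles of a pyramidal frustum, or to any closed triangulated surface with consistent outward orientation.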

Integration over the PE Function
Following from earlier, we can obtain the numerator of the IPE in Eq. (6) using the divergence theorem. We base our analysis on the sine function and the $x$ coordinate, i.e., $\gamma(x) = \sin(2^l x)$. Substituting $\mathbf{F} = \left[-\frac{1}{2^l}\cos(2^l x), 0, 0\right]^\top$ in Eq. (10) we obtain

$$\iiint_V \sin(2^l x)\, dV = -\frac{1}{2^l} \oiint_S \cos(2^l x)\, \hat{\imath} \cdot \hat{\mathbf{n}}\, dS, \tag{18}$$

where $\hat{\imath}$ is the unit vector in the $x$ direction. Following the same strategy of dividing the surface into triangular faces as in the earlier volume calculation, Eq. (18) can be written as

$$-\frac{1}{2^l} \sum_{\tau \in \mathcal{T}} \left[(E_{\tau,1} \times E_{\tau,2}) \cdot \hat{\imath}\right] I_\tau, \qquad I_\tau = \int_0^1 \!\! \int_0^{1-u} \cos\!\left(2^l x_\tau(u, v)\right) dv\, du. \tag{19, 20}$$

From Eq. (13), the $x$ coordinate can be parameterized as

$$x_\tau(u, v) = x_{\tau,0} + u\,(x_{\tau,1} - x_{\tau,0}) + v\,(x_{\tau,2} - x_{\tau,0}). \tag{21}$$

Substituting Eq. (21) in Eq. (20) and solving the integral, we obtain

$$I_\tau = \frac{\cos(2^l x_{\tau,2})(x_{\tau,1} - x_{\tau,0}) + \cos(2^l x_{\tau,1})(x_{\tau,0} - x_{\tau,2}) + \cos(2^l x_{\tau,0})(x_{\tau,2} - x_{\tau,1})}{2^{2l}\,(x_{\tau,1} - x_{\tau,0})(x_{\tau,2} - x_{\tau,0})(x_{\tau,1} - x_{\tau,2})}. \tag{22}$$

Furthermore, Eq. (22) can be written compactly in terms of the vertex coordinates and element-wise powers (Eq. (23)); we denote the resulting per-triangle sine term $\sigma_{x,\tau}$.

In general, we can obtain the expression in Eq. (19) for the $k$-th coordinate of $\mathbf{x}$ as $\sigma_{k,\tau}$ (Eq. (24)), where $X_\tau = [P_{\tau,0}\; P_{\tau,1}\; P_{\tau,2}]$ and $\mathbf{e}_k$ are the vectors that form the canonical basis in $\mathbb{R}^3$. Similarly, the integral over the cosine function yields analogous terms $\xi_{k,\tau}$ (Eqs. (25)-(27)). Finally, we obtain the exact IPE (EIPE) of the frustum used by our Exact-NeRF approach by dividing Eqs. (24) and (26) by the volume in Eq. (17):

$$\gamma_E(F) = \frac{1}{V} \left[\sum_{\tau \in \mathcal{T}} \boldsymbol{\sigma}_\tau,\; \sum_{\tau \in \mathcal{T}} \boldsymbol{\xi}_\tau\right], \tag{28}$$

where $\boldsymbol{\sigma}_\tau = [\sigma_{1,\tau}\; \sigma_{2,\tau}\; \sigma_{3,\tau}]^\top$ and $\boldsymbol{\xi}_\tau = [\xi_{1,\tau}\; \xi_{2,\tau}\; \xi_{3,\tau}]^\top$. It is worth mentioning that Eq. (28) is indeterminate when a coordinate value repeats across the vertices of a triangle (i.e., there is a triangle $\tau$ such that $x_{\tau,i} = x_{\tau,j}$ for some $i \neq j$). For these cases, l'Hopital's rule can be used to evaluate the limit (see Supplementary Material).
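The solved per-triangle integral can be sanity-checked against brute-force quadrature. The closed form below is one we re-derived from the parameterization in Eq. (21); it should agree with the paper's Eq. (22) up to rearrangement, so treat the exact arrangement as our own:

```python
import numpy as np

def tri_cos_integral(x0, x1, x2, l):
    """Closed form of I = ∫_0^1 ∫_0^{1-u} cos(2^l x(u,v)) dv du with
    x(u,v) = x0 + u(x1 - x0) + v(x2 - x0), for distinct coordinates."""
    s = 2.0 ** l
    num = (np.cos(s * x2) * (x1 - x0)
           + np.cos(s * x1) * (x0 - x2)
           + np.cos(s * x0) * (x2 - x1))
    den = (x1 - x0) * (x2 - x0) * (x1 - x2)
    return num / (s * s * den)

def tri_cos_integral_quad(x0, x1, x2, l, n=400):
    """Midpoint-rule estimate over the triangular (u, v) domain."""
    s, h, total = 2.0 ** l, 1.0 / n, 0.0
    for i in range(n):
        u = (i + 0.5) * h
        for j in range(n):
            v = (j + 0.5) * h
            if u + v < 1.0:
                total += np.cos(s * (x0 + u * (x1 - x0) + v * (x2 - x0)))
    return total * h * h
```

The $2^{-2l}$ factor in the denominator, combined with the $2^{-l}$ prefactor of Eq. (19), is where the $2^{-3l}$ decay discussed below comes from.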
Despite starting our analysis with square pyramids, it can be noted that Eq. (28) holds for any set of vertices $\mathcal{V}$, meaning that this parameterization can be applied to any shape with known vertices. This is particularly useful for scenarios where the space may be deformed and frustums may not be perfect pyramids, such as in mip-NeRF 360 [2]. Additionally, it can be noted that our EIPE carries a factor of $2^{-3l}$, meaning that as $l \rightarrow \infty$ then $\gamma_E \rightarrow 0$, which makes our implementation robust to large values of $L$. This property of our Exact-NeRF formulation is consistent with that of the original mip-NeRF [1].

Implementation Details
Exact-NeRF is implemented using the original code of mip-NeRF, which is based on JAXNeRF [4]. Apart from the change of positional encoding, no further modification is made. We use the same sampling strategy of ray intervals defined in Eq. (3), but sample $N + 1$ points to define $N$ intervals. In order to obtain the vertices of the pyramidal frustums, we use the coordinates of the corners of each pixel and multiply them by the $t_i$ values to obtain the front and back faces of the frustums. Double precision (64-bit float) is used for calculating the EIPE itself, as it relies upon arithmetic over very small quantities that are otherwise prone to numerical precision error (see Eq. (22)). After calculation, the EIPE result is transformed back to single precision (32-bit float).
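A sketch of the vertex construction described above; the camera-convention details (axis signs, principal point at the image centre) are our assumptions, not the paper's exact code:

```python
import numpy as np

def pyramid_frustum_vertices(o, R, px, py, width, height, focal, t0, t1):
    """8 vertices of the pyramidal frustum through pixel (px, py):
    the four pixel-corner directions, scaled by the interval bounds
    t0 and t1, give the front and back faces."""
    o = np.asarray(o, dtype=np.float64)
    dirs = []
    for dy in (0.0, 1.0):
        for dx in (0.0, 1.0):
            # direction through a pixel corner (OpenGL-style camera:
            # x right, y up, looking down -z -- an assumed convention)
            d = np.array([(px + dx - 0.5 * width) / focal,
                          -(py + dy - 0.5 * height) / focal,
                          -1.0])
            dirs.append(R @ d)
    dirs = np.stack(dirs)                   # (4, 3)
    return np.concatenate([o + t0 * dirs,   # front face
                           o + t1 * dirs])  # back face -> (8, 3)
```

Because the corner directions are shared between consecutive intervals, adjacent frustums along a ray tile the pyramid exactly, with no gaps or overlaps.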
We compare our implementation of Exact-NeRF against the original mip-NeRF baseline on the benchmark Blender dataset [23], down-sampled by a factor of 2. We follow a similar training strategy to mip-NeRF: training both models for 800k iterations (instead of 1 million, as we observed convergence at this point) with a batch size of 4096 using Adam optimization [15] with a logarithmically annealed learning rate, $5 \times 10^{-4} \rightarrow 5 \times 10^{-6}$. All training is carried out using 2 × NVIDIA Tesla V100 GPUs per scene.
Additionally, we compare the use of the EIPE against mip-NeRF 360 on the dataset of Barron et al. [2]. Similarly, we use the reference code from MultiNeRF [24], which contains an implementation of mip-NeRF 360 [2], RefNeRF [33] and RawNeRF [22]. Pyramidal frustum vertices are contracted using Eq. (9) and the EIPE is obtained using Eq. (28) with the mapped vertices. We train using a batch size of 8192 for 500k iterations using 4 × NVIDIA Tesla V100 GPUs per scene. Aside from the use of the EIPE, all other settings remain unchanged from mip-NeRF 360 [2].

Results
Mean PSNR, SSIM and LPIPS [40] metrics are reported for our Exact-NeRF approach, mip-NeRF [1] and mip-NeRF 360 [2]. Additionally, we also report the DISTS [7] metric since it provides another perceptual quality measurement. Similar to mip-NeRF, we also report an average metric: the geometric mean of MSE = $10^{-\text{PSNR}/10}$, $\sqrt{1 - \text{SSIM}}$, LPIPS and DISTS.

Blender dataset: In Tab. 1 we present a quantitative comparison between Exact-NeRF and mip-NeRF. It can be observed that our method matches the reconstruction performance of mip-NeRF, with a marginal decrease in PSNR and SSIM and an improvement in the LPIPS and DISTS metrics, but with identical average performance. This small decrease in the PSNR and SSIM metrics can be explained by the loss of precision in the calculation of the small quantities involved in the EIPE. Alternative formulations using the same idea could be used (see Supplementary Material), but the intention of Exact-NeRF is to create a general approach for any volumetric positional encoding using the vertices of the volumetric region. Fig. 4 shows a qualitative comparison between mip-NeRF and Exact-NeRF. It can be observed that Exact-NeRF is able to match the reconstruction performance of mip-NeRF. A closer examination reveals that Exact-NeRF creates sharper reconstructions in some regions, such as the hole in the bass drum or the leaves in the ficus, which is explained by mip-NeRF approximating the conical frustums as Gaussians. This is consistent with the improvement in LPIPS and DISTS, which are the perceptual similarity metrics.

Mip-NeRF 360 dataset: Tab. 2 shows the results for the unbounded mip-NeRF 360 dataset. Despite Exact-NeRF having marginally weaker reconstruction metrics, it shows competitive performance without any changes to the implementation of the EIPE used earlier with the bounded blender dataset, i.e., the contracted vertices were used directly, without the further simplification or linearization required in mip-NeRF 360 [2]. Similar to the blender dataset results, this decrease can be explained by the loss of precision, which suggests that an alternative implementation of Eq. (28) may be needed. A qualitative comparison is shown in Fig. 5. It can be observed that tiny vessels are more problematic for Exact-NeRF (Fig. 5a), which can again be explained by the loss of precision. However, it is noted in Fig. 5b that the reconstruction of far regions in mip-NeRF 360 is noisier than in Exact-NeRF (see the grill and the car), which is a consequence of the poor approximation of the Gaussian region for far depth of field objects in the scene. Fig. 5c reveals another example of a clearer region in the Exact-NeRF reconstruction for the background detail. Fig. 6 shows snapshots of the depth estimation for the bicycle, bonsai and garden scenes. Consistent with the colour reconstructions, some background regions have a more detailed estimation. It is also noticed (not shown) that despite Exact-NeRF having a smoother depth estimation, it may show some artifacts in the form of straight lines, which may be caused by the shape of the pyramidal frustums. It is worth noting that our implementation of the EIPE in mip-NeRF 360 is identical to the EIPE in mip-NeRF.
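For reproducibility, the summary "average" metric used above reduces to a four-term geometric mean; a minimal sketch:

```python
import math

def average_metric(psnr, ssim, lpips, dists):
    """Geometric mean of MSE = 10^(-PSNR/10), sqrt(1 - SSIM),
    LPIPS and DISTS (lower is better for all four terms)."""
    terms = [10.0 ** (-psnr / 10.0), math.sqrt(1.0 - ssim), lpips, dists]
    product = 1.0
    for t in terms:
        product *= t
    return product ** (1.0 / len(terms))
```

Converting PSNR and SSIM into lower-is-better quantities first makes the geometric mean a single scalar in which all four metrics improve in the same direction.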

Impact of Numerical Underflow
As seen in Sec. 3, Exact-NeRF may suffer from numerical underflow when the difference of a component of two points $\Delta = x_{\tau,i} - x_{\tau,j}$ is too close to zero ($\Delta \rightarrow 0$). In the case of this difference being precisely zero, the limit can be found using l'Hopital's rule, as further developed in Appendix A.1. However, if this value is not zero but approximately zero, numerical underflow can lead to exploding values in Eq. (22). This error hinders the training of the MLP since the IPE is bounded to the interval $[-1, 1]$ by definition (Eq. (6)). An example of the effect of numerical underflow in our method applied under the mip-NeRF 360 framework is shown in Fig. 7. The black lines are the locations of such instances where underflow occurs. The curvature of these lines is a direct consequence of the contracted space used in mip-NeRF 360. In order to eliminate this effect, we use double precision for the calculation of the EIPE. Additionally, all differences of a coordinate which are less than $1 \times 10^{-6}$ are set to zero and the corresponding terms reformulated using l'Hopital's rule.
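The thresholding scheme can be sketched as follows, using the per-triangle integral in the form we re-derived for Eq. (22). For brevity only the fully degenerate branch ($x_0 = x_1 = x_2$) is shown; the mixed cases need their own l'Hopital limits, as in Appendix A.1:

```python
import numpy as np

EPS = 1e-6  # coordinate-difference threshold from Sec. 5

def tri_cos_integral_safe(x0, x1, x2, l):
    """Underflow-aware evaluation of the per-triangle cosine integral:
    snap near-zero coordinate differences to zero and use the limit
    value instead of the numerically unstable closed form."""
    s = 2.0 ** l
    d01, d02, d12 = x1 - x0, x2 - x0, x1 - x2
    if abs(d01) < EPS and abs(d02) < EPS:
        # all coordinates coincide: the integrand is constant and the
        # triangular (u, v) domain has area 1/2
        return 0.5 * np.cos(s * x0)
    num = np.cos(s * x2) * d01 - np.cos(s * x1) * d02 - np.cos(s * x0) * d12
    den = s * s * d01 * d02 * d12
    return num / den
```

The limit branch is continuous with the closed form: as the three coordinates approach each other, the general expression converges to the constant-integrand value.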

Conclusion
In this work, we present Exact-NeRF, a novel precise volumetric parameterization for neural radiance fields (NeRF).
In contrast to the conical frustum approximation via a multivariate Gaussian in mip-NeRF [1], Exact-NeRF uses a novel pyramidal parameterization to encode 3D regions using an Exact Integrated Positional Encoding (EIPE). The EIPE applies the divergence theorem to compute the exact value of the positional encoding (an array of sines and cosines) over a pyramidal frustum using the coordinates of the vertices that define the region. Our proposed EIPE methodology can be applied to any such architecture that performs volumetric positional encoding from simple knowledge of the pyramidal frustum vertices, without the need for further processing.

Figure 6. Depth estimation for mip-NeRF 360 and Exact-NeRF. Our approach shows better depth estimations for background regions (highlighted in the black boxes), although some artifacts in the form of straight lines may appear, which is inherent in our pyramidal shapes.
We compare Exact-NeRF against mip-NeRF on the blender dataset, showing matching performance with a marginal decrease in PSNR and SSIM but an overall improvement in the perceptual metric, LPIPS. Qualitatively, our approach exhibits slightly cleaner and sharper reconstructions of edges than mip-NeRF [1].
We similarly compare Exact-NeRF with mip-NeRF 360 [2]. Despite Exact-NeRF showing a marginal decrease in performance metrics, it illustrates the capability of the EIPE on a different architecture without further modification. Exact-NeRF obtains sharper renderings of distant (far depth of field) regions and areas where mip-NeRF 360 presents some noise, but it fails to reconstruct tiny vessels in near regions. The qualitative depth estimation maps also confirm these results. The marginal decrease in performance of our Exact-NeRF method can be attributed to numerical underflow and some artifacts caused by the choice of a square pyramidal parameterization. In addition, our results suggest using a combined encoding such that the EIPE is used for distant objects, where it is more stable and accurate. Although alternative solutions can be obtained by restricting the analysis to rectangular pyramids, our aim is to introduce a general framework that can be applied to any representation of a 3D region with known vertices. The investigation of more stable representations and of the performance of different shapes for modelling 3D regions under a neural rendering context remains an area for future work.

A. Additional Formulation
In this section, we present some additional formulations used in our model that do not affect the results presented in the paper. Notwithstanding that our proposed EIPE works for any polyhedron (which is the reason why it can be used under the mip-NeRF 360 [2] architecture without further treatment), we also present an alternative formulation of the EIPE for the particular case of strictly square pyramids.
From Eq. (29) we observe that an indeterminate form occurs when two points in the triangle $\tau$ share the same coordinate, i.e., $x_{\tau,i} = x_{\tau,j}$, $i \neq j$. In order to obtain a valid value for these cases, we take the limit as the two coordinates approach each other, rewriting Eq. (29) accordingly (Eq. (33)). Then, we obtain the value for the case $x_{\tau,0} = x_{\tau,1}$ using l'Hopital's rule (Eq. (34)). Similarly, from Eq. (33), we evaluate the case $x_{\tau,0} = x_{\tau,2}$ (Eq. (35)). For the case $x_{\tau,1} = x_{\tau,2}$, we differentiate with respect to $x_{\tau,1}$ to obtain the corresponding value (Eq. (38)). Finally, when $x_{\tau,0} = x_{\tau,1} = x_{\tau,2}$, we again use l'Hopital's rule on Eq. (33) and differentiate once more with respect to $x_{\tau,0}$. Using the same approach, we can find the corresponding expressions for $\xi_{x,\tau}$ (Eq. (27)). Similar expressions can be obtained for the $y$ and $z$ coordinates.

A.2. Alternative EIPE for Square Pyramids
As mentioned earlier, our EIPE in Eq. (28) can be used for any shape whose vertices are known. However, the computational cost increases if the 3D shape is complex, since a larger number of triangular faces must be processed. For more efficient methods, we can focus our analysis on specific shapes. Particular to our scenario, we can obtain an alternative EIPE exclusively for a square pyramid (note that this will not be the case for the contraction function in mip-NeRF 360) with a known camera pose $[\mathbf{R}|\mathbf{o}]$ and pixel width $\omega$ (similar to $\dot{r}$ in mip-NeRF). From Fig. 8, we calculate the volume of the frustum as a triple integral $\iiint dx\, dy\, dz$ over the frustum (Eq. (44)). The numerator in Eq. (6) for the $x$ coordinate can be obtained in the same way, as $\iiint \sin(2^l x)\, dx\, dy\, dz$ (Eq. (46)). Since the camera pose is known, we can express $x$ in terms of the elements $r_{ij}$ of the rotation matrix $\mathbf{R}$ and the first element $o_1$ of $\mathbf{o}$ (Eq. (47)). Substituting Eq. (47) in Eq. (46) (and omitting the integration limits for clarity) yields the integral in Eq. (48), whose solution is given in Eq. (49). Similarly to the EIPE in Eq. (28), an indeterminate value arises in Eq. (49) for $r_{11} = 0$ and $r_{12} = 0$. For these cases, l'Hopital's rule can be used as in Appendix A.1, or Eq. (48) can be solved by substituting $r_{11} = 0$ and $r_{12} = 0$. We omit these calculations for brevity.

B. Numerical Analysis between IPE and EIPE
We compare the exact value of the EIPE with the approximation in Eq. (7) used by mip-NeRF [1]. In Fig. 9a we contrast the value of the EIPE vs the IPE for frustums of length $\delta_i = 0.02$ at different positions along the ray $\mathbf{d}$ and at different positional encoding frequencies $L$. The values of $\mathbf{d}$, $\mathbf{o}$ and $\mathbf{R}$ correspond to a random pixel of a random image of the blender dataset. It can be seen that the approximation is precise for frustums that are near the camera (small $\mu_t$) but degrades the further the frustum is from it. It is also observed that this effect grows faster for larger values of $L$. This trend is more noticeable in the plot of the error between the EIPE and IPE (Fig. 9b), where the magnitude of the error is a periodic function approximately bounded by two lines whose slope appears to grow proportionally with $L$. Furthermore, it is observed that the frequency of the error is also proportional to $L$. Figs. 9c and 9d show a similar analysis for small values of $\mu_t$ and $\delta_i = 5 \times 10^{-4}$, which correspond to small frustums. In these instances, it is observed that numerical errors occur, which is consistent with the analysis of the impact of numerical underflow in Sec. 5. A similar analysis for a fixed value of $\mu_t = 3$ and varying $\delta_i$ is shown in Figs. 9e and 9f. Here, a more drastic error is seen when $\delta_i$ increases, which is consistent with the observation made in [2] that the IPE does not approximate well for very elongated Gaussians. Additionally, rapid changes in the IPE are observed for small variations in the length of the frustum (see Fig. 9e, IPE $L = 3$ and IPE $L = 4$), which might not be desired. On the other hand, our EIPE is more robust to these elongations, meaning that it could be a more reliable parameterization for distant objects.
Despite the increasing error in the approximation of the IPE for larger values of $L$, this effect is mitigated by the nature of the IPE itself, which gives more importance to the components of the positional encoding with smaller frequencies. However, in scenarios with distant backgrounds where more elongated frustums arise, such as in the bicycle scene, Exact-NeRF appears to perform better (Sec. 5). Given that the scenes in the blender and mip-NeRF 360 datasets are composed of a single central object, it is difficult to evaluate the performance of the IPE and EIPE formulations for distant objects or scenarios with several objects.

C. Additional Results on the Blender Dataset
We present further qualitative comparisons between different scenes of the blender dataset in Fig. 10. The reconstructions of mip-NeRF and Exact-NeRF are almost identical, but a few differences can be noted: e.g., the apron of the chair and the holes in the lego scene are slightly sharper in our reconstruction; the details in the cymbals of the drums are more similar to the ground truth; and the reconstruction of the water in the ship scene is more accurate with our method. Beyond these minimal differences, our exploratory work demonstrates that analytical solutions to a volumetric positional encoding exist if the shape of the frustum is changed.

D. Limitation of Existing Metrics
Following the approach of previous NeRF research, we report PSNR, SSIM and LPIPS as our evaluation metrics. PSNR and SSIM are two of the earliest evaluation metrics for image reconstruction. Traditionally, PSNR (based on the MSE metric) has been used to assess the quality of lossy compression algorithms. Since PSNR is obtained via the pixel-wise absolute error, it cannot measure the structural and/or perceptual similarity between the reconstructed and reference images. SSIM was proposed as an alternative metric since it quantifies the relation between pixels and their neighbourhood (i.e., the structural information). Several works have focused on the weaknesses of these metrics [16,25,30,34,35], where the main criticism is that images subject to different compression artifacts and distortion effects (such as additive Gaussian blurring) exhibit similar PSNR and SSIM values. Additional work [12] has shown analytical and experimental relations between both metrics, meaning that they are not independent. In order to overcome these effects, recent image quality assessment methods have been proposed. Ding et al. [8] have carried out a comprehensive comparison between different metrics, in which deep neural network-based metrics such as LPIPS [40] and DISTS [7] were shown to be the most reliable quality metrics for perceptual similarity. These metrics compare two images by measuring the distance between their feature maps from a pretrained neural network. These results motivated us to include the DISTS metric in our experiments (Tabs. 1 and 2). Our method obtains better performance in the LPIPS and DISTS metrics, thus improving perceptual quality.

Figure 2 .
Figure 2. Cone and pyramid tracing for volumetric NeRF parameterizations. (a) Mip-NeRF [1] uses cone frustums to parameterize a 3D region. Since the IPE of these frustums does not have a closed-form solution, it is approximated by modelling the frustum as a multivariate Gaussian. (b) Exact-NeRF casts a square pyramid instead of a cone, allowing for an exact parameterization of the IPE by using the vertices $v_i$ of the frustum and the pose parameters $\mathbf{o}$ and $\mathbf{R}$.

Figure 3 .
Figure 3. Parameterization of triangular faces. The vertices are sorted counter-clockwise, so the normal vector to their plane points outside the frustum.

Figure 4 .
Figure 4. Qualitative comparison between mip-NeRF and Exact-NeRF (ours) for the blender dataset. Our method matches the mip-NeRF rendering capability but also produces slightly sharper renderings (see the bass drum hole and the back leaves of the ficus).

Figure 8 .
Figure 8. Parameterization of the square pyramid using the pixel width ω.

Figure 9 .
Figure 9. Numerical comparison between the IPE and our EIPE. (a) EIPE vs IPE for different values of $\mu_t$ and (b) their difference. (c) EIPE vs IPE with respect to the length of the frustum $\delta_i$ and (d) their difference.

Figure 10 .
Figure 10. Additional results of Exact-NeRF for the blender dataset.