The idea exploits the fact that the implicit blobby surface should look topologically very similar to the list of spheres that currently approximates the surface in my fluid dynamics algorithm. Specifically, only very local spheres should have an impact on the implicit surface at any point: I would like the implicit surface to look like the spheres, except with the blobby metaball effect between very nearby ones. With that in mind, I came up with an algorithm that should allow for a much finer subdivision of my grid without sampling every point. Rather than a cube-marching algorithm, it ends up being more of a cube-finding method, in that it only samples voxel cubes that are near the implicit surface. (A rough sketch in code follows the steps and footnote below.)
for-each particle
1. Get a loose boundary B of the sphere (e.g. a bounding box extended such that its width is 2*(the desired radius))
2. Divide B into x segments along each dimension and "project" (i.e. round down) each sample point to the nearest grid cell**
-Each cell can be defined by a unique integer id based on position
3. Add each of these cells to a set of integers s (this prevents duplicates)
---------------------------------------
in cube-marching: for each cell in s
1. sample that cell as in cube-marching
**Get the nearest grid cell as follows: e.g. in the 1-D case, if the grid cell size is .01, a segment at .016 would round down to .01. Likewise, .026 would round down to .02. Note also that the grid cell size should be set equal to bounding_width/x.
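To make the setup pass concrete, here is a minimal C++ sketch of how I picture steps 1-3. The names (Particle, cellId, collectCells), the loop over x+1 sample points per axis, and the assumption that cell indices are non-negative are all illustrative, not code from my actual project.

#include <cmath>
#include <set>
#include <vector>

struct Particle { float x, y, z; };

// Unique integer id for a grid cell from its integer coordinates.
// Assumes cell indices stay non-negative and below 2^21 per axis.
long long cellId(long long ix, long long iy, long long iz) {
    return (ix << 42) | (iy << 21) | iz;
}

// Steps 1-3: for each particle, walk a lattice over its loose bounding box
// (width 2*radius) and round each sample point down to its grid cell.
std::set<long long> collectCells(const std::vector<Particle>& particles,
                                 float radius, int x) {
    const float cellSize = 2.0f * radius / x;   // footnote: bounding_width / x
    std::set<long long> s;                      // the set collapses duplicates
    for (const Particle& p : particles) {
        for (int i = 0; i <= x; ++i)
            for (int j = 0; j <= x; ++j)
                for (int k = 0; k <= x; ++k) {
                    float px = p.x - radius + i * cellSize;
                    float py = p.y - radius + j * cellSize;
                    float pz = p.z - radius + k * cellSize;
                    s.insert(cellId((long long)std::floor(px / cellSize),
                                    (long long)std::floor(py / cellSize),
                                    (long long)std::floor(pz / cellSize)));
                }
    }
    return s;   // cube-marching then samples only the cells in s
}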
I can also reduce the time it takes to sample a point by using spatial hashing, on the heuristic that far-away spheres should not have an influence on the surface. To do this, I simply need to modify my spatial hashing grid so that it can take an arbitrary position and return all nearby spheres. Sampling can then be rewritten as follows (a fuller sketch in code appears after the pseudocode):
sample(point a) {
    get list l of all spheres near a from the spatial hashing grid
    compute the implicit surface at this point using only the spheres in l
}
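Here is a rough C++ sketch of the modified spatial hashing grid plus the sample function above. Again, the class layout, the query over the 3x3x3 neighborhood of cells, and the simple (1 - d^2/R^2)^3 falloff are placeholder choices, not necessarily what my actual implementation will use.

#include <cmath>
#include <unordered_map>
#include <vector>

struct Vec3 { float x, y, z; };
struct Sphere { Vec3 c; float r; };

// Simple blobby falloff: 1 at the sphere center, 0 at distance R and beyond.
static float kernel(float d2, float R) {
    if (d2 >= R * R) return 0.0f;
    float t = 1.0f - d2 / (R * R);
    return t * t * t;
}

class SpatialHashGrid {
public:
    explicit SpatialHashGrid(float cellSize) : cellSize_(cellSize) {}

    void insert(const Sphere& s) { cells_[key(s.c)].push_back(s); }

    // Return all spheres stored in the 3x3x3 block of cells around p.
    // With cellSize_ >= the influence radius, this catches every sphere
    // that can affect the field at p.
    std::vector<Sphere> nearbySpheres(const Vec3& p) const {
        std::vector<Sphere> out;
        int ix = cellIndex(p.x), iy = cellIndex(p.y), iz = cellIndex(p.z);
        for (int dx = -1; dx <= 1; ++dx)
            for (int dy = -1; dy <= 1; ++dy)
                for (int dz = -1; dz <= 1; ++dz) {
                    auto it = cells_.find(pack(ix + dx, iy + dy, iz + dz));
                    if (it != cells_.end())
                        out.insert(out.end(), it->second.begin(), it->second.end());
                }
        return out;
    }

private:
    int cellIndex(float v) const { return (int)std::floor(v / cellSize_); }
    long long key(const Vec3& p) const {
        return pack(cellIndex(p.x), cellIndex(p.y), cellIndex(p.z));
    }
    static long long pack(int ix, int iy, int iz) {
        // crude integer packing; assumes indices stay within +/- 2^20
        return ((long long)(ix + (1 << 20)) << 42) |
               ((long long)(iy + (1 << 20)) << 21) |
               (long long)(iz + (1 << 20));
    }
    float cellSize_;
    std::unordered_map<long long, std::vector<Sphere>> cells_;
};

// sample(point a): evaluate the implicit surface using only nearby spheres.
float sample(const Vec3& a, const SpatialHashGrid& grid, float influenceRadius) {
    float f = 0.0f;
    for (const Sphere& s : grid.nearbySpheres(a)) {
        float dx = a.x - s.c.x, dy = a.y - s.c.y, dz = a.z - s.c.z;
        f += kernel(dx * dx + dy * dy + dz * dz, influenceRadius);
    }
    return f;  // the cube-marching step compares this against the iso-level
}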
Analysis:
Since we are looping through each of n particles, and for each particle dividing into x segments along each dimension, we get a total of n*x^3 cells to try to add to the set. In the worst case, adding to the set takes O(lg(n*x^3)) time, so for the first setup part of the algorithm we have an upper bound of O(n*x^3 * lg(n*x^3)) = O(n*x^3 * (3 lg x + lg n)).
For the sampling part, assuming spatial hashing reduces the evaluation of the surface at a point to O(n_h), where n_h is the number of nearby spheres, and since we are sampling up to 8*n*x^3 grid points (eight corners per cell), the running time is upper-bounded by O(8*n*n_h*x^3). Since x defines the number of subdivisions per sphere and will be some small constant, such as 10, and n_h should dominate the lg n term, the entire algorithm is upper-bounded by O(n*n_h), the same as the dynamics part of the algorithm. In practice, the hidden constant, on the order of 1000-10000, will still slow things down noticeably; it can be reduced somewhat by lowering x. Even so, the overall running time should be significantly faster, since the algorithm no longer evaluates the surface at each of the X^3 grid cells, where X is the number of subdivisions per dimension over the entire grid space.
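To give a rough (made-up) sense of scale: with x = 10 and n = 1,000 particles, the setup pass considers at most n*x^3 = 1,000,000 candidate cells, and fewer in practice once the set collapses overlapping boxes, whereas marching a full grid with, say, X = 200 subdivisions per dimension would mean sampling all X^3 = 8,000,000 cells no matter where the fluid actually is.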
______
So for now, I am working on putting this idea into practice and seeing if it will significantly speed up my algorithm and allow for a much finer grid resolution. I am also working on texturing so that I can add environment mapping. On Joe's recommendation I am using GLSL to do the environment mapping, which looks like it should be a really nice, simple implementation. In the meantime, any thoughts/comments on the feasibility of the above algorithm would be much appreciated.