So I have finished putting in environment mapping - I might tweak the shader a bit more, as right now I am doing 100% reflectivity whereas elemental mercury should be around 70%. However, I think the texture already goes a long way in helping to sell the mercury effect.
Here's a quick vid of the classic glut teapot rendered with environment cube mapping
I have also gotten the improved algorithm to a more stable point. Unfortunately, it's still not real time for 100 particles, but I was at least able to visualize it at a sluggish 0.1 fps for 100 particles and in real time for 10. Optimizing my spatial hashing code should further speed up the algorithm. I am also going to try a quick change to the data structure storing the grid points to sample, which should also result in a speedup: using a list instead of a set, creating it in linear time, sorting it in O(n log n), and then iterating through and copying into a new list to get rid of duplicates.
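The data-structure change described above can be sketched roughly like this - a minimal illustration of swapping std::set for a sorted, deduplicated std::vector; the function and variable names here are mine, not the project's actual code:

```cpp
#include <algorithm>
#include <vector>

// Sketch of the proposed change: instead of inserting cell ids into a
// std::set (a log-time tree insert per id), build a plain vector in
// linear time, sort it once, then copy into a new list skipping
// duplicates. Same asymptotic bound, but much friendlier constants.
std::vector<int> uniqueCells(const std::vector<int>& cellIds) {
    std::vector<int> cells(cellIds);          // O(n) copy
    std::sort(cells.begin(), cells.end());    // O(n log n) sort
    std::vector<int> result;                  // duplicates removed below
    for (std::size_t i = 0; i < cells.size(); ++i) {
        if (i == 0 || cells[i] != cells[i - 1])
            result.push_back(cells[i]);
    }
    return result;
}
```

The win in practice is avoiding per-insert tree rebalancing and node allocations, which is often a large constant-factor speedup even though the big-O is unchanged.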
Here's two videos demonstrating the real time 10-particle mercury and the 100-particle mercury sped up to real time.
Anyway, as you can see from the video, the more refined implicit surface reconstruction combined with the environment mapping goes a long way in selling the material as mercury. Also, the volume introduced by the surface helps add to the surface-tension-like blobbiness of the mercury by giving additional depth to the material.
Some sources that were a huge help in understanding environment mapping and actually getting GLSL shaders to work:
http://elf-stone.com/glee.php
- GLee, for getting OpenGL extensions to work
http://www.lonesock.net/soil.html
- SOIL, for loading in cube maps
http://tfc.duke.free.fr/
- a great environment mapping demo which was a huge help as a base for my code
http://www.lighthouse3d.com/tutorials/glsl-tutorial/?intro
- a good intro to understanding GLSL
http://nehe.gamedev.net/data/articles/article.asp?article=21
- another thorough article on understanding GLSL in general
http://www.ozone3d.net/tutorials/glsl_texturing_p04.php
- a good environment cube mapping tutorial for GLSL
http://3dshaders.com/home/index.php?option=com_weblinks&catid=14&Itemid=34
- lots of GLSL resources and shader code useful as a reference
http://www.cse.ohio-state.edu/~hwshen/781/Site/Lab4.html
- another helpful tutorial on setting up environment cube mapping in OpenGL/GLSL
http://www.codemonsters.de/home/content.php?show=cubemaps
- an awesome resource for images to use in creating cube maps; currently I am using one of the ones from NVIDIA.
This will likely be my last post before the presentation and poster session on Thursday; with Passover just ending, things have been really crazy, and I'm still working on finishing up the poster and video and making some final tweaks before the presentation. As always, any comments or feedback are much appreciated!
Jeremy Cytryn Senior Design Project
Real-time interaction with high surface tension liquids
Wednesday, April 27, 2011
Thursday, April 21, 2011
Texturing and Environment Mapping
As progress has been somewhat limited due to Passover and my internet connection currently being rather slow, and since I will undoubtedly be posting again later this week, this will be one of the shorter blog posts - at least until I can update with pretty pictures of my progress. I have made a lot of headway on implementing my plan for massively culling the search space for the marching cubes algorithm, but am still bottlenecking on the spatial hashing code and on using std::set and the corresponding memory management effectively. I am currently looking again at the grid implementation in this open source SPH implementation for ideas on optimizing the spatial hashing.
Anyway, for now I have shouldered that part and started on the texturing and environment mapping. Unlike the above, the implementation for this is relatively straightforward; I think texturing is somewhat working now, but I still need to tweak the OpenGL lighting/materials to get more shininess on the liquid. I've been familiarizing myself with GLSL for the cube mapping and it's also proving pretty manageable to use, so I'm also hoping to get that completed in the very near future. That's all for now; hopefully more to come very soon.
Thursday, April 14, 2011
Efficiency Idea/Algorithm
Since my current efficiency bottleneck right now is in the cube marching step, I've been thinking about a way to get my algorithm to only sample grid cells likely to be at the border of the implicit surface. So after some serious brainstorming, I thought of a simple idea that should make the cube marching algorithm run significantly faster for me.
The idea exploits the fact that the implicit blobby surface should look topologically very similar to the list of spheres that currently approximates the surface in my fluid dynamics algorithm. Specifically, only very local spheres should have an impact on the implicit surface at any point: I would like the implicit surface to look like the spheres, except with the blobby metaball effect between very nearby ones. With that in mind, I came up with an algorithm that should allow for a much finer subdivision of my grid without sampling every point. Rather than a cube marching algorithm, it ends up being more of a cube finding method, in that it only samples voxel cubes that are near the implicit surface.
for-each particle:
1. Get a loose boundary B of the sphere (e.g. a bounding box extended such that its width is 2*(the desired radius))
2. Divide B into x segments and "project" (e.g. round down) to the nearest grid cell**
   - each cell can be defined by a unique integer id based on position
3. Add each of these cells to a set of integers s (the set prevents duplicates)
---------------------------------------
in cube-marching: for each cell in s
1. Sample that cell as in cube-marching
**Get the nearest grid cell as follows: e.g. in the 1-D case, if the grid cell size is .01, a segment at .016 would round down to .01; likewise .026 would round down to .02. Note also that the grid cell size should be set equal to bounding_width/x.
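As a concrete illustration, the per-particle setup step above might look something like this in C++ - a rough sketch only, where candidateCells, cellId, and GRID_DIM are assumed names, not the project's actual code:

```cpp
#include <cmath>
#include <set>

// Sketch of the cell-finding step: for one particle, take a loose
// bounding box of width 2*radius, divide it into x segments per
// dimension, snap each sample point down to its grid cell, and add
// the cell's unique integer id to a set (which prevents duplicates).
const int GRID_DIM = 1024;  // assumed maximum number of cells per axis

int cellId(int ix, int iy, int iz) {
    // a unique integer id derived from the cell's integer position
    return (ix * GRID_DIM + iy) * GRID_DIM + iz;
}

std::set<int> candidateCells(float px, float py, float pz,
                             float radius, int x) {
    std::set<int> cells;
    float width = 2.0f * radius;   // the loose boundary B
    float cellSize = width / x;    // grid cell size = bounding_width / x
    for (int i = 0; i <= x; ++i)
        for (int j = 0; j <= x; ++j)
            for (int k = 0; k <= x; ++k) {
                // "project" (round down) each sample point to its cell
                int ix = (int)std::floor((px - radius + i * cellSize) / cellSize);
                int iy = (int)std::floor((py - radius + j * cellSize) / cellSize);
                int iz = (int)std::floor((pz - radius + k * cellSize) / cellSize);
                cells.insert(cellId(ix, iy, iz));
            }
    return cells;
}
```

In the marching step, each cell id in the returned set would then be decoded back to a grid position and sampled exactly as in standard marching cubes.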
I can also reduce the time it takes to sample a point by using spatial hashing on the heuristic that far away spheres should not have an influence on the surface. To do this, I simply need to modify my spatial hashing grid such that it can take an arbitrary position and return all nearby spheres. Sampling can then be re-written as:
sample(point a) {
    get a list l of all spheres near a from the spatial hashing grid
    compute the implicit surface at a using only those nearby spheres
}
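To illustrate, here is one possible shape for sample(), assuming a classic inverse-square metaball field; the neighbor query is stubbed out by passing the nearby spheres in directly, and all names here are hypothetical rather than the project's actual code:

```cpp
#include <vector>

struct Sphere { float x, y, z, r; };

// Sketch of sample(): evaluate the implicit field at point a using only
// nearby spheres. In the real code, the `nearby` list would come from
// the spatial hashing grid query described above.
float sampleField(float ax, float ay, float az,
                  const std::vector<Sphere>& nearby) {
    float field = 0.0f;
    for (const Sphere& s : nearby) {
        float dx = ax - s.x, dy = ay - s.y, dz = az - s.z;
        float d2 = dx * dx + dy * dy + dz * dz;
        // inverse-square blobby contribution; far-away spheres (already
        // excluded by the hash query) would contribute almost nothing
        field += (s.r * s.r) / (d2 + 1e-6f);
    }
    return field;  // the surface is the isocontour field == threshold
}
```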
Analysis:
Since we are looping through each of the n particles, and for each particle dividing into x segments along each dimension, we get a total of n·x³ cells to try to add to the set. In the worst case, adding to the set takes O(lg(n·x³)) time, so for the first setup part of the algorithm we have an upper bound of O(n·x³·lg(n·x³)) = O(n·x³·(3·lg x + lg n)).
For the sampling part, assuming spatial hashing reduces the evaluation of the surface at a point to O(n_h), where n_h is the number of nearby spheres, and since we are sampling up to O(8·n·x³) grid points, the running time is upper-bounded by O(8·n·n_h·x³). Since x defines the number of subdivisions per sphere exactly and will be some constant, such as 10, and n_h should dominate the lg n term, the entire algorithm is thus upper-bounded by O(n·n_h), the same as the dynamics part of the algorithm. In practice, the constant of order 1000-10000 will significantly slow the running time; this can be somewhat reduced by lowering the value of x. Even so, the overall running time will still be significantly faster, since the algorithm no longer evaluates the surface at each of the X³ grid cells, where X is the number of subdivisions per dimension across the entire grid space.
______
So for now, I am working on putting this idea into practice and seeing if it will significantly speed up my algorithm to allow for much finer grid resolution. I am also working on texturing so that I can put environment mapping in. On Joe's recommendation I am using GLSL to do the environment mapping, which looks like it should be a really nice, simple implementation. In the meantime, any thoughts/comments on the feasibility of the above algorithm would be much appreciated!
Wednesday, April 6, 2011
Slow Progress and Smooth Shading
With the beta review coming up, I finally have some results to show on the implicit surface reconstruction side. The problem is, it runs nowhere close to real time except at extremely low grid resolution / poly count. However, you can sort of start to get the idea. Also, with surface normals computed, I have smooth shading working as well. I am currently using an adapted version of the open source marching cubes code by Cory Gene Bloyd from this site. Looking ahead, I now think that using hierarchical bounding volumes and an adaptively sized grid might make this algorithm fast enough, at least for low spatial separation.
Anyhow, here's what I currently have:
Thursday, March 31, 2011
Short update: Interaction and Implicit Progress
I'm planning on doing a bigger update a little before the beta review, so this will probably be one of my shorter posts. With quite a few projects going on this week, I was not able to get everything done for the beta review yet. However, over the next few days I have a mysterious lull in work from other classes, so I should be posting more progress in a couple of days. As planned, I have finished up basic user interaction - the main difference between the final implementation and the pseudocode from last week is that I had to use impulse rather than straight-up force to modify fluid behavior, because of the way force is currently zeroed at the beginning of each time step. (I might change this later.) I finally put in an fps tracker as well - I have it display on the GUI status bar at the bottom - right now the simulation is running at around 2 fps for the 1000-particle set, and around 20-some fps for the 100-particle set on my laptop.
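The force-vs-impulse distinction above can be sketched minimally like this (with assumed struct and function names, not the project's actual code): since the force accumulator is cleared at the start of each step, a UI-driven force could be wiped out before integration, whereas an impulse changes velocity directly and survives the zeroing.

```cpp
struct Particle {
    float vx, vy, vz;   // velocity
    float fx, fy, fz;   // force accumulator, zeroed each time step
    float mass;
};

// An impulse J changes velocity immediately: dv = J / m. Unlike a force
// added to the accumulator, it cannot be lost when forces are zeroed at
// the start of the next simulation step.
void applyImpulse(Particle& p, float jx, float jy, float jz) {
    p.vx += jx / p.mass;
    p.vy += jy / p.mass;
    p.vz += jz / p.mass;
}
```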
I'm still working on the implicit reconstruction, mainly going back and doing a bunch more background reading on feasible solutions, such as:
http://research.microsoft.com/en-us/um/people/hoppe/psrecon.pdf
http://web.mit.edu/manoli/crust/www/crust.html
http://physbam.stanford.edu/~fedkiw/papers/stanford2001-03.pdf
In particular, the last paper discusses a method that might be useful, as it is particularly suited to being fast and memory-light. Essentially, it uses a level set method, interpolating the zero isosurface of the level set function. A simplified version of the motion of the surface is also embedded in addition to the surface itself, which might be a good way to get fast deformations without having to recompute the entire mesh from scratch.
UPDATE:
Here is a video of user interaction working. The video also shows how clicking shoots a ray from the camera. The framerate is now displayed in the lower left corner of the status bar.
For next week and the beta review, same plan as before: focus on the implicit formulation.
Thursday, March 24, 2011
Self-Evaluation/Reality Check/Future Game Plan
With just over a week until the beta review, here is an evaluation of my progress and what is left to do.
Looking back at the list of final deliverables, I feel like I still have quite a bit left, so I'm trying not to feel too much pressure (deep breaths), but I know there is still a lot of work ahead of me.
On the other hand, looking back I feel like I've already accomplished a lot, including all the preliminary research, GUI and camera setup, particle framework, spatial hash table, fluid dynamics, and adding in parameter control. It would have been nice to have gotten farther along in the surface reconstruction at this point, but other than that I think I'm somewhat on par with my suggested timeline.
I really want to have time to get to the haptics, because I think that will make the end product a lot more dynamic and interesting to play with but I am aware that with the time constraints it is more important to get what I have working fully and play the haptics part by ear. With this in mind, I have devised a list of the necessary parts of the project that need to get finished and the other things that I want to do but are not as important and are more sort-of "icing-on-the-cake" deliverables.
Most important things to get working:
- Surface reconstruction (in progress)
- User Interaction (in progress, almost done)
- Environment mapping and smooth shading
- Optimized running time
"Icing-on-the-cake" deliverables:
- Haptic control and force feedback (arguably should go at bottom of other list)
- Improved User Interaction (e.g. rigid bodies)
- Tweaking particle dynamics / parameters
- Small GUI things (e.g. saving/loading materials, etc.)
- Index of refraction
- Some form of implicit surface reconstruction
- Click-drag user interaction
Implicit Problems and Other Oddities
Well, what with lots of debugging and little sleep, this week was not nearly as productive as I had hoped but on the bright side at least I am dealing with these issues now as opposed to in a couple weeks. The majority of this week was spent with the implicit version of the particle sim and trying to get it to work in some form as well as doing some more reading and re-reading on options for (fast) surface reconstruction.
I think the main problem I am having right now is that I don't have a way of visually debugging, so I'm not really sure how close I am to having the ray marching work. I'm just getting a lot of ill-formed shapes, so I've gone back and am now redoing the surface reconstruction in smaller, easy-to-check parts. I'm also wary that this method is bound to be pretty slow, so I'm still looking into other ways of reconstructing the implicit surface once I'm sure it's set up correctly. The original Witkin-Heckbert particle constraint idea is looking less appealing given that it is quite a lot of code and still only generates points, which must then be polygonized in real time as well; suffice it to say, I'm struggling a little with this part of the project right now.
On a side note, I have also been working on smaller things such as the user interaction forces now that I have a real camera setup (see last couple posts) and actually have access to such info as the view plane. Not quite finished with this either, but should be soon. My current plan/pseudo-code for the basic interaction forces:
1. Qt signals user click
2. Store the location of the click in c.last
3. interactionClick <- TRUE
4. Shoot Ray out from camera eye position through the clicked position in world space
5. //This step takes O(n) time:
For each particle at position p do a ray-sphere intersection with the sphere centered at p with radius r.
d <- SphereIntersect(p,r);
//returns -1 if no hit otherwise the closest distance d from p to Ray
//Note that d <= r
if (d != -1) { //there was an intersection
tag this particle p and store its d
}
...................
//In next time steps - if interactionClick == TRUE
6. Qt signals user drag with mouse clicked at c.current
7. dDrag <- c.current - c.last //dDrag should be projected so that it is parallel to the view plane
8. For each particle p that is tagged
Compute a force f with magnitude a function of p.d and lengthOf(dDrag)
f points in direction of dDrag
Store f with that particle to be applied in external forces step of simulation
9. c.last <- c.current
....................
10. OnMouseRelease()
Remove tags from all particles
interactionClick <- FALSE
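The SphereIntersect step in the pseudocode above can be sketched as follows - a minimal version assuming a unit-length ray direction, with vectors unrolled into plain floats for brevity; the function name and layout are illustrative, not the project's actual code:

```cpp
#include <cmath>

// Returns the closest distance d from the particle's position p to the
// ray, or -1 if there is no hit (i.e. the perpendicular distance from
// p to the ray exceeds the sphere radius r). Note that on a hit, d <= r.
float sphereIntersect(float ox, float oy, float oz,   // ray origin
                      float dx, float dy, float dz,   // ray direction (unit length)
                      float px, float py, float pz,   // particle position p
                      float r) {
    // vector from ray origin to sphere center
    float cx = px - ox, cy = py - oy, cz = pz - oz;
    // projection of that vector onto the ray direction
    float t = cx * dx + cy * dy + cz * dz;
    if (t < 0) return -1.0f;  // sphere center is behind the ray origin
    // perpendicular distance from the center to the ray
    float qx = cx - t * dx, qy = cy - t * dy, qz = cz - t * dz;
    float d = std::sqrt(qx * qx + qy * qy + qz * qz);
    return (d <= r) ? d : -1.0f;
}
```

In the drag steps, each particle tagged by this test would then receive a force whose magnitude depends on its stored d and on lengthOf(dDrag), pointing along dDrag.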
Anyway, that's it for now. Hopefully some more videos to come soon, but I'll save the "future plans" part of this post for the self-critique which I should be putting up shortly.