Thursday, March 31, 2011
I'm planning on doing a bigger update a little before the beta review, so this will probably be one of my shorter posts. With quite a few projects going on this week, I wasn't able to get everything done for the beta review yet, but over the next few days I have a mysterious lull in work from other classes, so I should be posting more progress in a couple of days. As planned, I have finished up basic user interaction - the main difference between the final implementation and the pseudocode from last week is that I had to use an impulse rather than a straight-up force to modify the fluid behavior, because forces are currently zeroed out at the beginning of each time step (I might change this later). I finally put in an FPS tracker as well, displayed on the GUI status bar at the bottom - right now the simulation is running at around 2 fps for the 1000-particle set and around 20-some fps for the 100-particle set on my laptop.
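For reference, here's a minimal sketch of the force-vs-impulse distinction (hypothetical field names, not my actual classes): a force added to the accumulator gets wiped when forces are zeroed at the start of the step, while an impulse writes straight into the velocity, so it survives the reset.

// A hypothetical particle with just the fields this distinction needs
// (illustration only - not my actual class).
struct Particle {
    float vel[3];    // velocity, integrated every step
    float force[3];  // accumulator, zeroed at the start of each step
    float mass;
};

// Adding a force only has an effect if it happens after the zeroing:
void applyForce(Particle &p, const float f[3]) {
    for (int i = 0; i < 3; ++i)
        p.force[i] += f[i];  // wiped when forces are next cleared
}

// An impulse J sidesteps the accumulator entirely: dv = J / m.
void applyImpulse(Particle &p, const float j[3]) {
    for (int i = 0; i < 3; ++i)
        p.vel[i] += j[i] / p.mass;
}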
I'm still working on the implicit reconstruction, mainly going back and doing a bunch more background reading on feasible solutions, such as:
http://research.microsoft.com/en-us/um/people/hoppe/psrecon.pdf
http://web.mit.edu/manoli/crust/www/crust.html
http://physbam.stanford.edu/~fedkiw/papers/stanford2001-03.pdf
In particular, the last of these discusses a method which might be useful, since it is particularly suited to being fast and light on memory. Essentially, it uses a level set method, interpolating the zero isosurface of the level set function. A simplified version of the motion of the surface is also embedded in addition to the surface itself, which might be a good way to get fast deformations without having to recompute the entire mesh from scratch.
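As a quick illustration of the zero-isosurface idea (a generic sketch, not the paper's actual algorithm): with the level set function phi sampled at grid nodes, the surface crosses a grid edge wherever phi changes sign, and the crossing point can be located by linear interpolation.

// Generic zero-crossing interpolation along one grid edge.
// phi0, phi1 are level set samples at the edge endpoints p0, p1.
// Precondition: phi0 and phi1 have opposite signs.
void zeroCrossing(const float p0[3], const float p1[3],
                  float phi0, float phi1, float out[3]) {
    float t = phi0 / (phi0 - phi1);  // fraction of the way from p0 to p1
    for (int i = 0; i < 3; ++i)
        out[i] = p0[i] + t * (p1[i] - p0[i]);
}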
UPDATE:
Here is a video of user interaction working. The video also shows how clicking shoots a ray from the camera. The framerate is now displayed in the lower left corner of the status bar.
For next week and the beta review, same plan as before: focus on the implicit formulation.
Thursday, March 24, 2011
Self-Evaluation/Reality Check/Future Game Plan
With just over a week until the beta review, here is an evaluation of my progress and what is left to do.
Looking back at the list of final deliverables, I feel like I still have quite a bit left. I'm trying not to feel too much pressure (deep breaths), but I know there is still a lot of work ahead of me.
On the other hand, I feel like I've already accomplished a lot, including all the preliminary research, the GUI and camera setup, the particle framework, the spatial hash table, the fluid dynamics, and adding in parameter control. It would have been nice to have gotten farther along in the surface reconstruction at this point, but other than that I think I'm roughly on par with my suggested timeline.
I really want to have time to get to the haptics, because I think that will make the end product a lot more dynamic and interesting to play with, but I am aware that with the time constraints it is more important to get what I have working fully and play the haptics part by ear. With this in mind, I have split the remaining work into the parts of the project that need to get finished and the things I want to do that are not as important - more sort-of "icing-on-the-cake" deliverables.
Most important things to get working:
- Surface reconstruction (in progress)
- User Interaction (in progress, almost done)
- Environment mapping and smooth shading
- Optimized running time
"Icing-on-the-cake" deliverables:
- Haptic control and force feedback (arguably this should go at the bottom of the other list)
- Improved User Interaction (e.g. rigid bodies)
- Tweaking particle dynamics / parameters
- Small GUI things (e.g. saving/loading materials, etc.)
- Index of refraction
- Some form of implicit surface reconstruction
- Click-drag user interaction
Implicit Problems and Other Oddities
Well, what with lots of debugging and little sleep, this week was not nearly as productive as I had hoped, but on the bright side, at least I am dealing with these issues now as opposed to in a couple of weeks. The majority of this week was spent on the implicit version of the particle sim, trying to get it to work in some form, as well as doing some more reading and re-reading on options for (fast) surface reconstruction.
I think the main problem right now is that I don't have a way of visually debugging, so I'm not really sure how close I am to having the ray marching work. I'm just getting a lot of ill-formed shapes, so I've gone back and am now redoing the surface reconstruction in smaller, easy-to-check parts. I'm also wary that this method is bound to be pretty slow, so I'm still looking into other ways of reconstructing the implicit surface once I'm sure it's set up correctly. The original Witkin-Heckbert particle constraint idea is looking less appealing, given that it is quite a lot of code and still only generates points, which must then be polygonized in real time as well - suffice it to say, I'm struggling a little with this part of the project right now.
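For the curious, the ray marching I'm debugging boils down to something like this (a stripped-down sketch with placeholder names, not my actual code): step along each eye ray at a fixed increment and watch for a sign change in the implicit function f.

// Minimal fixed-step ray march looking for the surface f(x) = 0:
// walk from the ray origin out to tMax in steps of dt and stop at
// the first sign change of f along the ray. The field is passed in
// as a function pointer so the sketch stands alone.
bool marchRay(const float o[3], const float d[3], float tMax, float dt,
              float (*field)(const float[3]), float hit[3]) {
    float prev = field(o);
    for (float t = dt; t <= tMax; t += dt) {
        float p[3] = { o[0] + t * d[0],
                       o[1] + t * d[1],
                       o[2] + t * d[2] };
        float cur = field(p);
        if (prev * cur < 0.0f) {  // sign change: stepped across the surface
            hit[0] = p[0]; hit[1] = p[1]; hit[2] = p[2];
            return true;
        }
        prev = cur;
    }
    return false;  // no crossing within tMax
}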
On a side note, I have also been working on smaller things, such as the user interaction forces, now that I have a real camera setup (see the last couple of posts) and actually have access to information like the view plane. Not quite finished with this either, but it should be soon. My current plan/pseudocode for the basic interaction forces:
1. Qt signals a user click
2. Store the location of the click as c.last
3. interactionClick <- TRUE
4. Shoot a ray from the camera eye position through the clicked position in world space
5. //This step takes O(n) time:
   For each particle at position p, do a ray-sphere intersection with the sphere centered at p with radius r:
       d <- SphereIntersect(p, r)
       //returns -1 if no hit, otherwise the closest distance d from p to the ray
       //note that d <= r
       if (d != -1) { //there was an intersection
           tag this particle p and store its d
       }
...................
//In subsequent time steps, if interactionClick == TRUE:
6. Qt signals a user drag with the mouse now at c.current
7. dDrag <- c.current - c.last //dDrag should be projected so that it is parallel to the view plane
8. For each tagged particle p:
       compute a force f whose magnitude is a function of p.d and lengthOf(dDrag)
       f points in the direction of dDrag
       store f with that particle, to be applied in the external-forces step of the simulation
9. c.last <- c.current
....................
10. OnMouseRelease():
       remove tags from all particles
       interactionClick <- FALSE
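To make the geometry in steps 5 and 7-8 concrete, here is a rough C++ sketch under some assumptions (a bare-bones Vec3, a normalized ray direction, and a linear falloff that is just one plausible choice - this is illustrative, not my final code):

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Step 5: distance from particle center p to the ray (eye, dir),
// or -1 if the ray misses the sphere of radius r. dir is unit length.
float sphereIntersect(Vec3 eye, Vec3 dir, Vec3 p, float r) {
    Vec3 ep = sub(p, eye);
    float t = dot(ep, dir);            // closest approach along the ray
    if (t < 0.0f) return -1.0f;        // sphere is behind the camera
    Vec3 closest = { eye.x + t*dir.x, eye.y + t*dir.y, eye.z + t*dir.z };
    Vec3 off = sub(p, closest);
    float d = std::sqrt(dot(off, off));
    return (d <= r) ? d : -1.0f;       // note d <= r on a hit
}

// Steps 7-8: a force along dDrag whose magnitude falls off with the
// stored distance d. The (1 - d/r) falloff is one simple option.
Vec3 dragForce(Vec3 dDrag, float d, float r, float strength) {
    float s = strength * (1.0f - d / r);
    return { s * dDrag.x, s * dDrag.y, s * dDrag.z };
}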
Anyway, that's it for now. Hopefully some more videos to come soon, but I'll save the "future plans" part of this post for the self-critique which I should be putting up shortly.
Thursday, March 17, 2011
Refinements and Implicit Attempts
So, what with the craziness of this week, trying to juggle a bunch of different projects and then going to California to look for housing for next year, it has taken me a while to get up the final version of this blog post, but things seem to be moving along. I have finished adding in most of the refinements I mentioned last post. Specifically, I finished redoing the camera class, so it is now actually storing all the relevant information (view reference point, eye position, view up vector, view plane distance, etc.) and using gluLookAt to do the camera aiming. Then, as a first step toward getting the interactivity working and to show off the newfound power of the camera class, I set it up so that when you Shift+click on a point, it draws a ray from the camera eye through that point (see screenshot below):
I used the camera class from some of the CIS462 homeworks as an extensive reference for setting up the camera. Once I got the transformations working, it was relatively straightforward to modify the main class to interface with these methods instead of directly modifying the values of the Camera struct. For shooting the ray, looking back at the CIS277 course notes was extremely helpful in remembering how to convert the 2D screen coordinates to 3D world coordinates using the camera up vector, eye position, and center.
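In case it's useful to anyone else, the conversion boils down to something like the following (a generic sketch with made-up names, assuming a perspective camera described by the usual eye/center/up parameters plus a vertical field of view):

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x/len, v.y/len, v.z/len };
}

// Build a world-space ray direction through pixel (px, py).
// eye/center/up are the gluLookAt parameters; fovY is in radians.
Vec3 screenToRay(Vec3 eye, Vec3 center, Vec3 up,
                 float fovY, int width, int height, int px, int py) {
    Vec3 forward = normalize(sub(center, eye));
    Vec3 right   = normalize(cross(forward, up));
    Vec3 trueUp  = cross(right, forward);

    // Half-extents of the view plane at distance 1 from the eye.
    float h = std::tan(fovY * 0.5f);
    float w = h * (float)width / (float)height;

    // Map pixel coords to [-1, 1], flipping y (screen y grows downward).
    float sx = 2.0f * px / width  - 1.0f;
    float sy = 1.0f - 2.0f * py / height;

    Vec3 dir = { forward.x + sx*w*right.x + sy*h*trueUp.x,
                 forward.y + sx*w*right.y + sy*h*trueUp.y,
                 forward.z + sx*w*right.z + sy*h*trueUp.z };
    return normalize(dir);
}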
Additionally, I modified the GUI and the underlying framework so that all the parameters (everything but the index of refraction), including color, can actually be changed from within the GUI during the simulation. Note that in the screenshot to the left, all the parameters from the last post are now directly manipulable from within the simulation.
It is turning out to be less trivial than expected to code up the implicit formulation framework and get it to interface with the rest of the program, especially figuring out indexing and how to define the isolevel within each grid cell. I am continuing to work on building the implicit formulation into the code and experimenting with the Paul Bourke surface reconstruction code.
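For the indexing part, the flattening I'm working with is the standard one, sketched below with placeholder names: cell (i, j, k) of an nx-by-ny-by-nz grid maps to a single array index, and each cell's eight corners come from the surrounding node samples (the corner ordering shown is one common marching cubes convention; orderings vary).

// Flatten a 3D grid coordinate into a 1D array index.
// nx, ny are the grid dimensions along x and y.
int cellIndex(int i, int j, int k, int nx, int ny) {
    return i + j * nx + k * nx * ny;
}

// The eight corner nodes of cell (i, j, k): bottom face first,
// then the top face, matching the Bourke-style vertex layout.
void cellCorners(int i, int j, int k, int corners[8][3]) {
    static const int off[8][3] = {
        {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0},
        {0,0,1}, {1,0,1}, {1,1,1}, {0,1,1}
    };
    for (int c = 0; c < 8; ++c) {
        corners[c][0] = i + off[c][0];
        corners[c][1] = j + off[c][1];
        corners[c][2] = k + off[c][2];
    }
}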
So the plan right now is to continue working on this, hopefully getting some sort of visualization in the near future. Smaller things can also be tweaked, but for now I think the implicit surface reconstruction is the main thing. Also, now that the ray shooting is working, I'm hoping to finally get some simple click-drag interaction forces in. More to come soon!
Thursday, March 3, 2011
Interactions, tweaks and start of implicitness
So, as this week has involved scrambling to complete movies, midterms, and smoke simulations, plus a general lack of what we might consider necessary doses of sleep, it was not an extremely productive week on the senior project front. Also, as I am currently finishing this blog post at a resort, I am paying dearly for internet/computer access right now, so I will keep this pretty brief. I am still working on adding user interaction to my GUI, which is proving more difficult than anticipated due to my simple camera implementation (which is just using the glTranslate and glRotate methods right now). I am thinking of upgrading my camera class so that I can use gluLookAt instead of relying on rotating the world - this will make shooting rays and picking objects much easier, since I will have the eye position already stored. For now, I am thinking of just the click-drag -> apply-force-in-that-direction scheme, which should integrate nicely with my framework. Later, if there's time, I can use the rigid bodies class I added to the framework to have a "virtual tool" for interacting with the mercury (see the interaction tool buttons on my GUI), but I'm thinking I might just make this an option for haptics anyway. In terms of speeding up the simulation, I think choosing a better hash function for my spatial hashing data structure should help (see the sketch after the list below). Other smaller improvements to my program I'm working on:
- Adding in more parameters to the GUI and actually having slots to make the simulation respond to changing the parameters. The ones I'm adding (right now these are all just predefined constants):
- k - the pressure constant for computing pressure
- k_near - the near-pressure constant (increasing this causes less clustering by repelling very close particles)
- k_spring - the spring constant for springs between particles
- gamma - elasticity constant
- alpha - plasticity constant
- sigma - linear viscosity term
- mu - friction term between particles and rigid bodies
- rho_0 - the rest (desired) density of the particles
- these mostly come from the Clavet paper
- while some of these parameters will probably not be in the final version of the application, I think it will be very useful to be able to experiment with tweaking different parameters in real time, and it will make the program more dynamic. Allowing the user to change the particle color should also be pretty trivial.
- Adding a framerate tracker to get a read on the exact framerate - this will be helpful in quantifying performance and trying to get a speedup (I think for simplicity I might just adapt the one from the CIS563 base code - the mmc::FpsTracker class)
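On the hashing point, a commonly used spatial hash for particle grids is the one from Teschner et al. 2003 ("Optimized Spatial Hashing for Collision Detection of Deformable Objects"), which XORs the cell coordinates scaled by three large primes. A minimal sketch (not my current implementation):

#include <cmath>

// cellSize is the grid spacing (typically the kernel radius);
// tableSize is the number of hash buckets.
unsigned int spatialHash(float x, float y, float z,
                         float cellSize, unsigned int tableSize) {
    long i = (long)std::floor(x / cellSize);
    long j = (long)std::floor(y / cellSize);
    long k = (long)std::floor(z / cellSize);
    return (unsigned int)((i * 73856093L) ^ (j * 19349663L) ^ (k * 83492791L))
           % tableSize;
}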
As for the implicit surface itself, the function I'm planning to start with is of the form:

f(x) = bias - sum over particles i of exp( -|x - c_i| / s_i )

where c_i is the location of particle i and s_i is a standard deviation for that particle. I am going to start by trying to construct the surface using the open-source marching cubes algorithm and see what sort of visualization and framerate I get on that. Then I will look at the other possible methods for surface reconstruction (the Witkin-Heckbert floater-particle constraint method, etc.) - I've been scrounging the web for other possible approaches that might work. Anyhow, that's all for now. When I return from my vacation I will hopefully be much more well rested and ready to start on some implicit surface reconstruction!
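As a postscript, here is roughly what evaluating that field for marching cubes could look like (placeholder names, and a naive O(particles x grid nodes) loop - the spatial hash sketched above should cut each node's sum down to nearby particles only):

#include <cmath>
#include <vector>

struct Particle { float c[3]; float s; };  // center and std deviation

// Evaluate f(x) = bias - sum_i exp(-|x - c_i| / s_i) at a world point.
float evalField(const float x[3], const std::vector<Particle>& parts,
                float bias) {
    float sum = 0.0f;
    for (const Particle& p : parts) {
        float dx = x[0] - p.c[0];
        float dy = x[1] - p.c[1];
        float dz = x[2] - p.c[2];
        float dist = std::sqrt(dx*dx + dy*dy + dz*dz);
        sum += std::exp(-dist / p.s);
    }
    return bias - sum;
}

// Sample the field at every node of an n^3 grid spanning [lo, hi]^3;
// these samples are what a marching cubes routine (e.g., Bourke's
// Polygonise) consumes, extracting the isosurface f = 0 cell by cell.
std::vector<float> sampleGrid(const std::vector<Particle>& parts,
                              float bias, float lo, float hi, int n) {
    std::vector<float> phi(n * n * n);
    float h = (hi - lo) / (n - 1);
    for (int k = 0; k < n; ++k)
        for (int j = 0; j < n; ++j)
            for (int i = 0; i < n; ++i) {
                float x[3] = { lo + i*h, lo + j*h, lo + k*h };
                phi[i + j*n + k*n*n] = evalField(x, parts, bias);
            }
    return phi;
}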