- January 12, 2017 at 9:26 am #55138
I am in my final year at university studying computer games programming.
I have a final year project that was inspired by PowerVR; my project is assessing the performance of ray tracing within a real-time application.
I am now onto the design phase and had a question, if possible. I have a G-Buffer which, for this example, simply stores the world position and normal. I want to trace in the direction of the light and see if it reaches the light, following your post on Gamasutra.
Am I right in thinking that, for every step of the trace, I sample the new coordinate data, get the world position, and from this work out whether an object is blocking my line of sight to the light?
I feel that once I understand this I will be able to start implementing lights, shadows and reflections.
Sorry if this is not appropriate; if so, I apologise.
January 16, 2017 at 11:25 am #55159
Each pixel of your G-Buffer in this case corresponds to the first ray bounce or ray intersection which would occur if you were to emit a primary ray per pixel from the camera into your scene.
Using your G-Buffer you can optionally emit additional secondary rays from the current world-space position (usually offsetting the origin along the normal direction to ensure back-face collisions are avoided). These secondary rays can be used to produce additional effects, e.g. shadows, reflections, refractions, etc.
“Am I right in thinking that, for every step of the trace, I sample the new coordinate data, get the world position, and from this work out whether an object is blocking my line of sight to the light?”
What you do at each ray intersection depends entirely on which technique you are trying to achieve. If, using the G-Buffer, you emit a shadow ray per pixel towards a light source and it intersects another object before reaching the light, you may not need to sample the coordinate data/world-space position at all; you may instead simply drop the ray. Reflections/refractions would require alternative steps.
Shaun
January 19, 2017 at 9:53 pm #55170
Thank you for your response.
Yes, for now I am working with just the shadow ray. I will have a go and attempt what I assume the steps are, and hopefully it works out 🙂 If not, I'll post more details on what I am trying to do, with images, code snippets etc.
I’m currently building up DirectX 12 with deferred shading so I can start attempting this.
Aaron
February 10, 2017 at 12:32 pm #55219
Hi, since starting this thread I have managed to implement a basic deferred system in DirectX 12, storing normals, diffuse and world positions.
I have come across a problem.
(please ignore the rest of the image just focus on the top right)
In the guide I'm following, the image of the world-space positions is nicely blended and also seems to incorporate the camera's view, whereas in my version there are harsh transitions from one colour to another, and I'm struggling to work out what I should be doing to get that output.
I am not asking for specific answers, but any advice or hints towards the correct direction would be helpful, please?
February 21, 2017 at 11:55 am #55291
It is possible that your geometry is much larger than what can be seen in the reference image.
Harsh transitions may be fine: you can only display colours in the range [0, 1] on screen, so everything above or below that will be clamped. You therefore either need to scale your values into the [0, 1] range or resize the scene.
Note that I’m guessing here; I can’t say for sure without the app’s source code.