You are right, it is one render per focus-blur sample; the more samples, the higher the quality. As you suggest, I thought about getting the depth buffer into a texture and blurring the pixels of a single render with a radius that depends on how far each pixel's depth is from the focus point. Not implemented yet. Still, the accumulation-buffer method theoretically gives the model closest to reality. My goal is to keep everything in clear x86 source code, but I could do this alternative method on the CPU, using glReadPixels / glDrawPixels.
@yonarw
12 years ago
Hello, I wrote a depth-of-field feature for my own engine too. You say you are using the accumulation buffer? That means you have to render the whole scene for each sample? In my engine I use a framebuffer from which I can read depth information and blur each pixel according to its depth in that framebuffer. If your scene gets more complex, it might be a problem to render it multiple times. But maybe you do not want to use shaders?!? Looks good anyway ;)
Comments: 2