# comp.graphics.algorithms

## Subject: field of view math

Hello All,

I'm using a 3D renderer (prman) that does not motion blur FOV
changes. I'd like to simulate this with a 2D filter; I think it
should be possible.

Right now I'm trying to construct a vector image where each pixel
describes the direction and magnitude of where that point will move in
screen space, then use this vector image with a vector blur operation
in a compositing package to smear my final image.

I've got a hack going right now where I place a grid in front of my
camera and project screen-space coords onto each point. I do that for
the current frame and the next frame, then generate a vector for each
grid point between the two frame samples. This gives me a normalized
2D vector, which I then multiply by the x and y size of the camera
res. I think this gives me a vector at each grid point that tells me
how many pixels, and in what direction, a screen-space point will
travel over one frame. I render this grid as a floating-point image
and smear my beauty render based on this vector field.
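The two-frame sampling above could be sketched like this. `project` is a hypothetical stand-in for whatever camera projection the renderer exposes (here assumed to return normalized screen coords in [0, 1]); the names and interface are mine, not prman's:

```python
def grid_motion_vectors(project, grid_points, t0, t1, width, height):
    """Per-grid-point screen-space motion (in pixels) between two frames.

    project(point, t) -> (sx, sy): normalized screen coords at time t.
    The difference between the two projections, scaled by the camera
    resolution, is the pixel-space motion vector over one frame.
    """
    vectors = []
    for p in grid_points:
        x0, y0 = project(p, t0)
        x1, y1 = project(p, t1)
        vectors.append(((x1 - x0) * width, (y1 - y0) * height))
    return vectors
```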

First off, does anyone see any flaw with this? Right now I'm getting a
radial blur as I would expect; the blur lengths aren't quite right
yet, but my image looks almost right. I'm still trying to work out
whether my vectors are bad or whether it's a problem with the way I'm
blurring. Anyway, I think I can get this working with a bit more
debugging.

All this to say, I'm guessing there is an easier way to construct this
vector image without going through the renderer. An FOV blur should
just be a 2D screen-space problem: it should always come out to a
radial vector field whose length is zero at the center of frame. So
given a camera with all the usual variables available to me, is there
a simple formula that will tell me the screen-space offset for a given
FOV change over time?
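For what it's worth, here's one way I believe the closed form works out for a pinhole camera with a pure FOV change (no translation or rotation): a point's distance from frame center scales by tan(fov0/2) / tan(fov1/2), since focal length in pixels is (width/2) / tan(fov/2) and screen position is proportional to focal length. A minimal sketch, assuming square pixels and the horizontal FOV in degrees:

```python
import math

def fov_blur_vector(px, py, width, height, fov0_deg, fov1_deg):
    """Screen-space motion vector (in pixels) for a pure FOV change.

    Distance from frame center scales by tan(fov0/2) / tan(fov1/2)
    when the FOV goes from fov0 to fov1, so the offset is purely
    radial and has zero length at the center of frame.
    """
    cx, cy = width / 2.0, height / 2.0
    scale = (math.tan(math.radians(fov0_deg) / 2.0)
             / math.tan(math.radians(fov1_deg) / 2.0))
    return (px - cx) * (scale - 1.0), (py - cy) * (scale - 1.0)
```

Evaluating that per pixel should give the same radial field as the grid hack, without a render pass.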

Thanks for any help!

daniel