optical flow implementation
qingbsd
Hello,
Is there an implementation of optical flow analysis for image stacks in Igor? Some information on optical flow is at the link below:
https://en.wikipedia.org/wiki/Optical_flow
I am just trying to see if I have to start from scratch.
Thanks!
Quan
You may want to start by exploring the ImageRegistration operation, which can provide relative translation parameters (dx, dy) for consecutive frames that are sufficiently close.
A.G.
March 18, 2020 at 03:59 pm - Permalink
I am curious. Does the starting point for optical flow analysis involve something as simple as subtracting image N from image N+1 repeatedly through a (time) stack of images? If so, I would be interested in finding a common framework for this process, especially when it comes to rescaling the subtracted image to recover a sensible histogram spread. Would, for example, this conversion step involve something as simple as using a bi-color histogram map (e.g. negatives go blue and positives go red)?
I look forward to further insights on this (references at the "beginners level"??).
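The subtraction-and-bicolor idea above can be sketched in a few lines. This is a NumPy illustration only (the function names are my own, and the same logic could be expressed with Igor's MatrixOP): each signed difference image is scaled to [-1, 1] and mapped so that positive changes go to the red channel and negative changes to blue.

```python
import numpy as np

def frame_differences(stack):
    """Subtract frame N from frame N+1 through a (frames, ny, nx) stack."""
    return stack[1:].astype(np.float64) - stack[:-1].astype(np.float64)

def signed_to_bicolor(diff):
    """Map a signed difference image to RGB: negatives blue, positives red."""
    scale = np.max(np.abs(diff))
    if scale == 0:
        scale = 1.0
    norm = diff / scale                    # now in [-1, 1]
    rgb = np.zeros(diff.shape + (3,))
    rgb[..., 0] = np.clip(norm, 0, 1)      # red channel: positive changes
    rgb[..., 2] = np.clip(-norm, 0, 1)     # blue channel: negative changes
    return rgb

# Example: a 3-frame stack with a bright pixel moving one column per frame
stack = np.zeros((3, 4, 4))
stack[0, 1, 1] = stack[1, 1, 2] = stack[2, 1, 3] = 1.0
diffs = frame_differences(stack)           # shape (2, 4, 4)
```

In the difference frames, the pixel's new position shows up positive (red) and its old position negative (blue), which is exactly the two-color visualization described above.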
March 18, 2020 at 05:23 pm - Permalink
In reply to You may want to start by… by Igor
Thanks for the note. My understanding at this point is limited, but I sense that the problem with ImageRegistration is that most of the optical flow analysis I will face deals with local particles/patches of signal moving with no common global velocity/direction, so the registration of even two consecutive frames could easily fail.
March 18, 2020 at 11:04 pm - Permalink
In reply to I am curious. Does the… by jjweimer
Hello jjweimer. I think subtraction would be the first step, but then there is a further step: solving the optical flow equation, under some assumptions, for the velocity of individual patches of signal. The result is a vector field rather than a scalar image. Still learning about this myself, though...
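For reference, the classic local method along these lines is Lucas-Kanade: assume brightness constancy, Ix*u + Iy*v + It = 0, and solve for the velocity (u, v) by least squares over a small window around each pixel. A self-contained NumPy sketch (naive loops, illustrative only, not an Igor implementation):

```python
import numpy as np

def lucas_kanade(frame1, frame2, win=5):
    """Estimate a velocity field (u, v) from two consecutive frames.

    Solves the brightness-constancy equation Ix*u + Iy*v + It = 0
    by least squares over a (win x win) window around each pixel.
    """
    f1 = frame1.astype(np.float64)
    f2 = frame2.astype(np.float64)
    Ix = np.gradient(f1, axis=1)           # spatial gradient along x
    Iy = np.gradient(f1, axis=0)           # spatial gradient along y
    It = f2 - f1                           # temporal derivative
    half = win // 2
    u = np.zeros_like(f1)
    v = np.zeros_like(f1)
    for y in range(half, f1.shape[0] - half):
        for x in range(half, f1.shape[1] - half):
            ix = Ix[y-half:y+half+1, x-half:x+half+1].ravel()
            iy = Iy[y-half:y+half+1, x-half:x+half+1].ravel()
            it = It[y-half:y+half+1, x-half:x+half+1].ravel()
            A = np.stack([ix, iy], axis=1)
            ATA = A.T @ A
            if np.linalg.det(ATA) > 1e-6:  # skip ill-conditioned (flat) windows
                u[y, x], v[y, x] = np.linalg.solve(ATA, -A.T @ it)
    return u, v
```

For a Gaussian blob shifted by one pixel in x between frames, this recovers u close to 1 and v close to 0 near the blob, i.e. a per-pixel vector field rather than a scalar image.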
March 18, 2020 at 11:07 pm - Permalink
I have the same issue with using ImageRegistration in that my systems have local patches of flow.
I have just started on developing routines to do image subtraction through a stack. The easiest next step that I see is to use a two-color histogram map to distinguish regions of positive and negative changes (flow).
I am keenly interested in further developments on this. Like you, I am only now learning about this field. Indeed, your posts on the email discussion thread and this forum coincided with what I am just starting. I am open to off-line discussions about collaborative developments. Contact me at Jeffrey dot Weimer at UAH dot edu.
March 19, 2020 at 07:14 am - Permalink
Clearly ImageRegistration is not appropriate if your application involves images of, say, N particles that move independently of each other, each with its own velocity {Vx_i, Vy_i} between consecutive frames.
The next question is: do you have a way of identifying the N particles uniquely? Suppose, for example, that particles have a fixed size that does not change much between frames, so that you could determine the identity of a particle based on its size. In that case I would run ImageAnalyzeParticles on the frames and store the positions of the N particles for each frame.
If your particles change in size or move too far to be uniquely identified, then I can't see a solution except to increase the frame rate to the point that the individual {dx_i, dy_i} are small and can be determined uniquely.
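The linking step described above, matching each particle's position in frame N to its nearest candidate in frame N+1 under a small-displacement bound, can be sketched as greedy nearest-neighbor matching. A NumPy illustration (names are my own; in Igor the centroids would come from ImageAnalyzeParticles):

```python
import numpy as np

def link_frames(pos_a, pos_b, max_step):
    """Greedily match particle positions between consecutive frames.

    pos_a, pos_b: (N, 2) and (M, 2) arrays of (x, y) centroids.
    Returns a list of (i, j) index pairs with distance <= max_step.
    """
    if len(pos_a) == 0 or len(pos_b) == 0:
        return []
    # pairwise distance matrix between all candidates
    d = np.linalg.norm(pos_a[:, None, :] - pos_b[None, :, :], axis=2)
    links = []
    used_a, used_b = set(), set()
    # match closest pairs first so each particle is claimed at most once
    for i, j in sorted(np.ndindex(d.shape), key=lambda ij: d[ij]):
        if i not in used_a and j not in used_b and d[i, j] <= max_step:
            links.append((i, j))
            used_a.add(i)
            used_b.add(j)
    return links
```

Here max_step plays the role of the small {dx_i, dy_i} bound: if the frame rate is high enough that every true displacement is below it and well below the inter-particle spacing, the greedy assignment is unambiguous.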
A.G.
March 19, 2020 at 01:22 pm - Permalink
In reply to I have the same issue with… by jjweimer
Thanks for your note, Jeffrey. I have found someone at UMD who developed some MATLAB code for optical flow analysis. I'll see how to use it, and whether it can be implemented in Igor if no existing solution can be found.
March 22, 2020 at 11:19 am - Permalink
In reply to Clearly ImageRegistration is… by Igor
Hello A.G. Thank you for your note. I think the main challenge in our particular analysis is that particles can appear and disappear (they have finite lifetimes), and their sizes and shapes will not all be the same. We are trying to analyze fluorescent signals that show how membrane proteins aggregate and promote polymerization/dissociation. Therefore what we really want to extract is not exact positions but a statistical understanding of each significant trajectory of fluorescent patches: its lifetime, and what the velocity vectors look like, such as whether they are aligned, the distribution of their magnitudes, their spatial distribution, and so on.
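Once trajectories are extracted, the statistics mentioned above (lifetime, speed distribution, alignment) reduce to simple per-trajectory computations. A NumPy sketch, assuming a trajectory is an (n, 2) array of (x, y) positions at consecutive frames (the alignment score here is the mean resultant length of the unit step directions, one of several possible choices):

```python
import numpy as np

def trajectory_stats(traj, dt=1.0):
    """Summary statistics for one trajectory, given as an (n, 2) array of
    (x, y) positions at consecutive frames dt apart.

    Returns lifetime (in frames), mean speed, and an alignment score:
    the mean resultant length of the unit step directions (1 = perfectly
    straight motion, near 0 = directions uniformly scattered).
    """
    traj = np.asarray(traj, dtype=np.float64)
    steps = np.diff(traj, axis=0) / dt            # per-step velocity vectors
    speeds = np.linalg.norm(steps, axis=1)
    nonzero = speeds > 0
    units = steps[nonzero] / speeds[nonzero, None]
    alignment = np.linalg.norm(units.mean(axis=0)) if nonzero.any() else 0.0
    return {
        "lifetime": len(traj),
        "mean_speed": speeds.mean(),
        "alignment": alignment,
    }
```

A patch drifting steadily in one direction scores alignment 1, while one jittering back and forth scores much lower, so histogramming this quantity over all trajectories separates directed flow from diffusive motion.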
March 22, 2020 at 11:23 am - Permalink
In reply to Thanks for your note,… by qingbsd
I look forward to any updates. FWIW, the Image Tools Package might provide a baseline for loading the image stack as a starting point.
March 22, 2020 at 05:42 pm - Permalink
It might be helpful if you sent me (support@wavemetrics.com) an experiment containing a few sequential images that illustrate the range of fluctuations.
March 23, 2020 at 09:29 am - Permalink