For those curious, here is prior art from 12 years ago, capturing light in a coke bottle with a single streak camera.
https://www.youtube.com/watch?v=EtsXgODHMWk
https://web.media.mit.edu/~raskar/trillionfps/
I remember when this dropped and where I was when I watched it and read the paper. One of the coolest things I'd ever seen.
This seems like a great extension of the work. I'm okay with trading accuracy for crude usefulness as a model. Making this interactive and putting it in the hands of curious minds is the next step.
In the raw footage the wavefront will almost always appear to move from the closer points to the further points simply because it takes time to propagate, and we see that in their example here. This is now somewhat misleading, as they're synthetically moving the camera in post (but not the pulse propagation). I point this out because the team have also developed an "unwarping" technique to cancel it out, and they demonstrate it at the bottom of their website. Note how the scattered light now propagates outward intuitively.
https://anaghmalik.com/FlyingWithPhotons/
https://anaghmalik.com/FlyingWithPhotons/media/moving_videos...
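Roughly, the unwarping amounts to removing each point's return-trip time to the camera, so t=0 becomes the moment light leaves the surface rather than the moment it arrives at the sensor. Here's a toy numpy sketch of that idea (the function, bin sizes, and loop are mine for illustration, not the authors' code):

    import numpy as np

    C = 0.2998  # speed of light, metres per nanosecond

    def unwarp(transient, depth, dt_ns):
        """transient: (T, H, W) time-resolved video; depth: (H, W) distance from
        each imaged point to the camera in metres; dt_ns: width of one time bin."""
        T, H, W = transient.shape
        shift = np.clip(np.round(depth / C / dt_ns).astype(int), 0, T - 1)
        out = np.zeros_like(transient)
        for y in range(H):
            for x in range(W):
                s = shift[y, x]
                # pull each pixel earlier by its own camera time-of-flight,
                # so t=0 is when light leaves the surface, not when it reaches us
                out[:T - s, y, x] = transient[s:, y, x]
        return out

    # toy usage: a scene about 2 m from the camera, 20 ps time bins
    video = np.random.rand(512, 64, 64)
    depth = np.full((64, 64), 2.0)
    unwarped = unwarp(video, depth, dt_ns=0.02)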
https://arxiv.org/pdf/2404.06493 is the paper. If I'm understanding it correctly, the camera isn't actually capturing a pulse of light. Instead, it's recording single pixels from a 10 MHz series of pulses using a single-pixel camera that rotates around the object, then uses this time series of data to render a video of a "single" virtual pulse via a NeRF.
The "AI" in the title appears to be click bait since the paper doesn't mention AI, and a NeRF isn't really AI in the colloquial sense even though it uses a DNN.
If you have something periodic in time, you can get high time resolution of what looks like a single event by taking multiple periodic captures with tiny phase offsets. It's a neat capability.
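A toy numerical version of the trick, with made-up numbers (a 10 MHz pulse train sampled once per period, adding a tiny phase step each time):

    import numpy as np

    period = 100e-9                                            # 10 MHz pulse train -> 100 ns period
    fast_event = lambda t: np.exp(-((t - 30e-9) / 2e-9) ** 2)  # ~2 ns feature inside each period

    n_captures = 500
    phase_step = period / n_captures                 # 0.2 ns extra delay per capture
    samples = []
    for k in range(n_captures):
        t_sample = k * period + k * phase_step       # one slow sample per period, slightly later each time
        samples.append(fast_event(t_sample % period))

    # `samples` now traces the 2 ns feature at 0.2 ns resolution,
    # even though we only measured once every 100 ns.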
Periodic strobe light.
Hmm, if AI is involved I'm always wondering whether what I see is realistic or not.
It's quite realistic, here's the same thing without AI:
https://www.youtube.com/watch?v=EtsXgODHMWk&t=107s
I still share your concern, however, particularly because they seem to avoid moving the camera without time moving as well. I was expecting bullet-time!
In this case, they're basically using a neural network to approximate a really tricky high-dimensional function from a lot of measurements of a scene, and using it to interpolate values.
Think of it as "fancy (non)linear regression" or something like that.
It's quite clever.
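The "fancy regression" framing in one toy example: fit a flexible model to sparse, noisy measurements, then query it anywhere in between. A NeRF does this with a neural network over (position, direction, time); here it's just a random-feature least-squares fit, purely for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    x_meas = rng.uniform(0.0, 1.0, 40)                            # where we happened to measure
    y_meas = np.sin(8 * x_meas) + 0.05 * rng.standard_normal(40)  # noisy observations

    # random Fourier features as a crude stand-in for a network's hidden layer
    W = rng.normal(0.0, 10.0, 64)
    features = lambda x: np.cos(np.outer(x, W) + 0.1)
    weights, *_ = np.linalg.lstsq(features(x_meas), y_meas, rcond=None)

    # "interpolation": query the fitted model at points we never measured
    x_query = np.linspace(0.0, 1.0, 200)
    y_pred = features(x_query) @ weights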
I'm now wanting to set up my magic bullet rig to try this!
Just thinking about consumer equipment, being able to release the shutter and fire a laser "burst" precisely enough would be a challenge. If the shutter release is wired, does the time for the signal to travel down the wire, plus the mechanics of the shutter, need to be compensated against the time of flight of the "photon"? I could see this being one of those YouTube channels with someone doing this in their garage.
All of that is to say that the accuracy of what they've done is impressive.
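Back-of-envelope, with made-up but plausible numbers, the delays you'd be fighting in a garage rig look something like this:

    C = 3.0e8                      # m/s, speed of light
    scene = 0.30                   # m, roughly one coke bottle
    cable = 2.0                    # m of trigger cable
    cable_v = 0.66 * C             # typical signal speed in coax

    flight_time = scene / C        # ~1 ns for light to cross the scene
    cable_delay = cable / cable_v  # ~10 ns down the wire
    shutter_lag = 1.0e-3           # s, mechanical shutter lag, give or take

    print(f"light across scene: {flight_time * 1e9:.2f} ns")
    print(f"trigger cable:      {cable_delay * 1e9:.2f} ns")
    print(f"shutter mechanics:  {shutter_lag * 1e3:.1f} ms (~{shutter_lag / flight_time:,.0f}x the flight time)")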
Your suspicion is warranted, but it really depends on what "AI" is being used (I'd rather call it ML, speaking as an ML researcher myself, and one who publicly criticizes LLMs[0]).
The reason is that, in essence, ML is curve fitting: approximating data with very high-order functions (roughly speaking). But there are many tools, like density estimators, that are very good in statistical settings where you cannot access the density function directly (it is "intractable"), so all you can work with is samples (e.g. you can sample examples of human faces, but we have no mathematical equation describing all their variations and their likelihoods). This is not too different from Monte Carlo sampling, and it is often used in variational inference. When you are doing density estimation you can have a lot more confidence in your results, since you can do things like build proper confidence intervals and test the likelihood (how well your model explains the data).
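A minimal 1-D illustration of that point: we can't write down the true density, but given samples we can fit an estimator and then score how well it explains held-out data (toy data and the KDE choice are mine, just to show the workflow):

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(1)
    data = np.concatenate([rng.normal(-2.0, 0.5, 500),
                           rng.normal(1.0, 1.0, 500)])  # samples only, no formula assumed
    rng.shuffle(data)
    train, test = data[:800], data[800:]

    kde = gaussian_kde(train)             # fit the density estimator to the samples
    held_out = kde.logpdf(test).mean()    # how well the model explains unseen data
    print(f"mean held-out log-likelihood: {held_out:.3f}")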
So yeah, keep the skepticism up. There's a lot of snake oil in ML, and these days it's probably good to default to that position, especially since a lot of ML people are not well versed in math and there's a growing sentiment that the math isn't needed (you'll even find that common around here: a reliance on empirical results without understanding "elephant fitting"). FWIW, here they're using a NeRF, and it looks like they're using it to tune the parameters of their physical model. I'd have to take a deeper look, but at a quick glance I'd let my guard down a bit.
[0] Worth noting that "AI" used to be the typical signal that something was snake oil. Now everything is called AI. I'll leave it to the reader to determine whether this is still a strong signal or not.
Did Coca-Cola sponsor/fund this study? Why the need for the label still being visible? It seems like you'd want to not obstruct the view behind the label, you know, for science. There's zero purpose to having a bottle with any label. The shape of the bottle is part of their trademark, so it would be obvious anyway.
Apparently, I'm really sick of constant bombardment from corporate branding.
> There's zero purpose for having a bottle with any label.
Light interacts quite differently with the label than the material of the bottle. We get to observe scattering, diffusion, color... y'know, science stuff.
It is rather interesting that the first video (from 12 years ago) used a blank red label instead.
(See how it differs from the rest of the bottle? I particularly like how it obscures the pulse itself and highlights the wavefront on the surface.)
https://www.youtube.com/watch?v=EtsXgODHMWk&t=107
If there wasn't a label on this bottle it'd be so plain looking that you'd think it was a 3D render and not real.
I think you should just remember you live in a society and societies contain mass market brands that aren't going anywhere.
In particular with sodas, most of the indie ones are worse for you. Coke at least makes Coke Zero, all the indie ones with 2010-hipster branding have 60g sugar in each can.
Puzzling. I can see why a pop bottle, due to its shape, would make this more fun to look at than, say, a boring cylinder, but why the free advertisement by leaving the label on?
It's not even cola inside, obviously:
> We use a collimated beam to illuminate a Coca-Cola bottle filled with water and a small amount of milk
I thought I could only see photons that hit my retina :)
Why does the refraction appear instantly below the bottle rather than taking time for the light to propagate there?
Great observation. However, I don't think it's instant; it seems to follow behind the main pulse, so it should be accurate.
To me it felt like it's instant in the downwards direction.