Think of it as "fancy (non)linear regression" or something like that.
It's quite clever.
Just thinking about consumer equipment, being able to trigger the shutter release and fire a laser "burst" precisely enough would be a challenge. If the shutter release is wired, does the time for the signal to travel down the wire, plus the mechanics of the shutter, need to be compensated against the photon's time of flight? I could see this being one of those YouTube channels where someone does this in their garage.
All of that to say: the accuracy of what they've done is impressive.
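To make the timing question concrete, here's a back-of-envelope sketch. The numbers (cable length, shutter lag, bottle size) are my own illustrative assumptions, not from the paper; the velocity factor of ~0.66c is typical for coax but varies by cable.

```python
# Back-of-envelope timing budget for a wired-trigger setup.
# All scenario numbers below are illustrative assumptions.
C = 299_792_458.0  # speed of light in vacuum, m/s

def wire_delay_s(length_m, velocity_factor=0.66):
    """Signal propagation time down a cable (~2/3 c is typical for coax)."""
    return length_m / (C * velocity_factor)

def photon_tof_s(distance_m):
    """Time of flight for light crossing a distance in air (~c)."""
    return distance_m / C

trigger_wire = wire_delay_s(1.0)   # assumed 1 m trigger cable
scene_tof = photon_tof_s(0.25)     # light crossing a ~25 cm bottle
shutter_lag = 50e-3                # assumed mechanical shutter lag, tens of ms

print(f"wire delay:  {trigger_wire * 1e9:12.2f} ns")
print(f"photon ToF:  {scene_tof * 1e9:12.2f} ns")
print(f"shutter lag: {shutter_lag * 1e9:12.2f} ns")
```

The punchline is the scale mismatch: the wire delay is a few nanoseconds and the photon crosses the bottle in under a nanosecond, while a mechanical shutter lags by tens of milliseconds, i.e. millions of times longer. So it's less "compensate for the wire" and more "a consumer shutter can't gate at these timescales at all," which is why setups like this use streak cameras or single-photon detectors instead.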
The reasoning for this is that, in essence, ML is curve fitting: approximating data with highly flexible, roughly "polynomial" functions. But there are many tools, like density estimators, that work very well in statistical settings where you cannot access the density function directly (it's "intractable"), so all you can work with is samples (e.g. you can sample examples of human faces, but we have no mathematical equation describing all their variations and likelihoods). This is not too different from Monte Carlo sampling and is often used in variational inference. When you are doing density estimation you can have a lot more confidence in your results: you can build proper confidence intervals and you can test the likelihood (how well does your model explain the data).
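A minimal sketch of that sample-only setting: below, a Gaussian mixture stands in for the "intractable" density (we pretend we can only sample it, not write it down), and a simple kernel density estimator is scored by held-out log-likelihood. The bandwidth and the toy distribution are my own choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Samples from an "unknown" density. The mixture here stands in for
# something like faces: we can draw samples, but we pretend we cannot
# evaluate the true density directly.
data = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(1, 1.0, 500)])
rng.shuffle(data)
train, held_out = data[:800], data[800:]

def kde_logpdf(x, samples, bandwidth=0.3):
    """Log of a Gaussian kernel density estimate, evaluated at points x."""
    diffs = (x[:, None] - samples[None, :]) / bandwidth
    kernels = np.exp(-0.5 * diffs**2) / (bandwidth * np.sqrt(2 * np.pi))
    return np.log(kernels.mean(axis=1))

# Held-out log-likelihood: "how well does the model explain the data?"
ll = kde_logpdf(held_out, train).mean()
print(f"mean held-out log-likelihood: {ll:.3f}")
```

This is the kind of check curve fitting alone doesn't give you: the estimator assigns an actual (log-)density to new points, so you can compare models, build intervals, and notice when a point is badly explained.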
So yeah, keep the skepticism up. There's a lot of snake oil in ML, and these days it's probably good to default to that position. Especially since a lot of ML people are not well versed in math, and there's a growing sentiment that you don't need it (you'll even find that sentiment common around here: a reliance on empirical results without the understanding, von Neumann's "elephant fitting"). FWIW, here they're using NeRF, and it looks like they're using it to tune parameters of their physical model. I'd have to take a deeper look, but at a quick glance I'd let my guard down a bit.
[0] Worth noting that "AI" used to be the typical signal that something was snake oil. Now everything is called AI. I'll leave it to the reader to determine if this is still a strong signal or not.
https://www.youtube.com/watch?v=EtsXgODHMWk&t=107s
I still share your concern, however, particularly because they seem to avoid moving the camera without also moving time. I was expecting bullet time!
The "AI" in the title appears to be clickbait, since the paper doesn't mention AI, and a NeRF isn't really AI in the colloquial sense even though it uses a DNN.
Apparently, I'm really sick of constant bombardment from corporate branding.
It's not even that it's cola inside, obviously:
> We use a collimated beam to illuminate a Coca-Cola bottle filled with water and a small amount of milk
I think you should just remember you live in a society and societies contain mass market brands that aren't going anywhere.
In particular with sodas, most of the indie ones are worse for you. Coke at least makes Coke Zero, all the indie ones with 2010-hipster branding have 60g sugar in each can.
Light interacts quite differently with the label than the material of the bottle. We get to observe scattering, diffusion, color... y'know, science stuff.
It is rather interesting that the first video (from 12 years ago) used a blank red label instead.
(See how it differs from the rest of the bottle? I particularly like how it obscures the pulse itself and highlights the wavefront on the surface.)
https://www.youtube.com/watch?v=EtsXgODHMWk
https://web.media.mit.edu/~raskar/trillionfps/
I remember when this dropped and where I was when I watched it and read the paper. One of the coolest things I'd ever seen.
This seems like a great extension of the work. I'm okay with trading accuracy for crude usefulness as a model. Making this interactive and putting it in the hands of curious minds is the next step.
https://anaghmalik.com/FlyingWithPhotons/
https://anaghmalik.com/FlyingWithPhotons/media/moving_videos...