Maybe I have no understanding of what a binary search is. My understanding is that you check halfway through the video, see if the thing has happened yet, then skip halfway to the end if it hasn’t. Check again, skip again. When you see the cue that the event has happened, you rewind to halfway between the latest point where the event hadn’t happened yet and the earliest point when it has. Keep doing that and you can pinpoint the exact frame where the event happens in a matter of minutes.
Binary search would be largely useless in cases where you have a good chance of skipping right past the event. If the video is an hour long, and the event happens 34 minutes in and leaves a visual cue that lasts less than 11 minutes, then binary search does not find the event. At that point, watching the video fast forwarded would be the way to go, and that’s not a binary search, that’s just watching the video.
So I should correct myself: the visual cue doesn’t have to last the remainder of the video, it just needs to last until one of the points that you check. Which still makes it not useful for things that don’t leave visual cues that last more than a few minutes, because it cannot find most of those events if they happen at a random time in an hour+ video.
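To be concrete, here's the procedure I'm describing, as a rough sketch. The `cue_visible(t)` check is a hypothetical stand-in for a human (or an AI) looking at the frame at timestamp `t`:

```python
def find_event(cue_visible, start_s, end_s, tolerance_s=1.0):
    # Binary-search a video timeline for the moment an event happened.
    # Assumes cue_visible(t) is False before the event and True at every
    # probed timestamp after it, i.e. the cue persists at least until
    # the next point you check (the caveat above about short-lived cues).
    lo, hi = start_s, end_s          # invariant: the event lies in (lo, hi]
    while hi - lo > tolerance_s:
        mid = (lo + hi) / 2
        if cue_visible(mid):
            hi = mid                 # cue already visible: rewind
        else:
            lo = mid                 # nothing yet: advance
    return hi                        # approximate time of the event

# e.g. find_event(lambda t: t >= 34 * 60, 0, 3600) homes in on the
# 34-minute mark, but only because that toy cue never fades.
```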
When you see the cue that the event has happened, you rewind
The event has happened, or the aftereffects show that the event happened. That is my point: the aftereffects matter as much as the event itself. As long as the ‘after’ looks different from the ‘before’ for any reason, that difference is a marker telling you which way to go: rewind or advance.
And yes, either the event or its aftereffects have to last long enough to be noticed by a human, and less long for an AI (which can detect changes faster than humans can). But the vast majority of events, when humans are involved, leave long aftereffects. Not 100% of the time, but usually.
But the vast majority of events, when humans are involved, leave long aftereffects. Not 100% of the time, but usually.
Nobody said otherwise, you’re arguing with strawmen
Yes, they have. They’ve used it as a reason why a binary search would not work, that the event duration would be too short to be detectable.
And that’s not a strawman, that’s making my point: it’s not just the event, but the aftereffects of the event, that make a binary search possible.
and less long for an AI (which can detect changes faster than humans can).
Many things cause changes. A bit of smoke in the air might have been from a gunshot that happened 10 minutes ago, or it might have been from a cigarette 15 minutes ago. Binary search relies on changes that indicate a specific thing has happened: a broken window, a bike no longer there, blood stains on the street. Anything undetectable by humans would still be useless to AIs. A bit of smoke? Could have been a gunshot 3 minutes ago, could have been a cigarette, could be fog, could be a vape. Even the things that AIs are truly useful for, like interpreting video compression artifacts, wouldn’t help, because any number of things can cause compression artifacts. How could it tell which pixels are slightly off color because of a gunshot 3 minutes ago, and which pixels are slightly off color because someone walked past the camera?
At that point, just feed the entire video to the AI and have it tell you when it sees guns or puffs of smoke or hears screams. Binary search is useless when you can just have a machine watch the entire video in one sitting over the course of five seconds and tell you when the interesting thing happens.
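In other words, something like this instead of a search, where `detect()` is a stand-in for whatever gun/smoke/scream model you'd actually plug in:

```python
def first_hit(frames, detect, fps=30):
    # Linear scan: run the detector on every frame and return the
    # timestamp of the first positive. No persistence assumption needed;
    # a cue visible in even a single frame gets caught.
    for i, frame in enumerate(frames):
        if detect(frame):            # hypothetical classifier
            return i / fps           # seconds into the video
    return None                      # nothing interesting found
```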
Anything undetectable by humans would still be useless to AIs. A bit of smoke? Could have been a gunshot 3 minutes ago, could have been a cigarette, could be fog, could be a vape.
Actually, an AI could determine the difference between those, based on shape, location, opacity, etc.
At that point, just feed the entire video to the AI and have it tell you when it sees guns or puffs of smoke or hears screams.
Is there a point where one technique works better than another technique? Sure. I’m not arguing that. But if you’re dealing with a very long recording, you’d still want to do a binary search first.
Binary search is useless when you can just have a machine watch the entire video in one sitting over the course of five seconds and tell you when the interesting thing happens.
Depends on how long that tape is, which is what was originally being discussed by the OP.
A binary search, assisted by AI to quickly determine whether the effect has happened at a given point in the tape, is still a very fast way of finding it (assuming the tape is very long), as alluded to by others elsewhere in this topic.
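For scale, assuming hour-long footage at 30 fps (my numbers, just for illustration): a binary search narrows 108,000 frames down to one in about 17 checks, instead of examining all 108,000.

```python
import math

frames = 60 * 60 * 30                   # one hour at 30 fps = 108,000 frames
probes = math.ceil(math.log2(frames))   # 17 probes to isolate a single frame
```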
Actually, an AI could determine the difference between those, based on shape, location, opacity, etc.
Lmao now I know you’re fucking with me
Yeah lemme spend three weeks training this AI on the difference between gunsmoke, cigarette smoke, vapes, and fog in this specific alley. Oh, y’all already found the killer because someone just watched the video? Well my point stands, the AI could do it faster
Once it’s trained
In another week
Oh shit, it thought that guy’s cell phone was a gun. See you in another month!
Um, I was being completely serious. Having AI determine shapes/opaqueness is a simple matter for it. And I’m assuming the training would already have been done, over time, before the event happens.
You don’t think crime forensics labs will be training AI to do these kinds of detections going forward? Really?
(Maybe it’s a matter of people not truly grokking what AI will do and how it will change things, going forward. /shrug)
Having an AI search for shapes and opaqueness is still totally useless for a binary search if those semi-opaque shapes happen for 10 minutes, 34 minutes into an hour-long video
Again, you’d just feed the whole video to an AI, you wouldn’t have it do a binary search
Having an AI search for shapes and opaqueness is still totally useless for a binary search if those semi-opaque shapes happen for 10 minutes, 34 minutes into an hour-long video
Well, one of those shapes would happen at the time of the event, though, so it’s not useless. One of those would be gunshot smoke, and could be flagged for review.
Again, you’d just feed the whole video to an AI, you wouldn’t have it do a binary search
One day, when computers and AI are powerful enough, this will be the answer, but even then I would like to think that, behind the scenes, they would use a binary search to speed up the processing time.
The time of the event doesn’t necessarily coincide with any of the times that you’re checking. That’s the whole point of looking for visual cues. Again, if the event happens 34 minutes into the video, and it leaves AI detectable visual cues for 10 minutes, the AI will never find it using binary search. It will skip to 30 minutes, see nothing, skip to 45 minutes, see nothing, skip to 52:30, see nothing, skip to 56:15, see nothing, and fail at some point when it can’t divide the video further. Binary search would fail in this scenario. It’s not just useless, it’s an abject failure, and the AI was a waste of processing power when you could have scrubbed forward five minutes at a time instead. That would have found the visual cue, but would not be a binary search.
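You can run the numbers yourself; this reproduces the probe sequence above for an event at minute 34 whose cue lasts 10 minutes:

```python
# Event at minute 34, cue visible from 34:00 to 44:00, 60-minute video.
EVENT, CUE_END, LENGTH = 34.0, 44.0, 60.0

def cue_visible(t):
    return EVENT <= t <= CUE_END

lo, hi = 0.0, LENGTH
while hi - lo > 1 / 60:                 # stop at one-second resolution
    mid = (lo + hi) / 2
    if cue_visible(mid):
        hi = mid
    else:
        lo = mid

# Probes land at 30, 45, 52:30, 56:15, and so on: every one falls
# outside the cue window, so the search converges on the end of the
# video instead of the event. A five-minute scrub would have caught
# the cue at the 35- or 40-minute checks.
```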
Maybe I have no understanding of what a binary search is.
No, you’re not the one who has no understanding of what binary search is.