They could.
But why wouldn’t they just actually compress the video?
In digital/streaming video, there is no physical tape which produces grain, so there is no choice but to add it in post.
But if you’re serving up digital video, then you can simply serve up an authentically low compression/bitrate/resolution.
I suppose maybe players in the future won’t support the codecs we use today?
They might do that, in a dynamic fashion. However, if it’s being used artistically, they likely will want a particular effect. Having greater control over the artifacts might be useful. E.g. lots of artifacts, but none happening to affect the face of the actor while they are actually speaking.
They might also want only one type of artifact, but not another. E.g. blocking, but not black compression.
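That kind of selective control could be sketched roughly as follows. This is a hypothetical illustration, not a real codec: it fakes blocking by averaging 8×8 tiles, and the `mask` marking the actor's face is assumed to come from elsewhere (e.g. a face tracker).

```python
import numpy as np

def blockify(frame, mask, block=8):
    """Fake blocking artifacts by averaging each block x block tile,
    skipping any tile that touches the protected mask (e.g. a face)."""
    out = frame.astype(float)
    h, w = frame.shape[:2]
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            if mask[y:y+block, x:x+block].any():
                continue  # leave protected blocks untouched
            tile = out[y:y+block, x:x+block]
            out[y:y+block, x:x+block] = tile.mean(axis=(0, 1))
    return out.astype(frame.dtype)

# Toy grayscale frame with a pretend "face" region kept artifact-free.
frame = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
mask = np.zeros((64, 64), dtype=bool)
mask[16:32, 16:32] = True
artifacted = blockify(frame, mask)
```

A real pipeline would instead manipulate the encoder's quantization per macroblock, but the principle is the same: the artifact is placed deliberately rather than left to the bitrate.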
The client might even have a local AI that extrapolates away visible artifacts from old compressed videos.
How would the AI distinguish between accidental artifacts due to compression, and intentionally introduced artifacts?
It won’t.