- see cool video on front page
- click
- “Haha, fuck you, you’ve just clicked on the invisible button that takes up half the thumbnail like a fucking moron!”
- redirected to the sponsorship info page
- go back
- video gone
Why are you completely incapable of making a functional website, you wet dildo?
YouTube had a solution to this not too long ago: when you hovered over a thumbnail, it would show a little button that queued the video up on a temporary playlist while you kept browsing. But for whatever reason they hid that in a menu.
That’s not really the issue. The issue is that it doesn’t give you a proper URL with enough information to uniquely identify the set of results it loaded for you, so if you reload the page it re-runs the query and you get a new set of results instead of the same set you had before. That fundamentally breaks how the Internet is supposed to work: any particular URL should always go to the same resource.
The fact that YouTube also does lazy-loading infinite-scroll bullshit makes it even harder to demonstrate with examples, so I’ll switch to Lemmy now. Take this URL, for example:
(That’s from navigating to page 2 of my feed, which is set to “all” and “top 6 hours”.)
If I go to that URL now, and then I go to it again, say, six hours from now, it ought to still show the same list of posts. But it doesn’t. Instead, it re-runs the query and shows me the new results from six hours in the future, which is an entirely different result set. That’s not what I want! I want to be able to keep navigating back and forth through the old result set until I explicitly ask for a new one, e.g. by clicking on the instance logo or by choosing a new search from the [posts|comments], [subscribed|local|all], and [sort type list] controls.
They could cache the results you received on your last visit to the home page, which would fix this.
It would not fix it. I also want to be able to do things like send the URL to someone else and have confidence that it would load the same content for them, too.
I mean, of course that would be nice, but that’s just not realistic. You can’t store that info in a link without it being monstrous.
Why do you say they couldn’t cache the results and, instead of re-fetching everything, just use the cached results?
Sure you can, if your backend is designed reasonably.
How? You put a timestamp (or equivalent) in the URL and filter the search to only operate on the records that existed at that time. Assuming your search algorithm is deterministic, it should find the same results.
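Something like this, as a rough sketch. (Everything here is made up for illustration: the `asOf` parameter, the handler names, none of it is Lemmy’s or YouTube’s actual API.) The first load of the feed stamps the URL with a snapshot time; every page of that URL only looks at posts that already existed then, with a deterministic tie-break so the pages can’t shuffle between visits:

```typescript
// Hypothetical sketch, not any real site's API: the snapshot time lives in the
// URL, so reloading or sharing the link replays the exact same query.
interface Post {
  id: number;
  score: number;
  published: Date;
}

interface FeedQuery {
  sort: "TopSixHours";
  page: number;
  asOf: string; // ISO timestamp captured on the first load of the feed
}

// Build a shareable URL that uniquely identifies the result set.
function feedUrl(q: FeedQuery): string {
  const params = new URLSearchParams({
    sort: q.sort,
    page: String(q.page),
    asOf: q.asOf,
  });
  return `/?${params.toString()}`;
}

// Server side: only consider posts that existed at the snapshot time, and
// break score ties by id, so the same URL always yields the same page.
function queryFeed(posts: Post[], q: FeedQuery, pageSize = 20): Post[] {
  const cutoff = Date.parse(q.asOf);
  return posts
    .filter((p) => p.published.getTime() <= cutoff)
    .sort((a, b) => b.score - a.score || a.id - b.id)
    .slice((q.page - 1) * pageSize, q.page * pageSize);
}
```

The URL only needs one extra timestamp parameter, not the whole result set, so it doesn’t have to be monstrous.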
I agree with your point, but our algorithms are not deterministic and I doubt they ever will be again. Perhaps they could use a set of tags to create a deterministic result for a certain “genre” of results.