Hello Mozilla Connect Community, I’m Chance York, a User Researcher on the Firefox User Research team. I’m reaching out because our team has created a survey to gather opinions on a handful of browser features, some of which were suggested previously on Mozilla Connect. Your feedback on this survey...
@neme loaded questions are loaded.
The “Want most” to “Want least” scale is loaded AF.
Where is the option for “I don’t want any of these things”?
Edit: Yeah, fuck that. That survey is bullshit. I stopped bothering to give answers because the multi-choice questions seemed like a way for Mozilla to have a wank about itself.
This is fairly standard survey design, I believe. They’re not looking to find out which features are wanted in general; they want to know their relative popularity. The sets you’re presented with are randomised (i.e. we don’t all get to see the same sets), which allows them to get a ranked list of lots of potential features while only having to run ten survey questions per participant.
If you get a set with three features that everyone likes or dislikes at about the same level, then it doesn’t really matter what you answer: they’ll all end up at the top or bottom of the list, respectively. And because each of those options also gets presented as part of different sets to different users, different answers can win out there.
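To make the mechanics concrete, here’s a toy sketch in Python of how per-participant sets might get drawn. The feature names are invented, and a real MaxDiff study would typically use a balanced incomplete block design (each item shown, and co-shown, roughly equally often) rather than plain random sampling:

```python
import random

# Invented candidate features, stand-ins for whatever the real survey lists.
FEATURES = [
    "vertical tabs", "tab groups", "built-in VPN", "profile switcher",
    "AI chatbot sidebar", "PDF editing", "reader mode upgrades",
    "twice as slow as your current browser",  # deliberate calibration item
]

def draw_sets(features, n_sets=10, set_size=3, rng=random):
    """Draw n_sets random subsets of set_size features for one participant."""
    return [rng.sample(features, set_size) for _ in range(n_sets)]

for shown in draw_sets(FEATURES):
    print(shown)  # the participant marks "want most" and "want least" in each
```

Because every participant gets different triples, ten questions each are enough to cover a large feature pool many times over once responses are pooled.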
You’re bang on. It’s called MaxDiff. I use it frequently in my line of work to prioritise product or service messaging with panel data. In some cases it’s better to use inferred preference rather than stated preference, but it’s generally good to keep the options comparable in “size” of offer.
I would never interpret a low-end MaxDiff result as “wow, 5% of people want slower browsers.” Instead I focus on the top cluster. As with any model, it’s only ever so accurate. Don’t read into the questions too much.
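To illustrate the tallying, here’s a crude best-minus-worst count with made-up responses. Production MaxDiff analyses usually fit a multinomial logit or hierarchical Bayes model instead, but the intuition is the same:

```python
from collections import Counter

# One tuple per answered set: (picked as "want most", picked as "want least").
responses = [
    ("tab groups", "AI chatbot sidebar"),
    ("vertical tabs", "twice as slow as your current browser"),
    ("tab groups", "built-in VPN"),
]
# Toy exposure counts: how often each feature appeared across the shown sets.
appearances = {
    "tab groups": 2, "vertical tabs": 1, "built-in VPN": 1,
    "AI chatbot sidebar": 1, "twice as slow as your current browser": 1,
}

best = Counter(b for b, _ in responses)
worst = Counter(w for _, w in responses)

# Best-minus-worst count, normalised by exposure; read the ranking off the top.
scores = {f: (best[f] - worst[f]) / n for f, n in appearances.items()}
for feature, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{score:+.2f}  {feature}")
```

The low end of that ranking is noisy almost by construction, which is why reading only the top cluster is the sane move.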
The problem with this design is that if people don’t care and have no option to say so, they’ll give random answers. It would also be important information for Mozilla if many people don’t care about a specific question. So I feel like they should have included that option. But, who am I…
Any uncertainty would be filtered out by the sheer number of people answering.
Presumably if people don’t care, they don’t fill in the survey. But as an extra failsafe, they’ve also included the feature “twice as slow as your current browser”. If you rank that high, then your result can probably be discarded.
But yeah, this design has worked well for many other surveys, so presumably it’ll work well for this one. They’re the experts :)
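If they do screen on that item, the rule might look something like this (a toy sketch; whatever check Mozilla actually applies, if any, isn’t public):

```python
SPEED_TRAP = "twice as slow as your current browser"

def looks_sincere(responses):
    """Toy screen: drop anyone who ever picks the obviously bad item as 'want most'."""
    return all(best != SPEED_TRAP for best, _worst in responses)

# One sincere respondent, one who ranked the trap item as their top pick.
respondents = [
    [("tab groups", SPEED_TRAP), ("vertical tabs", "built-in VPN")],
    [(SPEED_TRAP, "tab groups")],
]
kept = [r for r in respondents if looks_sincere(r)]
print(len(kept))  # -> 1
```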
That’s not what I said. People care about the survey and they’re doing Mozilla a favor by taking it. And if a question doesn’t have the answer they want to give, then that becomes a problem. It’s a different scenario from the one you were describing.
With that attitude, and without acknowledging the problem, it won’t get better. If they were the experts, they wouldn’t need a survey. But it’s easy to wave away any criticism with that dumb argument.
They’re the experts in survey design, not in knowing what the users want - the users are the experts in that. Hence the survey.
That remark was basically a reformulation of, and agreement with, your “But, who am I…”
Why not just get one big list with like 4 answers, from “really want” down to “don’t want”?
How is that worse than getting like 10 screens of relative answers?
Because you’ll end up with ten features that all have overwhelmingly “really want” and “want” answers, and then you still don’t know which of those ten to work on first.
Really? I’d honestly split them about evenly, maybe even more toward the “don’t want” end of the spectrum.
Sorry, I wasn’t talking about your answers specifically, but about aggregate results. (Also note that I think you might not get presented with all possible features when taking a single survey.)
The point is not to find the features that people would like, but the features that people would like most.
Additionally, this allows you to find a few features that have particularly high value for a subset of users, even though on average they’re not that interesting. (I think Multi-Account Containers are a good example of that: too much of a hassle for many, but for some people, like me, a reason to never switch away from Firefox.)
Then perhaps allow them to pick the top 5 or so, and rank them, and then maybe up to 5 that they don’t care about. I’m pretty meh toward a lot of those, and I imagine others are as well.
@Vincent I couldn’t finish the survey, purely because of the questions suggesting that I should “want” something.
Perhaps if they’d asked the question differently, they’d have gotten a completed survey from me.
I can’t answer loaded questions.
The samples they get are meaningless if only people who complete the survey are counted.
The fact that I couldn’t select none of them and move forward meant something: jerk Mozilla off, or don’t.
I chose not to, and I am a Mozilla user!
#librewolf
I’m halfway through the survey right now, and rather than continuing, I’m just stalling, because I don’t want to rank another set of three options that I don’t care about. Some of the choices already given were like “well, I guess I’ll pick the feature that I’ve at least thought about using once…” but now it’s just a list of 3 things that I don’t want whatsoever. I’m trying to give useful feedback, but I feel like I’m really just giving noise.
@blind3rdeye it’s a load of crap, isn’t it?
The statisticians may disagree, but they fail to understand that forcing “want” into the situation is not a true reflection of what people care about.
If they had just tweaked that one word, it wouldn’t be the steaming pile of turds that it is.
It’s almost like they want people to not finish the survey, so they can have a warped sample.
I hope my response gets thrown out, because I prefer a slower browser over built-in AI-based personalization.
It doesn’t seem randomized, based on what I have seen.
You mean you’ve taken it multiple times and kept seeing the exact same ten sets?
I don’t know if the survey questions are loaded, but it feels like they could easily be misinterpreted.
For example, somebody might rank the “organize toolbar buttons and AI chatbots” option highly even if they hate AI’s snake oil, and now Mozilla has a data point where they can say “Some of our respondents said they want AI as much as side tabs!”
This seems especially sketchy when the side tab idea came directly from a vocal portion of Mozilla users, while the decision to follow the AI chatbot trend was made by the same management that overpays their CEO every year.