It’s open source. You can look up the encryption yourself.
No need, all you have to do is read the whitepaper. They home-brewed the encryption algorithm and nobody actually knows if it’s worth a damn. That’s not exactly a secret.
After all these years, security researchers still don’t know if the encryption is any good?
On that level it usually falls on computer scientists. Formal methods can prove that an implementation matches its specification, but proving the absence of unintended attacks is a lot harder.
Needham-Schroeder comes to mind as an example from back when I was studying these things.
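To make the Needham-Schroeder point concrete: the public-key version of that protocol was published in 1978, and the man-in-the-middle attack Lowe found only surfaced in 1995. Below is a toy sketch of that attack, not real code from anywhere; the names are made up and the “encryption” is fake, it only models who is allowed to open what.

```python
# Lowe's man-in-the-middle attack on the Needham-Schroeder public-key protocol.
# "Encryption" is just a tagged tuple; the point is the message flow, not crypto.

from dataclasses import dataclass

@dataclass(frozen=True)
class Enc:
    recipient: str   # only this party may "decrypt" the payload
    payload: tuple

def open_for(msg: Enc, who: str) -> tuple:
    assert msg.recipient == who, f"{who} cannot open a message meant for {msg.recipient}"
    return msg.payload

NA, NB = "nonce_A", "nonce_B"

# 1.  A -> E : {NA, A}_pkE      Alice starts an honest run with Eve
m1 = Enc("E", (NA, "A"))

# 1'. E(A) -> B : {NA, A}_pkB   Eve re-encrypts it for Bob, posing as Alice
na, sender = open_for(m1, "E")
m1_replay = Enc("B", (na, sender))

# 2.  B -> A : {NA, NB}_pkA     Bob replies to "Alice"; Eve just forwards it
na_echo, _ = open_for(m1_replay, "B")
m2 = Enc("A", (na_echo, NB))

# 3.  A -> E : {NB}_pkE         Alice answers the run she started, i.e. the one with Eve
_, nb = open_for(m2, "A")
m3 = Enc("E", (nb,))

# 3'. E(A) -> B : {NB}_pkB      Eve re-encrypts Bob's nonce and completes Bob's run
(nb_leaked,) = open_for(m3, "E")
m3_replay = Enc("B", (nb_leaked,))

# Bob finished a run he believes was with Alice, yet Eve learned NB.
assert open_for(m3_replay, "B") == (NB,)
print("Bob thinks he talked to Alice; Eve knows", nb_leaked)
```

Lowe’s fix was tiny: have Bob include his own identity in the second message, so Alice notices the reply did not come from the party she started the run with. The flaw sat there for roughly seventeen years of people reading the protocol.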
And not a single one has been able to analyze the encryption in all these years? Fact is, Telegram is the tool the Russian opposition and even Ukrainians use to communicate without Putin being able to infiltrate.
No. It kind of comes down to Dijkstra’s old statement: “Testing can only show the presence of bugs, never their absence.”
You can prove the logical correctness of code, but something as open-ended as “is there an unknown weakness?” is much harder to pin down. The tricky part is coming up with the right constraints to prove in the first place.
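A toy illustration of what “picking the right constraints” means in practice (my own made-up example, nothing to do with Telegram’s code): both functions below are functionally identical, and a correctness proof of the first one would go through just fine, yet it still has a timing side channel the specification never mentioned.

```python
import hmac

def leaky_equal(expected: bytes, supplied: bytes) -> bool:
    # Functionally correct, but the early exit means the running time depends
    # on how many leading bytes the attacker guessed right: a timing leak
    # that a plain input/output correctness proof says nothing about.
    if len(expected) != len(supplied):
        return False
    for a, b in zip(expected, supplied):
        if a != b:
            return False
    return True

def constant_time_equal(expected: bytes, supplied: bytes) -> bool:
    # Standard-library helper intended to take the same time regardless of
    # where the inputs differ.
    return hmac.compare_digest(expected, supplied)

token = b"s3cret-token"
for guess in (b"s3cret-token", b"s3cret-XXXXX", b"nope"):
    assert leaky_equal(token, guess) == constant_time_equal(token, guess)
```

Prove “returns True iff the inputs are equal” and you have said nothing about the leak; the weakness only exists once you think to state timing as a property worth proving.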
Security researchers tend to be on the testing side of things.
A notable example is how DES got its S-boxes changed between proposal and standardisation. The suspicion at the time was that the new S-boxes hid some unknown backdoor for the NSA. AFAIK, that has never been proven.
And it isn’t even end-to-end encrypted by default; you have to enable that manually. By default, all your messages are stored on their servers where Telegram can read them.
Can it be proven that that encryption is what’s used in practice?
Just use the F-Droid version if there is any doubt.
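For what it’s worth, “build it from source and compare” is about as far as an end user can take that question. A minimal sketch of the comparison step, assuming the project’s builds are reproducible and with placeholder file names:

```python
# Compare the APK you compiled yourself against the one the store delivered.
# If the build is reproducible, the hashes should match byte for byte.
import hashlib
from pathlib import Path

def sha256_of(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

built = sha256_of("app-built-from-source.apk")   # placeholder: your own build
shipped = sha256_of("app-from-store.apk")        # placeholder: the installed APK
print("match" if built == shipped else "MISMATCH", built, shipped)
```

A mismatch does not automatically mean foul play (non-reproducible builds differ for boring reasons), but a match at least ties the binary you run to the source you can read.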
What about iOS users?
Apple stopped selling iPhones in Russia after the invasion began.
Ah yes, because everyone just throws away their phone after 2 years. People definitely didn’t buy iPhones before the invasion.
It’s not about everyone; it’s about the people who need to hide their communication from the Putin regime.