Showing posts with label security. Show all posts
Thursday, June 5, 2014
It's June 5th, time to reset the net (or somethin')
There's Alexis making some noise here: SaveYourPrivacyPolicy.org
and you have Eduard beating his drums here: ResetTheNet.org
and then some more noise here: FightForTheFuture.org
and... well, you get the drift.
Thursday, August 1, 2013
Been stuck for several months, but now i might be on to something
As i explained in an earlier post, there are several classes of internet connection that a user may have in the real world, but for the purpose of this discussion we shall simplify the categorization into only two [top-level] "meta-classes":
- 'good' internet connections: these connections allow a peer to have direct P2P connectivity with any other peer on the network; and
- 'leech' internet connections: these connections only allow two peers to connect to each other by means of a relaying peer, where said relaying peer must have a 'good' connection in order to be able to act as a relay
In other words, there are real-world objective reasons that will prevent all peers from being equal on the network: 'leeches' will always require assistance from 'good' peers, while they will be truly unable to assist other peers on the network in any way (because of their objectively problematic internet connection)
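The two meta-classes above boil down to a simple link-setup rule, which can be sketched as follows (a minimal illustration; the names `Peer`, `choose_link`, and the relay-picking policy are my own, not part of the actual P2P OS design):

```python
# Sketch: deciding how two peers connect, given the two connection
# meta-classes described above. A 'good' peer can connect directly to
# anyone; two 'leeches' need a 'good' peer to relay for them.
from dataclasses import dataclass
import random

@dataclass
class Peer:
    peer_id: str
    has_good_connection: bool  # True = 'good', False = 'leech'

def choose_link(a: Peer, b: Peer, all_peers: list[Peer]):
    """Return a direct link if possible, otherwise pick a 'good' relay."""
    if a.has_good_connection or b.has_good_connection:
        # at least one endpoint can reach the other directly
        return ("direct", None)
    relays = [p for p in all_peers
              if p.has_good_connection and p not in (a, b)]
    if not relays:
        raise RuntimeError("no 'good' peer available to relay")
    return ("relayed", random.choice(relays).peer_id)
```

Note that only the two-leeches case consumes a third party's bandwidth, which is exactly the asymmetry the rest of this post is about.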
The problem (that got me stuck for over four months):
In the real-world internet, the ratio between 'good' internet connections and 'leech' connections is (by far) sufficiently high to enable a cooperative self-sustained P2P network, i.e. there are enough 'good' peers that can provide relaying services to the 'leeches' upon request. HOWEVER, the very fact that there is a network contribution disparity between 'good' peers and 'leeches' can motivate some users to commit severe abuses that can ultimately bring down the network (if too many users become abusive): namely, a peer with 'good' connectivity might simply decide it doesn't want to serve the network (by providing [bandwidth-consuming] relaying services to the unfortunate 'leeches'), and in order to get away with this unfair behavior all it has to do is misrepresent its 'good' internet connection as a 'leech' connection: once it has successfully misrepresented itself on the network as a 'leech', it will not be asked to provide [relaying] services on the network.
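The sustainability claim above can be reduced to a back-of-the-envelope balance check (all names and numbers below are hypothetical illustrations, not measured values):

```python
# Back-of-the-envelope model of the claim above: the network is
# self-sustaining as long as the relaying capacity contributed by
# 'good' peers covers the relaying demand generated by 'leech' peers.
def is_self_sustaining(n_good: int, n_leech: int,
                       relay_slots_per_good: int = 4,
                       relays_needed_per_leech: int = 1) -> bool:
    relay_supply = n_good * relay_slots_per_good
    relay_demand = n_leech * relays_needed_per_leech
    return relay_supply >= relay_demand
```

Every 'good' peer that misrepresents itself as a 'leech' simultaneously removes one unit of supply and adds one unit of demand, which is why even a modest fraction of cheaters can tip the network over.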
So the problem can now be stated as follows:
how can an open-protocol P2P network be protected against hacked malicious clients which, because the network protocol is open, can be crafted so that they fully obey the network protocol syntax (and are thus indistinguishable from genuine clients based solely on their behavior), but falsely claim to have 'leech'-type internet connections that prevent them from actively contributing to the network? In brief, such malicious clients will unfairly use other peers' bandwidth when they need it, but will not provide [any] bandwidth of their own when requested to do so, and they will get away with it by falsely claiming to sit behind a problematic type of internet connection that prevents them from being cooperative contributors to the network (when in truth they are purposefully misrepresenting their internet connection's capabilities in order to make unfair use of the network).
The standard solution (which cannot be used):
The standard solution to the problem described above is to make sure that all the peers in the network run a digitally-signed client program, a known-good version that a central authority distributes to the peers. However, once we dive into the details of how such a solution could be implemented we get into trouble: specifically, digitally-signed clients cannot be used in the P2P OS ecosystem because this would imply the existence of an [uncompromised] signature-validation DRM running on the peers' computers, which we cannot assume; if we made such an assumption we would only shift the problem from "how do we prevent compromised peers" to "how do we prevent compromised DRMs", i.e. we'd be right back to square one.
A saving idea? (go or no-go, not sure yet):
A new way of protecting a known-good system configuration is the talk of the town these days, namely the "moving target defense" (a.k.a. MTD) [class of] solutions (apparently this concept - as opposed to the underlying techniques - is so new that it hadn't even made it into Wikipedia at the time i'm writing this), and for the specific case of the P2P network problem as stated above (i.e. resilience to maliciously crafted lying peers) MTD translates into the following:
- have a central authority that periodically changes the communication protocol's syntax, then creates a new version of the client program which complies with the new protocol, and finally broadcasts the new [known-good] version of the client program on the P2P network; in this way, the protocol change will immediately prevent ALL old clients, including the compromised ones, from logging onto the network, and will require each peer to get the new [known-good] version of the client program as distributed by the central authority (i.e. all the maliciously-crafted compromised clients are effectively eliminated from the network immediately after each protocol change)
- the protocol changes that are implemented in each new version of the client program will be deeply OBFUSCATED in the client program object code (using all the code obfuscation tricks in the book), with the goal of delaying any [theoretically possible] successful reverse engineering of the new protocol beyond the release of the next protocol update, thus rendering the [potentially cracked] older protocol(s) unusable on the network
- the protocol obfuscator must be automatic and must itself be an open source program, where the only secret component (upon which the entire security scheme relies) must be the specific [random] strategy that the obfuscator elects to use as it releases each new version of obfuscated clients
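The rotation scheme above can be modeled with a toy handshake (everything here - the names, and the use of an HMAC over a per-epoch secret - is my own illustration of the mechanism; in the real design the per-epoch "secret" would be hidden inside obfuscated code, not stored as plain data):

```python
# Toy model of the periodic protocol change: each client release embeds
# a fresh per-epoch secret, and the handshake proves knowledge of the
# CURRENT epoch's secret, so every release instantly locks out all
# older (possibly cracked) clients.
import hmac, hashlib, os

class Authority:
    def __init__(self):
        self.epoch = 0
        self.secret = os.urandom(16)

    def release_new_client(self):
        """Simulates broadcasting a new obfuscated client version."""
        self.epoch += 1
        self.secret = os.urandom(16)
        return Client(self.epoch, self.secret)

class Client:
    def __init__(self, epoch, secret):
        self.epoch, self._secret = epoch, secret

    def handshake_tag(self, challenge: bytes) -> bytes:
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

def network_accepts(authority: Authority, client: Client,
                    challenge: bytes) -> bool:
    expected = hmac.new(authority.secret, challenge,
                        hashlib.sha256).digest()
    return (client.epoch == authority.epoch and
            hmac.compare_digest(client.handshake_tag(challenge), expected))
```

In this toy version the secret is trivially extractable from the client binary, which is precisely why the real scheme needs obfuscation: the attacker's cost must come from reverse engineering, not from reading a key out of memory.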
The work ahead:
As can be seen from the above description, the dynamic protocol update solution relies on the ability to create and distribute obfuscated program versions at a higher rate than an attacker's ability to create a malicious reverse engineered version of the program. Thus, given a system that uses the dynamic protocol adjustment method (as described above), the network integrity protection problem translates into the following problem:
[how] can a protocol be obfuscated such that the [theoretical] time necessary to crack the obfuscated code, given a known set of resources, exceeds a predefined limit?
Should the protocol obfuscation problem have a solution (probably underpinned by dynamic code obfuscation techniques) then the problem is solved (and i won't mind if it's an empirical solution, for as long as it proves viable in the real world) - so this is what i'm trying to find out now.
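Distilled to a single condition, the whole scheme stands or falls on a race between the release cadence and the attacker's reverse-engineering time (the function name, units, and safety margin below are purely illustrative):

```python
# The security condition stated above as a one-line check: every
# protocol version must be retired (by the next release) well before
# an attacker can produce a working reverse-engineered version of it.
def rotation_is_safe(estimated_crack_time_days: float,
                     release_interval_days: float,
                     safety_margin: float = 2.0) -> bool:
    return estimated_crack_time_days >= release_interval_days * safety_margin
```

The hard part, of course, is the left-hand side: putting a defensible lower bound on `estimated_crack_time_days` is exactly the open problem stated above.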
PS
A few articles on/related to code and protocol obfuscation:
- A Taxonomy of Obfuscating Transformations
- Code Obfuscation against Static and Dynamic Reverse Engineering
- Software Protection by Mobile Code
- Proactive Obfuscation
- Automatic Extraction of Protocol Message Format using Dynamic Binary Analysis
- Reverse Engineering Obfuscated Code
Update
I also started a discussion on code obfuscation on comp.compilers, feel free to join here: http://groups.google.com/forum/#!topic/comp.compilers/ozGK36DRtw8
Tuesday, August 16, 2011
A few words on privacy
Just stumbled upon an article discussing skype's alleged security and how it was built into skype from day one (as opposed to being an afterthought), and felt like throwing in my 2 cents on this.
For some bizarre reason, it is commonly thought that a skype p2p link offers anything and everything that can translate to "ultimate privacy" and some more, but this is... anything but. Here's the drill:
- privacy has two major components, the first of which is protecting your communication from potential eavesdroppers. On this front, skype prides itself on unbeatable encryption, which is technically true, but what skype omits to say is that good encryption only serves to protect a link that already exists; in other words, if you talk to someone and wish to know that your communication is shielded from eavesdropping, then skype might be the real thing. However, what skype does not, and cannot, guarantee is that you are indeed talking to the person you think you are: specifically, when you set up the communication link with your peer, the link setup stage is vulnerable to what is called a "man-in-the-middle attack", which essentially means that someone has placed a listening device between you and your peer, and everything you say to your peer will in effect go through said man-in-the-middle eavesdropper. In the security world this is called "the authentication problem" (i.e. making sure that what you're so carefully whispering to your peer indeed goes to your peer and not to someone else), and it is the hardest part of the secure communications problem (and philosophically unsolvable in the absence of a trusted third-party authority which knows you both).
- note: without entering into too many technical details, it may be technically true that skype itself cannot eavesdrop on your communication by having one of its engineers flip some switches at their headquarters, but what skype does not explicitly say is that what it can do if it so wishes (or is legally forced to) is provide third parties with technical equipment (i.e. a piece of skype-certified software) that can eavesdrop on any communication whatsoever if certain conditions are met (e.g. if this eavesdropping software is placed on your ISP's routers then there is absolutely no problem wiretapping your communications)
- the second big issue related to communication privacy is whether or not a third party can know who you're talking to, even if it is unable to actually listen in on your conversation. On this front, skype not only offers no protection, but its algorithms don't even make any effort whatsoever in this direction (e.g. along the lines of Tor and the like); in other words, there are absolutely no provisions in skype to even attempt to address the issue of protecting the identities of the two ends of a skype link.
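The man-in-the-middle attack from the first point above is easy to demonstrate with a toy unauthenticated key exchange (the numbers and the plain Diffie-Hellman setup below are for illustration only - real protocols use authenticated variants precisely because of this):

```python
# Toy unauthenticated Diffie-Hellman exchange: Mallory completes a
# separate handshake with each side, and without authentication
# neither Alice nor Bob can tell anything is wrong.
import random

P, G = 2**127 - 1, 5  # toy public parameters (do not use in practice)

def dh_keypair():
    priv = random.randrange(2, P - 1)
    return priv, pow(G, priv, P)

def dh_shared(priv, peer_pub):
    return pow(peer_pub, priv, P)

# Alice and Bob each run the protocol honestly...
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()
# ...but Mallory intercepts both public values and substitutes her own
m_priv, m_pub = dh_keypair()
alice_key = dh_shared(a_priv, m_pub)   # Alice thinks this is Bob's key
bob_key = dh_shared(b_priv, m_pub)     # Bob thinks this is Alice's key
mallory_with_alice = dh_shared(m_priv, a_pub)
mallory_with_bob = dh_shared(m_priv, b_pub)
```

Mallory now shares one key with Alice and a different key with Bob, so she can decrypt, read, and re-encrypt every message in transit - perfect encryption on both links, zero privacy.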
P2P OS is different. I will not say much about the cornerstones (or implementation details) of P2P OS' security framework at this stage, but i will just say that it addresses both of the privacy issues described above, and it will [attempt to] provide the highest level of communication privacy that is philosophically achievable within the technical limitations of a given digital communications equipment.
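For reference, the Tor-style approach mentioned in the second point above works by layered encryption: each relay peels off exactly one layer and thus learns only its neighbors, never both endpoints. A minimal structural sketch (XOR stands in for a real cipher here, purely to show the layering; all names are my own):

```python
# Sketch of onion-style layering: the sender wraps the message once per
# hop, and each relay removes exactly one layer with its own key.
import os

def xor(data: bytes, key: bytes) -> bytes:
    # toy stand-in for encryption/decryption (XOR is its own inverse)
    return bytes(d ^ k for d, k in
                 zip(data, key * (len(data) // len(key) + 1)))

def wrap(message: bytes, hop_keys: list[bytes]) -> bytes:
    """Apply layers in reverse hop order, so hop 0 peels the outermost."""
    wrapped = message
    for key in reversed(hop_keys):
        wrapped = xor(wrapped, key)
    return wrapped

def unwrap_one(packet: bytes, key: bytes) -> bytes:
    """Each relay removes exactly one layer with its own key."""
    return xor(packet, key)
```

No single relay ever sees both the sender's identity and the plaintext destination, which is exactly the metadata protection skype lacks.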
Later edit
As i said in the post above, i don't want to get [too] technical here, but in light of some new developments i'll just mention that the cornerstone of skype's security model (and of just about any security model out there, for that matter) is the security certificate, and if a bunch of geeks can do things like this, just imagine what a more potent attacker can do. And don't take my word on this, just look at what they say.
Needless to say (since i'm hereby flaming this model), P2P OS will not use this model.