Thursday, August 1, 2013

Been stuck for several months, but now i might be on to something

As i explained in an earlier post, there are several classes of internet connection that a user may have in the real world, but for the purpose of this discussion we shall simplify the categorization into only two [top-level] "meta-classes":
  • 'good' internet connections: these connections allow a peer to have direct P2P connectivity with any other peer on the network; and
  • 'leech' internet connections: these connections only allow two peers to connect to each other by means of a relaying peer, where said relaying peer must have a 'good' connection in order to be able to act as a relay
As can be seen, any two peers with 'leech' connections will have to rely on a third-party relaying peer with a 'good' connection in order to connect to each other.
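
To make the relaying requirement concrete, here is a minimal sketch (the names and the Peer structure are mine, not part of the actual P2P OS protocol) of the connection-setup decision: a direct link is possible as long as at least one endpoint has a 'good' connection, while two 'leeches' must be bridged by a third-party 'good' peer. Note that conn_class is whatever the peer itself advertises, which is precisely the weak spot discussed below.

    # minimal sketch: deciding how two peers can be linked, based on the
    # connection meta-class each peer advertises for itself
    from dataclasses import dataclass

    GOOD, LEECH = "good", "leech"

    @dataclass
    class Peer:
        peer_id: str
        conn_class: str   # GOOD or LEECH, self-advertised by the peer

    def plan_link(a, b, good_peers):
        if a.conn_class == GOOD or b.conn_class == GOOD:
            return ("direct", a.peer_id, b.peer_id)
        # both endpoints are leeches: route through any 'good' third party
        relay = good_peers[0]
        return ("relayed", a.peer_id, relay.peer_id, b.peer_id)

    # two leeches can only reach each other through the 'good' peer "r1"
    print(plan_link(Peer("a", LEECH), Peer("b", LEECH), [Peer("r1", GOOD)]))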

In other words, there are objective real-world reasons that prevent all peers from being equal on the network: 'leeches' will always require assistance from 'good' peers, while being genuinely unable to assist other peers on the network in any way (because of their objectively problematic internet connection).

The problem (that got me stuck for over four months):
In the real-world internet, the ratio of 'good' internet connections to 'leech' connections is (by far) high enough to enable a cooperative, self-sustained P2P network, i.e. there are enough 'good' peers to provide relaying services to the 'leeches' upon request. HOWEVER, the very fact that there is a network contribution disparity between 'good' peers and 'leeches' can motivate some users to commit severe abuses that can ultimately bring down the network (if too many users become abusive): namely, a peer with 'good' connectivity might simply decide it doesn't want to serve the network (by providing [bandwidth-consuming] relaying services to the unfortunate 'leeches'), and in order to get away with this unfair behavior all it has to do is misrepresent its 'good' internet connection as a 'leech' connection: once it has successfully misrepresented itself on the network as a 'leech', it will never be asked to provide [relaying] services on the network.

So the problem can now be stated as follows:
how can an open-protocol P2P network be protected against hacked malicious clients which, because the network protocol is open, can be crafted in such a way that they fully obey the network protocol syntax (and are thus indistinguishable from genuine clients based solely on their behavior), but falsely claim to have 'leech'-type internet connections that prevent them from actively contributing to the network? In brief, such malicious clients will unfairly use other peers' bandwidth when they need it, but will not provide [any] bandwidth of their own to other peers when requested to do so, and they will get away with it by falsely claiming to sit behind a problematic type of internet connection which prevents them from being cooperative contributors to the network (when in truth they are purposefully misrepresenting their internet connection's capabilities in order to make unfair use of the network).

The standard solution (which cannot be used):
The standard solution to the problem described above is to make sure that all the peers in the network are running a digitally-signed client program, a known-good version distributed to the peers by a central authority. However, once we dive into the details of how such a solution could be implemented we run into trouble: specifically, digitally-signed clients cannot be used in the P2P OS ecosystem because this would imply the existence of an [uncompromised] signature-validation DRM running on each peer's computer, which we cannot assume; if we made such an assumption we would only shift the problem from "how do we prevent compromised peers" to "how do we prevent compromised DRMs", i.e. we'd be right back to square one.
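
Just to illustrate where the circularity creeps in, below is a small sketch (using the python 'cryptography' package; the function name is mine) of what the signed-client check would look like: the verification itself is trivial, but it has to run on the very machine we don't trust, so a hacked client can simply skip it - hence the need for an uncompromised DRM-like verifier, which is exactly the assumption we cannot make.

    # sketch: verifying that a client binary was signed by the central authority
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
    from cryptography.exceptions import InvalidSignature

    def client_is_genuine(client_binary: bytes, signature: bytes,
                          authority_public_key: Ed25519PublicKey) -> bool:
        try:
            authority_public_key.verify(signature, client_binary)
            return True
        except InvalidSignature:
            return False

    # the catch: this check runs on the peer's own (untrusted) computer,
    # so a malicious peer can simply patch it to always return True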

A saving idea? (go or no-go, not sure yet):
A new way of protecting a known-good system configuration is the talk of the town these days, namely the "moving target defense" (a.k.a. MTD) [class of] solutions (apparently this concept - as opposed to the underlying techniques - is so new that it hadn't even made it into wikipedia at the time of writing), and for the specific case of the P2P network problem as stated above (i.e. resilience to maliciously crafted lying peers) MTD translates into the following:
  1. have a central authority that periodically changes the communication protocol's syntax, creates a new version of the client program which complies with the new protocol, and finally broadcasts the new [known-good] version of the client program on the P2P network; in this way, the protocol change immediately prevents ALL old clients, including the compromised ones, from logging onto the network, and requires each peer to get the new [known-good] version of the client program as distributed by the central authority (i.e. all the maliciously-crafted compromised clients are effectively eliminated from the network immediately after each protocol change)
  2. the protocol changes implemented in each new version of the client program will be deeply OBFUSCATED in the client program's object code (using all the code obfuscation tricks in the book), with the goal of delaying any [theoretically possible] successful reverse engineering of the new protocol beyond the release of the next protocol update, thus rendering the [potentially cracked] older protocol(s) unusable on the network
  3. the protocol obfuscator must be automatic and must itself be an open source program, where the only secret component (on which the entire security scheme relies) is the specific [random] strategy that the obfuscator elects to use as it releases each new version of obfuscated clients
As a result, after each protocol update the P2P network will only host known-good versions of clients, and by the time any protocol reverse engineering effort might succeed, a new protocol update will already have been released, thus preventing any reverse-engineered pre-update clients from logging onto the network.
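
As a toy illustration of the rotation idea (entirely my own sketch, not the actual P2P OS wire protocol), the fragment below reduces "the thing that changes every epoch" to a per-build secret baked into the obfuscated client; a real implementation would mutate the protocol syntax itself, but the gating effect is the same: a client built for a previous epoch cannot produce a valid handshake and is dropped.

    # toy sketch: epoch-based protocol gating
    import hashlib, time

    EPOCH_LENGTH = 7 * 24 * 3600   # e.g. a new obfuscated client build every week

    def current_epoch(now=None):
        return int((time.time() if now is None else now) // EPOCH_LENGTH)

    def handshake_token(node_id: str, epoch: int, epoch_secret: bytes) -> bytes:
        # epoch_secret stands in for "whatever this epoch's obfuscated client
        # does differently"; clients from older epochs simply don't know it
        return hashlib.sha256(epoch_secret + node_id.encode() + str(epoch).encode()).digest()

    def accept_peer(node_id: str, token: bytes, epoch_secret: bytes) -> bool:
        # a peer still speaking last epoch's protocol produces the wrong token
        return token == handshake_token(node_id, current_epoch(), epoch_secret)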

The work ahead:
As can be seen from the above description, the dynamic protocol update solution relies on the ability to create and distribute obfuscated program versions faster than an attacker can reverse engineer the current version and craft a malicious client from it. Thus, given a system that uses the dynamic protocol adjustment method (as described above), the network integrity protection problem translates into the following problem:
[how] can a protocol be obfuscated such that the [theoretical] time necessary to crack the obfuscated code, given a known set of resources, exceeds a predefined limit?
Should the protocol obfuscation problem have a solution (probably underpinned by dynamic code obfuscation techniques) then the problem is solved (and i won't mind if it's an empirical solution, as long as it proves viable in the real world) - so this is what i'm trying to find out now.
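
Stated as a sanity check rather than a solution, the whole scheme stands or falls on a single inequality, sketched below (the safety margin is my own addition; the crack-time estimate is exactly the unknown quantity i'm after):

    # the obfuscation is "good enough" only if the estimated time needed to
    # reverse engineer the current protocol exceeds the protocol update period
    def obfuscation_is_sufficient(estimated_crack_time_days: float,
                                  update_period_days: float,
                                  safety_margin: float = 2.0) -> bool:
        return estimated_crack_time_days >= safety_margin * update_period_days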

PS
A few articles on/related to code and protocol obfuscation:

Update
I also started a discussion on code obfuscation on comp.compilers, feel free to join here: http://groups.google.com/forum/#!topic/comp.compilers/ozGK36DRtw8

2 comments:

  1. Hi,
    Not sure whether CGN is as traversable as NAT. But if yes, a fraction of the public nodes' burden can be shared by the private nodes themselves. The connection between any two NAT-ed nodes can be maintained by exchanging keep-alive messages between them. Obviously the drawback is that it will incur a lot of traffic and operation overhead accordingly.

    If no... just forget about what I suggested :D

    Daniel Shen
    yb17412@umac.mo

  2. First, thanks for taking the time to write down the idea - you never know where cool stuff can spring from.

    Sure enough, you are right about the *theoretical* possibility of having NATed peers act as routers, but it's only theoretical (i did investigate it). Practically, the routers have to have *very* reliable connections both to the other routers and to the peers that are connected to the network through them, which is definitely not the case if they are behind a NAT (just one example of many: somebody X on the NATed router's LAN can start e.g. a torrent app and make the LAN's NAT run out of connections, thus closing [some of] the router's links, so the connectivity of a NATed router cannot be taken for granted just by sending keepalives - it also has to be continuously *monitored*; plus, re-establishing a closed connection will require the assistance of yet other routers, etc)
    In brief, the problem is not necessarily a matter of an overly complex algorithm; rather, the traffic overhead incurred by the link maintenance *and continuous monitoring* is extremely high (it reaches tens of kiloBytes/sec per router for a 1,000,000,000-user network), well beyond what i'm ready to ask a user to provide (my target is 1...2KB/s of "idle" traffic per router)
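
    (Just to show how quickly this adds up - the figures below are purely illustrative assumptions, not measurements: if a NATed router had to keep a few thousand links alive and monitored, one small packet every few seconds on each link already blows past the 1...2KB/s budget.)

        # purely illustrative back-of-envelope, all numbers are assumptions
        links_per_router   = 5000   # leeches served + other routers monitored
        keepalive_bytes    = 60     # one small keepalive/monitoring packet
        keepalive_interval = 10     # seconds between packets on each link

        idle_traffic = links_per_router * keepalive_bytes / keepalive_interval
        print(f"{idle_traffic / 1000:.0f} KB/s of idle traffic")  # ~30 KB/s, vs a 1...2 KB/s target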


