Towards a beginning of a design?

I have been thinking about this for a very long time, gathering research papers and browsing the Internet to look for a possible way to implement what I had in mind… If I had to name it, it would be something like: a Decentralized Distributed MMO (or not) Game (or more serious) Engine.

The idea seems simple and quite intuitive; however, one needs to be aware of the Fallacies of Distributed Computing.

Here is what we have, from a very abstract, high-level point of view: computers, and links between them.
Here is what we want to do with them: store data, send messages, process instructions.
And we can already see the problems we will have to face: data might not be available, data might be corrupt, messages can disappear or arrive in the wrong order, links can disappear, the network topology can change, etc.

Here is what I want:
– no central point
– a distributed topology, with nodes joining or leaving at any time
– resilience even if a node or link fails, or leaves the system at an unexpected moment: no data gets lost, no connection gets broken
– good performance (something close to real-time would be great)

The trade-off between performance and resilience is pretty difficult to manage. If you had to build one on top of the other, which one would you start with?

Although many systems try to solve or alleviate one of these problems, none of them, as far as I am aware, can deal with all of them while maintaining decent performance. After having a look at a few research proposals, I thought that implementing one solution to one of these problems would be really interesting; however, after trying it, I realized how important it was for some foundations to be laid down first. I did some small developments in Erlang, and quickly wondered how I could structure my software, given all the components I would need to satisfy the features I had in mind… I wrote some of them, while others would have required much more expertise than my own to get working. So I need to heavily reuse what has already been done to make my task a bit easier, if I ever want to achieve my goal.

After all, there is the “Researcher way”: an expert in his field, with enough funding to spend a lot of time developing one system until it becomes as good as it can be, in theory. And there is the “Entrepreneur way”: someone who has to make something that works quickly, no matter how dirty and partial it is, with whatever he can find, provided that people are interested and will support him in improving the system along the way…
Even if I am still tempted by the first way, I am no longer a student, nor do I seem able to secure any funding at the moment, so I have to take the second path.

So I should:
– reuse what already works elsewhere: DHTs and p2p data sharing
– make something interesting out of the system; we'll see, depending on what it can do… probably by trying to use it with my little open-source game engine
– plan for reusability: structure the project, and document the different parts separately
– plan for later improvements: divide and conquer, and specify interfaces between blocks (a sketch of such an interface follows right after this list)
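
To make "specify interfaces between blocks" a bit more concrete, here is a minimal Erlang sketch of what such an interface could look like for the DHT block, written as a behaviour so that one implementation can later be swapped for another without touching the layers above. The module name and callbacks are my own invention for illustration, not taken from any existing library.

    %% Hypothetical interface for the DHT block: any concrete DHT module
    %% (Kademlia-based or otherwise) would have to export these callbacks.
    -module(dht_layer).

    -callback join(BootstrapNode :: {inet:ip_address(), inet:port_number()}) ->
        ok | {error, term()}.
    -callback put(Key :: binary(), Value :: term()) ->
        ok | {error, term()}.
    -callback get(Key :: binary()) ->
        {ok, term()} | {error, not_found}.
    -callback leave() -> ok.

Keeping the contract this small is the point: the layers above only ever see join/put/get/leave, whatever DHT ends up underneath.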

That is why I decided on a basic layered architecture to start with:
– implicit connection to the p2p network
– a DHT, mostly to keep “IP – nodeID” pairs in a distributed way, along with other “global state” data (see the first sketch below)…
– a routing algorithm
– a SCRIBE-like layer to manage groups and multicast
– a message transport protocol (UDT over the overlay? or SCTP/DCCP directly? depending on needs and performance…)
– causality algorithms (Interval Tree Clock-like), which might need multicast for optimization when there is no central system, depending on the type of implementation (see the second sketch below)…
– a game engine layer, able to send state updates efficiently, with proper ordering, to a set of selected peers
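
As a first illustration of the layering, here is a rough sketch of the “IP – nodeID” directory sitting on top of the DHT. It assumes a hypothetical module kademlia_dht implementing the dht_layer behaviour sketched earlier; none of this is a real library API.

    %% Sketch of the node directory on top of an assumed kademlia_dht module.
    -module(node_directory).
    -export([publish_self/2, lookup/1]).

    %% Store this node's address under its nodeID so that peers can resolve it.
    publish_self(NodeId, {Ip, Port}) when is_binary(NodeId) ->
        kademlia_dht:put(NodeId, {Ip, Port}).

    %% Resolve a nodeID back to an {Ip, Port} endpoint.
    lookup(NodeId) when is_binary(NodeId) ->
        case kademlia_dht:get(NodeId) of
            {ok, Endpoint}     -> {ok, Endpoint};
            {error, not_found} -> {error, unknown_node}
        end.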
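
For the causality layer, a plain vector clock is enough to show the question the game layer needs answered: did update A happen before update B, or are they concurrent? Interval Tree Clocks are considerably more involved; this stand-in only illustrates the comparison the layer has to provide, not the ITC algorithm itself.

    %% Vector-clock stand-in for the causality layer (not an Interval Tree Clock).
    -module(causality).
    -export([new/0, tick/2, merge/2, compare/2]).

    %% A clock is a map from nodeID to an event counter.
    new() -> #{}.

    %% Record a local event on NodeId.
    tick(NodeId, Clock) ->
        maps:update_with(NodeId, fun(C) -> C + 1 end, 1, Clock).

    %% Merging two histories keeps the per-node maximum.
    merge(A, B) ->
        maps:fold(fun(K, V, Acc) ->
                      maps:update_with(K, fun(C) -> max(C, V) end, V, Acc)
                  end, A, B).

    %% Compare A against B: equal | before | 'after' | concurrent.
    compare(A, B) ->
        case {dominates(B, A), dominates(A, B)} of
            {true, true}   -> equal;
            {true, false}  -> before;      % A happened before B
            {false, true}  -> 'after';     % A happened after B
            {false, false} -> concurrent
        end.

    %% dominates(X, Y): every counter recorded in Y is =< its counter in X.
    dominates(X, Y) ->
        maps:fold(fun(K, V, Acc) -> Acc andalso maps:get(K, X, 0) >= V end,
                  true, Y).

Whichever clock ends up being used, hiding the comparison behind a small module like this means an ITC implementation could be dropped in later without touching the game engine layer.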

The choice to base the design on a DHT is, I think, the best one for me. Despite my interest in ad-hoc networking protocols, and however much I would like to implement them on top of IP to get a more fault-tolerant network, I am not a network algorithms expert, and it would take me far too long to get something decent working. Also, DHTs have now been studied quite extensively, and some implementations exist and are very usable, which lets me reuse them and focus on something else. Improvements are likely to emerge in the years to come, and by building on something already well known, it will be easier to integrate those evolutions.
Depending on which implementation I choose, I will have to check which features are available out of the box and which ones I will need to implement on top of it to reach the feature set I want. Kademlia seems to be the most mature DHT algorithm from what I could gather around the Internet, but I will need to look at it more deeply. SCRIBE was implemented on top of the Pastry algorithm, and I would need to reimplement it on top of Kademlia, as I did not find any similar attempt (a rough sketch of that group layer follows)…
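
To get a feel for what that reimplementation would have to expose, here is a deliberately naive sketch of the group layer over the same hypothetical kademlia_dht module: a group is addressed by the hash of its topic, members register under that key, and multicast simply walks the member list. Real SCRIBE builds a multicast tree out of the overlay routes instead of flooding a flat list, so this only shows the shape of the API, not the algorithm.

    %% Naive group/multicast sketch; transport:send/2 and kademlia_dht are
    %% hypothetical placeholders for the layers above and below.
    -module(group_layer).
    -export([group_id/1, join/2, multicast/2]).

    %% A group is addressed by the hash of its topic name, so every node can
    %% compute the same key without any coordination.
    group_id(Topic) when is_binary(Topic) ->
        crypto:hash(sha, Topic).

    %% Register a member under the group key (a read-modify-write on the DHT;
    %% a real implementation would have to cope with concurrent joins).
    join(Topic, MemberNodeId) ->
        Key = group_id(Topic),
        Members = case kademlia_dht:get(Key) of
                      {ok, List}         -> List;
                      {error, not_found} -> []
                  end,
        kademlia_dht:put(Key, [MemberNodeId | Members]).

    %% Push a message to every registered member; actual delivery is left to
    %% the transport layer.
    multicast(Topic, Msg) ->
        case kademlia_dht:get(group_id(Topic)) of
            {ok, Members} ->
                lists:foreach(fun(M) -> transport:send(M, Msg) end, Members);
            {error, not_found} ->
                {error, no_such_group}
        end.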

Not an easy task, but definitely an interesting research process 😉 Let's hope there will be something worth it at the end of the road.
