Archive for March, 2010

SDL, OpenGL, PixelFormat and Textures

SDL and OpenGL have different ways of storing the color of a pixel. SDL uses its SDL_PixelFormat structure to describe how color information is stored, while OpenGL uses a list of possible internal texture formats when you create a new texture with glTexImage2D.

I struggled a little before finding a proper way to convert an SDL_Surface to an OpenGL texture recently for my graphic library, so I decided to post my findings here, because I couldn't easily find any exhaustive information on the web… I will not talk about the uncommon case of palette images with transparency yet, or other 8-bit displays 😉 Even nowadays, with a quite standard configuration, there are some issues, and I will only talk here about displaying true-color images on true-color displays. If your image is not true color (that is, its pixel color information takes less than 24 bits), you can always convert it externally as data, or in your program.

  • First of all, if you want to support OpenGL implementations earlier than 2.0, you need to convert your texture to a “power of two” size, that is, a width and height of 32, 64, 128, 256, etc. To achieve that in SDL, I create a new image with the same flags, and blit the old surface onto the newly created one. If you care about transparency, you need to transfer the colorkey and per-surface alpha values to the new surface before blitting, so that everything is blitted.
  • Then you have to convert your SDL_Surface to the proper format before using it as a texture. As your display will probably be true color if you are using OpenGL, you can use SDL_DisplayFormat to convert your surface to a 24-bit-or-more format. It seems possible to display palettized textures with OpenGL, but if you are not using a great number of them, such a complicated technique can seem overkill. Also, if you have a colorkey on your surface you need to use SDL_DisplayFormatAlpha instead, so that pixels with the colorkey value are converted to full-transparency alpha on the new surface.
  • You need to find the proper texture format. There is also an issue with the colorkey, which can sometimes generate an alpha channel that you shouldn't use (shown in SDL_PixelFormat by Aloss == 8). Here is the code I am using (on a little-endian machine):

    if (surface->format->BytesPerPixel == 4) // contains an alpha channel
    {
        if (surface->format->Rshift == 24 && surface->format->Aloss == 0) textureFormat = GL_ABGR_EXT;
        else if (surface->format->Rshift == 16 && surface->format->Aloss == 8) textureFormat = GL_BGRA;
        else if (surface->format->Rshift == 16 && surface->format->Ashift == 24) textureFormat = GL_BGRA;
        else if (surface->format->Rshift == 0 && surface->format->Ashift == 24) textureFormat = GL_RGBA;
        else throw std::logic_error("Pixel Format not recognized for GL display");
    }
    else if (surface->format->BytesPerPixel == 3) // no alpha channel
    {
        if (surface->format->Rshift == 16) textureFormat = GL_BGR;
        else if (surface->format->Rshift == 0) textureFormat = GL_RGB;
        else throw std::logic_error("Pixel Format not recognized for GL display");
    }
    else throw std::logic_error("Pixel Format not recognized for GL display");
  • Once all this is done, you can load your texture as usual with OpenGL, and everything should work 😉

    glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows are tightly packed; 3 is not a valid alignment value
    glGenTextures(1, &textureHandle);
    glBindTexture(GL_TEXTURE_2D, textureHandle);
    // legacy GL allows the number of components (3 or 4) as the internal format
    glTexImage2D(GL_TEXTURE_2D, 0, surface->format->BytesPerPixel, surface->w, surface->h, 0, textureFormat, GL_UNSIGNED_BYTE, surface->pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

I hope this little post will help those who, like me, like SDL and OpenGL but have trouble mixing them. The code is available in my open-source graphic library, under heavy development at the moment. Our goal is pretty big, and help, in any way, is very welcome. Don't hesitate to contact us if you want to get involved 😉
And for the experts out there, let me know if you spot some errors in here.

Back to coding…

Categories: C++, OpenGL, SDL

Erlang Bitstring Operators

2010/03/12 2 comments

After looking more deeply into what a DHT like Kademlia does or doesn't do, I started to write some useful code in Erlang…

First, it seems (quite intuitive, I must say) that a DHT doesn't say anything about connections… that is, who I am connecting to at first, how to choose my endpoint, etc. Everything that is not enforced by the DHT mechanism is an opportunity to tune the system towards more special needs and features beyond the DHT's own possibility set. Therefore I am making the intuitive assumption that a good way would be to connect to a very close node. BATMAN has an interesting and quite simple way of handling connections, so I decided to follow its example 😉

I just finished a UDP broadcaster in Erlang, pretty simple, that basically advertises to its online neighbors that it has come online, and registers its neighbors' replies.

Then, comparing This and That, I got a bit confused about a few things…
For example: the nodeID seems to be up to the user, provided a few conditions are met… is it really?
But anyway, it is sure that the nodeID is going to be a long binary. So I decided to start implementing a key scheme for what I had in mind for the dynamic vector clock algorithms, that is, a key based on the order of connection of the different nodes.

However, I was quite disappointed to *not find* any simple way to deal with bitstring operations in Erlang… Binary operators work on integers only, and apparently on 8-bit units for convenience with bitstrings… so I started writing my own module for that, with bs_and, bs_or, bs_not, and so on, which work on a bitstring of arbitrary size. It's pretty basic and not optimized at all, but it works. Not sure if there is any interest out there for it, but let me know if there is; I can always put it on GitHub somewhere 😉

Erlang Bitstring Operator test Screenshot

Other than that, I keep working on my little portable SDL-based Game Engine, which now has OpenGL enabled by default if available on your machine.

SDLut_Refresh Test Screenshot

It's working pretty nicely for simple bitmap display. The refresh is now optimized (I should display the fps up there 🙂), and the user doesn't have to manage the list of rectangles to know what has changed on the screen (which wasn't supported before in SDL render mode). Also, OpenGL can be disabled if not needed, and nothing changes in the interface for the user 🙂 pretty handy 😀 That was some work, but now the most troubling point is the “Font” part, which doesn't behave exactly as you would expect… more work to be done there.

Categories: Distribution, erlang

Towards a beginning of a design?

2010/03/10 1 comment

I have been thinking about this for a very long time, gathering research papers and browsing the Internet to look for a possible way to implement what I had in mind… If I wanted to name it, it would be something like: a Decentralized Distributed MMO (or not) Game (or more serious) Engine.

The idea seems simple, and quite intuitive; however, one needs to be aware of the Fallacies of Distributed Computing.

Here is what we have, from a very abstract and high-level point of view: computers, and links between them.
Here is what we want to do with them: store data, send messages, process instructions.
And we can see the problems we will have to face: data might not be available, data might be corrupt, messages can disappear, messages can arrive in the wrong order, links can disappear, the network topology can change, etc.

Here is what I want :
– No central point
– distributed topology, with node joining or leaving anytime
– resilient even if one node or link fails, or leaves the system at an unexpected moment: no data gets lost, no connection gets broken
– good performance ( real-time like would be great )

The trade-off between performance and resilience is pretty difficult to manage. If you were to build one on top of the other, which would you start with?

Although many systems try to solve or alleviate one of these problems, none of them, as far as I am aware, can deal with all of them while maintaining decent performance. After looking at a few research proposals, I thought that one solution to one of these problems would be really interesting to implement; however, after trying it, I realized how important it was to lay down some foundations first. I made some small developments in Erlang, and quickly wondered how I could structure my software, given all the components I would need to satisfy all the features I had thought of… I wrote some of them, while others would have required much more expertise than my own to work. So I need to heavily reuse what has already been done to make my task a bit easier, if I ever want to achieve my goal.

After all, there is the “Researcher way”: an expert in his field, with enough funding to spend a lot of time developing one system until it becomes as good as it can be, in theory. And there is the “Entrepreneur way”: making something work quickly, no matter how dirty and partial it is, with everything one can find, provided that people are interested and will sustain you to improve the system along the way…
Even if I am still tempted by the first way, I am no longer a student, nor do I seem able to secure any funding at the moment, so I have to take the second path.

So I should :
– reuse what is already working elsewhere: DHT, p2p data sharing
– make something interesting out of the system ??? we'll see, depending on what it can do… probably trying to use it with my little open-source game engine
– plan for reusability: structure the project, document the different parts separately
– plan for later improvements: divide and conquer, and specify interfaces between blocks

That is why I decided on a basic layered architecture for a start:
– Implicit connection to the p2p network.
– DHT, mostly to keep “IP – nodeID” pairs in a distributed way, along with other “global state data”…
– Routing algorithm.
– SCRIBE-like layer to manage groups and multicast.
– Message transport protocol (overlay UDT? or direct SCTP/DCCP? depending on needs and performance…)
– Causality algorithms (Interval Tree Clock-like), which might need multicast for optimization when there is no central system, depending on the type of implementation…
– Game engine layer, able to send state updates efficiently, with proper ordering, to a set of selected peers.

The choice to base the design on a DHT is, I think, the best for me. Despite my interest in ad-hoc networking protocols, and however much I would like to implement them on top of IP to get a more fault-tolerant network, I am not a network algorithms expert, and it would take me far too long to get something decent working. Also, DHTs have now been quite extensively studied, and some implementations exist and are very usable, which lets me reuse them so I can focus on something else. Some improvements are likely to emerge in the years to come, and by using something already well known, it will be easier to integrate evolutions.
Depending on which implementation I choose, I will have to check which features are available out of the box, and which ones I will need to implement on top of it to reach the feature set I want. Kademlia seems to be the most mature DHT algorithm from what I could gather around the internet, but I will need to look at it more deeply. SCRIBE was implemented on top of the Pastry algorithm, and I would need to reimplement it on top of Kademlia, as I didn't find any similar attempt…

Not an easy task, but definitely an interesting research process 😉 Let's hope there will be something worth it at the end of the road.

Categories: Distribution

Peer-to-peer distributed, existing systems

Looking at GNU Social, which sadly is likely to be centralized, I found a list of other, much more distributed projects that raised my interest, and I should have a deeper look at them soon… Most of them concern file sharing, but not only…

The Circle is a peer-to-peer distributed file system written mainly in Python. It is based on the Chord distributed hash table (DHT).
> Too bad: development on the Circle ceased in 2004. However, the source is still available 😉

CSpace provides a platform for secure, decentralized, user-to-user communication over the internet. The driving idea behind the CSpace platform is to provide a connect(user,service) primitive, similar to the sockets API connect(ip,port). Applications built on top of CSpace can simply invoke connect(user,service) to establish a connection.
> That is pretty similar to what I want to achieve with my current developments, but while the “user view” will be similar, the intricacies will be quite different…

Tahoe-LAFS is a secure, decentralized, data store. All of the source code is available under a choice of two Free Software, Open Source licences. This filesystem is encrypted and spread over multiple peers in such a way that it remains available even when some of the peers are unavailable, malfunctioning, or malicious.
> Yeah, so that's done. At least there is something I will not try to do 🙂 Still need to test it though…

GNUnet is a framework for secure peer-to-peer networking that does not use any centralized or otherwise trusted services. A first service implemented on top of the networking layer allows anonymous censorship-resistant file-sharing. Anonymity is provided by making messages originating from a peer indistinguishable from messages that the peer is routing. All peers act as routers and use link-encrypted connections with stable bandwidth utilization to communicate with each other. GNUnet uses a simple, excess-based economic model to allocate resources. Peers in GNUnet monitor each other's behavior with respect to resource usage; peers that contribute to the network are rewarded with better service.
> Too bad they focus only on file sharing…

The ANGEL APPLICATION (a subproject of MISSION ETERNITY) aims to minimize, and ideally eliminate, the administrative and material costs of backing up. It does so by providing a peer-to-peer/social storage infrastructure where people collaborate to back up each other’s data. Its goals are (in order of descending relevance to this project)
> File sharing…

You can call Netsukuku a “scalable ad-hoc network architecture for cheap self-configuring Internets”. Scalable ad-hoc network architectures give the possibility to build and sustain a network as large as the Internet without any manual intervention. Netsukuku adopts a modified distance vector routing mechanism that is well integrated in different layers of its hierarchical network topology.
> Ad-Hoc alternative network 🙂 interesting… I still want to use internet though…

Syndie is an open source system for operating distributed forums (Why would you use Syndie?), offering a secure and consistent interface to various anonymous and non-anonymous content networks.
> Only forums… mmm…

I also found a blog that seems interesting, although I am pretty sure it is mostly Amazon-centric: AllThingsDistributed.com
It might be worth a deeper look to see what the big companies are coming up with…

Categories: Distribution