GSoC: Netengine project

Here is my second blog post, written for the mid-term evaluation of GSoC 2014.

As I wrote in the previous one, I’m working on Netengine, a Python module to abstract network devices and get information from them.

The work is going very well, I’m learning new things every day with the help of my mentor, Federico Capoano, and I’m very happy with the development.

In this first part of the work we completed, as planned, the SNMP back-end for the AirOS and OpenWRT firmwares.

The most difficult aspect of this first phase was working with SNMP (Simple Network Management Protocol), because I had never worked with it before, so I had to learn its basics and how it works, in particular how it retrieves information from devices.

SNMP identifies each piece of information with a numeric code (an OID, defined in MIBs); each one gives access to a different property of the device (e.g. device name, addresses, interfaces), so before starting to write code I had to look for the correct OIDs to query.
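To give an idea of what this looks like in practice, here is a minimal sketch (not Netengine's own code) that reads two of the standard OIDs with pysnmp; the device address and community string are placeholders.

```python
# Minimal SNMP query sketch (not Netengine code): read two standard OIDs
# from a device with pysnmp. Host address and community are placeholders.
from pysnmp.entity.rfc3413.oneliner import cmdgen

OIDS = {
    'sysDescr': '1.3.6.1.2.1.1.1.0',  # device/firmware description
    'sysName': '1.3.6.1.2.1.1.5.0',   # configured device name
}

def snmp_get(host, community, oid):
    cmd = cmdgen.CommandGenerator()
    error_indication, error_status, error_index, var_binds = cmd.getCmd(
        cmdgen.CommunityData(community),
        cmdgen.UdpTransportTarget((host, 161)),
        oid,
    )
    if error_indication or error_status:
        raise RuntimeError(str(error_indication or error_status))
    # var_binds is a list of (oid, value) pairs; we asked for a single OID
    return str(var_binds[0][1])

if __name__ == '__main__':
    for name, oid in OIDS.items():
        print(name, '=', snmp_get('192.168.1.20', 'public', oid))
```

Netengine wraps queries like this behind a uniform interface, so the caller never has to deal with OIDs directly.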

Now we are focusing on the ssh.OpenWRT back-end, ready to switch to the next one on the list once it is completed.

I’m definitely very happy with how the work is going, with the communication I’m having with my mentor and with all the coding practices I’m learning from him.

The program has given me not only the chance to improve my skills but also to meet new people who are very experienced in the field.

The next step is to start coding the new back-end, probably an HTTP back-end for AirOS, in order to complete the program in time.

For further questions about the project, please visit https://github.com/ninuxorg/netengine or email us at ninux-dev@ml.ninux.org.

GSoC: Freifunk API Query Client meets DeepaMehta

This post will give an overview of the ongoing work on the API query client GSoC project. As I wrote a few days ago, I met Jürgen Neumann at the WCW 2014 and he introduced me to DeepaMehta. We decided to use this tool as a database for the API data. This approach is quite a leap from my original proposal and idea, but after a few discussions we realized it has a lot of benefits. Here I want to give a short overview of this new approach.

What is DeepaMehta?

“DeepaMehta represents information contexts as a network of relationships. This graphical representation exploits the cognitive benefits of mind maps and concept maps. Visual maps — in DeepaMehta called Topic Maps — support the user’s process of thinking, learning, remembering and generating ideas. We think that working with DeepaMehta stimulates creativity and increases productivity.” — Welcome to DeepaMehta

This sounds interesting, but one may ask: where is the connection to community data in machine-readable form? The answer is in the data model. Here is an example from the website:

The data is organised in a topic map. There are topics that can represent e.g. an organisation, a person or an event. These topics are connected through hypergraph associations, which makes it possible to model all kinds of relationships between topics. For example, a person can be modelled as a topic that is associated with an address, and the address consists of location data, email addresses and so on. This person can be a member of several organisations, and these organisations can be aggregated by several parent organisations, and so on.

We have a powerful graph that can represent all kinds of information, and we know how each bit of information relates to the other bits.

We have a graph that we can traverse for queries. It is straightforward, for example, to list all organisations that a person is a member of.

To take an example from the API data: we want to know which communities use “olsr” as their routing protocol. This would be an instance of the topic type “routing protocol”. We now only need to follow the links from the “olsr” instance of the “routing protocol” topic to all connected instances of the type “freifunk-community”.
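A toy illustration of that traversal idea (this is not DeepaMehta's API, just a few dictionaries standing in for topics and associations, with made-up data):

```python
# Toy model of the traversal described above: topics are nodes, associations
# are edges, and a query follows the edges from one topic to its neighbours.
topics = {
    1: {'type': 'routing protocol', 'value': 'olsr'},
    2: {'type': 'routing protocol', 'value': 'batman-adv'},
    3: {'type': 'freifunk-community', 'value': 'Community A'},
    4: {'type': 'freifunk-community', 'value': 'Community B'},
    5: {'type': 'freifunk-community', 'value': 'Community C'},
}
associations = [(1, 3), (1, 4), (2, 5)]  # (protocol topic, community topic)

def connected(topic_id, wanted_type):
    """Return all topics of `wanted_type` that are associated with `topic_id`."""
    neighbours = [b for a, b in associations if a == topic_id]
    neighbours += [a for a, b in associations if b == topic_id]
    return [topics[n] for n in neighbours if topics[n]['type'] == wanted_type]

olsr_id = next(i for i, t in topics.items()
               if t['type'] == 'routing protocol' and t['value'] == 'olsr')
for community in connected(olsr_id, 'freifunk-community'):
    print(community['value'])  # -> Community A, Community B
```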

This would allow for flexible queries. Another example would be a map where all instances of location topics are displayed and their parent topics are included as labels for the points on the map. If node data is present for communities, this would allow a global node map that shows not only node locations but also community event locations and meeting places. Of course there is a huge amount of work to do before this will be working, but overall I hope this explains why there are a lot of benefits to using this representation.

Freifunk API data in DeepaMehta

So how will it work? I’ve tried to put it in a diagram:

More details:

Data

At the moment there is a specification – a JSON schema file – for the API data and an instance – a JSON API file.

Magic

Magic is probably the wrong word for it, but all the hard work is done by DeepaMehta and I am only building a plugin on top of it – unaware of the implementation details – so I thought the word was appropriate. To quote Arthur C. Clarke: “Any sufficiently advanced technology is indistinguishable from magic.”

First, we need to put the schema into the DeepaMehta platform. This is possible using a plugin that creates the topic types in DeepaMehta for the entries in the JSON schema.

The next step is to feed the current data into DeepaMehta. This creates instances of the previously defined topic types, e.g. a topic for each community.
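The following sketch shows the rough idea of that import step. It is only an illustration: the real work is done by a DeepaMehta plugin, and the type URIs, mapping and file names used here are hypothetical.

```python
# Simplified sketch of the import step: derive topic types from the JSON
# schema and topic instances from a community's API file. The type URIs,
# the mapping and the file names are hypothetical, not DeepaMehta's real API.
import json

TYPE_MAP = {  # assumed mapping of basic JSON schema types to data types
    'string': 'dm4.core.text',
    'integer': 'dm4.core.number',
    'number': 'dm4.core.number',
    'boolean': 'dm4.core.boolean',
}

def schema_to_topic_types(schema):
    """Create one topic type definition per property of the JSON schema."""
    return [{
        'uri': 'org.freifunk.api.%s' % name,  # hypothetical URI scheme
        'value': name,
        'data_type_uri': TYPE_MAP.get(prop.get('type'), 'dm4.core.text'),
    } for name, prop in schema.get('properties', {}).items()]

def api_file_to_topics(api_data, topic_types):
    """Create one topic instance per known field of a community's API file."""
    known = {t['value'] for t in topic_types}
    return [{'type_uri': 'org.freifunk.api.%s' % key, 'value': value}
            for key, value in api_data.items() if key in known]

if __name__ == '__main__':
    schema = json.load(open('api-schema.json'))   # the JSON schema file
    api = json.load(open('community-api.json'))   # one community's API file
    types = schema_to_topic_types(schema)
    print(api_file_to_topics(api, types))
```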

Once the data is in DeepaMehta it is possible to query it.

Presentation

We can now speak JSON over HTTP using REST with the platform and present the results in various ways, e.g. display communities on a map or provide a text interface to query the data. DeepaMehta already provides a REST API and a web-based interface for exploring and editing topic maps, but while testing and playing with the interface we found it too complicated.
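As a rough sketch of what such a query could look like from a client's point of view (the base URL, endpoint path and type URI below are placeholders, not DeepaMehta's exact REST routes):

```python
# Sketch of querying the platform with JSON over HTTP using the requests
# library. The base URL, endpoint path and type URI are placeholders.
import requests

BASE = 'http://localhost:8080'

def topics_of_type(type_uri):
    """Fetch all topic instances of a given type from the platform."""
    response = requests.get('%s/core/topic/by_type/%s' % (BASE, type_uri))
    response.raise_for_status()
    return response.json()

if __name__ == '__main__':
    for topic in topics_of_type('org.freifunk.api.community'):
        print(topic)
```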

Current Progress

Feeding the data into the platform by hand is not practical, so I'm working on an import script for the schema and the API data. At the moment mapping the basic JSON types (string, integer, ..) into DeepaMehta is working, but more work needs to be done to get a better representation. Once the data is in, the focus will shift to doing actual queries.

Problems

Complexity. These are for the most part new concepts for me and I had little prior experience with semantic web technologies. DeepaMehta also covers quite a few other use cases and I need to learn more about the system.

Open Questions

Doing the actual queries and traversing the graph is something I still need to find a workable solution for. There are also different specifications of the API schema, and different communities use different versions of the API. At the moment I'm ignoring that detail, but here too I need to find a solution. Another nice-to-have feature would be access to historic data.

Lots of interesting problems… unfortunately I've been short on time in the past days and I'm quite behind schedule, but I'm optimistic that this approach is flexible enough to provide solid ground for all kinds of experiments with the data. Once queries are possible, things will hopefully move forward at a faster and more visible pace.

GSoC “BGP/Bird integration with OpenWRT and QMP” project report

This entry updates the information regarding this GSoC project, which is focused on automating the exchange and “translation” of BGP and BMX6 metrics and routes.
 
During the WBMv7 in Leipzig, the WiBed platform [0] was presented, as well as the GSoC project [1] in which I am participating. WiBed was used to deploy the testbed network where many routing and mesh-related experiments were executed.
I participated in the deployment and development teams of the WBMv7, so I contributed many bug fixes and platform improvements.
The WiBed project is important for the GSoC because it provides a testing platform very similar to the production environment where we will apply the results of the GSoC project.
 
Currently I am studying and getting to understand the base system we need in order to accomplish the objectives of my project. I am working on the transition of the Bird4 and Bird6 [2] configuration to UCI [3] and LuCI [4], aiming to give the OpenWRT project and community a more user-friendly Bird daemon configuration. For those who do not know, Bird is a lightweight and flexible routing daemon with BGP support which may be used as an alternative to Quagga (which is considerably heavier).
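To illustrate the idea behind the UCI transition, here is a small sketch that reads a few hypothetical UCI options and renders the corresponding bird.conf fragment; the actual package defines its own option names and covers far more of Bird's configuration.

```python
# Sketch only: read hypothetical UCI options and render a Bird BGP stanza.
# The UCI option names are illustrative, not the package's real schema.
import subprocess

def uci_get(option, default):
    """Read a UCI option via the uci CLI, falling back to a default."""
    try:
        return subprocess.check_output(['uci', 'get', option]).decode().strip()
    except subprocess.CalledProcessError:
        return default

def render_bgp_instance():
    local_as = uci_get('bird4.bgp.local_as', '65000')
    neighbor = uci_get('bird4.bgp.neighbor_addr', '10.0.0.1')
    neighbor_as = uci_get('bird4.bgp.neighbor_as', '65001')
    return ('protocol bgp {\n'
            '    local as %s;\n'
            '    neighbor %s as %s;\n'
            '    import all;\n'
            '    export all;\n'
            '}\n' % (local_as, neighbor, neighbor_as))

if __name__ == '__main__':
    print(render_bgp_instance())
```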
 
Once we have a usable version, we will upload the work to a public Git repository following the standard OpenWRT feeds format (so it might be included in the official repositories). To test our progress and implementation we are using the WiBed platform network deployed in our laboratory (at the UPC university in Barcelona).
The first production test will be made in the QMP [5] network we have deployed in Barcelona.
 
The most challenging feature of our project is the exchange of routes (and associated metrics) between routing protocols (BGP and BMX6 [6] in our case). We (me, the workgroup and the mentors) are still discussing the different ways to implement it and how to apply the Bird solution to Guifi.net [7], where the main protocol is BGP and the most common OS is the proprietary RouterOS from Mikrotik. Including Bird in the open/libre firmware QMP will allow people in Guifi.net to use this solution instead of RouterOS. However, to interconnect the QMP (mesh) networks and the current BGP infrastructure we need the so-called frontier (or border) nodes, i.e. those which exchange routes between the two network clouds.
 
To avoid overloading the current mesh clouds (running BMX6), we will install a Bird BGP instance only on these QMP border nodes. They will exchange routes and metrics at the edge of the network and summarize the result by publishing aggregated routes into each network.
 
Another idea we are considering is to create a very small and simple OpenWRT image with the BMX6 daemon ready to perform the routing. This image could be installed as a virtual machine inside the RouterOS firmware (present in 50-60% of the Guifi.net nodes). Mikrotik nodes would then be able to route BMX6 packets, so the BGP instance would no longer be necessary (we believe mesh routing protocols are a much better option than BGP/OSPF for a WiFi network). This approach is compatible with the border-node approach described above. We will provide both options to Guifi.net users and let them decide.
 
Finally, to conclude this mid-term report: we expect to finish the project on time, and in the coming days we will start testing the first solution in a real production network.
 

GSoC: Freifunk API Query Client – A short report from the Wireless Community Weekend 2014

Photo taken from https://twitter.com/christianheise/status/472746947569520641

From the 29th of May to the 1st of June we were with Andi and Bernd from Weimarnetz at the wonderful c-base Raumstation for the Wireless Community Weekend. It was my first experience of this kind of community event and I enjoyed it very much.

Beer and Bratwurst harmonized quite well with technical talks about OpenWrt and the Freifunk community. I was especially surprised by how diverse and open the community is and how enthusiastic everyone involved was.

Andi and Monic talked about the progress on the Freifunk API and presented their work. At the end of their talk I had the chance to present my work on the query client for the API. Here are the slides.

Shortly after the talk, Jürgen Neumann from Freifunk Berlin came to me and introduced me to DeepaMehta. My original plan was to use something like NodeJS for the back end and the storage of the API data, but DeepaMehta looked promising and offers unique features I hadn't even thought of.

So after talking with Andi about it we decided to use DeepaMehta as the foundation and storage tool for the API data. A separate blog post for the GSoC midterm evaluation will outline my work in this direction.

Overall it was a very exciting weekend for me at #ffwcw 2014.

GSoC: Source-Sensitive Routing

The first weeks of my Google Summer of Code project were a little complicated, as I still had exams at university and I was not really aware of what mesh networks were about. I also needed a little time to get to know networking and routing better, both practically and theoretically.

At the beginning, in order to understand quagga and babeld, I read a lot of documentation on the routing topic, including theoretical papers and some RFCs, while also browsing the code of both babeld and quagga.

On the other hand, I have been able to experiment with mesh routing on the mesh network available at university. In order to use that network from home, and to be able to test my programs at all times, I also established a VPN connection between the university and my home computer. By doing so, I can connect to the university's Babel network at any time. I have also been able to understand the functioning of quagga and zebra and to install source-sensitive static routes on a mesh network.

After having spent much time reading the code of babeld, of quagga, and of Babel inside quagga, I managed to use zebra's API in babeld and began to add support for source-sensitive routing in babeld. Currently, my code runs and segfaults proudly! I hope to see the first results of source-sensitive routing with my version of babeld by the end of the week at the latest.

At first, my goals were not really clear, but now I have precise objectives for the short, middle, and long term. In brief, my short-term objective is to get source-sensitive routing working in Babel by the end of the week. After getting a working source-sensitive version of Babel, I will implement the same work in RIPng: RIPng is a quite simple protocol, and Juliusz and Matthieu told me it would be a good idea to offer it source-sensitive routing as well. Finally, after everything has been tested and is running fine, I will implement the source-sensitive commands in Babel. Then I will complete the documentation of my work. In the end, the ultimate goal would be for this work to be included in the official Quagga repository.

If you want more details about the work I did, you can read my blog here: http://ariane.wifi.pps.univ-paris-diderot.fr/~olden/. I have posted an entry every week to keep you informed of the short-term progress.

GSoC: Retroshare social network plugin compared with Retroshare forums

In the last post I wrote about the decentralized structure of Retroshare. I said that features similar to Facebook's are also possible in a decentralized manner. Now let's have a closer look at those features.

The purpose of communication tools is to take a piece of information and transmit it. The questions are: who should receive it, when will it be received, and to which next hop should the information be sent?

A very simple form of communication is one-to-one messaging. We have a clearly defined receiver, who can be addressed by an RSA public key. This is very simple if we are directly connected to the receiver. If the receiver is not a direct neighbor, Retroshare uses its distant tunnels. Current distant chat and distant messages only work when sender and receiver are online at the same time. Retroshare v0.6 will cache messages on neighbor nodes and deliver them once the receiver is online.

Retroshare can also do many-to-many communication. Chat lobbies are for real-time communication: if you don't receive a message within a few minutes after it was sent, you will never receive it. Messages are sent to all neighbors in the chat lobby, and Retroshare keeps messages for a few minutes to prevent echoes. Forums and channels offer persistent messages, which means messages are kept for a year in forums and for a month in channels. The messages are not targeted at a person, but belong to a context: a chat lobby or a forum/channel. Users have to subscribe to the context.
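To make the flooding idea more concrete, here is a small sketch (not Retroshare's actual implementation): every message carries an ID, each node remembers the IDs it has recently seen, and it forwards unseen messages to all neighbors except the one it came from.

```python
# Toy flooding model with echo prevention, as used conceptually by chat
# lobbies. This is an illustration, not Retroshare code.
import time

class LobbyNode:
    CACHE_SECONDS = 300  # remember message ids for a few minutes

    def __init__(self, name):
        self.name = name
        self.neighbors = []  # directly connected LobbyNode objects
        self.seen = {}       # message id -> time first seen

    def receive(self, msg_id, text, sender=None):
        now = time.time()
        # forget ids that are older than the cache window
        self.seen = {m: t for m, t in self.seen.items()
                     if now - t < self.CACHE_SECONDS}
        if msg_id in self.seen:
            return  # already handled: this is an echo
        self.seen[msg_id] = now
        print('%s received %s: %s' % (self.name, msg_id, text))
        for neighbor in self.neighbors:
            if neighbor is not sender:
                neighbor.receive(msg_id, text, sender=self)

# a tiny triangle of nodes: each one prints the message exactly once
a, b, c = LobbyNode('A'), LobbyNode('B'), LobbyNode('C')
a.neighbors, b.neighbors, c.neighbors = [b, c], [a, c], [a, b]
a.receive('msg-1', 'hello lobby')
```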

Every message is bound to its context and appears in only one place. In Retroshare forums and channels the context is a topic like “Developer’s Discussion” or “RetroShare Windows builds by Thunder”. In the social network plugin the posts are decoupled from a context: the context only stores a reference to the post. This allows posts to appear in different places. For example, a post can appear on the walls of user A and user B, and if I see the post and like it, I can forward it to my own wall. Forwarding only specific posts increases the quality of the received content.

With forums and channels you first have to subscribe to them; Retroshare will then download the posts they contain. The social network plugin automatically downloads the posts your friends like, and if your friends like something, the probability is high that you will like it too.

While forums are about a topic, a social network wall is about the user. It is a personal place where you can leave messages for your friends, and your friends can leave messages for you. Still, you are not limited to your own content: if you want to share existing posts with your friends, just put them onto your wall.

Scope of the GSoC project
The goal of the GSoC project is to build a basic wall service. This includes the back end for message processing and the user interface. Chat and private messages are not planned for GSoC, but they would be very easy to add later.

Internally the social network plugin is based on the new Retroshare General Exchange System (GXS). GXS has two basic data structures: gxs-groups and gxs-messages. A gxs-message belongs to a gxs-group. If a gxs-group is marked as subscribed, then all messages in this group are downloaded. In the gxs-forums this mechanism is clearly visible to the user: gxs-groups are downloaded automatically, but the user has to subscribe to download the contained gxs-messages. The posts in the social network plugin should be independent, which requires that every root post has its own gxs-group. The wall service automatically subscribes to interesting gxs-groups; a gxs-group can be interesting because:
– the post appears on the wall of a friend
– the user follows the author of the post, or the user is a friend of the author
– a friend commented on the post

So the interesting content is not defined by a forum title, but by the way friends interact with it.
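Put as a small sketch (with a simplified data model, not the real GXS structures), the subscription decision could look like this:

```python
# Sketch of the wall service's subscription rule: a root post is a gxs-group,
# and the service subscribes when friends interact with it. The dictionaries
# below are a simplified stand-in for the real GXS data structures.
def is_interesting(group, me):
    """Decide whether the wall service should subscribe to a post's gxs-group."""
    friends = set(me['friends'])
    following = set(me['following'])
    return (
        bool(friends & set(group['appears_on_walls_of']))  # on a friend's wall
        or group['author'] in following                    # author is followed
        or group['author'] in friends                      # author is a friend
        or bool(friends & set(group['commented_by']))      # a friend commented
    )

me = {'friends': ['alice', 'bob'], 'following': ['carol']}
post = {'author': 'dave', 'appears_on_walls_of': ['alice'], 'commented_by': []}
print(is_interesting(post, me))  # True: the post appears on a friend's wall
```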

GSoC 2014: Hardware Detection

Hi everyone!

Before starting to code I studied the Lua programming language: I read some tutorials and did some practical examples. After that, I went through the Libre-Mesh architecture to understand better how it is put together. When I understood the programming pattern of LiMe I started coding.

First of all, I created a plug-in module for ath9k-htc based hardware. This module has been tested using a TP-LINK WDR3600 and two TL-WN722N USB radios. While working on it I also included a hotplug hook, so the USB radios can be dynamically plugged into and removed from the system while it is running.

After getting the usbradio module almost done, I have been working on a modular hardware-detection plug-in for Libre-Mesh. This hardware-detection module has been completed with clean and configure functions. During the development of this part I discovered some bugs in the file config.lua and fixed them.

Currently, after some cleaning and debugging, both modules are almost finished.

Obviously all this work has been possible thanks to the help of the community, and I hope to do my best during the rest of the coding period.

I will be happy to receive feedback or tips.

Best Regards 😉

GSoC 2014 Updates – OpenWrt: IEEE 802.1ad VLAN support

As promised, I have been working hard on the proposal. Thanks to the community's help I have already almost completed all deliverables; in fact some patches have already been merged in netifd [10-13] and OpenWrt [15], and the development branch of libre-mesh is already taking advantage of them [14] 🙂
 
Working on this project gave me even more conviction that community collaboration is fundamental: the community helped me to understand the problem and what to modify in order to fix it, and moreover someone went beyond that and submitted patches too [16-17]!
 
Here is a small summary of how I did it:
    – Talked multiple times with the OpenWRT [0] folks via the Retroshare OpenWRT lobby; they gave me some indications on what I had to do
    – Got the netifd code [18]
    – Created a vlandev netifd device
    – Implemented routines to add and remove 802.1ad/802.1q VLANs in Linux via netlink
    – Talked with the OpenWRT folks for quality checks and patch merging
    – Cleaned and tested the code
    – Got the patches merged in netifd and OpenWRT
 
Now whoever needs to use 802.1ad is fortunately no longer constrained to proprietary software, because it is now supported in OpenWRT [0] 😀
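For readers who want to see what this corresponds to under the hood, the following sketch performs the equivalent operation with iproute2 from Python; netifd itself talks netlink directly, and the interface names and VLAN ids here are just examples.

```python
# Sketch: creating and removing an 802.1ad ("QinQ") VLAN with iproute2,
# roughly equivalent to what the new netifd vlandev routines do via netlink.
import subprocess

def add_vlan(lower, name, vlan_id, proto='802.1ad'):
    """Create VLAN device `name` on top of `lower` with the given protocol."""
    subprocess.check_call(['ip', 'link', 'add', 'link', lower, 'name', name,
                           'type', 'vlan', 'proto', proto, 'id', str(vlan_id)])
    subprocess.check_call(['ip', 'link', 'set', name, 'up'])

def del_vlan(name):
    """Remove a VLAN device."""
    subprocess.check_call(['ip', 'link', 'del', name])

if __name__ == '__main__':
    add_vlan('eth0', 'eth0.100', 100)                          # outer 802.1ad tag
    add_vlan('eth0.100', 'eth0.100.200', 200, proto='802.1q')  # inner 802.1q tag
    del_vlan('eth0.100.200')
    del_vlan('eth0.100')
```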
 
More updates [6] soon, and don’t forget the best is yet to come! 
 

GSoC: nodewatcher v3 – data collector agent

Hello all!

A quick update on what is happening regarding my GSoC project of bringing the nodewatcher v3 platform closer to reality. Version 2 of nodewatcher used its own simple key-value format and a bunch of shell scripts to provide monitoring data. In order to bring this into the modern era, where JSON and ubus are available as compact libraries on OpenWrt by default, I have in the last few days created a new modular OpenWrt monitoring agent that can run on nodes and periodically obtain data from various sources (directly from procfs, from uci, from netifd via ubus, from the nl80211 netlink API via OpenWrt's libiwinfo, etc.).

The daemon is implemented in C and the different data sources are implemented as loadable shared-object modules, enabling simple extensibility. The nodewatcher agent then provides all of the collected data in two ways: a) it can directly output the data to a JSON file that can be served via HTTP; and b) it provides a ubus object called nodewatcher.agent that exposes a get_data method, so other applications can obtain the same structured data as a ubus blobmsg. The nodewatcher monitoring agent could perhaps also be reused by the proposed Freifunk Monitoring and Administration Panel.
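For example, on a node with the agent installed, the collected data can be read over ubus like this (a minimal sketch; the exact keys in the result depend on which modules are enabled):

```python
# Read the collected monitoring data from the agent's ubus object.
# Assumes the nodewatcher agent is installed and running on the node.
import json
import subprocess

def get_agent_data():
    output = subprocess.check_output(
        ['ubus', 'call', 'nodewatcher.agent', 'get_data'])
    return json.loads(output)

if __name__ == '__main__':
    print(json.dumps(get_agent_data(), indent=2))
```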

The nodewatcher-agent repository is hosted on GitHub with a README file providing a quick description of the used format, the ubus API and the currently implemented modules:

https://github.com/wlanslovenija/nodewatcher-agent

I have also packaged the agent for OpenWrt so it can be installed together with its various data source modules via opkg. The packages are available in the nodewatcher firmware package repository:

https://github.com/wlanslovenija/firmware-packages-opkg/tree/master/util/nodewatcher-agent

The agent and its packages should be considered alpha and the schema is still subject to change.

GSoC: Social network plugin for Retroshare

Retroshare is a communication tool built on two principles: decentralisation and privacy (read more about the ideals behind Retroshare). Decentralisation means there is no central server. It also contributes to privacy, because Retroshare runs on your own devices. Retroshare gives you the cryptographic tools to control exactly who can access your data. Privacy is enabled by encrypting all data that goes over the wire. Read “Cryptography and Security in Retroshare” for details.


Figure 1: Retroshare is a decentralized network.

Commercial social networks like Facebook have the opposite properties: they are centralized and all data is visible to one company. The good news is that features similar to Facebook's are also possible in a decentralized manner. The plan is to build this as a plugin for Retroshare, always keeping decentralisation and privacy in mind while offering a very similar interface with a similar feature set.


Figure 2: Sketch of the user interface.

Unlike other peer-to-peer software, Retroshare connects only to a set of trusted friends. This has some practical advantages, for example you don’t have to fear malicious peers. It still scales nicely, because friends tend to gather the same type of data.
But often you want to reach content that was not created by your direct friends, or you want to contact someone you are not friends with yet. For this, Retroshare offers another layer, which I call the service layer. At first a service can send messages to direct neighbors only, but when it receives a data packet it can also forward it to a friend. This is how you can reach friends of friends. This philosophy has been applied to multiple information-sharing algorithms within Retroshare. Read more about them in the following articles on the Retroshare team blog: Retroshare Forums, Distributed chat (a.k.a. Chat Lobbies), RetroShare's anonymous routing model, and Distant chat and messaging, using generic tunnels.


Figure 3: Retroshare forwards messages from friend to friend

On the neighbor layer, you have to add your friends' keys to your keyring by hand. This is not a problem in practice, because the number of direct friends is small and Retroshare offers various options to get to know neighbor nodes one hop away. On the service layer, however, you want to have the public keys of people many hops away. In Retroshare v0.6 there is an identity exchange system, which downloads a public key at the same time it receives a post from an unknown author. The identity exchange system can also work decoupled from the neighbor layer. This allows us to have pseudonymous identities. These identities can be used for different kinds of services like chat, mail, forums and the new social network plugin.

Retroshare is a project that has grown steadily, so the developers have gained a lot of experience. Out of this they developed a new system to distribute messages in a Retroshare network (GXS, short for General Exchange System). All the complex code for signing, verification, encryption and decryption of data is therefore already done and available. During the Summer of Code I will wrap this complex machinery in a nice web-based user interface. The RSWebUI plugin is a starting point. It uses Wt, a library for creating a web interface in C++ without writing code in web languages. You can watch the development in the git repository.

Read the next blog post to learn what features the social network plugin will have and how it will work in details.

Download Retroshare: http://retroshare.org

About me: I study electrical engineering. I first started programming to control hardware. I came to Retroshare because I wanted to chat securely without depending on a server. Then I started to dream about what else would be possible with a secured connection. So I started reading the code. Today I have enough knowledge to contribute.