Like every year, the Freifunk community of Halle will meet at the Hufeisensee this summer. It is a regional meeting that is open to all Freifunk communities. We would like to invite you to drop by on 19 July 2014.
This time we are guests at the club grounds of the Tauchclub Orca, Schkeuditzer Str. 70, 06116 Halle, where we will meet from 3 p.m. The grounds offer everything Freifunkers need for a productive meeting: Freifunk, Internet access, power and a barbecue are available, as are tables and chairs for setting up your own computing equipment. The Freifunk community from Halle wants to use this meeting to found its own support association. This association will take on the task of supporting the Freifunk project in Halle organisationally. Further information is available in the Freifunk.net wiki. Questions and suggestions can also be discussed in our forum. Guests are asked to register so that we can plan better.
Our small community in Bielefeld is currently growing quietly but steadily. Our website has now been completely redesigned, and our mailing list has finally moved away from Google Groups, after more and more people reported that their emails were not arriving - draw your own conclusions. ;-)
A new firmware (0.3) is also available; it not only runs more stably but also supports many new routers. Our thanks go to the Gluon project for several packages and patches that we were able to adopt (autoupdater / traffic control). For some models, however, we had to fall back on a still experimental OpenWrt version (Barrier Breaker). But it works! :D
Today there was also a talk about Freifunk as part of the Netzwoche at Bielefeld University. Perhaps it inspired one person or another to set up a router or to get actively involved. The slides are to be published soon so that others can use them as well.
Here is my second blog post, for the mid-term evaluation of GSoC 2014.
As I wrote in the previous one, I'm working on Netengine, a Python module to abstract network devices and get information from them.
The work is going very well, I'm learning new things every day with the help of my mentor, Federico Capoano, and I'm very happy with the development.
In this first part of the work we completed, as planned, the SNMP back-ends for the AirOS and OpenWRT firmwares.
The most difficult part was working with SNMP (Simple Network Management Protocol): I had never used it before, so I had to learn its basics and how it works, in particular how it retrieves information from devices.
It uses numeric object identifiers defined in MIBs (Management Information Bases); each one gives access to a different piece of device information (e.g. device name, addresses, interfaces), so before starting to write code I had to look for the correct identifiers to query.
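To illustrate the idea, here is a minimal, self-contained sketch of how SNMP addresses information through OIDs. The OIDs themselves are standard ones from SNMPv2-MIB and IF-MIB, but the device response is simulated with a dictionary, since a real query needs a reachable agent; this is not Netengine's actual code.

```python
# Standard OIDs from SNMPv2-MIB (RFC 3418) and IF-MIB (RFC 2863).
OIDS = {
    "sysDescr": "1.3.6.1.2.1.1.1.0",  # device description
    "sysName":  "1.3.6.1.2.1.1.5.0",  # device hostname
    "ifNumber": "1.3.6.1.2.1.2.1.0",  # number of network interfaces
}

# Simulated SNMP agent: maps OIDs to values, as a real device would.
FAKE_DEVICE = {
    "1.3.6.1.2.1.1.1.0": "AirOS v5.5",
    "1.3.6.1.2.1.1.5.0": "rooftop-antenna",
    "1.3.6.1.2.1.2.1.0": 3,
}

def snmp_get(device, name):
    """Look up a named property by translating it to its OID first."""
    return device[OIDS[name]]

print(snmp_get(FAKE_DEVICE, "sysName"))   # rooftop-antenna
print(snmp_get(FAKE_DEVICE, "ifNumber"))  # 3
```

The essential point is the indirection: the back-end never asks for "the hostname" directly, it asks for the OID that the relevant MIB assigns to that property.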
Now we are focusing on the ssh.OpenWRT back-end, ready to move on to the next one on the list once it is completed.
I'm definitely very happy with how the work is going, with the communication I'm having with my mentor, and with all the coding practices I'm learning from him.
The program has given me not only the opportunity to improve my skills but also to meet new people who are very experienced in the field.
The next step is to start coding on the new back-end, probably an HTTP back-end for AirOS, to complete the program in time.
This post will give an overview of the ongoing work on the API query client GSoC project. As I wrote a few days ago, I met Jürgen Neumann at the WCW 2014 and he introduced me to DeepaMehta. We decided to use this tool as a database for the API data. This approach is quite a leap from my original proposal and idea, but after a few discussions we realized it has a lot of benefits. Here I want to give a short overview of this new approach.
What is DeepaMehta?
DeepaMehta represents information contexts as a network of relationships. This graphical representation exploits the cognitive benefits of mind maps and concept maps. Visual maps -- in DeepaMehta called Topic Maps -- support the user's process of thinking, learning, remembering and generating ideas. We think that working with DeepaMehta stimulates creativity and increases productivity. - Welcome to DeepaMehta
This sounds interesting, but one may ask: where is the connection to community data in machine-readable form? The answer lies in the data model. Here is an example from the website:
The data is organised in a topic map. There are topics that can represent, e.g., an organisation, a person or an event. These topics are connected through hypergraph associations, which means it is possible to model all kinds of relationships between topics. For example, a person can be modelled as a topic that is associated with an address, and the address consists of location data, email addresses and so on. This person can be part of several organisations, and these organisations can be aggregated by several parent organisations, and so on.
We have a powerful graph that can represent all kinds of information, and we know how each bit of information relates to the others. We have a graph that we can traverse for queries: it is straightforward, for example, to list all the organisations a person is a member of.
To take an example from the API data: we want to know which communities use "olsr" as their routing protocol. "olsr" would be an instance of the topic type "routing protocol". We then only need to follow the links from the "olsr" instance to all connected instances of, say, the type "freifunk-community".
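As a toy illustration of this traversal: topics are nodes, associations are edges, and the query is a walk over the edges. All names and data structures here are made up for the example; they are not DeepaMehta's actual API.

```python
# Topic instances, each tagged with its topic type.
topics = {
    "olsr":         {"type": "routing protocol"},
    "batman":       {"type": "routing protocol"},
    "ff-halle":     {"type": "freifunk-community"},
    "ff-berlin":    {"type": "freifunk-community"},
    "ff-bielefeld": {"type": "freifunk-community"},
}

# Undirected associations between topic instances.
associations = [
    ("ff-halle", "olsr"),
    ("ff-berlin", "olsr"),
    ("ff-bielefeld", "batman"),
]

def communities_using(protocol):
    """Follow associations from a protocol instance to community topics."""
    linked = {a for a, b in associations if b == protocol} | \
             {b for a, b in associations if a == protocol}
    return sorted(t for t in linked
                  if topics[t]["type"] == "freifunk-community")

print(communities_using("olsr"))  # ['ff-berlin', 'ff-halle']
```

The same traversal generalises to any topic type, which is what makes the graph model so flexible for queries.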
This would allow for flexible queries. Another example would be a map on which all instances of location topics are displayed, with their parent topics as labels for the points on the map. If node data is present for communities, this would allow for a global node map that shows not only node locations but also community event locations and meeting places. Of course there is a huge amount of work to do before this will be working, but overall I hope this explains why this representation has a lot of benefits.
Freifunk API data in DeepaMehta
So how will it work? I've tried to put it in a diagram:
At the moment there is a specification - a JSON schema file - for the API data and an instance - a JSON API file.
Magic is probably the wrong word for it, but all the hard work is done by DeepaMehta and I am only building a plugin on top of it - unaware of the implementation details - so I thought it was appropriate. To quote Arthur C. Clarke: "Any sufficiently advanced technology is indistinguishable from magic."
At first, we need to put the schema into the DeepaMehta platform. This is possible using a plugin that creates the topics in DeepaMehta for the entries in the JSON schema.
The next step is to feed the current data into DeepaMehta. This creates instances of the previously defined topic types. E.g. a topic for each community.
Once the data is in DeepaMehta, it's possible to query it.
We can now speak JSON over HTTP with the platform via REST and present the results in various ways, e.g. display communities on a map or provide a text interface to query the data. DeepaMehta already provides a REST API and a web-based interface for exploring and editing topic maps, but while testing and playing with the interface we found it too complicated.
Feeding the data into the platform by hand is not practical, so I'm working on an import script for the schema and the API data. At the moment, mapping the basic JSON types (string, integer, ...) into DeepaMehta is working, but more work needs to be done to get a better representation. Once the data is in, the focus will shift to doing actual queries.
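The basic-type mapping could be sketched as follows. The target type URIs mimic DeepaMehta's naming style but should be treated as assumptions for illustration, not as the actual plugin code.

```python
import json

# Hypothetical mapping from JSON schema primitive types to
# DeepaMehta-style data type URIs (assumed names, for illustration).
TYPE_MAP = {
    "string":  "dm4.core.text",
    "integer": "dm4.core.number",
    "number":  "dm4.core.number",
    "boolean": "dm4.core.boolean",
}

def schema_to_topic_types(schema):
    """Map each schema property to a target topic data type,
    defaulting to text for unknown JSON types."""
    result = {}
    for name, spec in schema.get("properties", {}).items():
        json_type = spec.get("type", "string")
        result[name] = TYPE_MAP.get(json_type, "dm4.core.text")
    return result

schema = json.loads("""{
    "properties": {
        "name":   {"type": "string"},
        "nodes":  {"type": "integer"},
        "active": {"type": "boolean"}
    }
}""")

print(schema_to_topic_types(schema))
```

A real import script would additionally create composite topic types for nested objects and associate the resulting instances, which is where most of the remaining work lies.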
Complexity. These are for the most part new concepts for me, and I had little prior experience with semantic web technologies. DeepaMehta also covers quite a few other use cases, and I need to learn more about the system.
Doing the actual queries and traversing the graph is something I still need to find a workable solution for. There are also different specifications of the API schema, and different communities use different versions of the API. At the moment I'm ignoring that detail, but I will need a solution here as well. Another nice-to-have feature would be access to historic data.
Lots of interesting problems... Unfortunately, I've been short on time in the past days and I'm quite behind schedule, but I'm optimistic that this approach is flexible enough to provide solid ground for all kinds of experiments with the data. Once queries are possible, things will hopefully move forward at a faster and more visible pace.
Photo taken from https://twitter.com/christianheise/status/472746947569520641
From 29 May to 1 June we were at the wonderful c-base Raumstation together with Andi and Bernd from Weimarnetz, visiting the Wireless Community Weekend. It was my first experience of this kind of community event and I enjoyed it very much.
Beer and Bratwurst harmonized quite well with technical talks about OpenWrt and the Freifunk community. I was especially surprised by how diverse and open the community is and how enthusiastic everyone involved was.
Andi and Monic talked about the progress on the Freifunk API and presented their work. At the end of their talk I had the chance to present my work on the query client for the API. Here are the slides.
Shortly after the talk, Jürgen Neumann from Freifunk Berlin came up to me and introduced me to DeepaMehta. My original plan was to use something like NodeJS for the backend and the storage of the API data, but DeepaMehta looked promising and offers unique features I hadn't even thought of.
So after talking with Andi about it, we decided to use DeepaMehta as the foundation and storage tool for the API data. A separate blog post for the GSoC midterm evaluation will outline my work in this direction.
Overall it was a very exciting weekend for me at #ffwcw 2014.
The first weeks of my Google Summer of Code project were a little complicated, as I still had exams at university and was not really aware of what mesh networks were about. I also needed a little time to get to know networking and routing better, both practically and theoretically.
At the beginning, in order to understand quagga and babeld, I read a lot of documentation on routing, including theoretical papers and some RFCs, while also browsing the code of both babeld and quagga.
On the other hand, I have been able to experiment with mesh routing on the mesh network available at university. In order to use that network from home and to be able to test my programs at all times, I also established a VPN connection between the university and my home computer. By doing so, I can connect to the university's Babel network at any time. I have also come to understand the functioning of quagga and zebra and learned to install source-sensitive static routes on a mesh network.
After having spent much time reading the code of babeld, quagga, and babeld-in-quagga, I managed to use zebra's API in babeld and began to add support for source-sensitive routing. Currently, my code runs and segfaults proudly! I hope to see the first results of source-sensitive routing with my version of Babel by the end of the week at the latest.
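To illustrate what source-sensitive routing means, here is a small conceptual sketch (not babeld code, and the table entries are invented): the next hop is chosen by matching both the destination and the source of a packet, with more specific entries winning over less specific ones.

```python
import ipaddress

# Each entry: (destination prefix, source prefix, next hop).
# "::/0" matches any source address, as in classic destination-only routing.
table = [
    ("2001:db8:1::/48", "::/0",               "fe80::1"),
    ("2001:db8:1::/48", "2001:db8:aaaa::/48", "fe80::2"),  # source-specific
]

def lookup(dst, src):
    """Pick the entry matching both dst and src, preferring the most
    specific (dst prefix length, src prefix length) pair."""
    dst, src = ipaddress.ip_address(dst), ipaddress.ip_address(src)
    best, best_len = None, (-1, -1)
    for d, s, hop in table:
        d_net, s_net = ipaddress.ip_network(d), ipaddress.ip_network(s)
        if dst in d_net and src in s_net:
            key = (d_net.prefixlen, s_net.prefixlen)
            if key > best_len:
                best, best_len = hop, key
    return best

# Same destination, different next hop depending on the packet's source:
print(lookup("2001:db8:1::7", "2001:db8:aaaa::9"))  # fe80::2
print(lookup("2001:db8:1::7", "2001:db8:bbbb::9"))  # fe80::1
```

The real implementation of course lives in the routing daemon and the kernel, but the selection principle is the same: routes become (destination, source) pairs instead of destinations alone.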
At first, my goals were not really clear, but now I have precise short-, middle-, and long-term objectives. In brief, my short-term objective is to get source-sensitive routing running in Babel by the end of the week. After getting a working source-sensitive version of Babel, I will implement the same work in RIPng; RIPng is a quite simple protocol, and Juliusz and Matthieu told me it would be a good idea to offer it source-sensitive routing. Finally, once everything is tested and running fine, I will implement the source-sensitive commands in Babel and then complete the documentation of my work. In the end, the ultimate goal would be for this work to be included in the official Quagga repository.
If you want more details about the work I did, you can read my blog here: http://ariane.wifi.pps.univ-paris-diderot.fr/~olden/. I post an entry every week to keep you informed of the short-term progress.