Spectrum Analyzer for LibreRouter June Updates

What I have been working on

During the past weeks I had the chance to work on the spectrum analyzer in the following areas:

  • During the BattleMesh I had the chance to engage with several developers working on related topics:
    • Felix Fietkau is one of the developers of the ath9k module; we had a conversation to better understand the inner workings of the driver and how its output will serve the Spectrum Analyzer functionality.
    • Paul Fuxjäger from FunkFeuer, with whom I discussed some potential uses of the module, along with other collaborations related to it that could arise in the future.
  • I engaged with the FFT_eval project’s source code, which is used to decode radio i/q signals into easily representable values, and added a JSON output for the data (a small parsing sketch follows below). Instead of continuing our own fork of the project, I did this in the upstream project, promoting a single codebase. The merge request is currently pending: https://github.com/simonwunderlich/FFT_eval/pull/13 . Many thanks to Gui Irribarren and BrainSlayer from the DD-WRT project for providing most of this implementation.
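
To give an idea of how the JSON output could be consumed, here is a small Python parsing sketch; the field names are placeholders, since the exact schema depends on the pending pull request:

import json

# Hypothetical FFT_eval JSON output: one object per FFT sample.
# Field names are placeholders; the real ones depend on the pending PR.
sample_output = '''
[
  {"tsf": 1234567, "freq": 2437, "noise": -95, "rssi": -60,
   "data": [-90.5, -88.0, -85.2, -91.3]},
  {"tsf": 1234987, "freq": 2437, "noise": -95, "rssi": -62,
   "data": [-92.1, -89.4, -84.8, -90.7]}
]
'''

samples = json.loads(sample_output)
for s in samples:
    # Use the strongest FFT bin of each sample as a rough occupancy indicator.
    peak = max(s["data"])
    print("freq {} MHz: peak {:.1f} dBm (noise {} dBm)".format(
        s["freq"], peak, s["noise"]))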

Once I engaged with the LibreMesh project, I understood that one of the purposes of having the Spectrum Analyzer is to be able to do a frequency survey in the TVWS spectrum. This is very valuable because one proposal for using this part of the spectrum relies on a regional usage database: you ask for permission to use a frequency and the database has to authorize you.

So my other job has been to search for existing implementations of TVWS databases and, in particular, of the PAWS protocol (an IETF draft proposal for TVWS databases). I managed to find a team that is working on this (Prof. Karandikar from IIT Bombay and his OpenPAWS project) and we are talking to see if we can collaborate.

That’s a rough report on what has been happening during the last weeks.

What I’ll be working on

I’ll describe the architecture that I expect to implement in the upcoming weeks.

With the help of ubus, I will be working on an event-based architecture that involves the following parts:

The yellow parts in the diagram are the new components.

A brief description of the parts involved:
* Spectral Scan Manager: manages the ath9k state, recovers i/q data from the Atheros modules and hands it over through ubus
* Spectral Scan Decoder: an FFT_eval wrapper that will receive the Spectral Scan Manager’s i/q data and turn it into JSON (a rough sketch of this flow follows the list)
* Spectral Analysis Collector: a configurable daemon that will collect the Spectral Scan Decoder data for further analysis. This collection could be kept in memory or sent to a secondary server (like the OpenPAWS server)
* Visualization Module: will access the information handed over by the Decoder or the Collector (depending on which information we would like to access) and visualize it in a waterfall graph
* Spectrum Availability CLI Interface: a potential proposal (if time allows) to have a simpler interface that can be accessed from the command line. It could implement something similar to this: https://wiki.mikrotik.com/wiki/Manual:Spectral_scan
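
A rough Python sketch of how the Decoder could consume Manager events over ubus; the object and event names (spectral_scan.*) are made up for illustration, and the exact framing of the ubus listen output may differ:

import json
import subprocess

# Hypothetical event name; the real Spectral Scan Manager object/event
# names are not defined yet.
EVENT = "spectral_scan.samples"

# `ubus listen` prints one JSON object per event, keyed by the event name.
proc = subprocess.Popen(["ubus", "listen", EVENT],
                        stdout=subprocess.PIPE, universal_newlines=True)

for line in proc.stdout:
    try:
        event = json.loads(line)
    except ValueError:
        continue
    payload = event.get(EVENT)
    if payload is None:
        continue
    # Hand the raw i/q buffer to the FFT_eval wrapper (placeholder call)
    # and re-publish the decoded JSON on ubus for the Collector/Visualizer.
    decoded = {"decoded": True, "raw": payload}  # stand-in for FFT_eval output
    subprocess.call(["ubus", "send", "spectral_scan.decoded",
                     json.dumps(decoded)])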

For the next week, I’ll implement the Spectral Scan Manager, the Spectral Scan Decoder wrapper, and a simple visualization.

Spectrum Analyzer in the context of LibreRouter

Hello to all!

My name is Nicolas Pace and this is the first time I participate in GSoC for Freifunk.

For this opportunity I’m engaging with the LibreMesh community in the context of the LibreRouter project by implementing a Spectrum Analyzer for LibreMesh and also for LEDE/OpenWRT.

Spectrum analysis is a very powerful tool for anyone who wants to enhance the quality of their links, and it can also be used as a base for more complex functions like physical-layer diagnosis, or many other things that have been implemented in other firmwares.

 

What has been done already

During these last weeks I’ve had the chance to engage with the community, and also to deepen my understanding of the problem at hand.

Also, I’ve got a working prototype of the command-line interface, and prototype code that has been used to display that information.
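
The ath9k spectral scan is typically driven through its debugfs interface; here is a rough Python sketch of that procedure (the phy index and paths may differ per device, and this is an illustration rather than the exact prototype):

import subprocess

# debugfs paths for the ath9k spectral scan; the phy index may differ per device.
CTL = "/sys/kernel/debug/ieee80211/phy0/ath9k/spectral_scan_ctl"
RAW = "/sys/kernel/debug/ieee80211/phy0/ath9k/spectral_scan0"

def scan(interface="wlan0"):
    # Enable sampling during a normal channel scan.
    with open(CTL, "w") as f:
        f.write("chanscan")
    # Trigger a scan so the driver hops over the channels while sampling.
    subprocess.call(["iw", "dev", interface, "scan"])
    # Read the raw samples and stop sampling again.
    with open(RAW, "rb") as f:
        samples = f.read()
    with open(CTL, "w") as f:
        f.write("disable")
    return samples

if __name__ == "__main__":
    raw = scan()
    print("captured {} bytes of spectral samples".format(len(raw)))
    # The raw buffer can then be fed to FFT_eval for decoding.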

Next steps

  • To create a lightweight service that shares the information with the web
  • To make a nice interface for the output

Thanks for the opportunity to join you, and I hope to deliver as expected!

Bringing a Little SDN to LEDE Access Points

Hi everyone!

My name is Arne Kappen and this is the beginning of my second participation in GSoC for Freifunk.

Last year, I implemented an extension for LEDE’s netifd which enabled network device handling logic to be outsourced to an external program and still integrate with netifd via ubus. My work included a proof-of-concept implementation of such an external device handler allowing the creation and configuration of OpenVSwitch OpenFlow switches from the central /etc/config/network file [1].

Sticking with Software-Defined Networking (SDN), this year I am going to provide the tools to build SDN applications which manage wireless access points via OpenFlow. The main component will be establishing the necessary message types for the control channel. I am going to extend LoxiGen to achieve this. In the end, there should be OpenFlow libraries for C and Java for the development of SDN applications and their agents running on LEDE.
I will also write one such agent and a control application for the ONOS platform to test my implementation.

My ideal outcome would be a REST interface putting as many of the AP’s configuration parameters under the control of the SDN application as possible. Such a system could provide comfortable management of a larger deployment of LEDE access points and be a stepping stone for more complex use cases in the future.
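
To make that more concrete, a controller-side script could look roughly like the following; the endpoint and payload are purely hypothetical and do not reflect any existing ONOS or LEDE API:

import json
import urllib.request

# Hypothetical REST endpoint and payload: invented for illustration only.
AP_ID = "of:0000000000000001"
URL = "http://controller.example:8181/apps/ap-config/{}".format(AP_ID)

config = {
    "ssid": "freifunk",
    "channel": 6,
    "txpower": 17,
}

req = urllib.request.Request(
    URL,
    data=json.dumps(config).encode(),
    headers={"Content-Type": "application/json"},
    method="PUT",
)

with urllib.request.urlopen(req) as resp:
    print("AP reconfigured:", resp.status)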

I am looking forward to working with Freifunk again. Last year’s GSoC was a great experience during which I learned a lot.

[1] Last Year’s GSoC Project

Implementing Pop-Routing in OSPF

Hello everyone.

I’m Gabriele Gemmi, you may remember me for… Implementing Pop-Routing [1].
This is the second time I participate in GSoC and first of all I’d like to thank the organization for giving me this opportunity.
Last year I implemented PR for OLSR2. The daemon, called Prince [2], is now available in the LEDE and the OpenWRT feeds.

What is Pop-Routing

PR is an algorithm that calculates the betweenness centrality [3] of every node in a network and then uses these values to calculate the optimal message timers for the routing protocol on each node. In this way a central node will send messages more frequently and an outer one less frequently.
In the end the overall overhead of the network doesn’t change, but convergence gets faster.
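
To give a rough idea of the principle (this is a simplified sketch using networkx, not the exact formula Prince implements), the timers can be derived from betweenness centrality so that central nodes get shorter timers while the total message rate stays constant:

import math
import networkx as nx

def pop_routing_timers(graph, default_timer=5.0):
    """Scale each node's message timer by centrality, keeping the total
    message rate (and thus the overall overhead) roughly constant."""
    bc = nx.betweenness_centrality(graph, endpoints=True)
    weights = {n: math.sqrt(bc[n]) or 1e-9 for n in graph}  # avoid div by zero
    total_rate = len(graph) / default_timer        # sum of 1/T with default timers
    weight_sum = sum(weights.values())
    # Nodes with higher centrality get a higher rate, i.e. a shorter timer.
    return {n: weight_sum / (total_rate * weights[n]) for n in graph}

if __name__ == "__main__":
    g = nx.path_graph(5)   # toy topology: the middle node is the most central
    for node, timer in sorted(pop_routing_timers(g).items()):
        print("node {}: timer {:.2f} s".format(node, timer))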

Objectives

My project focuses on extending Prince’s functionality to use Pop-Routing with OSPF. I decided to work with BIRD, since it’s written in C and it’s already available for OpenWRT/LEDE.
In order to do this I need to develop 2 components:
— A plugin for BIRD that exposes the OSPF topology in NetJSON and allows updating the timers
— A plugin for Prince that communicates with the BIRD plugin

I already started developing the former [4], and I’m looking forward to implementing the latter.
I’ll keep reporting my updates here, so stay tuned if you wanna hear more.

Cheers, Gabriele

[1]: https://blog.freifunk.net/2016/05/23/implementing-poprouting/
[2]: https://github.com/AdvancedNetworkingSystems/poprouting/
[3]: https://en.wikipedia.org/wiki/Betweenness_centrality
[4]: https://github.com/AdvancedNetworkingSystems/bird

GSoC 2017 – RetroShare mobile improvements

Hi readers! I am Angela Mazzurco and I am very grateful to the GSoC community (Google, Freifunk, RetroShare etc.) for giving me the possibility to participate as a GSoC student this year!
I study Architecture and Engineering at Pisa University, and here in Pisa I am involved in the local community network (eigenNet/Ninux.org).
Thanks to the local community I got to know RetroShare, and now I use it in my daily life when I am in front of my laptop. Remote communication today has very often been displaced from the personal computer to the smartphone, and because of this I often have to downgrade to less ethical and less secure communication platforms, because most of my friends are reachable only on their smartphones.
This unfortunate situation inspired me to help develop RetroShare for mobile phones.
In this direction the RetroShare community has already made some effort, but the RetroShare Android app is still in an early stage and needs much improvement.
I’ll give my contribution to this big project, trying to solve issues with the interface and helping to develop it, to make it user friendly and easy to use for all users.
During the community bonding period I started to prepare the development environment with suggestions from my mentors: I have been meeting them remotely on RetroShare, I have successfully compiled RetroShare for desktop, and now I am preparing the toolchain to compile RetroShare for Android, which is not as easy as it may seem.
The application interface is written in QML, a language that is part of the Qt framework, so my first steps have been to set up the Qt Creator IDE and to create my own fork of the RetroShare project [0].
The app communicates with the RetroShare API over Unix sockets to get its information, and with the native Android operating system through JNI (Java Native Interface).
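
Just to illustrate the mechanism (this is not the actual libresapi wire protocol; the socket path and request shape below are placeholders), talking to a JSON API over a Unix domain socket looks roughly like this in Python:

import json
import socket

# Placeholder path and request shape: they illustrate the mechanism only.
SOCKET_PATH = "/tmp/retroshare.sock"

request = {
    "token": 1,
    "path": "/peers",       # hypothetical resource
    "method": "GET",
}

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect(SOCKET_PATH)
sock.sendall((json.dumps(request) + "\n").encode())

# Read one newline-terminated JSON response.
response = b""
while not response.endswith(b"\n"):
    chunk = sock.recv(4096)
    if not chunk:
        break
    response += chunk
sock.close()

print(json.loads(response or b"{}"))
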
After getting the toolchain working I’m going to start improving the QML interface: adding features, improving the integration with the Android operating system, improving usability, and fixing a bunch of bugs.
Cheers!

GSoC 2017 – RetroShare via Freifunk

Hello, my name is Stefan Cokovski and I’m an undergraduate student at the Faculty of Computer Science and Engineering, Saints Cyril and Methodius University of Skopje. My field is Computer Networks Technologies.

Firstly, I would like to thank Google and the team responsible for organizing GSoC. GSoC is a wonderful opportunity for many students all over the world to gain some real experience working on open-source projects, but also to expand their network with new friends and potential colleagues. I would also like to thank Freifunk (for taking many projects related to computer networks under their wing and for also supporting the RetroShare project) and the lead developers (and my mentors) of RetroShare for being here for me during this community bonding period, answering my questions and helping me to improve my ideas. I’m sure they will continue to help me during the later parts of GSoC.

Before I tell you what my project involves, I would like to introduce you to what exactly RetroShare is and maybe convince you to start using it (if you don’t use it already) and possibly join the development process.

RetroShare is a decentralized, private and secure communication and sharing platform which provides many interesting features like file sharing, chat, messages, forums and channels. RetroShare is a free and open-source project, completely free of any costs, ads and terms of service. RetroShare is available on several operating systems, including various GNU/Linux distributions, FreeBSD, Microsoft Windows and Mac OS X.

Sounds interesting? Read more.


GSoC: Improving nodewatcher data representation capability (update 1)

¡Hola! 
I am a student of computer science, but most of my knowledge comes from my DIY projects. I am a jack-of-all-trades kind of guy; I have tinkered with low-level stuff like add-ons and FPGAs, but I have also worked with everything from the UE4 game engine to Blender and other high-level programs. I like creating visual things such as music visualizations, graphs and other more interactive ways of displaying data. This summer I will help improve the visualization capabilities of nodewatcher.


GSoC 2017 Attended Sysupgrade

Hello, my name is Paul Spooren and I’ll be working on attended sysupgrades this Google Summer of Code. I’m 24 years old and studying computer science at the University of Leipzig. With this blog post I’ll try to explain my project, its advantages and its challenges.

Topic change from captive portals

When I applied to GSoC my first application covered the implementation of “Captive Portals” for LibreMesh. After discussing details with my mentors we decided to switch the project.
The main shortcomings were the following:
* Captive portals need testing on all kinds of devices: Apple devices use a different approach than Android, Linux distributions differ, and so do all versions of Microsoft Windows. Testing would take too much effort to provide a stable solution.
* Captive portals usually intercept HTTP traffic and change its content with a redirect to the login provider’s splash page. This does not work with encrypted traffic (HTTPS) and would result in certificate errors.

Discussing what would be of generic use to OpenWRT/LEDE and LibreMesh, we came up with the topic of a simple sysupgrade solution and settled on that.

What are attended sysupgrades?

Performing updates on routers is quite different from a full Linux distribution. It’s not always sustainable to do release upgrades via a package manager. Instead it’s usually required to re-flash the system image. Depending on the installed packages an image rebuild may be too complex for regular users. A more convenient way is needed.

The main idea is to provide a simple function within the web interface to automatically download a custom sysupgrade image with all currently installed packages preinstalled.
An opt-in option would check for new releases and notify via luci(-ng) or the command line.

This approach would also help to upgrade a router without a full computer at hand. The web interface can be accessed from mobile phones, and as no complicated manual image download is required, all users can perform sysupgrades on their own.

Distributions like LibreMesh may have a more frequent package release cycle, and devices may not offer opkg due to limited flash storage. The simple sysupgrade approach could be used as an opkg replacement for these special cases and keep devices up to date.

How does it work?

The web interface will have a new menu entry called “Attended Upgrade”. The page sends the currently installed release to the server and checks its response. If an upgrade is available a notification will be shown. A click on the download button sends a request to the server and downloads the image. Another click uses the sysupgrade mechanism and installs the image. After the reboot the system should run as expected with all previously installed packages included.

This project will implement an “image as a service” server side which provides custom-built images depending on the installed packages. A JSON API will enable routers to send requests for custom images. Built images will be stored and reused for other requests with the same package selection and device model.
A simple FIFO queue will manage all build requests. Created images will be stored by a priority queue system so the most requested combinations are always in the cache.
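
To make the caching and queueing idea more concrete, here is a much simplified Python sketch of the server-side handling; names and paths are illustrative, not the final implementation:

import hashlib
import json
import os
import queue

build_queue = queue.Queue()                    # FIFO of pending ImageBuilder jobs
STORE = "/var/cache/attended-sysupgrade"       # where finished images are kept

def request_key(req):
    """Device model plus sorted package set identify a reusable image."""
    normalized = json.dumps(
        {"target": req["target"], "subtarget": req["subtarget"],
         "machine": req["machine"], "packages": sorted(req["packages"])},
        sort_keys=True)
    return hashlib.sha256(normalized.encode()).hexdigest()

def handle_image_request(req):
    key = request_key(req)
    image = os.path.join(STORE, key + "-sysupgrade.bin")
    if os.path.exists(image):
        return {"status": "ready", "url": "/download/" + key}
    build_queue.put((key, req))                # a worker runs the ImageBuilder
    return {"status": "queued", "queue_position": build_queue.qsize()}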

Challenges

* With new releases packages may be renamed. This can be due to a split after growing in size as more and more features are added, or because different versions of a tool exist. The update server has to know about all renamed packages and create an image with all needed programs. Therefore a replacement table will be created which can be managed by the community. Merges, splits and new naming conventions will be covered. To make updating easy, the server will try to handle changed names as automatically as possible. If there are different possibilities to choose from, there will be a menu in the web interface.

* Currently LuCI is the de facto web interface of LEDE/OpenWRT. Eventually it will be replaced by luci-ng with a modern JavaScript framework. All router-side logic has to be easily portable to the new web interface.

Implementation details

The main logic will happen within the browser and so can use secure HTTPS to communicate with the update server. The user’s browser relays the communication between router and upgrade server. The following diagram tries to illustrate the idea.

Once opened, the upgrade view will ask the router via an rpcd call for the installed release and send the version to the update server as an *update availability request*. The server will answer with an *update availability response* containing information about the update if one exists, or a simple status 204 (No Content) code. If a new release exists the web interface will perform another rpcd request to get details of the device, installed package versions and flash storage. The information is then combined and sent as a JSON request to the update server as an *image request*.

The update availability request should look like this:

{
    "distro": "LEDE",
    "target": "ar71xx",
    "subtarget": "generic",
    "version": "17.01.0",
    "packages": {
        "opkg": "2017-05-03-outdated",
        ...
    }
}

The update server will check the request and answer with an *update availability response*:

{
    "version": "17.01.1",
    "packages": {
        "opkg": "2017-05-04-new",
        "ppp-mod-pppoe2": "2.0"
    },
    "replacements": {
        "ppp-mod-pppoe": "ppp-mod-pppoe2"
    }
}

The response contains the new release version and the packages that will be updated. Note that even if there is no new release, packages could still be updated via a sysupgrade. The idea is that devices without opkg installed can receive package updates as well.

All changes will be shown within the web interface to let the user know what will change. If the user accepts the upgrade, a request will be sent to the server. The image request would look something like this:

{
    "distro": "LEDE",
    "version": "17.01.0",
    "revision": "48d71ab502",
    "target": "ar71xx",
    "subtarget": "generic",
    "machine": "TP-LINK CPE510/520",
    "packages": {
        "ppp-mod-pppoe2": "2.0",
        "kmod-ipt-nat": "4.9.20-1",
        ...
    }
}

Once the update server receives the request it will check whether the image was created before. If so, it will deliver the upgrade image straight away. If the request (meaning the device and package combination) is seen for the first time, a couple of checks will be done to see whether the image can be created. If all checks pass, a wrapper around the LEDE ImageBuilder will be queued, and a build status API is polled by the web interface. Once the image is created, a download link is provided.
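
The build step itself essentially boils down to invoking the LEDE ImageBuilder’s make image target; a rough sketch of what the queued wrapper could do (paths and the profile name are illustrative):

import subprocess

def build_image(imagebuilder_dir, profile, packages, bin_dir):
    """Invoke the LEDE ImageBuilder to produce a sysupgrade image.
    Profile names and paths are illustrative."""
    cmd = [
        "make", "-C", imagebuilder_dir, "image",
        "PROFILE={}".format(profile),
        "PACKAGES={}".format(" ".join(packages)),
        "BIN_DIR={}".format(bin_dir),
    ]
    subprocess.check_call(cmd)

# Example: a hypothetical job taken from the build queue.
build_image(
    "/opt/lede-imagebuilder-17.01.1-ar71xx-generic",
    "cpe510-520",                            # illustrative profile name
    ["ppp-mod-pppoe2", "kmod-ipt-nat", "luci"],
    "/var/cache/attended-sysupgrade/example",
)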

In the unlikely event of an unsolvable package problem that the replacement table can’t fix by itself, the user will be asked to choose from a list. The new combination of packages will be sent to the server as a new request, resulting in a sysupgrade image. This approach still needs some evaluation to see whether it is practical and really needed.

Using the ImageBuilder offers a generic way to provide sysupgrades for different distributions. The ImageBuilder feeds can be extended to include distribution-specific packages like the LibreMesh package feed.

The replacement table could be implemented as follows:

# ./lede/replacements/17.01.1
{
    "libmicrohttpd": {
        "libmicrohttpd-no-ssl": {
            "default": true
        },
        "libmicrohttpd": {}
    },
    "openvpn": {
        "openvpn-openssl": {
            "default": true
        },
        "openvpn-mbedtls": {
            "installed": [
                "polarssl",
                "mbedtls"
            ]
        },
        "openvpn-nossl": {}
    },
    "polarssl": {
        "mbedtls": {
            "default": true
        }
    }
}

libmicrohttpd was replaced by libmicrohttpd-no-ssl (installed as default) and libmicrohttpd.
openvpn was split into various packages depending on the installed crypto library: openvpn-openssl is the default, while openvpn-mbedtls is only installed if mbedtls (or its prior name, polarssl) was installed before.
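
A rough Python sketch of how the server could evaluate such a table: for every installed package that has an entry, pick the candidate whose installed condition matches, otherwise fall back to the default (the function and data shapes are illustrative, not the actual server code):

def resolve_replacements(installed, replacements):
    """Map packages that were renamed or split to their successors.
    `installed` is the set of packages on the router, `replacements`
    the parsed table for the target release (see the JSON above)."""
    result = set(installed)
    for old_pkg, candidates in replacements.items():
        if old_pkg not in result:
            continue
        result.discard(old_pkg)
        chosen = None
        for new_pkg, rule in candidates.items():
            # Prefer a candidate whose "installed" condition is satisfied.
            if any(p in installed for p in rule.get("installed", [])):
                chosen = new_pkg
                break
            if rule.get("default"):
                chosen = chosen or new_pkg
        if chosen:
            result.add(chosen)
    return sorted(result)

installed = {"openvpn", "polarssl", "vim"}
table = {"openvpn": {"openvpn-openssl": {"default": True},
                     "openvpn-mbedtls": {"installed": ["polarssl", "mbedtls"]},
                     "openvpn-nossl": {}},
         "polarssl": {"mbedtls": {"default": True}}}
print(resolve_replacements(installed, table))
# -> ['mbedtls', 'openvpn-mbedtls', 'vim']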

For better readability the YAML format could be preferred.

LibreMesh introduced a simple way to manage community-specific configurations. This configuration method is flexible enough for other communities as well and should be integrated into the update server. An optional parameter could contain the profile name, which would be automatically integrated into new images.

"community": "quintanalibre.org.ar/comun/",

The parameter could also contain a full domain which leads to the needed files; this feature needs more evaluation.

Possible features

* The current design is an attended upgrade triggered by and dependent on the web interface. A possible feature would be to add this logic to the command line as well.

* Once the sysupgrade is possible via the shell, unattended sysupgrades become possible. A testing and a release channel could enable unattended upgrades for tested (device-specific) images only. If an image works after an attended upgrade it could be tagged and offered via the release channel.

* Mesh protocols may change and outdated routers may lose connectivity. A possible solution to upgrade devices losing contact could be to have the outdated routers automatically log in to updated routers’ open access points, perform an upgrade and reconnect to the mesh.

Final Thoughts?

All thoughts above are not final and are more of an RFC. I’m very happy to receive comments and criticism. My goal is to have a generic update service that all communities and LEDE/OpenWRT itself can benefit from.
Feel free to contact me at paul [a-t) spooren (do-t] de or on freenode/matrix as aparcar.

GSoC 2017-netjsongraph.js: visualization of NetJSON data

Project intro

NetJSON is a data format based on JSON (What is NetJSON?), and netjsongraph.js (GitHub) is a visualization library for it. This library has attracted quite some interest from around the world, but it has some shortcomings, such as a lack of tests and of a modern build process.
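
For readers who have never seen the format, a minimal NetworkGraph document looks roughly like the following (loaded here with Python); see the NetJSON specification at netjson.org for the authoritative field list:

import json

# A minimal NetJSON NetworkGraph document (see netjson.org for the spec).
network_graph = json.loads("""
{
    "type": "NetworkGraph",
    "protocol": "olsr",
    "version": "0.6.6",
    "metric": "etx",
    "nodes": [
        {"id": "10.0.0.1"},
        {"id": "10.0.0.2"},
        {"id": "10.0.0.3"}
    ],
    "links": [
        {"source": "10.0.0.1", "target": "10.0.0.2", "cost": 1.0},
        {"source": "10.0.0.2", "target": "10.0.0.3", "cost": 1.5}
    ]
}
""")

# netjsongraph.js consumes exactly this kind of structure to draw the graph.
print(len(network_graph["nodes"]), "nodes,",
      len(network_graph["links"]), "links")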

Therefore our goal is to improve the features and development workflow of netjsongraph.js. To be specific:

  • make it faster with large numbers of nodes and links
  • make it more mobile friendly
  • use modern tools that are familiar to JS developers, so they can contribute more easily
  • add automated tests so we can be more confident of introducing changes
  • get rid of complex features
  • make it easy to extend, so users can experiment and build their own derivatives
  • make it easy to redraw/update the graph as new data comes in, at least at the library level we should support this
  • geographic visualization (like the nodeshot project: https://ninux.nodeshot.org/)

Arrangement

About me

I’m a graduate student from China and also a front-end developer with more than one year of working experience. I am now interested in data visualization and have already made several visualization projects of network structure. I was lucky that my proposal was selected by Freifunk for Google Summer of Code 2017. It’s a great opportunity to participate in a promising open source project. Thanks to my mentor’s guidance, I hope I can do an excellent job. I have laid out the following plan:

Tasks and Schedule

  • create a new branch: build the project with yarn, Webpack and Babel. 1 week
  • build a (mostly) backward compatible version. 1 week
  • draw a demo graph using canvas or WebGL. 2 weeks
  • make an example page to show visualization results. 1 week
  • add tests (using Ava and XO) and CI. 1 week
  • discuss and design the visualization view. 1 week
  • import and integrate with OpenStreetMap or Mapbox to make a map. 1 week
  • visualization implementation. 8 weeks
  • beautify the visualization. 1 week
  • improve visualization and tests. 4 weeks
  • design an interface for plugins (to make the library extensible). 2 weeks

GSoC 2017 – wlan slovenija – HMAC signing of Nodewatcher data and IPv6 support for Tunneldigger

Howdy!

I am a student at the Faculty of Computer and Information Science in Ljubljana, Slovenia. Like (almost) every “computer enthusiast” I liked gaming and later found myself developing an OpenGL graphics engine. All engrossed in C++ and all sorts of algorithmic challenges, I slowly came to realize that something was missing: my knowledge of anything network related. So, combining two of my other interests, information security and an inexplicable love of tunnels, I applied to Google Summer of Code with the following ideas. As a participant in this year’s Google Summer of Code I will develop some new goodies for two projects of the wlan slovenija open wireless network.

The first one is for nodewatcher, which is an open source system for planning, deployment and monitoring of the wireless network. It is a centralized web interface which is also used for generating OpenWrt-based firmware images for specific nodes. After flashing the wireless router with the generated image, it just needs to be fed some electricity and it automatically connects to the network using VPN, or wirelessly if there is an existing node nearby. Nodewatcher then collects all the data about a node's performance, either by connecting to the nodes to obtain data or by the nodes pushing their data to nodewatcher. This data is not sensitive, but we still need to worry about it being manipulated or faked while in transit between the node and nodewatcher. The problem is that all the monitoring reports are currently unsigned. This poses a security risk in the form of a spoofing attack, where anyone could falsify the messages sent to nodewatcher. The solution is to assign a unique nodewatcher signing key to every node. The node will then sign the monitoring output using a hash function in HMAC (hash-based message authentication code) mode. This means that a computed "signature" is sent along with every message, and nodewatcher can check whether the data was altered in any way. In the event of a signature verification failure a warning will be generated within the nodewatcher monitoring system. This is important because it assures the integrity of the received data and inspires confidence in using it to plan the deployment of new nodes in the future.
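
Since the signing scheme is essentially standard HMAC, here is a minimal Python sketch of the idea; key distribution and the exact message layout are up to the actual nodewatcher implementation:

import hashlib
import hmac

def sign_report(report_body, node_key):
    """Compute the HMAC-SHA256 signature a node attaches to its report."""
    return hmac.new(node_key, report_body, hashlib.sha256).hexdigest()

def verify_report(report_body, signature, node_key):
    """nodewatcher side: recompute and compare in constant time."""
    expected = sign_report(report_body, node_key)
    return hmac.compare_digest(expected, signature)

key = b"per-node-secret-from-nodewatcher"
body = b'{"uptime": 12345, "load": 0.42}'

sig = sign_report(body, key)
assert verify_report(body, sig, key)             # untouched report verifies
assert not verify_report(body + b"x", sig, key)  # tampered report fails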

The second contribution will be to Tunneldigger, which is a simple VPN tunneling solution based on L2TPv3 tunnels. It is used to connect nodes which do not have a wireless link between them into a common network. Using existing network connectivity it creates L2TP tunnels between nodes. The current limitation is that tunnels can only be established over IPv4. This is a problem because, due to the dramatic growth of the internet, the depletion of the pool of unallocated IPv4 addresses has been anticipated for some time now. The solution is to use its successor, IPv6. Since the tunnels are already capable of carrying IPv6 traffic, the capability of establishing them over IPv6 will be developed. Tunneldigger will also support a mixed IPv4/IPv6 environment where both server and client have some form of IPv6 connectivity. That way Tunneldigger will finally be made future proof!

Reports about my work will be available on the developers mailing list.

Yay for the free internet!