OpenWLANMap App Final Update

Hi,

This is my final update for GsoC.

In this blog post I would like to summarize all the work I have done in the last three months, as well as the remaining problems and future plans.

An introduction, my progress and further information can be found under [0], [1] and [2].

The new app is compatible with the old app in all basic functionality [3]. Besides that, the code passes the Google checkstyle validation, contains complete Javadoc and clear interfaces, and the app's performance is partly improved.

Final architecture and app design:

Basic changes in comparison to the old app

  • The old, broken UI is replaced by a newly designed UI.
  • Storing: The old app uses a non-standard database, writing multiple access points as raw bytes into a file, which stores redundant data and is hard to maintain. The new app uses SQLite for storing data in order to make it easier to maintain and extend. There are no redundant duplicates of access points in the database, since the BSSID is used as primary key and an entry is only updated if the new RSSI is higher. Storing is done not by the scan thread but by a separate thread (WifiStorer), which reads from a blocking queue (WifiQueue). A list of 50 APs (the number can be raised if necessary) is put into the blocking queue as one item, and the storer thread blocks while the queue is empty, to prevent it from writing to storage the whole time. A minimal sketch of this producer/consumer pattern is shown after this list.
  • Uploading: As in the old app, uploading depends on the user's settings: manual, automatic on any internet connection, or automatic on wireless connections only. The user can also set the number of APs that triggers an automatic upload, from 5,000 to 50,000, and a manual upload can only be triggered with at least 250 APs. The new app uploads at most 5,000 APs at once in order to prevent out-of-memory problems on devices with little RAM. A message containing the upload summary, the new rank or an error is reported back to the user. The WifiUploader uses UploadingQueryUtils for openwifi.su and can be changed quickly if the backend changes.
  • Scanning: The scan period is set dynamically, depending not only on speed but also on night mode, and I am working on movement detection based on sensors. Every 2 s by default, the scanner thread sends WifiLocator a request for the position and the scanned Wi-Fi networks. WifiLocator uses GPS to determine the position; if there is no GPS fix, the scanned Wi-Fi networks are used to determine the user's position, something that no longer works in the old app. The method used for locating can be shown as the color of the big overlay number, as configured in the settings.
  • Resources are managed and can be controlled through the user's options (kill the app on low battery, after a long time without GPS, etc.). While the old app checks for a missing GPS fix on every scan, the new app starts a separate resource-checking thread only if the user configures it.
  • The old app exports only the user's own BSSID and puts the export file (and expects the import file) in external storage by default. The new app allows the user to import/export the account with team and tag information, as well as to reset all settings back to the defaults. The user can browse for the import file and save the export file to any writable place in storage.
  • A map of all data from openwifi.su and a map of the user's contributed data are integrated directly into the app (using osmdroid), as well as a ranking list.
  • The minimum API level is raised to 19, which currently covers over 96% of devices [4]. Permissions are checked at runtime as required.
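
The storing pattern described above is language-agnostic; the app itself implements it in Java on top of SQLite, but a minimal Python sketch of the idea (the names WifiQueue/WifiStorer and the batch size of 50 are taken from the description above, the dict-based store is just a stand-in) could look like this:

# Minimal sketch of the WifiQueue/WifiStorer idea described above;
# the real app implements this in Java with SQLite as the store.
import queue
import threading

BATCH_SIZE = 50                 # APs per queue item, as described above
wifi_queue = queue.Queue()      # the blocking "WifiQueue"
store = {}                      # stand-in for the SQLite table, keyed by BSSID

def wifi_storer():
    """Storer thread: blocks while the queue is empty, then writes one batch."""
    while True:
        batch = wifi_queue.get()        # blocks until a batch is available
        for ap in batch:
            # BSSID is the primary key; only keep the strongest RSSI seen.
            old = store.get(ap["bssid"])
            if old is None or ap["rssi"] > old["rssi"]:
                store[ap["bssid"]] = ap
        wifi_queue.task_done()

def on_scan_result(access_points):
    """Called from the scan thread; hands over batches of BATCH_SIZE APs."""
    for i in range(0, len(access_points), BATCH_SIZE):
        wifi_queue.put(access_points[i:i + BATCH_SIZE])

threading.Thread(target=wifi_storer, daemon=True).start()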

What I learned

I learned a lot about Android development.

  • Life cycles: The UI and the service communicate via LocalBroadcast; the BroadcastReceiver has to be registered and unregistered in onResume/onPause, as does the SettingPreferenceListener, to get proper lifecycle management in the activity.
  • Using LocalBroadcast instead of a global broadcast to keep data inside the app only.
  • From API 23 on, “dangerous permissions” have to be requested at runtime. Many system flags and parameters differ between API versions, which requires a lot of version checks at runtime.
  • Working with and managing a service with a lot of parallel processes.
  • Osmdroid: an open-source library for working with OSM whose API is compatible with the Google Maps API.
  • SQLite with the Room library.
  • etc. 

Also, I learned how important architecture is: I jumped into coding too fast, and my mentor had to stop me and give me some helpful advice. We redesigned the architecture with a controller component in the center. Every other component should only do its own job and communicate with the controller, not with other components directly, which makes it easy to extend or replace any component.

    Difficulties I met

It was hard to work on the app while having no access to the backend. I had to figure out all the APIs by analyzing the old app, which is not nicely documented or implemented either. Furthermore, the backend is quite unstable and sometimes unreachable. Another problem was testing: since the app works with collected Wi-Fi access points, testing and debugging at home was very hard.

    Future plan

There are still some points in the app's performance that I want to optimize further. I have already started working with the Android sensors to detect movement and scale the scan interval more effectively, to save resources, since Wi-Fi scanning and GPS are two of the services that drain the battery the most.

The app is currently only in development mode since I don't have a Google Play Store account yet, but as soon as I do, I will release it. Until then, if you want to try it, an .apk can be downloaded here [5].

    Acknowledgement

Many thanks to the Freifunk community and my mentor Jan-Tarek Butt for this amazing opportunity. Even though there are still some small things to do and fix, I am very glad that a new wardriving app is coming soon for openwifi.su. Many thanks to the Google Summer of Code team for making this happen.

    [0] https://blog.freifunk.net/2018/05/14/introduction-openwlanmap-app/

    [1] https://blog.freifunk.net/2018/06/10/openwlanmap-app-update-1/

    [2] https://blog.freifunk.net/2018/07/09/openwlanmap-app-update-2/

    [3] https://github.com/openwifi-su/OpenWLANMap-App

    [4] https://developer.android.com/about/dashboards/

    [5] https://androidsmyadventure.wordpress.com/2018/06/03/openwlanmap/

 

Meshenger – P2P local network messenger – final update

Meshenger is meant to be an open-source, P2P audio and video communication application that works without centralized servers, thus without a connection to the internet, does not need DHCP servers and can be used in LAN networks such as Freifunk community networks.

It was brought to life to demonstrate uses of such networks beyond simple internet access, as well as to explore the decentralized use of WebRTC in conjunction with IPv6.

I spent the last few weeks polishing and improving my project, getting it to a usable and stable state.

An APK with version 1.0.0 can be found here, as well as the whole source code.

In the last month I fixed some bugs, like a wrong serialization of IPv6 addresses, making the phone ring even with the screen off, preventing duplicate contact entries, preventing the app from freezing, and some more.

Of course, the app gained some new features, including a ‘settings’ page with the language, the username etc., additional information for each contact, the possibility to share contacts through third-party messengers or a QR code, ignoring calls from unsaved contacts, and several more.

Oh, and if you suddenly dislike someone, you can now simply delete them.

Settings
Contact options


The app now has an ‘about’ page containing some meta-data about Meshenger as well as the license:

About page

 

I extracted a lot of hard-coded strings in order to make it easier to translate the app into different languages.

 

As of now, the plan for the future is to implement profile photos, file transfer and asynchronous messaging.

All in all, I would conclude that Meshenger was a successful project and reached most of its goals.

It gave me the chance to dive into new subjects and learn a lot about VoIP and IPv6 as well as get to know the Freifunk community and learn about other interesting ideas.

GSoC 2018 – Kernel-space SOCKS proxy for Linux – Final

Short description

The original plan was a full kernel-space SOCKS proxifier, but that would have been a bit too complex for the actual goal: a faster TCP proxy. Then I found a very elegant solution to the problem: eBPF sockmap support. There is an API for redirecting packets between sockets in kernel space using a sockmap eBPF program. I decided to extend my shadowsocks-libev fork with eBPF support. The disabled encryption already gives some additional performance, so for anyone already using this fork, there is now a new option to get even more performance.


GSoC 2018: qaul.net changes and experiences (final report)

This is my final report for Google Summer of Code, working on the userspace, backend-agnostic routing protocol for qaul.net. Or, alternatively titled: how not to go about writing a userspace, backend-agnostic routing protocol (in general).

The work that was done

If you've been reading my first three blog posts, you will know that we had some issues designing and coming up with plausible ways for such a routing core to interact with network layers. The biggest challenge is the removal of ad-hoc Wi-Fi mode from Android, which means an app requires root to provide its own kernel module for that. Before specifying what I would be working on this summer, we had a very idealised view of what a routing protocol could depend on, making a lot of assumptions about the availability (i.e. connection reliability) and usability of WiFi Direct, and we were negatively surprised when we ran into various issues with the WiFi Direct standard.

Secondly, I built a prototype that uses Bluetooth mesh networking to allow multiple phones to communicate, which has given us much better results from the beginning. Connections are more stable, although their range is more limited than with WiFi. It does, however, come with the benefit of power savings.

These prototypes will serve as a good base to play around with larger networks and more devices but won't end up being part of the qaul.net code base. The code written to remain in qaul.net is relatively little. There is the routing-core repository, which provides a shim API between a generalized routing adapter and a generic network backend, which can be Bluetooth, WiFi Direct, or even ad-hoc or Ethernet. We ended up not focusing on this code very much because there were too many open questions about the technologies at hand to proceed with confidence.

The code for both prototypes is available here; the routing core shims can be found here.

What wasn’t done

We didn't end up writing a userspace, network-agnostic routing protocol along the lines of B.A.T.M.A.N. V. This is very unfortunate and probably comes down to the fact that, when Summer of Code started, we had only worked with WiFi Direct in theory, making a lot of assumptions that were ultimately wrong (and based on the way ad-hoc works).

The next steps

We will proceed with Bluetooth meshing as our primary network backend, where we still have to figure out a few questions about the captive portal functionality, how to subdivide a network into smaller chunks and how moving between subnetworks will work. Bluetooth meshing isn't exactly made for what we're trying to do, but it's a close approximation.

When it comes to the actual qaul.net code, we need to write a Bluetooth mesh adapter which plugs into the routing core, at which point we can start testing the protocol layout that we designed and work on the actual routing heuristics. The groundwork for this is largely done, based mostly on the B.A.T.M.A.N. protocol documentation.

Acknowledgements

I want to thank the Freifunk organisation and community, and my mentor Mathias, who worked with me on figuring out how to get around the problems we encountered. We managed to get a good step closer to moving qaul.net away from ad-hoc networking, even though we didn't reach all the goals we set out for. Finally, I would like to thank Google for the Summer of Code, for its efforts during all these years and for its commitment to the development of open-source software.

The Turnantenna – Final evaluation update

We are at the end of the journey. Today is the last day of the 2018 version of the Google Summer of Code.

So, here is what I have done during this month of hard (and hot) work!

State Machine

The state machine presented in the previous article has evolved into a newer and more complete version. The whole machine is defined by the following states and transitions:

# In controller.py

class Controller(object):
    states = ["INIT", "STILL", "ERROR", "MOVING"]
    transitions = [
        {"trigger": "api_config", "source": "INIT", "dest": "STILL", "before": "setup_environment"},
        {"trigger": "api_init", "source": "STILL", "dest": "INIT", "after": "api_config"},
        {"trigger": "api_move", "source": "STILL", "dest": "MOVING", "conditions": "correct_inputs",
         "after": "engine_move"},
        {"trigger": "api_move", "source": "STILL", "dest": "ERROR", "unless": "correct_inputs",
         "after": "handle_error"},
        {"trigger": "api_error", "source": "STILL", "dest": "ERROR", "after": "handle_error"},
        {"trigger": "engine_reached_destination", "source": "MOVING", "dest": "STILL",
         "before": "check_position"},
        {"trigger": "engine_fail", "source": "MOVING", "dest": "ERROR", "after": "handle_error"},
        {"trigger": "error_solved", "source": "ERROR", "dest": "STILL", "after": "tell_position"},
        {"trigger": "error_unsolved", "source": "ERROR", "dest": "INIT", "after": ["reconfig", "tell_position"]}
    ]

There are not many differences from the older graph but, behind the appearance, there is a lot of work. Every arrow now corresponds to a series of defined actions, and the scheme has been implemented as a real working program.
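
The table above matches the declarative format of the Python transitions library; assuming that is the library in use (the actual project code may differ), wiring the table into a working machine comes down to one call in the controller's constructor:

# Hypothetical sketch assuming the `transitions` library; not the actual
# project code, only an illustration of how the table above could be wired up.
from transitions import Machine

class Controller(object):
    states = ["INIT", "STILL", "ERROR", "MOVING"]
    transitions = []   # the list of transition dicts shown above

    def __init__(self, api_queue, engine_queue):
        self.api_queue = api_queue
        self.engine_queue = engine_queue
        # Triggers such as api_config() become methods on the controller and
        # run the configured callbacks (before/after/conditions) automatically.
        self.machine = Machine(model=self, states=self.states,
                               transitions=self.transitions, initial="INIT")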

The structure of the Turnantenna’s brain

During the last week I worked on refactoring all the work done up to that point. The final code is available in the new dedicated “refactor” branch on GitHub.

The state machine above is implemented in the main process, which communicates with two other processes: the engine driver and the RESTful server.

# In turnantenna.py

from multiprocessing import Process, Queue
from controller import Controller      # import the states machine structure
from stepmotor import engine_main      # import the engine process
from api import run                    # import the api process

def main():
    engine_q = Queue()
    api_q = Queue()
    api_reader_p = Process(target=run, args=(api_q, ))
    engine_p = Process(target=engine_main, args=(engine_q, ))
    controller = Controller(api_q, engine_q)                  # start the SM

    api_reader_p.start()                                      # start the api process
    engine_p.start()                                          # start the engine process
    controller.api_config()

The processes communicate with each other through messages in the queues. Messages are JSON and have the following format:

{
    'id': '1',
    'dest': 'controller',
    'command': 'move',
    'parameter': angle
}

The “id” key is needed in order to control more than one engine, which will be useful for future upgrades. “dest” specifies the process that should read the message and avoids wrong deliveries. “command” is the central content of the message, while “parameter” contains detailed (optional) information.

Processes are infinite loops in which the queues are checked continuously. An example of such a loop:

# In api.py

from queue import Empty

while True:
    try:
        msg = queue.get(block=False)
        if msg["dest"] != "api":
            queue.put(msg)       # send back the message
            msg = None
    except Empty:
        msg = None

    if msg and msg["id"] == "1":
        command = msg["command"]
        parameter = msg["parameter"]
        if command == "known_command":
            # do something

API

In order to interact with the Turnantenna, I defined three methods: get_position(), init_engine() and move().

It is possible to call them through an HTTP request. A JSON body needs to be attached to the request in order to make things work; in fact, the API needs some critical data, e.g. the id of the specific engine targeted, or a valid angle value to move the engine by that amount. If the request comes without a JSON body, or with a wrong one, the RESTful service responds with error 400.

Here is an example of the input controls:

# request and abort come from the web framework (e.g. Flask), not from the
# "requests" HTTP client library
from flask import abort, request

if not request.json or 'id' not in request.json:
    abort(400)
id = request.json['id']
if id != '1':            # still mono-engine
    abort(404)

For the moment the system works with only one engine, but in the future it will be very simple to handle more motors:

...
# if id != '1':
if id not in ('1', '2'):
    abort(404)
...
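
For completeness, this is roughly how a client could call the API from Python; the endpoint URL, port and field names are hypothetical (the post only mentions testing with curl) and depend on the actual server implementation:

# Hypothetical client-side call; URL, route and field names are assumptions.
import requests

response = requests.post(
    "http://localhost:5000/api/move",        # assumed route for the move() API
    json={"id": "1", "angle": 90},           # engine id and target angle
)
if response.status_code == 400:
    print("Bad request: missing or malformed JSON body")
else:
    print(response.status_code, response.text)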

Final results

In these months we started from an idea and a basic implementation, and we built up a complete system ready to be tested. It is possible to see the Turnantenna logic run by cloning the Turnantenna code from GitHub at Musuuu/punter_node_driver/tree/refactor.
Following the instructions in the readme file, you can run the turnantenna.py file and observe how it reacts to the HTTP requests made with curl.
The full documentation of the project can be found at turnantenna.readthedocs.io.

We are proud of the work done, and we’re ready to implement the whole system onto the hardware and make the Turnantenna turn!

DAWN – Final Post

So did I achieve my aims with DAWN?

GSOC Aims

  1. Simple Installation
  2. All patches Upstream
  3. Configuration of the nodes should be simplified
  4. Visualize the information of the participating nodes
  5. Improve the controller functionality by adding mechanisms like a channel interference detection and other useful features

1 and 2:


Everything is upstream!
All hostapd patches are merged. I even added some patches extending the hostapd ubus functionality.
The iwinfo patches are merged too, although in the end another contributor's patch, which contained my patch #1210, was the one that got merged.
You can now simply add the feed and compile DAWN.

3 and 4:

I added a LuCI app called luci-app-dawn, in which you can configure the daemon. If you do this, the daemon configuration is sent to all participating nodes, so you don't have to change the config on every node.
Furthermore, you can see in the app all participating Wi-Fi clients in the network and the corresponding Wi-Fi AP, as well as the hearing map for every client.

 

5:

So I'm still refactoring my code. Some code snippets are ugly. :/
I read up on 802.11k and 802.11v.
802.11v is very interesting for DAWN. It would allow DAWN to do a better handover for the clients. Instead of disassociating the client, the client can be guided to the next AP using a BSS Transition Management Request frame.
This request can be sent by an AP (or station?) in response to a BSS Transition Management Query frame, or autonomously.

I want to send this request autonomously instead of disassociating clients, if they support 802.11v.
For that I would set the disassociation timer (the time after which the AP disassociates the client if it has not roamed to another AP) and add another AP as a candidate. Furthermore, I should enable 802.11r for fast roaming…
If you want to play around with 802.11v you need a full hostapd installation and have to enable BSS transition in the hostapd config:

bss_transition=1

The station indicates in its association frame whether it supports BSS transition when associating with an AP.
My plan is to extend the hostapd ubus call get_clients with this information, like it is already done with the 802.11k flags.
After that I need a new ubus call in which I build such a BSS Transition Management Request, like it is done in the neighbor report ubus call.
I found a patch on a mailing list that adds a function to build such a BSS transition frame in an easy way:

wnm_send_bss_tm_req2

Sadly, it was never merged. The 802.11v implementation can be found in hostapd.

Furthermore, I could use 802.11k to ask a client to report which APs it can see. This is a better approach than collecting all the probe entries. The hearing map is very problematic, because clients do not scan continuously in the background (or don't scan at all). Furthermore, a client can move around. A typical question is how long such a probe entry can be considered valid. If the validity time span of a probe request is set too long and the client moves around, it cannot leave its AP although the RSSI is very bad (and a bad RSSI is the worst thing you can have!). A bad RSSI can trigger the client's internal roaming algorithm, and the client keeps trying to roam to another AP and gets denied because there is still a hearing-map entry with a very good RSSI. But this entry is not valid anymore, because the client moved quickly.

My Merged Pull Requests:

My Open Pull Requests:

My Declined Pull Requests:

GSoC 2018 – Better map for nodewatcher (Final update)


nodewatcher: Build system rework and package upstreaming – Final update

Hi, everybody.

This is my last post regarding my GSoC project for 2018.
My work can be found here:

A quick summary of what this project was about: move away from building Nodewatcher-supported imagebuilders from source and instead use the upstream-provided OpenWrt imagebuilders. Also, build our custom, not-yet-upstreamed packages using the OpenWrt SDK.

Current status

Most of the code was merged into the relevant wlanslovenija repositories, but some of it is still waiting to be merged.

Nodewatcher

Various fixes to make Nodewatcher run on newer kernels and distributions like Ubuntu 18.04 were merged into the main branch of the wlanslovenija/nodewatcher repository.
This includes fixes for known issues with the newer pip tool as well as for multiple packages with new names.
Still to be submitted is an update of the various Python packages which are currently outdated; this is waiting for thorough testing.

firmware-packages-opkg

firmware-packages-opkg is the wlanslovenija repository with our custom packages used by Nodewatcher, such as Tunneldigger.
A big part of the changes is already merged into wlanslovenija/firmware-packages-opkg.
This was the first big cleanup in a long time: a lot of packages that were not used, and a lot of those that relied on custom patches, were dropped.
I manually verified that those with custom patches had those patches upstreamed before they were dropped; this now enables us to use new iwinfo versions that include many fixes.
This also enables compiling packages such as curl with GCC 7.3.
Fixes for packages that refuse to compile or have dead sources are currently waiting in my tree on GitHub.

firmware-core and the build process

firmware-core is the wlanslovenija repository containing all files pertaining to the building of Nodewatcher-compatible, Docker-based imagebuilders.
This repository was the target of the bulk of my effort.
The current code was almost completely dropped or significantly reworked, which in the end resulted in removing 3,964 lines of code while adding only 255.
This significantly reduces the maintenance burden as almost no maintenance is needed except for adding or removing required Ubuntu packages in our Docker images.

Big changes that were made are:

  • LEDE and OpenWrt are remerged in our build process
  • Building of old OpenWrt versions prior to 17.01 (CC 15.05 etc.) was completely removed.
    This was completely unnecessary and only caused legacy code to stick around.
    There is no justification for using OpenWrt Chaos Calmer or even older versions now that OpenWrt and LEDE have merged.
    Those versions have numerous known exploits that have been fixed in 17.01 and now in 18.06.
  • Both our build and runtime Docker base images now use Ubuntu 18.04 instead of the old 14.04.
    This enables us to fully utilize the fact that OpenWrt uses GCC 7.3 as the default compiler, since Ubuntu 18.04 finally ships with it as the default too.
    The size of the base image has been reduced because fewer unnecessary packages are shipped with it.
  • We now use the imagebuilders provided by the upstream OpenWrt project.
    This significantly reduces the build time, as most of the packages and the whole toolchain are not built anymore.
    The fact that we can no longer patch the sources with custom patches does not matter, as we were not using any important patches.
    Unfortunately, because most of the packages needed for Nodewatcher to function are custom written and were never upstreamed, we still need to build them ourselves.
    Thankfully, upstream OpenWrt provides an SDK next to the imagebuilders; it is meant for exactly what we need: building packages only.
    It provides an already-built toolchain and all of the needed tools, which saves a lot of time; but since our packages have a lot of dependencies, it still takes some time to build them.
    The built packages are then simply copied into the imagebuilder, and we manually trigger regeneration of the package index, since we use that index to generate metadata so Nodewatcher knows which packages, and which versions, are inside each imagebuilder. This enables configuring packages on a per-version basis.
    Since we can now easily download all of the community packages, we don't have to compile them in like we did so far.
    This completely removes the need for us to run package mirrors. In the end, this has reduced the time needed for each target by around 3-4 times. A rough sketch of this SDK-plus-imagebuilder flow is shown after this list.
  • The configuration of the build process, as well as its complexity, was greatly reduced.
    No more need for a lot of Dockerfiles and configuration for each of the targets.

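To illustrate the SDK-plus-imagebuilder flow from the list above, here is a rough, hypothetical Python sketch of the orchestration. Directory names, the example package, the profile and the index-regeneration step are all assumptions and will differ from the actual firmware-core scripts:

# Rough sketch only -- paths, package/profile names and the index step are
# assumptions, not the actual firmware-core implementation.
import shutil
import subprocess
from pathlib import Path

SDK_DIR = Path("openwrt-sdk")            # unpacked upstream OpenWrt SDK
IB_DIR = Path("openwrt-imagebuilder")    # unpacked upstream imagebuilder
CUSTOM_PACKAGES = ["nodewatcher-agent"]  # example custom package

# 1. Build the custom packages with the SDK (its toolchain is prebuilt).
subprocess.run(["make", "defconfig"], cwd=SDK_DIR, check=True)
for pkg in CUSTOM_PACKAGES:
    subprocess.run(["make", f"package/{pkg}/compile"], cwd=SDK_DIR, check=True)

# 2. Copy the resulting .ipk files into the imagebuilder's packages directory.
for ipk in SDK_DIR.glob("bin/packages/*/*/*.ipk"):
    shutil.copy(ipk, IB_DIR / "packages")

# 3. Regenerate the package index so the metadata (and thus Nodewatcher's view
#    of available packages) reflects the added .ipk files; the exact target or
#    helper script differs between OpenWrt releases.
subprocess.run(["make", "package_index"], cwd=IB_DIR, check=False)

# 4. Build an image for a given profile with the wanted package set.
subprocess.run(
    ["make", "image", "PROFILE=tl-wr1043nd-v1",
     "PACKAGES=nodewatcher-agent"],
    cwd=IB_DIR, check=True)
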
Currently, all of these changes have been merged into the main repository, wlanslovenija/firmware-core.

Future

I did not have time to do all of the things I wanted.
The main remaining task is upstreaming as many of our packages as possible, as they are the biggest time consumer during building.
This will be dealt with after GSoC.

Nodewatcher needs to be updated to treat LEDE and OpenWrt as merged, as we have some checks that enable more advanced features only on LEDE, because OpenWrt did not have them at that time.
This will be dealt with after GSoC too.

I also wanted to add some new features to our imagebuilders, but since I hit a lot of bugs and unexpected issues during development, I did not have time for these; so, like the previous two points, this will be dealt with after GSoC.

So to sum this up, this was a really good experience.
I got to focus on two things I enjoy working on: FOSS software and OpenWrt.
It enabled me to learn a lot about the inner workings of Nodewatcher, the OpenWrt imagebuilder and especially the OpenWrt SDK.

Thanks to Google for organizing GSoC, and to Freifunk for enabling me to give back to the community in a useful way.
And special thanks to my mentor Valent Turković.

Best regards
Robert Marko

LibreNet6 – final report

A tale of dependencies

Creating the LibreNet6 service is highly dependent on LibreMesh, as the former builds on the latter's existing scripts. Issues within the LibreMesh framework therefore broadened my working scope to other areas as well. In the process I discovered some general flaws in the framework, which I'm now happily fixing over the next several weeks, independent of the Google Summer of Code. One focus will be the network profile management, which is currently somewhat messy, to allow new users to set up their own networks without a deeper understanding of the lower levels of LibreMesh configuration. The network profile issue is closely related to LibreNet6, as users currently still need some SSH and shell skills to connect.

To understand the current problem, feel free to check the mailing list archive.

The output

On the surface, the result of this project is an English manual on how to set up the IPv6 gateway and server. Instead of just translating (and correcting) the existing manual, I read up on Tinc 1.1 and its new features, which vastly simplify the setup process for clients. It is meant as a step-by-step manual that only requires the user to know basic SSH and shell usage.

For the backend, Altermundi provided a VM which serves as the IPv6 server, acting as the link between the IPv6 gateways (the client devices) and a real IPv6 uplink connection. The server is set up as described in the previously mentioned manual.

IPv6 broker vs LibreNet6

As the network uses Tinc, all IPv6 gateways build up a mesh network. When routing traffic between networks, they can decide to use the IPv6 server to route the IPv6 traffic, but they may also connect directly via IPv4 to other gateways. This behaviour was one of the initial motivations for LibreNet6, as it greatly reduces ping latency in cases where the IPv6 server is on another continent but two mesh clouds are close to one another: both IPv6 gateways then connect directly to one another, routing traffic over IPv4 without using the IPv6 server.

Interest of LibreRouter

People from the LibreRouter project wrote to me about their interest in integrating this feature into the LibreRouter v2. In that case it would not only enable IPv6 connectivity but also work as a remote help system, where users could receive setup help from the LibreRouter team. This feature is planned for the near future, and the details are not yet settled.

Migrating from existing LibreNet6 setups

Now that the server works, future work has to be done to migrate all existing setups to the new server. I'll work on that over the next few months, independently of GSoC.

Final thoughts

This was my second time participating in the Google Summer of Code, both times for a LibreMesh project. I'm happy they were satisfied with last year's project, since they chose me again this year. Last year's project took quite some time until users started to use it, but I'm happy to see it now being used on a daily basis. In the future I'll try to improve LibreNet6 just as actively as the image server.

GSoC 2018 – Easily Expandable WIDS – Final report

This summer I worked on an Easily Expandable Wireless Intrusion Detection System (called Eewids). In this blog post I’d like to present what I’ve done, the current state of the project, the lessons learned and what needs to be done in the future.

Project repository on GitHub: techge/eewids

What I’ve done

Analyzing existing projects

Before actually starting this project I analyzed existing Free and Open Source Software (FOSS) related to the IEEE 802.11 standard. I did this in part by looking through all projects listed in the extensive wifi-arsenal of the GitHub user 0x90, and I categorized most of the entries myself.

I realized that there just is no complete, ready-to-go WIDS yet, at least as far as FOSS is concerned. For me, a WIDS should

  • detect most of the known Wi-Fi attacks,
  • scale easily and thus be able to work within big organizations and
  • be easily expandable.

Although there is indeed software on GitHub which can be used to detect Wi-Fi attacks, these tools are usually specialized in a few attacks and/or they are hobby projects which would not fit the setups of bigger environments. Please have a look at the defence-related Wi-Fi tools on the wifi-arsenal list.

A distributed, highly scalable system

Based on a proposal by my mentor Julius Schulz-Zander, I created a framework with a microservice approach. Instead of creating a monolithic solution, Eewids is supposed to consist of different well-known and actively developed pieces of software. The central point of Eewids is a message broker. The advantage is independence between the different system components: new detection methods can be added to the system without caring about the capture process, the parsing or even the presentation of the results. Everything leads to the message broker.

RabbitMQ is an excellent choice for such a message broker. It is not only the most widely deployed one, it also supports a lot of programming languages. No matter whether you want to add a detection method using C, Python or Java, there already exists a library to easily attach to RabbitMQ, get the captured and parsed data and send the results back (see RabbitMQ's devtools).
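
To illustrate how little glue a new detection method needs, here is a minimal, hypothetical Python sketch using the pika library. The exchange name, routing keys and JSON field names are assumptions for the example, not necessarily the ones Eewids actually uses:

# Hypothetical detection-method skeleton using pika; exchange, routing keys
# and field names are illustrative assumptions, not Eewids' actual naming.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="eewids", exchange_type="topic")

# Own, exclusive queue bound to the stream of parsed 802.11 frames.
queue = channel.queue_declare(queue="", exclusive=True).method.queue
channel.queue_bind(exchange="eewids", queue=queue, routing_key="80211.parsed")

def on_frame(ch, method, properties, body):
    frame = json.loads(body)
    # Toy check: flag beacons advertising an SSID we consider suspicious.
    if frame.get("frame_type") == "beacon" and frame.get("ssid") == "EvilTwin":
        alert = {"detector": "example", "bssid": frame.get("bssid")}
        ch.basic_publish(exchange="eewids", routing_key="alerts.rogue_ap",
                         body=json.dumps(alert))

channel.basic_consume(queue=queue, on_message_callback=on_frame, auto_ack=True)
channel.start_consuming()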

The message broker also helps to make the whole system highly distributed and scalable, and thus helps fulfill the requirement of working for bigger organizations as well. Many organizations may already have experience with RabbitMQ clusters.

System components

Eewids consists of:

  • a capture tool to get the actual 802.11 data packets
  • a parser to transform the information into simple json-formatted data
  • a message broker (RabbitMQ)
  • detection methods
  • a visualization

The project aimed to find a solution for all components.

The capturing was supposed to be done by Kismet, which turned out to be a bad idea (see the section “Lessons learned”). Therefore, a simple capture tool was created, which will be further improved (see the section about future work). For parsing the packets, a parser was created (“eewids-parser”), in part based on the existing project radiotap-python. As a kind of proof of concept, the visualization software Grafana was added (in tandem with InfluxDB). This turned out to be a very good choice, and Grafana is now integrated as the main visualization in Eewids. It connects easily to various systems, so that other data within a company may be connected and visualized as well.

Eewids setup with RabbitMQ as central point and an own capture tool based on standard tools
Overview of Eewids’ components

Lessons learned

At first, I didn't dare to create a capture tool myself; I thought that this would take a lot of programming experience. Therefore, I chose Kismet as the basis for capturing the 802.11 data. Kismet is well known and very popular. Still, it does not fulfill the above requirements: it is not a full-featured WIDS, it is monolithic, and the code proved to be difficult to read and change. I invested a lot of time integrating Kismet into Eewids, building a Docker container for it (which did not exist before), reading out the pcap export from the REST interface, etc. In the end I had integrated a feature-rich system only to use a single function: the capturing. After several crashes of the Kismet server and an analysis of the network performance, I decided to try to create a simple capture tool myself (see this blog post).

It turned out that it is not that difficult to create a simple capture tool with the help of libpcap. Although the capture tool is still very rudimentary, it already fulfills the function which Kismet served before. While I still think that it is always a good idea to have a look at existing projects first, next time I would trust my own skills more, to avoid wasting too much time integrating a system which is not suitable for the project at hand.
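
The actual Eewids capture tool is written against libpcap; as a rough, hypothetical illustration of the same idea (grab 802.11 frames from a monitor-mode interface and hand them on for parsing), here is a minimal Python sketch using scapy. The interface name and the forwarding step are assumptions:

# Minimal, hypothetical capture sketch with scapy -- not the actual
# libpcap-based Eewids capture tool. The interface name is an assumption.
from scapy.all import Dot11, sniff

def handle(pkt):
    if pkt.haslayer(Dot11):
        dot11 = pkt[Dot11]
        # In Eewids the raw frame would be parsed and published to RabbitMQ;
        # here we just print a few header fields.
        print(dot11.type, dot11.subtype, dot11.addr2, dot11.addr3)

# wlan0mon must be a WLAN interface already switched to monitor mode.
sniff(iface="wlan0mon", prn=handle, store=False)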

What needs to be done in future

While the framework itself is ready and works fine, there is still a lot to do. The capture tool should be further improved, as described in this blog post. Getting used to libpcap took more time than anticipated, so I was not able to include all features yet.

The parser should be extended depending on upcoming needs. For example, only some element IDs are parsed for now, although all main 802.11 fields are parsed already. See the project page for an overview of all parsed data.

More dashboards for Grafana could be added to visualize as much data as possible. Furthermore, some useful notifications could be added by default.

Most of all, Eewids should eventually get more actual detection methods. The rogue-AP detection that was implemented during the summer is a start, but it served mainly as a proof of concept and shall be improved. As Eewids is supposed to be easily expandable, this last objective should in part be solved by different people in the future.

In the end, I am eager to further improve the system even after the end of this Google Summer of Code. I am glad that I had the opportunity to work continuously on this project. Now I am more confident in my skills than before, and I'll surely have fewer doubts about actually starting a project myself in the future.